Cloud Text-to-Speech API . text

Instance Methods

synthesize(body, x__xgafv=None)

Synthesizes speech synchronously: receive results after all text input has been processed.

Method Details

synthesize(body, x__xgafv=None)
Synthesizes speech synchronously: receive results after all text input
has been processed.
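
Before calling this method you need a handle on the API's `text` resource. A minimal
sketch, assuming the google-api-python-client library, the public API name
"texttospeech" with version "v1", and API-key authentication ("YOUR_API_KEY" is a
placeholder, not a real credential):

  from googleapiclient.discovery import build

  # Build the Cloud Text-to-Speech service; the API name, version, and
  # API key below are assumed/illustrative values.
  service = build("texttospeech", "v1", developerKey="YOUR_API_KEY")
  text_resource = service.text()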

Args:
  body: object, The request body. (required)
    The object takes the form of:

{ # The top-level message sent by the client for the `SynthesizeSpeech` method.
    "input": { # Contains text input to be synthesized. Either `text` or `ssml` must be # Required. The Synthesizer requires either plain text or SSML as input.
        # supplied. Supplying both or neither returns
        # google.rpc.Code.INVALID_ARGUMENT. The input size is limited to 5000
        # characters.
      "text": "A String", # The raw text to be synthesized.
      "ssml": "A String", # The SSML document to be synthesized. The SSML document must be valid
          # and well-formed. Otherwise the RPC will fail and return
          # google.rpc.Code.INVALID_ARGUMENT. For more information, see
          # [SSML](/speech/text-to-speech/docs/ssml).
    },
    "voice": { # Description of which voice to use for a synthesis request. # Required. The desired voice of the synthesized audio.
      "ssmlGender": "A String", # The preferred gender of the voice. Optional; if not set, the service will
          # choose a voice based on the other parameters such as language_code and
          # name. Note that this is only a preference, not a requirement; if a
          # voice of the appropriate gender is not available, the synthesizer should
          # substitute a voice with a different gender rather than failing the request.
      "languageCode": "A String", # The language (and optionally also the region) of the voice expressed as a
          # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag, e.g.
          # "en-US". Required. This should not include a script tag (e.g. use
          # "cmn-cn" rather than "cmn-Hant-cn"), because the script will be inferred
          # from the input provided in the SynthesisInput.  The TTS service
          # will use this parameter to help choose an appropriate voice.  Note that
          # the TTS service may choose a voice with a slightly different language code
          # than the one selected; it may substitute a different region
          # (e.g. using en-US rather than en-CA if there isn't a Canadian voice
          # available), or even a different language, e.g. using "nb" (Norwegian
          # Bokmal) instead of "no" (Norwegian).
      "name": "A String", # The name of the voice. Optional; if not set, the service will choose a
          # voice based on the other parameters such as language_code and gender.
    },
    "audioConfig": { # Description of audio data to be synthesized. # Required. The configuration of the synthesized audio.
      "audioEncoding": "A String", # Required. The format of the requested audio byte stream.
      "effectsProfileId": [ # An identifier which selects 'audio effects' profiles that are applied on
          # (post synthesized) text to speech.
          # Effects are applied on top of each other in the order they are given.
          # See
          # [audio-profiles](https://cloud.google.com/text-to-speech/docs/audio-profiles)
          # for current supported profile ids.
        "A String",
      ],
      "sampleRateHertz": 42, # The synthesis sample rate (in hertz) for this audio. Optional.  If this is
          # different from the voice's natural sample rate, then the synthesizer will
          # honor this request by converting to the desired sample rate (which might
          # result in worse audio quality), unless the specified sample rate is not
          # supported for the encoding chosen, in which case it will fail the request
          # and return google.rpc.Code.INVALID_ARGUMENT.
      "pitch": 3.14, # Optional speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20
          # semitones from the original pitch. -20 means decrease 20 semitones from the
          # original pitch.
      "speakingRate": 3.14, # Optional speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal
          # native speed supported by the specific voice. 2.0 is twice as fast, and
          # 0.5 is half as fast. If unset (0.0), defaults to the native 1.0 speed. Any
          # other values < 0.25 or > 4.0 will return an error.
      "volumeGainDb": 3.14, # Optional volume gain (in dB) of the normal native volume supported by the
          # specific voice, in the range [-96.0, 16.0]. If unset, or set to a value of
          # 0.0 (dB), will play at normal native signal amplitude. A value of -6.0 (dB)
          # will play at approximately half the amplitude of the normal native signal
          # amplitude. A value of +6.0 (dB) will play at approximately twice the
          # amplitude of the normal native signal amplitude. We strongly recommend not
          # exceeding +10 (dB), as there is usually no effective increase in loudness for
          # any value greater than that.
    },
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format
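
For reference, a sketch of a complete synchronous request, assuming the `service`
object built as in the sketch above; the input text, language code, gender, and MP3
encoding are illustrative values only:

  # An illustrative request body: plain-text input, a US English voice chosen
  # by gender preference, and MP3 output encoding.
  body = {
      "input": {"text": "Hello, world!"},
      "voice": {"languageCode": "en-US", "ssmlGender": "FEMALE"},
      "audioConfig": {"audioEncoding": "MP3"},
  }

  response = service.text().synthesize(body=body).execute()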

Returns:
  An object of the form:

    { # The message returned to the client by the `SynthesizeSpeech` method.
    "audioContent": "A String", # The audio data bytes encoded as specified in the request, including the
        # header (for LINEAR16 audio, we include the WAV header). Note: as
        # with all bytes fields, protocol buffers use a pure binary representation,
        # whereas JSON representations use base64.
  }
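
Because the JSON representation of `audioContent` is base64-encoded, decode it before
using the bytes. A short sketch, continuing the MP3 example above; "output.mp3" is an
illustrative filename:

  import base64

  # Decode the base64 payload and write the raw audio bytes to disk.
  audio_bytes = base64.b64decode(response["audioContent"])
  with open("output.mp3", "wb") as out:
      out.write(audio_bytes)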