
Watson Developer Cloud Swift SDK


Overview

The Watson Developer Cloud Swift SDK makes it easy for mobile developers to build Watson-powered applications. With the Swift SDK you can leverage the power of Watson's advanced artificial intelligence, machine learning, and deep learning techniques to understand unstructured data and engage with mobile users in new ways.

There are many resources to help you build your first cognitive application with the Swift SDK.

Contents

General

Services

This SDK provides classes and methods to access the following Watson services: Assistant, Compare & Comply, Discovery, Language Translator, Natural Language Classifier, Natural Language Understanding, Personality Insights, Speech to Text, Text to Speech, Tone Analyzer, and Visual Recognition.

Before you begin

Requirements

  • Xcode 9.3+
  • Swift 4.1+
  • iOS 10.0+

Installation

The IBM Watson Swift SDK can be installed with CocoaPods, Carthage, or Swift Package Manager.

CocoaPods

You can install CocoaPods with RubyGems:

$ sudo gem install cocoapods

If your project does not yet have a Podfile, run pod init in the root directory of your project. To install the Swift SDK using CocoaPods, add the services you will be using to your Podfile as demonstrated below (substituting MyApp with the name of your app). The example below lists all of the currently available services; your Podfile should include only the services that your app will use.

use_frameworks!

target 'MyApp' do
    pod 'IBMWatsonAssistantV1', '~> 1.3.1'
    pod 'IBMWatsonAssistantV2', '~> 1.3.1'
    pod 'IBMWatsonCompareComplyV1', '~> 1.3.1'
    pod 'IBMWatsonDiscoveryV1', '~> 1.3.1'
    pod 'IBMWatsonLanguageTranslatorV3', '~> 1.3.1'
    pod 'IBMWatsonNaturalLanguageClassifierV1', '~> 1.3.1'
    pod 'IBMWatsonNaturalLanguageUnderstandingV1', '~> 1.3.1'
    pod 'IBMWatsonPersonalityInsightsV3', '~> 1.3.1'
    pod 'IBMWatsonSpeechToTextV1', '~> 1.3.1'
    pod 'IBMWatsonTextToSpeechV1', '~> 1.3.1'
    pod 'IBMWatsonToneAnalyzerV3', '~> 1.3.1'
    pod 'IBMWatsonVisualRecognitionV3', '~> 1.3.1'
end

Run the pod install command, and open the generated .xcworkspace file. To update to newer releases, use pod update.

When importing the frameworks in source files, exclude the IBMWatson prefix and the version suffix. For example, after installing IBMWatsonAssistantV1, import it in your source files as import Assistant.

For more information on using CocoaPods, refer to the CocoaPods Guides.

Carthage

You can install Carthage with Homebrew:

$ brew update
$ brew install carthage

If your project does not have a Cartfile yet, use the touch Cartfile command in the root directory of your project. To install the IBM Watson Swift SDK using Carthage, add the following to your Cartfile.

github "watson-developer-cloud/swift-sdk" ~> 1.3.1

Then run the following command to build the dependencies and frameworks:

$ carthage update --platform iOS

Follow the remaining Carthage installation instructions in the Carthage documentation. Note that the command above downloads and builds all of the services in the IBM Watson Swift SDK. Drag and drop only the built frameworks for the services your app requires into your Xcode project, and import them in the source files that use them. The following frameworks need to be added to your app:

  1. RestKit.framework
  2. Whichever services your app will be using (AssistantV1.framework, DiscoveryV1.framework, etc.)
  3. (Speech to Text only) Starscream.framework

If your app fails to build because it is built with a different version of Swift than the downloaded SDK, then re-run the carthage update command with the --no-use-binaries flag added.
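For reference, the re-run with the flag added looks like this:

```shell
$ carthage update --platform iOS --no-use-binaries
```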

Swift Package Manager

Add the following to your Package.swift file to identify the IBM Watson Swift SDK as a dependency. The package manager will clone the Swift SDK when you build your project with swift build.

dependencies: [
    .package(url: "https://github.com/watson-developer-cloud/swift-sdk", from: "1.3.1")
]
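In context, a complete Package.swift might look like the sketch below. The package name and target layout are placeholders, and the module names in the target's dependencies are assumptions — check the SDK's own Package.swift for the exact product names of the services you use.

```swift
// swift-tools-version:4.1
import PackageDescription

let package = Package(
    name: "MyApp",  // placeholder name
    dependencies: [
        .package(url: "https://github.com/watson-developer-cloud/swift-sdk", from: "1.3.1")
    ],
    targets: [
        // Depend only on the service modules your app actually uses
        // (module names here are illustrative).
        .target(name: "MyApp", dependencies: ["AssistantV1", "DiscoveryV1"])
    ]
)
```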

Authentication

Watson services are migrating to token-based Identity and Access Management (IAM) authentication.

  • With some service instances, you authenticate to the API by using IAM.
  • In other instances, you authenticate by providing the username and password for the service instance.
  • Visual Recognition uses a form of API key only with instances created before May 23, 2018. Newer instances of Visual Recognition use IAM.

Getting credentials

To find out which authentication method to use, view the service credentials. You find the service credentials the same way for all Watson services:

  1. Go to the IBM Cloud Dashboard page.
  2. Either click an existing Watson service instance or click Create resource > AI and create a service instance.
  3. Click Show to view your service credentials.
  4. Copy the url and either apikey or username and password.

IAM

Some services use token-based Identity and Access Management (IAM) authentication. IAM authentication uses a service API key to get an access token that is passed with the call. Access tokens are valid for approximately one hour and must be regenerated.

You supply either an IAM service API key or an access token:

  • Use the API key to have the SDK manage the lifecycle of the access token. The SDK requests an access token, ensures that it is valid, and refreshes it when necessary.
  • Use the access token if you want to manage the lifecycle yourself. For details, see Authenticating with IAM tokens. If you want to switch to using an API key, override your stored IAM credentials with an IAM API key.

Supplying the IAM API key

let discovery = Discovery(version: "your-version", apiKey: "your-apikey")

If you are supplying an API key for IBM Cloud Private (ICP), use basic authentication instead, with "apikey" as the username and the API key (prefixed with icp-) as the password. See the Username and Password section.
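A minimal sketch of the ICP case, with placeholder credentials:

```swift
// ICP uses basic authentication: "apikey" as the username and the
// icp-prefixed API key (placeholder shown) as the password.
let discovery = Discovery(username: "apikey", password: "icp-your-apikey", version: "your-version")
```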

Supplying the accessToken

let discovery = Discovery(version: "your-version", accessToken: "your-accessToken")

Updating the accessToken

discovery.accessToken("new-accessToken")

Username and Password

let discovery = Discovery(username: "your-username", password: "your-password", version: "your-version")

Custom Service URLs

You can set a custom service URL by modifying the serviceURL property. A custom service URL may be required when running an instance in a particular region or connecting through a proxy.

For example, here is how to connect to a Tone Analyzer instance that is hosted in Germany:

let toneAnalyzer = ToneAnalyzer(
    username: "your-username",
    password: "your-password",
    version: "yyyy-mm-dd"
)
toneAnalyzer.serviceURL = "https://gateway-fra.watsonplatform.net/tone-analyzer/api"

Custom Headers

You can send custom headers with requests to the Watson services. For example, Watson services log requests and their results to improve the services, but you can include the X-Watson-Learning-Opt-Out header to opt out of this logging.

We have exposed a defaultHeaders public property in each class to allow users to easily customize their headers:

let naturalLanguageClassifier = NaturalLanguageClassifier(username: username, password: password)
naturalLanguageClassifier.defaultHeaders = ["X-Watson-Learning-Opt-Out": "true"]

Each service method also accepts an optional headers parameter which is a dictionary of request headers to be sent with the request.
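For example, a per-request header might be supplied as sketched below, assuming a service method that accepts the optional headers parameter (the workspace ID is a placeholder):

```swift
assistant.message(
    workspaceID: "your-workspace-id",
    headers: ["X-Watson-Learning-Opt-Out": "true"]) { response, error in
    // handle the response or error
}
```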

Sample Applications

Synchronous Execution

By default, the SDK executes all networking operations asynchronously. If your application requires synchronous execution, you can use a DispatchGroup. For example:

let dispatchGroup = DispatchGroup()
dispatchGroup.enter()
assistant.message(workspaceID: workspaceID) { response, error in
    // Leave the group whether the request succeeded or failed.
    defer { dispatchGroup.leave() }
    if let error = error {
        print(error)
    }
    if let message = response?.result {
        print(message.output.text)
    }
}
_ = dispatchGroup.wait(timeout: .distantFuture)

Objective-C Compatibility

Please see this tutorial for more information about consuming the Watson Developer Cloud Swift SDK in an Objective-C application.

Linux Compatibility

To use the Watson SDK in your Linux project, please follow the Swift Package Manager instructions. Note that Speech to Text and Text to Speech are not supported because they rely on frameworks that are unavailable on Linux.

Contributing

We would love any and all help! If you would like to contribute, please read our CONTRIBUTING documentation for information on getting started.

License

This library is licensed under Apache 2.0. Full license text is available in LICENSE.

This SDK is intended for use with an Apple iOS product and intended to be used in conjunction with officially licensed Apple development tools.


Releases

1.3.1 - Jan 18, 2019

Bug Fixes

  • SpeechToTextV1: Fix grammarName and redaction parameters in recognize websocket methods (64b116c)

1.3.0 - Jan 18, 2019

Bug Fixes

  • SpeechToTextV1: Change contentType parameter to optional in certain methods (e033cff)

Features

  • DiscoveryV1: Add support for custom stopword lists (915ce68)
  • DiscoveryV1: Add support for gateways (39393fa)
  • DiscoveryV1: Add web crawlers to the list of possible sources (5a4a62e)
  • SpeechToTextV1: Add new options to acoustic models and language models (3345b46)
  • SpeechToTextV1: Add the ability to specify grammars in recognition requests (7edcdf4)
  • VisualRecognitionV3: Add acceptLanguage parameter to detectFaces() (a260a9c)
  • VisualRecognitionV3: Add genderLabel property to FaceGender model (a00f3c6)

1.2.0 - Jan 11, 2019

Bug Fixes

  • CompareComplyV1: Change Location properties to optional (2e66ac5)
  • CompareComplyV1: Fix incorrect parameter types (4cfa292)
  • CompareComplyV1: Give more appropriate types to model properties (4b1af08)

Features

  • CompareComplyV1: Add properties to AlignedElements and Attribute (0fbeb6d)
  • CompareComplyV1: New framework for Compare & Comply service (482444a)

1.1.1 - Jan 10, 2019

Bug Fixes

  • AssistantV1: Add missing "disabled" field to DialogNode (e45de83)
  • AssistantV2: Add missing userDefined field to MessageOutput (f65cafc)

1.1.0 - Dec 11, 2018

Features

  • AssistantV1: Add metadata field to Context model (13a90c1)
  • AssistantV1: Add option to sort results in getWorkspace() (5cefc7b)
  • DiscoveryV1: Add new concepts property to NluEnrichmentFeatures model (80258db)
  • DiscoveryV1: Add retrievalDetails property to QueryResponse model (631affc)
  • NaturalLanguageUnderstandingV1: Add 4 new properties to the Model model (53fe057)
  • NaturalLanguageUnderstandingV1: Add new count property to KeywordsResult model (ab9a339)
  • NaturalLanguageUnderstandingV1: Add new limit property to CategoriesOptions model (5bf6637)