
LNSimpleOCRKit


With this project I tried to simplify, as much as possible, the use of Apple's OCR.

Minimum Requirements

This framework requires iOS 13 or later, or macOS 10.15 or later.

Installing

SPM

LNSimpleOCRKit is available via Swift Package Manager.

Add the following to your Package.swift file's dependencies:

.package(url: "https://github.com/sciasxp/LNSimpleOCRKit.git", from: "1.1.0"),
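
If you manage dependencies through a Package.swift manifest, the product also needs to be listed on your target. Here is a minimal manifest sketch, assuming the product name matches the package name and with "MyApp" standing in for your own target:

// swift-tools-version:5.5
// A minimal manifest sketch; "MyApp" is a placeholder name, and the product
// name "LNSimpleOCRKit" is assumed to match the package name.
import PackageDescription

let package = Package(
    name: "MyApp",
    platforms: [
        .iOS(.v13),      // minimum platforms listed above
        .macOS(.v10_15)
    ],
    dependencies: [
        .package(url: "https://github.com/sciasxp/LNSimpleOCRKit.git", from: "1.1.0"),
    ],
    targets: [
        .target(name: "MyApp", dependencies: ["LNSimpleOCRKit"]),
    ]
)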

CocoaPods

To integrate LNSimpleOCRKit into your project using CocoaPods, specify it in your Podfile:

pod 'LNSimpleOCRKit'

How to Use

import LNSimpleOCRKit
let ocrKit = LNSimpleOCRKit()
ocrKit.detectText(for: <#Your UIImage Here#>) { result in
    switch result {
    case .success(let text):
        print(text)
        
    case .failure(let error):
        print(error.localizedDescription)
    }
}

Done!

That's it, simple as that.
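
If you prefer async/await, the completion-based call above wraps cleanly in a continuation. A minimal sketch, assuming detectText(for:) delivers a Result whose success value is the recognized String, as in the example above:

import UIKit
import LNSimpleOCRKit

// A small async wrapper around the completion-based API shown above.
// Assumes the completion delivers a Result with the recognized String on success.
func recognizeText(in image: UIImage) async throws -> String {
    let ocrKit = LNSimpleOCRKit()
    return try await withCheckedThrowingContinuation { continuation in
        ocrKit.detectText(for: image) { result in
            switch result {
            case .success(let text):
                continuation.resume(returning: text)
            case .failure(let error):
                continuation.resume(throwing: error)
            }
        }
    }
}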

Advanced Usage

There are some callbacks and configuration options you can apply if you choose to.

Configuration

Basically, you can configure three elements of your OCR:

  1. Accuracy
  2. Language
  3. Language Correction

To keep it simple, here is an example:

let ocrConfiguration = OCRConfiguration(language: .english, type: .accurate, languageCorrection: true)
let ocrKit = LNSimpleOCRKit(configuration: ocrConfiguration)

Callbacks

You can preprocess your image via a closure if you want.

Here is how you do it:

let ocrConfiguration = OCRConfiguration(language: .english, type: .accurate, languageCorrection: true)
let ocrKit = LNSimpleOCRKit(preprocessor: { image in
        return image.resizableImage(withCapInsets: UIEdgeInsets(top: 10, left: 10, bottom: 10, right: 10))
    }, 
    configuration: ocrConfiguration)
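
The preprocessor is just an image-in, image-out closure, so any transform can go there. As an illustration (this particular filter choice is not something the library requires), here is a sketch that converts the input to grayscale with Core Image before recognition, assuming the closure takes and returns a UIImage as in the example above:

import UIKit
import CoreImage
import LNSimpleOCRKit

let ocrConfiguration = OCRConfiguration(language: .english, type: .accurate, languageCorrection: true)
let ocrKit = LNSimpleOCRKit(preprocessor: { image in
        // Convert to grayscale with Core Image; fall back to the original image on failure.
        guard let input = CIImage(image: image),
              let filter = CIFilter(name: "CIPhotoEffectNoir") else { return image }
        filter.setValue(input, forKey: kCIInputImageKey)
        guard let output = filter.outputImage,
              let cgImage = CIContext().createCGImage(output, from: output.extent) else { return image }
        return UIImage(cgImage: cgImage)
    },
    configuration: ocrConfiguration)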

You can also post-process the detected text and display detection progress.

let ocrKit = LNSimpleOCRKit()
ocrKit.detectText(for: image) { progress in
    print("Detection progress: \(progress * 100)%")
} postprocessor: { text in
    var processed = text
    processed = processed.trimmingCharacters(in: .whitespacesAndNewlines)
    return processed
} result: { result in
    switch result {
    case .success(let text):
        print(text)
        
    case .failure(let error):
        print(error.localizedDescription)
    }
}
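
The postprocessor is an ordinary String-to-String closure, so any cleanup can live there. As an illustrative sketch (the exact cleanup is an assumption about your needs, not part of the library), this one trims the text and also collapses runs of whitespace:

import Foundation

// Illustrative postprocessor: trim the detected text and collapse runs of
// whitespace or newlines into single spaces.
let cleanup: (String) -> String = { text in
    text.trimmingCharacters(in: .whitespacesAndNewlines)
        .components(separatedBy: .whitespacesAndNewlines)
        .filter { !$0.isEmpty }
        .joined(separator: " ")
}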

Other Public API

There is also a method that hands back all the raw observations ([VNRecognizedTextObservation]).

let ocrKit = LNSimpleOCRKit()
ocrKit.recognizedObservations(for: image) { result in
    switch result {
    case .success(let observations):
        // Work with the raw observations here.
        print("Found \(observations.count) observations")
        
    case .failure(let error):
        print(error.localizedDescription)
    }
}
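
Each VNRecognizedTextObservation carries ranked candidate strings plus a bounding box in Vision's normalized coordinates, which is useful when you need positions as well as text. A small illustrative helper (the function name is ours, not part of the library):

import Vision

// Illustrative helper: pull out each observation's best candidate string,
// its confidence, and its normalized bounding box (origin at the bottom-left).
func describe(_ observations: [VNRecognizedTextObservation]) -> [String] {
    observations.compactMap { observation -> String? in
        guard let candidate = observation.topCandidates(1).first else { return nil }
        return "\(candidate.string) [\(candidate.confidence)] at \(observation.boundingBox)"
    }
}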

Future Work

  1. Improve unit tests.
  2. Improve documentation.

Contributing

You are most welcome to contribute to this project, whether with new features (perhaps one from the future work list, or anything else), refactoring, or code improvements. Also, feel free to offer suggestions and feedback.

Created with ❤️ by Luciano Nunes.

Get in touch via [email](mailto:[email protected]) or visit my LinkedIn.


Release Notes

1.1.0 (1 year ago)

Adds CocoaPods compatibility.