Merged master and fixed integration tests

feature/image-source
Christoph Oberhofer 8 years ago
commit c64e85046b

@@ -1,22 +1,24 @@
quaggaJS
========
-- [Changelog](#changelog) (2017-01-08)
+- [Changelog](#changelog) (2017-06-07)
- [Browser Support](#browser-support)
- [Installing](#installing)
- [Getting Started](#gettingstarted)
- [API](#api)
- [Configuration](#configobject)
- [Tips & Tricks](#tipsandtricks)
- [Sponsors](#sponsors)
## What is QuaggaJS?
QuaggaJS is a barcode-scanner entirely written in JavaScript supporting real-
time localization and decoding of various types of barcodes such as __EAN__,
-__CODE 128__, __CODE 39__, __EAN 8__, __UPC-A__, __UPC-C__, __I2of5__ and
-__CODABAR__. The library is also capable of using `getUserMedia` to get direct
-access to the user's camera stream. Although the code relies on heavy image-
-processing even recent smartphones are capable of locating and decoding
-barcodes in real-time.
+__CODE 128__, __CODE 39__, __EAN 8__, __UPC-A__, __UPC-C__, __I2of5__,
+__2of5__, __CODE 93__ and __CODABAR__. The library is also capable of using
+`getUserMedia` to get direct access to the user's camera stream. Although the
+code relies on heavy image-processing even recent smartphones are capable of
+locating and decoding barcodes in real-time.
Try some [examples](https://serratus.github.io/quaggaJS/examples) and check out
the blog post ([How barcode-localization works in QuaggaJS][oberhofer_co_how])
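
For a first impression, a minimal live-stream setup looks roughly like the
following sketch (the `#interactive` container element and the restriction to
`ean_reader` are assumptions of this example, not requirements):

```javascript
// Minimal sketch: start a live camera stream and log decoded EAN codes.
Quagga.init({
    inputStream: {
        type: "LiveStream",
        target: document.querySelector('#interactive') // assumed DOM node
    },
    decoder: {
        readers: ["ean_reader"] // decode EAN-13 only
    }
}, function(err) {
    if (err) {
        console.error(err); // e.g. camera permission denied
        return;
    }
    Quagga.start();
});

Quagga.onDetected(function(result) {
    console.log("Detected:", result.codeResult.code);
});
```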
@@ -443,6 +445,8 @@ barcodes which should be decoded during the session. Possible values are:
- upc_reader
- upc_e_reader
- i2of5_reader
- 2of5_reader
- code_93_reader
Why are not all types activated by default? Simply because one should
explicitly define the set of barcodes for their use-case. More decoders means
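
For example, a use-case that only ever scans EAN and Code 128 labels would
enable just those two readers (a sketch; the image path is a placeholder):

```javascript
// Restricting the readers to the expected symbologies reduces
// clashes and false-positives.
Quagga.decodeSingle({
    src: "/path/to/label.png", // placeholder image
    numOfWorkers: 0, // set to 0 e.g. in Node or test environments
    decoder: {
        readers: ["ean_reader", "code_128_reader"]
    }
}, function(result) {
    if (result && result.codeResult) {
        console.log("code:", result.codeResult.code);
    } else {
        console.log("nothing detected");
    }
});
```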
@@ -587,6 +591,36 @@ Quagga.decodeSingle({
});
```
## <a name="tipsandtricks">Tips & Tricks</a>
A growing collection of tips & tricks to improve the various aspects of Quagga.
### Barcodes too small?
A barcode too far away from the camera, or a lens too close to the object,
results in poor recognition rates, and Quagga might respond with a lot of
false-positives.
Starting in Chrome 59 you can now make use of `capabilities` and directly
control the zoom of the camera. Head over to the
[web-cam demo](https://serratus.github.io/quaggaJS/examples/live_w_locator.html)
and check out the __Zoom__ feature.
You can read more about those `capabilities` in
[Let's light a torch and explore MediaStreamTrack's capabilities](https://www.oberhofer.co/mediastreamtrack-and-its-capabilities)
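
In code, that boils down to something like this sketch (assuming a running
Quagga camera stream and a browser, such as Chrome 59+, that reports these
capabilities):

```javascript
// Read the zoom capability of the active camera track and, if present,
// apply a zoom constraint within the advertised range.
var track = Quagga.CameraAccess.getActiveTrack();
if (track && typeof track.getCapabilities === 'function') {
    var capabilities = track.getCapabilities();
    if (capabilities.zoom) {
        track.applyConstraints({
            advanced: [{zoom: Math.min(capabilities.zoom.max, 1.5)}]
        });
    }
}
```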
### Video too dark?
Dark environments usually result in noisy images and therefore mess with the
recognition logic.
Since Chrome 59 you can turn the __Torch__ of your device on and off and
vastly improve the quality of the images. Head over to the
[web-cam demo](https://serratus.github.io/quaggaJS/examples/live_w_locator.html)
and check out the __Torch__ feature.
To find out more about this feature [read on](https://www.oberhofer.co/mediastreamtrack-and-its-capabilities).
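
As a sketch, with the same assumptions as the zoom example above:

```javascript
// Turn the torch on if the active track advertises that capability.
var track = Quagga.CameraAccess.getActiveTrack();
if (track && typeof track.getCapabilities === 'function'
        && track.getCapabilities().torch) {
    track.applyConstraints({advanced: [{torch: true}]});
}
```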
## Tests
Unit Tests can be run with [Karma][karmaUrl] and written using
@@ -663,8 +697,32 @@ calling ``decodeSingle`` with the same configuration as used during recording
. In order to reproduce the exact same result, you have to make sure to turn
on the ``singleChannel`` flag in the configuration when using ``decodeSingle``.
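
A reproduction run might then look like this sketch (the file name is a
placeholder; locator, readers and the other settings should mirror the
recorded session):

```javascript
// Re-decode a recorded frame; singleChannel makes the input match
// what was captured during recording.
Quagga.decodeSingle({
    src: "recorded-frame.png", // placeholder for a recorded frame
    inputStream: {
        singleChannel: true // read only the red color-channel
    },
    decoder: {
        readers: ["ean_reader"] // same readers as during recording
    }
}, function(result) {
    console.log(result && result.codeResult && result.codeResult.code);
});
```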
## <a name="sponsors">Sponsors</a>
- [Maintenance Connection Canada (Asset Pro Solutions Inc.)](http://maintenanceconnection.ca/)
## <a name="changelog">Changelog</a>
### 2017-06-07
- Improvements
- added `muted` and `playsinline` to `<video/>` to make it work for Safari 11
Beta (even iOS)
- Fixes
- Fixed [example/live_w_locator.js](https://github.com/serratus/quaggaJS/blob/master/example/live_w_locator.js)
### 2017-06-06
- Features
- Support for Standard 2of5 barcodes (See
[\#194](https://github.com/serratus/quaggaJS/issues/194))
- Support for Code 93 barcodes (See
[\#195](https://github.com/serratus/quaggaJS/issues/195))
- Exposing `Quagga.CameraAccess.getActiveTrack()` to get access to the
currently used `MediaStreamTrack`
- Example can be viewed here: [example/live_w_locator.js](https://github.com/serratus/quaggaJS/blob/master/example/live_w_locator.js) and a [demo](https://serratus.github.io/quaggaJS/examples/live_w_locator.html)
Take a look at the release-notes (
[0.12.0](https://github.com/serratus/quaggaJS/releases/tag/v0.12.0))
### 2017-01-08
- Improvements
- Exposing `CameraAccess` module to get access to methods like

dist/quagga.js (vendored)

@@ -3475,7 +3475,9 @@ function initCamera(video, constraints) {
return __webpack_require__.i(__WEBPACK_IMPORTED_MODULE_1_mediaDevices__["a" /* getUserMedia */])(constraints).then(function (stream) {
return new Promise(function (resolve) {
streamRef = stream;
video.setAttribute("autoplay", 'true');
video.setAttribute("autoplay", true);
video.setAttribute('muted', true);
video.setAttribute('playsinline', true);
video.srcObject = stream;
video.addEventListener('loadedmetadata', function () {
video.play();
@@ -3519,6 +3521,15 @@ function enumerateVideoDevices() {
});
}
+function getActiveTrack() {
+if (streamRef) {
+var tracks = streamRef.getVideoTracks();
+if (tracks && tracks.length) {
+return tracks[0];
+}
+}
+}
/* harmony default export */ __webpack_exports__["a"] = {
request: function request(video, videoConstraints) {
return pickConstraints(videoConstraints).then(initCamera.bind(null, video));
@@ -3532,13 +3543,10 @@ function enumerateVideoDevices() {
},
enumerateVideoDevices: enumerateVideoDevices,
getActiveStreamLabel: function getActiveStreamLabel() {
-if (streamRef) {
-var tracks = streamRef.getVideoTracks();
-if (tracks && tracks.length) {
-return tracks[0].label;
-}
-}
-}
+var track = getActiveTrack();
+return track ? track.label : '';
+},
+getActiveTrack: getActiveTrack
};
/***/ }),
@@ -9756,12 +9764,20 @@ function createScanner(pixelCapturer) {
}
function calculateClipping(canvasSize) {
if (_config.detector && _config.detector.area) {
var area = _config.detector.area;
var patchSize = _config.locator.patchSize || "medium";
var halfSample = _config.locator.halfSample || true;
return _checkImageConstraints({ area: area, patchSize: patchSize, canvasSize: canvasSize, halfSample: halfSample });
}
return {
x: 0,
y: 0,
width: canvasSize.width,
height: canvasSize.height
};
}
function update() {
var availableWorker;

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

@@ -1,6 +1,7 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" />
@@ -8,11 +9,11 @@
<meta name="description" content="" />
<meta name="author" content="Christoph Oberhofer" />
<meta name="viewport" content="width=device-width; initial-scale=1.0" />
<meta name="viewport" content="width=device-width; initial-scale=1.0; user-scalable=no" />
<link rel="stylesheet" type="text/css" href="css/styles.css" />
</head>
<body>
<header>
<div class="headline">
<h1>QuaggaJS</h1>
@@ -21,10 +22,8 @@
</header>
<section id="container" class="container">
<h3>The user's camera</h3>
-<p>If your platform supports the <strong>getUserMedia</strong> API call, you can try the real-time locating and decoding features.
-Simply allow the page to access your web-cam and point it to a barcode. You can switch between <strong>Code128</strong>
-and <strong>EAN</strong> to test different scenarios.
-It works best if your camera has built-in auto-focus.
+<p>If your platform supports the <strong>getUserMedia</strong> API call, you can try the real-time locating and decoding
+features. Simply allow the page to access your web-cam and point it to a barcode. You can switch between <strong>Code128</strong> and <strong>EAN</strong> to test different scenarios. It works best if your camera has built-in auto-focus.
</p>
<div class="controls">
<fieldset class="input-group">
@@ -49,7 +48,7 @@
</select>
</label>
<label>
-<span>Resolution (long side)</span>
+<span>Resolution (width)</span>
<select name="input-stream_constraints">
<option value="320x240">320px</option>
<option selected="selected" value="640x480">640px</option>
@@ -96,6 +95,14 @@
<option value="2">2x</option>
</select>
</label>
+<label style="display: none">
+<span>Zoom</span>
+<select name="settings_zoom"></select>
+</label>
+<label style="display: none">
+<span>Torch</span>
+<input type="checkbox" name="settings_torch" />
+</label>
</fieldset>
</div>
<div id="result_strip">
@@ -116,5 +123,6 @@
<script src="//webrtc.github.io/adapter/adapter-latest.js" type="text/javascript"></script>
<script src="../dist/quagga.js" type="text/javascript"></script>
<script src="live_w_locator.js" type="text/javascript"></script>
</body>
</html>

@@ -22,7 +22,7 @@ $(function() {
}
});
var App = {
-init : function() {
+init: function() {
this.overlay = document.querySelector('#interactive canvas.drawing');
Quagga.fromCamera({
@@ -45,10 +45,66 @@ $(function() {
console.error(err);
});
}.bind(this));
+Quagga.init(this.state, function(err) {
+if (err) {
+return self.handleError(err);
+}
+//Quagga.registerResultCollector(resultCollector);
+App.attachListeners();
+App.checkCapabilities();
+Quagga.start();
+});
},
+handleError: function(err) {
+console.log(err);
+},
+checkCapabilities: function() {
+var track = Quagga.CameraAccess.getActiveTrack();
+var capabilities = {};
+if (typeof track.getCapabilities === 'function') {
+capabilities = track.getCapabilities();
+}
+this.applySettingsVisibility('zoom', capabilities.zoom);
+this.applySettingsVisibility('torch', capabilities.torch);
+},
+updateOptionsForMediaRange: function(node, range) {
+console.log('updateOptionsForMediaRange', node, range);
+var NUM_STEPS = 6;
+var stepSize = (range.max - range.min) / NUM_STEPS;
+var option;
+var value;
+while (node.firstChild) {
+node.removeChild(node.firstChild);
+}
+for (var i = 0; i <= NUM_STEPS; i++) {
+value = range.min + (stepSize * i);
+option = document.createElement('option');
+option.value = value;
+option.innerHTML = value;
+node.appendChild(option);
+}
+},
+applySettingsVisibility: function(setting, capability) {
+// depending on type of capability
+if (typeof capability === 'boolean') {
+var node = document.querySelector('input[name="settings_' + setting + '"]');
+if (node) {
+node.parentNode.style.display = capability ? 'block' : 'none';
+}
+return;
+}
+if (window.MediaSettingsRange && capability instanceof window.MediaSettingsRange) {
+var node = document.querySelector('select[name="settings_' + setting + '"]');
+if (node) {
+this.updateOptionsForMediaRange(node, capability);
+node.parentNode.style.display = 'block';
+}
+return;
+}
+},
initCameraSelection: function() {
var streamLabel = this.scanner.getSource().getLabel();
return Quagga.CameraAccess.enumerateVideoDevices()
.then(function(devices) {
function pruneText(text) {
@@ -114,14 +170,27 @@ $(function() {
$(".controls").off("click", "button.stop");
$(".controls .reader-config-group").off("change", "input, select");
},
+applySetting: function(setting, value) {
+var track = Quagga.CameraAccess.getActiveTrack();
+if (track && typeof track.getCapabilities === 'function') {
+switch (setting) {
+case 'zoom':
+return track.applyConstraints({advanced: [{zoom: parseFloat(value)}]});
+case 'torch':
+return track.applyConstraints({advanced: [{torch: !!value}]});
+}
+}
+},
setState: function(path, value) {
if (typeof this._accessByPath(this.inputMapper, path) === "function") {
value = this._accessByPath(this.inputMapper, path)(value, this.state);
}
-this._accessByPath(this.state, path, value);
-console.log(JSON.stringify(this.state));
+if (path.startsWith('settings.')) {
+var setting = path.substring(9);
+return self.applySetting(setting, value);
+}
+self._accessByPath(self.state, path, value);
this.scanner
.applyConfig({

@@ -3535,7 +3535,9 @@ function initCamera(video, constraints) {
return (0, _mediaDevices.getUserMedia)(constraints).then(function (stream) {
return new Promise(function (resolve) {
streamRef = stream;
video.setAttribute("autoplay", 'true');
video.setAttribute("autoplay", true);
video.setAttribute('muted', true);
video.setAttribute('playsinline', true);
video.srcObject = stream;
video.addEventListener('loadedmetadata', function () {
video.play();
@@ -3579,6 +3581,15 @@ function enumerateVideoDevices() {
});
}
+function getActiveTrack() {
+if (streamRef) {
+var tracks = streamRef.getVideoTracks();
+if (tracks && tracks.length) {
+return tracks[0];
+}
+}
+}
exports.default = {
request: function request(video, videoConstraints) {
return pickConstraints(videoConstraints).then(initCamera.bind(null, video));
@@ -3592,13 +3603,10 @@ exports.default = {
},
enumerateVideoDevices: enumerateVideoDevices,
getActiveStreamLabel: function getActiveStreamLabel() {
-if (streamRef) {
-var tracks = streamRef.getVideoTracks();
-if (tracks && tracks.length) {
-return tracks[0].label;
-}
-}
-}
+var track = getActiveTrack();
+return track ? track.label : '';
+},
+getActiveTrack: getActiveTrack
};
/***/ }),
@@ -10053,12 +10061,20 @@ function createScanner(pixelCapturer) {
}
function calculateClipping(canvasSize) {
if (_config.detector && _config.detector.area) {
var area = _config.detector.area;
var patchSize = _config.locator.patchSize || "medium";
var halfSample = _config.locator.halfSample || true;
return _checkImageConstraints({ area: area, patchSize: patchSize, canvasSize: canvasSize, halfSample: halfSample });
}
return {
x: 0,
y: 0,
width: canvasSize.width,
height: canvasSize.height
};
}
function update() {
var availableWorker;

File diff suppressed because one or more lines are too long

@@ -42,7 +42,9 @@ function initCamera(video, constraints) {
.then((stream) => {
return new Promise((resolve) => {
streamRef = stream;
video.setAttribute("autoplay", 'true');
video.setAttribute("autoplay", true);
video.setAttribute('muted', true);
video.setAttribute('playsinline', true);
video.srcObject = stream;
video.addEventListener('loadedmetadata', () => {
video.play();
@@ -87,6 +89,15 @@ function enumerateVideoDevices() {
.then(devices => devices.filter(device => device.kind === 'videoinput'));
}
+function getActiveTrack() {
+if (streamRef) {
+const tracks = streamRef.getVideoTracks();
+if (tracks && tracks.length) {
+return tracks[0];
+}
+}
+}
export default {
request: function(video, videoConstraints) {
return pickConstraints(videoConstraints)
@@ -101,11 +112,8 @@ export default {
},
enumerateVideoDevices,
getActiveStreamLabel: function() {
-if (streamRef) {
-const tracks = streamRef.getVideoTracks();
-if (tracks && tracks.length) {
-return tracks[0].label;
-}
-}
-}
+const track = getActiveTrack();
+return track ? track.label : '';
+},
+getActiveTrack
};

@@ -212,12 +212,20 @@ function createScanner(pixelCapturer) {
}
function calculateClipping(canvasSize) {
if (_config.detector && _config.detector.area) {
const area = _config.detector.area;
const patchSize = _config.locator.patchSize || "medium";
const halfSample = _config.locator.halfSample || true;
return _checkImageConstraints({area, patchSize, canvasSize, halfSample});
}
return {
x: 0,
y: 0,
width: canvasSize.width,
height: canvasSize.height,
};
}
function update() {
var availableWorker;

@@ -22,7 +22,7 @@ describe('decodeSingle', function () {
};
}
-this.timeout(10000);
+this.timeout(5000);
function _runTestSet(testSet, config) {
var readers = config.decoder.readers.slice(),
@@ -43,18 +43,32 @@
it('should decode ' + folder + " correctly", function(done) {
async.eachSeries(testSet, function (sample, callback) {
-config.src = folder + sample.name;
-config.readers = readers;
Quagga
-.config(config)
-.fromSource(config.src)
-.addEventListener('processed', function(result){
+.fromImage({
+constraints: {
+src: folder + sample.name,
+width: config.inputStream.size,
+height: config.inputStream.size,
+},
+locator: config.locator,
+decoder: {
+readers: readers,
+},
+numOfWorkers: config.numOfWorkers,
+})
+.then(scanner => {
+console.log('Scanner created', scanner);
+scanner.detect()
+.then((result) => {
console.log(sample.name);
expect(result.codeResult.code).to.equal(sample.result);
expect(result.codeResult.format).to.equal(sample.format);
-callback();
})
-.start();
+.catch(err => {
+console.log(sample.name, err);
+})
+.then(callback);
});
}, function() {
done();
});
