Merged master and fixed integration tests

feature/image-source
Christoph Oberhofer 8 years ago
commit c64e85046b

@@ -1,22 +1,24 @@
 quaggaJS
 ========
 
-- [Changelog](#changelog) (2017-01-08)
+- [Changelog](#changelog) (2017-06-07)
 - [Browser Support](#browser-support)
 - [Installing](#installing)
 - [Getting Started](#gettingstarted)
 - [API](#api)
 - [Configuration](#configobject)
+- [Tips & Tricks](#tipsandtricks)
+- [Sponsors](#sponsors)
 
 ## What is QuaggaJS?
 
 QuaggaJS is a barcode-scanner entirely written in JavaScript supporting real-
 time localization and decoding of various types of barcodes such as __EAN__,
-__CODE 128__, __CODE 39__, __EAN 8__, __UPC-A__, __UPC-C__, __I2of5__ and
-__CODABAR__. The library is also capable of using `getUserMedia` to get direct
-access to the user's camera stream. Although the code relies on heavy image-
-processing even recent smartphones are capable of locating and decoding
-barcodes in real-time.
+__CODE 128__, __CODE 39__, __EAN 8__, __UPC-A__, __UPC-C__, __I2of5__,
+__2of5__, __CODE 93__ and __CODABAR__. The library is also capable of using
+`getUserMedia` to get direct access to the user's camera stream. Although the
+code relies on heavy image-processing even recent smartphones are capable of
+locating and decoding barcodes in real-time.
 
 Try some [examples](https://serratus.github.io/quaggaJS/examples) and check out
 the blog post ([How barcode-localization works in QuaggaJS][oberhofer_co_how])
@@ -443,6 +445,8 @@ barcodes which should be decoded during the session. Possible values are:
 - upc_reader
 - upc_e_reader
 - i2of5_reader
+- 2of5_reader
+- code_93_reader
 
 Why are not all types activated by default? Simply because one should
 explicitly define the set of barcodes for their use-case. More decoders means
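
For illustration, a minimal sketch of how the two newly added readers would be enabled alongside an existing one, following the `decoder.readers` convention this section describes (the image path is a placeholder):

```javascript
// Sketch only: enable just the readers your use-case needs,
// including the newly added 2of5 and Code 93 readers.
Quagga.decodeSingle({
    src: 'barcode.jpg', // placeholder image path
    decoder: {
        readers: ['i2of5_reader', '2of5_reader', 'code_93_reader']
    }
}, function(result) {
    if (result && result.codeResult) {
        console.log('decoded:', result.codeResult.code);
    }
});
```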
@@ -587,6 +591,36 @@ Quagga.decodeSingle({
 });
 ```
 
+## <a name="tipsandtricks">Tips & Tricks</a>
+
+A growing collection of tips & tricks to improve the various aspects of Quagga.
+
+### Barcodes too small?
+
+Barcodes too far away from the camera, or a lens too close to the object,
+result in poor recognition rates and Quagga might respond with a lot of
+false positives.
+
+Starting with Chrome 59 you can make use of `capabilities` and directly
+control the zoom of the camera. Head over to the
+[web-cam demo](https://serratus.github.io/quaggaJS/examples/live_w_locator.html)
+and check out the __Zoom__ feature.
+
+You can read more about those `capabilities` in
+[Let's light a torch and explore MediaStreamTrack's capabilities](https://www.oberhofer.co/mediastreamtrack-and-its-capabilities).
+
+### Video too dark?
+
+Dark environments usually result in noisy images and therefore interfere with
+the recognition logic.
+
+Since Chrome 59 you can turn the __Torch__ of your device on and off and
+vastly improve the quality of the images. Head over to the
+[web-cam demo](https://serratus.github.io/quaggaJS/examples/live_w_locator.html)
+and check out the __Torch__ feature.
+
+To find out more about this feature, [read on](https://www.oberhofer.co/mediastreamtrack-and-its-capabilities).
 
 ## Tests
 
 Unit Tests can be run with [Karma][karmaUrl] and written using
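
A hedged sketch of what both tips amount to in code, combining `Quagga.CameraAccess.getActiveTrack()` (added in this commit) with the standard `MediaStreamTrack` capability APIs; whether `zoom` and `torch` are reported at all depends on the browser (Chrome 59+) and the hardware:

```javascript
// Sketch, assuming Quagga is already running with a live camera stream.
var track = Quagga.CameraAccess.getActiveTrack();
if (track && typeof track.getCapabilities === 'function') {
    var capabilities = track.getCapabilities();
    if (capabilities.zoom) {
        // zoom is reported as a range; pick the midpoint as an example
        var zoom = capabilities.zoom.min +
            (capabilities.zoom.max - capabilities.zoom.min) / 2;
        track.applyConstraints({advanced: [{zoom: zoom}]});
    }
    if (capabilities.torch) {
        // torch is a boolean capability; turn the light on
        track.applyConstraints({advanced: [{torch: true}]});
    }
}
```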
@@ -663,8 +697,32 @@ calling ``decodeSingle`` with the same configuration as used during recording
 . In order to reproduce the exact same result, you have to make sure to turn
 on the ``singleChannel`` flag in the configuration when using ``decodeSingle``.
 
+## <a name="sponsors">Sponsors</a>
+
+- [Maintenance Connection Canada (Asset Pro Solutions Inc.)](http://maintenanceconnection.ca/)
+
 ## <a name="changelog">Changelog</a>
 
+### 2017-06-07
+
+- Improvements
+  - added `muted` and `playsinline` to `<video/>` to make it work for Safari 11
+    Beta (even iOS)
+- Fixes
+  - Fixed [example/live_w_locator.js](https://github.com/serratus/quaggaJS/blob/master/example/live_w_locator.js)
+
+### 2017-06-06
+
+- Features
+  - Support for Standard 2of5 barcodes (See
+    [\#194](https://github.com/serratus/quaggaJS/issues/194))
+  - Support for Code 93 barcodes (See
+    [\#195](https://github.com/serratus/quaggaJS/issues/195))
+  - Exposing `Quagga.CameraAccess.getActiveTrack()` to get access to the
+    currently used `MediaStreamTrack`
+    - Example can be viewed here: [example/live_w_locator.js](https://github.com/serratus/quaggaJS/blob/master/example/live_w_locator.js) and a [demo](https://serratus.github.io/quaggaJS/examples/live_w_locator.html)
+
+Take a look at the release-notes
+([0.12.0](https://github.com/serratus/quaggaJS/releases/tag/v0.12.0)).
+
 ### 2017-01-08
 
 - Improvements
   - Exposing `CameraAccess` module to get access to methods like

dist/quagga.js vendored

@@ -3475,7 +3475,9 @@ function initCamera(video, constraints) {
     return __webpack_require__.i(__WEBPACK_IMPORTED_MODULE_1_mediaDevices__["a" /* getUserMedia */])(constraints).then(function (stream) {
         return new Promise(function (resolve) {
             streamRef = stream;
-            video.setAttribute("autoplay", 'true');
+            video.setAttribute("autoplay", true);
+            video.setAttribute('muted', true);
+            video.setAttribute('playsinline', true);
             video.srcObject = stream;
             video.addEventListener('loadedmetadata', function () {
                 video.play();
@@ -3519,6 +3521,15 @@ function enumerateVideoDevices() {
     });
 }
 
+function getActiveTrack() {
+    if (streamRef) {
+        var tracks = streamRef.getVideoTracks();
+        if (tracks && tracks.length) {
+            return tracks[0];
+        }
+    }
+}
+
 /* harmony default export */ __webpack_exports__["a"] = {
     request: function request(video, videoConstraints) {
         return pickConstraints(videoConstraints).then(initCamera.bind(null, video));
@@ -3532,13 +3543,10 @@ function enumerateVideoDevices() {
     },
     enumerateVideoDevices: enumerateVideoDevices,
     getActiveStreamLabel: function getActiveStreamLabel() {
-        if (streamRef) {
-            var tracks = streamRef.getVideoTracks();
-            if (tracks && tracks.length) {
-                return tracks[0].label;
-            }
-        }
-    }
+        var track = getActiveTrack();
+        return track ? track.label : '';
+    },
+    getActiveTrack: getActiveTrack
 };
 
 /***/ }),
@@ -9756,11 +9764,19 @@ function createScanner(pixelCapturer) {
     }
 
     function calculateClipping(canvasSize) {
-        var area = _config.detector.area;
-        var patchSize = _config.locator.patchSize || "medium";
-        var halfSample = _config.locator.halfSample || true;
-
-        return _checkImageConstraints({ area: area, patchSize: patchSize, canvasSize: canvasSize, halfSample: halfSample });
+        if (_config.detector && _config.detector.area) {
+            var area = _config.detector.area;
+            var patchSize = _config.locator.patchSize || "medium";
+            var halfSample = _config.locator.halfSample || true;
+            return _checkImageConstraints({ area: area, patchSize: patchSize, canvasSize: canvasSize, halfSample: halfSample });
+        }
+        return {
+            x: 0,
+            y: 0,
+            width: canvasSize.width,
+            height: canvasSize.height
+        };
     }
 
     function update() {

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

@@ -1,6 +1,7 @@
 <!DOCTYPE html>
 <html lang="en">
+
 <head>
     <meta charset="utf-8" />
     <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" />
@@ -8,24 +9,22 @@
     <meta name="description" content="" />
     <meta name="author" content="Christoph Oberhofer" />
-    <meta name="viewport" content="width=device-width; initial-scale=1.0" />
+    <meta name="viewport" content="width=device-width; initial-scale=1.0; user-scalable=no" />
     <link rel="stylesheet" type="text/css" href="css/styles.css" />
 </head>
 <body>
     <header>
         <div class="headline">
             <h1>QuaggaJS</h1>
             <h2>An advanced barcode-scanner written in JavaScript</h2>
         </div>
     </header>
     <section id="container" class="container">
         <h3>The user's camera</h3>
-        <p>If your platform supports the <strong>getUserMedia</strong> API call, you can try the real-time locating and decoding features.
-            Simply allow the page to access your web-cam and point it to a barcode. You can switch between <strong>Code128</strong>
-            and <strong>EAN</strong> to test different scenarios.
-            It works best if your camera has built-in auto-focus.
-        </p>
+        <p>If your platform supports the <strong>getUserMedia</strong> API call, you can try the real-time locating and decoding
+            features. Simply allow the page to access your web-cam and point it to a barcode. You can switch between <strong>Code128</strong> and <strong>EAN</strong> to test different scenarios. It works best if your camera has built-in auto-focus.
+        </p>
         <div class="controls">
             <fieldset class="input-group">
                 <button class="stop">Stop</button>
@@ -49,7 +48,7 @@
                     </select>
                 </label>
                 <label>
-                    <span>Resolution (long side)</span>
+                    <span>Resolution (width)</span>
                    <select name="input-stream_constraints">
                        <option value="320x240">320px</option>
                        <option selected="selected" value="640x480">640px</option>
@@ -96,25 +95,34 @@
                         <option value="2">2x</option>
                     </select>
                 </label>
+                <label style="display: none">
+                    <span>Zoom</span>
+                    <select name="settings_zoom"></select>
+                </label>
+                <label style="display: none">
+                    <span>Torch</span>
+                    <input type="checkbox" name="settings_torch" />
+                </label>
             </fieldset>
         </div>
         <div id="result_strip">
             <ul class="thumbnails"></ul>
             <ul class="collector"></ul>
         </div>
         <div id="interactive" class="viewport">
             <canvas class="drawing" />
         </div>
     </section>
     <footer>
         <p>
             &copy; Made with ❤️ by Christoph Oberhofer
         </p>
     </footer>
     <script src="vendor/jquery-1.9.0.min.js" type="text/javascript"></script>
     <script src="//webrtc.github.io/adapter/adapter-latest.js" type="text/javascript"></script>
     <script src="../dist/quagga.js" type="text/javascript"></script>
     <script src="live_w_locator.js" type="text/javascript"></script>
 </body>
+
 </html>

@@ -22,7 +22,7 @@ $(function() {
        }
    });
 
    var App = {
-        init : function() {
+        init: function() {
            this.overlay = document.querySelector('#interactive canvas.drawing');
            Quagga.fromCamera({
@@ -45,10 +45,66 @@ $(function() {
                console.error(err);
            });
        }.bind(this));
+
+        Quagga.init(this.state, function(err) {
+            if (err) {
+                return self.handleError(err);
+            }
+            //Quagga.registerResultCollector(resultCollector);
+            App.attachListeners();
+            App.checkCapabilities();
+            Quagga.start();
+        });
+    },
+    handleError: function(err) {
+        console.log(err);
+    },
+    checkCapabilities: function() {
+        var track = Quagga.CameraAccess.getActiveTrack();
+        var capabilities = {};
+        if (typeof track.getCapabilities === 'function') {
+            capabilities = track.getCapabilities();
+        }
+        this.applySettingsVisibility('zoom', capabilities.zoom);
+        this.applySettingsVisibility('torch', capabilities.torch);
+    },
+    updateOptionsForMediaRange: function(node, range) {
+        console.log('updateOptionsForMediaRange', node, range);
+        var NUM_STEPS = 6;
+        var stepSize = (range.max - range.min) / NUM_STEPS;
+        var option;
+        var value;
+        while (node.firstChild) {
+            node.removeChild(node.firstChild);
+        }
+        for (var i = 0; i <= NUM_STEPS; i++) {
+            value = range.min + (stepSize * i);
+            option = document.createElement('option');
+            option.value = value;
+            option.innerHTML = value;
+            node.appendChild(option);
+        }
+    },
+    applySettingsVisibility: function(setting, capability) {
+        // depending on type of capability
+        if (typeof capability === 'boolean') {
+            var node = document.querySelector('input[name="settings_' + setting + '"]');
+            if (node) {
+                node.parentNode.style.display = capability ? 'block' : 'none';
+            }
+            return;
+        }
+        if (window.MediaSettingsRange && capability instanceof window.MediaSettingsRange) {
+            var node = document.querySelector('select[name="settings_' + setting + '"]');
+            if (node) {
+                this.updateOptionsForMediaRange(node, capability);
+                node.parentNode.style.display = 'block';
+            }
+            return;
+        }
    },
    initCameraSelection: function() {
        var streamLabel = this.scanner.getSource().getLabel();
 
        return Quagga.CameraAccess.enumerateVideoDevices()
            .then(function(devices) {
                function pruneText(text) {
@@ -114,14 +170,27 @@ $(function() {
        $(".controls").off("click", "button.stop");
        $(".controls .reader-config-group").off("change", "input, select");
    },
+    applySetting: function(setting, value) {
+        var track = Quagga.CameraAccess.getActiveTrack();
+        if (track && typeof track.getCapabilities === 'function') {
+            switch (setting) {
+            case 'zoom':
+                return track.applyConstraints({advanced: [{zoom: parseFloat(value)}]});
+            case 'torch':
+                return track.applyConstraints({advanced: [{torch: !!value}]});
+            }
+        }
+    },
    setState: function(path, value) {
        if (typeof this._accessByPath(this.inputMapper, path) === "function") {
            value = this._accessByPath(this.inputMapper, path)(value, this.state);
        }
-        this._accessByPath(this.state, path, value);
+        if (path.startsWith('settings.')) {
+            var setting = path.substring(9);
+            return self.applySetting(setting, value);
+        }
+        self._accessByPath(self.state, path, value);
 
-        console.log(JSON.stringify(this.state));
        this.scanner
            .applyConfig({
@@ -3535,7 +3535,9 @@ function initCamera(video, constraints) {
     return (0, _mediaDevices.getUserMedia)(constraints).then(function (stream) {
         return new Promise(function (resolve) {
             streamRef = stream;
-            video.setAttribute("autoplay", 'true');
+            video.setAttribute("autoplay", true);
+            video.setAttribute('muted', true);
+            video.setAttribute('playsinline', true);
             video.srcObject = stream;
             video.addEventListener('loadedmetadata', function () {
                 video.play();
@@ -3579,6 +3581,15 @@ function enumerateVideoDevices() {
     });
 }
 
+function getActiveTrack() {
+    if (streamRef) {
+        var tracks = streamRef.getVideoTracks();
+        if (tracks && tracks.length) {
+            return tracks[0];
+        }
+    }
+}
+
 exports.default = {
     request: function request(video, videoConstraints) {
         return pickConstraints(videoConstraints).then(initCamera.bind(null, video));
@@ -3592,13 +3603,10 @@ exports.default = {
     },
     enumerateVideoDevices: enumerateVideoDevices,
     getActiveStreamLabel: function getActiveStreamLabel() {
-        if (streamRef) {
-            var tracks = streamRef.getVideoTracks();
-            if (tracks && tracks.length) {
-                return tracks[0].label;
-            }
-        }
-    }
+        var track = getActiveTrack();
+        return track ? track.label : '';
+    },
+    getActiveTrack: getActiveTrack
 };
 
 /***/ }),
@@ -10053,11 +10061,19 @@ function createScanner(pixelCapturer) {
     }
 
     function calculateClipping(canvasSize) {
-        var area = _config.detector.area;
-        var patchSize = _config.locator.patchSize || "medium";
-        var halfSample = _config.locator.halfSample || true;
-
-        return _checkImageConstraints({ area: area, patchSize: patchSize, canvasSize: canvasSize, halfSample: halfSample });
+        if (_config.detector && _config.detector.area) {
+            var area = _config.detector.area;
+            var patchSize = _config.locator.patchSize || "medium";
+            var halfSample = _config.locator.halfSample || true;
+            return _checkImageConstraints({ area: area, patchSize: patchSize, canvasSize: canvasSize, halfSample: halfSample });
+        }
+        return {
+            x: 0,
+            y: 0,
+            width: canvasSize.width,
+            height: canvasSize.height
+        };
     }
 
     function update() {

File diff suppressed because one or more lines are too long

@@ -42,7 +42,9 @@ function initCamera(video, constraints) {
    .then((stream) => {
        return new Promise((resolve) => {
            streamRef = stream;
-            video.setAttribute("autoplay", 'true');
+            video.setAttribute("autoplay", true);
+            video.setAttribute('muted', true);
+            video.setAttribute('playsinline', true);
            video.srcObject = stream;
            video.addEventListener('loadedmetadata', () => {
                video.play();
@@ -87,6 +89,15 @@ function enumerateVideoDevices() {
        .then(devices => devices.filter(device => device.kind === 'videoinput'));
 }
 
+function getActiveTrack() {
+    if (streamRef) {
+        const tracks = streamRef.getVideoTracks();
+        if (tracks && tracks.length) {
+            return tracks[0];
+        }
+    }
+}
+
 export default {
    request: function(video, videoConstraints) {
        return pickConstraints(videoConstraints)
@@ -101,11 +112,8 @@ export default {
    },
    enumerateVideoDevices,
    getActiveStreamLabel: function() {
-        if (streamRef) {
-            const tracks = streamRef.getVideoTracks();
-            if (tracks && tracks.length) {
-                return tracks[0].label;
-            }
-        }
-    }
+        const track = getActiveTrack();
+        return track ? track.label : '';
+    },
+    getActiveTrack
 };

@@ -212,11 +212,19 @@ function createScanner(pixelCapturer) {
    }
 
    function calculateClipping(canvasSize) {
-        const area = _config.detector.area;
-        const patchSize = _config.locator.patchSize || "medium";
-        const halfSample = _config.locator.halfSample || true;
-
-        return _checkImageConstraints({area, patchSize, canvasSize, halfSample});
+        if (_config.detector && _config.detector.area) {
+            const area = _config.detector.area;
+            const patchSize = _config.locator.patchSize || "medium";
+            const halfSample = _config.locator.halfSample || true;
+            return _checkImageConstraints({area, patchSize, canvasSize, halfSample});
+        }
+        return {
+            x: 0,
+            y: 0,
+            width: canvasSize.width,
+            height: canvasSize.height,
+        };
    }
 
    function update() {

@@ -22,7 +22,7 @@ describe('decodeSingle', function () {
        };
    }
 
-    this.timeout(10000);
+    this.timeout(5000);
 
    function _runTestSet(testSet, config) {
        var readers = config.decoder.readers.slice(),
@@ -43,18 +43,32 @@
        it('should decode ' + folder + " correctly", function(done) {
            async.eachSeries(testSet, function (sample, callback) {
-                config.src = folder + sample.name;
-                config.readers = readers;
                Quagga
-                    .config(config)
-                    .fromSource(config.src)
-                    .addEventListener('processed', function(result){
-                        console.log(sample.name);
-                        expect(result.codeResult.code).to.equal(sample.result);
-                        expect(result.codeResult.format).to.equal(sample.format);
-                        callback();
+                    .fromImage({
+                        constraints: {
+                            src: folder + sample.name,
+                            width: config.inputStream.size,
+                            height: config.inputStream.size,
+                        },
+                        locator: config.locator,
+                        decoder: {
+                            readers: readers,
+                        },
+                        numOfWorkers: config.numOfWorkers,
                    })
-                    .start();
+                    .then(scanner => {
+                        console.log('Scanner created', scanner);
+                        scanner.detect()
+                            .then((result) => {
+                                console.log(sample.name);
+                                expect(result.codeResult.code).to.equal(sample.result);
+                                expect(result.codeResult.format).to.equal(sample.format);
+                            })
+                            .catch(err => {
+                                console.log(sample.name, err);
+                            })
+                            .then(callback);
+                    });
            }, function() {
                done();
            });
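
The rewritten test exercises the new promise-based scanner API from this branch. Distilled into a standalone sketch (the fixture path, dimensions, and reader choice are placeholders), the flow looks roughly like this:

```javascript
// Sketch of the new Quagga.fromImage(...).then(scanner => scanner.detect()) flow.
Quagga
    .fromImage({
        constraints: {
            src: 'fixtures/code_128/image-001.jpg', // placeholder fixture
            width: 800,
            height: 800,
        },
        decoder: {
            readers: ['code_128_reader'],
        },
        numOfWorkers: 0, // assumption: decode on the main thread
    })
    .then(function(scanner) {
        return scanner.detect();
    })
    .then(function(result) {
        console.log(result.codeResult.code, result.codeResult.format);
    })
    .catch(function(err) {
        console.error(err);
    });
```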
