919 results for Optical pattern recognition Data processing
Abstract:
The thesis uses an automatic pattern recognition algorithm and common dual moving average crossover rules to explain the sell-buy imbalance of retail investors trading on the Stuttgart stock exchange, and thereby to answer the question "do retail investors use technical analysis methods as the basis of their trading decisions?" Based on prior research on investor behavior and on the profitability of technical analysis, the baseline assumption was that retail investors would use technical analysis methods. The empirical study, based on data on the DAX30 companies from 2009 to 2013, did not yield a sufficiently clear answer to the research question. Weak evidence nevertheless suggests that retail investors shift their trading behavior in the direction indicated by certain patterns and crossover rules.
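To make the crossover rules concrete, here is a minimal Python sketch of a generic dual moving average crossover signal; the window lengths (5 and 20 periods) are illustrative assumptions, not the rule set used in the thesis.

```python
import numpy as np

def sma(prices, window):
    """Trailing simple moving average (valid part only)."""
    return np.convolve(prices, np.ones(window) / window, mode="valid")

def crossover_signals(prices, short=5, long_=20):
    """+1 where the short SMA crosses above the long SMA (a buy signal),
    -1 where it crosses below (a sell signal), 0 elsewhere."""
    s, lg = sma(prices, short), sma(prices, long_)
    s = s[len(s) - len(lg):]          # align both series on the same end dates
    above = s > lg
    signals = np.zeros(len(lg), dtype=int)
    signals[1:][above[1:] & ~above[:-1]] = 1    # short crosses above long
    signals[1:][~above[1:] & above[:-1]] = -1   # short crosses below long
    return signals
```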
Abstract:
The chemical composition of apple juices may be used to discriminate between the varieties for consumption and those for raw material. Fuji and Gala have a chemical pattern that can be used for this classification. Multivariate methods correlate independent continuous chemical descriptors with the categorical apple variety. Three main descriptors of apple juice were selected: malic acid, total reducing sugar and total phenolic compounds. A chemometric approach, employing PCA and SIMCA, was used to classify apple juice samples. PCA was performed with 24 juices from Fuji and Gala, and SIMCA, with 15 juices. The exploratory and predictive models recognized 88% and 64%, respectively, as belonging to a mixed domain. The apple juice from commercial fruits shows a pattern related to cv. Fuji and Gala with boundaries from 0.18 to 0.389 g.100 mL-1 (malic acid), from 8.65 to 15.18 g.100 mL-1 (total reducing sugar) and from 100 to 400 mg.L-1 (total phenolic compounds), but such boundaries were slightly shorter in the remaining set of commercial apple juices, specifically from 0.16 to 0.36 g.100 mL-1, from 9.25 to 15.5 g.100 mL-1 and from 180 to 606 mg.L-1 for acidity, reducing sugar and phenolic compounds, respectively, representing the acid, sweet and bitter tastes.
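As an illustration of the exploratory step, here is a minimal PCA sketch in the spirit of the chemometric approach described above; the sample values are invented, and SIMCA (which has no scikit-learn implementation) is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical measurements: [malic acid (g/100 mL), reducing sugar
# (g/100 mL), total phenolics (mg/L)] per juice sample (values invented).
X = np.array([
    [0.20,  9.5, 180.0],
    [0.35, 14.8, 390.0],
    [0.18, 10.2, 220.0],
    [0.30, 13.1, 350.0],
    [0.25, 12.0, 300.0],
])

# Autoscale so the three descriptors contribute comparably despite units.
X_scaled = StandardScaler().fit_transform(X)

# Project onto the first two principal components for exploratory analysis.
pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)
print(scores)
print("explained variance ratio:", pca.explained_variance_ratio_)
```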
Abstract:
This paper explores behavioral patterns of web users on an online magazine website. The goal of the study is first to find and visualize user paths within the data generated during collection, and then to identify some generic behavioral typologies of user behavior. To form a theoretical foundation for processing data and identifying behavioral archetypes, the study relies on established consumer behavior literature to propose typologies of behavior. For data processing, the study utilizes methodologies of applied cluster analysis and sequential path analysis. Utilizing a dataset of clickstream data generated from the real-life clicks of 250 randomly selected website visitors over a period of six weeks, an exploratory method is followed in order to find and visualize generally occurring paths of users on the website. Six distinct behavioral typologies were recognized, with the dominant user consuming mainly blog content, as opposed to editorial content. Most importantly, it was observed that approximately 80% of clicks were of the blog content category, meaning that the majority of web traffic occurring on the site takes place in content other than the desired editorial content pages. The outcome of the study is a set of managerial recommendations for each identified behavioral archetype.
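As a sketch of the path-finding step, the snippet below groups hypothetical clickstream rows into per-visitor paths and counts the most common ones; the schema and values are invented, and the clustering stage is omitted.

```python
from collections import Counter, defaultdict

# Hypothetical clickstream rows: (visitor_id, timestamp, content_category).
clicks = [
    ("v1", 1, "blog"), ("v1", 2, "blog"), ("v1", 3, "editorial"),
    ("v2", 1, "front"), ("v2", 2, "blog"),
    ("v3", 1, "blog"), ("v3", 2, "blog"),
]

# Group clicks into per-visitor paths, ordered by timestamp.
paths = defaultdict(list)
for visitor, ts, category in sorted(clicks, key=lambda r: (r[0], r[1])):
    paths[visitor].append(category)

# Count the most frequent paths as a first step toward behavioral typologies.
path_counts = Counter(tuple(p) for p in paths.values())
for path, n in path_counts.most_common(3):
    print(" -> ".join(path), n)
```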
Abstract:
One of the fundamental problems with image processing of petrographic thin sections is that the appearance (colour/intensity) of a mineral grain will vary with the orientation of the crystal lattice to the preferred direction of the polarizing filters on a petrographic microscope. This makes it very difficult to determine grain boundaries, grain orientation and mineral species from a single captured image. To overcome this problem, the Rotating Polarizer Stage was used to replace the fixed polarizer and analyzer on a standard petrographic microscope. The Rotating Polarizer Stage rotates the polarizers while the thin section remains stationary, allowing for better data gathering possibilities. Instead of capturing a single image of a thin section, six composite data sets are created by rotating the polarizers through 90° (or 180° if quartz c-axis measurements need to be taken) in both plane and cross polarized light. The composite data sets can be viewed as separate images and consist of the average intensity image, the maximum intensity image, the minimum intensity image, the maximum position image, the minimum position image and the gradient image. The overall strategy used by the image processing system is to gather the composite data sets, determine the grain boundaries using the gradient image, classify the different mineral species present using the minimum and maximum intensity images and then perform measurements of grain shape and, where possible, partial crystallographic orientation using the maximum intensity and maximum position images.
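The per-pixel composites lend themselves to a compact array formulation. The sketch below derives the six data sets from a stack of images taken at successive polarizer angles; the gradient computation is a simple stand-in, not necessarily the system's actual operator.

```python
import numpy as np

def composite_data_sets(stack):
    """Given a stack of images captured at successive polarizer angles
    (shape: n_angles x rows x cols), derive per-pixel composites with
    the names used in the abstract. Details are a sketch only."""
    avg_intensity = stack.mean(axis=0)      # average intensity image
    max_intensity = stack.max(axis=0)       # maximum intensity image
    min_intensity = stack.min(axis=0)       # minimum intensity image
    max_position = stack.argmax(axis=0)     # angle index of the maximum
    min_position = stack.argmin(axis=0)     # angle index of the minimum
    # Stand-in gradient image: edge strength of the average image,
    # usable for tracing grain boundaries.
    gy, gx = np.gradient(avg_intensity)
    gradient = np.hypot(gx, gy)
    return (avg_intensity, max_intensity, min_intensity,
            max_position, min_position, gradient)
```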
Abstract:
The main focus of this thesis is to evaluate and compare the Hyperbali learning algorithm (HBL) to other learning algorithms. In this work HBL is compared to feed-forward artificial neural networks using back-propagation learning, K-nearest neighbour and ID3 algorithms. In order to evaluate the similarity of these algorithms, we carried out three experiments using nine benchmark data sets from the UCI machine learning repository. The first experiment compares HBL to the other algorithms when the sample size of the dataset changes. The second experiment compares HBL to the other algorithms when the dimensionality of the data changes. The last experiment compares HBL to the other algorithms according to the level of agreement with the data target values. Our observations in general showed that, considering classification accuracy as a measure, HBL performs as well as most ANN variants. Additionally, we also deduced that HBL's classification accuracy outperforms ID3's and K-nearest neighbour's for the selected data sets.
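For context, here is a sketch of the kind of benchmark comparison described, using scikit-learn stand-ins for the reference algorithms; HBL itself has no standard library implementation, and scikit-learn's tree is CART configured with entropy rather than true ID3.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# One UCI benchmark (iris) as a stand-in for the nine data sets used.
X, y = load_iris(return_X_y=True)

models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "decision tree (ID3-like)": DecisionTreeClassifier(criterion="entropy"),
    "feed-forward ANN": MLPClassifier(max_iter=2000, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```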
Abstract:
Remote sensing techniques involving hyperspectral imagery have applications in a number of sciences that study aspects of the surface of the planet. The analysis of hyperspectral images is complex because of the large amount of information involved and the noise within that data. Investigating images to identify minerals, rocks, vegetation and other materials is an application of hyperspectral remote sensing in the earth sciences. This thesis evaluates the performance of two classification and clustering techniques on hyperspectral images for mineral identification. Support Vector Machines (SVM) and Self-Organizing Maps (SOM) are applied as classification and clustering techniques, respectively. Principal Component Analysis (PCA) is used to prepare the data to be analyzed. The purpose of using PCA is to reduce the amount of data that needs to be processed by identifying the most important components within the data. A well-studied dataset from Cuprite, Nevada and a dataset of more complex data from Baffin Island were used to assess the performance of these techniques. The main goal of this research study is to evaluate the advantage of training a classifier based on a small amount of data compared to an unsupervised method. Determining the effect of feature extraction on the accuracy of the clustering and classification methods is another goal of this research. This thesis concludes that using PCA increases the learning accuracy, especially so in classification. SVM classifies the Cuprite data with high precision, and the SOM challenges the SVM on datasets with a high level of noise (like Baffin Island).
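A minimal sketch of the PCA-plus-SVM pipeline evaluated here, with invented stand-in data in place of the Cuprite and Baffin Island scenes:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical hyperspectral pixels: n_pixels x n_bands reflectance matrix,
# with integer mineral labels for a small training subset (values invented).
rng = np.random.default_rng(0)
X_train = rng.random((100, 50))      # 100 labelled pixels, 50 spectral bands
y_train = rng.integers(0, 3, 100)    # 3 mineral classes
X_test = rng.random((20, 50))

# PCA reduces the spectral dimensionality before the SVM is trained,
# mirroring the feature-extraction step evaluated in the thesis.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print(clf.predict(X_test))
```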
Abstract:
The rapid evolution of exoplanet detection and characterization technologies since the early 1990s suggests that new instruments of the Terrestrial Planet Finder (TPF) type will be able to take the first spectra of Earth-like exoplanets within one or two decades. In this context, studying the spectrum of the only known inhabited planet, the Earth, is essential for designing these instruments and analyzing their results. This research presents spectra of the Earth in the visible (390-900 nm), acquired during 8 nights of observation spread over more than a year. These spectra were obtained by observing the earthshine on the Moon with the 1.6 m telescope of the Observatoire du Mont-Mégantic (OMM). Since the lunar surface diffusely reflects light coming from a portion of the Earth, these spectra are spatially unresolved. The evolution of these spectra as a function of the light reflected at different Earth phases is analogous to that of the spectrum of an exoplanet, whose phase changes with its position around its star. Water, oxygen and ozone in the atmosphere, detected in all our spectra, are biomarkers whose presence suggests the habitability of the planet and/or the presence of biological activity. The Vegetation Red Edge (VRE), another spectral biosignature, caused by photosynthetic organisms at the surface, is characterized by an increase in reflectivity around 700 nm. For the spectra of 5 nights, this increase was evaluated at between -5 and 15% ±~5%, after the contributions of Rayleigh scattering, aerosols and a broad molecular band of ozone were removed. The measured values are consistent with the presence of vegetation in the Earth phase contributing to the spectrum, but span a wider range of variation than those found in the literature (0-10%). This could be explained by choices made during data reduction and the computation of the VRE, or by the presence of other surface or atmospheric components whose spectral contribution around 700 nm would be variable.
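For reference, one common way to quantify the VRE is the relative rise in reflectance from the red band to the near infrared; band placements vary between studies, so the bounds below are indicative only.

```python
def vegetation_red_edge(r_red, r_nir):
    """One common definition of the VRE: the relative increase in
    reflectance from the red band (~600-670 nm) to the near infrared
    (~740-800 nm). Band choices vary between studies."""
    return (r_nir - r_red) / r_red

# Example: a 5% red edge.
print(vegetation_red_edge(0.40, 0.42))  # 0.05
```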
Abstract:
Biometrics, applied in a context of automated data processing and identity recognition, is one of those new technologies whose complexity of use raises new issues and whose long-term effects are incalculable. The scale of the risks raises questions to which it is essential to find answers. The use of this technology is justified on the grounds that it brings more security, but does it really bring more protection in the current context? Moreover, is the Quebec legislative regime sufficient to govern all the risks it generates? Biometric technologies are flexible in the sense that they can capture a multitude of biometric characteristics and offer users several modes of operation. For example, they can be used for identification as well as for authentication. Although the difference between the two concepts can be difficult to grasp, we will see that they have different repercussions on our rights and do not carry the same risks. Moreover, the fundamental right most affected by the use of biometrics is obviously the right to privacy. Still not well understood, the right to privacy is complex and difficult to apply in the context of new technologies. The circulation of biometric data, increased surveillance, function creep and identity theft figure among the known risks of biometrics. In addition, we will see that its use can have consequences for other fundamental rights, depending on how the system is employed. Tests of the necessity of the project and of the proportionality of the infringement of our rights will be the key elements for assessing the compliance of a biometric system. Once the legitimacy of the system is established, the success of the technology will then depend on the security measures put in place to protect the biometric data, their integrity and access to them.
Abstract:
The thesis deals with the preparation and the chemical, optical, thermal and electrical characterization of five compounds, namely metal-free naphthalocyanine, vanadyl naphthalocyanine, zinc naphthalocyanine, europium dinaphthalocyanine and europium diphthalocyanine, in pristine and iodine-doped forms. Two important technological properties of these compounds have been investigated. The electrical properties are important in applications such as sensors and semiconductor lasers. Opto-thermal properties assume significance for optical imaging and data recording. The electrical properties were investigated by dc and ac techniques. This work has revealed some novel information on the conduction mechanism in the five macrocyclic compounds and their iodine-doped forms. Useful data on the thermal diffusivity of the target compounds have also been obtained by optical techniques.
Abstract:
During the 1990s the Wavelet Transform emerged as an important signal processing tool with potential applications in time-frequency analysis and non-stationary signal processing. Wavelets have gained popularity in a broad range of disciplines like signal/image compression, medical diagnostics, boundary value problems, geophysical signal processing, statistical signal processing, pattern recognition, underwater acoustics, etc. In 1993, G. Evangelista introduced the Pitch-Synchronous Wavelet Transform, which is particularly suited for pseudo-periodic signal processing. The work presented in this thesis mainly concentrates on two interrelated topics in signal processing, viz. Wavelet Transform based signal compression and the computation of the Discrete Wavelet Transform. A new compression scheme is described in which the Pitch-Synchronous Wavelet Transform technique is combined with the popular Linear Predictive Coding method for pseudo-periodic signal processing. Subsequently, a novel Parallel Multiple Subsequence structure is presented for the efficient computation of the Wavelet Transform. Case studies are also presented to highlight the potential applications.
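As a reference point for the transform itself, a one-level Haar DWT is easy to state explicitly; this is the textbook transform, not the thesis's Parallel Multiple Subsequence structure.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar Discrete Wavelet Transform: split an
    even-length signal into approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

# A full multi-level transform recurses on the approximation coefficients;
# compression then quantizes or discards small detail coefficients.
a, d = haar_dwt([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
print(a, d)
```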
Abstract:
Handwriting is an acquired tool used for communicating one's observations or feelings. The factors that influence a person's handwriting depend not only on the individual's bio-mechanical constraints, the handwriting education received, the writing instrument, the type of paper and the background, but also on factors like stress, motivation and the purpose of the handwriting. Despite the high variation in a person's handwriting, recent results from different writer identification studies have shown that it possesses sufficient individual traits to be used as an identification method. Handwriting as a behavioral biometric has held the interest of researchers for a long time, but recently it has been enjoying new interest due to an increased need and effort to deal with problems ranging from white-collar crime to terrorist threats. The identification of the writer based on a piece of handwriting is a challenging task for pattern recognition. The main objective of this thesis is to develop a text-independent writer identification system for Malayalam handwriting. The study also extends to developing a framework for online character recognition of Grantha script and Malayalam characters.
Abstract:
The telemetry data processing operations intended for a given mission are pre-defined by the onboard telemetry configuration and mission trajectory, and the overall telemetry methodology has stabilized lately for ISRO vehicles. The given telemetry data processing problem is reduced through hierarchical problem reduction, whereby the sequencing of operations evolves as the control task and the operations on data as the function tasks. The input, output and execution criteria of each function task are captured into tables, which are examined by the control task, which then schedules the function task when its criteria are met.
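A minimal sketch of such a table-driven scheme, with illustrative names; the actual table layout and tasks are mission-specific.

```python
# Function tasks are described by table entries (input, executable,
# execution criterion); the control task scans the table and runs
# whichever tasks' criteria are met. All names here are illustrative.

def decommutate(frame):
    return {"params": frame}

function_table = [
    {"name": "decommutation",
     "input": "raw_frame",
     "run": decommutate,
     "criterion": lambda state: "raw_frame" in state},
]

def control_task(state):
    """Scan the table and execute every function task whose criterion holds."""
    for entry in function_table:
        if entry["criterion"](state):
            state[entry["name"]] = entry["run"](state[entry["input"]])
    return state

print(control_task({"raw_frame": [1, 2, 3]}))
```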
Abstract:
In this paper the effectiveness of a novel method of computer-assisted pedicle screw insertion was studied using a hypothesis testing procedure with a sample size of 48. Pattern recognition based on geometric features of markers on the drill was performed on real-time optical video obtained from orthogonally placed CCD cameras. The study reveals the exactness of the calculated position of the drill using navigation based on a CT image of the vertebra and real-time optical video of the drill. The significance value is 0.424 at the 95% confidence level, which indicates good precision with a standard mean error of only 0.00724. The virtual vision method is less hazardous to both the patient and the surgeon.
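For readers unfamiliar with the quantities reported, the sketch below shows how a standard mean error and a significance value arise from a one-sample test; the error data are invented, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical positioning errors (mm) for n = 48 insertions (values
# invented); the paper's actual measurements are not reproduced here.
rng = np.random.default_rng(1)
errors = rng.normal(loc=0.001, scale=0.05, size=48)

# Standard error of the mean and a one-sample t-test of "mean error = 0".
sem = stats.sem(errors)
t_stat, p_value = stats.ttest_1samp(errors, popmean=0.0)
print(f"standard mean error: {sem:.5f}, significance (p): {p_value:.3f}")
```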
Abstract:
To cut costs, many companies outsource services that are not part of their core competence to external service providers. This process is known as outsourcing. The resulting dependencies on the external service providers are contractually governed by Service Level Agreements (SLAs). The task of Service Level Management (SLM) is to monitor and ensure compliance with the contractually fixed quality-of-service parameters. Automated processing therefore requires a formal specification of SLAs. Since the market has produced a multitude of different SLM tools, proprietary SLA formats and missing specification methods cause problems in practice. The result is tool dependence and limited reusability of already specified SLAs. This thesis develops an approach to platform-independent service level management. The goal is to unify modelling so that different management approaches can be integrated and a separation between the problem and technology domains is achieved. Platform independence also gives the created models a high degree of stability over time. Further goals of the thesis are to guarantee the reusability of modelled SLAs and to provide a process-oriented modelling methodology. Automated establishment of modelled SLAs is of decisive relevance for practical use. To achieve these goals, the principles of the Model Driven Architecture (MDA) are applied to the problem domain of service level management. The central idea of the thesis is the definition of SLA patterns, which are configuration-independent abstractions of service level agreements. These SLA patterns correspond to the Platform Independent Model (PIM) of the MDA. A suitable model transformation generates from an SLA pattern an SLA instance that contains all necessary configuration information and is already in the format of the target platform. An SLA instance thus corresponds to the Platform Specific Model (PSM) of the MDA. The establishment of the SLA instances and the resulting configuration of the management system corresponds to the Platform Specific Code (PSC) of the MDA. After this step, the management system is able to monitor the quality-of-service parameters agreed in the SLA autonomously. As part of the thesis, a UML extension was defined that enables SLA patterns to be modelled with a UML tool. The modelling can be done purely graphically or with the inclusion of the Object Constraint Language (OCL). For the practical realization of the approach, a management architecture was developed and implemented as a prototype. The overall approach was evaluated by means of a case study.
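A minimal sketch of the pattern-to-instance transformation, with illustrative field names; the thesis realizes this with UML/OCL models and a prototype management architecture rather than Python dictionaries.

```python
# A configuration-independent SLA pattern (the PIM) is combined with
# deployment parameters to generate a platform-specific SLA instance
# (the PSM). Field names are illustrative assumptions.

sla_pattern = {
    "service": None,            # filled in at transformation time
    "metric": "availability",
    "threshold": None,          # e.g. "99.9%"
    "penalty": "credit",
}

def transform(pattern, config):
    """Model transformation PIM -> PSM: merge the pattern with concrete
    configuration values and emit it in a target format."""
    instance = {**pattern, **config}
    # Rendered here as a simple key=value document; a real target
    # platform would dictate its own SLA format.
    return "\n".join(f"{k}={v}" for k, v in instance.items())

print(transform(sla_pattern, {"service": "mail", "threshold": "99.9%"}))
```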
Abstract:
Analysis by reduction is a linguistically motivated method for checking the correctness of a sentence. It can be modelled by restarting automata. In this paper we propose a method for learning restarting automata which are strictly locally testable (SLT-R-automata). The method is based on the concept of identification in the limit from positive examples only. We also characterize the class of languages accepted by SLT-R-automata with respect to the Chomsky hierarchy.
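For intuition, the classic inference of strictly locally testable string languages from positive data can be sketched in a few lines; SLT-R-automata extend this idea to analysis by reduction, which the sketch below does not model.

```python
def learn_k_testable(samples, k=2):
    """Infer a k-testable-in-the-strict-sense language from positive
    samples only: record the allowed length-k factors of each sample
    with its ends marked (a padded-ends variant keeps this simple)."""
    factors = set()
    for w in samples:
        padded = "<" * (k - 1) + w + ">" * (k - 1)   # mark the ends
        factors.update(padded[i:i + k] for i in range(len(padded) - k + 1))
    return factors

def accepts(factors, w, k=2):
    """A string is accepted iff every factor of its padded form was seen."""
    padded = "<" * (k - 1) + w + ">" * (k - 1)
    return all(padded[i:i + k] in factors
               for i in range(len(padded) - k + 1))

model = learn_k_testable(["ab", "abab", "ababab"], k=2)
print(accepts(model, "abababab"))  # True: only observed 2-factors occur
print(accepts(model, "ba"))        # False: "<b" was never observed
```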