7 results for Speech Recognition System using MFCC

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

The identification of people by measuring some traits of individual anatomy or physiology has led to a specific research area called biometric recognition. This thesis focuses on improving fingerprint recognition systems with respect to three important problems: fingerprint enhancement, fingerprint orientation extraction and automatic evaluation of fingerprint algorithms. An effective extraction of salient fingerprint features depends on the quality of the input fingerprint: if the fingerprint is very noisy, no reliable set of features can be detected. A new fingerprint enhancement method, which is both iterative and contextual, is proposed. This approach detects high-quality regions in the fingerprint, selectively applies contextual filtering and iteratively expands the enhancement toward low-quality regions. A precise estimation of the orientation field would greatly simplify the estimation of other fingerprint features (singular points, minutiae) and improve the performance of a fingerprint recognition system. Fingerprint orientation extraction is improved along two directions. First, after the introduction of a new taxonomy of fingerprint orientation extraction methods, several variants of baseline methods are implemented and, by pointing out the role of pre- and post-processing, we show how to improve the extraction. Second, a new hybrid orientation extraction method, which follows an adaptive scheme, significantly improves orientation extraction in noisy fingerprints. Scientific papers typically propose recognition systems that integrate many modules, and therefore an automatic evaluation of fingerprint algorithms is needed to isolate the contributions that determine actual progress in the state of the art.
The lack of a publicly available framework to compare fingerprint orientation extraction algorithms motivates the introduction of a new benchmark area called FOE (including fingerprints and manually marked orientation ground truth), along with fingerprint matching benchmarks, in the FVC-onGoing framework. The success of this framework is shown by the relevant statistics: more than 1450 algorithms submitted and two international competitions.
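As a concrete illustration of the baseline orientation extraction methods mentioned above, the classic averaged-squared-gradient estimator can be sketched as follows. This is a minimal NumPy sketch: the block size and the gradient operator are illustrative choices, not the thesis' actual implementation.

```python
import numpy as np

def block_orientation(img, block=16):
    """Estimate a block-wise fingerprint orientation field with the
    classic averaged-squared-gradient method (one of the baseline
    extraction approaches a taxonomy like the one above classifies)."""
    gy, gx = np.gradient(img.astype(float))
    # Doubled-angle representation: opposite gradient vectors reinforce
    # instead of cancelling when averaged within a block.
    gsx, gsy = gx**2 - gy**2, 2 * gx * gy
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            sl = np.s_[i*block:(i+1)*block, j*block:(j+1)*block]
            # Half the averaged doubled angle, rotated 90 degrees so the
            # result follows the ridges rather than the gradient.
            theta[i, j] = 0.5 * np.arctan2(gsy[sl].sum(), gsx[sl].sum()) + np.pi / 2
    return theta  # ridge orientation per block, in radians
```

On a synthetic image of vertical ridges, every block comes out close to an orientation of pi/2, as expected.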

Relevance:

100.00%

Publisher:

Abstract:

The goal of the present research is to define a Semantic Web framework for precedent modelling, using knowledge extracted from text, metadata and rules, while maintaining a strong text-to-knowledge morphism between legal text and legal concepts, in order to fill the gap between a legal document and its semantics. The framework is composed of four different models that make use of standard languages from the Semantic Web stack of technologies: a document metadata structure, modelling the main parts of a judgement and creating a bridge between a text and its semantic annotations of legal concepts; a legal core ontology, modelling abstract legal concepts and institutions contained in a rule of law; a legal domain ontology, modelling the main legal concepts in a specific domain covered by case-law; and an argumentation system, modelling the structure of argumentation. The input to the framework includes metadata associated with judicial concepts and an ontology library representing the structure of case-law. The research relies on the previous efforts of the community in the field of legal knowledge representation and rule interchange for applications in the legal domain, applying the theory to a set of real legal documents and stressing the OWL axiom definitions as much as possible, so that they provide a semantically powerful representation of the legal document and a solid ground for an argumentation system using a defeasible subset of predicate logic. It appears that some new features of OWL 2 unlock useful reasoning features for legal knowledge, especially when combined with defeasible rules and argumentation schemes. The main task is thus to formalize the legal concepts and argumentation patterns contained in a judgement, with the following requirement: to check, validate and reuse the discourse of a judge - and the argumentation he produces - as expressed by the judicial text.

Relevance:

100.00%

Publisher:

Abstract:

In the past decade, the advent of efficient genome sequencing tools and high-throughput experimental biotechnology has led to enormous progress in the life sciences. Among the most important innovations is microarray technology, which allows the expression of thousands of genes to be quantified simultaneously by measuring the hybridization from a tissue of interest to probes on a small glass or plastic slide. The characteristics of these data include a fair amount of random noise, a predictor dimension in the thousands, and a sample size in the dozens. One of the most exciting areas to which microarray technology has been applied is the challenge of deciphering complex diseases such as cancer. In these studies, samples are taken from two or more groups of individuals with heterogeneous phenotypes, pathologies, or clinical outcomes. These samples are hybridized to microarrays in an effort to find a small number of genes strongly correlated with the groups of individuals. Even though methods to analyse the data are today well developed and close to reaching a standard organization (through the efforts of international projects such as the Microarray Gene Expression Data (MGED) Society [1]), it is not infrequent to stumble on a clinician's question for which no compelling statistical method exists. The contribution of this dissertation to deciphering disease is the development of new approaches aimed at handling open problems posed by clinicians in specific experimental designs. Chapter 1, starting from a necessary biological introduction, reviews microarray technologies and all the important steps of an experiment, from the production of the array through quality controls to the preprocessing steps used in the data analysis in the rest of the dissertation.
Chapter 2 provides a critical review of standard analysis methods, stressing the problems that motivate the new approaches. Chapter 3 introduces a method to address the issue of unbalanced design in microarray experiments. In microarray experiments, experimental design is a crucial starting point for obtaining reasonable results. In a two-class problem, an equal or similar number of samples should be collected for the two classes. However, in some cases, e.g. rare pathologies, the approach to be taken is less evident. We propose to address this issue by applying a modified version of SAM [2]. MultiSAM consists of a reiterated application of a SAM analysis, comparing the less populated class (LPC) with 1,000 random samplings of the same size from the more populated class (MPC). A list of the differentially expressed genes is generated for each SAM application. After 1,000 reiterations, each probe is given a "score" ranging from 0 to 1,000 based on its recurrence in the 1,000 lists as differentially expressed. The performance of MultiSAM was compared to that of SAM and LIMMA [3] over two simulated data sets drawn from beta and exponential distributions. The results of all three algorithms over low-noise data sets seem acceptable. However, on a real unbalanced two-channel data set regarding Chronic Lymphocytic Leukemia, LIMMA finds no significant probe, SAM finds 23 significantly changed probes but cannot separate the two classes, while MultiSAM finds 122 probes with score >300 and separates the data into two clusters by hierarchical clustering. We also report extra-assay validation in terms of differentially expressed genes. Although standard algorithms perform well over low-noise simulated data sets, MultiSAM seems to be the only one able to reveal subtle differences in gene expression profiles on real unbalanced data. Chapter 4 describes a method to address similarity evaluation in a three-class problem by means of the Relevance Vector Machine [4].
In fact, looking at microarray data in a prognostic and diagnostic clinical framework, not only differences can play a crucial role: in some cases similarities can give useful, and sometimes even more important, information. The goal, given three classes, could be to establish, with a certain level of confidence, whether the third one is more similar to the first or to the second. In this work we show that the Relevance Vector Machine (RVM) [2] could be a possible solution to the limitations of standard supervised classification. In fact, RVM offers many advantages compared, for example, with its well-known precursor, the Support Vector Machine (SVM) [3]. Among these advantages, the estimate of the posterior probability of class membership represents a key feature for addressing the similarity issue. This is a highly important, but often overlooked, option in any practical pattern recognition system. We focused on a three-class tumor-grade problem, with 67 samples of grade 1 (G1), 54 samples of grade 3 (G3) and 100 samples of grade 2 (G2). The goal is to find a model able to separate G1 from G3, and then evaluate the third class G2 as a test set to obtain, for each sample of G2, the probability of belonging to class G1 or to class G3. The analysis showed that breast cancer samples of grade 2 have a molecular profile more similar to breast cancer samples of grade 1. This result had been conjectured in the literature, but no measure of significance had been given before.
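The MultiSAM procedure described above can be sketched as follows. This is an approximation for illustration, not the thesis' implementation: a per-probe Welch t-test stands in for the actual SAM statistic, and the significance threshold is a plain p-value cutoff.

```python
import numpy as np
from scipy.stats import ttest_ind

def multisam_scores(lpc, mpc, n_iter=1000, alpha=0.05, seed=0):
    """Sketch of the MultiSAM idea: compare the less populated class (LPC)
    against repeated equal-size random subsamples of the more populated
    class (MPC), and score each probe by how often it comes out
    differentially expressed across the iterations.

    lpc: (n_probes, n_lpc) expression matrix; mpc: (n_probes, n_mpc)."""
    rng = np.random.default_rng(seed)
    n_probes, n_lpc = lpc.shape
    scores = np.zeros(n_probes, dtype=int)
    for _ in range(n_iter):
        cols = rng.choice(mpc.shape[1], size=n_lpc, replace=False)
        # Welch t-test per probe, standing in for the SAM statistic.
        _, p = ttest_ind(lpc, mpc[:, cols], axis=1, equal_var=False)
        scores += (p < alpha).astype(int)  # recurrence in this iteration's list
    return scores  # per-probe score in 0..n_iter
```

Probes with a strong true expression shift accumulate scores near `n_iter`, while null probes stay near `alpha * n_iter`, which is the separation the score threshold (>300 in the abstract) exploits.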

Relevance:

100.00%

Publisher:

Abstract:

Ambient Intelligence (AmI) envisions a world where smart, electronic environments are aware of and responsive to their context. People moving through these settings engage many computational devices and systems simultaneously, even if they are not aware of their presence. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. The dependence on a large number of fixed and mobile sensors embedded in the environment makes Wireless Sensor Networks (WSNs) one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes: simple devices that typically embed a low-power computational unit (microcontroller, FPGA, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. In order to handle the large amount of data generated by a WSN, several multi-sensor data fusion techniques have been developed. The aim of multi-sensor data fusion is to combine data to achieve better accuracy and inferences than could be achieved by the use of a single sensor alone. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: multimodal surveillance and activity recognition. Novel techniques to handle data from a network of low-cost, low-power Pyroelectric InfraRed (PIR) sensors are presented. Such techniques allow the detection of the number of people moving in the environment, their direction of movement and their position. We discuss how a mesh of PIR sensors can be integrated with a video surveillance system to increase its performance in people tracking.
Furthermore, we embed a PIR sensor within the design of a Wireless Video Sensor Node (WVSN) to extend its lifetime. Activity recognition is a fundamental block in natural interfaces. A challenging objective is to design an activity recognition system able to exploit a redundant but unreliable WSN. We present our work in building a novel activity recognition architecture for such a dynamic system. The architecture has a hierarchical structure in which simple nodes perform gesture classification and a high-level meta-classifier fuses a changing number of classifier outputs. We demonstrate the benefit of such an architecture in terms of increased recognition performance and robustness to faults and noise. Furthermore, we show how network lifetime can be extended by exploiting a performance-power trade-off. Smart objects can enhance the user experience within smart environments. We present our work in extending the capabilities of the Smart Micrel Cube (SMCube), a smart object used as a tangible interface within a tangible computing framework, through the development of a gesture recognition algorithm suitable for this device of limited computational power. Finally, the development of activity recognition techniques can greatly benefit from the availability of shared datasets. We report our experience in building a dataset for activity recognition. The dataset is freely available to the scientific community for research purposes and can be used as a test bench for developing, testing and comparing different activity recognition techniques.
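The fusion step of the hierarchical architecture described above can be sketched as follows. Averaging class posteriors over whichever nodes actually respond is a simple stand-in for the thesis' meta-classifier, used here only to illustrate how a changing number of classifier outputs can be fused while degrading gracefully as nodes fail.

```python
import numpy as np

def fuse_node_outputs(posteriors):
    """Fuse gesture classifications from a variable set of sensor nodes.

    posteriors: list with one entry per node; each entry is either a
    probability vector over gesture classes, or None when the node is
    down or its packet was dropped.  Averaging over the available subset
    means the meta-classifier still produces a decision with fewer nodes.
    """
    available = [p for p in posteriors if p is not None]
    if not available:
        raise ValueError("no node reported an output")
    fused = np.mean(available, axis=0)
    return int(np.argmax(fused)), fused
```

A trained meta-classifier (rather than a plain average) can additionally learn to down-weight chronically noisy nodes, which is where the recognition and robustness gains reported above come from.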

Relevance:

100.00%

Publisher:

Abstract:

The above factors emphasize the scope of this thesis for further investigations on zirconia, the improvement of all-ceramic zirconia restorations and, especially, the interaction between zirconia and the veneering ceramic and its influence on the performance of the whole restoration. The introduction, Chapter 1, gives a literature overview of zirconia ceramics. In Chapter 2, the objective of the study was to evaluate the effect of abrading before and after sintering, using alumina-based abrasives, on the surface of yttria-stabilized tetragonal zirconia polycrystals. Particular attention was paid to the amount of surface stress-assisted phase transformation (tetragonal→monoclinic) and the presence of microcracks. Chapter 3 is based on the idea that the conventional sintering techniques for zirconia-based materials, which are commonly used in dental reconstruction, may not provide uniform heating, with the consequent generation of microstructural flaws in the final component. As a consequence, a sintering system using microwave heating may represent a viable alternative. The purpose of the study was to compare the dimensional variations and the physical and microstructural characteristics of a commercial zirconia (Y-TZP), used as a dental restoration material, sintered in conventional and microwave furnaces. Chapter 4 describes the effect of sandblasting before and after sintering on the surface roughness of zirconia and on the microtensile bond strength of a pressable veneering ceramic to zirconia.
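Chapter 2 quantifies the amount of tetragonal→monoclinic transformation. The abstract does not state which formula is used; the standard way to estimate the monoclinic fraction from XRD peak intensities is the Garvie-Nicholson relation, sketched here under that assumption.

```python
def monoclinic_fraction(i_m_m111, i_m_111, i_t_101):
    """Garvie-Nicholson estimate of the monoclinic phase fraction in
    zirconia from X-ray diffraction peak intensities.

    i_m_m111, i_m_111: integrated intensities of the monoclinic (-111)
    and (111) peaks; i_t_101: intensity of the tetragonal (101) peak.
    Returns a fraction in [0, 1]; higher values mean more of the
    stress-assisted tetragonal->monoclinic transformation occurred."""
    m = i_m_m111 + i_m_111
    return m / (m + i_t_101)
```

Comparing this fraction for surfaces abraded before versus after sintering gives a single number per treatment with which the transformation amounts can be ranked.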

Relevance:

100.00%

Publisher:

Abstract:

In recent decades, medical malpractice has been framed as one of the most critical issues for healthcare providers and health policy, holding a central role in both the policy agenda and the public debate. The Law and Economics literature has devoted much attention to medical malpractice and to the investigation of the impact of malpractice reforms. Nonetheless, some reforms, such as schedules, have been much less studied empirically, and their effects remain highly debated. The present work seeks to contribute to the study of medical malpractice and of schedules of noneconomic damages in a civil-law country with a public national health system, using Italy as a case study. Besides considering schedules and exploiting a quasi-experimental setting, the novelty of our contribution consists in the inclusion of the performance of the judiciary (measured as courts' civil backlog) in the empirical analysis. The empirical analysis is twofold. First, it investigates how limiting compensation for pain and suffering through schedules impacts the malpractice insurance market, in terms of the presence of private insurers and of the premiums applied. Second, it examines whether, and to what extent, healthcare providers react to the implementation of this policy in terms of both the levels and the composition of the medical treatments offered. Our findings show that the introduction of schedules increases the presence of insurers only in inefficient courts, while it does not produce significant effects on paid premiums. Judicial inefficiency is attractive to insurers for average values of market penetration of schedules, with an increasingly positive impact of inefficiency as the territorial coverage of schedules increases. Moreover, the implementation of schedules tends to reduce the use of defensive practices on the part of clinicians, but the magnitude of this impact is ultimately determined by the actual degree of backlog of the court implementing schedules.
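A quasi-experimental setting of this kind (some courts adopt schedules, others do not) is typically analyzed with a difference-in-differences design. The following is a deliberately minimal sketch of that estimator; the two-period structure and the absence of controls, court fixed effects and backlog measures are simplifying assumptions, not the thesis' actual specification.

```python
import numpy as np

def did_estimate(y, treated, post):
    """Minimal difference-in-differences estimator via OLS.

    y: outcome (e.g. a measure of defensive practices or premiums);
    treated: 1 for units in courts that adopt schedules, else 0;
    post: 1 for observations after adoption, else 0.
    The coefficient on treated*post is the causal effect of interest
    under the parallel-trends assumption."""
    X = np.column_stack([np.ones_like(y), treated, post, treated * post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[3]  # the interaction (DiD) coefficient
```

The thesis' richer specification would add covariates (including the civil-backlog measure) as extra columns of the design matrix.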

Relevance:

100.00%

Publisher:

Abstract:

Gas separation membranes of high CO2 permeability and selectivity have great potential in both natural gas sweetening and carbon dioxide capture. Many modified PIM membranes exhibit permselectivity above the Robeson upper bound. The main problem to be solved before these polymers can be commercialized is their aging over time. In highly glassy polymeric membranes such as PIM-1 and its modifications, solubility selectivity contributes more to permselectivity than diffusivity selectivity. In this thesis, therefore, the pure- and mixed-gas sorption behavior of carbon dioxide and methane in three PIM-based membranes (PIM-1, TZPIM-1 and AO-PIM-1) and in a Polynonene membrane is rigorously studied. Sorption experiments were performed at different temperatures and molar fractions. The sorption isotherms obtained show that solubility decreases as temperature increases, for both gases in all polymers. In the mixed-gas experiments, solubility also decreases due to the presence of the other gas in the system, owing to the competitive sorption effect. The variation of solubility is more visible for methane sorption than for carbon dioxide, which makes the mixed-gas solubility selectivity higher than the pure-gas solubility selectivity. Modeling of the system using the NELF and dual-mode sorption models estimates the experimental results correctly. Sorption of gases in heat-treated and untreated membranes shows that the sorption isotherms do not vary with heat treatment, for either carbon dioxide or methane. However, the diffusivity coefficient and the permeability of the pure gases decrease due to heat treatment, and both decrease further with increasing heat-treatment temperature. Diffusivity coefficients calculated from transient sorption experiments and from steady-state permeability experiments are also compared in this thesis.
The results reveal that the transient diffusivity coefficient is higher than the steady-state diffusivity coefficient.
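Since the dual-mode sorption model is named above as one of the two models used to describe these isotherms, a minimal fitting sketch may help; the initial guesses and the units implied by them are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def dual_mode(p, k_d, c_h, b):
    """Dual-mode sorption model for glassy polymers: Henry's-law
    dissolution (k_d * p) plus Langmuir hole-filling, whose saturation
    gives the downward-concave isotherm shape typical of these data."""
    return k_d * p + c_h * b * p / (1.0 + b * p)

def fit_isotherm(pressures, concentrations):
    """Fit a measured sorption isotherm to the dual-mode model.

    Returns (k_d, C'_H, b); the starting guesses below are illustrative
    and would normally be chosen from the data's scale."""
    popt, _ = curve_fit(dual_mode, pressures, concentrations,
                        p0=[1.0, 10.0, 0.5], maxfev=10000)
    return popt
```

Fitting the pure-gas and mixed-gas isotherms separately and comparing the parameters is one way to expose the competitive-sorption reduction in solubility discussed above.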