970 results for QUT Speaker Identity Verification System
Abstract:
Case study
Abstract:
Motivation for the speaker recognition work is presented in the first part of the thesis, together with an exhaustive survey of past work in the field. A low-cost system avoiding complex computation was chosen for implementation. Towards this end, a PC-based system was designed and developed: a front-end 12-bit analog-to-digital converter (ADC) was built and interfaced to a PC, and software was developed to control the ADC and to perform various analytical functions, including feature-vector evaluation. It is shown that a fixed set of phrases incorporating evenly balanced phonemes is well suited to the speaker recognition task at hand, and such a set of phrases was chosen for recognition. Two new methods are adopted for feature evaluation: new measurements involving a symmetry-check method for pitch-period detection and ACE are used as features. Arguments are provided to show the need for a new model of speech production. Starting from heuristics, a knowledge-based (KB) speech production model is presented, in which a KB provides impulses to a voice-producing mechanism while constant correction is applied via a feedback path; it is this correction that differs from speaker to speaker. Methods of defining measurable parameters for use as features are described. Two speaker recognition algorithms are developed and implemented. The first is based on the postulated model: the entropy of the utterance of a phoneme is evaluated, and the transitions of voiced regions are used as speaker-dependent features. The second uses features found in other works, but evaluates them differently; a knock-out scheme provides the weighting values for feature selection. Implementation results show an average recognition rate of 80%. It is also shown that performance deteriorates when there are long gaps between sessions, and that this deterioration is speaker dependent.
Cross-recognition percentages are also presented; these rise to 30% in the worst case, while the best case is 0%. Suggestions for further work are given in the concluding chapter.
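The thesis's symmetry-check pitch detector is not reproduced here; as a rough illustration of pitch-period measurement, the sketch below uses a standard autocorrelation method instead (Python with NumPy; the function name and frame parameters are invented for this example):

```python
import numpy as np

def pitch_autocorr(frame, fs, fmin=50.0, fmax=400.0):
    """Estimate the pitch (Hz) of a voiced frame via autocorrelation."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # lags >= 0
    lo = int(fs / fmax)                 # shortest lag considered
    hi = int(fs / fmin)                 # longest lag considered
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag

fs = 8000                               # sampling rate of the ADC front end
t = np.arange(int(0.04 * fs)) / fs      # one 40 ms frame
frame = np.sin(2 * np.pi * 200 * t)     # synthetic 200 Hz "voiced" frame
print(round(pitch_autocorr(frame, fs)))  # → 200
```

In a real front end the frame would come from the digitised speech samples rather than a synthetic sinusoid, and a voiced/unvoiced decision would precede the pitch estimate.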
Abstract:
Any automatically measurable, robust and distinctive physical characteristic or personal trait that can be used to identify an individual or verify the claimed identity of an individual, referred to as a biometric, has gained significant interest in the wake of heightened concerns about security and rapid advancements in networking, communication and mobility. Multimodal biometrics is expected to be ultra-secure and reliable, owing to the presence of multiple, independent verification clues. In this study, a multimodal biometric system utilising audio and facial signatures has been implemented and an error analysis has been carried out. A total of 1000 face images and 250 sound tracks from 50 users are used for training the proposed system. To account for attempts by unregistered users, data from 25 new users are also tested. Short-term spectral features were extracted from the sound data, and vector quantization was performed using the K-means algorithm. Face images are identified with the eigenface approach using Principal Component Analysis. The success rate of the multimodal system using speech and face is higher than that of the individual unimodal recognition systems.
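As a sketch of how K-means vector quantization can model a speaker's spectral features, the fragment below (NumPy only; the data, dimensions and codebook size are invented, as the abstract does not specify them) trains a codebook on one synthetic "speaker" and compares average quantization distortion:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Lloyd's K-means: learn a codebook of k centroids from feature vectors X."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def vq_distortion(X, codebook):
    """Average distance from each vector to its nearest codeword."""
    return np.sqrt(((X[:, None] - codebook) ** 2).sum(-1)).min(axis=1).mean()

# Two synthetic "speakers" with well-separated feature clusters.
rng = np.random.default_rng(1)
spk_a = rng.normal(0.0, 0.3, (200, 12))   # pretend 12-dim spectral features
spk_b = rng.normal(3.0, 0.3, (200, 12))
book_a = kmeans(spk_a, k=8)
# An utterance from speaker A matches A's codebook better than B's data does.
print(vq_distortion(spk_a, book_a) < vq_distortion(spk_b, book_a))  # True
```

Identification would train one codebook per enrolled speaker and accept the speaker whose codebook yields the lowest distortion on the test utterance.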
Abstract:
Malayalam is one of the 22 scheduled languages of India, with more than 130 million speakers. This paper reports on the development of a speaker-independent continuous-speech transcription system for Malayalam. The system employs Hidden Markov Models (HMMs) for acoustic modeling and Mel Frequency Cepstral Coefficients (MFCCs) for feature extraction. It is trained with 21 male and female speakers aged 20 to 40 years. The system obtained a word recognition accuracy of 87.4% and a sentence recognition accuracy of 84% when tested with a set of continuous speech data.
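MFCC extraction follows a standard pipeline: power spectrum, triangular mel filterbank, logarithm, then a DCT. The sketch below is a minimal NumPy rendering of that pipeline for a single pre-windowed frame; the filter counts and frame size are arbitrary choices for illustration, not taken from the paper:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(frame, fs, n_filters=26, n_ceps=13):
    """MFCCs for one frame: |FFT|^2 -> mel filterbank -> log -> DCT-II."""
    n_fft = len(frame)
    spec = np.abs(np.fft.rfft(frame)) ** 2                  # power spectrum
    # Triangular mel filterbank between 0 Hz and the Nyquist frequency.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_filters, len(spec)))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge
    log_energy = np.log(fbank @ spec + 1e-10)
    # DCT-II of the log filterbank energies yields the cepstral coefficients.
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_filters))
    return dct @ log_energy

fs = 16000
t = np.arange(400) / fs                       # 25 ms frame
frame = np.hamming(400) * np.sin(2 * np.pi * 440 * t)
print(mfcc_frame(frame, fs).shape)            # (13,)
```

A full front end would also apply pre-emphasis, frame the signal with overlap, and append delta coefficients before HMM training.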
Abstract:
Connected-digit speech recognition is important in many applications, such as automated banking systems, catalogue dialing and automatic data entry. This paper presents an optimized speaker-independent connected-digit recognizer for Malayalam. The system employs Perceptual Linear Predictive (PLP) cepstral coefficients for speech parameterization and continuous-density Hidden Markov Models (HMMs) in the recognition process; the Viterbi algorithm is used for decoding. The training database contains utterances from 21 speakers aged 20 to 40 years, recorded in a normal office environment, with each speaker asked to read 20 sets of continuous digits. The system obtained an accuracy of 99.5% on unseen data.
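The Viterbi decoder mentioned above finds the single most likely HMM state sequence for an observation sequence. A minimal log-domain sketch (NumPy; the toy two-state model is purely illustrative, not the paper's digit models):

```python
import numpy as np

def viterbi(obs, log_A, log_B, log_pi):
    """Most likely HMM state sequence for a discrete observation sequence."""
    T, N = len(obs), len(log_pi)
    delta = log_pi + log_B[:, obs[0]]        # best log-prob ending in each state
    psi = np.zeros((T, N), dtype=int)        # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_A      # scores[i, j]: transition i -> j
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):            # follow backpointers
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Toy 2-state model: state 0 tends to emit symbol 0, state 1 emits symbol 1.
A = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
B = np.log(np.array([[0.8, 0.2], [0.2, 0.8]]))
pi = np.log(np.array([0.5, 0.5]))
print(viterbi([0, 0, 1, 1, 1], A, B, pi))    # [0, 0, 1, 1, 1]
```

A continuous-density recognizer replaces the discrete emission table `log_B` with per-state Gaussian mixture log-likelihoods of the PLP feature vectors.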
Abstract:
Biometrics is an efficient technology with great potential for security-system development in official and commercial applications, and has recently become a significant part of any efficient person-authentication solution. The advantage of using biometric traits is that they cannot be stolen, shared or even forgotten. The thesis addresses one of the emerging topics in authentication, namely the implementation of an improved biometric authentication system using multimodal cue integration, since operator-assisted identification turns out to be tedious, laborious and time consuming. In order to derive the best performance from the authentication system, an appropriate feature-selection criterion has been evolved; it has been observed that selecting too many features leads to deterioration in authentication performance and efficiency. In the work reported in this thesis, various judiciously chosen components of the biometric traits and their feature vectors are used to realize the newly proposed system. The feature vectors generated from the noisy biometric traits are compared with the feature vectors available in the knowledge base, and the best-matching pattern is identified for the purpose of user authentication. In an attempt to improve the success rate of the feature-vector-based authentication system, the proposed system has been augmented with a user-dependent weighted fusion technique.
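The user-dependent weighted fusion mentioned at the end can be sketched as score-level fusion, in which each modality's match score is combined with weights chosen per user; all numbers and names below are hypothetical:

```python
import numpy as np

def fuse_scores(scores, weights):
    """User-dependent weighted score-level fusion of per-modality match scores."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalise so the weights sum to 1
    return float(np.dot(w, scores))

# Hypothetical match scores in [0, 1] for two modalities (face, voice).
face_score, voice_score = 0.9, 0.4
# A user whose voice samples are noisy gets a lower voice weight.
fused = fuse_scores([face_score, voice_score], weights=[0.7, 0.3])
print(fused)   # ~0.75
```

The fused score is then thresholded to accept or reject the claimed identity; tuning the per-user weights is what makes the fusion "user dependent".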
Abstract:
Thioredoxins are small regulatory proteins with a mass of approximately 12 kDa and a characteristic conserved active center, represented by the pentapeptide trp-cys-gly-pro-cys. It is not yet possible to present a complete list of thioredoxin interaction partners, because there is no predictable sequence in the target enzymes with which thioredoxins interact. To learn more about the functions and possible interaction partners of the three thioredoxins of the social soil amoeba Dictyostelium discoideum (DdTrx1 - 3), we chose two different strategies. In the first, the thioredoxin levels in the cell were to be changed by different mutants; however, neither the antisense technique nor the creation of knock-out mutants proved an appropriate strategy in this case. Only a thioredoxin-overexpressing mutant produced a developmental phenotype that allows some conclusions about possible functions of thioredoxin in Dictyostelium discoideum. The second strategy was the two-hybrid system, with which thioredoxin interaction partners can be identified systematically. After a screening with a cDNA library from Dictyostelium, 13 potential interaction partners could be detected, among them a ribonucleotide reductase, TRFA, two different cytochrome c oxidase subunits, filopodin, three ribosomal proteins, elongation factor 1a and the alcohol dehydrogenase from yeast. The interaction between thioredoxin and these two-hybrid clones was verified indirectly with a double mutant of thioredoxin 1 in which the cysteines in the active center were replaced by redox-inactive serines. Further examination of two chosen candidates showed that the alcohol dehydrogenase from yeast is a thioredoxin-modulated enzyme and that there is an interaction between elongation factor 1a and thioredoxin 1 of Dictyostelium discoideum.
Abstract:
I have designed and implemented a system for the multilevel verification of synchronous MOS VLSI circuits. The system, called Silica Pithecus, accepts the schematic of an MOS circuit and a specification of the circuit's intended digital behavior, and determines whether the circuit meets its specification. If the circuit fails to meet its specification, Silica Pithecus returns to the designer the reason for the failure. Unlike earlier verifiers, which modelled primitives (e.g., transistors) as unidirectional digital devices, Silica Pithecus models primitives more realistically: transistors are modelled as bidirectional devices of varying resistance, and nodes are modelled as capacitors. Silica Pithecus operates hierarchically, interactively, and incrementally. Major contributions of this research include a formal understanding of the relationship between different behavioral descriptions (e.g., signal, boolean, and arithmetic descriptions) of the same device, and a formalization of the relationship between the structure, behavior, and context of a device. Given these formal structures, my methods find sufficient conditions on the inputs of circuits which guarantee the correct operation of the circuit in the desired descriptive domain. These methods are algorithmic and complete, and they handle complex phenomena such as races and charge sharing. Informal notions such as races and hazards are shown to be derivable from the correctness conditions used by my methods.
Abstract:
Ontic is an interactive system for developing and verifying mathematics. Ontic's verification mechanism is capable of automatically finding and applying information from a library containing hundreds of mathematical facts. Starting with only the axioms of Zermelo-Fraenkel set theory, the Ontic system has been used to build a database of definitions and lemmas leading to a proof of the Stone representation theorem for Boolean lattices. The Ontic system has been used to explore issues in knowledge representation, automated deduction, and the automatic use of large databases.
Abstract:
Wednesday 12th March 2014
Speaker(s): Dr Tim Chown
Time: 12/03/2014 11:00-11:50
Location: B32/3077
File size: 642 Mb
The WAIS seminar series is designed to be a blend of classic seminars, research discussions, debates and tutorials. The Domain Name System (DNS) is a critical part of the Internet infrastructure. In this talk we begin by explaining the basic model of operation of the DNS, including how domain names are delegated and how a DNS resolver performs a DNS lookup. We then take a tour of DNS-related topics, including caching, poisoning, governance, the increasing misuse of the DNS in DDoS attacks, and the expansion of the DNS namespace to new top-level domains and internationalised domain names. We also present the latest work in the IETF on DNS privacy. The talk will be pitched such that no detailed technical knowledge is required. We hope that attendees will gain some familiarity with how the DNS works, some key issues surrounding DNS operation, and how the DNS might touch on various areas of research within WAIS.
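As a small illustration of the lookup model the talk describes, the sketch below builds a DNS query in RFC 1035 wire format using only the Python standard library; actually sending it over UDP to a resolver on port 53 is left out:

```python
import random
import struct

def build_query(name, qtype=1):
    """Build a minimal DNS query packet (RFC 1035 wire format); QTYPE 1 = A."""
    tid = random.randrange(0x10000)                # random transaction ID
    # Header: ID, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0.
    header = struct.pack(">HHHHHH", tid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label preceded by its length, terminated by a zero byte.
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)   # QCLASS 1 = IN

pkt = build_query("example.com")
# 12-byte header + 13-byte QNAME ("\x07example\x03com\x00") + 4 bytes QTYPE/QCLASS
print(len(pkt))   # 29
```

A stub resolver sends this datagram to its configured recursive resolver and parses the answer records from the response; the caching, delegation and DNSSEC machinery discussed in the talk all sit behind that single exchange.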
Abstract:
This thesis records the experience gained in developing an intelligent supervisory system for improving the management of wastewater treatment plants, implementing it in a real plant (EDAR Granollers) and evaluating its day-to-day operation in typical plant situations. The supervisory system combines and integrates classical control tools for treatment plants (an automatic controller for the dissolved-oxygen level in the biological reactor, the use of descriptive process models, ...) with tools from the field of artificial intelligence (knowledge-based systems, specifically expert systems and case-based systems, and neural networks). The document is structured in 9 chapters. A first introductory part reviews the current state of wastewater treatment plant control and explains why the management of these processes is so complex (chapter 1). This introductory chapter, together with chapter 2, which presents the background to this thesis, serves to establish the objectives of the work (chapter 3). Chapter 4 then describes the peculiarities and specific characteristics of the plant chosen for implementing the supervisory system. Chapters 5 and 6 of this document present the work done to develop the rule-based or expert system (chapter 6) and the case-based system (chapter 7). Chapter 8 describes the integration of these two reasoning tools into a distributed multi-level architecture. Finally, a last chapter covers the evaluation (verification and validation), first of each tool separately and then of the global system in the face of real situations arising at the treatment plant.
Abstract:
The main theme of the ICTOP'94 Lisbon meeting is museum personnel training for the universal museum. At the very outset it is important to identify what the notion of the universal museum can cover, and to underline the ambiguity of the term. On the one hand, the word 'universal' can be taken to refer to the variety of collected museum materials or museum collections; on the other, it could refer to the efforts of the museum to be active outside the museum walls in order to integrate the heritage of a certain territory into a museological system. 'Universal' could also refer to the "new dimensions of reality: the fantastic reality of the virtual images, only existing in the human brain" (Scheiner 1994:7), which is very close to M. McLuhan's view of the world as a 'global village'. Thus, what is universal could be taken as being common and available to all the people of the world. 'Universal' can also imply a radical broadening of the concept of the object: "mountain, silex, frog, waterfonts, stars, the moon ... everything is an object, with due fluctuations" (Hainard in Scheiner 1994: 7), which entails the total involvement of human beings in their physical and spiritual environment. In the process of universalization, the links between cultural and natural heritage, and their links with human beings, become more solid, helping to create a strong mutual interdependence.
Abstract:
In this article, we examine the case of a system that cooperates with a “direct” user to plan an activity that some “indirect” user, not interacting with the system, should perform. The specific application we consider is the prescription of drugs: the direct user is the prescriber, and the indirect user is the person responsible for carrying out the therapy. Relevant characteristics of the two users are represented in two user models. Explanation strategies are represented in planning operators whose preconditions encode the cognitive state of the indirect user; this allows the message to be tailored to the indirect user's characteristics. Expansion of optional subgoals and selection among candidate operators are made by applying decision criteria represented as metarules that negotiate between the direct and indirect users' views, also taking into account the context in which the explanation is provided. After the message has been generated, the direct user may ask to add or remove some items, or to change the message style. The system defends the indirect user's needs as far as possible by mentioning the rationale behind the generated message. If needed, the plan is repaired and the direct user model is revised accordingly, so that the system progressively learns to generate messages suited to the preferences of the people with whom it interacts.
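The idea of operators whose preconditions are checked against the indirect-user model can be sketched very simply; the operator language below is invented for illustration and does not reproduce the paper's actual formalism:

```python
def applicable(operator, user_model):
    """An operator applies when all its preconditions hold in the user model."""
    return all(user_model.get(k) == v
               for k, v in operator["preconditions"].items())

# Two hypothetical explanation operators for the same dosage subgoal.
operators = [
    {"name": "explain-dosage-technical",
     "preconditions": {"medical_background": True},
     "message": "Take 250 mg t.i.d. for 7 days."},
    {"name": "explain-dosage-plain",
     "preconditions": {"medical_background": False},
     "message": "Take one tablet three times a day, for one week."},
]

# The indirect-user model drives which wording is selected.
indirect_user = {"medical_background": False}
chosen = next(op for op in operators if applicable(op, indirect_user))
print(chosen["name"])   # explain-dosage-plain
```

In the full system, metarules would arbitrate when several operators remain applicable, weighing the direct user's preferences against the indirect user's needs.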
Abstract:
Forecasting atmospheric blocking is one of the main problems facing medium-range weather forecasters in the extratropics. The European Centre for Medium-Range Weather Forecasts (ECMWF) Ensemble Prediction System (EPS) provides an excellent basis for medium-range forecasting, as it provides a number of different possible realizations of the meteorological future. This ensemble of forecasts attempts to account for uncertainties in both the initial conditions and the model formulation. Since 18 July 2000, routine output from the EPS has included the field of potential temperature on the surface of potential vorticity (PV) equal to 2 PV units (PVU), the dynamical tropopause. This has enabled the objective identification of blocking using an index based on the reversal of the meridional potential-temperature gradient. A year of EPS probability forecasts of Euro-Atlantic and Pacific blocking have been produced and are assessed in this paper, concentrating on the Euro-Atlantic sector. Standard verification techniques such as Brier scores, Relative Operating Characteristic (ROC) curves and reliability diagrams are used. It is shown that Euro-Atlantic sector-blocking forecasts are skilful relative to climatology out to 10 days, and are more skilful than the deterministic control forecast at all lead times. The EPS is also more skilful than a probabilistic version of this deterministic forecast, though the difference is smaller. In addition, it is shown that the onset of a sector-blocking episode is less well predicted than its decay. As the lead time increases, the probability forecasts tend towards a model climatology with slightly less blocking than is seen in the real atmosphere. This small under-forecasting bias in the blocking forecasts is possibly related to a westerly bias in the ECMWF model. Copyright © 2003 Royal Meteorological Society
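The Brier score used in this kind of verification is simply the mean squared difference between forecast probabilities and binary outcomes, and skill is measured relative to a climatological reference. A short sketch (the forecast and outcome values are invented, not the paper's data):

```python
import numpy as np

def brier_score(p, o):
    """Mean squared error of probability forecasts p against binary outcomes o."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    return float(np.mean((p - o) ** 2))

def brier_skill_score(p, o, p_clim):
    """Skill relative to a constant climatological forecast (>0: more skilful)."""
    return 1.0 - brier_score(p, o) / brier_score([p_clim] * len(o), o)

# Hypothetical daily forecasts of the probability that the sector is blocked.
probs    = [0.9, 0.8, 0.2, 0.1, 0.7, 0.3]
observed = [1,   1,   0,   0,   1,   0]
print(round(brier_score(probs, observed), 3))             # 0.047
print(brier_skill_score(probs, observed, p_clim=0.5) > 0)  # True
```

ROC curves and reliability diagrams complement the Brier score by separating discrimination from calibration, but the scalar score above is the usual headline measure.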
Abstract:
Objectives: To assess the impact of a closed-loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Design, setting and participants: Before-and-after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Intervention: Closed-loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Main outcome measures: Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs), and percentage given without checking patient identity; time spent prescribing and providing a ward pharmacy service; nursing time on medication tasks. Results: Prescribing errors were identified in 3.8% of 2450 medication orders pre-intervention and 2.0% of 2353 orders afterwards (p<0.001; χ2 test). MAEs occurred in 7.0% of 1473 non-intravenous doses pre-intervention and 4.3% of 1139 afterwards (p = 0.005; χ2 test). Patient identity was not checked for 82.6% of 1344 doses pre-intervention and 18.9% of 1291 afterwards (p<0.001; χ2 test). Medical staff required 15 s to prescribe a regular inpatient drug pre-intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre-intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; χ2 test). Conclusions: A closed-loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication-related tasks increased.
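The χ2 comparisons of before/after proportions can be checked with the standard Pearson test on a 2x2 table. In the sketch below (standard library only), the error counts are approximations reconstructed from the reported percentages (3.8% of 2450 ≈ 93, 2.0% of 2353 ≈ 47), so this is a consistency check rather than the paper's exact computation:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared (no continuity correction) for the 2x2 table
    [[a, b], [c, d]]; for df=1 the p-value is p = erfc(sqrt(chi2 / 2))."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return chi2, math.erfc(math.sqrt(chi2 / 2))

# Rows: pre- and post-intervention; columns: orders with / without an error.
chi2, p = chi2_2x2(93, 2450 - 93, 47, 2353 - 47)
print(p < 0.001)   # True, consistent with the reported p<0.001
```

The same table shape serves the identity-check and MAE comparisons; a continuity-corrected or exact test would give slightly larger p-values but the same conclusions at these sample sizes.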