86 results for Speaker verification
Abstract:
Nowadays, computer-based systems tend to become more complex and control increasingly critical functions affecting different areas of human activity. Failures of such systems might result in loss of human lives as well as significant damage to the environment. Therefore, their safety needs to be ensured. However, the development of safety-critical systems is not a trivial exercise. Hence, to preclude design faults and guarantee the desired behaviour, different industrial standards prescribe the use of rigorous techniques for the development and verification of such systems. The more critical the system is, the more rigorous the approach that should be undertaken. To ensure the safety of a critical computer-based system, satisfaction of the safety requirements imposed on this system should be demonstrated. This task involves a number of activities. In particular, a set of safety requirements is usually derived by conducting various safety analysis techniques. Strong assurance that the system satisfies the safety requirements can be provided by formal methods, i.e., mathematically-based techniques. At the same time, the evidence that the system under consideration meets the imposed safety requirements might be demonstrated by constructing safety cases. However, the overall safety assurance process of critical computer-based systems remains insufficiently defined for the following reasons. Firstly, there are semantic differences between safety requirements and formal models: informally represented safety requirements should be translated into the underlying formal language to enable further verification. Secondly, the development of formal models of complex systems can be labour-intensive and time-consuming. Thirdly, there are only a few well-defined methods for the integration of formal verification results into safety cases.
This thesis proposes an integrated approach to the rigorous development and verification of safety-critical systems that (1) facilitates elicitation of safety requirements and their incorporation into formal models, (2) simplifies formal modelling and verification by proposing specification and refinement patterns, and (3) assists in the construction of safety cases from the artefacts generated by formal reasoning. Our chosen formal framework is Event-B. It allows us to tackle the complexity of safety-critical systems as well as to structure safety requirements by applying abstraction and stepwise refinement. The Rodin platform, a tool supporting Event-B, assists in automatic model transformations and proof-based verification of the desired system properties. The proposed approach has been validated by several case studies from different application domains.
Abstract:
In the elevator industry, the quality requirements for the electric motors used as hoisting machines have tightened in recent years. In particular, the noise and mechanical vibration produced by the machines have come under increasingly strict scrutiny. Noise caused by vibration transmitted to the elevator car and the surrounding structures is one of the factors most significantly affecting the perceived quality of an elevator. The hoisting machine is one of the most important sources of noise and vibration in an elevator system, and these factors can be minimized through machine design. In electrical machine design, the use of finite element methods (FEM) has become established practice in the most demanding applications. At Kone Oyj, axial-flux permanent-magnet synchronous machines (AFPMSM) are used as hoisting machines, and three different approaches are commonly used for their FEM simulation. Each of these alternatives has its own benefits and drawbacks. From a design point of view, it is important to choose the right method to maximize the trade-off between time and informativeness; the correctness of the obtained results is also crucial. The goal of this Master's thesis is to develop a system with which the forces of an AFPMS machine can be measured at a detailed level. The system makes it possible to examine the correctness of the results of the FE methods in use as well as the generation mechanisms of both noise and vibration. The system is also intended to deepen Kone Oyj's know-how of the operation of AFPMS machines. This thesis presents the non-idealities of an AFPMS machine that may affect the design of the measurement system; noise, which is counted among these non-idealities, is also examined. To enable the comparison of FE methods and the assessment of the correctness of their results, the most common FE methods for AFPMS machines are also reviewed. The result of the work is a design for a measurement system that enables six-degree-of-freedom force measurement for each machine magnet with a resolution below 1 N.
The functionality of the designed system has been examined using FE methods, and the capability of the force sensor used in the system has been verified with reference measurements. The designed measurement system enables detailed examination of several different non-idealities of an electric motor. Applying the measurement concept to the study of other machines offers opportunities for further research.
Abstract:
Power consumption is still an issue in wearable computing applications today. The aim of the present paper is to raise awareness of the power consumption of wearable computing devices in specific scenarios, so that energy-efficient wireless sensors for context recognition in wearable computing applications can be designed in the future. The approach is based on a hardware study. The objective of this paper is to analyze and compare the total power consumption of three representative wearable computing devices in realistic scenarios such as Display, Speaker, Camera and microphone, Transfer by Wi-Fi, Monitoring outdoor physical activity and Pedometer. A scenario-based energy model is also developed. The Samsung Galaxy Nexus I9250 smartphone, the Vuzix M100 Smart Glasses and the SimValley Smartwatch AW-420.RX are the three devices representative of their form factors. The power consumption is measured using PowerTutor, an Android energy profiler application with a logging option; because some of its parameters are unknown, the measurements are adjusted with a USB meter. The results show that screen size is the main parameter influencing power consumption. The power consumption for an identical scenario varies across the wearable devices, meaning that other components, parameters or processes might impact the power consumption, and further study is needed to explain these variations. This paper also shows that different inputs (a touchscreen is more efficient than button controls) and outputs (the speaker is more efficient than the display) impact energy consumption in different ways. Finally, the paper gives recommendations, based on the energy model, for reducing energy consumption in healthcare wearable computing applications.
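A scenario-based energy model of the kind described above can be sketched as follows; the scenario names, power figures, and function names here are illustrative assumptions for the sketch, not values measured in the paper.

```python
# Minimal sketch of a scenario-based energy model: total energy is the sum
# of each scenario's average power draw multiplied by its duration.
SCENARIOS = {  # hypothetical average power per scenario, in milliwatts
    "display": 350.0,
    "speaker": 120.0,
    "wifi_transfer": 600.0,
}

def energy_mj(scenario: str, seconds: float) -> float:
    """Energy in millijoules for running one scenario for `seconds`."""
    return SCENARIOS[scenario] * seconds

def total_energy_mj(usage: dict) -> float:
    """Total energy over a usage profile mapping scenario -> duration (s)."""
    return sum(energy_mj(name, t) for name, t in usage.items())

profile = {"display": 60.0, "wifi_transfer": 10.0}
print(total_energy_mj(profile))  # 350*60 + 600*10 -> 27000.0
```

Such a model is only as good as the per-scenario power figures, which is why the paper calibrates its profiler readings against a hardware USB meter.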
Abstract:
Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality, reliable, fault tolerant, efficient, etc. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the key aspects for succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, today, customers are asking for these high-quality software products at an ever-increasing pace. This leaves the companies with less time for development. Software testing is an expensive activity, because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is reducing the associated cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to only demonstrate that a piece of software is functioning correctly. Usually, many other aspects of the software, such as performance, security, scalability, usability, etc., also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented. This is due to the fact that non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing.
We present approaches to automatically generate tests from behavioral models for solving some of these challenges. We show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability, rather than the output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process. Requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor tool support, or lack it entirely. Therefore, the second contribution of this thesis is proper tool support for the proposed approach that is integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry-leading tools, and complete tool-chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
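The core idea of generating tests from a behavioral model can be illustrated with a minimal sketch; the toy state machine, action names, and helper function below are hypothetical examples, not the thesis's UML-based tool-chain.

```python
# Sketch of model-based test generation: enumerate the action sequences a
# small finite-state model allows, by breadth-first exploration of transitions.
from collections import deque

# Transitions of a toy behavioral model: (state, action) -> next state
MODEL = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "resume"): "running",
    ("running", "stop"): "idle",
}

def generate_tests(initial: str, max_len: int):
    """Enumerate all action sequences up to max_len that the model permits."""
    tests, queue = [], deque([(initial, [])])
    while queue:
        state, path = queue.popleft()
        if path:  # every non-empty path is a candidate test sequence
            tests.append(path)
        if len(path) == max_len:
            continue
        for (src, action), nxt in MODEL.items():
            if src == state:
                queue.append((nxt, path + [action]))
    return tests

tests = generate_tests("idle", 2)
# yields [["start"], ["start", "pause"], ["start", "stop"]]
```

Real model-based testing tools add coverage criteria and oracles on top of this kind of exploration, but the principle, deriving test sequences mechanically from the model rather than writing them by hand, is the same.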
Abstract:
Resilience is the property of a system to remain trustworthy despite changes. Changes of a different nature, whether due to failures of system components or varying operational conditions, significantly increase the complexity of system development. Therefore, advanced development technologies are required to build robust and flexible system architectures capable of adapting to such changes. Moreover, powerful quantitative techniques are needed to assess the impact of these changes on various system characteristics. Architectural flexibility is achieved by embedding into the system design the mechanisms for identifying changes and reacting to them. Hence a resilient system should have both advanced monitoring and error detection capabilities to recognise changes, as well as sophisticated reconfiguration mechanisms to adapt to them. The aim of such reconfiguration is to ensure that the system stays operational, i.e., remains capable of achieving its goals. Design, verification and assessment of the system reconfiguration mechanisms are challenging and error-prone engineering tasks. In this thesis, we propose and validate a formal framework for the development and assessment of resilient systems. Such a framework provides us with the means to specify and verify complex component interactions, model their cooperative behaviour in achieving system goals, and analyse the chosen reconfiguration strategies. Due to the variety of properties to be analysed, such a framework should have an integrated nature. To ensure the system functional correctness, it should rely on formal modelling and verification, while, to assess the impact of changes on such properties as performance and reliability, it should be combined with quantitative analysis. To ensure scalability of the proposed framework, we choose Event-B as the basis for reasoning about functional correctness.
Event-B is a state-based formal approach that promotes the correct-by-construction development paradigm and formal verification by theorem proving. Event-B has mature industrial-strength tool support: the Rodin platform. Proof-based verification, together with the reliance on abstraction and decomposition adopted in Event-B, provides the designers with powerful support for the development of complex systems. Moreover, the top-down system development by refinement allows the developers to explicitly express and verify critical system-level properties. Besides ensuring functional correctness, to achieve resilience we also need to analyse a number of non-functional characteristics, such as reliability and performance. Therefore, in this thesis we also demonstrate how formal development in Event-B can be combined with quantitative analysis. Namely, we experiment with the integration of such techniques as probabilistic model checking in PRISM and discrete-event simulation in SimPy with formal development in Event-B. Such an integration allows us to assess how changes and different reconfiguration strategies affect the overall system resilience. The approach proposed in this thesis is validated by a number of case studies from such areas as the robotics, space, healthcare and cloud domains.
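As a rough illustration of the quantitative side of such an integrated framework, the sketch below runs a tiny discrete-event simulation of failure and repair to estimate system availability. It is a hypothetical toy model using only the Python standard library, not Event-B, PRISM, or SimPy themselves; the parameters and function names are assumptions made for the sketch.

```python
# Toy discrete-event simulation: a system alternates between "up" periods
# (exponentially distributed time to failure) and fixed-length repairs.
# Availability is estimated as the fraction of the horizon spent up.
import random

def simulate_availability(mtbf: float, repair_time: float,
                          horizon: float, seed: int = 0) -> float:
    """Estimate availability over `horizon` time units (hypothetical model)."""
    rng = random.Random(seed)          # seeded for reproducibility
    t, uptime = 0.0, 0.0
    while t < horizon:
        # time until the next failure, truncated at the horizon
        up = min(rng.expovariate(1.0 / mtbf), horizon - t)
        uptime += up
        t += up + repair_time          # reconfiguration/repair downtime
    return uptime / horizon

avail = simulate_availability(mtbf=100.0, repair_time=5.0, horizon=10_000.0)
```

With these parameters the steady-state availability is about mtbf / (mtbf + repair_time) ≈ 0.95; a quantitative back-end of this kind lets a designer compare reconfiguration strategies (e.g. shorter repair times) without changing the verified functional model.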
Abstract:
It has long been known that amino acids are the building blocks for proteins and govern their folding into specific three-dimensional structures. However, the details of this process are still unknown and represent one of the main problems in structural bioinformatics, which is a highly active research area with the focus on the prediction of three-dimensional structure and its relationship to protein function. The protein structure prediction procedure encompasses several different steps from searches and analyses of sequences and structures, through sequence alignment to the creation of the structural model. Careful evaluation and analysis ultimately results in a hypothetical structure, which can be used to study biological phenomena in, for example, research at the molecular level, biotechnology and especially in drug discovery and development. In this thesis, the structures of five proteins were modeled with template-based methods, which use proteins with known structures (templates) to model related or structurally similar proteins. The resulting models were an important asset for the interpretation and explanation of biological phenomena, such as amino acids and interaction networks that are essential for the function and/or ligand specificity of the studied proteins. The five proteins represent different case studies with their own challenges, such as varying template availability, which resulted in a different structure prediction process. This thesis presents the techniques and considerations, which should be taken into account in the modeling procedure to overcome limitations and produce a hypothetical and reliable three-dimensional structure.
As each project shows, the reliability is highly dependent on the extensive incorporation of experimental data or known literature and, although experimental verification of in silico results is always desirable to increase the reliability, the presented projects show that experimental studies can also greatly benefit from structural models. With the help of in silico studies, the experiments can be targeted and precisely designed, thereby saving both money and time. As the programs used in structural bioinformatics are constantly improved and the range of templates increases through structural genomics efforts, the mutual benefits between in silico and experimental studies become even more prominent. Hence, reliable models of protein three-dimensional structures, achieved through careful planning and thoughtful execution, are, and will continue to be, valuable and indispensable sources of structural information to be combined with functional data.
Abstract:
The aim of this study was to contribute to current knowledge-based theory by focusing on a research gap that exists in the empirically proven determination of the simultaneous but differentiable effects of intellectual capital (IC) assets and knowledge management (KM) practices on organisational performance (OP). The analysis was built on past research and theorised interactions between the latent constructs, specified using survey-based items measured from a sample of Finnish companies for IC and KM, and the dependent construct for OP, determined using information available from financial databases. Two widely used and commonly recommended measures in the management science literature, i.e. the return on total assets (ROA) and the return on equity (ROE), were calculated for OP. Thus the investigation of the relationship between IC and KM impacting OP in relation to the formulated hypotheses could be conducted using objectively derived performance indicators. Using financial OP measures also strengthened the dynamic features of the data needed in analysing simultaneous and causal dependencies between the modelled constructs specified using structural path models. The estimates of the parameters of the structural path models were obtained using a partial least squares-based regression estimator. Results showed that the path dependencies between IC and OP or KM and OP were always insignificant when analysed separately from any other interactions or indirect effects caused by simultaneous modelling, regardless of whether ROA or ROE was used as the OP measure. The dependency between the constructs for KM and IC appeared to be very strong and was always significant when modelled simultaneously with other possible interactions between the constructs, using either ROA or ROE to define OP.
This study, however, did not find statistically unambiguous evidence for the hypothesised causal mediation effects, such as the effects of KM practices on OP being mediated by the IC assets. Since some indication of fluctuating causal effects was observed, it was concluded that further studies are needed to verify the fundamental, and likely hidden, causal effects between the constructs of interest. It was therefore also recommended that complementary modelling and data processing be conducted to elucidate whether mediation effects occur between IC, KM and OP; verifying this requires further investigation of the measured items and can be built on the findings of this study.
Abstract:
The vast majority of our contemporary society owns a mobile phone, which has resulted in a dramatic rise in the number of networked computers in recent years. Security issues in these computers have followed the same trend, and nearly everyone is now affected by them. How could the situation be improved? For software engineers, an obvious answer is to build computer software with security in mind. A problem with building software with security in mind is how to define secure software, or how to measure security. This thesis divides the problem into three research questions. First, how can we measure the security of software? Second, what types of tools are available for measuring security? And finally, what do these tools reveal about the security of software? Measurement tools of this kind are commonly called metrics. This thesis is focused on the perspective of software engineers in the software design phase. Focus on the design phase means that code-level semantics or programming language specifics are not discussed in this work. Organizational policy, management issues and the software development process are also out of scope. The first two research problems were studied using a literature review, while the third was studied using case study research. The target of the case study was a Java-based email server called Apache James, which had details from its changelog and security issues available, and whose source code was accessible. The research revealed that there is a consensus in the terminology on software security. Security verification activities are commonly divided into evaluation and assurance. The focus of this work was on assurance, which means verifying one's own work. There are 34 metrics available for security measurements, of which five are evaluation metrics and 29 are assurance metrics. We found, however, that the general quality of these metrics was not good.
Only three metrics in the design category passed the inspection criteria and could be used in the case study. The metrics claim to give quantitative information on the security of the software, but in practice they were limited to evaluating different versions of the same software. Apart from being relative, the metrics were unable to detect security issues or point out problems in the design. Furthermore, interpreting the metrics' results was difficult. In conclusion, the general state of software security metrics leaves a lot to be desired. The metrics studied had both theoretical and practical issues, and are not suitable for daily engineering workflows. Nevertheless, they provide a basis for further research, since they point out areas where security metrics need to improve if verification of security from the design phase is desired.
Abstract:
This thesis presents the results of an analysis of the content in the series of Russian textbooks Kafe Piter, which was widely used in Finnish educational institutions for adult learners at the time the research was conducted. The purpose of this study is to determine and describe how a textbook may purvey an image of a foreign country (in this case, Russia). Mixed-methods research with a focus on the qualitative content analysis of Kafe Piter is performed. The guidelines for textbook evaluation of cultural content proposed by Byram (1993) are used in this study as the basis for creating a qualitative analysis checklist, which is adapted to the needs of the current research. The selection of the categories in the checklist is based on major themes where direct statements about Russia, Russian people and culture appear in the textbook. The cultural content and the way in which it is presented in Kafe Piter are also compared to the intercultural competence objectives of the Common European Framework of Reference for Languages. Because the textbook was not written by a native Russian speaker, it was also important to investigate the types of mistakes found in the books. A simple quantitative analysis in the form of descriptive statistics was done, which consisted of counting the mistakes and inaccuracies in Kafe Piter. The mistakes were categorized into several different groups: factual or cultural, lexico-semantic, grammatical, spelling and punctuation mistakes. Based on the results, the cultural content of Kafe Piter provides a rich variety of cultural information that allows for a good understanding of the Russian language and Russian culture. A sufficient number of cross-cultural elements also appear in the textbook, including cultural images and information describing and comparing Russian and Finnish ways of life.
Based on the cultural topics covered in Kafe Piter, we conclude that the textbook is in line with the intercultural competence objectives set out in the Common European Framework of Reference for Languages. The results of the study also make it clear that a thorough proofreading of Kafe Piter is needed in order to correct mistakes: more than 130 cultural and linguistic mistakes and inaccuracies appear in the textbook.
Abstract:
The advantage of Class D amplifiers over traditional Class A or AB amplifiers is their high efficiency and small size. In addition, they are inexpensive and, thanks to the high efficiency, can be fitted with smaller heat sinks than traditional amplifiers. In this study, the Class AB power amplifier serving as the output stage of a Peavey Solo guitar amplifier is replaced with a more powerful Class D amplifier representing a newer amplifier class. The study examines how the circuit implemented with the Class D amplifier performs compared to the original circuit, by investigating electrical characteristics typical of a guitar amplifier. The work is carried out through measurements. Because the circuits involved contain live parts, electrical safety must be taken into account. The Class D amplifier imposes certain electrical endurance requirements on the circuit's power supply and loudspeaker element, so their selection is also addressed in this work. The sound pressure levels produced by the circuit upgraded to the Class D amplifier were higher in the majority of measurements than those of the original circuit. In the sound pressure measurements, the power delivered to the load was also higher with the upgraded circuit than with the original one. The highest load power of the upgraded circuit, 72.72 W, was reached when the RMS voltage across the loudspeaker element was 24.12 V; the load power was then about 36 times higher than with the original circuit. The measured harmonic distortion values of the Class D amplifier are considerably lower than those of the original amplifier. The measured electrical frequency response of the unmodified amplifier shows that the original amplifier emphasizes certain frequencies, whereas the frequency response of the Class D amplifier is approximately flat. The acoustic frequency response of the circuit upgraded to the Class D amplifier is not as flat as that of the original circuit, and it also has a narrower frequency range.
Improving the sound of the upgraded circuit would require further study of the factors affecting the acoustic frequency response.
The use of computer-assisted audit support systems in audit risk management
Abstract:
Scandals involving large corporations have raised concerns about organizations' audit control systems. A computer-based audit support system can help the auditor perform controls and substantive testing, analysis and verification of financial statement data, and continuous monitoring and auditing. Audit management software can streamline the workflow and reduce the risk of errors. The aim of this thesis is to study the use of an electronic support system in the audit process and in the management of audit-related risks. The goal is to find out how an electronic support system is used in managing audit risks as part of the audit process. The study was carried out as qualitative research. The empirical material of the study consists of four thematic interviews. All interviewees are from the same audit firm. The themes of the interviews were compiled from topics raised in previous research. According to the empirical results of the thesis, electronic systems have significantly affected the auditor's work. System-based auditing makes it possible to check large volumes of data efficiently, which speeds up the entire audit. Quality requirements have tightened, however, which in turn reduces efficiency. IT auditors have at their disposal a variety of electronic audit systems with which data can be retrieved from the client's system and analyzed. The auditors can then review the analyzed data in the form of various reports. IT auditors can go through the client's entire population of transactions. This, in part, helps in audit risk management.