7 results for quality estimation
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
This thesis studies programmatic, application-layer means for improving energy efficiency in the VoIP application domain. The work concentrates on optimizations suitable for VoIP implementations that use SIP and IEEE 802.11 technologies. Because energy-saving optimizations can affect perceived call quality, they are studied together with the factors that influence that quality. The thesis gives a general view of the topic and, based on theory, proposes adaptive optimization schemes for dynamically controlling the application's operation. A runtime quality model, capable of being integrated into the optimization schemes, is developed for estimating VoIP call quality. Based on the proposed optimization schemes, power consumption measurements are carried out to determine the achievable gains. The results show that a reduction in power consumption can be achieved with the help of adaptive optimization schemes.
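A runtime call-quality model of this kind is often built on the ITU-T G.107 E-model, which maps network impairments to an R-factor and then to a MOS score. The sketch below uses the standard R-to-MOS conversion, but the simplified delay and loss impairment terms are coarse assumptions for illustration, not the thesis's actual model.

```python
# Illustrative runtime VoIP quality estimate in the spirit of the
# ITU-T G.107 E-model. The R-to-MOS mapping follows the standard;
# the impairment terms below are simplified assumptions.

def r_factor(delay_ms, packet_loss_pct, codec_impairment=0.0):
    """Simplified R-factor: start from the default transmission rating
    and subtract delay and loss impairments (coarse approximations)."""
    r = 93.2                       # default R for a narrowband connection
    # Delay impairment: roughly linear, with a penalty past ~177 ms
    id_ = 0.024 * delay_ms
    if delay_ms > 177.3:
        id_ += 0.11 * (delay_ms - 177.3)
    # Effective equipment impairment: codec term plus an assumed
    # saturating packet-loss term
    ie_eff = codec_impairment + 30.0 * packet_loss_pct / (packet_loss_pct + 10.0)
    return r - id_ - ie_eff

def mos_from_r(r):
    """Standard E-model conversion from R-factor to MOS (1.0 .. 4.5)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6
```

An adaptive scheme could evaluate `mos_from_r(r_factor(...))` at runtime and back off power-saving measures whenever the estimated MOS drops below a target.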
Abstract:
This Master's thesis was carried out at the Department of Production Engineering (Konepajatekniikan laitos) of Lappeenranta University of Technology. The work is part of the department's "LELA" quality-control project for sheet-metal products. The project partners were Abloy Oy (Joensuu), Flextronics Enclosures (Oulu), Hihra Oy (Turku), Lillbacka Oy (Alahärmä), Nokia Networks (Oulu), Segerström & Svensson (Uusikaupunki) and Scanfil Oy (Sievi). The principal funding body was the National Technology Agency of Finland, Tekes. The work was based on a quality-assessment study conducted in the participating sheet-metal component manufacturers, comprising a defect survey of production and a quality-assessment questionnaire. According to the defect survey, the probability of a defect occurring in a typical sheet-metal component manufacturer is approximately 5–12 % per product, with most defects arising in eccentric press work. The study shows that a defect survey enables a company to map the defects arising in its production and then target quality-improvement measures at the right areas of production.
Abstract:
Image quality is among the most studied and applied topics. This thesis examines colour and spectral image quality. An overview of existing quality-assessment methods for compressed and individual images is given, with emphasis on applying these methods to spectral images. A spectral colour appearance model for colour-image quality assessment is introduced and applied to colour images reproduced from spectral images. The model is based both on a statistical spectral image model, which links the parameters of spectral images and photographs, and on the general appearance of the image. The connection between the statistical spectral parameters and the physical parameters of colour images has been verified with computer-based image modelling. Based on the properties of the model, an experimental method for colour-image quality assessment has been developed, including an expert-based questionnaire method and a fuzzy inference system. The study shows that the spectral–colour relationship and the fuzzy inference system are effective for colour-image quality assessment.
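A fuzzy inference system for grading colour-image quality can be sketched roughly as follows; the membership functions, rules, and input attributes here are invented for illustration and are not the thesis's actual system.

```python
# Toy Mamdani-style fuzzy grading of image quality from two attributes
# in [0, 1]. Memberships, rules, and the weighted-average defuzzification
# are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b on [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def quality_score(sharpness, colorfulness):
    """Fire four rules, then defuzzify by weighted average of rule outputs."""
    low  = lambda x: tri(x, -0.5, 0.0, 0.5)
    high = lambda x: tri(x, 0.5, 1.0, 1.5)
    rules = [
        (min(high(sharpness), high(colorfulness)), 1.0),  # both high -> good
        (min(high(sharpness), low(colorfulness)),  0.5),  # mixed     -> fair
        (min(low(sharpness),  high(colorfulness)), 0.5),  # mixed     -> fair
        (min(low(sharpness),  low(colorfulness)),  0.0),  # both low  -> poor
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.5
```

In the thesis, the rule base would be informed by the expert questionnaire; here the weighted average stands in for a full centroid defuzzification.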
Abstract:
Nowadays software testing and quality assurance have great value in the software development process. Software testing is not a single concrete discipline; it is a process of validation and verification that begins with the idea of a future product and ends only when the product's maintenance ends. The importance of software testing methods and tools that can be applied at different testing phases is highly stressed in industry. The initial objectives of this thesis were to provide a sufficient literature review of the different testing phases and, for each phase, to define a method that can effectively improve software quality. The testing phases chosen for study are: unit testing, integration testing, functional testing, system testing, acceptance testing and usability testing. The research showed that many software testing methods can be applied at the different phases, and that in most cases the choice of method should depend on the software type and its specification. For each phase, a characteristic problem was identified, and a method that can help eliminate it was suggested and described in detail.
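As a concrete illustration of the first phase listed above, a minimal unit test in Python's standard unittest module might look like this; the function under test is a made-up example.

```python
# A minimal unit-testing sketch: one small function and a test case
# covering the normal paths and the error path.

import unittest

def median(values):
    """Return the median of a non-empty list of numbers."""
    if not values:
        raise ValueError("median of empty sequence")
    s = sorted(values)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

class MedianTest(unittest.TestCase):
    def test_odd_length(self):
        self.assertEqual(median([3, 1, 2]), 2)

    def test_even_length(self):
        self.assertEqual(median([4, 1, 3, 2]), 2.5)

    def test_empty_input_rejected(self):
        with self.assertRaises(ValueError):
            median([])
```

Such a file would typically be executed with `python -m unittest`; the later phases (integration, system, acceptance) test progressively larger assemblies with the same assert-based discipline.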
Abstract:
Software engineering is criticized as not being engineering or a 'well-developed' science at all. Software engineers do not seem to know exactly how long their projects will last, what they will cost, or whether the software will work properly after release. Measurements have to be taken in software projects to improve this situation. It is of limited use to collect metrics only after the fact; the values of the relevant metrics have to be predicted as well. These predictions (i.e. estimates) form the basis for proper project management. One of the most painful problems in software projects is effort estimation. It has a clear and central effect on other project attributes such as cost and schedule, and on product attributes such as size and quality. Effort estimation can serve several purposes; this thesis discusses only effort estimation in software projects for project management purposes. A short introduction to measurement issues is given, and some metrics relevant in the estimation context are presented. Effort estimation methods are covered quite broadly. The main new contribution of this thesis is the estimation model that has been created. It makes use of the basic concepts of Function Point Analysis but avoids the problems and pitfalls found in that method, and it is relatively easy to use and learn. Effort estimation accuracy has significantly improved after taking the model into use. A major innovation related to the new estimation model is the identified need for hierarchical software size measurement, for which the author has developed a three-level solution. All currently used size metrics are static in nature, but the proposed metric is dynamic: it exploits the increasing understanding of the nature of the work as specification and design proceed, and thus 'grows up' along with the software project. The development of an effort estimation model is not possible without gathering and analyzing history data.
However, there are many problems with data in software engineering; a major roadblock is the amount and quality of the data available. This thesis shows some useful techniques that have been successful in gathering and analyzing the data needed. An estimation process is needed to ensure that methods are used properly, that estimates are stored, reported and analyzed correctly, and that they are used for project management activities. A higher-level mechanism called a measurement framework is also briefly introduced. The purpose of the framework is to define and maintain a measurement or estimation process; without a proper framework, the estimation capability of an organization declines, and it takes effort even to maintain an achieved level of estimation accuracy. Estimation results over several successive releases are analyzed, and it is clearly seen that the new estimation model works and that the estimation improvement actions have been successful. The calibration of the hierarchical model is a critical activity, and an example is shown to shed more light on the calibration and on the model itself. There are also remarks about the sensitivity of the model. Finally, an example of usage is shown.
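The Function Point Analysis concepts the model builds on can be illustrated with a sketch of the unadjusted function point count. The component weights below are the standard IFPUG average-complexity values; the example counts and the hours-per-FP productivity rate are invented for the example and have nothing to do with the thesis's calibrated model.

```python
# Sketch of an unadjusted function point (UFP) count using the standard
# IFPUG average-complexity weights, followed by a naive effort conversion.

AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_function_points(counts):
    """Sum each counted component times its average-complexity weight."""
    return sum(counts.get(k, 0) * w for k, w in AVERAGE_WEIGHTS.items())

def effort_estimate(ufp, hours_per_fp=8.0):
    """Convert size to effort with an assumed productivity rate,
    which in practice must be calibrated from an organization's history data."""
    return ufp * hours_per_fp

# Hypothetical project: 10 inputs and 6 outputs counted so far
ufp = unadjusted_function_points({"external_inputs": 10, "external_outputs": 6})
```

A hierarchical, dynamic size metric of the kind the thesis proposes would refine such counts as specification and design reveal more components, instead of fixing them once.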
Abstract:
Construction of multiple sequence alignments is a fundamental task in bioinformatics. Multiple sequence alignments are used as a prerequisite in many bioinformatics methods, and consequently the quality of such methods can depend critically on the quality of the alignment. However, automatic construction of a multiple sequence alignment for a set of remotely related sequences does not always produce biologically relevant alignments. Therefore, there is a need for an objective approach to evaluating the quality of automatically aligned sequences. The profile hidden Markov model is a powerful approach in comparative genomics. In a profile hidden Markov model, the symbol probabilities are estimated at each conserved alignment position; this can increase the dimension of the parameter space and cause overfitting. Both of these research problems are related to conservation. We have developed statistical measures for quantifying the conservation of multiple sequence alignments. Two types of methods are considered: those identifying conserved residues at an alignment position, and those calculating positional conservation scores. The positional conservation score was exploited in a statistical prediction model for assessing the quality of multiple sequence alignments. The residue conservation score was used as part of the emission probability estimation method proposed for profile hidden Markov models. The predicted alignment quality scores correlated highly with the correct alignment quality scores, indicating that our method is reliable for assessing the quality of any multiple sequence alignment. Comparison of the emission probability estimation method with the maximum likelihood method showed that the number of estimated parameters in the model was dramatically decreased while the same level of accuracy was maintained.
To conclude, we have shown that conservation can be successfully used in a statistical model for alignment quality assessment and in the estimation of emission probabilities in profile hidden Markov models.
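A positional conservation score of the kind described can be sketched with normalized Shannon entropy per alignment column; this is a common formulation chosen for illustration, not necessarily the thesis's exact measure.

```python
# Positional conservation as 1 - H(column)/log(|alphabet|): 1.0 for a
# fully conserved column, near 0 for a uniformly variable one.

import math

def column_conservation(column, alphabet_size=20):
    """Entropy-based conservation of one alignment column.
    Gaps ('-') are simply ignored here for brevity."""
    residues = [r for r in column if r != "-"]
    if not residues:
        return 0.0
    n = len(residues)
    entropy = 0.0
    for r in set(residues):
        p = residues.count(r) / n
        entropy -= p * math.log(p)
    return 1.0 - entropy / math.log(alphabet_size)

def alignment_conservation(alignment):
    """Per-column conservation scores for equal-length aligned sequences."""
    return [column_conservation(col) for col in zip(*alignment)]
```

A quality-prediction model could then regress reference alignment quality against features derived from these per-column scores.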
Abstract:
Since its discovery, chaos has been a very interesting and challenging topic of research, and many great minds have spent their entire lives trying to give some rules to it. Nowadays, thanks to the research of the last century and the advent of computers, it is possible to predict chaotic natural phenomena for a certain limited amount of time. The aim of this study is to present a recently discovered method for parameter estimation of chaotic dynamical system models via the correlation integral likelihood, to give some hints for a more optimized use of it, and to outline a possible industrial application. The main part of the study concerned two chaotic attractors with different general behaviour, in order to capture possible differences in the results. In the various simulations performed, the initial conditions were varied quite exhaustively. The results show that, under certain conditions, the method works very well in all the cases. In particular, it emerged that the most important aspect is to be very careful when creating the training set and the empirical likelihood, since a lack of information in this part of the procedure leads to low-quality results.
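The correlation integral likelihood builds on the Grassberger–Procaccia correlation sum C(r): the fraction of state-space point pairs closer than a radius r. A direct sketch (the points in the test are arbitrary illustration data, not a real attractor trajectory):

```python
# Correlation sum C(r) = (# pairs with distance < r) / (# pairs),
# computed by brute force over all point pairs with the Euclidean metric.

import math

def correlation_sum(points, r):
    """Fraction of distinct point pairs closer than r."""
    n = len(points)
    if n < 2:
        return 0.0
    close = 0
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) < r:
                close += 1
    return close / (n * (n - 1) / 2)
```

Evaluating C(r) over a range of radii gives the curve whose statistics feed the empirical likelihood; the thesis's training-set construction around that curve is more involved than this brute-force sketch.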