963 results for quality metrics


Relevance: 60.00%

Abstract:

BACKGROUND: Parrots belong to a group of behaviorally advanced vertebrates and show an advanced capacity for vocal learning relative to other vocal-learning birds. They can imitate human speech, synchronize their body movements to a rhythmic beat, and understand complex concepts such as the referential meaning of sounds. However, little is known about the genetics of these traits. Elucidating their genetic bases requires whole-genome sequencing and a robust assembly of a parrot genome. FINDINGS: We present a genomic resource for the budgerigar, an Australian parakeet (Melopsittacus undulatus) and the parrot species most widely studied in neuroscience and behavior. The genomic sequence data include over 300× raw read coverage from multiple sequencing technologies, as well as chromosome optical maps, all from a single male animal. The reads and optical maps were used to create three hybrid assemblies containing some of the largest genomic scaffolds to date for a bird; two of these were annotated on the basis of similarities to reference sets of non-redundant human, zebra finch and chicken proteins, and to budgerigar transcriptome sequence assemblies. The sequence reads for this project were in part generated for and used in both the Assemblathon 2 competition and the first de novo assembly of a giga-scale vertebrate genome using PacBio single-molecule sequencing. CONCLUSIONS: Across several quality metrics, these budgerigar assemblies are comparable to or better than the chicken and zebra finch genome assemblies built from traditional Sanger sequencing reads, and they are sufficient to analyze regions that are difficult to sequence and assemble, including regions not assembled in prior bird genomes and promoter regions of genes differentially regulated in vocal-learning brain regions. This work provides valuable data and material for genome technology development and for investigating the genomics of complex behavioral traits.
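For illustration, one of the standard contiguity metrics used to compare such assemblies is the scaffold N50. The sketch below is a minimal Python illustration, not code from the paper, and uses made-up scaffold lengths.

```python
def n50(scaffold_lengths):
    """Return the N50: the length L such that scaffolds of length >= L
    cover at least half of the total assembly size."""
    lengths = sorted(scaffold_lengths, reverse=True)
    half_total = sum(lengths) / 2
    running = 0
    for length in lengths:
        running += length
        if running >= half_total:
            return length
    return 0

# Hypothetical scaffold lengths (bp), for illustration only
print(n50([5_000_000, 3_000_000, 2_000_000, 500_000, 100_000]))  # -> 3000000
```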

Relevance: 60.00%

Abstract:

The renewed concern with assessing the risks and consequences of technological hazards in industrial and urban areas continues to drive the development of local-scale consequence analysis (CA) modelling tools able to predict short-term pollution episodes and exposure effects on humans and the environment in the event of accidents involving hazardous gases (hazmat). In this context, the main objective of this thesis is the development and validation of the EFfects of Released Hazardous gAses (EFRHA) model. This modelling tool is designed to simulate the outflow and atmospheric dispersion of heavy and passive hazmat gases in complex, built-up areas, and to estimate the exposure consequences of short-term pollution episodes against regulatory/safety threshold limits. The model comprises five main modules built on up-to-date methods: meteorological, terrain, source term, dispersion, and effects modules. Accident scenarios with different initial physical states can be examined. The dispersion module, considered the core of the tool, uses a shallow-layer modelling approach capable of accounting for the main influence of obstacles on hazmat gas dispersion. Model validation includes qualitative and quantitative analyses of the main outputs, comparing modelled results against measurements and/or modelled databases. A preliminary analysis of the meteorological and source term modules against outputs from extensively validated models shows a consistent description of ambient conditions and of the variation of the hazmat gas release. Dispersion results are compared against measurements in obstructed and unobstructed areas for different release and dispersion scenarios. The performance validation exercise showed acceptable agreement, indicating a reasonable numerical representation of the measured features. In general, the quality metrics are within or close to the acceptance limits recommended for 'non-CFD models', demonstrating the model's capability to reasonably predict the accidental release and atmospheric dispersion of hazmat gases in industrial and urban areas. The EFRHA model was also applied to a particular case study, the Estarreja Chemical Complex (ECC), for a set of accidental release scenarios within a CA scope. The results show the magnitude of the potential effects on the surrounding populated area and the influence of the type of accident and of the environment on the main outputs. Overall, the thesis shows that the EFRHA model can be used as a straightforward tool to support CA studies for training and planning, and also to support decision-making and emergency response in the event of accidental hazmat gas releases in industrial and built-up areas.
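Validation exercises of this kind commonly score dispersion predictions with the fractional bias (FB), the normalised mean square error (NMSE) and the fraction of predictions within a factor of two of the observations (FAC2). The abstract does not list the exact metric set used, so the following Python sketch should be read as an assumed, illustrative example with made-up concentration values.

```python
import numpy as np

def dispersion_quality_metrics(observed, predicted):
    """Common validation metrics for atmospheric dispersion models:
    fractional bias (FB), normalised mean square error (NMSE) and the
    fraction of predictions within a factor of two of observations (FAC2)."""
    co = np.asarray(observed, dtype=float)
    cp = np.asarray(predicted, dtype=float)
    fb = (co.mean() - cp.mean()) / (0.5 * (co.mean() + cp.mean()))
    nmse = np.mean((co - cp) ** 2) / (co.mean() * cp.mean())
    fac2 = np.mean((cp >= 0.5 * co) & (cp <= 2.0 * co))
    return fb, nmse, fac2

# Illustrative concentrations (e.g. mg/m^3); values are made up
fb, nmse, fac2 = dispersion_quality_metrics([1.0, 2.5, 4.0], [1.2, 2.0, 3.5])
print(f"FB={fb:.2f}  NMSE={nmse:.2f}  FAC2={fac2:.2f}")
```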

Relevance: 60.00%

Abstract:

Existing capability models lack qualitative and quantitative means of comparing business capabilities. This paper extends previous work and uses affordance theories to consistently model and analyse capabilities. We use the concepts of objective and subjective affordances to model a capability as a tuple of a set of resource affordance system mechanisms and action paths, dependent on one or more critical affordance factors. We identify an affordance chain, by which subjective affordances work together to enable an action, and an affordance path, which links action affordances to create a capability system. We define the mechanism and path underlying a capability and show how the affordance modelling notation, AMN, can represent the affordances comprising a capability. We propose a method to compare capabilities quantitatively and qualitatively using efficiency, effectiveness and quality metrics. The method is demonstrated with a medical example comparing the capability of syringe and needleless anaesthetic systems.
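As a loose illustration only (the class and field names below are hypothetical and not taken from the paper or from AMN), a capability viewed as a tuple of affordance mechanisms and an action path, compared on efficiency, effectiveness and quality, could be sketched as follows.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Affordance:
    name: str
    critical_factors: List[str] = field(default_factory=list)  # conditions the affordance depends on

@dataclass
class Capability:
    mechanisms: List[Affordance]  # resource affordance system mechanisms
    action_path: List[str]        # ordered actions forming the capability
    efficiency: float             # e.g. time or cost per action
    effectiveness: float          # e.g. success rate
    quality: float                # e.g. outcome quality score

def compare(a: Capability, b: Capability) -> Dict[str, float]:
    """Quantitative comparison on the three metrics named in the abstract."""
    return {
        "efficiency": a.efficiency - b.efficiency,
        "effectiveness": a.effectiveness - b.effectiveness,
        "quality": a.quality - b.quality,
    }

syringe = Capability([Affordance("inject", ["trained operator"])], ["prepare", "inject"], 0.6, 0.95, 0.8)
needleless = Capability([Affordance("jet-deliver", ["device charged"])], ["prepare", "deliver"], 0.8, 0.9, 0.85)
print(compare(syringe, needleless))
```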

Relevance: 60.00%

Abstract:

Generating quadrilateral meshes is a highly non-trivial task, as design decisions are frequently driven by specific application demands. Automatic techniques can optimize objective quality metrics, such as mesh regularity, orthogonality, alignment and adaptivity; however, they cannot make subjective design decisions. A few quad-meshing approaches offer mechanisms to include the user in the mesh generation process; however, these techniques either require a large amount of user interaction or do not provide the necessary or easy-to-use inputs. Here, we propose a template-based approach for generating quad-only meshes from triangle surfaces. Our approach offers a flexible mechanism for external input through the definition of alignment features that are respected during mesh generation. While allowing user input to support subjective design decisions, our approach also takes objective quality metrics into account to produce semi-regular, quad-only meshes that align well with the desired surface features.
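As an example of an objective quality metric of the kind such techniques optimize, the sketch below scores a single quad's orthogonality by how far its interior angles deviate from 90 degrees. This is an illustrative metric, not the one used in the paper.

```python
import math

def quad_orthogonality(corners):
    """Simple objective quality metric for a quad element: 1.0 when all four
    interior angles are 90 degrees, decreasing towards 0 as they deviate.
    `corners` is a list of four (x, y) points given in order."""
    worst = 0.0
    for i in range(4):
        p_prev, p, p_next = corners[i - 1], corners[i], corners[(i + 1) % 4]
        v1 = (p_prev[0] - p[0], p_prev[1] - p[1])
        v2 = (p_next[0] - p[0], p_next[1] - p[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
        worst = max(worst, abs(angle - 90.0))
    return 1.0 - worst / 90.0

print(quad_orthogonality([(0, 0), (1, 0), (1, 1), (0, 1)]))    # 1.0 (perfect square)
print(quad_orthogonality([(0, 0), (1, 0), (1.5, 1), (0, 1)]))  # < 1.0 (sheared quad)
```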

Relevance: 60.00%

Abstract:

This thesis aims to contribute critically to the understanding of the intricate relations between the State, capital and academic production. To that end, it set out to interpret the academic relations of production in graduate programmes in Administration in Brazil, articulating them with broader analytical categories outlined so as to provide a political-economy backdrop. The assumption that the current intensification of the pace of academic production contrasts with an (idealised) past of contemplative science had to be confronted with the historical development of higher education and graduate studies in the country, in order to dispel certain mythifications in the debate. The State, in its version reformed along Friedmanite lines, the concept of monopoly capital and labour process theory provided theoretical-methodological (and empirical) support for the interpretation of the proposed political-economic picture. To move from the general to the particular (from the connections between the State and capital to academic production), data were collected on two fronts: (i) the academic output of all 168 researchers holding CNPq Research Productivity (PQ) grants in Administration (as of March 2014) was analysed, and (ii) in-depth interviews were conducted with researchers and doctoral students from a wide range of graduate programmes in Administration across the country. The results were unsettling: a process is under way in which the labour of supervisees is increasingly incorporated into the pedagogical-productive structures of graduate programmes. Supervisees account for the most substantial share of total academic output, while the labour processes deepen the re-signification of the roles of graduate programmes and intensify the division of labour, with diverse impacts on the relations among those involved in graduate education. When the analytical movement is reversed (from the relations within Brazilian graduate programmes in Administration back to the political-economy picture in the background), it becomes clear that the State (mainly through CAPES and CNPq) and the academic capitalist market (tending towards a few monopoly-capital firms) add fundamental determinations to the academic relations of production. Evaluation indices based on research-accounting metrics legitimise themselves as epistemological monopolies of quality and are institutionalised through the coordinated actions of CAPES and CNPq across the official graduate system. Production targets are set and re-signified by the subjects involved. Ultimately, even the kind of science produced in the field is thereby defined. The thesis concludes that resistance to the current intensified patterns of academic production requires a critical understanding of all these relations, which are woven and structured within graduate education.

Relevance: 60.00%

Abstract:

Wireless technologies have evolved rapidly in recent decades, as they are an efficient alternative for transmitting information, whether data, voice, video or other network services. Knowledge of how this information propagates in different environments is of great importance for the planning and development of wireless communication systems. Owing to the rapid advance and popularisation of these networks, the services offered have become more complex and therefore require quality requirements to be met so that they reach the end user satisfactorily. Consequently, designers of such systems need a methodology that offers a better assessment of the indoor environment. This assessment is performed by analysing the coverage area and the behaviour of multimedia service metrics at any position in the environment receiving the service. The work developed in this dissertation aims to evaluate a methodology for predicting quality-of-experience metrics. To this end, measurement campaigns of video transmissions over a wireless network were carried out, and several network parameters (packet/frame jitter, packet/frame loss) and quality-of-experience parameters (PSNR, SSIM and VQM) were evaluated. The results showed good agreement with models from the literature and with the measurements.
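For illustration, one common definition of packet jitter is the running interarrival-jitter estimator of RFC 3550; the dissertation does not state which estimator it used, so the sketch below (with made-up timestamps) is only an assumed example.

```python
def rfc3550_jitter(send_times, recv_times):
    """Running interarrival-jitter estimate as defined in RFC 3550:
    J += (|D| - J) / 16, where D is the difference in transit time
    between consecutive packets. Times are in seconds."""
    jitter = 0.0
    prev_transit = None
    for s, r in zip(send_times, recv_times):
        transit = r - s
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0
        prev_transit = transit
    return jitter

# Hypothetical send/receive timestamps, for illustration only
print(rfc3550_jitter([0.00, 0.02, 0.04, 0.06], [0.10, 0.125, 0.142, 0.168]))
```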

Relevance: 60.00%

Abstract:

The use of numerical simulation in the design and evaluation of product performance is ever increasing. To a greater extent, such estimates are needed at an early design stage, when physical prototypes are not available. When dealing with vibro-acoustic models, known to be computationally expensive, a question remains about the accuracy of such models in view of the well-known variability inherent to mass-production manufacturing techniques. In addition, both academia and industry have recently realized the importance of actually listening to a product's sound, either through measurements or through virtual sound synthesis, in order to assess its performance. In this work, the scatter caused by significant parameter variations in a simplified vehicle vibro-acoustic model is calculated on loudness metrics using Monte Carlo analysis. The mapping from the system parameters to the sound quality metrics is performed by a fully coupled vibro-acoustic finite element model. Different loudness metrics are used, including overall sound pressure level expressed in dB and Specific Loudness in sones. Sound quality equivalent sources are used to excite this model, and the sound pressure at the driver's head position is acquired and evaluated according to sound quality metrics. No significant variation was perceived when evaluating the system using regular sound pressure level expressed in dB and dB(A), because the third-octave filters average the results within each frequency band. On the other hand, Zwicker Loudness shows important variations, arguably due to masking effects.
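The contrast between overall levels in dB or dB(A) and loudness-based metrics comes down to how band energies are aggregated. The sketch below is illustrative only, with made-up third-octave band levels; it shows the energetic summation into an overall level and the IEC 61672 A-weighting curve used for dB(A).

```python
import math

def a_weighting_db(f):
    """A-weighting correction in dB at frequency f (Hz), per IEC 61672."""
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00

def overall_level(band_levels_db, weights_db=None):
    """Energetically sum band levels (dB) into one overall level."""
    if weights_db is None:
        weights_db = [0.0] * len(band_levels_db)
    total = sum(10 ** ((l + w) / 10.0) for l, w in zip(band_levels_db, weights_db))
    return 10.0 * math.log10(total)

# Hypothetical third-octave band levels at the driver's head position
freqs = [125.0, 250.0, 500.0, 1000.0, 2000.0]
levels = [62.0, 58.0, 55.0, 53.0, 50.0]
print(overall_level(levels))                                      # overall level in dB
print(overall_level(levels, [a_weighting_db(f) for f in freqs]))  # overall level in dB(A)
```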

Relevance: 60.00%

Abstract:

ASTM A529 carbon–manganese steel angle specimens were joined by flash butt welding and the effects of varying process parameter settings on the resulting welds were investigated. The weld metal and heat affected zones were examined and tested using tensile testing, ultrasonic scanning, Rockwell hardness testing, optical microscopy, and scanning electron microscopy with energy dispersive spectroscopy in order to quantify the effect of process variables on weld quality. Statistical analysis of experimental tensile and ultrasonic scanning data highlighted the sensitivity of weld strength and the presence of weld zone inclusions and interfacial defects to the process factors of upset current, flashing time duration, and upset dimension. Subsequent microstructural analysis revealed various phases within the weld and heat affected zone, including acicular ferrite, Widmanstätten or side-plate ferrite, and grain boundary ferrite. Inspection of the fracture surfaces of multiple tensile specimens, with scanning electron microscopy, displayed evidence of brittle cleavage fracture within the weld zone for certain factor combinations. Test results also indicated that hardness was increased in the weld zone for all specimens, which can be attributed to the extensive deformation of the upset operation. The significance of weld process factor levels on microstructure, fracture characteristics, and weld zone strength was analyzed. The relationships between significant flash welding process variables and weld quality metrics as applied to ASTM A529-Grade 50 steel angle were formalized in empirical process models.
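An empirical process model of the kind mentioned can be as simple as a least-squares fit of a weld quality metric against the process factors. The sketch below is purely illustrative; the factor values and strengths are made up and the actual models from the study are not reproduced here.

```python
import numpy as np

# Hypothetical flash-welding process factors and a weld quality metric
upset_current = np.array([20.0, 22.0, 24.0, 20.0, 24.0])       # kA
flash_time    = np.array([4.0, 5.0, 6.0, 6.0, 4.0])            # s
upset_dim     = np.array([3.0, 3.5, 4.0, 4.0, 3.0])            # mm
strength      = np.array([480.0, 495.0, 510.0, 500.0, 492.0])  # MPa (made up)

# Linear empirical model: strength ~ b0 + b1*current + b2*time + b3*upset
X = np.column_stack([np.ones_like(strength), upset_current, flash_time, upset_dim])
coeffs, *_ = np.linalg.lstsq(X, strength, rcond=None)
print("intercept and factor coefficients:", coeffs)
```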

Relevance: 60.00%

Abstract:

Atmospheric turbulence near the ground severely limits the quality of imagery acquired over long horizontal paths. In defense, surveillance, and border security applications, there is interest in deploying man-portable, embedded systems incorporating image reconstruction methods to compensate for turbulence effects. While many image reconstruction methods have been proposed, their suitability for use in man-portable embedded systems is uncertain. To be effective, these systems must operate over significant variations in turbulence conditions while also subject to variations arising from operation by novice users. Systems that meet these requirements and are otherwise designed to be immune to the factors that cause variation in performance are considered robust. In addition to robustness in design, the portable nature of these systems implies a preference for minimal computational complexity. Speckle imaging methods have recently been proposed as well suited for use in man-portable horizontal imagers. In this work, the robustness of speckle imaging methods is established by identifying a subset of design parameters that provide immunity to the expected variations in operating conditions while minimizing the computation time necessary for image recovery. Design parameters are selected by parametric evaluation of system performance as factors external to the system are varied. The precise control necessary for such an evaluation is made possible using image sets of turbulence-degraded imagery developed with a novel technique for simulating anisoplanatic image formation over long horizontal paths. System performance is statistically evaluated over multiple reconstructions using the Mean Squared Error (MSE) to assess reconstruction quality. In addition to the more general design parameters, the relative performance of the bispectrum and Knox-Thompson phase-recovery methods is compared. From this work it can be concluded that speckle-imaging techniques are robust to the variations in turbulence conditions and user-controlled parameters expected when operating during the day over long horizontal paths. Speckle imaging systems that incorporate 15 or more image frames and 4 estimates of the object phase per reconstruction provide up to a 45% reduction in MSE and a 68% reduction in its deviation. In addition, the Knox-Thompson phase-recovery method is shown to produce images in half the time required by the bispectrum, while the quality of images reconstructed using the two methods is nearly identical. Finally, it is shown that certain blind image quality metrics can be used in place of the MSE to evaluate quality in field scenarios. Using blind metrics rather than depending on user estimates allows reconstruction quality that differs from the minimum MSE by as little as 1%, significantly reducing the variation in performance due to user action.
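The MSE used here to score reconstructions is straightforward to compute; the sketch below is illustrative only, using random data in place of real turbulence-degraded frames.

```python
import numpy as np

def mse(reference, reconstructed):
    """Mean squared error between a reference image and a reconstruction,
    as used in the abstract to score speckle-imaging reconstructions."""
    ref = np.asarray(reference, dtype=float)
    rec = np.asarray(reconstructed, dtype=float)
    return float(np.mean((ref - rec) ** 2))

# Illustrative 8-bit images: random data stands in for real frames
rng = np.random.default_rng(0)
truth = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(truth + rng.normal(0, 5, size=truth.shape), 0, 255)
print(mse(truth, noisy))
```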

Relevance: 60.00%

Abstract:

Since the beginning of digital video coding, both the uncompressed video fed into the encoder and the uncompressed output of the decoder have, regardless of resolution, chroma subsampling scheme, etc., always used 8 bits to represent each sample. Likewise, video coding standards require encoders to work internally with 8 bits of precision when operating on samples that have not yet been transformed to the frequency domain. However, the H.264 standard, widely used today, allows video to be coded with more than 8 bits per sample in some of its professionally oriented profiles. When these profiles are used, operations on samples still in the spatial domain are carried out with the same precision as the input video. This increase in internal precision has the potential to allow more precise predictions, reducing the residual to be encoded and thus increasing coding efficiency for a given bitrate. The goal of this Project is to study, using the objective video quality metrics PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity), the effects on coding efficiency and performance of an H.264 10-bit coding/decoding chain compared with a traditional 8-bit chain. To achieve this goal, the open source x264 encoder is used, which can encode video with 8 and 10 bits per sample using the H.264 High, High 10, High 4:2:2 and High 4:4:4 Predictive profiles. Given that no suitable tools exist for computing PSNR and SSIM on video with more than 8 bits per sample and chroma subsampling schemes other than 4:2:0, an analysis application written in the C programming language is also developed as part of this Project. It computes both metrics from two uncompressed video files in YUV or Y4M format.
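The analysis application described is written in C; purely as an illustration of the underlying calculation, the Python sketch below shows how PSNR generalises to bit depths above 8 by taking the peak value as 2^bits - 1 (1023 for 10-bit samples). The data are random stand-ins for real YUV planes.

```python
import numpy as np

def psnr(reference, distorted, bit_depth=8):
    """PSNR in dB for integer video samples of a given bit depth.
    For 10-bit content the peak value is 2**10 - 1 = 1023."""
    ref = np.asarray(reference, dtype=float)
    dis = np.asarray(distorted, dtype=float)
    mse = np.mean((ref - dis) ** 2)
    if mse == 0:
        return float("inf")
    peak = 2 ** bit_depth - 1
    return 10.0 * np.log10(peak ** 2 / mse)

# Illustrative 10-bit luma planes (random data, not real video)
rng = np.random.default_rng(1)
orig = rng.integers(0, 1024, size=(16, 16))
deco = np.clip(orig + rng.normal(0, 2, size=orig.shape), 0, 1023)
print(psnr(orig, deco, bit_depth=10))
```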

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Systematic evaluation of Learning Objects is essential to make high-quality Web-based education possible. For this reason, several educational repositories and e-Learning systems have developed their own evaluation models and tools. However, the differences in the contexts in which Learning Objects are produced and consumed suggest that no single evaluation model is sufficient for all scenarios. Besides, not much effort has been put into developing open tools that facilitate Learning Object evaluation and use the quality information for the benefit of end users. This paper presents LOEP, an open source web platform that aims to facilitate Learning Object evaluation in different scenarios and educational settings by supporting and integrating several evaluation models and quality metrics. The work presented in this paper shows that LOEP is capable of providing Learning Object evaluation to e-Learning systems in an open, low-cost, reliable and effective way. Possible scenarios where LOEP could be used to implement quality control policies and to enhance search engines are also described. Finally, we report the results of a survey conducted among reviewers who used LOEP, showing that they perceived LOEP as a powerful and easy-to-use tool for evaluating Learning Objects.
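As a hypothetical sketch of the kind of pluggable evaluation-model interface such a platform could expose (the class, criterion and weight names below are illustrative and not taken from LOEP's code base):

```python
from abc import ABC, abstractmethod
from typing import Dict

class EvaluationModel(ABC):
    @abstractmethod
    def score(self, ratings: Dict[str, float]) -> float:
        """Aggregate per-criterion ratings into a single quality score."""

class WeightedAverageModel(EvaluationModel):
    """One possible model: a weighted average over evaluation criteria."""
    def __init__(self, weights: Dict[str, float]):
        self.weights = weights

    def score(self, ratings: Dict[str, float]) -> float:
        total_w = sum(self.weights.values())
        return sum(self.weights[c] * ratings.get(c, 0.0) for c in self.weights) / total_w

# Hypothetical criteria and reviewer ratings, for illustration only
model = WeightedAverageModel({"content_quality": 2.0, "usability": 1.0, "reusability": 1.0})
print(model.score({"content_quality": 4.5, "usability": 3.0, "reusability": 4.0}))
```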

Relevância:

60.00% 60.00%

Publicador:

Resumo:

LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber–Fechner law to encode the error between colour component predictions and the actual values of those components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels, and the error between the predictions and the actual values is then logarithmically quantised. The main advantage of LHE is that, although it achieves low bit-rate encoding with high-quality results in terms of peak signal-to-noise ratio (PSNR) and both full-reference (FSIM) and no-reference (blind/referenceless image spatial quality evaluator) image quality metrics, its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, in which the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bits-per-pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG and slightly lower quality than JPEG 2000, while being more computationally efficient.
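A toy sketch of the core idea described above, prediction from neighbouring pixels followed by logarithmic quantisation of the error, is given below. The real codec uses precomputed 'hop' tables and a more elaborate predictor, so this simplified quantiser is for illustration only.

```python
import numpy as np

def toy_lhe_encode(channel):
    """Toy sketch of LHE's core idea: predict each sample from already
    coded neighbours and logarithmically quantise the prediction error.
    The quantiser below is a simplified stand-in, not the codec's hop tables."""
    h, w = channel.shape
    codes = np.zeros((h, w), dtype=np.int8)
    recon = np.zeros((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            # Prediction from the reconstructed left and top neighbours
            left = recon[y, x - 1] if x > 0 else 128.0
            top = recon[y - 1, x] if y > 0 else 128.0
            pred = (left + top) / 2.0
            err = float(channel[y, x]) - pred
            # Logarithmic quantisation: code magnitude grows with log of |error|
            code = int(np.sign(err) * np.round(np.log2(1 + abs(err))))
            codes[y, x] = code
            # Reconstruct so later predictions use decoded values
            recon[y, x] = np.clip(pred + np.sign(code) * (2 ** abs(code) - 1), 0, 255)
    return codes, recon

img = np.full((4, 4), 120, dtype=np.uint8)
codes, recon = toy_lhe_encode(img)
print(codes)
```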

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Statistical machine translation (SMT) is an approach to Machine Translation (MT) that uses statistical models whose parameters are estimated from the analysis of existing human translations (contained in bilingual corpora). From a translation student's standpoint, this dissertation aims to explain how a phrase-based SMT system works, to determine the role of the statistical models it uses in the translation process, and to assess the quality of the translations it provides when trained with in-domain, good-quality corpora. To that end, a phrase-based SMT system based on Moses has been trained and subsequently used for the English-to-Spanish translation of two texts related in topic to the training data. Finally, the quality of the output texts produced by the system has been assessed through a quantitative evaluation carried out with three different automatic evaluation measures and a qualitative evaluation based on the Multidimensional Quality Metrics (MQM) framework.
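The abstract does not name the three automatic measures used, so as an assumed example only, the sketch below implements one widely used measure: sentence-level BLEU with clipped n-gram precisions and a brevity penalty.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU with clipped n-gram precisions and a brevity
    penalty; a common automatic MT measure, shown here only as an example."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams, ref_ngrams = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        # Simple smoothing so a zero count does not zero out the score
        log_precisions.append(math.log(max(overlap, 0.1) / total))
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

print(bleu("the cat sat on the mat", "the cat is on the mat"))
```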

Relevância:

60.00% 60.00%

Publicador:

Resumo:

There is growing interest in the use of context-awareness as a technique for developing pervasive computing applications that are flexible, adaptable, and capable of acting autonomously on behalf of users. However, context-awareness introduces a variety of software engineering challenges. In this paper, we address these challenges by proposing a set of conceptual models designed to support the software engineering process, including context modelling techniques, a preference model for representing context-dependent requirements, and two programming models. We also present a software infrastructure and software engineering process that can be used in conjunction with our models. Finally, we discuss a case study that demonstrates the strengths of our models and software engineering approach with respect to a set of software quality metrics.
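As a purely illustrative sketch of a context-dependent preference model (the structure and names below are hypothetical, not the paper's notation), each preference can pair a context condition with a scored choice, with the highest-scoring applicable preference winning.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

Context = Dict[str, object]

@dataclass
class Preference:
    condition: Callable[[Context], bool]  # when this preference applies
    choice: str                           # the behaviour it requests
    score: float                          # its relative strength

def select(preferences: List[Preference], context: Context) -> str:
    """Pick the choice of the strongest preference whose condition holds."""
    applicable = [p for p in preferences if p.condition(context)]
    return max(applicable, key=lambda p: p.score).choice if applicable else "default"

prefs = [
    Preference(lambda c: c.get("location") == "meeting_room", "silent", 0.9),
    Preference(lambda c: c.get("battery_low", False), "defer_sync", 0.7),
]
print(select(prefs, {"location": "meeting_room", "battery_low": True}))  # -> silent
```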

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Purpose – The purpose of this paper is to investigate how research and development (R&D) collaboration takes place for complex new products in the automotive sector. The research aims to give guidelines for increasing the effectiveness of such collaborations. Design/methodology/approach – The methodology used to investigate this issue was grounded theory. The empirical data were collected through a mixture of interviews and questionnaires, and the resulting inductively derived conceptual models were subsequently validated in industrial workshops. Findings – The findings show that frontloading of the collaborative members was a major issue in managing successful R&D collaborations. Research limitations/implications – This research is based only on the German automotive industry. Practical implications – Models and guidelines are given to help make a success of collaborative projects and to gauge their potential impacts on time, cost and quality metrics. Originality/value – Frontloading is not often studied in a collaborative setting; it is normally studied within a single organisation. This study is novel because it involved a number of different members throughout the supplier network.