992 results for Test sequence


Relevance:

30.00%

Publisher:

Abstract:

Background: A large number of probabilistic models used in sequence analysis assign non-zero probability values to most input sequences. The most common way to decide whether a given probability is sufficient is Bayesian binary classification, in which the probability of the model characterizing the sequence family of interest is compared to that of an alternative probability model. A null model can serve as this alternative; this is the scoring technique used by sequence analysis tools such as HMMER, SAM and INFERNAL. The most prevalent null models are position-independent residue distributions, including the uniform distribution, the genomic distribution, the family-specific distribution and the target sequence distribution. This paper presents a study evaluating the impact of the choice of null model on the final result of classification. In particular, we are interested in minimizing the number of false predictions in a classification, a crucial issue for reducing the costs of biological validation.

Results: In all tests using random sequences, the target null model produced the lowest number of false positives. The study was performed on DNA sequences using GC content as the measure of compositional bias, but the results should also be valid for protein sequences. To broaden the applicability of the results, the study was performed using randomly generated sequences. Previous studies were performed on amino acid sequences, using only one probabilistic model (HMM) and a specific benchmark, and lacked more general conclusions about the performance of null models. Finally, a benchmark test with P. falciparum confirmed these results.

Conclusions: Of the evaluated models, the best suited for classification are the uniform model and the target model. However, the uniform model exhibits a GC bias that can cause more false positives for candidate sequences with extreme compositional bias, a characteristic not described in previous studies. In these cases the target model is more dependable for biological validation due to its higher specificity.
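A minimal sketch of the scoring scheme described above may help: the log-odds score of a sequence is its log probability under the family model minus its log probability under a position-independent null model. The function names, the placeholder family score and the toy sequence are illustrative, not taken from any of the cited tools.

```python
import math

def null_log_prob(seq, residue_probs):
    """Log probability of seq under a position-independent null model."""
    return sum(math.log(residue_probs[r]) for r in seq)

def target_composition(seq):
    """Null model estimated from the target sequence's own composition."""
    return {r: seq.count(r) / len(seq) for r in set(seq)}

def log_odds(seq, family_log_prob, null_model="uniform", alphabet="ACGT"):
    """Log-odds score: log P(seq | family) - log P(seq | null)."""
    if null_model == "uniform":
        probs = {r: 1.0 / len(alphabet) for r in alphabet}
    elif null_model == "target":
        probs = target_composition(seq)
    else:
        raise ValueError(null_model)
    return family_log_prob - null_log_prob(seq, probs)

# family_log_prob would come from the family model (e.g., an HMM);
# the sequence is classified as a family member above some threshold.
seq = "ACGCGCGCGT"  # GC-rich toy candidate
print(log_odds(seq, family_log_prob=-12.0, null_model="uniform"))
print(log_odds(seq, family_log_prob=-12.0, null_model="target"))
```

Note how a GC-rich sequence raises its own probability under the target null model, lowering its score: this is the conservative behavior that makes the target model less prone to false positives for compositionally biased candidates.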

Relevance:

30.00%

Publisher:

Abstract:

Background: Identification of nontuberculous mycobacteria (NTM) based on phenotypic tests is time-consuming, labor-intensive and expensive, and often provides erroneous or inconclusive results. In the molecular method referred to as PRA-hsp65, a fragment of the hsp65 gene is amplified by PCR and then analyzed by restriction digest; this rapid approach offers the promise of accurate, cost-effective species identification. The aim of this study was to determine whether species identification of NTM using PRA-hsp65 is sufficiently reliable to serve as the routine methodology in a reference laboratory.

Results: A total of 434 NTM isolates were obtained from 5019 cultures submitted to the Institute Adolpho Lutz, Sao Paulo, Brazil, between January 2000 and January 2001. Species identification was performed for all isolates using conventional phenotypic methods and PRA-hsp65. For isolates for which these methods gave discordant results, definitive species identification was obtained by sequencing a 441 bp fragment of hsp65. Phenotypic evaluation and PRA-hsp65 were concordant for 321 (74%) isolates; these assignments were presumed to be correct. For the remaining 113 discordant isolates, definitive identification was based on sequencing the 441 bp hsp65 fragment. PRA-hsp65 identified 30 isolates with hsp65 alleles representing 13 previously unreported PRA-hsp65 patterns. Overall, species identification by PRA-hsp65 was significantly more accurate than by phenotypic methods (392 (90.3%) vs. 338 (77.9%), respectively; p < 0.0001, Fisher's test). Among the 333 isolates representing the most common pathogenic species, PRA-hsp65 provided an incorrect result for only 1.2%.

Conclusion: PRA-hsp65 is a rapid and highly reliable method and deserves consideration by any clinical microbiology laboratory charged with performing species identification of NTM.
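The headline comparison can be reproduced from the counts given in the abstract (392 vs. 338 correct identifications out of 434 isolates). A sketch using scipy's Fisher's exact test, treating the two methods as independent groups as the abstract's test appears to do:

```python
from scipy.stats import fisher_exact

# Correct vs. incorrect identifications out of 434 isolates (abstract counts).
pra_hsp65  = [392, 434 - 392]   # 90.3% correct
phenotypic = [338, 434 - 338]   # 77.9% correct

odds_ratio, p_value = fisher_exact([pra_hsp65, phenotypic])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.1e}")  # p < 0.0001
```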

Relevance:

30.00%

Publisher:

Abstract:

Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise determination of the hypocentral parameters is the first step in discriminating whether a given seismic event is natural or not. If a specific event is deemed suspicious by the majority of the State Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test: high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method have high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information, namely the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques we have used cross-correlation among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station (at the global scale) and on the similarity between the waveforms of the same event at two different sensors of the tripartite array (at the local scale). After preliminary tests of the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using data recorded by the IMS. At first, the algorithm was applied to the differences among the original arrival times of the P phases, without cross-correlation. We found that the considerable geometrical spreading, noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, taken as our reference), was substantially reduced by the application of our technique.
This is what we expected, since the methodology was applied to a sequence of events for which we can suppose a real closeness among the hypocenters, which belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times are removed or at least reduced. The introduction of cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the cross-correlation technique) are very similar to each other. This suggests that cross-correlation did not substantially improve the precision of the manual picks; probably the picks reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. A further explanation for the limited benefit of cross-correlation is that the events in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to cross-correlation, we performed a signal interpolation in order to improve the time resolution. The resulting algorithm was applied to data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly under bad SNR conditions). Another remarkable point of our procedure is that it does not require much time to process the data, so the user can immediately check the results. During a field survey, this feature makes a quasi real-time check possible, allowing immediate optimization of the array geometry if the early results suggest it.
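A minimal numpy sketch of the differential-time measurement that underlies both techniques: the lag that maximizes the cross-correlation between two waveforms, refined by parabolic interpolation around the peak as a simple stand-in for the signal interpolation mentioned above. The synthetic pulses and sampling rate are illustrative.

```python
import numpy as np

def cc_lag(x, y, dt):
    """Time shift of waveform y relative to x (positive = y arrives later).

    Parabolic interpolation around the cross-correlation peak gives
    subsample resolution, improving differential times for similar signals.
    """
    cc = np.correlate(y - y.mean(), x - x.mean(), mode="full")
    k = int(np.argmax(cc))
    if 0 < k < len(cc) - 1:  # refine peak with its two neighbours
        c0, c1, c2 = cc[k - 1], cc[k], cc[k + 1]
        k += 0.5 * (c0 - c2) / (c0 - 2.0 * c1 + c2)
    return (k - (len(x) - 1)) * dt

# Synthetic check: y is x delayed by 0.03 s at 100 Hz sampling.
t = np.arange(0.0, 2.0, 0.01)
x = np.exp(-((t - 0.50) ** 2) / 0.002)
y = np.exp(-((t - 0.53) ** 2) / 0.002)
print(cc_lag(x, y, dt=0.01))  # ~ +0.03
```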

Relevance:

30.00%

Publisher:

Abstract:

Over the last decade, the end-state comfort effect (e.g., Rosenbaum et al., 2006) has received a considerable amount of attention. However, some of the underlying mechanisms remain to be investigated, among them how sequential planning affects end-state comfort and how this effect develops over learning. In a two-step sequencing task, for example, postural comfort can be planned for the intermediate position (next state) or for the actual end position (final state). It might be hypothesized that, in initial acquisition, the next state's comfort is crucial for action planning but that, in the course of learning, the final state's comfort is taken more and more into account. To test this hypothesis, a variant of Rosenbaum's vertical stick transportation task was used. Participants (N = 16, right-handed) received extensive practice on a two-step transportation task (10,000 trials over 12 sessions). From the initial position on the middle stair of a staircase in front of the participant, the stick had to be transported either 20 cm upwards and then 40 cm downwards, or 20 cm downwards and then 40 cm upwards (N = 8 per subgroup). Participants were instructed to produce fluid movements without changing grasp. In the pre- and posttest, participants were tested on both two-step sequencing tasks as well as on 20 cm single-step upwards and downwards movements (10 trials per condition). For the test trials, grasp height was measured kinematographically. In the pretest, large end/next/final-state comfort effects were found for single-step transportation tasks, and large next-state comfort effects for sequenced tasks. However, no change in grasp height from pre- to posttest was found. The results show that, in vertical stick transportation sequences, the final state is not taken into account when planning grasp height. Instead, action planning seems to be based solely on aspects of the next action goal to be reached.

Relevance:

30.00%

Publisher:

Abstract:

Typically, statistical learning is investigated by testing the acquisition of specific items or the formation of general rules. As implicit sequence learning also involves the extraction of regularities from the environment, it can be considered an instance of statistical learning. In the present study, a Serial Reaction Time Task was used to test whether the continuous versus interleaved repetition of a sequence affects implicit learning despite equal exposure to the sequences. The results revealed a sequence learning advantage for the continuous repetition condition compared to the interleaved condition. This suggests that, through repetition, additional sequence information was extracted even though the exposure to the sequences was identical to that in the interleaved condition. The results are discussed in terms of similarities and potential differences between typical statistical learning paradigms and sequence learning.

Relevance:

30.00%

Publisher:

Abstract:

Background: The copy number variation (CNV) in beta-defensin genes (DEFB) on human chromosome 8p23 has been proposed to contribute to the phenotypic differences in inflammatory diseases. However, determination of the exact DEFB copy number (CN) is a major challenge in association studies. Quantitative real-time PCR (qPCR), paralog ratio tests (PRT) and multiplex ligation-dependent probe amplification (MLPA) have been extensively used to determine DEFB CN in different laboratories, but inter-method inconsistencies were observed frequently. In this study we asked which of the three methods is superior for DEFB CN determination.

Results: We developed a clustering approach for MLPA and PRT to statistically correlate data from a single experiment. We then compared qPCR, a newly designed PRT and MLPA for DEFB CN determination in 285 DNA samples. We found that MLPA had the best convergence and clustering of the raw data and the highest call rate. In addition, the concordance rates between MLPA or PRT and qPCR (32.12% and 37.99%, respectively) were unacceptably low, with qPCR underestimating CN. The concordance rate between MLPA and PRT (90.52%) was high, but PRT systematically underestimated CN by one in a subset of samples. In these samples, a sequence variant causing complete PCR dropout of the respective DEFB cluster copies was found in one primer binding site of one of the targeted paralogous pseudogenes.

Conclusion: MLPA is superior to PRT and even more so to qPCR for DEFB CN determination. Although the applied PRT provides reliable results in most cases, such a test is particularly sensitive to low-frequency sequence variants, which preferentially accumulate in loci such as pseudogenes that are most likely not under selective pressure. In the light of the superior performance of multiplex assays, the drawbacks of such single PRTs could be overcome by combining more test markers.
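A minimal sketch of the kind of copy-number calling that such methods require: continuous normalized ratios are scaled to copy-number space and grouped around integer states, with the distance to the nearest state usable as a call-quality measure. The rounding-based grouping and the sample values are illustrative, not the authors' exact clustering procedure.

```python
import numpy as np

def call_copy_numbers(ratios, reference_cn=2):
    """Assign integer copy numbers to normalized intensity ratios.

    ratios: per-sample signal relative to a diploid (CN = 2) reference.
    Returns integer calls and each sample's distance to its called state,
    which can be used to flag ambiguous samples as no-calls.
    """
    cn_estimates = np.asarray(ratios, dtype=float) * reference_cn
    calls = np.rint(cn_estimates).astype(int)
    residuals = np.abs(cn_estimates - calls)
    return calls, residuals

# Illustrative ratios for a multiallelic CNV locus such as DEFB.
ratios = [1.02, 1.48, 2.10, 1.95, 2.55, 0.97]
calls, res = call_copy_numbers(ratios)
no_call = res > 0.25  # flag samples too far from any integer state
print(calls, no_call)
```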

Relevance:

30.00%

Publisher:

Abstract:

Background: Simple Sequence Repeats (SSRs) are widely used in population genetic studies, but their classical development is costly and time-consuming. The ever-increasing DNA datasets generated by high-throughput techniques offer an inexpensive alternative for SSR discovery. Expressed Sequence Tags (ESTs) have been widely used as an SSR source for plants of economic relevance, but their application to non-model species is still modest.

Methods: Here, we explored the use of publicly available ESTs (GenBank at the National Center for Biotechnology Information, NCBI) for SSR development in non-model plants, focusing on genera listed by the International Union for the Conservation of Nature (IUCN). We also searched two model genera with fully annotated genomes, Arabidopsis and Oryza, for EST-SSRs and used them as controls for genome distribution analyses. Overall, we downloaded 16 031 555 sequences for 258 plant genera, which were mined for SSRs and their primers with the help of QDD1. Genome distribution analyses in Oryza and Arabidopsis were done by BLASTing the SSR-containing sequences against the Oryza sativa and Arabidopsis thaliana reference genomes using the Basic Local Alignment Search Tool (BLAST) on the NCBI website. Finally, we performed an empirical test to determine the performance of our EST-SSRs in a few individuals from four species of two eudicot genera, Trifolium and Centaurea.

Results: We explored a total of 14 498 726 EST sequences from the dbEST database (NCBI) in 257 plant genera from the IUCN Red List. We identified a very large number (17 102) of ready-to-test EST-SSRs in most plant genera (193) at no cost. Overall, dinucleotide and trinucleotide repeats were the prevalent types, but the abundance of the various repeat types differed between taxonomic groups. The control genomes revealed that trinucleotide repeats were mostly located in coding regions, while dinucleotide repeats were largely associated with untranslated regions. The results of the empirical test revealed considerable amplification success and transferability between congeneric species.

Conclusions: The present work represents the first large-scale study developing SSRs from publicly accessible EST databases in threatened plants. Here we provide a very large number of ready-to-test EST-SSRs (17 102) for 193 genera. The cross-species transferability suggests that the number of possible target species is large. Since trinucleotide repeats are abundant and mainly linked to exons, they might be useful in evolutionary and conservation studies. Altogether, our study strongly supports the use of EST databases as an extremely affordable and fast alternative for SSR development in threatened plants.
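The core mining step, performed in the study with QDD, can be illustrated with a short regex scan for perfect di- and trinucleotide repeats; the repeat-count thresholds and the toy EST are illustrative.

```python
import re

# Perfect SSRs: dinucleotide motifs repeated >= 6 times,
# trinucleotide motifs repeated >= 5 times (illustrative thresholds).
SSR_PATTERNS = [
    (re.compile(r"([ACGT]{2})\1{5,}"), "di"),
    (re.compile(r"([ACGT]{3})\1{4,}"), "tri"),
]

def find_ssrs(seq):
    """Return (kind, motif, start, end, repeat) tuples for perfect SSRs."""
    hits = []
    for pattern, kind in SSR_PATTERNS:
        for m in pattern.finditer(seq.upper()):
            hits.append((kind, m.group(1), m.start(), m.end(), m.group(0)))
    return hits

est = "TTGCACACACACACACACGGATGATGATGATGATGCCT"
for kind, motif, start, end, repeat in find_ssrs(est):
    n = len(repeat) // len(motif)
    print(f"{kind}-repeat ({motif})x{n} at {start}-{end}")
```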

Relevance:

30.00%

Publisher:

Abstract:

While sequence learning research models complex phenomena, previous studies have mostly focused on unimodal sequences. The goal of the current experiment is to put implicit sequence learning into a multimodal context: to test whether it can operate across different modalities. We used the Task Sequence Learning paradigm to test whether sequence learning varies across modalities, and whether participants are able to learn multimodal sequences. Our results show that implicit sequence learning is very similar regardless of the source modality. However, the presence of correlated task and response sequences was required for learning to take place. The experiment provides new evidence for implicit sequence learning of abstract conceptual representations. In general, the results suggest that correlated sequences are necessary for implicit sequence learning to occur. Moreover, they show that elements from different modalities can be automatically integrated into one unitary multimodal sequence.

Relevance:

30.00%

Publisher:

Abstract:

The Schwalbenberg II loess-paleosol sequence (LPS) is a key site for Marine Isotope Stage 3 (MIS 3) in western Europe owing to eight successive cambisols, which primarily constitute the Ahrgau Subformation. This LPS therefore qualifies as a test candidate for the potential of high-temporal-resolution geochemical data obtained by X-ray fluorescence (XRF) scanning of discrete samples, a fast and non-destructive tool for determining element composition. The geochemical data are first contextualized with existing proxy data such as magnetic susceptibility (MS) and organic carbon (Corg) and then aggregated into element log ratios characteristic of weathering intensity [LOG (Ca/Sr), LOG (Rb/Sr), LOG (Ba/Sr), LOG (Rb/K)] and dust provenance [LOG (Ti/Zr), LOG (Ti/Al), LOG (Si/Al)]. In general, the interpretation of rock-magnetic particles is challenging in western Europe, where not only magnetic enhancement but also depletion plays a role. Our data indicate MS depletion induced by leaching and topsoil erosion at the Schwalbenberg II LPS. Besides weathering, LOG (Ca/Sr) is sensitive to secondary calcification, and LOG (Rb/Sr) and LOG (Ba/Sr) are likewise shown to be influenced by calcification dynamics. Consequently, LOG (Rb/K) appears to be the most suitable weathering index, identifying the Sinzig Soils S1 and S2 as the most pronounced paleosols at this site. Sinzig Soil S3 is enclosed by gelic gleysols and, in contrast to S1 and S2, is only initially weathered, pointing to colder climatic conditions. The Remagen Soils are also characterized by subtle to moderate positive excursions in the weathering indices. Comparing the Schwalbenberg II LPS with the nearby Eifel Lake Sediment Archive (ELSA) and other, more distant German, Austrian and Czech LPS, and discussing time and climate as limiting factors for pedogenesis, we suggest that the lithologically determined paleosols are in-situ soil formations. The provenance indices document a Zr enrichment at the transition from the Ahrgau to the Hesbaye Subformation, which is explained by a conceptual model incorporating multiple sediment recycling and sorting effects in eolian and fluvial domains.
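A minimal pandas sketch of the proxy aggregation described above: discrete-sample XRF intensities are turned into base-10 element log ratios such as LOG (Rb/Sr) and LOG (Rb/K). The column names and count values are illustrative.

```python
import numpy as np
import pandas as pd

# Illustrative XRF intensities per discrete sample (depth in cm).
xrf = pd.DataFrame(
    {"depth": [10, 20, 30],
     "Rb": [95, 120, 80], "Sr": [150, 110, 170],
     "K": [12000, 15000, 9000], "Ca": [30000, 18000, 41000]}
)

def log_ratio(df, a, b):
    """Base-10 log of an element ratio, a common weathering proxy."""
    return np.log10(df[a] / df[b])

xrf["LOG(Rb/Sr)"] = log_ratio(xrf, "Rb", "Sr")  # weathering intensity
xrf["LOG(Rb/K)"] = log_ratio(xrf, "Rb", "K")    # less affected by calcification
xrf["LOG(Ca/Sr)"] = log_ratio(xrf, "Ca", "Sr")  # sensitive to secondary carbonate
print(xrf.round(3))
```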

Relevance:

30.00%

Publisher:

Abstract:

Current 3DTV broadcasting uses formats such as Side-by-Side or Top-and-Bottom, in which each pair of images, corresponding to the right- and left-eye views, is encapsulated at half the spatial resolution in a single image. These images are displayed almost simultaneously, so that the human eye composes an image with depth resembling natural binocular vision. Over the past couple of years the main television platforms have begun to create channels with 3D content. 3D television (3DTV) has entered homes thanks to stereoscopic television sets. These sets, which are compatible with the above formats, extract the two views from each image, restore the original resolution and present each view alternately on the screen, while generating a synchronization signal for the active glasses, thereby creating the three-dimensional sensation of the images. This final-year project (PFC) presents the VHDL design of a format converter that generates, in real time, the sequence of full-resolution images corresponding to the right and left eyes from a sequence encoded in a Top-and-Bottom format, together with the test bench for verifying it. The circuit is implemented as a peripheral of the Altera NIOS II processor. The design could be used as part of a system allowing current 3D television broadcasts to be viewed on a conventional television. The reference technology is FPGAs, specifically Altera's Cyclone III FPGA Starter Kit (EP3C25 FPGA), together with a Microtronix expansion card with HDMI input and output for video and audio. The project also aims to produce the documentation needed for future work related to 3D television.
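A minimal numpy sketch, in software for clarity, of the conversion the VHDL peripheral performs: a Top-and-Bottom frame is split into its two half-resolution views and each view is returned to full vertical resolution, here by simple line doubling; a hardware design might use an interpolation filter instead.

```python
import numpy as np

def tab_to_sequential(frame):
    """Split a Top-and-Bottom frame into full-resolution left/right views.

    frame: (H, W) or (H, W, 3) array; top half = left eye, bottom = right.
    Vertical resolution is restored by repeating each line (nearest
    neighbour), mimicking the simplest hardware line-repeat scheme.
    """
    h = frame.shape[0] // 2
    left_half, right_half = frame[:h], frame[h:]
    left = np.repeat(left_half, 2, axis=0)
    right = np.repeat(right_half, 2, axis=0)
    return left, right

# A 1080p Top-and-Bottom frame becomes two 1080p views shown alternately.
tab = np.zeros((1080, 1920, 3), dtype=np.uint8)
left_view, right_view = tab_to_sequential(tab)
print(left_view.shape, right_view.shape)  # (1080, 1920, 3) twice
```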

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis is the subjective and objective evaluation of angle-dependent absorption coefficients. As the assumption of a constant absorption coefficient over the angle of incidence does not always hold, a new model acknowledging angle-dependent reflection must be considered to obtain a more accurate prediction of the sound field. The study provides information about the behavior of different materials in several rooms, depending on how the reflection of incident sound waves is modeled.
Because of the difficulties involved in measuring them and the consequent lack of data, angle-dependent absorption coefficients are often not taken into account in simulations, and there is as yet no established practice of applying them to improve reflection models. Moreover, only a few methods exist for satisfactorily measuring angle-dependent absorption; current measurement techniques are time-consuming, and some materials, conditions and angles cannot be reproduced, so their measurement is not possible. In the present study, however, the angles of incidence of the sound waves are known, and the absorption coefficients of each material are stored in a database so that the coefficient for a given angle can be retrieved whenever the user requires it.
An objective evaluation was run for an implementation of angle-dependent reflection factors in the image source and ray tracing simulation models, and the results were analysed after comparison to diffuse-field averaged data. The simulation was carried out after configuring a set of materials created by the author from data in the literature and manufacturers' catalogues. The Komatsu and Mechel models served as references for the porous materials, configured through airflow resistivity or thickness, and for the perforated panels, configured through hole radius and center-to-center distance, respectively. These materials were placed on the wall opposite the one assumed to hold the sound source; the remaining surfaces were modeled with the same material, varying its absorption and/or scattering coefficient. A series of rooms was also modeled to reproduce the different scenarios from which results were obtained.
However, changes in the acoustic characteristics of a room do not always imply a variation in the listener's perception. An additional subjective evaluation therefore allowed a comparison between the different results obtained with the computer simulation and the responses of the individuals who participated in the listening test. The listening test was designed following a three-alternative forced-choice (3AFC) paradigm, with thirty-two different questions. In each trial, subjects heard an alternating sequence of three signals, two of which were identical; the signals were either pink noise bursts or natural signals, in this case an excerpt from a classical piece played on a piano. The question blocks were randomized before each session, and the mix was different for each trial so that subjects never repeated the same test, avoiding learning-effect bias; the blocks were shuffled while keeping a record of the initial order, so that the results could be reordered and stored afterwards. The listening test was taken by twenty-three people, all of them with a background in acoustics, and each received an instruction sheet before taking the test in a suitable environment.
These results were intended to show the influence and perception of the two different ways of implementing surface reflection, either with diffuse or with angle-dependent absorption properties. The objective results of the simulations provide averaged data for understanding the behavior of different materials according to the reflection model used in the case study; the tables provided in the thesis report the values of reverberation time, clarity and early decay time, and the room characteristics obtained in this analysis depend strongly on the absorption coefficients of the materials covering the room surfaces. In the subjective results, the subjects' mean perception when distinguishing the signals fell significantly below the threshold marked by the inflection point of the psychometric function; nevertheless, most individuals tended to be able to detect some difference between the stimuli presented in the 3AFC test. In conclusion, the hypothesis that angle-dependent absorption coefficient values differ is confirmed, but the subjective responses show only slight variations in perception when the coefficient varies within small intervals between the values used in the simulation, and no perceived variation at all when the parameters of the acoustic materials are not exaggerated: effects were only slightly audible when material properties were exaggerated.
These first results on angle dependence open new considerations in the field of acoustics and for future projects. Future lines of research should run simulations with different types of rooms, seeking scenarios with irregular geometries, and implement further materials to obtain more accurate results; another phase could take into account a scattering coefficient that depends on the angle of incidence of the sound wave. On the subjective side, further listening tests should be conducted with different individuals, including people without a background in acoustic engineering.
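A minimal sketch of the core modification such simulations require: instead of a single diffuse-field absorption coefficient, each reflection in an image-source or ray-tracing step looks up (and interpolates) the absorption for the actual angle of incidence from a per-material table. The table values are illustrative.

```python
import numpy as np

# Illustrative absorption table for one material: angle (deg) -> alpha.
ANGLES = np.array([0.0, 30.0, 60.0, 85.0])
ALPHAS = np.array([0.20, 0.24, 0.38, 0.70])

def reflected_energy(energy, angle_deg, angle_dependent=True):
    """Energy remaining after one wall reflection in a ray-tracing step."""
    if angle_dependent:
        alpha = np.interp(angle_deg, ANGLES, ALPHAS)
    else:
        alpha = ALPHAS.mean()  # crude diffuse-field stand-in
    return energy * (1.0 - alpha)

# A grazing ray loses far more energy than the diffuse average suggests.
print(reflected_energy(1.0, 80.0))         # angle-dependent model
print(reflected_energy(1.0, 80.0, False))  # diffuse-field model
```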

Relevance:

30.00%

Publisher:

Abstract:

Assessing video quality is a complex task. Most pixel-based metrics do not show sufficient correlation between objective and subjective results, yet algorithms need to correspond to human perception when analyzing quality in a video sequence. To analyze the perceived quality caused by concrete video artifacts in determined regions of interest, we present a novel methodology for generating test sequences that allows the impact of each individual distortion to be analyzed. From the results obtained after subjective assessment, it is possible to create psychovisual models based on weighting pixels belonging to different regions of interest, distributed by color, position, motion or content. Interesting results obtained in the subjective assessment demonstrate the need for new metrics adapted to the human visual system.
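A minimal sketch of the pixel-weighting idea described above: a PSNR-like measure in which squared errors are weighted by a per-pixel region-of-interest map, as a psychovisual model fitted to subjective scores would prescribe. The center-weighted map and frame sizes are illustrative.

```python
import numpy as np

def weighted_psnr(ref, test, weights, peak=255.0):
    """PSNR with per-pixel weights emphasizing regions of interest.

    ref, test: (H, W) grayscale frames; weights: (H, W), where higher
    values make distortions in that region count more toward the score.
    """
    err = (ref.astype(float) - test.astype(float)) ** 2
    wmse = np.sum(weights * err) / np.sum(weights)
    return 10.0 * np.log10(peak ** 2 / wmse) if wmse > 0 else np.inf

# Example: distortions in the centre of the frame weighted 3x.
h, w = 144, 176
weights = np.ones((h, w))
weights[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4] = 3.0
ref = np.random.randint(0, 256, (h, w))
test = np.clip(ref + np.random.randint(-2, 3, (h, w)), 0, 255)
print(weighted_psnr(ref, test, weights))
```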