Abstract:
Background/significance. The scarcity of reliable and valid Spanish-language instruments for health-related research has hindered research with the Hispanic population. Research suggests that fatalistic attitudes are related to poor cancer screening behaviors and may be one reason for the low participation of Mexican-Americans in cancer screening. This problem is of major concern because Mexican-Americans constitute the largest Hispanic subgroup in the U.S. Purpose. The purposes of this study were: (1) to translate the Powe Fatalism Inventory (PFI) into Spanish and culturally adapt the instrument to the Mexican-American culture as found along the U.S.-Mexico border, and (2) to test the equivalence between the Spanish-translated, culturally adapted version of the PFI (SPFI) and the English version of the PFI with respect to clarity, content validity, reading level, and reliability. Design. Descriptive, cross-sectional. Methods. The Spanish-language translation used a translation model that incorporates a cultural adaptation process. The SPFI was administered to 175 bilingual participants residing in a midsize U.S.-Mexico border city. Data analysis included estimation of Cronbach's alpha, factor analysis, paired-samples t-test comparison, and multiple regression analysis using SPSS software, as well as measurement of the content validity and reading level of the SPFI. Findings. The reliability estimate using Cronbach's alpha coefficient was 0.81 for the SPFI compared to 0.80 for the PFI in this study. Factor analysis extracted four factors which explained 59% of the variance. Paired t-test comparison revealed no statistically significant differences between the SPFI and PFI total or individual item scores. The Content Validity Index was 1.0. The reading level was assessed at below sixth grade. The correlation coefficient between the SPFI and PFI was 0.95. Conclusions.
This study provided strong psychometric evidence that the Spanish-translated, culturally adapted SPFI is equivalent to the English version of the PFI in measuring cancer fatalism. This indicates that the two forms of the instrument can be used interchangeably in a single study to accommodate the reading and speaking abilities of respondents.
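The two headline statistics here (alpha of 0.81 vs. 0.80, and a correlation of 0.95 between the two forms) are standard computations. A minimal sketch (not the study's SPSS procedure), assuming a respondents-by-items score matrix and total scores on each form:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

def pearson_r(x, y):
    """Pearson correlation between total scores on two instrument forms."""
    return float(np.corrcoef(x, y)[0, 1])
```

With perfectly consistent items, alpha is 1.0; real item matrices land below that, as in the 0.81 reported here.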
Abstract:
In recent years, disaster preparedness through assessment of medical and special needs persons (MSNP) has taken center stage in the public eye, owing to frequent natural disasters such as hurricanes, storm surges, and tsunamis driven by climate change and increased human activity on our planet. Statistical methods for complex survey design and analysis have gained significance as a consequence. Many challenges remain, however, in inferring such assessments over the target population for policy-level advocacy and implementation. Objective. This study discusses the use of statistical methods for disaster preparedness and medical needs assessment to support local and state governments in policy-level decision making and logistic support, so as to avoid loss of life and property in future calamities. Methods. In order to obtain precise and unbiased estimates of medical special needs persons (MSNP) and disaster preparedness for evacuation in the Rio Grande Valley (RGV) of Texas, a stratified, cluster-randomized, multi-stage sampling design was implemented. The US School of Public Health, Brownsville surveyed 3088 households in three counties: Cameron, Hidalgo, and Willacy. Multiple statistical methods were implemented, and estimates were obtained taking into account the probability of selection and clustering effects. The statistical methods discussed for data analysis were Multivariate Linear Regression (MLR), Survey Linear Regression (Svy-Reg), Generalized Estimating Equations (GEE), and Multilevel Mixed Models (MLM), all with and without sampling weights. Results. The estimated population of the RGV was 1,146,796: 51.5% female, 90% Hispanic, 73% married, 56% unemployed, and 37% with personal transport. 40% of people had at most an elementary school education, another 42% reached high school, and only 18% attended college. Median household income was less than $15,000/year. MSNP were estimated at 44,196 (3.98%) [95% CI: 39,029; 51,123].
All statistical models are in concordance, with MSNP estimates ranging from about 44,000 to 48,000. MSNP estimates by method were: MLR (47,707; 95% CI: 42,462; 52,999), MLR with weights (45,882; 95% CI: 39,792; 51,972), Bootstrap Regression (47,730; 95% CI: 41,629; 53,785), GEE (47,649; 95% CI: 41,629; 53,670), GEE with weights (45,076; 95% CI: 39,029; 51,123), Svy-Reg (44,196; 95% CI: 40,004; 48,390) and MLM (46,513; 95% CI: 39,869; 53,157). Conclusion. The RGV is a flood zone, highly susceptible to hurricanes and other natural disasters. People in the region are mostly Hispanic and under-educated, with among the lowest income levels in the U.S. In a disaster, the population at large would be largely incapacitated, with only 37% having personal transport available to take care of MSNP. Local and state government intervention in planning, preparation, and support for evacuation is necessary in any such disaster to avoid the loss of precious human life. Key words: complex surveys, statistical methods, multilevel models, cluster randomized, sampling weights, raking, survey regression, generalized estimating equations (GEE), random effects, intracluster correlation coefficient (ICC).
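The design-weighted estimates above rest on the idea that each sampled household stands in for `weight` households in the population. A minimal sketch of that idea only (illustrative; the study's models additionally handle stratification, clustering, and variance estimation):

```python
import numpy as np

def weighted_total(indicator, weights):
    """Design-weighted (Horvitz-Thompson style) estimate of a population
    total: each sampled unit represents `weight` population units."""
    indicator = np.asarray(indicator, float)
    weights = np.asarray(weights, float)
    return float((indicator * weights).sum())

def weighted_proportion(indicator, weights):
    """Design-weighted estimate of a population proportion."""
    indicator = np.asarray(indicator, float)
    weights = np.asarray(weights, float)
    return float((indicator * weights).sum() / weights.sum())
```

Applied to an MSNP indicator per surveyed household, `weighted_total` gives the kind of population count reported above, and `weighted_proportion` the corresponding rate.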
Abstract:
The main objective of this study was to develop indicators for measuring the food safety status of a country. A conceptual model was put forth by the investigator, under the assumption that food safety status is multifactorially influenced by medico-health levels, food-nutrition programs, and consumer protection activities, all of which in turn depend upon the socio-economic status of the country. Twenty-six indicators were reviewed and examined; seventeen were first screened, and three were finally selected by stepwise multiple regression analysis to reflect food safety status. Sixty-one countries/areas were included in this study. The three indicators were life expectancy at birth (R2 = 34.62%), adult literacy rate (R2 = 29.66%), and child mortality rate for ages 1-4 (R2 = 9.99%); together they showed a cumulative R2 of 57.79%.
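Stepwise selection by incremental R2, as used above, can be sketched as a greedy loop. This is a simplified illustration (greedy forward selection on R2 gain, with hypothetical variable names), not the study's exact stepwise procedure:

```python
import numpy as np

def r_squared(columns, y):
    """R^2 of an OLS fit of y on the given predictor columns (with intercept)."""
    y = np.asarray(y, float)
    X = np.column_stack([np.ones(len(y))] + [np.asarray(c, float) for c in columns])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

def forward_select(candidates, y):
    """Greedy forward selection: repeatedly add the candidate column giving
    the largest R^2 gain, stopping when no gain remains."""
    chosen, order, best = [], [], 0.0
    remaining = dict(candidates)
    while remaining:
        name = max(remaining, key=lambda n: r_squared(chosen + [remaining[n]], y))
        gain = r_squared(chosen + [remaining[name]], y) - best
        if gain <= 1e-12:
            break
        chosen.append(remaining.pop(name))
        best += gain
        order.append((name, round(gain, 4)))
    return order
```

The returned list pairs each selected indicator with its incremental R2 contribution, mirroring the per-indicator percentages reported above.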
Abstract:
Radiomics is the high-throughput extraction and analysis of quantitative image features. For non-small cell lung cancer (NSCLC) patients, radiomics can be applied to standard-of-care computed tomography (CT) images to improve tumor diagnosis, staging, and response assessment. The first objective of this work was to show that CT image features extracted from pre-treatment NSCLC tumors could be used to predict tumor shrinkage in response to therapy. This is important since tumor shrinkage is a key cancer treatment endpoint that is correlated with probability of disease progression and overall survival; accurate prediction of tumor shrinkage could also lead to individually customized treatment plans. To accomplish this objective, 64 NSCLC patients with similar treatments were all imaged using the same CT scanner and protocol. Quantitative image features were extracted, and principal component regression with simulated annealing subset selection was used to predict shrinkage. Cross validation and permutation tests were used to validate the results. The optimal model gave a strong correlation between the observed and predicted shrinkages. The second objective of this work was to identify sets of NSCLC CT image features that are reproducible, non-redundant, and informative across multiple machines. Feature sets with these qualities are needed for NSCLC radiomics models to be robust to machine variation and spurious correlation. To accomplish this objective, test-retest CT image pairs were obtained from 56 NSCLC patients imaged on three CT machines from two institutions. For each machine, quantitative image features with concordance correlation coefficient values greater than 0.90 were considered reproducible. Multi-machine reproducible feature sets were created by taking the intersection of the individual machine reproducible feature sets. Redundant features were removed through hierarchical clustering.
The findings showed that image feature reproducibility and redundancy depended on both the CT machine and the CT image type (average cine 4D-CT imaging vs. end-exhale cine 4D-CT imaging vs. helical inspiratory breath-hold 3D CT). For each image type, a set of cross-machine reproducible, non-redundant, and informative image features was identified. Compared to end-exhale 4D-CT and breath-hold 3D-CT, average 4D-CT derived image features showed superior multi-machine reproducibility and are the best candidates for clinical correlation.
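The reproducibility screen described above uses Lin's concordance correlation coefficient (CCC) with a 0.90 cutoff. A minimal sketch, assuming per-feature arrays of test and retest values:

```python
import numpy as np

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between test and retest
    values (1.0 = perfect agreement, penalizes both scatter and bias)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))   # population covariance
    sx2, sy2 = x.var(), y.var()                      # population variances
    return 2 * sxy / (sx2 + sy2 + (x.mean() - y.mean()) ** 2)

def reproducible_features(test, retest, threshold=0.90):
    """Names of features whose test-retest CCC exceeds the threshold."""
    return [name for name in test
            if concordance_cc(test[name], retest[name]) > threshold]
```

Intersecting the lists returned per machine yields the multi-machine reproducible sets described above; unlike Pearson's r, the CCC drops sharply when a systematic offset separates test and retest values.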
Abstract:
A limiting factor in the accuracy and precision of U/Pb zircon dates is accurate correction for initial disequilibrium in the 238U and 235U decay chains. The longest-lived, and therefore most abundant, intermediate daughter product in the 235U decay chain is 231Pa (T1/2 = 32.71 ka), and the partitioning behavior of Pa in zircon is not well constrained. Here we report high-precision thermal ionization mass spectrometry (TIMS) U-Pb zircon data from two samples from Ocean Drilling Program (ODP) Hole 735B, which show evidence for incorporation of excess 231Pa during zircon crystallization. The most precise analyses from the two samples have consistent Th-corrected 206Pb/238U dates, with weighted means of 11.9325 ± 0.0039 Ma (n = 9) and 11.920 ± 0.011 Ma (n = 4), but distinctly older 207Pb/235U dates that vary from 12.330 ± 0.048 Ma to 12.140 ± 0.044 Ma and from 12.03 ± 0.24 to 12.40 ± 0.27 Ma, respectively. If the excess 207Pb is due to variable initial excess 231Pa, calculated initial (231Pa)/(235U) activity ratios for the two samples range from 5.6 ± 1.0 to 9.6 ± 1.1 and from 3.5 ± 5.2 to 11.4 ± 5.8. The data from the more precisely dated sample yield estimated DPa(zircon)/DU(zircon) of 2.2-3.8 and 5.6-9.6, assuming a (231Pa)/(235U) of the melt equal to the global average of recently erupted mid-ocean ridge basaltic glasses or to secular equilibrium, respectively. High-precision ID-TIMS analyses from nine additional samples from Hole 735B and nearby Hole 1105A suggest similar partitioning. The lower range of DPa(zircon)/DU(zircon) is consistent with ion microprobe measurements of 231Pa in zircons from Holocene and Pleistocene rhyolitic eruptions (Schmitt, 2007, doi:10.2138/am.2007.2449; Schmitt, 2011, doi:10.1146/annurev-earth-040610-133330). The data suggest that 231Pa is preferentially incorporated during zircon crystallization over a range of magmatic compositions, and excess initial 231Pa may be more common in zircons than acknowledged.
The degree of initial disequilibrium in the 235U decay chain suggested by the data from this study, and other recent high precision datasets, leads to resolvable discordance in high precision dates of Cenozoic to Mesozoic zircons. Minor discordance in zircons of this age may therefore reflect initial excess 231Pa and does not require either inheritance or Pb loss.
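To first order, excess initial 231Pa with activity ratio f = (231Pa)/(235U) adds (f - 1) mean lives of apparent age to the 207Pb/235U date, since in Ma-old zircons all the excess has long since decayed to 207Pb. A back-of-envelope sketch of that relation (ignoring uncertainties and higher-order decay-chain terms), checked against the first sample's reported dates:

```python
import math

T_HALF_PA231_KA = 32.71                   # 231Pa half-life, ka
TAU_KA = T_HALF_PA231_KA / math.log(2)    # 231Pa mean life, ~47.2 ka

def implied_pa_activity_ratio(date_207_235_ma, date_206_238_ma):
    """First-order initial (231Pa)/(235U) activity ratio implied by the
    offset between the 207Pb/235U and 206Pb/238U dates:
    offset ~ (f - 1) * tau."""
    offset_ka = (date_207_235_ma - date_206_238_ma) * 1000.0
    return 1.0 + offset_ka / TAU_KA

# Reported end-member dates from the first sample: 12.330 Ma vs 11.9325 Ma
f = implied_pa_activity_ratio(12.330, 11.9325)
```

This first-order estimate lands near the upper end of the 5.6-9.6 range reported for that sample, and the 12.140 Ma date similarly implies a value near the lower end.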
Abstract:
Sediment core logs from six sediment cores in the Labrador Sea show millennial-scale climate variability during the last glacial, recording all Heinrich events and several major Dansgaard-Oeschger cycles. The same millennial-scale climate change is documented in surface-water δ18O records of Neogloboquadrina pachyderma (left coiled); hence the surface-water δ18O record can be derived from sediment core logging by means of multiple linear regression, providing a paleoclimate proxy record at very high temporal resolution (70 yrs). For the Labrador Sea, sediment core logs contain important information about deep-water current velocities and also reflect the variable input of IRD from different sources, as inferred from grain-size analysis, benthic δ18O, the relation of density and p-wave velocity, and magnetic susceptibility. During the last glacial, faster deep-water currents, which correspond to highs in sediment physical properties, occurred during iceberg discharge and lasted for several centuries to a few millennia. Those enhanced currents might have contributed to increased production of intermediate waters during times of reduced production of North Atlantic Deep Water. Hudson Strait might have acted as a major supplier of detrital carbonate only during lowered sea level (greater ice extent). During the coldest atmospheric temperatures over Greenland, deep-water currents in the Labrador Sea increased during iceberg discharge, then surface water freshened shortly after, while the abrupt atmospheric temperature rise occurred after a larger time lag of ≥1 kyr. The correlation implies a strong link and common forcing for atmosphere, sea surface, and deep water during the last glacial at millennial time scales, but decoupling at orbital time scales.
Abstract:
In the Pampas plain, degradation processes linked to surface water erosion constrain agricultural and livestock activity. This work seeks to model sediment yield in a forested watershed of the northeastern Pampas. The methodology consists of applying a quantitative cartographic model developed on a geospatial basis with a Geographic Information System, supported by the Modified Universal Soil Loss Equation (MUSLE). A statistical validation analysis was carried out with field rainfall-microsimulator trials, for a 30 mm/h rainfall with a two-year return period. The results were georeferenced maps of each MUSLE factor, valued by color intensity, which yield 33.77 Mg of sediment delivered at the watershed outlet, with a correlation coefficient of 0.94 and a Nash-Sutcliffe goodness of fit of 0.82. It is concluded that the cartographic model generated precise spatial information on the MUSLE components for a specific rainfall event. Applying the rainfall microsimulator provided measured sediment-yield values, achieving a high degree of fit. Sediment yield in the watershed turned out to be slight to nil.
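MUSLE estimates event sediment yield as Y = 11.8 (Q · qp)^0.56 · K · LS · C · P, and the goodness of fit quoted above is the Nash-Sutcliffe efficiency. A minimal sketch of both (factor values and unit handling are the caller's responsibility):

```python
import numpy as np

def musle_sediment_yield(runoff_m3, peak_m3s, K, LS, C, P):
    """MUSLE (Williams) event sediment yield in Mg, from runoff volume (m^3),
    peak discharge (m^3/s), and the USLE-style K, LS, C, P factors."""
    return 11.8 * (runoff_m3 * peak_m3s) ** 0.56 * K * LS * C * P

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency (1 = perfect fit, <= 0 = no better
    than predicting the observed mean)."""
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    return 1.0 - ((observed - simulated) ** 2).sum() / \
                 ((observed - observed.mean()) ** 2).sum()
```

Comparing microsimulator measurements against the mapped model's per-cell predictions with `nash_sutcliffe` gives the kind of 0.82 efficiency reported above.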
Abstract:
A study of the polarimetric backscattering response of newly formed sea ice types under a large assortment of surface coverages was conducted using a ship-based C-band polarimetric radar system. Polarimetric backscattering results and physical data for 40 stations during the fall freeze-ups of 2003, 2006, and 2007 are presented. Analysis of the copolarized correlation coefficient showed its sensitivity to both sea ice thickness and surface coverage, and resulted in a statistically significant separation of ice thickness into two regimes: ice less than 6 cm thick and ice greater than 8 cm thick. A case study quantified the backscatter of a layer of snow-infiltrated frost flowers on new sea ice, showing that the presence of the old frost flowers can enhance the backscatter by more than 6 dB. Finally, a statistical analysis of a series of temporal-spatial measurements over a visually homogeneous frost-flower-covered ice floe identified temperature as a significant, but not exclusive, factor in the backscattering measurements.
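The copolarized correlation coefficient analyzed above is the magnitude of the complex correlation between the HH and VV scattering channels. A minimal sketch, assuming arrays of complex scattering samples (not the authors' processing chain):

```python
import numpy as np

def copolarized_correlation(shh, svv):
    """Magnitude of the complex correlation between co-polarized scattering
    channels S_hh and S_vv: |<S_hh S_vv*>| / sqrt(<|S_hh|^2><|S_vv|^2>)."""
    shh = np.asarray(shh, complex)
    svv = np.asarray(svv, complex)
    num = np.abs(np.mean(shh * np.conj(svv)))
    den = np.sqrt(np.mean(np.abs(shh) ** 2) * np.mean(np.abs(svv) ** 2))
    return float(num / den)
```

Identical channels give 1.0; decorrelated or phase-scrambled returns, as over rougher or thicker ice, pull the value toward 0.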
Abstract:
Synthetic mass accumulation rates have been calculated for ODP Site 707 using depth-density and depth-porosity functions to estimate values for these parameters with increasing sediment thickness, at 1 Myr time intervals determined on the basis of published microfossil datums. These datums were the basis of the age model used by Peterson and Backman (1990, doi:10.2973/odp.proc.sr.115.163.1990) to calculate actual mass accumulation rate data using density and porosity measurements. A comparison is made between the synthetic and actual mass accumulation rate values for the interval from 37 Ma to the Recent, at 1 Myr time intervals. There is a correlation coefficient of 0.993 between the two data sets, with an absolute difference generally less than 0.1 g/cm²/kyr. We have used the method to extend the mass accumulation rate analysis back to the Late Paleocene (60 Ma) for Site 707. Provided age datums (e.g., fossil or magnetic anomaly data) are available, synthetic mass accumulation rates can be calculated for any sediment sequence.
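A mass accumulation rate combines a linear sedimentation rate with a dry bulk density estimated from the depth-porosity function. A minimal sketch of that core arithmetic (the 2.65 g/cm³ grain density is an illustrative assumption, not necessarily the value used in the paper):

```python
def dry_bulk_density(porosity, grain_density=2.65):
    """Dry bulk density (g/cm^3) from fractional porosity, assuming a
    grain density of 2.65 g/cm^3 (an illustrative default)."""
    return (1.0 - porosity) * grain_density

def mass_accumulation_rate(sed_rate_cm_kyr, porosity, grain_density=2.65):
    """MAR in g/cm^2/kyr = linear sedimentation rate x dry bulk density."""
    return sed_rate_cm_kyr * dry_bulk_density(porosity, grain_density)
```

Evaluating the depth-porosity function at each 1 Myr interval's depth and feeding it through `mass_accumulation_rate` is the essence of the synthetic approach; the "actual" rates replace the function's estimate with measured density and porosity.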
Abstract:
Organic and mineral phosphorus (P_org and P_min) were determined in pore waters of terrigenous, biogenous, and weakly phosphatic to phosphatic sediments from the West African shelf (30 samples). Concentrations of P_min in the pore waters were examined in close relation to the grain size and chemical composition (amounts of P and N_org) of the solid phase of the sediments. It was demonstrated that, among sands and coarse silts, the maximum concentrations of P_min in the pore waters (up to 1.7 mg/l) were observed in weakly phosphatic and phosphatic sediments rich in organic matter from the highly productive shelf of Southwest Africa. Concentrations of P_min in the pore waters are most clearly associated with contents of N_org in the solid phase of the sediments (correlation coefficient R = 0.71) and P_org in the pore waters (R = 0.78).
Abstract:
During Leg 127, the formation microscanner (FMS) logging tool was used as part of an Ocean Drilling Program (ODP) logging program for only the second time in the history of the program. Resistivity images, also known as FMS logs, were obtained at Sites 794 and 797 that covered nearly the complete Yamato Basin sedimentary sequence to depths below 500 mbsf. The FMS images from these two sites, at the northeastern and southwestern corners of the Yamato Basin, were thus amenable to comparison. A strong visual correlation was noticed between the FMS logs taken in Holes 794B and 797C in an upper Miocene interval (350-384 mbsf), although the two sites are approximately 360 km apart. In this interval, the FMS logs showed a series of more resistive thin beds (10-200 cm) alternating with relatively lower resistivity layers: a pattern manifested by alternating dark (low resistivity) and light (high resistivity) banding in the FMS images. We attribute this layering to interbedding of chert and porcellanite layers, a common lithologic sequence throughout Japan (Tada and Iijima, 1983, doi:10.1306/212F82E7-2B24-11D7-8648000102C1865D). Spatial frequency analysis of this interval of dominant dark-light banding showed spatial cycles with periods of 1.1-1.3 m and 0.6 m. This pronounced layering and the correlation between the two sites terminate at 384 mbsf, coincident with the opal-CT to quartz transition at Site 794. We think the correlation in the FMS logs might well extend further back into the middle Miocene, but the opal-CT to quartz transition obscures this layering below 384 mbsf. Although 34 m is only a small part of the core recovered at these two sites, it is significant because it represents an area of extremely poor core recovery and an interval for which a near-depositional hiatus was postulated for Site 797, but not for Site 794.
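Spatial frequency analysis of an evenly sampled resistivity log can be sketched with a periodogram. A minimal illustration (not the authors' exact procedure) that recovers the dominant cycle period from a depth series:

```python
import numpy as np

def dominant_period(signal, spacing_m):
    """Dominant spatial period (m) of an evenly sampled log, found as the
    peak of the periodogram, ignoring the zero-frequency (mean) term."""
    signal = np.asarray(signal, float) - np.mean(signal)
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=spacing_m)
    k = 1 + int(np.argmax(power[1:]))   # skip the DC bin
    return 1.0 / freqs[k]
```

Running this over the 350-384 mbsf banded interval would surface peaks like the 1.1-1.3 m and 0.6 m cycles described above.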
Abstract:
Air pollution is a serious threat with a direct impact on human health; moreover, changes in the chemical composition of the atmosphere can alter the weather, cause acid rain, or destroy ozone, all phenomena of global importance. The World Health Organization (WHO) considers air pollution one of the most important global priorities. Salamanca, Gto., Mexico has been ranked among the most polluted cities in the country. The industry of the area led to major economic development and rapid population growth in the second half of the twentieth century. The impact on air quality is considerable, and substantial efforts have been made to measure the concentrations of pollutants. The main pollution sources are locally based plants in the chemical and power generation sectors. The pollutants registered as concerning are sulphur dioxide (SO2) and particles on the order of 10 micrometers or less (PM10). Predicting the concentrations of these pollutants can be a powerful tool for taking preventive measures, such as reducing emissions and alerting the affected population. In this PhD thesis we propose a model to predict the concentrations of the pollutants SO2 and PM10 for each monitoring booth in the Atmospheric Monitoring Network of Salamanca (REDMAS, for its Spanish acronym). The proposed models consider the use of meteorological variables as factors influencing the concentration of the pollutants. The information used throughout this work is real data from the REDMAS. In the proposed model, Artificial Neural Networks (ANN) combined with clustering algorithms are used. The type of ANN used is the Multilayer Perceptron with one hidden layer, using separate structures for the prediction of each pollutant. The meteorological variables used for prediction were: wind direction (WD), wind speed (WS), temperature (T), and relative humidity (RH).
The clustering algorithms, K-means and Fuzzy C-means, are used to find relationships between the air pollutants and the weather variables under consideration; these relationships are added as inputs to the ANN and provide it with the information needed to predict the pollutants. The results of the proposed model are compared with the results of a multivariate linear regression and a multilayer perceptron neural network. The predictions are evaluated with the mean absolute error, the root mean square error, the correlation coefficient, and the index of agreement. The results show the importance of the meteorological variables in predicting the concentrations of the pollutants SO2 and PM10 in the city of Salamanca, Gto., Mexico, and that the proposed model performs better than the multivariate linear regression and the multilayer perceptron neural network. The models implemented for each monitoring booth can make air quality predictions usable in a system for real-time forecasting and human health impact analysis. Among the main results of this thesis we can cite: a model based on an artificial neural network combined with clustering algorithms for one-hour-ahead prediction of the concentration of each pollutant (SO2 and PM10), with a different model designed for each pollutant and for each of the three monitoring booths of the REDMAS; and a model, likewise based on an artificial neural network combined with clustering algorithms, to predict the average concentration of SO2 and PM10 over the next 24 hours, designed for each booth of the REDMAS and each pollutant separately.
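The four evaluation scores used for the forecasts (mean absolute error, root mean square error, correlation coefficient, and index of agreement) can be sketched as follows, assuming aligned arrays of observed and predicted concentrations (Willmott's formulation is assumed for the index of agreement):

```python
import numpy as np

def evaluation_metrics(obs, pred):
    """MAE, RMSE, Pearson r, and Willmott's index of agreement d for a set
    of observed vs. predicted pollutant concentrations."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    r = np.corrcoef(obs, pred)[0, 1]
    d = 1.0 - (err ** 2).sum() / (
        (np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2).sum()
    return mae, rmse, r, d
```

A perfect forecast gives MAE = RMSE = 0 and r = d = 1; comparing these four numbers across the proposed model, the linear regression, and the plain perceptron is exactly the comparison summarized above.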
Abstract:
A high-definition video quality metric built from full-reference ratios. Visual Quality Assessment (VQA) has so far been one of the most intriguing open challenges in the multimedia environment. Video quality has an enormous impact on the end user's (consumer's) perception of services built on the delivery of multimedia content, and is therefore a key factor in the new paradigm known as Quality of Experience (QoE). Given the growing interest in multimedia service delivery and the progressive evolution towards higher resolutions and higher required quality (e.g., high definition and better image quality), perceptual quality measurement has become a very active area of research. First, this work introduces a classification of objective video quality metrics based on their underlying methodologies and approaches to measuring video quality, summing up the state of the art: most notably, metrics employing psychovisual models oriented to reproducing the characteristics of the Human Visual System (HVS), and those that instead take an engineering approach in which the quality computation is based on extracting and comparing intrinsic image parameters. Despite recent advances, research on video quality metrics, whether full-reference, reduced-reference, or no-reference, still has a long way to go, and for high-definition signals, especially the very high quality signals used in the early stages of the value chain, no reliable measurement models currently exist. This doctoral thesis then describes an enhanced solution for full-reference objective quality measurement, based on mathematical morphology, texture features, and visual similarity information, that provides a normalized metric highly correlated with MOS scores, which we have called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis).
The PARMENIA metric is based on the pooling of different quality ratios obtained from three different approaches: Beucher's gradient, local contrast filtering, and Haralick's contrast and homogeneity texture features. The metric's performance is excellent and improves on the current state of the art by providing a wide dynamic range that makes it easier to discriminate between coded sequences of very similar quality, especially at very high bit rates where quality differences are currently transparent to existing metrics. PARMENIA introduces a degree of novelty with respect to other working metrics: on the one hand, it exploits the variation in structural information to build the metric's kernel, while complementing the measure with texture information and a ratio of visually meaningful points that is closer to typical error-sensitivity-based approaches. To our knowledge, PARMENIA is the only metric built upon full-reference ratios that uses mathematical morphology and texture features (typically used in segmentation) for quality assessment. On the other hand, it yields results with a wide dynamic range that allows measuring the quality of high-definition sequences from bit rates of hundreds of megabits per second (Mbps) down to typical distribution rates (5-6 Mbps) and even streaming rates (1-2 Mbps). Thus, a direct correlation between PARMENIA and MOS scores is easily constructed. PARMENIA may further enhance the number of available choices in objective quality measurement, especially for very high quality HD materials. All these results come from a validation carried out on internationally validated datasets, on which subjective tests based on the ITU-R BT.500 methodology were performed. The Pearson correlation coefficient was calculated to verify the accuracy and reliability of PARMENIA.
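As a rough illustration of the kind of ratio the abstract describes, the sketch below computes the Beucher (morphological) gradient with a 3x3 square structuring element and derives a fidelity-style ratio from it. The function names and the exact ratio definition are assumptions for illustration, not PARMENIA's actual formulation:

```python
def beucher_gradient(img):
    """Beucher (morphological) gradient of a 2-D grayscale image:
    dilation minus erosion with a 3x3 square structuring element."""
    h, w = len(img), len(img[0])
    grad = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            win = [img[x][y]
                   for x in range(max(0, i - 1), min(h, i + 2))
                   for y in range(max(0, j - 1), min(w, j + 2))]
            grad[i][j] = max(win) - min(win)  # dilation - erosion
    return grad

def fidelity_ratio(ref, deg):
    """Hypothetical fidelity-style ratio comparing the edge (gradient)
    content of a reference and a degraded image; 1.0 means identical."""
    g_ref = beucher_gradient(ref)
    g_deg = beucher_gradient(deg)
    diff = sum(abs(a - b) for ra, rb in zip(g_ref, g_deg)
               for a, b in zip(ra, rb))
    total = sum(v for row in g_ref for v in row) or 1
    return 1.0 - diff / total
```

Compression artifacts blur or displace edges, so the degraded image's gradient map diverges from the reference's and the ratio drops below 1.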
Abstract:
Tree tomato (Solanum betaceum) is an Andean small tree cultivated for its juicy fruits. Little information is available on the characterization of genetic resources and breeding of this neglected crop. We have studied the molecular diversity with AFLP markers, using 11 primer combinations, of a collection of 25 S. betaceum accessions belonging to four cultivar groups, most of which had been previously morphologically characterized, as well as one accession of the wild relative S. cajanumense. A total of 197 AFLP fragments were scored, of which 84 (43 %) were polymorphic. When excluding S. cajanumense from the analysis, the number of polymorphic AFLP fragments was 78 (40 %). Unique AFLP fingerprints were obtained for every accession, but no AFLP fragments specific and universal to any of the four cultivar groups were found. The total genetic diversity (HT) of cultivated accessions was HT = 0.2904, while for cultivar groups it ranged from HT = 0.1846 in the orange group to HT = 0.2498 in the orange pointed group. Genetic differentiation among cultivar groups (GST) was low (GST = 0.2248), which was matched by low values of genetic distance among cultivar groups. The diversity of collections from Ecuador, which we hypothesize is a center of diversity for tree tomato, was similar to that from other origins (HT = 0.2884 and HT = 0.2645, respectively). Cluster and PCoA analyses clearly separated wild S. cajanumense from the cultivated species. However, materials of different cultivar groups and origins were intermingled in both analyses. The Mantel test correlation coefficient of the matrices of morphological and AFLP distances was low (-0.024) and non-significant. Overall, the results show that a wide diversity is present in each of the cultivar groups, indicate that Ecuador may be regarded as a center of accumulation of diversity for this crop, and confirm that AFLP and morphological characterization data are complementary.
The results obtained are of value for the conservation of genetic resources and breeding of tree tomato, as an assessment of the genetic diversity and relationships among different
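The diversity statistics quoted above (HT, within-group diversities, GST) follow Nei's gene-diversity framework. A minimal sketch for dominant biallelic markers such as AFLP bands, assuming band frequencies have already been estimated per locus (the equal group weighting here is a simplification, not necessarily the study's exact estimator):

```python
def nei_diversity(freqs):
    """Nei's gene diversity averaged over biallelic loci: h = 2p(1-p),
    where p is the band (allele) frequency at each locus."""
    return sum(2 * p * (1 - p) for p in freqs) / len(freqs)

def gst(groups):
    """Coefficient of gene differentiation GST = (HT - HS) / HT.
    `groups` maps a cultivar-group name to its list of band frequencies,
    one per locus; all groups must be scored for the same loci."""
    n_loci = len(next(iter(groups.values())))
    # Total diversity HT from the mean band frequency at each locus.
    mean_p = [sum(g[l] for g in groups.values()) / len(groups)
              for l in range(n_loci)]
    ht = nei_diversity(mean_p)
    # Mean within-group diversity HS.
    hs = sum(nei_diversity(g) for g in groups.values()) / len(groups)
    return ht, hs, (ht - hs) / ht
```

When groups share similar band frequencies, HS approaches HT and GST approaches 0, which is the pattern of low differentiation the abstract reports (GST = 0.2248).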
Resumo:
An increasing number of neuroimaging studies are concerned with the identification of interactions or statistical dependencies between brain areas. Dependencies between the activities of different brain regions can be quantified with functional connectivity measures such as the cross-correlation coefficient. An important factor limiting the accuracy of such measures is the amount of empirical data available. For event-related protocols, the amount of data also affects the temporal resolution of the analysis. We use analytical expressions to calculate the amount of empirical data needed to establish whether a certain level of dependency is significant when the time series are autocorrelated, as is the case for biological signals. These analytical results are then contrasted with estimates from simulations based on real data recorded with magnetoencephalography during a resting-state paradigm and during the presentation of visual stimuli. Results indicate that, for broadband signals, 50-100 s of data are required to detect a true underlying cross-correlation coefficient of 0.05. This corresponds to a resolution of a few hundred milliseconds for typical event-related recordings. The required time window increases for narrow-band signals as frequency decreases. For instance, approximately 3 times as much data is necessary for signals in the alpha band. Important implications can be derived for the design and interpretation of experiments to characterize weak interactions, which are potentially important for brain processing.
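The data requirement depends on the autocorrelation of the signals, since autocorrelation reduces the effective number of independent samples. A textbook large-sample approximation for two AR(1) series (not the paper's exact analytical expressions) illustrates how the required record length grows with autocorrelation:

```python
import math

def required_samples(r_target, rho=0.0, z=1.96):
    """Approximate number of samples needed for a sample cross-correlation
    of size r_target to exceed the z-level significance threshold, when
    both series are AR(1) with lag-1 autocorrelation `rho`.
    Uses the effective-sample-size inflation 1 + 2*rho^2/(1 - rho^2),
    i.e. the sum over lags of rho_x(k)*rho_y(k) for matched AR(1) series."""
    inflation = 1 + 2 * rho ** 2 / (1 - rho ** 2)
    return math.ceil((z / r_target) ** 2 * inflation)
```

Under this approximation, detecting r = 0.05 at the 5% level needs about 1537 independent samples for white signals, and roughly an order of magnitude more when the lag-1 autocorrelation reaches 0.9, consistent with the abstract's observation that narrow-band (more strongly autocorrelated) signals require longer recordings.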