957 results for Test Set
Abstract:
There has been significant interest in indirect measures of attitudes like the Implicit Association Test (IAT), presumably because of the possibility of uncovering implicit prejudices. The authors derived a set of qualitative predictions for people's performance in the IAT on the basis of random walk models. These were supported in 3 experiments comparing clearly positive or negative categories to nonwords. They also provided evidence that participants shift their response criterion when doing the IAT. Because of these criterion shifts, a response pattern in the IAT can have multiple causes. Thus, it is not possible to infer a single cause (such as prejudice) from IAT results. A surprising additional result was that nonwords were treated as though they were evaluated more negatively than obviously negative items like insects, suggesting that low familiarity items may generate the pattern of data previously interpreted as evidence for implicit prejudice.
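As a rough illustration of the random-walk account (not the authors' actual model or parameters), the following Python sketch simulates evidence accumulating toward response criteria; widening the criteria reproduces a slower-but-more-accurate response pattern without any change in the underlying associations, which is the sense in which one IAT pattern can have multiple causes:

```python
import random

def simulate_trial(drift, upper=20.0, lower=-20.0, max_steps=10_000):
    """Random walk toward one of two decision boundaries.

    drift : mean evidence per step toward the correct (upper) boundary
    upper : correct-response criterion; lower : error criterion.
    Moving both outward models a criterion shift.
    Returns (steps_taken, correct).
    """
    x = 0.0
    for t in range(1, max_steps + 1):
        x += drift + random.gauss(0.0, 2.0)
        if x >= upper:
            return t, True
        if x <= lower:
            return t, False
    return max_steps, False

def block_stats(drift, upper, lower, n=5_000):
    results = [simulate_trial(drift, upper, lower) for _ in range(n)]
    mean_rt = sum(t for t, _ in results) / n
    accuracy = sum(c for _, c in results) / n
    return mean_rt, accuracy

# "Compatible" block: strong drift; "incompatible" block: weaker drift.
print(block_stats(drift=1.0, upper=20, lower=-20))
print(block_stats(drift=0.5, upper=20, lower=-20))
# Criterion shift: boundaries moved outward in the hard block, trading
# speed for accuracy; a similar RT pattern, but a different cause.
print(block_stats(drift=0.5, upper=30, lower=-30))
```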
Abstract:
Stepwise uncertainty reduction (SUR) strategies aim at constructing a sequence of points for evaluating a function f in such a way that the residual uncertainty about a quantity of interest progressively decreases to zero. Using such strategies in the framework of Gaussian process modeling has been shown to be efficient for estimating the volume of excursion of f above a fixed threshold. However, SUR strategies remain cumbersome to use in practice because of their high computational complexity, and the fact that they deliver a single point at each iteration. In this article we introduce several multipoint sampling criteria, allowing the selection of batches of points at which f can be evaluated in parallel. Such criteria are of particular interest when f is costly to evaluate and several CPUs are simultaneously available. We also manage to drastically reduce the computational cost of these strategies through the use of closed-form formulas. We illustrate their performance in various numerical experiments, including a nuclear safety test case. Basic notions about kriging, auxiliary problems, complexity calculations, R code, and data are available online as supplementary materials.
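A minimal Python sketch of the underlying excursion-volume estimation, using scikit-learn's Gaussian process in place of a kriging package; the one-point selection rule shown (maximizing the misclassification term p(1-p)) is a simple stand-in for illustration, not one of the article's multipoint SUR criteria:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(3 * x) + 0.5 * x        # stand-in for the costly function
T = 0.8                                      # excursion threshold

# Fit a GP on a small initial design over the domain [0, 2].
X = np.random.RandomState(0).uniform(0, 2, 8).reshape(-1, 1)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-8)
gp.fit(X, f(X).ravel())

grid = np.linspace(0, 2, 400).reshape(-1, 1)
m, s = gp.predict(grid, return_std=True)
p = norm.cdf((m - T) / np.maximum(s, 1e-12))  # posterior P(f(x) > T)

print("estimated excursion volume:", p.mean() * 2.0)  # domain length = 2
# A crude one-point step: evaluate where the excursion is most uncertain.
x_next = grid[np.argmax(p * (1 - p))]
print("next evaluation point:", x_next)
```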
Abstract:
OBJECTIVE To provide guidance on standards for reporting studies of diagnostic test accuracy for dementia disorders. METHODS An international consensus process on reporting standards in dementia and cognitive impairment (STARDdem) was established, focusing on studies presenting data from which sensitivity and specificity were reported or could be derived. A working group led the initiative through 4 rounds of consensus work, using a modified Delphi process and culminating in a face-to-face consensus meeting in October 2012. The aim of this process was to agree on how best to supplement the generic standards of the STARD statement to enhance their utility and encourage their use in dementia research. RESULTS More than 200 comments were received during the wider consultation rounds. The areas at most risk of inadequate reporting were identified and a set of dementia-specific recommendations to supplement the STARD guidance were developed, including better reporting of patient selection, the reference standard used, avoidance of circularity, and reporting of test-retest reliability. CONCLUSION STARDdem is an implementation of the STARD statement in which the original checklist is elaborated and supplemented with guidance pertinent to studies of cognitive disorders. Its adoption is expected to increase transparency, enable more effective evaluation of diagnostic tests in Alzheimer disease and dementia, contribute to greater adherence to methodologic standards, and advance the development of Alzheimer biomarkers.
Abstract:
BACKGROUND/AIMS Several countries are working to adapt clinical trial regulations to align the approval process with the level of risk for trial participants. The optimal framework for categorizing clinical trials according to risk remains unclear, however. In January 2014, Switzerland became the first European country to adopt a risk-based categorization procedure. We assessed how accurately and consistently clinical trials are categorized using two different approaches: an approach using the criteria set forth in the new law (concept) or an intuitive approach (ad hoc). METHODS This was a randomized controlled trial with a method-comparison study nested in each arm. We used clinical trial protocols approved by eight Swiss ethics committees between 2010 and 2011. Protocols were randomly assigned to be categorized into one of three risk categories using the concept or the ad hoc approach. Each protocol was independently categorized by the trial's sponsor, a group of experts, and the approving ethics committee. The primary outcome was the difference in categorization agreement between the expert group and sponsors across arms. Linear weighted kappa was used to quantify agreement, with the difference between kappas being the primary effect measure. RESULTS We included 142 of 231 protocols in the final analysis (concept = 78; ad hoc = 64). Raw agreement between the expert group and sponsors was 0.74 in the concept and 0.78 in the ad hoc arm. Chance-corrected agreement was higher in the ad hoc arm (kappa: 0.34; 95% confidence interval = 0.10-0.58) than in the concept arm (0.27; 0.06-0.50), but the difference was not significant (p = 0.67). LIMITATIONS The main limitation was the large number of protocols excluded from the analysis, mostly because they did not fit the new law's definition of a clinical trial. CONCLUSION A structured risk categorization approach was not better than an ad hoc approach. Laws introducing risk-based approaches should provide guidelines, examples, and templates to ensure correct application.
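For readers unfamiliar with the agreement measure, a minimal example of raw agreement versus linear weighted kappa, using made-up categorizations rather than the study's data:

```python
from sklearn.metrics import cohen_kappa_score

# Risk category (1 = low, 3 = high) assigned to the same protocols by
# the expert group and by the sponsor (toy data for illustration only).
experts  = [1, 2, 2, 3, 1, 2, 3, 1, 2, 2]
sponsors = [1, 2, 3, 3, 1, 1, 3, 2, 2, 2]

kappa = cohen_kappa_score(experts, sponsors, weights="linear")
raw = sum(e == s for e, s in zip(experts, sponsors)) / len(experts)
print(f"raw agreement = {raw:.2f}, linear weighted kappa = {kappa:.2f}")
```

Weighted kappa corrects raw agreement for chance and penalizes two-category disagreements more than one-category ones, which is why it can be low even when raw agreement looks high.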
Abstract:
The Culture Fair Test (CFT) is a psychometric test of fluid intelligence consisting of four subtests: Series, Classification, Matrices, and Topographies. The four subtests are only moderately intercorrelated, casting doubt on the notion that they assess the same construct (i.e., fluid intelligence). As an explanation of these low correlations, we investigated the position effect. This effect is assumed to reflect implicit learning during testing. By applying fixed-links modeling to analyze the CFT data of 206 participants, we identified position effects as latent variables in the subtests Classification, Matrices, and Topographies. These position effects were disentangled from a second set of latent variables representing fluid intelligence inherent in the four subtests. After this separation of position effect and basic fluid intelligence, the latent variables representing basic fluid intelligence in the subtests Series, Matrices, and Topographies could be combined into one common latent variable, which was highly correlated with fluid intelligence derived from the subtest Classification (r = .72). Correlations between the three latent variables representing the position effects in the Classification, Matrices, and Topographies subtests ranged from r = .38 to r = .59. The results indicate that all four CFT subtests measure the same construct (i.e., fluid intelligence) but that the position effect confounds the factorial structure.
Abstract:
A problem with practical application of Varian's Weak Axiom of Cost Minimization (WACM) is that an observed violation may be due to random variation in the output quantities produced by firms rather than to inefficiency on the part of the firm. In this paper, unlike in Varian (1985), the output rather than the input quantities are treated as random, and an alternative statistical test of the violation of WACM is proposed. We assume that there is no technical inefficiency and provide a test of the hypothesis that an observed violation of WACM is merely due to random variation in the output levels of the firms being compared. We suggest an intuitive approach for specifying a value of the variance of the noise term that is needed for the test. The paper includes an illustrative example utilizing a data set relating to a number of U.S. airlines.
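The deterministic check that the paper's statistical test builds on can be sketched as follows; the data are toy numbers, and the statistical treatment of random output variation itself is not implemented here:

```python
import numpy as np

def wacm_violations(W, X, Y):
    """Flag violations of the Weak Axiom of Cost Minimization.

    W[i] : input price vector faced by firm i
    X[i] : input quantity vector chosen by firm i
    Y[i] : output quantity vector produced by firm i
    Firm i violates WACM against firm j if j produces at least as much of
    every output, yet j's input bundle would have cost i strictly less.
    """
    violations = []
    n = len(W)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(Y[j] >= Y[i]):
                if W[i] @ X[j] < W[i] @ X[i]:
                    violations.append((i, j))
    return violations

W = np.array([[2.0, 1.0], [1.5, 1.2], [1.0, 2.0]])   # input prices
X = np.array([[4.0, 6.0], [5.0, 5.0], [7.0, 3.0]])   # input quantities
Y = np.array([[10.0], [11.0], [10.5]])               # outputs
print(wacm_violations(W, X, Y))
```

The paper's question is whether such flagged pairs survive once Y is treated as noisy; with the deterministic check in hand, that becomes a hypothesis test on the output differences.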
Abstract:
In France, farmers commission about 250,000 soil-testing analyses per year to assist them in managing soil fertility. The number and diversity of origin of the samples make these analyses an interesting and original source of information on the variability of cultivated topsoil. Moreover, these analyses cover several parameters strongly influenced by human activity (macronutrient contents, pH, ...), for which existing cartographic information is not very relevant. Compiling the results of these analyses into a database makes it possible to re-use these data within both a national and a temporal framework. A database compilation of data collected over the period 1990-2009 has recently been achieved. So far, commercial soil-testing laboratories approved by the Ministry of Agriculture have provided analytical results from more than 2,000,000 samples. After the initial quality-control stage, analytical results from more than 1,900,000 samples were available in the database. The anonymity of the landholders seeking soil analyses is fully preserved, as the only identifying information stored is the location of the administrative city nearest to the sample site. In this dataset we present a set of statistical parameters of the spatial distributions of several agronomic soil properties. These statistical parameters are calculated for 4 nested spatial entities (administrative areas: regions, departments, counties, and agricultural areas) and for 4 time periods (1990-1994, 1995-1999, 2000-2004, 2005-2009). Two kinds of agronomic soil properties are available: the first corresponds to quantitative variables, such as organic carbon content, and the second to qualitative variables, such as texture class. For each spatial unit and temporal period, we calculated the following sets of statistics: the first set, for the quantitative variables, comprises the number of samples, the mean, the standard deviation, and the 2-, 4-, and 10-quantiles (median, quartiles, and deciles); the second set, for the qualitative variables, comprises the number of samples, the dominant class and its number of samples, and the second dominant class and its number of samples.
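A sketch of how such per-unit, per-period statistics can be computed with pandas; the column names and toy rows below are illustrative assumptions, not the database's actual schema:

```python
import pandas as pd

# Toy stand-in for the soil-test database: one row per analyzed sample.
df = pd.DataFrame({
    "region":  ["A", "A", "A", "B", "B", "B", "B", "A"],
    "period":  ["1990-1994"] * 4 + ["1995-1999"] * 4,
    "org_C":   [12.1, 14.3, 11.8, 20.5, 19.7, 22.0, 18.9, 13.0],  # g/kg
    "texture": ["loam", "clay", "loam", "clay", "clay", "silt", "clay", "loam"],
})

# Quantitative variables: count, mean, sd, deciles, quartiles, median.
quant = df.groupby(["region", "period"])["org_C"].describe(
    percentiles=[0.1, 0.25, 0.5, 0.75, 0.9])
print(quant)

# Qualitative variables: dominant and second dominant class with counts.
def top2(s):
    counts = s.value_counts()
    return pd.Series({
        "n": len(s),
        "dominant": counts.index[0], "n_dominant": counts.iloc[0],
        "second": counts.index[1] if len(counts) > 1 else None,
        "n_second": counts.iloc[1] if len(counts) > 1 else 0,
    })

print(df.groupby(["region", "period"])["texture"].apply(top2))
```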
Abstract:
Predicting species' potential and future distributions has become a relevant tool in biodiversity monitoring and conservation. In this data article we present the suitability map of a virtual species generated from two bioclimatic variables, and a dataset containing more than 700,000 random observations at the extent of Europe. The dataset includes spatial attributes such as distance to roads, protected areas, country codes, and the habitat suitability of two spatially clustered species (a grassland and a forest species) and a widespread species.
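A minimal sketch of how a virtual-species suitability surface and suitability-weighted random observations of this kind can be generated; the Gaussian response curves, climate rasters, and optima below are illustrative assumptions, not the article's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two toy bioclimatic rasters over a grid (stand-ins for real layers).
n = 200
temp = rng.normal(10, 5, size=(n, n))      # mean annual temperature, degC
precip = rng.normal(800, 200, size=(n, n))  # annual precipitation, mm

def gaussian_response(x, opt, sd):
    """Suitability response of the species to one climate variable."""
    return np.exp(-0.5 * ((x - opt) / sd) ** 2)

# Virtual species suitability = product of responses to both variables.
suitability = gaussian_response(temp, 12, 3) * gaussian_response(precip, 900, 150)

# Draw random observations with probability proportional to suitability.
flat = suitability.ravel() / suitability.sum()
idx = rng.choice(flat.size, size=1000, p=flat)
rows, cols = np.unravel_index(idx, suitability.shape)
print("first sampled cells:", list(zip(rows[:5], cols[:5])))
```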
Abstract:
This doctoral thesis, entitled Contribution to the analysis, design and assessment of compact antenna test ranges at millimeter wavelengths, aims to deepen the knowledge of a particular antenna measurement system: the compact range operating in the millimeter-wavelength frequency bands. The thesis was developed at the Radiation Group (GR), an antenna laboratory belonging to the Signals, Systems and Radiocommunications department (SSR) of the Technical University of Madrid (UPM). The Radiation Group has extensive experience in antenna measurements and currently runs four facilities in different configurations: a Gregorian compact antenna test range, a spherical near-field system, a planar near-field system, and a semi-anechoic arch system. The research work performed for this thesis contributes to the knowledge of the first of these measurement configurations at higher frequencies, beyond the microwave region in which the Radiation Group offers customer-level performance. To reach this high-level purpose, a set of scientific tasks was carried out sequentially; they are succinctly described in the following paragraphs.

The first step was a review of the state of the art. The study of the scientific literature covered measurement practices in compact antenna test ranges together with the particularities of millimeter-wavelength technologies. For the measurement facilities of interest here, the joint study of both fields of knowledge converged on a series of technological challenges that become serious bottlenecks at different stages: analysis, design, and assessment.

After the overview study, the focus was set on electromagnetic analysis algorithms. These formulations make it possible to compute electromagnetic features of interest, such as the field distribution phase or the stray signal behavior of particular structures, when they interact with sources of electromagnetic waves. Properly operated, a CATR facility features collimation optics that are large in terms of wavelengths. Accordingly, the electromagnetic analysis tasks involve a large number of mathematical unknowns, which grow with frequency following polynomial laws whose order depends on the algorithm used. In particular, the optics configuration of interest was the reflection-type serrated-edge collimator. The analysis of these devices requires flexible handling of almost arbitrary scattering geometries, and this flexibility is the core of an algorithm's ability to support the subsequent design tasks. The contribution of this thesis to this field of knowledge is a formulation that is powerful both in dealing with various analysis geometries and in computational terms. Two algorithms were developed; while based on the same hybridization principle, they achieve physics of different orders at different computational costs. Their CATR design capabilities were inter-compared, reaching both qualitative and quantitative conclusions on their scope.

In the third place, interest shifted from analysis and design tasks towards range assessment. Millimeter wavelengths imply strict mechanical tolerances and fine setup adjustment. In addition, the large number of unknowns already faced in the analysis stage appears as well in the in-chamber field-probing stage. The natural decrease of the dynamic range available from semiconductor millimeter-wave sources additionally requires longer integration times at each probing point. These peculiarities sharply increase the difficulty of performing assessment processes in CATR facilities beyond microwaves. The bottleneck becomes so tight that it compromises range characterization beyond a certain limit frequency, which typically lies in the lowest segment of the millimeter-wavelength range, whereas the value of range assessment lies, on the contrary, in the highest segment. This thesis contributes to this technological scenario by developing quiet-zone probing techniques that achieve substantial data-reduction ratios. Collaterally, they increase the robustness of the results to noise, which amounts to a virtual increase of the setup's available dynamic range.

In the fourth place, the environmental sensitivity of millimeter wavelengths was addressed. The drift of electromagnetic experiments caused by the dependence of the results on the surrounding environment is well known. At millimeter wavelengths, this relegates many practices that are industrial at microwave frequencies to the experimental stage. In particular, the evolution of the atmosphere, even within acceptable conditioning bounds, results in drift phenomena that completely mask the experimental results. The contribution of this thesis in this respect is an electrical model of the indoor atmosphere of a CATR as a function of the environmental variables that affect the range's performance. A simple model was developed that expresses high-level phenomena, such as feed-probe phase drift, as a function of low-level magnitudes that are easy to sample: relative humidity and temperature. With this model, environmental compensation can be performed, and chamber conditioning is automatically extended towards higher frequencies.

In summary, the purpose of this thesis is to advance the knowledge of millimeter-wavelength compact antenna test ranges. This knowledge is delivered through the sequential stages of a CATR's conception, from early low-level electromagnetic analysis to the assessment of an operative facility, stages at each of which bottleneck phenomena currently exist and seriously compromise antenna measurement practice at millimeter wavelengths.
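As an illustration of the kind of environmental model described (not the thesis' own formulation), the following sketch estimates feed-probe phase drift from temperature and relative humidity using the standard ITU-R P.453 radio refractivity and the Buck saturation-pressure formula; the path length and frequency are arbitrary examples:

```python
import math

def refractivity(T_k, P_hpa, e_hpa):
    """Radio refractivity N (ITU-R P.453 form); n = 1 + N * 1e-6."""
    return 77.6 / T_k * (P_hpa + 4810.0 * e_hpa / T_k)

def vapour_pressure(t_c, rh):
    """Water vapour pressure (hPa) from temperature (degC) and relative
    humidity (0-1), using the Buck saturation formula."""
    es = 6.1121 * math.exp(17.502 * t_c / (t_c + 240.97))
    return rh * es

def path_phase(t_c, rh, p_hpa, L_m, f_hz):
    """Electrical phase (rad) accumulated over a path of length L_m."""
    n = 1.0 + 1e-6 * refractivity(t_c + 273.15, p_hpa, vapour_pressure(t_c, rh))
    lam = 299_792_458.0 / f_hz
    return 2.0 * math.pi * n * L_m / lam

# Feed-to-probe path of 10 m at 100 GHz: phase drift when the chamber
# moves from 20 degC / 40% RH to 21 degC / 45% RH.
drift = path_phase(21, 0.45, 1013, 10.0, 100e9) - path_phase(20, 0.40, 1013, 10.0, 100e9)
print(f"phase drift: {math.degrees(drift):.1f} deg")
```

Even this crude model yields several degrees of phase drift for a one-degree, five-percent-RH excursion, which shows why compensation from sampled temperature and humidity matters at these frequencies.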
Abstract:
A new Relativistic Screened Hydrogenic Model has been developed to calculate the atomic data needed to compute the optical and thermodynamic properties of high-energy-density plasmas. The model is based on a new set of universal screening constants, including nlj-splitting, that was obtained by fitting to a large database of ionization potentials and excitation energies. This database was built with energies compiled from the National Institute of Standards and Technology (NIST) database of experimental atomic energy levels, and energies calculated with the Flexible Atomic Code (FAC). The screening constants have been computed up to the 5p3/2 subshell using a genetic algorithm with an objective function designed to minimize both the relative error and the maximum error. To select the best set of screening constants, additional physical criteria were applied, based on reproducing the filling order of the shells and on obtaining the best ground-state configuration. A statistical error analysis performed to test the model indicated that approximately 88% of the data lie within a ±10% error interval. We validate the model by comparing its results with ionization energies, transition energies, and wave functions computed using sophisticated self-consistent codes, as well as with experimental data.
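A minimal, non-relativistic illustration of the screened hydrogenic idea (the paper's model adds nlj-splitting and fitted universal constants; the screening value below is merely chosen to roughly reproduce one experimental energy):

```python
RY_EV = 13.605693  # Rydberg energy in eV

def shell_energy(Z, n, sigma):
    """Screened hydrogenic shell energy: E_n = -Ry * (Z - sigma)^2 / n^2.
    Each electron sees the nuclear charge Z reduced by a screening
    constant sigma that accounts for the other electrons."""
    return -RY_EV * (Z - sigma) ** 2 / n ** 2

# Helium (Z = 2), 1s shell: sigma = 0.66 is picked here only to land near
# the experimental first ionization energy of 24.59 eV.
print(f"He 1s binding: {abs(shell_energy(2, 1, 0.66)):.1f} eV (expt: 24.59 eV)")
```

The modeling problem the paper addresses is the reverse direction: fit the sigma values for every nlj subshell against a large NIST/FAC energy database, which is what the genetic algorithm optimizes.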
Abstract:
This project presents a study of image coding under the HEVC (high-efficiency video coding) standard. It focuses on the hybrid encoder and, more specifically, on the inverse cosine transform, which is applied both in the encoder and in the decoder. The need to encode video arises from the appearance of image sequences as digital signals. The main problem with video is the number of bits produced during encoding: as image quality increases, the amount of information to encode grows exponentially. The use of transforms in digital image processing has increased over the years, and the inverse cosine transform has become the most widely used method in the field of image and video coding, since it makes high compression ratios attainable at very low cost. Transform theory has improved image processing: in transform coding, an image is divided into blocks and each block is mapped to a set of coefficients. This coding exploits the statistical dependencies within images to reduce the amount of data. The project reviews the evolution of the different video coding standards over the years, and analyzes the hybrid encoder and the HEVC standard in more depth. The final objective of this final-year project is the implementation of the core of a specific processor for executing the inverse cosine transform in a video decoder compatible with the HEVC standard. This objective is reached through a series of stages in which requirements are added incrementally, allowing the hardware designer to acquire experience and a deeper knowledge of the final architecture.
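For illustration, a floating-point 2D DCT/IDCT round trip in Python shows the energy compaction that transform coding exploits; note that HEVC itself specifies integer approximations of the cosine transform, not the float version sketched here:

```python
import numpy as np
from scipy.fft import dct, idct

def dct2(block):
    """Separable 2D DCT-II with orthonormal scaling."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):
    """Separable 2D inverse DCT (DCT-III), the decoder-side transform."""
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

# An 8x8 "residual" block, transformed and reconstructed.
rng = np.random.default_rng(0)
block = rng.integers(-128, 128, size=(8, 8)).astype(float)
coeffs = dct2(block)

# Crude coefficient selection: keep only the 16 lowest-frequency terms.
mask = np.zeros_like(coeffs)
mask[:4, :4] = 1
approx = idct2(coeffs * mask)

print("max round-trip error:", np.abs(block - idct2(coeffs)).max())
print("energy kept by 16 coeffs:",
      np.sum((coeffs * mask) ** 2) / np.sum(coeffs ** 2))
```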
Abstract:
The inverter in a photovoltaic system performs two essential functions. The first is to track the maximum power point of the system's IV curve under variable environmental conditions. The second is to convert the DC power delivered by the PV panels into AC power. Nowadays, in order to qualify inverters, manufacturers and certifying bodies mainly use the European and/or CEC efficiency standards. The question arises whether these are still representative of CPV system behaviour. We propose to use a set of CPV-specific weighted averages and a representative dynamic response to better determine the static and dynamic MPPT efficiencies. Four string-sized commercial inverters used in real CPV plants have been tested.
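A sketch of how such weighted average efficiencies are computed; the European and CEC weights below are the standard ones, while the measured efficiencies (and any CPV-specific weighting one might substitute) are made up for illustration:

```python
# Standard load-point weights: fraction of rated power -> weight.
EURO = {0.05: 0.03, 0.10: 0.06, 0.20: 0.13, 0.30: 0.10, 0.50: 0.48, 1.00: 0.20}
CEC  = {0.10: 0.04, 0.20: 0.05, 0.30: 0.12, 0.50: 0.21, 0.75: 0.53, 1.00: 0.05}

def weighted_efficiency(eff_at_load, weights):
    """eff_at_load maps fractional load -> measured conversion efficiency."""
    return sum(w * eff_at_load[load] for load, w in weights.items())

# Measured efficiencies of a hypothetical string inverter at each load point.
eta = {0.05: 0.86, 0.10: 0.92, 0.20: 0.95, 0.30: 0.96,
       0.50: 0.965, 0.75: 0.963, 1.00: 0.955}

print(f"European efficiency: {weighted_efficiency(eta, EURO):.4f}")
print(f"CEC efficiency:      {weighted_efficiency(eta, CEC):.4f}")
```

A CPV-specific average would keep the same machinery but reweight the load points toward the power distribution a concentrator system actually sees.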
Abstract:
The current phylogenetic hypothesis for the evolution and biogeography of fiddler crabs relies on the assumption that complex behavioral traits are also evolutionarily derived. Indo-west Pacific fiddler crabs have simpler reproductive social behavior and are more marine, and were therefore thought to be ancestral to the more behaviorally complex and more terrestrial American species. It was also hypothesized that the evolution of more complex social and reproductive behavior was associated with the colonization of the higher intertidal zones. Our phylogenetic analysis, based upon a set of independent molecular characters, however, demonstrates how widely entrenched ideas about evolution and biogeography led to a reasonable, but apparently incorrect, conclusion about the evolutionary trends within this pantropical group of crustaceans. Species bearing the set of "derived" traits are phylogenetically ancestral, suggesting an alternative evolutionary scenario: reproductive behavioral complexity in fiddler crabs may have arisen multiple times during their evolution, possibly by co-opting a series of other adaptations for high intertidal living and antipredator escape. A calibration of rates of molecular evolution from populations on either side of the Isthmus of Panama suggests a sequence divergence rate for 16S rRNA of 0.9% per million years. The divergence between the ancestral clade and derived forms is estimated at approximately 22 million years ago, whereas the divergence between the American and Indo-west Pacific species is estimated at approximately 17 million years ago.
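The dating is simple molecular-clock arithmetic; the divergence figure below is back-calculated for illustration, not taken from the paper:

```python
# Back-of-the-envelope molecular clock: time = observed divergence / rate.
rate = 0.009        # 16S rRNA divergence per million years (0.9%/Myr)
divergence = 0.198  # illustrative pairwise 16S divergence (19.8%)
print(f"estimated split: {divergence / rate:.0f} million years ago")  # ~22 Mya
```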
Abstract:
From a set of gonioapparent automotive samples from different manufacturers, we selected 28 low-chroma color pairs with relatively small color differences, predominantly in lightness. These color pairs were visually assessed against a gray scale at six different viewing angles by a panel of 10 observers. Using the Standardized Residual Sum of Squares (STRESS) index, the results of our visual experiment were tested against the predictions of 12 modern color-difference formulas. From a weighted STRESS index accounting for the uncertainty in the visual assessments, the best predictions of our whole experiment were achieved by the AUDI2000, CAM02-SCD, CAM02-UCS, and OSA-GP-Euclidean color-difference formulas, which were not statistically significantly different from one another. A two-step optimization of the original AUDI2000 color-difference formula resulted in a modified AUDI2000 formula that performed significantly better than the original formula and below the experimental inter-observer variability. Nevertheless, the proposal of a new revised AUDI2000 color-difference formula requires additional experimental data.
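The unweighted STRESS index has a simple closed form; a minimal implementation with toy data (not the experiment's assessments):

```python
import numpy as np

def stress(dE, dV):
    """STRESS index between computed color differences dE and visual
    differences dV: 100 * sqrt(sum((dE - F*dV)^2) / sum(F^2 * dV^2)),
    with F the least-squares scaling factor between the two sets.
    0 means perfect agreement; larger values mean worse agreement."""
    dE, dV = np.asarray(dE, float), np.asarray(dV, float)
    F = np.sum(dE * dV) / np.sum(dV ** 2)
    return 100.0 * np.sqrt(np.sum((dE - F * dV) ** 2) / np.sum(F ** 2 * dV ** 2))

# Toy data: formula predictions vs. gray-scale visual assessments.
dE = [0.8, 1.4, 2.1, 0.6, 1.0]
dV = [0.9, 1.2, 2.4, 0.5, 1.1]
print(f"STRESS = {stress(dE, dV):.1f}")
```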
Abstract:
This paper proposes an adaptive algorithm for clustering cumulative probability distribution functions (c.p.d.f.) of a continuous random variable, observed in different populations, into the minimum number of homogeneous clusters, making no parametric assumptions about the c.p.d.f.'s. The proposed distance function for clustering c.p.d.f.'s is based on the Kolmogorov-Smirnov two-sample statistic, which is able to detect differences in position, dispersion, or shape of the c.p.d.f.'s. In our context, this statistic allows us to cluster the recorded data with a homogeneity criterion based on the whole distribution of each data set, and to decide whether it is necessary to add more clusters. In this sense, the proposed algorithm is adaptive, as it automatically increases the number of clusters only as necessary; there is therefore no need to fix the number of clusters in advance. The outputs of the algorithm are, for each cluster, the common c.p.d.f. of all observed data in the cluster (the centroid) and the Kolmogorov-Smirnov statistic between the centroid and the most distant c.p.d.f. The proposed algorithm has been applied to a large data set of solar global irradiation spectra distributions. The results make it possible to reduce all the information of more than 270,000 c.p.d.f.'s to only 6 different clusters, corresponding to 6 different c.p.d.f.'s.
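A greedy sketch of the idea, using SciPy's two-sample Kolmogorov-Smirnov test and the pooled cluster data as a stand-in for the centroid c.p.d.f.; the assignment rule below is a simplification for illustration, not the paper's exact algorithm:

```python
import numpy as np
from scipy.stats import ks_2samp

def adaptive_ks_clustering(samples, alpha=0.05):
    """Each sample joins the first cluster whose pooled data it is
    statistically indistinguishable from (two-sample KS test at level
    alpha); otherwise it seeds a new cluster, so the number of clusters
    grows only as needed."""
    clusters = []  # list of lists of sample arrays
    for s in samples:
        for c in clusters:
            centroid = np.concatenate(c)  # pooled data of the cluster
            if ks_2samp(s, centroid).pvalue > alpha:
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

rng = np.random.default_rng(1)
samples = [rng.normal(0, 1, 300) for _ in range(5)] + \
          [rng.normal(3, 1, 300) for _ in range(5)]
print("clusters found:", len(adaptive_ks_clustering(samples)))
```

Because the KS statistic compares whole distributions, the same rule detects shifts in position, spread, or shape without assuming any parametric family.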