25 results for Matching to sample

in CentAUR: Central Archive, University of Reading - UK


Relevance:

90.00%

Publisher:

Abstract:

This paper presents a case study illustrating the range of decisions involved in designing a sampling strategy for a complex, longitudinal research study. It is based on experience from the Young Lives project and identifies the approaches used to sample children for longitudinal follow-up in four less developed countries (LDCs). The rationale for the decisions made, and the resulting benefits and limitations of the approaches adopted, are discussed. Of particular importance is the choice of a sampling approach that yields useful analysis; specific examples are presented of how this informed the design of the Young Lives sampling strategy.

Relevance:

90.00%

Publisher:

Abstract:

This paper presents a simple Bayesian approach to sample size determination in clinical trials. It is required that the trial should be large enough to ensure that the data collected will provide convincing evidence either that an experimental treatment is better than a control or that it fails to improve upon the control by some clinically relevant difference. The method resembles standard frequentist formulations of the problem, and indeed in certain circumstances involving 'non-informative' prior information it leads to identical answers. In particular, unlike many Bayesian approaches to sample size determination, use is made of an alternative hypothesis that an experimental treatment is better than a control treatment by some specified magnitude. The approach is introduced in the context of testing whether a single stream of binary observations is consistent with a given success rate p0. Next, the case of comparing two independent streams of normally distributed responses is considered, first under the assumption that their common variance is known and then for unknown variance. Finally, the more general situation in which a large sample is to be collected and analysed according to the asymptotic properties of the score statistic is explored. Copyright (C) 2007 John Wiley & Sons, Ltd.
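Under the 'non-informative' prior mentioned above, the Bayesian criterion reduces to the standard frequentist answer, which can be sketched for the known-variance two-group comparison. The function name, the default error rates and the use of a two-sided test are illustrative assumptions, not details taken from the paper.

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_means(delta, sigma, alpha=0.05, power=0.9):
    """Per-group sample size to detect a difference delta between two
    normal means with common known s.d. sigma (two-sided test).
    Under a 'non-informative' prior the Bayesian criterion described
    in the abstract coincides with this frequentist formula."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = NormalDist().inv_cdf(power)          # quantile for the target power
    return ceil(2 * (sigma * (z_a + z_b) / delta) ** 2)

# e.g. detecting a half-s.d. difference with 90% power at alpha = 0.05
n = sample_size_two_means(delta=0.5, sigma=1.0)
print(n)  # 85 per group
```

For unknown variance or the binary single-stream case discussed in the abstract, the calculation would be iterative rather than this closed form.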

Relevance:

90.00%

Publisher:

Abstract:

Giant planets helped to shape the conditions we see in the Solar System today and they account for more than 99% of the mass of the Sun’s planetary system. They can be subdivided into the Ice Giants (Uranus and Neptune) and the Gas Giants (Jupiter and Saturn), which differ from each other in a number of fundamental ways. Uranus, in particular, is the most challenging to our understanding of planetary formation and evolution, with its large obliquity, low self-luminosity, highly asymmetrical internal field, and puzzling internal structure. Uranus also has a rich planetary system consisting of a system of inner natural satellites and a complex ring system, five major natural icy satellites, a system of irregular moons with varied dynamical histories, and a highly asymmetrical magnetosphere. Voyager 2 is the only spacecraft to have explored Uranus, with a flyby in 1986, and no mission is currently planned to this enigmatic system. However, a mission to the uranian system would open a new window on the origin and evolution of the Solar System and would provide crucial information on a wide variety of physicochemical processes in our Solar System. These have clear implications for understanding exoplanetary systems. In this paper we describe the science case for an orbital mission to Uranus with an atmospheric entry probe to sample the composition and atmospheric physics in Uranus’ atmosphere. The characteristics of such an orbiter and a strawman scientific payload are described, and we discuss the technical challenges for such a mission. This paper is based on a white paper submitted to the European Space Agency’s call for science themes for its large-class mission programme in 2013.

Relevance:

80.00%

Publisher:

Abstract:

This workshop paper reports recent developments to a vision system for traffic interpretation which relies extensively on the use of geometrical and scene context. Firstly, a new approach to pose refinement is reported, based on forces derived from prominent image derivatives found close to an initial hypothesis. Secondly, a parameterised vehicle model is reported, able to represent different vehicle classes. This general vehicle model has been fitted to sample data, and subjected to a Principal Component Analysis to create a deformable model of common car types having 6 parameters. We show that the new pose recovery technique is also able to operate on the PCA model, to allow the structure of an initial vehicle hypothesis to be adapted to fit the prevailing context. We report initial experiments with the model, which demonstrate significant improvements to pose recovery.

Relevance:

80.00%

Publisher:

Abstract:

A traditional method of validating the performance of a flood model when remotely sensed data of the flood extent are available is to compare the predicted flood extent to that observed. The performance measure employed often uses areal pattern-matching to assess the degree to which the two extents overlap. Recently, remote sensing of flood extents using synthetic aperture radar (SAR) and airborne scanning laser altimetry (LIDAR) has made more straightforward the synoptic measurement of water surface elevations along flood waterlines, and this has emphasised the possibility of using alternative performance measures based on height. This paper considers the advantages that can accrue from using a performance measure based on waterline elevations rather than one based on areal patterns of wet and dry pixels. The two measures were compared for their ability to estimate flood inundation uncertainty maps from a set of model runs carried out to span the acceptable model parameter range in a GLUE-based analysis. A 1 in 5-year flood on the Thames in 1992 was used as a test event. As is typical for UK floods, only a single SAR image of observed flood extent was available for model calibration and validation. A simple implementation of a two-dimensional flood model (LISFLOOD-FP) was used to generate model flood extents for comparison with that observed. The performance measure based on height differences of corresponding points along the observed and modelled waterlines was found to be significantly more sensitive to the channel friction parameter than the measure based on areal patterns of flood extent. The former was able to restrict the parameter range of acceptable model runs and hence reduce the number of runs necessary to generate an inundation uncertainty map. A result of this was that there was less uncertainty in the final flood risk map. The uncertainty analysis included the effects of uncertainties in the observed flood extent as well as in model parameters. 
The height-based measure was found to be more sensitive when increased heighting accuracy was achieved by requiring that observed waterline heights varied slowly along the reach. The technique allows for the decomposition of the reach into sections, with different effective channel friction parameters used in different sections, which in this case resulted in lower r.m.s. height differences between observed and modelled waterlines than those achieved by runs using a single friction parameter for the whole reach. However, a validation of the modelled inundation uncertainty using the calibration event showed a significant difference between the uncertainty map and the observed flood extent. While this was true for both measures, the difference was especially significant for the height-based one. This is likely to be due to the conceptually simple flood inundation model and the coarse application resolution employed in this case. The increased sensitivity of the height-based measure may lead to an increased onus being placed on the model developer in the production of a valid model.
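The two kinds of performance measure compared above can be illustrated with a minimal sketch: an areal wet/dry pattern-matching statistic and an r.m.s. height difference along corresponding waterline points. The F = |A∩B| / |A∪B| form and the function names are assumptions chosen for illustration; the paper's exact statistics may differ.

```python
import numpy as np

def areal_fit(observed, modelled):
    """Areal pattern-matching measure for boolean wet/dry masks:
    F = |A intersect B| / |A union B| (one common form)."""
    inter = np.logical_and(observed, modelled).sum()
    union = np.logical_or(observed, modelled).sum()
    return inter / union

def waterline_rmse(z_obs, z_mod):
    """Height-based measure: r.m.s. difference between observed and
    modelled water-surface elevations at corresponding waterline points."""
    z_obs, z_mod = np.asarray(z_obs), np.asarray(z_mod)
    return np.sqrt(np.mean((z_obs - z_mod) ** 2))

# Toy 2 x 3 raster of wet (True) / dry (False) pixels
obs = np.array([[1, 1, 0], [1, 0, 0]], dtype=bool)
mod = np.array([[1, 0, 0], [1, 1, 0]], dtype=bool)
print(areal_fit(obs, mod))                       # 0.5
print(waterline_rmse([2.0, 2.1], [2.2, 2.0]))    # r.m.s. of -0.2 m and +0.1 m
```

In a GLUE-style analysis, either measure would be evaluated for each model run to weight or reject parameter sets; the abstract's point is that the height-based one discriminates between friction values more sharply.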

Relevance:

80.00%

Publisher:

Abstract:

The prediction of climate variability and change requires the use of a range of simulation models. Multiple climate model simulations are needed to sample the inherent uncertainties in seasonal to centennial prediction. Because climate models are computationally expensive, there is a tradeoff between complexity, spatial resolution, simulation length, and ensemble size. The methods used to assess climate impacts are examined in the context of this trade-off. An emphasis on complexity allows simulation of coupled mechanisms, such as the carbon cycle and feedbacks between agricultural land management and climate. In addition to improving skill, greater spatial resolution increases relevance to regional planning. Greater ensemble size improves the sampling of probabilities. Research from major international projects is used to show the importance of synergistic research efforts. The primary climate impact examined is crop yield, although many of the issues discussed are relevant to hydrology and health modeling. Methods used to bridge the scale gap between climate and crop models are reviewed. Recent advances include large-area crop modeling, quantification of uncertainty in crop yield, and fully integrated crop–climate modeling. The implications of trends in computer power, including supercomputers, are also discussed.

Relevance:

80.00%

Publisher:

Abstract:

Investigation of preferred structures of planetary wave dynamics is addressed using multivariate Gaussian mixture models. The number of components in the mixture is obtained using order statistics of the mixing proportions, hence avoiding previous difficulties related to sample sizes and independence issues. The method is first applied to a few low-order stochastic dynamical systems and data from a general circulation model. The method is next applied to winter daily 500-hPa heights from 1949 to 2003 over the Northern Hemisphere. A spatial clustering algorithm is first applied to the leading two principal components (PCs) and shows significant clustering. The clustering is particularly robust for the first half of the record and less so for the second half. The mixture model is then used to identify the clusters. Two highly significant extratropical planetary-scale preferred structures are obtained within the state space of the first two to four EOFs. The first pattern shows a Pacific-North American (PNA) pattern and a negative North Atlantic Oscillation (NAO), and the second pattern is nearly opposite to the first one. It is also observed that some subspaces show multivariate Gaussianity, compatible with linearity, whereas others show multivariate non-Gaussianity. The same analysis is also applied to two subperiods, before and after 1978, and shows a similar regime behavior, with slightly stronger support for the first subperiod. In addition, a significant regime shift is observed between the two periods, as well as a change in the shape of the distribution. The patterns associated with the regime shifts reflect essentially a PNA pattern and an NAO pattern, consistent with the observed global warming effect on climate and the observed shift in sea surface temperature around the mid-1970s.
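Fitting a Gaussian mixture in a low-dimensional PC space, as in the analysis above, can be sketched as follows. The synthetic two-regime data, the use of scikit-learn's GaussianMixture, and model selection by BIC are illustrative assumptions; the paper instead selects the number of components from order statistics of the mixing proportions, and works with real 500-hPa height PCs.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for the leading two PCs: two overlapping
# Gaussian "regimes" (illustrative data, not the reanalysis record).
pc = np.vstack([
    rng.normal(loc=[-1.5, 0.0], scale=0.7, size=(400, 2)),
    rng.normal(loc=[+1.5, 0.0], scale=0.7, size=(400, 2)),
])

# Fit mixtures with 1..4 components and pick the number of components
# by BIC (a simpler, widely available criterion than the paper's
# order-statistics approach).
fits = {k: GaussianMixture(k, n_init=5, random_state=0).fit(pc)
        for k in range(1, 5)}
best_k = min(fits, key=lambda k: fits[k].bic(pc))
print(best_k, fits[best_k].means_.round(2))  # expect 2 regimes here
```

The fitted component means play the role of the preferred structures (e.g. the PNA/NAO-like patterns), and the mixing proportions give the frequency of residence in each regime.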

Relevance:

80.00%

Publisher:

Abstract:

Background: The gut and immune system form a complex integrated structure that has evolved to provide effective digestion and defence against ingested toxins and pathogenic bacteria. However, great variation exists in what is considered normal healthy gut and immune function. Thus, whilst it is possible to measure many aspects of digestion and immunity, it is more difficult to interpret the benefits to individuals of variation within what is considered to be a normal range. Nevertheless, it is important to set standards for optimal function for use by consumers, industry and those concerned with public health. The digestive tract is most frequently the object of functional and health claims, and a large market already exists for gut-functional foods worldwide. Aim: To define normal function of the gut and immune system and describe available methods of measuring it. Results: We have defined normal bowel habit and transit time, identified their role as risk factors for disease and how they may be measured. Similarly, we have tried to define what is a healthy gut flora in terms of the dominant genera and their metabolism, and listed the many, varied and novel methods for determining these parameters. It has proved less easy to provide boundaries for what constitutes optimal or improved gastric emptying, gut motility, nutrient and water absorption and the function of organs such as the liver, gallbladder and pancreas. The many tests of these functions are described. We have discussed gastrointestinal well-being. Sensations arising from the gut can be both pleasant and unpleasant. However, the characteristics of well-being are ill defined and merge imperceptibly from acceptable to unacceptable, a state that is subjective. Nevertheless, we feel this is an important area for future work and method development. The immune system is even more difficult to make quantitative judgements about.
When it is defective, clinical problems ensue, but this is an uncommon state. The innate and adaptive immune systems work synergistically and comprise many cellular and humoral factors. The adaptive system is extremely sophisticated, and between the two arms of immunity there is great redundancy, which provides robust defences. New aspects of immune function are discovered regularly. It is not clear whether immune function can be "improved". Measuring aspects of immune function is possible, but there is no one test that will define either the status or functional capacity of the immune system. Human studies are often limited by the ability to sample only blood or secretions such as saliva, but it should be remembered that only 2% of lymphocytes circulate at any given time, which limits interpretation of data. We recommend assessing the functional capacity of the immune system by: measuring specific cell functions ex vivo; measuring in vivo responses to challenge, e.g. change in antibody in blood or response to antigens; and determining the incidence and severity of infection in target populations during naturally occurring episodes or in response to attenuated pathogens.

Relevance:

80.00%

Publisher:

Abstract:

Eye-movements have long been considered a problem when trying to understand the visual control of locomotion. They transform the retinal image from a simple expanding pattern of moving texture elements (pure optic flow), into a complex combination of translation and rotation components (retinal flow). In this article we investigate whether there are measurable advantages to having an active free gaze, over a static gaze or tracking gaze, when steering along a winding path. We also examine patterns of free gaze behavior to determine preferred gaze strategies during active locomotion. Participants were asked to steer along a computer-simulated textured roadway with free gaze, fixed gaze, or gaze tracking the center of the roadway. Deviation of position from the center of the road was recorded along with their point of gaze. It was found that visually tracking the middle of the road produced smaller steering errors than for fixed gaze. Participants performed best at the steering task when allowed to sample naturally from the road ahead with free gaze. There was some variation in the gaze strategies used, but sampling was predominantly of areas proximal to the center of the road. These results diverge from traditional models of flow analysis.

Relevance:

80.00%

Publisher:

Abstract:

In many data mining applications, automated retrieval of text and image information is needed. This becomes essential with the growth of the Internet and digital libraries. Our approach is based on latent semantic indexing (LSI) and the corresponding term-by-document matrix suggested by Berry and his co-authors. Instead of using deterministic methods to find the first "k" singular triplets, we propose a stochastic approach. First, we use a Monte Carlo method to sample and build a much smaller term-by-document matrix (e.g. a k x k matrix), from which we then find the first "k" triplets using standard deterministic methods. Second, we investigate how we can reduce the problem to finding the "k" largest eigenvalues using parallel Monte Carlo methods. We apply these methods to the initial matrix and also to the reduced one. The algorithms run on a cluster of workstations under MPI; results of experiments on textual retrieval of Web documents, together with a comparison of the proposed stochastic methods, are presented. (C) 2003 IMACS. Published by Elsevier Science B.V. All rights reserved.
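The deterministic baseline (the first k singular triplets of the term-by-document matrix) and a Monte Carlo sampling variant can be sketched as follows. The norm-proportional column sampler shown here is a standard generic scheme assumed for illustration; it is not necessarily the authors' exact algorithm, and the matrix is random rather than a real term-by-document matrix.

```python
import numpy as np

def lsi_topk(A, k):
    """Deterministic baseline: rank-k LSI via the first k singular
    triplets of the term-by-document matrix A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]

def sampled_singular_values(A, k, n_samples, rng):
    """Monte Carlo flavour of the approach: sample columns of A with
    probability proportional to their squared norm to form a much
    smaller matrix whose leading singular values approximate those
    of A (generic column-sampling sketch, assumed for illustration)."""
    p = (A ** 2).sum(axis=0)
    p = p / p.sum()
    idx = rng.choice(A.shape[1], size=n_samples, p=p)
    C = A[:, idx] / np.sqrt(n_samples * p[idx])  # rescale so C C^T ~ A A^T
    return np.linalg.svd(C, compute_uv=False)[:k]

rng = np.random.default_rng(1)
A = rng.random((200, 1000))            # stand-in "term-by-document" matrix
exact = np.linalg.svd(A, compute_uv=False)[:3]
approx = sampled_singular_values(A, 3, 200, rng)
print(exact[0], approx[0])             # leading singular values agree closely
```

The deterministic triplets from the small sampled matrix can then be lifted back for querying, which is where the computational saving over a full SVD of the original matrix comes from.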

Relevance:

80.00%

Publisher:

Abstract:

This volume reports on the excavations from 2002 to 2005 designed to investigate this transition, with the focus on the origins of Bishopstone village. Excavations adjacent to St Andrew’s churchyard revealed a dense swathe of later Anglo-Saxon (8th- to late 10th-/early 11th-century) habitation, including a planned complex of ‘timber halls’, and a unique cellared tower. The occupation encroached upon a pre-Conquest cemetery of 43 inhumations. The report provides a comprehensive analysis, interpretation and academic contextualisation of the archaeological discoveries brought to light by these excavations, the first to sample a later Anglo-Saxon rural settlement in East Sussex on an extensive scale. The inter-disciplinary approach appraises the historical and topographical evidence alongside that recovered during the excavations.

Relevance:

80.00%

Publisher:

Abstract:

The goal of this paper is to study and further develop the orthogonality sampling, or stationary waves, algorithm for the detection of the location and shape of objects from the far field pattern of scattered waves in electromagnetics or acoustics. Orthogonality sampling can be seen as a special beam-forming algorithm with some links to the point source method and to the linear sampling method. The basic idea of orthogonality sampling is to sample the space under consideration by calculating scalar products of the measured far field pattern with a test function, for all y in a subset Q of the space R^m, m = 2, 3. The way in which this is carried out is important for extracting the information which the scattered fields contain. The theoretical foundation of orthogonality sampling is only partly resolved, and the goal of this work is to initiate further research by numerical demonstration of the high potential of the approach. We implement the method for a two-dimensional setting of the Helmholtz equation, which represents electromagnetic scattering when the setup is independent of the third coordinate. We show reconstructions of the location and shape of objects from measurements of the scattered field for one or several directions of incidence and one or many frequencies or wave numbers, respectively. In particular, we visualize the indicator function with both the Dirichlet and the Neumann boundary condition and for complicated inhomogeneous media.
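A minimal numerical sketch of the sampling idea described above, for a synthetic 2-D example: the indicator at a sampling point z is the modulus of the scalar product of the measured far field pattern with a plane-wave test function. The wave number, the point-scatterer far field and the normalisation are illustrative assumptions; the paper's exact indicator and test functions may differ.

```python
import numpy as np

k = 2 * np.pi            # wave number (assumed value)
n_dirs = 64
theta = np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)
xhat = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # observation directions

# Synthetic far field of a single point scatterer at z0 (2-D Helmholtz,
# constants dropped): u_inf(xhat) ~ exp(-i k xhat . z0).
z0 = np.array([0.4, -0.2])
u_inf = np.exp(-1j * k * xhat @ z0)

def indicator(z):
    """Orthogonality-sampling indicator at point z: modulus of the
    discrete scalar product of the far field with the test function
    exp(i k xhat . z), normalised by the number of directions."""
    return np.abs(np.sum(u_inf * np.exp(1j * k * xhat @ z))) / n_dirs

# Sample a grid over Q = [-1, 1]^2 and locate the indicator's maximum.
grid = np.linspace(-1, 1, 41)
vals = np.array([[indicator(np.array([x, y])) for x in grid] for y in grid])
iy, ix = np.unravel_index(vals.argmax(), vals.shape)
print(grid[ix], grid[iy])  # peak recovers the scatterer location z0
```

With multiple incident directions or frequencies, as in the paper's experiments, the indicator values would be combined (e.g. summed) over the measurements before locating the peaks.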

Relevance:

80.00%

Publisher:

Abstract:

Insect pollinators provide a critical ecosystem service by pollinating many wild flowers and crops. It is therefore essential to be able to survey and monitor pollinator communities effectively across a range of habitats and, in particular, to sample the often stratified parts of the habitats where insects are found. To date, a wide array of sampling methods have been used to collect insect pollinators, but no single method has been used effectively to sample across habitat types and throughout the spatial structure of habitats. Here we present a method of ‘aerial pan-trapping’ that allows insect pollinators to be sampled across the vertical strata, from the canopy of forests to agro-ecosystems. We surveyed and compared the species richness and abundance of a wide range of insect pollinators in agricultural, secondary regenerating forest and primary forest habitats in Ghana to evaluate the usefulness of this approach. In addition to confirming the efficacy of the method at heights of up to 30 metres and demonstrating the effects of trap colour on catch, we found the greatest insect abundance in agricultural land and higher bee abundance and species richness in undisturbed forest compared to secondary forest.

Relevance:

80.00%

Publisher:

Abstract:

The components of many signaling pathways have been identified and there is now a need to conduct quantitative data-rich temporal experiments for systems biology and modeling approaches to better understand pathway dynamics and regulation. Here we present a modified Western blotting method that allows the rapid and reproducible quantification and analysis of hundreds of data points per day on proteins and their phosphorylation state at individual sites. The approach is of particular use where samples show a high degree of sample-to-sample variability such as primary cells from multiple donors. We present a case study on the analysis of >800 phosphorylation data points from three phosphorylation sites in three signaling proteins over multiple time points from platelets isolated from ten donors, demonstrating the technique's potential to determine kinetic and regulatory information from limited cell numbers and to investigate signaling variation within a population. We envisage the approach being of use in the analysis of many cellular processes such as signaling pathway dynamics to identify regulatory feedback loops and the investigation of potential drug/inhibitor responses, using primary cells and tissues, to generate information about how a cell's physiological state changes over time.