45 results for Yield curve data sets

at University of Queensland eSpace - Australia


Relevance:

100.00%

Publisher:

Abstract:

Retrieving large amounts of information over wide area networks, including the Internet, is problematic due to issues arising from latency of response, lack of direct memory access to data serving resources, and fault tolerance. This paper describes a design pattern for solving the issues of handling results from queries that return large amounts of data. Typically these queries would be made by a client process across a wide area network (or Internet), with one or more middle-tiers, to a relational database residing on a remote server. The solution involves implementing a combination of data retrieval strategies, including the use of iterators for traversing data sets and providing an appropriate level of abstraction to the client, double-buffering of data subsets, multi-threaded data retrieval, and query slicing. This design has recently been implemented and incorporated into the framework of a commercial software product developed at Oracle Corporation.
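The retrieval strategies listed above can be sketched compactly. The Python fragment below is an illustration only, not Oracle's implementation: the slice size and the `fetch_slice` callable are assumptions standing in for query slicing against a remote relational database, and the background thread supplies the double-buffering of data subsets behind an iterator abstraction.

```python
import threading
from typing import Callable, Iterator, List, Sequence

def sliced_results(fetch_slice: Callable[[int, int], Sequence],
                   slice_size: int = 1000) -> Iterator:
    """Iterate over a large query result slice by slice.

    While the caller consumes the current slice, the next slice is
    retrieved on a background thread (double buffering), hiding much
    of the wide-area-network latency behind the iterator abstraction.
    """
    offset = 0
    current: List = list(fetch_slice(offset, slice_size))
    while current:
        next_slice: List = []

        def _prefetch(off: int, out: List) -> None:
            # Query slicing: each call asks the server for one window of rows.
            out.extend(fetch_slice(off, slice_size))

        worker = threading.Thread(target=_prefetch,
                                  args=(offset + slice_size, next_slice))
        worker.start()
        for row in current:          # client consumes the buffered slice
            yield row
        worker.join()                # wait for the prefetched slice
        offset += slice_size
        current = next_slice

# Hypothetical usage with a DB-API cursor (names are illustrative only):
# def fetch_slice(offset, limit):
#     cur = conn.cursor()
#     cur.execute("SELECT * FROM results ORDER BY id "
#                 "OFFSET :o ROWS FETCH NEXT :n ROWS ONLY", o=offset, n=limit)
#     return cur.fetchall()
# for row in sliced_results(fetch_slice):
#     process(row)
```

In a real client the connection and cursor management would sit inside `fetch_slice`, so the iterator exposes only rows to the caller, which is the level of abstraction the pattern aims to provide.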

Relevance:

100.00%

Publisher:

Abstract:

In the context of cancer diagnosis and treatment, we consider the problem of constructing an accurate prediction rule on the basis of a relatively small number of tumor tissue samples of known type containing the expression data on very many (possibly thousands of) genes. Recently, results have been presented in the literature suggesting that it is possible to construct a prediction rule from only a few genes such that it has a negligible prediction error rate. However, in these results the test error or the leave-one-out cross-validated error is calculated without allowance for the selection bias. There is no allowance because the rule is tested either on tissue samples that were used in the first instance to select the genes being used in the rule, or with a cross-validation that is not external to the selection process; that is, gene selection is not performed in training the rule at each stage of the cross-validation process. We describe how in practice the selection bias can be assessed and corrected for by either performing a cross-validation or applying the bootstrap external to the selection process. We recommend using 10-fold rather than leave-one-out cross-validation, and, concerning the bootstrap, we suggest using the so-called .632+ bootstrap error estimate designed to handle overfitted prediction rules. Using two published data sets, we demonstrate that when correction is made for the selection bias, the cross-validated error is no longer zero for a subset of only a few genes.
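The distinction between internal and external cross-validation can be made concrete with a short sketch. The example below is illustrative only: scikit-learn, a univariate F-test filter and a linear SVM are assumptions standing in for the gene selection and prediction rules discussed in the paper, and the data are pure noise so an honest error estimate should sit near chance.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2000))      # 50 tissue samples, 2000 genes, pure noise
y = rng.integers(0, 2, size=50)      # random class labels

# Biased estimate: genes are selected on ALL samples, then cross-validated.
selector = SelectKBest(f_classif, k=10).fit(X, y)
X_sel = selector.transform(X)
biased = cross_val_score(LinearSVC(), X_sel, y,
                         cv=StratifiedKFold(10, shuffle=True, random_state=0)).mean()

# External (honest) estimate: selection is refitted inside every training fold.
pipe = Pipeline([("select", SelectKBest(f_classif, k=10)), ("clf", LinearSVC())])
honest = cross_val_score(pipe, X, y,
                         cv=StratifiedKFold(10, shuffle=True, random_state=0)).mean()

print(f"biased CV accuracy:   {biased:.2f}")   # typically well above chance
print(f"external CV accuracy: {honest:.2f}")   # typically near 0.5 on noise
```

On random labels the biased procedure usually reports accuracy well above 50% because the genes were chosen using information from the test folds, while the external procedure stays near chance; that gap is the selection bias the paper corrects for.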

Relevance:

100.00%

Publisher:

Abstract:

To simulate cropping systems, crop models must not only give reliable predictions of yield across a wide range of environmental conditions, they must also quantify water and nutrient use well, so that the status of the soil at maturity is a good representation of the starting conditions for the next cropping sequence. To assess their suitability for this task, a range of crop models currently used in Australia was tested. The models differed in their design objectives, complexity and structure and were (i) tested on diverse, independent data sets from a wide range of environments and (ii) further evaluated at the component level with one detailed data set from a semi-arid environment. All models were coded into the cropping systems shell APSIM, which provides a common soil water and nitrogen balance. Crop development was input, thus differences between simulations were caused entirely by differences in simulating crop growth. Under nitrogen non-limiting conditions, between 73 and 85% of the observed kernel yield variation across environments was explained by the models; this ranged from 51 to 77% under varying nitrogen supply. Water and nitrogen effects on leaf area index were predicted poorly by all models, resulting in erroneous predictions of dry matter accumulation and water use. When measured light interception was used as input, most models improved in their prediction of dry matter and yield. This test highlighted a range of compensating errors in all modelling approaches. The time course and final amount of water extraction were simulated well by two models, while others left up to 25% of potentially available soil water in the profile. Kernel nitrogen percentage was predicted poorly by all models due to its sensitivity to small dry matter changes. Yield and dry matter could be estimated adequately for a range of environmental conditions using the general concepts of radiation use efficiency and transpiration efficiency. However, leaf area and kernel nitrogen dynamics need to be improved to achieve better estimates of water and nitrogen use if such models are to be used to evaluate cropping systems. (C) 1998 Elsevier Science B.V.
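As a rough illustration of the radiation use efficiency (RUE) and transpiration efficiency (TE) concepts mentioned above, the sketch below computes a day's biomass gain as the more limiting of a radiation-driven and a water-driven estimate; the parameter values and inputs are assumed for illustration and are not taken from any of the tested models.

```python
# Daily dry matter gain as the minimum of a radiation-limited and a
# water-limited estimate, the shared core of RUE/TE-based crop models.
def daily_biomass_gain(intercepted_radiation_mj: float,
                       transpiration_mm: float,
                       vpd_kpa: float,
                       rue_g_per_mj: float = 1.6,
                       te_coeff_g_kpa_per_mm: float = 9.0) -> float:
    radiation_limited = rue_g_per_mj * intercepted_radiation_mj         # g/m2
    water_limited = te_coeff_g_kpa_per_mm * transpiration_mm / vpd_kpa  # g/m2
    return min(radiation_limited, water_limited)

# Example day: 8 MJ/m2 intercepted, 4 mm transpired, vapour pressure deficit 2 kPa.
print(daily_biomass_gain(8.0, 4.0, 2.0))  # -> 12.8 g/m2 (radiation-limited)
```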

Relevance:

100.00%

Publisher:

Abstract:

Two major factors are likely to impact the utilisation of remotely sensed data in the near future: (1) an increase in the number and availability of commercial and non-commercial image data sets with a range of spatial, spectral and temporal dimensions, and (2) increased access to image display and analysis software through GIS. A framework was developed to provide an objective approach to selecting remotely sensed data sets for specific environmental monitoring problems. Preliminary applications of the framework have provided successful approaches for monitoring disturbed and restored wetlands in southern California.

Relevance:

100.00%

Publisher:

Abstract:

We tested the effects of four data characteristics on the results of reserve selection algorithms. The data characteristics were nestedness of features (land types in this case), rarity of features, size variation of sites (potential reserves) and size of data sets (numbers of sites and features). We manipulated data sets to produce three levels, with replication, of each of these data characteristics while holding the other three characteristics constant. We then used an optimizing algorithm and three heuristic algorithms to select sites to solve several reservation problems. We measured efficiency as the number or total area of selected sites, indicating the relative cost of a reserve system. Higher nestedness increased the efficiency of all algorithms (reduced the total cost of new reserves). Higher rarity reduced the efficiency of all algorithms (increased the total cost of new reserves). More variation in site size increased the efficiency of all algorithms expressed in terms of total area of selected sites. We measured the suboptimality of heuristic algorithms as the percentage increase of their results over optimal (minimum possible) results. Suboptimality is a measure of the reliability of heuristics as indicative costing analyses. Higher rarity reduced the suboptimality of heuristics (increased their reliability) and there is some evidence that more size variation did the same for the total area of selected sites. We discuss the implications of these results for the use of reserve selection algorithms as indicative and real-world planning tools.
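For readers unfamiliar with heuristic reserve selection, the sketch below is an assumption-laden illustration, not one of the algorithms tested in the study: a greedy heuristic that repeatedly adds the site covering the most unrepresented features per unit cost, whose suboptimality is then the percentage by which its total cost exceeds the optimal solution.

```python
from typing import Dict, Set

def greedy_reserve_selection(sites: Dict[str, Set[str]],
                             cost: Dict[str, float]) -> Set[str]:
    """Greedy heuristic: pick the site with the most unrepresented
    features per unit cost until every feature is represented."""
    unrepresented = set().union(*sites.values())
    chosen: Set[str] = set()
    while unrepresented:
        best = max(sites,
                   key=lambda s: len(sites[s] & unrepresented) / cost[s]
                   if s not in chosen else -1.0)
        chosen.add(best)
        unrepresented -= sites[best]
    return chosen

# Toy example: features are land types A-D, cost is site area.
sites = {"s1": {"A", "B"}, "s2": {"B", "C"}, "s3": {"C", "D"}, "s4": {"A", "D"}}
cost = {"s1": 10.0, "s2": 4.0, "s3": 4.0, "s4": 12.0}
selected = greedy_reserve_selection(sites, cost)
print(selected, sum(cost[s] for s in selected))   # heuristic cost: 18.0
# Optimal here is {"s1", "s3"} with cost 14.0, so
# suboptimality (%) = 100 * (18.0 - 14.0) / 14.0 = ~28.6%.
```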

Relevance:

100.00%

Publisher:

Abstract:

Motivation: This paper introduces the software EMMIX-GENE, which has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples, by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic, used in conjunction with a threshold on the size of a cluster, allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, so the use of mixtures of factor analyzers is exploited to effectively reduce the dimension of the feature space of genes. Results: The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes can be selected that reveal interesting clusterings of the tissues that are consistent either with the external classification of the tissues or with background and biological knowledge of these sets.
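The gene-ranking step can be illustrated with a minimal sketch. The code below is not EMMIX-GENE: Gaussian mixture components (via scikit-learn) stand in for the t mixtures, and the data are synthetic, but it shows how the likelihood ratio statistic for one versus two components separates a bimodal (potentially informative) gene from a unimodal one.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def lr_statistic(expression: np.ndarray) -> float:
    """-2 log likelihood ratio for one vs two mixture components fitted
    to a single gene's expression values across tissues.  Gaussian
    components are used here as a stand-in for the t mixtures in
    EMMIX-GENE."""
    x = expression.reshape(-1, 1)
    g1 = GaussianMixture(n_components=1, random_state=0).fit(x)
    g2 = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(x)
    # .score() returns the mean log-likelihood per sample, so scale by n.
    return 2.0 * (g2.score(x) - g1.score(x)) * len(x)

rng = np.random.default_rng(1)
n_tissues = 60
# One clearly bimodal gene and one unimodal (uninformative) gene.
bimodal = np.concatenate([rng.normal(-2, 1, 30), rng.normal(2, 1, 30)])
unimodal = rng.normal(0, 1, n_tissues)

print("bimodal gene  LR:", round(lr_statistic(bimodal), 1))   # large
print("unimodal gene LR:", round(lr_statistic(unimodal), 1))  # small
# Genes would be ranked by this statistic and retained above a threshold.
```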

Relevance:

100.00%

Publisher:

Abstract:

Background: Based on the well-described excess of schizophrenia births in winter and spring, we hypothesised that individuals with schizophrenia (a) would be more likely to be born during periods of decreased perinatal sunshine, and (b) those born during periods of less sunshine would have an earlier age at first registration. Methods: We undertook an ecological analysis of long-term trends in perinatal sunshine duration and schizophrenia birth rates based on two mental health registers (Queensland, Australia, n = 6630; The Netherlands, n = 24,474). For each of the 480 months between 1931 and 1970, the agreement between the slopes of the trends in the psychosis and long-term sunshine duration series was assessed. Age at first registration was assessed by quartiles of long-term trends in perinatal sunshine duration. Males and females were assessed separately. Results: Both the Dutch and Australian data showed a statistically significant association between falling long-term trends in sunshine duration around the time of birth and rising schizophrenia birth rates for males only. In both the Dutch and Australian data there were significant associations between earlier age at first registration and reduced long-term trends in sunshine duration around the time of birth for both males and females. Conclusions: A measure of long-term trends in perinatal sunshine duration was associated with two epidemiological features of schizophrenia in two separate data sets. Exposures related to sunshine duration warrant further consideration in schizophrenia research. (C) 2002 Elsevier Science B.V. All rights reserved.
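A minimal sketch of the kind of slope-agreement comparison described in the Methods is given below; the synthetic series, window length and correlation measure are assumptions for illustration and do not reproduce the registers' actual analysis.

```python
import numpy as np

def rolling_slopes(series: np.ndarray, window: int = 121) -> np.ndarray:
    """Least-squares slope of a centred moving window, one value per month."""
    half = window // 2
    slopes = np.full(series.size, np.nan)
    t = np.arange(window)
    for i in range(half, series.size - half):
        y = series[i - half:i + half + 1]
        slopes[i] = np.polyfit(t, y, 1)[0]
    return slopes

rng = np.random.default_rng(0)
months = 480  # 1931-1970
trend = np.sin(np.linspace(0, 6 * np.pi, months))
sunshine = trend + rng.normal(0, 0.3, months)
births = -trend + rng.normal(0, 0.3, months)   # inverse relationship by construction

s1, s2 = rolling_slopes(sunshine), rolling_slopes(births)
mask = ~np.isnan(s1) & ~np.isnan(s2)
print("slope correlation:", round(np.corrcoef(s1[mask], s2[mask])[0, 1], 2))  # near -1
```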

Relevance:

100.00%

Publisher:

Abstract:

Remotely sensed data have been used extensively for environmental monitoring and modeling at a number of spatial scales; however, a limited range of satellite imaging systems often constrained the scales of these analyses. A wider variety of data sets is now available, allowing image data to be selected to match the scale of the environmental structure(s) or process(es) being examined. A framework is presented for use by environmental scientists and managers, enabling their spatial data collection needs to be linked to a suitable form of remotely sensed data. A six-step approach is used, combining image spatial analysis and scaling tools within the context of hierarchy theory. The main steps involved are: (1) identification of information requirements for the monitoring or management problem; (2) development of ideal image dimensions (scene model); (3) exploratory analysis of existing remotely sensed data using scaling techniques; (4) selection and evaluation of suitable remotely sensed data based on the scene model; (5) selection of suitable spatial analytic techniques to meet information requirements; and (6) cost-benefit analysis. Results from a case study show that the framework provided an objective mechanism to identify relevant aspects of the monitoring problem and environmental characteristics for selecting remotely sensed data and analysis techniques.

Relevance:

100.00%

Publisher:

Abstract:

An investigation was conducted to evaluate the impact of experimental designs and spatial analyses (single-trial models) on the response to selection for grain yield in the northern grains region of Australia (Queensland and northern New South Wales). Two sets of multi-environment experiments were considered. One set, based on 33 trials conducted from 1994 to 1996, was used to represent the testing system of the wheat breeding program and is referred to as the multi-environment trial (MET). The second set, based on 47 trials conducted from 1986 to 1993, sampled a more diverse set of years and management regimes and was used to represent the target population of environments (TPE). There were 18 genotypes in common between the MET and TPE sets of trials. From indirect selection theory, the phenotypic correlation coefficient between the MET and TPE single-trial adjusted genotype means [r(p(MT))] was used to determine the effect of the single-trial model on the expected indirect response to selection for grain yield in the TPE based on selection in the MET. Five single-trial models were considered: randomised complete block (RCB), incomplete block (IB), spatial analysis (SS), spatial analysis with a measurement error (SSM), and a combination of spatial analysis and experimental design information to identify the preferred (PF) model. Bootstrap-resampling methodology was used to construct multiple MET data sets, ranging in size from 2 to 20 environments per MET sample. The size and environmental composition of the MET and the single-trial model influenced r(p(MT)). On average, the PF model resulted in a higher r(p(MT)) than the IB, SS and SSM models, which were in turn superior to the RCB model for MET sizes based on fewer than ten environments. For METs based on ten or more environments, r(p(MT)) was similar for all single-trial models.
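The bootstrap step can be sketched briefly. The code below uses synthetic genotype means and only illustrates the resampling idea (sampling MET environments with replacement and correlating the resulting MET genotype means with the TPE means); it is not the study's single-trial or mixed-model analysis.

```python
import numpy as np

rng = np.random.default_rng(42)
n_genotypes, n_met_envs = 18, 33

# Synthetic adjusted genotype means (rows: genotypes) for the TPE and for
# each MET environment; in practice these come from the single-trial model.
tpe_means = rng.normal(0, 1, n_genotypes)
met_trials = tpe_means[:, None] * 0.6 + rng.normal(0, 1, (n_genotypes, n_met_envs))

def bootstrap_rp_mt(met_trials: np.ndarray, tpe_means: np.ndarray,
                    met_size: int, n_boot: int = 500) -> float:
    """Mean phenotypic correlation r_p(MT) over bootstrap MET samples of a given size."""
    corrs = np.empty(n_boot)
    for b in range(n_boot):
        envs = rng.integers(0, met_trials.shape[1], size=met_size)  # sample with replacement
        met_means = met_trials[:, envs].mean(axis=1)
        corrs[b] = np.corrcoef(met_means, tpe_means)[0, 1]
    return corrs.mean()

for size in (2, 5, 10, 20):
    print(size, round(bootstrap_rp_mt(met_trials, tpe_means, size), 2))
# r_p(MT) rises toward its ceiling as the MET includes more environments.
```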

Relevance:

100.00%

Publisher:

Abstract:

The mechanical behavior of the vertebrate skull is often modeled using free-body analysis of simple geometric structures and, more recently, finite-element (FE) analysis. In this study, we compare experimentally collected in vivo bone strain orientations and magnitudes from the cranium of the American alligator with those extrapolated from a beam model and extracted from an FE model. The strain magnitudes predicted from beam and FE skull models bear little similarity to relative and absolute strain magnitudes recorded during in vivo biting experiments. However, quantitative differences between principal strain orientations extracted from the FE skull model and recorded during the in vivo experiments were smaller, and both generally matched expectations from the beam model. The differences in strain magnitude between the data sets may be attributable to the level of resolution of the models, the material properties used in the FE model, and the loading conditions (i.e., external forces and constraints). This study indicates that FE models and modeling of skulls as simple engineering structures may give a preliminary idea of how these structures are loaded, but whenever possible, modeling results should be verified with either in vitro or preferably in vivo testing, especially if precise knowledge of strain magnitudes is desired. (c) 2005 Wiley-Liss, Inc.
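For context, the "simple beam model" referred to above amounts to elementary beam theory; the sketch below computes the surface bending strain of an end-loaded cantilever, epsilon = M*c / (E*I), using assumed, purely illustrative dimensions and load rather than the alligator data.

```python
# Surface bending strain for a cantilever of rectangular cross-section
# loaded at the free end: epsilon = M*c / (E*I), with M = F*L at the fixed end.
def cantilever_surface_strain(force_n: float, length_m: float,
                              width_m: float, depth_m: float,
                              youngs_modulus_pa: float) -> float:
    moment = force_n * length_m                     # bending moment at the fixed end
    second_moment = width_m * depth_m ** 3 / 12.0   # I for a rectangular section
    c = depth_m / 2.0                               # distance to the outer fibre
    return moment * c / (youngs_modulus_pa * second_moment)

# Illustrative values only: 100 N load, 0.2 m beam, 0.04 x 0.03 m section,
# 10 GPa "bone-like" modulus -> surface strain reported in microstrain.
strain = cantilever_surface_strain(100.0, 0.2, 0.04, 0.03, 10e9)
print(round(strain * 1e6, 1), "microstrain")   # ~333 microstrain
```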

Relevance:

100.00%

Publisher:

Abstract:

Comprehensive published radiocarbon data from selected atmospheric records, tree rings, and recent organic matter were analyzed and grouped into 4 different zones (three for the Northern Hemisphere and one for the whole Southern Hemisphere). These C-14 data for the summer season of each hemisphere were employed to construct zonal, hemispheric, and global data sets for use in regional and global carbon model calculations including calibrating and comparing carbon cycle models. In addition, extended monthly atmospheric C-14 data sets for 4 different zones were compiled for age calibration purposes. This is the first time these data sets were constructed to facilitate the dating of recent organic material using the bomb C-14 curves. The distribution of bomb C-14 reflects the major zones of atmospheric circulation.
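To illustrate how a zonal bomb C-14 curve is used to date recent organic material, the sketch below interpolates a measured Delta-14C value against a hypothetical curve fragment; the values are invented placeholders rather than the compiled data sets, and real calibration software additionally handles measurement error and curve smoothing.

```python
import numpy as np

# Hypothetical fragment of a zonal atmospheric bomb C-14 curve:
# calendar year vs. atmospheric Delta14C (per mil).  Real curves rise
# steeply to the mid-1960s peak and decline afterwards.
years = np.array([1955.0, 1960.0, 1963.0, 1965.0, 1970.0, 1980.0, 1990.0])
delta14c = np.array([20.0, 250.0, 700.0, 800.0, 550.0, 270.0, 150.0])

def candidate_years(sample_delta14c: float) -> list:
    """Return calendar years where the curve crosses the measured value.
    Because the bomb curve rises then falls, a measurement usually maps
    to two candidate dates (one on each limb)."""
    hits = []
    for i in range(len(years) - 1):
        lo, hi = sorted((delta14c[i], delta14c[i + 1]))
        if lo <= sample_delta14c <= hi:
            # Linear interpolation within the bracketing segment.
            frac = (sample_delta14c - delta14c[i]) / (delta14c[i + 1] - delta14c[i])
            hits.append(round(years[i] + frac * (years[i + 1] - years[i]), 1))
    return hits

print(candidate_years(300.0))  # one date on the rising limb, one on the falling limb
```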

Relevance:

100.00%

Publisher:

Abstract:

With the rapid increase in both centralized video archives and distributed WWW video resources, content-based video retrieval is gaining in importance. To support such applications efficiently, content-based video indexing must be addressed. Typically, each video is represented by a sequence of frames. Due to the high dimensionality of the frame representation and the large number of frames, video indexing introduces an additional degree of complexity. In this paper, we address the problem of content-based video indexing and propose an efficient solution, called the Ordered VA-File (OVA-File), based on the VA-file. OVA-File is a hierarchical structure and has two novel features: 1) partitioning the whole file into slices such that only a small number of slices are accessed and checked during k Nearest Neighbor (kNN) search, and 2) efficient handling of insertions of new vectors into the OVA-File, such that the average distance between the new vectors and those approximations near that position is minimized. To facilitate a search, we present an efficient approximate kNN algorithm named Ordered VA-LOW (OVA-LOW) based on the proposed OVA-File. OVA-LOW first chooses candidate OVA-Slices by ranking the distances between their corresponding centers and the query vector, and then visits all approximations in the selected OVA-Slices to compute the approximate kNN. The number of candidate OVA-Slices is controlled by a user-defined parameter delta. By adjusting delta, OVA-LOW provides a trade-off between query cost and result quality. Query by video clip, consisting of multiple frames, is also discussed. Extensive experimental studies using real video data sets were conducted, and the results showed that our methods can yield a significant speed-up over an existing VA-file-based method and iDistance, with high query result quality. Furthermore, by incorporating the temporal correlation of video content, our methods achieved much more efficient performance.
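The slice-ranking idea behind OVA-LOW can be sketched with plain arrays. The code below is an illustration under assumptions, not the paper's implementation: a random-projection ordering stands in for the OVA-File organisation, and exact vectors are scanned rather than VA approximations. It shows how ranking slice centers and scanning only the delta closest slices trades query cost against result quality.

```python
import numpy as np

def build_slices(vectors: np.ndarray, n_slices: int, rng: np.random.Generator):
    """Partition vectors into ordered slices via a crude 1-D ordering of
    their projections (a stand-in for the OVA-File organisation)."""
    order = np.argsort(vectors @ rng.normal(size=vectors.shape[1]))
    slices = np.array_split(order, n_slices)
    centers = np.stack([vectors[idx].mean(axis=0) for idx in slices])
    return slices, centers

def approx_knn(query: np.ndarray, vectors: np.ndarray,
               slices, centers, k: int = 5, delta: int = 3) -> np.ndarray:
    """OVA-LOW-style search: rank slices by center distance to the query,
    then scan only the delta closest slices for the k nearest vectors."""
    ranked = np.argsort(np.linalg.norm(centers - query, axis=1))[:delta]
    candidates = np.concatenate([slices[s] for s in ranked])
    dists = np.linalg.norm(vectors[candidates] - query, axis=1)
    return candidates[np.argsort(dists)[:k]]

rng = np.random.default_rng(0)
frames = rng.normal(size=(10_000, 64))       # frame feature vectors
slices, centers = build_slices(frames, n_slices=100, rng=rng)
query = rng.normal(size=64)
print(approx_knn(query, frames, slices, centers, k=5, delta=3))
# Larger delta -> higher result quality at higher query cost, as in the paper.
```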