94 results for Implicit calibration
Abstract:
Age is a critical determinant of an adult female mosquito's ability to transmit a range of human pathogens. Despite its central importance, relatively few methods exist with which to accurately determine chronological age of field-caught mosquitoes. This fact is a major constraint on our ability to fully understand the relative importance of vector longevity to disease transmission in different ecological contexts. It also limits our ability to evaluate novel disease control strategies that specifically target mosquito longevity. We report the development of a transcriptional profiling approach to determine age of adult female Aedes aegypti under field conditions. We demonstrate that this approach surpasses current cuticular hydrocarbon methods for both accuracy of predicted age as well as the upper limits at which age can be reliably predicted. The method is based on genes that display age-dependent expression in a range of dipteran insects and, as such, is likely to be broadly applicable to other disease vectors.
Abstract:
We report complex ac magnetic susceptibility measurements of a superconducting transition in very high-quality single-crystal alpha-uranium using microfabricated coplanar magnetometers. We identify an onset of superconductivity at T ≈ 0.7 K in both the real and imaginary components of the susceptibility, which is confirmed by resistivity data. A superconducting volume fraction argument, based on a comparison with a calibration YBa2Cu3O7-delta sample, indicates that superconductivity in these samples may be filamentary. Our data also demonstrate the sensitivity of the coplanar micro-magnetometers, which are ideally suited to measurements in pulsed magnetic fields exceeding 100 T.
Abstract:
Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information from large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured, and values can be textual, categorical or numerical. One of the important characteristics of data mining is its ability to handle data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rule mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied to decision-making problems, and sequential and time series mining algorithms can be used in predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. A number of classification algorithms are in practical use. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probabilistic methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); and example-based methods such as k-nearest neighbors (Duda & Hart, 1973) and SVMs (Cortes & Vapnik, 1995). Other important techniques for classification tasks include Associative Classification (Liu et al., 1998) and Ensemble Classification (Tumer, 1996).
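As a minimal illustration of one of the example-based methods listed above, a k-nearest-neighbors classifier in the style of Duda & Hart can be sketched as follows (the toy data and function names are invented for illustration):

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.

    `train` is a list of (feature_vector, label) pairs; distances are Euclidean.
    """
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy usage: two well-separated clusters.
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((5.0, 5.0), "b"), ((5.2, 4.9), "b")]
print(knn_classify(train, (0.2, 0.1)))  # prints "a"
```

The method stores all training examples and defers computation to query time, which is why it is classed as "example-based" rather than model-based.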
Abstract:
Polytomous Item Response Theory Models provides a unified, comprehensive introduction to the range of polytomous models available within item response theory (IRT). It begins by outlining the primary structural distinction between the two major types of polytomous IRT models. This focuses on the two types of response probability that are unique to polytomous models and their associated response functions, which are modeled differently by the different types of IRT model. It describes, both conceptually and mathematically, the major specific polytomous models, including the Nominal Response Model, the Partial Credit Model, the Rating Scale Model, and the Graded Response Model. Important variations, such as the Generalized Partial Credit Model, are also described, as are less common variations, such as the Rating Scale version of the Graded Response Model. Relationships among the models are also investigated, and the operation of measurement information is described for each major model. Practical examples of the major models using real data are provided, as is a chapter on choosing an appropriate model. Figures are used throughout to illustrate important elements as they are described.
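As an illustration of the boundary-curve construction that distinguishes these models, the Graded Response Model's category probabilities can be computed as differences of cumulative logistic curves (a standard formulation, sketched here; parameter values are illustrative):

```python
import math

def grm_category_probs(theta, a, b):
    """Graded Response Model category probabilities for one item.

    theta: latent trait value; a: discrimination; b: ordered list of
    category thresholds. Each boundary curve is a cumulative probability
    P*_k = logistic(a * (theta - b_k)); the probability of responding in
    category k is the difference of adjacent boundary curves.
    """
    star = [1.0] + [1.0 / (1.0 + math.exp(-a * (theta - bk))) for bk in b] + [0.0]
    return [star[k] - star[k + 1] for k in range(len(b) + 1)]

# A 4-category item with symmetric thresholds; at theta = 0 the
# outer categories are equally likely, as are the inner ones.
probs = grm_category_probs(theta=0.0, a=1.5, b=[-1.0, 0.0, 1.0])
```

Because the boundary curves are ordered for any positive discrimination, the resulting category probabilities are always non-negative and sum to one.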
Abstract:
Previous work has identified several shortcomings in the ability of four spring wheat models and one barley model to simulate crop processes and resource utilization. This can have important implications when such models are used within systems models, where the final soil water and nitrogen conditions of one crop define the starting conditions of the following crop. In an attempt to overcome these limitations and to reconcile a range of modelling approaches, existing model components that worked demonstrably well were combined with new components for aspects where existing capabilities were inadequate. This resulted in the Integrated Wheat Model (I_WHEAT), which was developed as a module of the cropping systems model APSIM. To increase the predictive capability of the model, process detail was reduced, where possible, by replacing groups of processes with conservative, biologically meaningful parameters. I_WHEAT does not contain a soil water or soil nitrogen balance. These are present as other modules of APSIM. In I_WHEAT, yield is simulated using a linear increase in harvest index, whereby nitrogen or water limitations can lead to early termination of grain filling and hence cessation of the harvest index increase. Dry matter increase is calculated either from the amount of intercepted radiation and radiation conversion efficiency or from the amount of water transpired and transpiration efficiency, depending on the most limiting resource. Leaf area and tiller formation are calculated from thermal time and a cultivar-specific phyllochron interval. Nitrogen limitation first reduces leaf area and then affects radiation conversion efficiency as it becomes more severe. Water or nitrogen limitations result in reduced leaf expansion, accelerated leaf senescence or tiller death. This reduces the radiation load on the crop canopy (i.e. demand for water) and can make nitrogen available for translocation to other organs.
Sensitive feedbacks between light interception and dry matter accumulation are avoided by having environmental effects acting directly on leaf area development, rather than via biomass production. This makes the model more stable across environments without losing the interactions between the different external influences. When comparing model output with models tested previously using data from a wide range of agro-climatic conditions, yield and biomass predictions were equal to the best of those models, but improvements could be demonstrated for simulating leaf area dynamics in response to water and nitrogen supply, kernel nitrogen content, and total water and nitrogen use. I_WHEAT does not require calibration for any of the environments tested. Further model improvement should concentrate on improving phenology simulations, a more thorough derivation of coefficients to describe leaf area development and a better quantification of some processes related to nitrogen dynamics. (C) 1998 Elsevier Science B.V.
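The linear harvest-index scheme with early termination of grain filling can be sketched as follows (function names and all parameter values here are invented for illustration, not I_WHEAT's calibrated coefficients):

```python
def harvest_index(days, rate=0.0185, hi_max=0.55, stop_day=None):
    """Harvest index increasing linearly with days after anthesis,
    capped at hi_max. An early stop_day (e.g. set when water or
    nitrogen limitation terminates grain filling) freezes the index."""
    effective = days if stop_day is None else min(days, stop_day)
    return min(hi_max, rate * effective)

def grain_yield(biomass, days, **kw):
    """Yield as above-ground biomass times the current harvest index."""
    return biomass * harvest_index(days, **kw)

# Unstressed crop vs. one whose grain filling stopped at day 20.
y_full = grain_yield(1000.0, 40)
y_stressed = grain_yield(1000.0, 40, stop_day=20)
```

The design choice mirrors the abstract: stress acts by truncating the harvest-index trajectory rather than by an explicit feedback through biomass production.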
Abstract:
Bulk density of undisturbed soil samples can be measured using computed tomography (CT) techniques with a spatial resolution of about 1 mm. However, this technique may not be readily accessible. On the other hand, x-ray radiographs have only been considered as qualitative images to describe morphological features. A calibration procedure was set up to generate two-dimensional, high-resolution bulk density images from x-ray radiographs made with a conventional x-ray diffraction apparatus. Test bricks were made to assess the accuracy of the method. Slices of impregnated soil samples were made using hardsetting seedbeds that had been gamma scanned at 5-mm depth increments in a previous study. The calibration procedure involved three stages: (i) calibration of the image grey levels in terms of glass thickness using a staircase made from glass cover slips, (ii) measurement of the ratio between the soil and resin mass attenuation coefficients and the glass mass attenuation coefficient, using compacted bricks of known thickness and bulk density, and (iii) image correction accounting for the heterogeneity of the irradiation field. The procedure was simple and rapid, and the equipment was easily accessible. The accuracy of the bulk density determination was good (mean relative error 0.015). The bulk density images showed good spatial resolution, so that many structural details could be observed. The depth functions were consistent with both the global shrinkage and the gamma probe data previously obtained. The suggested method would be easily applied to the new fuzzy set approach to soil structure, which requires generation of bulk density images. It would also be an invaluable tool for studies requiring high-resolution bulk density measurement, such as studies on soil surface crusts.
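Stages (i) and (ii) amount to inverting a Beer-Lambert attenuation relation; the following is a minimal sketch under assumed functional forms (the exponential grey-level model, constants, and function names are illustrative, not the authors' exact calibration):

```python
import math

GLASS_DENSITY = 2.5   # g/cm^3; assumed density of the cover-slip glass
ATTEN_RATIO = 1.1     # assumed (mu/rho)_soil / (mu/rho)_glass from stage (ii)

def glass_equiv_thickness(grey, grey0, k):
    """Stage (i): invert an assumed exponential grey-level calibration
    grey = grey0 * exp(-k * t_glass) to obtain the glass-equivalent
    thickness that would produce the same attenuation."""
    return math.log(grey0 / grey) / k

def bulk_density(t_glass, t_slice, rho_glass=GLASS_DENSITY, ratio=ATTEN_RATIO):
    """Stage (ii): equate attenuation of the soil slice and its
    equivalent glass stack,
    (mu/rho)_g * rho_g * t_g = (mu/rho)_s * rho_s * t_s,
    and solve for the soil bulk density rho_s."""
    return rho_glass * t_glass / (ratio * t_slice)
```

Applied pixel-by-pixel (after the stage (iii) field correction), this maps a radiograph's grey levels into a two-dimensional bulk density image.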
Abstract:
Subcycling algorithms which employ multiple timesteps have been previously proposed for explicit direct integration of first- and second-order systems of equations arising in finite element analysis, as well as for integration using explicit/implicit partitions of a model. The author has recently extended this work to implicit/implicit multi-timestep partitions of both first- and second-order systems. In this paper, improved algorithms for multi-timestep implicit integration are introduced, that overcome some weaknesses of those proposed previously. In particular, in the second-order case, improved stability is obtained. Some of the energy conservation properties of the Newmark family of algorithms are shown to be preserved in the new multi-timestep extensions of the Newmark method. In the first-order case, the generalized trapezoidal rule is extended to multiple timesteps, in a simple way that permits an implicit/implicit partition. Explicit special cases of the present algorithms exist. These are compared to algorithms proposed previously. (C) 1998 John Wiley & Sons, Ltd.
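The generalized trapezoidal rule underlying the first-order algorithms can be sketched for a single timestep and a scalar test equation (a minimal illustration only, not the paper's multi-timestep implicit/implicit partition):

```python
def trapezoidal_step(u, lam, dt, theta=0.5):
    """One step of the generalized trapezoidal rule for u' = lam * u:
        u_new = u + dt * ((1 - theta) * lam * u + theta * lam * u_new),
    solved in closed form for the scalar case. theta = 0 gives explicit
    forward Euler, theta = 1 backward Euler, theta = 0.5 the classical
    trapezoidal (Crank-Nicolson) rule."""
    return u * (1.0 + (1.0 - theta) * dt * lam) / (1.0 - theta * dt * lam)

# Decay problem u' = -u, u(0) = 1, integrated to t = 1.
u, dt = 1.0, 0.01
for _ in range(100):
    u = trapezoidal_step(u, -1.0, dt)
# u is close to exp(-1) ≈ 0.3679
```

In a multi-timestep partition of the kind discussed above, different components of the system would take steps of different sizes while an interface treatment couples them; the scalar update is the building block of each subcycle.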
Abstract:
We modified the noninvasive, in vivo technique for strain application in the tibiae of rats (Turner et al., Bone 12:73-79, 1991). The original model applies four-point bending to right tibiae via an open-loop, stepper-motor-driven spring linkage. Depending on the magnitude of applied load, the model produces new bone formation at periosteal (Ps.S) or endocortical surfaces (Ec.S). Due to the spring linkage, however, the range of frequencies at which loads can be applied is limited. The modified system replaces this design with an electromagnetic vibrator. A load transducer in series with the loading points allows calibration, the loaders' position to be adjusted, and cyclic loading to be completed under load control as a closed servo-loop. Two experiments were conducted to validate the modified system: (1) a strain gauge was applied to the lateral surface of the right tibia of 5 adult female rats and strains measured at applied loads from 10 to 60 N; and (2) the bone formation response was determined in 28 adult female Sprague-Dawley rats. Loading was applied as a haversine wave with a frequency of 2 Hz for 18 sec, every second day for 10 days. Peak bending loads were applied at 33, 40, 52, and 64 N, and a sham-loading group was included at 64 N. Strains in the tibiae were linear between 10 and 60 N, and the average peak strain at the Ps.S at 60 N was 2664 +/- 250 microstrain, consistent with the results of Turner's group. Lamellar bone formation was stimulated at the Ec.S by applied bending, but not by sham loading. Bending strains above a loading threshold of 40 N increased Ec.S lamellar bone formation rate, bone forming surface, and mineral apposition rate with a dose response similar to that reported by Turner et al. (J Bone Miner Res 9:87-97, 1994).
We conclude that the modified loading system offers precision for applied loads of between 0 and 70 N, versatility in the selection of loading rates up to 20 Hz, and a reproducible bone formation response in the rat tibia. Adjustment of the loader also enables study of mechanical usage in murine tibiae, an advantage with respect to the increasing variety of transgenic strains available in bone and mineral research. (Bone 23:307-310; 1998) (C) 1998 by Elsevier Science Inc. All rights reserved.
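The haversine loading waveform used in the protocol above has a simple closed form (a sketch; the function name is invented):

```python
import math

def haversine_load(t, peak, freq=2.0):
    """Haversine loading waveform: rises smoothly from 0 to `peak` and
    back once per cycle, load(t) = peak * (1 - cos(2*pi*freq*t)) / 2.
    The 2 Hz, 18 s protocol above corresponds to freq = 2.0 over 36
    cycles, with `peak` set to the bending load (33 to 64 N)."""
    return peak * (1.0 - math.cos(2.0 * math.pi * freq * t)) / 2.0
```

Unlike a plain sine offset to positive values, the haversine starts and ends each cycle at zero load, which suits a bending rig that cannot apply negative (lift-off) loads.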
Abstract:
Methods employing continuum approximation in describing the deformation of layered materials possess a clear advantage over explicit models. However, conventional implicit models based on the theory of anisotropic continua suffer from certain difficulties associated with interface slip and internal instabilities. These difficulties can be remedied by considering the bending stiffness of the layers. This implies the introduction of moment (couple) stresses and internal rotations, which leads to a Cosserat-type theory. In the present model, the behaviour of the layered material is assumed to be linearly elastic; the interfaces are assumed to be elastic perfectly plastic. Conditions of slip or no slip at the interfaces are detected by a Coulomb criterion with tension cut-off at zero normal stress. The theory is valid for large deformation analysis. The model is incorporated into the finite element program AFENA and validated against analytical solutions of elementary buckling problems in a layered medium. A problem associated with buckling of the roof and the floor of a rectangular excavation in a jointed rock mass under high horizontal in situ stresses is considered as the main application of the theory. Copyright (C) 1999 John Wiley & Sons, Ltd.
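The Coulomb slip criterion with tension cut-off can be sketched as a simple pointwise check (the sign convention and the treatment of the tensile case as interface failure are assumptions made here for illustration):

```python
def interface_slips(tau, sigma_n, cohesion, mu):
    """Coulomb slip check with tension cut-off at zero normal stress.

    sigma_n > 0 is taken as compression (an assumed convention). Under
    tension (sigma_n <= 0) the cut-off applies and the interface is
    treated as failed; otherwise slip occurs when the shear stress
    magnitude exceeds the Coulomb strength cohesion + mu * sigma_n.
    """
    if sigma_n <= 0.0:  # tension cut-off: interface opens
        return True
    return abs(tau) > cohesion + mu * sigma_n
```

In a finite element implementation such a check would be evaluated at each integration point to switch the interface between its elastic and perfectly plastic states.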
Abstract:
The Fornax Spectroscopic Survey will use the Two degree Field spectrograph (2dF) of the Anglo-Australian Telescope to obtain spectra for a complete sample of all 14000 objects with 16.5 less than or equal to b(j) less than or equal to 19.7 in a 12 square degree area centred on the Fornax Cluster. The aims of this project include the study of dwarf galaxies in the cluster (both known low surface brightness objects and putative normal surface brightness dwarfs) and a comparison sample of background field galaxies. We will also measure quasars and other active galaxies, any previously unrecognised compact galaxies, and a large sample of Galactic stars. By selecting all objects, both stars and galaxies, independent of morphology, we cover a much larger range of surface brightness and scale size than previous surveys. In this paper we first describe the design of the survey. Our targets are selected from UK Schmidt Telescope sky survey plates digitised by the Automated Plate Measuring (APM) facility. We then describe the photometric and astrometric calibration of these data and show that the APM astrometry is accurate enough for use with the 2dF. We also describe a general approach to object identification using cross-correlations which allows us to identify and classify both stellar and galaxy spectra. We present results from the first 2dF field. Redshift distributions and velocity structures are shown for all observed objects in the direction of Fornax, including Galactic stars, galaxies in and around the Fornax Cluster, and the background galaxy population. The velocity data for the stars show the contributions from the different Galactic components, plus a small tail to high velocities. We find no galaxies in the foreground to the cluster in our 2dF field. The Fornax Cluster is clearly defined kinematically. The mean velocity from the 26 cluster members having reliable redshifts is 1560 +/- 80 km s(-1). They show a velocity dispersion of 380 +/- 50 km s(-1).
Large-scale structure can be traced behind the cluster to a redshift beyond z = 0.3. Background compact galaxies and low surface brightness galaxies are found to follow the general galaxy distribution.
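The cluster mean velocity and line-of-sight dispersion quoted above are standard sample statistics; a minimal sketch follows (the unbiased n-1 estimator used here is one common convention and is an assumption, not necessarily the survey's own estimator):

```python
import math

def cluster_kinematics(velocities):
    """Mean recession velocity and line-of-sight velocity dispersion
    (both in km/s) for a list of cluster member velocities, using the
    unbiased (n - 1) sample standard deviation."""
    n = len(velocities)
    mean = sum(velocities) / n
    disp = math.sqrt(sum((v - mean) ** 2 for v in velocities) / (n - 1))
    return mean, disp
```

With 26 members, the quoted uncertainties on the mean and dispersion would typically come from the standard error of the mean and a comparable estimator for the dispersion's error.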
Abstract:
The public-health attention given to deaths caused by illicit drug use in general, and by drug overdose in particular, should be commensurate with their contribution to premature death. For too long these deaths have been regarded as an unavoidable hazard of illicit drug use, their neglect abetted by the implicit view that the lives of illicit drug users are less deserving of being saved than those of others. In its report published this week,1 the UK Advisory Council on the Misuse of Drugs (ACMD) has rejected these implicit assumptions. Its view is that “drug-related deaths can, will and must in the near future be radically reduced in number”. It points out that the effort that society expends on preventing premature deaths “should apply no less to drug misusers than it does to other classes of people”.1
Abstract:
Recent research has begun to provide support for the assumptions that memories are stored as a composite and are accessed in parallel (Tehan & Humphreys, 1998). New predictions derived from these assumptions, and from the Chappell and Humphreys (1994) implementation of these assumptions, were tested. In three experiments, subjects studied relatively short lists of words. Some of the lists contained two similar targets (thief and theft) or two dissimilar targets (thief and steal) associated with the same cue (ROBBERY). As predicted, target similarity affected performance in cued recall but not free association. Contrary to predictions, two spaced presentations of a target did not improve performance in free association. Two additional experiments confirmed and extended this finding. Several alternative explanations for the target similarity effect, which incorporate assumptions about separate representations and sequential search, are rejected. The importance of the finding that, in at least one implicit memory paradigm, repetition does not improve performance is also discussed.
Abstract:
Urbanization and the ability to manage for a sustainable future present numerous challenges for geographers and planners in metropolitan regions. Remotely sensed data are inherently suited to provide information on urban land cover characteristics, and their change over time, at various spatial and temporal scales. Data models for establishing the range of urban land cover types and their biophysical composition (vegetation, soil, and impervious surfaces) are integrated to provide a hierarchical approach to classifying land cover within urban environments. These data also provide an essential component for current simulation models of urban growth patterns, as both calibration and validation data. The first stages of the approach have been applied to examine urban growth between 1988 and 1995 for a rapidly developing area in southeast Queensland, Australia. Landsat Thematic Mapper image data provided accurate (83% adjusted overall accuracy) classification of broad land cover types and their change over time. The combination of commonly available remotely sensed data, image processing methods, and emerging urban growth models highlights an important application for current and next generation moderate spatial resolution image data in studies of urban environments.
Abstract:
Hydrothermal alteration of a quartz-K-feldspar rock is simulated numerically by coupling fluid flow and chemical reactions. Introduction of CO2 gas generates an acidic fluid and produces secondary quartz, muscovite and/or pyrophyllite at a constant temperature and pressure of 300 degrees C and 200 MPa. The precipitation and/or dissolution of the secondary minerals is controlled by either mass-action relations or rate laws. In our simulations the mass of the primary elements is conserved, and the mass-balance equations are solved sequentially using an implicit scheme in a finite-element code. The pore-fluid velocity is assumed to be constant. The change of rock volume due to the dissolution or precipitation of the minerals, which is directly related to their molar volume, is taken into account. Feedback into the rock porosity and the reaction rates is included in the model. The model produces zones of pyrophyllite, quartz and muscovite due to the dissolution of K-feldspar. Our model simulates, in a simplified way, the acid-induced alteration assemblages observed in various guises in many significant mineral deposits. The particular aluminosilicate minerals produced in these experiments are associated with the gold deposits of the Witwatersrand Basin.
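The stabilizing effect of an implicit scheme on a rate-law update can be illustrated with a minimal backward-Euler step for a first-order dissolution/precipitation rate law (a scalar sketch only, not the paper's finite-element formulation; names and values are illustrative):

```python
def implicit_reaction_step(c, c_eq, k, dt):
    """Backward-Euler update for the first-order rate law
        dc/dt = k * (c_eq - c),
    where c is a mineral-forming solute concentration relaxing toward
    its equilibrium value c_eq. Solving the implicit relation
        c_new = c + dt * k * (c_eq - c_new)
    gives the closed form below; unlike an explicit update, it remains
    stable for arbitrarily large dt."""
    return (c + dt * k * c_eq) / (1.0 + dt * k)

# Undersaturated fluid relaxing toward equilibrium.
c = 0.0
for _ in range(50):
    c = implicit_reaction_step(c, c_eq=1.0, k=2.0, dt=0.1)
# c approaches the equilibrium concentration c_eq = 1.0
```

In a sequential (operator-split) solution like the one described above, a transport step and an implicit reaction step of this kind would alternate within each timestep.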