997 results for Data curvature


Relevance: 20.00%

Publisher:

Abstract:

Cu K-edge EXAFS spectra of Cu-Ni/Al2O3 and Cu-ZnO catalysts, both of which contain more than one Cu species, have been analysed making use of an additive relation for the EXAFS function. The analysis, which also makes use of residual spectra for identifying the species, shows good agreement between experimental and calculated spectra.
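The "additive relation" referred to here states that the EXAFS of a multi-species sample is the fraction-weighted sum of the species spectra. Below is a minimal sketch of fitting those fractions by least squares, with purely illustrative reference spectra (the paper's actual data and species are not reproduced here):

```python
import numpy as np

# Additive relation: chi_total(k) = sum_i x_i * chi_i(k), i.e. the measured
# EXAFS of a multi-species sample is a fraction-weighted sum of reference spectra.
k = np.linspace(2, 12, 200)                   # photoelectron wavenumber (1/Angstrom)
chi_cu_metal = np.sin(2 * 2.55 * k) / k**2    # placeholder reference spectra
chi_cu_oxide = np.sin(2 * 1.85 * k) / k**2
chi_measured = 0.6 * chi_cu_metal + 0.4 * chi_cu_oxide

# Least-squares estimate of the species fractions
A = np.column_stack([chi_cu_metal, chi_cu_oxide])
fractions, *_ = np.linalg.lstsq(A, chi_measured, rcond=None)

# Residual spectrum: remaining structure here would flag an unidentified species
residual = chi_measured - A @ fractions
```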

Relevance: 20.00%

Publisher:

Abstract:

Recent laboratory investigations have shown that rotation and (streamwise) curvature can have spectacular effects on momentum transport in turbulent shear flows. A simple model that takes account of these effects (based on an analogy with buoyant flows) utilises counterparts of the Richardson number Rg and the Monin-Obukhov length. Estimates of Rg for meanders in ocean currents like the Gulf Stream show it to be of order 1 or more, while laboratory investigations reveal strong effects even at |Rg| ∼ 0.1. These considerations lead to the conclusion that at a cyclonic bend in the Gulf Stream, a highly unstable flow in the outer half of the jet rides over a highly stable flow in the inner half. It is conjectured that the discrepancies noticed between observation and the various theories of Gulf Stream meanders, and such phenomena as the observed detachment of eddies from the Gulf Stream, may be due to the effects of curvature and rotation on turbulent transport.
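As a rough plausibility check of the "order 1 or more" claim, assuming the curvature Richardson number takes the Bradshaw-analogy form Rg ≈ (2V/R)/(∂V/∂n) (an assumption; the abstract does not give the definition), with illustrative Gulf Stream numbers:

```python
# Order-of-magnitude estimate of the curvature Richardson number for a
# Gulf Stream meander, assuming Rg ~ (2*V/R) / (dV/dn).
V = 1.0                 # jet speed (m/s), illustrative
R = 100e3               # radius of curvature of the meander (m), illustrative
half_width = 50e3       # jet half-width (m), illustrative
dVdn = V / half_width   # crude cross-stream shear estimate
Rg = (2 * V / R) / dVdn
print(Rg)               # ~1, consistent with "order 1 or more"
```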

Relevance: 20.00%

Publisher:

Abstract:

Objective: Vast amounts of injury narratives are collected daily and are available electronically in real time; they have great potential for use in injury surveillance and evaluation. Machine learning algorithms have been developed to assist in identifying cases and classifying mechanisms leading to injury in a much timelier manner than is possible when relying on manual coding of narratives. The aim of this paper is to describe the background, growth, value, challenges and future directions of machine learning as applied to injury surveillance. Methods: This paper reviews key aspects of machine learning using injury narratives, providing a case study to demonstrate the application of an established human-machine learning approach. Results: The range of applications and the utility of narrative text have increased greatly with advancements in computing techniques over time. Practical and feasible methods exist for semi-automatic classification of injury narratives which are accurate, efficient and meaningful. The human-machine learning approach described in the case study achieved high sensitivity and positive predictive value and reduced the need for human coding to less than one-third of cases in one large occupational injury database. Conclusion: The last 20 years have seen a dramatic change in the potential for technological advancements in injury surveillance. Machine learning of ‘big injury narrative data’ opens up many possibilities for expanded data sources which can provide more comprehensive, ongoing and timely surveillance to inform future injury prevention policy and practice.
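A minimal sketch of the kind of semi-automatic (human-machine) triage the abstract describes: a classifier handles confident predictions and routes uncertain narratives to human coders. The classifier choice, threshold and example narratives are all hypothetical, not the study's actual pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: narratives with manually assigned mechanism codes
narratives = ["worker slipped on wet floor", "hand caught in press",
              "fell from ladder while painting", "struck by falling box"]
codes = ["fall_same_level", "caught_in", "fall_from_height", "struck_by"]

vec = TfidfVectorizer()
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(narratives), codes)

# Semi-automatic triage: accept confident predictions automatically and route
# the rest to human coders (the abstract reports <1/3 needing manual coding).
proba = clf.predict_proba(vec.transform(["employee tripped over cable"]))[0]
label = clf.classes_[proba.argmax()] if proba.max() >= 0.9 else "REFER_TO_HUMAN_CODER"
```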

Relevance: 20.00%

Publisher:

Abstract:

[From Preface] The Consumer Expenditure Survey is among the oldest publications of the Bureau of Labor Statistics. With information on the expenditures, incomes, and demographic characteristics of households, the survey documents the spending patterns and economic status of American families. This report offers a new approach to the use of Consumer Expenditure Survey data. Normally, the survey presents an in-depth look at American households at a specific point in time, the reference period being a calendar year. Here, the authors use consumer expenditure data longitudinally and draw on information from decennial census reports to present a 100-year history of significant changes in consumer spending, economic status, and family demographics in the country as a whole, as well as in New York City and Boston.

Relevance: 20.00%

Publisher:

Abstract:

[Excerpt] The effects of framing on decisions have been widely studied, producing research that suggests individuals respond to framing in predictable and fairly consistent ways (Bazerman, 1984, 1990; Tversky & Kahneman, 1986; Thaler, 1980). The essential finding from this body of research is that "individuals treat risks concerning perceived gains (for example, saving jobs and plants) differently from risks concerning perceived losses (losing jobs and plants)" (Bazerman, 1990, pp. 49-50). Specifically, individuals tend to avoid risks concerning gains, and to seek risks concerning losses.

Relevance: 20.00%

Publisher:

Abstract:

We obtain stringent bounds in the $\langle r^2 \rangle_S^{K\pi}$–$c$ plane, where these are the scalar radius and the curvature parameter of the scalar $K\pi$ form factor, respectively, using analyticity and dispersion-relation constraints, and the knowledge of the form factor at the well-known Callan-Treiman point $m_K^2 - m_\pi^2$, as well as at $m_\pi^2 - m_K^2$, which we call the second Callan-Treiman point. The central values of these parameters from a recent determination are accommodated in the allowed region provided the higher-loop corrections to the value of the form factor at the second Callan-Treiman point reduce the one-loop result by about 3% with $F_K/F_\pi = 1.21$. Such a variation in magnitude at the second Callan-Treiman point yields $0.12\ \mathrm{fm}^2 \lesssim \langle r^2 \rangle_S^{K\pi} \lesssim 0.21\ \mathrm{fm}^2$ and $0.56\ \mathrm{GeV}^{-4} \lesssim c \lesssim 1.47\ \mathrm{GeV}^{-4}$, and a strong correlation between them. A smaller value of $F_K/F_\pi$ shifts both bounds to lower values.
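For reference, the scalar radius and the curvature $c$ are the first two coefficients in the conventional low-energy expansion of the form factor (the standard convention, not spelled out in the abstract itself):

```latex
F_S^{K\pi}(t) = F_S^{K\pi}(0)\left[ 1 + \frac{1}{6}\,\langle r^2 \rangle_S^{K\pi}\, t + c\, t^2 + \mathcal{O}(t^3) \right]
```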

Relevance: 20.00%

Publisher:

Abstract:

To protect terrestrial ecosystems and humans from contaminants, many countries and jurisdictions have developed soil quality guidelines (SQGs). This study proposes a new framework to derive SQGs and guidelines for amended soils, and uses a case study based on phytotoxicity data of copper (Cu) and zinc (Zn) from field studies to illustrate how the framework could be applied. The proposed framework uses normalisation relationships to account for the effects of soil properties on toxicity data, followed by a species sensitivity distribution (SSD) method to calculate a soil added contaminant limit (soil ACL) for a standard soil. The normalisation equations are then used to calculate soil ACLs for other soils. A soil amendment availability factor (SAAF) is then calculated, as the toxicity and bioavailability of pure contaminants and of contaminants in amendments can differ. The SAAF is used to convert soil ACLs into ACLs for amended soils. The framework was then used to calculate soil ACLs for Cu and Zn. For soils with pH of 4-8 and OC content of 1-6%, the ACLs range from 8 mg/kg to 970 mg/kg added Cu. The SAAF for Cu was pH-dependent and varied from 1.44 at pH 4 to 2.15 at pH 8. For soils with pH of 4-8 and OC content of 1-6%, the ACLs for amended soils range from 11 mg/kg to 2080 mg/kg added Cu. For soils with pH of 4-8 and a CEC of 5-60, the ACLs for Zn ranged from 21 to 1470 mg/kg added Zn. A SAAF of one was used for Zn, as its concentrations in plant tissue and its soil-to-water partitioning showed no difference between biosolids and soluble Zn-salt treatments, indicating that Zn from biosolids and Zn salts is equally bioavailable to plants.
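A minimal sketch of the SSD step, assuming a log-normal distribution fitted to normalized toxicity endpoints and the 5th percentile (HC5) taken as the ACL; the endpoint values and function name are illustrative, not the study's data:

```python
import numpy as np
from scipy import stats

def acl_from_ssd(toxicity_mg_per_kg, protection=0.95):
    """Fit a log-normal species sensitivity distribution (SSD) to normalized
    toxicity endpoints and return the concentration protecting the given
    fraction of species (protection=0.95 gives the HC5)."""
    log_vals = np.log10(toxicity_mg_per_kg)
    mu, sigma = np.mean(log_vals), np.std(log_vals, ddof=1)
    # HC5 = 5th percentile of the fitted log-normal distribution
    return 10 ** stats.norm.ppf(1 - protection, loc=mu, scale=sigma)

# Hypothetical normalized Cu endpoints for a standard soil (mg/kg added Cu)
endpoints = [35, 60, 90, 120, 200, 340, 500]
soil_acl = acl_from_ssd(endpoints)    # ACL for the standard soil
amended_acl = soil_acl * 2.15         # SAAF for Cu at pH 8, from the abstract
```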

Relevance: 20.00%

Publisher:

Abstract:

Background: Standard methods for quantifying IncuCyte ZOOM™ assays measure how rapidly the initially vacant area becomes re-colonised with cells as a function of time. Unfortunately, these measurements give no insight into the details of the cellular-level mechanisms acting to close the initially vacant area. We provide an alternative method enabling us to quantify the roles of cell motility and cell proliferation separately. To achieve this we calibrate standard data available from IncuCyte ZOOM™ images to the solution of the Fisher-Kolmogorov model. Results: The Fisher-Kolmogorov model is a reaction-diffusion equation that has been used to describe collective cell spreading driven by cell migration, characterised by a cell diffusivity, D, and carrying-capacity-limited proliferation with proliferation rate, λ, and carrying capacity density, K. By analysing temporal changes in cell density in several subregions located well behind the initial position of the leading edge we estimate λ and K. Given these estimates, we then apply automatic leading-edge detection algorithms to the images produced by the IncuCyte ZOOM™ assay and match these data with a numerical solution of the Fisher-Kolmogorov equation to provide an estimate of D. We demonstrate this method by applying it to interpret a suite of IncuCyte ZOOM™ assays using PC-3 prostate cancer cells and obtain estimates of D, λ and K. Comparing estimates of D, λ and K for a control assay with estimates for assays where epidermal growth factor (EGF) is applied in varying concentrations confirms that EGF enhances the rate of scratch closure and that this stimulation is driven by an increase in D and λ, whereas K is relatively unaffected by EGF. Conclusions: Our approach for estimating D, λ and K from an IncuCyte ZOOM™ assay provides more detail about cellular-level behaviour than standard methods for analysing these assays. In particular, our approach can be used to quantify the balance of cell migration and cell proliferation and, as we demonstrate, allows us to quantify how the addition of growth factors affects these processes individually.
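A minimal sketch of the two-step calibration described above: logistic growth fitted to subregion densities to estimate λ and K, then a 1D Fisher-Kolmogorov solve whose diffusivity D would be tuned to match the detected leading edge. All numbers are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import solve_ivp

# Step 1: estimate lambda and K from cell density well behind the leading edge
def logistic(t, lam, K, c0):
    return K * c0 * np.exp(lam * t) / (K + c0 * (np.exp(lam * t) - 1.0))

t_hours = np.array([0.0, 12.0, 24.0, 36.0, 48.0])    # hypothetical time points
density = np.array([0.2, 0.35, 0.55, 0.75, 0.85])    # hypothetical densities
(lam, K, c0), _ = curve_fit(logistic, t_hours, density, p0=[0.05, 1.0, 0.2])

# Step 2: solve du/dt = D u_xx + lam u (1 - u/K) by the method of lines; in the
# paper's approach D is tuned until the simulated edge matches the detected edge.
def fk_rhs(t, u, D, dx):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]                # crude zero-flux boundaries
    return D * lap + lam * u * (1.0 - u / K)

x = np.linspace(0.0, 2000.0, 401)                    # position (micrometres)
u0 = np.where(x < 500.0, K, 0.0)                     # cells beside an initially vacant region
sol = solve_ivp(fk_rhs, (0.0, 48.0), u0, args=(500.0, x[1] - x[0]))  # D guess: 500 um^2/h
```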

Relevance: 20.00%

Publisher:

Abstract:

The use of near infrared (NIR) hyperspectral imaging and hyperspectral image analysis for distinguishing between hard, intermediate and soft maize kernels from inbred lines was evaluated. NIR hyperspectral images of two sets (12 and 24 kernels) of whole maize kernels were acquired using a Spectral Dimensions MatrixNIR camera with a spectral range of 960-1662 nm and a sisuChema SWIR (short wave infrared) hyperspectral pushbroom imaging system with a spectral range of 1000-2498 nm. Exploratory principal component analysis (PCA) was used on absorbance images to remove background, bad pixels and shading. On the cleaned images, PCA could be used effectively to find histological classes including glassy (hard) and floury (soft) endosperm. PCA illustrated a distinct difference between glassy and floury endosperm along principal component (PC) three on the MatrixNIR and PC two on the sisuChema, with two distinguishable clusters. Subsequently, partial least squares discriminant analysis (PLS-DA) was applied to build a classification model. The PLS-DA model from the MatrixNIR image (12 kernels) resulted in a root mean square error of prediction (RMSEP) of 0.18. This was repeated on the MatrixNIR image of the 24 kernels, which also resulted in an RMSEP of 0.18. The sisuChema image yielded an RMSEP of 0.29. The reproducible results obtained with the different data sets indicate that the method proposed in this paper has real potential for future classification uses.
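A minimal sketch of the unfold-then-model workflow described above (PCA on unfolded pixel spectra, then PLS-DA by regressing one-hot class labels on the spectra). The hypercube and class labels are synthetic placeholders, not the kernel data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

# Synthetic hypercube (rows x cols x wavelengths), unfolded to pixels x bands
cube = np.random.rand(100, 100, 240)
pixels = cube.reshape(-1, cube.shape[-1])

# Exploratory PCA: score images are used to mask background, bad pixels and shading
scores = PCA(n_components=3).fit_transform(pixels)

# PLS-DA: regress one-hot class labels (e.g. glassy vs floury endosperm) on spectra
Y = np.zeros((pixels.shape[0], 2))
Y[:, 0] = scores[:, 1] > 0            # placeholder labels, for illustration only
Y[:, 1] = 1.0 - Y[:, 0]
plsda = PLSRegression(n_components=5).fit(pixels, Y)
pred = plsda.predict(pixels)
rmsep = np.sqrt(np.mean((pred - Y) ** 2))   # cf. the RMSEP values quoted above
```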

Relevance: 20.00%

Publisher:

Abstract:

Data-flow analysis is an integral part of any aggressive optimizing compiler. We propose a framework for improving the precision of data-flow analysis in the presence of complex control flow. We initially perform data-flow analysis to determine those control-flow merges which cause the loss in data-flow analysis precision. The control-flow graph of the program is then restructured such that performing data-flow analysis on the resulting restructured graph gives more precise results. The proposed framework is both simple, involving the familiar notion of product automata, and general, since it is applicable to any forward data-flow analysis. Apart from proving that our restructuring process is correct, we also show that restructuring is effective in that it necessarily leads to more optimization opportunities. Furthermore, the framework handles the trade-off between the increase in data-flow precision and the code-size increase inherent in the restructuring. We show that determining an optimal restructuring is NP-hard, and propose and evaluate a greedy strategy. The framework has been implemented in the Scale research compiler and instantiated for the specific problem of constant propagation. On the SPECINT 2000 benchmark suite we observe an average speedup of 4% in running times over the Wegman-Zadeck conditional constant propagation algorithm and 2% over a purely path-profile-guided approach.
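A toy illustration (not the paper's algorithm) of the precision loss at a control-flow merge that motivates the restructuring, and of how duplicating the code below the merge restores foldable constants:

```python
# Toy illustration: constant propagation loses precision at a merge because the
# meet of two different constants is "unknown" (TOP).
TOP = "unknown"

def meet(a, b):
    return a if a == b else TOP

x_then, x_else = 1, 2                 # x is a different constant on each path
x_merged = meet(x_then, x_else)       # TOP: 'x + 1' after the merge cannot fold

# Restructuring duplicates the code below the merge, one copy per predecessor,
# so each copy sees a single constant (the cost is code growth).
for x in (x_then, x_else):
    y = x + 1                         # folds to 2 in one copy and 3 in the other
```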

Relevance: 20.00%

Publisher:

Abstract:

A computationally efficient agglomerative clustering algorithm based on multilevel theory is presented. Here, the data set is divided randomly into a number of partitions. The samples of each partition are clustered separately using a hierarchical agglomerative clustering algorithm to form sub-clusters. These are merged at higher levels to get the final classification. This algorithm leads to the same classification as the hierarchical agglomerative clustering algorithm when the clusters are well separated. The advantages of this algorithm are a short run time and a small storage requirement. It is observed that the savings in storage space and computation time increase nonlinearly with the sample size.
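A minimal sketch of the multilevel idea, assuming SciPy's hierarchical clustering for both levels; the parameter values are illustrative:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def multilevel_agglomerative(X, n_partitions=4, sub_k=10, final_k=3, seed=0):
    """Cluster each random partition separately, then merge the resulting
    sub-cluster centroids at a higher level."""
    rng = np.random.default_rng(seed)
    centroids = []
    for part in np.array_split(rng.permutation(len(X)), n_partitions):
        # Hierarchical agglomerative clustering within the partition
        labels = fcluster(linkage(X[part], method="average"),
                          sub_k, criterion="maxclust")
        centroids += [X[part][labels == c].mean(axis=0) for c in np.unique(labels)]
    centroids = np.asarray(centroids)
    # Merge the sub-clusters at the top level to get the final classification
    top = fcluster(linkage(centroids, method="average"),
                   final_k, criterion="maxclust")
    return centroids, top
```

Final labels for the original points would then be assigned, for example, by nearest merged centroid.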

Relevance: 20.00%

Publisher:

Abstract:

Understanding the effects of different types and quality of data on bioclimatic modelling predictions is vital to ascertaining the value of existing models, and to improving future models. Bioclimatic models were constructed using the CLIMEX program, using different data types – seasonal dynamics, geographic (overseas) distribution, and a combination of the two – for two biological control agents for the major weed Lantana camara L. in Australia. The models for one agent, Teleonemia scrupulosa Stål (Hemiptera: Tingidae), were based on a higher quality and quantity of data than the models for the other agent, Octotoma scabripennis Guérin-Méneville (Coleoptera: Chrysomelidae). Predictions of the geographic distribution for Australia showed that the T. scrupulosa models exhibited progressively greater accuracy from the seasonal dynamics model, to the model based on overseas distribution, and finally the model combining the two data types. In contrast, the O. scabripennis models were of low accuracy and showed no clear trends across the various model types. These case studies demonstrate the importance of high-quality data for developing models, and of supplementing distributional data with species seasonal dynamics data wherever possible. Seasonal dynamics data allow the modeller to focus on the species' response to climatic trends, while distributional data enable easier fitting of stress parameters by restricting the species envelope to the described distribution. It is apparent that CLIMEX models based on low-quality seasonal dynamics data, together with a small quantity of distributional data, are of minimal value in predicting the spatial extent of species distributions.

Relevance: 20.00%

Publisher:

Abstract:

The objectives of this study were to predict the potential distribution, relative abundance and probability of habitat use by feral camels in southern Northern Territory. Aerial survey data were used to model habitat association. The characteristics of ‘used’ (where camels were observed) v. ‘unused’ (pseudo-absence) sites were compared. Habitat association and abundance were modelled using generalised additive model (GAM) methods. The models predicted habitat suitability and the relative abundance of camels in southern Northern Territory. The habitat suitability maps derived in the present study indicate that camels have suitable habitat in most areas of southern Northern Territory. The index of abundance model identified areas of relatively high camel abundance. Identifying preferred habitats and areas of high abundance can help focus control efforts.
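A minimal sketch of a presence/pseudo-absence habitat GAM in the spirit of the study, using the pygam package (an assumption; the abstract does not name the software) and synthetic predictors:

```python
import numpy as np
from pygam import LogisticGAM, s

rng = np.random.default_rng(0)
dist_water = rng.uniform(0, 50, 500)      # km to nearest water (synthetic)
veg_cover = rng.uniform(0, 100, 500)      # % vegetation cover (synthetic)
# Synthetic presence signal: sites near water with more cover are favoured
p = 1.0 / (1.0 + np.exp(0.1 * dist_water - 0.02 * veg_cover))
used = rng.binomial(1, p)                 # 1 = camels observed, 0 = pseudo-absence

X = np.column_stack([dist_water, veg_cover])
gam = LogisticGAM(s(0) + s(1)).fit(X, used)   # one smooth term per predictor
suitability = gam.predict_proba(X)            # probability of habitat use, mappable
```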

Relevance: 20.00%

Publisher:

Abstract:

Raw data from SeaScan™ transects off Wide Bay (south Queensland) taken in August 2007 as part of a study of ecological factors influencing the distribution of spanner crabs (Ranina ranina). The dataset (comma-delimited ASCII file) comprises the following fields:
1. record number
2. date-time (GMT)
3. date-time (AEST)
4. latitude (signed decimal degrees)
5. longitude (decimal degrees)
6. speed over ground (knots)
7. depth (m)
8. seabed roughness (v)
9. hardness (v)
Indices of roughness and hardness (from the first and second echoes, respectively) were obtained using a SeaScan™ 100 system (un-referenced) on board the Research Vessel Tom Marshall, with the ship's Furuno FCV 1100 echo sounder and 1 kW, 50 kHz transducer. Generally, vessel speed was kept below about 14 kt (typically ~12 kt), and the echo-sounder range was set to 80 m. The data were filtered to remove errors due to data drop-out, straying beyond system depth limits (min. 10 m), or transducer interference.
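A minimal sketch of loading and re-filtering such a file with pandas, using the field list above; the file name is hypothetical:

```python
import pandas as pd

# Column names follow the field list above; the file name is hypothetical
cols = ["record", "datetime_gmt", "datetime_aest", "lat", "lon",
        "sog_knots", "depth_m", "roughness_v", "hardness_v"]
df = pd.read_csv("seascan_widebay_2007.csv", names=cols, header=None,
                 parse_dates=["datetime_gmt", "datetime_aest"])

# Reapply the filtering described above: drop records shallower than the 10 m
# system limit or with drop-out (missing) echo values
df = df[(df["depth_m"] >= 10) & df["roughness_v"].notna() & df["hardness_v"].notna()]
```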