986 results for data disclosure
Abstract:
Cu K-edge EXAFS spectra of Cu-Ni/Al2O3 and Cu-ZnO catalysts, both of which contain more than one Cu species, have been analysed making use of an additive relation for the EXAFS function. The analysis, which also makes use of residual spectra for identifying the species, shows good agreement between experimental and calculated spectra.
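The additive relation at the heart of this analysis is simply that the measured EXAFS function of a mixture is the fraction-weighted sum of the single-species functions, with the residual after subtracting one species used to identify another. Below is a minimal Python sketch of that idea; the spectra and fractions are synthetic stand-ins for illustration, not data from the paper.

```python
import numpy as np

def composite_exafs(k, species_chi, fractions):
    """Model a mixed-phase EXAFS function as a fraction-weighted sum
    of single-species spectra: chi_total(k) = sum_i x_i * chi_i(k)."""
    chi = np.zeros_like(k)
    for chi_i, x_i in zip(species_chi, fractions):
        chi += x_i * chi_i
    return chi

# Hypothetical reference spectra (crude sin/k^2 stand-ins, not real data).
k = np.linspace(2, 12, 500)                    # photoelectron wavenumber (1/Angstrom)
chi_cu_metal = np.sin(2 * 2.55 * k) / k**2     # stand-in for a Cu-Cu reference
chi_cu_oxide = np.sin(2 * 1.95 * k) / k**2     # stand-in for a Cu-O reference
chi_measured = composite_exafs(k, [chi_cu_metal, chi_cu_oxide], [0.6, 0.4])

# Residual analysis: remove a known species contribution; structure left
# in the residual points to the presence of a second Cu species.
residual = chi_measured - 0.6 * chi_cu_metal
```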
Abstract:
Objective Vast amounts of injury narratives are collected daily, are available electronically in real time, and have great potential for use in injury surveillance and evaluation. Machine learning algorithms have been developed to assist in identifying cases and classifying mechanisms leading to injury in a much timelier manner than is possible when relying on manual coding of narratives. The aim of this paper is to describe the background, growth, value, challenges and future directions of machine learning as applied to injury surveillance. Methods This paper reviews key aspects of machine learning using injury narratives, providing a case study to demonstrate an application of an established human-machine learning approach. Results The range of applications and the utility of narrative text have increased greatly with advancements in computing techniques over time. Practical and feasible methods exist for semi-automatic classification of injury narratives which are accurate, efficient and meaningful. The human-machine learning approach described in the case study achieved high sensitivity and positive predictive value and reduced the need for human coding to less than one-third of cases in one large occupational injury database. Conclusion The last 20 years have seen a dramatic change in the potential for technological advancements in injury surveillance. Machine learning of ‘big injury narrative data’ opens up many possibilities for expanded sources of data which can provide more comprehensive, ongoing and timely surveillance to inform future injury prevention policy and practice.
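As a concrete illustration of the human-machine learning idea in the case study, the sketch below trains a simple narrative classifier and routes only low-confidence cases to human coders. This is a hypothetical, minimal example using scikit-learn; the tiny training set, the mechanism codes and the 0.9 confidence threshold are assumptions for illustration, not the paper's actual model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: injury narratives with mechanism codes.
narratives = ["slipped on wet floor and fell", "caught hand in conveyor belt",
              "struck by falling box", "fell from ladder while painting"]
codes = ["fall", "caught_in", "struck_by", "fall"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(narratives, codes)

# Human-machine filtering: accept confident machine codes, route the
# rest to manual review (the threshold value is an assumption).
THRESHOLD = 0.9
new_cases = ["worker fell off scaffolding", "unclear incident near press"]
for text, p in zip(new_cases, clf.predict_proba(new_cases)):
    if p.max() >= THRESHOLD:
        print(text, "->", clf.classes_[p.argmax()])   # auto-coded
    else:
        print(text, "-> route to human coder")        # manual review
```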
Abstract:
[From Preface] The Consumer Expenditure Survey is among the oldest publications of the Bureau of Labor Statistics. With information on the expenditures, incomes, and demographic characteristics of households, the survey documents the spending patterns and economic status of American families. This report offers a new approach to the use of Consumer Expenditure Survey data. Normally, the survey presents an in-depth look at American households at a specific point in time, the reference period being a calendar year. Here, the authors use consumer expenditure data longitudinally and draw on information from decennial census reports to present a 100-year history of significant changes in consumer spending, economic status, and family demographics in the country as a whole, as well as in New York City and Boston.
Abstract:
[Excerpt] The effects of framing on decisions have been widely studied, producing research that suggests individuals respond to framing in predictable and fairly consistent ways (Bazerman, 1984, 1990; Tversky & Kahneman, 1986; Thaler, 1980). The essential finding from this body of research is that "individuals treat risks concerning perceived gains (for example, saving jobs and plants) differently from risks concerning perceived losses (losing jobs and plants)" (Bazerman, 1990, pp. 49-50). Specifically, individuals tend to avoid risks concerning gains, and to seek risks concerning losses.
Application of phytotoxicity data to a new Australian soil quality guideline framework for biosolids
Abstract:
To protect terrestrial ecosystems and humans from contaminants, many countries and jurisdictions have developed soil quality guidelines (SQGs). This study proposes a new framework to derive SQGs and guidelines for amended soils, and uses a case study based on phytotoxicity data for copper (Cu) and zinc (Zn) from field studies to illustrate how the framework could be applied. The proposed framework uses normalisation relationships to account for the effects of soil properties on toxicity data, followed by a species sensitivity distribution (SSD) method to calculate a soil added contaminant limit (soil ACL) for a standard soil. The normalisation equations are then used to calculate soil ACLs for other soils. A soil amendment availability factor (SAAF) is then calculated, as the toxicity and bioavailability of pure contaminants and of contaminants in amendments can differ. The SAAF is used to convert soil ACLs into ACLs for amended soils. The framework was then used to calculate soil ACLs for Cu and Zn. For soils with a pH of 4-8 and an OC content of 1-6%, the ACLs range from 8 mg/kg to 970 mg/kg added Cu. The SAAF for Cu was pH dependent and varied from 1.44 at pH 4 to 2.15 at pH 8. For soils with a pH of 4-8 and an OC content of 1-6%, the ACLs for amended soils range from 11 mg/kg to 2080 mg/kg added Cu. For soils with a pH of 4-8 and a CEC of 5-60, the ACLs for Zn ranged from 21 to 1470 mg/kg added Zn. A SAAF of one was used for Zn, as its concentrations in plant tissue and its soil-to-water partitioning showed no difference between biosolids and soluble Zn salt treatments, indicating that Zn from biosolids and Zn salts is equally bioavailable to plants.
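A hedged sketch of how the framework's core steps could be coded: normalise field toxicity endpoints to a standard soil, fit a log-normal species sensitivity distribution, take its 5th percentile (HC5) as the soil ACL, and scale by a SAAF for amended soils. The endpoints, the pH-normalisation slope and the SAAF value below are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical toxicity endpoints (EC10, mg Cu/kg added) from field studies,
# each measured in a soil of a different pH.
ec10 = np.array([120.0, 85.0, 300.0, 45.0, 210.0, 150.0])
soil_ph = np.array([6.5, 5.0, 7.5, 4.5, 7.0, 6.0])

# Step 1: normalise endpoints to a standard soil (pH 6.0) using an assumed
# slope from a toxicity-pH regression (illustrative value only).
SLOPE = 0.3                                    # log10(EC10) units per pH unit
ec10_std = 10 ** (np.log10(ec10) - SLOPE * (soil_ph - 6.0))

# Step 2: fit a log-normal species sensitivity distribution and take the
# 5th percentile (HC5) as the added contaminant limit (ACL).
mu, sigma = stats.norm.fit(np.log10(ec10_std))
acl_standard_soil = 10 ** stats.norm.ppf(0.05, mu, sigma)

# Step 3: scale to amended soils with a soil amendment availability factor
# (SAAF); the paper reports pH-dependent SAAFs for Cu, so this single
# value is a hypothetical placeholder within the reported range.
saaf = 1.8
acl_amended = acl_standard_soil * saaf
print(f"ACL standard soil: {acl_standard_soil:.0f} mg/kg; amended: {acl_amended:.0f} mg/kg")
```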
Abstract:
The standard land contracts in Queensland require a seller of land to disclose to a buyer not only registered encumbrances, but also statutory encumbrances affecting the land. Whether a statute creates a statutory encumbrance over the title to the property is therefore a key question for a seller when completing a contract. This article examines relevant case law and provides some guidelines for when a statute creates a statutory encumbrance that should be disclosed to a buyer as a defect in title.
Abstract:
Background: Standard methods for quantifying IncuCyte ZOOM™ assays involve measurements that quantify how rapidly the initially vacant area becomes re-colonised with cells as a function of time. Unfortunately, these measurements give no insight into the details of the cellular-level mechanisms acting to close the initially vacant area. We provide an alternative method enabling us to quantify the roles of cell motility and cell proliferation separately. To achieve this we calibrate standard data available from IncuCyte ZOOM™ images to the solution of the Fisher-Kolmogorov model. Results: The Fisher-Kolmogorov model is a reaction-diffusion equation that has been used to describe collective cell spreading driven by cell migration, characterised by a cell diffusivity, D, and carrying-capacity-limited proliferation with proliferation rate, λ, and carrying capacity density, K. By analysing temporal changes in cell density in several subregions located well behind the initial position of the leading edge, we estimate λ and K. Given these estimates, we then apply automatic leading edge detection algorithms to the images produced by the IncuCyte ZOOM™ assay and match these data with a numerical solution of the Fisher-Kolmogorov equation to provide an estimate of D. We demonstrate this method by applying it to interpret a suite of IncuCyte ZOOM™ assays using PC-3 prostate cancer cells and obtain estimates of D, λ and K. Comparing estimates of D, λ and K for a control assay with estimates for assays where epidermal growth factor (EGF) is applied in varying concentrations confirms that EGF enhances the rate of scratch closure and that this stimulation is driven by an increase in D and λ, whereas K is relatively unaffected by EGF. Conclusions: Our approach for estimating D, λ and K from an IncuCyte ZOOM™ assay provides more detail about cellular-level behaviour than standard methods for analysing these assays. In particular, our approach can be used to quantify the balance of cell migration and cell proliferation and, as we demonstrate, allows us to quantify how the addition of growth factors affects these processes individually.
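The two-stage calibration can be summarised as: (i) fit the logistic solution of the Fisher-Kolmogorov model to density data well behind the front to estimate λ and K, then (ii) choose D so that a numerical solution of the full equation reproduces the detected leading-edge positions. The sketch below illustrates both stages under assumed, synthetic data; the time series, domain size and grid are placeholders, not the PC-3 measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Stage 1: estimate lambda and K from density time series in subregions
# well behind the leading edge, via the logistic solution of the
# Fisher-Kolmogorov model (hypothetical data for illustration).
def logistic(t, lam, K, c0):
    return K * c0 * np.exp(lam * t) / (K + c0 * (np.exp(lam * t) - 1.0))

t = np.array([0, 6, 12, 18, 24, 36, 48.0])                    # hours
density = np.array([300, 420, 600, 780, 950, 1150, 1250.0])   # cells/mm^2
(lam, K, c0), _ = curve_fit(logistic, t, density, p0=[0.05, 1300, 300])

# Stage 2: solve c_t = D c_xx + lam c (1 - c/K) by explicit finite
# differences; D is then chosen so the simulated edge matches the edge
# detected in the images (stability requires D*dt/dx**2 < 1/2).
def simulate_front(D, lam, K, L=2.0, nx=100, dt=0.005, t_end=48.0):
    x = np.linspace(0, L, nx); dx = x[1] - x[0]
    c = np.where(x < 0.5, K, 0.0)                 # cells initially left of the scratch
    for _ in range(int(t_end / dt)):
        lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
        lap[0] = lap[-1] = 0.0                    # approximate zero-flux boundaries
        c = c + dt * (D * lap + lam * c * (1 - c / K))
    return x[np.argmax(c < 0.05 * K)]             # leading-edge position (mm)

# Matching step: sweep trial values of D and keep the one whose simulated
# edge best matches the detected edge (single trial shown here).
edge_48h = simulate_front(D=1e-3, lam=lam, K=K)
print(f"lambda={lam:.3f}/h, K={K:.0f} cells/mm^2, simulated edge={edge_48h:.2f} mm")
```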
Abstract:
The use of near infrared (NIR) hyperspectral imaging and hyperspectral image analysis for distinguishing between hard, intermediate and soft maize kernels from inbred lines was evaluated. NIR hyperspectral images of two sets (12 and 24 kernels) of whole maize kernels were acquired using a Spectral Dimensions MatrixNIR camera with a spectral range of 960-1662 nm and a sisuChema SWIR (short wave infrared) hyperspectral pushbroom imaging system with a spectral range of 1000-2498 nm. Exploratory principal component analysis (PCA) was used on absorbance images to remove background, bad pixels and shading. On the cleaned images, PCA could be used effectively to find histological classes, including glassy (hard) and floury (soft) endosperm. PCA illustrated a distinct difference between glassy and floury endosperm along principal component (PC) three on the MatrixNIR and PC two on the sisuChema, with two distinguishable clusters. Subsequently, partial least squares discriminant analysis (PLS-DA) was applied to build a classification model. The PLS-DA model from the MatrixNIR image (12 kernels) resulted in a root mean square error of prediction (RMSEP) of 0.18. This was repeated on the MatrixNIR image of the 24 kernels, which also resulted in an RMSEP of 0.18. The sisuChema image yielded an RMSEP of 0.29. The reproducible results obtained with the different data sets indicate that the method proposed in this paper has real potential for future classification uses.
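To make the chemometric pipeline concrete, here is a minimal, hypothetical sketch of PCA for exploration followed by PLS-DA (PLS regression on class membership) with RMSEP as the figure of merit. The random two-class "spectra" merely stand in for the unfolded MatrixNIR/sisuChema images; array sizes and component counts are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in for an unfolded hyperspectral image: each row is
# one pixel's absorbance spectrum, with a hardness label per pixel.
n_pixels, n_bands = 600, 240
X = rng.normal(size=(n_pixels, n_bands))
y = rng.integers(0, 2, size=n_pixels)        # 0 = floury, 1 = glassy endosperm
X += y[:, None] * 0.5                        # inject a class-related offset

# Exploratory PCA: score plots along the leading PCs are inspected to
# remove background/bad pixels and to separate histological classes.
scores = PCA(n_components=3).fit_transform(X)

# PLS-DA: regress 0/1 class membership on the spectra; RMSEP on held-out
# pixels mirrors the figure of merit quoted in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
rmsep = np.sqrt(np.mean((pls.predict(X_te).ravel() - y_te) ** 2))
print(f"RMSEP: {rmsep:.2f}")
```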
Abstract:
Data-flow analysis is an integral part of any aggressive optimizing compiler. We propose a framework for improving the precision of data-flow analysis in the presence of complex control flow. We initially perform data-flow analysis to determine those control-flow merges which cause the loss of data-flow analysis precision. The control-flow graph of the program is then restructured such that performing data-flow analysis on the resulting restructured graph gives more precise results. The proposed framework is both simple, involving the familiar notion of product automata, and general, since it is applicable to any forward data-flow analysis. Apart from proving that our restructuring process is correct, we also show that restructuring is effective in that it necessarily leads to more optimization opportunities. Furthermore, the framework handles the trade-off between the increase in data-flow precision and the code-size increase inherent in the restructuring. We show that determining an optimal restructuring is NP-hard, and we propose and evaluate a greedy strategy. The framework has been implemented in the Scale research compiler and instantiated for the specific problem of constant propagation. On the SPECINT 2000 benchmark suite we observe an average speedup of 4% in running times over the Wegman-Zadeck conditional constant propagation algorithm and 2% over a purely path-profile-guided approach.
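A toy illustration of the precision problem this framework targets: in constant propagation, the lattice meet at a control-flow merge discards a constant when the predecessors disagree, and duplicating the merge's successor per predecessor (which the product-automaton restructuring generalises) recovers it. This sketch is illustrative only and is not the paper's implementation.

```python
# Constant-propagation lattice: values are concrete constants, or TOP
# (undefined) / BOTTOM (non-constant). The meet at a control-flow merge
# is where precision is lost; node duplication avoids the meet.
TOP, BOTTOM = "TOP", "BOTTOM"

def meet(a, b):
    """Meet of two lattice values at a merge point."""
    if a == TOP: return b
    if b == TOP: return a
    return a if a == b else BOTTOM

# Two predecessors reach a merge with different constants for x:
env_then = {"x": 1}
env_else = {"x": 2}
merged = {v: meet(env_then[v], env_else[v]) for v in env_then}
print(merged)   # {'x': 'BOTTOM'}: the constant is lost at the merge

# Restructuring duplicates the merge's successor per predecessor, so each
# copy is analysed under a single incoming environment and x stays
# constant, letting x + 1 fold to a constant in each copy.
for env in (env_then, env_else):
    print({"x + 1": env["x"] + 1})
```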
Abstract:
A computationally efficient agglomerative clustering algorithm based on multilevel theory is presented. Here, the data set is divided randomly into a number of partitions. The samples of each partition are clustered separately using a hierarchical agglomerative clustering algorithm to form sub-clusters. These are merged at higher levels to obtain the final classification. This algorithm leads to the same classification as the standard hierarchical agglomerative clustering algorithm when the clusters are well separated. The advantages of this algorithm are its short run time and small storage requirement. It is observed that the savings in storage space and computation time increase nonlinearly with the sample size.
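A minimal sketch of the multilevel scheme under assumed data: randomly partition the data set, run hierarchical agglomerative clustering within each partition, represent each sub-cluster by its centroid, and merge the centroids in a final agglomerative pass. The data, partition count and cluster counts are arbitrary illustrative choices.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(1)
X = rng.normal(size=(1200, 2))               # hypothetical data set

# Level 1: split the data into random partitions and cluster each one
# separately with hierarchical agglomerative clustering.
idx = rng.permutation(len(X))
centroids = []
for p in np.array_split(idx, 4):
    Z = linkage(X[p], method="average")
    labels = fcluster(Z, t=10, criterion="maxclust")   # sub-clusters per partition
    for lab in np.unique(labels):
        # Represent each sub-cluster by its centroid (a size-weighted
        # merge would be a possible refinement).
        centroids.append(X[p][labels == lab].mean(axis=0))

# Level 2: merge the sub-cluster representatives to obtain the final
# classification; clustering 40 centroids instead of 1200 points is
# where the storage and run-time savings come from.
Z_top = linkage(np.array(centroids), method="average")
final = fcluster(Z_top, t=3, criterion="maxclust")
print(len(centroids), "sub-clusters merged into", final.max(), "clusters")
```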