982 results for periodic data


Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVE Corneal confocal microscopy is a novel diagnostic technique for the detection of nerve damage and repair in a range of peripheral neuropathies, in particular diabetic neuropathy. Normative reference values are required to enable clinical translation and wider use of this technique. We have therefore undertaken a multicenter collaboration to provide worldwide age-adjusted normative values of corneal nerve fiber parameters. RESEARCH DESIGN AND METHODS A total of 1,965 corneal nerve images from 343 healthy volunteers were pooled from six clinical academic centers. All subjects underwent examination with the Heidelberg Retina Tomograph corneal confocal microscope. Images of the central corneal subbasal nerve plexus were acquired by each center using a standard protocol and analyzed by three trained examiners using manual tracing and semiautomated software (CCMetrics). Age trends were established using simple linear regression, and normative corneal nerve fiber density (CNFD), corneal nerve fiber branch density (CNBD), corneal nerve fiber length (CNFL), and corneal nerve fiber tortuosity (CNFT) reference values were calculated using quantile regression analysis. RESULTS There was a significant linear age-dependent decrease in CNFD (-0.164 no./mm² per year for men, P < 0.01, and -0.161 no./mm² per year for women, P < 0.01). There was no change with age in CNBD (0.192 no./mm² per year for men, P = 0.26, and -0.050 no./mm² per year for women, P = 0.78). CNFL decreased in men (-0.045 mm/mm² per year, P = 0.07) and women (-0.060 mm/mm² per year, P = 0.02). CNFT increased with age in men (0.044 per year, P < 0.01) and women (0.046 per year, P < 0.01). Height, weight, and BMI did not influence the 5th percentile normative values for any corneal nerve parameter. CONCLUSIONS This study provides robust worldwide normative reference values for corneal nerve parameters to be used in research and clinical practice in the study of diabetic and other peripheral neuropathies.
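
A minimal sketch of the quantile-regression step described above: fitting a 5th percentile reference line for CNFD against age with statsmodels. The data, slope, and intercept are synthetic placeholders, not the pooled multicenter measurements.

```python
# Sketch: age-adjusted 5th percentile normative curve via quantile regression.
# Values below are illustrative, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"age": rng.uniform(20, 80, 300)})
df["cnfd"] = 35.0 - 0.16 * df["age"] + rng.normal(0, 5, 300)  # no./mm², mock

# 5th percentile regression line: the lower normative reference limit.
model = smf.quantreg("cnfd ~ age", df).fit(q=0.05)
print(model.params)  # intercept and age slope of the 5th percentile curve
```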

Relevance:

20.00%

Publisher:

Abstract:

To facilitate marketing and export, the Australian macadamia industry requires accurate crop forecasts. Each year, two levels of crop predictions are produced for this industry. The first is an overall longer-term forecast based on tree census data of growers in the Australian Macadamia Society (AMS). This data set currently accounts for around 70% of total production, and is supplemented by our best estimates of non-AMS orchards. Given these total tree numbers, average yields per tree are needed to complete the long-term forecasts. Yields from regional variety trials were initially used, but were found to be consistently higher than the average yields that growers were obtaining. Hence, a statistical model was developed using growers' historical yields, also taken from the AMS database. This model accounted for the effects of tree age, variety, year, region and tree spacing, and explained 65% of the total variation in the yield per tree data. The second level of crop prediction is an annual climate adjustment of these overall long-term estimates, taking into account the expected effects on production of the previous year's climate. This adjustment is based on relative historical yields, measured as the percentage deviance between expected and actual production. The dominant climatic variables are observed temperature, evaporation, solar radiation and modelled water stress. Initially, a number of alternate statistical models showed good agreement within the historical data, with jack-knife cross-validation R² values of 96% or better. However, forecasts varied quite widely between these alternate models. Exploratory multivariate analyses and nearest-neighbour methods were used to investigate these differences. For 2001-2003, the overall forecasts were in the right direction (when compared with the long-term expected values), but were over-estimates. In 2004 the forecast was well under the observed production, and in 2005 the revised models produced a forecast within 5.1% of the actual production. Over the first five years of forecasting, the absolute deviance for the climate-adjustment models averaged 10.1%, just outside the targeted objective of 10%.
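
A minimal sketch of the two-level forecast arithmetic described above: a long-term estimate from census tree numbers and modelled yield per tree, then an annual climate adjustment expressed as a percentage deviance. All figures are illustrative placeholders, not industry data.

```python
# Sketch: two-level macadamia crop forecast. Numbers are illustrative only.
tree_numbers = 1_200_000           # census total (AMS plus estimated non-AMS)
avg_yield_per_tree_kg = 8.5        # from the historical yield-per-tree model
long_term_forecast_t = tree_numbers * avg_yield_per_tree_kg / 1000.0

climate_adjustment = -0.04         # modelled effect of last year's climate (-4%)
annual_forecast_t = long_term_forecast_t * (1.0 + climate_adjustment)
print(f"{long_term_forecast_t:.0f} t long-term, {annual_forecast_t:.0f} t adjusted")
```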

Relevance:

20.00%

Publisher:

Abstract:

Cu K-edge EXAFS spectra of Cu-Ni/Al2O3 and Cu-ZnO catalysts, both of which contain more than one Cu species, have been analysed making use of an additive relation for the EXAFS function. The analysis, which also makes use of residual spectra for identifying the species, shows good agreement between experimental and calculated spectra.
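
A minimal sketch of the additive relation as described, assuming the measured EXAFS function is a linear combination of single-species reference spectra, with the residual used to check for unidentified species. The reference curves below are mock stand-ins, not measured EXAFS data.

```python
# Sketch: decomposing a mixed-species EXAFS chi(k) into weighted single-species
# contributions by least squares. Spectra are synthetic placeholders.
import numpy as np

k = np.linspace(2, 12, 500)                  # photoelectron wavenumber (1/Å)
chi_cu_metal = np.sin(2 * 2.55 * k) / k**2   # mock Cu-Cu reference spectrum
chi_cu_oxide = np.sin(2 * 1.95 * k) / k**2   # mock Cu-O reference spectrum
chi_measured = 0.7 * chi_cu_metal + 0.3 * chi_cu_oxide

A = np.column_stack([chi_cu_metal, chi_cu_oxide])
weights, *_ = np.linalg.lstsq(A, chi_measured, rcond=None)
residual = chi_measured - A @ weights        # near zero if the species suffice
print(weights)                                # fractions of each Cu species
```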

Relevance:

20.00%

Publisher:

Abstract:

Objective Vast amounts of injury narratives are collected daily and are available electronically in real time and have great potential for use in injury surveillance and evaluation. Machine learning algorithms have been developed to assist in identifying cases and classifying mechanisms leading to injury in a much timelier manner than is possible when relying on manual coding of narratives. The aim of this paper is to describe the background, growth, value, challenges and future directions of machine learning as applied to injury surveillance. Methods This paper reviews key aspects of machine learning using injury narratives, providing a case study to demonstrate an application of an established human-machine learning approach. Results The range of applications and utility of narrative text has increased greatly with advancements in computing techniques over time. Practical and feasible methods exist for semi-automatic classification of injury narratives which are accurate, efficient and meaningful. The human-machine learning approach described in the case study achieved high sensitivity and positive predictive value and reduced the need for human coding to less than one-third of cases in one large occupational injury database. Conclusion The last 20 years have seen a dramatic change in the potential for technological advancements in injury surveillance. Machine learning of ‘big injury narrative data’ opens up many possibilities for expanded sources of data which can provide more comprehensive, ongoing and timely surveillance to inform future injury prevention policy and practice.
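
A minimal sketch of a semi-automatic, human-machine workflow of the kind described: a text classifier auto-codes narratives it is confident about and routes the rest to human coders. The narratives, labels, and confidence threshold are toy assumptions, not the study's classifier.

```python
# Sketch: confidence-thresholded injury-narrative classification with
# scikit-learn. Training data is a toy placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

narratives = ["worker fell from ladder", "burned hand on hot press",
              "slipped on wet floor", "cut finger on blade"]
labels = ["fall", "burn", "fall", "cut"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(narratives, labels)

THRESHOLD = 0.8  # below this confidence, the case goes to a human coder
for text in ["fell off scaffold", "caught arm in machine"]:
    probs = clf.predict_proba([text])[0]
    if probs.max() >= THRESHOLD:
        print(text, "->", clf.classes_[probs.argmax()])
    else:
        print(text, "-> send to human coder")
```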

Relevance:

20.00%

Publisher:

Abstract:

[From Preface] The Consumer Expenditure Survey is among the oldest publications of the Bureau of Labor Statistics. With information on the expenditures, incomes, and demographic characteristics of households, the survey documents the spending patterns and economic status of American families. This report offers a new approach to the use of Consumer Expenditure Survey data. Normally, the survey presents an in-depth look at American households at a specific point in time, the reference period being a calendar year. Here, the authors use consumer expenditure data longitudinally and draw on information from decennial census reports to present a 100-year history of significant changes in consumer spending, economic status, and family demographics in the country as a whole, as well as in New York City and Boston.

Relevance:

20.00%

Publisher:

Abstract:

[Excerpt] The effects of framing on decisions have been widely studied, producing research that suggests individuals respond to framing in predictable and fairly consistent ways (Bazerman, 1984, 1990; Tversky & Kahneman, 1986; Thaler, 1980). The essential finding from this body of research is that "individuals treat risks concerning perceived gains (for example, saving jobs and plants) differently from risks concerning perceived losses (losing jobs and plants)" (Bazerman, 1990, pp. 49-50). Specifically, individuals tend to avoid risks concerning gains, and seek risks concerning losses.

Relevance:

20.00%

Publisher:

Abstract:

To protect terrestrial ecosystems and humans from contaminants, many countries and jurisdictions have developed soil quality guidelines (SQGs). This study proposes a new framework to derive SQGs and guidelines for amended soils and uses a case study based on phytotoxicity data of copper (Cu) and zinc (Zn) from field studies to illustrate how the framework could be applied. The proposed framework uses normalisation relationships to account for the effects of soil properties on toxicity data followed by a species sensitivity distribution (SSD) method to calculate a soil added contaminant limit (soil ACL) for a standard soil. The normalisation equations are then used to calculate soil ACLs for other soils. A soil amendment availability factor (SAAF) is then calculated, as the toxicity and bioavailability of pure contaminants and contaminants in amendments can be different. The SAAF is used to modify soil ACLs to ACLs for amended soils. The framework was then used to calculate soil ACLs for copper (Cu) and zinc (Zn). For soils with pH of 4-8 and OC content of 1-6%, the ACLs range from 8 mg/kg to 970 mg/kg added Cu. The SAAF for Cu was pH dependent and varied from 1.44 at pH 4 to 2.15 at pH 8. For soils with pH of 4-8 and OC content of 1-6%, the ACLs for amended soils range from 11 mg/kg to 2080 mg/kg added Cu. For soils with pH of 4-8 and a CEC from 5-60, the ACLs for Zn ranged from 21 to 1470 mg/kg added Zn. A SAAF of one was used for Zn, as its concentrations in plant tissue and soil-to-water partitioning showed no difference between biosolids and soluble Zn salt treatments, indicating that Zn from biosolids and Zn salts are equally bioavailable to plants.
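
A minimal sketch of the SSD step in the proposed framework, assuming a log-normal distribution fitted to normalised phytotoxicity endpoints and taking the 5th percentile (HC5) as the basis of the added contaminant limit. The endpoint values are illustrative, not the study's field data.

```python
# Sketch: species sensitivity distribution (SSD) and HC5 for added Cu.
# Endpoint values are illustrative placeholders.
import numpy as np
from scipy import stats

# Normalised phytotoxicity endpoints (mg/kg added Cu) for different species.
endpoints = np.array([45.0, 80.0, 120.0, 150.0, 210.0, 300.0, 450.0])

mu, sigma = stats.norm.fit(np.log(endpoints))          # log-normal SSD
hc5 = np.exp(stats.norm.ppf(0.05, loc=mu, scale=sigma))
print(f"HC5 (soil ACL basis): {hc5:.0f} mg/kg added Cu")
# An amended-soil ACL would then scale this by the SAAF (e.g. 1.44-2.15 for
# Cu depending on pH, per the study).
```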

Relevance:

20.00%

Publisher:

Abstract:

Background: Standard methods for quantifying IncuCyte ZOOM™ assays involve measurements that quantify how rapidly the initially vacant area becomes re-colonised with cells as a function of time. Unfortunately, these measurements give no insight into the details of the cellular-level mechanisms acting to close the initially vacant area. We provide an alternative method enabling us to quantify the roles of cell motility and cell proliferation separately. To achieve this we calibrate standard data available from IncuCyte ZOOM™ images to the solution of the Fisher-Kolmogorov model. Results: The Fisher-Kolmogorov model is a reaction-diffusion equation that has been used to describe collective cell spreading driven by cell migration, characterised by a cell diffusivity, D, and carrying-capacity-limited proliferation with proliferation rate, λ, and carrying capacity density, K. By analysing temporal changes in cell density in several subregions located well behind the initial position of the leading edge we estimate λ and K. Given these estimates, we then apply automatic leading-edge detection algorithms to the images produced by the IncuCyte ZOOM™ assay and match these data with a numerical solution of the Fisher-Kolmogorov equation to provide an estimate of D. We demonstrate this method by applying it to interpret a suite of IncuCyte ZOOM™ assays using PC-3 prostate cancer cells and obtain estimates of D, λ and K. Comparing estimates of D, λ and K for a control assay with estimates of D, λ and K for assays where epidermal growth factor (EGF) is applied in varying concentrations confirms that EGF enhances the rate of scratch closure and that this stimulation is driven by an increase in D and λ, whereas K is relatively unaffected by EGF. Conclusions: Our approach for estimating D, λ and K from an IncuCyte ZOOM™ assay provides more detail about cellular-level behaviour than standard methods for analysing these assays. In particular, our approach can be used to quantify the balance of cell migration and cell proliferation and, as we demonstrate, allows us to quantify how the addition of growth factors affects these processes individually.
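
A minimal sketch of the first calibration step described above: estimating λ and K by fitting the logistic (well-mixed) limit of the Fisher-Kolmogorov model to cell-density time series from a subregion behind the leading edge. The time points and densities are synthetic placeholders.

```python
# Sketch: estimating proliferation rate (lam) and carrying capacity (K) from
# cell-density time series. Density values are illustrative, not study data.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, lam, K, c0):
    """Solution of dC/dt = lam * C * (1 - C/K) with C(0) = c0."""
    return K * c0 * np.exp(lam * t) / (K + c0 * (np.exp(lam * t) - 1.0))

t = np.array([0.0, 6.0, 12.0, 18.0, 24.0, 36.0, 48.0])          # hours
density = np.array([0.30, 0.42, 0.55, 0.68, 0.78, 0.90, 0.95])  # scaled density

(lam, K, c0), _ = curve_fit(logistic, t, density, p0=[0.05, 1.0, 0.3])
print(f"lambda = {lam:.3f} /h, K = {K:.3f}")
# D is then estimated separately by matching detected leading-edge positions
# against a numerical solution of the full Fisher-Kolmogorov PDE.
```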

Relevance:

20.00%

Publisher:

Abstract:

The use of near infrared (NIR) hyperspectral imaging and hyperspectral image analysis for distinguishing between hard, intermediate and soft maize kernels from inbred lines was evaluated. NIR hyperspectral images of two sets (12 and 24 kernels) of whole maize kernels were acquired using a Spectral Dimensions MatrixNIR camera with a spectral range of 960-1662 nm and a sisuChema SWIR (short wave infrared) hyperspectral pushbroom imaging system with a spectral range of 1000-2498 nm. Exploratory principal component analysis (PCA) was used on absorbance images to remove background, bad pixels and shading. On the cleaned images, PCA could be used effectively to find histological classes including glassy (hard) and floury (soft) endosperm. PCA illustrated a distinct difference between glassy and floury endosperm along principal component (PC) three on the MatrixNIR and PC two on the sisuChema, with two distinguishable clusters. Subsequently, partial least squares discriminant analysis (PLS-DA) was applied to build a classification model. The PLS-DA model from the MatrixNIR image (12 kernels) resulted in a root mean square error of prediction (RMSEP) of 0.18. This was repeated on the MatrixNIR image of the 24 kernels, which resulted in an RMSEP of 0.18. The sisuChema image yielded an RMSEP of 0.29. The reproducible results obtained with the different data sets indicate that the method proposed in this paper has a real potential for future classification uses.
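
A minimal sketch of the PCA-then-PLS-DA pipeline described above, using scikit-learn with synthetic spectra in place of the MatrixNIR/sisuChema images.

```python
# Sketch: exploratory PCA followed by PLS-DA on pixel spectra. Spectra,
# wavelengths and class labels are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(36, 200))     # 36 pixel spectra x 200 wavelengths (mock)
y = np.repeat([0.0, 1.0], 18)      # 0 = floury (soft), 1 = glassy (hard)

# Exploratory PCA: inspect score images to remove background and bad pixels.
scores = PCA(n_components=3).fit_transform(X)

# PLS-DA: PLS regression against a dummy-coded class variable.
pls = PLSRegression(n_components=3).fit(X, y)
y_pred = pls.predict(X).ravel()
rmsep = np.sqrt(np.mean((y - y_pred) ** 2))  # computed on held-out data in practice
print(f"RMSEP (illustrative): {rmsep:.2f}")
```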

Relevance:

20.00%

Publisher:

Abstract:

Data-flow analysis is an integral part of any aggressive optimizing compiler. We propose a framework for improving the precision of data-flow analysis in the presence of complex control-flow. We initially perform data-flow analysis to determine those control-flow merges which cause the loss in data-flow analysis precision. The control-flow graph of the program is then restructured such that performing data-flow analysis on the resulting restructured graph gives more precise results. The proposed framework is both simple, involving the familiar notion of product automata, and also general, since it is applicable to any forward data-flow analysis. Apart from proving that our restructuring process is correct, we also show that restructuring is effective in that it necessarily leads to more optimization opportunities. Furthermore, the framework handles the trade-off between the increase in data-flow precision and the code size increase inherent in the restructuring. We show that determining an optimal restructuring is NP-hard, and propose and evaluate a greedy strategy. The framework has been implemented in the Scale research compiler, and instantiated for the specific problem of Constant Propagation. On the SPECINT 2000 benchmark suite we observe an average speedup of 4% in the running times over the Wegman-Zadeck conditional constant propagation algorithm and 2% over a purely path-profile-guided approach.
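
A minimal sketch of the precision loss at a control-flow merge that motivates the restructuring, using the standard constant-propagation lattice; this is an illustration, not the paper's implementation.

```python
# Sketch: why control-flow merges lose precision in constant propagation.
# Lattice values are TOP (undefined), a constant, or BOTTOM (not a constant).
TOP, BOTTOM = "TOP", "BOTTOM"

def meet(a, b):
    """Meet of two constant-propagation lattice values at a CFG merge."""
    if a == TOP:
        return b
    if b == TOP:
        return a
    if a == b:
        return a
    return BOTTOM  # differing constants merge to "not a constant"

# Path 1 assigns x = 1; path 2 assigns x = 2.
print(meet(1, 2))  # BOTTOM: after the merge, x is no longer a known constant
# Restructuring duplicates the code after such a merge so each copy sees a
# single predecessor, preserving x = 1 on one copy and x = 2 on the other.
```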

Relevance:

20.00%

Publisher:

Abstract:

A computationally efficient agglomerative clustering algorithm based on multilevel theory is presented. Here, the data set is divided randomly into a number of partitions. The samples of each such partition are clustered separately using a hierarchical agglomerative clustering algorithm to form sub-clusters. These are merged at higher levels to get the final classification. This algorithm leads to the same classification as that of the hierarchical agglomerative clustering algorithm when the clusters are well separated. The advantages of this algorithm are short run time and small storage requirement. It is observed that the savings, in storage space and computation time, increase nonlinearly with the sample size.
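
A minimal sketch of the multilevel scheme described above, using scikit-learn's agglomerative clustering on random partitions and then on the resulting sub-cluster centroids; data sizes and cluster counts are illustrative.

```python
# Sketch: cluster random partitions separately, then merge sub-cluster
# centroids at a higher level. Data and parameters are placeholders.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 2))
parts = np.array_split(rng.permutation(X), 4)   # random partitions

centroids = []
for part in parts:
    labels = AgglomerativeClustering(n_clusters=10).fit_predict(part)
    centroids.extend(part[labels == k].mean(axis=0) for k in range(10))

# Higher level: cluster the sub-cluster centroids for the final classification.
final = AgglomerativeClustering(n_clusters=3).fit_predict(np.array(centroids))
print(final)
```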

Relevance:

20.00%

Publisher:

Abstract:

Understanding the effects of different types and quality of data on bioclimatic modeling predictions is vital to ascertaining the value of existing models, and to improving future models. Bioclimatic models were constructed using the CLIMEX program, using different data types – seasonal dynamics, geographic (overseas) distribution, and a combination of the two – for two biological control agents for the major weed Lantana camara L. in Australia. The models for one agent, Teleonemia scrupulosa Stål (Hemiptera: Tingidae) were based on a higher quality and quantity of data than the models for the other agent, Octotoma scabripennis Guérin-Méneville (Coleoptera: Chrysomelidae). Predictions of the geographic distribution for Australia showed that T. scrupulosa models exhibited greater accuracy with a progressive improvement from seasonal dynamics data, to the model based on overseas distribution, and finally the model combining the two data types. In contrast, O. scabripennis models were of low accuracy, and showed no clear trends across the various model types. These case studies demonstrate the importance of high quality data for developing models, and of supplementing distributional data with species seasonal dynamics data wherever possible. Seasonal dynamics data allows the modeller to focus on the species response to climatic trends, while distributional data enables easier fitting of stress parameters by restricting the species envelope to the described distribution. It is apparent that CLIMEX models based on low quality seasonal dynamics data, together with a small quantity of distributional data, are of minimal value in predicting the spatial extent of species distribution.

Relevance:

20.00%

Publisher:

Abstract:

The objectives of this study were to predict the potential distribution, relative abundance and probability of habitat use by feral camels in the southern Northern Territory. Aerial survey data were used to model habitat association. The characteristics of ‘used’ (where camels were observed) versus ‘unused’ (pseudo-absence) sites were compared. Habitat association and abundance were modelled using generalised additive model (GAM) methods. The models predicted habitat suitability and the relative abundance of camels in the southern Northern Territory. The habitat suitability maps derived in the present study indicate that camels have suitable habitat in most areas of the southern Northern Territory. The index of abundance model identified areas of relatively high camel abundance. Identifying preferred habitats and areas of high abundance can help focus control efforts.
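
A minimal sketch of a used-versus-pseudo-absence habitat model in the spirit of the GAMs described, using the pygam library; the covariates, data, and smooth terms are hypothetical placeholders, not the aerial-survey variables.

```python
# Sketch: logistic GAM for habitat association (used vs pseudo-absence sites).
# Covariates and data are hypothetical placeholders.
import numpy as np
from pygam import LogisticGAM, s

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([
    rng.uniform(0, 100, n),   # e.g. distance to water (km), hypothetical
    rng.uniform(0, 1, n),     # e.g. vegetation index, hypothetical
])
y = rng.binomial(1, 0.5, n)   # 1 = used site, 0 = pseudo-absence

gam = LogisticGAM(s(0) + s(1)).fit(X, y)
suitability = gam.predict_proba(X)  # evaluated over a grid -> suitability map
print(suitability[:5])
```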

Relevance:

20.00%

Publisher:

Abstract:

Raw data from SeaScan™ transects off Wide Bay (south Queensland) taken in August 2007 as part of a study of ecological factors influencing the distribution of spanner crabs (Ranina ranina). The dataset (comma-delimited ASCII file) comprises the following fields:

1. record number
2. date-time (GMT)
3. date-time (AEST)
4. latitude (signed decimal degrees)
5. longitude (decimal degrees)
6. speed over ground (knots)
7. depth (m)
8. seabed roughness (V)
9. hardness (V)

Indices of roughness and hardness (from the first and second echoes, respectively) were obtained using a SeaScan™ 100 system (un-referenced) on board the Research Vessel Tom Marshall, with the ship’s Furuno FCV 1100 echo sounder and 1 kW, 50 kHz transducer. Vessel speed was generally kept below about 14 kt (typically ~12 kt), and the echo-sounder range was set to 80 m. The data were filtered to remove errors due to data drop-out, straying beyond system depth limits (min. 10 m), or transducer interference.
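
A minimal sketch of loading and filtering the transect file with pandas; the file name and column names below are assumptions to be matched against the actual header.

```python
# Sketch: load the comma-delimited SeaScan transect file and apply the depth
# filter described above. File name and column names are assumptions.
import pandas as pd

cols = ["record", "datetime_gmt", "datetime_aest", "lat", "lon",
        "speed_kt", "depth_m", "roughness_v", "hardness_v"]
df = pd.read_csv("seascan_wide_bay_2007.csv", names=cols, header=0)
# header=0 assumes the file has a header row to replace; drop it if not.

# Remove records shallower than the 10 m system depth limit.
df = df[df["depth_m"] >= 10.0]
print(df.head())
```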