982 results for Applied Computing


Relevance:

20.00%

Publisher:

Abstract:

Soil information is needed for managing the agricultural environment. The aim of this study was to apply artificial neural networks (ANNs) to the prediction of soil classes, using orbital remote sensing products, terrain attributes derived from a digital elevation model and local geology information as data sources. This approach to digital soil mapping was evaluated in an area with a high degree of lithologic diversity in the Serra do Mar. The neural network simulator used was JavaNNS, with the backpropagation learning algorithm. For soil class prediction, different combinations of the selected discriminant variables were tested: elevation, slope, aspect, curvature, plan curvature, profile curvature, topographic index, solar radiation, LS topographic factor, local geology information, and the clay mineral and iron oxide indices and the normalized difference vegetation index (NDVI) derived from a Landsat-7 Enhanced Thematic Mapper Plus (ETM+) image. Among the tested sets, the best results were obtained when all discriminant variables were combined with geological information (overall accuracy 93.2-95.6 %, Kappa index 0.924-0.951, for set 13). Excluding the variable profile curvature (set 12), overall accuracy ranged from 93.9 to 95.4 % and the Kappa index from 0.932 to 0.948. The maps based on the neural network classifier were consistent and similar to conventional soil maps drawn for the study area, although with more spatial detail. The results show the potential of ANNs for soil class prediction in mountainous areas with lithological diversity.
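
As an illustration of the kind of classifier the study describes, the sketch below trains a small backpropagation network on covariates like those listed above and reports overall accuracy and the Kappa index. It uses scikit-learn's MLPClassifier rather than the JavaNNS simulator used by the authors, and the synthetic data, network size and train/test split are assumptions made purely for illustration.

```python
# Minimal sketch of a backpropagation-trained classifier for soil class
# prediction from terrain and spectral covariates. This is not the authors'
# JavaNNS configuration; the synthetic data stand in for the real sample points.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
n = 600
# Placeholder covariates: elevation, slope, aspect, curvatures, topographic
# index, solar radiation, LS factor, geology, clay/iron indices, NDVI.
X = rng.normal(size=(n, 13))
# Placeholder soil classes, loosely tied to the covariates so training works.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int) \
    + (X[:, 3] > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# One hidden layer trained by backpropagation (the default in MLPClassifier).
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                                    random_state=0))
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("overall accuracy:", round(accuracy_score(y_test, pred), 3))
print("Kappa index:", round(cohen_kappa_score(y_test, pred), 3))
```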

Relevance:

20.00%

Publisher:

Abstract:

In recent years, geotechnologies such as remote and proximal sensing, together with attributes derived from digital terrain elevation models, have proven very useful for describing soil variability. However, these information sources are rarely used together. Therefore, a methodology for assessing and spatializing soil classes using information obtained from remote/proximal sensing, GIS and expert knowledge was applied and evaluated. Two study areas in the State of São Paulo, Brazil, totaling approximately 28,000 ha, were used for this work. First, in one area (area 1), conventional pedological mapping was carried out and, from the soil classes found, patterns were derived from the following information: a) spectral information (shape of absorption features and their intensity in spectral curves over the 350-2,500 nm wavelength range) of soil samples collected at specific points in the area (according to each soil type); b) equations for determining chemical and physical soil properties, obtained from the relationship between the attribute levels measured in the laboratory by conventional methods and the spectral data; c) supervised classification of Landsat 5 TM images, in order to detect changes in soil particle size (soil texture); d) the relationship between soil classes and relief attributes. Subsequently, the patterns obtained were applied in area 2 to derive a pedological classification of its soils, this time in a GIS (ArcGIS). Finally, a conventional pedological map was produced for area 2 and compared with the digital map, i.e. the one obtained only with the previously established patterns. The proposed methodology achieved 79 % accuracy at the first categorical level of the Soil Classification System, 60 % accuracy at the second categorical level, and became less useful at categorical level 3 (37 % accuracy).
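
Step (b), relating laboratory-measured soil attributes to the 350-2,500 nm spectra, could in principle be sketched with any multivariate regression; the example below uses partial least squares regression as a stand-in, since the abstract does not name the technique. The spectra, the target attribute (clay content) and all sizes are placeholders.

```python
# Sketch of step (b): calibrating a regression from 350-2,500 nm reflectance
# spectra to a laboratory-measured attribute (here, clay content). PLS is an
# assumed stand-in technique; the arrays below are random placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_samples, n_bands = 120, 2151                    # e.g. 1 nm steps over 350-2,500 nm
spectra = rng.random((n_samples, n_bands))        # placeholder reflectance
clay = 300 + 400 * spectra[:, 500] + rng.normal(0, 20, n_samples)  # placeholder clay, g/kg

pls = PLSRegression(n_components=10)
predicted = cross_val_predict(pls, spectra, clay, cv=10).ravel()
print("cross-validated R2:", round(r2_score(clay, predicted), 2))

# Once calibrated on area 1, the fitted model would be applied to spectra
# collected in area 2 to estimate the same attribute at unvisited points.
pls.fit(spectra, clay)
```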

Relevance:

20.00%

Publisher:

Abstract:

For the last two decades, supertree reconstruction has been an active field of research and has seen the development of a large number of major algorithms. Because of the growing popularity of supertree methods, it has become necessary to evaluate the performance of these algorithms to determine which are the best options (especially with regard to the widely used supermatrix approach). In this study, seven of the most commonly used supertree methods are investigated using a large empirical data set (in terms of number of taxa and molecular markers) from the worldwide flowering plant family Sapindaceae. Supertree methods were evaluated using several criteria: similarity of the supertrees to the input trees, similarity between the supertrees and the total evidence tree, level of resolution of the supertree, and computational time required by the algorithm. Additional analyses were also conducted on a reduced data set to test whether performance was affected by the heuristic searches rather than by the algorithms themselves. Based on our results, two main groups of supertree methods were identified: on the one hand, the matrix representation with parsimony (MRP), MinFlip, and MinCut methods performed well according to our criteria; on the other hand, the average consensus, split fit, and most similar supertree methods performed more poorly, or at least did not behave in the same way as the total evidence tree. Results for the super distance matrix, the most recent approach tested here, were promising, with at least one derived method performing as well as MRP, MinFlip, and MinCut. The output of each method was only slightly improved when applied to the reduced data set, suggesting correct behaviour of the heuristic searches and a relatively low sensitivity of the algorithms to data set size and missing data. Results also showed that the MRP analyses could reach a high level of quality even when using a simple heuristic search strategy, with the exception of MRP with Purvis coding scheme and reversible parsimony. The future of supertrees lies in the implementation of a standardized heuristic search for all methods and in increased computing power to handle large data sets. The latter would prove particularly useful for promising approaches such as the maximum quartet fit method, which still requires substantial computing power.
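
Several of the methods compared here, MRP in particular, start from a matrix representation of the input trees. The sketch below shows the Baum-Ragan coding behind MRP on two toy trees; the input format (each tree given as its leaf set plus a list of clades) is a simplification for illustration and does not correspond to any of the benchmarked implementations.

```python
# Minimal sketch of Baum-Ragan matrix representation coding, the encoding
# behind MRP: each non-trivial clade of each input tree becomes one binary
# character (1 = taxon inside the clade, 0 = sampled in that tree but outside
# the clade, ? = taxon missing from that tree).
def mrp_matrix(trees, all_taxa):
    """Return {taxon: list of '0'/'1'/'?' character states}."""
    matrix = {taxon: [] for taxon in all_taxa}
    for leaves, clades in trees:
        for clade in clades:
            for taxon in all_taxa:
                if taxon not in leaves:
                    matrix[taxon].append("?")
                elif taxon in clade:
                    matrix[taxon].append("1")
                else:
                    matrix[taxon].append("0")
    return matrix

# Two toy input trees: ((A,B),C) and ((B,C),D), each as (leaf set, clades).
trees = [
    ({"A", "B", "C"}, [{"A", "B"}]),
    ({"B", "C", "D"}, [{"B", "C"}]),
]
taxa = sorted({"A", "B", "C", "D"})
for taxon, states in mrp_matrix(trees, taxa).items():
    print(taxon, "".join(states))
# The resulting matrix would then be analysed under parsimony to yield the
# MRP supertree.
```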

Relevance:

20.00%

Publisher:

Abstract:

Skeletal muscle mitochondrial (Mito) and lipid droplet (Lipid) content are often measured in human translational studies. Stereological point counting allows Mito and Lipid volume density (Vd) to be computed from micrographs taken with transmission electron microscopes. Previous studies do not specify the size of the individual squares that make up the grids, which makes reproducibility difficult, particularly when different magnifications are used. Our objective was to determine which grid size would best predict fractional volume efficiently without sacrificing reliability, and to test a novel method to reduce sampling bias. Methods: Ten subjects underwent vastus lateralis biopsies. Samples were fixed, embedded, and cut longitudinally into ultrathin sections of 60 nm. Twenty micrographs of the intramyofibrillar region were taken per subject at ×33,000 magnification. Grids of different sizes were superimposed on each micrograph: 1,000 × 1,000 nm, 500 × 500 nm, and 250 × 250 nm. Results: Mean Mito and Lipid Vd were not statistically different across grids. Variability was greater when going from the 1,000 × 1,000 nm to the 500 × 500 nm grid than from the 500 × 500 nm to the 250 × 250 nm grid. Discussion: This study is the first to attempt to standardize grid size while keeping with conventional stereology principles, in the hope of producing replicable assessments that can be obtained universally across studies of human skeletal muscle mitochondrial and lipid droplet content.
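
The point-counting estimate itself is straightforward to express: superimpose a grid of a given spacing and take the fraction of grid points that fall on the structure of interest. The sketch below applies this to a synthetic boolean mask for the three grid sizes compared in the study; the pixel size and the mask are invented for illustration.

```python
# Sketch of stereological point counting: Vd is estimated as the fraction of
# grid intersections that land on the structure of interest. The micrograph
# here is a synthetic boolean mask; nm-per-pixel and the 7% target fraction
# are illustrative values, not study data.
import numpy as np

rng = np.random.default_rng(0)
nm_per_pixel = 2.0
mask = rng.random((2048, 2048)) < 0.07   # pretend 7% of the area is mitochondria

def point_count_vd(mask, grid_nm, nm_per_pixel):
    """Fraction of grid points (spaced grid_nm apart) hitting True pixels."""
    step = max(1, int(round(grid_nm / nm_per_pixel)))
    hits = mask[::step, ::step]
    return hits.sum() / hits.size

for grid in (1000, 500, 250):
    vd = point_count_vd(mask, grid, nm_per_pixel)
    print(f"{grid} x {grid} nm grid: Vd = {vd:.3f}")
```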

Relevance:

20.00%

Publisher:

Abstract:

Introduction: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing dosage regimens based on the measurement of blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical expertise. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance, and over the last decades computer programs have been developed to assist clinicians in this task. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities.

Method: A literature and Internet search was performed to identify software. All programs were tested on a common personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them.

Results: Twelve software tools were identified, tested and ranked, providing a comprehensive review of the characteristics of the available software. The number of drugs handled varies widely, and 8 programs allow users to add their own drug models. Ten programs are able to compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 are also able to suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates such as age, gender and weight. Among those applying Bayesian analysis, one uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated (e.g. in terms of storage or report generation) or less user-friendly.

Conclusion: Whereas two integrated programs are at the top of the ranked list, such complex tools may not fit all institutions, and each software tool must be considered with respect to the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic monitoring is still growing. Although developers have put effort into them over the last years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capacity of data storage and report generation.
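
The weighted scoring grid can be illustrated with a toy calculation: per-criterion scores are multiplied by the criterion weights and summed to rank the programs. The program names, weights and scores below are invented and do not reproduce the actual grid used in the benchmark.

```python
# Toy illustration of a weighted scoring grid: each program is scored per
# criterion, each criterion carries a weight reflecting its relative
# importance, and programs are ranked by the weighted sum. All names, weights
# and scores are invented for illustration.
criteria_weights = {
    "pharmacokinetic relevance": 0.35,
    "user-friendliness": 0.25,
    "computing aspects": 0.15,
    "interfacing": 0.15,
    "storage": 0.10,
}
scores = {  # raw scores on a 0-10 scale, one per criterion, same order as above
    "Program A": [9, 8, 7, 6, 7],
    "Program B": [8, 9, 8, 5, 6],
    "Program C": [6, 5, 6, 4, 5],
}

weights = list(criteria_weights.values())
ranking = sorted(
    ((sum(w * s for w, s in zip(weights, row)), name)
     for name, row in scores.items()),
    reverse=True)
for total, name in ranking:
    print(f"{name}: weighted score = {total:.2f}")
```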

Relevance:

20.00%

Publisher:

Abstract:

Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing dosage regimens based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical expertise. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities.

Methods: Literature and Internet searches were conducted to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them.

Results: Twelve software tools were identified, tested and ranked, providing a comprehensive review of the characteristics of the available software. The number of drugs handled varies from 2 to more than 180, and some programs integrate different population types. Eight programs offer the ability to add new drug models based on population PK data. Ten tools incorporate Bayesian computation to predict dosage regimens (individual parameters are calculated based on population PK models). All of them can compute Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 can also suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly.

Conclusions: Whereas two software packages are ranked at the top of the list, such complex tools may not fit all institutions, and each program must be considered with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast to use for routine activities, including for non-experienced users. Although interest in TDM tools is growing and efforts have been made over the last years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capacity of data storage and automated report generation.
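
The Bayesian a posteriori adjustment that most of these tools implement can be sketched, under simplifying assumptions, as a maximum a posteriori estimate of individual parameters for a one-compartment IV bolus model, shrunk towards population priors. All drug, prior and target values below are illustrative, and this is not the algorithm of any specific program evaluated.

```python
# Minimal sketch of Bayesian (MAP) a posteriori individualisation for a
# one-compartment IV bolus model: individual CL and V are estimated from one
# measured concentration, shrunk towards population priors. Every numeric
# value here is an illustrative assumption.
import numpy as np
from scipy.optimize import minimize

dose = 1000.0                 # mg, IV bolus
t_obs, c_obs = 12.0, 6.5      # h, mg/L: one measured concentration
sigma = 0.5                   # residual SD (mg/L)
cl_pop, v_pop = 4.0, 50.0     # population typical clearance (L/h) and volume (L)
omega_cl, omega_v = 0.3, 0.2  # between-subject SD on the log scale

def c_pred(cl, v, t):
    """Predicted concentration for a one-compartment IV bolus model."""
    return dose / v * np.exp(-cl / v * t)

def neg_log_posterior(log_params):
    log_cl, log_v = log_params
    cl, v = np.exp(log_cl), np.exp(log_v)
    residual = (c_obs - c_pred(cl, v, t_obs)) ** 2 / sigma ** 2
    prior = ((log_cl - np.log(cl_pop)) ** 2 / omega_cl ** 2
             + (log_v - np.log(v_pop)) ** 2 / omega_v ** 2)
    return 0.5 * (residual + prior)

fit = minimize(neg_log_posterior, x0=[np.log(cl_pop), np.log(v_pop)])
cl_i, v_i = np.exp(fit.x)
print(f"individual CL = {cl_i:.2f} L/h, V = {v_i:.1f} L")

# The individual parameters can then drive an a posteriori dose suggestion,
# e.g. a maintenance dose targeting an average concentration over an interval.
c_target, tau = 10.0, 12.0    # mg/L, dosing interval (h)
print("suggested dose:", round(float(c_target * cl_i * tau)), "mg per interval")
```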

Relevance:

20.00%

Publisher:

Abstract:

Melt-rock reaction in the upper mantle is recorded in a variety of ultramafic rocks and is an important process in modifying melt composition on its way from the source region towards the surface. This experimental study evaluates the compositional variability of tholeiitic basalts upon reaction with depleted peridotite at uppermost-mantle conditions. Infiltration-reaction processes are simulated by employing a three-layered set-up: primitive basaltic powder ('melt layer') is overlain by a 'peridotite layer' and a layer of vitreous carbon spheres ('melt trap'). Melt from the melt layer is forced to move through the peridotite layer into the melt trap. Experiments were conducted at 0.65 and 0.8 GPa in the temperature range 1,170-1,290 °C. In this P-T range, representing conditions encountered in the transition zone (thermal boundary layer) between the asthenosphere and the lithosphere underneath oceanic spreading centres, the melt is subjected to fractionation, and the peridotite is partially melting (T_s ≈ 1,260 °C). The effect of reaction between melt and peridotite on the melt composition was investigated across each experimental charge. Quenched melts in the peridotite layers display larger compositional variations than melt layer glasses. A difference between glasses in the melt and peridotite layer becomes more important at decreasing temperature through a combination of enrichment in incompatible elements in the melt layer and less efficient diffusive equilibration in the melt phase. At 1,290 °C, preferential dissolution of pyroxenes enriches the melt in silica and dilutes it in incompatible elements. Moreover, liquids become increasingly enriched in Cr2O3 at higher temperatures due to the dissolution of spinel. Silica contents of liquids decrease at 1,260 °C, whereas incompatible elements start to concentrate in the melt due to increasing levels of crystallization. At the lowest temperatures investigated, increasing alkali contents cause silica to increase as a consequence of reactive fractionation. Pervasive percolation of tholeiitic basalt through an upper-mantle thermal boundary layer can thus impose a high-Si 'low-pressure' signature on MORB. This could explain opx + plag enrichment in shallow plagioclase peridotites and prolonged formation of olivine gabbros.

Relevance:

20.00%

Publisher:

Abstract:

Summary: Total use of natural resources applied to agriculture: an example from Finland

Relevance:

20.00%

Publisher:

Abstract:

Background: Obesity may have an impact on key aspects of health-related quality of life (HRQOL). In this context, the Impact of Weight on Quality of Life (IWQOL) questionnaire was the first scale designed specifically to assess the impact of weight on HRQOL. The aim of the present study was twofold: to assess HRQOL in a sample of Spanish patients awaiting bariatric surgery and to determine the psychometric properties of the IWQOL-Lite and its sensitivity to detect differences in HRQOL across groups. Methods: Participants were 109 obese adult patients (BMI ≥ 35 kg/m2) from Barcelona, to whom the following measurement instruments were administered: IWQOL-Lite, Depression Anxiety Stress Scales, Brief Symptom Inventory, and self-perception items. Results: Descriptive data for the IWQOL-Lite scores obtained by these patients are reported. Principal components analysis revealed a five-factor model accounting for 72.05% of the total variance, with adequate factor loadings for all items. Corrected item-total correlations were acceptable for all items. Cronbach's alpha coefficients were excellent both for the subscales (0.88-0.93) and for the total scale (0.95). The relationship between the IWQOL-Lite and other variables supports the construct validity of the scale. Finally, sensitivity analysis revealed large effect sizes when comparing scores obtained by extreme BMI groups. Conclusions: This is the first study to report the application of the IWQOL-Lite to a sample of Spanish patients awaiting bariatric surgery and to confirm that the Spanish version of the instrument has adequate psychometric properties.
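
Cronbach's alpha, reported above for the subscales and the total scale, is computed from the item variances and the variance of the summed score. The short sketch below shows the calculation on randomly generated placeholder responses rather than IWQOL-Lite data.

```python
# Sketch of how Cronbach's alpha is computed for a scale from raw item scores
# (rows = respondents, columns = items). The data are random placeholders.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
latent = rng.normal(size=(109, 1))                      # shared trait across items
responses = latent + 0.6 * rng.normal(size=(109, 11))   # 11 correlated items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```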

Relevance:

20.00%

Publisher:

Abstract:

Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models. A further implication of this is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in terms of computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales. (C) 2012 Elsevier B.V. All rights reserved.
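
The fixed-lid planar water surface idea can be caricatured in a few lines: depth is taken as the difference between a planar water surface and the bed elevation, and a Manning-type relation converts depth to unit discharge. The grid, slope and roughness values below are illustrative, and the sketch is a generic reduced-complexity treatment rather than the specific RC model evaluated in the study.

```python
# Generic sketch of the fixed-lid idea used in reduced-complexity flow models
# of large, low-slope rivers: depth = planar water surface minus bed, and a
# Manning-type relation gives unit discharge. All values are illustrative.
import numpy as np

nx, ny, dx = 200, 60, 50.0   # cells along / across the channel, cell size (m)
water_slope = 4e-5           # very low water-surface slope (assumed)
n_manning = 0.03             # roughness coefficient (assumed)

rng = np.random.default_rng(0)
x = np.arange(nx) * dx
# Bed elevation: a gently sloping plane ~8 m below the water surface plus bar-scale noise.
bed = (-water_slope * x)[:, None] - 8.0 + rng.normal(0.0, 0.5, (nx, ny))
# Fixed lid: the water surface is a plane dipping downstream at the imposed slope.
water_surface = np.repeat((-water_slope * x)[:, None], ny, axis=1)

depth = np.clip(water_surface - bed, 0.0, None)                    # flow depth (m)
unit_q = depth ** (5.0 / 3.0) * np.sqrt(water_slope) / n_manning   # Manning-type unit discharge (m2/s)

print("mean depth (m):", round(float(depth.mean()), 2))
print("discharge through the upstream section (m3/s):",
      round(float(unit_q[0].sum() * dx), 0))
```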