992 results for Aggregation methods


Relevance: 30.00%

Abstract:

This article reviews the statistical methods that have been used to study the planar distribution, and especially clustering, of objects in histological sections of brain tissue. The objective of these studies is usually quantitative description, comparison between patients or correlation between histological features. Objects of interest such as neurones, glial cells, blood vessels or pathological features such as protein deposits appear as sectional profiles in a two-dimensional section. These objects may not be randomly distributed within the section but exhibit a spatial pattern, a departure from randomness either towards regularity or clustering. The methods described include simple tests of whether the planar distribution of a histological feature departs significantly from randomness using randomized points, lines or sample fields and more complex methods that employ grids or transects of contiguous fields, and which can detect the intensity of aggregation and the sizes, distribution and spacing of clusters. The usefulness of these methods in understanding the pathogenesis of neurodegenerative diseases such as Alzheimer's disease and Creutzfeldt-Jakob disease is discussed. © 2006 The Royal Microscopical Society.
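
To make the simplest of these approaches concrete, the sketch below (not taken from the article) tests counts of object profiles in a grid of contiguous sample fields against the Poisson expectation using the variance-to-mean ratio; the counts, field layout, and Python/NumPy/SciPy implementation are illustrative assumptions.

    # Quadrat-count test for departure from spatial randomness, as might be
    # applied to object profile counts in contiguous sample fields.
    import numpy as np
    from scipy import stats

    def dispersion_test(counts):
        """Variance-to-mean ratio test: under complete spatial randomness
        (Poisson) the ratio is ~1; >1 suggests clustering, <1 regularity."""
        counts = np.asarray(counts, dtype=float)
        n = counts.size
        vmr = counts.var(ddof=1) / counts.mean()
        chi2 = (n - 1) * vmr                      # ~ chi-square with n - 1 df
        p = stats.chi2.sf(chi2, df=n - 1)         # one-sided test for clustering
        return vmr, p

    # Hypothetical counts of plaque profiles in 20 contiguous sample fields
    counts = [0, 2, 1, 5, 7, 0, 1, 0, 6, 8, 0, 1, 2, 0, 9, 4, 0, 1, 0, 3]
    vmr, p = dispersion_test(counts)
    print(f"variance/mean ratio = {vmr:.2f}, p = {p:.3f}")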

Relevance: 30.00%

Abstract:

The last decade has seen a considerable increase in the application of quantitative methods to the study of histological sections of brain tissue, and especially to the study of neurodegenerative disease. These disorders are characterised by the deposition and aggregation of abnormal or misfolded proteins in the form of extracellular protein deposits such as senile plaques (SP) and intracellular inclusions such as neurofibrillary tangles (NFT). Quantifying brain lesions and studying the relationships between lesions and normal anatomical features of the brain, including neurons, glial cells, and blood vessels, have become important methods of elucidating disease pathogenesis. This review describes measures of the abundance of a histological feature, such as density, frequency, and 'load', and the sampling methods by which quantitative measures can be obtained, including plot/quadrat sampling, transect sampling, and the point-quarter method. In addition, methods for determining the spatial pattern of a histological feature, i.e., whether the feature is distributed at random, distributed regularly, or aggregated into clusters, are described. These methods include the use of the Poisson and binomial distributions, pattern analysis by regression, Fourier analysis, and methods based on mapped point patterns. Finally, the statistical methods available for studying the degree of spatial correlation between pathological lesions and neurons, glial cells, and blood vessels are described.
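
As an illustration of one of the sampling methods mentioned, the following sketch (not from the review) applies the classical Cottam-Curtis point-quarter estimator, in which density is estimated as the reciprocal of the squared mean point-to-object distance; the distances and the Python implementation are invented for illustration.

    # Point-quarter (point-centred quarter) density estimate using the
    # classical Cottam-Curtis estimator; all distances are invented.
    import numpy as np

    def point_quarter_density(quarter_distances):
        """quarter_distances: array of shape (n_points, 4) giving, for each
        random sample point, the distance to the nearest object profile in
        each of the four quadrants around it. Returns objects per unit area."""
        d = np.asarray(quarter_distances, dtype=float)
        mean_d = d.mean()                # mean point-to-object distance
        return 1.0 / mean_d ** 2         # Cottam-Curtis density estimate

    # Distances (mm) from 5 sample points to the nearest neuron profile
    # in each of the four quadrants
    dists = [[0.12, 0.08, 0.15, 0.10],
             [0.09, 0.11, 0.07, 0.14],
             [0.13, 0.10, 0.12, 0.09],
             [0.08, 0.16, 0.11, 0.10],
             [0.10, 0.09, 0.13, 0.12]]
    print(f"estimated density = {point_quarter_density(dists):.1f} profiles/mm^2")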

Relevance: 30.00%

Abstract:

Remote sensing data are routinely used in ecology to investigate the relationship between landscape pattern, as characterised by land use and land cover maps, and ecological processes. Multiple factors related to the representation of geographic phenomena have been shown to affect the characterisation of landscape pattern, resulting in spatial uncertainty. This study statistically investigated the effect of the interaction between landscape spatial pattern and geospatial processing methods, unlike most papers, which consider the effect of each factor only in isolation. This is important because the data used to calculate landscape metrics typically undergo a series of data-abstraction processing tasks that are rarely performed in isolation. The geospatial processing methods tested were the aggregation method and the choice of pixel size used to aggregate data. These were compared with two components of landscape pattern: spatial heterogeneity and the proportion of land-cover class area. The interactions and their effect on the final land-cover map were described using landscape metrics to measure landscape pattern and classification accuracy (the response variables). All landscape metrics and classification accuracy were shown to be affected both by landscape pattern and by processing methods. Large variability in the response variables and interactions between the explanatory variables were observed. However, even though interactions occurred, they only affected the magnitude of the difference in landscape-metric values. Thus, provided that the same processing methods are used, landscapes should retain their ranking when their landscape metrics are compared. For example, highly fragmented landscapes will always have larger values for the landscape metric "number of patches" than less fragmented landscapes. However, the magnitude of the difference between landscapes may change, and therefore absolute values of landscape metrics may need to be interpreted with caution. The explanatory variables with the largest effects were spatial heterogeneity and pixel size; these tended to produce large main effects and large interactions. The high variability in the response variables and the interactions between the explanatory variables indicate that it would be difficult to generalise about the impact of processing on landscape pattern: only two processing methods were tested, and untested processing methods may result in even greater spatial uncertainty. © 2013 Elsevier B.V.
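
For readers unfamiliar with the processing step being varied, the sketch below shows one common aggregation method, majority-rule resampling of a categorical land-cover raster to a coarser pixel size; the class codes, aggregation factor, and NumPy implementation are illustrative assumptions rather than the paper's actual processing chain.

    # Majority-rule aggregation of a categorical land-cover raster to a
    # coarser pixel size (factor x factor blocks). Data are illustrative.
    import numpy as np

    def majority_aggregate(landcover, factor):
        """Aggregate a 2-D array of integer class codes by taking the modal
        (most frequent) class within each factor x factor block."""
        h, w = landcover.shape
        h, w = h - h % factor, w - w % factor            # trim to a multiple
        blocks = (landcover[:h, :w]
                  .reshape(h // factor, factor, w // factor, factor)
                  .swapaxes(1, 2)
                  .reshape(-1, factor * factor))
        modal = np.array([np.bincount(b).argmax() for b in blocks])
        return modal.reshape(h // factor, w // factor)

    rng = np.random.default_rng(0)
    fine = rng.integers(0, 4, size=(16, 16))             # 4 land-cover classes
    coarse = majority_aggregate(fine, factor=4)          # 4x coarser pixels
    print(coarse)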

Relevance: 30.00%

Abstract:

An iterative travel time forecasting scheme, named the Advanced Multilane Prediction based Real-time Fastest Path (AMPRFP) algorithm, is presented in this dissertation. This scheme is derived from the conventional kernel-estimator-based prediction model by associating the real-time nonlinear impacts caused by neighboring arcs’ traffic patterns with historical traffic behavior. The AMPRFP algorithm is evaluated by predicting the travel time of congested arcs in the urban area of Jacksonville. Experimental results show that the proposed scheme significantly reduces both the relative mean error (RME) and the root-mean-squared error (RMSE) of the predicted travel time. To obtain high-quality real-time traffic information, which is essential to the performance of the AMPRFP algorithm, a data-clean-scheme-enhanced empirical learning (DCSEEL) algorithm is also introduced. This novel method investigates the correlation between distance and direction in the geometric map, which is not considered in existing fingerprint localization methods. Specifically, empirical learning methods are applied to minimize the error in the estimated distance, and a direction filter is developed to remove joints that have a negative influence on localization accuracy. Synthetic experiments in urban, suburban, and rural environments are designed to evaluate the performance of the DCSEEL algorithm in determining the cellular probe’s position. The results show that the cellular probe’s localization accuracy is notably improved by the DCSEEL algorithm. Additionally, a new fast correlation technique is developed to overcome the time-efficiency problem of the existing correlation-algorithm-based floating car data (FCD) technique. The matching process is transformed into a one-dimensional (1-D) curve-matching problem, and the Fast Normalized Cross-Correlation (FNCC) algorithm is introduced to replace the Pearson product-moment correlation coefficient (PMCC) algorithm in order to meet the real-time requirement of the FCD method. The fast correlation technique shows a significant reduction in computational cost without affecting the accuracy of the matching process.
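
The conventional kernel-estimator prediction model that AMPRFP extends can be sketched as a Nadaraya-Watson regression of travel time on time of day over historical observations; the bandwidth, data, and Python implementation below are illustrative assumptions, not the dissertation's actual formulation.

    # Nadaraya-Watson kernel regression of travel time on time of day.
    # Bandwidth and historical observations are illustrative.
    import numpy as np

    def kernel_predict(t_query, t_hist, tt_hist, bandwidth=0.5):
        """Predict travel time at time-of-day t_query (hours) as a Gaussian-
        kernel weighted average of historical travel times tt_hist observed
        at times t_hist."""
        t_hist, tt_hist = np.asarray(t_hist, float), np.asarray(tt_hist, float)
        w = np.exp(-0.5 * ((t_query - t_hist) / bandwidth) ** 2)
        return float(np.sum(w * tt_hist) / np.sum(w))

    # Historical travel times (minutes) on one arc by time of day
    t_hist  = [7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 17.0, 17.5, 18.0]
    tt_hist = [6.2, 8.9, 12.4, 11.8, 9.0, 7.1, 13.5, 14.2, 12.0]
    print(f"predicted travel time at 8:15 = {kernel_predict(8.25, t_hist, tt_hist):.1f} min")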

Relevance: 30.00%

Abstract:

Sensitive detection of pathogens is critical to ensuring the safety of food supplies and to preventing bacterial disease infection and outbreaks at the first onset. While conventional techniques such as cell culture, ELISA, and PCR have been the predominant detection workhorses, they are limited by time-consuming procedures, complicated sample pre-treatment, expensive analysis and operation, or an inability to be implemented in point-of-care testing. Here, we present our recently developed assay exploiting enzyme-induced aggregation of plasmonic gold nanoparticles (AuNPs) for label-free and ultrasensitive detection of bacterial DNA. In the experiments, AuNPs are first functionalized with specific, single-stranded RNA probes so that they remain highly stable in solution, even under high-electrolyte conditions, and thus exhibit a red colour. When bacterial DNA is present in a sample, a DNA-RNA heteroduplex forms and is subsequently cleaved by RNase H at the RNA probe, liberating the DNA to hybridize with another RNA strand. This cycle continues until all of the RNA strands are cleaved, leaving the nanoparticles ‘unprotected’. The addition of NaCl then causes the ‘unprotected’ nanoparticles to aggregate, initiating a colour change from red to blue. The reaction is performed in a multi-well plate format, and the distinct colour signal can be discriminated by the naked eye or by simple optical spectroscopy. As a result, bacterial DNA at picomolar concentrations could be unambiguously detected. The enzyme-induced AuNP aggregation assay is therefore both easy to perform and sensitive, and should significantly benefit the development of fast and ultrasensitive methods for disease detection and diagnosis.

Relevance: 30.00%

Abstract:

Background: Harpalycin 2 (HP-2) is an isoflavone isolated from the leaves of Harpalyce brasiliana Benth., a snakeroot found in the northeastern region of Brazil and used in folk medicine to treat snakebite. Its leaves are said to be anti-inflammatory. Secretory phospholipases A(2) are important toxins found in snake venom and are structurally related to those found in inflammatory conditions in mammals, such as arthritis and atherosclerosis; for this reason they can be valuable tools in the search for new anti-phospholipase A(2) drugs. Methods: HP-2 and piratoxin-III (PrTX-III) were purified by chromatographic techniques. The effect of HP-2 on the enzymatic activity of PrTX-III was assayed using 4-nitro-3-octanoyloxy-benzoic acid as the substrate. Inhibition of PrTX-III-induced platelet aggregation by HP-2 was compared with that by aristolochic acid and p-bromophenacyl bromide (p-BPB). In an attempt to elucidate how HP-2 interacts with PrTX-III, mass spectrometry, circular dichroism, and intrinsic fluorescence analyses were performed. Docking scores of the ligands (HP-2, aristolochic acid, and p-BPB) against the PrTX-III target were also calculated. Results: HP-2 inhibited the enzymatic activity of PrTX-III (IC50 = 11.34 ± 0.28 μg/mL), although it did not form a stable chemical complex in the active site, since mass spectrometry showed no difference between native (13,837.34 Da) and HP-2-treated PrTX-III (13,856.12 Da). Structural analysis of PrTX-III after treatment with HP-2 showed a decrease in dimerization and slight protein unfolding. In the platelet aggregation assay, HP-2 previously incubated with PrTX-III inhibited aggregation compared with the untreated protein. PrTX-III chemically treated with aristolochic acid or p-BPB, two standard PLA(2) inhibitors, showed low inhibitory effects compared with HP-2 treatment. Docking scores corroborated these results, showing a higher affinity of HP-2 for the PrTX-III target (PDB code: 1GMZ) than aristolochic acid or p-BPB. HP-2 previously incubated with the platelets also inhibited the aggregation induced by untreated PrTX-III, as well as that induced by arachidonic acid. Conclusion: HP-2 changes the structure of PrTX-III, inhibiting its enzymatic activity. In addition, the platelet-aggregating activity of PrTX-III was inhibited by treatment with HP-2, p-BPB, and aristolochic acid, and these results were corroborated by docking scores.

Relevance: 30.00%

Abstract:

This chapter gives an overview of aggregation functions and their use in recommender systems. The classical weighted average lies at the heart of various recommendation mechanisms, often being employed to combine item feature scores or predict ratings from similar users. Some improvements to accuracy and robustness can be achieved by aggregating different measures of similarity or using an average of recommendations obtained through different techniques. Advances made in the theory of aggregation functions therefore have the potential to deliver increased performance to many recommender systems. We provide definitions of some important families and properties, sophisticated methods of construction, and various examples of aggregation functions in the domain of recommender systems.
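
As a concrete example of the weighted average referred to here, the sketch below predicts a rating as a similarity-weighted arithmetic mean of neighbouring users' ratings; the similarity values and ratings are invented for illustration.

    # Similarity-weighted average of neighbours' ratings, the basic
    # aggregation step in many collaborative recommenders.
    import numpy as np

    def predict_rating(neighbour_ratings, similarities):
        """Weighted arithmetic mean of the ratings given by similar users,
        with the (non-negative) similarities acting as weights."""
        r = np.asarray(neighbour_ratings, float)
        s = np.asarray(similarities, float)
        return float(np.sum(s * r) / np.sum(s))

    ratings      = [4.0, 5.0, 3.0, 4.0]   # neighbours' ratings of the target item
    similarities = [0.9, 0.6, 0.2, 0.4]   # similarity of each neighbour to the target user
    print(f"predicted rating = {predict_rating(ratings, similarities):.2f}")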

Relevance: 30.00%

Abstract:

We consider an optimization problem in ecology where our objective is to maximize biodiversity with respect to different land-use allocations. As it turns out, the main problem can be framed as learning the weights of a weighted arithmetic mean where the objective is the geometric mean of its outputs. We propose methods for approximating solutions to this and similar problems, which are non-linear by nature, using linear and bilevel techniques.
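
A minimal sketch of the problem structure, under the assumption of a small illustrative data matrix and using a generic nonlinear solver (SciPy's SLSQP) rather than the linear and bilevel techniques proposed in the paper:

    # Choose the weights w of a weighted arithmetic mean so that the
    # geometric mean of its outputs over a set of inputs is maximised.
    import numpy as np
    from scipy.optimize import minimize

    X = np.array([[0.2, 0.7, 0.5],        # rows: sites, cols: land-use options
                  [0.9, 0.1, 0.4],        # all entries assumed positive
                  [0.3, 0.6, 0.8]])

    def neg_log_geomean(w):
        wam = X @ w                        # weighted arithmetic mean per row
        return -np.mean(np.log(wam))       # negated log of the geometric mean

    n = X.shape[1]
    res = minimize(neg_log_geomean, x0=np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    print("weights:", np.round(res.x, 3), "geometric mean:", np.exp(-res.fun))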

Relevance: 30.00%

Abstract:

The use of supervised learning techniques for fitting weights and/or generator functions of weighted quasi-arithmetic means – a special class of idempotent and nondecreasing aggregation functions – to empirical data has already been considered in a number of papers. Nevertheless, there are still some important issues that have not been discussed in the literature yet. In the first part of this two-part contribution we deal with the concept of regularization, a quite standard technique from machine learning applied so as to increase the fit quality on test and validation data samples. Due to the constraints on the weighting vector, it turns out that quite different methods can be used in the current framework, as compared to regression models. Moreover, it is worth noting that so far fitting weighted quasi-arithmetic means to empirical data has only been performed approximately, via the so-called linearization technique. In this paper we consider exact solutions to such special optimization tasks and indicate cases where linearization leads to much worse solutions.
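
To fix ideas, the sketch below fits the weights of a weighted quasi-arithmetic mean with a fixed generator g(x) = x^2 (a weighted quadratic mean) to illustrative data, minimising the error on the original scale (the "exact" fit) and showing the linearisation alternative in a comment; a regularization penalty on the weights could be added to either loss. The data and solver choice are assumptions, not the paper's setup.

    # Fitting the weights of a weighted quasi-arithmetic mean with generator
    # g(x) = x**2 to illustrative data, under simplex constraints on w.
    import numpy as np
    from scipy.optimize import minimize

    X = np.array([[0.2, 0.8, 0.5],
                  [0.6, 0.4, 0.9],
                  [0.1, 0.3, 0.7],
                  [0.9, 0.5, 0.2]])
    y = np.array([0.62, 0.68, 0.45, 0.60])    # observed aggregated values

    g, g_inv = np.square, np.sqrt

    def wqam(w, X):
        return g_inv(g(X) @ w)                # weighted quasi-arithmetic mean

    def exact_loss(w):
        return np.sum((wqam(w, X) - y) ** 2)  # error on the original scale

    # def linearised_loss(w):                 # fit in the g-transformed scale instead
    #     return np.sum((g(X) @ w - g(y)) ** 2)

    n = X.shape[1]
    res = minimize(exact_loss, x0=np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    print("fitted weights:", np.round(res.x, 3))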

Relevance: 30.00%

Abstract:

We propose a framework for eliciting and aggregating pairwise preference relations based on the assumption of an underlying fuzzy partial order. We also propose some linear programming optimization methods for ensuring consistency either as part of the aggregation phase or as a pre- or post-processing task. We contend that this framework of pairwise-preference relations, based on the Kemeny distance, can be less sensitive to extreme or biased opinions and is also less complex to elicit from experts. We provide some examples and outline their relevant properties and associated concepts.
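
One common formulation of a Kemeny-style distance between two pairwise preference relations, represented as reciprocal matrices with entries in [0, 1], is sketched below; the exact normalisation and representation used in the paper may differ, and the matrices are illustrative.

    # Kemeny-style distance between two pairwise preference relations,
    # here taken as the sum of absolute differences of off-diagonal entries.
    import numpy as np

    def kemeny_distance(P, Q):
        """P[i][j] and Q[i][j] are the degrees to which alternative i is
        preferred to j in the two relations (same set of alternatives)."""
        P, Q = np.asarray(P, float), np.asarray(Q, float)
        mask = ~np.eye(P.shape[0], dtype=bool)      # ignore the diagonal
        return float(np.abs(P - Q)[mask].sum())

    P = [[0.5, 0.8, 0.6],
         [0.2, 0.5, 0.7],
         [0.4, 0.3, 0.5]]
    Q = [[0.5, 0.6, 0.9],
         [0.4, 0.5, 0.5],
         [0.1, 0.5, 0.5]]
    print("Kemeny distance:", kemeny_distance(P, Q))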

Relevance: 30.00%

Abstract:

Soil organic matter (SOM) is important to fertility, since it performs several functions such as nutrient cycling, water and nutrient retention, and soil aggregation, in addition to being a source of energy for biological activity. This study proposes adaptations of the Embrapa, Walkley-Black, and Mebius methods that allow SOM to be determined by spectrophotometry, increasing their practicality. The sample mass was reduced from 500 mg to 200 mg, yielding an average saving of 60% in reagents and a 91% reduction in the volume of waste generated for the three methods, without compromising accuracy or precision. Conditions for the Mebius method were optimized, and the digestion time giving maximum recovery of SOM was established by factorial design and response-surface analysis. The methods were validated by estimating figures of merit. Among the methods investigated, the optimized Mebius method was best suited for determining SOM, showing nearly 100% recovery.