868 results for localized algorithms
Abstract:
The paper presents an approach to the mapping of precipitation data. The main goal is to perform spatial predictions and simulations of precipitation fields using geostatistical methods (ordinary kriging, kriging with external drift) as well as machine learning algorithms (neural networks). More practically, the objective is to reproduce both the spatial patterns and the extreme values simultaneously. This objective is best met by models integrating geostatistics and machine learning algorithms. To demonstrate how such models work, two case studies have been considered: first, a 2-day accumulation of heavy precipitation and, second, a 6-day accumulation of extreme orographic precipitation. The first example is used to compare the performance of two optimization algorithms (conjugate gradients and Levenberg-Marquardt) for training a neural network to reproduce extreme values. Hybrid models, which combine geostatistical and machine learning algorithms, are also treated in this context. The second dataset is used to analyze the contribution of Doppler radar imagery when used as an external drift or as an input to the models (kriging with external drift and neural networks). Model assessment is carried out by comparing independent validation errors as well as by analyzing data patterns.
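To make the geostatistical side concrete, here is a minimal ordinary-kriging sketch in Python/NumPy. It is a sketch under stated assumptions, not the paper's implementation: the spherical variogram and its parameters are illustrative stand-ins for a model that would normally be fitted to the data.

import numpy as np

def spherical_variogram(h, sill=1.0, rng=50.0):
    # Illustrative spherical model; in practice, fit sill and range to the data.
    h = np.minimum(h, rng)
    return sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)

def ordinary_kriging(X, z, x0):
    """Kriged estimate of z at location x0 from observations (X, z)."""
    n = len(z)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    d0 = np.linalg.norm(X - x0, axis=-1)                        # distances to target
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = spherical_variogram(d)
    A[:n, n] = A[n, :n] = 1.0        # Lagrange row/column enforcing sum(w) = 1
    b = np.append(spherical_variogram(d0), 1.0)
    w = np.linalg.solve(A, b)        # solve the ordinary kriging system
    return w[:n] @ z

# Example: five rain gauges, prediction at an ungauged site
X = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 2.0]])
z = np.array([12.0, 15.0, 9.0, 14.0, 13.0])   # accumulations (mm)
print(ordinary_kriging(X, z, np.array([4.0, 6.0])))

Kriging with external drift extends this system with rows for the drift variable (e.g., the Doppler radar field mentioned above); the neural-network and hybrid models are not sketched here.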
Abstract:
High-field (≥3 T) cardiac MRI is challenged by inhomogeneities of both the static magnetic field (B0) and the transmit radiofrequency field (B1+). The inhomogeneous B fields not only demand improved shimming methods but also impede the correct determination of the zero-order terms, i.e., the local resonance frequency f0 and the radiofrequency power needed to generate the intended local B1+ field. In this work, dual echo time B0-map and dual flip angle B1+-map acquisition methods are combined to acquire multislice B0- and B1+-maps simultaneously, covering the entire heart in a single breath hold of 18 heartbeats. A previously proposed slice profile correction, dependent on the excitation pulse shape, is tested and applied to reduce systematic errors in the multislice B1+-map. Localized higher-order shim correction values, including the zero-order terms for frequency f0 and radiofrequency power, can be determined from the acquired B0- and B1+-maps. The method was tested in 7 healthy adult human subjects at 3 T and improved the B0 field homogeneity (standard deviation) from 60 Hz to 35 Hz and the average B1+ field from 77% to 100% of the desired B1+ field when compared to more commonly used preparation methods.
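The acquisition details above (multislice coverage, single breath hold, slice-profile correction) are not reproduced here, but the pixelwise arithmetic of the two underlying mapping methods can be sketched in Python/NumPy. This is a hedged sketch with function names of our own; the B1+ part uses the double-angle variant of dual-flip-angle mapping, which assumes a long TR.

import numpy as np

def b0_map(echo1, echo2, delta_te):
    # Off-resonance map (Hz) from two complex echoes; delta_te in seconds,
    # assumed small enough that the phase difference does not wrap.
    dphi = np.angle(echo2 * np.conj(echo1))   # phase accrued between echoes
    return dphi / (2.0 * np.pi * delta_te)

def b1_map(mag_alpha, mag_2alpha, alpha_nom):
    # Relative B1+ map via the double-angle method: S(2a)/S(a) = 2 cos(a).
    # alpha_nom is the nominal flip angle in radians; 1.0 means nominal B1+.
    ratio = np.clip(mag_2alpha / (2.0 * mag_alpha), -1.0, 1.0)
    return np.arccos(ratio) / alpha_nom

# Synthetic demo: a uniform 40 Hz off-resonance with delta_te = 2 ms
rng = np.random.default_rng(0)
s1 = np.exp(1j * rng.uniform(-0.5, 0.5, (4, 4)))
s2 = s1 * np.exp(1j * 2.0 * np.pi * 40.0 * 2e-3)
print(b0_map(s1, s2, 2e-3))   # ~40 Hz everywhere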
Abstract:
Biopsies from human localized cutaneous lesion (LCL, n = 7) or disseminated lesion (DL, n = 8) cases were characterized according to cellular infiltration and the frequency of cytokine- (IFN-gamma, TNF-alpha) or iNOS enzyme-producing cells. LCL, the most common form of the disease, usually with one or two lesions, exhibits extensive tissue damage. DL is a rare form with widespread lesions throughout the body, exhibiting poor parasite containment but less tissue damage. We demonstrated that LCL lesions exhibit a higher frequency of B lymphocytes and a higher intensity of IFN-gamma expression. In both forms of the disease, CD8+ T cells were found at a higher frequency than CD4+ T cells. The frequency of TNF-alpha- and iNOS-producing cells, as well as the frequency of CD68+ macrophages, did not differ between LCL and DL. Our findings reinforce the link between efficient parasite control and tissue damage, implicating a higher frequency of IFN-gamma-producing cells, as well as its possible counteraction by infiltrating B cells and hence a possible in situ humoral immune response.
Abstract:
This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used in cases where the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and time. Most machine learning algorithms are universal and adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report, some of the widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and Support Vector Machines (SVM), are adapted to the analysis and modelling of geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, i.e., when the dimension of the space exceeds 5. Such features are usually generated, for example, from digital elevation models, remote sensing images, etc. An important extension of the models concerns the consideration of real-space constraints such as geomorphology, networks and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and model inputs. This problem is approached using different nonlinear feature selection/feature extraction tools. To demonstrate the application of machine learning algorithms, several interesting case studies are considered: digital soil mapping using SVM; automatic mapping of soil and water system pollution using ANN; natural hazard risk analysis (avalanches, landslides); and assessment of renewable resources (wind fields) with SVM and ANN models. The dimensionality of the spaces considered varies from 2 to more than 30. Figures 1, 2 and 3 demonstrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional geostatistical models.
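As a hedged sketch of the regression/mapping task on geo-features (scikit-learn; the feature layout, synthetic data and hyperparameters are illustrative, not those of the report):

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# One row per measurement site: [easting, northing, elevation, slope, aspect]
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 5))                  # stand-in geo-features
y = np.sin(4.0 * X[:, 0]) + X[:, 2] + 0.1 * rng.normal(size=200)

# Scaling matters for RBF kernels; C and epsilon would be tuned in practice.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X, y)
sites = rng.uniform(size=(10, 5))               # prediction locations
print(model.predict(sites))

The same pipeline generalizes to the high-dimensional geo-feature spaces discussed above by appending further columns (DEM derivatives, remote sensing bands, etc.).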
Abstract:
Objectives: Magnetic resonance (MR) imaging and spectroscopy (MRS) allow the anatomical evolution and neurochemical profiles of ischemic lesions to be established. The aim of the present study was to identify markers of reversible and irreversible damage by comparing the effects of 10-min middle cerebral artery occlusion (MCAO), mimicking a transient ischemic attack, with the effects of 30-min MCAO, which induces a striatal lesion. Methods: ICR-CD1 mice were subjected to 10-min (n = 11) or 30-min (n = 9) endoluminal MCAO by the filament technique at 0 h. Regional cerebral blood flow (CBF) was monitored in all animals by laser-Doppler flowmetry with a flexible probe fixed on the skull, with <20% of baseline CBF during ischemia and >70% during reperfusion. All MR studies were carried out in a horizontal 14.1 T magnet. Fast spin echo images with T2-weighted parameters were acquired to localize the volume of interest and evaluate the lesion size. Immediately after adjustment of field inhomogeneities, localized 1H MRS was applied to obtain the neurochemical profile from the striatum (6 to 8 microliters). Six animals (sham group) underwent nearly identical procedures without MCAO. Results: The 10-min MCAO induced no MR- or histologically detectable lesion in most of the mice and a small lesion in some of them. We thus had two groups with the same duration of ischemia but a different outcome, which could be compared with sham-operated mice and with more severely ischemic mice (30-min MCAO). A lactate increase, a hallmark of ischemic insult, was detected significantly only after 30-min MCAO, whereas at 3 h post ischemia glutamine was increased in all ischemic mice independently of duration and outcome. In contrast, glutamate, and even more so N-acetyl-aspartate (NAA), decreased only in those mice exhibiting visible lesions on T2-weighted images at 24 h. Conclusions: These results suggest that an increased glutamine/glutamate ratio is a sensitive marker indicating the presence of an excitotoxic insult. Glutamate and NAA, on the other hand, appear to predict permanent neuronal damage. In conclusion, as early as 3 h post ischemia, it is possible to identify early metabolic markers manifesting the presence of a mild ischemic insult as well as the lesion outcome at 24 h.
Abstract:
An iPad application serving as a repository of content related to the teaching of computer science courses.
Abstract:
To make a comprehensive evaluation of organ-specific out-of-field doses using Monte Carlo (MC) simulations for different breast cancer irradiation techniques, and to compare the results with a commercial treatment planning system (TPS). Three breast radiotherapy techniques using 6 MV tangential photon beams were compared: (a) 2DRT (open rectangular fields), (b) 3DCRT (conformal wedged fields), and (c) hybrid IMRT (open conformal + modulated fields). Over 35 organs were contoured in a whole-body CT scan, and organ-specific dose distributions were determined with MC and the TPS. Large differences in out-of-field doses were observed between MC and TPS calculations, even for organs close to the target volume such as the heart, the lungs and the contralateral breast (up to 70% difference). MC simulations showed that a large fraction of the out-of-field dose comes from the out-of-field head scatter fluence (>40%), which is not adequately modeled by the TPS. Based on MC simulations, the 3DCRT technique using external wedges yielded significantly higher doses (up to a factor of 4-5 in the pelvis) than the 2DRT and hybrid IMRT techniques, which yielded similar out-of-field doses. In sharp contrast to popular belief, the IMRT technique investigated here does not increase the out-of-field dose compared with conventional techniques and may offer the optimal plan. The 3DCRT technique with external wedges yields the largest out-of-field doses. For accurate out-of-field dose assessment, a commercial TPS should not be used, even for organs near the target volume (contralateral breast, lungs, heart).
Abstract:
In this paper, a novel methodology is introduced that aims to minimize the probability of network failure and the failure impact (in terms of QoS degradation) while optimizing resource consumption. A detailed study of MPLS recovery techniques and their GMPLS extensions is also presented. In this scenario, some features for reducing the failure impact while offering minimum failure probabilities are also analyzed. Novel two-step routing algorithms using this methodology are proposed. Results show that these methods offer high protection levels with optimal resource consumption.
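The abstract does not detail the two-step algorithms, so the following Python sketch only illustrates the general idea under stated assumptions of our own: step one routes the working path for reliability (failure probabilities become additive via -log(1 - p) link weights), step two routes a link-disjoint backup path for resource consumption.

import heapq, math

def dijkstra(adj, src, dst, weight, banned=frozenset()):
    """Shortest path; adj maps node -> [(neighbor, edge_dict), ...]."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, e in adj.get(u, []):
            if (u, v) in banned or (v, u) in banned:
                continue
            nd = d + weight(e)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

# Toy topology: each link carries a failure probability and a bandwidth cost.
adj = {
    "A": [("B", {"p": 0.01, "bw": 1.0}), ("C", {"p": 0.05, "bw": 0.5})],
    "B": [("A", {"p": 0.01, "bw": 1.0}), ("D", {"p": 0.02, "bw": 1.0})],
    "C": [("A", {"p": 0.05, "bw": 0.5}), ("D", {"p": 0.03, "bw": 0.5})],
    "D": [("B", {"p": 0.02, "bw": 1.0}), ("C", {"p": 0.03, "bw": 0.5})],
}

# Step 1: working path maximizing survival probability (additive -log weights).
working = dijkstra(adj, "A", "D", lambda e: -math.log(1.0 - e["p"]))
# Step 2: link-disjoint backup path minimizing resource consumption.
used = set(zip(working, working[1:]))
backup = dijkstra(adj, "A", "D", lambda e: e["bw"], banned=used)
print(working, backup)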
Abstract:
IP-based networks still do not offer the degree of reliability required by new multimedia services, and achieving such reliability will be crucial to the success or failure of the next Internet generation. Most existing schemes for QoS routing do not take into consideration parameters concerning the quality of protection, such as packet loss or restoration time. In this paper, we define a new paradigm for developing protection strategies for building reliable MPLS networks, based on what we have called the network protection degree (NPD). The NPD consists of an a priori evaluation, the failure sensibility degree (FSD), which provides the failure probability, and an a posteriori evaluation, the failure impact degree (FID), which determines the impact on the network in case of failure. Having mathematically formulated these components, we point out the most relevant ones. Experimental results demonstrate the benefits of the NPD when used to enhance some current QoS routing algorithms to offer a certain degree of protection.
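The abstract does not reproduce the mathematical formulation, so the following is only a plausible sketch of how the a priori and a posteriori evaluations might be combined, with the weight \alpha being our own assumption rather than the paper's:

\mathrm{NPD} = \alpha \cdot \mathrm{FSD} + (1 - \alpha) \cdot \mathrm{FID}, \qquad 0 \le \alpha \le 1,

where FSD would aggregate link failure probabilities along a path and FID would aggregate impact terms such as expected packet loss and restoration time.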
Abstract:
In image segmentation, clustering algorithms are very popular because they are intuitive and some of them are easy to implement. For instance, k-means is one of the most used in the literature, and many authors successfully compare their new proposals with the results achieved by k-means. However, it is well known that clustering-based image segmentation has several problems. For instance, the number of regions of the image has to be known a priori, and different initial seed placements (initial clusters) can produce different segmentation results. Most of these algorithms can be slightly improved by considering the coordinates of the image as features in the clustering process (to take spatial region information into account). In this paper, we propose a significant improvement of clustering algorithms for image segmentation. The method is qualitatively and quantitatively evaluated over a set of synthetic and real images and compared with classical clustering approaches. Results demonstrate the validity of this new approach.
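As a hedged illustration of the coordinate-as-feature baseline mentioned above (not the paper's proposed improvement, which the abstract does not specify), in Python/scikit-learn:

import numpy as np
from sklearn.cluster import KMeans

def segment(image, k=4, spatial_weight=0.5):
    # k-means segmentation with normalized (y, x) pixel coordinates appended
    # to the color features; spatial_weight is a hypothetical knob trading
    # color compactness against spatial compactness.
    h, w, c = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    coords = np.stack([yy, xx], axis=-1).reshape(-1, 2) / float(max(h, w))
    colors = image.reshape(-1, c) / 255.0
    feats = np.hstack([colors, spatial_weight * coords])
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    return labels.reshape(h, w)

img = np.random.default_rng(0).integers(0, 256, (32, 32, 3)).astype(float)
print(segment(img).shape)   # (32, 32) label map

Note that the two known problems remain: k must still be chosen a priori, and the result depends on the initial seeds (fixed here via random_state).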
Abstract:
This letter presents a comparison between three Fourier-based motion compensation (MoCo) algorithms for airborne synthetic aperture radar (SAR) systems. These algorithms circumvent the limitations of conventional MoCo, namely the assumption of a reference height and the beam-center approximation. All these approaches rely on the inherent time–frequency relation in SAR systems but exploit it differently, with the consequent differences in accuracy and computational burden. After a brief overview of the three approaches, the performance of each algorithm is analyzed with respect to azimuthal topography accommodation, angle accommodation, and the maximum frequency of track deviations with which the algorithm can cope. Also, an analysis of the computational complexity is presented. Quantitative results are shown using real data acquired by the Experimental SAR system of the German Aerospace Center (DLR).
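For context, a standard SAR relation rather than a detail taken from the letter: under a linear track, the instantaneous azimuth (Doppler) frequency of a target seen at squint angle \theta is

f_a = \frac{2v}{\lambda} \sin\theta,

with v the platform velocity and \lambda the wavelength. It is this time–frequency mapping that lets phase corrections for track deviations be applied as a function of azimuth frequency instead of only at the beam center.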
Abstract:
In this project, research both on finding predictors via clustering techniques and on reviewing free Data Mining software is carried out. The research is based on a case study from which, in addition to the free KDD software used by the scientific community, a new free tool for pre-processing the data is presented. The predictors are intended for the e-learning domain, as the data from which these predictors have to be inferred are student grades from different e-learning environments. Through our case study, not only are clustering algorithms tested but additional goals are also proposed.
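As a hedged sketch of how clustering can yield such predictors (synthetic stand-in data; the number of clusters and the features are our own choices, not the project's):

import numpy as np
from sklearn.cluster import KMeans

# Rows: students; columns: grades on early course activities (0-10 scale)
rng = np.random.default_rng(1)
early = rng.uniform(0.0, 10.0, size=(120, 6))
final = early.mean(axis=1) + rng.normal(0.0, 1.0, size=120)  # stand-in outcome

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(early)
# The per-cluster mean of the final grade acts as a simple predictor.
cluster_pred = np.array([final[km.labels_ == c].mean() for c in range(4)])

new_student = rng.uniform(0.0, 10.0, size=(1, 6))
print(cluster_pred[km.predict(new_student)[0]])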