838 results for Automated algorithms


Relevance:

20.00%

Abstract:

The detailed in-vivo characterization of subcortical brain structures is essential not only for understanding the basic organizational principles of the healthy brain but also for studying the involvement of the basal ganglia in brain disorders. The particular tissue properties of the basal ganglia, most importantly their high iron content, strongly affect the contrast of magnetic resonance imaging (MRI) images, hampering the accurate automated assessment of these regions. This technical challenge explains the substantial controversy in the literature about the magnitude, directionality and neurobiological interpretation of basal ganglia structural changes estimated from MRI and computational anatomy techniques. My scientific project addresses the pertinent need for accurate automated delineation of the basal ganglia using two complementary strategies: (i) empirical testing of the utility of novel imaging protocols to provide superior contrast in the basal ganglia and to quantify brain tissue properties; (ii) improvement of the algorithms for the reliable automated detection of the basal ganglia and thalamus. Previous research demonstrated that MRI protocols based on magnetization transfer (MT) saturation maps provide optimal grey-white matter contrast in subcortical structures compared with the widely used T1-weighted (T1w) images (Helms et al., 2009). Under the assumption of a direct impact of brain tissue properties on MR contrast, my first study addressed the question of the mechanisms underlying the regionally specific effects in the basal ganglia. I used established whole-brain voxel-based methods to test for grey matter volume differences between MT and T1w imaging protocols, with an emphasis on subcortical structures, and applied a regression model to explain the observed grey matter differences from the regionally specific impact of brain tissue properties on the MR contrast. The results of my first project prompted further methodological developments to create adequate priors for the basal ganglia and thalamus, allowing optimal automated delineation of these structures in a probabilistic tissue classification framework. I established a standardized workflow for manual labelling of the basal ganglia, thalamus and cerebellar dentate to create new tissue probability maps from quantitative MR maps featuring optimal grey-white matter contrast in subcortical areas. The validation step of the new tissue priors included a comparison of the classification performance with the existing probability maps. In my third project I continued investigating the factors impacting automated brain tissue classification that result in interpretational shortcomings when using T1w MRI data in the framework of computational anatomy. While the intensity in T1w images is predominantly
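As a concrete illustration of the kind of explanatory model described above, the following sketch regresses voxel-wise grey matter volume differences on quantitative tissue property maps. All data, variable names and the specific regressors (an iron-sensitive and a myelin-sensitive map) are hypothetical, not the thesis's actual pipeline.

```python
# Hypothetical sketch: regress voxel-wise grey matter volume differences
# (MT-based minus T1w-based segmentation) on quantitative tissue property
# maps, as a stand-in for the regression model described in the abstract.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_voxels = 10_000

# Assumed inputs: per-voxel tissue property estimates (arbitrary units).
r2_star = rng.normal(30, 5, n_voxels)    # iron-sensitive relaxation rate
mt_sat = rng.normal(1.0, 0.1, n_voxels)  # myelin-sensitive MT saturation
gm_diff = 0.02 * r2_star - 0.5 * mt_sat + rng.normal(0, 0.1, n_voxels)

X = np.column_stack([r2_star, mt_sat])
model = LinearRegression().fit(X, gm_diff)
print("coefficients:", model.coef_, "R^2:", model.score(X, gm_diff))
```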

Relevance:

20.00%

Abstract:

Living bacteria or yeast cells are frequently used as bioreporters for the detection of specific chemical analytes or conditions of sample toxicity. In particular, bacteria or yeast equipped with synthetic gene circuitry that allows the production of a reliable non-cognate signal (e.g., fluorescent protein or bioluminescence) in response to a defined target make robust and flexible analytical platforms. We report here how bacterial cells expressing a fluorescence reporter ("bactosensors"), which are mostly used for batch sample analysis, can be deployed for automated semi-continuous target analysis in a single concise biochip. Escherichia coli-based bactosensor cells were continuously grown in a 13 or 50 nanoliter-volume reactor on a two-layered polydimethylsiloxane-on-glass microfluidic chip. Physiologically active cells were directed from the nl-reactor to a dedicated sample exposure area, where they were concentrated and, within 40 minutes, reacted to the target chemical by localized emission of the fluorescent reporter signal. We demonstrate the functioning of the bactosensor chip by the automated detection of 50 μg arsenite-As l⁻¹ in water on consecutive days and after one week of constant operation. The best induction of the bactosensors, 6-9-fold in response to 50 μg l⁻¹, was found at an apparent dilution rate of 0.12 h⁻¹ in the 50 nl microreactor. The bactosensor-chip principle could be widely applicable for constructing automated monitoring devices for a variety of targets in different environments.
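The chemostat-style quantities quoted above can be sanity-checked with a little arithmetic. In the sketch below, the reactor volume and dilution rate come from the abstract; the feed rate is derived from them, and the fluorescence values are invented for illustration.

```python
# Back-of-the-envelope check of the chemostat-style numbers in the abstract.
reactor_volume_nl = 50.0          # 50 nl microreactor (from the abstract)
dilution_rate_per_h = 0.12        # apparent dilution rate (from the abstract)

# Dilution rate D = F / V, so the implied feed rate is F = D * V.
feed_rate_nl_per_h = dilution_rate_per_h * reactor_volume_nl
print(f"implied medium feed rate: {feed_rate_nl_per_h:.1f} nl/h")

# Fold induction = induced signal / uninduced baseline (illustrative values).
baseline_fluorescence = 120.0     # hypothetical arbitrary units
induced_fluorescence = 900.0      # hypothetical arbitrary units
print(f"fold induction: {induced_fluorescence / baseline_fluorescence:.1f}x")
```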

Relevance:

20.00%

Abstract:

Recent standardization efforts in e-learning technology have resulted in a number of specifications; however, the automation process that is considered essential in a learning management system (LMS) remains less explored. As learning technology becomes more widespread and more heterogeneous, there is a growing need to specify processes that cross the boundaries of a single LMS or learning resource repository. This article proposes a specification oriented to automation that takes on board the heterogeneity of systems and formats and provides a language for specifying complex and generic interactions. With this goal in mind, a technique based on three steps is suggested. Semantic conformance profiles, a business process management (BPM) diagram, and its translation into the Business Process Execution Language (BPEL) appear suitable for achieving it.
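To make the third step more tangible, here is a hypothetical sketch that turns an abstract list of BPM activities into a minimal WS-BPEL 2.0 process skeleton. The element choices and names are illustrative only, not the mapping defined in the article.

```python
# Hypothetical sketch: emit a minimal WS-BPEL 2.0 process skeleton from an
# abstract list of steps, illustrating the BPM-to-BPEL translation step.
# Element and attribute choices are illustrative, not a complete mapping.
import xml.etree.ElementTree as ET

BPEL_NS = "http://docs.oasis-open.org/wsbpel/2.0/process/executable"

def bpel_skeleton(process_name, steps):
    ET.register_namespace("", BPEL_NS)
    process = ET.Element(f"{{{BPEL_NS}}}process", name=process_name)
    sequence = ET.SubElement(process, f"{{{BPEL_NS}}}sequence")
    for step in steps:
        # Each BPM activity becomes an <invoke> placeholder.
        ET.SubElement(sequence, f"{{{BPEL_NS}}}invoke", name=step)
    return ET.tostring(process, encoding="unicode")

print(bpel_skeleton("EnrolLearner",
                    ["checkProfile", "registerInLMS", "notifyTutor"]))
```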

Relevance:

20.00%

Abstract:

Automation or semi-automation of learning scenario specifications is one of the least explored subjects in the e-learning research area. There is a need for a catalogue of learning scenarios and a technique to facilitate the automated retrieval of stored specifications. This requires constructing an ontology for this purpose, which is justified in this paper. The ontology must mainly support a specification technique for learning scenarios, but it should also be useful in the creation and validation of new scenarios, as well as in the personalization or monitoring of learning scenarios. Thus, after justifying the need for this ontology, a first approach to a possible knowledge domain is presented. An example of a concrete learning scenario illustrates some relevant concepts supported by this ontology, defining the scenario in such a way that it can easily be automated.
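A minimal sketch of what a few such ontology concepts might look like, written with rdflib (an assumed dependency); all class, property and instance names are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of a few learning-scenario ontology concepts using
# rdflib; identifiers are invented, not the paper's ontology.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

LS = Namespace("http://example.org/learning-scenario#")
g = Graph()
g.bind("ls", LS)

# Core classes of the hypothetical knowledge domain.
for cls in (LS.LearningScenario, LS.Activity, LS.Role):
    g.add((cls, RDF.type, OWL.Class))

# A property linking scenarios to their activities.
g.add((LS.hasActivity, RDF.type, OWL.ObjectProperty))
g.add((LS.hasActivity, RDFS.domain, LS.LearningScenario))
g.add((LS.hasActivity, RDFS.range, LS.Activity))

# A concrete scenario instance, in the spirit of the paper's worked example.
g.add((LS.GroupDebate, RDF.type, LS.LearningScenario))
g.add((LS.GroupDebate, RDFS.label, Literal("Group debate scenario")))
print(g.serialize(format="turtle"))
```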

Relevance:

20.00%

Abstract:

Network virtualisation is gaining considerable attention as a solution to the ossification of the Internet. However, the success of network virtualisation will depend in part on how efficiently the virtual networks utilise substrate network resources. In this paper, we propose a machine learning-based approach to virtual network resource management. We propose to model the substrate network as a decentralised system and introduce a learning algorithm in each substrate node and substrate link, providing self-organization capabilities. We propose a multiagent learning algorithm that carries out the substrate network resource management in a coordinated and decentralised way. The task of these agents is to use evaluative feedback to learn an optimal policy so as to dynamically allocate network resources to virtual nodes and links. The agents ensure that while the virtual networks have the resources they need at any given time, only the required resources are reserved for this purpose. Simulations show that our dynamic approach significantly improves the virtual network acceptance ratio and the maximum number of accepted virtual network requests at any time, while ensuring that virtual network quality of service requirements such as packet drop rate and virtual link delay are not affected.
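One plausible reading of the per-node learning described above is tabular Q-learning over discretised reservation levels. The sketch below is an illustration under that assumption, not the paper's algorithm; the states, actions and reward shape are all hypothetical.

```python
# A minimal sketch (one reading of the abstract, not the paper's code) of a
# per-substrate-node agent that learns, via tabular Q-learning, what fraction
# of its capacity to reserve for the virtual nodes it hosts.
import random

ACTIONS = [0.25, 0.5, 0.75, 1.0]   # fraction of capacity to reserve

class NodeAgent:
    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = {}                 # (state, action) -> value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        if random.random() < self.eps:            # explore
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

# Illustrative reward: penalise both over-reservation (wasted substrate
# resources) and under-reservation (QoS degradation, e.g. packet drops).
def reward(reserved, demanded):
    return -abs(reserved - demanded)

agent = NodeAgent()
state = "load:low"
for _ in range(1000):
    a = agent.act(state)
    agent.learn(state, a, reward(a, demanded=0.5), state)
print("learned best action:",
      max(ACTIONS, key=lambda a: agent.q.get((state, a), 0.0)))
```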

Relevance:

20.00%

Abstract:

In the literature on housing market areas, different approaches to defining them can be found, for example, using travel-to-work areas and, more recently, making use of migration data. Here we propose a simple exercise to shed light on which approach performs better. Using regional data from Catalonia, Spain, we have computed housing market areas with both commuting data and migration data. In order to decide which procedure shows superior performance, we have looked at the uniformity of prices within areas. The main finding is that commuting algorithms produce more homogeneous areas in terms of housing prices.
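The uniformity criterion can be made concrete as follows: for each candidate delineation, compute the coefficient of variation of prices within each area and average it; the delineation with the lower mean is the more homogeneous one. The sketch below uses synthetic data and invented area labels, not the Catalan data.

```python
# Compare two hypothetical housing-market-area delineations by mean
# within-area coefficient of variation (CV) of prices.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "price": rng.lognormal(mean=12, sigma=0.4, size=500),
    "area_commuting": rng.integers(0, 10, size=500),   # hypothetical labels
    "area_migration": rng.integers(0, 10, size=500),
})

def mean_within_area_cv(df, area_col):
    cv = df.groupby(area_col)["price"].agg(lambda p: p.std() / p.mean())
    return cv.mean()

for col in ("area_commuting", "area_migration"):
    print(col, "mean within-area CV:",
          round(mean_within_area_cv(df, col), 3))
```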

Relevance:

20.00%

Abstract:

The spectrophotometric determination of Cd(II) using a flow injection system provided with a solid-phase reactor for cadmium preconcentration and on-line reagent preparation is described. It is based on the formation of a dithizone-Cd complex in basic medium. The calibration curve is linear between 6 and 300 µg L⁻¹ Cd(II), with a detection limit of 5.4 µg L⁻¹, an RSD of 3.7% (10 replicates in duplicate) and a sampling frequency of 11.4 h⁻¹. The proposed method was satisfactorily applied to the determination of Cd(II) in surface, well and drinking waters.
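For readers unfamiliar with how such figures are derived, the following sketch fits a linear calibration curve and estimates a detection limit with the common 3·s(blank)/slope convention (the paper may use a different one). The absorbance values are invented, with the blank standard deviation chosen so the sketch happens to reproduce the reported 5.4 µg L⁻¹.

```python
# Linear calibration fit and detection-limit estimate (illustrative data).
import numpy as np

conc = np.array([6, 25, 50, 100, 200, 300], dtype=float)   # µg/L
absorbance = np.array([0.012, 0.050, 0.101, 0.199, 0.402, 0.601])

slope, intercept = np.polyfit(conc, absorbance, 1)
s_blank = 0.0036                # hypothetical std dev of the blank signal
lod = 3 * s_blank / slope       # one common LOD convention
print(f"slope={slope:.5f}, intercept={intercept:.5f}, LOD={lod:.1f} µg/L")
```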

Relevance:

20.00%

Abstract:

A liquid chromatography-tandem mass spectrometry method with atmospheric pressure chemical ionization (LC-APCI/MS/MS) was validated for the determination of etoricoxib in human plasma, with antipyrine as the internal standard and on-line solid-phase extraction. The separation was performed on a Luna C18 column, and the mobile phase consisted of acetonitrile:water (95:5, v/v)/ammonium acetate (pH 4.0; 10 mM), run at a flow rate of 0.6 mL/min. The method was linear in the range of 1-5000 ng/mL (r² > 0.99), with a lower limit of quantitation of 1 ng/mL. Recoveries were within 93.72-96.18%. Moreover, method validation demonstrated acceptable results in the precision, accuracy and stability studies.
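Two of the quoted validation metrics, recovery and precision, reduce to simple arithmetic over replicate measurements, as in this sketch with invented numbers:

```python
# Recovery and precision (RSD %) from hypothetical replicate measurements.
import numpy as np

nominal_ng_ml = 100.0
measured = np.array([94.1, 95.8, 93.9, 96.0, 94.7])   # invented replicates

recovery_pct = measured.mean() / nominal_ng_ml * 100
rsd_pct = measured.std(ddof=1) / measured.mean() * 100
print(f"recovery: {recovery_pct:.2f}%  RSD: {rsd_pct:.2f}%")
```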

Relevance:

20.00%

Abstract:

Identification of the order of an autoregressive moving average (ARMA) model by the usual graphical method is subjective. Hence, there is a need to develop a technique that identifies the order without the graphical investigation of the series autocorrelations. To avoid subjectivity, this thesis focuses on determining the order of the ARMA model using Reversible Jump Markov Chain Monte Carlo (RJMCMC). RJMCMC selects the model from a set of candidates according to goodness of fit, the standard deviation of the errors and the frequency of accepted proposals. Together with a deep analysis of the classical Box-Jenkins modeling methodology, the integration with MCMC algorithms is examined through parameter estimation and model fitting of ARMA models. This helps to verify how well the MCMC algorithms can treat ARMA models, by comparing the results with the graphical method. The MCMC approach produced better results than the classical time series approach.
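The thesis's RJMCMC sampler is too involved to reproduce here; as a compact, runnable point of reference, the sketch below selects the ARMA order by a classical information-criterion grid search using statsmodels (an assumed dependency). This is explicitly a substitute technique, not RJMCMC.

```python
# Classical ARMA order selection by AIC grid search (not RJMCMC).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import ArmaProcess

rng = np.random.default_rng(42)
# Simulate an ARMA(2,1) series: (1 - 0.6L + 0.2L^2) y = (1 + 0.4L) e
y = ArmaProcess([1, -0.6, 0.2], [1, 0.4]).generate_sample(
    500, distrvs=rng.standard_normal)

# Fit every (p, q) on a small grid and keep the order with the lowest AIC.
best = min(
    ((p, q) for p in range(4) for q in range(4)),
    key=lambda pq: ARIMA(y, order=(pq[0], 0, pq[1])).fit().aic,
)
print("order selected by AIC:", best)
```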

Relevance:

20.00%

Abstract:

The application of automated correlation optimized warping (ACOW) to the correction of retention time shift in the chromatographic fingerprints of Radix Puerariae thomsonii (RPT) was investigated. Twenty-seven samples were extracted from 9 batches of RPT products. The fingerprints of the 27 samples were established by the HPLC method. Because there is a retention time shift in the established fingerprints, the quality of these samples cannot be correctly evaluated by using similarity estimation and principal component analysis (PCA). Thus, the ACOW method was used to align these fingerprints. In the ACOW procedure, the warping parameters, which have a significant influence on the alignment result, were optimized by an automated algorithm. After correcting the retention time shift, the quality of these RPT samples was correctly evaluated by similarity estimation and PCA. It is demonstrated that ACOW is a practical method for aligning the chromatographic fingerprints of RPT. The combination of ACOW, similarity estimation, and PCA is shown to be a promising method for evaluating the quality of Traditional Chinese Medicine.
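Full COW/ACOW warps the time axis piecewise, optimizing segment boundaries by dynamic programming and tuning the segment length and slack parameters automatically. The sketch below illustrates only the core idea on synthetic peaks: pick the time shift that maximizes correlation with a reference chromatogram.

```python
# Much-simplified illustration of correlation-driven alignment: a single
# global shift chosen by maximizing correlation with a reference signal.
# Real COW/ACOW instead warps piecewise via dynamic programming.
import numpy as np

def align(reference, sample, shifts=range(-20, 21)):
    """Return the shifted copy of `sample` best correlated with `reference`."""
    best_shift, best_r = 0, -np.inf
    for s in shifts:
        r = np.corrcoef(reference, np.roll(sample, s))[0, 1]
        if r > best_r:
            best_shift, best_r = s, r
    return np.roll(sample, best_shift), best_shift

t = np.linspace(0, 10, 500)
reference = np.exp(-((t - 4.0) ** 2) / 0.05)   # one chromatographic peak
sample = np.exp(-((t - 4.3) ** 2) / 0.05)      # same peak, retention shifted
_, shift = align(reference, sample)
print("estimated shift (points):", shift)
```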

Relevance:

20.00%

Abstract:

This study presents an automatic, computer-aided analytical method called Comparison Structure Analysis (CSA), which can be applied to different dimensions of music. The aim of CSA is first and foremost practical: to produce dynamic and understandable representations of musical properties by evaluating the prevalence of a chosen musical data structure through a musical piece. Such a comparison structure may be a mathematical vector, a set, a matrix or another type of data structure, or even a combination of data structures. CSA depends on an abstract systematic segmentation that allows for a statistical or mathematical survey of the data. To choose a comparison structure is to tune the apparatus to be sensitive to an exclusive set of musical properties. CSA settles somewhere between traditional music analysis and computer-aided music information retrieval (MIR). Theoretically defined musical entities, such as pitch-class sets, set-classes and particular rhythm patterns, are detected in compositions using pattern extraction and pattern comparison algorithms typical of MIR. In principle, the idea of comparison structure analysis can be applied to any time-series type of data and, in the music-analytical context, to polyphonic as well as homophonic music. Tonal trends, set-class similarities, invertible counterpoints, voice-leading similarities, short-term modulations, rhythmic similarities and multiparametric changes in musical texture were studied. Since CSA allows for a highly accurate classification of compositions, its methods may be applicable to symbolic music information retrieval as well. The strength of CSA lies especially in the possibility of comparing observations concerning different musical parameters and of combining CSA with statistical and perhaps other music-analytical methods. The results of CSA depend on the adequacy of the similarity measure. New similarity measures were proposed for tonal stability, rhythmic similarity and set-class similarity. The most advanced results were attained by employing automated function generation (comparable with so-called genetic programming) to search for an optimal model for set-class similarity measurements. However, the results of CSA seem to agree strongly regardless of the type of similarity function employed in the analysis.
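As an illustration of one possible comparison structure, the sketch below slides a window over a pitch sequence and flags windows whose pitch-class content is a transposition of a target set. The data and window length are invented; the thesis's pattern extraction algorithms are considerably more elaborate.

```python
# Minimal CSA-style comparison structure: detect windows whose pitch-class
# content is a transposition of a target pitch-class set.
def is_transposition(pcs, target):
    """True if pitch-class set `pcs` equals `target` transposed by some t."""
    return any(pcs == {(p + t) % 12 for p in target} for t in range(12))

def find_set(pitches, target, window=4):
    hits = []
    for i in range(len(pitches) - window + 1):
        pcs = {p % 12 for p in pitches[i:i + window]}
        if is_transposition(pcs, target):
            hits.append(i)
    return hits

melody = [60, 64, 67, 70, 62, 65, 69, 72]   # MIDI note numbers (invented)
dominant_seventh = {0, 4, 7, 10}            # pitch classes of a dominant 7th
print("windows matching the set:", find_set(melody, dominant_seventh))
```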

Relevance:

20.00%

Abstract:

Forest inventories are used to estimate forest characteristics and the condition of forests for many different applications: operational tree logging for the forest industry, forest health estimation, carbon balance estimation, land-cover and land-use analysis to avoid forest degradation, and so on. Recent inventory methods are strongly based on remote sensing data combined with field sample measurements, which are used to define estimates covering the whole area of interest. Remote sensing data from satellites, aerial photographs or airborne laser scanning are used, depending on the scale of the inventory. To be applicable in operational use, forest inventory methods need to be easily adjusted to the local conditions of the study area at hand. All data handling and parameter tuning should be objective and automated as much as possible, and the methods also need to be robust when applied to different forest types. Since there generally are no extensive direct physical models connecting the remote sensing data from different sources to the forest parameters that are estimated, the mathematical estimation models are of "black-box" type, connecting the independent auxiliary data to the dependent response data with arbitrary linear or nonlinear models. To avoid redundant complexity and over-fitting of a model based on up to hundreds of possibly collinear variables extracted from the auxiliary data, variable selection is needed. To connect the auxiliary data to the inventory parameters that are estimated, field work must be performed. In larger study areas with dense forests, field work is expensive and should therefore be minimized. To obtain cost-efficient inventories, field work could partly be replaced with information from formerly measured sites stored in databases. The work in this thesis is devoted to the development of automated, adaptive computation methods for aerial forest inventory. The mathematical model parameter definition steps are automated, and cost-efficiency is improved by setting up a procedure that utilizes databases in the estimation of new area characteristics.
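A minimal sketch of the variable-selection problem described above, under the assumption that a cross-validated LASSO is an acceptable stand-in for the thesis's actual selection procedure; the synthetic features mimic the collinearity mentioned in the text.

```python
# Variable selection over many collinear remote-sensing features with a
# cross-validated LASSO (a stand-in, not the thesis's method).
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(7)
n_plots, n_features = 200, 80          # field plots x extracted features
X = rng.normal(size=(n_plots, n_features))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n_plots)   # collinear pair
# Hypothetical response: field-measured stand volume driven by 2 features.
stand_volume = 3.0 * X[:, 0] + 1.5 * X[:, 5] + rng.normal(size=n_plots)

model = LassoCV(cv=5).fit(X, stand_volume)
selected = np.flatnonzero(model.coef_)
print("features kept:", selected, "of", n_features)
```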