942 results for Data Driven Modeling


Relevance: 30.00%

Abstract:

We present acceptance-probability-controlled simulated annealing with an adaptive move generation procedure, an optimization technique derived from the simulated annealing algorithm. The adaptive move generation procedure was compared against a random move generation procedure on seven multiminima test functions, as well as on synthetic data resembling the optical constants of a metal. In all cases the algorithm converged faster and escaped local minima more reliably. The algorithm was then applied to fit the model dielectric function to data for platinum and aluminum.
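
The abstract does not spell out the adaptive rule, but a common way to realize acceptance-probability-controlled annealing is to tune the move step size toward a target acceptance rate while the temperature cools. A minimal Python sketch under that assumption (the Rastrigin objective is a standard multiminima test function, not necessarily one of the paper's seven):

    import numpy as np

    def sa_adaptive(f, x0, n_iter=20_000, target_acc=0.4, step0=1.0, seed=0):
        # Simulated annealing whose move generator adapts its step size
        # toward a target acceptance rate (a common scheme; the paper's
        # exact control rule may differ).
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, float)
        fx = f(x)
        best, fbest = x.copy(), fx
        step, T, accepted = step0, abs(fx) + 1.0, 0
        for k in range(1, n_iter + 1):
            y = x + rng.normal(scale=step, size=x.shape)   # candidate move
            fy = f(y)
            if fy < fx or rng.random() < np.exp(-(fy - fx) / T):
                x, fx, accepted = y, fy, accepted + 1
                if fx < fbest:
                    best, fbest = x.copy(), fx
            if k % 100 == 0:                 # adapt step every 100 moves
                step *= 1.1 if accepted / 100 > target_acc else 0.9
                accepted = 0
            T *= 0.9995                      # geometric cooling
        return best, fbest

    rastrigin = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
    print(sa_adaptive(rastrigin, np.full(4, 3.0)))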

Relevance: 30.00%

Abstract:

The identification, modeling, and analysis of interactions between nodes of neural systems in the human brain have become the focus of many studies in neuroscience. The complex neural network structure and its correlations with brain functions have played a role in all areas of neuroscience, including the comprehension of cognitive and emotional processing. Indeed, understanding how information is stored, retrieved, processed, and transmitted is one of the ultimate challenges in brain research. In this context, in functional neuroimaging, connectivity analysis is a major tool for the exploration and characterization of the information flow between specialized brain regions. In most functional magnetic resonance imaging (fMRI) studies, connectivity analysis is carried out by first selecting regions of interest (ROI) and then calculating an average BOLD time series (across the voxels in each cluster). Some studies have shown that the average may not be a good choice and have suggested, as an alternative, the use of principal component analysis (PCA) to extract the principal eigen-time series from the ROI(s). In this paper, we introduce a novel approach called cluster Granger analysis (CGA) to study connectivity between ROIs. The main aim of this method is to employ multiple eigen-time series in each ROI to avoid the temporal information loss incurred during identification of Granger causality. Such information loss is inherent in averaging (e.g., to yield a single "representative" time series per ROI) and, in turn, may lead to a lack of power in detecting connections. The proposed approach is based on multivariate statistical analysis and integrates PCA and partial canonical correlation in a framework of Granger causality for clusters (sets) of time series. We also describe an algorithm for statistical significance testing based on bootstrapping. Using Monte Carlo simulations, we show that the proposed approach outperforms conventional Granger causality analysis (i.e., using representative time series extracted by signal averaging or first-principal-component estimation from ROIs). The usefulness of the CGA approach on real fMRI data is illustrated in an experiment using human faces expressing emotions. With this data set, the proposed approach suggested the presence of significantly more connections between the ROIs than were detected using a single representative time series in each ROI. (c) 2010 Elsevier Inc. All rights reserved.
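
The full CGA combines PCA, partial canonical correlation, and bootstrap testing; the Python sketch below only illustrates the two building blocks the abstract makes explicit: extracting several eigen-time series per ROI and comparing nested lagged regressions in the spirit of Granger causality. Function names and the lag/component counts are illustrative assumptions.

    import numpy as np

    def eigen_series(roi, k=3):
        # First k PCA eigen-time series of an ROI (time x voxels matrix).
        X = roi - roi.mean(axis=0)
        U, s, _ = np.linalg.svd(X, full_matrices=False)
        return U[:, :k] * s[:k]

    def rss(y, X):
        # Residual sum of squares of a least-squares regression.
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return float(((y - X @ beta) ** 2).sum())

    def cluster_granger(roi_a, roi_b, p=2, k=3):
        # Does ROI A's past improve prediction of ROI B's eigen-series?
        A, B = eigen_series(roi_a, k), eigen_series(roi_b, k)
        T = B.shape[0]
        own = np.hstack([B[p - l - 1:T - l - 1] for l in range(p)])
        full = np.hstack([own] + [A[p - l - 1:T - l - 1] for l in range(p)])
        y = B[p:]
        return (rss(y, own) - rss(y, full)) / rss(y, own)  # > 0: A helps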

Relevance: 30.00%

Abstract:

Functional magnetic resonance imaging (fMRI) is currently one of the most widely used methods for studying human brain function in vivo. Although many different approaches to fMRI analysis are available, the most widely used methods employ so-called "mass-univariate" modeling of responses in a voxel-by-voxel fashion to construct activation maps. However, it is well known that many brain processes involve networks of interacting regions, and for this reason multivariate analyses might seem to be attractive alternatives to univariate approaches. The current paper focuses on one multivariate application of statistical learning theory: statistical discrimination maps (SDM) based on the support vector machine, and seeks to establish some possible interpretations when the results differ from univariate approaches. In fact, when two conditions differ not only in activation level but also in functional connectivity, SDM seems more informative. We addressed this question using both simulations and applications to real data. We have shown that the combined use of univariate approaches and SDM yields significant new insights into brain activations not available using univariate methods alone. In an application to visual working memory fMRI data, we demonstrated that the interaction among brain regions plays a role in SDM's power to detect discriminative voxels. (C) 2008 Elsevier B.V. All rights reserved.
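
An SDM of this kind is typically read off the weight vector of a linear classifier: each voxel gets one weight, and large-magnitude weights mark discriminative voxels. A minimal sketch with scikit-learn on synthetic data (shapes and parameters are illustrative, not from the paper):

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 5000))       # trials x voxels patterns
    y = rng.integers(0, 2, size=120)       # two experimental conditions

    clf = LinearSVC(C=1.0, max_iter=10_000).fit(X, y)
    sdm = clf.coef_.ravel()                # one weight per voxel
    top = np.argsort(np.abs(sdm))[-10:]    # most discriminative voxels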

Relevance: 30.00%

Abstract:

We propose a simulated-annealing-based genetic algorithm for solving model parameter estimation problems. The algorithm incorporates advantages of both genetic algorithms and simulated annealing. Tests on computer-generated synthetic data that closely resemble the optical constants of a metal were performed to compare the efficiency of plain genetic algorithms against the simulated-annealing-based genetic algorithm. These tests assess the ability of the algorithms to find the global minimum and the accuracy of the values obtained for the model parameters. Finally, the algorithm with the best performance is used to fit the model dielectric function to data for platinum and aluminum. (C) 1997 Optical Society of America.
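
One common way to build such a hybrid is to keep the genetic operators (crossover, mutation) but accept offspring with a Metropolis criterion under a cooling temperature. A sketch under that assumption (the paper's exact scheme may differ):

    import numpy as np

    def ga_sa(f, bounds, pop=40, gens=300, T0=1.0, seed=0):
        # Genetic algorithm with simulated-annealing acceptance of
        # offspring: improvements always survive, worse children
        # survive with a temperature-dependent probability.
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, float).T
        X = rng.uniform(lo, hi, size=(pop, lo.size))
        F = np.apply_along_axis(f, 1, X)
        for g in range(gens):
            T = T0 * (1 - g / gens) + 1e-3               # cooling schedule
            for i in range(pop):
                j, k = rng.choice(pop, 2, replace=False)
                w = rng.random(lo.size)
                child = w * X[j] + (1 - w) * X[k]            # crossover
                child += rng.normal(scale=0.05 * (hi - lo))  # mutation
                child = np.clip(child, lo, hi)
                fc = f(child)
                if fc < F[i] or rng.random() < np.exp(-(fc - F[i]) / T):
                    X[i], F[i] = child, fc
        b = int(np.argmin(F))
        return X[b], F[b]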

Relevance: 30.00%

Abstract:

The absorption kinetics of solutes given with subcutaneous administration of fluids are ill-defined. The gamma emitter technetium pertechnetate enabled the absorption rate to be estimated independently using two approaches. In the first approach, the counts remaining at the site were estimated by imaging above the subcutaneous administration site, whereas in the second approach, the plasma technetium concentration-time profiles were monitored for up to 8 hr after technetium administration. Boluses of technetium pertechnetate were given both intravenously and subcutaneously on separate occasions with a multiple dosing regimen using three doses on each occasion. The disposition of technetium after iv administration was best described by biexponential kinetics with a V-ss of 0.30 +/- 0.11 L/kg and a clearance of 30.0 +/- 13.1 ml/min. The subcutaneous absorption kinetics were best described as a single exponential process with a half-life of 18.16 +/- 3.97 min by image analysis and a half-life of 11.58 +/- 2.48 min using plasma technetium concentration-time data. The bioavailability of technetium by the subcutaneous route was estimated to be 0.96 +/- 0.12. The absorption half-life showed no consistent change with the duration of the subcutaneous infusion. The amount remaining at the absorption site over time was similar when analyzed using image analysis and using plasma concentrations assuming multiexponential disposition kinetics and a first-order absorption process. Profiles of the fraction remaining at the absorption site generated by deconvolution analysis, image analysis, and the assumption of a constant first-order absorption process were similar. Slowing of absorption from the subcutaneous administration site is apparent after the last bolus dose in three of the subjects and can be associated with the stopping of the infusion. In a fourth subject, the retention of technetium at the subcutaneous site is more consistent with accumulation of technetium near the absorption site as a result of systemic recirculation.
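
For a single-exponential (first-order) absorption process, the rate constant follows directly from the reported half-lives via k_a = ln 2 / t_half; a short check in Python using the abstract's numbers:

    import numpy as np

    # First-order absorption: the fraction of dose remaining at the
    # site falls as exp(-ka * t), so ka = ln(2) / t_half.
    ka_image  = np.log(2) / 18.16   # per min, image-analysis half-life
    ka_plasma = np.log(2) / 11.58   # per min, plasma-data half-life

    t = np.linspace(0, 120, 241)    # minutes
    frac_remaining = np.exp(-ka_image * t)
    print(f"ka: image {ka_image:.4f}/min, plasma {ka_plasma:.4f}/min")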

Relevance: 30.00%

Abstract:

Background: There are few studies on HIV subtypes and primary and secondary antiretroviral drug resistance (ADR) in community-recruited samples in Brazil. We analyzed HIV clade diversity and the prevalence of mutations associated with ADR in men who have sex with men in all five regions of Brazil. Methods: Using respondent-driven sampling, we recruited 3515 men who have sex with men in nine cities: 299 (9.5%) were HIV-positive; 143 subjects had adequate genotyping and epidemiologic data. Forty-four (30.8%) subjects were antiretroviral therapy-experienced (AE) and 99 (69.2%) antiretroviral therapy-naive (AN). We sequenced the reverse transcriptase and protease regions of the virus and analyzed them for drug resistance mutations using World Health Organization guidelines. Results: The most common subtypes were B (81.8%), C (7.7%), and recombinant forms (6.9%). The overall prevalence of primary ADR was 21.4% (i.e., among the AN) and of secondary ADR 35.8% (i.e., among the AE). The prevalence of resistance to protease inhibitors was 3.9% (AN) and 4.4% (AE); to nucleoside reverse transcriptase inhibitors, 15.0% (AN) and 31.0% (AE); and to nonnucleoside reverse transcriptase inhibitors, 5.5% (AN) and 13.2% (AE). The most common resistance mutation for nucleoside reverse transcriptase inhibitors was 184V (17 cases) and for nonnucleoside reverse transcriptase inhibitors 103N (16 cases). Conclusions: Our data suggest a high level of both primary and secondary ADR in men who have sex with men in Brazil. Additional studies are needed to identify the correlates and causes of antiretroviral therapy resistance, to limit the development of resistance among those in care and the transmission of resistant strains in the wider epidemic.

Relevance: 30.00%

Abstract:

Historically, the cure rate model has been used for modeling time-to-event data in which a significant proportion of patients are assumed to be cured of illnesses, including breast cancer, non-Hodgkin lymphoma, leukemia, prostate cancer, melanoma, and head and neck cancer. Perhaps the most popular type of cure rate model is the mixture model introduced by Berkson and Gage [1]. In this model, it is assumed that a certain proportion of the patients are cured, in the sense that they do not present the event of interest during a long period of time and can be regarded as immune to the cause of failure under study. In this paper, we propose a general hazard model which accommodates comprehensive families of cure rate models as particular cases, including the model proposed by Berkson and Gage. The maximum likelihood estimation procedure is discussed. A simulation study analyzes the coverage probabilities of the asymptotic confidence intervals for the parameters. A real data set on children exposed to HIV by vertical transmission illustrates the methodology.
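
For reference, the Berkson-Gage mixture model mentioned above writes the population survival function as a mixture of a cured fraction \pi and the survival function of the susceptible patients,

    S(t) = \pi + (1 - \pi)\, S_0(t),

so S(t) plateaus at the cured fraction \pi as t grows; the general hazard model proposed in the paper contains this form as a particular case.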

Relevance: 30.00%

Abstract:

To test a mathematical model for measuring blinking kinematics. Spontaneous and reflex blinks of 23 healthy subjects were recorded with two different temporal resolutions. A magnetic search coil was used to record 77 blinks sampled at 200 Hz and 2 kHz in 13 subjects. A video system with low temporal resolution (30 Hz) was employed to record 60 blinks from 10 other subjects. The experimental data points were fitted with a model that assumes that the upper eyelid movement can be divided into two parts: an impulsive accelerated motion followed by a damped harmonic oscillation. All spontaneous and reflex blinks, including those recorded at low resolution, were well fitted by the model, with a median coefficient of determination of 0.990. No significant difference was observed when the parameters of the blinks were estimated with the under-damped or critically damped solutions of the harmonic oscillator. On the other hand, the over-damped solution was not applicable to any movement. There was good agreement between the model and numerical estimation for the amplitude but not for the maximum velocity. Spontaneous and reflex blinks can thus be mathematically described as consisting of two different phases. The down-phase is mainly an accelerated movement followed by a short interval that represents the initial part of the damped harmonic oscillation; the latter is entirely responsible for the up-phase of the movement. Depending on the instantaneous characteristics of each movement, the under-damped or the critically damped oscillation is better suited to describe the second phase of the blink. (C) 2010 Elsevier B.V. All rights reserved.
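
The up-phase can be fitted as an under-damped oscillation with a nonlinear least-squares routine; a sketch with SciPy on synthetic data (the parameterization below is one plausible choice, since the abstract does not give the exact functional form):

    import numpy as np
    from scipy.optimize import curve_fit

    def up_phase(t, A, zeta, w, phi, c):
        # Under-damped harmonic oscillation (damping ratio zeta < 1).
        wd = w * np.sqrt(np.clip(1.0 - zeta**2, 1e-9, None))
        return c + A * np.exp(-zeta * w * t) * np.cos(wd * t + phi)

    t = np.linspace(0.0, 0.25, 51)          # seconds, roughly 200 Hz
    rng = np.random.default_rng(1)
    y = up_phase(t, 8.0, 0.6, 40.0, 0.2, -6.0) + rng.normal(0, 0.05, t.size)
    popt, _ = curve_fit(up_phase, t, y, p0=[7.0, 0.5, 35.0, 0.0, -5.0])
    r2 = 1 - ((y - up_phase(t, *popt))**2).sum() / ((y - y.mean())**2).sum()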

Relevance: 30.00%

Abstract:

Land-related information about the Earth's surface is commonly found in two forms: (1) map information and (2) satellite image data. Satellite imagery provides a good visual picture of what is on the ground, but complex image processing is required to interpret features in an image scene. Increasingly, methods are being sought to integrate the knowledge embodied in map information into the interpretation task, or, alternatively, to bypass interpretation and perform biophysical modeling directly on derived data sources. A cartographic modeling language, as a generic map analysis package, is suggested as a means to integrate geographical knowledge and imagery in a process-oriented view of the Earth. Specialized cartographic models may be developed by users, which incorporate mapping information in performing land classification. In addition, a cartographic modeling language may be enhanced with operators suited to processing remotely sensed imagery. We demonstrate the usefulness of a cartographic modeling language for pre-processing satellite imagery, and define two new cartographic operators that evaluate image neighborhoods as post-processing operations to interpret thematic map values. The language and operators are demonstrated with an example image classification task.
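
Neighborhood-evaluating operators of this kind are usually focal (moving-window) operations over a thematic raster; a Python sketch of one such operator, majority filtering of a classified image (illustrative only, not the paper's operator definitions):

    import numpy as np

    def focal_majority(classified, size=3):
        # Reassign each cell to the majority class within its
        # size x size neighborhood (edges padded by replication).
        pad = size // 2
        padded = np.pad(classified, pad, mode='edge')
        out = np.empty_like(classified)
        rows, cols = classified.shape
        for i in range(rows):
            for j in range(cols):
                win = padded[i:i + size, j:j + size].ravel()
                vals, counts = np.unique(win, return_counts=True)
                out[i, j] = vals[np.argmax(counts)]
        return out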

Relevance: 30.00%

Abstract:

Within the information systems field, the task of conceptual modeling involves building a representation of selected phenomena in some domain. High-quality conceptual-modeling work is important because it facilitates early detection and correction of system development errors. It also plays an increasingly important role in activities like business process reengineering and documentation of best-practice data and process models in enterprise resource planning systems. Yet little research has been undertaken on many aspects of conceptual modeling. In this paper, we propose a framework to motivate research that addresses the following fundamental question: How can we model the world to better facilitate our developing, implementing, using, and maintaining more valuable information systems? The framework comprises four elements: conceptual-modeling grammars, conceptual-modeling methods, conceptual-modeling scripts, and conceptual-modeling contexts. We provide examples of the types of research that have already been undertaken on each element and illustrate research opportunities that exist.

Relevance: 30.00%

Abstract:

With the advent of object-oriented languages and the portability of Java, the development and use of class libraries has become widespread. Effective class reuse depends on class reliability, which in turn depends on thorough testing. This paper describes a class testing approach based on modeling each test case with a tuple and then generating large numbers of tuples to thoroughly cover an input space with many interesting combinations of values. The testing approach is supported by the Roast framework for the testing of Java classes. Roast provides automated tuple generation based on boundary values, unit operations that support driver standardization, and test case templates used for code generation. Roast produces thorough, compact test drivers with low development and maintenance cost. The framework and tool support are illustrated on a number of non-trivial classes, including a graphical user interface policy manager. Quantitative results are presented to substantiate the practicality and effectiveness of the approach. Copyright (C) 2002 John Wiley & Sons, Ltd.
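
Roast itself is a Java framework, but the core idea of tuple generation from boundary values is easy to illustrate; a Python sketch (the names and the boundary rule are assumptions for illustration, not Roast's API):

    from itertools import product

    def boundary_tuples(domains):
        # domains: list of (low, high) integer ranges, one per parameter.
        # Each parameter contributes its boundary-adjacent values; the
        # cross product yields test-case tuples covering combinations.
        boundaries = [(lo, lo + 1, hi - 1, hi) for lo, hi in domains]
        return list(product(*boundaries))

    cases = boundary_tuples([(0, 10), (-5, 5)])
    print(len(cases), cases[0])   # 16 tuples, e.g. (0, -5)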

Relevance: 30.00%

Abstract:

A thermodynamic approach is developed in this paper to describe the behavior of a subcritical fluid in the neighborhood of a vapor-liquid interface and close to a graphite surface. The fluid is modeled as a system of parallel molecular layers. The Helmholtz free energy of the fluid is expressed as the sum of the intrinsic Helmholtz free energies of the separate layers and the potential energy of their mutual interactions, calculated with the 10-4 potential. This Helmholtz free energy is described by an equation of state (such as the Bender or Peng-Robinson equation), which provides a convenient means of obtaining the intrinsic Helmholtz free energy of each molecular layer as a function of its two-dimensional density. All molecular layers of the bulk fluid are in mechanical equilibrium corresponding to the minimum of the total potential energy. In the case of adsorption, the external potential exerted by the graphite layers is added to the free energy. The state of the interface zone between the liquid and the vapor phases, or the state of the adsorbed phase, is determined by the minimum of the grand potential. In the case of phase equilibrium the approach leads to the distribution of density and pressure over the transition zone. The interrelation between the collision diameter and the potential well depth was determined by the surface tension. It was shown that the distance between neighboring molecular layers changes substantially in the vapor-liquid transition zone and in the adsorbed phase with loading. The approach is considered in this paper for the case of adsorption of argon and nitrogen on carbon black. In both cases excellent agreement with the experimental data was achieved without additional assumptions and fitting parameters, except for the fluid-solid potential well depth. The approach has far-reaching consequences and can be readily extended to the modeling of adsorption in slit pores of carbonaceous materials and to the analysis of multicomponent adsorption systems. (C) 2002 Elsevier Science (USA).
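
For reference, a standard form of the 10-4 potential for the interaction of a molecule with a single lattice plane at distance z is

    \varphi(z) = 2\pi \rho_s \epsilon_{sf} \sigma_{sf}^{2}
        \left[ \frac{2}{5} \left( \frac{\sigma_{sf}}{z} \right)^{10}
        - \left( \frac{\sigma_{sf}}{z} \right)^{4} \right],

with \rho_s the areal density of the plane and \epsilon_{sf}, \sigma_{sf} the cross Lennard-Jones parameters; conventions vary, so the paper's exact prefactors may differ.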

Relevance: 30.00%

Abstract:

Measurement of the exchange of substances between blood and tissue has been a long-lasting challenge to physiologists, and considerable theoretical and experimental accomplishments were achieved before the development of positron emission tomography (PET). Today, when modeling data from modern PET scanners, little use is made of earlier microvascular research in the compartmental models, which have become the standard models by which the vast majority of dynamic PET data are analysed. However, modern PET scanners provide data with a sufficient temporal resolution and good counting statistics to allow estimation of parameters in models with more physiological realism. We explore the standard compartmental model and find that incorporation of blood flow leads to paradoxes, such as kinetic rate constants being time-dependent, and tracers being cleared from a capillary faster than they can be supplied by blood flow. The inability of the standard model to incorporate blood flow consequently raises a need for models that include more physiology, and we develop microvascular models which remove the inconsistencies. The microvascular models can be regarded as a revision of the input function. Whereas the standard model uses the organ inlet concentration as the concentration throughout the vascular compartment, we consider models that make use of spatial averaging of the concentrations in the capillary volume, which is what the PET scanner actually registers. The microvascular models are developed for both single- and multi-capillary systems and include effects of non-exchanging vessels. They are suitable for analysing dynamic PET data from any capillary bed using either intravascular or diffusible tracers, in terms of physiological parameters which include regional blood flow. (C) 2003 Elsevier Ltd. All rights reserved.
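
For orientation, the standard one-tissue compartment model uses the arterial (organ inlet) concentration C_a(t) directly as the input function:

    \frac{dC_t(t)}{dt} = K_1\, C_a(t) - k_2\, C_t(t),

with rate constants K_1 (delivery) and k_2 (clearance). The microvascular models described above can be read as replacing C_a(t) by a spatially averaged capillary concentration, which is what the scanner actually registers.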

Relevance: 30.00%

Abstract:

Comprehensive measurements are presented of the piezometric head in an unconfined aquifer during steady, simple harmonic oscillations driven by a hydrostatic clear water reservoir through a vertical interface. The results are analyzed and used to test existing hydrostatic and nonhydrostatic, small-amplitude theories along with capillary fringe effects. As expected, the amplitude of the water table wave decays exponentially. However, the decay rates and phase lags indicate the influence of both vertical flow and capillary effects. The capillary effects are reconciled with observations of water table oscillations in a sand column with the same sand. The effects of vertical flows and the corresponding nonhydrostatic pressure are reasonably well described by small-amplitude theory for water table waves in finite depth aquifers. That includes the oscillation amplitudes being greater at the bottom than at the top and the phase lead of the bottom compared with the top. The main problems with respect to interpreting the measurements through existing theory relate to the complicated boundary condition at the interface between the driving head reservoir and the aquifer. That is, the small-amplitude, finite depth expansion solution, which matches a hydrostatic boundary condition between the bottom and the mean driving head level, is unrealistic with respect to the pressure variation above this level. Hence it cannot describe the finer details of the multiple mode behavior close to the driving head boundary. The mean water table height initially increases with distance from the forcing boundary but then decreases again, and its asymptotic value is considerably smaller than that previously predicted for finite depth aquifers without capillary effects. Just as the mean water table over-height is smaller than predicted by capillarity-free shallow aquifer models, so is the amplitude of the second harmonic. In fact, there is no indication of extra second harmonics (in addition to that contained in the driving head) being generated at the interface or in the interior.
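
As context for the exponential decay and phase lag noted above, classic hydrostatic small-amplitude theory (the linearized Boussinesq equation) predicts a water table wave of the form

    \eta(x,t) = A\, e^{-kx} \cos(\omega t - kx), \qquad
    k = \sqrt{\frac{n_e\, \omega}{2 K D}},

where n_e is the effective porosity, K the hydraulic conductivity, and D the mean saturated depth. In this theory the exponential decay rate and the phase-lag rate are equal, so the vertical-flow and capillary effects reported here show up as departures from that simple picture.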

Relevance: 30.00%

Abstract:

The exponential increase in the number of home-bound persons who live alone and need continuous monitoring calls for new solutions to current problems. Most of these persons have conditions such as motor or psychological disabilities that deprive them of a normal life. Common events such as forgetfulness or falls are quite frequent and have to be prevented or dealt with. This paper introduces a platform to guide and assist these persons (mostly elderly people) by providing multisensory monitoring and intelligent assistance. The platform operates at three levels. The lower level, called "Data acquisition and processing", performs the usual tasks of a monitoring system, collecting and processing data from the sensors for the purpose of detecting and tracking humans. The aim is to identify their activities in an intermediate level called "Activity detection". The upper level, "Scheduling and decision-making", consists of a scheduler which provides warnings, schedules events in an intelligent manner, and serves as an interface to the rest of the platform. The idea is to use mobile and static sensors performing constant monitoring of the user and his/her environment, providing a safe environment and an immediate response to severe problems. A case study on elderly fall detection in a nursing home bedroom demonstrates the usefulness of the proposal.
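
A minimal Python sketch of the three-level flow just described; all class names, the sensor fields, and the fall-detection rule are hypothetical, not taken from the paper's platform:

    # Lower level: collect and process raw sensor data.
    class DataAcquisition:
        def read(self):
            return {"accel_g": 2.7, "on_floor": True}   # fabricated sample

    # Intermediate level: map processed data to an activity label.
    class ActivityDetection:
        def classify(self, s):
            # Crude illustrative rule: an acceleration spike followed
            # by the person being on the floor is labeled a fall.
            return "fall" if s["accel_g"] > 2.0 and s["on_floor"] else "normal"

    # Upper level: scheduling and decision-making.
    class Scheduler:
        def decide(self, activity):
            return "alert caregiver" if activity == "fall" else "no action"

    sample = DataAcquisition().read()
    print(Scheduler().decide(ActivityDetection().classify(sample)))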