973 results for realistic neural modeling
Abstract:
In vivo 13C NMR spectroscopy has the unique capability to measure metabolic fluxes noninvasively in the brain. Quantitative measurements of metabolic fluxes require analysis of the 13C labeling time courses obtained experimentally with a metabolic model. The present work reviews the ingredients necessary for a dynamic metabolic modeling study, with particular emphasis on practical issues.
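The kind of labeling time course such a dynamic model must reproduce can be illustrated with a minimal sketch: a single metabolite pool fed by a fully labeled precursor. The flux and pool-size values below are chosen purely for illustration and are not taken from the review.

```python
import numpy as np

def label_timecourse(flux=0.7, pool=10.0, t_end=60.0, dt=0.1):
    """Fractional 13C enrichment e(t) of one metabolite pool fed by a fully
    labeled precursor: de/dt = (flux / pool) * (1 - e). Illustrative values."""
    n = int(t_end / dt)
    t = np.linspace(0.0, t_end, n + 1)
    e = np.zeros(n + 1)
    for i in range(n):
        # forward-Euler step of the one-pool labeling equation
        e[i + 1] = e[i] + dt * (flux / pool) * (1.0 - e[i])
    return t, e

t, e = label_timecourse()
```

Fitting the flux to measured enrichment curves, pool by pool, is the core of the quantitative analysis the review describes; real models couple several such pools.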
Abstract:
Compartmental and physiologically based toxicokinetic modeling coupled with Monte Carlo simulation were used to quantify the impact of biological variability (physiological, biochemical, and anatomic parameters) on the values of a series of bio-indicators of metal and organic industrial chemical exposures. A variability extent index and the main parameters affecting biological indicators were identified. Results show a large diversity in interindividual variability for the different categories of biological indicators examined. Measurement of the unchanged substance in blood, alveolar air, or urine is much less variable than the measurement of metabolites, both in blood and urine. In most cases, the alveolar flow and cardiac output were identified as the prime parameters determining biological variability, thus suggesting the importance of workload intensity on absorbed dose for inhaled chemicals.
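The Monte Carlo approach can be sketched with a deliberately simplified one-compartment inhalation model; the parameter distributions and the steady-state expression below are illustrative assumptions, not the toxicokinetic models used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Illustrative lognormal interindividual distributions (not the study's values):
q_alv = rng.lognormal(mean=np.log(7.0), sigma=0.2, size=n)      # alveolar flow, L/min
clearance = rng.lognormal(mean=np.log(1.5), sigma=0.3, size=n)  # metabolic clearance, L/min
c_air = 1.0                                                     # inhaled concentration, arbitrary units

# Toy steady-state blood level of the unchanged substance:
c_blood = q_alv * c_air / (q_alv + clearance)
# Coefficient of variation as a simple variability extent index:
cv = float(c_blood.std() / c_blood.mean())
```

Repeating the same propagation for a metabolite-based indicator, which depends more strongly on clearance, would show the larger variability the study reports for metabolites.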
Abstract:
Due to advances in sensor networks and remote sensing technologies, the acquisition and storage rates of meteorological and climatological data increase every day and call for novel and efficient processing algorithms. A fundamental problem of data analysis and modeling is the spatial prediction of meteorological variables in complex orography, which serves, among other purposes, extended climatological analyses, the assimilation of data into numerical weather prediction models, the preparation of inputs to hydrological models, and real-time monitoring and short-term forecasting of weather.

In this thesis, a new framework for spatial estimation is proposed by taking advantage of a class of algorithms emerging from statistical learning theory. Nonparametric kernel-based methods for nonlinear data classification, regression, and target detection, known as support vector machines (SVM), are adapted for the mapping of meteorological variables in complex orography.

With the advent of high-resolution digital elevation models, the field of spatial prediction has met new horizons. By exploiting image processing tools along with physical heuristics, a large number of terrain features that account for the topographic conditions at multiple spatial scales can be extracted. Such features are highly relevant for the mapping of meteorological variables because they control a considerable part of the spatial variability of meteorological fields in the complex Alpine orography. For instance, patterns of orographic rainfall, wind speed, and cold-air pools are known to be correlated with particular terrain forms, e.g. convex/concave surfaces and the upwind sides of mountain slopes.

Kernel-based methods are employed to learn the nonlinear statistical dependence which links the multidimensional space of geographical and topographic explanatory variables to the variable of interest, that is, the wind speed measured at weather stations or the occurrence of orographic rainfall patterns extracted from sequences of radar images. Compared to low-dimensional models integrating only the geographical coordinates, the proposed framework opens a way to regionalize meteorological variables which are multidimensional in nature and rarely show spatial autocorrelation in the original space, making the use of classical geostatistics cumbersome.

The challenges explored in this thesis are manifold. First, the complexity of the models is optimized to impose appropriate smoothness properties and reduce the impact of noisy measurements. Second, a multiple kernel extension of SVM is considered to select the multiscale features which explain most of the spatial variability of wind speed. Then, SVM target detection methods are implemented to describe the orographic conditions which cause persistent and stationary rainfall patterns. Finally, the optimal splitting of the data is studied to estimate realistic performances and confidence intervals characterizing the uncertainty of predictions.

The resulting maps of average wind speeds find applications in renewable resource assessment and open a route to decreasing the temporal scale of analysis to meet hydrological requirements. Furthermore, the maps depicting the susceptibility to orographic rainfall enhancement can be used to improve current radar-based quantitative precipitation estimation and forecasting systems and to generate stochastic ensembles of precipitation fields conditioned upon the orography.
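A minimal stand-in for the kernel-based estimators described above can be written in a few lines: RBF kernel ridge regression (rather than a full SVR) on synthetic "terrain features" invented for the sketch.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Gaussian (RBF) kernel between two sets of feature vectors
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
# Toy multiscale "terrain features": [elevation, slope, curvature], scaled to [0, 1]
X = rng.random((80, 3))
# Synthetic wind-speed target with mild noise (illustrative, not station data)
y = 3.0 + 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.05 * rng.standard_normal(80)

K = rbf_kernel(X, X)
# Ridge term controls model complexity / smoothness, as discussed in the thesis
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(X)), y)
pred = K @ alpha
rmse = float(np.sqrt(((pred - y) ** 2).mean()))
```

The ridge coefficient plays the role of the complexity parameter whose tuning, together with data splitting for honest error estimates, is one of the challenges the thesis addresses.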
Abstract:
Excitation-continuous music instrument control patterns are often not explicitly represented in current sound synthesis techniques when applied to automatic performance. Both physical model-based and sample-based synthesis paradigms would benefit from a flexible and accurate instrument control model, enabling the improvement of naturalness and realism. We present a framework for modeling bowing control parameters in violin performance. Nearly non-intrusive sensing techniques allow for accurate acquisition of relevant timbre-related bowing control parameter signals. We model the temporal contour of bow velocity, bow pressing force, and bow-bridge distance as sequences of short Bézier cubic curve segments. Considering different articulations, dynamics, and performance contexts, a number of note classes are defined. Contours of bowing parameters in a performance database are analyzed at note level by following a predefined grammar that dictates characteristics of curve segment sequences for each of the classes in consideration. As a result, contour analysis of bowing parameters of each note yields an optimal representation vector that is sufficient for reconstructing original contours with significant fidelity. From the resulting representation vectors, we construct a statistical model based on Gaussian mixtures suitable for both the analysis and synthesis of bowing parameter contours. By using the estimated models, synthetic contours can be generated through a bow planning algorithm able to reproduce possible constraints caused by the finite length of the bow. Rendered contours are successfully used in two preliminary synthesis frameworks: digital waveguide-based bowed-string physical modeling and sample-based spectral-domain synthesis.
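A single segment of such a contour can be sketched with a cubic Bézier evaluator; the control points below describe a hypothetical bow-velocity shape for one note (rise, plateau, release) and are not taken from the performance database.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=51):
    """Evaluate a cubic Bezier segment at n points of t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Points are (normalized time, bow velocity in cm/s); values illustrative.
seg = cubic_bezier(np.array([0.0, 0.0]), np.array([0.2, 40.0]),
                   np.array([0.8, 40.0]), np.array([1.0, 0.0]))
```

In the framework described above, a note's full contour is a grammar-constrained sequence of such segments, and the control points form the representation vector modeled by the Gaussian mixture.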
Abstract:
BACKGROUND: Metals are known endocrine disruptors and have been linked to cardiometabolic diseases via multiple potential mechanisms, yet few human studies have both the exposure variability and biologically relevant phenotype data available. We sought to examine the distribution of metals exposure and potential associations with cardiometabolic risk factors in the "Modeling the Epidemiologic Transition Study" (METS), a prospective cohort study designed to assess energy balance and change in body weight, diabetes and cardiovascular disease risk in five countries at different stages of social and economic development. METHODS: Young adults (25-45 years) of African descent were enrolled (N = 500 from each site) in: Ghana, South Africa, Seychelles, Jamaica and the U.S.A. We randomly selected 150 blood samples (N = 30 from each site) to determine concentrations of selected metals (arsenic, cadmium, lead, mercury) in a subset of participants at baseline and to examine associations with cardiometabolic risk factors. RESULTS: Median (interquartile range) metal concentrations (μg/L) were: arsenic 8.5 (7.7); cadmium 0.01 (0.8); lead 16.6 (16.1); and mercury 1.5 (5.0). There were significant differences in metals concentrations by: site location, paid employment status, education, marital status, smoking, alcohol use, and fish intake. After adjusting for these covariates plus age and sex, arsenic (OR 4.1, 95% C.I. 1.2, 14.6) and lead (OR 4.0, 95% C.I. 1.6, 9.6) above the median values were significantly associated with elevated fasting glucose. These associations increased when models were further adjusted for percent body fat: arsenic (OR 5.6, 95% C.I. 1.5, 21.2) and lead (OR 5.0, 95% C.I. 2.0, 12.7). Cadmium and mercury were also associated with increased odds of elevated fasting glucose, but the associations were not statistically significant. Arsenic was significantly associated with increased odds of low HDL cholesterol both with (OR 8.0, 95% C.I. 
1.8, 35.0) and without (OR 5.9, 95% C.I. 1.5, 23.1) adjustment for percent body fat. CONCLUSIONS: While not consistent for all cardiometabolic disease markers, these results are suggestive of potentially important associations between metals exposure and cardiometabolic risk. Future studies will examine these associations in the larger cohort over time.
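The reported odds ratios can be illustrated with a minimal unadjusted computation from a 2×2 table, using Woolf's method for the confidence interval; the counts below are invented for the sketch — the study's estimates come from covariate-adjusted logistic models.

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Odds ratio and 95% CI from a 2x2 table.
    a, b: exposed with / without outcome; c, d: unexposed with / without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR), Woolf's method
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts: exposure above the median vs. elevated fasting glucose
or_, lo, hi = odds_ratio_ci(30, 45, 15, 60)
```

An interval that excludes 1, as here, corresponds to the "statistically significant" associations reported for arsenic and lead.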
Abstract:
The work described in this report documents the activities performed for the evaluation, development, and enhancement of the Iowa Department of Transportation (DOT) pavement condition information as part of their pavement management system operation. The study covers all of the Iowa DOT's interstate and primary National Highway System (NHS) and non-NHS routes. A new pavement condition rating system that provides a consistent, unified approach to rating pavements in Iowa is proposed. The proposed 100-point scale is based on five individual indices derived from specific distress data and pavement properties, and an overall pavement condition index, PCI-2, that combines the individual indices using weighting factors. The indices cover cracking, ride, rutting, faulting, and friction. The Cracking Index is formed by combining cracking data (transverse, longitudinal, wheel-path, and alligator cracking indices). The ride, rutting, and faulting indices utilize the International Roughness Index (IRI), rut depth, and fault height, respectively.
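The combination step can be sketched as a weighted sum of the five 0-100 indices; the weights and index values below are illustrative, not the factors adopted by the Iowa DOT.

```python
# Hypothetical weighting factors (must sum to 1) and index values on a 0-100 scale
weights = {"cracking": 0.40, "ride": 0.25, "rutting": 0.15,
           "faulting": 0.10, "friction": 0.10}
indices = {"cracking": 72.0, "ride": 85.0, "rutting": 90.0,
           "faulting": 95.0, "friction": 80.0}

# Overall condition index as a weighted combination of the individual indices
pci2 = sum(weights[k] * indices[k] for k in weights)
```

Because each index and each weight lies in a fixed range, the combined score stays on the same 0-100 scale as its inputs.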
Abstract:
Hydrologic analysis is a critical part of transportation design because it helps ensure that hydraulic structures can accommodate the flow regimes they are likely to see. This analysis is currently conducted using computer simulations of water flow patterns, and continuing developments in elevation survey techniques yield increasingly high-resolution surveys. Current survey techniques now resolve many natural and anthropogenic features that were not previously practical to map and thus require new methods for dealing with depressions and flow discontinuities. A method for depression analysis is proposed that exploits the fact that most anthropogenically constructed embankments are more symmetrical and more steeply sloped than natural depressions. An enforcement method for draining depressions is then applied to those depressions that should be drained. This procedure has been evaluated on a small watershed in central Iowa, Walnut Creek of the South Skunk River, HUC12 # 070801050901, and was found to accurately identify 88 of 92 drained depressions and place enforcements within two pixels, although the method often tries to drain prairie pothole depressions that are bisected by anthropogenic features.
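A toy version of the depression-identification step can be sketched as flagging cells lower than all four neighbours in a hand-made DEM — a far simpler test than the symmetry-and-slope analysis the method actually uses.

```python
import numpy as np

# Tiny hand-made DEM (elevations in metres, illustrative values)
dem = np.array([[5.0, 5.0, 5.0, 5.0],
                [5.0, 3.0, 4.5, 5.0],
                [5.0, 4.8, 4.9, 5.0],
                [5.0, 5.0, 5.0, 5.0]])

# Flag interior cells strictly lower than every 4-connected neighbour
pits = []
for i in range(1, dem.shape[0] - 1):
    for j in range(1, dem.shape[1] - 1):
        nbrs = [dem[i - 1, j], dem[i + 1, j], dem[i, j - 1], dem[i, j + 1]]
        if dem[i, j] < min(nbrs):
            pits.append((i, j))
```

On real high-resolution surveys, each flagged depression would then be tested for embankment-like symmetry and slope before an enforcement (breach) point is placed.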
Abstract:
This paper describes a realistic simulator of the Computed Tomography (CT) scan process for motion analysis. In fact, we are currently developing a new framework to detect small motions from CT scans. To prove the fidelity of this framework, or potentially any other algorithm, we present in this paper a simulator that reproduces the whole CT acquisition process with a priori known parameters. In other words, it is a digital phantom for motion analysis that can be used to compare the results of any related algorithm with a ground-truth, realistic analytical model. Such a simulator can be used by the community to test different algorithms in the biomedical imaging domain. The most important features of this simulator are the care it takes to reproduce the real acquisition process as faithfully as possible and its generality.
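The core idea — forward-projecting a digital phantom whose parameters are known a priori — can be sketched for the trivial angles 0° and 90°, where parallel-beam projection reduces to axis sums; a full simulator handles arbitrary angles and the rest of the acquisition chain.

```python
import numpy as np

# Toy digital phantom: a rectangle of attenuation 1.0 on a zero background
phantom = np.zeros((8, 8))
phantom[2:6, 3:5] = 1.0

# Parallel-beam projections at 0 and 90 degrees are just axis sums
sino_0 = phantom.sum(axis=0)   # vertical rays
sino_90 = phantom.sum(axis=1)  # horizontal rays
```

Because the phantom is known exactly, any motion-analysis (or reconstruction) algorithm run on the simulated projections can be scored against ground truth.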
Abstract:
In work-zone configurations where lane drops are present, the merging of traffic at the taper presents an operational concern. In addition, as flow through the work zone is reduced, the relative traffic safety of the work zone is also reduced. Improving work-zone flow through merge points depends on the behavior of individual drivers. By better understanding driver behavior, traffic control plans, work-zone policies, and countermeasures can be better targeted to reinforce desirable lane-closure merging behavior, leading to both improved safety and work-zone capacity. The researchers collected data for two work-zone scenarios that included lane drops, one on the Interstate and the other on an urban arterial roadway. The researchers then modeled and calibrated these scenarios in VISSIM using real-world speeds, travel times, queue lengths, and merging behaviors (percentage of vehicles merging upstream and near the merge point). Once the models were built and calibrated, the researchers modeled strategies for various countermeasures in the two work zones. The models were then used to test and evaluate how various merging strategies affect safety and operations at the merge areas in these two work zones.
Abstract:
Acoustic waveform inversions are an increasingly popular tool for extracting subsurface information from seismic data. They are computationally much more efficient than elastic inversions. Naturally, an inherent disadvantage is that any elastic effects present in the recorded data are ignored in acoustic inversions. We investigate the extent to which elastic effects influence seismic crosshole data. Our numerical modeling studies reveal that in the presence of high-contrast interfaces, at which P-to-S conversions occur, elastic effects can dominate the seismic sections, even for experiments involving pressure sources and pressure receivers. Comparisons of waveform inversion results using a purely acoustic algorithm on synthetic data that is either acoustic or elastic show that subsurface models comprising small low-to-medium contrast (≤30%) structures can be successfully resolved in the acoustic approximation. However, in the presence of extended high-contrast anomalous bodies, P-to-S conversions may substantially degrade the quality of the tomographic images. In particular, extended low-velocity zones are difficult to image. Likewise, relatively small low-velocity features are unresolved, even when advanced a priori information is included. One option for mitigating elastic effects is data windowing, which suppresses later seismic arrivals, such as shear waves. Our tests of this approach found it to be inappropriate because elastic effects are also present in earlier arriving wavetrains. Furthermore, data windowing removes later arriving P-wave phases that may provide critical constraints on the tomograms. Finally, we investigated the extent to which acoustic inversions of elastic data are useful for time-lapse analyses of high-contrast engineered structures, for which accurate reconstruction of the subsurface structure is not as critical as imaging differential changes between sequential experiments.
Based on a realistic scenario for monitoring a radioactive waste repository, we demonstrated that acoustic inversions of elastic data yield substantial distortions of the tomograms and also unreliable information on trends in the velocity changes.
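The forward engine behind such acoustic inversions can be sketched as a 1D constant-density finite-difference scheme; the grid, velocity, and source below are illustrative, and a crosshole study would of course use at least 2D with realistic sources and receivers.

```python
import numpy as np

nx, nt = 200, 200
dx, dt = 1.0, 2e-4               # grid spacing (m) and time step (s), illustrative
v = np.full(nx, 2000.0)          # homogeneous P-velocity (m/s) for the sketch
c2 = (v * dt / dx) ** 2          # squared Courant number (0.16 here, stable)

p_prev = np.zeros(nx)
p = np.zeros(nx)
p[nx // 2] = 1.0                 # impulsive pressure source at the centre

for _ in range(nt):
    # second-order finite-difference Laplacian with fixed (p = 0) boundaries
    lap = np.zeros(nx)
    lap[1:-1] = p[:-2] - 2 * p[1:-1] + p[2:]
    p_next = 2 * p - p_prev + c2 * lap
    p_prev, p = p, p_next
```

An elastic simulation would additionally propagate shear motion, and it is precisely those P-to-S conversions that the purely acoustic scheme above cannot represent.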
Abstract:
The objective of this work was to develop backpropagation neural network models to estimate solar radiation from extraterrestrial radiation data, daily temperature range, precipitation, cloudiness, and relative sunshine duration. Data from Córdoba, Argentina, were used for development and validation. The behaviour of, and agreement between, observed values and the estimates obtained by the neural networks were assessed for different combinations of inputs. The estimates showed root mean square errors between 3.15 and 3.88 MJ m-2 d-1, the latter corresponding to the model that calculates radiation using only precipitation and daily temperature range. In all models, the results show good agreement with the seasonal course of solar radiation. These results indicate the adequate performance and pertinence of this methodology for estimating complex phenomena such as solar radiation.
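A minimal backpropagation network of the kind described — one hidden layer, trained by gradient descent — can be sketched on a toy two-input regression; the data are synthetic stand-ins for daily temperature range and precipitation, not the Córdoba records.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 2))                              # toy inputs in [0, 1]
y = (0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.2)[:, None]    # toy "radiation" target

W1 = 0.5 * rng.standard_normal((2, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal((8, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                          # forward pass
    pred = h @ W2 + b2
    err = pred - y                                    # backpropagate MSE gradients
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    gh = (err @ W2.T) * (1.0 - h ** 2)                # tanh derivative
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

rmse = float(np.sqrt((err ** 2).mean()))
```

The study's models differ in which meteorological inputs they include, which is exactly what the comparison of RMSE values (3.15 to 3.88 MJ m-2 d-1) evaluates.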
Abstract:
Although sources in general nonlinear mixtures are not separable using only statistical independence, a special and realistic case of nonlinear mixtures, the post-nonlinear (PNL) mixture, is separable by choosing a suited separating system. A natural approach is then based on the estimation of the separating system parameters by minimizing an independence criterion, like estimated source mutual information. This class of methods requires higher (than 2) order statistics, and cannot separate Gaussian sources. However, use of (weak) priors, like source temporal correlation or nonstationarity, leads to other source separation algorithms, which are able to separate Gaussian sources, and a few of them can even work with second-order statistics. Recently, modeling time-correlated sources by Markov models, we proposed very efficient algorithms based on minimization of the conditional mutual information. Currently, using the prior of temporally correlated sources, we investigate the feasibility of inverting PNL mixtures with non-bijective nonlinearities, like quadratic functions. In this paper, we review the main ICA and BSS results for nonlinear mixtures, present PNL models and algorithms, and finish with advanced results using temporally correlated sources.
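The PNL model itself is easy to write down: a linear mixture followed by an invertible component-wise nonlinearity. The sketch below generates such a mixture and applies the known inverse, whereas a real separating system must estimate both stages from the observations alone; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
s = rng.uniform(-1.0, 1.0, size=(2, 2000))   # independent (non-Gaussian) sources
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                   # linear mixing stage
z = A @ s
x = np.tanh(z)                               # component-wise post-nonlinearity

# A PNL separating system first inverts the nonlinearity, then unmixes;
# here we cheat and use the true f and A to illustrate the structure.
z_rec = np.arctanh(x)
s_rec = np.linalg.solve(A, z_rec)
```

In practice both the compensating nonlinearities and the unmixing matrix are fitted by minimizing an independence criterion such as (conditional) mutual information.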
Abstract:
We report the case study of a French-Spanish bilingual dyslexic girl, MP, who exhibited a severe visual attention (VA) span deficit but preserved phonological skills. Behavioural investigation showed a severe reduction of reading speed for both single items (words and pseudo-words) and texts in the two languages. However, performance was more affected in French than in Spanish. MP was administered an intensive VA span intervention programme. Pre-post intervention comparison revealed a positive effect of intervention on her VA span abilities. The intervention further transferred to reading. It primarily resulted in faster identification of the regular and irregular words in French. The effect of intervention was rather modest in Spanish, which showed only a tendency for faster word reading. Text reading improved in the two languages, with a stronger effect in French, but pseudo-word reading did not improve in either French or Spanish. The overall results suggest that VA span intervention may primarily enhance the fast global reading procedure, with stronger effects in French than in Spanish. MP underwent two fMRI sessions to explore her brain activations before and after VA span training. Prior to the intervention, fMRI assessment showed that the striate and extrastriate visual cortices alone were activated but none of the regions typically involved in VA span. Post-training fMRI revealed increased activation of the superior and inferior parietal cortices. Comparison of pre- and post-training activations revealed significant activation increase of the superior parietal lobes (BA 7) bilaterally. Thus, we show that a specific VA span intervention not only modulates reading performance but further results in increased brain activity within the superior parietal lobes known to house VA span abilities.
Furthermore, positive effects of VA span intervention on reading suggest that the ability to process multiple visual elements simultaneously is one cause of successful reading acquisition.
Abstract:
We present a framework for modeling right-hand gestures in bowed-string instrument playing, applied to the violin. Nearly non-intrusive sensing techniques allow for accurate acquisition of relevant timbre-related bowing gesture parameter cues. We model the temporal contour of bow transversal velocity, bow pressing force, and bow-bridge distance as sequences of short segments, in particular Bézier cubic curve segments. Considering different articulations, dynamics, and contexts, a number of note classes is defined. Gesture parameter contours of a performance database are analyzed at note level by following a predefined grammar that dictates characteristics of curve segment sequences for each of the classes in consideration. Based on dynamic programming, gesture parameter contour analysis provides an optimal curve parameter vector for each note. The information present in such a parameter vector is enough for reconstructing the original gesture parameter contours with significant fidelity. From the resulting representation vectors, we construct a statistical model based on Gaussian mixtures, suitable for both analysis and synthesis of bowing gesture parameter contours. We show the potential of the model by synthesizing bowing gesture parameter contours from an annotated input score. Finally, we point out promising applications and developments.
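The synthesis stage — drawing gesture-parameter vectors from the learned Gaussian mixture — can be sketched with a hand-specified two-component mixture; the weights, means, and covariances below are illustrative, not values learned from a performance database.

```python
import numpy as np

rng = np.random.default_rng(3)
weights = np.array([0.7, 0.3])                       # mixture weights (sum to 1)
means = np.array([[0.2, 40.0],                       # hypothetical (attack time s,
                  [0.5, 25.0]])                      #  peak bow velocity cm/s)
covs = np.array([np.diag([0.01, 4.0]),
                 np.diag([0.02, 9.0])])

k = rng.choice(2, p=weights)                         # pick a mixture component
sample = rng.multivariate_normal(means[k], covs[k])  # draw one parameter vector
```

In the framework above, each sampled vector would be decoded back into a sequence of Bézier segments to render a synthetic contour for the note class in question.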