876 results for Dynamic Data eXchange


Relevance:

30.00%

Publisher:

Abstract:

This paper offers empirical evidence that a country's choice of exchange rate regime can have a significant impact on its medium-term rate of productivity growth. Moreover, the impact depends critically on the country's level of financial development, its degree of market regulation, and its distance from the global technology frontier. We illustrate how each of these channels may operate in a simple stylized growth model in which real exchange rate uncertainty exacerbates the negative investment effects of domestic credit market constraints. The empirical analysis is based on an 83-country data set spanning the years 1960-2000. Our approach delivers results that are in striking contrast to the vast existing empirical exchange rate literature, which largely finds the effects of exchange rate volatility on real activity to be relatively small and insignificant.
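A minimal sketch of the kind of interaction the abstract describes: the effect of exchange rate flexibility on growth is allowed to depend on financial development. The variable names, data-generating process, and OLS setup below are illustrative assumptions, not the authors' actual specification or data.

```python
# Illustrative sketch (not the paper's specification): a growth regression in
# which the effect of regime flexibility depends on financial development.
import numpy as np

rng = np.random.default_rng(0)
n = 83 * 8                           # e.g., 83 countries x 8 periods (hypothetical)
flex = rng.uniform(0, 1, n)          # 1 = fully flexible regime (assumed coding)
findev = rng.uniform(0, 1, n)        # private credit / GDP, normalized (assumed proxy)
gap = rng.uniform(0, 1, n)           # distance to the technology frontier (assumed)

# Hypothetical data-generating process: flexibility hurts growth where
# financial development is low, and the penalty fades as findev rises.
growth = (0.02 - 0.015 * flex * (1 - findev) + 0.01 * gap
          + rng.normal(0, 0.005, n))

# Design matrix with the interaction of interest: flex x findev.
X = np.column_stack([np.ones(n), flex, findev, flex * findev, gap])
beta, *_ = np.linalg.lstsq(X, growth, rcond=None)
print(dict(zip(["const", "flex", "findev", "flex_x_findev", "gap"], beta.round(4))))
```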

Relevance:

30.00%

Publisher:

Abstract:

We incorporate the process of enforcement learning by assuming that the agency's current marginal cost is a decreasing function of its past experience of detecting and convicting. The agency accumulates data and information (on criminals, on opportunities for crime), enhancing its ability to apprehend in the future at a lower marginal cost. We focus on the impact of enforcement learning on optimal stationary compliance rules. In particular, we show that the optimal stationary fine could be less than maximal and the optimal stationary probability of detection could be higher than otherwise.
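One minimal way to formalize the learning assumption described above, in our own notation rather than necessarily the authors':

```latex
% A sketch of the learning assumption (our notation): enforcement cost falls
% with accumulated detection experience.
\[
  c_t = c(E_t), \qquad c'(E_t) < 0, \qquad
  E_{t+1} = E_t + p_t \, n_t ,
\]
% where $E_t$ is accumulated detection experience, $p_t$ the detection
% probability, and $n_t$ the number of offenders. Because raising $p_t$ today
% also lowers future marginal enforcement costs, the stationary optimum can
% feature a higher-than-otherwise $p$ paired with a less-than-maximal fine.
```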

Relevance:

30.00%

Publisher:

Abstract:

One requirement for psychotherapy research is an accurate assessment of therapeutic interventions across studies. This study compared the frequency and depth of therapist interventions, from a dynamic perspective, across four studies conducted in four countries, including three treatment arms of psychodynamic psychotherapy and one each of psychoanalysis and CBT. All studies used the Psychodynamic Intervention Rating Scales (PIRS) to identify 10 interventions from transcribed whole sessions early and later in treatment. The PIRS adequately categorized all interventions, except in CBT (only 91-93% categorized). As hypothesized, interpretations were present in all dynamic therapies and relatively absent in CBT. The proportion of interpretations increased over time. Defense interpretations were more common than transference interpretations, which were most prevalent in psychoanalysis. Depth of interpretations also increased over time. These data can serve as norms for measuring where on the supportive-interpretive continuum a dynamic treatment lies, as well as for identifying potentially mutative interventions for further process and outcome study.

Relevance:

30.00%

Publisher:

Abstract:

The precise sampling of soil, biological, or microclimatic attributes in tropical forests, which are characterized by a high diversity of species and complex spatial variability, is a difficult task, and we found few basic studies to guide sampling procedures. The objective of this study was to define a sampling strategy and data analysis for some parameters frequently used in nutrient cycling studies, i.e., litter amount, total nutrient amounts in litter and its composition (Ca, Mg, K, N, and P), and soil attributes at three depths (organic matter, P content, cation exchange capacity, and base saturation). A natural remnant forest in the west of São Paulo State (Brazil) was selected as the study area, and samples were collected in July 1989. The total amount of litter and its total nutrient amounts showed high spatially independent variance. Conversely, the variance of litter composition was lower, and the spatial dependence was peculiar to each nutrient. The sampling strategy for estimating litter amounts and the amounts of nutrients in litter should therefore differ from the strategy for nutrient composition: for the quantity-related estimates (litter amounts and nutrient amounts in litter), a large number of randomly distributed determinations is needed, whereas for the quality-related estimate of litter nutrient composition, a smaller number of spatially located samples should be analyzed. The appropriate sampling for soil attributes differed with depth. Overall, surface samples (0-5 cm) showed high short-distance spatially dependent variance, whereas subsurface samples exhibited spatial dependence over longer distances. Short transects with a sampling interval of 5-10 m are recommended for surface sampling. Subsurface samples must also be spatially located, but with transects or grids with longer distances between sampling points over the entire area. Composite soil samples would not provide a complete understanding of the relation between soil properties and surface dynamic processes or landscape features. The precise distribution of P was difficult to estimate.
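A minimal sketch of how spatial dependence of the kind discussed above is typically assessed, via an empirical semivariogram along a transect; the spacing and litter values below are synthetic, not the study's data.

```python
# Empirical semivariogram sketch: a flat curve near the nugget suggests
# spatially independent variance (random sampling suffices); a rising curve
# indicates spatial dependence and motivates spatially located samples.
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(0, 200, 5.0)                 # transect positions, 5 m spacing (assumed)
litter = 400 + 60 * np.sin(x / 30.0) + rng.normal(0, 40, x.size)  # g/m^2, synthetic

def semivariogram(coords, values, lags, tol=2.5):
    """gamma(h) = 0.5 * mean[(z_i - z_j)^2] over pairs separated by ~h."""
    d = np.abs(coords[:, None] - coords[None, :])
    dz2 = (values[:, None] - values[None, :]) ** 2
    gammas = []
    for h in lags:
        mask = (d > h - tol) & (d <= h + tol) & (d > 0)
        gammas.append(0.5 * dz2[mask].mean() if mask.any() else np.nan)
    return np.array(gammas)

lags = np.arange(5, 80, 5.0)
print(np.c_[lags, semivariogram(x, litter, lags).round(1)])
```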

Relevance:

30.00%

Publisher:

Abstract:

The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy. These methods have governing equations that are the same over a large range of frequencies, allowing processes to be studied in an analogous manner on scales ranging from a few meters below the surface down to several hundreds of kilometers depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods.

In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models.

In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. However, these constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion.

Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected saline tracer plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy in which the coefficients of a Legendre moment decomposition of the injected plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized. The methodology is also applied to a saline and acid tracer injection experiment performed in a geothermal system in Australia and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the tracer plume thanks to the larger amount of prior information included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are larger than can be explained by our current understanding of the physical processes. This issue may be related to the limited quality of the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful to characterize and monitor the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and in terms of posterior uncertainty quantification. In addition, the developed strategies can be applied to other geophysical methods and offer great flexibility to incorporate additional information when available.
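A minimal sketch of the model-reduction idea described above: the plume anomaly is parameterized by a few Legendre coefficients plus a location, instead of one value per model cell. The one-dimensional setup and all numbers are illustrative assumptions, not the thesis parameterization.

```python
# Instead of inverting for every cell, describe the plume anomaly by a few
# Legendre moments plus a window location; only these are sampled by MCMC.
import numpy as np
from numpy.polynomial import legendre as L

def plume_profile(coeffs, center, width, x):
    """Conductivity anomaly built from low-order Legendre coefficients,
    mapped onto a window of given center/width (zero outside)."""
    xi = 2.0 * (x - center) / width            # map the window to [-1, 1]
    inside = np.abs(xi) <= 1.0
    anomaly = np.zeros_like(x)
    anomaly[inside] = L.legval(xi[inside], coeffs)
    return np.clip(anomaly, 0.0, None)          # no negative conductivity change

x = np.linspace(0.0, 100.0, 201)                # depth or distance axis, meters
# Six parameters (4 coefficients + center + width) replace 201 cell values.
model = plume_profile(coeffs=[0.05, 0.0, -0.02, 0.0], center=40.0, width=30.0, x=x)
print(round(model.max(), 4), "S/m peak anomaly over", model.nonzero()[0].size, "cells")
```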

Relevance:

30.00%

Publisher:

Abstract:

This study investigated the spatial, spectral, temporal and functional properties of functional brain connections involved in the concurrent execution of unrelated visual perception and working memory tasks. Electroencephalography data were analysed using a novel data-driven approach assessing source coherence at the whole-brain level. Three connections in the beta-band (18-24 Hz) and one in the gamma-band (30-40 Hz) were modulated by dual-task performance. Beta-coherence increased within two dorsofrontal-occipital connections in dual-task conditions compared to the single-task condition, with the highest coherence seen during low working memory load trials. In contrast, beta-coherence in a prefrontal-occipital functional connection and gamma-coherence in an inferior frontal-occipitoparietal connection were not affected by the addition of the second task and only showed elevated coherence under high working memory load. Analysis of coherence as a function of time suggested that the dorsofrontal-occipital beta-connections were relevant to working memory maintenance, while the prefrontal-occipital beta-connection and the inferior frontal-occipitoparietal gamma-connection were involved in top-down control of concurrent visual processing. The fact that the increase in gamma-connection coherence from low to high working memory load was negatively correlated with reaction time on the perception task (i.e., associated with faster responses) supports this interpretation. Together, these results demonstrate that dual-task demands trigger non-linear changes in functional interactions between frontal-executive and occipitoparietal-perceptual cortices.
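A minimal sketch of the core quantity behind these results, magnitude-squared coherence averaged within a frequency band; the synthetic signals, sampling rate, and use of scipy.signal.coherence are illustrative assumptions, not the study's source-level pipeline.

```python
# Band-averaged coherence between two signals sharing a common beta-band drive.
import numpy as np
from scipy.signal import coherence

fs = 250.0                                    # Hz (assumed EEG sampling rate)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(2)
shared = np.sin(2 * np.pi * 21 * t)           # common 21 Hz (beta) component
sig_frontal = shared + rng.normal(0, 1.0, t.size)
sig_occipital = 0.8 * shared + rng.normal(0, 1.0, t.size)

f, cxy = coherence(sig_frontal, sig_occipital, fs=fs, nperseg=512)
beta = (f >= 18) & (f <= 24)
gamma = (f >= 30) & (f <= 40)
print("beta coherence :", cxy[beta].mean().round(3))
print("gamma coherence:", cxy[gamma].mean().round(3))
```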

Relevance:

30.00%

Publisher:

Abstract:

The aim of this paper is to analyse whether Spanish municipalities adjust in response to a budget shock and, if so, which budget items bear the adjustment. The methodology used to answer these questions is a vector error-correction mechanism (VECM), estimated with a panel of Spanish municipalities for the period 1988-2006. Our results confirm, first, that municipalities do adjust in response to a fiscal shock (i.e., the deficit is stationary in the long run). Second, we find that when the shock affects revenues, the adjustment is borne mainly by the municipality through expenditure cuts, with transfers playing a very limited role in the adjustment process. By contrast, when the shock affects expenditures, the adjustment is shared in similar proportions between the municipality, which raises taxes, and upper-tier governments, which increase transfers. These results suggest that the sustainability of local public finances is feasible under different institutional settings.
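A minimal sketch of the error-correction logic described above, reduced to a single unit and a two-step OLS estimate; the paper itself estimates a panel VECM, and the variables below are synthetic.

```python
# Two-step error-correction sketch for one municipality with synthetic data.
import numpy as np

rng = np.random.default_rng(3)
T = 40
trend = rng.normal(0, 1, T).cumsum()           # common stochastic trend
rev = 100 + trend + rng.normal(0, 0.5, T)      # revenues
exp_ = 100 + trend + rng.normal(0, 0.5, T)     # expenditures (cointegrated with rev)

# Step 1: the long-run relation and its "error" (here simply the deficit).
ect = exp_ - rev                               # error-correction term

# Step 2: does expenditure adjust when last period's deficit was high?
d_exp = np.diff(exp_)
X = np.column_stack([np.ones(T - 1), ect[:-1]])
beta, *_ = np.linalg.lstsq(X, d_exp, rcond=None)
print("speed of adjustment on expenditure:", round(beta[1], 3))
# A negative coefficient means spending is cut after a deficit shock,
# i.e., the deficit is corrected in the long run (stationary ECT).
```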


Relevance:

30.00%

Publisher:

Abstract:

Nonlocal approximations for the electronic exchange and correlation effects are used to compute, within density-functional theory, the polarizability and surface-plasma frequencies of small jelliumlike alkali-metal clusters. The results are compared with those obtained using the local-density approximation and with available experimental data, showing the relevance of these effects in obtaining an accurate description of the surface response of metallic clusters.
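For context, a back-of-envelope estimate of the classical (Mie) surface-plasma frequency of a jellium sphere, which the density-functional calculations above refine; the sodium Wigner-Seitz radius used is a standard textbook value, not a result of the paper.

```python
# Classical Mie estimate for a jellium sphere: omega_Mie = omega_p / sqrt(3).
import numpy as np

e, m_e, eps0, hbar, a0 = 1.602e-19, 9.109e-31, 8.854e-12, 1.055e-34, 5.292e-11
r_s = 3.93 * a0                                  # Wigner-Seitz radius of Na (approx.)
n = 3.0 / (4.0 * np.pi * r_s**3)                 # valence electron density
omega_p = np.sqrt(n * e**2 / (eps0 * m_e))       # bulk plasma frequency
omega_mie = omega_p / np.sqrt(3.0)               # classical surface plasmon of a sphere
print("hbar*omega_p   ~ %.2f eV" % (hbar * omega_p / e))
print("hbar*omega_Mie ~ %.2f eV" % (hbar * omega_mie / e))
# Measured resonances of small Na clusters lie below this classical value;
# capturing that red shift motivates the exchange-correlation treatment above.
```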

Relevance:

30.00%

Publisher:

Abstract:

Recent findings suggest that the visuo-spatial sketchpad (VSSP) may be divided into two sub-components processing dynamic or static visual information. This model may help to clarify the conflicting data concerning the functioning of the VSSP in schizophrenia. The present study examined patients with schizophrenia and matched controls in a new working memory paradigm involving dynamic (the Ball Flight Task, BFT) or static (the Static Pattern Task, SPT) visual stimuli. In the BFT, the responses of the patients were apparently based on the retention of the last set of segments of the perceived trajectory, whereas control subjects relied on a more global strategy. We assume that the patients' performance reflects a reduced capacity for chunking visual information, since they relied mainly on the retention of the last set of segments. This assumption is confirmed by the poor performance of the patients in the static task (SPT), which requires a combination of stimulus components into object representations. We assume that the static/dynamic distinction may help us to understand the VSSP deficits in schizophrenia. This distinction also raises questions about the hypothesis that visuo-spatial working memory can simply be dissociated into visual and spatial sub-components.

Relevance:

30.00%

Publisher:

Abstract:

In response to the mandate on Load and Resistance Factor Design (LRFD) implementation by the Federal Highway Administration (FHWA) for all new bridge projects initiated after October 1, 2007, the Iowa Highway Research Board (IHRB) sponsored these research projects to develop regional LRFD recommendations. The LRFD development was performed using the Iowa Department of Transportation (DOT) Pile Load Test database (PILOT). To increase the data points for LRFD development, develop LRFD recommendations for dynamic methods, and validate the results of the LRFD calibration, 10 full-scale field tests on the most commonly used steel H-piles (e.g., HP 10 x 42) were conducted throughout Iowa. Detailed in situ soil investigations were carried out, push-in pressure cells were installed, and laboratory soil tests were performed. Pile responses during driving, at the end of driving (EOD), and at re-strikes were monitored using the Pile Driving Analyzer (PDA), followed by CAse Pile Wave Analysis Program (CAPWAP) analyses. Hammer blow counts were recorded for the Wave Equation Analysis Program (WEAP) and dynamic formulas. Static load tests (SLTs) were performed, and the pile capacities were determined based on Davisson's criterion. The extensive experimental research generated important data for analytical and computational investigations. The SLT-measured load-displacement curves were compared with simulated results obtained using a model of the TZPILE program and using the modified borehole shear test method. Two analytical pile setup quantification methods, expressed in terms of soil properties, were developed and validated. A new calibration procedure was developed to incorporate pile setup into LRFD.
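For context, a minimal sketch of a commonly cited first-order second-moment (FOSM) expression for the resistance factor used in LRFD calibration; the load statistics and resistance bias/COV values below are placeholders, not the project's calibrated numbers, and the project's procedure additionally incorporates pile setup.

```python
# FOSM sketch: resistance factor from bias factors, COVs, and target reliability.
import math

def phi_fosm(lam_R, cov_R, beta_T,
             dead_live_ratio=2.0,
             gamma_D=1.25, gamma_L=1.75,
             lam_QD=1.05, cov_QD=0.10,
             lam_QL=1.15, cov_QL=0.20):
    """Resistance factor phi for a given resistance bias (lam_R), COV, and
    target reliability index beta_T, under assumed load statistics."""
    load_cov2 = cov_QD**2 + cov_QL**2
    num = (lam_R * (gamma_D * dead_live_ratio + gamma_L)
           * math.sqrt((1.0 + load_cov2) / (1.0 + cov_R**2)))
    den = ((lam_QD * dead_live_ratio + lam_QL)
           * math.exp(beta_T * math.sqrt(math.log((1.0 + cov_R**2) * (1.0 + load_cov2)))))
    return num / den

# Example with illustrative CAPWAP-type resistance statistics.
print(round(phi_fosm(lam_R=1.2, cov_R=0.35, beta_T=2.33), 2))
```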

Relevance:

30.00%

Publisher:

Abstract:

We sequenced 1077 bp of the mitochondrial cytochrome b gene and 511 bp of the nuclear Apolipoprotein B gene in bicoloured shrew (Crocidura leucodon, Soricidae) populations ranging from France to Georgia. The aims of the study were to identify the main genetic clades within this species and the influence of Pleistocene climatic variations on the respective clades. The mitochondrial analyses revealed a European clade distributed from France eastwards to north-western Turkey and a Near East clade distributed from Georgia to Romania; the two clades separated during the Middle Pleistocene. We clearly identified a population expansion after a bottleneck for the European clade based on mitochondrial and nuclear sequencing data; this expansion was not observed for the eastern clade. We hypothesize that the western population was confined to a small Italo-Balkanic refugium, whereas the eastern population subsisted in several refugia along the southern coast of the Black Sea.
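One standard way such a post-bottleneck expansion signal is examined is the mismatch distribution of pairwise sequence differences; the toy haplotypes below are invented for illustration, and the study may have relied on other statistics.

```python
# Mismatch-distribution sketch: pairwise differences between aligned haplotypes.
from itertools import combinations
from collections import Counter

haplotypes = [
    "ACGTACGTAC",
    "ACGTACGTAT",
    "ACGAACGTAC",
    "ACGTACGTAC",
    "ACGTATGTAC",
]

def pairwise_differences(seqs):
    diffs = [sum(a != b for a, b in zip(s1, s2))
             for s1, s2 in combinations(seqs, 2)]
    return Counter(diffs)

# A smooth, unimodal mismatch distribution is the classical signature of a
# recent demographic expansion; ragged, multimodal distributions suggest
# stable or structured populations.
print(sorted(pairwise_differences(haplotypes).items()))
```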

Relevance:

30.00%

Publisher:

Abstract:

In the crowded environment of human cells, the folding of nascent polypeptides and the refolding of stress-unfolded proteins are error prone. Accumulation of cytotoxic misfolded and aggregated species may cause cell death, tissue loss, degenerative conformational diseases, and aging. Nevertheless, young cells effectively express a network of molecular chaperones and folding enzymes, termed here "the chaperome," which can prevent the formation of potentially harmful misfolded protein conformers and use the energy of adenosine triphosphate (ATP) to rehabilitate already formed toxic aggregates into native functional proteins. In an attempt to extend knowledge of chaperome mechanisms in cellular proteostasis, we performed a meta-analysis of the human chaperome using high-throughput proteomic data from 11 immortalized human cell lines. Chaperome polypeptides accounted for about 10% of the total protein mass of human cells, half of which were Hsp90s and Hsp70s. Knowledge of the cellular concentrations of, and ratios among, chaperome polypeptides provided a novel basis for understanding the mechanisms by which the Hsp60, Hsp70, Hsp90, and small heat shock proteins (HSPs), in collaboration with cochaperones and folding enzymes, assist de novo protein folding, import polypeptides into organelles, unfold stress-destabilized toxic conformers, and control the conformational activity of native proteins in the crowded environment of the cell. The proteomic data also provided a means to distinguish between stable components of chaperone core machineries and dynamic regulatory cochaperones.
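A minimal sketch of the mass-fraction calculation implied above, summing abundance times molecular weight for chaperome members over all quantified proteins; the gene list and numbers are placeholders, not the meta-analysis data.

```python
# Chaperome mass fraction from (copies per cell, molecular weight) pairs.
chaperome = {"HSP90AA1", "HSP90AB1", "HSPA8", "HSPA5", "HSPD1", "HSPB1", "DNAJA1"}

# protein -> (copies per cell, molecular weight in kDa); invented example values
abundance = {
    "HSP90AA1": (2.0e6, 85), "HSPA8": (1.5e6, 71), "HSPD1": (5.0e5, 61),
    "DNAJA1": (2.0e5, 45),   "ACTB":  (5.0e6, 42), "GAPDH": (3.0e6, 36),
    "RPL13": (1.0e6, 24),
}

total_mass = sum(copies * mw for copies, mw in abundance.values())
chap_mass = sum(copies * mw for name, (copies, mw) in abundance.items()
                if name in chaperome)
print(f"chaperome mass fraction ~ {100 * chap_mass / total_mass:.1f} %")
```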

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE OF REVIEW: HIV targets primary CD4(+) T cells. The virus depends on the physiological state of its target cells for efficient replication, and, in turn, viral infection perturbs the cellular state significantly. Identifying the virus-host interactions that drive these dynamic changes is important for a better understanding of viral pathogenesis and persistence. The present review focuses on experimental and computational approaches to study the dynamics of viral replication and latency. RECENT FINDINGS: It was recently shown that only a fraction of the inducible latently infected reservoirs are successfully induced upon stimulation in ex-vivo models, while additional rounds of stimulation allow reactivation of further latently infected cells. This highlights the potential role of treatment duration and timing as important factors for the successful reactivation of latently infected cells. The dynamics of HIV productive infection and latency have been investigated using transcriptome and proteome data. The cellular activation state has been shown to be a major determinant of viral reactivation success. Mathematical models of latency have been used to explore the dynamics of latent viral reservoir decay. SUMMARY: Timing is an important component of biological interactions. Temporal analyses covering aspects of the viral life cycle are essential for gathering a comprehensive picture of the HIV interaction with the host cell and for untangling the complexity of latency. Understanding the dynamic changes that tip the balance between success and failure of HIV particle production may be key to eradicating the viral reservoir.
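A minimal sketch of the kind of reservoir-decay model referred to above: exponential decay of the latent pool under combined death and activation rates; the rates are placeholders chosen for illustration, not fitted values from the literature.

```python
# Simple latent-reservoir decay model: dL/dt = -(delta + activation) * L.
import numpy as np

def reservoir_trajectory(L0=1e6, delta=5e-4, activation=1e-4, days=3650, dt=1.0):
    """Return time points and the size of the latent pool over time."""
    t = np.arange(0, days, dt)
    L = L0 * np.exp(-(delta + activation) * t)
    return t, L

t, L = reservoir_trajectory()
half_life_days = np.log(2) / (5e-4 + 1e-4)
print(f"latent pool after 10 years: {L[-1]:.0f} cells, "
      f"half-life ~ {half_life_days / 365:.1f} years")
```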

Relevance:

30.00%

Publisher:

Abstract:

The Benkelman Beam structural test of flexible pavements was replaced in 1976 by dynamic deflection testing with a model 400 Road Rater. The Road Rater is used to determine structural ratings of flexible pavements. New pavement construction in Iowa has decreased, with a corresponding increase in restoration and rehabilitation, so a method to determine structural ratings of layered systems and rigid pavements is needed to properly design overlay thickness. The objective of this research was to evaluate the feasibility of using the Road Rater to determine support values of layered systems and rigid pavements. This evaluation was accomplished by correlating the Road Rater with the Federal Highway Administration (FHWA) Thumper, a dynamic deflection testing device. Data were obtained with the Road Rater and Thumper at 411 individual test locations on 39 different structural sections, ranging from 10" of PCC pavement and 25" of asphalt pavement to a newly graveled unpaved roadway. A high correlation between the 9,000-pound Thumper deflection and the 1,185-pound Road Rater deflection was obtained. A Road Rater modification has been completed to provide 2,000-pound load inputs. The deflection basin from the 2,000-pound loading, defined by four sensors spaced at 1-foot intervals, is being used to develop a graph for determining relative subgrade strengths. Road Rater deflections on rigid pavements are sufficiently large to support the potential of this technique.
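A minimal sketch of the correlation step described above, fitting a least-squares line relating Road Rater to Thumper deflections; the paired readings are invented for illustration, whereas the study's regression used 411 field test points.

```python
# Least-squares correlation of Road Rater and Thumper deflections.
import numpy as np

road_rater = np.array([0.8, 1.1, 1.5, 2.0, 2.6, 3.1])   # mils, 1,185 lb load (synthetic)
thumper = np.array([6.5, 9.0, 12.4, 16.8, 21.5, 25.9])  # mils, 9,000 lb load (synthetic)

slope, intercept = np.polyfit(road_rater, thumper, 1)
r = np.corrcoef(road_rater, thumper)[0, 1]
print(f"Thumper ~ {slope:.2f} x RoadRater + {intercept:.2f}  (r = {r:.3f})")
```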