51 results for single test electron model
in CentAUR: Central Archive University of Reading - UK
Abstract:
The role of the local atmospheric forcing on the ocean mixed layer depth (MLD) over the global oceans is studied using ocean reanalysis data products and a single-column ocean model coupled to an atmospheric general circulation model. The focus of this study is on how the annual mean and the seasonal cycle of the MLD relate to various forcing characteristics in different parts of the world's ocean, and how anomalous variations in the monthly mean MLD relate to anomalous atmospheric forcings. By analysing both ocean reanalysis data and the single-column ocean model, regions with different dominant forcings and different mean and variability characteristics of the MLD can be identified. Many of the global oceans' MLD characteristics appear to be directly linked to different atmospheric forcing characteristics at different locations. Here, heating and wind-stress are identified as the main drivers; in some, mostly coastal, regions the atmospheric salinity forcing also contributes. The annual mean MLD is more closely related to the annual mean wind-stress, and the MLD seasonality is more closely related to the seasonality in heating. The single-column ocean model, however, also points out that the MLD characteristics over most global ocean regions, and in particular the tropics and subtropics, cannot be maintained by local atmospheric forcing alone, but are also a result of ocean dynamics that are not simulated in a single-column ocean model. Thus, lateral ocean dynamics are essential for correctly simulating the observed MLD.
Abstract:
Using mixed logit models to analyse choice data is common but requires ex ante specification of the functional forms of preference distributions. We make the case for greater use of bounded functional forms and propose the use of the Marginal Likelihood, calculated using Bayesian techniques, as a single measure of model performance across non-nested mixed logit specifications. Using this measure leads to very different rankings of model specifications compared to alternative rule-of-thumb measures. The approach is illustrated using data from a choice experiment on GM food types, which provides insights into the recent WTO dispute between the EU and the US, Canada and Argentina, and into whether labelling and trade regimes should be based on the production process or on product composition.
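As a deliberately simplified, one-parameter illustration of the comparison measure (not the paper's mixed logit estimation; the data, grids and priors below are assumptions), the sketch shows how a marginal likelihood, obtained by integrating the likelihood against the prior, yields a single number for ranking two non-nested specifications, here differing only in whether the prior is bounded.

```python
# Toy marginal-likelihood comparison of two non-nested specifications.
# Everything here (data, priors, grids) is illustrative, not from the paper.
import numpy as np
from scipy import stats
from scipy.special import logsumexp

rng = np.random.default_rng(4)
y = rng.normal(0.8, 1.0, 50)                      # synthetic data

def log_marginal_likelihood(log_prior, grid):
    """Integrate likelihood x prior over a parameter grid, in log space."""
    loglik = np.array([stats.norm.logpdf(y, m, 1.0).sum() for m in grid])
    dm = grid[1] - grid[0]
    return logsumexp(loglik + log_prior(grid)) + np.log(dm)

grid_a = np.linspace(-10, 10, 2001)               # unbounded parameter range
grid_b = np.linspace(0, 1, 2001)                  # bounded parameter range

# Specification A: unbounded normal prior on the mean; B: bounded uniform prior.
log_mA = log_marginal_likelihood(lambda m: stats.norm.logpdf(m, 0, 2), grid_a)
log_mB = log_marginal_likelihood(lambda m: stats.uniform.logpdf(m, 0, 1), grid_b)
print(log_mA, log_mB, "prefer A" if log_mA > log_mB else "prefer B")
```

In the mixed logit setting the corresponding integral is over the mixing distribution of preferences and is evaluated with Bayesian simulation techniques rather than a grid.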
Abstract:
Presented herein is an experimental design that allows the effects of several radiative forcing factors on climate to be estimated as precisely as possible from a limited suite of atmosphere-only general circulation model (GCM) integrations. The forcings include the combined effect of observed changes in sea surface temperatures, sea ice extent, stratospheric (volcanic) aerosols, and solar output, plus the individual effects of several anthropogenic forcings. A single linear statistical model is used to estimate the forcing effects, each of which is represented by its global mean radiative forcing. The strong collinearity in time between the various anthropogenic forcings poses a technical problem that is overcome through the design of the experiment. This design uses every combination of anthropogenic forcing rather than the few highly replicated ensembles more commonly used in climate studies. Not only is this design highly efficient for a given number of integrations, but it also allows the estimation of (nonadditive) interactions between pairs of anthropogenic forcings. The simulated land surface air temperature changes since 1871 have been analyzed. The changes in natural and oceanic forcing, which itself contains some forcing from anthropogenic and natural influences, have the most influence. For the global mean, increasing greenhouse gases and the indirect aerosol effect had the largest anthropogenic effects. An interaction between these two anthropogenic effects was also found to exist in the atmosphere-only GCM. This interaction is similar in magnitude to the individual effects of changing tropospheric and stratospheric ozone concentrations or to the direct (sulfate) aerosol effect. Various diagnostics are used to evaluate the fit of the statistical model. For the global mean, these show that the land temperature response is proportional to the global mean radiative forcing, reinforcing the use of radiative forcing as a measure of climate change. The diagnostic tests also show that the linear model is suitable for analyses of land surface air temperature at each GCM grid point. Therefore, the linear model provides precise estimates of the space-time signals for all forcing factors under consideration. For simulated 50-hPa temperatures, results show that tropospheric ozone increases have contributed to stratospheric cooling over the twentieth century almost as much as changes in well-mixed greenhouse gases.
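As a rough illustration of how such a factorial design separates individual forcing effects and a pairwise interaction with a single linear model, the sketch below fits ordinary least squares to a handful of integrations; the forcing flags, temperature responses and interaction column are hypothetical and are not taken from the experiment described above.

```python
# Minimal sketch: estimating forcing effects from a factorial set of GCM runs by OLS.
import numpy as np

# Hypothetical design matrix: each row is one integration, each column flags whether
# a given anthropogenic forcing (e.g. GHG, indirect aerosol) is switched on; the
# final column encodes a GHG x indirect-aerosol interaction.
design = np.array([
    [0, 0, 0],
    [1, 0, 0],
    [0, 1, 0],
    [1, 1, 1],
], dtype=float)

# Illustrative global-mean land temperature changes for each integration (made-up numbers).
delta_T = np.array([0.0, 0.6, -0.3, 0.2])

# Intercept column stands in for the natural/oceanic forcing common to all runs.
X = np.column_stack([np.ones(len(delta_T)), design])

# Least-squares estimate of each forcing's contribution, including the interaction term.
coeffs, *_ = np.linalg.lstsq(X, delta_T, rcond=None)
print(dict(zip(["natural/ocean", "GHG", "indirect aerosol", "GHG x aerosol"], coeffs)))
```

Using every combination of forcings keeps the columns of the design matrix well separated, which is what allows the collinear anthropogenic signals to be disentangled.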
Abstract:
Windstorms are a main feature of the European climate and exert strong socioeconomic impacts. Large efforts have been made to develop and enhance models that simulate the intensification of windstorms, the resulting footprints, and the associated impacts. Simulated wind or gust speeds usually differ from observations, as regional climate models have biases and cannot capture all local effects. An approach to adjust regional climate model (RCM) simulations of wind and wind gust toward observations is introduced. For this purpose, 100 windstorms are selected and observations from 173 (111) test sites of the German Weather Service are considered for wind (gust) speed. Theoretical Weibull distributions are fitted to observed and simulated wind and gust speeds, and the distribution parameters of the observations are interpolated onto the RCM computational grid. A probability mapping approach is applied to relate the distributions and to correct the modeled footprints. The results are obtained not only for single test sites but for an area-wide regular grid. The approach is validated using root-mean-square errors on an event and site basis, documenting that the method is generally able to adjust the RCM output toward observations. For gust speeds, an improvement for 88 of 100 events and at about 64% of the test sites is reached. For wind, 99 of 100 improved events and ~84% improved sites are obtained. This gives confidence in the potential of the introduced approach for many applications, in particular those considering wind data.
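A minimal sketch of the probability-mapping step, assuming Weibull-distributed gust speeds and using SciPy's weibull_min; the station and RCM samples below are synthetic stand-ins, not the data described above.

```python
# Probability mapping: map a simulated gust speed onto the observed distribution
# by matching Weibull CDFs (synthetic data, illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
obs_gusts = rng.weibull(2.0, 5000) * 14.0   # stand-in for station observations (m/s)
rcm_gusts = rng.weibull(1.8, 5000) * 18.0   # stand-in for biased RCM gusts (m/s)

# Fit two-parameter Weibull distributions (location fixed at zero).
c_obs, _, scale_obs = stats.weibull_min.fit(obs_gusts, floc=0)
c_rcm, _, scale_rcm = stats.weibull_min.fit(rcm_gusts, floc=0)

def adjust(gust_rcm):
    """RCM value -> CDF probability under the RCM fit -> observed quantile."""
    p = stats.weibull_min.cdf(gust_rcm, c_rcm, loc=0, scale=scale_rcm)
    return stats.weibull_min.ppf(p, c_obs, loc=0, scale=scale_obs)

print(adjust(np.array([20.0, 30.0, 40.0])))
```

Once the observed Weibull parameters have been interpolated onto the RCM grid, the same mapping can be applied cell by cell to correct whole footprints rather than single test sites.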
Abstract:
Northern hemisphere snow water equivalent (SWE) distributions from remote sensing (SSM/I), the ERA40 reanalysis product and the HadCM3 general circulation model are compared. Large differences are seen in the February climatologies, particularly over Siberia. The SSM/I retrieval algorithm may be overestimating SWE in this region, while comparison with independent runoff estimates suggests that HadCM3 is underestimating SWE. Treatment of snow grain size and vegetation parameterizations are concerns with the remotely sensed data. For this reason, ERA40 is used as 'truth' for the following experiments. Despite the climatology differences, HadCM3 is able to reproduce the distribution of ERA40 SWE anomalies when assimilating ERA40 anomaly fields of temperature, sea level pressure, atmospheric winds and ocean temperature and salinity. However, when forecasts are released from these assimilated initial states, the SWE anomaly distribution diverges rapidly from that of ERA40. No predictability is seen from one season to another. Strong links between European SWE distribution and the North Atlantic Oscillation (NAO) are seen, but forecasts of this index by the assimilation scheme are poor. Longer-term relationships between SWE and the NAO, and between SWE and the El Niño–Southern Oscillation (ENSO), are also investigated in a multi-century run of HadCM3. SWE is impacted by ENSO in the Himalayas and North America, while the NAO affects SWE in North America and Europe. While significant connections with the NAO index were only present in DJF (and to an extent SON), the link between ENSO and February SWE distribution was seen to exist from the previous JJA ENSO index onwards. This represents a long lead time for SWE prediction for hydrological applications such as flood and wildfire forecasting. Further work is required to develop reliable large-scale observation-based SWE datasets with which to test these model-derived connections.
Abstract:
The paper develops a measure of consumer welfare losses associated with withholding information about a possible link between BSE and vCJD. The Cost of Ignorance (COI) is measured by comparing the utility of the informed choice with the utility of the uninformed choice, under conditions of improved information. Unlike previous work that is largely based on a single-equation demand model, the measure is obtained by retrieving a cost function from a dynamic Almost Ideal Demand System. The estimated perceived loss for Italian consumers due to delayed information ranges from 12 percent to 54 percent of total meat expenditure, depending on the month assumed to embody correct beliefs about the safety level of beef.
Abstract:
The purpose of this study was to apply and compare two time-domain analysis procedures in the determination of oxygen uptake (VO2) kinetics in response to a pseudorandom binary sequence (PRBS) exercise test. PRBS exercise tests have typically been analysed in the frequency domain. However, the complex interpretation of frequency responses may have limited the application of this procedure in both sporting and clinical contexts, where a single time measurement would facilitate subject comparison. The relative potential of both a mean response time (MRT) and a peak cross-correlation time (PCCT) was investigated. This study was divided into two parts: a test-retest reliability study (part A), in which 10 healthy male subjects completed two identical PRBS exercise tests, and a comparison of the VO2 kinetics of 12 elite endurance runners (ER) and 12 elite sprinters (SR; part B). In part A, 95% limits of agreement were calculated for comparison between MRT and PCCT. The results of part A showed no significant difference between test and retest as assessed by MRT [mean (SD) 42.2 (4.2) s and 43.8 (6.9) s] or by PCCT [21.8 (3.7) s and 22.7 (4.5) s]. Measurement error (%) was lower for MRT in comparison with PCCT (16% and 25%, respectively). In part B of the study, the VO2 kinetics of ER were significantly faster than those of SR, as assessed by MRT [33.4 (3.4) s and 39.9 (7.1) s, respectively; P<0.01] and PCCT [20.9 (3.8) s and 24.8 (4.5) s; P<0.05]. It is possible that either analysis procedure could provide a single test measurement of VO2 kinetics; however, the greater reliability of the MRT data suggests that this method has more potential for development in the assessment of VO2 kinetics by PRBS exercise testing.
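As an illustration of the peak cross-correlation time, the sketch below (an assumed implementation with a synthetic PRBS-like input and first-order VO2 kinetics, not the authors' analysis code) returns the lag at which the cross-correlation between work rate and VO2 is largest.

```python
# Peak cross-correlation time (PCCT) from a PRBS-style exercise test (toy example).
import numpy as np

def pcct(work_rate, vo2, dt=1.0, max_lag=120):
    """Return the lag (s) maximising the cross-correlation of VO2 with work rate."""
    wr = (work_rate - work_rate.mean()) / work_rate.std()
    v = (vo2 - vo2.mean()) / vo2.std()
    lags = np.arange(0, int(max_lag / dt))
    xcorr = [np.mean(wr[: len(wr) - k] * v[k:]) for k in lags]
    return lags[int(np.argmax(xcorr))] * dt

# Illustrative use: a first-order VO2 response (time constant ~30 s) to a toy PRBS input.
rng = np.random.default_rng(1)
wr = np.repeat(rng.integers(0, 2, 60), 10).astype(float)      # 600 s PRBS-like work rate
vo2 = np.zeros_like(wr)
for t in range(1, len(wr)):
    vo2[t] = vo2[t - 1] + (wr[t] - vo2[t - 1]) / 30.0          # first-order kinetics
vo2 += rng.normal(0, 0.02, len(vo2))                           # measurement noise
print(pcct(wr, vo2))
```

The mean response time would instead be derived from the fitted step response; only the PCCT is sketched here.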
Abstract:
We discuss the feasibility of wireless terahertz communications links deployed in a metropolitan area and model the large-scale fading of such channels. The model takes into account reception through direct line of sight, ground and wall reflection, as well as diffraction around a corner. The movement of the receiver is modeled by an autonomous dynamic linear system in state space, whereas the geometric relations involved in the attenuation and multipath propagation of the electric field are described by a static nonlinear mapping. A subspace algorithm in conjunction with polynomial regression is used to identify a single-output Wiener model from time-domain measurements of the field intensity when the receiver motion is simulated using a constant angular speed and an exponentially decaying radius. The identification procedure is validated by using the model to perform q-step-ahead predictions. The sensitivity of the algorithm to small-scale fading, detector noise, and atmospheric changes is discussed. The performance of the algorithm is tested in the diffraction zone assuming a range of emitter frequencies (2, 38, 60, 100, 140, and 400 GHz). Extensions of the simulation results to situations where a more complicated trajectory describes the motion of the receiver are also implemented, providing information on the performance of the algorithm under a worst-case scenario. Finally, a sensitivity analysis to model parameters for the identified Wiener system is proposed.
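The sketch below illustrates the Wiener structure itself: an assumed stable linear state-space block followed by a static polynomial nonlinearity fitted by regression. It is a toy in which the intermediate linear output is treated as known; the subspace identification of the linear part from field-intensity measurements, as described above, is not reproduced here.

```python
# Toy Wiener model: linear state-space dynamics followed by a static polynomial map.
import numpy as np

rng = np.random.default_rng(2)

# Assumed stable discrete-time linear dynamics (stand-in for the receiver-motion filter).
A = np.array([[0.95, 0.10], [0.00, 0.90]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])

def linear_block(u):
    x = np.zeros(2)
    y = np.empty(len(u))
    for k, uk in enumerate(u):
        y[k] = c @ x
        x = A @ x + b * uk
    return y

u = rng.normal(size=500)                      # excitation signal (stand-in for trajectory input)
z = linear_block(u)                           # intermediate linear output
field = 1.0 - 0.5 * z + 0.1 * z ** 3          # assumed static nonlinearity (field intensity)
field += rng.normal(0.0, 0.01, len(field))    # detector noise

# With the intermediate signal known, the static part reduces to polynomial regression.
coeffs = np.polyfit(z, field, deg=3)
print(coeffs)                                 # approximately [0.1, 0, -0.5, 1.0]
```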
Abstract:
The associative sequence learning model proposes that the development of the mirror system depends on the same mechanisms of associative learning that mediate Pavlovian and instrumental conditioning. To test this model, two experiments used the reduction of automatic imitation through incompatible sensorimotor training to assess whether mirror system plasticity is sensitive to contingency (i.e., the extent to which activation of one representation predicts activation of another). In Experiment 1, residual automatic imitation was measured following incompatible training in which the action stimulus was a perfect predictor of the response (contingent) or not at all predictive of the response (noncontingent). A contingency effect was observed: There was less automatic imitation, indicative of more learning, in the contingent group. Experiment 2 replicated this contingency effect and showed that, as predicted by associative learning theory, it can be abolished by signaling trials in which the response occurs in the absence of an action stimulus. These findings support the view that mirror system development depends on associative learning and indicate that this learning is not purely Hebbian. If this is correct, associative learning theory could be used to explain, predict, and intervene in mirror system development.
Abstract:
This paper derives an efficient algorithm for constructing sparse kernel density (SKD) estimates. The algorithm first selects a very small subset of significant kernels using an orthogonal forward regression (OFR) procedure based on the D-optimality experimental design criterion. The weights of the resulting sparse kernel model are then calculated using a modified multiplicative nonnegative quadratic programming algorithm. Unlike most SKD estimators, the proposed D-optimality regression approach is an unsupervised construction algorithm and does not require an empirical desired response for the kernel selection task. The strength of the D-optimality OFR lies in the fact that the algorithm automatically selects a small subset of the most significant kernels related to the largest eigenvalues of the kernel design matrix, which account for most of the energy of the kernel training data, and this also guarantees the most accurate kernel weight estimates. The proposed method is also computationally attractive in comparison with many existing SKD construction algorithms. Extensive numerical investigation demonstrates the ability of this regression-based approach to efficiently construct a very sparse kernel density estimate with excellent test accuracy, and our results show that the proposed method compares favourably with other existing sparse methods, in terms of test accuracy, model sparsity and complexity, for constructing kernel density estimates.
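A much-simplified sketch of the idea, not the paper's algorithm: Gaussian kernel centres are chosen greedily with a D-optimality-flavoured (log-determinant) criterion on the kernel design matrix, and nonnegative weights are then fitted and normalised. The orthogonal forward regression and multiplicative nonnegative quadratic programming steps are replaced here by generic stand-ins (greedy selection and SciPy's nnls).

```python
# Simplified sparse kernel density estimate on synthetic 1-D data (illustrative only).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(1, 1.0, 200)])  # toy sample
h = 0.4                                                                   # kernel width

def gauss(xq, centres):
    return np.exp(-0.5 * ((xq[:, None] - centres[None, :]) / h) ** 2) / (h * np.sqrt(2 * np.pi))

Phi = gauss(x, x)                       # full kernel design matrix

# Greedy D-optimality-style selection: add the centre that most increases the
# log-determinant of the selected Gram submatrix.
selected = []
for _ in range(6):
    best, best_gain = None, -np.inf
    for j in range(len(x)):
        if j in selected:
            continue
        idx = selected + [j]
        sign, logdet = np.linalg.slogdet(Phi[np.ix_(idx, idx)])
        if sign > 0 and logdet > best_gain:
            best, best_gain = j, logdet
    selected.append(best)

# Nonnegative weights fitted to a full Parzen estimate at the sample points, then normalised
# so the resulting Gaussian mixture integrates to one.
target = Phi.mean(axis=1)
w, _ = nnls(Phi[:, selected], target)
w /= w.sum()
print(x[selected], w)
```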
Abstract:
Interpretation of ambiguity is consistently associated with anxiety in children; however, the temporal relationship between interpretation and anxiety remains unclear, as do the developmental origins of interpretative biases. This study set out to test a model of the development of interpretative biases in a prospective study of 110 children aged 5–9 years. Children and their parents were assessed three times, annually, on measures of anxiety and interpretation of ambiguous scenarios (including, for parents, both their own interpretations and their expectations regarding their child). Three models were constructed to assess associations between parent and child anxiety and threat and distress cognitions and expectancies. The three models were all a reasonable fit to the data and supported the conclusions that: (i) children’s threat and distress cognitions were stable over time and were significantly associated with anxiety, (ii) parents’ threat and distress cognitions and expectancies significantly predicted child threat cognitions at some time points, and (iii) parental anxiety significantly predicted parents’ cognitions, which predicted parental expectancies at some time points. Parental expectancies were also significantly predicted by child cognitions. The findings varied depending on the assessment time point and on whether threat or distress cognitions were being considered. The findings support the notion that child and parent cognitive processes, in particular parental expectations, may be a useful target in the treatment or prevention of anxiety disorders in children.
Abstract:
Objective To highlight the contribution of the gut microbiota to the modulation of host metabolism by dietary inulin-type fructans (ITF prebiotics) in obese women. Methods A double-blind, placebo-controlled intervention study was performed with 30 obese women treated with ITF prebiotics (inulin/oligofructose 50/50 mix; n=15) or placebo (maltodextrin; n=15) for 3 months (16 g/day). Blood, faeces and urine sampling, oral glucose tolerance test, homeostasis model assessment and impedancemetry were performed before and after treatment. The gut microbial composition in faeces was analysed by phylogenetic microarray and qPCR analysis of 16S rDNA. Plasma and urine metabolic profiles were analysed by 1H-NMR spectroscopy. Results Treatment with ITF prebiotics, but not the placebo, led to an increase in Bifidobacterium and Faecalibacterium prausnitzii; both bacteria negatively correlated with serum lipopolysaccharide levels. ITF prebiotics also decreased Bacteroides intestinalis, Bacteroides vulgatus and Propionibacterium, an effect associated with a slight decrease in fat mass and with plasma lactate and phosphatidylcholine levels. No clear treatment clustering could be detected for gut microbial analysis or plasma and urine metabolomic profile analyses. However, ITF prebiotics led to subtle changes in the gut microbiota that may have an important impact on several key metabolites implicated in obesity and/or diabetes. Conclusions ITF prebiotics selectively changed the gut microbiota composition in obese women, leading to modest changes in host metabolism, as suggested by the correlation between some bacterial species and metabolic endotoxaemia or metabolomic signatures.
Abstract:
We present high time-resolution multiwavelength observations of X-ray bursts in the low-mass X-ray binary UY Vol. Strong reprocessed signals are present in the ultraviolet and optical, lagged and smeared with respect to the X-rays. The addition of far-ultraviolet coverage for one burst allows much tighter constraints on the temperature and geometry of the reprocessing region than previously possible. A blackbody reprocessing model for this burst suggests a rise in temperature during the burst from 18,000 to 35,000 K and an emitting area comparable to that expected for the disk and/or irradiated companion star. The lags are consistent with those expected. The single-zone blackbody model cannot, however, reproduce the ratio of optical to ultraviolet flux during the burst. The discrepancy seems too large to be explained by deviations from a local blackbody spectrum and more likely indicates that a range of reprocessing temperatures is required. Comparable results are derived from other bursts; in particular, the lag and smearing both appear shorter when the companion star is on the near side of the disk, as predicted. The burst observed by HST also yielded a spectrum of the reprocessed light. It is dominated by continuum, with a spectral shape consistent with the temperatures derived from light-curve modeling. Taken as a whole, our observations confirm the standard paradigm of prompt reprocessing distributed across the disk and companion star, with the response dominated by a thermalized continuum rather than by emission lines.
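For orientation, the small calculation below evaluates the Planck function at assumed representative optical and far-ultraviolet wavelengths to show how the optical-to-ultraviolet flux ratio of a single-temperature blackbody varies between the quoted reprocessing temperatures; the wavelengths are illustrative choices, not those used in the paper.

```python
# Optical-to-UV flux ratio of a single-temperature blackbody (illustrative wavelengths).
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(wavelength_m, T):
    """Planck spectral radiance B_lambda (W m^-3 sr^-1)."""
    x = h * c / (wavelength_m * k * T)
    return 2 * h * c**2 / wavelength_m**5 / np.expm1(x)

optical, uv = 550e-9, 150e-9              # assumed representative optical / far-UV wavelengths
for T in (18_000, 35_000):
    print(T, planck(optical, T) / planck(uv, T))
```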
Abstract:
Tests, as learning events, are often more effective than are additional study opportunities, especially when recall is tested after a long retention interval. To what degree, though, do prior test or study events support subsequent study activities? We set out to test an implication of Bjork and Bjork’s (1992) new theory of disuse—that, under some circumstances, prior study may facilitate subsequent study more than does prior testing. Participants learned English–Swahili translations and then underwent a practice phase during which some items were tested (without feedback) and other items were restudied. Although tested items were better recalled after a 1-week delay than were restudied items, this benefit did not persist after participants had the opportunity to study the items again via feedback. In fact, after this additional study opportunity, items that had been restudied earlier were better recalled than were items that had been tested earlier. These results suggest that measuring the memorial consequences of testing requires more than a single test of retention and, theoretically, a consideration of the differing status of initially recallable and nonrecallable items.
Abstract:
This article examines whether a country's economic reforms are affected by reforms adopted by other countries. Our theoretical model predicts that reforms are more likely when factors of production are internationally mobile and reforms are pursued in other economies. Using the change in the Index of Economic Freedom as the measure of market-liberalizing reforms and panel data (144 countries, 1995–2006), we test our model. We find evidence of the spillover of reforms. Moreover, consistent with our model, international trade is not a vehicle for the diffusion of economic reforms; rather the most important mechanism is geographical or cultural proximity.