79 results for Modeling Rapport Using Hidden Markov Models


Relevance:

100.00%

Publisher:

Abstract:

We present a description of the theoretical framework and "best practice" for using the paleo-climate model component of the Coupled Model Intercomparison Project (Phase 5) (CMIP5) to constrain future projections of climate using the same models. The constraints arise from measures of skill in hindcasting paleo-climate changes from the present over 3 periods: the Last Glacial Maximum (LGM) (21 thousand years before present, ka), the mid-Holocene (MH) (6 ka) and the Last Millennium (LM) (850–1850 CE). The skill measures may be used to validate robust patterns of climate change across scenarios or to distinguish between models that have differing outcomes in future scenarios. We find that the multi-model ensemble of paleo-simulations is adequate for addressing at least some of these issues. For example, selected benchmarks for the LGM and MH are correlated to the rank of future projections of precipitation/temperature or sea ice extent to indicate that models that produce the best agreement with paleoclimate information give demonstrably different future results than the rest of the models. We also find that some comparisons, for instance associated with model variability, are strongly dependent on uncertain forcing timeseries, or show time dependent behaviour, making direct inferences for the future problematic. Overall, we demonstrate that there is a strong potential for the paleo-climate simulations to help inform the future projections and urge all the modeling groups to complete this subset of the CMIP5 runs.

Relevance:

100.00%

Publisher:

Abstract:

A central difficulty in modeling epileptogenesis using biologically plausible computational and mathematical models is not the production of activity characteristic of a seizure, but rather producing it in response to a specific and quantifiable physiologic change or pathologic abnormality. This is particularly problematic when it is considered that the pathophysiological genesis of most epilepsies is largely unknown. However, several volatile general anesthetic agents, whose principal targets of action are quantifiably well characterized, are also known to be proconvulsant. The authors describe recent approaches to theoretically describing the electroencephalographic effects of volatile general anesthetic agents that may be able to provide important insights into the physiologic mechanisms that underpin seizure initiation.

Relevance:

100.00%

Publisher:

Abstract:

The retrieval (estimation) of sea surface temperatures (SSTs) from space-based infrared observations is increasingly performed using retrieval coefficients derived from radiative transfer simulations of top-of-atmosphere brightness temperatures (BTs). Typically, an estimate of SST is formed from a weighted combination of BTs at a few wavelengths, plus an offset. This paper addresses two questions about the radiative transfer modeling approach to deriving these weighting and offset coefficients. How precisely specified do the coefficients need to be in order to obtain the required SST accuracy (e.g., scatter <0.3 K in week-average SST, bias <0.1 K)? And how precisely is it actually possible to specify them using current forward models? The conclusions are that weighting coefficients can be obtained with adequate precision, while the offset coefficient will often require an empirical adjustment of the order of a few tenths of a kelvin against validation data. Thus, a rational approach to defining retrieval coefficients is one of radiative transfer modeling followed by offset adjustment. The need for this approach is illustrated from experience in defining SST retrieval schemes for operational meteorological satellites. A strategy is described for obtaining the required offset adjustment, and the paper highlights some of the subtler aspects involved with reference to the example of SST retrievals from the imager on the geostationary satellite GOES-8.
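The retrieval form described above (a weighted combination of brightness temperatures plus an offset, with the offset empirically adjusted against validation data) can be sketched in a few lines. The coefficient and temperature values below are invented for illustration only; this is not an operational retrieval scheme.

```python
import numpy as np

def retrieve_sst(bt, weights, offset):
    """Linear SST retrieval: sst = weights . bt + offset (kelvin)."""
    return float(np.dot(weights, bt) + offset)

def adjust_offset(offset, retrieved, validated):
    """Empirical offset adjustment: shift the offset by the mean bias
    of the retrievals relative to validation (e.g. buoy) SSTs."""
    bias = np.mean(np.asarray(retrieved) - np.asarray(validated))
    return offset - bias

# Invented split-window-style example (not real coefficients):
weights = np.array([3.4, -2.4])   # channel weights summing to 1
offset = 0.5                      # kelvin, tuned empirically
bt = np.array([290.0, 288.5])     # brightness temperatures, kelvin
sst = retrieve_sst(bt, weights, offset)
```

In this scheme the weights come from radiative transfer simulations, while the offset receives the few-tenths-of-a-kelvin empirical correction the abstract describes.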

Relevance:

100.00%

Publisher:

Abstract:

A study was conducted to estimate variation among laboratories and between manual and automated techniques of measuring pressure on the resulting gas production profiles (GPP). Eight feeds (molassed sugarbeet feed, grass silage, maize silage, soyabean hulls, maize gluten feed, whole crop wheat silage, wheat, glucose) were milled to pass a 1 mm screen and sent to three laboratories (ADAS Nutritional Sciences Research Unit, UK; Institute of Grassland and Environmental Research (IGER), UK; Wageningen University, The Netherlands). Each laboratory measured GPP over 144 h using standardised procedures with manual pressure transducers (MPT) and automated pressure systems (APS). The APS at ADAS used a pressure transducer and bottles in a shaking water bath, while the APS at Wageningen and IGER used a pressure sensor and bottles held in a stationary rack. Apparent dry matter degradability (ADDM) was estimated at the end of the incubation. GPP were fitted to a modified Michaelis-Menten model assuming a single phase of gas production, and were described in terms of the asymptotic volume of gas produced (A), the time to half A (B), the time of maximum gas production rate (t_RM gas) and the maximum gas production rate (R_M gas). There were effects (P<0.001) of substrate on all parameters. MPT produced more (P<0.001) gas, but with longer (P<0.001) B, longer (P<0.05) t_RM gas and lower (P<0.001) R_M gas compared to APS. There was no difference between apparatus in ADDM estimates. Interactions occurred between substrate and apparatus, substrate and laboratory, and laboratory and apparatus. However, when mean values for MPT were regressed across the individual laboratories, relationships were good (i.e., adjusted R² = 0.827 or higher). Good relationships were also observed with APS, although they were weaker than for MPT (i.e., adjusted R² = 0.723 or higher). The relationships between mean MPT and mean APS data were also good (i.e., adjusted R² = 0.844 or higher).
Data suggest that, although laboratory and method of measuring pressure are sources of variation in GPP estimation, it should be possible using appropriate mathematical models to standardise data among laboratories so that data from one laboratory could be extrapolated to others. This would allow development of a database of GPP data from many diverse feeds. (c) 2005 Published by Elsevier B.V.
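The fitted curve described above can be sketched as follows. The functional form G(t) = A / (1 + (B/t)^C) and the shape parameter C are assumptions based on common single-phase GPP parameterisations, not the authors' exact model.

```python
import numpy as np

def gas_production(t, A, B, C):
    """Cumulative gas volume at time t (hours, t > 0), with
    A = asymptotic volume, B = time to half A, C = shape (assumed)."""
    t = np.asarray(t, dtype=float)
    return A / (1.0 + (B / t) ** C)

# By construction, half of A has been produced at t = B:
half = gas_production(12.0, A=300.0, B=12.0, C=2.0)   # 150.0
```

With a form like this, the parameters A and B reported by each laboratory can be compared directly, which is what makes the cross-laboratory regressions above possible.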

Relevance:

100.00%

Publisher:

Abstract:

In this contribution we aim at anchoring Agent-Based Modeling (ABM) simulations in actual models of human psychology. More specifically, we apply unidirectional ABM to social psychological models using low level agents (i.e., intra-individual) to examine whether they generate better predictions, in comparison to standard statistical approaches, concerning the intention to perform a behavior and the behavior itself. Moreover, this contribution tests to what extent the predictive validity of models of attitude such as the Theory of Planned Behavior (TPB) or Model of Goal-directed Behavior (MGB) depends on the assumption that people's decisions and actions are purely rational. Simulations were therefore run by considering different deviations from rationality of the agents with a trembling hand method. Two data sets, concerning respectively the consumption of soft drinks and physical activity, were used. Three key findings emerged from the simulations. First, compared to the standard statistical approach, the agent-based simulation generally improves the prediction of behavior from intention. Second, the improvement in prediction is inversely proportional to the complexity of the underlying theoretical model. Finally, the introduction of varying degrees of deviation from rationality in agents' behavior can lead to an improvement in the goodness of fit of the simulations. By demonstrating the potential of ABM as a complementary perspective to evaluating social psychological models, this contribution underlines the necessity of better defining agents in terms of psychological processes before examining higher levels such as the interactions between individuals.
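The "trembling hand" deviation from rationality can be sketched as below. The 0.5 intention threshold and the agent's decision rule are illustrative assumptions, not the authors' implementation.

```python
import random

def act(intention_strength, tremble, rng):
    """One agent decision: act on intention (threshold 0.5, assumed)
    unless the hand 'trembles' (probability `tremble`), in which case
    the action is chosen at random."""
    if rng.random() < tremble:
        return rng.random() < 0.5   # tremble: random action
    return intention_strength > 0.5  # rational: act iff intention high

rng = random.Random(0)
# A fully rational agent (tremble = 0) is deterministic:
decided = act(0.9, 0.0, rng)
```

Sweeping `tremble` from 0 upward is one way to run the "varying degrees of deviation from rationality" the abstract mentions.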

Relevance:

100.00%

Publisher:

Abstract:

We present a new subcortical structure shape modeling framework using heat kernel smoothing constructed with the Laplace-Beltrami eigenfunctions. The cotan discretization is used to numerically obtain the eigenfunctions of the Laplace-Beltrami operator along the surface of subcortical structures of the brain. The eigenfunctions are then used to construct the heat kernel and to smooth out measurement noise along the surface. The proposed framework is applied in investigating the influence of age (38-79 years) and gender on amygdala and hippocampus shape. We detected a significant age effect on the hippocampus, in accordance with previous studies. In addition, we also detected a significant gender effect on the amygdala. Since we did not find any such differences with traditional volumetric methods, our results demonstrate the benefit of the current framework over traditional volumetric methods.
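The smoothing step can be sketched in a few lines. For brevity, a 1-D chain-graph Laplacian stands in for the cotan-discretized Laplace-Beltrami operator on an actual surface mesh (an assumption of this sketch).

```python
import numpy as np

def chain_laplacian(n):
    """Laplacian of a 1-D chain graph (stand-in for the cotan
    discretization used on real cortical/subcortical surfaces)."""
    L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1.0   # boundary rows
    return L

def heat_kernel_smooth(signal, laplacian, t):
    """Expand `signal` in the Laplacian eigenbasis and damp each
    component by exp(-t * eigenvalue): heat kernel smoothing."""
    lam, psi = np.linalg.eigh(laplacian)   # eigenpairs of the operator
    coeffs = psi.T @ signal                # expansion coefficients
    return psi @ (np.exp(-t * lam) * coeffs)
```

Because the constant eigenfunction has eigenvalue zero, smoothing leaves the mean of the measurement untouched while shrinking high-frequency noise.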

Relevance:

100.00%

Publisher:

Abstract:

In order to evaluate the future potential benefits of emission regulation on regional air quality, while taking into account the effects of climate change, off-line air quality projection simulations are driven using weather forcing taken from regional climate models. These regional models are themselves driven by simulations carried out using global climate models (GCM) and economic scenarios. Uncertainties and biases in climate models introduce an additional "climate modeling" source of uncertainty on top of all other types of uncertainty in air quality modeling for policy evaluation. In this article we evaluate the changes in air quality-related weather variables induced by replacing reanalysis-driven regional climate simulations with GCM-driven ones. As an example we use simulations carried out in the framework of the ERA-Interim programme and of the CMIP5 project using the Institut Pierre-Simon Laplace climate model (IPSLcm), driving regional simulations performed in the framework of the EURO-CORDEX programme. In summer, we found compensating deficiencies acting on photochemistry: an overestimation by GCM-driven weather due to a positive bias in short-wave radiation, a negative bias in wind speed, too many stagnant episodes, and a negative temperature bias. In winter, air quality is mostly driven by dispersion, and we could not identify significant differences in either wind or planetary boundary layer height statistics between GCM-driven and reanalysis-driven regional simulations. However, precipitation appears largely overestimated in GCM-driven simulations, which could significantly affect the simulation of aerosol concentrations. The identification of these biases will help in interpreting results of future air quality simulations using these data. Despite these biases, we conclude that the identified differences should not lead to major difficulties in using GCM-driven regional climate simulations for air quality projections.

Relevance:

100.00%

Publisher:

Abstract:

In this paper we discuss the current state-of-the-art in estimating, evaluating, and selecting among non-linear forecasting models for economic and financial time series. We review theoretical and empirical issues, including predictive density, interval and point evaluation and model selection, loss functions, data-mining, and aggregation. In addition, we argue that although the evidence in favor of constructing forecasts using non-linear models is rather sparse, there is reason to be optimistic. However, much remains to be done. Finally, we outline a variety of topics for future research, and discuss a number of areas which have received considerable attention in the recent literature, but where many questions remain.

Relevance:

100.00%

Publisher:

Abstract:

The problem of technology obsolescence in information-intensive businesses (software and hardware no longer being supported, and replaced by improved and different solutions), combined with a cost-constrained market, can severely increase costs and operational, and ultimately reputational, risk. Although many businesses recognise technological obsolescence, the pervasive nature of technology often means they have little information with which to identify the risk and location of pending obsolescence, and little money to apply to the solution. This paper presents a low-cost structured method to identify obsolete software and the risk of its obsolescence: the structure of a business and its supporting IT resources can be captured, modelled and analysed, and the risk to the business of technology obsolescence identified, enabling remedial action using qualified obsolescence information. The technique is based on a structured modelling approach using enterprise architecture models and a heatmap algorithm to highlight high-risk obsolescent elements. The method has been tested and applied in practice in three consulting studies carried out by Capgemini involving four UK police forces; the generic technique could, however, be applied to any industry, and there are plans to improve it using ontology framework methods. This paper contains details of enterprise architecture meta-models and related modelling.
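The heat-map idea can be illustrated with a toy scoring rule. The likelihood-times-impact score and the band thresholds below are invented for illustration; the abstract does not disclose the actual algorithm.

```python
def risk_band(likelihood, impact):
    """Bucket an IT element into a heat-map band from its obsolescence
    likelihood and business impact (both in [0, 1]).
    Thresholds are illustrative assumptions, not the method's values."""
    score = likelihood * impact
    if score >= 0.5:
        return "red"     # act now
    if score >= 0.2:
        return "amber"   # plan remediation
    return "green"       # monitor
```

Scoring every element captured in the enterprise architecture model this way yields the colour-coded view that highlights where remedial action is needed first.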

Relevance:

100.00%

Publisher:

Abstract:

We present an analysis of seven primary transit observations of the hot Neptune GJ436b at 3.6, 4.5, and 8 μm obtained with the Infrared Array Camera on the Spitzer Space Telescope. After correcting for systematic effects, we fitted the light curves using the Markov Chain Monte Carlo technique. Combining these new data with the EPOXI, Hubble Space Telescope, and ground-based V, I, H, and Ks published observations, the range 0.5-10 μm can be covered. Due to the low level of activity of GJ436, the effect of starspots on the combination of transits at different epochs is negligible at the accuracy of the data set. Representative climate models were calculated using a three-dimensional, pseudospectral general circulation model with idealized thermal forcing. Simulated transit spectra of GJ436b were generated using line-by-line radiative transfer models including the opacities of the molecular species expected to be present in such a planetary atmosphere. A new ab initio line list for hot ammonia has been used for the first time. The photometric data observed at multiple wavelengths can be interpreted with methane being the dominant absorber after molecular hydrogen, possibly with minor contributions from ammonia, water, and other molecules. No clear evidence of carbon monoxide and carbon dioxide is found from transit photometry. We discuss this result in the light of a recent paper in which photochemical disequilibrium is invoked to interpret secondary transit photometric data. We show that the emission photometric data are not incompatible with the presence of abundant methane, but further spectroscopic data are desirable to confirm this scenario.
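The Markov Chain Monte Carlo light-curve fitting mentioned above can be illustrated with a toy Metropolis-Hastings sampler. The box-shaped transit model, the single free parameter (depth), and all numbers are assumptions of this sketch, not the actual GJ436b analysis.

```python
import numpy as np

def box_transit(t, depth, t0=0.0, dur=0.1):
    """Crude light-curve model: flux 1 out of transit, 1 - depth in."""
    flux = np.ones_like(t)
    flux[np.abs(t - t0) < dur / 2] -= depth
    return flux

def mcmc_depth(t, flux, sigma, n_steps=2000, step=0.001, seed=0):
    """Metropolis-Hastings chain for the transit depth under a
    Gaussian likelihood with per-point noise `sigma`."""
    rng = np.random.default_rng(seed)
    depth = 0.0
    def log_like(d):
        return -0.5 * np.sum((flux - box_transit(t, d)) ** 2) / sigma ** 2
    ll = log_like(depth)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        prop = depth + rng.normal(0.0, step)     # propose a move
        ll_prop = log_like(prop)
        if np.log(rng.random()) < ll_prop - ll:  # accept/reject
            depth, ll = prop, ll_prop
        chain[i] = depth
    return chain
```

After discarding burn-in, the chain samples the posterior of the depth; real analyses fit many correlated parameters the same way.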

Relevance:

100.00%

Publisher:

Abstract:

We present a selection of methodologies for using the palaeo-climate model component of the Coupled Model Intercomparison Project (Phase 5) (CMIP5) to attempt to constrain future climate projections using the same models. The constraints arise from measures of skill in hindcasting palaeo-climate changes from the present over three periods: the Last Glacial Maximum (LGM) (21 000 yr before present, ka), the mid-Holocene (MH) (6 ka) and the Last Millennium (LM) (850–1850 CE). The skill measures may be used to validate robust patterns of climate change across scenarios or to distinguish between models that have differing outcomes in future scenarios. We find that the multi-model ensemble of palaeo-simulations is adequate for addressing at least some of these issues. For example, selected benchmarks for the LGM and MH are correlated to the rank of future projections of precipitation/temperature or sea ice extent to indicate that models that produce the best agreement with palaeo-climate information give demonstrably different future results than the rest of the models. We also explore cases where comparisons are strongly dependent on uncertain forcing time series or show important non-stationarity, making direct inferences for the future problematic. Overall, we demonstrate that there is a strong potential for the palaeo-climate simulations to help inform the future projections and urge all the modelling groups to complete this subset of the CMIP5 runs.
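The skill-versus-projection ranking idea described above can be sketched as follows. RMSE as the hindcast skill measure and Spearman rank correlation as the comparison are assumptions of this sketch, not the benchmarks actually proposed; all data would be the models' palaeo hindcasts and future projections.

```python
import numpy as np

def rmse(sim, obs):
    """Root-mean-square error of a hindcast against a reconstruction."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def rank(values):
    """Rank positions, 0 = smallest (best skill if values are errors)."""
    order = np.argsort(values)
    r = np.empty(len(values), dtype=int)
    r[order] = np.arange(len(values))
    return r

def spearman(a, b):
    """Spearman correlation between two sets of model scores."""
    ra, rb = rank(a).astype(float), rank(b).astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra ** 2).sum() * (rb ** 2).sum()))
```

A strong correlation between palaeo-skill ranks and projection ranks is what would indicate that the best palaeo performers give "demonstrably different future results".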

Relevance:

100.00%

Publisher:

Abstract:

A multiple regression analysis of the NCEP-NCAR reanalysis dataset shows a response to increased solar activity of a weakening and poleward shift of the subtropical jets. This signal is separable from other influences, such as those of El Nino-Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO), and is very similar to that seen in previous studies using global circulation models (GCMs) of the effects of an increase in solar spectral irradiance. The response to increased stratospheric (volcanic) aerosol is found in the data to be a weakening and equatorward shift of the jets. The GCM studies of the solar influence also showed an impact on tropospheric mean meridional circulation with a weakening and expansion of the tropical Hadley cells and a poleward shift of the Ferrel cells. To understand the mechanisms whereby the changes in solar irradiance affect tropospheric winds and circulation, experiments have been carried out with a simplified global circulation model. The results show that generic heating of the lower stratosphere tends to weaken the subtropical jets and the tropospheric mean meridional circulations. The positions of the jets, and the extent of the Hadley cells, respond to the distribution of the stratospheric heating, with low-latitude heating forcing them to move poleward, and high-latitude or latitudinally uniform heating forcing them equatorward. The patterns of response are similar to those that are found to be a result of the solar or volcanic influences, respectively, in the data analysis. This demonstrates that perturbations to the heat balance of the lower stratosphere, such as those brought about by solar or volcanic activity, can produce changes in the mean tropospheric circulation, even without any direct forcing below the tropopause.
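The signal-separation step described above amounts to an ordinary least-squares fit of the field onto several explanatory indices at once, so each coefficient isolates one influence. The sketch below uses synthetic stand-ins for the solar and ENSO indices (invented data, not the NCEP-NCAR analysis).

```python
import numpy as np

def multi_regress(y, predictors):
    """OLS fit of y = b0 + b1*x1 + b2*x2 + ...; returns [b0, b1, ...]."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

# Synthetic indices and response (coefficients 0.3 and -0.5 invented):
rng = np.random.default_rng(1)
n = 500
solar = rng.standard_normal(n)
enso = rng.standard_normal(n)
y = 0.3 * solar - 0.5 * enso + 0.05 * rng.standard_normal(n)
b = multi_regress(y, [solar, enso])
```

Because the predictors enter jointly, the recovered solar coefficient is (approximately) free of the ENSO influence, which is what makes the solar signal "separable" in the analysis.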

Relevance:

100.00%

Publisher:

Abstract:

Preferred structures in the surface pressure variability are investigated in and compared between two 100-year simulations of the Hadley Centre climate model HadCM3. In the first (control) simulation, the model is forced with pre-industrial carbon dioxide concentration (1×CO2) and in the second simulation the model is forced with doubled CO2 concentration (2×CO2). Daily winter (December-January-February) surface pressures over the Northern Hemisphere are analysed. The identification of preferred patterns is addressed using multivariate mixture models. For the control simulation, two significant flow regimes are obtained at 5% and 2.5% significance levels within the state space spanned by the leading two principal components. They show a high pressure centre over the North Pacific/Aleutian Islands associated with a low pressure centre over the North Atlantic, and its reverse. For the 2×CO2 simulation, no such behaviour is obtained. In a higher-dimensional state space, flow patterns are obtained from both simulations. They are found to be significant at the 1% level for the control simulation and at the 2.5% level for the 2×CO2 simulation. Hence under CO2 doubling, regime behaviour in the large-scale wave dynamics weakens. Doubling greenhouse gas concentration affects both the frequency of occurrence of regimes and the pattern structures. The less frequent regime becomes amplified and the more frequent regime weakens. The largest change is observed over the Pacific, where a significant deepening of the Aleutian low is obtained under CO2 doubling.
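The regime-detection idea can be sketched compactly: fit a multi-component Gaussian mixture to data standing in for the leading principal components, via a few EM iterations. The 1-D, two-component setting and all data below are illustrative assumptions, not the paper's method or data.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Two-component 1-D Gaussian mixture fit by expectation-
    maximization; returns (weights, means, stds)."""
    mu = np.array([x.min(), x.max()])          # spread-out init
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) \
               / (sd * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and spreads
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 0.5, 400), rng.normal(2, 0.5, 600)])
w, mu, sd = em_gmm_1d(x)
```

In the paper's setting, the number of significant mixture components, assessed against a null of no multimodality, is what defines the number of flow regimes.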

Relevance:

100.00%

Publisher:

Abstract:

New data show that island arc rocks have (Pb-210/Ra-226)(o) ratios which range from as low as 0.24 up to 2.88. In contrast, (Ra-228/Th-232) appears always within error of 1, suggesting that the large Ra-226 excesses observed in arc rocks were generated more than 30 years ago. This places a maximum estimate on melt ascent velocities of around 4000 m/year and provides further confidence that the Ra-226 excesses reflect deep (source) processes rather than shallow level alteration or seawater contamination. Conversely, partial melting must have occurred more than 30 years prior to eruption. The Pb-210 deficits are most readily explained by protracted magma degassing. Using published numerical models, the data suggest that degassing occurred continuously for periods up to several decades just prior to eruption, but no link with eruption periodicity was found. Longer periods are required if degassing is discontinuous, less than 100% efficient, or if magma is recharged or stored after degassing. The long durations suggest much of this degassing occurs at depth, with implications for the formation of hydrothermal and copper-porphyry systems. A suite of lavas erupted in 1985-1986 from Sangeang Api volcano in the Sunda arc are characterised by deficits of Pb-210 relative to Ra-226, from which 6-8 years of continuous Rn-222 degassing would be inferred from recent numerical models. These data also form a linear (Pb-210)/Pb versus (Ra-226)/Pb array which might be interpreted as a 71-year isochron. However, the array passes through the origin, suggesting displacement downwards from the equiline in response to degassing, and so the slope of the array is inferred not to have any age significance. Simple modelling shows that the range of (Ra-226)/Pb ratios requires thousands of years to develop, consistent with differentiation occurring in response to cooling at the base of the crust. Thus, degassing post-dated, and was not responsible for, magma differentiation.
The formation, migration and extraction of gas bubbles must be extremely efficient in mafic magma whereas the higher viscosity of more siliceous magmas retards the process and can lead to Pb-210 excesses. A possible negative correlation between (Pb-210/Ra-226)(o) and SO2 emission rate requires further testing but may have implications for future eruptions. (C) 2004 Elsevier B.V. All rights reserved.
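A back-of-envelope version of the degassing clock used above can be written down under a strong simplifying assumption (not the published numerical models the abstract cites): if degassing removes all intermediate Rn-222, ingrowth of Pb-210 ceases and the (Pb-210/Ra-226) ratio decays from secular equilibrium (1.0) as exp(-λt), with λ the Pb-210 decay constant.

```python
import math

HALF_LIFE_PB210 = 22.3                      # years
LAMBDA_PB210 = math.log(2) / HALF_LIFE_PB210

def ratio_after_degassing(years):
    """(Pb-210/Ra-226) after `years` of complete continuous Rn loss,
    starting from secular equilibrium (ratio = 1)."""
    return math.exp(-LAMBDA_PB210 * years)

def degassing_duration(ratio):
    """Invert: years of continuous degassing implied by a deficit."""
    return -math.log(ratio) / LAMBDA_PB210

# e.g. a ratio of 0.8 (a 20% deficit) implies roughly 7 years of
# continuous degassing, of the order of the 6-8 years quoted above:
years = degassing_duration(0.8)
```

Discontinuous or less-than-100%-efficient Rn loss would lengthen the inferred duration, as the abstract notes.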
