49 results for Data modeling
Abstract:
In order to analyze software systems, it is necessary to model them. Static software models are commonly imported by parsing source code and related data. Unfortunately, building custom parsers for most programming languages is a non-trivial endeavour. This poses a major bottleneck for analyzing software systems programmed in languages for which importers do not already exist. Luckily, initial software models do not require detailed parsers, so it is possible to start analysis with a coarse-grained importer, which is then gradually refined. In this paper we propose an approach to "agile modeling" that exploits island grammars to extract initial coarse-grained models, parser combinators to enable gradual refinement of model importers, and various heuristics to recognize language structure, keywords and other language artifacts.
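To make the island-grammar idea concrete, here is a minimal, hypothetical Python sketch: only the "islands" of interest (class and method headers) are recognized, and everything else is treated as "water" and skipped. The patterns and the coarse_model function are illustrative stand-ins, not the combinator-based importer proposed in the paper.

```python
import re

# Toy illustration of an island grammar: recognize only the constructs we care
# about ("islands") and skip everything else ("water"). Patterns and names are
# hypothetical; the paper's approach uses parser combinators for gradual refinement.
ISLANDS = [
    ("class",  re.compile(r"^\s*(?:public\s+)?class\s+(\w+)")),
    ("method", re.compile(r"^\s*(?:public|private|protected)\s+\w+\s+(\w+)\s*\(")),
]

def coarse_model(source: str):
    """Extract a coarse-grained model as (kind, name) pairs, ignoring all water."""
    entities = []
    for line in source.splitlines():
        for kind, pattern in ISLANDS:
            match = pattern.match(line)
            if match:
                entities.append((kind, match.group(1)))
                break  # at most one island per line in this toy version
    return entities

example = """
public class Invoice {
    private double total;
    public double getTotal() { return total; }
}
"""
print(coarse_model(example))  # [('class', 'Invoice'), ('method', 'getTotal')]
```

Refining the importer then amounts to replacing individual island patterns with more precise parsers, without having to parse the full language up front.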
Abstract:
In situ diffusion experiments are performed in geological formations at underground research laboratories to overcome the limitations of laboratory diffusion experiments and investigate scale effects. Tracer concentrations are monitored at the injection interval during the experiment (dilution data) and measured from host rock samples around the injection interval at the end of the experiment (overcoring data). Diffusion and sorption parameters are derived from the inverse numerical modeling of the measured tracer data. The identifiability and the uncertainties of tritium and Na-22(+) diffusion and sorption parameters are studied here by synthetic experiments having the same characteristics as the in situ diffusion and retention (DR) experiment performed on Opalinus Clay. Contrary to previous identifiability analyses of in situ diffusion experiments, which used either dilution or overcoring data at approximate locations, our analysis of the parameter identifiability relies simultaneously on dilution and overcoring data, accounts for the actual position of the overcoring samples in the claystone, uses realistic values of the standard deviation of the measurement errors, relies on model identification criteria to select the most appropriate hypothesis about the existence of a borehole disturbed zone and addresses the effect of errors in the location of the sampling profiles. The simultaneous use of dilution and overcoring data provides accurate parameter estimates in the presence of measurement errors, allows the identification of the right hypothesis about the borehole disturbed zone and diminishes other model uncertainties such as those caused by errors in the volume of the circulation system and the effective diffusion coefficient of the filter. The proper interpretation of the experiment requires the right hypothesis about the borehole disturbed zone. A wrong assumption leads to large estimation errors. The use of model identification criteria helps in the selection of the best model. Small errors in the depth of the overcoring samples lead to large parameter estimation errors. Therefore, attention should be paid to minimize the errors in positioning the depth of the samples. The results of the identifiability analysis do not depend on the particular realization of random numbers.
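As a loose illustration of the inverse-modeling step (not the coupled numerical model used in the paper), the Python sketch below generates a noisy synthetic overcoring-style concentration profile and recovers an apparent diffusion coefficient by least squares; the erfc forward model, all parameter values, and the noise level are assumptions made purely for the example.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

# Toy synthetic experiment: recover an apparent diffusion coefficient Da from a
# noisy overcoring-style profile, assuming 1D diffusion from a constant-concentration
# borehole wall. Values are invented and much simpler than the DR experiment model.
def profile(x, log10_Da, t=365 * 24 * 3600.0, C0=1.0):
    """Relative concentration at distance x [m] into the rock after time t [s]."""
    Da = 10.0 ** log10_Da
    return C0 * erfc(x / (2.0 * np.sqrt(Da * t)))

rng = np.random.default_rng(42)
Da_true = 2.0e-11                      # "true" apparent diffusion coefficient [m^2/s]
x = np.linspace(0.005, 0.15, 20)       # sample depths into the rock [m]
data = profile(x, np.log10(Da_true)) + rng.normal(0.0, 0.02, x.size)  # measurement error

(log10_Da_est,), cov = curve_fit(profile, x, data, p0=[-10.0])
print(f"estimated Da = {10 ** log10_Da_est:.2e} m^2/s (true {Da_true:.1e}), "
      f"1-sigma(log10 Da) = {np.sqrt(cov[0, 0]):.2f}")
```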
Abstract:
The past 1500 years provide a valuable opportunity to study the response of the climate system to external forcings. However, the integration of paleoclimate proxies with climate modeling is critical to improving the understanding of climate dynamics. In this paper, a climate system model and proxy records are therefore used to study the role of natural and anthropogenic forcings in driving the global climate. The inverse and forward approaches to paleoclimate data–model comparison are applied, and sources of uncertainty are identified and discussed. In the first of two case studies, the climate model simulations are compared with multiproxy temperature reconstructions. Robust solar and volcanic signals are detected in Southern Hemisphere temperatures, with a possible volcanic signal detected in the Northern Hemisphere. The anthropogenic signal dominates during the industrial period. It is also found that seasonal and geographical biases may cause multiproxy reconstructions to overestimate the magnitude of the long-term preindustrial cooling trend. In the second case study, the model simulations are compared with a coral δ18O record from the central Pacific Ocean. It is found that greenhouse gases, solar irradiance, and volcanic eruptions all influence the mean state of the central Pacific, but there is no evidence that natural or anthropogenic forcings have any systematic impact on El Niño–Southern Oscillation. The proxy–climate relationship is found to change over time, challenging the assumption of stationarity that underlies the interpretation of paleoclimate proxies. These case studies demonstrate the value of paleoclimate data–model comparison but also highlight the limitations of current techniques and demonstrate the need to develop alternative approaches.
Abstract:
Objective: Processes occurring in the course of psychotherapy are characterized by the simple fact that they unfold in time and that the multiple factors engaged in change processes vary highly between individuals (idiographic phenomena). Previous research, however, has neglected the temporal perspective by its traditional focus on static phenomena, which were mainly assessed at the group level (nomothetic phenomena). To support a temporal approach, the authors introduce time-series panel analysis (TSPA), a statistical methodology explicitly focusing on the quantification of temporal, session-to-session aspects of change in psychotherapy. TSPA models are initially built at the level of individuals and are subsequently aggregated at the group level, thus allowing the exploration of prototypical models. Method: TSPA is based on vector autoregression (VAR), an extension of univariate autoregression models to multivariate time-series data. The application of TSPA is demonstrated in a sample of 87 outpatient psychotherapy patients who were monitored by post-session questionnaires. Prototypical mechanisms of change were derived from the aggregation of individual multivariate models of psychotherapy process. In a second step, the associations between mechanisms of change (TSPA) and pre- to post-treatment symptom change were explored. Results: TSPA allowed a prototypical process pattern to be identified, in which the patient's alliance and self-efficacy were linked by a temporal feedback loop. Furthermore, the therapist's stability over time in both mastery and clarification interventions was positively associated with better outcomes. Conclusions: TSPA is a statistical tool that sheds new light on temporal mechanisms of change. Through this approach, clinicians may gain insight into prototypical patterns of change in psychotherapy.
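A rough sketch of the TSPA workflow described above (individual VAR(1) models aggregated into a prototype) could look like the following Python code; the simulated patient process, variable labels, and coefficient values are invented for illustration and are not the authors' data or implementation.

```python
import numpy as np
from statsmodels.tsa.api import VAR

# Illustrative only: simulate session-to-session scores for two process variables
# (labeled "alliance" and "self-efficacy"), fit a VAR(1) per patient, and average
# the lag-1 coefficient matrices across patients into a prototypical model.
rng = np.random.default_rng(0)

def simulate_patient(n_sessions=30):
    A = np.array([[0.40, 0.30],    # toy feedback loop between the two variables
                  [0.35, 0.40]])
    y = np.zeros((n_sessions, 2))
    for t in range(1, n_sessions):
        y[t] = A @ y[t - 1] + rng.normal(0.0, 1.0, 2)
    return y

coef_matrices = []
for _ in range(87):                        # same number of patients as in the study
    res = VAR(simulate_patient()).fit(1)   # individual-level (idiographic) model
    coef_matrices.append(res.coefs[0])     # 2 x 2 lag-1 coefficient matrix

prototype = np.mean(coef_matrices, axis=0)
print("prototypical lag-1 coefficients (rows: alliance, self-efficacy):")
print(np.round(prototype, 2))
```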
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
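The core of the error-model construction can be sketched as follows; this is a deliberately simplified Python stand-in in which plain PCA on a common time grid replaces FPCA and ordinary least squares replaces the machine-learning regression, and the synthetic curves and all dimensions are assumptions made for the example.

```python
import numpy as np

# Simplified error-model sketch: learn a map from proxy-response scores to
# exact-response scores on a learning set, then predict the exact response of new
# realizations from their proxy response alone. Curves and sizes are synthetic.
rng = np.random.default_rng(1)
n_learn, n_new, n_t = 40, 5, 100
t = np.linspace(0.0, 1.0, n_t)

# Synthetic "exact" responses and biased, smoothed "proxy" responses.
amp = rng.uniform(0.5, 1.5, n_learn + n_new)
exact = amp[:, None] * np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=(n_learn + n_new, n_t))
proxy = 0.8 * amp[:, None] * np.sin(2 * np.pi * t + 0.2)

def pca_scores(curves, mean, n_comp=3):
    """PCA (stand-in for FPCA) of discretized curves: scores and basis functions."""
    _, _, vt = np.linalg.svd(curves - mean, full_matrices=False)
    return (curves - mean) @ vt[:n_comp].T, vt[:n_comp]

exact_mean, proxy_mean = exact[:n_learn].mean(0), proxy[:n_learn].mean(0)
z_exact, basis_exact = pca_scores(exact[:n_learn], exact_mean)
z_proxy, basis_proxy = pca_scores(proxy[:n_learn], proxy_mean)

# Learn a linear map from proxy scores to exact scores on the learning set.
W, *_ = np.linalg.lstsq(z_proxy, z_exact, rcond=None)

# Predict the exact response of new realizations from the proxy response alone.
z_new = (proxy[n_learn:] - proxy_mean) @ basis_proxy.T
predicted_exact = exact_mean + (z_new @ W) @ basis_exact

rmse = np.sqrt(np.mean((predicted_exact - exact[n_learn:]) ** 2))
print(f"RMSE of predicted exact curves: {rmse:.3f}")
```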
Abstract:
Potential desiccation polygons (PDPs) are polygonal surface patterns that are a common feature in Noachian-to-Hesperian-aged phyllosilicate- and chloride-bearing terrains and have been observed at size scales ranging from centimeters (by current rovers) to tens of meters. The global distribution of PDPs shows that they share certain traits in terms of morphology and geologic setting that can aid identification and distinction from fracturing patterns caused by other processes. They are mostly associated with sedimentary deposits that display spectral evidence for the presence of Fe/Mg smectites, Al-rich smectites or, less commonly, kaolinites, carbonates, and sulfates. In addition, PDPs may indicate paleolacustrine environments, which are of high interest for planetary exploration, and their presence implies that the fractured units are rich in smectite minerals that may have been deposited in a standing body of water. A collective synthesis with new data, particularly from the HiRISE camera, suggests that desiccation cracks may be more common on the surface of Mars than previously thought. A review of terrestrial research on desiccation processes, with emphasis on the theoretical background, field studies, and modeling constraints, is presented here as well and shown to be consistent with and relevant to certain polygonal patterns on Mars.
Abstract:
Efforts are ongoing to decrease the noise in the GRACE gravity field models and hence to arrive closer to the GRACE baseline. Among the most significant error sources are untreated errors in the observation data and imperfections in the background models. A recent study (Bandikova & Flury, 2014) revealed that the current release of the star camera attitude data (SCA1B RL02) contains noise systematically higher than expected, by about a factor of 3-4. This is due to an incorrect implementation of the algorithms for quaternion combination in the JPL processing routines. Generating improved SCA data requires that valid data from both star camera heads be available, which is not always the case because the Sun and Moon at times blind one camera. In gravity field modeling, the attitude data are needed for the KBR antenna offset correction and to orient the non-gravitational linear accelerations sensed by the accelerometer. Hence any improvement in the SCA data is expected to be reflected in the gravity field models. In order to quantify the effect on the gravity field, we processed one month of observation data using two different approaches: the celestial mechanics approach (AIUB) and the variational equations approach (ITSG). We show that the noise in the KBR observations and the linear accelerations has effectively decreased. However, the effect on the gravity field on a global scale is hardly evident. We conclude that, at the current level of accuracy, the errors seen in the temporal gravity fields are dominated by errors coming from sources other than the attitude data.
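For readers unfamiliar with what "quaternion combination" of the two star camera heads involves, the sketch below shows a generic weighted quaternion-averaging method (the eigenvector approach); it is emphatically not the JPL processing routine discussed above, and the example quaternions and weights are made up.

```python
import numpy as np

# Generic quaternion averaging (eigenvector method): a standard way to combine two
# attitude estimates. This only illustrates the principle of combining the two star
# camera heads; it is NOT the JPL SCA processing algorithm.
def combine_quaternions(quaternions, weights):
    """Weighted average of unit quaternions (each given as [x, y, z, w])."""
    M = np.zeros((4, 4))
    for q, w in zip(quaternions, weights):
        q = q / np.linalg.norm(q)
        M += w * np.outer(q, q)          # sign-invariant accumulation
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, -1]                # eigenvector of the largest eigenvalue

q_head1 = np.array([0.0, 0.0, 0.0871557, 0.9961947])   # ~10 deg rotation about z
q_head2 = np.array([0.0, 0.0, 0.1045285, 0.9945219])   # ~12 deg rotation about z
print(combine_quaternions([q_head1, q_head2], weights=[0.5, 0.5]))
```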
Abstract:
Numerous studies have reported a strong link between working memory capacity (WMC) and fluid intelligence (Gf), although views differ with respect to how closely these two constructs are related to each other. In the present study, we used a WMC task with five levels of task demands to assess the relationship between WMC and Gf by means of a new methodological approach referred to as fixed-links modeling. Fixed-links models belong to the family of confirmatory factor analysis (CFA) and are of particular interest for experimental, repeated-measures designs. With this technique, processes systematically varying across task conditions can be disentangled from processes unaffected by the experimental manipulation. Proceeding from the assumption that the experimental manipulation in a WMC task leads to increasing demands on WMC, the processes systematically varying across task conditions can be assumed to be WMC-specific. Processes not varying across task conditions, on the other hand, are probably independent of WMC. Fixed-links models allow these two kinds of processes to be represented by two independent latent variables. In contrast to traditional CFA, where a common latent variable is derived from the different task conditions, fixed-links models facilitate a more precise, or purified, representation of the WMC-related processes of interest. By using fixed-links modeling to analyze data from 200 participants, we identified a non-experimental latent variable, representing processes that remained constant irrespective of the WMC task conditions, and an experimental latent variable, which reflected processes that varied as a function of the experimental manipulation. This latter variable represents the increasing demands on WMC and, hence, was considered a purified measure of WMC controlled for the constant processes. Fixed-links modeling showed that both the purified measure of WMC (β = .48) and the constant processes involved in the task (β = .45) were related to Gf. Taken together, these two latent variables explained the same portion of variance in Gf as a single latent variable obtained by traditional CFA (β = .65), indicating that traditional CFA overestimates the effective relationship between WMC and Gf. Thus, fixed-links modeling provides a feasible method for a more valid investigation of the functional relationship between specific constructs.
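The fixed-links logic can be illustrated with a bare-bones numerical simulation; this is not the maximum-likelihood CFA estimation used in the study, and loadings, effect sizes, and noise levels are invented. Loadings on the constant latent variable are fixed at 1 for all five conditions, loadings on the experimental (WMC) variable increase with task demand, and per-person factor scores are then recovered with these loadings held fixed.

```python
import numpy as np

# Bare-bones illustration of fixed-links decomposition (not a full CFA/SEM fit):
# constant process loads 1, 1, 1, 1, 1; WMC-related process loads 1 .. 5 across the
# five task conditions. Factor scores are recovered by least squares with the
# loadings held fixed, then correlated with a simulated Gf score.
rng = np.random.default_rng(7)
n = 200                                        # participants, as in the study
loadings = np.column_stack([np.ones(5),        # constant (non-experimental) process
                            np.arange(1, 6)])  # experimental (WMC) process

# Simulate true latent scores and a Gf score related to both (values invented).
eta = rng.normal(size=(n, 2))
gf = 0.4 * eta[:, 0] + 0.5 * eta[:, 1] + rng.normal(0.0, 0.6, n)

# Observed performance in the five task conditions.
y = eta @ loadings.T + rng.normal(0.0, 0.5, (n, 5))

# Recover factor scores with the loadings held fixed.
eta_hat = np.linalg.lstsq(loadings, y.T, rcond=None)[0].T

for name, scores in zip(["constant", "experimental (WMC)"], eta_hat.T):
    r = np.corrcoef(scores, gf)[0, 1]
    print(f"{name:20s} latent variable vs. Gf: r = {r:.2f}")
```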
Abstract:
Pressure–Temperature–time (P–T–t) estimates of the syn-kinematic strain at the peak-pressure conditions reached during shallow underthrusting of the Briançonnais Zone in the Alpine subduction zone were made by thermodynamic modelling and 40Ar/39Ar dating in the Plan-de-Phasy unit (SE of the Pelvoux Massif, Western Alps). The dated phengite minerals crystallized syn-kinematically in a shear zone indicating top-to-the-N motion. By combining X-ray mapping with multi-equilibrium calculations, we estimate the phengite crystallization conditions at 270 ± 50 °C and 8.1 ± 2 kbar at an age of 45.9 ± 1.1 Ma. Combining this P–T–t estimate with data from the literature allows us to constrain the timing and geometry of Alpine continental subduction. We propose that the Briançonnais units were scalped on top of the slab during ongoing continental subduction and exhumed continuously until collision.
Abstract:
This study investigates thermally induced tensile stresses in ceramic tilings. Daily and seasonal thermal cycles, as well as rare but extreme events, such as a hailstorm striking a heated terrace tiling, were studied in the field and by numerical modeling. The field surveys delivered temperature–time diagrams and temperature profiles across tiling systems. These data were taken as input parameters for modeling the stress distribution in the tiling system in order to detect potential sites of material failure. Depending on the thermal scenario (e.g., slow heating of the entire structure during morning and afternoon, or rapid cooling of the tiles by a rain storm), the modeling indicates specific locations with high tensile stresses. Typically, regions along the rim of the tiling field showed stresses that can become critical with respect to the adhesion strength. Over the years, ongoing cycles of thermal expansion and contraction result in material fatigue promoting the propagation of cracks. However, the installation of flexible waterproofing membranes (applied between substrate and tile adhesive) represents an efficient technical innovation to reduce such crack propagation, as confirmed by both numerical modeling results and microstructural studies on real systems.
Abstract:
Argillaceous rocks are considered to be a suitable geological barrier for the long-term containment of wastes. Their efficiency at retarding contaminant migration is assessed using reactive-transport experiments and modeling, the latter requiring a sound understanding of pore-water chemistry. The building of a pore-water model, which is mandatory for laboratory experiments mimicking in situ conditions, requires a detailed knowledge of the rock mineralogy and of the minerals at equilibrium with present-day pore waters. Using a combination of petrological, mineralogical, and isotopic studies, the present study focused on the reduced Opalinus Clay formation (Fm) of the Benken borehole (30 km north of Zurich), which is intended for nuclear-waste disposal in Switzerland. A diagenetic sequence is proposed, which serves as a basis for determining the minerals stable in the formation and their textural relationships. Early cementation of dominant calcite, rare dolomite, and pyrite formed by bacterial sulfate reduction was followed by the formation of iron-rich calcite, ankerite, siderite, glauconite, (Ba, Sr) sulfates, and traces of sphalerite and galena. The distribution and abundance of siderite depend heavily on the depositional environment (and consequently on the water column). Benken sediment deposition during Aalenian times corresponds to an offshore environment with the early formation of siderite concretions at the water/sediment interface, at the fluctuating boundary between the suboxic iron reduction and sulfate reduction zones. Diagenetic minerals (carbonates except dolomite, sulfates, silicates) remained stable from their formation to the present. Based on these mineralogical and geochemical data, the mineral assemblage previously used for the geochemical model of the pore waters at Mont Terri may be applied to Benken without significant changes. These investigations demonstrate the need for detailed mineralogical and geochemical study to refine the model of pore-water chemistry in a clay formation.
Abstract:
Sound knowledge of the spatial and temporal patterns of rockfalls is fundamental for the management of this very common hazard in mountain environments. Process-based, three-dimensional simulation models are nowadays capable of reproducing the spatial distribution of rockfall occurrences with reasonable accuracy through the simulation of numerous individual trajectories on highly resolved digital terrain models. At the same time, however, simulation models typically fail to quantify the ‘real’ frequency of rockfalls (in terms of return intervals). The analysis of impact scars on trees, in contrast, yields real rockfall frequencies, but trees may not be present at the location of interest and rare trajectories may not necessarily be captured due to the limited age of forest stands. In this article, we demonstrate that coupling modeling with tree-ring techniques may overcome the limitations inherent to both approaches. Based on the analysis of 64 cells (40 m × 40 m) of a rockfall slope located above a 1631-m-long road section in the Swiss Alps, we present results from 488 rockfalls detected in 1260 trees. We show that tree impact data can be used not only (i) to reconstruct the real frequency of rockfalls for individual cells, but also (ii) to calibrate the rockfall model Rockyfor3D and (iii) to transform simulated trajectories into real frequencies. Calibrated simulation results are in good agreement with real rockfall frequencies and exhibit significant differences in rockfall activity between the cells (zones) along the road section. Real frequencies, expressed as rock passages per meter of road section, also enable quantification and direct comparison of the hazard potential between the zones. The contribution provides an approach for hazard zoning procedures that complements traditional methods with a quantification of rockfall frequencies in terms of return intervals through a systematic inclusion of impact records in trees.
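The calibration step (iii), turning simulated trajectory counts into real frequencies, boils down to simple scaling; the numbers in the Python sketch below are invented and only illustrate the arithmetic, not the study's data or the Rockyfor3D calibration procedure.

```python
import numpy as np

# Illustrative arithmetic only (made-up numbers): use cells where tree-ring records
# give a "real" rockfall frequency to scale simulated trajectory counts into events
# per year, then apply that scaling to all cells, including those without trees.
sim_passages = np.array([1200.0, 800.0, 400.0, 1500.0])   # simulated trajectories per cell
tree_freq    = np.array([0.60,   0.42,  np.nan, 0.78])    # reconstructed events/yr (nan: no trees)
cell_width_m = 40.0

observed = ~np.isnan(tree_freq)
scale = np.mean(tree_freq[observed] / sim_passages[observed])  # events/yr per simulated passage

real_freq = sim_passages * scale                 # calibrated events per year, every cell
per_meter = real_freq / cell_width_m             # rock passages per meter of road per year
for i, (f, pm) in enumerate(zip(real_freq, per_meter)):
    print(f"cell {i}: {f:.2f} events/yr  ({pm:.4f} passages per m per yr)")
```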
Abstract:
Since no single experimental or modeling technique provides data that allow a description of transport processes in clays and clay minerals at all relevant scales, several complementary approaches have to be combined to understand and explain the interplay between transport-relevant phenomena. In this paper, molecular dynamics (MD) simulations were used to investigate the mobility of water in the interlayer of montmorillonite (Mt) and to estimate the influence of mineral surfaces and interlayer ions on water diffusion. Random walk (RW) simulations based on a simplified representation of the pore space in Mt were used to estimate and understand the effect of the arrangement of Mt particles on the meso- to macroscopic diffusivity of water. These theoretical calculations were complemented with quasielastic neutron scattering (QENS) measurements of aqueous diffusion in Mt with two pseudo-layers of water, performed at four significantly different energy resolutions (i.e., observation times). The size of the interlayer and the size of the Mt particles are two characteristic dimensions which determine the time-dependent behavior of water diffusion in Mt. MD simulations show that at very short time scales water dynamics has the characteristic features of an oscillatory motion in the cage formed by neighbors in the first coordination shell. At longer time scales, the interaction of water with the surface determines the water dynamics, and the effect of confinement on the overall water mobility within the interlayer becomes evident. At time scales corresponding to an average water displacement equivalent to the average size of Mt particles, the effects of tortuosity are observed in the meso- to macroscopic pore-scale simulations. Consistent with the picture obtained in the simulations, the QENS data can be described using (local) 3D diffusion at short observation times, whereas at sufficiently long observation times a 2D diffusive motion is clearly observed. The effects of tortuosity measured in macroscopic tracer diffusion experiments are in qualitative agreement with the RW simulations. By using experimental data to calibrate molecular and mesoscopic theoretical models, a consistent description of water mobility in clay minerals from the molecular to the macroscopic scale can be achieved. In turn, simulations help in choosing optimal conditions for the experimental measurements and the data interpretation.
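A toy random-walk sketch of the tortuosity effect discussed above is given below; it is a drastic simplification of the paper's pore-scale RW simulations, and the lattice size, obstacle fraction, and step counts are arbitrary choices. Walkers diffuse on a 2D lattice in which a fraction of sites is blocked, and the reduction of the mean square displacement relative to a free walk gives an apparent diffusivity reduction.

```python
import numpy as np

# Toy random-walk illustration of tortuosity: compare the mean square displacement
# (MSD) of walkers on an obstructed lattice with that of a free walk. All values
# are arbitrary and do not represent montmorillonite pore geometry.
rng = np.random.default_rng(3)
L, blocked_fraction, n_walkers, n_steps = 200, 0.30, 500, 2000
blocked = rng.random((L, L)) < blocked_fraction
steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

def msd(use_obstacles: bool) -> float:
    pos = np.full((n_walkers, 2), L // 2)
    for _ in range(n_steps):
        trial = (pos + steps[rng.integers(0, 4, n_walkers)]) % L
        if use_obstacles:
            ok = ~blocked[trial[:, 0], trial[:, 1]]   # reject moves into blocked sites
            pos[ok] = trial[ok]
        else:
            pos = trial
    disp = pos - L // 2
    return float(np.mean(np.sum(disp ** 2, axis=1)))

reduction = msd(True) / msd(False)        # ratio of obstructed to free diffusivity
print(f"apparent diffusivity reduction: {reduction:.2f} (tortuosity factor ~ {1 / reduction:.2f})")
```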
Abstract:
The current paper is an excerpt from the doctoral thesis “Multi-Layer Insulation as Contribution to Orbital Debris”, written at the Institute of Aerospace Systems of the Technische Universität Braunschweig. The Multi-Layer Insulation (MLI) population included in ESA's MASTER-2009 (Meteoroid and Space-Debris Terrestrial Environment Reference) software is based on models for two mechanisms: one model simulates the release of MLI debris during fragmentation events, while another estimates the continuous release of larger MLI pieces due to aging-related deterioration of the material. The aim of the thesis was to revise the MLI models from the base up, followed by a re-validation of the simulated MLI debris population. The validation is based on comparison to measurement data of the GEO and GTO debris environment obtained by the Astronomical Institute of the University of Bern (AIUB) using ESA's Space Debris Telescope (ESASDT), the 1-m Zeiss telescope located at the Optical Ground Station (OGS) at the Teide Observatory on Tenerife, Spain. The re-validation led to the conclusion that MLI may cover a much smaller portion of the observed objects than previously published. Further investigation of the resulting discrepancy revealed that the contribution of altogether nine known Ariane H-10 upper stage explosion events, which occurred between 1984 and 2002, has very likely been underestimated in past simulations.
Abstract:
Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences, but are not appropriate for modeling life expectancy, because in many situations time to death has a negatively skewed distribution. A regression approach using a skew-normal distribution would be an alternative to parametric survival models in the modeling of life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use skew-normal regression so that censored and left-truncated observations are accounted for. With this approach we model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates, and compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest.
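A hedged sketch of how such a likelihood can be written down with off-the-shelf tools follows; it is not the authors' implementation, and the covariate, sample sizes, and parameter values are invented. The skew-normal log-density is used for observed deaths, the log-survival function for right-censored cases, and the log-probability of surviving past the entry age is subtracted to account for left truncation.

```python
import numpy as np
from scipy import stats, optimize

# Skew-normal regression sketch for age at death with right-censoring and left
# truncation: the location depends linearly on a binary covariate; scale and shape
# are shared, so the covariate coefficient equals the difference in mean survival.
rng = np.random.default_rng(11)
n = 2000
x = rng.integers(0, 2, n)                       # hypothetical binary covariate
age = stats.skewnorm.rvs(a=-4, loc=82 - 3 * x, scale=9, random_state=rng)
entry = rng.uniform(40, 60, n)                  # left truncation: observed only if alive at entry
keep = age > entry
x, age, entry = x[keep], age[keep], entry[keep]
censor_age = rng.uniform(70, 100, x.size)       # administrative end of follow-up
observed = age <= censor_age
time = np.where(observed, age, censor_age)

def negloglik(params):
    b0, b1, log_scale, shape = params
    loc, scale = b0 + b1 * x, np.exp(log_scale)
    ll = np.where(observed,
                  stats.skewnorm.logpdf(time, shape, loc, scale),   # deaths
                  stats.skewnorm.logsf(time, shape, loc, scale))    # right-censored
    ll -= stats.skewnorm.logsf(entry, shape, loc, scale)            # left truncation
    return -np.sum(ll)

fit = optimize.minimize(negloglik, x0=[80.0, 0.0, np.log(10.0), -1.0], method="Nelder-Mead")
print(f"estimated difference in years of life (location shift): {fit.x[1]:.2f}")
```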