945 results for Data modeling
Abstract:
Sound knowledge of the spatial and temporal patterns of rockfalls is fundamental for the management of this very common hazard in mountain environments. Process-based, three-dimensional simulation models are nowadays capable of reproducing the spatial distribution of rockfall occurrences with reasonable accuracy through the simulation of numerous individual trajectories on highly resolved digital terrain models. At the same time, however, simulation models typically fail to quantify the 'real' frequency of rockfalls (in terms of return intervals). The analysis of impact scars on trees, in contrast, yields real rockfall frequencies, but trees may not be present at the location of interest and rare trajectories may not necessarily be captured due to the limited age of forest stands. In this article, we demonstrate that the coupling of modeling with tree-ring techniques may overcome the limitations inherent to both approaches. Based on the analysis of 64 cells (40 m × 40 m) of a rockfall slope located above a 1631-m long road section in the Swiss Alps, we present results from 488 rockfalls detected in 1260 trees. We show that tree impact data can be used not only (i) to reconstruct the real frequency of rockfalls for individual cells, but also (ii) to calibrate the rockfall model Rockyfor3D and (iii) to transform simulated trajectories into real frequencies. Calibrated simulation results are in good agreement with real rockfall frequencies and exhibit significant differences in rockfall activity between the cells (zones) along the road section. Real frequencies, expressed as rock passages per meter road section, also enable quantification and direct comparison of the hazard potential between the zones.
The contribution provides an approach for hazard zoning procedures that complements traditional methods with a quantification of rockfall frequencies in terms of return intervals through a systematic inclusion of impact records in trees.
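The calibration idea the abstract describes can be sketched numerically. The snippet below illustrates only the frequency-transformation step; all cell counts, tree-years, trajectory numbers, and the helper names `real_frequency` and `calibrate` are made up for illustration and are not code or data from the study:

```python
# Hypothetical sketch: turning simulated rockfall passage counts into
# "real" frequencies using tree-ring impact data for one slope cell.

def real_frequency(impacts, tree_years):
    """Observed rockfall frequency of a cell: impacts per tree-year of record."""
    return impacts / tree_years

def calibrate(sim_passages, impacts, tree_years):
    """Scale factor converting simulated passage counts into events per year."""
    return real_frequency(impacts, tree_years) / sim_passages

# Invented example cell: 12 dated impact scars over 800 tree-years of record,
# 3000 simulated trajectories crossing the cell.
scale = calibrate(3000, 12, 800)
events_per_year = 3000 * scale          # recovers the observed 12 / 800
return_period = 1.0 / events_per_year   # return interval in years
```

With the calibration factor in hand, the same scale can be applied to cells where trees are absent but simulated trajectory counts exist, which is the complementarity the abstract argues for.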
Abstract:
Since no single experimental or modeling technique provides data that allow a description of transport processes in clays and clay minerals at all relevant scales, several complementary approaches have to be combined to understand and explain the interplay between transport-relevant phenomena. In this paper, molecular dynamics (MD) simulations were used to investigate the mobility of water in the interlayer of montmorillonite (Mt) and to estimate the influence of mineral surfaces and interlayer ions on water diffusion. Random walk (RW) simulations based on a simplified representation of pore space in Mt were used to estimate and understand the effect of the arrangement of Mt particles on the meso- to macroscopic diffusivity of water. These theoretical calculations were complemented with quasielastic neutron scattering (QENS) measurements of aqueous diffusion in Mt with two pseudo-layers of water, performed at four significantly different energy resolutions (i.e. observation times). The size of the interlayer and the size of Mt particles are two characteristic dimensions which determine the time-dependent behavior of water diffusion in Mt. MD simulations show that at very short time scales water dynamics has the characteristic features of an oscillatory motion in the cage formed by neighbors in the first coordination shell. At longer time scales, the interaction of water with the surface determines the water dynamics, and the effect of confinement on the overall water mobility within the interlayer becomes evident. At time scales corresponding to an average water displacement equivalent to the average size of Mt particles, the effects of tortuosity are observed in the meso- to macroscopic pore-scale simulations. Consistent with the picture obtained in the simulations, the QENS data can be described using a (local) 3D diffusion at short observation times, whereas at sufficiently long observation times a 2D diffusive motion is clearly observed.
The effects of tortuosity measured in macroscopic tracer diffusion experiments are in qualitative agreement with RW simulations. By using experimental data to calibrate molecular and mesoscopic theoretical models, a consistent description of water mobility in clay minerals from the molecular to the macroscopic scale can be achieved. In turn, simulations help in choosing optimal conditions for the experimental measurements and the data interpretation.
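The scale-bridging role of the random-walk simulations can be illustrated with a toy model: unit-step walkers confined between reflecting walls that stand in for clay particle surfaces. Geometry and parameters below are invented for illustration; this is not the authors' RW code:

```python
import numpy as np

rng = np.random.default_rng(0)

def msd_final(n_walkers, n_steps, wall=None):
    """Mean-squared displacement after n_steps of a 2D unit-step lattice walk.

    If wall is given, reflecting walls at y = +-wall confine the walkers,
    mimicking water trapped between clay particle surfaces."""
    pos = np.zeros((n_walkers, 2))
    for _ in range(n_steps):
        step = rng.integers(0, 4, n_walkers)        # 0:+x 1:-x 2:+y 3:-y
        pos[:, 0] += np.where(step == 0, 1, 0) - np.where(step == 1, 1, 0)
        new_y = pos[:, 1] + np.where(step == 2, 1, 0) - np.where(step == 3, 1, 0)
        if wall is not None:
            new_y = np.clip(new_y, -wall, wall)     # moves past a wall are rejected
        pos[:, 1] = new_y
    return (pos ** 2).sum(axis=1).mean()

free = msd_final(2000, 400)              # unconfined: MSD grows ~ n_steps
confined = msd_final(2000, 400, wall=5)  # confined: y-displacement saturates
```

Confinement in one direction suppresses the total mean-squared displacement relative to the free walk, the same qualitative 3D-to-2D crossover with observation time that the QENS analysis describes.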
Abstract:
The current paper is an excerpt from the doctoral thesis "Multi-Layer Insulation as Contribution to Orbital Debris" written at the Institute of Aerospace Systems of the Technische Universität Braunschweig. The Multi-Layer Insulation (MLI) population included in ESA's MASTER-2009 (Meteoroid and Space-Debris Terrestrial Environment Reference) software is based on models for two mechanisms: one model simulates the release of MLI debris during fragmentation events, while another estimates the continuous release of larger MLI pieces due to aging-related deterioration of the material. The aim of the thesis was to revise the MLI models from the ground up, followed by a re-validation of the simulated MLI debris population. The validation is based on comparison to measurement data of the GEO and GTO debris environment obtained by the Astronomical Institute of the University of Bern (AIUB) using ESA's Space Debris Telescope (ESASDT), the 1-m Zeiss telescope located at the Optical Ground Station (OGS) at the Teide Observatory on Tenerife, Spain. The re-validation led to the conclusion that MLI may cover a much smaller portion of the observed objects than previously published. Further investigation of the resulting discrepancy revealed that the contribution of altogether nine known Ariane H-10 upper-stage explosion events, which occurred between 1984 and 2002, has very likely been underestimated in past simulations.
Abstract:
Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences, but are not appropriate in modeling life expectancy, because in many situations time to death has a negatively skewed distribution. A regression approach using a skew-normal distribution would be an alternative to parametric survival models in the modeling of life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use skew-normal regression so that censored and left-truncated observations are accounted for. We then model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates, and compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest.
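The likelihood construction described here can be sketched as follows, assuming a right-censoring indicator and a common entry (truncation) age. The data, parameter values, and starting point are all invented, and this is a generic sketch rather than the authors' implementation:

```python
import numpy as np
from scipy import optimize, stats

def negloglik(theta, y, d, t0, X):
    """Negative log-likelihood of a skew-normal regression for age at death
    with right-censoring (d = 1 means death observed) and left truncation
    at entry age t0."""
    beta, log_scale, shape = theta[:X.shape[1]], theta[-2], theta[-1]
    loc, scale = X @ beta, np.exp(log_scale)        # log-parameterized scale
    ll = np.where(d == 1,
                  stats.skewnorm.logpdf(y, shape, loc, scale),  # observed deaths
                  stats.skewnorm.logsf(y, shape, loc, scale))   # censored
    ll -= stats.skewnorm.logsf(t0, shape, loc, scale)           # truncation term
    return -ll.sum()

rng = np.random.default_rng(1)
n = 400
X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])        # toy covariate
t = stats.skewnorm.rvs(-3.0, X @ np.array([78.0, -4.0]), 9.0, random_state=rng)
keep = t > 40.0                                   # left truncation at entry age 40
t, X = t[keep], X[keep]
c = rng.uniform(60.0, 110.0, keep.sum())          # administrative censoring ages
y, d, t0 = np.minimum(t, c), (t <= c).astype(float), 40.0
x0 = np.array([75.0, 0.0, np.log(10.0), -1.0])
fit = optimize.minimize(negloglik, x0, args=(y, d, t0, X), method="Nelder-Mead")
```

The fitted regression coefficient for the covariate is then directly interpretable as a difference in expected survival time (up to the skew-normal mean correction), which is the interpretational advantage the abstract emphasizes.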
Abstract:
Individual participant data (IPD) meta-analysis is an increasingly used approach for synthesizing and investigating treatment effect estimates. Over the past few years, numerous methods for conducting an IPD meta-analysis (IPD-MA) have been proposed, often making different assumptions and modeling choices while addressing a similar research question. We conducted a literature review to provide an overview of methods for performing an IPD-MA using evidence from clinical trials or non-randomized studies when investigating treatment efficacy. With this review, we aim to assist researchers in choosing the appropriate methods and provide recommendations on their implementation when planning and conducting an IPD-MA.
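As one concrete instance of the method family such a review covers, a two-stage IPD-MA first estimates a treatment effect per trial from the raw data, then pools the estimates. The sketch below uses invented trials and a DerSimonian-Laird random-effects pooling step (one common choice, not a recommendation taken from the review):

```python
import numpy as np

rng = np.random.default_rng(7)

def stage1(y, treat):
    """Stage 1: mean difference and its variance from one trial's IPD."""
    y1, y0 = y[treat == 1], y[treat == 0]
    est = y1.mean() - y0.mean()
    var = y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0)
    return est, var

trials = []
for n in (60, 120, 200):                       # three hypothetical trials
    treat = rng.integers(0, 2, n)
    y = 0.5 * treat + rng.normal(0, 1, n)      # simulated true effect = 0.5
    trials.append(stage1(y, treat))

est = np.array([e for e, _ in trials])
var = np.array([v for _, v in trials])
w = 1 / var                                     # fixed-effect (inverse-variance) weights
mu_fe = np.sum(w * est) / w.sum()
q = np.sum(w * (est - mu_fe) ** 2)              # Cochran's Q
tau2 = max(0.0, (q - (len(est) - 1)) / (w.sum() - np.sum(w ** 2) / w.sum()))
w_re = 1 / (var + tau2)                         # random-effects weights
pooled = np.sum(w_re * est) / w_re.sum()        # stage 2: pooled treatment effect
```

One-stage alternatives instead fit a single hierarchical model to all IPD at once; the two approaches embody exactly the kind of differing modeling choices the review catalogs.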
Abstract:
NH···π hydrogen bonds occur frequently between the amino acid side groups in proteins and peptides. Data-mining studies of protein crystals find that ~80% of the T-shaped histidine···aromatic contacts are CH···π, and only ~20% are NH···π interactions. We investigated the infrared (IR) and ultraviolet (UV) spectra of the supersonic-jet-cooled imidazole·benzene (Im·Bz) complex as a model for the NH···π interaction between histidine and phenylalanine. Ground- and excited-state dispersion-corrected density functional calculations and correlated methods (SCS-MP2 and SCS-CC2) predict that Im·Bz has a Cs-symmetric T-shaped minimum-energy structure with an NH···π hydrogen bond to the Bz ring; the NH bond is tilted 12° away from the Bz C₆ axis. IR depletion spectra support the T-shaped geometry: the NH stretch vibrational fundamental is red-shifted by 73 cm⁻¹ relative to that of bare imidazole at 3518 cm⁻¹, indicating a moderately strong NH···π interaction. While the S₀(A₁g) → S₁(B₂u) origin of benzene at 38 086 cm⁻¹ is forbidden in the gas phase, Im·Bz exhibits a moderately intense S₀ → S₁ origin, which appears via the D₆h → Cs symmetry lowering of Bz by its interaction with imidazole. The NH···π ground-state hydrogen bond is strong, with De = 22.7 kJ/mol (1899 cm⁻¹). The combination of gas-phase UV and IR spectra confirms the theoretical predictions that the optimum Im·Bz geometry is T-shaped and NH···π hydrogen-bonded. We find no experimental evidence for a CH···π hydrogen-bonded ground-state isomer of Im·Bz. The optimum NH···π geometry of the Im·Bz complex is very different from the majority of the histidine·aromatic contact geometries found in protein database analyses, implying that the CH···π contacts observed in these searches do not arise from favorable binding interactions but merely from protein side-chain folding and crystal-packing constraints. The UV and IR spectra of the imidazole·(benzene)₂ cluster are observed via fragmentation into the Im·Bz⁺ mass channel.
The spectra of Im·Bz and Im·Bz₂ are cleanly separable by IR hole burning. The UV spectrum of Im·Bz₂ exhibits two 0⁰₀ bands corresponding to the S₀ → S₁ excitations of the two inequivalent benzenes, which are symmetrically shifted by −86/+88 cm⁻¹ relative to the 0⁰₀ band of benzene.
Abstract:
The position effect describes the influence of just-completed items in a psychological scale on subsequent items. This effect has been repeatedly reported for psychometric reasoning scales and is assumed to reflect implicit learning during testing. One way to identify the position effect is fixed-links modeling. With this approach, two latent variables are derived from the test items. Factor loadings of one latent variable are fixed to 1 for all items to represent ability-related variance. Factor loadings on the second latent variable increase from the first to the last item describing the position effect. Previous studies using fixed-links modeling on the position effect investigated reasoning scales constructed in accordance with classical test theory (e.g., Raven’s Progressive Matrices) but, to the best of our knowledge, no Rasch-scaled tests. These tests, however, meet stronger requirements on item homogeneity. In the present study, therefore, we will analyze data from 239 participants who have completed the Rasch-scaled Viennese Matrices Test (VMT). Applying a fixed-links modeling approach, we will test whether a position effect can be depicted as a latent variable and separated from a latent variable representing basic reasoning ability. The results have implications for the assumption of homogeneity in Rasch-homogeneous tests.
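The loading structure described above can be written down directly. The sketch below builds the model-implied covariance matrix of a fixed-links model with an invented item count and invented variance parameters; it illustrates the specification only and is not the authors' VMT analysis:

```python
import numpy as np

# Fixed-links loading structure: ability loadings fixed to 1 for all k items,
# position loadings increasing from the first to the last item.
k = 8
L = np.column_stack([np.ones(k),               # ability factor (fixed loadings)
                     np.linspace(0, 1, k)])    # position factor (increasing)
phi = np.diag([1.0, 0.3])                      # latent variances, orthogonal factors
theta = np.diag(np.full(k, 0.5))               # item residual variances
sigma = L @ phi @ L.T + theta                  # model-implied covariance matrix
```

Because later items load more heavily on the position factor, their pairwise covariances exceed those of early items; fitting this structure against the observed item covariance matrix is what separates position variance from ability variance.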
Abstract:
Past and future forest composition and distribution in temperate mountain ranges is strongly influenced by temperature and snowpack. We used LANDCLIM, a spatially explicit, dynamic vegetation model, to simulate forest dynamics for the last 16,000 years and compared the simulation results to pollen and macrofossil records at five sites on the Olympic Peninsula (Washington, USA). To address the hydrological effects of climate-driven variations in snowpack on simulated forest dynamics, we added a simple snow accumulation-and-melt module to the vegetation model and compared simulations with and without the module. LANDCLIM produced realistic present-day species composition with respect to elevation and precipitation gradients. Over the last 16,000 years, simulations driven by transient climate data from an atmosphere-ocean general circulation model (AOGCM) and by a chironomid-based temperature reconstruction captured Late-glacial to Late Holocene transitions in forest communities. Overall, the reconstruction-driven vegetation simulations matched observed vegetation changes better than the AOGCM-driven simulations. This study also indicates that forest composition is very sensitive to snowpack-mediated changes in soil moisture. Simulations without the snow module showed a strong effect of snowpack on key bioclimatic variables and species composition at higher elevations. A projected upward shift of the snow line and a decrease in snowpack might lead to drastic changes in mountain forest composition and even a shift to dry meadows due to insufficient moisture availability in shallow alpine soils.
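A degree-day scheme is about the simplest form such a snow accumulation-and-melt module can take. The sketch below is a generic illustration with invented parameter values and forcing data, not LANDCLIM's actual module:

```python
def snow_water_balance(temps_c, precip_mm, melt_factor=3.0, t_snow=0.0, t_melt=0.0):
    """Degree-day snow model: returns (snowpack series, water reaching soil).

    Precipitation falls as snow at or below t_snow; melt is proportional to
    degrees above t_melt (melt_factor in mm per degree-day), capped by the pack."""
    swe, pack, to_soil = 0.0, [], []
    for t, p in zip(temps_c, precip_mm):
        snow = p if t <= t_snow else 0.0
        rain = p - snow
        melt = min(swe + snow, max(0.0, melt_factor * (t - t_melt)))
        swe = swe + snow - melt
        pack.append(swe)
        to_soil.append(rain + melt)
    return pack, to_soil

# Invented forcing: three cold snowy steps, then a warm dry spell.
pack, to_soil = snow_water_balance([-5, -2, -1, 2, 4, 6], [10, 8, 5, 0, 0, 0])
```

Delaying moisture delivery from the cold steps to the warm ones is exactly the snowpack-mediated soil-moisture effect the simulations show forest composition to be sensitive to.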
Abstract:
Net primary production (NPP) is commonly modeled as a function of chlorophyll concentration (Chl), even though it has been long recognized that variability in intracellular chlorophyll content from light acclimation and nutrient stress confounds the relationship between Chl and phytoplankton biomass. It was suggested previously that satellite estimates of backscattering can be related to phytoplankton carbon biomass (C) under conditions of a conserved particle size distribution or a relatively stable relationship between C and total particulate organic carbon. Together, C and Chl can be used to describe physiological state (through variations in Chl:C ratios) and NPP. Here, we fully develop the carbon-based productivity model (CbPM) to include information on the subsurface light field and nitracline depths to parameterize photoacclimation and nutrient stress throughout the water column. This depth-resolved approach produces profiles of biological properties (Chl, C, NPP) that are broadly consistent with observations. The CbPM is validated using regional in situ data sets of irradiance-derived products, phytoplankton chlorophyll:carbon ratios, and measured NPP rates. CbPM-based distributions of global NPP are significantly different in both space and time from previous Chl-based estimates because of the distinction between biomass and physiological influences on global Chl fields. The new model yields annual, areally integrated water column production of ~52 Pg C a⁻¹ for the global oceans.
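The core accounting of a carbon-based model can be shown schematically: infer a growth rate from Chl:C and light at each depth, then compute NPP = C × growth rate. All functional forms and coefficients below are invented placeholders for illustration, not the published CbPM parameterization:

```python
import numpy as np

def growth_rate(chl_c, irradiance, mu_max=2.0, chl_c_max=0.03, k_i=10.0):
    """Toy growth rate (d^-1): Chl:C relative to a hypothetical maximum,
    times a saturating light response. All constants are made up."""
    light_lim = 1.0 - np.exp(-irradiance / k_i)
    return mu_max * np.minimum(chl_c / chl_c_max, 1.0) * light_lim

z = np.linspace(0, 100, 101)                   # depth grid (m)
irradiance = 40.0 * np.exp(-0.04 * z)          # exponential light attenuation
carbon = np.full_like(z, 25.0)                 # carbon biomass, mg C m^-3
chl_c = 0.005 + 0.0002 * z                     # photoacclimation: Chl:C rises at depth
npp = carbon * growth_rate(chl_c, irradiance)  # depth-resolved NPP, mg C m^-3 d^-1
```

The key structural point carried over from the abstract is that biomass (C) and physiology (Chl:C) enter separately, so a change in the Chl field need not imply a change in biomass or NPP.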
Abstract:
Previous studies (e.g., Hamori, 2000; Ho and Tsui, 2003; Fountas et al., 2004) find high volatility persistence of economic growth rates using generalized autoregressive conditional heteroskedasticity (GARCH) specifications. This paper reexamines the Japanese case, using the same approach and showing that this finding of high volatility persistence reflects the Great Moderation, which features a sharp decline in the variance as well as two falls in the mean of the growth rates identified by Bai and Perron's (1998, 2003) multiple structural change test. Our empirical results provide new evidence. First, excess kurtosis drops substantially or disappears in the GARCH or exponential GARCH model that corrects for an additive outlier. Second, using the outlier-corrected data, the integrated GARCH effect or high volatility persistence remains in the specification once we introduce intercept-shift dummies into the mean equation. Third, the time-varying variance falls sharply, only when we incorporate the break in the variance equation. Fourth, the ARCH-in-mean model finds no effects of our corrected measure of output volatility on output growth or of output growth on its volatility.
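The object under study, the GARCH(1,1) conditional-variance recursion and its persistence α + β, can be sketched as follows. Parameter values and the midpoint variance break are illustrative, not the paper's estimates:

```python
import numpy as np

def garch_variance(returns, omega, alpha, beta):
    """GARCH(1,1) conditional-variance filter:
    h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}."""
    h = np.empty(len(returns))
    h[0] = returns.var()                  # initialize at the sample variance
    for t in range(1, len(returns)):
        h[t] = omega + alpha * returns[t - 1] ** 2 + beta * h[t - 1]
    return h

rng = np.random.default_rng(3)
r = rng.normal(0, 1, 500)
r[250:] *= 0.4                            # a "Great Moderation"-style variance break
h = garch_variance(r, omega=0.05, alpha=0.08, beta=0.9)
persistence = 0.08 + 0.9                  # alpha + beta; values near 1 = IGARCH effect
```

When such a break in the unconditional variance is left unmodeled, a fitted α + β tends toward one, which is the mechanism behind the paper's reinterpretation of high volatility persistence.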
Abstract:
Kriging is a widely employed method for interpolating and estimating elevations from digital elevation data. Its place of prominence is due to its elegant theoretical foundation and its convenient practical implementation. From an interpolation point of view, kriging is equivalent to a thin-plate spline and is one species among the many in the genus of weighted inverse distance methods, albeit with attractive properties. However, from a statistical point of view, kriging is a best linear unbiased estimator and, consequently, has a place of distinction among all spatial estimators because any other linear estimator that performs as well as kriging (in the least squares sense) must be equivalent to kriging, assuming that the parameters of the semivariogram are known. Therefore, kriging is often held to be the gold standard of digital terrain model elevation estimation. However, I prove that, when used with local support, kriging creates discontinuous digital terrain models, which is to say, surfaces with “rips” and “tears” throughout them. This result is general; it is true for ordinary kriging, kriging with a trend, and other forms. A U.S. Geological Survey (USGS) digital elevation model was analyzed to characterize the distribution of the discontinuities. I show that the magnitude of the discontinuity does not depend on surface gradient but is strongly dependent on the size of the kriging neighborhood.
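The mechanism behind the discontinuity is easy to reproduce: with local support, the k-nearest-neighbor set changes abruptly as the prediction point moves, so the weights, and hence the estimate, can jump. Below is a generic ordinary-kriging sketch (exponential semivariogram, invented data; not the paper's code):

```python
import numpy as np

def gamma(h, nugget=0.0, sill=1.0, rng_=10.0):
    """Exponential semivariogram model (illustrative parameters)."""
    return nugget + (sill - nugget) * (1 - np.exp(-h / rng_))

def ok_estimate(pts, vals, x0, k=4):
    """Ordinary kriging at x0 using only the k nearest points (local support)."""
    d = np.linalg.norm(pts - x0, axis=1)
    idx = np.argsort(d)[:k]                      # the local neighborhood
    p, v = pts[idx], vals[idx]
    n = len(idx)
    A = np.ones((n + 1, n + 1))                  # OK system with unbiasedness row
    A[-1, -1] = 0.0
    A[:n, :n] = gamma(np.linalg.norm(p[:, None] - p[None, :], axis=2))
    b = np.append(gamma(np.linalg.norm(p - x0, axis=1)), 1.0)
    w = np.linalg.solve(A, b)[:n]                # kriging weights
    return w @ v, w

rng = np.random.default_rng(5)
pts = rng.uniform(0, 20, (30, 2))
vals = np.sin(pts[:, 0]) + rng.normal(0, 0.1, 30)
z, w = ok_estimate(pts, vals, np.array([10.0, 10.0]))
```

Evaluating `ok_estimate` on a fine grid would show the estimate jumping wherever a point enters or leaves the k-nearest set; with global support (k equal to the number of data points) the surface is continuous, which is why the result hinges on local support.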
Abstract:
The respiratory central pattern generator is a collection of medullary neurons that generates the rhythm of respiration. The respiratory central pattern generator feeds phrenic motor neurons, which, in turn, drive the main muscle of respiration, the diaphragm. The purpose of this thesis is to understand the neural control of respiration through mathematical models of the respiratory central pattern generator and phrenic motor neurons. We first designed and validated a Hodgkin-Huxley type model that mimics the behavior of phrenic motor neurons under a wide range of electrical and pharmacological perturbations. This model was constrained by physiological data from the literature. Next, we designed and validated a model of the respiratory central pattern generator by connecting four Hodgkin-Huxley type models of medullary respiratory neurons in a mutually inhibitory network. This network was in turn driven by a simple model of an endogenously bursting neuron, which acted as the pacemaker for the respiratory central pattern generator. Finally, the respiratory central pattern generator and phrenic motor neuron models were connected and their interactions studied. Our study of the models has provided a number of insights into the behavior of the respiratory central pattern generator and phrenic motor neurons. These include the suggestion of a role for the T-type and N-type calcium channels during single spikes and repetitive firing in phrenic motor neurons, as well as a better understanding of network properties underlying respiratory rhythm generation. We also utilized an existing model of lung mechanics to study the interactions between the respiratory central pattern generator and ventilation.
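A Hodgkin-Huxley type model of the kind described is a set of coupled ODEs for the membrane voltage and gating variables. The sketch below uses the textbook squid-axon parameters with forward-Euler integration; it is a generic illustration of the formulation, not the thesis's phrenic motor neuron model:

```python
import numpy as np

def hh_simulate(i_inj=10.0, t_max=50.0, dt=0.01):
    """Single-compartment Hodgkin-Huxley neuron, standard squid-axon constants.
    Units: mV, ms, uA/cm^2, mS/cm^2, uF/cm^2."""
    c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.4
    a_m = lambda v: 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
    b_m = lambda v: 4.0 * np.exp(-(v + 65) / 18)
    a_h = lambda v: 0.07 * np.exp(-(v + 65) / 20)
    b_h = lambda v: 1 / (1 + np.exp(-(v + 35) / 10))
    a_n = lambda v: 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
    b_n = lambda v: 0.125 * np.exp(-(v + 65) / 80)
    v, m, h, n = -65.0, 0.05, 0.6, 0.32          # resting-state initial values
    trace = []
    for _ in range(int(t_max / dt)):
        i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_inj - i_ion) / c_m          # forward-Euler voltage step
        m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
        trace.append(v)
    return np.array(trace)

v = hh_simulate()   # sustained injected current elicits repetitive firing
```

The thesis's models would add further currents (e.g., the T-type and N-type calcium channels mentioned above) as extra conductance terms in `i_ion`, and couple several such neurons through synaptic currents to form the inhibitory network.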
Abstract:
Anticancer drugs typically are administered in the clinic in the form of mixtures, sometimes called combinations. Only in rare cases, however, are mixtures approved as drugs. Rather, research on mixtures tends to occur after single drugs have been approved. The goal of this research project was to develop modeling approaches that would encourage rational preclinical mixture design. To this end, a series of models were developed. First, several QSAR classification models were constructed to predict the cytotoxicity, oral clearance, and acute systemic toxicity of drugs. The QSAR models were applied to a set of over 115,000 natural compounds in order to identify promising ones for testing in mixtures. Second, an improved method was developed to assess synergistic, antagonistic, and additive effects between drugs in a mixture. This method, dubbed the MixLow method, is similar to the Median-Effect method, the de facto standard for assessing drug interactions. The primary difference between the two is that the MixLow method uses a nonlinear mixed-effects model to estimate parameters of concentration-effect curves, rather than an ordinary least squares procedure. Parameter estimators produced by the MixLow method were more precise than those produced by the Median-Effect Method, and coverage of Loewe index confidence intervals was superior. Third, a model was developed to predict drug interactions based on scores obtained from virtual docking experiments. This represents a novel approach for modeling drug mixtures and was more useful for the data modeled here than competing approaches. The model was applied to cytotoxicity data for 45 mixtures, each composed of up to 10 selected drugs. One drug, doxorubicin, was a standard chemotherapy agent and the others were well-known natural compounds including curcumin, EGCG, quercetin, and rhein. 
Predictions of synergism/antagonism were made for all possible fixed-ratio mixtures, cytotoxicities of the 10 best-scoring mixtures were tested, and drug interactions were assessed. Predicted and observed responses were highly correlated (r² = 0.83). Results suggested that some mixtures allowed up to an 11-fold reduction of doxorubicin concentrations without sacrificing efficacy. Taken together, the models developed in this project present a general approach to rational design of mixtures during preclinical drug development.
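For context, the Median-Effect baseline against which MixLow is compared can be sketched directly: fit the linearized median-effect equation per drug, then evaluate a Loewe-type combination index at a chosen effect level. The dose-response data below are synthetic exact Hill-type curves, and the mixture doses and effect level are invented:

```python
import numpy as np

def fit_median_effect(doses, fa):
    """Fit log(fa/fu) = m*(log d - log Dm), with fu = 1 - fa,
    by ordinary least squares (the classic Median-Effect approach)."""
    x = np.log(doses)
    y = np.log(fa / (1 - fa))
    m, b = np.polyfit(x, y, 1)
    dm = np.exp(-b / m)                    # median-effect dose Dm
    return m, dm

def dose_for_effect(m, dm, fa):
    """Dose of a single drug producing effect level fa."""
    return dm * (fa / (1 - fa)) ** (1 / m)

doses = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
m1, dm1 = fit_median_effect(doses, doses**2 / (1 + doses**2))  # toy drug 1: m=2, Dm=1
m2, dm2 = fit_median_effect(doses, doses / (1 + doses))        # toy drug 2: m=1, Dm=1
d1, d2 = 0.4, 0.4                          # doses present in the mixture (invented)
fa_mix = 0.5                               # observed mixture effect (invented)
ci = d1 / dose_for_effect(m1, dm1, fa_mix) + d2 / dose_for_effect(m2, dm2, fa_mix)
# CI < 1 synergy, CI = 1 additivity, CI > 1 antagonism (Loewe convention)
```

The MixLow method keeps this combination-index endpoint but replaces the ordinary least squares curve fit with a nonlinear mixed-effects model, which is where the abstract reports its gains in precision and confidence-interval coverage.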
Abstract:
Colorectal cancer is the fourth most commonly diagnosed cancer in the United States. Every year about 147,000 people are diagnosed with colorectal cancer and 56,000 people lose their lives to this disease. Most hereditary nonpolyposis colorectal cancers (HNPCC) and 12% of sporadic colorectal cancers show microsatellite instability. Colorectal cancer is a multistep progressive disease. It starts from a mutation in a normal colorectal cell and grows into a clone of cells that further accumulates mutations and finally develops into a malignant tumor. In terms of molecular evolution, the process of colorectal tumor progression represents the acquisition of sequential mutations. Clinical studies use biomarkers such as microsatellites or single nucleotide polymorphisms (SNPs) to study mutation frequencies in colorectal cancer. Microsatellite data obtained from single-genome-equivalent PCR or small-pool PCR can be used to infer tumor progression. Since tumor progression is similar to population evolution, we used the coalescent approach, which is well established in population genetics, to analyze this type of data. Coalescent theory can infer a sample's evolutionary path through the analysis of microsatellite data. The simulation results indicate that the constant-population-size pattern and the rapid-tumor-growth pattern produce different genetic polymorphic patterns. The simulation results were compared with experimental data collected from HNPCC patients. The preliminary results show that the mutation rate in 6 HNPCC patients ranges from 0.001 to 0.01. The patients' polymorphic patterns are similar to the constant-population-size pattern, which implies that tumor progression occurs through multilineage persistence instead of clonal sequential evolution. These results should be further verified using a larger dataset.
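The coalescent machinery referred to above can be sketched in a few lines: simulate a constant-size coalescent genealogy for n sampled cells, then drop ±1 stepwise mutations on the branches to produce microsatellite repeat numbers. Everything below (sample size, θ, root repeat number) is illustrative, not the dissertation's simulation:

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate_microsatellite(n, theta, root_length=20):
    """Constant-size coalescent for n lineages with a stepwise mutation model."""
    nodes = list(range(n))                  # currently active lineages
    times = {i: 0.0 for i in range(n)}      # node ages in coalescent time units
    parent, blen = {}, {}
    t, nxt = 0.0, n
    while len(nodes) > 1:
        k = len(nodes)
        t += rng.exponential(2.0 / (k * (k - 1)))   # waiting time to next merger
        i, j = rng.choice(k, size=2, replace=False)
        a, b = nodes[i], nodes[j]
        for c in (a, b):                    # record topology and branch lengths
            parent[c], blen[c] = nxt, t - times[c]
        times[nxt] = t
        nodes = [x for x in nodes if x not in (a, b)] + [nxt]
        nxt += 1
    root = nodes[0]
    length = {root: root_length}
    # walk root-to-tips (a child's id is always smaller than its parent's)
    for c in sorted(parent, reverse=True):
        n_mut = rng.poisson(theta / 2.0 * blen[c])  # mutations on this branch
        steps = int(rng.choice([-1, 1], size=n_mut).sum()) if n_mut else 0
        length[c] = length[parent[c]] + steps       # +-1 repeat-number steps
    return [length[i] for i in range(n)]

alleles = simulate_microsatellite(8, theta=2.0)
```

Comparing the polymorphism pattern of such simulated samples under constant-size versus rapid-growth demographies against patient data is the inference strategy the abstract describes.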
Abstract:
Hodgkin's disease (HD) is a cancer of the lymphatic system. Survivors of HD face a variety of adverse late effects, of which second primary tumors (SPT) are among the most serious. This dissertation aims to model time-to-SPT in the presence of death and HD relapses during follow-up. The model is designed to handle a mixture phenomenon of SPT and the influence of death. Relapses of HD are adjusted for as a covariate. A proportional hazards framework is used to define the SPT intensity function, which includes an exponential term for the explanatory variables. Death as a competing risk is considered under different scenarios, depending on which terminal event comes first. The Newton-Raphson method is used to obtain the parameter estimates. The proposed method is applied to a real data set containing a group of HD patients. Several risk factors for the development of SPT are identified, and the findings are noteworthy for the development of healthcare guidelines that may lead to the early detection or prevention of SPT.
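The Newton-Raphson step used for such likelihoods can be illustrated with a toy censored exponential proportional-hazards model; this is a generic sketch with simulated data, not the dissertation's mixture/competing-risks model:

```python
import numpy as np

def newton_raphson_exp_ph(t, d, X, iters=25):
    """Newton-Raphson MLE for an exponential PH model with hazard exp(x'beta)
    and right-censoring indicator d (1 = event observed)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        lam = np.exp(X @ beta)
        score = X.T @ (d - lam * t)                 # gradient of the log-likelihood
        info = (X * (lam * t)[:, None]).T @ X       # observed information matrix
        beta = beta + np.linalg.solve(info, score)  # Newton-Raphson update
    return beta

rng = np.random.default_rng(2)
n = 2000
X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])  # toy risk factor
true_beta = np.array([-1.0, 0.7])
t_event = rng.exponential(1.0 / np.exp(X @ true_beta))
c = rng.exponential(3.0, n)                               # independent censoring
t, d = np.minimum(t_event, c), (t_event <= c).astype(float)
beta_hat = newton_raphson_exp_ph(t, d, X)
```

The dissertation's likelihood has additional mixture and competing-risk terms, but the estimation machinery (score vector, information matrix, iterated Newton updates) takes exactly this shape.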