Abstract:
The spectacular images of Comet 103P/Hartley 2 recorded by the Medium Resolution Instrument (MRI) and High Resolution Instrument (HRI) on board the Extrasolar Planet Observation and Deep Impact Extended Investigation (EPOXI) spacecraft, during the Deep Impact extended mission, revealed that its bi-lobed, very active nucleus outgasses volatiles heterogeneously. Indeed, CO2 is the primary driver of activity, dragging chunks of pure ice out of the sub-solar lobe of the nucleus; these chunks appear to be the main source of water in Hartley 2's coma, sublimating slowly as they move away from the nucleus. Water vapor, however, is released by direct sublimation of the nucleus at the waist without any significant amount of either CO2 or icy grains. The coma structure of a comet with such areas of diverse chemistry differs from the usual models in which gases are produced homogeneously from the surface. We use the fully kinetic Direct Simulation Monte Carlo model of Tenishev et al. (Tenishev, V.M., Combi, M.R., Davidsson, B. [2008]. Astrophys. J. 685, 659-677; Tenishev, V.M., Combi, M.R., Rubin, M. [2011]. Astrophys. J. 732, 104-120), applied to Comet 103P/Hartley 2 and including sublimating icy grains, to reproduce the observations made by EPOXI and ground-based measurements. A realistic bi-lobed nucleus with a succession of active areas of different chemistry was included in the model, enabling us to study the coma of Hartley 2 in detail. The gas production rates of each area were found by fitting spectra computed with a line-by-line non-LTE radiative transfer model to the HRI observations. The presence of icy grains with long lifetimes, which are pushed anti-sunward by radiation pressure, explains the observed OH asymmetry with enhancement on the night side of the coma.
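A minimal toy sketch (in Python, not the Tenishev et al. DSMC code) of the grain dynamics invoked in the last sentence: an icy grain leaves the nucleus sunward, shrinks by slow sublimation, and is decelerated and eventually pushed anti-sunward by radiation pressure. All parameter values (grain density, shrink rate, radiation-pressure scale) are illustrative assumptions.

```python
import numpy as np

# Toy model (not the Tenishev et al. DSMC code): an icy grain leaves the
# nucleus sunward, shrinks by slow sublimation, and is pushed anti-sunward
# by radiation pressure. All parameter values below are illustrative only.

RHO_ICE = 500.0      # grain bulk density, kg m^-3 (assumed)
DR_DT = -1.0e-11     # sublimation-driven radius shrink rate, m s^-1 (assumed)
BETA_REF = 5.7e-4    # radiation-pressure efficiency scale (assumed)
G_SUN = 5.9e-3       # solar gravitational acceleration at ~1 au, m s^-2

def rad_pressure_accel(radius):
    """Anti-sunward acceleration; beta ~ 1/(rho * a) for large grains."""
    beta = BETA_REF / (RHO_ICE * radius)
    return beta * G_SUN

def integrate_grain(r0=1e-5, v0=10.0, dt=50.0, t_max=2e5):
    """Semi-implicit Euler integration of 1D grain motion along the
    Sun-comet line. x > 0 is sunward; the grain starts sunward at v0 m/s."""
    x, v, r, t = 0.0, v0, r0, 0.0
    xs = []
    while t < t_max and r > 1e-7:
        v -= rad_pressure_accel(r) * dt  # decelerated, then pushed anti-sunward
        x += v * dt
        r += DR_DT * dt                  # slow sublimation feeds H2O/OH to the coma
        t += dt
        xs.append(x)
    return np.array(xs)

traj = integrate_grain()
print(f"turnaround distance: {traj.max()/1e3:.1f} km; "
      f"final position: {traj[-1]/1e3:.1f} km (night side if negative)")
```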
Abstract:
Numerical simulation experiments give insight into the evolving energy partitioning during high-strain torsion experiments on calcite. Our numerical experiments are designed to derive a generic macroscopic grain-size-sensitive flow law capable of describing the full evolution from the transient regime to steady state. The transient regime is crucial for understanding the importance of microstructural processes that may lead to strain localization phenomena in deforming materials. This is particularly important in geological and geodynamic applications, where strain localization happens outside the time frame that can be observed under controlled laboratory conditions. Our method is based on an extension of the paleowattmeter approach to the transient regime. We add an empirical hardening law using the Ramberg-Osgood approximation and assess the experiments with an evolution test function of stored over dissipated energy (lambda factor). Parameter studies of strain hardening, dislocation creep parameters, strain rate, temperature, and lambda factor, as well as mesh sensitivity, are presented to explore the sensitivity of the newly derived transient/steady-state flow law. Our analysis can be seen as one of the first steps in a hybrid computational-laboratory-field modeling workflow. It could be improved through independent verification by thermographic analysis in physical laboratory experiments to assess lambda factor evolution under laboratory conditions.
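For orientation, one common form of the Ramberg-Osgood approximation and of the lambda factor is written out below; the exact parameterization used in the paper may differ.

```latex
% One common form of the Ramberg-Osgood hardening law (the paper's exact
% parameterization may differ): total strain = elastic part + plastic part.
\[
  \varepsilon \;=\; \frac{\sigma}{E}
  \;+\; \alpha\,\frac{\sigma}{E}\left(\frac{\sigma}{\sigma_0}\right)^{n-1}
\]
% The lambda factor used as the evolution test function is the ratio of
% stored to dissipated energy (schematic form; psi is the stored-energy
% density and the denominator is the accumulated plastic dissipation):
\[
  \lambda \;=\; \frac{\text{stored energy}}{\text{dissipated energy}}
  \;=\; \frac{\int_V \psi \,\mathrm{d}V}
             {\int_0^t \!\!\int_V \sigma : \dot{\varepsilon}^{\mathrm{pl}} \,\mathrm{d}V\,\mathrm{d}t}
\]
```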
Abstract:
Empirical evidence and theoretical studies suggest that the phenotype, i.e., cellular- and molecular-scale dynamics, including proliferation rate and adhesiveness due to microenvironmental factors and gene expression that govern tumor growth and invasiveness, also determines gross tumor-scale morphology. It has been difficult to quantify the relative effect of these links on disease progression and prognosis using conventional clinical and experimental methods and observables. As a result, successful individualized treatment of highly malignant and invasive cancers, such as glioblastoma, via surgical resection and chemotherapy cannot be offered, and outcomes are generally poor. What is needed is a deterministic, quantifiable method to enable understanding of the connections between phenotype and tumor morphology. Here, we critically assess the advantages and disadvantages of recent computational modeling efforts (e.g., continuum, discrete, and cellular automata models) that have pursued this understanding. Based on this assessment, we review a multiscale (i.e., from the molecular to the gross tumor scale) mathematical and computational "first-principle" approach based on mass conservation and other physical laws, such as employed in reaction-diffusion systems. Model variables describe known characteristics of tumor behavior, and parameters and functional relationships across scales are informed by in vitro, in vivo, and ex vivo biology. We review the feasibility of this methodology, which, once coupled to tumor imaging and tumor biopsy or cell culture data, should enable prediction of tumor growth and therapy outcome by quantifying the relation between the underlying dynamics and morphological characteristics. In particular, morphologic stability analysis of this mathematical model reveals that tumor cell patterning at the tumor-host interface is regulated by cell proliferation, adhesion, and other phenotypic characteristics: histopathologic information on the tumor boundary can be input into the mathematical model and used as a phenotype-diagnostic tool to predict collective and individual tumor cell invasion of surrounding tissue. This approach further provides a means to deterministically test the effects of novel and hypothetical therapy strategies on tumor behavior.
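A minimal 1D reaction-diffusion sketch of the kind of mass-conservation model described above (illustrative parameters; the reviewed multiscale approach couples many more processes and scales): u is tumor cell density and c a nutrient, with cell motility, nutrient-limited logistic proliferation, and nutrient consumption.

```python
import numpy as np

# Minimal 1D reaction-diffusion sketch of nutrient-limited tumor growth
# (illustrative only; the reviewed multiscale model couples many more
# processes and scales). u = tumor cell density, c = nutrient concentration.

nx, dx, dt = 200, 0.1, 0.001
Du, Dc = 0.01, 1.0        # cell motility and nutrient diffusivity (assumed)
rho, kappa = 1.0, 0.5     # proliferation rate and nutrient uptake (assumed)

u = np.zeros(nx); u[:10] = 1.0   # small initial tumor at the left edge
c = np.ones(nx)                  # well-nourished host tissue

def laplacian(f):
    lap = np.zeros_like(f)
    lap[1:-1] = (f[2:] - 2*f[1:-1] + f[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]   # zero-flux boundaries
    return lap

for step in range(20000):
    # mass conservation: motility + nutrient-limited logistic proliferation
    du = Du*laplacian(u) + rho*c*u*(1.0 - u)
    dc = Dc*laplacian(c) - kappa*u*c    # nutrient diffuses in, cells consume it
    u += dt*du
    c += dt*dc

front = np.argmax(u < 0.5) * dx
print(f"tumor front has invaded to x = {front:.2f} after t = {20000*dt:.0f}")
```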
Abstract:
Besides its primary role in producing food and fiber, agriculture also has relevant effects on several other functions, such as the management of renewable natural resources. Climate change (CC) may lead to new trade-offs between agricultural functions or aggravate existing ones, but suitable agricultural management may maintain or even improve the ability of agroecosystems to supply these functions. Hence, it is necessary to identify the relevant drivers (e.g., cropping practices, local conditions) and their interactions, and how they affect agricultural functions in a changing climate. The goal of this study was to use a modeling framework to analyze the sensitivity of indicators of three important agricultural functions, namely crop yield (food and fiber production function), soil erosion (soil conservation function), and nutrient leaching (clean water provision function), to a wide range of agricultural practices under current and future climate conditions. In a two-step approach, cropping practices that explain high proportions of the variance of the different indicators were first identified by an analysis-of-variance-based sensitivity analysis. Then, the most suitable combinations of practices to achieve the best performance with respect to each indicator were extracted, and trade-offs were analyzed. The procedure was applied to a region in western Switzerland, considering two different soil types to test the importance of local environmental constraints. Results show that the sensitivity of crop yield and soil erosion to management is high, while nutrient leaching mostly depends on soil type. We found that the influence of most agricultural practices does not change significantly with CC; only irrigation becomes more relevant as a consequence of decreasing summer rainfall. Trade-offs were identified when focusing on the best performance of each indicator separately, and these were amplified under CC. For adaptation to CC in the selected study region, conservation soil management and the use of cropped grasslands appear to be the most suitable options to avoid trade-offs.
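A schematic Python sketch of the first step under toy assumptions: a full-factorial grid over discrete management factors, a stand-in response function in place of the cropping-system model, and first-order variance shares per factor (ANOVA-based sensitivity). The factor names and the response function are hypothetical, not the study's actual setup.

```python
import itertools
import numpy as np

# Schematic ANOVA-based sensitivity analysis over discrete management
# factors. The toy response stands in for the crop/soil model; factor
# names, levels, and coefficients are illustrative assumptions.

factors = {
    "tillage":    [0, 1, 2],   # e.g. conventional / reduced / no-till
    "irrigation": [0, 1],
    "cover_crop": [0, 1],
    "fert_rate":  [0, 1, 2],
}

def toy_yield(t, i, c, f):
    # Illustrative response with one interaction term (irrigation x fert).
    return 3.0 + 0.8*f + 0.5*i - 0.3*t + 0.4*c + 0.2*i*f

names = list(factors)
grid = list(itertools.product(*factors.values()))
y = np.array([toy_yield(*combo) for combo in grid])
total_var = y.var()

# First-order sensitivity: variance of the conditional means per factor,
# as a share of the total output variance.
for k, name in enumerate(names):
    means = [y[[g[k] == lvl for g in grid]].mean() for lvl in factors[name]]
    counts = [sum(g[k] == lvl for g in grid) for lvl in factors[name]]
    s1 = np.average((np.array(means) - y.mean())**2, weights=counts) / total_var
    print(f"{name:10s} first-order variance share: {s1:.2f}")
```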
Abstract:
Using a three-dimensional physical-biogeochemical model, we investigated the modeled responses of diatom productivity and biogenic silica export to iron enrichment in the equatorial Pacific and compared the model simulation with in situ (IronEx II) iron fertilization results. In the eastern equatorial Pacific, an area of 540,000 km2 was enriched with iron by changing the photosynthetic efficiency and the silicate and nitrogen uptake kinetics of phytoplankton in the model for a period of 20 days. The vertically integrated Chl a and primary production increased about threefold 5 days after the start of the experiment, similar to what was observed in the IronEx II experiment. Diatoms contribute to the initial increase of total phytoplankton biomass but decrease sharply after 10 days because of mesozooplankton grazing. The modeled surface nutrient (silicate and nitrate) and TCO2 anomaly fields, obtained from the difference between the "iron addition" and "ambient" (without iron) concentrations, also agreed well with the IronEx II observations. The enriched patch is tracked with an inert tracer similar to the SF6 used in IronEx II. The modeled depth-time distribution of sinking biogenic silica (BSi) indicates that it would take more than 30 days after iron injection to detect any significant BSi export out of the euphotic zone. Sensitivity studies were performed to establish the importance of fertilized patch size, duration of fertilization, and the role of mesozooplankton grazing. A larger iron patch tends to produce a broader and longer-lasting phytoplankton bloom. Longer fertilization prolongs phytoplankton growth, but higher zooplankton grazing pressure prevents significant phytoplankton biomass accumulation. With the same treatment of iron fertilization in the model, lowering the mesozooplankton grazing rate generates a much stronger diatom bloom, but the bloom is terminated by Si(OH)4 limitation after the initial rapid increase. With an increased mesozooplankton grazing rate, the diatom increase due to iron addition remains minimal, but small phytoplankton tend to increase. These numerical model experiments demonstrate the value of ecosystem modeling for evaluating the detailed interaction between the biogeochemical cycle and iron fertilization in the equatorial Pacific.
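A zero-dimensional toy sketch of the grazing mechanism discussed above (illustrative rates, not the 3D physical-biogeochemical model): iron addition is mimicked by raising the diatom growth rate for 20 days, and mesozooplankton grazing then caps and erodes the bloom.

```python
# Toy 0-D phytoplankton-zooplankton sketch of an iron-enrichment response.
# All rates and the nutrient ceiling are illustrative assumptions.

dt, days = 0.01, 60.0
mu0, mu_fe = 0.4, 1.2   # diatom growth rate without/with iron, d^-1 (assumed)
g, kP = 0.6, 1.0        # grazing rate and half-saturation (assumed)
mz = 0.1                # zooplankton mortality, d^-1 (assumed)

P, Z = 0.1, 0.05        # diatom and mesozooplankton biomass (arbitrary units)
t, out = 0.0, []
while t < days:
    mu = mu_fe if t < 20.0 else mu0           # fertilized for the first 20 days
    graze = g * Z * P / (kP + P)              # saturating grazing response
    P += dt * (mu * P * (1 - P/5.0) - graze)  # growth capped by a nutrient ceiling
    Z += dt * (0.3 * graze - mz * Z)          # grazing fuels zooplankton growth
    t += dt
    out.append((t, P, Z))

peak_t, peak_P, _ = max(out, key=lambda s: s[1])
print(f"diatom bloom peaks at day {peak_t:.1f} with P = {peak_P:.2f}")
```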
Abstract:
In this second part of our comparative study inspecting the (dis)similarities between “Stokes” and “Jones,” we present simulation results from two independent Monte Carlo programs: (i) one developed in Bern with the Jones formalism and (ii) the other implemented in Ulm with the Stokes notation. The simulated polarimetric experiments involve suspensions of polystyrene spheres of varying size. Reflection and refraction at the sample/air interfaces are also considered. Both programs yield identical results when propagating pure polarization states; yet, with unpolarized illumination, second-order statistical differences appear, thereby highlighting the pre-averaged nature of the Stokes parameters. This study serves as a validation of both programs and clarifies the misleading belief that “Jones cannot treat depolarizing effects.”
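A small numerical sketch of the point at issue, under simplifying assumptions (random lossless retarder events standing in for scattering, not either group's Monte Carlo code): each photon carries a pure Jones vector throughout, yet averaging the derived Stokes vectors over the ensemble produces depolarization.

```python
import numpy as np

# Each photon carries a pure Jones vector; depolarization appears only after
# averaging the derived Stokes vectors over the ensemble. Random lossless
# retarder events stand in for scattering (illustrative only).

rng = np.random.default_rng(0)

def jones_to_stokes(E):
    Ex, Ey = E
    return np.array([abs(Ex)**2 + abs(Ey)**2,
                     abs(Ex)**2 - abs(Ey)**2,
                     2*np.real(Ex*np.conj(Ey)),
                     -2*np.imag(Ex*np.conj(Ey))])

def random_event(E):
    # Linear retarder with random fast-axis angle and random retardance:
    # a pure, lossless (unitary) Jones operation.
    th = rng.uniform(0, np.pi)
    d = rng.uniform(0, 2*np.pi)
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    D = np.diag([np.exp(1j*d/2), np.exp(-1j*d/2)])
    return R @ D @ R.T @ E

S_sum = np.zeros(4)
for _ in range(20000):
    E = np.array([1.0+0j, 0.0+0j])   # each photon launched in a pure H-state
    for _ in range(10):              # ten scattering events per photon
        E = random_event(E)
    S_sum += jones_to_stokes(E)      # average only at detection

S = S_sum / S_sum[0]
dop = np.linalg.norm(S[1:])
print(f"ensemble degree of polarization after averaging: {dop:.3f}")
```

Each individual photon stays fully polarized (its Jones vector has unit norm throughout); only the ensemble-averaged Stokes vector shows a degree of polarization near zero, which is the second-order statistical effect the abstract refers to.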
Abstract:
Aim: The landscape metaphor allows viewing corrective experiences (CEs) as a pathway to a state with relatively lower 'tension' (a local minimum). However, according to the landscape metaphor, such local minima are not easily accessible but are obstructed by states with relatively high tension (local maxima) (Caspar & Berger, 2012). For example, an individual with spider phobia has to transiently tolerate high levels of tension during exposure therapy to reach the local minimum of habituation. To allow for more specific therapeutic guidelines and empirically testable hypotheses, we advance the landscape metaphor to a scientific model based on motivational processes. Specifically, we conceptualize CEs as available but unusual trajectories (= pathways) through a motivational space whose dimensions are set up by basic motives such as the need for agency or attachment. Methods: Dynamical systems theory is used to model motivational states and trajectories with mathematical equations. Fortunately, these equations have easy-to-comprehend, intuitive visual representations similar to the landscape metaphor. Thus, trajectories that represent CEs are informative and action-guiding for both therapists and patients without knowledge of dynamical systems. At the same time, the mathematical underpinnings of the model allow researchers to deduce hypotheses for empirical testing. Results: First, the results of simulations of CEs during exposure therapy in anxiety disorders are presented and compared to empirical findings. Second, hypothetical CEs in an autonomy-attachment conflict are reported from a simulation study. Discussion: Preliminary clinical implications for the evocation of CEs are drawn after a critical discussion of the proposed model.
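A minimal sketch of the landscape idea in code form (the potential and all parameters are illustrative, not the authors' motivational-space model): a state point descends a double-well potential whose barrier represents high tension, and a transient "exposure" forcing lets the trajectory cross into the other minimum.

```python
# Minimal dynamical-systems sketch of the landscape metaphor (illustrative,
# not the authors' model). V(x) has two minima (e.g. avoidance vs.
# habituation) separated by a high-tension barrier; a transient "exposure"
# forcing lets the state cross it.

def dV(x):
    # Gradient of the double-well potential V(x) = x**4/4 - x**2/2,
    # with minima at x = -1 and x = +1 and a barrier at x = 0.
    return x**3 - x

def simulate(force_on, x0=-1.0, dt=0.01, steps=4000):
    x = x0
    for k in range(steps):
        push = 0.6 if (force_on and k*dt < 10.0) else 0.0  # transient exposure
        x += dt * (-dV(x) + push)   # gradient descent on the landscape
    return x

print(f"without exposure, state settles at x = {simulate(False):+.2f}")
print(f"with exposure,    state settles at x = {simulate(True):+.2f}")
```

Without the forcing the state stays trapped in the original minimum; with a sufficiently strong transient push it tolerates the barrier and relaxes into the other well, which is the trajectory picture of a CE sketched in the abstract.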
Abstract:
Periacetabular osteotomy (PAO) is an effective approach for the surgical treatment of hip dysplasia. The aim of PAO is to increase acetabular coverage of the femoral head and to reduce contact pressures by reorienting the acetabular fragment. The success of PAO depends significantly on the surgeon’s experience. Previously, we developed a computer-assisted planning and navigation system for PAO, which allows not only quantifying the 3D hip morphology for computer-assisted diagnosis of hip dysplasia but also virtual PAO surgical planning and simulation. In this paper, building on this previously developed planning and navigation system, we developed a 3D finite element (FE) model to investigate the optimal acetabular reorientation after PAO. Our experimental results showed that an optimal position of the acetabulum can be achieved that maximizes contact area and at the same time minimizes peak contact pressure in the pelvic and femoral cartilage. In conclusion, our computer-assisted planning and navigation system with FE modeling can be a promising tool to determine the optimal PAO planning strategy.
Abstract:
A benchmark problem set consisting of four problem levels was developed for the simulation of Cr isotope fractionation in 1D and 2D domains. The benchmark is based on a recent field study in which Cr(VI) reduction and the accompanying Cr isotope fractionation occur abiotically through an aqueous reaction with dissolved Fe2+ (Wanner et al., 2012, Appl. Geochem. 27, 644–662). The problem set includes simulation of the major processes affecting the Cr isotopic composition, such as the dissolution of various Cr(VI)-bearing minerals, fractionation during abiotic aqueous Cr(VI) reduction, and non-fractionating precipitation of Cr(III) as sparingly soluble Cr-hydroxide. The accuracy of the presented solutions was ensured by running the problems with four well-established reactive transport modeling codes: TOUGHREACT, MIN3P, CRUNCHFLOW, and FLOTRAN. Results were also compared with an analytical Rayleigh-type fractionation model. An additional constraint on the correctness of the results was obtained by comparing output from the problem levels simulating Cr isotope fractionation with the corresponding levels simulating only bulk concentrations. For all problem levels, model-to-model comparisons showed excellent agreement, suggesting that each of the tested codes is capable of accurately simulating the fate of individual Cr isotopes for these geochemical processes.
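For reference, the Rayleigh-type fractionation model used for the analytical comparison has the following standard form (standard notation; the benchmark's exact parameter values are not reproduced here).

```latex
% Standard Rayleigh-type fractionation model (notation per common usage;
% the problem set's exact parameter values may differ). With f the fraction
% of Cr(VI) remaining and alpha the kinetic fractionation factor,
\[
  \frac{\delta^{53}\mathrm{Cr} + 1000}{\delta^{53}\mathrm{Cr}_0 + 1000}
  \;=\; f^{\,\alpha - 1}
\]
% so the residual Cr(VI) pool becomes progressively heavier as reduction
% by aqueous Fe(II) proceeds (alpha < 1 for this reaction).
```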
Abstract:
The cometary coma is a unique phenomenon in the solar system: a planetary atmosphere influenced by little or no gravity. As a comet approaches the Sun, water vapor, together with some fraction of other gases, sublimates, generating a cloud of gas, ice, and other refractory materials (rocky and organic dust) ejected from the surface of the nucleus. Sublimating gas molecules undergo frequent collisions and photochemical processes in the near-nucleus region. Owing to the negligible gravity, a comet produces a large and highly variable dusty coma with a size much greater than the characteristic size of the nucleus. The Rosetta spacecraft is en route to comet 67P/Churyumov-Gerasimenko for a rendezvous, landing, and extensive orbital phase beginning in 2014. Both the interpretation of measurements and spacecraft safety considerations require modeling of the comet's dusty gas environment. In this work we present results of a numerical study of the multispecies gaseous and electrically charged dust environment of comet Churyumov-Gerasimenko. Both the gas and dust phases of the coma are simulated kinetically, and photolytic reactions are taken into account. Parameters of the ambient plasma as well as the distribution of electric and magnetic fields are obtained from an MHD simulation [1] of the coma coupled to the solar wind. Trajectories of ions and electrically charged dust grains are computed by accounting for the Lorentz force and the gravity of the nucleus.
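A sketch of the trajectory-integration step described in the last sentence, under strong simplifications: uniform E and B fields and illustrative grain and nucleus parameters, whereas the study takes the plasma parameters and fields from the MHD simulation.

```python
import numpy as np

# Sketch of the dust-trajectory step: a charged grain moving under the
# Lorentz force and the nucleus's gravity. Field values and grain
# properties are illustrative assumptions (the study uses MHD-derived,
# spatially varying fields).

GM = 6.674e-11 * 1.0e13   # nucleus GM for a ~1e13 kg nucleus (assumed)
q_over_m = 1.0e-3         # grain charge-to-mass ratio, C kg^-1 (assumed)
E = np.array([0.0, 1.0e-3, 0.0])   # electric field, V m^-1 (assumed uniform)
B = np.array([0.0, 0.0, 5.0e-9])   # magnetic field, T (assumed uniform)

def accel(x, v):
    r = np.linalg.norm(x)
    grav = -GM * x / r**3                      # nucleus gravity
    lorentz = q_over_m * (E + np.cross(v, B))  # Lorentz force per unit mass
    return grav + lorentz

# Kick-drift-kick integration of one grain released at 2 km altitude
# (adequate here since the magnetic term is tiny for these parameters).
x = np.array([2000.0, 0.0, 0.0])
v = np.array([1.0, 0.0, 0.0])      # ejected radially at 1 m/s
dt = 1.0
for _ in range(36000):             # ten hours of flight
    v_half = v + 0.5*dt*accel(x, v)
    x = x + dt*v_half
    v = v_half + 0.5*dt*accel(x, v_half)

print(f"grain after 10 h: r = {np.linalg.norm(x)/1e3:.1f} km, "
      f"|v| = {np.linalg.norm(v):.2f} m/s")
```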
Abstract:
Past and future forest composition and distribution in temperate mountain ranges are strongly influenced by temperature and snowpack. We used LANDCLIM, a spatially explicit, dynamic vegetation model, to simulate forest dynamics over the last 16,000 years and compared the simulation results to pollen and macrofossil records at five sites on the Olympic Peninsula (Washington, USA). To address the hydrological effects of climate-driven variations in snowpack on simulated forest dynamics, we added a simple snow accumulation-and-melt module to the vegetation model and compared simulations with and without the module. LANDCLIM produced realistic present-day species composition with respect to elevation and precipitation gradients. Over the last 16,000 years, simulations driven by transient climate data from an atmosphere-ocean general circulation model (AOGCM) and by a chironomid-based temperature reconstruction captured the Late-glacial to Late Holocene transitions in forest communities. Overall, the reconstruction-driven vegetation simulations matched observed vegetation changes better than the AOGCM-driven simulations. This study also indicates that forest composition is very sensitive to snowpack-mediated changes in soil moisture. Comparison of simulations with and without the snow module showed a strong effect of snowpack on key bioclimatic variables and species composition at higher elevations. A projected upward shift of the snow line and a decrease in snowpack might lead to drastic changes in mountain forest composition, and even a shift to dry meadows, due to insufficient moisture availability in shallow alpine soils.
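A sketch of a simple snow accumulation-and-melt module of the kind described, using a degree-day formulation; the threshold temperature and melt factor are illustrative assumptions, and LANDCLIM's actual module may differ.

```python
# Degree-day snow accumulation-and-melt sketch (illustrative parameters;
# LANDCLIM's actual formulation may differ).

T_SNOW = 0.0   # deg C: precipitation falls as snow below this (assumed)
DDF = 3.0      # degree-day melt factor, mm per deg C per day (assumed)

def snow_step(swe, temp_c, precip_mm):
    """Advance snow water equivalent (mm) by one day.
    Returns (new_swe, liquid water reaching the soil in mm)."""
    if temp_c <= T_SNOW:
        return swe + precip_mm, 0.0           # accumulate; nothing infiltrates
    melt = min(swe, DDF * (temp_c - T_SNOW))  # melt limited by the snowpack
    return swe - melt, precip_mm + melt       # rain plus meltwater to the soil

# Toy winter-to-spring sequence of (daily mean temperature, daily precip).
swe = 0.0
for t, p in [(-5, 10), (-2, 8), (-1, 0), (2, 5), (4, 0), (6, 3), (8, 0)]:
    swe, water = snow_step(swe, t, p)
    print(f"T={t:+3d} C  P={p:2d} mm  ->  SWE={swe:5.1f} mm, to soil={water:4.1f} mm")
```

The point of such a module for the vegetation model is the timing shift it produces: winter precipitation is withheld from the soil until the melt season, so removing the snowpack (or raising the snow line) changes the soil-moisture seasonality that the bioclimatic variables see.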
Abstract:
The social processes that lead to destructive behavior in celebratory crowds can be studied through agent-based computer simulation. Riots are an increasingly common outcome of sports celebrations and pose the potential for harm to participants, bystanders, property, and the reputation of the groups with which participants are associated. Rioting cannot necessarily be attributed to the negative emotions of individuals, such as anger, rage, frustration, and despair. For instance, the celebratory behavior (e.g., chanting, cheering, singing) during UConn’s “Spring Weekend” and after the 2004 NCAA Championships resulted in several small fires and overturned cars. Further, not every individual in the area of a riot engages in violence, and those who do, do not do so continuously. Instead, small groups carry out the majority of violent acts in relatively short-lived episodes. Agent-based computer simulations are an ideal method for modeling complex group-level social phenomena, such as celebratory gatherings and riots, which emerge from the interaction of relatively “simple” individuals. By making simple assumptions about individuals’ decision-making and behaviors and allowing actors to affect one another, behavioral patterns emerge that cannot be predicted from the characteristics of individuals. The computer simulation developed here models celebratory riot behavior by repeatedly evaluating a single algorithm for each individual, whose inputs are affected by the characteristics of nearby actors. Specifically, the simulation assumes that (a) actors possess 1 of 5 distinct social identities (group memberships), (b) actors congregate with actors who possess the same identity, (c) the degree of social cohesion generated in the social context determines the stability of relationships within groups, and (d) actors’ level of aggression is affected by the aggression of other group members. This simulation not only provides a systematic investigation of the effects of the initial distribution of aggression, social identification, and cohesiveness on riot outcomes, but also an analytic tool others may use to investigate, visualize, and predict how various individual characteristics affect emergent crowd behavior.
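A compact sketch of assumptions (a), (b), and (d) in code; cohesion, assumption (c), is omitted for brevity, and the grid size, rates, and aggression-contagion rule are illustrative, not the published model.

```python
import random

# Compact sketch of assumptions (a), (b), (d): agents hold one of five
# identities, drift toward same-identity neighbors, and adjust their
# aggression toward the local group mean. All parameters are illustrative.

random.seed(1)
N, SIZE, STEPS = 200, 25, 200
agents = [{"x": random.randrange(SIZE), "y": random.randrange(SIZE),
           "ident": random.randrange(5),          # (a) five identities
           "aggr": random.random() * 0.3} for _ in range(N)]

def neighbors(a, radius=3):
    return [b for b in agents if b is not a
            and abs(b["x"] - a["x"]) <= radius and abs(b["y"] - a["y"]) <= radius]

for _ in range(STEPS):
    for a in agents:
        same = [b for b in neighbors(a) if b["ident"] == a["ident"]]
        if same:
            # (b) congregate: step one cell toward a same-identity neighbor
            tgt = random.choice(same)
            a["x"] += (tgt["x"] > a["x"]) - (tgt["x"] < a["x"])
            a["y"] += (tgt["y"] > a["y"]) - (tgt["y"] < a["y"])
            a["x"] = min(max(a["x"], 0), SIZE - 1)
            a["y"] = min(max(a["y"], 0), SIZE - 1)
            # (d) aggression pulled toward the group mean, plus noise
            mean_aggr = sum(b["aggr"] for b in same) / len(same)
            a["aggr"] += 0.1 * (mean_aggr - a["aggr"]) + random.gauss(0, 0.02)
            a["aggr"] = min(max(a["aggr"], 0.0), 1.0)

mean_aggr = sum(a["aggr"] for a in agents) / N
violent = sum(a["aggr"] > 0.7 for a in agents)
print(f"mean aggression {mean_aggr:.2f}; "
      f"{violent} of {N} agents above a 0.7 'violence' threshold")
```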
Abstract:
Colorectal cancer is the fourth most commonly diagnosed cancer in the United States. Every year about 147,000 people are diagnosed with colorectal cancer and 56,000 people lose their lives to this disease. Most hereditary nonpolyposis colorectal cancers (HNPCC) and 12% of sporadic colorectal cancers show microsatellite instability. Colorectal cancer is a multistep progressive disease. It starts from a mutation in a normal colorectal cell and grows into a clone of cells that accumulates further mutations and finally develops into a malignant tumor. In terms of molecular evolution, the process of colorectal tumor progression represents the acquisition of sequential mutations. Clinical studies use biomarkers such as microsatellites or single nucleotide polymorphisms (SNPs) to study mutation frequencies in colorectal cancer. Microsatellite data obtained from single-genome-equivalent PCR or small-pool PCR can be used to infer tumor progression. Since tumor progression is similar to population evolution, we used the coalescent approach, which is well established in population genetics, to analyze this type of data. Coalescent theory makes it possible to infer a sample's evolutionary path through the analysis of microsatellite data. The simulation results indicate that the constant-population-size pattern and the rapid-tumor-growth pattern produce different genetic polymorphism patterns. The simulation results were compared with experimental data collected from HNPCC patients. The preliminary results show that the mutation rate in 6 HNPCC patients ranges from 0.001 to 0.01. The patients' polymorphism patterns are similar to the constant-population-size pattern, which implies that tumor progression occurs through multilineage persistence rather than clonal sequential evolution. These results should be further verified using a larger dataset.
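A minimal sketch of the coalescent simulation idea under a constant population size with a stepwise mutation model; theta and the sample size are illustrative, not the patient data.

```python
import math
import random

# Minimal coalescent sketch: simulate a genealogy for n sampled cells under
# a constant-size population, drop stepwise microsatellite mutations on the
# branches, and read off the polymorphism pattern. theta and n are
# illustrative assumptions.

random.seed(7)

def poisson(lam):
    # Knuth's method, adequate for small lambda.
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def simulate_sample(n=10, theta=1.0):
    """Microsatellite repeat counts for n tips; theta = 2*N*mu."""
    lineages = [[i] for i in range(n)]   # tips descending from each lineage
    shifts = [0] * n                     # net repeat-count change per tip
    k = n
    while k > 1:
        t = random.expovariate(k * (k - 1) / 2.0)  # time to next coalescence
        for lin in lineages:
            for _ in range(poisson(theta / 2.0 * t)):
                step = random.choice((-1, 1))      # stepwise mutation model
                for tip in lin:
                    shifts[tip] += step
        i, j = sorted(random.sample(range(k), 2))  # merge two random lineages
        lineages[i] += lineages.pop(j)
        k -= 1
    return [20 + s for s in shifts]      # ancestral allele: 20 repeats

alleles = simulate_sample()
print("sampled repeat counts:", sorted(alleles))
print("distinct alleles:", len(set(alleles)))
```

Repeating such simulations under constant-size versus rapid-growth genealogies and comparing the resulting allele-frequency spectra to patient data is the kind of contrast the abstract describes.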
Abstract:
Mixture modeling is commonly used to model categorical latent variables that represent subpopulations in which population membership is unknown but can be inferred from the data. In recent years, finite mixture models have been applied to time-to-event data. However, the commonly used survival mixture model assumes that while the effects of the covariates on failure times differ across latent classes, the covariate distribution is homogeneous. The aim of this dissertation is to develop a method to examine time-to-event data in the presence of unobserved heterogeneity within a mixture modeling framework. A joint model is developed that incorporates the latent survival trajectory along with the observed information for the joint analysis of a time-to-event variable, its discrete and continuous covariates, and a latent class variable. It is assumed that both the effects of covariates on survival times and the distribution of the covariates vary across latent classes. The unobservable survival trajectories are identified by estimating the probability that a subject belongs to a particular class based on the observed information. We applied this method to a Hodgkin lymphoma study with long-term follow-up and observed four distinct latent classes in terms of long-term survival and distributions of prognostic factors. Our results from simulation studies and from the Hodgkin lymphoma study demonstrate the superiority of our joint model over the conventional survival model. This flexible inference method provides more accurate estimation and accommodates unobservable heterogeneity among individuals while taking interactions between covariates into consideration.
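A sketch of the joint-model idea in its simplest form, under strong simplifying assumptions (two latent classes, exponential event times, one Gaussian covariate, no censoring; not the dissertation's model): both the survival distribution and the covariate distribution differ by class, and membership is inferred from posterior probabilities inside an EM loop.

```python
import numpy as np

# Simplest sketch of the joint-model idea: two latent classes that differ in
# BOTH the event-time distribution and a covariate's distribution, with
# class membership inferred from posterior probabilities. Exponential event
# times, a unit-variance Gaussian covariate, and no censoring are
# simplifying assumptions.

rng = np.random.default_rng(3)
n = 2000
z = rng.random(n) < 0.4                     # latent class indicator (truth)
lam = np.where(z, 0.2, 1.0)                 # class-specific hazards
t = rng.exponential(1/lam)                  # event times
x = rng.normal(np.where(z, 2.0, 0.0), 1.0)  # class-specific covariate

# EM for the joint mixture of (t, x).
pi, l1, l2, m1, m2 = 0.5, 0.5, 1.5, 1.0, -1.0
for _ in range(200):
    # E-step: posterior class probabilities from BOTH t and x
    # (the Gaussian normalizing constant cancels between classes).
    f1 = pi * l1*np.exp(-l1*t) * np.exp(-0.5*(x-m1)**2)
    f2 = (1-pi) * l2*np.exp(-l2*t) * np.exp(-0.5*(x-m2)**2)
    w = f1 / (f1 + f2)
    # M-step: weighted maximum-likelihood updates per class.
    pi = w.mean()
    l1, l2 = w.sum()/(w*t).sum(), (1-w).sum()/((1-w)*t).sum()
    m1, m2 = (w*x).sum()/w.sum(), ((1-w)*x).sum()/(1-w).sum()

print(f"estimated class share {pi:.2f} (true 0.40); "
      f"hazards {l1:.2f}, {l2:.2f} (true 0.20, 1.00); "
      f"covariate means {m1:.2f}, {m2:.2f} (true 2.0, 0.0)")
```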
Abstract:
To understand the validity of δ18O proxy records as indicators of past temperature change, a series of experiments was conducted using an atmospheric general circulation model fitted with water isotope tracers (Community Atmosphere Model version 3.0, IsoCAM). A pre-industrial simulation was performed as the control experiment, as well as a simulation with all boundary conditions set to Last Glacial Maximum (LGM) values. Results from the pre-industrial and LGM simulations were compared to experiments in which the individual boundary conditions (greenhouse gases, ice sheet albedo and topography, sea surface temperature (SST), and orbital parameters) were changed one at a time to assess their individual impact. The experiments were designed to analyze the spatial variations of the oxygen isotopic composition of precipitation (δ18Oprecip) in response to individual climate factors. The change in topography (due to the change in land ice cover) played a significant role in reducing surface temperature and δ18Oprecip over North America. Exposed shelf areas and the ice sheet albedo further reduced Northern Hemisphere surface temperature and δ18Oprecip. A global mean cooling of 4.1 °C was simulated with combined LGM boundary conditions relative to the control simulation, in agreement with previous experiments using the fully coupled Community Climate System Model (CCSM3). Large reductions in δ18Oprecip over the LGM ice sheets were strongly linked to the temperature decrease over them. The SST and ice sheet topography changes were responsible for most of the changes in climate, and hence in the δ18Oprecip distribution, among the simulations.