51 results for Subgrid Scale Model
Abstract:
This paper presents the German version of the Short Understanding of Substance Abuse Scale (SUSS) [Humphreys et al.: Psychol Addict Behav 1996;10:38-44], the Verständnis von Störungen durch Substanzkonsum (VSS), and evaluates its psychometric properties. The VSS assesses clinicians' beliefs about the nature and treatment of substance use disorders, particularly their endorsement of psychosocial and disease orientation. The VSS was administered to 160 treatment staff members at 12 substance use disorder treatment programs in the German-speaking part of Switzerland. Because the confirmatory factor analysis of the VSS did not completely replicate the factorial structure of the SUSS, an exploratory factor analysis was undertaken. This analysis identified two factors: the Psychosocial model factor and a slightly different Disease model factor. The VSS Disease and Psychosocial subscales showed convergent and discriminant validity, as well as sufficient reliability.
Abstract:
Constructing a 3D surface model from sparse-point data is a nontrivial task. Here, we report an accurate and robust approach for reconstructing a surface model of the proximal femur from sparse-point data and a dense-point distribution model (DPDM). The problem is formulated as a three-stage optimal estimation process. The first stage, affine registration, is to iteratively estimate a scale and a rigid transformation between the mean surface model of the DPDM and the sparse input points. The estimation results of the first stage are used to establish point correspondences for the second stage, statistical instantiation, which stably instantiates a surface model from the DPDM using a statistical approach. This surface model is then fed to the third stage, kernel-based deformation, which further refines the surface model. Handling outliers is achieved by consistently employing the least trimmed squares (LTS) approach with a roughly estimated outlier rate in all three stages. If an optimal value of the outlier rate is preferred, we propose a hypothesis testing procedure to automatically estimate it. We present here our validations using four experiments: (1) a leave-one-out experiment; (2) an experiment on evaluating the present approach for handling pathology; (3) an experiment on evaluating the present approach for handling outliers; and (4) an experiment on reconstructing surface models of seven dry cadaver femurs using clinically relevant data without noise and with noise added. Our validation results demonstrate the robust performance of the present approach in handling outliers, pathology, and noise. An average 95-percentile error of 1.7–2.3 mm was found when the present approach was used to reconstruct surface models of the cadaver femurs from sparse-point data with noise added.
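The abstract only names the least trimmed squares (LTS) approach used for outlier handling; the following is a minimal, hypothetical sketch of the LTS principle, shown for a simple 1D line fit rather than the paper's three-stage surface reconstruction (the function, data, and outlier rate are illustrative assumptions):

```python
import numpy as np

def lts_line_fit(x, y, outlier_rate=0.3, n_iter=20, seed=0):
    """Least trimmed squares fit of y = a*x + b (illustrative only).

    Each iteration fits on the current inlier subset and then keeps the
    h points with the smallest squared residuals (a "concentration" step),
    so roughly outlier_rate * n points are excluded from the fit.
    """
    n = len(x)
    h = int(np.ceil((1.0 - outlier_rate) * n))            # points kept per fit
    keep = np.random.default_rng(seed).choice(n, size=h, replace=False)
    for _ in range(n_iter):
        a, b = np.polyfit(x[keep], y[keep], 1)            # fit on trimmed subset
        residuals_sq = (y - (a * x + b)) ** 2             # residuals on all points
        keep = np.argsort(residuals_sq)[:h]               # keep the h best points
    return a, b, keep

# Toy data: a line plus noise, with 30% gross outliers injected.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + 0.1 * rng.normal(size=50)
y[rng.choice(50, size=15, replace=False)] += 20.0
slope, intercept, inliers = lts_line_fit(x, y, outlier_rate=0.3)
print(f"slope ~ {slope:.2f}, intercept ~ {intercept:.2f}")
```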
Abstract:
Software must be constantly adapted to changing requirements. The time scale, abstraction level and granularity of adaptations may vary from short-term, fine-grained adaptation to long-term, coarse-grained evolution. Fine-grained, dynamic and context-dependent adaptations can be particularly difficult to realize in long-lived, large-scale software systems. We argue that, in order to effectively and efficiently deploy such changes, adaptive applications must be built on an infrastructure that is not just model-driven, but is both model-centric and context-aware. Specifically, this means that high-level, causally-connected models of the application and the software infrastructure itself should be available at run-time, and that changes may need to be scoped to the run-time execution context. We first review the dimensions of software adaptation and evolution, and then we show how model-centric design can address the adaptation needs of a variety of applications that span these dimensions. We demonstrate through concrete examples how model-centric and context-aware designs work at the level of application interface, programming language and runtime. We then propose a research agenda for a model-centric development environment that supports dynamic software adaptation and evolution.
Abstract:
BACKGROUND: Alveolar echinococcosis (AE) is a severe helminth disease affecting humans, which is caused by the fox tapeworm Echinococcus multilocularis. AE represents a serious public health issue in large regions of China, Siberia, and other regions of Asia. In Europe, a significant increase in prevalence since the 1990s is not only affecting the historically documented endemic area north of the Alps but, more recently, also neighbouring regions previously not known to be endemic. The genetic diversity of the parasite population and its distribution in Europe have now been investigated with the aim of generating a fine-tuned map of parasite variants occurring in Europe. This approach may serve as a model to study the parasite at a worldwide level. METHODOLOGY/PRINCIPAL FINDINGS: The genetic diversity of E. multilocularis was assessed based upon the tandemly repeated microsatellite marker EmsB in association with matching fox host geographical positions. Our study demonstrated a higher genetic diversity in the endemic areas north of the Alps when compared to other areas. CONCLUSIONS/SIGNIFICANCE: The study of the spatial distribution of E. multilocularis in Europe, based on 32 genetic clusters, suggests that Europe can be considered as a unique global focus of E. multilocularis, which can be schematically drawn as a central core located in Switzerland and the Swabian Jura, flanked by neighbouring regions where the parasite exhibits a lower genetic diversity. The transmission of the parasite into peripheral regions is governed by a "mainland-island" system. Moreover, the presence of similar genetic profiles in both zones indicates a founder event.
Abstract:
Breaking synoptic-scale Rossby waves (RWB) at the tropopause level are central to the daily weather evolution in the extratropics and the subtropics. RWB leads to pronounced meridional transport of heat, moisture, momentum, and chemical constituents. RWB events are manifest as elongated and narrow structures in the tropopause-level potential vorticity (PV) field. A feature-based validation approach is used to assess the representation of Northern Hemisphere RWB in present-day climate simulations carried out with the ECHAM5-HAM climate model at three different resolutions (T42L19, T63L31, and T106L31) against the ERA-40 reanalysis data set. An objective identification algorithm extracts RWB events from the isentropic PV field and allows quantifying the frequency of occurrence of RWB. The biases in the frequency of RWB are then compared to biases in the time-mean tropopause-level jet wind speeds. The ECHAM5-HAM model captures the location of the RWB frequency maxima in the Northern Hemisphere at all three resolutions. However, at coarse resolution (T42L19) the overall frequency of RWB, i.e. the frequency averaged over all seasons and the entire hemisphere, is underestimated by 28%. The higher-resolution simulations capture the overall frequency of RWB much better, with a minor difference between T63L31 and T106L31 (frequency errors of −3.5% and 6%, respectively). The number of large-size RWB events is significantly underestimated by the T42L19 experiment and well represented in the T106L31 simulation. On the local scale, however, significant differences to ERA-40 are found in the higher-resolution simulations. These differences are regionally confined and vary with the season. The most striking difference between T106L31 and ERA-40 is that ECHAM5-HAM overestimates the frequency of RWB in the subtropical Atlantic in all seasons except for spring. This bias maximum is accompanied by an equatorward extension of the subtropical westerlies.
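The frequency errors quoted above (−28%, −3.5%, 6%) are relative biases of the simulated RWB occurrence frequency against ERA-40; a minimal sketch, with made-up frequencies chosen only to reproduce the −28% figure:

```python
def relative_bias(f_model: float, f_ref: float) -> float:
    """Relative frequency bias in percent: 100 * (model - reference) / reference."""
    return 100.0 * (f_model - f_ref) / f_ref

f_era40 = 0.25   # assumed hemispheric-mean RWB occurrence frequency (reanalysis)
f_t42l19 = 0.18  # assumed coarse-resolution model frequency
print(f"T42L19 bias: {relative_bias(f_t42l19, f_era40):+.0f}%")  # -> -28%
```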
Abstract:
Previous studies have highlighted the severity of detrimental effects for life on Earth after an assumed regionally limited nuclear war. These effects are caused by climatic, chemical and radiative changes persisting for up to one decade. However, so far only a very limited number of climate model simulations have been performed, raising the question of how realistic previous computations have been. This study uses the coupled chemistry climate model (CCM) SOCOL, which belongs to a different family of CCMs than those previously used, to investigate the consequences of such a hypothetical nuclear conflict. In accordance with previous studies, the present work assumes a scenario of a nuclear conflict between India and Pakistan, each applying 50 warheads with an individual blasting power of 15 kt ("Hiroshima size") against the major population centers, resulting in the emission of tiny soot particles, which are generated in the firestorms expected in the aftermath of the detonations. Substantial uncertainties related to the calculation of likely soot emissions, particularly concerning assumptions of target fuel loading and targeting of weapons, have been addressed by simulating several scenarios, with soot emissions ranging from 1 to 12 Tg. Their high absorptivity with respect to solar radiation leads to a rapid self-lofting of the soot particles into the strato- and mesosphere within a few days after emission, where they remain for several years. Consequently, the model suggests that Earth's surface temperatures drop by several degrees Celsius due to the shielding of solar irradiance by the soot, indicating a major global cooling. In addition, there is a substantial reduction of precipitation lasting 5 to 10 yr after the conflict, depending on the magnitude of the initial soot release. Extreme cold spells associated with an increase in sea ice formation are found during Northern Hemisphere winter, exposing the continental land masses of North America and Eurasia to a cooling of several degrees. In the stratosphere, the strong heating leads to an acceleration of catalytic ozone loss and, consequently, to enhancements of UV radiation at the ground. In contrast to surface temperature and precipitation changes, which show a linear dependence on the soot burden, there is a saturation effect with respect to stratospheric ozone chemistry. Soot emissions of 5 Tg lead to an ozone column reduction of almost 50% in northern high latitudes, while emitting 12 Tg only increases ozone loss by a further 10%. In summary, this study, though using a different chemistry climate model, corroborates the previous investigations with respect to the atmospheric impacts. In addition to these persistent effects, the present study draws attention to episodic cold phases, which would likely add to the severity of human harm worldwide. The best insurance against such a catastrophic development would be the delegitimization of nuclear weapons.
Abstract:
Asteroid 4 Vesta seems to be a major intact protoplanet, with a surface composition similar to that of the HED (howardite-eucrite-diogenite) meteorites. The southern hemisphere is dominated by a giant impact scar, but previous impact models have failed to reproduce the observed topography. The recent discovery that Vesta's southern hemisphere is dominated by two overlapping basins provides an opportunity to model Vesta's topography more accurately. Here we report three-dimensional simulations of Vesta's global evolution under two overlapping planet-scale collisions. We closely reproduce its observed shape, and provide maps of impact excavation and ejecta deposition. Spiral patterns observed in the younger basin Rheasilvia, about one billion years old, are attributed to Coriolis forces during crater collapse. Surface materials exposed in the north come from a depth of about 20 kilometres, according to our models, whereas materials exposed inside the southern double-excavation come from depths of about 60–100 kilometres. If Vesta began as a layered, completely differentiated protoplanet, then our model predicts large areas of pure diogenites and olivine-rich rocks. These are not seen, possibly implying that the outer 100 kilometres or so of Vesta is composed mainly of a basaltic crust (eucrites) with ultramafic intrusions (diogenites).
Abstract:
Reproducing the characteristics and the functional responses of the blood-brain barrier (BBB) in vitro represents an important task for the research community, and would be a critical biotechnological breakthrough. The pharmaceutical and biotechnology industries provide strong demand for inexpensive and easy-to-handle in vitro BBB models to screen novel drug candidates. Recently, it was shown that canonical Wnt signaling is responsible for the induction of the BBB properties in the neonatal brain microvasculature in vivo. In the present study, following on from earlier observations, we have developed a novel model of the BBB in vitro that may be suitable for large-scale screening assays. This model is based on immortalized endothelial cell lines derived from murine and human brain, with no need for co-culture with astrocytes. To maintain the BBB endothelial cell properties, the cell lines are cultured in the presence of Wnt3a or drugs that stabilize β-catenin, or they are infected with a transcriptionally active form of β-catenin. Upon these treatments, the cell lines maintain expression of BBB-specific markers, which results in elevated transendothelial electrical resistance and reduced cell permeability. Importantly, these properties are retained for several passages in culture, and they can be reproduced and maintained in different laboratories over time. We conclude that the brain-derived endothelial cell lines that we have investigated gain their specialized characteristics upon activation of the canonical Wnt pathway. This model may thus be suitable to test the BBB permeability to chemicals or large-molecular-weight proteins, the transmigration of inflammatory cells, treatments with cytokines, and genetic manipulation.
Abstract:
Water-conducting faults and fractures were studied in the granite-hosted Äspö Hard Rock Laboratory (SE Sweden). On a scale of decametres and larger, steeply dipping faults dominate and contain a variety of different fault rocks (mylonites, cataclasites, fault gouges). On a smaller scale, somewhat less regular fracture patterns were found. Conceptual models of the fault and fracture geometries and of the properties of rock types adjacent to fractures were derived and used as input for the modelling of in situ dipole tracer tests that were conducted in the framework of the Tracer Retention Understanding Experiment (TRUE-1) on a scale of metres. After the identification of all relevant transport and retardation processes, blind predictions of the breakthroughs of conservative to moderately sorbing tracers were calculated and then compared with the experimental data. This paper provides the geological basis and model calibration, while the predictive and inverse modelling work is the topic of the companion paper [J. Contam. Hydrol. 61 (2003) 175]. The TRUE-1 experimental volume is highly fractured and contains the same types of fault rocks and alterations as on the decametric scale. The experimental flow field was modelled on the basis of a 2D-streamtube formalism with an underlying homogeneous and isotropic transmissivity field. Tracer transport was modelled using the dual porosity medium approach, which is linked to the flow model by the flow porosity. Given the substantial pumping rates in the extraction borehole, the transport domain has a maximum width of a few centimetres only. It is concluded that both the uncertainty with regard to the length of individual fractures and the detailed geometry of the network along the flowpath between injection and extraction boreholes are not critical because flow is largely one-dimensional, whether through a single fracture or a network. Process identification and model calibration were based on a single uranine breakthrough (test PDT3), which clearly showed that matrix diffusion had to be included in the model even over the short experimental time scales, evidenced by a characteristic shape of the trailing edge of the breakthrough curve. Using the geological information and therefore considering limited matrix diffusion into a thin fault gouge horizon resulted in a good fit to the experiment. On the other hand, fresh granite was found not to interact noticeably with the tracers over the time scales of the experiments. While fracture-filling gouge materials are very efficient in retarding tracers over short periods of time (hours–days), their volume is very small and, with time progressing, retardation will be dominated by altered wall rock and, finally, by fresh granite. In such rocks, both porosity (and therefore the effective diffusion coefficient) and sorption Kds are more than one order of magnitude smaller compared to fault gouge, thus indicating that long-term retardation is expected to occur but to be less pronounced.
Abstract:
Safe disposal of toxic wastes in geologic formations requires minimal water and gas movement in the vicinity of storage areas. Ventilation of repository tunnels or caverns built in solid rock can desaturate the near field up to a distance of meters from the rock surface, even when the surrounding geological formation is saturated and under hydrostatic pressure. A tunnel segment at the Grimsel test site, located in the Aare granite of the Bernese Alps (central Switzerland), has been subjected to a resaturation and, subsequently, to a controlled desaturation. Using thermocouple psychrometers (TP) and time domain reflectometry (TDR), the water potentials ψ and water contents θ were measured within the unsaturated granodiorite matrix near the tunnel wall at depths between 0 and 160 cm. During the resaturation, the water potentials in the first 30 cm from the rock surface changed within weeks from values of less than −1.5 MPa to near saturation. They returned to the negative initial values during desaturation. The dynamics of this saturation–desaturation regime could be monitored very sensitively using the thermocouple psychrometers. The TDR measurements indicated that water contents changed close to the surface, but at deeper installation depths the observed changes were within the experimental noise. The field-measured data of the desaturation cycle were used to test the predictive capabilities of the hydraulic parameter functions that were derived from the water retention characteristics ψ(θ) determined in the laboratory. A depth-invariant saturated hydraulic conductivity k_s = 3.0 × 10−11 m s−1 was estimated from the ψ(t) data at all measurement depths, using the one-dimensional, unsaturated water flow and transport model HYDRUS (Vogel et al., 1996). For individual measurement depths, the estimated k_s varied between 9.8 × 10−12 and 6.1 × 10−11 m s−1. The fitted k_s values fell within the range of previously estimated k_s for this location and led to a satisfactory description of the data, even though the model did not include transport of water vapor.
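The abstract refers to hydraulic parameter functions derived from the laboratory retention characteristics ψ(θ); HYDRUS commonly parameterizes these with the van Genuchten-Mualem model, sketched below with placeholder parameter values (α, n, and the conversion of −1.5 MPa to a pressure head are illustrative assumptions, not the fitted Grimsel values):

```python
import numpy as np

def vg_effective_saturation(psi, alpha, n):
    """van Genuchten effective saturation Se(psi); psi is a (negative) pressure head."""
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * np.abs(psi)) ** n) ** (-m)

def mualem_conductivity(se, k_s, n, l=0.5):
    """Mualem unsaturated hydraulic conductivity K(Se)."""
    m = 1.0 - 1.0 / n
    return k_s * se ** l * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

psi_head = -1.5e6 / (1000.0 * 9.81)    # -1.5 MPa expressed as metres of pressure head
se = vg_effective_saturation(psi_head, alpha=1.0e-3, n=1.8)  # placeholder alpha [1/m], n [-]
k = mualem_conductivity(se, k_s=3.0e-11, n=1.8)              # k_s value quoted in the abstract [m/s]
print(f"Se = {se:.3f},  K = {k:.2e} m/s")
```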
Abstract:
Clays and claystones are used as backfill and barrier materials in the design of waste repositories, because they act as hydraulic barriers and retain contaminants. Transport through such barriers occurs mainly by molecular diffusion. There is thus an interest in relating the diffusion properties of clays to their structural properties. In previous work, we have developed a concept for up-scaling pore-scale molecular diffusion coefficients using a grid-based model for the sample pore structure. Here we present an operational algorithm which can generate such model pore structures of polymineral materials. The obtained pore maps match the rock's mineralogical components and its macroscopic properties such as porosity, grain and pore size distributions. Representative ensembles of grains in 2D or 3D are created by a lattice Monte Carlo (MC) method, which minimizes the interfacial energy of grains starting from an initial grain distribution. Pores are generated at grain boundaries and/or within grains. The method is general and allows one to generate anisotropic structures with grains of approximately predetermined shapes, or with mixtures of different grain types. A specific focus of this study was on the simulation of clay-like materials. The generated clay pore maps were then used to derive upscaled effective diffusion coefficients for non-sorbing tracers using a homogenization technique. The large number of generated maps allowed us to check the relations between micro-structural features of clays and their effective transport parameters, as is required to explain and extrapolate experimental diffusion results. As examples, we present a set of 2D and 3D simulations and investigate the effects of nanopores within particles (interlayer pores) and micropores between particles. Archie's simple power law is followed in systems with only micropores. When nanopores are present, additional parameters are required; the data reveal that effective diffusion coefficients can be described by a sum of two power functions, related to the micro- and nanoporosity. We further used the model to investigate the relationships between particle orientation and effective transport properties of the sample.
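As an illustration of the two scalings mentioned above, a minimal sketch in which the prefactors and exponents are placeholder assumptions rather than values fitted in the study:

```python
def archie(phi, d0=1.0e-9, m=2.0):
    """Archie-type power law De = D0 * phi**m (micropore-only systems)."""
    return d0 * phi ** m

def two_porosity(phi_micro, phi_nano, d0=1.0e-9, a=1.0, m=2.0, b=0.05, n=1.5):
    """Sum of two power functions, one per porosity type (placeholder parameters)."""
    return d0 * (a * phi_micro ** m + b * phi_nano ** n)

print(f"De (micropores only):   {archie(0.15):.2e} m^2/s")
print(f"De (micro + nanopores): {two_porosity(0.10, 0.05):.2e} m^2/s")
```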
Abstract:
Decadal-to-century scale trends for a range of marine environmental variables in the upper mesopelagic layer (UML, 100–600 m) are investigated using results from seven Earth System Models forced by a high greenhouse gas emission scenario. The models as a class represent the observation-based distribution of oxygen (O2) and carbon dioxide (CO2), although major mismatches between observation-based and simulated values remain for individual models. By year 2100 all models project an increase in SST between 2 °C and 3 °C, and a decrease in the pH and in the saturation state of water with respect to calcium carbonate minerals in the UML. A decrease in the total ocean inventory of dissolved oxygen by 2% to 4% is projected by the range of models. Projected O2 changes in the UML show a complex pattern with both increasing and decreasing trends, reflecting the subtle balance of different competing factors such as circulation, production, remineralization, and temperature changes. Projected changes in the total volume of hypoxic and suboxic waters remain relatively small in all models. A widespread increase of CO2 in the UML is projected. The median of the CO2 distribution between 100 and 600 m shifts from 0.1–0.2 mol m−3 in year 1990 to 0.2–0.4 mol m−3 in year 2100, primarily as a result of the invasion of anthropogenic carbon from the atmosphere. The co-occurrence of changes in a range of environmental variables indicates the need to further investigate their synergistic impacts on marine ecosystems and Earth System feedbacks.
Abstract:
The responses of carbon dioxide (CO2) and other climate variables to an emission pulse of CO2 into the atmosphere are often used to compute the Global Warming Potential (GWP) and Global Temperature change Potential (GTP), to characterize the response timescales of Earth System models, and to build reduced-form models. In this carbon cycle-climate model intercomparison project, which spans the full model hierarchy, we quantify responses to emission pulses of different magnitudes injected under different conditions. The CO2 response shows the known rapid decline in the first few decades followed by a millennium-scale tail. For a 100 Gt-C emission pulse added to a constant CO2 concentration of 389 ppm, 25 ± 9% is still found in the atmosphere after 1000 yr; the ocean has absorbed 59 ± 12% and the land the remainder (16 ± 14%). The response in global mean surface air temperature is an increase of 0.20 ± 0.12 °C within the first twenty years; thereafter and until year 1000, temperature decreases only slightly, whereas ocean heat content and sea level continue to rise. Our best estimate for the Absolute Global Warming Potential, given by the time-integrated response in CO2 at year 100 multiplied by its radiative efficiency, is 92.5 × 10−15 yr W m−2 per kg-CO2. This value very likely (5 to 95% confidence) lies within the range of (68 to 117) × 10−15 yr W m−2 per kg-CO2. Estimates for the time-integrated response in CO2 published in the IPCC First, Second, and Fourth Assessment and our multi-model best estimate all agree within 15% during the first 100 yr. The integrated CO2 response, normalized by the pulse size, is lower for pre-industrial conditions, compared to present day, and lower for smaller pulses than for larger pulses. In contrast, the response in temperature, sea level and ocean heat content is less sensitive to these choices. Although choices of pulse size, background concentration, and model lead to uncertainties, the most important and subjective choice in determining the AGWP of CO2 and the GWP is the time horizon.
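The AGWP definition quoted above, the time-integrated CO2 response multiplied by its radiative efficiency, i.e. AGWP(H) = ∫_0^H RE · IRF_CO2(t) dt, can be sketched numerically as below; the impulse-response coefficients and the radiative efficiency are illustrative placeholders in a realistic ballpark, not the multi-model values derived in the paper:

```python
import numpy as np

# Placeholder impulse response function (IRF) for an atmospheric CO2 pulse: a
# constant term plus decaying exponentials (coefficients are illustrative).
a = [0.22, 0.25, 0.28, 0.25]         # fractions, summing to 1 at t = 0
tau = [np.inf, 350.0, 40.0, 4.0]     # e-folding timescales in years

def irf_co2(t):
    """Fraction of a CO2 pulse remaining airborne after t years."""
    return sum(ai * np.exp(-t / ti) for ai, ti in zip(a, tau))

RE = 1.7e-15   # assumed radiative efficiency [W m-2 per kg CO2 in the atmosphere]
H = 100.0      # time horizon [yr]
t = np.linspace(0.0, H, 10001)
f = irf_co2(t)
agwp = RE * float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))  # trapezoidal integral
print(f"AGWP({H:.0f} yr) ~ {agwp:.2e} yr W m-2 per kg-CO2")
```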
Abstract:
Understanding natural climate variability and its driving factors is crucial to assessing future climate change. Therefore, comparing proxy-based climate reconstructions with forcing factors as well as comparing these with paleoclimate model simulations is key to gaining insights into the relative roles of internal versus forced variability. A review of the state of modelling of the climate of the last millennium prior to the CMIP5–PMIP3 (Coupled Model Intercomparison Project Phase 5–Paleoclimate Modelling Intercomparison Project Phase 3) coordinated effort is presented and compared to the available temperature reconstructions. Simulations and reconstructions broadly agree on reproducing the major temperature changes and suggest an overall linear response to external forcing on multidecadal or longer timescales. Internal variability is found to have an important influence at hemispheric and global scales. The spatial distribution of simulated temperature changes during the transition from the Medieval Climate Anomaly to the Little Ice Age disagrees with that found in the reconstructions. Thus, either internal variability is a possible major player in shaping temperature changes through the millennium or the model simulations have problems realistically representing the response pattern to external forcing. A last millennium transient climate response (LMTCR) is defined to provide a quantitative framework for analysing the consistency between simulated and reconstructed climate. Beyond an overall agreement between simulated and reconstructed LMTCR ranges, this analysis is able to single out specific discrepancies between some reconstructions and the ensemble of simulations. The disagreement is found in the cases where the reconstructions show reduced covariability with external forcings or when they present high rates of temperature change.