973 results for Adjoint boundary conditions
Abstract:
The transverse broadening of an energetic jet passing through a non-Abelian plasma is believed to be described by the thermal expectation value of a light-cone Wilson loop. In this exploratory study, we measure the light-cone Wilson loop with classical lattice gauge theory simulations. We observe, as suggested by previous studies, that there are strong interactions already at short transverse distances, which may lead to more efficient jet quenching than in leading-order perturbation theory. We also verify that the asymptotics of the Wilson loop do not change qualitatively when crossing the light cone, which supports arguments in the literature that infrared contributions to jet quenching can be studied with dimensionally reduced simulations in the space-like domain. Finally we speculate on possibilities for full four-dimensional lattice studies of the same observable, perhaps by employing shifted boundary conditions in order to simulate ensembles boosted by an imaginary velocity.
Abstract:
With improving clinical CT scanning technology, the accuracy of CT-based finite element (FE) models of the human skeleton may be ameliorated by an enhanced description of apparent level bone mechanical properties. Micro-finite element (μFE) modeling can be used to study the apparent elastic behavior of human cancellous bone. In this study, samples from the femur, radius and vertebral body were investigated to evaluate the predictive power of morphology–elasticity relationships and to compare them across different anatomical regions. μFE models of 701 trabecular bone cubes with a side length of 5.3 mm were analyzed using kinematic boundary conditions. Based on the FE results, four morphology–elasticity models using bone volume fraction as well as full, limited or no fabric information were calibrated for each anatomical region. The five-parameter Zysset–Curnier model using full fabric information showed excellent predictive power with coefficients of determination (r²adj) of 0.98, 0.95 and 0.94 for the femur, radius and vertebra data, respectively, with mean total norm errors between 14 and 20%. A constant orthotropy model and a constant transverse isotropy model, where the elastic anisotropy is defined by the model parameters, yielded coefficients of determination between 0.90 and 0.98 with total norm errors between 16 and 25%. Neglecting fabric information and using an isotropic model led to r²adj between 0.73 and 0.92 with total norm errors between 38 and 49%. A comparison of the model regressions revealed minor but significant (p<0.01) differences for the fabric–elasticity model parameters calibrated for the different anatomical regions. The proposed models and identified parameters can be used in future studies to compute the apparent elastic properties of human cancellous bone for homogenized FE models.
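The power-law structure of such fabric–elasticity relationships can be sketched in a few lines. A minimal illustration of a Zysset–Curnier-type model follows; the parameter values E0, k and l are made up for the example, not the calibrated ones reported above:

```python
import numpy as np

def zysset_curnier_moduli(bvtv, fabric_eigs, E0=15000.0, k=1.6, l=1.0):
    """Orthotropic Young's moduli (MPa) from bone volume fraction (BV/TV)
    and fabric eigenvalues m_i, normalized so that m1*m2*m3 = 1.
    E0, k and l are illustrative values, not the study's calibrated ones."""
    m = np.asarray(fabric_eigs, dtype=float)
    m = m / m.prod() ** (1.0 / 3.0)   # enforce the det(M) = 1 convention
    return E0 * bvtv ** k * m ** (2.0 * l)

# A trabecular cube with 25% bone volume fraction and mildly anisotropic fabric
E = zysset_curnier_moduli(0.25, [1.20, 0.95, 0.88])
print(E)  # stiffest along the principal fabric direction
```

The model is stiffer along directions with larger fabric eigenvalues, which is what distinguishes the full-fabric model from the isotropic one compared in the study.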
Abstract:
In the framework of the International Partnerships in Ice Core Sciences, one of the most important targets is to retrieve an Antarctic ice core that extends over the last 1.5 million years (i.e. an ice core that enters the climate era when glacial–interglacial cycles followed the obliquity cycles of the earth). In such an ice core the annual layers of the oldest ice would be thinned by a factor of about 100 and the climatic information of a 10 000 yr interval would be contained in less than 1 m of ice. The gas record in such an Antarctic ice core can potentially reveal the role of greenhouse gas forcing on these 40 000 yr cycles. However, besides the extreme thinning of the annual layers, the long residence time of the trapped air in the ice and the relatively high ice temperatures near the bedrock also favour diffusive exchange. To investigate the changes in the O2/N2 ratio, as well as the trapped CO2 concentrations, we modelled the diffusive exchange of the trapped gases O2, N2 and CO2 along the vertical axis. However, the boundary conditions of a potential drilling site are not yet well constrained and the uncertainties in the permeation coefficients of the air constituents in the ice are large. In our simulations, we have set the drill site ice thickness at 2700 m and the bedrock ice temperature at 5–10 K below the ice pressure melting point. Using these conditions and including all further uncertainties associated with the drill site and the permeation coefficients, the results suggest that in the oldest ice the precessional variations in the O2/N2 ratio will be damped by 50–100%, whereas CO2 concentration changes associated with glacial–interglacial variations will likely be conserved (simulated damping 5%). If the precessional O2/N2 signal has disappeared completely in this future ice core, orbital tuning of the ice-core age scale will be limited.
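Why thin layers are damped so much more than long glacial–interglacial cycles follows from the textbook decay rate of a sinusoidal profile under diffusion. A sketch with entirely hypothetical stand-in numbers (not the study's permeation coefficients or drill-site parameters):

```python
import numpy as np

# Hypothetical values only -- not the paper's drill-site parameters.
L = 2.0        # wavelength of the precessional O2/N2 signal in old ice (m)
D = 3.0e-7     # effective diffusivity of O2 in ice (m^2/yr), assumed
t = 4.0e5      # residence time of the trapped air (yr)

# Under pure diffusion a sinusoidal signal decays as exp(-D * k^2 * t),
# so thinner layers (larger wavenumber k) are damped far more strongly.
k = 2.0 * np.pi / L
damping = 1.0 - np.exp(-D * k ** 2 * t)
print(f"fraction of signal amplitude lost: {damping:.0%}")
```

Because the damping rate scales with k², a signal carried on metre-scale layers can be largely erased while the much longer-wavelength CO2 variations survive, which is the qualitative result the abstract reports.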
Abstract:
With its smaller size, well-known boundary conditions, and the availability of detailed bathymetric data, Lake Geneva’s subaquatic canyon in the Rhone Delta is an excellent analogue to understand sedimentary processes in deep-water submarine channels. A multidisciplinary research effort was undertaken to unravel the sediment dynamics in the active canyon. This approach included innovative coring using the Russian MIR submersibles, in situ geotechnical tests, and geophysical, sedimentological, geochemical and radiometric analysis techniques. The canyon floor/levee complex is characterized by a classic turbiditic system with frequent spillover events. Sedimentary evolution in the active canyon is controlled by a complex interplay between erosion and sedimentation processes. In situ profiling of sediment strength in the upper layer was tested using a dynamic penetrometer and suggests that erosion is the governing mechanism in the proximal canyon floor while sedimentation dominates in the levee structure. Sedimentation rates progressively decrease down-channel along the levee structure, with accumulation exceeding 2.6 cm/year in the proximal levee. A decrease in the frequency of turbidites upwards along the canyon wall suggests a progressive confinement of the flow through time. The multi-proxy methodology has also enabled a qualitative slope-stability assessment in the levee structure. The rapid sediment loading, slope undercutting and over-steepening, and increased pore pressure due to high methane concentrations hint at a potential instability of the proximal levees. Furthermore, discrete sandy intervals show very high methane concentrations and low shear strength and thus could correspond to potentially weak layers prone to scarp failures.
Abstract:
We solve two inverse spectral problems for star graphs of Stieltjes strings with Dirichlet and Neumann boundary conditions, respectively, at a selected vertex called root. The root is either the central vertex or, in the more challenging problem, a pendant vertex of the star graph. At all other pendant vertices Dirichlet conditions are imposed; at the central vertex, at which a mass may be placed, continuity and Kirchhoff conditions are assumed. We derive conditions on two sets of real numbers to be the spectra of the above Dirichlet and Neumann problems. Our solution for the inverse problems is constructive: we establish algorithms to recover the mass distribution on the star graph (i.e. the point masses and lengths of subintervals between them) from these two spectra and from the lengths of the separate strings. If the root is a pendant vertex, the two spectra uniquely determine the parameters on the main string (i.e. the string incident to the root) if the length of the main string is known. The mass distribution on the other edges need not be unique; the reason for this is the non-uniqueness caused by the non-strict interlacing of the given data in the case when the root is the central vertex. Finally, we relate our results to tree-patterned matrix inverse problems.
Abstract:
PURPOSE Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability there is a need for methods to match different PET/CT systems through elimination of this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph to another image of the same object as it would have been seen by a different tomograph. The proposed new method, termed Transconvolution, compensates for differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials. METHODS To solve the problem of image normalization, the theory of Transconvolution was mathematically established together with new methods to handle point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows determining a Transconvolution function to convert one image into the other. This function is calculated by convolving one point spread function with the inverse of the other point spread function which, when adhering to certain boundary conditions such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of such point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating (68)Ge/(68)Ga filled spheres was developed.
To iteratively determine and represent such point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function for the virtual PET. The Hann window's apodization properties suppressed high spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems to nearly identical data sets, as they would be imaged by the virtual PET system. RESULTS The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The highest difference in measured activity concentration between the two different PET systems, 18.2%, was found in spheres of 2 ml volume. Transconvolution reduced this difference to 1.6%. In addition to reestablishing comparability, the new method with its parameterization of point spread functions allowed a full characterization of imaging properties of the examined tomographs. CONCLUSIONS By matching different tomographs to a virtual standardized imaging system, Transconvolution opens a new comprehensive method for cross calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
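The core operation described above — convolving with the target point spread function and the inverse of the source one — amounts to a division of transfer functions in Fourier space. A minimal 1D sketch with Gaussian PSFs (all widths illustrative; the real method uses measured PSFs and a Hann-windowed virtual system):

```python
import numpy as np

n = 256
x = np.arange(n) - n // 2

def gaussian_psf(sigma):
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

psf_a = gaussian_psf(2.0)   # "scanner A" (illustrative width)
psf_b = gaussian_psf(3.0)   # target system, e.g. the virtual PET

otf_a = np.fft.fft(np.fft.ifftshift(psf_a))
otf_b = np.fft.fft(np.fft.ifftshift(psf_b))

# A small "lesion" as imaged by scanner A
obj = np.zeros(n)
obj[n // 2 - 3:n // 2 + 3] = 1.0
img_a = np.real(np.fft.ifft(np.fft.fft(obj) * otf_a))

# Transconvolution kernel in Fourier space: OTF_b / OTF_a. This is only
# stable because the target OTF vanishes faster than the source OTF --
# the role played by the band-limiting Hann window in the paper.
img_b = np.real(np.fft.ifft(np.fft.fft(img_a) * otf_b / otf_a))

# Reference: the same object imaged directly by system B
ref_b = np.real(np.fft.ifft(np.fft.fft(obj) * otf_b))
print(np.max(np.abs(img_b - ref_b)))  # agreement up to rounding error
```

The direction matters: one can only map toward the system with the lower resolution, which is why the standardized virtual PET must exert a common, well-defined partial volume effect rather than attempt to remove it.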
Abstract:
Environmental policy and decision-making are characterized by complex interactions between different actors and sectors. As a rule, a stakeholder analysis is performed to understand those involved, but it has been criticized for lacking quality and consistency. This lack is remedied here by a formal social network analysis that investigates collaborative and multi-level governance settings in a rigorous way. We examine the added value of combining both elements. Our case study examines infrastructure planning in the Swiss water sector. Water supply and wastewater infrastructures are planned far into the future, usually on the basis of projections of past boundary conditions. They affect many actors, including the population, and are expensive. In view of increasing future dynamics and climate change, a more participatory and long-term planning approach is required. Our specific aims are to investigate fragmentation in water infrastructure planning, to understand how actors from different decision levels and sectors are represented, and which interests they follow. We conducted 27 semi-structured interviews with local stakeholders as well as cantonal and national actors. The network analysis confirmed our hypothesis of strong fragmentation: we found little collaboration between the water supply and wastewater sectors (confirming horizontal fragmentation), and few ties between local, cantonal, and national actors (confirming vertical fragmentation). Infrastructure planning is clearly dominated by engineers and local authorities. Little importance is placed on longer-term strategic objectives and integrated catchment planning, although these were perceived as more important in a second analysis that went beyond the typical questions of a stakeholder analysis. We conclude that linking a stakeholder analysis, comprising rarely asked questions, with a rigorous social network analysis is very fruitful and generates complementary results.
This combination gave us deeper insight into the socio-political-engineering world of water infrastructure planning that is of vital importance to our well-being.
Abstract:
Solar heat is the acknowledged driving force for climatic change. However, ice sheets are also capable of causing climatic change. This property of ice sheets derives from the facts that ice and rock are crystalline whereas the oceans and atmosphere are fluids and that ice sheets are massive enough to depress the earth's crust well below sea level. These features allow time constants for glacial flow and isostatic compensation to be much larger than those for ocean and atmospheric circulation and therefore somewhat independent of the solar variations that control this circulation. This review examines the nature of dynamic processes in ice streams that give ice sheets their degree of independent behavior and emphasizes the consequences of viscoplastic instability inherent in anisotropic polycrystalline solids such as glacial ice. Viscoplastic instability and subglacial topography are responsible for the formation of ice streams near ice sheet margins grounded below sea level. As a result the West Antarctic marine ice sheet is inherently unstable and can be rapidly carved away by calving bays which migrate up surging ice streams. Analyses of tidal flexure along floating ice stream margins, stress and velocity fields in ice streams, and ice stream boundary conditions are presented and used to interpret ERTS 1 photomosaics for West Antarctica in terms of characteristic ice sheet crevasse patterns that can be used to monitor ice stream surges and to study calving bay dynamics.
Abstract:
Gravity wants to pull an ice sheet to the center of the Earth, but cannot because the Earth's crust is in the way, so ice is pushed out sideways instead. Or is it? The ice sheet "sees" nothing preventing it from spreading out except air, which is much less massive than ice. Therefore, does not ice rush forward to fill this relative vacuum; does not the relative vacuum suck ice into it, because Nature abhors a vacuum? If so, the ice sheet is not only pulled downward by gravity, it is also pulled outward by the relative vacuum. This pulling outward will be most rapid where the ice sheet encounters least resistance. The least resistance exists along the bed of ice streams, where ice-bed coupling is reduced by a basal water layer, especially if the ice stream becomes afloat and the floating part is relatively unconfined around its perimeter and unpinned to the sea floor. Ice streams are therefore fast currents of ice that develop near the margins of an ice sheet where these conditions exist. Because of these conditions, ice streams pull ice out of ice sheets and have pulling power equal to the longitudinal gravitational pulling force multiplied by the ice-stream velocity. These boundary conditions beneath and beyond ice streams can be quantified by a basal buoyancy factor that provides a life-cycle classification of ice streams into inception, growth, mature, declining and terminal stages, during which ice streams disintegrate the ice sheet. Surface profiles of ice streams are diagnostic of the stage in a life cycle and, hence, of the vitality of the ice sheet.
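The pulling-power definition above is simply force times velocity. As a toy illustration with entirely made-up magnitudes:

```python
# "Pulling power" = longitudinal gravitational pulling force x ice-stream
# velocity. All magnitudes below are hypothetical, for illustration only.
SECONDS_PER_YEAR = 3.156e7

force_per_width = 1.0e8              # N per metre of ice-stream width, assumed
velocity = 500.0 / SECONDS_PER_YEAR  # an ice stream moving 500 m/yr, in m/s

pulling_power = force_per_width * velocity  # W per metre of width
print(f"{pulling_power:.0f} W per metre of ice-stream width")
```

Because the product involves velocity, a fast mature-stage ice stream exerts far more pulling power than a sluggish inception- or terminal-stage one, which is what makes the quantity useful for the life-cycle classification.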
Abstract:
Regional climate simulations are conducted using the Polar fifth-generation Pennsylvania State University (PSU)-NCAR Mesoscale Model (MM5) with a 60-km horizontal resolution domain over North America to explore the summer climate of the Last Glacial Maximum (LGM: 21 000 calendar years ago), when much of the continent was covered by the Laurentide Ice Sheet (LIS). Output from a tailored NCAR Community Climate Model version 3 (CCM3) simulation of the LGM climate is used to provide the initial and lateral boundary conditions for Polar MM5. LGM boundary conditions include continental ice sheets, appropriate orbital forcing, reduced CO2 concentration, paleovegetation, modified sea surface temperatures, and lowered sea level. The simulated LGM summer climate is characterized by a pronounced low-level thermal gradient along the southern margin of the LIS resulting from the juxtaposition of the cold ice sheet and adjacent warm ice-free land surface. This sharp thermal gradient anchors the midtropospheric jet stream and facilitates the development of synoptic cyclones that track over the ice sheet, some of which produce copious liquid precipitation along and south of the LIS terminus. Precipitation on the southern margin is orographically enhanced as moist southerly low-level flow (resembling a contemporary, Great Plains low-level jet configuration) in advance of the cyclone is drawn up the ice sheet slope. Composites of wet and dry periods on the LIS southern margin illustrate two distinctly different atmospheric flow regimes. Given the episodic nature of the summer rain events, it may be possible to reconcile the model depiction of wet conditions on the LIS southern margin during the LGM summer with the widely accepted interpretation of aridity across the Great Plains based on geological proxy evidence.
Abstract:
Optimized regional climate simulations are conducted using the Polar MM5, a version of the fifth-generation Pennsylvania State University-NCAR Mesoscale Model (MM5), with a 60-km horizontal resolution domain over North America during the Last Glacial Maximum (LGM, 21 000 calendar years ago), when much of the continent was covered by the Laurentide Ice Sheet (LIS). The objective is to describe the LGM annual cycle at high spatial resolution with an emphasis on the winter atmospheric circulation. Output from a tailored NCAR Community Climate Model version 3 (CCM3) simulation of the LGM climate is used to provide the initial and lateral boundary conditions for Polar MM5. LGM boundary conditions include continental ice sheets, appropriate orbital forcing, reduced CO2 concentration, paleovegetation, modified sea surface temperatures, and lowered sea level. Polar MM5 produces a substantially different atmospheric response to the LGM boundary conditions than CCM3 and other recent GCM simulations. In particular, from November to April the upper-level flow is split around a blocking anticyclone over the LIS, with a northern branch over the Canadian Arctic and a southern branch impacting southern North America. The split flow pattern is most pronounced in January and transitions into a single, consolidated jet stream that migrates northward over the LIS during summer. Sensitivity experiments indicate that the winter split flow in Polar MM5 is primarily due to mechanical forcing by LIS, although model physics and resolution also contribute to the simulated flow configuration. Polar MM5 LGM results are generally consistent with proxy climate estimates in the western United States, Alaska, and the Canadian Arctic and may help resolve some long-standing discrepancies between proxy data and previous simulations of the LGM climate.
Abstract:
This study examines how different microphysical parameterization schemes influence orographically induced precipitation and the distributions of hydrometeors and water vapour for midlatitude summer conditions in the Weather Research and Forecasting (WRF) model. A high-resolution two-dimensional idealized simulation is used to assess the differences between the schemes, in which a moist air flow is interacting with a bell-shaped 2 km high mountain. Periodic lateral boundary conditions are chosen to recirculate atmospheric water in the domain. It is found that the 13 selected microphysical schemes conserve the water in the model domain. The gain or loss of water is less than 0.81% over a simulation time interval of 61 days. The differences of the microphysical schemes in terms of the distributions of water vapour, hydrometeors and accumulated precipitation are presented and discussed. The Kessler scheme, the only scheme without ice-phase processes, shows final values of cloud liquid water 14 times greater than the other schemes. The differences among the other schemes are not as extreme, but they still differ by up to 79% in water vapour, up to a factor of 10 in hydrometeors and up to 64% in accumulated precipitation at the end of the simulation. The microphysical schemes also differ in the surface evaporation rate. The WRF single-moment 3-class scheme has the highest surface evaporation rate, compensated by the highest precipitation rate. The different distributions of hydrometeors and water vapour of the microphysical schemes induce differences of up to 49 W m−2 in the downwelling shortwave radiation and up to 33 W m−2 in the downwelling longwave radiation.
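A water-budget closure check of the kind described — atmospheric water plus accumulated surface precipitation staying constant under periodic lateral boundaries — can be sketched with hypothetical domain diagnostics (not actual WRF output):

```python
# Hypothetical domain-integrated water diagnostics (kg) at the start and
# end of a run with periodic lateral boundaries (no lateral fluxes).
initial = {"vapour": 9.0e11, "cloud": 4.0e10, "rain": 1.0e10,
           "ice": 2.0e10, "snow": 3.0e10}
final   = {"vapour": 8.1e11, "cloud": 6.0e10, "rain": 2.0e10,
           "ice": 3.0e10, "snow": 4.0e10}
accumulated_precip = 3.95e10   # water removed from the air to the surface

total0 = sum(initial.values())
total1 = sum(final.values()) + accumulated_precip
gain_or_loss = abs(total1 - total0) / total0
print(f"budget imbalance: {gain_or_loss:.2%}")
```

With recirculating boundaries, any imbalance beyond numerical noise points to non-conservative microphysics, which is why the 0.81% bound quoted above is a meaningful consistency result.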
Abstract:
We describe an extension to the SOFTSUSY program that provides for the calculation of the sparticle spectrum in the Next-to-Minimal Supersymmetric Standard Model (NMSSM), where a chiral superfield that is a singlet of the Standard Model gauge group is added to the Minimal Supersymmetric Standard Model (MSSM) fields. Often, a Z3 symmetry is imposed upon the model. SOFTSUSY can calculate the spectrum in this case as well as the case where general Z3-violating terms are added to the soft supersymmetry breaking terms and the superpotential. The user provides a theoretical boundary condition for the couplings and mass terms of the singlet. Radiative electroweak symmetry breaking data along with electroweak and CKM matrix data are used as weak-scale boundary conditions. The renormalisation group equations are solved numerically between the weak scale and a high energy scale using a nested iterative algorithm. This paper serves as a manual to the NMSSM mode of the program, detailing the approximations and conventions used.
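The nested iteration between weak-scale data and a high-scale boundary condition can be caricatured with a single one-loop coupling — a toy beta function standing in for the actual coupled NMSSM RGEs that SOFTSUSY solves:

```python
import math

B = 3.0 / (16.0 * math.pi ** 2)   # toy one-loop beta-function coefficient

def run(g, t0, t1, steps=400):
    """Integrate dg/dt = B * g**3 from t0 to t1 (t = ln mu) with RK4."""
    h = (t1 - t0) / steps
    f = lambda y: B * y ** 3
    for _ in range(steps):
        k1 = f(g); k2 = f(g + 0.5 * h * k1)
        k3 = f(g + 0.5 * h * k2); k4 = f(g + h * k3)
        g += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return g

t_weak, t_high = math.log(91.0), math.log(2.0e16)
g_weak_target = 0.65   # "measured" weak-scale coupling (toy input)

# Iterate: guess the high-scale value, run down to the weak scale, correct
# by the mismatch with the weak-scale input, and repeat until converged --
# the same fixed-point structure as a nested RGE solver.
g_high = g_weak_target
for _ in range(60):
    g_high += g_weak_target - run(g_high, t_high, t_weak)
print(f"high-scale boundary value: g = {g_high:.4f}")
```

In the real program each iteration also recomputes the spectrum and threshold corrections, but the convergence logic — alternating runs between the two scales until both boundary conditions are satisfied — is the same.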
Abstract:
Life expectancy continuously increases, but our society faces age-related conditions. Among musculoskeletal diseases, osteoporosis, associated with a risk of vertebral fracture, and degenerative intervertebral disc (IVD) disease are painful pathologies responsible for tremendous healthcare costs. Hence, reliable diagnostic tools are necessary to plan a treatment or follow up its efficacy. Yet, radiographic and MRI techniques, respectively the clinical standards for evaluation of bone strength and IVD degeneration, are unspecific and not objective. Increasingly used in biomedical engineering, CT-based finite element (FE) models constitute the state of the art for vertebral strength prediction. However, as non-invasive biomechanical evaluation and personalised FE models of the IVD are not available, rigid boundary conditions (BCs) are applied on the FE models to avoid uncertainties of disc degeneration that might bias the predictions. Moreover, considering the impact of low back pain, the biomechanical status of the IVD is needed as a criterion for early disc degeneration. Thus, the first FE study focuses on two rigid BCs applied on the vertebral bodies during compression tests of cadaver vertebral bodies: vertebral sections and PMMA embedding. The second FE study highlights the large influence of the intervertebral disc’s compliance on the vertebral strength, damage distribution and its initiation. The third study introduces a new protocol for normalisation of the IVD stiffness in compression, torsion and bending using MRI-based data to account for its morphology. In the last study, a new criterion (Otsu threshold) for disc degeneration based on quantitative MRI data (axial T2 map) is proposed. The results show that vertebral strength and damage distribution computed with the two rigid BCs are identical. Yet, large discrepancies in strength and damage localisation were observed when the vertebral bodies were loaded via IVDs.
The normalisation protocol attenuated the effect of geometry on the IVD stiffnesses without suppressing it completely. Finally, the Otsu threshold computed in the posterior part of the annulus fibrosus was related to the disc biomechanics and meets the objectivity and simplicity required for a clinical application. In conclusion, the stiffness normalisation protocol necessary for consistent IVD comparisons and the relation found between degeneration, the mechanical response of the IVD and the Otsu threshold lead the way for non-invasive evaluation of the biomechanical status of the IVD. As the FE prediction of vertebral strength is largely influenced by the IVD conditions, these data could also improve future FE models of the osteoporotic vertebra.
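Otsu's threshold, the degeneration criterion named above, picks the intensity cut that maximizes the between-class variance of a histogram. A self-contained sketch on synthetic T2-like values (the study computes it on axial T2 maps of the annulus fibrosus; the two populations below are invented for illustration):

```python
import numpy as np

def otsu_threshold(values, bins=128):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the intensity histogram."""
    hist, edges = np.histogram(values, bins=bins)
    mids = 0.5 * (edges[:-1] + edges[1:])
    w = hist / hist.sum()
    best_t, best_var = mids[0], -1.0
    for i in range(1, bins):
        w0, w1 = w[:i].sum(), w[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (w[:i] * mids[:i]).sum() / w0
        mu1 = (w[i:] * mids[i:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2
        if between > best_var:
            best_var, best_t = between, mids[i]
    return best_t

# Synthetic bimodal "T2 map" values (ms): dehydrated vs hydrated tissue
rng = np.random.default_rng(0)
t2 = np.concatenate([rng.normal(40, 5, 2000), rng.normal(90, 10, 2000)])
thr = otsu_threshold(t2)
print(f"Otsu threshold: {thr:.1f} ms")
```

Because the threshold is computed from the histogram itself, it needs no operator-chosen cut-off, which is the objectivity argument made in the abstract.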
Abstract:
The sensitivity of the gas flow field to changes in different initial conditions has been studied for the case of a highly simplified cometary nucleus model. The nucleus model simulated a homogeneously outgassing sphere with a more active ring around an axis of symmetry. The varied initial conditions were the number density of the homogeneous region, the surface temperature, and the composition of the flow (varying amounts of H2O and CO2) from the active ring. The sensitivity analysis was performed using the Polynomial Chaos Expansion (PCE) method. Direct Simulation Monte Carlo (DSMC) was used for the flow, thereby allowing strong deviations from local thermal equilibrium. The PCE approach can be used to produce a sensitivity analysis with only four runs per modified input parameter and allows one to study and quantify non-linear responses of measurable parameters to linear changes in the input over a wide range. Hence the PCE allows one to obtain a functional relationship between the flow field properties at every point in the inner coma and the input conditions. It is shown, for example, that the velocity and the temperature of the background gas are not simply linear functions of the initial number density at the source. As probably expected, the main influence on a resulting flow field parameter is the corresponding initial parameter (i.e. the initial number density determines the background number density, the temperature of the surface determines the flow field temperature, etc.). However, the velocity of the flow field is also influenced by the surface temperature, while the number density is not sensitive to the surface temperature at all in our model set-up. Another example is the change in the composition of the flow over the active area. Such changes can be seen in the velocity but, again, not in the number density.
Although this study uses only a simple test case, we suggest that the approach, when applied to a real case in 3D, should assist in identifying the sensitivity of gas parameters measured in situ by, for example, the Rosetta spacecraft to the surface boundary conditions and vice versa.
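The "four runs per modified input parameter" corresponds to fitting a cubic surrogate in the scaled input. A minimal non-intrusive 1D PCE sketch, with a toy analytic response standing in for the DSMC runs:

```python
import numpy as np
from numpy.polynomial import legendre

# Toy "simulator": gas speed as a nonlinear function of a source
# parameter xi scaled to [-1, 1]; stands in for a full DSMC run.
def simulator(xi):
    return 600.0 + 80.0 * xi + 25.0 * xi ** 2   # m/s, made-up response

# Non-intrusive PCE: four runs at Gauss-Legendre-type nodes, cubic fit
# in the Legendre basis (orthogonal for a uniform input distribution).
nodes = np.array([-0.861136, -0.339981, 0.339981, 0.861136])
runs = simulator(nodes)
coeffs = legendre.legfit(nodes, runs, deg=3)

# A non-zero second coefficient flags a genuinely nonlinear (quadratic)
# response -- the kind of behaviour the PCE analysis quantifies.
print(np.round(coeffs, 3))
```

Repeating this fit at every grid cell of the flow field yields exactly the kind of functional relationship between inner-coma properties and input conditions that the abstract describes.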