972 results for Periodic boundary conditions


Relevance: 80.00%

Abstract:

Finite element analysis for prediction of bone strength. Philippe K Zysset, Enrico Dall'Ara, Peter Varga & Dieter H Pahr. BoneKEy Reports (2013) 2, Article number 386. doi:10.1038/bonekey.2013.120. Received 03 January 2013; accepted 25 June 2013; published online 07 August 2013.

Finite element (FE) analysis has been applied for the past 40 years to simulate the mechanical behavior of bone. Although several validation studies have been performed on specific anatomical sites and load cases, this study aims to review the predictability of human bone strength at the three major osteoporotic fracture sites, as quantified in recently completed in vitro studies at our former institute. Specifically, the performance of FE analysis based on clinical quantitative computed tomography (QCT) is compared with that of the current densitometric standards: bone mineral content, bone mineral density (BMD) and areal BMD (aBMD). Clinical fractures were produced by monotonic axial compression of distal radii and vertebral sections and by side loading of proximal femora. QCT-based FE models of the three bones were developed to simulate as closely as possible the boundary conditions of each experiment. For all sites, the FE methodology exhibited the lowest errors and the highest correlations in predicting experimental bone strength. Likely owing to the improved CT image resolution, the quality of the FE prediction in the peripheral skeleton using high-resolution peripheral CT was superior to that in the axial skeleton with whole-body QCT. Because of its projective and scalar nature, the performance of aBMD in predicting bone strength depended on loading mode: it was significantly inferior to FE in axial compression of radial or vertebral sections but not significantly inferior to FE in side loading of the femur. Considering the cumulated evidence from the published validation studies, it is concluded that FE models provide the most reliable surrogates of bone strength at any of the three fracture sites.
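A minimal sketch of the kind of strength-prediction comparison described above: ordinary least-squares regression of experimental strength on each surrogate, reporting R² and RMSE. All numbers are hypothetical placeholders, not data from the study.

```python
import numpy as np

def prediction_quality(predictor, measured):
    """Return R^2 and RMSE of a linear regression of measured strength on a predictor."""
    slope, intercept = np.polyfit(predictor, measured, 1)
    fitted = slope * predictor + intercept
    ss_res = np.sum((measured - fitted) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((measured - fitted) ** 2))
    return r2, rmse

# Hypothetical example: experimental femoral strength vs. two surrogates
measured = np.array([3.1, 4.2, 2.8, 5.0, 3.7, 4.6])                 # kN
fe_pred  = np.array([3.0, 4.4, 2.9, 4.8, 3.5, 4.7])                 # QCT-based FE estimates, kN
abmd     = np.array([0.71, 0.88, 0.69, 0.93, 0.80, 0.90])           # areal BMD, g/cm^2

for name, x in [("FE", fe_pred), ("aBMD", abmd)]:
    r2, rmse = prediction_quality(x, measured)
    print(f"{name}: R^2 = {r2:.2f}, RMSE = {rmse:.2f} kN")
```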

Relevance: 80.00%

Abstract:

Computed tomography (CT)-based finite element (FE) models of vertebral bodies assess fracture load in vitro better than dual-energy X-ray absorptiometry, but the boundary conditions affect the stress distribution under the endplates, which may influence ultimate load and damage localisation under post-yield strains. Therefore, HR-pQCT-based homogenised FE models of 12 vertebral bodies were subjected to axial compression with two distinct boundary conditions: embedding in polymethylmethacrylate (PMMA) and bonding to a healthy intervertebral disc (IVD) with distinct hyperelastic properties for nucleus and annulus. Bone volume fraction and fabric assessed from the HR-pQCT data were used to determine the elastic, plastic and damage behaviour of bone. Ultimate forces obtained with PMMA were 22% higher than with IVD but correlated highly (R² = 0.99). At ultimate force, distinct fractions of damage were computed in the endplates (PMMA: 6%, IVD: 70%), cortex and trabecular sub-regions, which confirms previous observations that, in contrast to PMMA embedding, failure initiates underneath the nuclei in healthy IVDs. In conclusion, axial loading of vertebral bodies via PMMA embedding rather than a healthy IVD overestimates ultimate load and leads to distinct damage localisation and failure patterns.
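The abstract specifies hyperelastic behaviour with distinct properties for nucleus and annulus but not the functional form; one widely used form for such soft-tissue models (our assumption for illustration, not necessarily the study's constitutive law) is the nearly incompressible neo-Hookean strain energy:

```latex
\Psi(\mathbf{C}) \;=\; \frac{\mu}{2}\left(\bar{I}_1 - 3\right) \;+\; \frac{\kappa}{2}\left(J - 1\right)^2,
\qquad \bar{I}_1 = J^{-2/3}\,\operatorname{tr}\mathbf{C}, \quad J = \sqrt{\det\mathbf{C}},
```

with separate shear and bulk moduli (μ, κ) assigned to nucleus and annulus.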

Relevance: 80.00%

Abstract:

The transverse broadening of an energetic jet passing through a non-Abelian plasma is believed to be described by the thermal expectation value of a light-cone Wilson loop. In this exploratory study, we measure the light-cone Wilson loop with classical lattice gauge theory simulations. We observe, as suggested by previous studies, that there are strong interactions already at short transverse distances, which may lead to more efficient jet quenching than in leading-order perturbation theory. We also verify that the asymptotics of the Wilson loop do not change qualitatively when crossing the light cone, which supports arguments in the literature that infrared contributions to jet quenching can be studied with dimensionally reduced simulations in the space-like domain. Finally we speculate on possibilities for full four-dimensional lattice studies of the same observable, perhaps by employing shifted boundary conditions in order to simulate ensembles boosted by an imaginary velocity.
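The light-cone Wilson loop itself requires a real-time, non-Abelian classical lattice simulation; as a much simpler illustration of how a rectangular Wilson loop is measured from link variables, here is a toy sketch for a 2-d compact U(1) configuration. The lattice size, the near-ordered random links and the loop dimensions are illustrative assumptions, not the setup of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-d compact U(1) lattice: one angle per link, links[mu, x, y] with mu in {0, 1}.
L = 16
links = rng.uniform(-0.3, 0.3, size=(2, L, L))   # near-ordered configuration

def wilson_loop(links, x0, y0, r, t):
    """Trace of an r x t rectangular Wilson loop starting at (x0, y0), periodic lattice."""
    phase = 0.0
    x, y = x0, y0
    for _ in range(r):                            # go +x along the bottom edge
        phase += links[0, x % L, y % L]; x += 1
    for _ in range(t):                            # go +y along the right edge
        phase += links[1, x % L, y % L]; y += 1
    for _ in range(r):                            # come back -x along the top edge
        x -= 1; phase -= links[0, x % L, y % L]
    for _ in range(t):                            # come back -y along the left edge
        y -= 1; phase -= links[1, x % L, y % L]
    return np.cos(phase)                          # Re tr U for U(1)

loops = [wilson_loop(links, x, y, 3, 3) for x in range(L) for y in range(L)]
print("average 3x3 Wilson loop:", np.mean(loops))
```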

Relevance: 80.00%

Abstract:

With improving clinical CT scanning technology, the accuracy of CT-based finite element (FE) models of the human skeleton may be improved by an enhanced description of apparent-level bone mechanical properties. Micro-finite element (μFE) modeling can be used to study the apparent elastic behavior of human cancellous bone. In this study, samples from the femur, radius and vertebral body were investigated to evaluate the predictive power of morphology–elasticity relationships and to compare them across different anatomical regions. μFE models of 701 trabecular bone cubes with a side length of 5.3 mm were analyzed using kinematic boundary conditions. Based on the FE results, four morphology–elasticity models using bone volume fraction as well as full, limited or no fabric information were calibrated for each anatomical region. The five-parameter Zysset–Curnier model using full fabric information showed excellent predictive power, with adjusted coefficients of determination (r²_adj) of 0.98, 0.95 and 0.94 for the femur, radius and vertebra data, respectively, and mean total norm errors between 14 and 20%. A constant orthotropy model and a constant transverse isotropy model, in which the elastic anisotropy is defined by the model parameters, yielded coefficients of determination between 0.90 and 0.98 with total norm errors between 16 and 25%. Neglecting fabric information and using an isotropic model led to r²_adj between 0.73 and 0.92 with total norm errors between 38 and 49%. A comparison of the model regressions revealed minor but significant (p < 0.01) differences between the fabric–elasticity model parameters calibrated for the different anatomical regions. The proposed models and identified parameters can be used in future studies to compute the apparent elastic properties of human cancellous bone for homogenized FE models.
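As a sketch of how such a fabric–elasticity relationship is evaluated, the following assembles the five-parameter Zysset–Curnier stiffness in Voigt notation in the fabric eigenbasis; the numerical parameter values are hypothetical placeholders, not the constants calibrated in the study.

```python
import numpy as np

def zysset_curnier_stiffness(rho, m, lam0, lam0p, mu0, k, l):
    """6x6 Voigt stiffness matrix of the Zysset-Curnier fabric-elasticity model,
    expressed in the fabric eigenbasis.

    rho : bone volume fraction (BV/TV)
    m   : fabric eigenvalues (m1, m2, m3), e.g. normalized so m1*m2*m3 = 1
    lam0, lam0p, mu0 : elastic constants of the fully dense reference material
    k, l             : density and fabric exponents
    """
    m = np.asarray(m, dtype=float)
    C = np.zeros((6, 6))
    for i in range(3):
        C[i, i] = (lam0 + 2.0 * mu0) * rho**k * m[i] ** (2 * l)
        for j in range(3):
            if i != j:
                C[i, j] = lam0p * rho**k * (m[i] * m[j]) ** l
    # Shear terms: Voigt index 3 <-> (2,3), 4 <-> (1,3), 5 <-> (1,2)
    pairs = [(1, 2), (0, 2), (0, 1)]
    for v, (i, j) in zip(range(3, 6), pairs):
        C[v, v] = mu0 * rho**k * (m[i] * m[j]) ** l
    return C

# Hypothetical parameter values (MPa) for illustration only, not the calibrated ones
C = zysset_curnier_stiffness(rho=0.25, m=(0.9, 1.0, 1.11),
                             lam0=8.0e3, lam0p=5.0e3, mu0=3.5e3, k=1.6, l=1.0)
print(np.round(C, 1))
```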

Relevance: 80.00%

Abstract:

We investigate the simple harmonic oscillator in a 1-d box, and the 2-d isotropic harmonic oscillator problem in a circular cavity with perfectly reflecting boundary conditions. The energy spectrum has been calculated as a function of the self-adjoint extension parameter. For sufficiently negative values of the self-adjoint extension parameter, there are bound states localized at the wall of the box or the cavity that resonate with the standard bound states of the simple harmonic oscillator or the isotropic oscillator. A free particle in a circular cavity has been studied for the sake of comparison. This work represents an application of the recent generalization of the Heisenberg uncertainty relation related to the theory of self-adjoint extensions in a finite volume.
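A minimal numerical sketch of the confined-oscillator spectrum for the simplest wall condition (Dirichlet, i.e. the standard perfectly reflecting wall); the general self-adjoint extension parameter studied in the paper corresponds to Robin-type wall conditions, which this sketch does not implement.

```python
import numpy as np

# Harmonic oscillator confined to a 1-d box [-a, a] with perfectly reflecting
# (Dirichlet) walls, solved by finite differences; hbar = m = omega = 1.
a, N = 3.0, 1000
x = np.linspace(-a, a, N + 2)[1:-1]          # interior grid points
h = x[1] - x[0]

main = 1.0 / h**2 + 0.5 * x**2               # kinetic diagonal + potential
off  = -0.5 / h**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:5]
print("lowest levels in the box :", np.round(E, 4))
print("unconfined oscillator    :", [n + 0.5 for n in range(5)])
```

For this box size the lowest levels stay close to (n + 1/2), while the higher levels, whose classical turning points approach the walls, are pushed upward.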

Relevance: 80.00%

Abstract:

In the framework of the International Partnerships in Ice Core Sciences, one of the most important targets is to retrieve an Antarctic ice core that extends over the last 1.5 million years (i.e. an ice core that reaches back into the climate era when glacial–interglacial cycles followed the obliquity cycles of the Earth). In such an ice core the annual layers of the oldest ice would be thinned by a factor of about 100, and the climatic information of a 10 000 yr interval would be contained in less than 1 m of ice. The gas record in such an Antarctic ice core can potentially reveal the role of greenhouse gas forcing in these 40 000 yr cycles. However, in addition to the extreme thinning of the annual layers, the long residence time of the trapped air in the ice and the relatively high ice temperatures near the bedrock favour diffusive exchange. To investigate the changes in the O2/N2 ratio, as well as in the trapped CO2 concentrations, we modelled the diffusive exchange of the trapped gases O2, N2 and CO2 along the vertical axis. However, the boundary conditions of a potential drilling site are not yet well constrained, and the uncertainties in the permeation coefficients of the air constituents in the ice are large. In our simulations, we set the drill-site ice thickness to 2700 m and the bedrock ice temperature to 5–10 K below the ice pressure melting point. Using these conditions and including all further uncertainties associated with the drill site and the permeation coefficients, the results suggest that in the oldest ice the precessional variations in the O2/N2 ratio will be damped by 50–100%, whereas CO2 concentration changes associated with glacial–interglacial variations will likely be conserved (simulated damping 5%). If the precessional O2/N2 signal has disappeared completely in this future ice core, orbital tuning of the ice-core age scale will be limited.
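A toy sketch of the kind of diffusive damping estimate described above, using an explicit finite-difference solution of the 1-D diffusion equation for a layered signal; the layer wavelength, diffusivity and residence time are illustrative assumptions, not the permeation coefficients or drill-site parameters of the study.

```python
import numpy as np

# Toy 1-D diffusion of a layered gas signal (e.g. an O2/N2 anomaly) in deep ice.
L_layer = 0.10            # wavelength of one thinned "precessional" cycle (m)
D       = 2e-9            # effective gas diffusivity in the ice (m^2 / yr), placeholder
T_total = 1.0e5           # residence time of the oldest ice near the bedrock (yr)

nz = 200
z = np.linspace(0.0, 4 * L_layer, nz, endpoint=False)
dz = z[1] - z[0]
c = np.sin(2 * np.pi * z / L_layer)            # initial layered signal, amplitude 1

dt = 0.4 * dz ** 2 / D                          # explicit-scheme stability limit
for _ in range(int(T_total / dt)):
    lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dz ** 2   # periodic Laplacian
    c += dt * D * lap

remaining = 0.5 * (c.max() - c.min())
print(f"remaining amplitude after {T_total:.0f} yr: {remaining:.2f}")
print(f"analytic estimate exp(-D k^2 t)          : "
      f"{np.exp(-D * (2 * np.pi / L_layer) ** 2 * T_total):.2f}")
```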

Relevance: 80.00%

Abstract:

With its smaller size, well-known boundary conditions, and the availability of detailed bathymetric data, Lake Geneva's subaquatic canyon in the Rhone Delta is an excellent analogue for understanding sedimentary processes in deep-water submarine channels. A multidisciplinary research effort was undertaken to unravel the sediment dynamics in the active canyon. This approach included innovative coring using the Russian MIR submersibles, in situ geotechnical tests, and geophysical, sedimentological, geochemical and radiometric analysis techniques. The canyon floor/levee complex is characterized by a classic turbiditic system with frequent spillover events. Sedimentary evolution in the active canyon is controlled by a complex interplay between erosion and sedimentation processes. In situ profiling of sediment strength in the upper layer, performed with a dynamic penetrometer, suggests that erosion is the governing mechanism on the proximal canyon floor while sedimentation dominates in the levee structure. Sedimentation rates progressively decrease down-channel along the levee structure, with accumulation exceeding 2.6 cm/year in the proximal levee. A decrease in the frequency of turbidites upwards along the canyon wall suggests a progressive confinement of the flow through time. The multi-proxy methodology has also enabled a qualitative slope-stability assessment of the levee structure. The rapid sediment loading, slope undercutting and over-steepening, and increased pore pressure due to high methane concentrations hint at a potential instability of the proximal levees. Furthermore, discrete sandy intervals show very high methane concentrations and low shear strength and thus could correspond to potentially weak layers prone to scarp failures.

Relevance: 80.00%

Abstract:

We solve two inverse spectral problems for star graphs of Stieltjes strings with Dirichlet and Neumann boundary conditions, respectively, at a selected vertex called the root. The root is either the central vertex or, in the more challenging problem, a pendant vertex of the star graph. At all other pendant vertices Dirichlet conditions are imposed; at the central vertex, at which a mass may be placed, continuity and Kirchhoff conditions are assumed. We derive conditions on two sets of real numbers to be the spectra of the above Dirichlet and Neumann problems. Our solution of the inverse problems is constructive: we establish algorithms to recover the mass distribution on the star graph (i.e. the point masses and the lengths of the subintervals between them) from these two spectra and from the lengths of the separate strings. If the root is a pendant vertex, the two spectra uniquely determine the parameters on the main string (i.e. the string incident to the root), provided the length of the main string is known. The mass distribution on the other edges need not be unique; the reason is the non-uniqueness caused by the non-strict interlacing of the given data in the case when the root is the central vertex. Finally, we relate our results to tree-patterned matrix inverse problems.
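For orientation, the forward problem on a single Stieltjes string (the building block of the star graph) reduces to a small generalized eigenvalue problem. The sketch below computes the two spectra, with a Dirichlet or Neumann condition at the root end, for a hypothetical mass and length distribution; it does not implement the paper's reconstruction algorithm, only the direct problem whose two interlacing spectra serve as its data.

```python
import numpy as np
from scipy.linalg import eigh

def stieltjes_spectrum(masses, lengths, root="dirichlet"):
    """Squared eigenfrequencies of a single Stieltjes string (point masses on a
    massless thread under unit tension), Dirichlet at the far end and either a
    Dirichlet or Neumann condition at the near end (the "root")."""
    m = np.asarray(masses, float)          # n point masses
    l = np.asarray(lengths, float)         # n+1 subinterval lengths
    n = len(m)
    K = np.zeros((n, n))
    for j in range(n):
        left_stiff = 1.0 / l[j] if (j > 0 or root == "dirichlet") else 0.0
        K[j, j] = left_stiff + 1.0 / l[j + 1]
        if j + 1 < n:
            K[j, j + 1] = K[j + 1, j] = -1.0 / l[j + 1]
    return eigh(K, np.diag(m), eigvals_only=True)

# Hypothetical example string: 3 masses, 4 subintervals
masses  = [1.0, 0.5, 2.0]
lengths = [0.3, 0.4, 0.2, 0.1]
print("Dirichlet spectrum:", np.round(stieltjes_spectrum(masses, lengths, "dirichlet"), 3))
print("Neumann spectrum  :", np.round(stieltjes_spectrum(masses, lengths, "neumann"), 3))
```

The two printed spectra interlace, which is exactly the kind of data the inverse algorithms take as input.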

Relevance: 80.00%

Abstract:

PURPOSE: Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability, there is a need for methods to match different PET/CT systems through elimination of this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into the image of the same object as it would have been seen by a different tomograph. The proposed new method, termed Transconvolution, compensates for the differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials.

METHODS: To solve the problem of image normalization, the theory of Transconvolution was established mathematically together with new methods to handle the point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows a Transconvolution function to be determined that converts one image into the other. This function is calculated by convolving one point spread function with the inverse of the other point spread function, which, when adhering to certain boundary conditions such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of such point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating 68Ge/68Ga-filled spheres was developed. To iteratively determine and represent such point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function for the virtual PET. The Hann window's apodization properties suppressed spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system.

RESULTS: The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The largest difference in measured activity concentration between the two PET systems, 18.2%, was found in spheres of 2 ml volume; Transconvolution reduced this difference to 1.6%. In addition to re-establishing comparability, the new method, with its parameterization of point spread functions, allowed a full characterization of the imaging properties of the examined tomographs.

CONCLUSIONS: By matching different tomographs to a virtual standardized imaging system, Transconvolution offers a new, comprehensive method for cross-calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
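A 1-D toy sketch of the transconvolution idea: the filter is the virtual system's Hann-window MTF divided by the measured system's MTF, and the Hann cut-off ensures that frequencies outside the pass-band are never amplified. The Gaussian PSF, cut-off frequency and object profile are illustrative assumptions, not the measured point spread functions of the paper.

```python
import numpy as np

n, dx = 256, 1.0                       # samples and pixel size (arbitrary units)
x = (np.arange(n) - n // 2) * dx
f = np.fft.fftfreq(n, dx)

sigma_a = 2.5                          # hypothetical Gaussian PSF width of system A
psf_a = np.exp(-0.5 * (x / sigma_a) ** 2)
psf_a /= psf_a.sum()
mtf_a = np.fft.fft(np.fft.ifftshift(psf_a))

f_c = 0.12                             # cut-off of the virtual system's Hann MTF
mtf_virtual = np.where(np.abs(f) < f_c, 0.5 * (1 + np.cos(np.pi * f / f_c)), 0.0)

# Transconvolution filter: target MTF divided by source MTF, restricted to the
# pass-band of the virtual system so that the deconvolution never amplifies noise.
H = np.zeros_like(mtf_a)
band = mtf_virtual > 0
H[band] = mtf_virtual[band] / mtf_a[band]

# Apply to a toy "lesion" profile as measured by system A
truth = ((x > -10) & (x < 10)).astype(float)          # 20-unit-wide object
measured_a = np.real(np.fft.ifft(np.fft.fft(truth) * mtf_a))
virtual_img = np.real(np.fft.ifft(np.fft.fft(measured_a) * H))
print("peak of A image:", measured_a.max().round(3),
      "| peak after transconvolution:", virtual_img.max().round(3))
```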

Relevance: 80.00%

Abstract:

Peptide nucleic acids (PNA) are mimics of nucleic acids with a peptidic backbone. Duplexes and triplexes formed between PNA and DNA or RNA possess remarkable thermal stability, are resistant to nuclease cleavage and can better discriminate mismatches. Understanding the mechanism for the tight binding between PNA and oligonucleotides is important for the design and development of better PNA-based drugs.

We have performed molecular dynamics (MD) simulations of an 8-mer PNA/DNA duplex and two analogous duplexes with chiral modification of the PNA strand (D- or L-alanine modification). MD simulations were performed with explicit water and Na+ counterions. The 1.5-ns simulations were carried out with AMBER using periodic boundary conditions and particle mesh Ewald summation. The point charges for the PNA monomers were derived by fitting electrostatic potentials, obtained from ab initio calculations, to atomic centers using RESP. The derived charges reveal a significantly altered charge distribution on the PNA bases and predict the Watson–Crick H-bonds involving PNA to be stronger. Results from NMR studies investigating H-bond interactions between DNA–DNA and DNA–PNA base pairs in a non-polar environment are consistent with this prediction. The MD simulations demonstrated that the PNA strand is more flexible than the DNA strand in the same duplex. That this flexibility might be important for duplex stability was tested by introducing modifications into the PNA backbone. Results from the MD simulations revealed dramatically altered structures for the modified PNA–DNA duplexes. Consistent with previous NMR results, we also found no intrachain hydrogen bonds between O7′ and N1′ of neighboring residues in our MD study. Our study reveals that, in addition to the lack of charge repulsion, stronger Watson–Crick hydrogen bonds together with a flexible backbone are important factors for the enhanced stability of the PNA–DNA duplex.

In a related study, we developed an application of a Gly-Gly-His-(Gly)3-PNA conjugate as an artificial nuclease. We were able to demonstrate cleavage of single-stranded DNA at a single site upon Ni(II) binding to the Gly-Gly-His tripeptide and activation of the nuclease with monoperoxyphthalic acid.
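As a small illustration of the periodic boundary conditions mentioned above (the long-range electrostatics themselves were handled with particle mesh Ewald summation in AMBER), the following sketch applies the minimum-image convention to pairwise distances in an orthorhombic box; the coordinates and box size are made up for the example.

```python
import numpy as np

def minimum_image_distances(coords, box):
    """Pairwise distances under periodic boundary conditions in an orthorhombic box,
    using the minimum-image convention (as used in MD codes alongside Ewald sums)."""
    coords = np.asarray(coords, float)
    box = np.asarray(box, float)
    diff = coords[:, None, :] - coords[None, :, :]    # all pairwise displacement vectors
    diff -= box * np.round(diff / box)                 # wrap each component to [-box/2, box/2)
    return np.sqrt((diff ** 2).sum(axis=-1))

# Hypothetical example: three atoms in a 20 x 20 x 20 A box
coords = [[1.0, 1.0, 1.0], [19.0, 19.0, 19.0], [10.0, 10.0, 10.0]]
box = [20.0, 20.0, 20.0]
print(np.round(minimum_image_distances(coords, box), 3))
# The first two atoms are ~3.46 A apart through the periodic boundary,
# not ~31 A as the raw coordinates would suggest.
```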

Relevance: 80.00%

Abstract:

Environmental policy and decision-making are characterized by complex interactions between different actors and sectors. As a rule, a stakeholder analysis is performed to understand those involved, but it has been criticized for lacking quality and consistency. This lack is remedied here by a formal social network analysis that investigates collaborative and multi-level governance settings in a rigorous way. We examine the added value of combining both elements. Our case study examines infrastructure planning in the Swiss water sector. Water supply and wastewater infrastructures are planned far into the future, usually on the basis of projections of past boundary conditions. They affect many actors, including the population, and are expensive. In view of increasing future dynamics and climate change, a more participatory and long-term planning approach is required. Our specific aims are to investigate fragmentation in water infrastructure planning, to understand how actors from different decision levels and sectors are represented, and which interests they follow. We conducted 27 semi-structured interviews with local stakeholders as well as cantonal and national actors. The network analysis confirmed our hypothesis of strong fragmentation: we found little collaboration between the water supply and wastewater sectors (confirming horizontal fragmentation), and few ties between local, cantonal, and national actors (confirming vertical fragmentation). Infrastructure planning is clearly dominated by engineers and local authorities. Little importance is placed on longer-term strategic objectives and integrated catchment planning, but these were perceived as more important in a second analysis that went beyond the typical questions of a stakeholder analysis. We conclude that linking a stakeholder analysis, comprising rarely asked questions, with a rigorous social network analysis is very fruitful and generates complementary results. This combination gave us deeper insight into the socio-political-engineering world of water infrastructure planning that is of vital importance to our well-being.
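A minimal sketch of the kind of tie-counting that a formal social network analysis adds to a stakeholder analysis, using networkx; the actors, attributes and ties below are invented for illustration and are not the study's interview data.

```python
import networkx as nx

# Toy collaboration network. Node attributes: sector and governance level.
G = nx.Graph()
actors = {
    "WS_local_1":   ("water_supply", "local"),
    "WS_local_2":   ("water_supply", "local"),
    "WW_local_1":   ("wastewater",   "local"),
    "WW_local_2":   ("wastewater",   "local"),
    "WS_canton":    ("water_supply", "cantonal"),
    "ENV_national": ("environment",  "national"),
}
for name, (sector, level) in actors.items():
    G.add_node(name, sector=sector, level=level)
G.add_edges_from([
    ("WS_local_1", "WS_local_2"), ("WW_local_1", "WW_local_2"),
    ("WS_local_1", "WS_canton"),  ("WW_local_2", "ENV_national"),
])

def share_of_bridging_ties(G, attribute):
    """Fraction of collaboration ties connecting actors with different attribute values;
    a low value indicates fragmentation along that dimension."""
    cross = sum(1 for u, v in G.edges if G.nodes[u][attribute] != G.nodes[v][attribute])
    return cross / G.number_of_edges()

print("ties bridging sectors:", share_of_bridging_ties(G, "sector"))
print("ties bridging levels :", share_of_bridging_ties(G, "level"))
```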

Relevance: 80.00%

Abstract:

Solar heat is the acknowledged driving force for climatic change. However, ice sheets are also capable of causing climatic change. This property of ice sheets derives from the facts that ice and rock are crystalline whereas the oceans and atmosphere are fluids and that ice sheets are massive enough to depress the earth's crust well below sea level. These features allow time constants for glacial flow and isostatic compensation to be much larger than those for ocean and atmospheric circulation and therefore somewhat independent of the solar variations that control this circulation. This review examines the nature of dynamic processes in ice streams that give ice sheets their degree of independent behavior and emphasizes the consequences of viscoplastic instability inherent in anisotropic polycrystalline solids such as glacial ice. Viscoplastic instability and subglacial topography are responsible for the formation of ice streams near ice sheet margins grounded below sea level. As a result the West Antarctic marine ice sheet is inherently unstable and can be rapidly carved away by calving bays which migrate up surging ice streams. Analyses of tidal flexure along floating ice stream margins, stress and velocity fields in ice streams, and ice stream boundary conditions are presented and used to interpret ERTS 1 photomosaics for West Antarctica in terms of characteristic ice sheet crevasse patterns that can be used to monitor ice stream surges and to study calving bay dynamics.

Relevance: 80.00%

Abstract:

Gravity wants to pull an ice sheet to the center of the Earth, but cannot because the Earth's crust is in the way, so ice is pushed out sideways instead. Or is it? The ice sheet "sees" nothing preventing it from spreading out except air, which is much less massive than ice. Therefore, does not ice rush forward to fill this relative vacuum; does not the relative vacuum suck ice into it, because Nature abhors a vacuum? If so, the ice sheet is not only pulled downward by gravity, it is also pulled outward by the relative vacuum. This pulling outward will be most rapid where the ice sheet encounters least resistance. The least resistance exists along the bed of ice streams, where ice-bed coupling is reduced by a basal water layer, especially if the ice stream becomes afloat and the floating part is relatively unconfined around its perimeter and unpinned to the sea floor. Ice streams are therefore fast currents of ice that develop near the margins of an ice sheet where these conditions exist. Because of these conditions, ice streams pull ice out of ice sheets and have pulling power equal to the longitudinal gravitational pulling force multiplied by the ice-stream velocity. These boundary conditions beneath and beyond ice streams can be quantified by a basal buoyancy factor that provides a life-cycle classification of ice streams into inception, growth, mature, declining and terminal stages, during which ice streams disintegrate the ice sheet. Surface profiles of ice streams are diagnostic of the stage in a life cycle and, hence, of the vitality of the ice sheet.
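The pulling-power relation stated in words above can be written compactly as follows (the symbols are ours for illustration, not the paper's notation):

```latex
P_{\text{pull}} \;=\; F_{\text{long}} \, u_{\text{stream}},
```

where F_long is the longitudinal gravitational pulling force and u_stream is the ice-stream velocity.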

Relevance: 80.00%

Abstract:

Regional climate simulations are conducted using the polar version of the fifth-generation Pennsylvania State University (PSU)–NCAR Mesoscale Model (Polar MM5) with a 60-km horizontal resolution domain over North America to explore the summer climate of the Last Glacial Maximum (LGM: 21 000 calendar years ago), when much of the continent was covered by the Laurentide Ice Sheet (LIS). Output from a tailored NCAR Community Climate Model version 3 (CCM3) simulation of the LGM climate is used to provide the initial and lateral boundary conditions for Polar MM5. LGM boundary conditions include continental ice sheets, appropriate orbital forcing, reduced CO2 concentration, paleovegetation, modified sea surface temperatures, and lowered sea level. The simulated LGM summer climate is characterized by a pronounced low-level thermal gradient along the southern margin of the LIS, resulting from the juxtaposition of the cold ice sheet and the adjacent warm, ice-free land surface. This sharp thermal gradient anchors the midtropospheric jet stream and facilitates the development of synoptic cyclones that track over the ice sheet, some of which produce copious liquid precipitation along and south of the LIS terminus. Precipitation on the southern margin is orographically enhanced as moist southerly low-level flow (resembling a contemporary Great Plains low-level jet configuration) in advance of the cyclone is drawn up the ice sheet slope. Composites of wet and dry periods on the LIS southern margin illustrate two distinctly different atmospheric flow regimes. Given the episodic nature of the summer rain events, it may be possible to reconcile the model depiction of wet conditions on the LIS southern margin during the LGM summer with the widely accepted interpretation of aridity across the Great Plains based on geological proxy evidence.

Relevance: 80.00%

Abstract:

Optimized regional climate simulations are conducted using the Polar MM5, a version of the fifth-generation Pennsylvania State University-NCAR Mesoscale Model (MM5), with a 60-km horizontal resolution domain over North America during the Last Glacial Maximum (LGM, 21 000 calendar years ago), when much of the continent was covered by the Laurentide Ice Sheet (LIS). The objective is to describe the LGM annual cycle at high spatial resolution with an emphasis on the winter atmospheric circulation. Output from a tailored NCAR Community Climate Model version 3 (CCM3) simulation of the LGM climate is used to provide the initial and lateral boundary conditions for Polar MM5. LGM boundary conditions include continental ice sheets, appropriate orbital forcing, reduced CO2 concentration, paleovegetation, modified sea surface temperatures, and lowered sea level. Polar MM5 produces a substantially different atmospheric response to the LGM boundary conditions than CCM3 and other recent GCM simulations. In particular, from November to April the upper-level flow is split around a blocking anticyclone over the LIS, with a northern branch over the Canadian Arctic and a southern branch impacting southern North America. The split flow pattern is most pronounced in January and transitions into a single, consolidated jet stream that migrates northward over the LIS during summer. Sensitivity experiments indicate that the winter split flow in Polar MM5 is primarily due to mechanical forcing by LIS, although model physics and resolution also contribute to the simulated flow configuration. Polar MM5 LGM results are generally consistent with proxy climate estimates in the western United States, Alaska, and the Canadian Arctic and may help resolve some long-standing discrepancies between proxy data and previous simulations of the LGM climate.