76 results for INDENTATION EXPERIMENTS
Abstract:
Localized Magnetic Resonance Spectroscopy (MRS) is in widespread use for clinical brain research. Standard acquisition sequences to obtain one-dimensional spectra suffer from substantial overlap of spectral contributions from many metabolites. Therefore, specially tuned editing sequences or two-dimensional acquisition schemes are applied to extend the information content. Tuning specific acquisition parameters makes the sequences more efficient or more specific for certain target metabolites. Cramér-Rao bounds have been used in other fields for the optimization of experiments and are now shown to be very useful as design criteria for localized MRS sequence optimization. The principle is illustrated for one- and two-dimensional MRS, in particular the 2D separation experiment, where the usual restriction to equidistant echo time spacings and equal acquisition times per echo time can be abolished. Particular emphasis is placed on optimizing experiments for quantification of GABA and glutamate. The basic principles are verified by Monte Carlo simulations and in vivo for repeated acquisitions of generalized two-dimensional separation brain spectra obtained from healthy subjects, expanded by bootstrapping for a better definition of the quantification uncertainties.
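As an illustration of the design criterion, here is a minimal sketch of a Cramér-Rao lower-bound computation for a toy signal model (a single damped sinusoid with assumed parameter values; the metabolite basis and acquisition models of the study are far richer). The bound follows from the Fisher information matrix built from the model's parameter derivatives:

```python
import numpy as np

def crlb(jac, noise_std):
    """Cramér-Rao lower bounds on parameter standard deviations.

    jac: (n_samples, n_params) partial derivatives of the model signal
    with respect to each parameter, evaluated at the true values.
    """
    fisher = jac.T @ jac / noise_std**2            # Fisher information
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

# Toy model: s(t) = a * exp(-t/T2) * cos(2*pi*f*t), parameters (a, T2, f).
t = np.linspace(0.0, 0.5, 512)                     # acquisition grid (s)
a, T2, f = 1.0, 0.08, 120.0
decay, carrier = np.exp(-t / T2), np.cos(2 * np.pi * f * t)
jac = np.column_stack([
    decay * carrier,                                          # ds/da
    a * t / T2**2 * decay * carrier,                          # ds/dT2
    -2 * np.pi * a * t * decay * np.sin(2 * np.pi * f * t),   # ds/df
])
print(crlb(jac, noise_std=0.05))
```

Comparing such bounds across candidate acquisition schemes, rather than assuming equidistant echo-time spacings, is the kind of optimization the abstract describes.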
Abstract:
A major objective in ecology is to find general patterns, and to establish the rules and underlying mechanisms that generate those patterns. Nevertheless, most of our current insights in ecology are based on case studies of a single or few species, whereas multi-species experimental studies remain rare. We underline the power of the multi-species experimental approach for addressing general ecological questions, e.g. on species' environmental responses or on patterns of among- and within-species variation. We present simulations showing that the accuracy of estimates of between-group differences is increased by maximizing the number of species rather than the number of populations or individuals per species (see the sketch below). Thus, the more species a multi-species experiment includes, the more powerful it is. In addition, we discuss some inevitable methodological challenges of multi-species experiments. While we acknowledge the value of single- or few-species experiments, we strongly advocate the use of multi-species experiments for addressing ecological questions at a more general level.
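The simulation argument can be sketched with a toy Monte Carlo experiment (all variance components and sample sizes are assumptions for illustration): with the total number of individuals per group held fixed, spreading the effort over more species gives a less variable estimate of the between-group difference, because among-species variation dominates the error.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_diff_error(n_species, n_ind, true_diff=1.0,
                     sd_species=1.0, sd_ind=1.0, n_rep=2000):
    """Monte Carlo spread of the estimated between-group difference when
    each group is sampled with n_species species of n_ind individuals."""
    errs = []
    for _ in range(n_rep):
        means = []
        for offset in (0.0, true_diff):
            species_eff = rng.normal(offset, sd_species, n_species)
            y = rng.normal(species_eff[:, None], sd_ind, (n_species, n_ind))
            means.append(y.mean())
        errs.append(means[1] - means[0] - true_diff)
    return np.std(errs)

# Same total effort (64 individuals per group), different allocation:
print(group_diff_error(n_species=16, n_ind=4))   # many species: smaller error
print(group_diff_error(n_species=4, n_ind=16))   # few species: larger error
```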
Abstract:
1. Biodiversity-ecosystem functioning (BEF) experiments address ecosystem-level consequences of species loss by comparing communities of high species richness with communities from which species have been gradually eliminated. BEF experiments originally started with microcosms in the laboratory and with grassland ecosystems. A new frontier in experimental BEF research is manipulating tree diversity in forest ecosystems, compelling researchers to think big and comprehensively. 2. We present and discuss some of the major issues to be considered in the design of BEF experiments with trees and illustrate these with a new forest biodiversity experiment established in subtropical China (Xingangshan, Jiangxi Province) in 2009/2010. Using a pool of 40 tree species, extinction scenarios were simulated with tree richness levels of 1, 2, 4, 8 and 16 species on a total of 566 plots of 25.8 x 25.8 m each. 3. The goal of this experiment is to estimate effects of tree and shrub species richness on carbon storage and soil erosion; therefore, the experiment was established on sloped terrain. The following important design choices were made: (i) establishing many small rather than fewer larger plots, (ii) using high planting density and random mixing of species rather than lower planting density and patchwise mixing of species, (iii) establishing a map of the initial 'ecoscape' to characterize site heterogeneity before the onset of biodiversity effects and (iv) manipulating tree species richness not only in random but also in trait-oriented extinction scenarios. 4. Data management and analysis are particularly challenging in BEF experiments with their hierarchical designs nesting individuals within species populations, within plots, within species compositions. Statistical analysis best proceeds by partitioning these random terms into fixed-term contrasts, for example, species composition into contrasts for species richness and the presence of particular functional groups, which can then be tested against the remaining random variation among compositions (see the sketch below). 5. We conclude that forest BEF experiments provide exciting and timely research options. They especially require careful thinking to allow multiple disciplines to measure and analyse data jointly and effectively. Achieving specific research goals and synergy with previous experiments involves trade-offs between different designs and requires manifold design decisions.
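A minimal sketch of the analysis strategy in point 4, using simulated plot data and statsmodels' MixedLM (an assumed setup for illustration, not the project's actual pipeline): the fixed richness contrast is tested against the remaining random variation among species compositions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated data: plots nested in species compositions, each composition
# having a richness level drawn from the experimental gradient.
rows = []
for comp in range(40):
    richness = rng.choice([1, 2, 4, 8, 16])
    comp_eff = rng.normal(0.0, 0.5)          # random composition effect
    for plot in range(3):
        y = 2.0 + 0.8 * np.log2(richness) + comp_eff + rng.normal(0.0, 0.3)
        rows.append({"composition": comp, "richness": richness, "y": y})
df = pd.DataFrame(rows)

# Fixed richness contrast, with compositions as the random grouping level
# against which the contrast is effectively tested.
fit = smf.mixedlm("y ~ np.log2(richness)", df, groups="composition").fit()
print(fit.summary())
```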
Abstract:
Relationships between mineralization, collagen orientation and indentation modulus were investigated in bone structural units from the mid-shaft of human femora using a site-matched design. Mineral mass fraction, collagen fibril angle and indentation moduli were measured at registered anatomical sites using backscattered electron imaging, polarized light microscopy and nano-indentation, respectively. Theoretical indentation moduli were calculated with a homogenization model from the quantified mineral densities and mean collagen fibril orientations. The average indentation moduli predicted from local mineralization and collagen fiber arrangement were not significantly different from the averages measured experimentally with nanoindentation (p=0.9). Surprisingly, no substantial correlation of the measured indentation moduli with tissue mineralization and/or collagen fiber arrangement was found. Nano-porosity, micro-damage, collagen cross-links, non-collagenous proteins or other parameters may therefore affect the indentation measurements. Additional testing/simulation methods need to be considered to properly understand the variability of indentation moduli beyond the mineralization and collagen arrangement in bone structural units.
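The study used a dedicated homogenization model; as a stand-in, the following sketch reproduces the site-matched logic with a simple Voigt-Reuss-Hill two-phase average and synthetic data (elastic constants, volume fractions and noise level are all assumptions). It contrasts the two statistics the abstract compares: mean-level agreement and site-wise correlation.

```python
import numpy as np
from scipy import stats

E_MINERAL, E_ORGANIC = 114.0, 5.0     # assumed phase moduli (GPa)

def vrh_modulus(vf):
    """Voigt-Reuss-Hill average for a two-phase mineral/organic mix."""
    voigt = vf * E_MINERAL + (1 - vf) * E_ORGANIC
    reuss = 1.0 / (vf / E_MINERAL + (1 - vf) / E_ORGANIC)
    return 0.5 * (voigt + reuss)

rng = np.random.default_rng(2)
vf = rng.uniform(0.35, 0.50, 30)                   # mineral volume fractions
predicted = vrh_modulus(vf)                        # model prediction per site
measured = predicted + rng.normal(0.0, 4.0, 30)    # synthetic indentation data

t, p_mean = stats.ttest_rel(predicted, measured)   # average-level agreement
r, p_corr = stats.pearsonr(predicted, measured)    # site-wise correlation
print(f"paired t-test p = {p_mean:.2f}, Pearson r = {r:.2f}")
```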
Abstract:
For several years now, neuroscientific research has been striving towards fundamental answers to questions about the relevance of sex/gender to language processing in the brain. This research has been conducted through the search for sex/gender differences in the neurobiology of language processing. Thus, the main aim has always been to focus on the differentiation of the sexes/genders, failing to define what sex, what gender, what female or male is in neurolinguistic research. In other words, although neuroscientific findings have provided key insights into the brain functioning of women and men, neuropsychology has rarely questioned the complexity of the sex/gender variable beyond biology. What does “female” or “male” mean in human neurocognition; how are operationalisations implemented along the axes of “femaleness” or “maleness”; and what biological evidence is used to register the variables sex and/or gender? In the neurosciences as well as in neurocognitive research, questions such as these have so far not been studied in detail, even though they are highly significant for the scientific process. Instead, the variable of sex/gender has always been treated as solely dichotomous (either female or male), with the two categories regarded as oppositional and mutually exclusive. This is where the present theoretical contribution comes in. Based on findings in neuroscience and concepts in gender theory, this poster is dedicated to reflecting on what sex/gender is in the neuroscience of language processing. Following this aim, two levels of interest are addressed. First: how do we define sex/gender at the level of participants? And second: how do we define sex/gender at the level of the experimental task? For the first, a multifactorial registration (work in progress) of the variable sex/gender is presented, i.e. a tool that records sex/gender in terms of biology and social issues as well as on a spectrum between femaleness and maleness. For the second, the compulsory dichotomy of a gendered task when neurolinguistically approaching our cognitions of sex/gender is explored.
Abstract:
We have measured the bidirectional reflectance of spherical micrometer-sized water-ice particles in the visible spectral range over a wide range of incidence and emission angles. The small ice spheres were produced by spraying fine water droplets directly into liquid nitrogen. The resulting mean particle radius is 1.47 (+0.96/-0.58) μm. Such a material shares many properties with ice in comets and at the surface of icy satellites. Measurements show that the fresh sample material is highly backscattering, in contrast with natural terrestrial snow and frost. The formation of agglomerates of particles during sample production results in a noticeable variability of the photometric properties of the samples in their initial state. We have also observed a significant temporal evolution of the scattering behavior of the samples, shifting towards more forward scattering within some tens of hours, most likely as a result of sintering. All reflectance data are fitted by the Hapke photometric model (1993 and 2002 formulations) with a one-, two- or three-parameter Henyey-Greenstein phase function, and the resulting Hapke parameters are provided. These parameters can be used to compare laboratory results with the observed photometric behaviors of astronomical objects. We show, in particular, that the optical properties of the fresh micrometer-sized ice samples can be used to reproduce the predominant backscattering in the phase curves of Enceladus and Europa.
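The Henyey-Greenstein phase functions used in such Hapke fits are compact enough to state directly. A sketch follows (sign conventions for the lobe parameters vary between authors; this is one common form, with g the phase angle in degrees):

```python
import numpy as np

def hg(g_deg, b):
    """Single-parameter Henyey-Greenstein phase function.

    With this convention, positive b gives a forward-scattering lobe
    (peak at large phase angles) and negative b a backscattering lobe.
    """
    g = np.radians(g_deg)
    return (1 - b**2) / (1 + 2 * b * np.cos(g) + b**2) ** 1.5

def dhg(g_deg, b, c):
    """Double Henyey-Greenstein: c weights the backscattering lobe."""
    return 0.5 * (1 + c) * hg(g_deg, -b) + 0.5 * (1 - c) * hg(g_deg, b)

# Backscattering-dominated example resembling fresh fine-grained frost:
print(dhg(np.array([2.0, 30.0, 90.0, 150.0]), b=0.35, c=0.7))
```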
Abstract:
Neurotensin(8-13) (NTS(8-13)) analogs with C- and/or N-terminal β-amino acid residues and three DOTA derivatives thereof have been synthesized (i.e., 1-6). A virtual docking experiment showed an almost perfect fit of one of the 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) derivatives, 6a, into the crystallographically identified receptor NTSR1 (Fig. 1). The affinities for the receptors of the NTS analogs and derivatives are low when determined with cell-membrane homogenates, while, with NTSR1-exhibiting cancer tissues, affinities in the single-digit nanomolar range can be observed (Table 2). Most of the β-amino acid-containing NTS(8-13) analogs (Table 1 and Fig. 2), including the 68Ga complexes of the DOTA-substituted ones (6; Figs. 2 and 5), are stable for ca. 1 h in human serum and plasma, and in murine plasma. The biodistributions of two 68Ga complexes (of 6a and 6b) in HT29 tumor-bearing nude mice, in the absence and in the presence of a blocking compound, after 10, 30, and 60 min (Figs. 3 and 4) lead to the conclusion that the amount of specifically bound radioligand is rather low. This was confirmed by PET-imaging experiments with the tumor-bearing mice (Fig. 6). Comparison of the in vitro plasma stability (after 1 h) with the ex vivo blood content (after 10-15 min) of the two 68Ga complexes shows that they are rapidly cleaved in the animals (Fig. 5).
Abstract:
The occurrence of gaseous pollutants in soils has stimulated many experimental activities, including forced ventilation in the field as well as laboratory transport experiments with gases. The dispersion coefficient in advective-dispersive gas-phase transport is often dominated by molecular diffusion, which leads to a large overall dispersivity γ. Under such conditions it is important to distinguish between flux and resident modes of solute injection and detection. The influence of the inlet type on the macroscopic injection mode was tested in two series of column experiments with gases at different mean flow velocities ν. First we compared infinite resident and flux injections, and second, semi-infinite resident and flux injections. It is shown that the macroscopically apparent injection condition depends on the geometry of the inlet section. A reduction of the cross-sectional area of the inlet relative to that of the column is very effective in excluding the diffusive solute input, thus allowing us to use the solutions for a flux injection even at rather low mean flow velocities ν. If the whole cross section of a column is exposed to a large reservoir like that of ambient air, a semi-infinite resident injection is established, which can be distinguished from a flux injection even at relatively high velocities ν, depending on the mechanical dispersivity of the porous medium.
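The flux/resident distinction can be made concrete with the classical one-dimensional advection-dispersion solutions for a semi-infinite column, in the forms tabulated by van Genuchten and Alves (1982); the parameter values below are illustrative, not the geometry of the reported experiments. Under diffusion-dominated conditions (large D relative to νx), the two inlet conditions give visibly different breakthrough curves.

```python
import numpy as np
from scipy.special import erfc

def c_resident(x, t, v, D):
    """First-type (Dirichlet) inlet: resident injection."""
    a = (x - v * t) / (2 * np.sqrt(D * t))
    b = (x + v * t) / (2 * np.sqrt(D * t))
    return 0.5 * erfc(a) + 0.5 * np.exp(v * x / D) * erfc(b)

def c_flux(x, t, v, D):
    """Third-type (Robin) inlet: flux injection."""
    a = (x - v * t) / (2 * np.sqrt(D * t))
    b = (x + v * t) / (2 * np.sqrt(D * t))
    return (0.5 * erfc(a)
            + np.sqrt(v**2 * t / (np.pi * D)) * np.exp(-a**2)
            - 0.5 * (1 + v * x / D + v**2 * t / D)
              * np.exp(v * x / D) * erfc(b))

x, v, D = 0.1, 1e-4, 1e-5         # m, m/s, m^2/s (diffusion-dominated)
t = np.linspace(300.0, 6000.0, 5)
print(c_resident(x, t, v, D))     # the two injection modes differ
print(c_flux(x, t, v, D))         # markedly at low velocities
```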
Abstract:
In situ diffusion experiments are performed in geological formations at underground research laboratories to overcome the limitations of laboratory diffusion experiments and to investigate scale effects. Tracer concentrations are monitored at the injection interval during the experiment (dilution data) and measured from host rock samples around the injection interval at the end of the experiment (overcoring data). Diffusion and sorption parameters are derived from inverse numerical modeling of the measured tracer data. The identifiability and the uncertainties of tritium and 22Na+ diffusion and sorption parameters are studied here by synthetic experiments having the same characteristics as the in situ diffusion and retention (DR) experiment performed on Opalinus Clay. Contrary to previous identifiability analyses of in situ diffusion experiments, which used either dilution or overcoring data at approximate locations, our analysis of parameter identifiability relies simultaneously on dilution and overcoring data; accounts for the actual positions of the overcoring samples in the claystone; uses realistic values of the standard deviation of the measurement errors; relies on model identification criteria to select the most appropriate hypothesis about the existence of a borehole disturbed zone; and addresses the effect of errors in the location of the sampling profiles. The simultaneous use of dilution and overcoring data provides accurate parameter estimates in the presence of measurement errors, allows the identification of the right hypothesis about the borehole disturbed zone and diminishes other model uncertainties such as those caused by errors in the volume of the circulation system and the effective diffusion coefficient of the filter. The proper interpretation of the experiment requires the right hypothesis about the borehole disturbed zone; a wrong assumption leads to large estimation errors, and the use of model identification criteria helps in the selection of the best model. Small errors in the depth of the overcoring samples lead to large parameter estimation errors; therefore, attention should be paid to minimizing the errors in positioning the depth of the samples. The results of the identifiability analysis do not depend on the particular realization of random numbers.
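The flavour of such a synthetic identifiability analysis can be sketched with a toy joint inversion (geometry, parameter values and the forward model are deliberate simplifications, not the DR experiment model). The dilution curve constrains the product of the effective diffusion coefficient and the rock capacity factor, while the overcoring profile constrains their ratio; only together do they identify both De and Kd.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

PHI, RHO = 0.15, 2400.0        # porosity (-), bulk dry density (kg/m^3)
AV, C0 = 2.0, 1.0              # interval area/volume ratio (1/m), conc.
T_END = 3.15e7                 # experiment duration, ~1 year (s)
t_obs = np.linspace(1e6, T_END, 15)     # dilution sampling times (s)
x_obs = np.linspace(0.005, 0.15, 15)    # overcoring sample depths (m)

def model(_, De, Kd):
    alpha = PHI + RHO * Kd              # rock capacity factor
    dilution = C0 * (1 - 2 * AV * np.sqrt(De * alpha * t_obs / np.pi))
    profile = C0 * erfc(x_obs / (2 * np.sqrt(De / alpha * T_END)))
    return np.concatenate([dilution, profile])

rng = np.random.default_rng(3)
true = (1e-10, 2e-4)                    # De (m^2/s), Kd (m^3/kg)
data = model(None, *true) + rng.normal(0.0, 0.01, 30)

est, cov = curve_fit(model, np.arange(30), data, p0=(5e-11, 1e-4))
print("estimates:", est)
print("std errors:", np.sqrt(np.diag(cov)))   # identifiability check
```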
Abstract:
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated, resulting in too much land carbon loss, or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one-thousand-year-long, idealized 2 × and 4 × CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the responses to the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the pre-industrial portions of the last millennium simulations are used to assess historical model carbon-climate feedbacks. Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.
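For orientation, the standard characteristics quantified from such idealized runs can be mimicked with a zero-dimensional energy balance sketch (the feedback parameter, forcing and heat capacity are assumed round numbers, not values from any EMIC): equilibrium climate sensitivity is F2x/λ, and the transient response is read off at the time of CO2 doubling under a 1 % per year ramp.

```python
import numpy as np

F_2X = 3.7             # radiative forcing at 2xCO2 (W/m^2), assumed
LAM = 1.2              # climate feedback parameter (W/m^2/K), assumed
C = 8.0 * 3.15e7       # mixed-layer heat capacity (J/m^2/K), assumed

dt = 3.15e7 / 12.0     # one model month (s)
T, temps = 0.0, []
for step in range(140 * 12):            # 1 %/yr ramp; 2xCO2 near year 70
    yr = step / 12.0
    F = F_2X * min(yr * np.log(1.01) / np.log(2.0), 2.0)  # cap at 4xCO2
    T += dt * (F - LAM * T) / C         # C dT/dt = F - lambda * T
    temps.append(T)

print("equilibrium sensitivity:", F_2X / LAM, "K")
print("transient response at 2xCO2: %.2f K" % temps[70 * 12 - 1])
```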
Abstract:
The combination of scaled analogue experiments, material mechanics, X-ray computed tomography (XRCT) and digital volume correlation (DVC) techniques is a powerful new tool not only for examining the three-dimensional structure and kinematic evolution of complex deformation structures in scaled analogue experiments, but also for fully quantifying their spatial strain distribution and complete strain history. Digital image correlation (DIC) is an important advance in quantitative physical modelling and helps to understand non-linear deformation processes. Optical, non-intrusive DIC techniques enable the quantification of localised and distributed deformation in analogue experiments based either on images taken through transparent sidewalls (2D DIC) or on surface views (3D DIC). XRCT analysis permits the non-destructive visualisation of the internal structure and kinematic evolution of scaled analogue experiments simulating the tectonic evolution of complex geological structures. The combination of XRCT sectional image data of analogue experiments with 2D DIC only allows quantification of 2D displacement and strain components in the section direction. This completely omits the potential of CT experiments for full 3D strain analysis of complex, non-cylindrical deformation structures. In this study, we apply DVC techniques to XRCT scan data of “solid” analogue experiments to fully quantify the internal displacement and strain in three dimensions over time. Our first results indicate that the application of DVC techniques to XRCT volume data can successfully be used to quantify the 3D spatial and temporal strain patterns inside analogue experiments. We demonstrate the potential of combining DVC techniques and XRCT volume imaging for 3D strain analysis of a contractional experiment simulating the development of a non-cylindrical pop-up structure. Furthermore, we discuss various options for the optimisation of granular materials, pattern generation, and data acquisition for increased resolution and accuracy of the strain results. Three-dimensional strain analysis of analogue models is of particular interest for geological and seismic interpretations of complex, non-cylindrical geological structures. The volume strain data enable the analysis of the large-scale and small-scale strain history of geological structures.
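The core step of a DVC pipeline, estimating the displacement of a subvolume between two scans by cross-correlation, can be sketched as follows (integer-voxel only; a full DVC adds subvoxel refinement and derives strain fields from the displacement field):

```python
import numpy as np

def subvolume_shift(ref, deformed):
    """Integer-voxel displacement from the peak of the FFT-based
    cross-correlation between a reference and a deformed subvolume."""
    corr = np.fft.ifftn(np.conj(np.fft.fftn(ref)) * np.fft.fftn(deformed)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak positions to signed shifts.
    return [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]

rng = np.random.default_rng(4)
vol = rng.random((64, 64, 64))                 # speckle-like CT subvolume
moved = np.roll(vol, (3, -2, 5), axis=(0, 1, 2))
print(subvolume_shift(vol, moved))             # -> [3, -2, 5]
```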
Abstract:
Monte Carlo simulations arrive at their results by introducing randomness, sometimes derived from a physical randomizing device. Nonetheless, we argue, they open no new epistemic channel beyond the one already employed by traditional simulations: the inference, by ordinary argumentation, of conclusions from assumptions built into the simulations. We show that Monte Carlo simulations cannot produce knowledge other than by inference, and that they resemble other computer simulations in the manner in which they derive their conclusions. Simple examples of Monte Carlo simulations are analysed to identify the underlying inferences.
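A simple example of the kind analysed in the paper is the Monte Carlo estimate of π: the pseudo-random draws merely sample an integral whose value is already fixed by the assumptions built into the simulation (uniform sampling over the unit square), so the result is reached by ordinary inference from those assumptions.

```python
import random

random.seed(42)
n = 1_000_000
# Fraction of uniform points in the unit square falling inside the
# quarter disc x^2 + y^2 <= 1; by construction this estimates pi/4.
hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
           for _ in range(n))
print(4 * hits / n)   # ~3.14; the error shrinks as O(1/sqrt(n))
```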
Abstract:
Aging societies suffer from an increasing incidence of bone fractures. Bone strength depends on the amount of mineral measured by clinical densitometry, but also on the micromechanical properties of the bone hierarchical organization. A good understanding has been reached for elastic properties on several length scales, but up to now there is a lack of reliable postyield data on the lower length scales. In order to be able to describe the behavior of bone at the microscale, an anisotropic elastic-viscoplastic damage model was developed using an eccentric generalized Hill criterion and nonlinear isotropic hardening. The model was implemented as a user subroutine in Abaqus and verified using single element tests. A FE simulation of microindentation in lamellar bone was finally performed, showing that the new constitutive model can capture the main characteristics of the indentation response of bone. As the generalized Hill criterion is limited to elliptical and cylindrical yield surfaces and the correct shape for bone is not known, a new yield surface was developed that takes any convex quadratic shape. The main advantage is that in the case of material identification the shape of the yield surface does not have to be anticipated, but a minimization results in the optimal shape among all convex quadrics. The generality of the formulation was demonstrated by showing its degeneration to classical yield surfaces. Also, existing yield criteria for bone at multiple length scales were converted to the quadric formulation. Then, a computational study to determine the influence of yield surface shape and damage on the indentation response of bone using spherical and conical tips was performed. The constitutive model was adapted to the quadric criterion, and yield surface shape and critical damage were varied. They were shown to have a major impact on the indentation curves. Their influence on indentation modulus, hardness, their ratio as well as the elastic to total work ratio was found to be very well described by multilinear regressions for both tip shapes. For conical tips, indentation depth was not a significant factor, while for spherical tips damage was insignificant. All inverse methods based on microindentation suffer from a lack of uniqueness of the found material properties in the case of nonlinear material behavior. Therefore, monotonic and cyclic micropillar compression tests in a scanning electron microscope, allowing a straightforward interpretation, complemented by microindentation and macroscopic uniaxial compression tests, were performed on dry ovine bone to identify modulus, yield stress, plastic deformation, damage accumulation and failure mechanisms. While the elastic properties were highly consistent, the postyield deformation and failure mechanisms differed between the two length scales. A majority of the micropillars showed a ductile behavior with strain hardening until failure by localization in a slip plane, while the macroscopic samples failed in a quasi-brittle fashion with microcracks coalescing into macroscopic failure surfaces. In agreement with a proposed rheological model, these experiments illustrate a transition from a ductile mechanical behavior of bone at the microscale to a quasi-brittle response driven by the growth of preexisting cracks along interfaces or in the vicinity of pores at the macroscale. Subsequently, a study was undertaken to quantify the topological variability of indentations in bone and examine its relationship with mechanical properties.
Indentations were performed in dry human and ovine bone in axial and transverse directions and their topography measured by AFM. Statistical shape modeling of the residual imprint allowed a mean shape to be defined and the variability to be described with 21 principal components related to imprint depth, surface curvature and roughness. The indentation profile of bone was highly consistent and free of any pile-up. A few of the topological parameters, in particular depth, showed significant correlations to variations in mechanical properties, but the correlations were not very strong or consistent. We could thus verify that bone is rather homogeneous in its micromechanical properties and that indentation results are not strongly influenced by small deviations from the ideal case. As the uniaxial properties measured by micropillar compression are in conflict with the current literature on bone indentation, another dissipative mechanism has to be present. The elastic-viscoplastic damage model was therefore extended to viscoelasticity. The viscoelastic properties were identified from macroscopic experiments, while the quasistatic postelastic properties were extracted from micropillar data. It was found that viscoelasticity governed by macroscale properties has very little influence on the indentation curve and results in a clear underestimation of the creep deformation. Adding viscoplasticity leads to increased creep, but hardness is still highly overestimated. It was possible to obtain a reasonable fit with experimental indentation curves for both Berkovich and spherical indentation when abandoning the assumption of shear strength being governed by an isotropy condition. These results remain to be verified by independent tests probing the micromechanical strength properties in tension and shear. In conclusion, in this thesis several tools were developed to describe the complex behavior of bone on the microscale, and experiments were performed to identify its material properties. Micropillar compression highlighted a size effect in bone due to the presence of preexisting cracks and pores or interfaces like cement lines. It was possible to get a reasonable fit between experimental indentation curves using different tips and simulations using the constitutive model and uniaxial properties measured by micropillar compression. Additional experimental work is necessary to identify the exact nature of the size effect and the mechanical role of interfaces in bone. Deciphering the micromechanical behavior of lamellar bone and its evolution with age, disease and treatment, and its failure mechanisms on several length scales, will help prevent fractures in the elderly in the future.
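The quadric yield criterion described above can be written compactly as f(σ) = sqrt(σᵀ F σ) + fᵀσ − 1, where any positive semi-definite F yields a convex surface and the linear term f introduces tension/compression asymmetry. The sketch below evaluates such a criterion with placeholder coefficients (not identified bone properties):

```python
import numpy as np

# Quadratic part F (Voigt notation, MPa^-2) and linear part f_lin
# (MPa^-1); all coefficients are placeholders for illustration.
F = np.diag([1 / 120.0**2, 1 / 120.0**2, 1 / 180.0**2,
             1 / 60.0**2, 1 / 60.0**2, 1 / 60.0**2])
f_lin = np.array([1e-3, 1e-3, 5e-4, 0.0, 0.0, 0.0])

def yield_fn(stress):
    """Negative inside the elastic domain, zero on the yield surface."""
    return np.sqrt(stress @ F @ stress) + f_lin @ stress - 1.0

axial = np.array([0.0, 0.0, -150.0, 0.0, 0.0, 0.0])  # uniaxial compression
print(yield_fn(axial))   # < 0: this stress state is still elastic
```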
Abstract:
Previous studies of the sediments of Lake Lucerne have shown that massive subaqueous mass movements affecting unconsolidated sediments on lateral slopes are a common process in this lake, and, in view of historical reports describing damaging waves on the lake, it was suggested that tsunamis generated by mass movements represent a considerable natural hazard on the lakeshores. Newly performed numerical simulations combining two-dimensional, depth-averaged models for mass-movement propagation and for tsunami generation, propagation and inundation reproduce a number of reported tsunami effects. Four analysed mass-movement scenarios, three of them based on documented slope failures involving volumes of 5.5 to 20.8 × 10⁶ m³, show peak wave heights of several metres and maximum runup of 6 to more than 10 m in the directly affected basins, while effects in neighbouring basins are less drastic. The tsunamis cause large-scale inundation over distances of several hundred metres on flat alluvial plains close to the mass-movement source areas. Basins at the ends of the lake experience regular water-level oscillations with characteristic periods of several minutes. The vulnerability of potentially affected areas has increased dramatically since the times of the damaging historical events, warranting a thorough evaluation of the hazard.
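The depth-averaged wave propagation underlying such simulations can be illustrated with a minimal one-dimensional shallow-water solver (Lax-Friedrichs scheme, flat bottom, reflective ends; a toy setup, not the coupled mass-movement/tsunami model used in the study):

```python
import numpy as np

g, L, n = 9.81, 10_000.0, 500       # gravity, basin length (m), cells
dx = L / n
x = np.linspace(0.0, L, n)
h = 100.0 + 2.0 * np.exp(-((x - 2000.0) / 200.0) ** 2)  # depth + hump (m)
U = np.array([h, np.zeros(n)])      # conserved variables (h, h*u)

def flux(h, hu):
    return np.array([hu, hu**2 / h + 0.5 * g * h**2])

t = 0.0
while t < 60.0:                                  # one minute of wave travel
    c = np.abs(U[1] / U[0]) + np.sqrt(g * U[0])  # characteristic speeds
    dt = 0.4 * dx / c.max()                      # CFL-limited time step
    F = flux(U[0], U[1])
    U[:, 1:-1] = (0.5 * (U[:, 2:] + U[:, :-2])
                  - 0.5 * dt / dx * (F[:, 2:] - F[:, :-2]))
    U[1, 0] = U[1, -1] = 0.0                     # reflective walls
    t += dt

print("max surface displacement after 60 s: %.2f m" % (U[0].max() - 100.0))
```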