432 results for PARAMETERIZATION


Relevance:

10.00%

Publisher:

Abstract:

The development of innovative carbon-based materials can be greatly facilitated by molecular modeling techniques. Although molecular modeling has been used extensively to predict elastic properties of materials, modeling of more complex phenomena such as fracture has only recently become possible with the development of new force fields such as ReaxFF, which is used in this work. It is not fully understood what molecular modeling parameters, such as thermostat type, thermostat coupling, time step, system size, and strain rate, are required for accurate modeling of fracture. Selecting modeling parameters for fracture can be difficult and non-intuitive compared to modeling elastic properties with traditional force fields, and the errors generated by incorrect parameters may be non-obvious. These molecular modeling parameters are systematically investigated and their effects on the fracture of well-known carbon materials are analyzed. It is determined that thermostat coupling coefficients of 250 fs and greater do not result in substantial differences in the stress-strain response of the materials for any thermostat type. A time step of 0.5 fs or smaller is required for accurate results. Strain rates of 2.2 ns-1 and slower are sufficient to obtain repeatable results for the materials studied. The results of this study indicate that further refinement of the Chenoweth parameter set is required to accurately predict the mechanical response of carbon-based systems. ReaxFF has been used extensively to model systems in which bond breaking and formation occur; in particular, it has been used to model reactions of small molecules. Some elastic and fracture properties have been successfully modeled using ReaxFF in materials such as silicon and some metals. However, it is not clear whether current ReaxFF parameterizations can accurately reproduce the elastic and fracture properties of carbon materials.
The stress-strain response of a new ReaxFF parameterization is compared to the previous parameterization and to density functional theory results for well-known carbon materials. The new ReaxFF parameterization makes substantial improvements to the predicted mechanical response of carbon materials, and is found to be suitable for modeling the mechanical response of carbon materials. Finally, a new material composed of carbon nanotubes within an amorphous carbon (AC) matrix is modeled using ReaxFF. Various parameters that may be experimentally controlled are investigated, such as nanotube bundling, multi-walled versus single-walled nanotubes, and the degree of functionalization of the nanotubes. Elastic and fracture properties are investigated for the composite systems and compared to results of pure-nanotube and pure-AC models. It is found that the arrangement of the nanotubes and the degree of crosslinking may substantially affect the properties of the systems, particularly in the transverse directions.
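The recommended time step and strain rate together set the length of a simulated tensile test. A minimal sketch of this bookkeeping (the function and the 10% target strain are illustrative, not from the thesis):

```python
def md_steps_to_strain(target_strain, strain_rate_per_ns, timestep_fs):
    """Number of MD steps needed to reach a target engineering strain
    at a constant engineering strain rate."""
    rate_per_fs = strain_rate_per_ns * 1e-6   # 1 ns = 1e6 fs
    return int(round(target_strain / (rate_per_fs * timestep_fs)))

# With the parameters recommended above (0.5 fs step, 2.2 ns^-1 rate),
# reaching 10% strain takes on the order of 1e5 steps:
steps = md_steps_to_strain(0.10, 2.2, 0.5)
```

Slower strain rates or smaller time steps scale the step count up proportionally, which is why these two parameters dominate the cost of a fracture simulation.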

Relevance:

10.00%

Publisher:

Abstract:

Findings from motor-behavior research on the so-called "Quiet Eye" indicate that high motor performance, particularly in precision tasks, is accompanied by a long final fixation before movement onset. One mechanism that could explain this relationship from a cognitive-psychology perspective is the optimization of the information processing underlying movement parameterization. This assumption was examined by experimentally manipulating goal instructions in a ball-throwing task. The results show, on the one hand, that the spatial anchoring of the Quiet Eye changes as a function of the varied task goals; on the other hand, the findings indicate that changes in this anchoring are reflected in the movement outcome. This lends plausibility to a cognitive mechanism by which movement accuracy is determined by goal instructions via the spatial anchoring of the Quiet Eye.

Relevance:

10.00%

Publisher:

Abstract:

Currently, the contributions of Starlette, Stella, and AJISAI are not taken into account when defining the International Terrestrial Reference Frame (ITRF), despite the large amount of data collected over a long time span. Consequently, the SLR-derived parameters and the SLR part of the ITRF are almost exclusively defined by LAGEOS-1 and LAGEOS-2. We investigate the potential of combining observations of several SLR satellites with different orbital characteristics. Ten years of SLR data are homogeneously processed using the development version 5.3 of the Bernese GNSS Software. Special emphasis is put on orbit parameterization and on the impact of LEO data on the estimation of geocenter coordinates, Earth rotation parameters, Earth gravity field coefficients, and station coordinates in one common adjustment procedure. We find that the parameters derived from the multi-satellite solutions are of better quality than those obtained from single-satellite solutions or from solutions based on the two LAGEOS satellites alone. A spectral analysis of the SLR network scale w.r.t. SLRF2008 shows that artifacts related to orbit perturbations in the LAGEOS-1/2 solutions, i.e., periods related to the draconitic years of the LAGEOS satellites, are greatly reduced in the combined solutions.
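Combining satellites in "one common adjustment procedure" is typically done by stacking weighted normal equations, one set per satellite, and solving once. A minimal sketch under that assumption (the geometry, noise levels, and parameter vector below are invented for illustration and are unrelated to the actual Bernese processing):

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0, 0.5])   # e.g. three coordinate corrections

# Per-satellite observation equations A_i x = l_i with different noise:
normals = []
for sigma in (0.02, 0.05, 0.05):      # LAGEOS-like vs LEO-like precision
    A = rng.standard_normal((50, 3))              # observation geometry
    l = A @ x_true + sigma * rng.standard_normal(50)
    w = 1.0 / sigma ** 2                          # weight = 1 / variance
    normals.append((w * A.T @ A, w * A.T @ l))    # normal eq. contribution

# Combined solution: sum the weighted normal equations, solve once.
N = sum(n for n, _ in normals)
b = sum(bb for _, bb in normals)
x_comb = np.linalg.solve(N, b)
```

Because the per-satellite contributions are simply added, satellites with different orbital characteristics constrain different parameter combinations, which is the mechanism behind the improved multi-satellite solutions described above.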

Relevance:

10.00%

Publisher:

Abstract:

PURPOSE Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability there is a need for methods to match different PET/CT systems through elimination of this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph to another image of the same object as it would have been seen by a different tomograph. The proposed new method, termed Transconvolution, compensates for differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials. METHODS To solve the problem of image normalization, the theory of Transconvolution was mathematically established together with new methods to handle point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows determining a Transconvolution function to convert one image into the other. This function is calculated by convolving one point spread function with the inverse of the other point spread function which, when adhering to certain boundary conditions such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of such point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating 68Ge/68Ga filled spheres was developed.
To iteratively determine and represent such point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function for the virtual PET. The Hann window's apodization properties suppressed spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system. RESULTS The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The highest difference in measured activity concentration between the two different PET systems, 18.2%, was found in spheres of 2 ml volume. Transconvolution reduced this difference to 1.6%. In addition to reestablishing comparability, the new method, with its parameterization of point spread functions, allowed a full characterization of the imaging properties of the examined tomographs. CONCLUSIONS By matching different tomographs to a virtual standardized imaging system, Transconvolution opens a new comprehensive method for cross-calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
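The core operation, convolving one point spread function with the regularized inverse of the other, can be sketched in one dimension. The Gaussian PSFs and the regularization constant below are illustrative stand-ins; the paper uses measured PSFs and a Hann-windowed virtual system:

```python
import numpy as np

def gaussian_psf(n, fwhm):
    """Discrete 1-D Gaussian PSF of given FWHM, centered at sample 0
    (FFT ordering) and normalized to unit sum."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    x = np.fft.fftfreq(n) * n
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

def transconvolution_kernel(psf_src, psf_dst, eps=1e-6):
    """Kernel T such that image_dst = image_src (*) T, computed in the
    Fourier domain as F(psf_dst) / F(psf_src); the division is
    regularized so frequencies where F(psf_src) vanishes blow up."""
    h_src = np.fft.fft(psf_src)
    h_dst = np.fft.fft(psf_dst)
    t_hat = h_dst * np.conj(h_src) / (np.abs(h_src) ** 2 + eps)
    return np.real(np.fft.ifft(t_hat))

def apply_kernel(image, kernel):
    """Circular convolution via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(image) * np.fft.fft(kernel)))

n = 128
psf_a = gaussian_psf(n, fwhm=5.0)     # the sharper "real" scanner
psf_b = gaussian_psf(n, fwhm=8.0)     # the broader "virtual" system
T = transconvolution_kernel(psf_a, psf_b)

point = np.zeros(n)
point[0] = 1.0
image_a = apply_kernel(point, psf_a)  # point source as seen by scanner A
matched = apply_kernel(image_a, T)    # transformed to the virtual system
```

As the abstract notes, this is only well posed when the target system suppresses the frequencies the source system barely transfers, which is exactly what the Hann window's cutoff guarantees.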

Relevance:

10.00%

Publisher:

Abstract:

This study examines how different microphysical parameterization schemes influence orographically induced precipitation and the distributions of hydrometeors and water vapour for midlatitude summer conditions in the Weather Research and Forecasting (WRF) model. A high-resolution, two-dimensional idealized simulation is used to assess the differences between the schemes for a moist air flow interacting with a bell-shaped, 2 km high mountain. Periodic lateral boundary conditions are chosen to recirculate atmospheric water in the domain. It is found that the 13 selected microphysical schemes conserve water in the model domain: the gain or loss of water is less than 0.81% over a simulation interval of 61 days. The differences among the microphysical schemes in the distributions of water vapour, hydrometeors, and accumulated precipitation are presented and discussed. The Kessler scheme, the only scheme without ice-phase processes, shows final values of cloud liquid water 14 times greater than those of the other schemes. The differences among the other schemes are not as extreme, but they still differ by up to 79% in water vapour, up to a factor of 10 in hydrometeors, and up to 64% in accumulated precipitation at the end of the simulation. The microphysical schemes also differ in surface evaporation rate: the WRF single-moment 3-class scheme has the highest surface evaporation rate, compensated by the highest precipitation rate. The different distributions of hydrometeors and water vapour among the microphysical schemes induce differences of up to 49 W m−2 in the downwelling shortwave radiation and up to 33 W m−2 in the downwelling longwave radiation.

Relevance:

10.00%

Publisher:

Abstract:

Four different literature parameterizations for the formation and evolution of urban secondary organic aerosol (SOA) frequently used in 3-D models are evaluated using a 0-D box model representing the Los Angeles metropolitan region during the California Research at the Nexus of Air Quality and Climate Change (CalNex) 2010 campaign. We constrain the model predictions with measurements from several platforms and compare predictions with particle- and gas-phase observations from the CalNex Pasadena ground site. That site provides a unique opportunity to study aerosol formation close to anthropogenic emission sources with limited recirculation. The model SOA that formed only from the oxidation of VOCs (V-SOA) is insufficient to explain the observed SOA concentrations, even when using SOA parameterizations with multi-generation oxidation that produce much higher yields than have been observed in chamber experiments, or when increasing yields to their upper limit estimates accounting for recently reported losses of vapors to chamber walls. The Community Multiscale Air Quality (WRF-CMAQ) model (version 5.0.1) provides excellent predictions of secondary inorganic particle species but underestimates the observed SOA mass by a factor of 25 when an older VOC-only parameterization is used, which is consistent with many previous model–measurement comparisons for pre-2007 anthropogenic SOA modules in urban areas. Including SOA from primary semi-volatile and intermediate-volatility organic compounds (P-S/IVOCs) following the parameterizations of Robinson et al. (2007), Grieshop et al. (2009), or Pye and Seinfeld (2010) improves model–measurement agreement for mass concentration. The results from the three parameterizations show large differences (e.g., a factor of 3 in SOA mass) and are not well constrained, underscoring the current uncertainties in this area. 
Our results strongly suggest that other precursors besides VOCs, such as P-S/IVOCs, are needed to explain the observed SOA concentrations in Pasadena. All the recent parameterizations overpredict urban SOA formation at long photochemical ages (3 days) compared to observations from multiple sites, which can lead to problems in regional and especially global modeling. However, reducing IVOC emissions by one-half in the model to better match recent IVOC measurements improves SOA predictions at these long photochemical ages. Among the explicitly modeled VOCs, the precursor compounds that contribute the greatest SOA mass are methylbenzenes. Measured polycyclic aromatic hydrocarbons (naphthalenes) contribute 0.7% of the modeled SOA mass. The amounts of SOA mass from diesel vehicles, gasoline vehicles, and cooking emissions are estimated to be 16–27%, 35–61%, and 19–35%, respectively, depending on the parameterization used, which is consistent with the observed fossil fraction of urban SOA, 71 ± 3%. The relative contribution of each source is uncertain by almost a factor of 2 depending on the parameterization used. In-basin biogenic VOCs are predicted to contribute only a few percent to SOA. A regional SOA background of approximately 2.1 μg m-3 is also present due to the long-distance transport of highly aged OA, likely with a substantial contribution from regional biogenic SOA. The percentage of SOA from diesel vehicle emissions is the same, within the estimated uncertainty, as reported in previous work that analyzed the weekly cycles in OA concentrations (Bahreini et al., 2012; Hayes et al., 2013). However, the modeling work presented here suggests a strong anthropogenic source of modern carbon in SOA, due to cooking emissions, which was not accounted for in those previous studies and which is higher on weekends. Lastly, this work adapts a simple two-parameter model to predict SOA concentration and O/C from urban emissions.
This model successfully predicts SOA concentration, and the optimal parameter combination is very similar to that found for Mexico City. This approach provides a computationally inexpensive method for predicting urban SOA in global and climate models. We estimate pollution SOA to account for 26 Tg yr-1 of SOA globally, or 17% of global SOA, one third of which is likely to be non-fossil.
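The abstract does not spell out the two-parameter model; a plausible sketch, assuming urban SOA grows from CO-tracked emissions with an effective asymptotic yield and a first-order formation rate (both parameter values below are invented for illustration):

```python
import numpy as np

def urban_soa(delta_co_ppb, age_h, soa_yield=0.08, k_per_h=0.3):
    """Hypothetical two-parameter fit: SOA (ug m-3) forms from urban
    emissions tracked via Delta-CO (ppb), with an effective asymptotic
    yield (ug m-3 per ppb CO) and a first-order formation rate (1/h).
    Both parameter values are illustrative, not fitted values."""
    return soa_yield * delta_co_ppb * (1.0 - np.exp(-k_per_h * age_h))

ages = np.array([1.0, 6.0, 24.0, 72.0])   # photochemical age, hours
soa = urban_soa(100.0, ages)              # for 100 ppb of excess CO
```

A form like this captures the two behaviors discussed above: rapid SOA growth at short photochemical ages near the sources, and a plateau at long ages, where the literature parameterizations tend to overpredict.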

Relevance:

10.00%

Publisher:

Abstract:

Background: A prerequisite for high performance in motor tasks is the acquisition of egocentric sensory information that must be translated into motor actions. A phenomenon that supports this process is the Quiet Eye (QE), defined as the long final fixation before movement initiation. It is assumed that the QE facilitates information processing, particularly regarding movement parameterization. Aims: The question remains whether this facilitation also holds for the information-processing stages of response selection and, crucially for perception, stimulus identification. Method: In two experiments with sport science students, performance-enhancing effects of experimentally manipulated QE durations were tested as a function of target position predictability and target visibility, thereby selectively manipulating response selection and stimulus identification demands, respectively. Results: The results support the hypothesis of facilitated information processing through long QE durations, since in both experiments performance-enhancing effects of long QE durations were found only under increased processing demands. In Experiment 1, QE duration affected performance only if the target position was not predictable and positional information had to be processed over the QE period. In Experiment 2, comparing full vs. no target visibility with saccades to the upcoming target position induced by flicker cues, the functionality of a long QE duration depended on the visual stimulus identification period once that interval fell below a certain threshold. Conclusions: The results corroborate earlier findings that QE efficiency depends on the demands placed on the visuomotor system, thereby supporting the assumption that the phenomenon aids the processes of sensorimotor integration.

Relevance:

10.00%

Publisher:

Abstract:

A new methodology is presented that combines active and passive remote sensing with simultaneous, collocated radiosounding data to study the effects of aerosol hygroscopic growth on particle optical and microphysical properties. The identification of hygroscopic growth situations combines the analysis of the multispectral aerosol particle backscatter coefficient and the particle linear depolarization ratio with thermodynamic profiling of the atmospheric column. We analyzed the hygroscopic growth effects on aerosol properties, namely the aerosol particle backscatter coefficient and the volume concentration profiles, using data gathered at the Granada EARLINET station. Two case studies, corresponding to different aerosol loads and different aerosol types, illustrate the potential of this methodology. Values of the aerosol particle backscatter coefficient enhancement factor range from 2.1 ± 0.8 to 3.9 ± 1.5 over relative humidity ranges of 60–90% and 40–83%, respectively, similar to those previously reported in the literature. Differences in the enhancement factor are directly linked to the composition of the atmospheric aerosol. The largest value of the enhancement factor corresponds to the presence of sulphate and marine particles, which are more strongly affected by hygroscopic growth. In contrast, the lowest value corresponds to an aerosol mixture containing sulphates and slight traces of mineral dust. The Hänel parameterization is applied to these case studies, yielding results within the range of values reported in previous studies, with γ exponents of 0.56 ± 0.01 (for anthropogenic particles slightly influenced by mineral dust) and 1.07 ± 0.01 (for the situation dominated by anthropogenic particles), demonstrating the usefulness of this remote sensing approach for studying hygroscopic effects of atmospheric aerosol under unperturbed ambient conditions.
For the first time, the retrieval of the volume concentration profiles for these cases using the Lidar Radiometer Inversion Code (LIRIC) allows us to analyze the aerosol hygroscopic growth effects on aerosol volume concentration, observing a stronger increase of the fine mode volume concentration with increasing relative humidity.
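The Hänel parameterization has a closed form that can be checked directly against the quoted enhancement factors (RH entered as a fraction; the pairing of γ values with RH ranges is as read from the abstract):

```python
def hanel_enhancement(rh, rh_ref, gamma):
    """Hanel growth law for an aerosol optical property:
    f(RH) = ((1 - RH) / (1 - RH_ref)) ** (-gamma), with RH a fraction."""
    return ((1.0 - rh) / (1.0 - rh_ref)) ** (-gamma)

# These reproduce the two enhancement factors quoted in the abstract:
f_dusty = hanel_enhancement(0.90, 0.60, gamma=0.56)   # anthropogenic + dust
f_anthro = hanel_enhancement(0.83, 0.40, gamma=1.07)  # anthropogenic
```

Evaluating the two cases gives roughly 2.2 and 3.9, consistent with the measured backscatter enhancement factors of 2.1 ± 0.8 and 3.9 ± 1.5.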

Relevance:

10.00%

Publisher:

Abstract:

Changes in temperature and carbon dioxide during glacial cycles recorded in Antarctic ice cores are tightly coupled. However, this relationship does not hold for interglacials. While climate cooled towards the end of both the last (Eemian) and present (Holocene) interglacials, CO₂ remained stable during the Eemian while rising in the Holocene. We identify and review twelve biogeochemical mechanisms of terrestrial (vegetation dynamics and CO₂ fertilization, land use, wildfire, accumulation of peat, changes in permafrost carbon, subaerial volcanic outgassing) and marine origin (changes in sea surface temperature, carbonate compensation to deglaciation and terrestrial biosphere regrowth, shallow-water carbonate sedimentation, changes in the soft-tissue pump, and methane hydrates), which may have contributed to the CO₂ dynamics during interglacials but which remain not well quantified. We use three Earth System Models (ESMs) of intermediate complexity to compare the effects of selected mechanisms on the interglacial changes in CO₂ and δ¹³CO₂, focusing on those with substantial potential impacts: carbonate sedimentation in shallow waters, peat growth, and (in the case of the Holocene) human land use. A set of specified carbon cycle forcings could qualitatively explain atmospheric CO₂ dynamics from 8 ka BP to the pre-industrial period. However, when applied to Eemian boundary conditions from 126 to 115 ka BP, the same set of forcings led to disagreement with the observed direction of CO₂ changes after 122 ka BP. This failure to simulate late-Eemian CO₂ dynamics could be a result of the imposed forcings, such as prescribed CaCO₃ accumulation, and/or an incorrect response of simulated terrestrial carbon to the surface cooling at the end of the interglacial. These experiments also reveal that key natural processes of interglacial CO₂ dynamics (shallow-water CaCO₃ accumulation, peat and permafrost carbon dynamics) are not well represented in current ESMs.
Global-scale modeling of these long-term carbon cycle components started only in the last decade, and uncertainty in parameterization of these mechanisms is a main limitation in the successful modeling of interglacial CO₂ dynamics.

Relevance:

10.00%

Publisher:

Abstract:

Systematic consideration of scientific support is a critical element in developing and, ultimately, using adverse outcome pathways (AOPs) for various regulatory applications. Though weight of evidence (WoE) analysis has been proposed as a basis for assessment of the maturity and level of confidence in an AOP, methodologies and tools are still being formalized. The Organization for Economic Co-operation and Development (OECD) Users' Handbook Supplement to the Guidance Document for Developing and Assessing AOPs (OECD 2014a; hereafter referred to as the OECD AOP Handbook) provides tailored Bradford-Hill (BH) considerations for systematic assessment of confidence in a given AOP. These considerations include (1) biological plausibility and (2) empirical support (dose-response, temporality, and incidence) for Key Event Relationships (KERs), and (3) essentiality of key events (KEs). Here, we test the application of these tailored BH considerations and the guidance outlined in the OECD AOP Handbook using a number of case examples to increase experience in more transparently documenting rationales for assigned levels of confidence to KEs and KERs, and to promote consistency in evaluation within and across AOPs. The major lessons learned from experience are documented, and taken together with the case examples, should contribute to better common understanding of the nature and form of documentation required to increase confidence in the application of AOPs for specific uses. Based on the tailored BH considerations and defining questions, a prototype quantitative model for assessing the WoE of an AOP using tools of multi-criteria decision analysis (MCDA) is described. The applicability of the approach is also demonstrated using the case example aromatase inhibition leading to reproductive dysfunction in fish. 
Following the acquisition of additional experience in the development and assessment of AOPs, further refinement of the model's parameterization through expert elicitation is recommended. Overall, the application of quantitative WoE approaches holds promise to enhance the rigor, transparency, and reproducibility of AOP WoE determinations and may play an important role in delineating areas where research would have the greatest impact on improving the overall confidence in the AOP.
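A weighted-sum score is the simplest MCDA aggregation such a prototype could use; the criteria, ratings, and weights below are purely illustrative and are not taken from the OECD AOP Handbook:

```python
def woe_score(scores, weights):
    """Weighted-sum MCDA aggregation of 0-1 confidence ratings."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(s * w for s, w in zip(scores, weights))

# Illustrative ratings for one AOP: biological plausibility and
# empirical support of the KERs, and essentiality of the KEs,
# aggregated with invented weights.
score = woe_score([0.9, 0.6, 0.8], [0.5, 0.3, 0.2])
```

Real MCDA tools add value functions and sensitivity analysis on the weights, but the transparency benefit already appears here: each criterion's contribution to the overall WoE is explicit and auditable.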

Relevance:

10.00%

Publisher:

Abstract:

A stratigraphy-based chronology for the North Greenland Eemian Ice Drilling (NEEM) ice core has been derived by transferring the annual-layer-counted Greenland Ice Core Chronology 2005 (GICC05) and its model extension (GICC05modelext) from the NGRIP core to the NEEM core using 787 match points of mainly volcanic origin identified in the electrical conductivity measurement (ECM) and dielectrical profiling (DEP) records. Tephra horizons found in both the NEEM and NGRIP ice cores are used to test the matching based on ECM and DEP and provide five additional horizons used for the timescale transfer. A thinning function reflecting the accumulated strain along the core has been determined using a Dansgaard-Johnsen flow model and an isotope-dependent accumulation rate parameterization. Flow parameters are determined from Monte Carlo analysis constrained by the observed depth-age horizons. In order to construct a chronology for the gas phase, the ice age-gas age difference (Δage) has been reconstructed using a coupled firn densification-heat diffusion model. Temperature and accumulation inputs to the Δage model, initially derived from the water isotope proxies, have been adjusted to optimize the fit to timing constraints from δ¹⁵N of nitrogen and high-resolution methane data during the abrupt onset of Greenland interstadials. The ice and gas chronologies and the corresponding thinning function represent the first chronology for the NEEM core, named GICC05modelext-NEEM-1. Based on both the flow and firn modelling results, the accumulation history for the NEEM site has been reconstructed. Together, the timescale and accumulation reconstruction provide the necessary basis for further analysis of the records from NEEM.
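The Dansgaard-Johnsen model behind the thinning function has a simple piecewise form. A sketch of the classic steady-state version (the ice thickness and kink height are illustrative values, not the NEEM Monte Carlo flow parameters):

```python
def dj_thinning(z, ice_thickness, kink_height):
    """Steady-state Dansgaard-Johnsen thinning function: the fraction of
    its original surface thickness that an annual layer retains once it
    sits at height z above the bed. Horizontal velocity is uniform above
    the kink height and decreases linearly to zero below it."""
    H, h = ice_thickness, kink_height
    denom = H - h / 2.0
    if z >= h:
        return (z - h / 2.0) / denom          # linear above the kink
    return z * z / (2.0 * h * denom)          # quadratic below the kink

# Illustrative values only (not the NEEM flow parameters):
H, h = 2500.0, 1200.0
surface = dj_thinning(H, H, h)     # 1.0 by construction
deep = dj_thinning(100.0, H, h)    # heavily thinned near the bed
```

Multiplying the surface annual-layer thickness (from the accumulation parameterization) by this function gives the modeled layer thickness at depth, which is what the Monte Carlo analysis tunes against the matched depth-age horizons.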

Relevance:

10.00%

Publisher:

Abstract:

A time series of fCO2, SST, and fluorescence data was collected between 1995 and 1997 by a CARIOCA buoy moored at the DyFAMed station (Dynamique des Flux Atmospheriques en Mediterranée) located in the northwestern Mediterranean Sea. On seasonal timescales, the spring phytoplankton bloom decreases the surface water fCO2 to approximately 290 µatm, followed by summer heating and a strong increase in fCO2 to a maximum of approximately 510 µatm. While the ΔfCO2 shows strong variations on seasonal timescales, the annual average air-sea disequilibrium is only 2 µatm. Temperature-normalized fCO2 shows a continued decrease in dissolved CO2 throughout the summer and fall at a rate of approximately 0.6 µatm/d. The calculated annual air-sea CO2 transfer rate is -0.10 to -0.15 mol CO2 m-2 yr-1, with these low values reflecting the relatively weak wind speed regime and the small annual air-sea fCO2 disequilibrium. Extrapolating this rate over the whole Mediterranean Sea would lead to a flux of approximately -3 × 10^12 to -4.5 × 10^12 g C/yr, in good agreement with other estimates. An analysis of the effects of sampling frequency on annual air-sea CO2 flux estimates showed that monthly sampling is adequate to resolve the annual CO2 flux to within approximately ±10–18% at this site. Annual flux estimates made using temperature-derived fCO2 based on the measured fCO2-SST correlations agree with measurement-based calculations to within ±7–10% (depending on the gas transfer parameterization used), and suggest that annual CO2 flux estimates may be reasonably well predicted in this region from satellite- or model-derived SST and wind speed information.
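The underlying bulk computation is F = k · K0 · ΔfCO2. A minimal sketch using the Wanninkhof (1992) quadratic gas transfer velocity and, for simplicity, a constant assumed solubility (the real calculation uses temperature- and salinity-dependent solubility and integrates the full seasonal cycle, so the single-condition number below is only an order-of-magnitude illustration):

```python
def schmidt_co2(t_c):
    """Schmidt number of CO2 in seawater (Wanninkhof 1992 polynomial)."""
    return 2073.1 - 125.62 * t_c + 3.6276 * t_c ** 2 - 0.043219 * t_c ** 3

def co2_flux(u10, sst_c, delta_fco2_uatm, k0=3.2e-5):
    """Bulk air-sea CO2 flux in mol m-2 yr-1 (negative = into the ocean).
    Gas transfer velocity: Wanninkhof (1992), k = 0.31 u^2 (Sc/660)^-0.5
    in cm/hr; k0 is an assumed constant solubility in mol m-3 uatm-1."""
    k_cm_hr = 0.31 * u10 ** 2 * (schmidt_co2(sst_c) / 660.0) ** -0.5
    k_m_yr = k_cm_hr * 1e-2 * 24.0 * 365.0
    return k_m_yr * k0 * delta_fco2_uatm

# Weak winds and a small mean disequilibrium give a small net sink:
flux = co2_flux(u10=4.0, sst_c=18.0, delta_fco2_uatm=-2.0)
```

Because the flux is quadratic in wind speed and linear in ΔfCO2, the seasonal covariance of winds and disequilibrium matters, which is why the sampling-frequency analysis above is needed before a mean annual flux can be trusted.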

Relevance:

10.00%

Publisher:

Abstract:

This dataset presents results from the DFG-funded Arctic Turbulence Experiment (ARCTEX-2006) performed by the University of Bayreuth on the island of Svalbard, Norway, during the winter/spring transition of 2006. From May 5 to May 19, 2006, turbulent flux and meteorological measurements were performed on the monitoring field near Ny-Ålesund, at 78°55'24''N, 11°55'15''E, Kongsfjord, Svalbard (Spitsbergen), Norway. The ARCTEX-2006 campaign site was located about 200 m southeast of the settlement on flat, snow-covered tundra, 11 m to 14 m above sea level. The permanent sites used for this study consisted of the 10 m meteorological tower of the Alfred Wegener Institute for Polar and Marine Research (AWI), the internationally standardized radiation measurement site of the Baseline Surface Radiation Network (BSRN), the radiosonde launch site, and the AWI tethered balloon launch sites. The temporary sites set up by the University of Bayreuth were a 6 m meteorological gradient tower, an eddy-flux measurement complex (EF), and a laser scintillometer section (SLS). A quality assessment and data correction were applied to detect and eliminate measurement errors common in a high-arctic landscape. In addition, the quality-checked sensible heat flux measurements are compared with bulk aerodynamic formulas that are widely used in atmosphere-ocean/land-ice models for polar regions, as described in Ebert and Curry (1993, doi:10.1029/93JC00656) and Launiainen and Cheng (1995). These parameterization approaches allow easy estimation of the turbulent surface fluxes from routine meteorological measurements.
The data show:
- the role of intermittency in the turbulent atmospheric fluctuations of momentum and scalars,
- the existence of a disturbed vertical temperature profile (sharp inversion layer) close to the surface,
- the relevance of possible free-convection events for snow and ice melt during the Arctic spring at Svalbard, and
- the relevance of mesoscale atmospheric circulation patterns and air-mass advection for the near-surface turbulent heat exchange during the Arctic spring at Svalbard.

From these observations, recommendations and improvements could be derived regarding the interpretation of eddy-flux and laser scintillometer data, as well as the arrangement of the instrumentation under distinctly polar exchange conditions and (extreme) weather situations.
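The bulk aerodynamic formulas referenced above estimate turbulent fluxes from routine measurements. A minimal neutral-stability sketch of the sensible heat flux (the transfer coefficient is a single illustrative value; the cited schemes adjust it for atmospheric stability):

```python
def sensible_heat_flux(u, t_surf, t_air, c_h=1.3e-3, rho=1.3, cp=1005.0):
    """Bulk aerodynamic sensible heat flux H = rho * cp * C_H * U * (Ts - Ta)
    in W m-2, positive upward. C_H is an illustrative neutral transfer
    coefficient; rho (kg m-3) and cp (J kg-1 K-1) are for cold air."""
    return rho * cp * c_h * u * (t_surf - t_air)

# A snow surface warmer than the overlying air drives an upward flux:
h_up = sensible_heat_flux(u=5.0, t_surf=-5.0, t_air=-8.0)
```

In stable high-arctic conditions with sharp surface inversions, the sign of (Ts - Ta) flips and C_H shrinks, which is precisely the regime where such bulk schemes need validation against the eddy-flux and scintillometer data described above.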