19 results for Glass and other amorphous materials
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
The structural relaxation of pure amorphous silicon (a-Si) and hydrogenated amorphous silicon (a-Si:H), which occurs during thermal annealing experiments, has been analyzed by Raman spectroscopy and differential scanning calorimetry. Unlike a-Si, the heat evolved from a-Si:H cannot be explained by relaxation of the Si-Si network strain; rather, it reveals a derelaxation of the bond-angle strain. Since the state of relaxation after annealing is very similar for the pure and hydrogenated materials, our results give strong experimental support to the predicted configurational gap between a-Si and crystalline silicon.
Abstract:
The purpose of this paper is twofold. First, we construct a DSGE model which spells out explicitly the instrumentation of monetary policy. The interest rate is determined every period depending on the supply of and demand for reserves, which in turn are affected by fundamental shocks: unforeseeable changes in cash withdrawal, autonomous factors, technology, and government spending. Unexpected changes in the monetary conditions of the economy are interpreted as monetary shocks. We show that these monetary shocks have the usual effects on economic activity without the need to impose additional frictions such as limited participation in asset markets or sticky prices. Second, we show that this view of monetary policy may have important consequences for empirical research. In the model, the contemporaneous correlations between interest rates, prices, and output are due to the simultaneous effect of all fundamental shocks. We provide an example where these contemporaneous correlations may be misinterpreted as a Taylor rule. In addition, we use the sign of the impact responses of all shocks on output, prices, and interest rates derived from the model to identify the sources of shocks in the data.
Abstract:
Theory of compositional data analysis is often focused on the composition only. However, in practical applications we often treat a composition together with covariables on some other scale. This contribution systematically gathers and develops statistical tools for this situation. For instance, for the graphical display of the dependence of a composition on a categorical variable, a colored set of ternary diagrams might be a good idea for a first look at the data, but it will quickly hide important aspects if the composition has many parts or takes extreme values. On the other hand, colored scatterplots of ilr components may not be very instructive for the analyst if the conventional, black-box ilr is used. Thinking in terms of the Euclidean structure of the simplex, we suggest setting up appropriate projections which on one side show the compositional geometry and on the other side are still comprehensible by a non-expert analyst, readable for all locations and scales of the data. This is done, e.g., by defining special balance displays with carefully selected axes. Following this idea, we need to ask systematically how to display, explore, describe, and test the relation to complementary or explanatory data of categorical, real, ratio, or again compositional scales. This contribution shows that it is sufficient to use some basic concepts and very few advanced tools from multivariate statistics (principal covariances, multivariate linear models, trellis or parallel plots, etc.) to build appropriate procedures for all these combinations of scales. This has some fundamental implications for their software implementation and for how they might be taught to analysts not already expert in multivariate analysis.
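As a concrete illustration of the balance coordinates advocated above, here is a minimal Python sketch (not the authors' code): it computes hand-picked, interpretable balances for a 3-part composition instead of using a black-box ilr basis. The toy data and the particular grouping of parts are illustrative assumptions.

import numpy as np

def balance(x, num, den):
    # Balance between part groups `num` and `den` of composition x:
    # b = sqrt(r*s/(r+s)) * ln( gmean(x[num]) / gmean(x[den]) ),
    # where r and s are the sizes of the two groups.
    r, s = len(num), len(den)
    gmean = lambda idx: np.exp(np.mean(np.log(x[..., idx]), axis=-1))
    return np.sqrt(r * s / (r + s)) * np.log(gmean(num) / gmean(den))

# Toy 3-part compositions (rows sum to 1), e.g. (sand, silt, clay).
X = np.array([[0.60, 0.30, 0.10],
              [0.20, 0.50, 0.30]])

b1 = balance(X, [0], [1, 2])  # part 1 against parts 2 and 3
b2 = balance(X, [1], [2])     # part 2 against part 3
print(np.column_stack([b1, b2]))  # orthonormal ilr coordinates

Plotting b1 against b2, or against an external covariable, gives exactly the kind of balance display the abstract recommends: each axis is a readable log-ratio of named groups of parts.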
Abstract:
The participle and other related phenomena in Old Spanish and Old Catalan
Abstract:
There is hardly a case in exploration geology where the data studied do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" represent a mathematical challenge for the interpretation.

We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot coexist with nepheline. Another common essential zero is a North azimuth; however, we can always change that zero for the value of 360°. These are known as "essential zeros", but what can we do with the "rounded zeros" that result from values below the detection limit of the equipment?

Amalgamation, e.g. adding Na2O and K2O as total alkalis, is a solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires a good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment used will generate spurious distributions, especially in ternary diagrams. The same will occur if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation).

The method that we propose takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between the copper and molybdenum values, but while copper will always be above the detection limit, many of the molybdenum values will be "rounded zeros". So, we take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable.

Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
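A minimal sketch of the proposed imputation, assuming a log-log regression between copper and molybdenum (the abstract specifies only "a regression equation"); the variable names, detection limit, and toy data are illustrative:

import numpy as np

def impute_rounded_zeros(cu, mo, detection_limit):
    # Replace Mo values below the detection limit using a Cu-Mo
    # regression fitted on the lower quartile of the detected Mo values.
    cu, mo = np.asarray(cu, float), np.asarray(mo, float)
    detected = mo >= detection_limit
    q1 = np.quantile(mo[detected], 0.25)   # lower quartile of real values
    train = detected & (mo <= q1)
    # Fit ln(Mo) = a + b*ln(Cu) on that subset (lognormal data).
    b, a = np.polyfit(np.log(cu[train]), np.log(mo[train]), 1)
    mo = mo.copy()
    mo[~detected] = np.exp(a + b * np.log(cu[~detected]))
    return mo

cu = np.array([300., 500., 800., 1200., 2500., 4000., 350., 900.])
mo = np.array([  0.,   8.,  12.,   15.,   40.,   60.,   0.,  20.])  # 0 = below detection
print(impute_rounded_zeros(cu, mo, detection_limit=5.0))

Note that the two imputed values differ, reflecting their different copper grades; this is the stated advantage over replacing every rounded zero with a single fixed constant.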
Abstract:
Recently, the surprising result that ab initio calculations on benzene and other planar arenes at correlated MP2, MP3, configuration interaction with singles and doubles (CISD), and coupled cluster with singles and doubles levels of theory using standard Pople basis sets yield nonplanar minima has been reported. The planar optimized structures turn out to be transition states presenting one or more large imaginary frequencies, whereas single-determinant-based methods lead to the expected planar minima and no imaginary frequencies. It has been suggested that such anomalous behavior may originate from two-electron basis set incompleteness error. In this work, we show that the reported pitfalls can be interpreted in terms of intramolecular basis set superposition error (BSSE) effects, mostly between the C–H moieties constituting the arenes. We have carried out counterpoise-corrected optimizations and frequency calculations at the Hartree–Fock, B3LYP, MP2, and CISD levels of theory with several basis sets for a number of arenes. In all cases, correcting for intramolecular BSSE fixes the anomalous behavior of the correlated methods, whereas no significant differences are observed in the single-determinant case. Consequently, all systems studied are planar at all levels of theory. The effects of different intramolecular fragment definitions and of the particular case of charged species, namely the cyclopentadienyl and indenyl anions, are also discussed.
Abstract:
The effectiveness of decision rules depends on characteristics of both rules and environments. A theoretical analysis of environments specifies the relative predictive accuracies of the lexicographic rule 'take-the-best' (TTB) and other simple strategies for binary choice. We identify three factors: how the environment weights variables; characteristics of choice sets; and error. For cases involving from three to five binary cues, TTB is effective across many environments. However, hybrids of equal weights (EW) and TTB models are more effective as environments become more compensatory. In the presence of error, TTB and similar models do not predict much better than a naïve model that exploits dominance. We emphasize psychological implications and the need for more complete theories of the environment that include the role of error.
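For readers unfamiliar with the strategies being compared, here is an illustrative Python sketch (the cue values and validity ordering are toy assumptions, not the paper's simulation setup):

import random

def take_the_best(a, b, cue_order):
    # Lexicographic TTB: the first (most valid) cue that discriminates decides.
    for i in cue_order:
        if a[i] != b[i]:
            return 'A' if a[i] > b[i] else 'B'
    return random.choice(['A', 'B'])  # no cue discriminates: guess

def equal_weights(a, b):
    # Compensatory EW: the option with the larger unweighted cue sum wins.
    if sum(a) == sum(b):
        return random.choice(['A', 'B'])
    return 'A' if sum(a) > sum(b) else 'B'

# Three binary cues, ordered by assumed validity (cue 0 most valid).
a, b = (1, 0, 0), (0, 1, 1)
print(take_the_best(a, b, cue_order=[0, 1, 2]))  # 'A': cue 0 decides alone
print(equal_weights(a, b))                       # 'B': cues 1 and 2 outvote cue 0

The example shows why environment structure matters: in a non-compensatory environment the single best cue suffices and TTB does well, whereas in a compensatory environment the summed lesser cues can overturn it and EW-style rules gain.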
Abstract:
In dealing with systems as complex as the cytoskeleton, we need organizing principles or, short of that, an empirical framework into which these systems fit. We report here unexpected invariants of cytoskeletal behavior that comprise such an empirical framework. We measured elastic and frictional moduli of a variety of cell types over a wide range of time scales and using a variety of biological interventions. In all instances elastic stresses dominated at frequencies below 300 Hz, increased only weakly with frequency, and followed a power law; no characteristic time scale was evident. Frictional stresses paralleled the elastic behavior at frequencies below 10 Hz but approached a Newtonian viscous behavior at higher frequencies. Surprisingly, all data could be collapsed onto master curves, the existence of which implies that elastic and frictional stresses share a common underlying mechanism. Taken together, these findings define an unanticipated integrative framework for studying protein interactions within the complex microenvironment of the cell body, and appear to set limits on what can be predicted about integrated mechanical behavior of the matrix based solely on cytoskeletal constituents considered in isolation. Moreover, these observations are consistent with the hypothesis that the cytoskeleton of the living cell behaves as a soft glassy material, wherein cytoskeletal proteins modulate cell mechanical properties mainly by changing an effective temperature of the cytoskeletal matrix. If so, then the effective temperature becomes an easily quantified determinant of the ability of the cytoskeleton to deform, flow, and reorganize.
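The weak power-law behavior described above is commonly parameterized as follows; the abstract itself gives no formula, so this is a hedged sketch in the spirit of the soft-glassy-rheology literature, with all symbols as assumptions:

G'(f) = G_0 \left( \frac{f}{f_0} \right)^{x-1}, \qquad
G''(f) \approx \eta \, G'(f) + 2\pi f \mu

Here G' is the elastic modulus, G'' the frictional (loss) modulus, η a structural damping coefficient that keeps G'' parallel to G' at low frequencies, and μ a Newtonian viscosity term that dominates at high frequencies; x plays the role of the effective temperature, with x → 1 corresponding to an ideal elastic solid and larger x to more fluid-like behavior.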
Abstract:
DnaSP is a software package for the analysis of DNA polymorphism data. The present version introduces several new modules and features which, among other options, allow: (1) handling big data sets (~5 Mb per sequence); (2) conducting a large number of coalescent-based tests by Monte Carlo computer simulations; (3) extensive analyses of the genetic differentiation and gene flow among populations; (4) analysing the evolutionary pattern of preferred and unpreferred codons; (5) generating graphical outputs for easy visualization of results. Availability: The software package, including complete documentation and examples, is freely available to academic users from http://www.ub.es/dnasp
Abstract:
Seismic methods used in the study of snow avalanches may be employed to detect and characterize landslides and other mass movements, using standard spectrogram/sonogram analysis. For snow avalanches, the spectrogram for a station that is approached by a sliding mass exhibits a triangular time/frequency signature due to an increase over time in the higher-frequency constituents. Recognition of this characteristic footprint in a spectrogram suggests a useful metric for identifying other mass-movement events such as landslides. The 1 June 2005 slide at Laguna Beach, California is examined using data obtained from the Caltech/USGS Regional Seismic Network. This event exhibits the same general spectrogram features observed in studies of Alpine snow avalanches. We propose that these features are due to the systematic relative increase in high-frequency energy transmitted to a seismometer in the path of a mass slide owing to a reduction of distance from the source signal. This phenomenon is related to the path of the waves whose high frequencies are less attenuated as they traverse shorter source-receiver paths. Entrainment of material in the course of the slide may also contribute to the triangular time/frequency signature as a consequence of the increase in the energy involved in the process; in this case the contribution would be a source effect. By applying this commonly observed characteristic to routine monitoring algorithms, along with custom adjustments for local site effects, we seek to contribute to the improvement in automatic detection and monitoring methods of landslides and other mass movements.
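A minimal Python sketch of the kind of spectrogram analysis described (not the authors' pipeline; the sampling rate, window parameters, and synthetic trace are assumptions):

import numpy as np
from scipy.signal import chirp, spectrogram

fs = 100.0                    # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)  # 60 s record
# Synthetic stand-in for a mass-movement signal: frequency content rises
# over time, mimicking the triangular time/frequency signature.
trace = chirp(t, f0=1.0, t1=60.0, f1=30.0) + 0.1 * np.random.randn(t.size)

f, times, Sxx = spectrogram(trace, fs=fs, nperseg=256, noverlap=192)
# For a real event, plotting 10*log10(Sxx) over (times, f) should show
# energy filling progressively higher frequency bins as the slide
# approaches the station.
print(Sxx.shape)  # (frequency bins, time bins)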
Abstract:
Courtyard houses are attested at several sites in southern Gaul between the 5th and the 1st centuries BC. They represent a new concept when compared to the traditional protohistoric houses of the region and have often been interpreted in terms of Mediterranean, Greek or Italic influences. Regardless of their origin, exogenous influences or evolution, these houses suggest the emergence of social differentiation and elites in several of the main settlements. This article analyses the significance of the various courtyard house categories in the context of local, indigenous societies, while trying to understand the social implications of this new type of residence. In a wider context, the development of domestic architecture during the Iron Age is analysed alongside the relationships between changing uses of space and social changes.
Abstract:
Head-space gas chromatography with flame-ionization detection (HS-GC-FID) and purge-and-trap gas chromatography-mass spectrometry (P&T-GC-MS) have been used to determine methyl tert-butyl ether (MTBE) and benzene, toluene, and the xylenes (BTEX) in groundwater. In the work discussed in this paper, measures of quality, e.g. recovery (94-111%), precision (4.6-12.2%), limits of detection (0.3-5.7 µg L⁻¹ for HS and 0.001 µg L⁻¹ for P&T), and robustness, were compared for both methods. In addition, for purposes of comparison, groundwater samples from areas suffering from odor problems because of fuel spillage and tank leakage were analyzed by use of both techniques. For high concentration levels there was good correlation between the results from the two methods.
Abnormalities of sodium excretion and other disorders of renal function in fulminant hepatic failure
Abstract:
Renal function was evaluated in 40 patients with fulminant hepatic failure. They were divided into two groups on the basis of glomerular filtration rates greater than 40 ml/min or less than 25 ml/min. A number of patients in group 1 had markedly abnormal renal retention of sodium, together with a reduced free-water clearance and low potassium excretion, which could be explained by increased proximal tubular reabsorption of sodium. The patients in group 2 had evidence that renal tubular integrity was maintained when the glomerular filtration rate was greater than or equal to 3 ml/min (functional renal failure), but evidence of tubular damage was present when it was less than 3 ml/min (acute tubular necrosis).