910 results for representation of linear operators
Abstract:
We introduce a technique for assessing the diurnal development of convective storm systems based on outgoing longwave radiation fields. Using the size distribution of the storms measured from a series of images, we generate an array in the lengthscale-time domain based on the standard score statistic. This demonstrates succinctly the size evolution of the storms as well as the dissipation kinematics, and it also provides evidence relating to the temperature evolution of the cloud tops. We apply this approach to a test case comparing observations made by the Geostationary Earth Radiation Budget instrument to output from the Met Office Unified Model run at two resolutions. The 12 km resolution model produces peak convective activity on all lengthscales significantly earlier in the day than shown by the observations and shows no evidence of storms growing in size. The 4 km resolution model shows realistic timing and growth evolution, although the dissipation mechanism still differs from that in the observed data.
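As a rough illustration of the kind of standardisation described above (not the authors' code), the sketch below converts a pre-binned array of storm counts into standard scores so that each lengthscale bin is expressed relative to its own diurnal mean and spread; the array layout, the choice of normalisation axis and all names are assumptions.

```python
import numpy as np

def standard_score_array(counts):
    """Convert a (time, lengthscale) array of storm counts into standard scores,
    so each lengthscale bin is expressed relative to its own diurnal mean/spread."""
    counts = np.asarray(counts, dtype=float)
    mean = counts.mean(axis=0, keepdims=True)        # mean over the diurnal cycle
    std = counts.std(axis=0, ddof=1, keepdims=True)  # spread over the diurnal cycle
    std[std == 0] = np.nan                           # avoid division by zero
    return (counts - mean) / std

# Illustrative use: 48 half-hourly images, 20 lengthscale bins (synthetic data)
z = standard_score_array(np.random.poisson(5.0, size=(48, 20)))
# Peaks of z along the time axis show when storms of a given size are anomalously
# common; their drift across lengthscale bins traces growth or dissipation.
```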
Abstract:
Matrix isolation IR spectroscopy has been used to study the vacuum pyrolysis of 1,1,3,3-tetramethyldisiloxane (L1), 1,1,3,3,5,5-hexamethyltrisiloxane (L2) and 3H,5H-octamethyltetrasiloxane (L3) at ca. 1000 K in a flow reactor at low pressures. The hydrocarbons CH3, CH4, C2H2, C2H4, and C2H6 were observed as prominent pyrolysis products in all three systems, and amongst the weaker features are bands arising from the methylsilanes Me2SiH2 (for L1 and L2) and Me3SiH (for L3). The fundamental of SiO was also observed very weakly. By use of quantum chemical calculations combined with earlier kinetic models, mechanisms have been proposed involving the intermediacy of silanones Me2Si=O and MeSiH=O. Model calculations on the decomposition pathways of H3SiOSiH3 and H3SiOSiH2OSiH3 show that silanone elimination is favoured over silylene extrusion.
Abstract:
A range of linear polyurethanes featuring aliphatic, aromatic and ether residues has been prepared by co-polymerisation of novel 'masked' isocyanate A2-type monomers and diols. The reactive isocyanate monomers were generated in situ via the triphenylphosphine-mediated decomposition of the heterocyclic disulfide 1,2,4-dithiazolidine-3,5-dione. Two different synthetic approaches to the novel A2-type monomers were examined and assessed, involving either coupling two 1,2,4-dithiazolidine-3,5-diones through a spacer group or constructing the 1,2,4-dithiazolidine-3,5-diones directly from diamines. The resulting polyurethanes were purified by solvent extraction and characterised by GPC and by NMR and IR spectroscopy. Molecular weight data obtained from GPC and from 1H NMR end-group analysis were compared. The thermal properties of the polyurethanes were determined using DSC, and their solubility in common aprotic organic solvents was also assessed and related to their structural composition.
Abstract:
Three new linear trinuclear nickel(II) complexes, [Ni3(salpen)2(OAc)2(H2O)2]·4H2O (1) (OAc = acetate, CH3COO⁻), [Ni3(salpen)2(OBz)2] (2) (OBz = benzoate, PhCOO⁻) and [Ni3(salpen)2(OCn)2(CH3CN)2] (4) (OCn = cinnamate, PhCH=CHCOO⁻), where H2salpen is the tetradentate ligand N,N'-bis(salicylidene)-1,3-pentanediamine, have been synthesized and characterized structurally and magnetically. The choice of solvent for growing single crystals was made by inspecting the morphology of the initially obtained solids with the help of SEM. The magnetic properties of a closely related complex, [Ni3(salpen)2(OPh)2(EtOH)] (3) (OPh = phenylacetate, PhCH2COO⁻), whose structure and solution properties have been reported recently, have also been studied here. The structural analyses reveal that both phenoxo and carboxylate bridging are present in all the complexes and that the three Ni(II) centres remain in a linear disposition. Although the Schiff base ligand and the syn-syn bridging bidentate mode of the carboxylate group remain the same in complexes 1-4, the change of the alkyl/aryl group of the carboxylate brings about systematic variation between six- and five-coordination in the geometry of the terminal Ni(II) centres of the trinuclear units. The steric demand as well as the hydrophobic nature of the alkyl/aryl group of the carboxylate is found to play a crucial role in tuning the geometry. Variable-temperature (2-300 K) magnetic susceptibility measurements show that complexes 1-4 are antiferromagnetically coupled (J = -3.2(1), -4.6(1), -3.2(1) and -2.8(1) cm⁻¹ for 1-4, respectively). Calculations of the zero-field splitting parameter indicate that the values of D for complexes 1-4 are in the high range (D = +9.1(2), +14.2(2), +9.8(2) and +8.6(1) cm⁻¹ for 1-4, respectively). The highest D values, +14.2(2) and +9.8(2) cm⁻¹ for complexes 2 and 3, respectively, are consistent with the pentacoordinate geometry of the two terminal nickel(II) ions in 2 and of one terminal nickel(II) ion in 3.
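For orientation, the quoted J and D values parametrize a spin Hamiltonian of the kind routinely used for linear trinuclear Ni(II) (S = 1) systems; a common form is written below, but the abstract does not state the authors' exact convention, so the factor of 2 and the neglect of a terminal-terminal coupling term are assumptions.

\[
\hat{H} = -2J\left(\hat{S}_1\cdot\hat{S}_2 + \hat{S}_2\cdot\hat{S}_3\right)
          + D\sum_{i=1}^{3}\left[\hat{S}_{i,z}^{2} - \tfrac{1}{3}S_i\left(S_i+1\right)\right],
\]

with J < 0 corresponding to antiferromagnetic coupling between adjacent Ni(II) centres and D the single-ion zero-field splitting.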
Abstract:
The paper is an investigation of the exchange of ideas and information between an architect and building users in the early stages of the building design process, before the design brief or any drawings have been produced. The purpose of the research is to gain insight into the type of information users exchange with architects in early design conversations and to better understand the influence that the format of design interactions and interactional behaviours have on the exchange of information. We report an empirical study of pre-briefing conversations in which the overwhelming majority of the exchanges were about the functional or structural attributes of space; discussion touching on the phenomenological, perceptual and symbolic meanings of space was rare. We explore the contextual features of the meetings, the conversational strategies taken by the architect to prompt the users for information, and the influence these had on the information provided. Recommendations are made on the format and structure of pre-briefing conversations and on designers' strategies for raising the level of information provided by the user beyond the functional or structural attributes of space.
Abstract:
Background: Recent research provides evidence for specific disturbances in feeding and growth in children of mothers with eating disorders. Aim: To investigate the impact of maternal eating disorders during the post-natal year on the internal world of children, as expressed in children's representations of self and their mother in pretend mealtime play at 5 years of age. Methods: Children of mothers with eating disorders (n = 33) and a comparison group (n = 24) were videotaped enacting a family mealtime in pretend play. Specific classes of children's play representations were coded blind to group membership. Univariate analyses compared the groups on representations of mother and self. Logistic regression explored factors predicting pretend play representations. Results: Positive representations of the mother expressed as feeding, eating or body-shape themes were more frequent in the index group. There were no other significant group differences in representations. In a logistic regression analysis, current maternal eating psychopathology was the principal predictor of these positive maternal representations. Marital criticism was associated with negative representations of the mother. Conclusions: These findings suggest that maternal eating disorders may influence the development of a child's internal world, such that the children become more preoccupied with maternal eating concerns. However, more extensive research on larger samples is required to replicate these preliminary findings.
Abstract:
To-be-enacted material is more accessible in tests of recognition and lexical decision than material not intended for action (T. Goschke & J. Kuhl, 1993; R. L. Marsh, J. L. Hicks, & M. L. Bink, 1998). This finding has been attributed to the superior status of intention-related information. The current article explores an alternative (action-superiority) account that draws parallels between the intended enactment effect (IEE) and the subject-performed task effect. Using 2 paradigms, the authors observed faster recognition latencies for both enacted and to-be-enacted material. It is crucial to note that there was no evidence of an IEE for items that had already been executed during encoding. The IEE was also eliminated when motor processing was prevented after verbal encoding. These findings suggest an overlap between overt and intended enactment and indicate that motor information may be activated for verbal material in preparation for subsequent execution.
Abstract:
Time/frequency and temporal analyses have been widely used in biomedical signal processing. These methods represent important characteristics of a signal in both the time and frequency domains, so that essential features of the signal can be viewed and analysed in order to understand or model the physiological system. Historically, Fourier spectral analysis has provided a general method for examining global energy/frequency distributions. However, an assumption inherent in these methods is the stationarity of the signal. As a result, Fourier methods are generally not an appropriate approach for investigating signals with transient components. This work presents the application of a new signal processing technique, empirical mode decomposition and the Hilbert spectrum, to the analysis of electromyographic signals. The results show that this method may provide not only an increase in spectral resolution but also insight into the underlying process of muscle contraction.
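As a rough illustration of the approach (not the authors' implementation), the sketch below builds a minimal empirical mode decomposition by cubic-spline envelope sifting and then uses the Hilbert transform to obtain instantaneous amplitude and frequency for each intrinsic mode function. Stopping criteria, end-effect handling and the synthetic "EMG-like" test signal are deliberately simplified assumptions.

```python
import numpy as np
from scipy.signal import argrelextrema, hilbert
from scipy.interpolate import CubicSpline

def sift(x, n_iter=10):
    """Extract one intrinsic mode function (IMF) by repeated envelope sifting."""
    h = x.copy()
    t = np.arange(len(x))
    for _ in range(n_iter):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 3 or len(minima) < 3:
            break
        upper = CubicSpline(maxima, h[maxima])(t)   # upper envelope
        lower = CubicSpline(minima, h[minima])(t)   # lower envelope
        h = h - 0.5 * (upper + lower)               # remove the local mean envelope
    return h

def emd(x, n_imfs=4):
    """Decompose x into a few IMFs plus a residual trend."""
    imfs, residual = [], x.astype(float)
    for _ in range(n_imfs):
        imf = sift(residual)
        imfs.append(imf)
        residual = residual - imf
    return imfs, residual

def hilbert_spectrum(imf, fs):
    """Instantaneous amplitude and frequency of one IMF via the analytic signal."""
    analytic = hilbert(imf)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    freq = np.diff(phase) * fs / (2.0 * np.pi)
    return amplitude[:-1], freq

# Illustrative use on a synthetic burst-like signal
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) * np.exp(-((t - 0.5) ** 2) / 0.02) + 0.1 * np.random.randn(len(t))
imfs, _ = emd(x)
amp, freq = hilbert_spectrum(imfs[0], fs)
```

Because the decomposition is data-driven rather than based on fixed sinusoidal bases, the instantaneous frequency can track non-stationary, transient components of the kind found in EMG.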
Abstract:
This book is a collection of articles devoted to the theory of linear operators in Hilbert spaces and its applications. The subjects covered range from the abstract theory of Toeplitz operators to the analysis of very specific differential operators arising in quantum mechanics, electromagnetism, and the theory of elasticity; the stability of numerical methods is also discussed. Many of the articles deal with spectral problems for not necessarily selfadjoint operators. Some of the articles are surveys outlining the current state of the subject and presenting open problems.
Abstract:
The ability of four operational weather forecast models [ECMWF, Action de Recherche Petite Échelle Grande Échelle model (ARPEGE), Regional Atmospheric Climate Model (RACMO), and Met Office] to generate a cloud at the right location and time (the cloud frequency of occurrence) is assessed in the present paper using a two-year time series of observations collected by profiling ground-based active remote sensors (cloud radar and lidar) located at three different sites in western Europe (Cabauw, Netherlands; Chilbolton, United Kingdom; and Palaiseau, France). Particular attention is given to potential biases that may arise from instrumentation differences (especially sensitivity) from one site to another and from intermittent sampling. In a second step, the statistical properties of the cloud variables involved in most advanced cloud schemes of numerical weather forecast models (ice water content and cloud fraction) are characterized and compared with their counterparts in the models. The two years of observations are first considered as a whole in order to evaluate the accuracy of the statistical representation of the cloud variables in each model. It is shown that all models tend to produce too many high-level clouds, with too-high cloud fraction and ice water content. The midlevel and low-level cloud occurrence is also generally overestimated, with too-low cloud fraction but a correct ice water content. The dataset is then divided into seasons to evaluate the potential of the models to generate different cloud situations in response to different large-scale forcings. Strong variations in cloud occurrence are found in the observations from one season to the same season the following year, as well as in the seasonal cycle. Overall, the model biases observed using the whole dataset are still found at the seasonal scale, but the models generally manage to reproduce the observed seasonal variations in cloud occurrence well. However, the models do not generate the same cloud fraction distributions, and these distributions do not agree with the observations. Another general conclusion is that the use of continuous ground-based radar and lidar observations is definitely a powerful tool for evaluating model cloud schemes and for a responsive assessment of the benefit achieved by changing or tuning a model cloud scheme.
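A schematic of the basic frequency-of-occurrence comparison described above, assuming observed and modelled cloud masks have already been put on a common time-height grid; the thresholding and gridding, which the paper treats carefully, are taken as given here and all names are illustrative.

```python
import numpy as np

def cloud_frequency_of_occurrence(cloud_mask):
    """Fraction of profiles containing cloud at each height level.

    cloud_mask: boolean array of shape (time, height), True where cloud is
    present (from a radar/lidar retrieval, or a model cloud fraction above
    some threshold)."""
    return np.asarray(cloud_mask, dtype=float).mean(axis=0)

def occurrence_bias(model_mask, obs_mask):
    """Model-minus-observation difference in occurrence, per height level."""
    return cloud_frequency_of_occurrence(model_mask) - cloud_frequency_of_occurrence(obs_mask)

# Illustrative use with random masks on a (time, height) grid
rng = np.random.default_rng(0)
obs = rng.random((10000, 60)) < 0.2
mod = rng.random((10000, 60)) < 0.3
bias = occurrence_bias(mod, obs)   # positive values: the model makes cloud too often
```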
Abstract:
A poor representation of cloud structure in a general circulation model (GCM) is widely recognised as a potential source of error in the radiation budget. Here, we develop a new way of representing both horizontal and vertical cloud structure in a radiation scheme. This combines the ‘Tripleclouds’ parametrization, which introduces inhomogeneity by using two cloudy regions in each layer as opposed to one, each with different water content values, with ‘exponential-random’ overlap, in which clouds in adjacent layers are not overlapped maximally, but according to a vertical decorrelation scale. This paper, Part I of two, aims to parametrize the two effects such that they can be used in a GCM. To achieve this, we first review a number of studies for a globally applicable value of fractional standard deviation of water content for use in Tripleclouds. We obtain a value of 0.75 ± 0.18 from a variety of different types of observations, with no apparent dependence on cloud type or gridbox size. Then, through a second short review, we create a parametrization of decorrelation scale for use in exponential-random overlap, which varies the scale linearly with latitude from 2.9 km at the Equator to 0.4 km at the poles. When applied to radar data, both components are found to have radiative impacts capable of offsetting biases caused by cloud misrepresentation. Part II of this paper implements Tripleclouds and exponential-random overlap into a radiation code and examines both their individual and combined impacts on the global radiation budget using re-analysis data.
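Stated as formulas, the two parametrized quantities amount to a constant fractional standard deviation of cloud water content and a decorrelation length that decreases linearly with latitude. The explicit linear form below is inferred from the quoted end points, and the symbols are illustrative rather than the authors' notation:

\[
f_w = \frac{\sigma_w}{\overline{w}} = 0.75 \pm 0.18,
\qquad
z_0(\phi) = 2.9\,\mathrm{km} - \left(2.9 - 0.4\right)\frac{|\phi|}{90^{\circ}}\,\mathrm{km},
\]

which gives z0 = 2.9 km at the Equator (phi = 0°) and 0.4 km at the poles (|phi| = 90°).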
Abstract:
Reliably representing both horizontal cloud inhomogeneity and vertical cloud overlap is fundamentally important for the radiation budget of a general circulation model. Here, we build on the work of Part One of this two-part paper by applying a pair of parameterisations that account for horizontal inhomogeneity and vertical overlap to global re-analysis data. These are applied both together and separately in an attempt to quantify the effects of poor representation of the two components on the radiation budget. Horizontal inhomogeneity is accounted for using the “Tripleclouds” scheme, which uses two regions of cloud in each layer of a gridbox as opposed to one; vertical overlap is accounted for using “exponential-random” overlap, which aligns vertically continuous cloud according to a decorrelation height. These are applied to a sample of scenes from a year of ERA-40 data. The largest radiative effect of horizontal inhomogeneity is found in areas of marine stratocumulus; the effect of vertical overlap is fairly uniform, but with larger individual short-wave and long-wave effects in areas of deep tropical convection. The combined effect of the two parameterisations is to reduce the magnitude of the net top-of-atmosphere cloud radiative forcing (CRF) by 2.25 W m−2, with shifts of up to 10 W m−2 in areas of marine stratocumulus. The effect of the uncertainty in our parameterisations on the radiation budget is also investigated. It is found that the uncertainty in the impact of horizontal inhomogeneity is of order ±60%, while the uncertainty in the impact of vertical overlap is much smaller. This suggests that the radiation budget is insensitive to the exact form of the global decorrelation height distribution derived in Part One.
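For context, “exponential-random” overlap blends maximum and random overlap of the cloud covers C1 and C2 of two vertically continuous cloudy layers separated by a distance Δz, with a weight that decays over the decorrelation height z0. The standard combination used in the overlap literature (not spelled out in the abstract) is

\[
\alpha = \exp\!\left(-\frac{\Delta z}{z_0}\right),\qquad
C_{\text{comb}} = \alpha\,C_{\text{max}} + (1-\alpha)\,C_{\text{rand}},
\]

where C_max = max(C1, C2) and C_rand = C1 + C2 − C1·C2 are the maximally and randomly overlapped covers; as z0 shrinks, the overlap tends towards random and the total cloud cover increases.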
Abstract:
We present extensive molecular dynamics simulations of the dynamics of dilute long probe chains entangled with a matrix of shorter chains. The chain lengths of both components are above the entanglement strand length, and the ratio of their lengths is varied over a wide range to cover the crossover from the chain reptation regime to the tube Rouse motion regime of the long probe chains. Reducing the matrix chain length results in a faster decay of the dynamic structure factor of the probe chains, in good agreement with recent neutron spin echo experiments. The diffusion of the long chains, measured by the mean square displacements of the monomers and of the centers of mass of the chains, demonstrates a systematic speed-up relative to the pure reptation behavior expected for monodisperse melts of sufficiently long polymers. On the other hand, the diffusion of the matrix chains is only weakly perturbed by the dilute long probe chains. The simulation results are qualitatively consistent with the theoretical predictions based on the constraint release Rouse model, but a detailed comparison reveals the existence of a broad distribution of disentanglement rates, which is partly confirmed by an analysis of the packing and diffusion of the matrix chains in the tube region of the probe chains. A coarse-grained simulation model based on the tube Rouse motion model, with incorporation of the probability distribution of the tube segment jump rates, is developed and shows results qualitatively consistent with the fine-scale molecular dynamics simulations. However, we observe a breakdown in the tube Rouse model when the short chain length is decreased to around N_S = 80, which is roughly 3.5 times the entanglement spacing N_e^(P) = 23. The location of this transition may be sensitive to the chain bending potential used in our simulations.
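A minimal sketch of the two diffusion diagnostics mentioned above (monomer and centre-of-mass mean square displacements), assuming unwrapped, equal-mass bead coordinates are available as an array traj[frame, chain, bead, xyz]; the array names, shapes and the brute-force lag loop are illustrative only.

```python
import numpy as np

def mean_square_displacements(traj):
    """Monomer (g1) and centre-of-mass (g3) MSD versus lag time.

    traj: unwrapped coordinates, shape (n_frames, n_chains, n_beads, 3)."""
    n_frames = traj.shape[0]
    com = traj.mean(axis=2)                      # centre of mass of each chain
    g1 = np.empty(n_frames - 1)                  # monomer MSD
    g3 = np.empty(n_frames - 1)                  # centre-of-mass MSD
    for lag in range(1, n_frames):
        d_bead = traj[lag:] - traj[:-lag]
        d_com = com[lag:] - com[:-lag]
        g1[lag - 1] = (d_bead ** 2).sum(axis=-1).mean()
        g3[lag - 1] = (d_com ** 2).sum(axis=-1).mean()
    return g1, g3

# Illustrative use with a synthetic random-walk trajectory
rng = np.random.default_rng(1)
traj = rng.normal(size=(200, 10, 50, 3)).cumsum(axis=0)
g1, g3 = mean_square_displacements(traj)
# Different dynamical regimes (reptation, constraint release, free diffusion)
# appear as different power laws of g1 and g3 versus lag time.
```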
Abstract:
The paper proposes a method of performing system identification of a linear system in the presence of bounded disturbances. The disturbances may be piecewise parabolic or periodic functions. The method is demonstrated effectively on two example systems with a range of disturbances.
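The abstract does not specify the algorithm, but one common way of formalising identification under bounded disturbances (set-membership or bounded-error identification, which may or may not be the approach taken in the paper) is to characterise every parameter vector consistent with the data and the disturbance bound:

\[
y(t) = \varphi(t)^{\mathsf{T}}\theta + d(t),\quad |d(t)| \le \delta
\;\;\Longrightarrow\;\;
\Theta_N = \left\{\theta : \left|y(t) - \varphi(t)^{\mathsf{T}}\theta\right| \le \delta,\ t = 1,\dots,N\right\},
\]

i.e. the feasible parameter set is an intersection of slabs in parameter space, a convex polytope that can be computed or outer-approximated recursively as data arrive.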