903 results for General theory of fields and particles
Abstract:
The clusters [Fe3(CO)11(RCN)] (1: R = Me, C3H5, C6H5, or C6H4-2-Me) have been prepared at low temperature from [Fe3(CO)12] and RCN in the presence of Me3NO. Compounds 1 react essentially quantitatively with a wide range of two-electron donors, L (viz. CO, PPh3, P(OMe)3, PPh2H, PPh2Me, PF3, CyNC (Cy = cyclohexyl), P(OEt)3, SbPh3, PBu3, AsPh3, or SnR2 (R = CH(SiMe3)2)), to give [Fe3(CO)11L] (2). In some cases, treatment of 2 with Me3NO and then L′ (a second two-electron donor) yields [Fe3(CO)10LL′] in high yield. The crystal and molecular structure of 1 (RCN = NCC6H4Me-2) has been determined by a full single-crystal structure analysis, which shows an axial nitrile coordinated at the unique iron atom, with two CO groups bridging the other two metal atoms.
Abstract:
CloudSat is a satellite experiment designed to measure the vertical structure of clouds from space. The expected launch of CloudSat is planned for 2004, and once launched, CloudSat will orbit in formation as part of a constellation of satellites (the A-Train) that includes NASA's Aqua and Aura satellites, a NASA-CNES lidar satellite (CALIPSO), and a CNES satellite carrying a polarimeter (PARASOL). A unique feature that CloudSat brings to this constellation is the ability to fly a precise orbit enabling the fields of view of the CloudSat radar to be overlapped with the CALIPSO lidar footprint and the other measurements of the constellation. The precision and near simultaneity of this overlap creates a unique multisatellite observing system for studying the atmospheric processes essential to the hydrological cycle. The vertical profiles of cloud properties provided by CloudSat on the global scale fill a critical gap in the investigation of feedback mechanisms linking clouds to climate. Measuring these profiles requires a combination of active and passive instruments, and this will be achieved by combining the radar data of CloudSat with data from other active and passive sensors of the constellation. This paper provides a general overview of the mission and its underpinning science, describes the expected data products and their anticipated applications, and outlines the potential capability of the A-Train for cloud observations. Notably, the CloudSat mission is expected to stimulate new areas of research on clouds. The mission also provides an important opportunity to demonstrate active sensor technology for future scientific and tactical applications. The CloudSat mission is a partnership between NASA's JPL, the Canadian Space Agency, Colorado State University, the U.S. Air Force, and the U.S. Department of Energy.
Abstract:
This paper examines the implications of policy fracture and arm's-length governance within the decision-making processes currently shaping curriculum design within the English education system. In particular it argues that an unresolved 'ideological fracture' at government level has been passed down to school leaders, whose response to the dilemma is distorted by the target-driven agenda of arm's-length agencies. Drawing upon the findings of a large-scale online survey of history teaching in English secondary schools, this paper illustrates the problems that occur when policy making is divorced from curriculum theory, and in particular from any consideration of the nature of knowledge. Drawing on the social realist theory of knowledge elaborated by Young (2008), we argue that the rapid spread of alternative curricular arrangements, implemented in the absence of an understanding of curriculum theory, undermines the value of disciplined thinking to the detriment of many young people, particularly those in areas of social and economic deprivation.
Abstract:
Undirected graphical models are widely used in statistics, physics and machine vision. However, Bayesian parameter estimation for undirected models is extremely challenging, since evaluation of the posterior typically involves the calculation of an intractable normalising constant. This problem has received much attention, but very little of it has focussed on the important practical case where the data consist of noisy or incomplete observations of the underlying hidden structure. This paper specifically addresses this problem, comparing two alternative methodologies. In the first of these approaches, particle Markov chain Monte Carlo (Andrieu et al., 2010) is used to efficiently explore the parameter space, combined with the exchange algorithm (Murray et al., 2006) for avoiding the calculation of the intractable normalising constant (a proof that this combination targets the correct distribution is found in a supplementary appendix online). This approach is compared with approximate Bayesian computation (Pritchard et al., 1999). Applications to estimating the parameters of Ising models and exponential random graphs from noisy data are presented. Each algorithm used in the paper targets an approximation to the true posterior, because MCMC is used to simulate from the latent graphical model, which cannot in general be done exactly. The supplementary appendix also describes the nature of the resulting approximation.
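The exchange algorithm mentioned above can be illustrated on a toy model. The sketch below is not the paper's code: it uses a 1-D Ising chain small enough to enumerate, so the auxiliary draw the algorithm requires can be made exactly, and it assumes a standard normal prior on the coupling parameter. The key point is that the intractable normalising constant cancels in the acceptance ratio.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 8  # spins in a small 1-D Ising chain (2^8 = 256 states, enumerable)

# All configurations, spins in {-1, +1}, and their sufficient statistic
states = np.array(list(itertools.product([-1, 1], repeat=n)))
suff = np.sum(states[:, :-1] * states[:, 1:], axis=1)

def exact_sample(theta):
    """Draw one configuration exactly from p(x|theta) by enumeration."""
    w = np.exp(theta * suff)
    return rng.choice(len(states), p=w / w.sum())

# Observed data: one configuration simulated at an assumed "true" parameter
theta_true = 0.5
s_obs = suff[exact_sample(theta_true)]

# Exchange algorithm with a N(0, 1) prior on theta: an auxiliary dataset y
# is drawn at the proposed parameter, so the normalising constants cancel.
theta, chain = 0.0, []
for _ in range(2000):
    theta_prop = theta + rng.normal(0, 0.5)
    s_aux = suff[exact_sample(theta_prop)]   # auxiliary draw at theta_prop
    # log acceptance ratio: unnormalised likelihoods with roles swapped,
    # plus the log prior ratio for the N(0, 1) prior
    log_alpha = ((theta_prop - theta) * (s_obs - s_aux)
                 + 0.5 * (theta**2 - theta_prop**2))
    if np.log(rng.uniform()) < log_alpha:
        theta = theta_prop
    chain.append(theta)

print("posterior mean:", np.mean(chain[500:]))
```

In the paper's setting the observations are noisy, so this exact-likelihood sketch corresponds only to the inner "ideal" step; the particle MCMC machinery handles the latent structure.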
Abstract:
Particulate matter generated during the cooking process has been identified as one of the major problems of indoor air quality and indoor environmental health. Reliable assessment of exposure to cooking-generated particles requires accurate information on emission characteristics, especially the size distribution. This study characterizes the volume/mass-based size distribution of the fume particles at the oil-heating stage for typical Chinese-style cooking in a laboratory kitchen. A laser-diffraction size analyzer is applied to measure the volume frequency of fume particles ranging from 0.1 to 10 μm, which contribute most of the mass in PM2.5 and PM10. Measurements show that particle emissions have little dependence on the type of vegetable oil used but have a close relationship with the heating temperature. It is found that the volume frequency of fume particles in the range of 1.0–4.0 μm accounts for nearly 100% of PM0.1–10, with a mode diameter of 2.7 μm, a median diameter of 2.6 μm, a Sauter mean diameter of 3.0 μm, a De Brouckere mean diameter of 3.2 μm, and a distribution span of 0.48. Such information on emission characteristics can be used to improve the assessment of indoor air quality due to PM0.1–10 in kitchens and residential flats.
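The diameter statistics reported above (median, Sauter mean D[3,2], De Brouckere mean D[4,3], span) all follow from the measured volume-frequency distribution. A minimal sketch with made-up illustrative bin values (not the study's data):

```python
import numpy as np

# Hypothetical volume-frequency distribution: bin-centre diameters (um)
# and the volume fraction observed in each bin (illustrative values only).
d = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
v = np.array([0.02, 0.10, 0.25, 0.30, 0.20, 0.10, 0.03])
v = v / v.sum()

# De Brouckere (volume-weighted) mean: D[4,3] = sum(v*d) / sum(v)
d43 = np.sum(v * d) / np.sum(v)

# Sauter (surface-weighted) mean: D[3,2] = sum(v) / sum(v/d)
d32 = np.sum(v) / np.sum(v / d)

# Percentiles of the cumulative volume distribution give the span
cum = np.cumsum(v)
d10, d50, d90 = np.interp([0.10, 0.50, 0.90], cum, d)
span = (d90 - d10) / d50

print(f"D[4,3]={d43:.2f} um  D[3,2]={d32:.2f} um  "
      f"D50={d50:.2f} um  span={span:.2f}")
```

For any positive distribution the volume-weighted mean D[4,3] is at least as large as the surface-weighted D[3,2], consistent with the 3.2 μm versus 3.0 μm values reported.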
Abstract:
This work reports the ligational behavior of the neutral bidentate chelating molecule 2-(3,5-dimethylpyrazol-1-yl)benzothiazole (L) towards the oxomolybdenum(V) center. Both mononuclear Mo(V) complexes of the type MoOX3L and binuclear complexes of the formula Mo2O4X2L2 (where X = Cl, Br) are isolated in the solid state. The complexes are characterized by elemental analyses, various spectroscopic techniques (UV-Vis, IR), magnetic susceptibility measurements at room temperature, and cyclic voltammetry for their redox behavior at a platinum electrode in CH3CN. The mononuclear complexes MoOX3L are found to be paramagnetic while the binuclear complexes Mo2O4X2L2 are diamagnetic. The crystal and molecular structures of the ligand and of the dioxomolybdenum(VI) complex MoO2Br2L (obtained from the complex MoOBr3L during crystallization) have been solved by the single-crystal X-ray diffraction technique. Relevant DFT calculations on the ligand and the complex MoO2Br2L are also carried out.
Abstract:
Bertolt Brecht's dramaturgy was as influential upon the development of British drama on television between the 1950s and the 1970s as it was in the theatre. His influence was made manifest through the work of writers, directors and producers such as Tony Garnett, Ken Loach, John McGrath and Dennis Potter, whose attempts to create original Brechtian forms of television drama were reflected in the frequent reference to Brecht in contemporary debate concerning the political and aesthetic direction and value of television drama. While this discussion has been framed thus far around how Brechtian techniques and theory were applied to the newer medium of television, this article examines these arguments from another perspective. Through detailed analysis of a 1964 BBC production of The Life of Galileo, I assess how the primary, canonical sources of Brecht's stage plays were realised on television during this period, locating Brecht's drama in the wider context of British television drama in general during the 1960s and 1970s. I pay particular attention to the use of the television studio as a site that could replicate or reinvent the theatrical space of the stage, and the responsiveness of the television audience towards Brechtian dramaturgy.
Abstract:
In this paper, we examine the temporal stability of the evidence for two commodity futures pricing theories. We investigate whether the forecast power of commodity futures can be attributed to the extent to which they exhibit seasonality, and we also consider whether there are time-varying parameters or structural breaks in these pricing relationships. Compared to previous studies, we find stronger evidence of seasonality in the basis, which supports the theory of storage. The power of the basis to forecast subsequent price changes is also strengthened, while results on the presence of a risk premium are inconclusive. In addition, we show that the forecasting power of commodity futures cannot be attributed to the extent to which they exhibit seasonality. We find that in most cases where structural breaks occur, only changes in the intercepts and not the slopes are detected, illustrating that the forecast power of the basis is stable over different economic environments.
Abstract:
We give a characterisation of the spectral properties of linear differential operators with constant coefficients, acting on functions defined on a bounded interval, and determined by general linear boundary conditions. The boundary conditions may be such that the resulting operator is not self-adjoint. We associate the spectral properties of such an operator $S$ with the properties of the solution of a corresponding boundary value problem for the partial differential equation $\partial_t q \pm iSq=0$. Namely, we are able to establish an explicit correspondence between the properties of the family of eigenfunctions of the operator, and in particular whether this family is a basis, and the existence and properties of the unique solution of the associated boundary value problem. When such a unique solution exists, we consider its representation as a complex contour integral that is obtained using a transform method recently proposed by Fokas and one of the authors. The analyticity properties of the integrand in this representation are crucial for studying the spectral theory of the associated operator.
Abstract:
Many physical systems exhibit dynamics with vastly different time scales. Often the different motions interact only weakly and the slow dynamics is naturally constrained to a subspace of phase space, in the vicinity of a slow manifold. In geophysical fluid dynamics this reduction in phase space is called balance. Classically, balance is understood by way of the Rossby number R or the Froude number F; either R ≪ 1 or F ≪ 1. We examined the shallow-water equations and Boussinesq equations on an f-plane and determined a dimensionless parameter ε, small values of which imply a time-scale separation. In terms of R and F, ε = RF/√(R^2 + F^2). We then developed a unified theory of (extratropical) balance based on ε that includes all cases of small R and/or small F. The leading-order systems are ensured to be Hamiltonian and turn out to be governed by the quasi-geostrophic potential-vorticity equation. However, the height field is not necessarily in geostrophic balance, so the leading-order dynamics are more general than in quasi-geostrophy. Thus the quasi-geostrophic potential-vorticity equation (as distinct from the quasi-geostrophic dynamics) is valid more generally than its traditional derivation would suggest. In the case of the Boussinesq equations, we found that balanced dynamics generally implies hydrostatic balance without any assumption on the aspect ratio; only when the Froude number is not small, and it is the Rossby number that guarantees a time-scale separation, must we impose the requirement of a small aspect ratio to ensure hydrostatic balance.
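The limiting behaviour of the combined parameter ε = RF/√(R^2 + F^2) is easy to check numerically: ε reduces to R when F is order one and to F when R is order one, so either small number alone guarantees a time-scale separation. A quick sketch (the function name is ours):

```python
import math

def epsilon(R, F):
    """Time-scale separation parameter: eps = R*F / sqrt(R^2 + F^2)."""
    return R * F / math.sqrt(R**2 + F**2)

# eps ~ R when F = O(1) (quasi-geostrophic limit), eps ~ F when R = O(1)
print(epsilon(0.01, 1.0))   # close to R = 0.01
print(epsilon(1.0, 0.01))   # close to F = 0.01
print(epsilon(1.0, 1.0))    # neither small: eps = 1/sqrt(2)
```

Note that ε is symmetric in R and F, which is what allows the single parameter to unify the small-R and small-F balance regimes.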
Abstract:
We discuss the modeling of dielectric responses of electromagnetically excited networks which are composed of a mixture of capacitors and resistors. Such networks can be employed as lumped-parameter circuits to model the response of composite materials containing conductive and insulating grains. The dynamics of the excited network systems are studied using a state space model derived from a randomized incidence matrix. Time and frequency domain responses from synthetic data sets generated from state space models are analyzed for the purpose of estimating the fraction of capacitors in the network. Good results were obtained by using either the time-domain response to a pulse excitation or impedance data at selected frequencies. A chemometric framework based on a Successive Projections Algorithm (SPA) enables the construction of multiple linear regression (MLR) models which can efficiently determine the ratio of conductive to insulating components in composite material samples. The proposed method avoids restrictions commonly associated with Archie’s law, the application of percolation theory or Kohlrausch-Williams-Watts models and is applicable to experimental results generated by either time domain transient spectrometers or continuous-wave instruments. Furthermore, it is quite generic and applicable to tomography, acoustics as well as other spectroscopies such as nuclear magnetic resonance, electron paramagnetic resonance and, therefore, should be of general interest across the dielectrics community.
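The SPA-then-MLR pipeline described above can be sketched on synthetic data. This is the basic greedy-projection form of SPA only, with invented data standing in for the network responses; the chemometric workflow in the paper would also use validation error to choose the number of selected variables.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for impedance data: rows = samples, columns = the
# response at different frequencies; y is the property to be regressed.
X = rng.normal(size=(60, 20))
y = 0.6 * X[:, 3] - 0.4 * X[:, 11] + 0.05 * rng.normal(size=60)

def spa_select(X, n_vars, start=0):
    """Successive Projections Algorithm (basic form): greedily pick the
    column with the largest norm after projecting out those already chosen,
    which minimises collinearity among the selected variables."""
    Xp = X.copy()
    chosen = [start]
    for _ in range(n_vars - 1):
        v = Xp[:, chosen[-1]]
        # project every column onto the orthogonal complement of v
        Xp = Xp - np.outer(v, v @ Xp) / (v @ v)
        norms = np.linalg.norm(Xp, axis=0)
        norms[chosen] = -1.0          # never re-select a chosen column
        chosen.append(int(np.argmax(norms)))
    return chosen

# MLR on the SPA-selected columns (with an intercept term)
sel = spa_select(X, 3)
A = np.column_stack([np.ones(len(y))] + [X[:, j] for j in sel])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("selected columns:", sel)
```

Because SPA is driven purely by collinearity among the predictor columns, coupling it with a prediction-error criterion (as chemometric practice does) is what ties the selection to the property of interest.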
Abstract:
Objective To determine the prevalence and nature of prescribing and monitoring errors in general practices in England. Design Retrospective case note review of unique medication items prescribed over a 12 month period to a 2% random sample of patients. Mixed effects logistic regression was used to analyse the data. Setting Fifteen general practices across three primary care trusts in England. Data sources Examination of 6048 unique prescription items prescribed over the previous 12 months for 1777 patients. Main outcome measures Prevalence of prescribing and monitoring errors, and severity of errors, using validated definitions. Results Prescribing and/or monitoring errors were detected in 4.9% (296/6048) of all prescription items (95% confidence interval 4.4% to 5.5%). The vast majority of errors were of mild to moderate severity, with 0.2% (11/6048) of items having a severe error. After adjusting for covariates, patient-related factors associated with an increased risk of prescribing and/or monitoring errors were: age under 15 years (odds ratio (OR) 1.87, 1.19 to 2.94, p=0.006) or over 64 years (OR 1.68, 1.04 to 2.73, p=0.035), and higher numbers of unique medication items prescribed (OR 1.16, 1.12 to 1.19, p<0.001). Conclusion Prescribing and monitoring errors are common in English general practice, although severe errors are unusual. Many factors increase the risk of error. Having identified the most common and important errors, and the factors associated with these, strategies to prevent future errors should be developed based on the study findings.
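The headline prevalence figure can be reproduced approximately from the raw counts with a simple normal-approximation confidence interval. The study's reported interval comes from the mixed effects model, so this back-of-envelope check is only indicative, but it lands very close:

```python
import math

# Raw counts from the abstract: 296 items with errors out of 6048
errors, items = 296, 6048
p = errors / items

# Normal-approximation (Wald) 95% confidence interval for a proportion
se = math.sqrt(p * (1 - p) / items)
lo, hi = p - 1.96 * se, p + 1.96 * se

print(f"prevalence {p:.1%}, approximate 95% CI ({lo:.1%}, {hi:.1%})")
```

The Wald interval here spans roughly 4.4% to 5.4%, against the 4.4% to 5.5% reported; model-based intervals widen slightly because they account for clustering of patients within practices.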
Abstract:
Traditional derivations of available potential energy, in a variety of contexts, involve combining some form of mass conservation together with energy conservation. This raises the questions of why such constructions are required in the first place, and whether there is some general method of deriving the available potential energy for an arbitrary fluid system. By appealing to the underlying Hamiltonian structure of geophysical fluid dynamics, it becomes clear why energy conservation is not enough, and why other conservation laws such as mass conservation need to be incorporated in order to construct an invariant, known as the pseudoenergy, that is a positive‐definite functional of disturbance quantities. The available potential energy is just the non‐kinetic part of the pseudoenergy, the construction of which follows a well defined algorithm. Two notable features of the available potential energy defined thereby are first, that it is a locally defined quantity, and second, that it is inherently definable at finite amplitude (though one may of course always take the small‐amplitude limit if this is appropriate). The general theory is made concrete by systematic derivations of available potential energy in a number of different contexts. All the well known expressions are recovered, and some new expressions are obtained. The possibility of generalizing the concept of available potential energy to dynamically stable basic flows (as opposed to statically stable basic states) is also discussed.