Abstract:
A narrow absorption feature in an atomic or molecular gas (such as iodine or methane) is used as the frequency reference in many stabilized lasers. As part of the stabilization scheme, an optical frequency dither is applied to the laser. In optical heterodyne experiments this dither is transferred to the RF beat signal, reducing its spectral power density and hence the signal-to-noise ratio relative to the dither-free case. We removed the dither by mixing the raw beat signal with a dithered local-oscillator signal. When the dither waveform is matched to that of the reference laser, the output signal from the mixer is rendered dither-free. Applying this method to a Winters iodine-stabilized helium-neon laser reduced the bandwidth of the beat signal from 6 MHz to 390 kHz, thereby lowering the detection threshold from 5 pW of laser power to 3 pW. In addition, a simple signal-detection model is developed which predicts similar threshold reductions.
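The cancellation idea can be sketched numerically: if the local oscillator carries a phase-modulation waveform matched to the beat signal's dither, the common modulation cancels exactly in the mixer product, leaving a pure difference-frequency tone. All frequencies and the modulation index below are illustrative, not the experimental values.

```python
import cmath
import math

def dithered_beat(fb, fd, m, t):
    """Complex RF beat signal at carrier fb whose phase carries a
    sinusoidal dither at frequency fd with modulation index m."""
    return cmath.exp(1j * (2 * math.pi * fb * t + m * math.sin(2 * math.pi * fd * t)))

def mix_with_matched_lo(fb, flo, fd, m, times):
    """Mix the dithered beat with a local oscillator carrying the same
    dither waveform: the common phase modulation cancels in the product,
    leaving a pure tone at the difference frequency fb - flo."""
    out = []
    for t in times:
        lo = cmath.exp(-1j * (2 * math.pi * flo * t + m * math.sin(2 * math.pi * fd * t)))
        out.append(dithered_beat(fb, fd, m, t) * lo)
    return out
```

A constant instantaneous phase increment between successive samples of the mixer output confirms that the dither has been removed.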
Abstract:
Intracellular Wolbachia infections are extremely common in arthropods and exert profound control over the reproductive biology of the host. However, very little is known about the underlying molecular mechanisms that mediate these interactions with the host. We examined protein synthesis by Wolbachia in a Drosophila host in vivo by selective metabolic labelling of prokaryotic proteins and subsequent analysis by 1D and 2D gel electrophoresis. Using this method we could identify the major proteins synthesized by Wolbachia in the ovaries and testes of flies. Of these proteins, the most abundant was of low molecular weight and showed size variation between Wolbachia strains that correlated with the reproductive phenotype they generated in flies. Using the gel systems we employed, it was not possible to identify any proteins of Wolbachia origin in the mature sperm cells of infected flies.
Abstract:
Clifford Geertz was best known for his pioneering excursions into symbolic or interpretive anthropology, especially in relation to Indonesia. Less well recognised are his stimulating explorations of the modern economic history of Indonesia. His thinking on the interplay of economics and culture was most fully and vigorously expounded in Agricultural Involution. That book deployed a succinctly packaged past in order to solve a pressing contemporary puzzle: Java's enduring rural poverty and apparent social immobility. Initially greeted with acclaim, the book later, and ironically, stimulated the deep and multi-layered research that in fact led to the eventual rejection of Geertz's central contentions. But the veracity or otherwise of Geertz's inventive characterisation of Indonesian economic development now seems irrelevant; what is profoundly important is the extraordinary stimulus he gave to a generation of scholars to explore Indonesia's modern economic history with a depth and intensity previously unimaginable.
Abstract:
Philosophers expend considerable effort on the analysis of concepts, but the value of such work is not widely appreciated. This paper principally analyses some arguments, beliefs, and presuppositions about the nature of design and the relations between design and science common in the literature to illustrate this point, and to contribute to the foundations of design theory.
Abstract:
The discussion about relations between research and design has a number of strands, and presumably motivations. Putting aside the question whether or not design or “creative endeavour” should be counted as research, for reasons to do with institutional recognition or reward, the question remains how, if at all, is design research? This question is unlikely to have attracted much interest but for matters external to Architecture within the modern university. But Architecture as a discipline now needs to understand research much better than in the past when ‘research’ was whatever went on in building science, history or people/environment studies. In this paper, I begin with some common assumptions about design, considered in relation to research, and suggest how the former can constitute or be a mode of the latter. Central to this consideration is an understanding of research as the production of publicly available knowledge. The method is that of conceptual analysis which is much more fruitful than is usually appreciated. This work is part of a larger project in philosophy of design, in roughly the analytical tradition.
Abstract:
In this review we demonstrate how the algebraic Bethe ansatz is used for the calculation of energy spectra and form factors (operator matrix elements in the basis of Hamiltonian eigenstates) in exactly solvable quantum systems. As examples we apply the theory to several models of current interest in the study of Bose-Einstein condensates, which have been successfully created using ultracold dilute atomic gases. The first model we introduce describes Josephson tunnelling between two coupled Bose-Einstein condensates. It can be used not only for the study of tunnelling between condensates of atomic gases, but also for solid-state Josephson junctions and coupled Cooper-pair boxes. The theory is also applicable to models of atomic-molecular Bose-Einstein condensates, with two examples given and analysed. Additionally, these same two models are relevant to studies in quantum optics. Finally, we discuss the model of Bardeen, Cooper and Schrieffer in this framework, which is appropriate for systems of ultracold fermionic atomic gases, as well as being applicable to the description of superconducting correlations in metallic grains with nanoscale dimensions. In applying all the above models to physical situations, the need for an exact analysis of small-scale systems is established due to large quantum fluctuations which render mean-field approaches inaccurate.
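As a minimal illustration of the first model, the two-condensate Josephson system can be written as a two-site boson Hamiltonian and, for small particle numbers, diagonalized exactly by brute force rather than by the Bethe ansatz. The Hamiltonian form and parameter names below are the standard canonical ones, assumed for illustration rather than taken from the review.

```python
import numpy as np

def two_mode_hamiltonian(N, U, EJ):
    """Matrix of H = U*(N1 - N2)^2 - (EJ/2)*(a1† a2 + a2† a1) in the
    Fock basis |n, N-n>, n = 0..N: U is an interaction (charging)
    strength and EJ a tunnelling amplitude (illustrative names)."""
    dim = N + 1
    H = np.zeros((dim, dim))
    for n in range(dim):
        H[n, n] = U * (2 * n - N) ** 2          # interaction term
        if n < N:
            # <n+1| a1† a2 |n> = sqrt((n+1)(N-n)) tunnelling element
            t = -(EJ / 2) * np.sqrt((n + 1) * (N - n))
            H[n, n + 1] = H[n + 1, n] = t
    return H

# Exact spectrum for a small condensate: N+1 levels from one dense solve.
energies = np.linalg.eigvalsh(two_mode_hamiltonian(N=20, U=0.05, EJ=1.0))
```

For such small Hilbert spaces the dense solve is exact, which is precisely the regime where, as the review notes, mean-field approaches become inaccurate.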
Abstract:
A graph H is said to divide a graph G if there exists a set S of subgraphs of G, all isomorphic to H, such that the edge set of G is partitioned by the edge sets of the subgraphs in S. Thus, a graph G is a common multiple of two graphs if each of the two graphs divides G.
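The definition can be sketched as a brute-force checker for tiny graphs (an exponential search, purely illustrative). For example, the two-edge path P3 divides the four-cycle C4, but cannot divide the triangle K3, whose edge count is not a multiple of two.

```python
from itertools import combinations, permutations

def is_isomorphic(edges_a, edges_b):
    """Brute-force graph isomorphism for tiny edge lists."""
    va = sorted({v for e in edges_a for v in e})
    vb = sorted({v for e in edges_b for v in e})
    if len(va) != len(vb) or len(edges_a) != len(edges_b):
        return False
    target = {frozenset(e) for e in edges_b}
    for perm in permutations(vb):
        m = dict(zip(va, perm))
        if {frozenset((m[u], m[v])) for u, v in edges_a} == target:
            return True
    return False

def divides(h_edges, g_edges):
    """Return True if H divides G, i.e. E(G) can be partitioned into
    edge sets of subgraphs each isomorphic to H (small inputs only)."""
    if not g_edges:
        return True
    k = len(h_edges)
    if len(g_edges) % k:
        return False
    first, rest = g_edges[0], list(g_edges[1:])
    for combo in combinations(rest, k - 1):
        if is_isomorphic([first, *combo], h_edges):
            remaining = [e for e in rest if e not in combo]
            if divides(h_edges, remaining):
                return True
    return False

P3 = [(0, 1), (1, 2)]                  # path with two edges
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]  # four-cycle
```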
Abstract:
In this paper, we propose a fast adaptive importance sampling method for the efficient simulation of buffer overflow probabilities in queueing networks. The method comprises three stages. First, we estimate the minimum cross-entropy tilting parameter for a small buffer level; next, we use this as a starting value for the estimation of the optimal tilting parameter for the actual (large) buffer level. Finally, the tilting parameter just found is used to estimate the overflow probability of interest. We study various properties of the method in more detail for the M/M/1 queue and conjecture that similar properties also hold for quite general queueing networks. Numerical results support this conjecture and demonstrate the high efficiency of the proposed algorithm.
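A minimal sketch of the underlying idea, importance sampling by exponential tilting for the embedded M/M/1 random walk, is given below. It uses the classic static change of measure that swaps arrival and service rates, standing in for the adaptive cross-entropy stages described above; the estimate can be checked against the gambler's-ruin formula for the probability that the queue, starting at level 1, reaches buffer level L before emptying.

```python
import random

def overflow_prob_is(lam, mu, L, n_runs=20000, seed=1):
    """Importance-sampling estimate of P(queue hits L before 0 | start 1)
    for the embedded M/M/1 walk, simulating under swapped rates and
    reweighting each path by its likelihood ratio."""
    rng = random.Random(seed)
    p = lam / (lam + mu)        # up-step probability, original measure
    pt = mu / (lam + mu)        # up-step probability, tilted measure
    total = 0.0
    for _ in range(n_runs):
        x, w = 1, 1.0
        while 0 < x < L:
            if rng.random() < pt:
                w *= p / pt                 # up step
                x += 1
            else:
                w *= (1 - p) / (1 - pt)     # down step
                x -= 1
        if x == L:
            total += w
    return total / n_runs

def overflow_prob_exact(lam, mu, L):
    """Gambler's-ruin benchmark: P_1 = (1 - r) / (1 - r^L), r = mu/lam."""
    r = mu / lam
    return (1 - r) / (1 - r ** L)
```

With this tilting every successful path carries the same likelihood-ratio weight (lam/mu)^(L-1), which is why the estimator's relative error stays small even for very rare overflows.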
Abstract:
The Coefficient of Variation (CV; the standard deviation of response times divided by the mean response time) is a measure of response-time variability that corrects for differences in mean response time (RT) (Segalowitz & Segalowitz, 1993). A positive correlation between decreasing mean RTs and CVs (rCV-RT) has been proposed as an indicator of L2 automaticity and, more generally, as an index of processing efficiency. The current study evaluates this claim by examining lexical decision performance by individuals from three levels of English proficiency (Intermediate ESL, Advanced ESL and L1 controls) on stimuli from four levels of item familiarity, as defined by frequency of occurrence. A three-phase model of skill development defined by changing rCV-RT values was tested. Results showed that RTs and CVs systematically decreased as a function of increasing proficiency and frequency levels, with the rCV-RT serving as a stable indicator of individual differences in lexical decision performance. The rCV-RT and automaticity/restructuring account is discussed in light of the findings. The CV is also evaluated as a more general quantitative index of processing efficiency in the L2.
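The two quantities involved can be computed directly; the sketch below shows the per-participant CV and the rCV-RT index as a plain Pearson correlation between mean RT and CV across participants (the data layout is purely illustrative).

```python
import statistics

def cv(rts):
    """Coefficient of variation: the standard deviation of one
    participant's response times divided by their mean RT."""
    return statistics.stdev(rts) / statistics.mean(rts)

def r_cv_rt(rt_lists):
    """Pearson correlation between mean RT and CV across participants
    (the rCV-RT index described in the text)."""
    means = [statistics.mean(r) for r in rt_lists]
    cvs = [cv(r) for r in rt_lists]
    mx, my = statistics.mean(means), statistics.mean(cvs)
    sxy = sum((a - mx) * (b - my) for a, b in zip(means, cvs))
    sxx = sum((a - mx) ** 2 for a in means)
    syy = sum((b - my) ** 2 for b in cvs)
    return sxy / (sxx * syy) ** 0.5
```

A positive rCV-RT here means that participants who are faster on average are also proportionally less variable, the pattern the abstract links to automaticity.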
Abstract:
A modified formula for the integral transform of a nonlinear function is proposed for a class of nonlinear boundary value problems. The technique presented in this paper results in analytical solutions. Iterations and initial guess, which are needed in other techniques, are not required in this novel technique. The analytical solutions are found to agree surprisingly well with the numerically exact solutions for two examples of power law reaction and Langmuir-Hinshelwood reaction in a catalyst pellet.
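The "numerically exact" benchmark for such boundary value problems can be obtained by shooting with bisection. The sketch below assumes a simple Langmuir-Hinshelwood-type rate of the form y/(1 + beta*y), which may differ from the paper's exact kinetics; for beta = 0 it reduces to the linear case with the closed-form solution y = cosh(phi*x)/cosh(phi).

```python
import math

def solve_pellet(phi2, beta, n=1000, iters=60):
    """Shooting + RK4 for the catalyst-pellet BVP
        y'' = phi2 * y / (1 + beta*y),  y'(0) = 0,  y(1) = 1,
    returning y(0), the dimensionless centre concentration.
    The rate form is an assumed Langmuir-Hinshelwood-type stand-in."""
    f = lambda y: phi2 * y / (1.0 + beta * y)
    h = 1.0 / n

    def shoot(a):
        y, v = a, 0.0
        for _ in range(n):
            # RK4 step for the first-order system y' = v, v' = f(y)
            k1y, k1v = v, f(y)
            k2y, k2v = v + h / 2 * k1v, f(y + h / 2 * k1y)
            k3y, k3v = v + h / 2 * k2v, f(y + h / 2 * k2y)
            k4y, k4v = v + h * k3v, f(y + h * k3y)
            y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
            v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        return y

    lo, hi = 0.0, 1.0           # y(1) grows monotonically with y(0)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if shoot(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Saturation (beta > 0) weakens the reaction, so the centre concentration rises relative to the linear case, which gives a quick sanity check on the solver.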
Abstract:
The Direct Simulation Monte Carlo (DSMC) method is used to simulate the flow of rarefied gases. In the Macroscopic Chemistry Method (MCM) for DSMC, chemical reaction rates calculated from local macroscopic flow properties are enforced in each cell. Unlike the standard total collision energy (TCE) chemistry model for DSMC, the new method is not restricted to an Arrhenius form of the reaction rate coefficient, nor is it restricted to a collision cross-section which yields a simple power-law viscosity. For reaction rates of interest in aerospace applications, chemically reacting collisions are generally infrequent events and, as such, local equilibrium conditions are established before a significant number of chemical reactions occur. Hence, the reaction rates which have been used in MCM have been calculated from the reaction rate data which are expected to be correct only for conditions of thermal equilibrium. Here we consider artificially high reaction rates so that the fraction of reacting collisions is not small and propose a simple method of estimating the rates of chemical reactions which can be used in the Macroscopic Chemistry Method in both equilibrium and non-equilibrium conditions. Two tests are presented: (1) The dissociation rates under conditions of thermal non-equilibrium are determined from a zero-dimensional Monte-Carlo sampling procedure which simulates ‘intra-modal’ non-equilibrium; that is, equilibrium distributions in each of the translational, rotational and vibrational modes but with different temperatures for each mode; (2) The 2-D hypersonic flow of molecular oxygen over a vertical plate at Mach 30 is calculated. In both cases the new method produces results in close agreement with those given by the standard TCE model in the same highly nonequilibrium conditions. 
We conclude that the general method of estimating the non-equilibrium reaction rate is a simple means by which information contained within non-equilibrium distribution functions predicted by the DSMC method can be included in the Macroscopic Chemistry Method.
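One piece of the bookkeeping described above, converting a macroscopic rate coefficient into a per-cell number of reaction events to enforce each timestep, can be sketched as follows. This is a schematic stand-in for the Macroscopic Chemistry Method's enforcement step, not the published algorithm, and all parameter values are illustrative.

```python
import math
import random

def reactions_to_enforce(kf, n_a, n_b, volume, dt, rng=random.random):
    """Number of A+B reaction events to enforce in one DSMC cell over a
    timestep dt, given a macroscopic rate coefficient kf [m^3/s] and
    number densities n_a, n_b [1/m^3]. The fractional part of the
    expectation is rounded stochastically so the long-run average
    equals the target rate."""
    expected = kf * n_a * n_b * volume * dt
    base = math.floor(expected)
    return base + (1 if rng() < expected - base else 0)
```

Stochastic rounding matters here because the expected number of reacting collisions per cell per step is often well below one.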
Abstract:
The restructuring of power industries has brought fundamental changes to both power system operation and planning. This paper presents a new planning method that uses a multi-objective optimization (MOOP) technique, as well as human knowledge, to expand the transmission network in open-access schemes. The method starts with a candidate pool of feasible expansion plans. Subsequent selection of the best candidates is carried out through a MOOP approach, in which multiple objectives are tackled simultaneously, aiming to integrate market operation and planning as one unified process in the context of a deregulated system. Human knowledge is applied in both stages to ensure that selection reflects practical engineering and management concerns. The expansion plan from MOOP is assessed against reliability criteria before it is finalized. The proposed method has been tested on the IEEE 14-bus system, and relevant analyses and discussions are presented.
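The MOOP selection stage can be illustrated by a plain Pareto-dominance filter over a candidate pool: a plan survives only if no other plan is at least as good on every objective and strictly better on one. The plan fields and objective functions below are invented for illustration, not taken from the paper.

```python
def pareto_front(plans, objectives):
    """Keep the non-dominated candidate plans under several objectives,
    all to be minimised simultaneously."""
    scores = [[f(p) for f in objectives] for p in plans]

    def dominated(i):
        return any(
            all(a <= b for a, b in zip(scores[j], scores[i])) and
            any(a < b for a, b in zip(scores[j], scores[i]))
            for j in range(len(plans)) if j != i)

    return [p for i, p in enumerate(plans) if not dominated(i)]

# Hypothetical objectives: investment cost vs. expected energy not served.
plans = [{"cost": 10, "eens": 5}, {"cost": 12, "eens": 3}, {"cost": 15, "eens": 6}]
front = pareto_front(plans, [lambda p: p["cost"], lambda p: p["eens"]])
```

Here the third plan is dominated by the first (more costly and less reliable), so only the first two remain as trade-off candidates for the later reliability assessment.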
Abstract:
Modeling volcanic phenomena is complicated by free surfaces that often support large rheological gradients. Analytical solutions and analogue models provide explanations for fundamental characteristics of lava flows, but more sophisticated models, incorporating improved physics and rheology, are needed to capture realistic events. To advance our understanding of the flow dynamics of highly viscous lava in Peléean lava dome formation, axisymmetric Finite Element Method (FEM) models of generic endogenous dome growth have been developed. We use a novel technique, the level-set method, which tracks a moving interface while leaving the mesh unaltered. The model equations are formulated in an Eulerian framework. In this paper we test the quality of this technique in our numerical scheme by considering existing analytical and experimental models of lava dome growth which assume a constant Newtonian viscosity. We then compare our model, using an effective viscosity, against analytical solutions for real lava domes extruded on Soufrière, St. Vincent, W.I. in 1979 and Mount St. Helens, USA in October 1980. The level-set method is found to be computationally light and robust enough to model the free surface of a growing lava dome. Also, modeling the extruded lava with a constant pressure head naturally results in a drop in extrusion rate with increasing dome height, which can explain lava dome growth observables more appropriately than a fixed extrusion rate. From the modeling point of view, the level-set method will ultimately provide an opportunity to capture more of the physics while benefiting from the numerical robustness of regular grids.
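The core of the interface-tracking technique can be shown in one dimension: the dome surface is the zero level set of a function phi advected over a fixed grid, so the mesh never moves even as the interface does. The sketch below is a toy analogue of the axisymmetric scheme, using first-order upwind differencing for phi_t + F*phi_x = 0 with a positive interface speed F.

```python
def advect_level_set(phi, F, dx, dt, steps):
    """First-order upwind update of phi_t + F*phi_x = 0 (F > 0) on a
    fixed 1-D grid; the interface is the zero level set of phi."""
    phi = list(phi)
    for _ in range(steps):
        new = phi[:]
        for i in range(1, len(phi)):
            new[i] = phi[i] - F * dt / dx * (phi[i] - phi[i - 1])
        new[0] = 2 * new[1] - new[2]   # linear extrapolation at the inflow end
        phi = new
    return phi

def zero_crossing(phi, dx):
    """Locate the interface by linear interpolation of the sign change."""
    for i in range(1, len(phi)):
        if phi[i - 1] < 0 <= phi[i]:
            return (i - 1) * dx + (-phi[i - 1]) * dx / (phi[i] - phi[i - 1])
    return None
```

Initialising phi as a signed distance to an interface at x = 0.3 and advecting with F = 1 moves the zero crossing to 0.3 + F*t, while the grid itself stays untouched, which is the property the abstract credits for the method's robustness.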