970 results for Electric current measurement
Abstract:
Circuit QED is a promising solid-state quantum computing architecture. It also has excellent potential as a platform for quantum control experiments, especially quantum feedback control. However, the current scheme for measurement in circuit QED has low efficiency and a low signal-to-noise ratio for single-shot measurements. The low quality of this measurement makes the implementation of feedback difficult. Here we propose two schemes for measurement in circuit QED architectures that can significantly improve the signal-to-noise ratio and potentially achieve quantum-limited measurement. Such measurements would enable the implementation of quantum feedback protocols, and we illustrate this with a simple entanglement-stabilization scheme.
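As background only, the figures of merit at stake here can be sketched in one common convention from the dispersive-readout literature (the Hamiltonian, the rates Γ_meas and Γ_φ, and the efficiency bound below are drawn from that literature, not from this abstract):

```latex
% Dispersive coupling: the cavity frequency is pulled by \pm\chi
% depending on the qubit state.
H = \hbar\omega_r\, a^\dagger a + \tfrac{\hbar\omega_q}{2}\,\sigma_z
    + \hbar\chi\, a^\dagger a\, \sigma_z
% Quantum limit on QND qubit readout: the measurement rate cannot
% exceed twice the measurement-induced dephasing rate,
\Gamma_{\mathrm{meas}} \le 2\Gamma_\varphi,
\qquad
\eta \equiv \frac{\Gamma_{\mathrm{meas}}}{2\Gamma_\varphi} \le 1
% "Quantum-limited measurement" corresponds to \eta = 1; the single-shot
% SNR over an integration time t grows roughly as \sqrt{\Gamma_{\mathrm{meas}}\, t}.
```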
Abstract:
This research extends the consumer-based brand equity measurement approach to the measurement of the equity associated with retailers. This paper also addresses some of the limitations of current retailer equity measurement, such as a lack of clarity regarding its nature and dimensionality. We conceptualise retailer equity as a four-dimensional construct comprising retailer awareness, retailer associations, perceived retailer quality, and retailer loyalty. The paper reports the results of an empirical study of a convenience sample of 601 shopping mall consumers in an Australian state capital city. Following a confirmatory factor analysis using structural equation modelling to examine the dimensionality of the retailer equity construct, the proposed model is tested for two retailer categories: department stores and speciality stores. Results confirm the hypothesised four-dimensional structure.
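As an illustration only, a four-factor confirmatory model of this general shape could be specified with the semopy SEM package in Python; the item names (aw1 ... loy3) and the synthetic data frame are hypothetical placeholders, not the study's instrument:

```python
import numpy as np
import pandas as pd
from semopy import Model  # pip install semopy

# Hypothetical item-level data: rows = respondents, columns = Likert items.
rng = np.random.default_rng(0)
items = [f"{d}{i}" for d in ("aw", "as", "qu", "loy") for i in (1, 2, 3)]
df = pd.DataFrame(rng.integers(1, 8, size=(601, 12)), columns=items).astype(float)

# Four-dimensional retailer-equity measurement model (lavaan-style syntax).
desc = """
awareness    =~ aw1 + aw2 + aw3
associations =~ as1 + as2 + as3
quality      =~ qu1 + qu2 + qu3
loyalty      =~ loy1 + loy2 + loy3
"""
model = Model(desc)
model.fit(df)
print(model.inspect())  # loadings, factor covariances, fit information
```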
Abstract:
Few educational campaigns have focused on bowel cancer, though studies have indicated that members of the community need and want current information about relevant issues. In order to facilitate research in this area, reliable and valid measures of community attitudes are needed. Content validity of a survey instrument was obtained through use of a Delphi process with Directors of Education from the Australian Cancer Council and focus group discussions with informed members of the public. The subsequent survey of community perceptions about colorectal cancer included a broad range of content areas related to the risk of bowel cancer, preventing and coping with bowel cancer, and beliefs about susceptibility and severity. The construct validity of these content areas was investigated by use of a factor analysis and confirmation of an association with related predictor variables. Two measures related to personal influence and anticipated coping responses showed favourable psychometric properties, including moderate to high levels of internal consistency and test-retest reliability. A test of the concurrent validity of these measures requires further development of instruments related to colorectal cancer or adaptation of measures from other areas of health research. (C) 2000 Elsevier Science Ltd. All rights reserved.
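Internal consistency of the kind reported here is usually summarised by Cronbach's alpha; a minimal sketch of that computation follows (the response matrix is a made-up placeholder, not the survey's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-item scale answered by 200 respondents.
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))
responses = latent + rng.normal(scale=0.8, size=(200, 5))
print(f"alpha = {cronbach_alpha(responses):.2f}")
```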
Abstract:
This paper evaluates a low-frequency FDTD method applied to the problem of induced E-fields/eddy currents in the human body resulting from the pulsed magnetic field gradients in MRI. In this algorithm, a distributed equivalent magnetic current (DEMC) is proposed as the electromagnetic source and is obtained by quasistatic calculation of the empty coil's vector potential, or from measurements thereof. This technique circumvents the need to discretize complicated gradient coil geometries into a mesh of Yee cells, and thereby enables modeling of any type of gradient coil or other complex low-frequency source. The proposed method has been verified against an example with an analytical solution. Results are presented showing the spatial distribution of gradient-induced electric fields in a multilayered spherical phantom model and in a complete body model.
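Schematically, the quasistatic source calculation referred to here rests on the standard magnetoquasistatic relations (a textbook sketch, not the paper's notation):

```latex
% Vector potential of the empty gradient coil, computed quasistatically
% from its winding current density J:
\mathbf{A}(\mathbf{r},t) = \frac{\mu_0}{4\pi}
  \int \frac{\mathbf{J}(\mathbf{r}',t)}{|\mathbf{r}-\mathbf{r}'|}\, d^3r'
% The induced electric field driving eddy currents in tissue splits into
% an inductive part and a charge-redistribution part:
\mathbf{E} = -\frac{\partial \mathbf{A}}{\partial t} - \nabla\varphi
% The DEMC source is derived from A (or from field measurements), so the
% coil windings themselves never need to be meshed into Yee cells.
```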
Abstract:
The conventional detection scheme for self-mixing sensors uses an integrated photodiode within the laser package to monitor the self-mixing signal. This arrangement can be simplified by obtaining the self-mixing signal directly across the laser diode itself and omitting the photodiode. This work reports on a Vertical-Cavity Surface-Emitting Laser (VCSEL) based self-mixing sensor that uses the laser junction voltage to obtain the self-mixing signal. We show that the same information can be obtained with only minor changes to the extraction circuitry, leading to potential cost savings through reduced component count and complexity, and a significant increase in bandwidth favouring high-speed modulation. Experiments using both photocurrent and voltage detection were carried out, and the results obtained show good agreement with theory.
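A common way to model the self-mixing waveform itself, independently of whether it is read out via the photodiode or the junction voltage, is the steady-state excess-phase equation; the sketch below solves it numerically (the feedback parameter C, linewidth-enhancement factor α, 850 nm wavelength and vibrating-target motion are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import brentq

def excess_phase(phi0, C=0.7, alpha=4.0):
    """Solve phi0 = phiF + C*sin(phiF + arctan(alpha)) for phiF.

    For weak feedback (C < 1) the solution is unique and is bracketed
    within +/- C of phi0, so a root finder converges reliably."""
    f = lambda p: p + C * np.sin(p + np.arctan(alpha)) - phi0
    return brentq(f, phi0 - C - 1e-9, phi0 + C + 1e-9)

lam = 850e-9                                  # assumed VCSEL wavelength
t = np.linspace(0.0, 20e-3, 4000)
L = 0.10 + 2e-6 * np.sin(2 * np.pi * 50 * t)  # target at 10 cm, 2 um vibration
phi0 = 4 * np.pi * L / lam                    # round-trip external phase
signal = np.array([np.cos(excess_phase(p)) for p in phi0])  # SM waveform
```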
Abstract:
Purpose – To investigate the impact of performance measurement in the strategic planning process. Design/methodology/approach – A large-scale survey was conducted online with Warwick Business School alumni. The questionnaire was based on the Strategic Development Process model by Dyson. The questionnaire was designed to map the current practice of strategic planning and to determine its most influential factors on the effectiveness of the process. All questions were closed-ended and a seven-point Likert scale was used. The independent variables were grouped into four meaningful factors by factor analysis (Varimax, coefficient of rotation 0.4). The factors produced were used to build regression models (stepwise) for the five assessments of the strategic planning process. Regression models were developed for the totality of the responses, comparing SMEs and large organizations and comparing organizations operating in slowly and rapidly changing environments. Findings – The results indicate that performance measurement stands as one of the four main factors characterising the current practice of strategic planning. This research has determined that complexity arising from organizational size and the rate of change in the sector creates variation in the impact of performance measurement in strategic planning. Large organizations and organizations operating in rapidly changing environments make greater use of performance measurement. Research limitations/implications – This research is based on subjective data; therefore the conclusions do not concern the impact of the strategic planning process's elements on organizational performance achievements, but rather the success/effectiveness of the strategic planning process itself. Practical implications – This research raises a series of questions about the use and potential impact of performance measurement, especially in the categories of organizations that are not significantly influenced by its utilisation. It contributes to the field of performance measurement impact. Originality/value – This research fills a gap in the literature concerning the lack of large-scale surveys on strategic development processes and performance measurement. It also contributes to the literature of this field by providing empirical evidence on the impact of performance measurement upon the strategic planning process.
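The analysis pipeline described (Varimax factor analysis with a 0.4 loading cutoff, then stepwise regression on the factors) can be sketched as follows; the survey data, item names and number of selected features are placeholders, not the study's data:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer          # pip install factor_analyzer
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

# Hypothetical survey: 7-point Likert responses to 16 items, plus one
# assessment of planning-process effectiveness as the dependent variable.
rng = np.random.default_rng(42)
X = pd.DataFrame(rng.integers(1, 8, size=(250, 16)),
                 columns=[f"q{i+1}" for i in range(16)]).astype(float)
y = rng.normal(size=250)

# Varimax-rotated factor analysis; loadings below 0.4 would be suppressed.
fa = FactorAnalyzer(n_factors=4, rotation="varimax")
fa.fit(X)
loadings = pd.DataFrame(fa.loadings_, index=X.columns)
scores = pd.DataFrame(fa.transform(X), columns=[f"F{i+1}" for i in range(4)])

# Stepwise (forward) selection of factors for one effectiveness assessment.
sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select=2,
                                direction="forward")
sfs.fit(scores, y)
print("selected factors:", list(scores.columns[sfs.get_support()]))
```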
Abstract:
Färe, Grosskopf, Norris and Zhang developed a non-parametric productivity index, the Malmquist index, using data envelopment analysis (DEA). The Malmquist index is a measure of productivity progress (or regress), and it can be decomposed into components such as 'efficiency catch-up' and 'technology change'. However, the Malmquist index and its components are based on two periods of time, and so can capture only part of the impact of investment in long-lived assets. The effects of lags in the investment process on the capital stock have been ignored in the current Malmquist index model. This paper extends the recent dynamic DEA models introduced by Emrouznejad and Thanassoulis and by Emrouznejad to a dynamic Malmquist index. This paper shows that the dynamic productivity results for Organisation for Economic Co-operation and Development countries should reflect reality better than those based on the conventional model.
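For reference, the standard Färe et al. output-distance-function form of the two-period index and its decomposition, which is what the dynamic extension generalises:

```latex
M_o\!\left(x^{t+1},y^{t+1},x^{t},y^{t}\right)
  = \left[
      \frac{D_o^{t}\!\left(x^{t+1},y^{t+1}\right)}{D_o^{t}\!\left(x^{t},y^{t}\right)}
      \cdot
      \frac{D_o^{t+1}\!\left(x^{t+1},y^{t+1}\right)}{D_o^{t+1}\!\left(x^{t},y^{t}\right)}
    \right]^{1/2}
  = \underbrace{\frac{D_o^{t+1}\!\left(x^{t+1},y^{t+1}\right)}
                     {D_o^{t}\!\left(x^{t},y^{t}\right)}}_{\text{efficiency catch-up}}
    \underbrace{\left[
      \frac{D_o^{t}\!\left(x^{t+1},y^{t+1}\right)}{D_o^{t+1}\!\left(x^{t+1},y^{t+1}\right)}
      \cdot
      \frac{D_o^{t}\!\left(x^{t},y^{t}\right)}{D_o^{t+1}\!\left(x^{t},y^{t}\right)}
    \right]^{1/2}}_{\text{technology change}}
```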
Abstract:
This study is concerned with several proposals concerning multiprocessor systems and with the various possible methods of evaluating such proposals. After a discussion of the advantages and disadvantages of several performance evaluation tools, the author concludes that simulation is the only tool powerful enough to develop a model which would be of practical use in the design, comparison and extension of systems. The main aims of the simulation package developed as part of this study are cost effectiveness, ease of use and generality. The methodology on which the simulation package is based is described in detail. The fundamental principles are that model design should reflect actual systems design, that measuring procedures should be carried out alongside design, that models should be well documented and easily adaptable, and that models should be dynamic. The simulation package itself is modular, and in this way reflects current design trends. This approach also aids documentation and ensures that the model is easily adaptable. It contains a skeleton structure and a library of segments which can be added to, or directly swapped with, segments of the skeleton structure to form a model which fits a user's requirements. The study also contains the results of some experimental work carried out using the model, the first part of which tests the model's capabilities by simulating a large operating system, the ICL George 3 system; the second part deals with general questions and some of the many proposals concerning multiprocessor systems.
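The skeleton-plus-segments architecture described here maps naturally onto an event-driven simulation core; a minimal modern sketch of that idea (in Python rather than the thesis's original tooling, with a made-up "CPU segment" as the swappable part):

```python
import heapq

class Simulator:
    """Skeleton: an event list and a clock; 'segments' plug in as handlers."""
    def __init__(self):
        self.clock = 0.0
        self.events = []   # (time, seq, handler, args) min-heap
        self._seq = 0      # tie-breaker so the heap never compares handlers

    def schedule(self, delay, handler, *args):
        heapq.heappush(self.events, (self.clock + delay, self._seq, handler, args))
        self._seq += 1

    def run(self, until):
        while self.events and self.events[0][0] <= until:
            self.clock, _, handler, args = heapq.heappop(self.events)
            handler(self, *args)

# A swappable "segment": a CPU that serves jobs and reschedules itself.
def cpu_segment(sim, job_id):
    print(f"t={sim.clock:6.2f}  CPU finishes job {job_id}")
    sim.schedule(1.5, cpu_segment, job_id + 1)   # next job's service time

sim = Simulator()
sim.schedule(0.0, cpu_segment, 0)
sim.run(until=5.0)
```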
Abstract:
This thesis presents the results of an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action (directly measuring neuronal activity via the resulting magnetic field fluctuations), MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands, the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single-channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas, from financial time series modelling to the analysis of sunspot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
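A core tool behind the dynamical-systems treatment of a single unaveraged channel is time-delay (Takens) embedding; a minimal sketch follows (the lag and dimension are illustrative; in practice they are chosen via, e.g., mutual information and false nearest neighbours, and the signal here is a synthetic stand-in):

```python
import numpy as np

def delay_embed(x, dim=5, tau=10):
    """Map a 1-D series x onto delay vectors (x[t], x[t+tau], ..., x[t+(dim-1)tau]).

    Returns an (N - (dim-1)*tau, dim) array of reconstructed state points."""
    x = np.asarray(x)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

# Illustrative stand-in for a single-channel recording: a noisy oscillation.
t = np.linspace(0, 60, 6000)
x = (np.sin(t) + 0.4 * np.sin(2.3 * t)
     + 0.1 * np.random.default_rng(2).normal(size=t.size))
states = delay_embed(x)
print(states.shape)   # points in the reconstructed state space
```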
Abstract:
The aim of this study was to determine whether an ophthalmophakometric technique could offer a feasible means of investigating ocular component contributions to residual astigmatism in human eyes. Current opinion was gathered on the prevalence, magnitude and source of residual astigmatism. It emerged that a comprehensive evaluation of the astigmatic contributions of the eye's internal ocular surfaces and their respective axial separations (effectivity) had not been carried out to date. An ophthalmophakometric technique was developed to measure astigmatism arising from the internal ocular components. Procedures included the measurement of refractive error (infra-red autorefractometry), anterior corneal surface power (computerised video keratography), axial distances (A-scan ultrasonography) and the powers of the posterior corneal surface and of both surfaces of the crystalline lens (multi-meridional still-flash ophthalmophakometry). Computing schemes were developed to yield the required biometric data. These included (1) calculation of crystalline lens surface powers in the absence of Purkinje images arising from its anterior surface, (2) application of meridional analysis to derive spherocylindrical surface powers from notional powers calculated along four pre-selected meridians, (3) application of astigmatic decomposition and vergence analysis to calculate contributions to residual astigmatism of ocular components with obliquely related cylinder axes, and (4) calculation of the effect of random experimental errors on the calculated ocular component data. A complete set of biometric measurements was taken from both eyes of 66 undergraduate students. Effectivity due to corneal thickness made the smallest cylinder power contribution (up to 0.25 DC) to residual astigmatism, followed by contributions of the anterior chamber depth (up to 0.50 DC) and crystalline lens thickness (up to 1.00 DC). In each case astigmatic contributions were predominantly direct. More astigmatism arose from the posterior corneal surface (up to 1.00 DC) and both crystalline lens surfaces (up to 2.50 DC). The astigmatic contributions of the posterior corneal and lens surfaces were found to be predominantly inverse, whilst direct astigmatism arose from the anterior lens surface. Very similar results were found for right versus left eyes and for males versus females. Repeatability was assessed on 20 individuals. The ophthalmophakometric method was found to be prone to considerable accumulated experimental error. However, these errors are random in nature, so that group-averaged data were found to be reasonably repeatable. A further confirmatory study was carried out on 10 individuals, which demonstrated that biometric measurements made with and without cycloplegia did not differ significantly.
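One standard way to add obliquely crossed astigmatic contributions of the kind described is the power-vector form of astigmatic decomposition; a textbook sketch, not necessarily the thesis's exact computing scheme:

```latex
% A sphero-cylinder S / C \times \theta maps to three orthogonal components
% (spherical equivalent, with/against-the-rule, and oblique astigmatism):
M = S + \tfrac{C}{2}, \qquad
J_0 = -\tfrac{C}{2}\cos 2\theta, \qquad
J_{45} = -\tfrac{C}{2}\sin 2\theta
% Contributions from several surfaces add componentwise,
% (M, J_0, J_{45})_{\text{total}} = \sum_k (M, J_0, J_{45})_k,
% and the resultant cylinder and axis are recovered from
C = -2\sqrt{J_0^{2} + J_{45}^{2}}, \qquad
\theta = \tfrac{1}{2}\arctan\!\left(J_{45}/J_0\right)
```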
Abstract:
A second-harmonic direct current (DC) ripple compensation technique is presented for a multi-phase, fault-tolerant, permanent magnet machine. The analysis has been undertaken in a general manner for any pair of phases in operation with the remaining phases inactive. The compensation technique determines the required alternating currents in the machine to eliminate the second-harmonic DC-link current, while at the same time minimising the total rms current in the windings. An additional benefit of the compensation technique is a reduction in the magnitude of the electromagnetic torque ripple. Practical results are included from a 70 kW, five-phase generator system to validate the analysis and illustrate the performance of the compensation technique.
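Schematically, the optimisation underlying such a technique can be posed as a constrained minimisation; this is a generic sketch of the problem shape, not the paper's derivation. With phases a and b active, the DC-link power is p(t) = e_a i_a + e_b i_b, and one seeks currents that null its second harmonic at minimum copper loss:

```latex
\min_{i_a,\, i_b}\ \frac{1}{T}\int_0^T \left(i_a^2 + i_b^2\right) dt
\quad \text{subject to} \quad
\int_0^T \left(e_a i_a + e_b i_b\right) e^{-j 2\omega t}\, dt = 0
```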
Abstract:
The electrostatic model for osmotic flow across a porous membrane in our previous study (Akinaga et al. 2008) was extended to include the streaming potential, for solutes and pores of like charge and fixed surface charge densities. The magnitude of the streaming potential was determined so as to satisfy the zero-current condition along the pore axis. It was found that the streaming potential affects the velocity profiles of both the pressure-driven flow and the osmotic flow through the pore, and decreases their flow rates, particularly when the Debye length is large relative to the pore radius, whereas it has little effect on the reflection coefficients of spherical solutes in cylindrical pores.
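The zero-current condition can be written schematically as follows (generic electrokinetic bookkeeping for a cylindrical pore of radius a; the notation is ours, not the paper's):

```latex
% Net axial current = convection of space charge + ohmic conduction:
I = 2\pi \int_0^{a} \left[ \rho_e(r)\, u_z(r) + \sigma(r)\, E_z \right] r\, dr = 0
% Solving I = 0 fixes the axial field E_z set up by the streaming
% potential; this field feeds back on u_z(r) as a body force \rho_e E_z,
% reducing both the pressure-driven and the osmotic flow rates.
```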
Abstract:
Purpose – This paper aims to respond to John Rossiter's call for a "Marketing measurement revolution" in the current issue of EJM, as well as to provide broader comment on Rossiter's C-OAR-SE framework and on measurement practice in marketing in general. Design/methodology/approach – The paper is purely theoretical, based on interpretation of measurement theory. Findings – The authors find that much of Rossiter's diagnosis of the problems facing measurement practice in marketing and social science is highly relevant. However, the authors find themselves opposed to the revolution advocated by Rossiter. Research limitations/implications – The paper presents a comment based on interpretation of measurement theory and observation of practices in marketing and social science. As such, the interpretation is itself open to disagreement. Practical implications – There are implications for those outside academia who wish to use measures derived from academic work, as well as to derive their own measures of key marketing and other social variables. Originality/value – This paper is one of the few to explicitly respond to the C-OAR-SE framework proposed by Rossiter, and it presents a number of points critical to good measurement theory and practice which appear to remain underdeveloped in marketing and social science.
Abstract:
For a Switched Reluctance Motor (SRM), the flux linkage characteristic is the most basic magnetic characteristic, and many other quantities, including the incremental inductance, back emf, and electromagnetic torque can be determined indirectly from it. In this paper, two methods of measuring the flux linkage profile of an SRM from the phase winding voltage and current measurements, with and without rotor locking devices, are presented. Torque, incremental inductance and back emf characteristics of the SRM are then obtained from the flux linkage measurements. The torque of the SRM is also measured directly as a comparison, and the closeness of the calculated and directly measured torque curves suggests the validity of the method to obtain the SRM torque, incremental inductance and back emf profiles from the flux linkage measurements. © 2013 IEEE.
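The core of such voltage/current-based flux measurement is the winding voltage equation v = Ri + dψ/dt, so the flux linkage follows by integration; a minimal numeric sketch, with the phase resistance and test waveforms as placeholders rather than the paper's data:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def flux_linkage(t, v, i, R):
    """psi(t) = integral of (v - R*i) dt, from v = R*i + dpsi/dt."""
    return cumulative_trapezoid(v - R * i, t, initial=0.0)

# Placeholder waveforms for one locked-rotor excitation pulse.
t = np.linspace(0.0, 50e-3, 5000)
i = 10 * (1 - np.exp(-t / 8e-3))        # assumed current rise, A
R = 0.5                                 # assumed phase resistance, ohm
L = 12e-3                               # assumed (constant) inductance, H
v = R * i + L * np.gradient(i, t)       # synthetic applied voltage
psi = flux_linkage(t, v, i, R)          # recovers ~L*i in this toy case
```

From ψ(θ, i), torque follows via the coenergy W'(θ, i) = ∫₀^i ψ(θ, i′) di′ and T = ∂W′/∂θ, with incremental inductance ∂ψ/∂i and back emf ω·∂ψ/∂θ obtained by differentiating the same map.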
Abstract:
Naturally occurring, endogenous electric fields (EFs) have been detected at skin wounds, damaged tissue sites and vasculature. Applied EFs guide the migration of many types of cells, including endothelial cells, which migrate directionally in the field. Homing of endothelial progenitor cells (EPCs) to an injury site is important for repair of the vasculature and also for angiogenesis. However, it has not been reported whether EPCs respond to applied EFs. Aiming to explore the possibility of using electrical stimulation to regulate these progenitor cells and angiogenesis, we tested the effects of direct-current (DC) EFs on EPCs. We first used immunofluorescence to confirm the expression of endothelial progenitor markers in three lines of EPCs. We then cultured the progenitor cells in EFs. Using time-lapse video microscopy, we demonstrated that an applied DC EF directs migration of the EPCs toward the cathode. The progenitor cells also align and elongate in an EF. Inhibition of vascular endothelial growth factor (VEGF) receptor signaling completely abolished the EF-induced directional migration of the progenitor cells. We conclude that EFs are an effective signal that guides EPC migration through VEGF receptor signaling in vitro. Applied EFs may be used to control the behavior of EPCs in tissue engineering, in homing of EPCs to wounds and to injury sites in the vasculature.
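Directional migration in such time-lapse experiments is typically quantified by a directedness index, the mean cosine between each cell's net displacement and the field axis; a minimal sketch with made-up trajectories standing in for tracked cells:

```python
import numpy as np

def directedness(tracks, axis=(1.0, 0.0)):
    """Mean cos(angle) between net cell displacement and the EF axis.

    tracks: iterable of (T, 2) position arrays. +1 means every cell ended
    up displaced straight along the axis (e.g. toward the cathode), 0 is
    random, and -1 is straight against it."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    cosines = []
    for track in tracks:
        d = np.asarray(track[-1]) - np.asarray(track[0])
        norm = np.linalg.norm(d)
        if norm > 0:
            cosines.append(d @ axis / norm)
    return float(np.mean(cosines))

# Drifting random walks as stand-in trajectories for 20 cells.
rng = np.random.default_rng(3)
tracks = [np.cumsum(rng.normal([0.5, 0.0], 1.0, size=(60, 2)), axis=0)
          for _ in range(20)]
print(f"directedness = {directedness(tracks):.2f}")
```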