17 results for Electric current measurement

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

Infection is a major clinical problem associated with the use of intravenous catheters. The efficacy of a direct electric current (10 µA, 9 V) applied via electrode-conducting, carbon-impregnated catheters to prevent colonisation of catheters by micro-organisms was investigated. The range of organisms susceptible to 10 µA was determined by a zone of inhibition test. The catheters, acting as the anode and the cathode, were inserted into a nutrient agar plate inoculated with a lawn of bacteria. No zone of inhibition was observed around the anode. Organisms susceptible to 10 µA at the cathode were Staphylococcus aureus (2 strains), Staphylococcus epidermidis (5 strains), Escherichia coli and Klebsiella pneumoniae (2 strains each), and one strain each of Staphylococcus hominis, Proteus mirabilis, Pseudomonas aeruginosa and Candida albicans. The zones ranged from 6 to 16 mm in diameter according to the organism under test. The zone size was proportional to the amperage (10-100 µA) and the number of organisms on the plate. Ten µA did not prevent adhesion of staphylococci to the cathode, nor did it affect their growth in nutrient broth. However, it was bactericidal to adherent bacteria on the cathodal catheter and significantly reduced the number of bacteria on the catheter after 4 to 24 h of applied current. The mechanisms of the bactericidal activity associated with the cathode were investigated with S. epidermidis and S. aureus. The inhibition zone was greatly reduced in the presence of catalase, and there was no zone around the cathode when the test was carried out under anaerobic conditions. Hydrogen peroxide was produced at the cathode surface under aerobic conditions, but not in the absence of oxygen. A salt-bridge apparatus was used to demonstrate further that hydrogen peroxide was produced at the cathode and chlorine at the anode. The antimicrobial activity of low-amperage electric current under anaerobic conditions and in the absence of chloride ions against bacteria attached to the surface of a current-carrying electrode was also investigated. Antibacterial activity was reduced under anaerobic conditions, which is compatible with hydrogen peroxide acting as the primary bactericidal agent of electricity associated with the cathode. A reduction in chloride ions did not significantly reduce the antibacterial activity, suggesting that chlorine plays only a minor role in the bactericidal activity against organisms attached to anodal electrode surfaces. The bactericidal activity of electric current associated with the cathode and with H2O2 was greatly reduced in the presence of 50 µM to 0.5 mM magnesium ions in the test menstruum. Ten µA applied via the catheters did not prevent initial biofilm growth by the adherent bacteria but reduced the number of bacteria in the biofilm by 2 log orders after 24 h. The results suggest that 10 µA may prevent the colonisation of catheters by both the extra- and intra-luminal routes. The localised production of hydrogen peroxide and chlorine, together with the intrinsic activity of the electric current, may offer a useful method for the eradication of bacteria from catheter surfaces.

Relevance:

80.00%

Publisher:

Abstract:

Modern engineering requirements are frequently near the limits of application of conventional materials. For many purposes, particularly tribological ones, the most satisfactory solution is frequently the application of a resistant coating to the surface of a common metal. Electrodeposited cermet coatings have proved very satisfactory, and some of the factors underlying the cermet electrodeposition process have been investigated. A ceramic particle in contact with an electrolyte solution will carry a charge which may affect the kinetics of the suspended particle under electroplating conditions. This charge has been measured on particles of silicon carbide, chromium diboride and quartz, in contact with copper sulphate/sulphuric acid solutions, in terms of both the electrokinetic (zeta) potential and the surface charge density. The method used was that of streaming potential and streaming current measurement.
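For context (a standard textbook relation rather than anything stated in this abstract), streaming potential data of this kind are conventionally converted to a zeta potential via the Helmholtz–Smoluchowski equation, assuming a thin double layer and negligible surface conductance:

\[
\zeta = \frac{\eta\,\kappa}{\varepsilon_0 \varepsilon_r}\,\frac{\Delta U_s}{\Delta P},
\]

where \(\Delta U_s\) is the streaming potential generated by a pressure difference \(\Delta P\) across the particle plug, \(\eta\) and \(\kappa\) are the viscosity and conductivity of the electrolyte, and \(\varepsilon_0 \varepsilon_r\) its permittivity; the analogous streaming current measurement avoids the explicit dependence on the bulk conductivity.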

Relevance:

40.00%

Publisher:

Abstract:

This paper presents a diagnostic and prognostic condition monitoring method for insulated-gate bipolar transistor (IGBT) power modules, intended primarily for electric vehicle applications. Wire-bond-related failure, one of the most commonly observed packaging failures, is investigated by analytical and experimental methods using the on-state voltage drop as a failure indicator. A sophisticated test bench is developed to generate and apply the required current/power pulses to the device under test. The proposed method is capable of detecting small changes in the failure indicators of the IGBTs and freewheeling diodes, and its effectiveness is validated experimentally. The novelty of the work lies in the accurate online testing capability for diagnostics and prognostics of the power module, with a focus on wire-bonding faults, achieved by injecting external currents into the power unit during idle time. Test results show that the IGBT may sustain the loss of half of its bond wires before the impending fault becomes catastrophic. The measurement circuitry can be embedded in the IGBT drive circuits, and the measurements can be performed in situ when the electric vehicle is stationary in stop-and-go or red-light traffic, or during routine servicing.
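As an illustration only (not the authors' implementation; the reference value, 5% threshold and function names are assumptions), a condition monitor of this kind might compare the on-state voltage drop measured during idle-time current injection against a healthy-device reference:

```python
# Hypothetical sketch: flag bond-wire degradation from the on-state voltage
# drop V_ce(on), measured at a fixed injected current and junction
# temperature. The reference value and 5% threshold are illustrative.

def vce_deviation(v_ce_measured: float, v_ce_reference: float) -> float:
    """Relative increase of V_ce(on) over the healthy-device reference."""
    return (v_ce_measured - v_ce_reference) / v_ce_reference

def bond_wire_warning(samples: list[float], v_ce_reference: float,
                      threshold: float = 0.05) -> bool:
    """True if the averaged V_ce(on) rise exceeds the threshold, suggesting
    lifted or cracked bond wires."""
    avg = sum(samples) / len(samples)  # average repeated idle-time pulse readings
    return vce_deviation(avg, v_ce_reference) > threshold

if __name__ == "__main__":
    healthy_vce = 1.65                       # V, assumed reference at the test current
    idle_time_readings = [1.75, 1.76, 1.74]  # V, measured during injected pulses
    print(bond_wire_warning(idle_time_readings, healthy_vce))  # True
```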

Relevance:

30.00%

Publisher:

Abstract:

Purpose – To investigate the impact of performance measurement in the strategic planning process. Design/methodology/approach – A large-scale online survey was conducted with Warwick Business School alumni. The questionnaire was based on the Strategic Development Process model by Dyson, and was designed to map the current practice of strategic planning and to determine the factors most influential on the effectiveness of the process. All questions were closed-ended and a seven-point Likert scale was used. The independent variables were grouped into four meaningful factors by factor analysis (varimax rotation, coefficient 0.4). The resulting factors were used to build stepwise regression models for the five assessments of the strategic planning process. Regression models were developed for the totality of the responses, comparing SMEs with large organizations and comparing organizations operating in slowly changing environments with those in rapidly changing ones. Findings – The results indicate that performance measurement stands as one of the four main factors characterising the current practice of strategic planning. The research determined that complexity arising from organizational size and the rate of change in the sector creates variation in the impact of performance measurement in strategic planning: large organizations and organizations operating in rapidly changing environments make greater use of performance measurement. Research limitations/implications – This research is based on subjective data; the conclusions therefore concern the success/effectiveness of the strategic planning process itself, not the impact of the process's elements on organizational performance achievements. Practical implications – The research raises a series of questions about the use and potential impact of performance measurement, especially in the categories of organizations that are not significantly influenced by its utilisation, and contributes to the field of performance measurement impact. Originality/value – This research fills a gap in the literature concerning the lack of large-scale surveys on strategic development processes and performance measurement, and contributes empirical evidence on the impact of performance measurement upon the strategic planning process.
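A minimal sketch of the analysis pipeline described above, varimax-rotated factor analysis followed by stepwise (forward) regression of an effectiveness assessment on the factor scores, using synthetic data; the library choices, variable names and the 0.05 entry threshold are assumptions, not details from the paper:

```python
# Illustrative pipeline: varimax factor analysis of Likert items, then
# forward stepwise regression of one effectiveness assessment on the
# factor scores. Data and names are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 8, size=(200, 12)),          # 7-point Likert items
                     columns=[f"q{i}" for i in range(1, 13)])
effectiveness = rng.integers(1, 8, size=200).astype(float)        # one assessment variable

# Group items into four factors with varimax rotation (scikit-learn >= 0.24).
fa = FactorAnalysis(n_components=4, rotation="varimax")
scores = pd.DataFrame(fa.fit_transform(items),
                      columns=["f1", "f2", "f3", "f4"])

def forward_stepwise(X, y, alpha=0.05):
    """Add the predictor with the smallest p-value until none is below alpha."""
    selected, remaining = [], list(X.columns)
    while remaining:
        pvals = {c: sm.OLS(y, sm.add_constant(X[selected + [c]])).fit().pvalues[c]
                 for c in remaining}
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    return sm.OLS(y, sm.add_constant(X[selected])).fit()

model = forward_stepwise(scores, effectiveness)
print(model.summary())
```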

Relevance:

30.00%

Publisher:

Abstract:

Färe, Grosskopf, Norris and Zhang developed a non-parametric productivity index, the Malmquist index, using data envelopment analysis (DEA). The Malmquist index is a measure of productivity progress (or regress), and it can be decomposed into components such as 'efficiency catch-up' and 'technology change'. However, the Malmquist index and its components are based on two periods of time, and so can capture only part of the impact of investment in long-lived assets; the effects of lags in the investment process on the capital stock are ignored in the current Malmquist index model. This paper extends the recent dynamic DEA models introduced by Emrouznejad and Thanassoulis and by Emrouznejad to a dynamic Malmquist index. The paper shows that the dynamic productivity results for Organisation for Economic Co-operation and Development countries should reflect reality better than those based on the conventional model.
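For reference, the standard Färe–Grosskopf–Norris–Zhang formulation (not the dynamic extension proposed here) writes the output-oriented Malmquist index between periods \(t\) and \(t+1\), and its decomposition into efficiency catch-up and technology change, as

\[
M_o = \left[\frac{D_o^t(x^{t+1},y^{t+1})}{D_o^t(x^t,y^t)}\cdot\frac{D_o^{t+1}(x^{t+1},y^{t+1})}{D_o^{t+1}(x^t,y^t)}\right]^{1/2}
= \underbrace{\frac{D_o^{t+1}(x^{t+1},y^{t+1})}{D_o^t(x^t,y^t)}}_{\text{efficiency catch-up}}
\times
\underbrace{\left[\frac{D_o^t(x^{t+1},y^{t+1})}{D_o^{t+1}(x^{t+1},y^{t+1})}\cdot\frac{D_o^t(x^t,y^t)}{D_o^{t+1}(x^t,y^t)}\right]^{1/2}}_{\text{technology change}},
\]

where \(D_o^s\) denotes the DEA output distance function evaluated against the period-\(s\) frontier.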

Relevance:

30.00%

Publisher:

Abstract:

This study is concerned with several proposals concerning multiprocessor systems and with the various possible methods of evaluating such proposals. After a discussion of the advantages and disadvantages of several performance evaluation tools, the author concludes that simulation is the only tool powerful enough to develop a model of practical use in the design, comparison and extension of systems. The main aims of the simulation package developed as part of this study are cost effectiveness, ease of use and generality. The methodology on which the simulation package is based is described in detail. The fundamental principles are that model design should reflect actual systems design, that measuring procedures should be carried out alongside design, that models should be well documented and easily adaptable, and that models should be dynamic. The simulation package itself is modular, and in this way reflects current design trends; this approach also aids documentation and ensures that the model is easily adaptable. It contains a skeleton structure and a library of segments which can be added to, or directly swapped with, segments of the skeleton structure to form a model which fits a user's requirements. The study also contains the results of some experimental work carried out using the model: the first part tests the model's capabilities by simulating a large operating system, the ICL George 3 system; the second part deals with general questions and some of the many proposals concerning multiprocessor systems.
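A present-day, purely illustrative sketch of the skeleton-plus-swappable-segments idea (the segment names and interfaces here are invented, not taken from the thesis):

```python
# Illustrative sketch of a skeleton simulation model with swappable segments.
from abc import ABC, abstractmethod

class Segment(ABC):
    """A pluggable model component (e.g. CPU, memory, scheduler)."""
    @abstractmethod
    def step(self, time: float) -> None: ...

class RoundRobinScheduler(Segment):
    def step(self, time: float) -> None:
        print(f"{time:6.1f}: round-robin scheduling pass")

class FifoScheduler(Segment):
    def step(self, time: float) -> None:
        print(f"{time:6.1f}: FIFO scheduling pass")

class Skeleton:
    """Fixed framework; behaviour comes from the segments plugged into it."""
    def __init__(self, segments: list[Segment]):
        self.segments = segments
    def run(self, steps: int, dt: float = 1.0) -> None:
        for k in range(steps):
            for seg in self.segments:
                seg.step(k * dt)

# Swapping a library segment changes the modelled system without touching the skeleton.
Skeleton([RoundRobinScheduler()]).run(steps=2)
Skeleton([FifoScheduler()]).run(steps=2)
```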

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents the results of an investigation into the merits of analysing magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of methods for measuring the minute magnetic flux variations at the scalp that result from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which give rise to the observed time series are linear, despite a variety of reasons to suspect that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions.

A crucial concept underpinning this project is the belief that MEG recordings are merely observations of the evolution of a true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way, as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings.

Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratio of the recordings. Because the magnetic flux variations resulting from actual cortical processes can be extremely small, the measuring devices used in MEG are necessarily extremely sensitive; the unfortunate side-effect is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings, which has a number of notable drawbacks: in particular, it is difficult to synchronise high-frequency activity which might be of interest, and such signals are often cancelled out by the averaging process. Further problems are the high cost and low portability of state-of-the-art multichannel machines, with the result that the use of MEG has hitherto been restricted to large institutions able to afford the procurement and maintenance of these machines.

In this project, we seek to address these issues by working almost exclusively with single-channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in research areas ranging from financial time series modelling to the analysis of sunspot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
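The state-space view outlined above can be written generically as (a sketch only; the forms of \(f\), \(g\) and the noise processes are left open)

\[
x_{t+1} = f(x_t) + \eta_t, \qquad y_t = g(x_t) + \epsilon_t,
\]

where \(x_t\) denotes the unobservable underlying state, \(y_t\) the single-channel MEG observation, \(\eta_t\) dynamic noise and \(\epsilon_t\) observational noise; the conventional linear treatments criticised above correspond to restricting \(f\) and \(g\) to linear maps.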

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to determine whether an ophthalmophakometric technique could offer a feasible means of investigating ocular component contributions to residual astigmatism in human eyes. Current opinion was gathered on the prevalence, magnitude and source of residual astigmatism. It emerged that a comprehensive evaluation of the astigmatic contributions of the eye's internal ocular surfaces and their respective axial separations (effectivity) had not been carried out to date. An ophthalmophakometric technique was developed to measure astigmatism arising from the internal ocular components. Procedures included the measurement of refractive error (infra-red autorefractometry), anterior corneal surface power (computerised video keratography), axial distances (A-scan ultrasonography) and the powers of the posterior corneal surface in addition to both surfaces of the crystalline lens (multi-meridional still flash ophthalmophakometry). Computing schemes were developed to yield the required biometric data. These included (1) calculation of crystalline lens surface powers in the absence of Purkinje images arising from its anterior surface, (2) application of meridional analysis to derive spherocylindrical surface powers from notional powers calculated along four pre-selected meridians, (3) application of astigmatic decomposition and vergence analysis to calculate contributions to residual astigmatism of ocular components with obliquely related cylinder axes, and (4) calculation of the effect of random experimental errors on the calculated ocular component data. A complete set of biometric measurements was taken from both eyes of 66 undergraduate students. Effectivity due to corneal thickness made the smallest cylinder power contribution (up to 0.25DC) to residual astigmatism, followed by contributions of the anterior chamber depth (up to 0.50DC) and crystalline lens thickness (up to 1.00DC). In each case astigmatic contributions were predominantly direct. More astigmatism arose from the posterior corneal surface (up to 1.00DC) and both crystalline lens surfaces (up to 2.50DC). The astigmatic contributions of the posterior corneal and lens surfaces were found to be predominantly inverse, whilst direct astigmatism arose from the anterior lens surface. Very similar results were found for right versus left eyes and males versus females. Repeatability was assessed on 20 individuals. The ophthalmophakometric method was found to be prone to considerable accumulated experimental errors; however, these errors are random in nature, so group-averaged data were found to be reasonably repeatable. A further confirmatory study was carried out on 10 individuals, which demonstrated that biometric measurements made with and without cycloplegia did not differ significantly.
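As background to computing scheme (3), astigmatic decomposition is conventionally expressed with power vectors; in the standard formulation (stated here for reference, not reproduced from the thesis), a spherocylinder with sphere \(S\), cylinder \(C\) and axis \(\alpha\) decomposes as

\[
M = S + \frac{C}{2}, \qquad J_0 = -\frac{C}{2}\cos 2\alpha, \qquad J_{45} = -\frac{C}{2}\sin 2\alpha,
\]

so that contributions from surfaces with obliquely related cylinder axes can be summed component-wise and the total converted back to spherocylindrical form.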

Relevance:

30.00%

Publisher:

Abstract:

A second-harmonic direct current (DC) ripple compensation technique is presented for a multi-phase, fault-tolerant, permanent magnet machine. The analysis has been undertaken in a general manner for any pair of phases in operation with the remaining phases inactive. The compensation technique determines the required alternating currents in the machine to eliminate the second-harmonic DC-link current, while at the same time minimising the total rms current in the windings. An additional benefit of the compensation technique is a reduction in the magnitude of the electromagnetic torque ripple. Practical results are included from a 70 kW, five-phase generator system to validate the analysis and illustrate the performance of the compensation technique.

Relevance:

30.00%

Publisher:

Abstract:

The electrostatic model for osmotic flow across a porous membrane in our previous study (Akinaga et al. 2008) was extended to include the streaming potential, for solutes and pores of like charge and fixed surface charge densities. The magnitude of the streaming potential was determined so as to satisfy the zero-current condition along the pore axis. It was found that the streaming potential affects the velocity profiles of both the pressure-driven flow and the osmotic flow through the pore, and decreases their flow rates, particularly when the Debye length is large relative to the pore radius, whereas it has little effect on the reflection coefficients of spherical solutes in cylindrical pores.
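The zero-current condition mentioned above can be sketched generically (this is the standard form of the condition, not necessarily the paper's exact notation) as requiring the convective and conductive contributions to the axial current to cancel:

\[
I = \int_0^{a} \bigl[\rho_e(r)\,u_z(r) + \sigma(r)\,E_z\bigr]\,2\pi r\,\mathrm{d}r = 0,
\]

where \(a\) is the pore radius, \(\rho_e\) the space-charge density, \(u_z\) the axial velocity profile, \(\sigma\) the local conductivity and \(E_z\) the axial field associated with the streaming potential.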

Relevance:

30.00%

Publisher:

Abstract:

Purpose – This paper aims to respond to John Rossiter's call for a “Marketing measurement revolution” in the current issue of EJM, as well as providing broader comment on Rossiter's C-OAR-SE framework, and measurement practice in marketing in general. Design/methodology/approach – The paper is purely theoretical, based on interpretation of measurement theory. Findings – The authors find that much of Rossiter's diagnosis of the problems facing measurement practice in marketing and social science is highly relevant. However, the authors find themselves opposed to the revolution advocated by Rossiter. Research limitations/implications – The paper presents a comment based on interpretation of measurement theory and observation of practices in marketing and social science. As such, the interpretation is itself open to disagreement. Practical implications – There are implications for those outside academia who wish to use measures derived from academic work as well as to derive their own measures of key marketing and other social variables. Originality/value – This paper is one of the few to explicitly respond to the C-OAR-SE framework proposed by Rossiter, and presents a number of points critical to good measurement theory and practice, which appear to remain underdeveloped in marketing and social science.

Relevance:

30.00%

Publisher:

Abstract:

For a switched reluctance motor (SRM), the flux linkage characteristic is the most basic magnetic characteristic, and many other quantities, including the incremental inductance, back emf and electromagnetic torque, can be determined indirectly from it. In this paper, two methods of measuring the flux linkage profile of an SRM from phase winding voltage and current measurements, with and without rotor locking devices, are presented. Torque, incremental inductance and back emf characteristics of the SRM are then obtained from the flux linkage measurements. The torque of the SRM is also measured directly for comparison, and the close agreement between the calculated and directly measured torque curves supports the validity of obtaining the SRM torque, incremental inductance and back emf profiles from the flux linkage measurements. © 2013 IEEE.
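For reference, the standard relations underlying such indirect measurements (stated generically here rather than as this paper's exact procedure) are: the flux linkage follows from integrating the winding voltage after subtracting the resistive drop, the co-energy from integrating the flux linkage over current, and the torque from the variation of co-energy with rotor position:

\[
\psi(t) = \int_0^{t}\bigl[v(\tau) - R\,i(\tau)\bigr]\,\mathrm{d}\tau, \qquad
W'(\theta, i) = \int_0^{i}\psi(\theta, i')\,\mathrm{d}i', \qquad
T_e(\theta, i) = \left.\frac{\partial W'(\theta, i)}{\partial \theta}\right|_{i\ \text{const}},
\]

with the incremental inductance given by \(l(\theta,i) = \partial\psi/\partial i\) and the back emf by \(e = \omega\,\partial\psi/\partial\theta\) at constant current.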

Relevance:

30.00%

Publisher:

Abstract:

Naturally occurring, endogenous electric fields (EFs) have been detected at skin wounds, damaged tissue sites and vasculature. Applied EFs guide the migration of many cell types, including endothelial cells, which migrate directionally. Homing of endothelial progenitor cells (EPCs) to an injury site is important for repair of the vasculature and for angiogenesis. However, it has not been reported whether EPCs respond to applied EFs. Aiming to explore the possibility of using electric stimulation to regulate progenitor cells and angiogenesis, we tested the effects of direct-current (DC) EFs on EPCs. We first used immunofluorescence to confirm the expression of endothelial progenitor markers in three lines of EPCs. We then cultured the progenitor cells in EFs. Using time-lapse video microscopy, we demonstrated that an applied DC EF directs migration of the EPCs toward the cathode. The progenitor cells also align and elongate in an EF. Inhibition of vascular endothelial growth factor (VEGF) receptor signaling completely abolished the EF-induced directional migration of the progenitor cells. We conclude that EFs are an effective signal that guides EPC migration through VEGF receptor signaling in vitro. Applied EFs may be used to control the behavior of EPCs in tissue engineering, in homing of EPCs to wounds, and to injury sites in the vasculature.

Relevance:

30.00%

Publisher:

Abstract:

Aircraft assembly is the most important part of aircraft manufacturing, and large numbers of assembly fixtures must be used to ensure assembly accuracy during the assembly process. Traditional fixed assembly fixtures cannot accommodate changes in aircraft type, so digital flexible assembly fixtures were developed and are gradually being applied in aircraft assembly; digital flexible assembly technology has become one of the research directions in the field of aircraft manufacturing. Aircraft flexible assembly can be divided into three stages: component-level flexible assembly, large-component-level flexible assembly, and large-component alignment and joining. This article introduces the architecture of flexible assembly systems and the principles of the three types of flexible assembly fixtures, and discusses the key technologies of digital flexible assembly. The digital metrology system provides the basis for accurate digital flexible assembly; aircraft flexible assembly systems mainly use laser tracking metrology systems and indoor Global Positioning System metrology systems. With the development of flexible assembly technology, digital flexible assembly systems will be widely used in aircraft manufacturing.

Relevance:

30.00%

Publisher:

Abstract:

This research develops a methodology and model formulation which suggests locations for rapid chargers, to assist infrastructure development and enable greater battery electric vehicle (BEV) usage. The model considers the likely travel patterns of BEVs and their subsequent charging demands across a large road network, and requires no prior candidate-site information. Using a GIS-based methodology, polygons are constructed which represent the charging demand zones for particular routes across a real-world road network. The use of polygons allows the maximum number of charging combinations to be considered whilst limiting the input intensity needed for the model. Further polygons are added to represent deviation possibilities, meaning that placement of charge points away from the shortest path is possible, subject to a penalty function. The model is validated by assessing the expected demand at current rapid charging locations and comparing it with recorded empirical usage data. Results suggest that the developed model provides a good approximation to real-world observations, and that for the provision of charging, location matters. Because no prior candidate-site information is required, locations are chosen based on the weighted overlay between several different routes on which BEV journeys may be expected. In this way many locations, or types of locations, can be compared against one another and then analysed in relation to siting practicalities such as cost, land permission and infrastructure availability. Results show that efficient facility location, given numerous siting possibilities across a large road network, can be achieved. Slight improvements to the standard greedy adding technique are made by adding combination weightings which reward important long-distance routes that require more than one charge to complete.
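A minimal sketch of a weighted greedy adding heuristic of the kind referred to above (illustrative only; the paper's demand polygons, deviation penalties and combination weightings are not reproduced here, and the data are made up):

```python
# Illustrative greedy adding heuristic: repeatedly open the candidate site
# that covers the largest remaining weighted demand.
def greedy_adding(candidates, demands, n_sites):
    """candidates: {site: set of route ids it can serve}
    demands: {route id: weighted demand (e.g. expected BEV flow)}
    Returns the chosen sites in the order they were added."""
    uncovered = dict(demands)
    chosen = []
    for _ in range(n_sites):
        best = max(candidates,
                   key=lambda s: sum(uncovered.get(r, 0.0) for r in candidates[s]))
        gain = sum(uncovered.get(r, 0.0) for r in candidates[best])
        if gain == 0.0:
            break                      # nothing left worth covering
        chosen.append(best)
        for r in candidates[best]:     # remove demand now served
            uncovered.pop(r, None)
    return chosen

sites = {"A": {"r1", "r2"}, "B": {"r2", "r3"}, "C": {"r3", "r4", "r5"}}
route_demand = {"r1": 10.0, "r2": 40.0, "r3": 25.0, "r4": 5.0, "r5": 5.0}
print(greedy_adding(sites, route_demand, n_sites=2))   # ['B', 'A']
```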