875 results for measurement of power loss


Relevance: 100.00%

Abstract:

In this paper, we propose and demonstrate a novel scheme for simultaneous measurement of liquid level and temperature based on a simple uniform fiber Bragg grating (FBG), by monitoring both its short-wavelength-loss peaks and its Bragg resonance. The liquid level can be measured from the amplitude changes of the short-wavelength-loss peaks, while temperature can be measured from the wavelength shift of the Bragg resonance. Both theoretical simulation results and experimental results are presented. The scheme offers several advantages, including robustness, simplicity, flexibility in choosing sensitivity, and simultaneous temperature measurement capability.
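
Because the two observables are independent (loss-peak amplitude responds to level, Bragg wavelength to temperature), decoding reduces to two one-variable calibrations. The sketch below assumes simple linear responses; the coefficients `K_LEVEL` and `K_TEMP` are illustrative placeholders, not values from the paper.

```python
# Hypothetical linear calibration for the dual-parameter FBG scheme:
# level from the short-wavelength-loss peak amplitude, temperature from
# the Bragg wavelength shift. Coefficients are illustrative only.

K_LEVEL = -0.25   # dB change per mm of immersion (assumed)
K_TEMP = 10.0     # pm of Bragg shift per degree C (typical order for silica FBGs)

def decode(amp_change_db: float, bragg_shift_pm: float,
           level0_mm: float = 0.0, temp0_c: float = 20.0):
    """Return (level_mm, temp_c) from the two independent observables."""
    temp_c = temp0_c + bragg_shift_pm / K_TEMP
    level_mm = level0_mm + amp_change_db / K_LEVEL
    return level_mm, temp_c

level, temp = decode(amp_change_db=-2.5, bragg_shift_pm=50.0)
```

In practice each coefficient would be fitted from a calibration run; the point of the scheme is that the temperature channel does not perturb the level channel, so the two fits are decoupled.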

Relevance: 100.00%

Abstract:

Whilst research on work group diversity has proliferated in recent years, relatively little attention has been paid to the precise definition of diversity or its measurement. One of the few studies to do so is Harrison and Klein’s (2007) typology, which defined three types of diversity – separation, variety and disparity – and suggested possible indices with which they should be measured. However, their typology is limited by its association of diversity types with variable measurement, by a lack of clarity over the meaning of variety, and by the absence of clear guidance about which diversity index should be employed. In this thesis I develop an extended version of the typology, including four diversity types (separation, range, spread and disparity), and propose specific indices to be used for each type of diversity with each variable type (ratio, interval, ordinal and nominal). Indices are chosen or derived from first principles based on the precise definition of the diversity type. I then test the usefulness of these indices in predicting outcomes of diversity compared with other indices, using both an extensive simulated data set (to estimate the effects of mis-specification of diversity type or index) and eight real data sets (to examine whether the proposed indices produce the strongest relationships with hypothesised outcomes). The analyses lead to the conclusion that the indices proposed in the typology are at least as good as, and usually better than, other indices in terms of both effect sizes and the power to find significant results, and thus provide evidence to support the typology. Implications for theory and methodology are discussed.
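
For orientation, the three indices Harrison and Klein commonly associate with their original types can be computed directly: standard deviation for separation, Blau's index for variety, and the coefficient of variation for disparity. The thesis proposes refinements beyond these; the sketch below shows only the standard baseline indices.

```python
import math
from collections import Counter

def separation(values):
    """Separation (spread of positions/opinions): population standard deviation."""
    n = len(values)
    mean = sum(values) / n
    return math.sqrt(sum((v - mean) ** 2 for v in values) / n)

def variety(categories):
    """Variety (categorical differences): Blau's index, 1 - sum(p_k^2)."""
    n = len(categories)
    return 1.0 - sum((c / n) ** 2 for c in Counter(categories).values())

def disparity(values):
    """Disparity (concentration of a valued resource): coefficient of variation."""
    mean = sum(values) / len(values)
    return separation(values) / mean

# A four-person group split evenly over two categories is maximally varied
# for two categories: Blau = 0.5.
b = variety(["a", "a", "b", "b"])
```

Each index is maximized by a different within-group pattern (bimodal split for separation, even spread over categories for variety, a single dominant member for disparity), which is why index choice matters for hypothesis testing.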

Relevance: 100.00%

Abstract:

The aim of this study was to determine whether an ophthalmophakometric technique could offer a feasible means of investigating ocular component contributions to residual astigmatism in human eyes. Current opinion was gathered on the prevalence, magnitude and source of residual astigmatism. It emerged that a comprehensive evaluation of the astigmatic contributions of the eye's internal ocular surfaces and their respective axial separations (effectivity) had not been carried out to date. An ophthalmophakometric technique was developed to measure astigmatism arising from the internal ocular components. Procedures included the measurement of refractive error (infra-red autorefractometry), anterior corneal surface power (computerised video keratography), axial distances (A-scan ultrasonography) and the powers of the posterior corneal surface in addition to both surfaces of the crystalline lens (multi-meridional still flash ophthalmophakometry). Computing schemes were developed to yield the required biometric data. These included (1) calculation of crystalline lens surface powers in the absence of Purkinje images arising from its anterior surface, (2) application of meridional analysis to derive spherocylindrical surface powers from notional powers calculated along four pre-selected meridians, (3) application of astigmatic decomposition and vergence analysis to calculate contributions to residual astigmatism of ocular components with obliquely related cylinder axes, (4) calculation of the effect of random experimental errors on the calculated ocular component data. A complete set of biometric measurements was taken from both eyes of 66 undergraduate students. Effectivity due to corneal thickness made the smallest cylinder power contribution (up to 0.25DC) to residual astigmatism, followed by contributions of the anterior chamber depth (up to 0.50DC) and crystalline lens thickness (up to 1.00DC). In each case astigmatic contributions were predominantly direct. More astigmatism arose from the posterior corneal surface (up to 1.00DC) and both crystalline lens surfaces (up to 2.50DC). The astigmatic contributions of the posterior corneal and lens surfaces were found to be predominantly inverse, whilst direct astigmatism arose from the anterior lens surface. Very similar results were found for right versus left eyes and males versus females. Repeatability was assessed on 20 individuals. The ophthalmophakometric method was found to be prone to considerable accumulated experimental error. However, these errors are random in nature, so group-averaged data were found to be reasonably repeatable. A further confirmatory study carried out on 10 individuals demonstrated that biometric measurements made with and without cycloplegia did not differ significantly.
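
The astigmatic decomposition mentioned in step (3) is conventionally done with power vectors: a spherocylinder (S, C, axis) maps to (M, J0, J45), in which form obliquely crossed cylinders add componentwise. A minimal sketch of the standard conversion (this is the textbook Thibos form, not necessarily the exact scheme used in the thesis):

```python
import math

def to_power_vector(sphere, cyl, axis_deg):
    """Convert a spherocylinder (S, C, axis) to (M, J0, J45) power-vector form."""
    a = math.radians(axis_deg)
    M = sphere + cyl / 2.0
    J0 = -(cyl / 2.0) * math.cos(2 * a)
    J45 = -(cyl / 2.0) * math.sin(2 * a)
    return M, J0, J45

def from_power_vector(M, J0, J45):
    """Back-convert to (sphere, cyl, axis) in minus-cylinder form."""
    C = -2.0 * math.hypot(J0, J45)
    axis = math.degrees(math.atan2(J45, J0)) / 2.0 % 180
    sphere = M - C / 2.0
    return sphere, C, axis

# Round trip on -1.00 / -2.00 x 90:
s, c, ax = from_power_vector(*to_power_vector(-1.0, -2.0, 90))
```

Summing the (M, J0, J45) vectors of the individual ocular surfaces, each propagated through the appropriate vergence step, yields the total contribution to residual astigmatism even when the cylinder axes are obliquely related.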


Relevance: 100.00%

Abstract:

The longitudinal distribution of the Stokes-component power in a Raman fibre laser with a random distributed feedback and unidirectional pumping is measured. The fibre parameters (linear loss and Rayleigh backscattering coefficient) are calculated based on the distributions obtained. A numerical model is developed to describe the lasing power distribution. The simulation results are in good agreement with the experimental data. © 2012 Kvantovaya Elektronika and Turpion Ltd.

Relevance: 100.00%

Abstract:

Purpose - To investigate whether the accuracy of intraocular pressure (IOP) measurements using rebound tonometry over disposable hydrogel (etafilcon A) contact lenses (CL) is affected by the positive power of the CLs. Methods - The experimental group comprised 26 subjects (8 male, 18 female). IOP measurements were undertaken on the subjects’ right eyes in random order using a Rebound Tonometer (ICare). The CLs had powers of +2.00 D and +6.00 D. Measurements were taken over each contact lens and also before and after the CLs had been worn. Results - The IOP measured over both CLs was significantly lower than the value without CLs (t test; p < 0.001), but no significant difference was found between the two powers of CLs. Conclusions - Rebound tonometry over positive hydrogel CLs leads to a certain degree of IOP underestimation. This result did not change for the two positive lenses used in the experiment, despite their large difference in power and therefore in lens thickness. Optometrists should bear this in mind when measuring IOP with the rebound tonometer over plus-power contact lenses.
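
The comparison reported above (with-CL vs without-CL readings on the same eyes) is a paired design, so the test statistic is a paired t. A self-contained sketch with clearly labelled illustrative readings (not the study's data):

```python
import math

def paired_t(x, y):
    """Paired t statistic for two matched samples (e.g. IOP without vs with CL)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)   # sample variance of differences
    return mean / math.sqrt(var / n)

# Illustrative IOP readings in mmHg (hypothetical, for demonstration only):
no_cl   = [14.0, 15.5, 13.0, 16.0, 14.5, 15.0]
with_cl = [12.5, 14.0, 12.0, 14.5, 13.5, 13.5]
t = paired_t(no_cl, with_cl)   # positive t: readings over the CL are lower
```

A consistent underestimation over the lens shows up as a mean difference well away from zero relative to its standard error, which is exactly what the p < 0.001 result reflects.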

Relevance: 100.00%

Abstract:

Non-Destructive Testing (NDT) of deep foundations has become an integral part of the industry's standard manufacturing processes. It is not unusual for the evaluation of the integrity of the concrete to include the measurement of ultrasonic wave speeds. Numerous methods have been proposed that use the propagation speed of ultrasonic waves to check the integrity of concrete for drilled shaft foundations. All such methods evaluate the integrity of the concrete inside the cage and between the access tubes. The integrity of the concrete outside the cage remains to be considered in order to locate the border between the concrete and the soil and thus obtain the diameter of the drilled shaft. It is also economical to devise a methodology that obtains the diameter using the Cross-Hole Sonic Logging (CSL) system: the same CSL setup already used to check the integrity of the inside concrete can then determine the drilled shaft diameter without having to set up another NDT device. The proposed new method is based on installing galvanized tubes outside the shaft, across from each inside tube, and performing the CSL test between the inside and outside tubes. From the experimental work performed, a model is developed, using signal processing, to evaluate the relationship between the thickness of the concrete and the ultrasonic wave properties. The experimental results show a direct correlation between concrete thickness outside the cage and the maximum amplitude of the received signal obtained from frequency-domain data. This study demonstrates how this new method of measuring the diameter of drilled shafts during construction using an NDT method overcomes the limitations of currently used methods.
In another part of the study, a new method is proposed to visualize and quantify the extent and location of defects. It is based on a colour change, at the location of defects, in the frequency-domain amplitude map of the signal recorded by the receiver probe, and is called Frequency Tomography Analysis (FTA). Time-domain data are transformed to frequency-domain data for the signals propagated between tubes using the Fast Fourier Transform (FFT), and the distribution of the FTA is then evaluated. This method is employed after CSL has determined a high probability of an anomaly in a given area, and is applied to improve location accuracy and to further characterize the feature. The technique has very good resolution and identifies the exact depth of any void or defect along the length of the drilled shaft for voids inside the cage. The last part of the study evaluates the effect of voids inside and outside the reinforcement cage, and of corrosion in the longitudinal bars, on the strength and axial load capacity of drilled shafts. The objective is to quantify the extent of loss in axial strength and stiffness of drilled shafts due to the presence of different types of symmetric voids and corrosion throughout their lengths.
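
The core signal-processing step, transforming the received time-domain signal and reading off the peak frequency-domain amplitude (the quantity correlated with concrete thickness), can be sketched as follows. A plain DFT stands in for the FFT here so the example is dependency-free; the signal is a synthetic tone, not CSL data.

```python
import math

def dft_amplitudes(signal, fs):
    """Single-sided amplitude spectrum of a real time-domain signal.
    A plain O(n^2) DFT standing in for the FFT step of the FTA method."""
    n = len(signal)
    freqs, amps = [], []
    for k in range(n // 2 + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        scale = 1 if k in (0, n // 2) else 2        # DC and Nyquist bins are not doubled
        amps.append(scale * math.hypot(re, im) / n)
        freqs.append(k * fs / n)
    return freqs, amps

# Synthetic 50 kHz ultrasonic tone sampled at 1 MHz -- illustrative only.
fs, f0 = 1_000_000, 50_000
sig = [0.8 * math.sin(2 * math.pi * f0 * i / fs) for i in range(200)]
freqs, amps = dft_amplitudes(sig, fs)
peak = max(amps)   # the maximum frequency-domain amplitude used by the method
```

In the real method this peak amplitude, extracted per depth and per tube pair, is what the experimental model maps to concrete thickness outside the cage.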

Relevance: 100.00%

Abstract:

Optical nanofibres are ultrathin optical fibres with a waist diameter typically less than the wavelength of light being guided through them. Cold atoms can couple to the evanescent field of the nanofibre-guided modes, and such systems are emerging as promising technologies for the development of atom-photon hybrid quantum devices. Atoms within the evanescent field region of an optical nanofibre can be probed by sending near- or on-resonant light through the fibre; however, the probe light can detrimentally affect the properties of the atoms. In this paper, we report on the modification of the local temperature of laser-cooled 87Rb atoms in a magneto-optical trap centred around an optical nanofibre when near-resonant probe light propagates through it. A transient absorption technique has been used to measure the temperature of the affected atoms, and temperature variations from 160 μK to 850 μK, for a probe power ranging from 0 to 50 nW, have been observed. This effect could have implications for using optical nanofibres to probe and manipulate cold or ultracold atoms.

Relevance: 100.00%

Abstract:

During the 14th expedition of the research vessel "Meteor", from the 2nd of July to the 7th of August 1968, continuously recording instruments for measuring the CO2 partial pressure of seawater and atmospheric CO2 were developed by the Meteorological Institute, University of Frankfurt/M. During the Faroe expedition, instrumental constants such as relative and absolute accuracy, inertia and solvent power were tested. The performance of discontinuous analyses of water samples was adapted to shipboard conditions, and correction factors depending on water volume, depth of sampling and water temperature were measured. After computing average values of the continuous records (atmospheric CO2 content, CO2 partial pressure, water temperature), geographical distribution, diurnal variation and the dependence of diurnal averages were examined. At four different locations, CO2 partial pressure was measured at various depths. During the voyage from the Faroe Islands to Helgoland, the measured atmospheric CO2 content and CO2 partial pressure were examined for correlation with geographical latitude.

Relevance: 100.00%

Abstract:

Dissolved CO2 measurements are usually made using a Severinghaus electrode, which is bulky and can suffer from electrical interference. In contrast, optical sensors for gaseous CO2, whilst not suffering these problems, are mainly used for making gaseous (not dissolved) CO2 measurements, due to dye leaching and protonation, especially at high ionic strengths (>0.01 M) and acidity (<pH 4). This is usually prevented by coating the sensor with a gas-permeable, but ion-impermeable, membrane (GPM). Herein, we introduce a highly sensitive, colourimetric, plastic film sensor for the measurement of both gaseous and dissolved CO2, in which a pH-sensitive dye, thymol blue (TB), is coated onto particles of hydrophilic silica to create a CO2-sensitive, TB-based pigment, which is then extruded into low-density polyethylene (LDPE) to create a GPM-free, i.e. naked, TB plastic sensor film for gaseous and dissolved CO2 measurements. When used for making dissolved CO2 measurements, the hydrophobic nature of the LDPE renders the film: (i) indifferent to ionic strength, (ii) highly resistant to acid attack and (iii) stable when stored under ambient (dark) conditions for >8 months, with no loss of colour or function. Here, the performance of the TB plastic film is primarily assessed as a dissolved CO2 sensor in highly saline (3.5 wt%) water. The TB film is blue in the absence of CO2 and yellow in its presence, exhibiting a 50% transition in its colour at ca. 0.18% CO2. This new type of CO2 sensor has great potential in the monitoring of CO2 levels in the hydrosphere, as well as elsewhere, e.g. food packaging and possibly patient monitoring.
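
The blue-to-yellow transition of a pH-dye film like this is often modelled, to first order, as a 1:1 protonation equilibrium, which gives a simple isotherm consistent with the 50% transition point quoted above (ca. 0.18% CO2). The model and its parameter are an assumption for illustration; the real film's response need not follow it exactly.

```python
def blue_fraction(co2_percent, k=0.18):
    """Fraction of the thymol blue dye in its blue (deprotonated) form,
    modelled as a simple 1:1 protonation equilibrium.
    k is the CO2 level at the 50% colour transition (~0.18% as reported)."""
    return 1.0 / (1.0 + co2_percent / k)
```

In this picture the film is fully blue at zero CO2, half-transitioned at k, and asymptotically yellow at high CO2, matching the qualitative behaviour described in the abstract.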

Relevance: 100.00%

Abstract:

Charge carrier lifetime measurements in bulk or unfinished photovoltaic (PV) materials allow for a more accurate estimate of power conversion efficiency in completed solar cells. In this work, carrier lifetimes in PV-grade silicon wafers are obtained by way of quasi-steady-state photoconductance measurements. These measurements use a contactless RF system coupled with varying narrow-spectrum input LEDs, ranging in wavelength from 460 nm to 1030 nm. Spectrally dependent lifetime measurements allow for determination of bulk and surface properties of the material, including the intrinsic bulk lifetime and the surface recombination velocity. The effective lifetimes are fit to an analytical physics-based model to determine the desired parameters. Passivated and non-passivated samples are both studied and are shown to have good agreement with the theoretical model.
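
The separation of bulk and surface contributions rests on the standard decomposition 1/τ_eff = 1/τ_bulk + 1/τ_surf. A minimal sketch using a common approximation for the surface term for a wafer with two identical surfaces (the exact model fitted in the work may differ; the diffusivity default is an assumed value for silicon):

```python
import math

def effective_lifetime(tau_bulk_s, srv_cm_per_s, thickness_cm, d_cm2_per_s=27.0):
    """Effective carrier lifetime (s) for a wafer with identical surfaces:
        1/tau_eff = 1/tau_bulk + 1/tau_surf,
    with the common approximation
        tau_surf = W/(2S) + (W/pi)^2 / D,
    where W is thickness, S the surface recombination velocity, and D the
    carrier diffusivity (default is an assumed value for silicon)."""
    tau_surf = (thickness_cm / (2.0 * srv_cm_per_s)
                + (thickness_cm / math.pi) ** 2 / d_cm2_per_s)
    return 1.0 / (1.0 / tau_bulk_s + 1.0 / tau_surf)

# 180-um wafer, 1 ms bulk lifetime, S = 10 cm/s (well-passivated surface):
tau = effective_lifetime(1e-3, 10.0, 0.018)
```

Measuring τ_eff at several wavelengths (hence several generation profiles) is what lets both τ_bulk and S be extracted from fits to a model of this kind.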

Relevance: 100.00%

Abstract:

We report a measurement of the flux of cosmic rays with unprecedented precision and statistics using the Pierre Auger Observatory. Based on fluorescence observations in coincidence with at least one surface detector, we derive a spectrum for energies above 10^18 eV. We also update the previously published energy spectrum obtained with the surface detector array. The two spectra are combined, addressing the systematic uncertainties and, in particular, the influence of the energy resolution on the spectral shape. The spectrum can be described by a broken power law E^-γ with index γ = 3.3 below the ankle, which is measured at log10(E_ankle/eV) = 18.6. Above the ankle the spectrum is described by a power law with index 2.6, followed by a flux suppression above about log10(E/eV) = 19.5, detected with high statistical significance. © 2010 Elsevier B.V. All rights reserved.
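
The quoted spectral shape can be written down directly: a broken power law with indices 3.3 and 2.6 joined at the ankle, plus a suppression above log10(E/eV) ≈ 19.5. The sketch below evaluates only the relative shape; the normalisation and the functional form of the suppression factor are illustrative, not the paper's parametrisation.

```python
import math

def auger_flux_shape(log10_e_ev, log10_ankle=18.6, g1=3.3, g2=2.6,
                     log10_supp=19.5):
    """Relative cosmic-ray spectral shape J(E) ~ E^-gamma with the indices
    quoted above. The exponential suppression factor is an assumed form."""
    e = 10.0 ** log10_e_ev
    e_ankle = 10.0 ** log10_ankle
    if log10_e_ev < log10_ankle:
        return (e / e_ankle) ** (-g1)
    j = (e / e_ankle) ** (-g2)
    if log10_e_ev > log10_supp:
        j *= math.exp(-(log10_e_ev - log10_supp))   # crude suppression (assumed)
    return j
```

By construction the two power-law segments meet continuously at the ankle (both equal 1 there), and the flux falls steadily with energy, steepening again above the suppression scale.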

Relevance: 100.00%

Abstract:

Two trends are emerging from modern electric power systems: the growth of renewable (e.g., solar and wind) generation, and the integration of information technologies and advanced power electronics. The former introduces large, rapid, and random fluctuations in power supply, demand, frequency, and voltage, which become a major challenge for real-time operation of power systems. The latter creates a tremendous number of controllable intelligent endpoints such as smart buildings and appliances, electric vehicles, energy storage devices, and power electronic devices that can sense, compute, communicate, and actuate. Most of these endpoints are distributed on the load side of power systems, in contrast to traditional control resources such as centralized bulk generators. This thesis focuses on controlling power systems in real time, using these load side resources. Specifically, it studies two problems.

(1) Distributed load-side frequency control: We establish a mathematical framework to design distributed frequency control algorithms for flexible electric loads. In this framework, we formulate a category of optimization problems, called optimal load control (OLC), to incorporate the goals of frequency control, such as balancing power supply and demand, restoring frequency to its nominal value, restoring inter-area power flows, etc., in a way that minimizes total disutility for the loads to participate in frequency control by deviating from their nominal power usage. By exploiting distributed algorithms to solve OLC and analyzing convergence of these algorithms, we design distributed load-side controllers and prove stability of closed-loop power systems governed by these controllers. This general framework is adapted and applied to different types of power systems described by different models, or to achieve different levels of control goals under different operation scenarios. We first consider a dynamically coherent power system which can be equivalently modeled with a single synchronous machine. We then extend our framework to a multi-machine power network, where we consider primary and secondary frequency controls, linear and nonlinear power flow models, and the interactions between generator dynamics and load control.
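
The OLC idea can be illustrated on a single bus: with quadratic disutility c_i(d) = d²/(2·α_i), each load's local best response to a frequency deviation ω is d_i = α_i·ω, and dual ascent on the supply-demand balance constraint drives ω to the value at which the loads absorb the imbalance. All numbers below are illustrative, and this single-bus sketch omits the network dynamics treated in the thesis.

```python
# Minimal single-bus sketch of optimal load control (OLC).
# Frequency deviation plays the role of the dual variable of the
# supply-demand balance constraint. Parameters are illustrative.

alphas = [1.0, 2.0, 0.5]   # load flexibility parameters (assumed)
p_imbalance = 3.5          # power imbalance the loads must absorb

omega = 0.0                # dual variable (frequency deviation)
for _ in range(2000):
    d = [a * omega for a in alphas]          # each load's local best response
    omega -= 0.2 * (sum(d) - p_imbalance)    # dual ascent on the balance constraint

total = sum(a * omega for a in alphas)       # aggregate load response at convergence
```

The fixed point satisfies Σ α_i·ω* = p_imbalance, i.e. the loads share the imbalance in proportion to their flexibility, which is exactly the minimum-total-disutility allocation for quadratic costs.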

(2) Two-timescale voltage control: The voltage of a power distribution system must be maintained closely around its nominal value in real time, even in the presence of highly volatile power supply or demand. For this purpose, we jointly control two types of reactive power sources: a capacitor operating at a slow timescale, and a power electronic device, such as a smart inverter or a D-STATCOM, operating at a fast timescale. Their control actions are solved from optimal power flow problems at two timescales. Specifically, the slow-timescale problem is a chance-constrained optimization, which minimizes power loss and regulates the voltage at the current time instant while limiting the probability of future voltage violations due to stochastic changes in power supply or demand. This control framework forms the basis of an optimal sizing problem, which determines the installation capacities of the control devices by minimizing the sum of power loss and capital cost. We develop computationally efficient heuristics to solve the optimal sizing problem and implement real-time control. Numerical experiments show that the proposed sizing and control schemes significantly improve the reliability of voltage control with a moderate increase in cost.
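
For the slow-timescale problem, a Gaussian chance constraint on voltage has a standard deterministic reformulation via the normal quantile: requiring P(violation) ≤ ε tightens the voltage limit by z_{1-ε} standard deviations of the forecast noise. The sketch below applies this to sizing the slow (capacitor) injection; all numbers and the linear voltage-sensitivity model are illustrative, not from the thesis.

```python
import statistics

def min_capacitor_var(v_nominal, v_min, sensitivity, noise_sd, eps=0.05):
    """Smallest reactive injection q (MVAr) such that
        P(v_nominal + sensitivity*q + noise < v_min) <= eps
    for Gaussian noise -- the standard deterministic reformulation of a
    one-sided chance constraint. Linear sensitivity model is assumed."""
    z = statistics.NormalDist().inv_cdf(1 - eps)          # one-sided quantile
    return max(0.0, (v_min + z * noise_sd - v_nominal) / sensitivity)

# Illustrative numbers: voltages in per unit, sensitivity in pu/MVAr.
q_ok  = min_capacitor_var(v_nominal=0.98, v_min=0.95, sensitivity=0.02, noise_sd=0.01)
q_low = min_capacitor_var(v_nominal=0.94, v_min=0.95, sensitivity=0.02, noise_sd=0.01)
```

When the nominal voltage already clears the tightened limit, no slow-timescale injection is needed (q = 0); otherwise the capacitor covers the forecastable part, leaving fast stochastic fluctuations to the inverter or D-STATCOM.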


Relevance: 100.00%

Abstract:

Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system, there are one or more processor cores to run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial in satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is “power estimation”. Having a fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and to make the optimal choices for developing a power-efficient processor. Likewise, understanding the processor power dissipation behaviour of a specific software application is the key to choosing appropriate algorithms in order to write power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process, and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, using a design method to develop power-predictable circuits; second, analysing the power of the functions in the code which repeat during execution, then building the power model based on the average number of repetitions.
In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) for the 8051 microcontroller. The ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented to estimate the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than 100 times speedup in comparison to conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons that take place during execution of the algorithm. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders-of-magnitude speedup over the simulation-based method.
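
The second approach can be illustrated concretely: for insertion sort on a random permutation, the expected number of inversions is n(n-1)/4, and each element additionally incurs roughly one boundary comparison, giving an average-case comparison count that an energy model can scale. The approximation and both energy coefficients below are illustrative, not MOQA's exact derivation or LEON3 measurements.

```python
def avg_comparisons(n):
    """Average-case comparison count for insertion sort on a random
    permutation: expected inversions n(n-1)/4 plus ~one boundary
    comparison per inserted element (a common approximation)."""
    return n * (n - 1) / 4.0 + (n - 1)

def estimated_energy_nj(n, e_per_cmp_nj=0.8, e_overhead_nj=50.0):
    """Comparison-count-based energy prediction in the spirit of the
    approach described above: fixed overhead plus energy per comparison.
    Coefficients are illustrative placeholders, not measured values."""
    return e_overhead_nj + e_per_cmp_nj * avg_comparisons(n)

e10 = estimated_energy_nj(10)   # predicted energy for sorting 10 elements
```

Because the dominant term grows as n²/4, the model also predicts how the energy of a sort scales with input size without any further simulation, which is the practical payoff of the average-case analysis.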