990 results for common-mode filters
Abstract:
Background: Consensus-based approaches provide an alternative to evidence-based decision making, especially in situations where high-level evidence is limited. Our aim was to demonstrate a novel source of information: objective consensus based on recommendations in decision tree format from multiple sources. Methods: Based on nine sample recommendations in decision tree format, a representative analysis was performed. The most common (mode) recommendations for each eventuality (each permutation of parameters) were determined. The same procedure was applied to real clinical recommendations for primary radiotherapy for prostate cancer. Data were collected from 16 radiation oncology centres, converted into decision tree format and analyzed in order to determine the objective consensus. Results: Based on information from multiple sources in decision tree format, treatment recommendations can be assessed for every parameter combination. An objective consensus can be determined by means of mode recommendations without compromise or confrontation among the parties. In the clinical example involving prostate cancer therapy, three parameters were used with two cut-off values each (Gleason score, PSA, T-stage), resulting in a total of 27 possible combinations per decision tree. Despite significant variations among the recommendations, a mode recommendation could be found for specific combinations of parameters. Conclusion: Recommendations represented as decision trees can serve as a basis for objective consensus among multiple parties.
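As an illustration of the procedure described (a sketch only, not the authors' implementation), the objective consensus can be computed by flattening each centre's decision tree into a lookup table keyed by parameter combination and taking the mode per combination; the bin labels below are illustrative:

```python
from collections import Counter
from itertools import product

# Illustrative bins: two cut-offs per parameter give three bins each,
# hence 3 x 3 x 3 = 27 combinations per decision tree.
GLEASON = ("<=6", "7", ">=8")
PSA = ("<10", "10-20", ">20")
T_STAGE = ("T1-T2a", "T2b-T2c", "T3+")

def objective_consensus(tables):
    """tables: one dict per centre mapping each (gleason, psa, t) combination
    to that centre's recommendation. Returns the mode per combination,
    or None where no unique mode exists."""
    consensus = {}
    for combo in product(GLEASON, PSA, T_STAGE):
        votes = Counter(table[combo] for table in tables)
        (top, n), *rest = votes.most_common()
        # A unique mode requires the top count to beat the runner-up.
        consensus[combo] = top if not rest or n > rest[0][1] else None
    return consensus
```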
Abstract:
The Gravity field and steady-state Ocean Circulation Explorer (GOCE) was the first Earth Explorer core mission of the European Space Agency. It was launched on March 17, 2009 into a Sun-synchronous dusk-dawn orbit and re-entered the Earth's atmosphere on November 11, 2013. The satellite altitude was between 255 and 225 km during the measurement phases. The European GOCE Gravity Consortium is responsible for the Level 1b to Level 2 data processing within the GOCE High-level Processing Facility (HPF). The Precise Science Orbit (PSO) is one Level 2 product, produced under the responsibility of the Astronomical Institute of the University of Bern within the HPF. This PSO product was delivered continuously during the entire mission, with regular checks guaranteeing high consistency and quality of the orbits. A correlation between solar activity, GPS data availability and orbit quality was found, from which the accuracy of the kinematic orbit primarily suffers. Improvements in modeling the range corrections at the retro-reflector array for the satellite laser ranging (SLR) measurements were made and implemented in the independent SLR validation of the GOCE PSO products. The SLR validation finally indicates an orbit accuracy of 2.42 cm for the kinematic and 1.84 cm for the reduced-dynamic orbits over the entire mission. The common-mode accelerations from the GOCE gradiometer were not used for the official PSO product, but in addition to the operational HPF work a study was performed to investigate to what extent common-mode accelerations improve the reduced-dynamic orbit determination results. The accelerometer data may be used to derive realistic constraints for the empirical accelerations estimated in reduced-dynamic orbit determination, which already improves the orbit quality. On top of that, the accelerometer data may further improve the orbit quality if realistic constraints and state-of-the-art background models, such as gravity field and ocean tide models, are used for the reduced-dynamic orbit determination.
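One plausible reading of how the common-mode accelerations could yield "realistic constraints" (an assumption for illustration, not the HPF algorithm) is to set the a priori sigma of the estimated empirical accelerations from the scatter of the measured accelerations about the modelled non-gravitational signal:

```python
import numpy as np

def empirical_accel_sigma(common_mode_acc, modelled_acc):
    """Per-axis scatter of the measured common-mode accelerations about the
    modelled non-gravitational accelerations, usable as an a priori sigma
    for the empirical accelerations (arrays of shape (n_epochs, 3))."""
    residuals = np.asarray(common_mode_acc) - np.asarray(modelled_acc)
    return residuals.std(axis=0)  # one constraint value per axis
```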
Abstract:
Background. Because our hands are the most common mode of transmission for bacteria causing hospital-acquired infections, hand hygiene practices are the most effective method of preventing the spread of these pathogens, limiting the occurrence of healthcare-associated infections and reducing transmission of multi-drug resistant organisms. Yet, compliance rates average below 40%. Objective. This culminating experience project is primarily a literature review on hand hygiene, undertaken to determine the barriers to hand hygiene compliance, to offer solutions for improving compliance rates, and to build on a hand hygiene evaluation performed during my infection control internship at Memorial Hermann Hospital during the fall semester of 2005. Method. A review of peer-reviewed literature using the Ovid Medline, Ebsco Medline and PubMed databases with the keywords: hand hygiene, hand hygiene compliance, alcohol based handrub, healthcare-associated infections, hospital-acquired infections, and infection control. Results. A total of eight hand hygiene studies are highlighted. At a children's hospital in Seattle, hand hygiene compliance rates increased from 62% to 81% after five periods of interventions. In Thailand, 26 nurses dramatically increased compliance from 6.3% to 81.2% after just 7 months of training. Automated alcohol based handrub dispensers improved compliance rates in Chicago from 36.3% to 70.1%. Using education and increased distribution of alcohol based handrubs increased hand hygiene rates from 59% to 79% for Ebnother, from 54% to 85% for Hussein and from 32% to 63% for Randle. Spartanburg Regional Medical Center increased its rates from 72.5% to 90.3%. A level III NICU achieved 100% compliance after a month-long educational campaign but fell back to its baseline rate of 89% after 3 months. Discussion. The interventions used to promote hand hygiene in the highlighted studies varied from low-tech approaches such as printed materials to advanced electronic devices that automatically alerted individuals to perform hand hygiene. All approaches were effective and increased compliance rates. Overcoming hand hygiene barriers and receiving and accepting feedback are the keys to maintaining consistently high hand hygiene adherence.
Abstract:
SRAM-based FPGAs are sensitive to radiation effects. Soft errors can appear and accumulate, potentially defeating mitigation strategies deployed at the application layer. Therefore, configuration memory scrubbing is required to improve the radiation tolerance of such FPGAs in space applications. Virtex FPGAs allow runtime scrubbing by means of dynamic partial reconfiguration. Even with scrubbing, intra-FPGA TMR systems are subject to common-mode errors affecting more than one design domain. This is solved in inter-FPGA TMR systems at the expense of higher cost, power and mass. In this context, a self-reference scrubber for device-level TMR systems based on Xilinx Virtex FPGAs is presented. This scrubber allows fast SEU/MBU detection and correction by peer frame comparison, without needing to access a golden configuration memory.
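The peer-comparison idea can be sketched in a few lines (in Python for readability; the actual scrubber operates on Virtex configuration frames in hardware): majority-vote each frame across the three devices and rewrite only the deviating copies, so no golden memory is required:

```python
def vote_frame(fa: bytes, fb: bytes, fc: bytes) -> bytes:
    """Bitwise two-out-of-three majority vote over one configuration frame."""
    return bytes((a & b) | (a & c) | (b & c) for a, b, c in zip(fa, fb, fc))

def scrub(frames_a, frames_b, frames_c, write_back):
    """Peer-comparison scrubbing pass: the majority word is the reference,
    so SEUs/MBUs are both detected and corrected without a golden memory."""
    for idx, (fa, fb, fc) in enumerate(zip(frames_a, frames_b, frames_c)):
        reference = vote_frame(fa, fb, fc)
        for device, frame in (("A", fa), ("B", fb), ("C", fc)):
            if frame != reference:
                # In hardware this corresponds to a dynamic partial
                # reconfiguration of the single deviating frame.
                write_back(device, idx, reference)
```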
Abstract:
A total of 92 samples of street dust were collected in Luanda, Angola, sieved below 100 μm, and analysed by ICP-MS for 35 elements after an aqua-regia digestion. The concentration and spatial heterogeneity of trace elements in the street dust of Luanda are generally lower than in most industrialized cities in the Northern hemisphere. These observations reveal a predominantly "natural" origin for the street dust in Luanda, which is also manifested in the preservation in the street dust of geochemical processes that occur in natural soils: the separation of uranium from thorium and the retention of the former by carbonate materials, or the high correlation between arsenic and vanadium due to their common mode of adsorption on solid particles in the form of oxyanions. The only distinct anthropogenic fingerprint in the composition of Luanda's street dust is the association Pb–Cd–Sb–Cu (and, to a lesser extent, Ba–Cr–Zn). The use of risk assessment strategies has proved helpful in identifying the routes of exposure to street dust, and the trace elements therein, of most concern in terms of potential adverse health effects. In Luanda the highest levels of risk seem to be associated (a) with the presence of As and Pb in the street dust and (b) with the route of ingestion of dust particles, for all the elements included in the study except Hg, for which inhalation of vapours presents a slightly higher risk than ingestion. However, given the large uncertainties associated with the estimates of toxicity values and exposure factors, and the absence of site-specific biometric factors, these results should be regarded as preliminary, and further research should be undertaken before any definite conclusions regarding potential health effects are drawn.
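The abstract does not give its risk equations; a common formulation in street dust studies, shown here only as a hedged illustration with typical child-exposure defaults (not values from this paper), is the average-daily-dose and hazard-quotient model for the ingestion route:

```python
def ingestion_add(conc_mg_kg, ing_rate_mg_day=200, ef_days_yr=350,
                  ed_years=6, bw_kg=15, at_days=6 * 365):
    """Average daily dose via ingestion in mg/(kg*day); defaults are
    typical child-exposure assumptions, used purely for illustration."""
    return (conc_mg_kg * ing_rate_mg_day * 1e-6 *
            ef_days_yr * ed_years) / (bw_kg * at_days)

def hazard_quotient(add, rfd):
    """HQ = dose / reference dose; HQ > 1 flags potential concern."""
    return add / rfd
```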
Abstract:
Hybrid stepper motors are widely used in open-loop positioning applications. They are the actuators of choice for the collimators of the Large Hadron Collider, the largest particle accelerator at CERN, where the positioning requirements and the highly radioactive operating environment are unique. The latter forces the use of long cables, which act as transmission lines, to connect the motors to the drives, and also prevents the use of standard position sensors. However, reliable and precise operation of the collimators is critical for the machine, requiring the prevention of step loss in the motors and the ability to foresee maintenance in case of mechanical degradation. To make this possible, an approach is proposed for the application of an Extended Kalman Filter to a sensorless stepper motor drive when the motor is separated from its drive by long cables. When long cables and high-frequency pulse-width-modulated control voltage signals are used together, the electrical signals differ greatly between the motor side and the drive side of the cable. Since in the considered case only drive-side data is available, it is necessary to estimate the motor-side signals. Modelling the entire cable and motor system in an Extended Kalman Filter is too computationally intensive for standard embedded real-time platforms. It is therefore proposed to divide the problem into an Extended Kalman Filter based only on the motor model and separate motor-side signal estimators, a combination which is computationally less demanding. The effectiveness of this approach is shown in simulation, and its validity is then demonstrated experimentally via implementation in a DSP-based drive. A testbench for evaluating its performance when driving an axis of a Large Hadron Collider collimator is presented along with the results achieved. It is shown that the proposed method is capable of achieving position and load torque estimates which allow step loss to be detected and mechanical degradation to be evaluated without the need for physical sensors. These estimation algorithms often require a precise model of the motor, but the standard electrical model used for hybrid stepper motors is limited when currents high enough to saturate the magnetic circuit are present. New model extensions are proposed in order to obtain a more precise model of the motor independently of the current level, whilst maintaining a low computational cost. It is shown that a significant improvement in model fit is achieved with these extensions, and their computational performance is compared to study the trade-off between model improvement and computation cost. The applicability of the proposed model extensions is demonstrated via their use in an Extended Kalman Filter running in real time for closed-loop current control and mechanical state estimation. An additional problem arises from the use of stepper motors: the mechanics of the collimators can wear due to the abrupt motion and torque profiles applied when the motors are used in the standard way, i.e. stepping in open loop. Closed-loop position control, more specifically field-oriented control, would allow smoother profiles, more respectful of the mechanics, to be applied, but requires position feedback. As mentioned already, the use of sensors in radioactive environments is very limited for reliability reasons.
Sensorless control is a known option, but when the speed is very low or zero, as is the case most of the time for the motors used in the LHC collimators, the loss of observability prevents its use. In order to allow the use of position sensors without reducing the long-term reliability of the whole system, the possibility of switching between closed and open loop is proposed and validated, allowing the use of closed-loop control when the position sensors function correctly and open-loop control when there is a sensor failure. A different approach to dealing with the switched drive working with long cables is also presented. Switched-mode stepper motor drives tend to perform poorly, or even fail completely, when the motor is fed through a long cable, due to the high oscillations in the drive-side current. The design of a stepper motor output filter which solves this problem is thus proposed. A two-stage filter, one stage dealing with the differential mode and the other with the common mode, is designed and validated experimentally. With this filter the drive performance is greatly improved, achieving a positioning repeatability even better than with the drive working without a long cable; the radiated emissions are reduced and the overvoltages at the motor terminals are eliminated.
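A generic discrete-time Extended Kalman Filter of the kind described can be outlined as below, under the assumption of a motor-model state (e.g. phase currents, speed, position) updated against the separately estimated motor-side measurements; this is a sketch, not the thesis code:

```python
import numpy as np

class MotorEKF:
    """Generic discrete-time EKF; f/h are the motor and measurement models,
    F/H their Jacobians (all supplied by the user, hence problem-specific)."""

    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def step(self, u, z, f, F, h, H):
        # Predict with the motor model only; long-cable effects enter
        # through the separately estimated motor-side signals u and z.
        x_pred = f(self.x, u)
        F_k = F(self.x, u)
        P_pred = F_k @ self.P @ F_k.T + self.Q
        # Update against the estimated motor-side measurements.
        H_k = H(x_pred)
        S = H_k @ P_pred @ H_k.T + self.R
        K = P_pred @ H_k.T @ np.linalg.inv(S)      # Kalman gain
        self.x = x_pred + K @ (z - h(x_pred))
        self.P = (np.eye(len(self.x)) - K @ H_k) @ P_pred
        return self.x
```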
Abstract:
The aims of the project were twofold: 1) to investigate classification procedures for remotely sensed digital data, in order to develop modifications to existing algorithms and propose novel classification procedures; and 2) to investigate and develop algorithms for contextual enhancement of classified imagery in order to increase classification accuracy. The following classifiers were examined: box, decision tree, minimum distance and maximum likelihood. In addition, the following algorithms were developed during the course of the research: deviant distance, look-up table, and an automated decision tree classifier using expert systems technology. Clustering techniques for unsupervised classification were also investigated. The contextual enhancements investigated were mode filters, small-area replacement and Wharton's CONAN algorithm. Additionally, methods for noise- and edge-based declassification and contextual reclassification, non-probabilistic relaxation, and relaxation based on Markov chain theory were developed. The advantages of per-field classifiers and geographical information systems were investigated. The conclusions presented suggest suitable combinations of classifier and contextual enhancement, given user accuracy requirements and time constraints. These were then tested for validity using a different data set. A brief examination of the utility of the recommended contextual algorithms for reducing the effects of data noise was also carried out.
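Of the contextual enhancements listed, the mode filter is the simplest to make concrete; a minimal sketch (assuming non-negative integer class labels, not the project's own code) replaces each pixel by the most frequent class in its neighbourhood:

```python
import numpy as np

def mode_filter(classified, size=3):
    """Replace each pixel of a classified image by the modal class label
    in its size x size neighbourhood (labels: non-negative integers)."""
    pad = size // 2
    padded = np.pad(classified, pad, mode="edge")
    out = np.empty_like(classified)
    for i in range(classified.shape[0]):
        for j in range(classified.shape[1]):
            window = padded[i:i + size, j:j + size].ravel()
            out[i, j] = np.bincount(window).argmax()  # modal label
    return out
```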
Abstract:
Tonal, textural and contextual properties are used in the manual photointerpretation of remotely sensed data. This study has used these three attributes to produce a lithological map of semi-arid northwest Argentina by semi-automatic computer classification procedures applied to remotely sensed data. Three different types of satellite data were investigated: LANDSAT MSS, TM and SIR-A imagery. Supervised classification procedures using tonal features only produced poor results: LANDSAT MSS gave classification accuracies in the range of 40 to 60%, while accuracies of 50 to 70% were achieved using LANDSAT TM data. The addition of SIR-A data increased the classification accuracy. The improved classification accuracy of TM over MSS is due to the better discrimination of geological materials afforded by the middle-infrared bands of the TM sensor. The maximum likelihood classifier consistently produced classification accuracies 10 to 15% higher than either the minimum distance to means or the decision tree classifier; this improved accuracy was obtained at the cost of greatly increased processing time. A new type of classifier, the spectral shape classifier, which is computationally as fast as a minimum distance to means classifier, is described. However, the results for this classifier were disappointing, being lower in most cases than those of the minimum distance or decision tree procedures. The classification results using only tonal features were felt to be unacceptably poor, so textural attributes were investigated. Texture is an important attribute used by photogeologists to discriminate lithology. In the case of TM data, texture measures were found to increase the classification accuracy by up to 15%; in the case of the LANDSAT MSS data, however, texture measures did not provide any significant increase in accuracy. For TM data, it was found that second-order texture, especially the SGLDM-based measures, produced the highest classification accuracy. Contextual post-processing was found to increase classification accuracy and improve the visual appearance of classified output by removing isolated misclassified pixels which tend to clutter classified images. Simple contextual features such as mode filters were found to outperform more complex features such as the gravitational filter or minimal-area replacement methods. Generally, the larger the size of the filter, the greater the increase in accuracy. Production rules were used to build a knowledge-based system which used tonal and textural features to identify sedimentary lithologies in each of the two test sites. The knowledge-based system was able to identify six out of ten lithologies correctly.
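Second-order (SGLDM, i.e. co-occurrence) texture measures of the kind used here can be sketched with scikit-image (the functions are spelled greycomatrix/greycoprops in older releases); the quantisation level and offsets below are illustrative choices, not those of the study:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(band, levels=32):
    """Co-occurrence texture measures for one image band, quantised to
    'levels' grey levels to keep the co-occurrence matrix small."""
    q = (band.astype(float) / band.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    # Average each property over the distance/angle combinations.
    return {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")}
```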
Abstract:
A methodology is presented which can be used to predict the level of electromagnetic interference, in the form of conducted and radiated emissions, from variable speed drives; the drive modelled was a Eurotherm 583. The conducted emissions are predicted using an accurate circuit model of the drive and its associated equipment. The circuit model was constructed from a number of different parts: the power electronics of the drive, the line impedance stabilising network used during the experimental work to measure the conducted emissions, a model of an induction motor assuming near-zero load, an accurate model of the shielded cable which connected the drive to the motor, and finally the parasitic capacitances present in the drive. The conducted emissions were predicted with an error of ±6 dB over the frequency range 150 kHz to 16 MHz, which compares well with the limits set in the standards, which specify a frequency range of 150 kHz to 30 MHz. The conducted emissions model was also used to predict the current and voltage sources from which the radiated emissions of the drive were predicted. Two methods for the prediction of the radiated emissions were investigated: two-dimensional finite element analysis and three-dimensional transmission line matrix modelling. The finite element model took account of the features of the drive considered to produce the majority of the radiation, namely the switching of the IGBTs in the inverter, the shielded cable which connected the drive to the motor, and some of the cables present in the drive. The model also took account of the structure of the test rig used to measure the radiated emissions. It was found that the majority of the radiation came from the shielded cable and the common-mode currents flowing in the shield, and that it was feasible to model the radiation from the drive by modelling the shielded cable alone. The radiated emissions were correctly predicted in the frequency range 30 MHz to 200 MHz with an error of +10 dB/−6 dB. The transmission line matrix method modelled the shielded cable which connected the drive to the motor and also took account of the architecture of the test rig. Only limited simulations were performed using the transmission line matrix model, as it was found to be a very slow method and not an ideal solution to the problem. However, the limited results obtained were comparable, to within 5%, to those obtained using the finite element model.
Abstract:
Photonic signal processing is used to implement common-mode signal cancellation across a very wide bandwidth, utilising phase modulation of radio frequency (RF) signals onto a narrow-linewidth laser carrier. RF spectra were observed using narrow-band, tunable optical filtering with a scanning Fabry–Perot etalon. Thus functions conventionally performed using digital signal processing techniques in the electronic domain have been replaced by analog techniques in the photonic domain. This technique was able to observe simultaneous cancellation of signals across a bandwidth of 1400 MHz, limited only by the free spectral range of the etalon. © 2013 David M. Benton.
Abstract:
A series of N1-benzylidene pyridine-2-carboxamidrazone anti-tuberculosis compounds has been evaluated for cytotoxicity using human mononuclear leucocytes (MNL) as target cells. All eight compounds were significantly more toxic than the dimethyl sulphoxide control and isoniazid (INH), with the exception of a 4-methoxy-3-(2-phenylethyloxy) derivative, which was not significantly different in toxicity from INH. The most toxic agent was an ethoxy derivative, followed by the 3-nitro, 4-methoxy, dimethylpropyl, 4-methylbenzyloxy, 3-methoxy-4-(2-phenylethyloxy) and 4-benzyloxy derivatives in rank order. In comparison with the effect of selected carboxamidrazone agents on cells alone, the presence of either N-acetyl cysteine (NAC) or glutathione (GSH) caused a significant reduction in the toxicity of INH as well as of the 4-benzyloxy derivative, although both increased the toxicity of a 4-N,N-dimethylamino-1-naphthylidene and a 2-t-butylthio derivative. The derivatives from this and three previous studies were subjected to computational analysis in order to derive equations establishing quantitative structure-activity relationships for these agents. Twenty-five compounds were thus resolved into two groups (1 and 2), which on analysis yielded equations with r² values in the range 0.65-0.92. Group 1 shares a common mode of toxicity related to hydrophobicity, with cytotoxicity peaking at a logP of 3.2, while Group 2 toxicity was strongly related to ionisation potential. The presence of thiols such as NAC and GSH both promoted and attenuated toxicity in selected compounds from Group 1, suggesting that secondary mechanisms of toxicity were operating. These studies will facilitate the design of future low-toxicity, high-activity anti-tubercular carboxamidrazone agents. © 2003 Elsevier Science B.V. All rights reserved.
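The Group 1 result, cytotoxicity peaking at a logP of 3.2, is the classic parabolic relationship between activity and hydrophobicity; a sketch of such a fit is shown below, using hypothetical (logP, toxicity) pairs purely for illustration (the paper's data are not reproduced here):

```python
import numpy as np

# Hypothetical (logP, cytotoxicity) pairs standing in for the Group 1 agents.
logp = np.array([1.5, 2.2, 2.8, 3.1, 3.4, 4.0, 4.6])
tox = np.array([0.31, 0.52, 0.68, 0.74, 0.71, 0.55, 0.36])

# Parabolic model: tox = a*logP^2 + b*logP + c, with a < 0 giving a peak.
a, b, c = np.polyfit(logp, tox, 2)
peak = -b / (2 * a)                      # vertex of the parabola
pred = np.polyval((a, b, c), logp)
r2 = 1 - ((pred - tox) ** 2).sum() / ((tox - tox.mean()) ** 2).sum()
print(f"toxicity peaks near logP = {peak:.2f}, r^2 = {r2:.2f}")
```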
Abstract:
Liquid-level sensing technologies have attracted great prominence because such measurements are essential to industrial applications such as fuel storage, flood warning and the biochemical industry. Traditional liquid level sensors are based on electromechanical techniques; however, they suffer from intrinsic safety concerns in explosive environments. In recent years, given that optical fiber sensors have many well-established advantages such as high accuracy, cost-effectiveness, compact size and ease of multiplexing, several optical fiber liquid level sensors have been investigated, based on different operating principles such as side-polishing the cladding and a portion of the core, using a spiral side-emitting optical fiber, or using silica fiber gratings. The present work proposes a novel and highly sensitive liquid level sensor making use of polymer optical fiber Bragg gratings (POFBGs). The key elements of the system are a set of POFBGs embedded in silicone rubber diaphragms. This is a new development building on the idea of determining liquid level by measuring the pressure at the bottom of a liquid container; however, it has a number of critical advantages. The system features several FBG-based pressure sensors, as described above, placed at different depths. Any sensor above the surface of the liquid will read the same ambient pressure, while sensors below the surface will read pressures that increase linearly with depth. The position of the liquid surface can therefore be approximately identified as lying between the first sensor to read an above-ambient pressure and the next higher sensor. This level of precision would not in general be sufficient for most liquid level monitoring applications; however, a much more precise determination of liquid level can be made by linear regression over the pressure readings from the sub-surface sensors. There are numerous advantages to this multi-sensor approach. First, the use of linear regression over multiple sensors is inherently more accurate than using a single pressure reading to estimate depth. Second, common-mode temperature-induced wavelength shifts in the individual sensors are automatically compensated. Third, temperature-induced changes in the sensor pressure sensitivity are also compensated. Fourth, the approach provides the possibility of detecting and compensating for malfunctioning sensors. Finally, the system is immune to changes in the density of the monitored fluid and even to changes in the effective force of gravity, as might be encountered in an aerospace application. The performance of an individual sensor was characterized and displays a sensitivity (54 pm/cm) enhanced by more than a factor of 2 compared with a sensor head configuration based on a silica FBG published in the literature, resulting from the much lower elastic modulus of POF. Furthermore, the temperature/humidity behavior and measurement resolution were also studied in detail. The proposed configuration also displays a highly linear response, high resolution and good repeatability. The results suggest the new configuration can be a useful tool in many different applications, such as aircraft fuel monitoring and biochemical and environmental sensing, where accuracy and stability are fundamental. © (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
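The regression step described can be sketched directly (illustrative code, not the authors'): sensors reading above-ambient pressure are submerged, their wavelength shifts grow linearly with depth, and the fitted line's zero crossing locates the surface; the noise guard is an added assumption:

```python
import numpy as np

def liquid_level(depths_cm, shift_pm, ambient_pm, noise_pm=5.0):
    """Surface position from POFBG pressure sensors at known depths:
    fit shift = k*(depth - level) over the submerged sensors and return
    the zero crossing. noise_pm guards against reading noise."""
    shifts = np.asarray(shift_pm, dtype=float) - ambient_pm
    sub = shifts > noise_pm                  # sensors below the surface
    if sub.sum() < 2:
        return None                          # too few submerged sensors
    k, c = np.polyfit(np.asarray(depths_cm, dtype=float)[sub], shifts[sub], 1)
    return -c / k                            # depth where the fit reads ambient
```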
Abstract:
One of the most popular sports globally, soccer has seen a rise in the demands of the game over recent years. An increase in intensity and playing demands, coupled with growing social and economic pressures on soccer players, means that optimal preparation is of paramount importance. Recent research has found the modern game, depending on positional role, to involve approximately 60% more sprint distance in the English Premier League, with similar increases in the frequency and success of discrete technical actions (Bush et al., 2015). As a result, the focus on soccer training and player preparedness is becoming more prevalent in scientific research. By designing the appropriate training load, and thus periodisation strategies, the aim is to achieve peak fitness in the most efficient way whilst minimising the risk of injury and illness. Traditionally, training intensity has been based on heart rate responses; however, the emergence of tracking microtechnology such as global positioning system (GPS) receivers and inertial sensors makes it possible to further quantify biomechanical load as well as physiological stress. Detailed internal and external loading indices such as these then combine to produce a more holistic view of the training load experienced by the player during typical drills and phases of training in soccer. The premise of this research is to gain greater understanding of the physical demands of common training methodologies in elite soccer to support optimal match performance. The coaching process may then benefit from being able to prescribe the most effective training to support these demands. The first experimental chapter in this thesis began by quantifying gross training loads of the pre-season and in-season phases in soccer. A broader picture of the training loads inherent in these distinct phases brought more detail as to the type and extent of external loading experienced by soccer players at these times, and how the inclusion of match play influences weekly training rhythms. Training volume (total distance) was found to be high at the start compared to the end of pre-season (37 km versus 28 km), where high cardiovascular loads were attained as part of the conditioning focus. This progressed transiently, however, to involve higher-speed, acceleration and change-of-direction stimuli at the end of pre-season compared to the start and to the in-season phase (1.18 km, 0.70 km and 0.42 km of high-intensity running, with 37, 25 and 23 accelerations >3 m/s² respectively). The decrease in volume and increase in maximal anaerobic activity was evident in the training focus as friendly matches were introduced before the competitive season. The influence of match-play as a large physical dose in the training week may then determine the change in weekly periodisation and how the resulting training loads are applied and tapered, if necessary. The focus of the research was then directed more specifically to the most common mode of training in soccer, which also featured regularly in the pre-season period in the present study: small-sided games (SSG). The subsequent studies examined numerous manipulations of this specific form of soccer conditioning, such as player numbers as well as the absolute and relative playing space available.
In contrast to some previous literature, changing the number of players did not seem to influence training responses significantly, although the possession-style playing format brought about larger effects for heart rate (89.9% HRmax) and average velocity (7.6 km/h). However, the subsequent studies (Chapters 5, 6 and 7) revealed a greater influence of the relative playing space available to players in SSG. A larger area at their disposal brought about greater aerobic responses (~90% HRmax) by allowing higher average and peak velocities (>25 km/h), as well as greater acceleration distance at higher thresholds (>2.8 m/s²). Furthermore, the data point towards space as a large determinant of player strategy in small-sided games, subsequently shaping movement behaviour and the resulting physical responses. For example, a higher average velocity in a possession format (8 km/h) reflects higher work rate and heart rate load, but makes significant neuromuscular accelerations difficult to achieve given the higher starting velocities prior to the most intense accelerations (4.2 km/h). By altering the space available, and even through intentional numerical imbalances in team numbers, it may be easier for coaches to achieve the desired stimulus for the session or individual player, whether that is aerobic or neuromuscular conditioning. Large effects were found for heart rate being higher in the underloaded team (85-90% HRmax) compared to the team with more players (80-85% HRmax), as well as for RPE (5 AU versus 7 AU); this was also apparent for meterage and therefore average velocity. Neuromuscular load through high acceleration and deceleration efforts also seemed more pronounced with fewer players, given the need to press and close down opponents in a larger area relative to the number of players on the underloaded team. The peak accelerations and decelerations achieved were also higher when playing with fewer players (3-6.2 m/s² and 3-6.1 m/s²). Having detailed ways in which to reach desired physical loading responses in common small training formats, Chapter 8 compared SSG and larger 9v9 formats with full-size 11v11 friendly matches. This enabled absolute and relative comparisons to be made, and an understanding of the extent to which smaller training formats are able to replicate the movements required to be successful in competition. In relative terms, it was revealed that relative acceleration distance and Player Load were higher in the smallest 4v4 games than in match-play (1.1 m/min versus 0.3 m/min >3 m/s²; 16.9 AU versus 12 AU). Although the smallest format did not replicate the high-velocity demands of matches, the results confirmed its efficacy in providing significant neuromuscular load during the training week, which may then be supplemented by high-intensity interval running in order to gain exposure to more maximal speed work. In summary, the data presented provide valuable information from GPS and inertial sensor microtechnology which may be used to better understand training and to manipulate types of load according to physical conditioning objectives. For example, a library of resources to direct the planning of drills of varying cardiovascular, neuromuscular and perceptual load can be created to give more confidence in session outcomes. Combining external and internal load data of common soccer training drills, and their application across different phases and training objectives, may give coaches a powerful tool to plan and periodise training.
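Metrics of the kind reported (distance, high-intensity distance and discrete acceleration efforts) can be derived from a GPS speed trace as sketched below; the thresholds and sampling interval are illustrative assumptions, not those of the thesis:

```python
import numpy as np

def load_summary(speed_kmh, dt_s=0.1, hi_kmh=19.8, acc_ms2=3.0):
    """Total distance, high-intensity distance and discrete acceleration
    efforts from a GPS speed trace sampled every dt_s seconds."""
    kmh = np.asarray(speed_kmh, dtype=float)
    v = kmh / 3.6                                     # m/s
    acc = np.diff(v) / dt_s                           # m/s^2
    # Count discrete efforts as upward crossings of the threshold.
    rising = (acc[1:] >= acc_ms2) & (acc[:-1] < acc_ms2)
    return {
        "distance_m": v.sum() * dt_s,
        "hi_distance_m": v[kmh >= hi_kmh].sum() * dt_s,
        "accel_efforts": int(rising.sum()),
    }
```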
Abstract:
Power electronic circuits are moving towards higher switching frequencies, exploiting the capabilities of novel devices to shrink the dimensions of passive components. This trend demands sensors capable of operating at such high frequencies. This thesis aims to demonstrate, through experimental characterization, the broadband capability of a fully integrated CMOS X-Hall current sensor operated in current mode and interfaced with a transimpedance amplifier (TIA): chip CH09, realized in CMOS technology for power electronics applications such as power converters. The system exploits a common-mode control system to operate the dual supply scheme, 5 V for the X-Hall probe and 1.2 V for the readout. The developed prototype achieves a maximum acquisition bandwidth of 12 MHz, a power consumption of 11.46 mW, a resolution of 39 mA rms, a sensitivity of 8%/T, and a FoM of 569 MHz/(A²·mW), significantly higher than the current state of the art. Further enhancements were proposed for CH09 in a new chip, CH100, aiming for the accuracy levels prerequisite for real-time power electronic applications. The TIA was optimized for a wider bandwidth of 26.7 MHz, with a nearly 30% reduction of the integrated input-referred noise (26.69 nA rms at the probe-AFE interface in the frequency band DC-30 MHz) and a 10% improvement in dynamic range. The expected input range is 5 A. The chip incorporates a dual sensing chain for differential sensing to overcome common-mode interference. A novel offset cancellation technique is proposed that requires switching the polarity of the bias currents. Thermal gain drift was improved by a factor of 8 and will be digitally calibrated using a new built-in temperature sensor with a post-calibration measurement accuracy better than 1%. The estimated power consumption of the entire system is 55.6 mW. Both prototypes have been implemented in a 90-nm microelectronic process from STMicroelectronics and occupy a silicon area of 2.4 mm².
Abstract:
One of the major issues for power converters connected to the electric grid is the measurement of three-phase Conducted Emissions (CE), which are regulated by international and regional standards. CE are composed of two components: Common Mode (CM) noise and Differential Mode (DM) noise. To achieve compliance with these regulations, the Equipment Under Test (EUT) includes filtering and other electromagnetic emission control strategies. The separation of differential-mode and common-mode noise in Electromagnetic Interference (EMI) analysis is a well-known procedure which is especially useful for the optimization of the EMI filter, to improve the CM or DM attenuation depending on which component of the conducted emissions is predominant, and for the analysis and understanding of interference phenomena in switched-mode power converters. However, separating the two components is rarely done during measurements. Therefore, in this thesis an active device for the separation of the CM and DM EMI noise in three-phase power electronic systems has been designed and experimentally analysed.
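The textbook separation that the active device implements in hardware can be stated in a few lines for three-phase currents: the common-mode component is the in-phase average of the conductors, and the differential-mode components are the per-phase remainders (a numerical sketch, not the thesis circuit):

```python
import numpy as np

def cm_dm_split(i_a, i_b, i_c):
    """Per-conductor common-mode component (one third of the total CM
    current) and the differential-mode remainder of each phase."""
    i_a, i_b, i_c = (np.asarray(x, dtype=float) for x in (i_a, i_b, i_c))
    i_cm = (i_a + i_b + i_c) / 3.0           # in-phase (common-mode) part
    return i_cm, (i_a - i_cm, i_b - i_cm, i_c - i_cm)
```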