996 results for Series compensation
Abstract:
Handling unbalanced and non-linear loads in a three-phase AC power supply has always been a difficult issue. This has been addressed in the literature by either using fast controllers in the fundamental rotating reference frame or using separate controllers in reference frames specific to the harmonics. In the former case, the controller needs to be fast and in the latter case, besides the need for many controllers, negative-sequence components need to be extracted from the measured signal. This study proposes a control scheme for harmonic and unbalance compensation of a three-phase uninterruptible power supply wherein the problems mentioned above are addressed. The control takes place in the fundamental positive-sequence reference frame using only a set of feedback and feed-forward compensators. The harmonic components are extracted by a process of frame transformations and used as feed-forward compensation terms in the positive-sequence fundamental reference frame. This study uses a method wherein the measured signal itself is used for fundamental negative-sequence compensation. As the feed-forward compensator handles the high-bandwidth components, the feedback compensator can be a simple low-bandwidth one. This control algorithm is explained and validated experimentally.
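As a rough illustration of the frame-transformation idea described above (a minimal sketch, not the authors' controller; the sample rate, the 5th-harmonic test signal and the moving-average filter are illustrative assumptions): in the fundamental positive-sequence rotating frame the fundamental maps to DC, so the harmonic content can be isolated as the ripple left after low-pass filtering and fed forward.

```python
import numpy as np

def abc_to_dq(v_abc, theta):
    """Park transform: project three-phase quantities onto a frame rotating at angle theta."""
    a, b, c = v_abc
    d = (2/3) * (a*np.cos(theta) + b*np.cos(theta - 2*np.pi/3) + c*np.cos(theta + 2*np.pi/3))
    q = -(2/3) * (a*np.sin(theta) + b*np.sin(theta - 2*np.pi/3) + c*np.sin(theta + 2*np.pi/3))
    return np.array([d, q])

# Test waveform: fundamental positive sequence plus a (negative-sequence) 5th harmonic.
fs, f1 = 10_000, 50.0                        # sample rate [Hz], fundamental [Hz] (assumed)
t = np.arange(0, 0.1, 1/fs)
theta1 = 2*np.pi*f1*t
phases = np.array([0.0, -2*np.pi/3, 2*np.pi/3])[:, None]
v_abc = np.cos(theta1 + phases) + 0.1*np.cos(5*(theta1 + phases))

# Transform into the fundamental positive-sequence frame: fundamental -> DC, harmonics -> ripple.
v_dq = np.array([abc_to_dq(v_abc[:, k], theta1[k]) for k in range(len(t))]).T

# A moving average over one fundamental period separates DC from ripple,
# so no per-harmonic controller is needed.
N = int(fs / f1)
v_dq_dc = np.array([np.convolve(ch, np.ones(N)/N, mode='same') for ch in v_dq])
v_ff = v_dq - v_dq_dc    # harmonic residue, used as the feed-forward compensation term
```

The feedback compensator then only has to regulate the slowly varying DC components, which is why it can be a simple low-bandwidth design.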
Abstract:
At head of title: Department of Commerce and Labor. Bureau of Labor.
Abstract:
Superseded by Its Benefit Series Service, Unemployment Insurance Report
Abstract:
Networked control systems (NCSs) offer many advantages over conventional control; however, they also demonstrate challenging problems such as network-induced delay and packet losses. This paper proposes an approach of predictive compensation for simultaneous network-induced delays and packet losses. Different from the majority of existing NCS control methods, the proposed approach addresses co-design of both network and controller. It also alleviates the requirements of precise process models and full understanding of NCS network dynamics. For a series of possible sensor-to-actuator delays, the controller computes a series of corresponding redundant control values. Then, it sends out those control values in a single packet to the actuator. Once receiving the control packet, the actuator measures the actual sensor-to-actuator delay and computes the control signals from the control packet. When packet dropout occurs, the actuator utilizes past control packets to generate an appropriate control signal. The effectiveness of the approach is demonstrated through examples.
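A minimal sketch of the redundant-control-packet idea (illustrative only, not the paper's code: the scalar plant, the gain, the delay grid and the open-loop state prediction are assumptions):

```python
import numpy as np

A, B, K = 0.95, 1.0, 0.6     # assumed scalar plant x[k+1] = A*x + B*u, feedback gain K
MAX_DELAY = 4                # candidate sensor-to-actuator delays, in sample steps

def make_control_packet(x_meas):
    """Controller side: one redundant control value per candidate delay.
    The state is predicted forward d steps (open-loop here, for brevity)
    and state feedback is applied to the prediction."""
    return [-K * (A ** d) * x_meas for d in range(MAX_DELAY + 1)]

class Actuator:
    """Actuator side: index into the packet by the measured delay;
    on packet dropout, reuse the last packet one step staler."""
    def __init__(self):
        self.packet, self.age = None, 0

    def control(self, packet, measured_delay):
        if packet is not None:                 # fresh packet arrived
            self.packet, self.age = packet, measured_delay
        else:                                  # dropout: past packet, older entry
            self.age += 1
        if self.packet is None:
            return 0.0                         # nothing received yet
        return self.packet[min(self.age, MAX_DELAY)]

# Example: a packet delayed by 2 steps, then a dropout handled from the same packet.
act = Actuator()
print(act.control(make_control_packet(1.0), measured_delay=2))  # uses entry for delay 2
print(act.control(None, measured_delay=None))                   # dropout -> entry for delay 3
```

The single-packet transmission is the co-design element: the network carries redundancy so the controller needs no exact knowledge of the delay realised on each transmission.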
Abstract:
The effect of the context of the flanking sequence on ligand binding to DNA oligonucleotides that contain consensus binding sites was investigated for the binding of the intercalator 7-amino actinomycin D. Seven self-complementary DNA oligomers each containing a centrally located primary binding site, 5'-A-G-C-T-3', flanked on either side by the sequences (AT)(n) or (AA)(n) (with n = 2, 3, 4) and AA(AT)(2), were studied. For different flanking sequences, (AA)(n)-series or (AT)(n)-series, differential fluorescence enhancements of the ligand due to binding were observed. Thermodynamic studies indicated that the flanking sequences not only affected DNA stability and secondary structure but also modulated ligand binding to the primary binding site. The magnitude of the ligand binding affinity to the primary site was inversely related to the sequence dependent stability. The enthalpy of ligand binding was directly measured by isothermal titration calorimetry, and this made it possible to parse the binding free energy into its energetic and entropic terms.
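The parsing of the binding free energy presumably rests on the standard thermodynamic relations (a worked restatement, not taken from the paper):

```latex
\Delta G^\circ \;=\; -RT\,\ln K_a \;=\; \Delta H^\circ - T\,\Delta S^\circ
\qquad\Longrightarrow\qquad
-T\,\Delta S^\circ \;=\; \Delta G^\circ - \Delta H^\circ ,
```

with \(\Delta G^\circ\) obtained from the measured binding constant \(K_a\), \(\Delta H^\circ\) measured directly by isothermal titration calorimetry, and the entropic term \(-T\Delta S^\circ\) recovered by difference.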
Abstract:
Industrial robotic manipulators can be found in most factories today. Their tasks are accomplished through actively moving, placing and assembling parts. This movement is facilitated by actuators that apply a torque in response to a command signal. The presence of friction, and possibly backlash, has instigated the development of sophisticated compensation and control methods in order to achieve the desired performance, be that accurate motion tracking, fast movement or indeed contact with the environment. This thesis presents a dual drive actuator design that is capable of physically linearising friction and hence eliminating the need for complex compensation algorithms. A number of mathematical models are derived that allow for the simulation of the actuator dynamics. The actuator may be constructed using geared dc motors, in which case the benefit of torque magnification is retained whilst the increased non-linear friction effects are also linearised. An additional benefit of the actuator is the high-quality, low-latency output position signal provided by differencing the two drive positions. Due to this, and the linearised nature of friction, the actuator is well suited to low-velocity, stop-start applications, micro-manipulation and even hard-contact tasks. There are, however, disadvantages to its design. When idle, the device uses power whilst many other, single-drive actuators do not. Also, the complexity of the models means that parameterisation is difficult, and management of start-up conditions still poses a challenge.
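One plausible reading of the dual drive principle (an assumption inferred from the abstract, not taken from the thesis): each drive is biased so that it never passes through zero velocity, keeping both drives in the kinetic-friction regime where friction behaves approximately viscously (i.e. linearly), while the load sees only the difference of the two drive positions:

```latex
\theta_{\text{out}} \;=\; \tfrac{1}{2}\bigl(\theta_1 - \theta_2\bigr),
\qquad \dot\theta_1 > 0,\;\; \dot\theta_2 < 0 \;\;\text{at all times},
```

so stiction and the Coulomb-friction discontinuity at zero velocity are never encountered at the output, and the two encoder readings combine into the low-latency output position signal mentioned above.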
Abstract:
Mr. Pechersky set out to examine a specific feature of the employer-employee relationship in Russian business organisations. He wanted to study the extent to which the so-called "moral hazard" problem is being solved (if it is being solved at all), whether there is a relationship between pay and performance, and whether there is a correspondence between economic theory and Russian reality. Finally, he set out to construct a model of the Russian economy that better reflects the way it actually functions than do certain other well-known models (for example, models of incentive compensation, the Shapiro-Stiglitz model, etc.). His report was presented to the RSS in the form of a series of manuscripts in English and Russian, and on disc, with many tables and graphs.

He begins by pointing out the different kinds of randomness that exist in the relationship between employee and employer. Firstly, results are frequently affected by circumstances outside the employee's control that have nothing to do with how intelligently, honestly, and diligently the employee has worked. When rewards are based on results, uncontrollable randomness in the employee's output induces randomness in his or her income. A second source of randomness involves outside events, beyond the control of the employee, that may affect his or her ability to perform as contracted. A third source of randomness arises when the performance itself (rather than the result) is measured, and the performance evaluation procedures include random or subjective elements. Mr. Pechersky's study shows that in Russia the third source of randomness plays an important role. Moreover, he points out that employer-employee relationships in Russia are sometimes the opposite of those in the West.

Drawing on game theory, he characterises the Western system as follows. The two players are the principal and the agent, who are usually representative individuals. The principal hires an agent to perform a task, and the agent acquires an information advantage concerning his actions or the outside world at some point in the game, i.e. it is assumed that the employee is better informed. In Russia, on the other hand, incentive contracts are typically negotiated in situations in which the employer has the information advantage concerning the outcome. Mr. Pechersky schematises it thus: compensation (the wage) is W and consists of a base amount plus a portion that varies with the outcome, x. So W = a + bx, where b measures the intensity of the incentives provided to the employee; one contract is said to provide stronger incentives than another if it specifies a higher value of b. This is the incentive contract as it operates in the West. The key feature distinguishing the Russian case is that x is observed by the employer but not by the employee. The employer promises to pay in accordance with an incentive scheme, but since the outcome is not observable by the employee the contract cannot be enforced, and the question arises: is there any incentive for the employer to fulfil his or her promises?

Mr. Pechersky considers two simple models of employer-employee relationships displaying the above type of information asymmetry. In a static framework the result obtained is somewhat surprising: at the Nash equilibrium the employer pays nothing, even though his objective function contains a quadratic term reflecting negative consequences for the employer if the actual level of compensation deviates from the expectations of the employee. This can lead, for example, to labour turnover, or to the expenses resulting from a bad reputation. In a dynamic framework, the conclusion can be formulated as follows: the higher the discount factor, the greater the incentive for the employer to be honest in his or her relationships with the employee. If the discount factor is taken to be a parameter reflecting the degree of (un)certainty (the higher the degree of uncertainty, the lower the discount factor), we can conclude that the answer to the question posed depends on the stability of the political, social and economic situation in a country.

Mr. Pechersky believes that the strength of a market system with private property lies not just in its providing the information needed to compute an efficient allocation of resources. At least equally important is the manner in which it accepts individually self-interested behaviour, but then channels this behaviour in desired directions. People do not have to be cajoled, artificially induced, or forced to do their parts in a well-functioning market system. Instead, they are simply left to pursue their own objectives as they see fit. Under the right circumstances, people are led by Adam Smith's "invisible hand" of impersonal market forces to take the actions needed to achieve an efficient, co-ordinated pattern of choices. The problem, as Mr. Pechersky sees it, is that there is no reason to believe that the circumstances in Russia are right and that the invisible hand is doing its work properly. Political instability, social tension and other circumstances prevent it from doing so. He believes that the discount factor plays a crucial role in employer-employee relationships: such relationships can be considered satisfactory from a normative point of view only in those cases where the discount factor is sufficiently large. Unfortunately, in modern Russia the evidence points to the typical discount factor being relatively small. This can be explained as a manifestation of economic agents' risk aversion. Mr. Pechersky hopes that when political stabilisation occurs, the discount factors of economic agents will increase, and their behaviour will be explicable in terms of more traditional models.
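A worked restatement may clarify the static result (the quadratic-penalty payoff below is an assumed functional form consistent with the summary, not taken from the report). If the employer actually pays \(\widehat{W}\) while the employee expects \(W^{e}\), and the employer's payoff is

```latex
\pi_E \;=\; x \;-\; \widehat{W} \;-\; \gamma\,\bigl(\widehat{W} - W^{e}\bigr)^{2},
\qquad \gamma > 0,
```

then for any fixed expectation the first-order condition gives the best response \(\widehat{W} = W^{e} - 1/(2\gamma)\): the employer always gains by underpaying expectations by \(1/(2\gamma)\). Consistent expectations therefore unravel to the corner \(\widehat{W} = W^{e} = 0\), which is the "employer pays nothing" equilibrium despite the penalty term. In the repeated version, honesty is sustainable only when the discounted value of continued cooperation outweighs this one-shot gain, which requires a sufficiently high discount factor, matching the dynamic conclusion above.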
Abstract:
The potential for significant human populations to experience long-term inhalation of formaldehyde, and reports of symptomatology due to this exposure, have led to considerable interest in the toxicologic assessment of risk from subchronic formaldehyde exposure using animal models. Since formaldehyde inhalation depresses certain respiratory parameters in addition to its other forms of toxicity, there is a potential for alteration of the actual dose received by the exposed individual (and of the resulting toxicity) due to this respiratory effect. The respiratory responses to formaldehyde inhalation and the subsequent pattern of deposition were therefore investigated in animals that had received subchronic exposure to the compound, and the potential for changes in the formaldehyde dose received due to long-term inhalation was evaluated. Male Sprague-Dawley rats were exposed to 0, 0.5, 3, or 15 ppm formaldehyde for 6 hours/day, 5 days/week, for up to 6 months. The patterns of respiratory response and deposition, and the compensation mechanisms involved, were then determined in a series of formaldehyde test challenges to both the upper and the lower respiratory tracts in separate groups of subchronically exposed animals and age-specific controls (four concentration groups, two time points). In both the control and pre-exposed animals, respiratory parameters initially depressed by formaldehyde inhalation characteristically recovered to, or approached, pre-exposure levels within 10 minutes of the initiation of exposure. Formaldehyde deposition was also found to remain very high in the upper and lower tracts after long-term exposure. There was therefore probably little subsequent effect on the dose received by the exposed individual attributable to the repeated exposures. There was a diminished initial minute-volume response in test challenges of both the upper and lower tracts of animals that had received at least 16 weeks of exposure to 15 ppm, with compensatory increases in tidal volume in the upper tract and respiratory rate in the lower tract. However, this dose-related effect is probably not relevant to human risk estimation because this formaldehyde dose is in excess of that experienced by human populations.
Abstract:
CaCO3, Corg, and biogenic SiO2 were measured in Eocene equatorial Pacific sediments from Sites 1218 and 1219, and bulk oxygen and carbon isotopes were measured on selected intervals from Site 1219. These data delineate a series of CaCO3 events that first appeared at ~48 Ma and continued to the Eocene/Oligocene boundary. Each event lasted 1-2 m.y. and is separated from the next by a low CaCO3 interval of a similar time span. The largest of these carbonate accumulation events (CAE-3) is in Magnetochron 18. It began at ~42.2 Ma, lasted until ~40.3 Ma, and was marked by higher than average productivity. The end of CAE-3 was abrupt and was associated with a large-scale carbon transfer to the oceans prior to warming of high-latitude regions. Changes in carbonate compensation depth associated with CAE excursions were small in the early part of the middle Eocene but increased to as much as 800 m by the late middle Eocene before decreasing into the late Eocene. Oxygen isotope data indicate that the carbonate events are associated with cooling conditions and may mark small glaciations in the Eocene.
Abstract:
In this work, we present a novel method to compensate for movement in images acquired during free breathing using first-pass gadolinium-enhanced myocardial perfusion magnetic resonance imaging (MRI). First, we use independent component analysis (ICA) to identify the optimal number of independent components (ICs) that separate the breathing motion from the intensity change induced by the contrast agent. Then, synthetic images are created by recombining the ICs, but unlike previously published work (Milles et al. 2008), we omit the component related to motion; therefore, the resulting reference image series is free of motion. Motion compensation is then achieved by using a multi-pass non-rigid image registration scheme. We tested our method on 15 distinct image series (5 patients) consisting of 58 images each, and we validated our method by comparing manually tracked intensity profiles of the myocardial sections to automatically generated ones before and after registration. The average correlation to the manually obtained curves was increased from 0.89 ± 0.11 before registration to 0.98 ± 0.02 after registration.
Abstract:
Images acquired during free breathing using first-pass gadolinium-enhanced myocardial perfusion magnetic resonance imaging (MRI) exhibit a quasiperiodic motion pattern that needs to be compensated for if a further automatic analysis of the perfusion is to be executed. In this work, we present a method to compensate for this movement by combining independent component analysis (ICA) and image registration: First, we use ICA and a time-frequency analysis to identify the motion and separate it from the intensity change induced by the contrast agent. Then, synthetic reference images are created by recombining all the independent components except the one related to the motion. Therefore, the resulting image series does not exhibit motion, and its images have intensities similar to those of their original counterparts. Motion compensation is then achieved by using a multi-pass image registration procedure. We tested our method on 39 image series acquired from 13 patients, covering the basal, mid and apical areas of the left heart ventricle and consisting of 58 perfusion images each. We validated our method by comparing manually tracked intensity profiles of the myocardial sections to automatically generated ones before and after registration of 13 patient data sets (39 distinct slices). We compared linear, non-linear, and combined ICA-based registration approaches and previously published motion compensation schemes. Considering run-time and accuracy, a two-step ICA-based motion compensation scheme that first optimizes a translation and then a non-linear transformation performed best, achieving registration of the whole series in 32 ± 12 s on a recent workstation. The proposed scheme improves the Pearson correlation coefficient between manually and automatically obtained time-intensity curves from 0.84 ± 0.19 before registration to 0.96 ± 0.06 after registration.
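The core step in both abstracts above is recombining all ICs except the motion-related one to build a motion-free reference series. A minimal sketch of that step (assumptions: scikit-learn's FastICA, a fixed component count, and selection of the motion IC by correlation with a surrogate breathing trace; the papers instead use a time-frequency criterion):

```python
import numpy as np
from sklearn.decomposition import FastICA

def motion_free_reference(series, n_components=4, motion_trace=None):
    """series: (n_frames, h, w) perfusion image stack.
    Returns synthetic reference frames with the motion-related IC removed."""
    n, h, w = series.shape
    X = series.reshape(n, h * w).astype(float)
    ica = FastICA(n_components=n_components, random_state=0)
    S = ica.fit_transform(X)          # IC time courses, shape (n, n_components)
    A = ica.mixing_                   # spatial maps, shape (h*w, n_components)
    if motion_trace is None:          # surrogate quasiperiodic breathing trace (assumption)
        motion_trace = np.sin(np.linspace(0.0, 12 * np.pi, n))
    # Pick the IC whose time course tracks the breathing trace most closely.
    corr = [abs(np.corrcoef(S[:, k], motion_trace)[0, 1]) for k in range(n_components)]
    keep = [k for k in range(n_components) if k != int(np.argmax(corr))]
    X_ref = S[:, keep] @ A[:, keep].T + ica.mean_   # recombine all ICs but motion
    return X_ref.reshape(n, h, w)
```

The original frames are then registered (e.g. translation first, then a non-linear transformation) to these synthetic references, which keep the contrast-agent intensity dynamics but not the motion.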
Abstract:
The most straightforward European single energy market design would entail a European system operator regulated by a single European regulator. This would ensure the predictable development of rules for the entire EU, significantly reducing regulatory uncertainty for electricity sector investments. But such a first-best market design is unlikely to be politically realistic in the European context, for three reasons. First, the necessary changes compared to the current situation are substantial and would produce significant redistributive effects. Second, a European solution would deprive member states of the ability to manage their energy systems nationally. And third, a single European solution might fall short of being well-tailored to consumers' preferences, which differ substantially across the EU. To nevertheless reap significant benefits from an integrated European electricity market, we propose the following blueprint.

First, we suggest adding a European system-management layer to complement national operation centres and help them to better exchange information about the status of the system, expected changes and planned modifications. The ultimate aim should be to transfer the day-to-day responsibility for the safe and economic operation of the system to the European control centre. To further increase efficiency, electricity prices should be allowed to differ between all network points, between and within countries. This would enable the throughput of electricity through national and international lines to be safely increased without any major investments in infrastructure.

Second, to ensure that national network plans are consistent and contribute to providing the infrastructure for a functioning single market, the role of the European ten-year network development plan (TYNDP) needs to be upgraded by obliging national regulators to approve only projects planned at European level, unless they can prove that deviations are beneficial. This boosted role of the TYNDP would need to be underpinned by resolving the issues of conflicting interests and information asymmetry. Therefore, the network planning process should be opened to all affected stakeholders (generators, network owners and operators, consumers, residents and others), and the European Agency for the Cooperation of Energy Regulators (ACER) should be enabled to act as a welfare-maximising referee. An ultimate political decision by the European Parliament on the entire plan would open a negotiation process around selecting alternatives and agreeing compensation. This ensures that all stakeholders have an interest in guaranteeing a certain degree of balance of interests at the earlier stages. In fact, transparent planning, early stakeholder involvement and democratic legitimisation are well suited to minimising local opposition to new lines as far as possible.

Third, sharing the cost of network investments in Europe is a critical issue, not least because so far even the most sophisticated models have been unable to identify the individual long-term net benefit in an uncertain environment. A workable compromise to finance new network investments would consist of three components: (i) all easily attributable costs should be levied on the responsible party; (ii) all network users at nodes that are expected to receive more imports through a line extension should be obliged to pay a share of the line-extension cost through their network charges; (iii) the rest of the cost is socialised to all consumers.
Such a cost-distribution scheme will involve some intra-European redistribution from the well-developed countries (infrastructure-wise) to those that are catching up. However, such a scheme would perform this redistribution in a much more efficient way than the Connecting Europe Facility’s ad-hoc disbursements to politically chosen projects, because it would provide the infrastructure that is really needed.