943 results for Electric Machine drive systems
Abstract:
The location of ground faults in railway electric lines in 2 × 25 kV railway power supply systems is a difficult task. In both 1 × 25 kV and transmission power systems it is common practice to use distance protection relays to clear ground faults and localize their positions. However, in the particular case of this 2 × 25 kV system, due to the widespread use of autotransformers, the relation between the distance and the impedance seen by the distance protection relays is not linear and therefore the location is not accurate enough. This paper presents a simple and economical method to identify the subsection between autotransformers and the conductor (catenary or feeder) where the ground fault occurs. This method is based on the comparison of the angle between the current and the voltage of the positive terminal in each autotransformer. Consequently, after the identification of the subsection and the conductor affected by the ground fault, only the subsection where the ground fault is present will be quickly removed from service, with the minimum effect on rail traffic. This method has been validated through computer simulations and laboratory tests with positive results.
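As a purely illustrative sketch of the angle-comparison idea described above, the fragment below computes the voltage-current angle at each autotransformer terminal and flags the subsection where the angle pattern changes sign. The measurement layout, the sign-change decision rule, and all names are assumptions made for illustration; they are not the paper's actual criterion.

```python
# Hypothetical sketch of an angle-comparison fault locator; the decision rule
# (sign change of the voltage-current angle between neighbouring
# autotransformers) is an illustrative assumption, not the paper's method.
import cmath

def terminal_angle(v_phasor: complex, i_phasor: complex) -> float:
    """Angle (degrees) between voltage and current at an autotransformer terminal."""
    return cmath.phase(v_phasor / i_phasor) * 180.0 / cmath.pi

def locate_faulted_subsection(measurements):
    """measurements: list of (v_phasor, i_phasor), one per autotransformer,
    ordered along the line. Returns the index k of the subsection between
    autotransformers k and k+1 whose terminal angles differ in sign."""
    angles = [terminal_angle(v, i) for v, i in measurements]
    for k in range(len(angles) - 1):
        if angles[k] * angles[k + 1] < 0:   # assumed fault signature
            return k
    return None

# Example with made-up phasors:
# print(locate_faulted_subsection([(1+0j, 0.5+0.2j), (1+0j, 0.4-0.3j), (1+0j, 0.5+0.1j)]))
```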
Abstract:
To “control” a system is to make it behave (hopefully) according to our “wishes,” in a way compatible with safety and ethics, at the least possible cost. The systems considered here are distributed—i.e., governed (modeled) by partial differential equations (PDEs) of evolution. Our “wish” is to drive the system in a given time, by an adequate choice of the controls, from a given initial state to a given final state, which is the target. If this can be achieved (respectively, if we can reach any “neighborhood” of the target) the system, with the controls at our disposal, is exactly (respectively, approximately) controllable. A very general (and fuzzy) idea is that the more “unstable” (chaotic, turbulent) a system is, the “simpler,” or the “cheaper,” it is to achieve exact or approximate controllability. When the PDEs are the Navier–Stokes equations, this idea leads to conjectures, which are presented and explained. Recent results, reported in this expository paper, essentially prove the conjectures in two space dimensions. In three space dimensions, a large number of new questions arise; some new results, such as generic controllability and cases where the cost of control decreases as the instability increases, support (without proving) the conjectures. Short comments are made on models arising in climatology, thermoelasticity, non-Newtonian fluids, and molecular chemistry. The Introduction of the paper and the first part of all sections are not technical. Many open questions are mentioned in the text.
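For readers unfamiliar with the terminology, the standard abstract formulation of these notions, in generic notation not taken from the paper, is

\[
y' = A y + B u, \qquad y(0) = y_0, \qquad t \in (0, T),
\]
\[
\text{exact controllability: } \forall\, y_0, y_1 \ \exists\, u \ \text{with } y(T) = y_1; \qquad
\text{approximate controllability: } \forall\, \varepsilon > 0 \ \exists\, u \ \text{with } \lVert y(T) - y_1 \rVert \le \varepsilon,
\]

i.e., approximate controllability means the set of reachable states at time \(T\) is dense.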
Abstract:
Application of electric fields tangent to the plane of a confined patch of fluid bilayer membrane can create lateral concentration gradients of the lipids. A thermodynamic model of this steady-state behavior is developed for binary systems and tested with experiments in supported lipid bilayers. The model uses Flory’s approximation for the entropy of mixing and allows for effects arising when the components have different molecular areas. In the special case of equal-area molecules, the concentration profile reduces to a Fermi–Dirac distribution. The theory is extended to include effects from charged molecules in the membrane. Calculations show that surface charge on the supporting substrate substantially screens electrostatic interactions within the membrane. It is also shown that concentration profiles can be affected by other intermolecular interactions such as clustering. Qualitative agreement with this prediction is provided by comparing phosphatidylserine- and cardiolipin-containing membranes.
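A brief sketch of how such a profile arises, using generic symbols rather than the paper's notation: at steady state the drift of the driven component in the applied field balances diffusion, and for molecules of equal area the mole fraction of that component takes the Fermi–Dirac form

\[
\phi(x) \;=\; \frac{1}{1 + \exp\!\big[(W(x) - \mu)/k_B T\big]},
\]

where \(W(x)\) is the potential energy of a driven molecule at position \(x\) in the field and \(\mu\) is a constant fixed by conservation of the total amount of that component.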
Abstract:
A "green beard" refers to a gene, or group of genes, that is able to recognize itself in other individuals and direct benefits to these individuals. Green-beard effects have been dismissed as implausible by authors who have implicitly assumed sophisticated mechanisms of perception and complex behavioral responses. However, many simple mechanisms for genes to "recognize" themselves exist at the maternal-fetal interface of viviparous organisms. Homophilic cell adhesion molecules, for example, are able to interact with copies of themselves on other cells. Thus, the necessary components of a green-beard effect -- feature, recognition, and response -- can be different aspects of the phenotype of a single gene. Other green-beard effects could involve coalitions of genes at closely linked loci. In fact, any form of epistasis between a locus expressed in a mother and a closely linked locus expressed in the fetus has the property of "self-recognition." Green-beard effects have many formal similarities to systems of meiotic drive and, like them, can be a source of intragenomic conflict.
Abstract:
Earthquake zones in the upper crust are usually more conductive than the surrounding rocks, and electrical geophysical measurements can be used to map these zones. Magnetotelluric (MT) measurements across fault zones that are parallel to the coast and not too far inland can also give important information about the lower crustal zone. This is because the long-period electric currents coming from the ocean gradually leak into the mantle, but the lower crust is usually very resistive and very little leakage takes place. If a lower crustal zone is less resistive it will act as a leakage zone, and this can be detected because the MT phase changes as the ocean currents leave the upper crust. The San Andreas Fault is parallel to the ocean boundary and close enough that a substantial amount of ocean-induced current crosses the zone. After the earthquake, the Loma Prieta zone showed substantial leakage of ocean-induced electric current, suggesting that the lower crust under the fault zone was much more conductive than normal. It is unlikely that the water responsible for this conductivity had time to migrate into the lower crustal zone after the earthquake, so it was probably always there, but poorly connected. If this is true, the poorly connected water would be at a pressure close to the rock pressure, and it may play a role in modifying the fluid pressure in the upper crustal fault zone. We also have telluric measurements across the San Andreas Fault near Palmdale from 1979 to 1990; beginning in 1985 we observed changes in the telluric signals on the fault zone and east of the fault zone relative to the signals west of the fault zone. These changes probably reflect an improving connection of the lower crustal fluids, which may result in fluid flow from the lower crust into the upper crust. This could be a factor in changing the strength of the upper crustal fault zone.
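For reference, the standard magnetotelluric quantities referred to above are defined from orthogonal horizontal electric and magnetic field components as

\[
Z(\omega) = \frac{E_x(\omega)}{H_y(\omega)}, \qquad
\rho_a(\omega) = \frac{1}{\omega \mu_0}\,\lvert Z(\omega) \rvert^{2}, \qquad
\varphi(\omega) = \arg Z(\omega),
\]

so a change in the depth at which ocean-induced currents leak out of the crust appears as a frequency-dependent shift in the phase \(\varphi\). These are textbook definitions, not formulas quoted from the paper.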
Abstract:
The scientific bases for human-machine communication by voice are in the fields of psychology, linguistics, acoustics, signal processing, computer science, and integrated circuit technology. The purpose of this paper is to highlight the basic scientific and technological issues in human-machine communication by voice and to point out areas of future research opportunity. The discussion is organized around the following major issues in implementing human-machine voice communication systems: (i) hardware/software implementation of the system, (ii) speech synthesis for voice output, (iii) speech recognition and understanding for voice input, and (iv) usability factors related to how humans interact with machines.
Abstract:
This paper describes a range of opportunities for military and government applications of human-machine communication by voice, based on visits and contacts with numerous user organizations in the United States. The applications include some that appear to be feasible by careful integration of current state-of-the-art technology and others that will require a varying mix of advances in speech technology and in integration of the technology into applications environments. Applications that are described include (1) speech recognition and synthesis for mobile command and control; (2) speech processing for a portable multifunction soldier's computer; (3) speech- and language-based technology for naval combat team tactical training; (4) speech technology for command and control on a carrier flight deck; (5) control of auxiliary systems, and alert and warning generation, in fighter aircraft and helicopters; and (6) voice check-in, report entry, and communication for law enforcement agents or special forces. A phased approach for transfer of the technology into applications is advocated, where integration of applications systems is pursued in parallel with advanced research to meet future needs.
Abstract:
Oscillating electric fields can be rectified by proteins in cell membranes to give rise to a dc transport of a substance across the membrane or a net conversion of a substrate to a product. This provides a basis for signal averaging and may be important for understanding the effects of weak extremely low frequency (ELF) electric fields on cellular systems. We consider the limits imposed by thermal and "excess" biological noise on the magnitude and exposure duration of such electric field-induced membrane activity. Under certain circumstances, the excess noise leads to an increase in the signal-to-noise ratio in a manner similar to processes labeled "stochastic resonance." Numerical results indicate that it is difficult to reconcile biological effects with low field strengths.
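As a rough, textbook-level illustration of the scales involved (not the paper's specific noise model): a spherical cell of radius \(a\) in an extracellular field \(E\) develops an induced membrane potential bounded by the Schwan estimate, which must be compared with the thermal (Johnson) voltage noise across the membrane resistance \(R\) in a bandwidth \(\Delta f\),

\[
\Delta V_m \approx \tfrac{3}{2}\, E\, a \cos\theta, \qquad
V_{\text{noise}} = \sqrt{4 k_B T R\, \Delta f}, \qquad
\text{SNR} \sim \left( \frac{\Delta V_m}{V_{\text{noise}}} \right)^{2}.
\]

For weak ELF fields and cell-sized \(a\), \(\Delta V_m\) typically falls well below \(V_{\text{noise}}\) unless long averaging times or cooperative effects are invoked, which is the difficulty the abstract alludes to.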
Abstract:
Conventional direct-current motors are well known for their robustness and their high degree of controllability, as well as for allowing operation in the field-weakening region (motor mode) when this is required. Because of these characteristics, DC machines are still used today in specific application niches. Nevertheless, the DC machine has some drawbacks, chiefly the intensive and costly electromechanical maintenance required for its operation. As an option to overcome this problem, brushless DC electric machines with trapezoidal-flux permanent-magnet excitation appeared in the 1960s. The problem with these machines is precisely the impossibility of varying the excitation flux, since it is produced purely by the magnets. Accordingly, the purpose of this work is to study alternative electric machine topologies, based on a non-conventional magnetic circuit, for application in electric traction systems operating in the field-weakening region through variation of the resulting air-gap flux. The study focuses on the axial-flux topology with hybrid excitation, that is, double excitation (permanent-magnet excitation and electric excitation). For the design of the topology proposed in this thesis, computer simulations were carried out in addition to the analytical method, in order to compare and refine the results for the machine's electromagnetic quantities.
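A brief reminder of the generic machine relations that motivate the hybrid-excitation topology (per-phase textbook quantities, not the thesis's design equations): the back-EMF grows with air-gap flux and speed, so operation above base speed under a limited supply voltage requires the flux to fall roughly in inverse proportion to speed, and with hybrid excitation the air-gap flux is the sum of a fixed permanent-magnet component and an adjustable wound-field component,

\[
E = k_e\,\Phi_g\,\omega, \qquad
\Phi_g(i_f) \approx \Phi_{PM} + \Phi_f(i_f), \qquad
\Phi_g \lesssim \frac{V_{\max}}{k_e\,\omega} \ \ \text{above base speed},
\]

so the field winding provides the flux variation that the magnets alone cannot.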
Abstract:
High-quality software, delivered on time and on budget, constitutes a critical part of most products and services in modern society. Our government has invested billions of dollars to develop software assets, often redeveloping the same capability many times. Recognizing the waste involved in redeveloping these assets, in 1992 the Department of Defense issued the Software Reuse Initiative. The vision of the Software Reuse Initiative was "To drive the DoD software community from its current 're-invent the software' cycle to a process-driven, domain-specific, architecture-centric, library-based way of constructing software." Twenty years after issuing this initiative, there is evidence of this vision beginning to be realized in nonembedded systems. However, virtually every large embedded system undertaken has incurred large cost and schedule overruns. Investigations into the root cause of these overruns implicate reuse. Why are we seeing improvements in the outcomes of these large-scale nonembedded systems and worse outcomes in embedded systems? This question is the foundation for this research. The experiences of the aerospace industry have led to a number of questions about reuse and how the industry is employing reuse in embedded systems. For example, does reuse in embedded systems yield the same outcomes as in nonembedded systems? Are the outcomes positive? If the outcomes are different, it may indicate that embedded systems should not use data from nonembedded systems for estimation. Are embedded systems using the same development approaches as nonembedded systems? Does the development approach make a difference? If embedded systems develop software differently from nonembedded systems, it may mean that the same processes do not apply to both types of systems. What about the reuse of different artifacts? Perhaps there are certain artifacts that, when reused, contribute more or are more difficult to use in embedded systems. Finally, what are the success factors and obstacles to reuse? Are they the same in embedded systems as in nonembedded systems? The research in this dissertation comprises a series of empirical studies using professionals in the aerospace and defense industry as its subjects. The main focus has been to investigate the reuse practices of embedded systems professionals and nonembedded systems professionals and to compare the methods and artifacts used against the outcomes. The research followed a combined qualitative and quantitative design approach. The qualitative data were collected by surveying software and systems engineers, interviewing senior developers, and reading numerous documents and other studies. Quantitative data were derived by converting survey and interview respondents' answers into codes that could be counted and measured. From the search of the existing empirical literature, we learned that reuse in embedded systems is in fact significantly different from reuse in nonembedded systems, particularly in effort under a model-based development approach and in quality where the development approach was not specified. The questionnaire showed differences in the development approach used in embedded projects compared with nonembedded projects; in particular, embedded systems were significantly more likely to use a heritage/legacy development approach. There was also a difference in the artifacts reused, with embedded systems more likely to reuse hardware, test products, and test clusters. Nearly all the projects reported reusing code, but the questionnaire showed that the reuse of code brought mixed results. One of the differences expressed by the respondents to the questionnaire was the difficulty of reusing code for embedded systems when the platform changed. The semistructured interviews were performed to explain why the phenomena seen in the literature review and the questionnaire were observed. We asked respected industry professionals, such as senior fellows, fellows, and distinguished members of technical staff, about their experiences with reuse. We learned that many embedded systems used heritage/legacy development approaches because their systems had been around for many years, before models and modeling tools became available. We learned that reuse of code is beneficial primarily when the code does not require modification, but, especially in embedded systems, once it has to be changed, reuse of code yields few benefits. Finally, while platform independence is a goal for many in nonembedded systems, it is certainly not a goal for embedded systems professionals, and in many cases it is a detriment. However, both embedded and nonembedded systems professionals endorsed the idea of platform standardization. We conclude that while reuse in embedded systems and nonembedded systems is different today, the two are converging. As heritage embedded systems are phased out, models become more robust, and platforms are standardized, reuse in embedded systems will become more like reuse in nonembedded systems.
Abstract:
Being efficient is a requirement for the sustainability of electricity distribution utilities in Brazil. The pursuit of efficiency must go hand in hand with continuous improvement of quality, safety, and the satisfaction of consumers and stakeholders. The challenge of meeting multiple objectives requires companies in the sector to develop innovative solutions, changing processes, technology, and structure and building people's skills. Developing an efficient operating model and rigorous cost management are key success factors, given a regulatory context of tariff reviews that rewards performance improvement. The operating model is defined by the logistical organization of resources to meet the demand for services, which also determines the fixed and variable costs of personnel (salaries, overtime, meals), infrastructure (maintenance of buildings, tools, and equipment), and travel (vehicle maintenance, fuel), for example. Better siting and sizing of operational bases make it possible to reduce travel and infrastructure costs, improving the use of the field workforce, customer service, and employee safety. This work presents a cost-optimization methodology based on the allocation of operational bases and crews, with mathematical modeling of the business objectives and constraints and the application of an evolutionary algorithm to search for the best solutions; it is an application of Operations Research, in the field of Facility Location, to electricity distribution. The optimization model developed makes it possible to search for the optimal balance point that minimizes the total cost formed by infrastructure, fleet (vehicles and travel), and personnel costs. The evolutionary algorithm applied to the model produces optimized solutions by improving sets of binary variables according to concepts from genetic evolution. The optimization model provides a detailed breakdown of the entire operational and cost structure for a given solution of the problem, using productivity and travel assumptions (speeds and distances) to define the coverage areas of the operational bases and the resources (crews, people, vehicles) needed to meet the service demand, and to project all the associated fixed and variable costs. The methodology developed in this work also considers the projection of future demand for the case study, which demonstrated its effectiveness as a tool for improving operational efficiency in electricity distribution utilities.
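The sketch below illustrates, with invented data and a deliberately simplified cost model, the kind of binary evolutionary search described above: each bit of the genome opens or closes a candidate base, and the fitness is the sum of fixed infrastructure costs and nearest-base travel costs. All names, costs, and operator choices are hypothetical; the thesis's actual model also covers crew sizing, productivity, and demand projection.

```python
# Minimal binary genetic algorithm for a toy base-allocation problem.
# Data and cost model are invented for illustration only.
import random

N_SITES = 12                                                      # candidate base locations
FIXED_COST = [random.uniform(50, 120) for _ in range(N_SITES)]    # infrastructure cost per open base
# TRAVEL_COST[i][j]: cost of serving demand point i from base j (hypothetical)
TRAVEL_COST = [[random.uniform(1, 30) for _ in range(N_SITES)] for _ in range(200)]

def total_cost(genome):
    """Fixed cost of open bases plus travel cost of serving each demand point
    from its nearest open base; infeasible if no base is open."""
    open_sites = [j for j, bit in enumerate(genome) if bit]
    if not open_sites:
        return float("inf")
    fixed = sum(FIXED_COST[j] for j in open_sites)
    travel = sum(min(row[j] for j in open_sites) for row in TRAVEL_COST)
    return fixed + travel

def evolve(pop_size=40, generations=200, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_SITES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_cost)                 # elitist selection: keep the cheaper half
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_SITES)
            child = a[:cut] + b[cut:]            # one-point crossover
            child = [1 - g if random.random() < p_mut else g for g in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    best = min(pop, key=total_cost)
    return best, total_cost(best)

# best_layout, best_cost = evolve()
```

A one-point crossover with bit-flip mutation keeps the example short; any standard binary evolutionary variant would serve the same illustrative purpose.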
Abstract:
Climate change is becoming more visible in the political arena. Electric generating companies will likely be impacted by future regulation of climate-change-related emissions. Even though few climate-related programs are mandatory, electric generating companies should begin to implement greenhouse gas management systems. This report includes a review of issues facing the electric generating industry, an examination of current emission management programs, and recommendations for an effective greenhouse gas management framework. An effective greenhouse gas management program allows a company to continually improve its impact on climate change by reducing emissions using the plan-do-check-act process. To ease the reporting burden, companies should apply de minimis exemptions to sources that produce less than 5% of emissions.
Abstract:
Solar heating of potable water has traditionally been accomplished through the use of solar thermal (ST) collectors. With the recent increases in availability and decreases in cost of photovoltaic (PV) panels, the potential of coupling PV solar arrays to electrically heated domestic hot water (DHW) tanks has been considered. Additionally, innovations in the SDHW industry have led to the creation of photovoltaic/thermal (PV/T) collectors, which heat water using both electrical and thermal energy. The current work compared the performance and cost-effectiveness of a traditional solar thermal (ST) DHW system to PV-solar-electric DHW systems and a PV/T DHW system. To accomplish this, a detailed TRNSYS model of the solar hot water systems was created and annual simulations were performed for 250 L/day and 325 L/day loads in Toronto, Vancouver, Montreal, Halifax, and Calgary. It was shown that when considering thermal performance, PV-DHW systems were not competitive with ST-DHW and PVT-DHW systems. For example, for Toronto the simulated annual solar fractions of the PV-DHW systems were approximately 30%, while the ST-DHW and PVT-DHW systems achieved 65% and 71%, respectively. With current manufacturing and system costs, the PV-DHW system was the most cost-effective system for domestic purposes. The capital cost of the PV-DHW systems was approximately $1,923-$2,178 depending on the system configuration, and the ST-DHW and PVT-DHW systems were estimated to have capital costs of $2,288 and $2,373, respectively. Although the capital cost of the PVT-DHW system was higher than that of the other systems, a Present Worth analysis over a 20-year period showed that, for a 250 L/day load in Toronto, the Present Worth of the PV/T system was approximately $4,597, with the PV-DHW systems costing approximately $7,683-$7,816 and the ST-DHW system costing $5,238.
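For context, two standard quantities used above can be written as follows (the discount rate \(d\) and the annual operating cost \(C_t\) are generic symbols, not values from the thesis): the annual solar fraction is the share of the hot-water load not met by auxiliary energy, and the 20-year Present Worth combines the capital cost with discounted annual operating costs,

\[
f = 1 - \frac{Q_{\text{aux}}}{Q_{\text{load}}}, \qquad
PW = C_{\text{capital}} + \sum_{t=1}^{20} \frac{C_t}{(1+d)^{t}} .
\]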
Abstract:
INTRODUCTION Using a pelvic-trainer model, a so-called "handheld robot" (the Kymerax© Precision-Drive Articulating Surgical System by Terumo©) was compared with conventional laparoscopic instruments. The Kymerax© system has an instrument tip that can additionally be articulated and rotated via buttons on the handle. METHODS 45 participants were divided into two experience groups: 20 experts (more than 50 independently performed laparoscopic operations per year) and 25 students (no experience in laparoscopy). They performed 6 standardized exercises, of which the first two served only to familiarize participants with the instruments and were not evaluated. In the remaining 4 exercises, time, number of errors, and precision were recorded. Participants were randomized into two groups. One group performed the exercises first with the conventional system and then with the Kymerax© system; the other group performed the exercises in the reverse order. At the end, participants answered questions about the exercises and the surgical systems. The data were analyzed by analysis of variance. RESULTS In all 4 measured exercises, participants needed significantly more time with Kymerax© (20%-40%). Advantages of the Kymerax© system were better needle control when the stitch was directed toward the surgeon, a smaller deviation when cutting a straight line, and less fraying of the cut edge in both straight and curved cutting. In contrast to the experts, students who used the Kymerax© system in the second round coped better with it than their fellow students who used the Kymerax© system in the first round. In the survey, over 90% of participants stated that the Kymerax© system provided an advantage in performing the exercises. However, participants found that the handling took getting used to and that they tired more quickly with the Kymerax© system. Criticisms of the Kymerax© system were the lack of free rotation, the limited articulation, the restricted view caused by the 7 mm shaft, and the ergonomics of the handle. DISCUSSION The Kymerax© system offers advantages for certain complex laparoscopic tasks. The price for this is slower task execution, a longer familiarization period with the instrument, and faster user fatigue. The system shows great potential for laparoscopic surgery, but further improvements are needed. Terumo© has since withdrawn the system from the market.
Abstract:
The article is devoted to the design of optimal electromagnets for magnetic levitation transport systems. A design method for the electromagnets, based on solving the inverse problem of electrical engineering, is proposed. The method differs from known approaches by introducing a stage that minimizes target functions ensuring the specified levitation force and air-gap magnetic induction, as well as the mass of the electromagnet. Initial parameter values are obtained using approximate formulas from the theory of electric devices and electrical equipment. An example of applying the method is given. The results obtained show its high efficiency in design. It is practical to use the proposed method, and the computer program implementing it, as part of a computer-aided design system for the electrical equipment of magnetic levitation transport.
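A generic way to write the minimization stage described above (weights, symbols, and constraints are illustrative, not those used in the article) is as a weighted target function over the design variables \(x\) that penalizes deviation from the required levitation force \(F^{*}\) and air-gap induction \(B^{*}\) while penalizing mass:

\[
\min_{x}\; J(x) = w_1 \big(F(x) - F^{*}\big)^{2} + w_2 \big(B_g(x) - B^{*}\big)^{2} + w_3\, m(x),
\quad \text{subject to geometric and thermal constraints.}
\]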