836 results for Overhead conductors


Relevance:

10.00%

Publisher:

Abstract:

This thesis presents the study and development of fault-tolerant techniques for programmable architectures, specifically SRAM-based Field Programmable Gate Arrays (FPGAs). FPGAs are becoming increasingly attractive for space applications because of their high density, high performance, reduced development cost and re-programmability. In particular, SRAM-based FPGAs are very valuable for remote missions because they can be reprogrammed by the user as many times as necessary in a very short period. SRAM-based FPGAs and micro-controllers represent a wide range of components in space applications and are therefore the focus of this work, more specifically the Virtex® family from Xilinx and the architecture of the 8051 micro-controller from Intel. Triple Modular Redundancy (TMR) with voters is a common high-level technique to protect ASICs against single event upsets (SEUs), and it can also be applied to FPGAs. The TMR technique was first tested in the Virtex® FPGA architecture using a small design based on counters. Faults were injected in all sensitive parts of the FPGA and a detailed analysis of the effect of a fault in a TMR design synthesized on the Virtex® platform was performed. Results from fault injection and from a radiation ground test facility showed the efficiency of TMR for the case study circuit. Although TMR has shown high reliability, the technique presents some limitations, such as area overhead, three times more input and output pins and, consequently, a significant increase in power dissipation. Aiming to reduce TMR costs and improve reliability, an innovative high-level technique for designing fault-tolerant systems in SRAM-based FPGAs was developed that requires no modification of the FPGA architecture. This technique combines time and hardware redundancy to reduce overhead and to ensure reliability. It is based on duplication with comparison and concurrent error detection. The new technique proposed in this work was specifically developed for FPGAs to cope with transient faults in the user combinational and sequential logic, while also reducing pin count, area and power dissipation. The methodology was validated by fault injection experiments on an emulation board. The thesis presents comparative results for fault coverage, area and performance of the discussed techniques.
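Purely as a hedged illustration of the TMR principle discussed above (a bit-level Python model, not the actual Virtex design flow; the `module` function is a hypothetical placeholder for the protected logic), a majority voter masks any single upset injected into one of the three replicas:

```python
import random

def voter(a, b, c):
    # Bitwise majority vote over the outputs of the three redundant modules.
    return (a & b) | (a & c) | (b & c)

def module(x):
    # Hypothetical placeholder for the protected combinational logic.
    return (x * 3 + 1) & 0xFF

def run_with_upset(x, upset_module, flipped_bit):
    outputs = []
    for i in range(3):
        y = module(x)
        if i == upset_module:          # inject a single event upset (SEU)
            y ^= (1 << flipped_bit)
        outputs.append(y)
    return voter(*outputs)

# Toy fault-injection campaign: a single upset in one replica is always masked.
masked = sum(
    run_with_upset(x, random.randrange(3), random.randrange(8)) == module(x)
    for x in range(256)
)
print(f"masked {masked}/256 injected single faults")
```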

Relevance:

10.00%

Publisher:

Abstract:

While intangible assets have been widely discussed and presented as drivers of success, there is still little reflection on how these assets influence the value of companies. This work therefore aims to identify a way to account for intangible assets in a company's value when the company is appraised by the Discounted Cash Flow method. Based on this methodology, we analyze how the intangibles brand, reputation, networks and strategic alliances, technologies and processes, human capital, intellectual capital, innovation, adaptability, organizational culture, leadership, social and environmental responsibility, and communication and transparency can affect the value of the company, and how they relate to one another.
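As a brief illustration of the Discounted Cash Flow framework the study builds on (all figures below are hypothetical, not from the work), the intangibles discussed would enter the valuation through the assumptions behind the projected cash flows, the discount rate and the perpetuity growth:

```python
def dcf_value(free_cash_flows, wacc, perpetuity_growth):
    """Enterprise value = sum of discounted FCFs + discounted terminal value."""
    pv_fcf = sum(fcf / (1 + wacc) ** t
                 for t, fcf in enumerate(free_cash_flows, start=1))
    terminal = free_cash_flows[-1] * (1 + perpetuity_growth) / (wacc - perpetuity_growth)
    pv_terminal = terminal / (1 + wacc) ** len(free_cash_flows)
    return pv_fcf + pv_terminal

# Hypothetical projection: stronger brand/reputation could justify higher
# growth in the projected cash flows or a lower discount rate.
print(dcf_value([100, 110, 121, 133, 146], wacc=0.12, perpetuity_growth=0.03))
```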

Relevance:

10.00%

Publisher:

Abstract:

This thesis presents DCE, or Dynamic Conditional Execution, as an alternative to reduce the cost of mispredicted branches. The basic idea is to fetch all paths produced by a branch that obeys certain restrictions regarding complexity and size. As a result, fewer predictions are performed and, therefore, fewer branches are mispredicted. DCE fetches through selected branches, avoiding disruptions in the fetch flow when these branches are fetched. Both paths of a selected branch are executed, but only the correct path commits. In this thesis we propose an architecture to execute multiple paths of selected branches. Branches are selected based on size and other conditions. Simple and complex branches can be dynamically predicated without requiring a special instruction set or special compiler optimizations. Furthermore, a technique to reduce part of the overhead generated by the execution of multiple paths is proposed. The performance gain reaches up to 12% when a local predictor used in DCE is compared against a global predictor used in the reference machine. When both machines use a local predictor, the average speedup is 3-3.5%.
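A rough, hedged back-of-the-envelope model (illustrative parameters only, not the thesis results) of why predicating selected branches helps: branches fetched through both paths no longer need to be predicted, trading a large misprediction penalty for a small wrong-path execution overhead:

```python
def avg_branch_penalty(branch_freq, mispredict_rate, penalty_cycles,
                       selected_fraction, dce_overhead_cycles):
    """Average cycles lost per instruction to branches.

    selected_fraction: share of branches simple/small enough to be
    dynamically predicated (both paths fetched, no prediction needed).
    dce_overhead_cycles: extra cycles spent executing the wrong path.
    """
    predicted = branch_freq * (1 - selected_fraction) * mispredict_rate * penalty_cycles
    predicated = branch_freq * selected_fraction * dce_overhead_cycles
    return predicted + predicated

baseline = avg_branch_penalty(0.20, 0.08, 15, selected_fraction=0.0, dce_overhead_cycles=0.0)
with_dce = avg_branch_penalty(0.20, 0.08, 15, selected_fraction=0.4, dce_overhead_cycles=1.0)
print(f"baseline {baseline:.3f} vs DCE {with_dce:.3f} cycles/instruction")
```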

Relevance:

10.00%

Publisher:

Abstract:

With ever increasing demands for high-complexity consumer electronic products, market pressures call for faster product development and lower cost. SoC-based design can provide the required design flexibility and speed by allowing the use of IP cores. However, testing costs in the SoC environment can reach a substantial share of the total production cost. Analog testing costs may dominate the total test cost, as testing of analog circuits usually requires functional verification of the circuit and special testing procedures. For RF analog circuits commonly used in wireless applications, testing is further complicated because of the high frequencies involved. In summary, reducing analog test cost is of major importance in the electronics industry today. BIST techniques for analog circuits, though potentially able to solve the analog test cost problem, have some limitations. Some techniques are circuit dependent, requiring reconfiguration of the circuit under test, and are generally not usable in RF circuits. In the SoC environment, processing and memory resources are available and could be used in the test. However, the overhead of adding extra AD and DA converters may be too costly for most systems, and analog routing of signals may not be feasible and may introduce signal distortion. In this work a simple and low-cost digitizer is used instead of an ADC in order to enable analog testing strategies to be implemented in a SoC environment. Thanks to the low analog area overhead of the converter, multiple analog test points can be observed and specific analog test strategies can be enabled. As the digitizer is permanently connected to the analog test point, it is not necessary to include muxes and switches that would degrade the signal path. For RF analog circuits this is especially useful, as the circuit impedance is fixed and the influence of the digitizer can be accounted for in the design phase. Thanks to the simplicity of the converter, it can reach higher frequencies and enables the implementation of low-cost RF test strategies. The digitizer has been applied successfully in the testing of both low-frequency and RF analog circuits. Also, as testing is based on frequency-domain characteristics, nonlinear characteristics such as intermodulation products can also be evaluated. Specifically, practical results were obtained for prototyped baseband filters and a 100 MHz mixer. The application of the converter to noise figure evaluation was also addressed, and experimental results for low-frequency amplifiers using conventional opamps were obtained. The proposed method is able to enhance the testability of current mixed-signal designs, being suitable for the SoC environment used in many industrial products nowadays.
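As a hedged sketch of the frequency-domain testing idea (synthetic two-tone signal and an idealized sampling model; the actual digitizer, frequencies and circuits are those described in the thesis), third-order intermodulation products can be read directly from the spectrum of the digitized response:

```python
import numpy as np

fs = 1_000_000            # sampling rate (Hz), illustrative
f1, f2 = 90_000, 100_000  # two-tone stimulus (Hz), illustrative
n = 8192
t = np.arange(n) / fs

x = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)
y = x + 0.05 * x**3       # weak third-order nonlinearity modeling the device under test

spectrum = np.abs(np.fft.rfft(y * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)

def level_db(f):
    # Magnitude (dB) of the spectral bin closest to frequency f.
    return 20 * np.log10(spectrum[np.argmin(np.abs(freqs - f))])

# Third-order intermodulation products fall at 2*f1 - f2 and 2*f2 - f1.
print("tones:", level_db(f1), level_db(f2))
print("IM3  :", level_db(2 * f1 - f2), level_db(2 * f2 - f1))
```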

Relevance:

10.00%

Publisher:

Abstract:

The objective of this study is to identify and describe a sample of the banking cost systems in existence and in operation in Brazil, thereby inferring the degree of development of these systems. The subject was addressed through field research divided into three distinct phases. In the first stage, questionnaires were mailed to 54 Brazilian banks, intended to form a general picture of the stage of development of cost systems in financial institutions. This first phase of the research yielded modest, basically quantitative results and few firm conclusions. From this general approach, the work evolved into the other phases. The second phase consisted of five interviews carried out at banks with different experiences in the cost area. The third phase, also in the form of personal interviews, was carried out to address several questions that had remained open after the approach taken in the second phase. These last two phases refined and improved the results obtained, forming a representative picture of the situation of cost systems in financial institutions. This picture indicated a strong preference for direct costing systems with overhead allocation, to the detriment of standard costing systems. Profitability information, particularly regarding clients, is among the most frequently cited objectives. The research highlighted the problems faced in putting the systems into operation, basically in the collection of physical data and in the various approximation criteria that are used. It was observed that solutions that improve the quality of the information generated by the system necessarily involve optimizing the collection and updating of physical data. It was also observed that all the systems studied have methodological restrictions, and broad discussion of these problems is pointed out as an alternative for their continuous improvement.

Relevance:

10.00%

Publisher:

Abstract:

Given the growing structuring of permanent organizations into temporary organizations, here represented as projects, there is a proportional demand for qualified professionals to act as the conductors of these endeavors: the project managers (PMs). This master's dissertation aims to study how different psychodemographic traits, namely gender, ethnicity, sexual orientation and physical appearance, impact the life and career progression of the PM who bears them, from this professional's perspective. In the search for possible answers to this question, we reviewed the literature looking for what a PM needs in order to perform the role satisfactorily and advance in the career. We found that these professionals' hard skills are shaped and enriched by the methodologies and best practices provided by the various international project management associations. However, the prescriptive actions of these methodologies only outline the broader contours of the soft skills. This set of skills proved to be fundamental for the PM's work, being learned empirically and enriched through practical experience. Moreover, career progression for PMs is perceived through the growing complexity of the projects under their responsibility, as well as through moves toward more strategic and less operational paths. The research is grounded in Brazilian culture, used as a framework to understand how these professionals exercise their relationships of leadership, communication, respect and collaboration with their stakeholders, that is, their soft skills, and orchestrate the activities needed for the satisfactory completion of a project. On this basis, we sought to understand possible prejudices and barriers imposed on the different psychodemographic profiles and then to observe which impacts are perceived by PMs in the exercise of their activities and in the progression of their careers. From the field, we brought interviews with several PMs of different psychodemographic profiles. Discourse analysis provided the tools to study the field data and to reach conclusions supported by the theoretical basis. As a result, we found that Brazilian culture still presents mechanisms of domination embodied by an androcentric, ethnically white and heterosexual group, with prescriptions and norms about physical appearance. We also found that soft skills are influenced by the local culture, especially when they are exercised by minorities rejected by the dominant group. We perceived the existence of glass doors and glass ceilings in the professional life of a PM who embodies certain psychodemographic traits, as well as an internalization of this androcentric logic. Such professionals perform their duties at a higher cost than that borne by white, heterosexual, good-looking men, and need to show resilience to this domination. Barriers to the female gender also rise when a woman PM, even in the context of leading a temporary organization, becomes a mother.

Relevance:

10.00%

Publisher:

Abstract:

Tungsten-copper (W-Cu) composites are commonly used for electrical and thermal purposes, such as heat sinks and electrical conductors, since they provide excellent thermal and electrical conductivity. These properties depend on the composition, crystallite size and production process. High energy milling (HEM) of W-Cu powder produces high levels of dispersion and homogenization, with very small W crystallite sizes embedded in the ductile Cu phase. This work discusses the effect of HEM on the preparation of W-25Cu composite powders. Three powder preparation techniques were used: dry milling with coarse Cu powder, dry milling with fine Cu powder, and wet milling with coarse Cu powder. The shape, size and composition of the particles of the milled powders were observed by scanning electron microscopy (SEM). X-ray diffraction (XRD) was used to analyze the phases, lattice parameters, and crystallite size and microstrain. The analysis of the crystalline structure of the milled W-25Cu powders by the Rietveld method suggests partial solid solubility of Cu in the W lattice. This analysis also shows that HEM produces a strong reduction in crystallite size and an increase in lattice strain in both phases, the effect being more intense in the W phase.
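The work obtains crystallite size and microstrain by Rietveld refinement; as a much simpler, hedged illustration of how XRD peak broadening relates to crystallite size, the Scherrer equation can be evaluated as follows (peak position and width are hypothetical, not the measured values):

```python
import math

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)), with beta in radians."""
    beta = math.radians(fwhm_deg)        # peak full width at half maximum
    theta = math.radians(two_theta_deg / 2)
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical W (110) reflection near 2theta = 40.3 deg with Cu K-alpha radiation:
print(f"D = {scherrer_size(fwhm_deg=0.8, two_theta_deg=40.3):.1f} nm")
```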

Relevance:

10.00%

Publisher:

Abstract:

Ionic liquids (ILs) are organic compounds that are liquid at room temperature and good electrical conductors, with the potential to serve as electrolytes for the electrolysis of water, in which the electrodes would not be subjected to such chemically demanding, extreme conditions [1]. This paper describes the synthesis, characterization and feasibility study of the ionic liquid 1-methyl-3(2,6-(S)-dimethyloct-2-ene)-imidazole tetrafluoroborate (MDI-BF4) as an electrolyte to produce hydrogen through the electrolysis of water. The synthesized MDI-BF4 was characterized by thermal methods of analysis (Thermogravimetric Analysis, TG, and Differential Scanning Calorimetry, DSC), mid-infrared Fourier transform spectroscopy by the attenuated total reflectance method (FTIR-ATR), hydrogen nuclear magnetic resonance spectroscopy (1H NMR) and cyclic voltammetry (CV). The thermal methods were used to calculate the yield of the synthesis of MDI-BF4, which was 88.84%; infrared spectroscopy characterized the functional groups of the compound and the B-F bond at 1053 cm-1; the 1H NMR data, analyzed and compared with the literature, establish the structure of MDI-BF4; and the current density achieved by MDI-BF4 in the voltammogram shows that the IL can conduct electrical current, indicating that MDI-BF4 is a good electrolyte and that its behavior does not change with increasing water concentration.

Relevance:

10.00%

Publisher:

Abstract:

Exploratory, descriptive and quantitative study with prospective data, performed in the Mobile Emergency Care Service (SAMU) in the metropolitan region of Natal/RN, in order to identify the knowledge of the multidisciplinary team about the rules of standard precautions and worker safety, identify the occupational hazards peculiar to the activities of this service, characterize work-related accidents (WRAs) and learn the procedures adopted after each WRA. The population consisted of 162 professionals and data were collected between November and December 2010. As for personal and professional characteristics, of the 162 professionals, 12.96% were physicians; 6.79%, nurses; 33.95%, nursing technicians; 46.29%, drivers; 74.70% were male; 43.21% were between 31 and 40 years old; 69.33% lived in Natal/RN; 50.00% had completed high school; 58.64% were married; 69.75% had children; 46.91% had between 1 and 4 years of training; 61.73% had taken improvement courses; 59.25% had 3 to 4 years of service; 54.32% had 1-4 years of experience in emergency care; 44.44% received 1-2 minimum wages; 78.40% received an insalubrity premium; 67.28% worked in a Basic Support Unit (BSU); 83.95% worked 31-40 hours per week at SAMU Metropolitano; 52.47% had other jobs. As for knowledge of the rules of standard precautions, safety and occupational hazards, 99.38% knew what a WRA was; 62.96% gave incomplete answers; 74.07% knew the rules for preventing WRAs; 46.67% acquired this knowledge in lectures; 53.09% knew about Personal Protective Equipment (PPE); 71.60% gave incorrect answers about the importance of standard precautions; 45.06% had never received an educational intervention on this issue; 89.51% said that educational interventions on the prevention of WRAs are very important; 90.12% pointed this out as a very important issue in the workplace; 27.00% suggested guidance on the topic in the workplace. Regarding physical hazards, 34.57% considered noise the most important; for chemical hazards, 78.40% chose gases and smoke; for biological hazards, 48.77% reported contact with blood; for mechanical hazards, 80.86% cited transport accidents; for ergonomic risks, 40.12% cited the tension/stress involved in the care of critically ill, psychiatric and aggressive patients; and the feeling of safety in the workplace averaged 4.5. Regarding the WRAs that occurred, 31.48% had experienced at least one accident; 72.55% did not report it; 60.98% answered that there was no routine for notification; 56.86% were transporting a patient at the time; 49.02% were injured in the Basic Support Unit/Rescue Unit; 60.78% of the accidents occurred during the day; 96.08% of the professionals were on their normal work schedule (24 hours on duty); 31.37% suffered contusions; 58.82% had injuries to the limbs/pelvic girdle; 43.14% had traffic accidents. Regarding the evolution of the WRAs, 62.75% did not have to take time away from work; 76.47% had no sequelae; 88.24% did not require rehabilitation; no professional had to change occupation. Univariate logistic regression showed that being a nurse and male sex were risk factors for the occurrence of a WRA. We conclude that there were gaps in the staff's knowledge regarding WRAs, emphasizing the need for continuing education in biosafety in the service.
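A minimal, hedged sketch of the univariate logistic regression mentioned at the end (the data below are made up solely to show the procedure; the real survey data belong to the study):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical encoding: one row per professional, outcome = suffered a WRA (1/0),
# predictor = male sex (1/0).
male = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1])
wra  = np.array([1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1])

X = sm.add_constant(male)            # univariate model: intercept + one predictor
model = sm.Logit(wra, X).fit(disp=0)
odds_ratio = np.exp(model.params[1]) # odds ratio associated with male sex
print(model.summary())
print("odds ratio for male sex:", odds_ratio)
```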

Relevance:

10.00%

Publisher:

Abstract:

Knowledge management in organizations is a continuous learning process achieved by integrating data, information and people's ability to use this information. Competence management is concerned with understanding employees' competences in light of the organizational (team) and professional (position or task) competences that are required. Both are situated in the broader context of the economics of organizations and share the same assumption: that possession of scarce, valuable and hard-to-imitate resources gives the organization a competitive advantage. In this sense, this thesis proposes a knowledge management model based on the analysis of competence gaps, that is, the gap between the competences needed to reach the expected performance and the competences already available in the organization, applied to the employees and trainees of the Driver Registration Coordination of DETRAN-RN. Using the survey research method, it was possible to analyze the academic, technical and emotional competences, both individual and team-level, of the employees and trainees, identifying the levels of competence gap in that sector of the organization, suggesting a training plan and a competence level for each sector of the coordination, and proposing a knowledge management model to help manage the identified gaps.
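A minimal sketch of the competence-gap (GAP) idea described above, assuming competences rated on a simple ordinal scale (the competences and scores are hypothetical, not those surveyed at DETRAN-RN):

```python
# Gap = level required for the expected performance - level currently available.
required = {"legislation": 4, "customer service": 5, "information systems": 4, "teamwork": 4}
available = {"legislation": 3, "customer service": 4, "information systems": 2, "teamwork": 4}

gaps = {c: required[c] - available[c] for c in required}
priorities = sorted(gaps.items(), key=lambda item: item[1], reverse=True)

for competence, gap in priorities:
    action = "train" if gap > 0 else "maintain"
    print(f"{competence}: gap = {gap} -> {action}")
```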

Relevance:

10.00%

Publisher:

Abstract:

The bidimensional periodic structures called frequency selective surfaces have been widely investigated because of their filtering properties. Similar to filters that work in the traditional radio-frequency bands, such structures can behave as band-stop or band-pass filters, depending on the elements of the array (patches or apertures, respectively), and can be used in a variety of applications, such as radomes, dichroic reflectors, waveguide filters, artificial magnetic conductors and microwave absorbers. To provide high-performance filtering properties at microwave bands, electromagnetic engineers have investigated various types of periodic structures: reconfigurable frequency selective screens, multilayered selective filters, as well as periodic arrays printed on anisotropic dielectric substrates and composed of fractal elements. In general, there is no closed-form solution leading directly from a desired frequency response to a corresponding device; thus, the analysis of its scattering characteristics requires the application of rigorous full-wave techniques. Besides that, due to the computational complexity of using a full-wave simulator to evaluate the frequency selective surface scattering variables, many electromagnetic engineers still use a trial-and-error process until a given design criterion is achieved. As this procedure is very laborious and human dependent, optimization techniques are required to design practical periodic structures with the desired filter specifications. Some authors have employed neural networks and natural optimization algorithms, such as genetic algorithms and particle swarm optimization, for frequency selective surface design and optimization. This work aims at a rigorous study of the electromagnetic behavior of periodic structures, enabling the design of efficient devices for the microwave band. For this, artificial neural networks are used together with natural optimization techniques, allowing the accurate and efficient investigation of various types of frequency selective surfaces in a simple and fast manner, becoming a powerful tool for the design and optimization of such structures.
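As a hedged sketch of the design loop described above, a particle swarm optimizer can search the FSS geometry for a target resonance using a surrogate of the full-wave response; here the trained neural network is replaced by an illustrative closed-form stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_resonance_ghz(params):
    """Stand-in for a trained ANN mapping FSS patch dimensions (mm) to the
    resonant frequency (GHz). A real design would use a network trained on
    full-wave simulations; this closed form is purely illustrative."""
    length, width = params
    return 150.0 / (length + 0.35 * width)

def cost(params, target_ghz=10.0):
    return (surrogate_resonance_ghz(params) - target_ghz) ** 2

# Particle swarm optimization over (length, width) in millimetres.
n_particles, n_iter = 20, 100
lo, hi = np.array([5.0, 2.0]), np.array([20.0, 10.0])
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    c = np.array([cost(p) for p in pos])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("best (length, width) mm:", gbest, "->", surrogate_resonance_ghz(gbest), "GHz")
```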

Relevance:

10.00%

Publisher:

Abstract:

This thesis proposes the specification and performance analysis of a real-time communication mechanism for the IEEE 802.11/11e standard. This approach is called Group Sequential Communication (GSC). The GSC performs better than the HCCA mechanism when dealing with small data packets, by adopting decentralized medium access control with a publish/subscribe communication scheme. The main objective of the thesis is to reduce the HCCA overhead caused by the Polling, ACK and QoS Null frames exchanged between the Hybrid Coordinator and the polled stations. The GSC eliminates the polling scheme used by the HCCA scheduling algorithm through a Virtual Token Passing procedure among the members of the real-time group, which is granted high-priority, sequential access to the communication medium. In order to improve the reliability of the proposed mechanism in a noisy channel, an error recovery scheme called the second chance algorithm is presented. This scheme is based on a block acknowledgment strategy that allows the retransmission of missing real-time messages. Thus, the GSC mechanism sustains real-time traffic across many IEEE 802.11/11e devices, with optimized bandwidth usage and minimal delay variation for data packets in the wireless network. To validate the communication scheme, the GSC and HCCA mechanisms were implemented in network simulation software developed in C/C++ and their performance results were compared. The experiments show the efficiency of the GSC mechanism, especially in industrial communication scenarios.
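A highly simplified, hedged sketch of the Virtual Token Passing idea (timing, EDCA access and the second chance retransmission are omitted; the station names are hypothetical):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Station:
    name: str
    queue: List[str] = field(default_factory=list)

def gsc_round(group: List[Station]):
    """One cycle of a (much simplified) Virtual Token Passing scheme: every
    member of the real-time group transmits in a fixed, known order, so no
    Poll / ACK / QoS Null exchange with a central coordinator is needed."""
    delivered = []
    for token_holder in group:            # the 'token' is just the position in the group
        if token_holder.queue:
            delivered.append((token_holder.name, token_holder.queue.pop(0)))
        # otherwise the slot is skipped and the token implicitly moves on
    return delivered

group = [Station("sensor-1", ["s1:sample-17"]),
         Station("sensor-2", []),
         Station("actuator-1", ["a1:setpoint-ok"])]
print(gsc_round(group))
```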

Relevance:

10.00%

Publisher:

Abstract:

Most algorithms for state estimation based on the classical model are only adequate for use in transmission networks. Few algorithms have been developed specifically for distribution systems, probably because of the small amount of data available in real time. Most overhead feeders have only current and voltage measurements at the medium-voltage bus-bar at the substation. Thus, classical algorithms are difficult to implement, even considering off-line acquired data as pseudo-measurements. However, the need to automate the operation of distribution networks, mainly with regard to the selectivity of protection systems and to making load transfer maneuvers possible, is changing network planning policy. Accordingly, equipment incorporating telemetry and command modules has been installed to improve operational features, increasing the amount of measurement data available in real time at the System Operation Center (SOC). This encourages the development of a state estimator model involving real-time information and load pseudo-measurements built from typical power factors and utilization factors (demand factors) of distribution transformers. This work reports the development of a new state estimation method specific to radial distribution systems. The main algorithm of the method is based on the power summation load flow. The estimation is carried out piecewise, section by section of the feeder, going from the substation to the terminal nodes. For each section, a measurement model is built, resulting in a nonlinear overdetermined set of equations whose solution is obtained by the Gaussian normal equation. The estimated variables of a section are used as pseudo-measurements for the next section. In general, the measurement set for a generic section consists of pseudo-measurements of power flows and nodal voltages obtained from the previous section (or real-time measurements, if they exist), besides pseudo-measurements of injected powers for the power summations, whose functions are the load flow equations, assuming that the network can be represented by its single-phase equivalent. The great advantage of the algorithm is its simplicity and low computational effort. Moreover, the algorithm is very efficient with regard to the accuracy of the estimated values. Besides the power summation state estimator, this work shows how other algorithms could be adapted to provide state estimation of medium-voltage substations and networks, namely Schweppe's method and an algorithm based on current proportionality that is usually adopted for network planning tasks. Both estimators were implemented not only as alternatives to the proposed method, but also to obtain results that support its validation. Since in most cases no power measurement is performed at the beginning of the feeder, and this is required for implementing the power summation estimation method, a new algorithm for estimating the network variables at the medium-voltage bus-bar was also developed.
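As an illustration of the per-section solve mentioned above, the Gaussian normal equation for a weighted least-squares measurement model can be written as follows (the measurement matrix, values and weights are illustrative; in the thesis the measurement functions come from the power summation load flow):

```python
import numpy as np

def wls_normal_equation(H, z, weights):
    """Weighted least-squares solution of an overdetermined measurement model
    z ~ H x via the Gaussian normal equation: x = (H^T W H)^{-1} H^T W z."""
    W = np.diag(weights)
    G = H.T @ W @ H                      # gain matrix
    return np.linalg.solve(G, H.T @ W @ z)

# Three 'measurements' of two state variables (overdetermined by one),
# with real-time measurements weighted more heavily than pseudo-measurements.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
z = np.array([0.98, 0.41, 1.36])
weights = np.array([100.0, 100.0, 10.0])  # 1/variance, illustrative

print(wls_normal_equation(H, z, weights))
```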

Relevance:

10.00%

Publisher:

Abstract:

There are approaches that take advantage of unused computational resources in Internet nodes - users' machines. In recent years, peer-to-peer (P2P) networks have been gaining momentum, mainly due to their support for scalability and fault tolerance. However, current P2P architectures present some problems, such as node overhead due to message routing, a great number of node reconfigurations when the network topology changes, routing of traffic inside a specific network even when the traffic is not directed to a machine in that network, and the lack of a relationship between the proximity of nodes in the P2P overlay and their proximity in the IP network. Although some architectures use information about node distances in the IP network, they rely on methods that require dynamic information. In this work we propose a P2P architecture to fix the aforementioned problems. It is composed of three parts. The first part is a basic P2P architecture, called SGrid, which relates the position of nodes in the P2P network to their position in the IP network by assigning adjacent key regions to nodes of the same organization. The second part is a protocol called NATal (Routing and NAT application layer) that extends the basic architecture in order to relieve nodes of the responsibility of routing messages. The third part is a special kind of node, called LSP (Lightweight Super-Peer), which is responsible for maintaining the P2P routing table. In addition, this work also presents a simulator that validates the architecture and a module of the NATal protocol to be used in Linux routers.
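A hedged sketch of the key-assignment idea attributed to SGrid in the abstract, assuming a toy identifier space (the organization and node names are hypothetical):

```python
import hashlib

KEY_SPACE = 2 ** 16   # toy identifier space; real DHTs use 128- or 160-bit keys

def key_of(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % KEY_SPACE

def assign_adjacent_regions(organizations):
    """Each organization receives one contiguous slice of the key space,
    subdivided among its nodes, so that nodes of the same organization
    own adjacent key regions."""
    regions = {}
    slice_per_org = KEY_SPACE // len(organizations)
    for i, (org, nodes) in enumerate(organizations.items()):
        org_start = i * slice_per_org
        per_node = slice_per_org // len(nodes)
        for j, node in enumerate(nodes):
            start = org_start + j * per_node
            regions[node] = (start, start + per_node - 1)
    return regions

orgs = {"org-a.example": ["a1", "a2"], "org-b.example": ["b1", "b2", "b3"]}
print(assign_adjacent_regions(orgs))
print("key for 'file.iso' ->", key_of("file.iso"))
```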

Relevance:

10.00%

Publisher:

Abstract:

This work has as its main objective the study of arrays of microstrip antennas with superconducting rectangular patches. The phases and the radiation patterns are analyzed. A study of the main theories that explain the microscopic and macroscopic phenomena of superconductivity is presented. The BCS theory, the London equations and the Two Fluid Model are used in the application of superconductors to microstrip antennas and antenna arrays. Phased arrays are analyzed in linear and planar configurations. The array factors of these configurations are obtained, and the phase criteria and the spacing between the elements are examined in order to minimize the losses in the superconductor as compared with normal conductors. The new rectangular patch antenna consists of a superconducting material with a critical temperature of 233 K, whose formula is Tl5Ba4Ca2Cu9Oy, and is analyzed by the Transverse Transmission Line (TTL) method, developed by H. C. C. Fernandes, applied in the Fourier Transform Domain (FTD). The TTL is a full-wave method that obtains the electromagnetic fields in terms of the transverse components of the structure. The superconducting patch is included through the complex resistive boundary condition, using the impedance of the superconductor in the dyadic Green's function of the structure. Results are obtained for the resonant frequency as a function of the antenna parameters with the superconducting material, for the radiation patterns in the E-plane and H-plane, and for phased antenna arrays in linear and planar configurations, for different phase angles and different spacings between the elements.
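As a hedged illustration of the Two Fluid Model mentioned above (the material parameters below are generic placeholders, not those of Tl5Ba4Ca2Cu9Oy), the surface impedance of a superconducting patch can be compared with that of a normal conductor:

```python
import numpy as np

MU0 = 4e-7 * np.pi

def two_fluid_surface_impedance(freq_hz, temp_k, tc_k, lambda0_m, sigma_n):
    """Two-fluid model surface impedance of a thick superconducting film:
    sigma = sigma_n*(T/Tc)^4 - j/(omega*mu0*lambda(T)^2), Zs = sqrt(j*omega*mu0/sigma)."""
    omega = 2 * np.pi * freq_hz
    t4 = (temp_k / tc_k) ** 4
    lam = lambda0_m / np.sqrt(1 - t4)     # temperature-dependent penetration depth
    sigma = sigma_n * t4 - 1j / (omega * MU0 * lam ** 2)
    return np.sqrt(1j * omega * MU0 / sigma)

# Placeholder values: lambda0 = 200 nm, normal-state conductivity 1e6 S/m.
zs_sc = two_fluid_surface_impedance(10e9, temp_k=77, tc_k=233,
                                    lambda0_m=200e-9, sigma_n=1e6)
# Normal conductor (copper) for comparison: Zs = (1 + j) * sqrt(omega*mu0 / (2*sigma)).
omega = 2 * np.pi * 10e9
zs_cu = (1 + 1j) * np.sqrt(omega * MU0 / (2 * 5.8e7))
print("Rs (superconductor, 10 GHz):", zs_sc.real, "ohm")
print("Rs (copper, 10 GHz):       ", zs_cu.real, "ohm")
```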