929 results for two input two output
Abstract:
In this paper, we consider multiple-input multiple-output (MIMO) maximal ratio combining (MRC) systems and assess the system performance in terms of average symbol error probability (SEP), outage probability and ergodic capacity in double-correlated Rayleigh-and-Lognormal fading channels. In order to derive the receive and transmit correlation functions needed for the performance analysis, a three-dimensional (3D) MIMO mobile-to-mobile (M-to-M) channel model, which takes into account the effects of fast fading and shadowing, is used. Numerical results are provided to show the effects of system parameters, such as the maximum elevation angle of scatterers, the orientation angle of the antenna array in the x-y plane, the angle between the x-y plane and the antenna array orientation, and the degree of scattering in the x-y plane, on the system performance.
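As a rough numerical companion to the metrics listed above, the sketch below uses Monte Carlo simulation to estimate the ergodic capacity, outage probability and average BPSK SEP of a 2x2 MIMO MRC link over composite Rayleigh-lognormal fading; it assumes uncorrelated fading and illustrative parameter values, and does not reproduce the paper's double-correlated 3D M-to-M channel model.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)

# Assumed, illustrative parameters (not the paper's setup).
nt, nr = 2, 2
snr_db, shadow_sigma_db = 10.0, 4.0
trials, rate_threshold = 200_000, 2.0          # outage threshold in bit/s/Hz

snr = 10 ** (snr_db / 10)
H = (rng.normal(size=(trials, nr, nt)) + 1j * rng.normal(size=(trials, nr, nt))) / np.sqrt(2)
shadow = 10 ** (rng.normal(0.0, shadow_sigma_db, size=trials) / 10)   # lognormal power gain

# The post-combining SNR of MIMO MRC (beamforming on the dominant eigenmode)
# is proportional to the largest eigenvalue of H^H H.
lam_max = np.linalg.eigvalsh(np.conj(np.transpose(H, (0, 2, 1))) @ H)[:, -1].real
gamma = snr * shadow * lam_max

capacity = np.log2(1 + gamma)
print("ergodic capacity ≈ %.2f bit/s/Hz" % capacity.mean())
print("outage probability P[C < %.1f] ≈ %.4f" % (rate_threshold, (capacity < rate_threshold).mean()))
print("average BPSK SEP ≈ %.2e" % (0.5 * erfc(np.sqrt(gamma))).mean())
```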
Abstract:
An evidence-based review of the potential impact that the introduction of genetically-modified (GM) cereal and oilseed crops could have on the UK was carried out. The inter-disciplinary research project addressed the key research questions using scenarios for the uptake, or not, of GM technologies. This was followed by an extensive literature review, stakeholder consultation and financial modelling. The world area of canola (oilseed rape, OSR, low in both erucic acid in the oil and glucosinolates in the meal) was 34M ha in 2012, of which 27% was GM; Canada is the lead producer but it is also grown in the USA, Australia and Chile. Farm-level effects of adopting GM OSR include: lower production costs; higher yields and profits; and ease of farm management. Growing GM OSR instead of conventional OSR reduces both herbicide usage and environmental impact. Some 170M ha of maize was grown in the world in 2011, of which 28% was GM; the main producers are the USA, China and Brazil. Spain is the main EU producer of GM maize, although it is also grown widely in Portugal. Insect resistance (IR) and herbicide tolerance (HT) are the GM maize traits currently available commercially. Farm-level benefits of adopting GM maize are lower costs of production through reduced use of pesticides, and higher profits. GM maize adoption results in less pesticide usage than on conventional counterpart crops, leading to fewer residues in food and animal feed and allowing increased diversity of bees and other pollinators. In the EU, well-tried coexistence measures for growing GM crops in the proximity of conventional crops have avoided gene flow issues. Scientific evidence so far seems to indicate that there has been no environmental damage from growing GM crops. They may possibly even be beneficial to the environment, as they result in fewer pesticides and herbicides being applied and improved carbon sequestration from reduced tillage. A review of work on GM cereals relevant for the UK found input-trait work on: herbicide and pathogen tolerance; abiotic stress such as from drought or salinity; and yield traits under different field conditions. For output traits, work has mainly focussed on modifying the nutritional components of cereals and in connection with various enzymes, diagnostics and vaccines. Scrutiny of applications submitted for field trial testing of GM cereals found around 9000 applications in the USA, 15 in Australia and 10 in the EU since 1996. There have also been many patent applications and granted patents for GM cereals in the USA for both input and output traits; an indication of the scale of such work is the fact that in a 6-week period in the spring of 2013, 12 patents were granted relating to GM cereals. A dynamic financial model has enabled us to better understand and examine the likely performance of Bt maize and HT OSR for the south of the UK, if cultivation is permitted in the future. It was found that for continuous growing of Bt maize and HT OSR, unless there was pest pressure for the former and weed pressure for the latter, the seed premia and likely coexistence costs for a buffer zone between other crops would reduce the financial returns of the GM crops compared with their conventional counterparts. When modelling HT OSR in a four-crop rotation, it was found that gross margins increased significantly at the higher levels of such pest or weed pressure, particularly for farm businesses with larger fields where coexistence costs would be scaled down.
The impact of the supply of UK-produced GM crops on the wider supply chain was examined through an extensive literature review and widespread stakeholder consultation with the feed supply chain. The animal feed sector would benefit from cheaper supplies of raw materials if GM crops were grown and, in the future, it might also benefit from crops with enhanced nutritional profiles (such as higher protein levels) becoming available. This would also be beneficial to livestock producers, enabling lower production costs and higher margins. Whilst coexistence measures would result in increased costs, it is unlikely that these would cause substantial changes in the feed chain structure. Retailers were not concerned about a future increase in the amount of animal feed coming from GM crops. To conclude, we (the project team) feel that the adoption of currently available and appropriate GM crops in the UK in the years ahead would benefit farmers, consumers and the feed chain without causing environmental damage. Furthermore, unless British farmers are allowed to grow GM crops in the future, the competitiveness of farming in the UK is likely to decline relative to farming globally.
Abstract:
In multiple-input multiple-output (MIMO) radar systems, the transmitters emit orthogonal waveforms to increase the spatial resolution. New frequency hopping (FH) codes based on chaotic sequences are proposed. Chaotic sequences have the characteristics of good encryption, anti-jamming properties and anti-intercept capabilities. The main idea of chaotic FH is based on queuing theory. Owing to the sensitivity to initial conditions, these sequences can achieve good Hamming auto-correlation while also preserving good average correlation. Simulation results show that the proposed FH signals achieve lower autocorrelation sidelobe levels and peak cross-correlation levels as the number of iterations increases. Compared to LFM signals, this sequence has higher range-Doppler resolution.
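A generic sketch of how a chaotic map can be turned into a frequency-hopping code and evaluated by its Hamming correlations is shown below; the logistic map, slot quantization and code length are illustrative assumptions and do not reproduce the queuing-theory construction of the paper.

```python
import numpy as np

def chaotic_fh_code(x0, length, n_freqs, r=3.99, discard=200):
    """Generate a frequency-hopping code by iterating the logistic map x <- r*x*(1-x)
    and quantizing each state into one of n_freqs frequency slots."""
    x = x0
    for _ in range(discard):                # discard the transient so the orbit settles
        x = r * x * (1 - x)
    code = np.empty(length, dtype=int)
    for k in range(length):
        x = r * x * (1 - x)
        code[k] = int(x * n_freqs) % n_freqs
    return code

def hamming_correlation(a, b, shift):
    """Number of coinciding frequency slots between code a and a cyclic shift of code b."""
    return int(np.sum(a == np.roll(b, shift)))

c1 = chaotic_fh_code(0.3141, length=64, n_freqs=16)
c2 = chaotic_fh_code(0.3142, length=64, n_freqs=16)   # tiny change in the initial condition
print("max out-of-phase Hamming autocorrelation:",
      max(hamming_correlation(c1, c1, s) for s in range(1, 64)))
print("max Hamming cross-correlation:",
      max(hamming_correlation(c1, c2, s) for s in range(64)))
```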
Abstract:
The stratigraphic subdivision and correlation of dune deposits is difficult, especially when age dates are not available. A better understanding of the controls on the texture and composition of eolian sands is necessary to interpret ancient eolian sediments. The Imbituba-Jaguaruna coastal zone (Southern Brazil, 28-29 degrees S) stands out due to its four well-preserved Late Pleistocene (eolian generation 1) to Holocene eolian units (eolian generations 2, 3, and 4). In this study, we evaluate the grain-size and heavy-mineral characteristics of the Imbituba-Jaguaruna eolian units through statistical analysis of hundreds of sediment samples. Grain-size parameters and heavy-mineral content allow us to distinguish the Pleistocene from the Holocene units. The grain size displays a pattern of fining and better sorting from generation 1 (older) to 4 (younger), whereas the content of mechanically stable (dense and hard) heavy minerals decreases from eolian generation 1 to 4. The variation in grain size and heavy-mineral content records shifts in the origin and balance (input versus output) of eolian sediment supply, attributable mainly to relative sea-level changes. Dunefields subjected to relative sea-level lowstand conditions (eolian generation 1) are characterized by lower accumulation rates and intense post-depositional dissection by fluvial incision. Low accumulation rates favor deflation in the eolian system, which promotes concentration of denser and stable heavy minerals (increase of the ZTR index) as well as coarsening of eolian sands. Dissection involves the selective removal of finer sediments and less dense heavy minerals to the coastal source area. Under a high rate of relative sea-level rise and transgression (eolian generation 2), coastal erosion prevents deflation through high input of sediments to the coastal eolian source. This condition favors dunefield growth. Coastal erosion feeds sand from local sources to the eolian system, including sands from previous dunefields (eolian generation 1) and from drowned incised valleys. Therefore, dunefields corresponding to transgressive phases inherit the grain-size and heavy-mineral characteristics of previous dunefields, leading to selective enrichment of finer sands and lighter minerals. Eolian generations 3 and 4 developed during a regressive-progradational phase (Holocene relative sea-level highstand). The high rate of sediment supply during the highstand phase prevents deflation. The lack of coastal erosion favors sediment supply from distal sources (fluvial sediments rich in unstable heavy minerals). Thus, dunefields of transgressive and highstand systems tracts may be distinguished from dunefields of the lowstand systems tract through high rates of accumulation (low deflation) in the former. The sediment source of the transgressive dunefields (high input of previously deposited coastal sands) differs from that of the highstand dunefields (high input of fluvial distal sands). Based on this case study, we propose a general framework for the relation between relative sea level, sediment supply and the texture and mineralogy of eolian sediments deposited in siliciclastic wet coastal zones similar to the Imbituba-Jaguaruna coast.
Abstract:
Solar plus heat pump systems are often very complex in design, sometimes with special heat pump arrangements and control. Detailed heat pump models can therefore make system simulations very slow while still not giving accurate results compared to real heat pump performance in a system. The idea here is to start from a standard measured performance map of test points for a heat pump, according to EN 14825, and then determine characteristic parameters for a simplified correlation-based model of the heat pump. By plotting heat pump test data in different ways, including in power input and output form and not only as COP, a simple relation could be seen. By using the same methodology as in the EN 12975 QDT part of the collector test standard, it could be shown that a very simple model describes the heat pump test data very accurately, by identifying 4 parameters in the correlation equation found.
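As a sketch of the parameter-identification step described above, the snippet below fits an assumed 4-parameter bilinear correlation for the heat output to a few hypothetical EN 14825-style test points using ordinary least squares; both the test values and the exact form of the correlation are illustrative assumptions, not the equation identified in the paper.

```python
import numpy as np

# Hypothetical EN 14825-style test points:
# (source temp degC, sink temp degC, electrical input kW, heat output kW)
tests = np.array([
    [-7.0, 35.0, 2.1, 5.6],
    [ 2.0, 35.0, 1.9, 6.4],
    [ 7.0, 35.0, 1.8, 7.1],
    [12.0, 35.0, 1.7, 7.8],
    [ 7.0, 45.0, 2.2, 6.6],
    [ 7.0, 55.0, 2.7, 6.1],
])
t_src, t_sink, p_el, q_heat = tests.T

# Assumed 4-parameter correlation (a sketch, not the paper's identified equation):
# q_heat ~ a0 + a1*t_src + a2*t_sink + a3*t_src*t_sink
A = np.column_stack([np.ones_like(t_src), t_src, t_sink, t_src * t_sink])
coef, *_ = np.linalg.lstsq(A, q_heat, rcond=None)

print("identified parameters:", np.round(coef, 4))
# Approximate COP at the 7/35 point: fitted heat output over measured electrical input.
print("predicted COP at 7/35:", round((A @ coef)[2] / p_el[2], 2))
```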
Abstract:
This thesis aims to identify the impacts of Information Technology (IT) investments on the strategic variables and on the efficiency of Brazilian banks. Several research methods and techniques were used to carry out the investigation: (1) interviews with executives to identify the role of IT in banks; (2) a survey of bank executives to select the organizational strategic variables on which the effects of IT are most significant; (3) interviews with executives to adapt the variables into inputs and outputs observable in balance sheet accounts; and (4) an Operations Research method to build an efficiency analysis model and apply the Data Envelopment Analysis (DEA) technique to evaluate the conversion effectiveness of the IT investments. The exploratory interviews with bank executives made it possible to identify how banks use IT and its role as a strategic tool. The validation and purification of the instrument (questionnaire) and of the constructs used in the survey relied on qualitative and quantitative procedures, such as face and content validity, card sorting, reliability analysis (Cronbach's alpha coefficient), corrected item-total correlation (CITC) analysis, exploratory factor analysis within and between blocks, and confirmatory factor analysis. The instrument was also validated externally with executives of American banks. From the final set of constructs, input and output variables observable in balance sheet accounts were identified in order to build and define the efficiency analysis model. The efficiency model is structured on the concept of conversion effectiveness, which assumes that IT investments, combined with other input variables (personnel expenses, other administrative expenses, and internationalization expenses), are transformed into outputs (net revenues from financial intermediation, from services, and from international operations). An additional feature of the model is its two-stage representation: IT investments generate revenue growth, but this relation is mediated by the accumulation of financial and non-financial assets. The balance sheet data of the 41 banks included in the sample, from 1995 to 1999, were provided by the Banco Central do Brasil (Central Bank of Brazil). Applying the model to the selected sample clearly indicates that merely investing in IT does not deliver effective efficiency. On the other hand, the banks that invested the most in IT over the period analyzed gained efficiency relative to the set of banks analyzed. Among the results of this thesis, the following stand out: the research model, the set of constructs and the instrument (questionnaire), the procedure for observing inputs and outputs in balance sheet accounts, and the efficiency analysis model.
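As a rough illustration of how a DEA efficiency score can be computed from balance-sheet-style inputs and outputs, the sketch below solves the standard input-oriented CCR envelopment linear program with scipy; the data are invented and this single-stage formulation does not reproduce the thesis's two-stage conversion-effectiveness model.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR (constant returns to scale) efficiency of unit j0.
    X: inputs, shape (m_inputs, n_units); Y: outputs, shape (s_outputs, n_units)."""
    m, n = X.shape
    s, _ = Y.shape
    c = np.r_[1.0, np.zeros(n)]                 # decision vars [theta, lambda_1..lambda_n]; minimize theta
    A_in = np.hstack([-X[:, [j0]], X])          # sum_j lambda_j * x_ij <= theta * x_i,j0
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # sum_j lambda_j * y_rj >= y_r,j0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Hypothetical data for 5 banks: inputs = (IT spend, personnel expenses),
# outputs = (net service revenue, net intermediation revenue); figures are illustrative only.
X = np.array([[120.,  80., 200., 150.,  60.],
              [300., 250., 500., 420., 180.]])
Y = np.array([[400., 260., 700., 520., 210.],
              [900., 610., 1500., 1100., 430.]])
for j in range(X.shape[1]):
    print(f"bank {j}: CCR efficiency = {ccr_efficiency(X, Y, j):.3f}")
```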
Abstract:
This thesis presents the study and development of fault-tolerant techniques for programmable architectures, the well-known Field Programmable Gate Arrays (FPGAs), customizable by SRAM. FPGAs are becoming more valuable for space applications because of their high density, high performance, reduced development cost and re-programmability. In particular, SRAM-based FPGAs are very valuable for remote missions because of the possibility of being reprogrammed by the user as many times as necessary in a very short period. SRAM-based FPGAs and micro-controllers represent a wide range of components in space applications, and as a result are the focus of this work, more specifically the Virtex® family from Xilinx and the architecture of the 8051 micro-controller from Intel. Triple Modular Redundancy (TMR) with voters is a common high-level technique to protect ASICs against single event upsets (SEUs) and it can also be applied to FPGAs. The TMR technique was first tested in the Virtex® FPGA architecture by using a small design based on counters. Faults were injected in all sensitive parts of the FPGA and a detailed analysis of the effect of a fault in a TMR design synthesized on the Virtex® platform was performed. Results from fault injection and from a radiation ground test facility showed the efficiency of the TMR for the related case-study circuit. Although TMR has shown high reliability, this technique presents some limitations, such as area overhead, three times more input and output pins and, consequently, a significant increase in power dissipation. Aiming to reduce TMR costs and improve reliability, an innovative high-level technique for designing fault-tolerant systems in SRAM-based FPGAs was developed, without modification of the FPGA architecture. This technique combines time and hardware redundancy to reduce overhead and to ensure reliability. It is based on duplication with comparison and concurrent error detection. The new technique proposed in this work was specifically developed for FPGAs to cope with transient faults in the user combinational and sequential logic, while also reducing pin count, area and power dissipation. The methodology was validated by fault injection experiments on an emulation board. The thesis presents comparative results in fault coverage, area and performance between the discussed techniques.
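As a toy illustration of the masking property that TMR provides against single event upsets, the following sketch triplicates an 8-bit register value, injects one bit flip into a single replica per cycle and applies a bitwise majority voter; it is a behavioural sketch, not the Virtex® design flow or fault-injection platform used in the thesis.

```python
import random

def tmr_vote(a, b, c):
    """Bitwise majority voter: each output bit takes the value held by at least two replicas."""
    return (a & b) | (a & c) | (b & c)

def cycle_with_upset(word, bit_to_flip, replica):
    """One clock cycle of a triplicated 8-bit register with a single event upset
    injected into one replica, followed by voting."""
    replicas = [word, word, word]
    replicas[replica] ^= 1 << bit_to_flip      # inject the SEU as a single bit flip
    return tmr_vote(*replicas)

random.seed(0)
errors = 0
for _ in range(10_000):
    word = random.randrange(256)
    out = cycle_with_upset(word, random.randrange(8), random.randrange(3))
    errors += (out != word)
print("uncorrected single upsets:", errors)    # expected 0: any single fault is masked
```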
Abstract:
In the last decade mobile wireless communications have witnessed explosive growth in user penetration rates and widespread deployment around the globe. It is expected that this tendency will continue with the convergence of fixed wired Internet networks with mobile ones and with the evolution to the full IP architecture paradigm. Mobile wireless communications will therefore be of paramount importance in the development of the information society of the near future. A research topic of particular relevance in telecommunications nowadays is the design and implementation of 4th generation (4G) mobile communication systems. 4G networks will be characterized by the support of multiple radio access technologies in a core network fully compliant with the Internet Protocol (all-IP paradigm). Such networks will sustain the stringent quality of service (QoS) requirements and the high data rates expected from the type of multimedia applications to be available in the near future. The approach followed in the design and implementation of the mobile wireless networks of the current generations (2G and 3G) has been the stratification of the architecture into a communication protocol model composed of a set of layers, each of which encompasses some set of functionalities. In such a layered protocol model, communication is only allowed between adjacent layers and through specific interface service points. This modular concept eases the implementation of new functionalities, as the behaviour of each layer in the protocol stack is not affected by the others. However, the fact that lower layers in the protocol stack do not utilize information available from upper layers, and vice versa, degrades the achievable performance. This is particularly relevant if multiple antenna systems, in a MIMO (Multiple Input Multiple Output) configuration, are implemented. MIMO schemes introduce another degree of freedom for radio resource allocation: the space domain. Contrary to the time and frequency domains, radio resources mapped into the spatial domain cannot be assumed to be completely orthogonal, due to the interference resulting from users transmitting in the same frequency sub-channel and/or time slots but in different spatial beams. Therefore, the availability of information regarding the state of the radio resources, from lower to upper layers, is of fundamental importance in achieving the levels of QoS expected from those multimedia applications. In order to match application requirements and the constraints of the mobile radio channel, in the last few years researchers have proposed a new paradigm for the layered communication architecture: the cross-layer design framework. In general, the cross-layer design paradigm refers to a protocol design in which the dependence between protocol layers is actively exploited, by breaking the stringent rules which restrict communication only to adjacent layers in the original reference model and allowing direct interaction among different layers of the stack. Efficient management of the set of available radio resources demands the implementation of efficient and low-complexity packet schedulers which prioritize users' transmissions according to inputs provided by lower as well as upper layers of the protocol stack, fully compliant with the cross-layer design paradigm.
Specifically, efficiently designed packet schedulers for 4G networks should maximize the available capacity, taking into consideration the limitations imposed by the mobile radio channel while complying with the set of QoS requirements from the application layer. The IEEE 802.16e standard, also known as Mobile WiMAX, seems to comply with the specifications of 4G mobile networks. Its scalable architecture, low-cost implementation and high data throughput enable efficient data multiplexing and low data latency, attributes essential to enable broadband data services. Also, the connection-oriented approach of its medium access layer is fully compliant with the quality of service demands of such applications. Therefore, Mobile WiMAX seems to be a promising candidate for 4G mobile wireless networks. This thesis proposes the investigation, design and implementation of packet scheduling algorithms for the efficient management of the set of available radio resources, in the time, frequency and spatial domains of Mobile WiMAX networks. The proposed algorithms combine input metrics from the physical layer and QoS requirements from upper layers, according to the cross-layer design paradigm. The proposed schedulers are evaluated by means of system-level simulations, conducted in a system-level simulation platform implementing the physical and medium access control layers of the IEEE 802.16e standard.
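As an illustration of a cross-layer scheduling metric that combines physical-layer channel state with upper-layer QoS requirements, the sketch below implements a generic QoS-weighted proportional-fair rule; the metric, weights and rates are assumptions for illustration and are not the algorithms proposed in the thesis.

```python
import random

def schedule_slot(users, alpha=0.9):
    """Pick the user for the next resource slot using a cross-layer metric:
    instantaneous rate reported by the PHY divided by the average served rate,
    weighted by a QoS priority coming from the upper layers."""
    def metric(u):
        return u["qos_weight"] * u["inst_rate"] / max(u["avg_rate"], 1e-9)
    chosen = max(users, key=metric)
    for u in users:
        served = u["inst_rate"] if u is chosen else 0.0
        u["avg_rate"] = alpha * u["avg_rate"] + (1 - alpha) * served   # moving-average update
    return chosen

random.seed(1)
users = [{"name": f"UE{i}", "qos_weight": w, "avg_rate": 1.0, "inst_rate": 0.0}
         for i, w in enumerate([1.0, 1.0, 2.0])]   # UE2 carries a higher-priority flow
for slot in range(5):
    for u in users:
        u["inst_rate"] = random.uniform(0.5, 10.0)  # per-slot achievable rate from the PHY
    print(f"slot {slot}: scheduled", schedule_slot(users)["name"])
```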
Abstract:
Tests on printed circuit boards and integrated circuits are widely used in industry, resulting in reduced design time and cost of a project. The functional and connectivity tests in this type of circuit soon became a concern for manufacturers, leading to research for a reliable, quick, cheap and universal test solution. Initially, test schemes were based on a set of needles connected to the inputs and outputs of the integrated circuit board (bed-of-nails), to which signals were applied in order to verify whether the circuit met the specifications and could be assembled in the production line. With the development of projects, circuit miniaturization, improvement of the production processes, improvement of the materials used, as well as the increase in the number of circuits, it became necessary to search for another solution. Thus Boundary-Scan Testing was developed, which operates on the border of integrated circuits and allows testing the connectivity of the input and output ports of a circuit. The Boundary-Scan Testing method was standardized in 1990 by the IEEE, becoming known as the IEEE 1149.1 Standard. Since then a large number of manufacturers have adopted this standard in their products. The main objective of this master's thesis is the design of Boundary-Scan Testing in an image sensor in CMOS technology: analyzing the standard requirements and the process used in the prototype production, developing the design and layout of the Boundary-Scan, and analyzing the results obtained after production. Chapter 1 briefly presents the evolution of testing procedures used in industry, developments and applications of image sensors, and the motivation for the use of the Boundary-Scan Testing architecture. Chapter 2 explores the fundamentals of Boundary-Scan Testing and image sensors, starting with the Boundary-Scan architecture defined in the Standard, where the functional blocks are analyzed. This understanding is necessary to implement the design on an image sensor. It also explains the architecture of image sensors currently used, focusing on sensors with a large number of inputs and outputs. Chapter 3 describes the design of the implemented Boundary-Scan, analysing the design and functions of the prototype, the software used, and the designs and simulations of the functional blocks of the implemented Boundary-Scan. Chapter 4 presents the layout process based on the design developed in Chapter 3, describing the software used for this purpose, the planning of the layout location (floorplan) and its dimensions, the layout of the individual blocks, the layout rule checks, the comparison with the final design and finally the simulation. Chapter 5 describes how the functional tests were performed to verify the design's compliance with the specifications of the IEEE 1149.1 Standard. These tests focused on the application of signals to the input and output ports of the produced prototype. Chapter 6 presents the conclusions drawn throughout the execution of the work.
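The snippet below is a minimal behavioural sketch of the boundary-scan register idea in IEEE 1149.1: pin values are captured, shifted out through TDO while a new pattern is shifted in from TDI, and then applied to the pins on update. The TAP state machine, instruction register and the thesis's CMOS image-sensor implementation are deliberately omitted.

```python
class BoundaryScanChain:
    """Toy model of an IEEE 1149.1 boundary-scan register (capture / shift / update)."""

    def __init__(self, n_cells):
        self.shift_reg = [0] * n_cells    # one scan cell per pin
        self.update_reg = [0] * n_cells   # values driven onto the pins

    def capture(self, pin_values):
        self.shift_reg = list(pin_values)             # CAPTURE-DR: sample the pins

    def shift(self, tdi_bits):
        tdo_bits = []
        for tdi in tdi_bits:                          # SHIFT-DR: one TCK per bit
            tdo_bits.append(self.shift_reg[-1])       # bit closest to TDO leaves first
            self.shift_reg = [tdi] + self.shift_reg[:-1]
        return tdo_bits

    def update(self):
        self.update_reg = list(self.shift_reg)        # UPDATE-DR: drive the pins

chain = BoundaryScanChain(4)
chain.capture([1, 0, 1, 1])                 # sample the chip's pins
print("TDO:", chain.shift([0, 1, 0, 0]))    # read captured values while loading a test pattern
chain.update()
print("pins now driven with:", chain.update_reg)
```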
Abstract:
This research aims to develop a neural predictive control structure to control a pH process, characterized as a SISO (Single Input - Single Output) system. pH control is a process of great importance in the petrochemical industry, where one wishes to keep the acidity level of a product constant or to neutralize the influent of a fluid treatment plant. The pH control process demands robustness from the control system, since this process can have nonlinear static gain and dynamics. The neural predictive controller involves two other theories in its development, the first concerning predictive control and the other artificial neural networks (ANNs). This controller can be divided into two blocks, one responsible for identification and the other for computing the control signal. Neural identification uses an ANN with a multilayer feedforward architecture trained with the Error Back Propagation methodology. Offline training of the network starts from plant input and output data. In this way, the synaptic weights are adjusted and the network becomes able to represent the system with the highest possible precision. The generated neural model is used to predict the future outputs of the system; the optimizer then computes a sequence of control actions by minimizing a quadratic objective function, making the process output follow a desired reference signal. Two applications were developed, both on the Builder C++ platform: the first performs the identification via neural networks and the second is responsible for the process control. The tools implemented and applied here are generic, and both allow the control structure to be applied to any new process.
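A minimal sketch of the two-block structure described above (neural identification followed by minimization of a quadratic cost) is shown below in Python rather than Builder C++; the plant model, network size and one-step horizon are illustrative assumptions, not the thesis's pH process or controller settings.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical nonlinear SISO plant standing in for the pH process:
# y[k+1] = 0.8*y[k] + 0.4*tanh(u[k])
def plant(y, u):
    return 0.8 * y + 0.4 * np.tanh(u)

# 1) Identification block: train a feedforward network offline on recorded input/output pairs.
rng = np.random.default_rng(0)
y, rows, targets = 0.0, [], []
for _ in range(2000):
    u = rng.uniform(-3, 3)
    rows.append([y, u]); targets.append(plant(y, u))
    y = targets[-1]
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(rows, targets)

# 2) Control block: at each step pick the input minimizing a one-step quadratic cost
#    J = (r - y_pred)^2 + lam*u^2 over a grid of candidate control actions.
def control_step(y_now, r, lam=0.01):
    candidates = np.linspace(-3, 3, 121)
    preds = model.predict(np.column_stack([np.full_like(candidates, y_now), candidates]))
    costs = (r - preds) ** 2 + lam * candidates ** 2
    return candidates[np.argmin(costs)]

y, r = 0.0, 0.3    # drive the (normalized) output towards the reference r
for k in range(10):
    u = control_step(y, r)
    y = plant(y, u)
    print(f"k={k}: u={u:+.2f}  y={y:.3f}")
```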
Abstract:
Uncertain systems have attracted much attention from the academic community, from the standpoint of both scientific research and practical applications. A series of mathematical approaches has emerged to deal with the uncertainties of real physical systems. In this context, the work presented here focuses on applying control theory to a nonlinear dynamical system with parametric variations, aiming at robustness. As the practical application of this work, we used a Quanser coupled-tank system in a configuration whose mathematical model is represented by a second-order single-input single-output (SISO) system. Control is performed by PID controllers, designed by various techniques, aiming to achieve robust performance and stability when subjected to parameter variations. Other controllers are designed with the intention of comparing the performance and robust stability of such systems. The results are obtained and compared through simulations in MATLAB/Simulink.
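The sketch below simulates a discrete PID controller on a generic second-order SISO plant and re-runs it with a perturbed parameter, illustrating the kind of robustness check described above; the plant, gains and perturbation are assumptions and do not correspond to the Quanser coupled-tank model or the tuning techniques used in the work.

```python
import numpy as np

def simulate_pid(kp, ki, kd, wn=1.0, zeta=0.7, setpoint=1.0, dt=0.01, t_end=20.0):
    """Discrete PID acting on a second-order plant y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u,
    integrated with a simple Euler scheme."""
    y = dy = integral = prev_err = 0.0
    ys = []
    for _ in np.arange(0.0, t_end, dt):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        prev_err = err
        ddy = wn**2 * u - 2 * zeta * wn * dy - wn**2 * y   # plant dynamics
        dy += ddy * dt
        y += dy * dt
        ys.append(y)
    return np.array(ys)

# Same PID gains under nominal and perturbed plant parameters (+30% natural frequency).
for wn in (1.0, 1.3):
    y = simulate_pid(kp=2.0, ki=1.0, kd=1.0, wn=wn)
    print(f"wn={wn}: final value={y[-1]:.3f}, overshoot={y.max() - 1.0:+.3f}")
```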
Abstract:
The increasing number of attacks on computer networks has been addressed by adding resources directly to the active routing equipment of these networks. In this context, firewalls have been consolidated as essential elements in controlling the input and output of packets in a network. With the advent of intrusion detection systems (IDS), efforts have been made to incorporate signature-based packet filtering, until then a passive element, into traditional firewalls. This integration adds the IDS functions to the functions already existing in the firewall. Despite the efficiency of this incorporation in blocking attacks with known signatures, application-level filtering introduces a natural delay in the analyzed packets and can reduce the machine's capacity to filter the remaining packets, because of the resources this level of filtering demands. This work presents models for treating this problem based on re-routing packets for analysis by a sub-network with specific filters. The suggested implementation of this model aims to reduce the performance problem and to open space for scenarios where other non-conventional filtering solutions (spam blocking, P2P traffic control/blocking, etc.) can be inserted in the filtering sub-network without overloading the main firewall of a corporate network.
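A minimal sketch of the re-routing idea, in which the main firewall fast-paths ordinary traffic and dispatches packets that need heavier, application-level analysis to dedicated nodes of a filtering sub-network, is shown below; the categories and port numbers are illustrative assumptions, not the models proposed in the work.

```python
from collections import defaultdict

# Hypothetical re-routing policy: anything needing deeper (application-level) inspection
# is handed to a dedicated node of the filtering sub-network.
DEEP_INSPECTION = {
    25: "spam-filter",        # SMTP traffic -> anti-spam node
    6881: "p2p-filter",       # BitTorrent-range port -> P2P control node
    80: "ids-signatures",     # HTTP -> signature-based IDS node
}

def route(packet):
    """Return the sub-network node for the packet, or the firewall fast path."""
    return DEEP_INSPECTION.get(packet["dst_port"], "fast-path")

queues = defaultdict(list)
for pkt in [{"src": "10.0.0.5", "dst_port": 80},
            {"src": "10.0.0.7", "dst_port": 25},
            {"src": "10.0.0.9", "dst_port": 443},
            {"src": "10.0.0.3", "dst_port": 6881}]:
    queues[route(pkt)].append(pkt)

for node, pkts in queues.items():
    print(node, "->", [p["src"] for p in pkts])
```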
Abstract:
In recent years, the predictive control technique has gained a growing number of adherents because of the ease of adjusting its parameters, the extension of its concepts to multi-input/multi-output (MIMO) systems, the fact that nonlinear process models can be linearised around an operating point and thus used directly in the controller, and mainly because it is the only methodology that can take into account, during controller design, the limitations on the control signals and the process output. Time-varying weighting generalized predictive control (TGPC), studied in this work, is one more alternative among the several existing predictive controllers. It is a modification of generalized predictive control (GPC) that uses a reference model, calculated according to design parameters previously established by the designer, and a new criterion function which, when minimized, yields the best parameters for the controller. The genetic algorithm technique is used to minimize the proposed criterion function, and the robustness of the TGPC is demonstrated through the application of performance, stability and robustness criteria. To compare the results achieved with the TGPC controller, GPC and proportional-integral-derivative (PID) controllers are used, with all the techniques applied to stable, unstable and non-minimum-phase plants. The simulated examples are carried out using MATLAB. It is verified that the alterations implemented in the TGPC demonstrate the efficiency of this algorithm.
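To illustrate the use of a genetic algorithm for minimizing a predictive criterion function, the sketch below evolves a control sequence that minimizes a quadratic tracking cost for a simple first-order plant; the plant, cost and GA operators are assumptions for illustration, not the TGPC reference model or the criterion proposed in the work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical first-order plant y[k+1] = 0.9*y[k] + 0.2*u[k].
def cost(u_seq, y0=0.0, ref=1.0, lam=0.05):
    """Quadratic predictive criterion: tracking error plus weighted control effort."""
    y, j = y0, 0.0
    for u in u_seq:
        y = 0.9 * y + 0.2 * u
        j += (ref - y) ** 2 + lam * u ** 2
    return j

def ga_minimize(horizon=5, pop_size=40, gens=60, u_max=5.0):
    pop = rng.uniform(-u_max, u_max, size=(pop_size, horizon))
    for _ in range(gens):
        fitness = np.array([cost(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]       # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, horizon)
            child = np.r_[a[:cut], b[cut:]]                        # one-point crossover
            child += rng.normal(0, 0.2, size=horizon)              # Gaussian mutation
            children.append(np.clip(child, -u_max, u_max))         # respect control limits
        pop = np.vstack([parents, children])
    return pop[np.argmin([cost(ind) for ind in pop])]

best_u = ga_minimize()
print("best control sequence:", np.round(best_u, 2), " cost:", round(cost(best_u), 4))
```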
Abstract:
Relevant research has been growing on electric machines without bearings, generally named bearingless motors. This work makes an introductory presentation of the bearingless motor and its peripheral devices, focusing on the design and implementation of the sensors and interfaces needed to control the rotor radial position and the rotation of the machine. The signals from the machine are conditioned for the analog inputs of the TMS320F2812 DSP and used in the control program. The purpose of this work is to design and build a system of sensors and interfaces suitable for the input and output of the TMS320F2812 DSP to control a bearingless motor, bearing in mind modularity, simplicity of circuits, low power consumption, good noise immunity and a frequency response above 10 kHz. The system is tested on an ordinary 3.7 kVA induction motor modified to be used as a bearingless motor with split windings.
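A simple sketch of the signal-conditioning arithmetic involved in bringing a bipolar position-sensor signal into the unipolar input range of a 12-bit DSP ADC, and recovering engineering units in the control program, is given below; the sensor span and sensitivity are assumed values, not those of the hardware built in the work.

```python
# Assumed values: a +-10 V sensor signal corresponding to +-1 mm of radial displacement,
# conditioned into the 0..3 V, 12-bit ADC range of a TMS320F2812-class DSP.
V_ADC_MAX = 3.0          # ADC full-scale input voltage
ADC_BITS = 12
SENSOR_V_SPAN = 20.0     # -10 V .. +10 V at the conditioning amplifier input
MM_PER_VOLT = 0.1        # assumed sensor sensitivity: 1 mm per 10 V

def condition(sensor_v):
    """Analog front end: offset and attenuate the bipolar signal into 0..3 V."""
    return (sensor_v + SENSOR_V_SPAN / 2) * (V_ADC_MAX / SENSOR_V_SPAN)

def adc_read(v_in):
    """Ideal 12-bit conversion of the conditioned voltage."""
    code = round(v_in / V_ADC_MAX * (2 ** ADC_BITS - 1))
    return max(0, min(2 ** ADC_BITS - 1, code))

def to_displacement_mm(code):
    """Inverse mapping done in the control program to recover rotor displacement."""
    v_in = code / (2 ** ADC_BITS - 1) * V_ADC_MAX
    sensor_v = v_in * (SENSOR_V_SPAN / V_ADC_MAX) - SENSOR_V_SPAN / 2
    return sensor_v * MM_PER_VOLT

for v in (-10.0, -2.5, 0.0, 7.3):
    code = adc_read(condition(v))
    print(f"sensor {v:+.1f} V -> ADC code {code:4d} -> {to_displacement_mm(code):+.3f} mm")
```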
Abstract:
Self-organizing maps (SOM) are artificial neural networks widely used in the data mining field, mainly because they constitute a dimensionality reduction technique, given the fixed grid of neurons associated with the network. In order to properly partition and visualize the SOM network, the various methods available in the literature must be applied in a post-processing stage, which consists of inferring, through its neurons, relevant characteristics of the data set. In general, such processing applied to the network neurons, instead of the entire database, reduces the computational cost thanks to vector quantization. This work proposes a post-processing of the SOM neurons in the input and output spaces, combining visualization techniques with algorithms based on gravitational forces and the search for the shortest path with the greatest reward. Such methods take into account the connection strength between neighbouring neurons and characteristics of pattern density and distances among neurons, both associated with the position the neurons occupy in the data space after training the network. Thus, the goal is to define more clearly the arrangement of the clusters present in the data. Experiments were carried out to evaluate the proposed methods using various artificially generated data sets, as well as real-world data sets. The results obtained were compared with those from a number of well-known methods existing in the literature.
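As a rough illustration of post-processing SOM neurons in the output space, the sketch below trains a minimal SOM on toy data and computes a U-matrix-like mean distance between neighbouring neurons' weight vectors; this is a standard visualization aid, not the gravitational-force or shortest-path methods proposed in the work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian clusters in 2-D.
data = np.vstack([rng.normal(0.0, 0.3, size=(200, 2)),
                  rng.normal(2.0, 0.3, size=(200, 2))])

# Minimal SOM training on a fixed 8x8 grid.
rows, cols = 8, 8
weights = rng.uniform(data.min(), data.max(), size=(rows, cols, 2))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
samples = np.vstack([rng.permutation(data) for _ in range(3)])    # three shuffled passes
for t, x in enumerate(samples):
    lr = 0.5 * np.exp(-t / 400)                                   # decaying learning rate
    sigma = 3.0 * np.exp(-t / 400)                                # decaying neighbourhood radius
    bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (rows, cols))
    h = np.exp(-((grid - bmu) ** 2).sum(-1) / (2 * sigma ** 2))   # neighbourhood kernel
    weights += lr * h[..., None] * (x - weights)

# Post-processing in the output space: mean distance to grid neighbours per neuron.
umatrix = np.zeros((rows, cols))
for i in range(rows):
    for j in range(cols):
        neigh = [weights[i + di, j + dj]
                 for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                 if 0 <= i + di < rows and 0 <= j + dj < cols]
        umatrix[i, j] = np.mean([np.linalg.norm(weights[i, j] - w) for w in neigh])

# Larger values tend to mark borders between clusters on the map.
print(np.round(umatrix, 2))
```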