929 results for two input two output


Relevance:

100.00%

Publisher:

Abstract:

Final Master's project submitted for the degree of Master in Electronics and Telecommunications Engineering

Relevance:

100.00%

Publisher:

Abstract:

Energy consumption at petroleum refineries is very high, and furnaces are the equipment that contributes most to this consumption. In this study, an energy assessment and optimization of the furnaces of the Fábrica de Aromáticos (Aromatics Plant) of the Refinaria de Matosinhos was carried out. In a first phase, an exhaustive survey of data on all input and output streams of the equipment was performed, so that mass and energy balances could subsequently be drawn up for each furnace. The data survey covered two distinct operating periods of the plant: normal operation and start-up. The normal operating period covered January to September 2012, while the start-up period ran from December 2012 to March 2013. In the second phase, mass and energy balances were performed, quantifying all furnace input and output streams in mass and energy terms and allowing the thermal efficiency of the furnaces to be calculated in order to evaluate their performance. The energy assessment showed that more energy is consumed from the combustion of fuel gas than of fuel oil, both during normal operation and during start-up. Furnaces H0101, H0301 and H0471 have the highest consumption, accounting for more than 70% of the Aromatics Plant's consumption. In the third phase, two measures were proposed for the energy optimization of the three furnaces with the highest energy consumption: monthly cleaning and the exclusive use of fuel gas as fuel. The energy savings obtained for monthly cleaning were 0.3% for furnace H0101, 0.7% for furnace H0301 and 0.9% for furnace H0471. For the exclusive use of fuel gas, savings of 0.9% were obtained for furnace H0101 and 1.3% for furnaces H0301 and H0471. The economic analysis of the proposed fuel change shows that operating costs would increase by €621,679 per year. Despite the higher costs, the 24% reduction in carbon dioxide emissions may justify this additional expense.
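The thermal efficiency calculation mentioned above reduces to simple arithmetic once the balances are drawn up. A minimal sketch is given below, assuming efficiency is taken as the heat absorbed by the process stream divided by the heat released by fuel combustion; all function names, flows and heating values are illustrative, not thesis data.

# Hedged sketch: furnace thermal efficiency from a simplified energy balance.
# All stream names, flows and heating values are illustrative, not thesis data.

def furnace_thermal_efficiency(fuel_flows_kg_h, lhv_mj_kg, q_absorbed_mj_h):
    """Efficiency = heat absorbed by the process stream / heat released by the fuels."""
    q_fired = sum(m * lhv for m, lhv in zip(fuel_flows_kg_h, lhv_mj_kg))  # MJ/h
    return q_absorbed_mj_h / q_fired

# Example: a furnace burning fuel gas and fuel oil (made-up numbers).
eff = furnace_thermal_efficiency(
    fuel_flows_kg_h=[1200.0, 300.0],   # fuel gas, fuel oil
    lhv_mj_kg=[48.0, 40.0],            # lower heating values
    q_absorbed_mj_h=48_000.0,          # duty absorbed by the process stream
)
print(f"Thermal efficiency: {eff:.1%}")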

Relevance:

100.00%

Publisher:

Abstract:

In order to investigate the out-of-plane behaviour of masonry infill walls, quasi-static testing was performed on a masonry infill wall built inside a reinforced concrete frame, using an airbag system to apply a uniform out-of-plane load to each component of the infill. The main advantage of this test setup is that the out-of-plane loading is applied more uniformly over the wall than in a point-load configuration. The test was performed under displacement control, with the mid-point of the infill selected as the control point. Input and output air in the airbag was controlled by software so as to impose a specified displacement at the control point of the infill wall. The effect of the distance between the reaction frame of the airbag and the masonry infill on the effective contact area was analysed beforehand. Four load cells were attached to the reaction frame to measure the out-of-plane force. The effective contact area of the airbag was calculated by dividing the load measured by the load cells by the pressure inside the airbag. The smaller the distance between the reaction wall and the masonry infill wall, the closer the effective area is to the nominal area of the airbag. The deformation and crack patterns of the infill confirm the formation of an arching mechanism and two-way bending of the masonry infill. Until collapse of the horizontal interface between the infill and the upper beam of the RC frame, the infill bends in two directions; the failure of that interface, known as the weakest interface because of the difficulty of filling the mortar joint between the bricks of the last row and the upper beam, results in crack opening along a well-defined path and the consequent collapse of the infill.
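The effective-area calculation described above is a single division. A minimal sketch, with illustrative load-cell and pressure readings (not values from the test):

def effective_contact_area(load_cell_forces_kN, airbag_pressure_kPa):
    """Effective contact area = total measured out-of-plane force / airbag pressure."""
    total_force_kN = sum(load_cell_forces_kN)
    return total_force_kN / airbag_pressure_kPa  # kN / kPa = m^2

# Example with made-up readings from the four load cells and the airbag pressure.
area_m2 = effective_contact_area([3.1, 2.9, 3.0, 3.2], airbag_pressure_kPa=5.0)
print(f"Effective contact area: {area_m2:.2f} m^2")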

Relevance:

100.00%

Publisher:

Abstract:

Seismic investigations of typical south European masonry-infilled frames were performed by testing two reduced-scale specimens: one in the in-plane direction and another in the out-of-plane direction. Information about the geometry and reinforcement scheme of such structures, built in the 1980s, was obtained from [1]. The specimen tested in the in-plane direction was constructed as a double-leaf masonry wall, whereas the specimen tested in the out-of-plane direction was constructed with only its exterior leaf, since recent earthquakes have highlighted the vulnerability of the external leaf of infills in the out-of-plane direction [2]. The tests were performed by applying predefined displacement values at the control points in the in-plane and out-of-plane directions: a hydraulic actuator was used for the in-plane test and an airbag for the out-of-plane test. Input and output air in the airbag was controlled by software so as to impose a specified displacement at the control point of the infill wall; the mid-point of the infill was taken as the control point for the out-of-plane test. The deformation and crack patterns of the infill confirm the formation of a two-way arching mechanism of the masonry infill until collapse of the upper horizontal interface between infill and frame, which is known as the weakest interface because of the difficulty of filling the mortar joint between the bricks of the last row and the upper beam. This results in crack opening along a well-defined path and the consequent collapse of the infill.

Relevance:

100.00%

Publisher:

Abstract:

We focus on full-rate, fast-decodable space–time block codes (STBCs) for 2×2 and 4×2 multiple-input multiple-output (MIMO) transmission. We first derive conditions and design criteria for reduced-complexity maximum-likelihood (ML) decodable 2×2 STBCs, and we apply them to two families of codes that were recently discovered. Next, we derive a novel reduced-complexity 4×2 STBC, and show that it outperforms all previously known codes with certain constellations.
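The abstract does not spell out the codes themselves; as a hedged baseline illustration of what fast (symbol-by-symbol) ML decodability means for a 2×2 STBC, the classical Alamouti code can be used. Note that Alamouti is rate 1, not full rate, so it is only a reference point, not one of the codes proposed in the paper.

import numpy as np

def alamouti_codeword(s1, s2):
    """Illustrative helper: build the 2x2 Alamouti codeword.
    Rows are transmit antennas, columns are the two symbol periods."""
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

# The columns are orthogonal, so the ML metric decouples into one term per symbol;
# that per-symbol (fast) decodability is the property full-rate designs try to retain.
X = alamouti_codeword(1 + 1j, -1 + 1j)
print(np.allclose(X.conj().T @ X, (abs(1 + 1j) ** 2 + abs(-1 + 1j) ** 2) * np.eye(2)))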

Relevance:

100.00%

Publisher:

Abstract:

Multiple-input multiple-output (MIMO) techniques have become an essential part of broadband wireless communication systems. For example, the recently developed IEEE 802.16e specifications for broadband wireless access include three MIMO profiles employing 2×2 space-time codes (STCs), and two of these MIMO schemes are mandatory on the downlink of Mobile WiMAX systems. One of these has full rate and the other has full diversity, but neither has both of the desired features. The third profile, namely Matrix C, which is not mandatory, is both a full-rate and a full-diversity code, but it has high decoder complexity. Recently, attention has turned to the decoder-complexity issue and, including it in the design criteria, several full-rate STCs have been proposed as alternatives to Matrix C. In this paper, we review these alternatives and compare them with Matrix C in terms of performance and the corresponding receiver complexities.
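As a point of reference, the two mandatory 2×2 downlink profiles are commonly identified as Matrix A (the Alamouti code, rate 1, full diversity) and Matrix B (spatial multiplexing, rate 2, full rate). The sketch below summarizes those commonly cited codeword structures; it is an assumption based on standard descriptions of IEEE 802.16e, not a statement taken from the paper:

\[
\mathbf{A} = \begin{pmatrix} s_1 & -s_2^{*} \\ s_2 & s_1^{*} \end{pmatrix},
\qquad
\mathbf{B} = \begin{pmatrix} s_1 \\ s_2 \end{pmatrix},
\]

where rows correspond to the two transmit antennas; Matrix A spans two symbol periods, while Matrix B transmits two independent symbols in a single symbol period. Matrix C (not shown) achieves both full rate and full diversity at the cost of a joint, higher-complexity ML search.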

Relevance:

100.00%

Publisher:

Abstract:

This study analyzes the impact of price shocks in three input and output markets critical to ethanol: gasoline, corn, and sugar. We investigate the impact of these shocks on ethanol and related agricultural markets in the United States and Brazil. We find that the composition of a country’s vehicle fleet determines the direction of the response of ethanol consumption to changes in the gasoline price. We also find that a change in feedstock costs affects the profitability of ethanol producers and the domestic ethanol price. In Brazil, where two commodities compete for sugarcane, changes in the sugar market affect the competing ethanol market.

Relevance:

100.00%

Publisher:

Abstract:

Measuring school efficiency is a challenging task. First, a performance measurement technique has to be selected. Within Data Envelopment Analysis (DEA), one such technique, alternative models have been developed in order to deal with environmental variables. The majority of these models lead to diverging results. Second, the choice of input and output variables to be included in the efficiency analysis is often dictated by data availability. The choice of variables remains an issue even when data are available. As a result, the choice of technique, model and variables is probably, and ultimately, a political judgement. Multi-criteria decision analysis methods can help decision-makers to select the most suitable model. The number of selection criteria should remain parsimonious and should not be oriented towards the results of the models, in order to avoid opportunistic behaviour. The selection criteria should also be backed by the literature or by an expert group. Once the most suitable model is identified, the principle of permanence of methods should be applied in order to avoid a change of practices over time. Within DEA, the two-stage model developed by Ray (1991) is the most convincing model that allows for an environmental adjustment. In this model, an efficiency analysis is conducted with DEA, followed by an econometric analysis to explain the efficiency scores (sketched below). An environmental variable of particular interest, tested in this thesis, is the fact that certain schools operate on multiple sites. Results show that being located on more than one site has a negative influence on efficiency. A likely way to mitigate this negative influence would be to improve the use of ICT in school management and teaching. The planning of new schools should also consider the advantages of being located on a single site, which allows a critical size to be reached in terms of pupils and teachers. The fact that underprivileged pupils perform worse than privileged pupils has been public knowledge since Coleman et al. (1966). As a result, underprivileged pupils have a negative influence on school efficiency. This is confirmed by this thesis for the first time in Switzerland. Several countries have developed priority education policies in order to compensate for the negative impact of disadvantaged socioeconomic status on school performance. These policies have failed. As a result, other actions need to be taken. In order to define these actions, one has to identify the social-class differences which explain why disadvantaged children underperform. Childrearing and literacy practices, health characteristics, housing stability and economic security influence pupil achievement. Rather than allocating more resources to schools, policymakers should therefore focus on related social policies. For instance, they could define pre-school, family, health, housing and benefits policies in order to improve the conditions for disadvantaged children.
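Ray's two-stage approach, as summarized above, can be sketched as follows: stage one solves an input-oriented DEA linear program per school to obtain an efficiency score, and stage two regresses the scores on an environmental variable such as a multi-site dummy. This is a minimal, hedged sketch with illustrative data; the variable set, the constant-returns-to-scale assumption and the regression specification are assumptions, not the thesis specification.

import numpy as np
from scipy.optimize import linprog

def dea_input_oriented(X, Y):
    """Input-oriented CCR efficiency scores. X: inputs (n_dmu x m), Y: outputs (n_dmu x s)."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: theta, lambda_1 .. lambda_n
        c = np.r_[1.0, np.zeros(n)]
        A_ub, b_ub = [], []
        for i in range(m):   # sum_j lambda_j * x_ij <= theta * x_io
            A_ub.append(np.r_[-X[o, i], X[:, i]])
            b_ub.append(0.0)
        for r in range(s):   # sum_j lambda_j * y_rj >= y_ro
            A_ub.append(np.r_[0.0, -Y[:, r]])
            b_ub.append(-Y[o, r])
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Illustrative data: inputs = [teachers, budget], output = [test score]; multi-site dummy.
X = np.array([[12, 1.0], [15, 1.3], [20, 1.8], [9, 0.8]])
Y = np.array([[520.0], [540.0], [530.0], [500.0]])
multi_site = np.array([0, 0, 1, 0])

theta = dea_input_oriented(X, Y)
# Stage two: OLS of efficiency scores on the environmental variable.
Z = np.c_[np.ones_like(theta), multi_site]
beta, *_ = np.linalg.lstsq(Z, theta, rcond=None)
print("efficiency scores:", np.round(theta, 3), "multi-site effect:", round(beta[1], 3))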

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we investigate the average and outage performance of spatial multiplexing multiple-input multiple-output (MIMO) systems with channel state information at both sides of the link. Such systems result, for example, from exploiting the channel eigenmodes in multiantenna systems. Due to the complexity of obtaining the exact expression for the average bit error rate (BER) and the outage probability, we derive approximations in the high signal-to-noise ratio (SNR) regime assuming an uncorrelated Rayleigh flat-fading channel. More exactly, capitalizing on previous work by Wang and Giannakis, the average BER and outage probability versus SNR curves of spatial multiplexing MIMO systems are characterized in terms of two key parameters: the array gain and the diversity gain. Finally, these results are applied to analyze the performance of a variety of linear MIMO transceiver designs available in the literature.
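The array-gain/diversity-gain characterization referred to above (following Wang and Giannakis) is usually written as the generic high-SNR approximation below; the specific gains for each transceiver are what the paper derives, so only the generic form is shown here:

\[
\bar{P}_e \approx \left( G_a \cdot \mathrm{SNR} \right)^{-G_d},
\]

where $G_a$ is the array gain (horizontal shift of the error curve) and $G_d$ is the diversity gain (asymptotic slope on a log-log scale). The same parametrization applies to the outage probability at high SNR.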

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a Bayesian approach to the design of transmit prefiltering matrices in closed-loop schemes robust to channel estimation errors. The algorithms are derived for a multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) system. Two different optimization criteria are analyzed: the minimization of the mean square error and the minimization of the bit error rate. In both cases, the transmitter design is based on the singular value decomposition (SVD) of the conditional mean of the channel response, given the channel estimate. The performance of the proposed algorithms is analyzed, and their relationship with existing algorithms is indicated. As with other previously proposed solutions, the minimum bit error rate algorithm converges to the open-loop transmission scheme for very poor CSI estimates.
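The SVD-based transmitter structure described above can be sketched generically: the right singular vectors of (an estimate of) the conditional channel mean are used as the transmit prefilter, which diagonalizes the link into parallel eigenmodes. The sketch below illustrates only that generic step; the Bayesian computation of the conditional mean and the MSE/BER power loading of the paper are not reproduced, and the plain channel estimate is used as a placeholder for the conditional mean.

import numpy as np

rng = np.random.default_rng(0)
H_hat = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)

# Placeholder for the conditional mean E[H | H_hat]; here simply the estimate itself.
H_cond_mean = H_hat

U, svals, Vh = np.linalg.svd(H_cond_mean)
F = Vh.conj().T          # transmit prefilter: right singular vectors
G = U.conj().T           # receive filter: left singular vectors

# G @ H_cond_mean @ F is diagonal: parallel eigenmode subchannels.
print(np.round(G @ H_cond_mean @ F, 3))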

Relevance:

100.00%

Publisher:

Abstract:

This paper deals with the design of nonregenerative relaying transceivers in cooperative systems where channel state information (CSI) is available at the relay station. The conventional nonregenerative approach is the amplify-and-forward (A&F) approach, where the signal received at the relay is simply amplified and retransmitted. In this paper, we propose an alternative linear transceiver design for nonregenerative relaying (including the pure relaying and the cooperative transmission cases), making proper use of CSI at the relay station. Specifically, we design the optimum linear filtering performed on the data to be forwarded at the relay. As optimization criterion, we have considered the maximization of mutual information (which provides an information rate for which reliable communication is possible) for a given available transmission power at the relay station. Three different levels of CSI can be considered at the relay station: only first-hop channel information (between the source and the relay); first-hop and second-hop channel (between relay and destination) information; or a third situation where the relay may have complete cooperative channel information including all the links: the first- and second-hop channels and also the direct channel between source and destination. Although the latter is a less realistic situation, since it requires the destination to inform the relay station about the direct channel, it is useful as an upper benchmark. In this paper, we consider the last two cases relating to CSI. We compare the performance so obtained with the performance of the conventional A&F approach, and also with the performance of regenerative relays and direct noncooperative transmission for two particular cases: narrowband multiple-input multiple-output transceivers and wideband single-input single-output orthogonal frequency division multiplex transmissions.
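For comparison, the conventional A&F baseline mentioned above applies only a scalar gain that rescales the received signal to the relay power budget. A minimal sketch of that normalization for a single-antenna link is given below; it illustrates the baseline, not the optimized linear filters proposed in the paper, and all values are made up.

import numpy as np

def af_relay_gain(p_source, p_relay, h1, noise_var):
    """Amplify-and-forward gain so the relay output meets its power budget p_relay."""
    # Average received power at the relay: p_source * |h1|^2 + noise variance.
    return np.sqrt(p_relay / (p_source * abs(h1) ** 2 + noise_var))

g = af_relay_gain(p_source=1.0, p_relay=1.0, h1=0.8 + 0.3j, noise_var=0.1)
# The relay retransmits g * (h1 * s + n); its average power is p_relay by construction.
print(round(g, 3))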

Relevance:

100.00%

Publisher:

Abstract:

As technology geometries have shrunk to the deep submicron regime, the communication delay and power consumption of global interconnections in high performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication issues, such as the performance limitations of long interconnects and the integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on performance and power consumption in on-chip networks. In addition, building a high performance, area- and energy-efficient on-chip network for multicore architectures requires a novel on-chip router that allows a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide the synchronization between them, and to achieve backward compatibility with existing IP cores. Three adaptive routing algorithms are presented as part of this thesis. The first routing protocol presented is a congestion-aware adaptive routing algorithm for 2D mesh NoCs which does not support multicast (one-to-many) traffic, while the other two protocols are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs by employing efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers (sketched below), while the input selection allows packets to be serviced from each input port according to its congestion level. Moreover, in order to increase memory parallelism and bring compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented that use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated in the adaptive network interface to improve memory utilization and reduce both memory and network latencies. Three-Dimensional Integrated Circuits (3D ICs) have been emerging as a viable candidate to achieve better performance and package density compared to traditional 2D ICs. In addition, combining the benefits of 3D IC and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (the vertical channel) has attracted a lot of interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication to improve performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are also introduced to mitigate the inter-layer footprint and power dissipation on each layer with a small performance penalty.
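The congestion-aware output selection summarized above can be illustrated generically: among the minimal-path output ports, pick the neighbor reporting the least congestion. The sketch below is a hedged illustration of that idea for a 2D mesh, not the specific algorithms of the thesis; the port names and the congestion metric (free buffer slots) are assumptions, and deadlock avoidance (e.g. turn restrictions) is omitted.

def select_output_port(current, destination, free_slots):
    """Choose among the XY-minimal output ports the neighbor with the most free buffer slots.

    current, destination: (x, y) router coordinates in the 2D mesh.
    free_slots: dict mapping port name ('E', 'W', 'N', 'S') to free slots at that neighbor.
    """
    cx, cy = current
    dx, dy = destination
    candidates = []
    if dx > cx:
        candidates.append('E')
    elif dx < cx:
        candidates.append('W')
    if dy > cy:
        candidates.append('N')
    elif dy < cy:
        candidates.append('S')
    if not candidates:           # packet has arrived; deliver to the local port
        return 'LOCAL'
    # Adaptive choice: prefer the least congested admissible neighbor.
    return max(candidates, key=lambda port: free_slots[port])

# Example: destination lies to the north-east; the eastern neighbor is less congested.
print(select_output_port((1, 1), (3, 2), {'E': 6, 'W': 2, 'N': 1, 'S': 4}))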

Relevance:

100.00%

Publisher:

Abstract:

It has been known since the 1970s that the laser beam is suitable for processing paper materials. In this thesis, the term paper materials refers to all wood-fibre based materials, such as dried pulp, copy paper, newspaper, cardboard, corrugated board, tissue paper, etc. Accordingly, laser processing in this thesis means all laser treatments resulting in material removal, such as cutting, partial cutting, marking, creasing, perforation, etc., that can be used to process paper materials. Laser technology provides many advantages for the processing of paper materials: a non-contact method, freedom of processing geometry, reliable technology for non-stop production, etc. The packaging industry in particular is a very promising area for laser processing applications. However, there were only a few industrial laser processing applications worldwide even at the beginning of the 2010s. One reason for the small-scale use of lasers in paper material manufacturing is a shortage of published research and scientific articles. Another problem restricting the use of lasers for the processing of paper materials is the colouration of the paper material, i.e. the yellowish and/or greyish colour of the cut edge appearing during or after cutting. These are the main reasons for selecting the topic of this thesis, namely the characterization of the interaction of the laser beam and paper materials. This study was carried out in the Laboratory of Laser Processing at Lappeenranta University of Technology (Finland). The laser equipment used in this study was a TRUMPF TLF 2700 carbon dioxide laser that produces a beam with a wavelength of 10.6 μm and a power range of 190-2500 W (laser power on the work piece). The study of laser beam and paper material interaction was carried out by treating dried kraft pulp (grammage of 67 g m-2) with different laser power levels, focal plane position settings and interaction times. The interaction between the laser beam and the dried kraft pulp was monitored with different devices, i.e. a spectrometer, a pyrometer and an active illumination imaging system. In this way it was possible to create an input and output parameter diagram and to study the effects of the input and output parameters in this thesis. When the interaction phenomena are understood, process development can be carried out and even new innovations developed. Filling the gap in information on the interaction phenomena can pave the way for wider use of laser technology in the paper making and converting industry. It was concluded in this thesis that the interaction of the laser beam and paper material has two mechanisms that depend on the focal plane position range. The assumed interaction mechanism B appears in the average focal plane position range of 3.4 mm to 2.4 mm and the assumed interaction mechanism A in the average focal plane position range of 0.4 mm to -0.6 mm, both in the experimental setup used. The focal plane position of 1.4 mm represents the midzone of these two mechanisms. Holes are formed gradually during the laser beam and paper material interaction: first a small hole is formed in the interaction area at the centre of the laser beam cross-section and after that, as a function of interaction time, the hole expands until the interaction between the laser beam and the dried kraft pulp ends. The image analysis shows that at the beginning of the interaction between the laser beam and the dried kraft pulp, small holes of very good quality are formed. It is evident that black colour and a heat affected zone appear as a function of interaction time. This reveals that there are still different interaction phases within interaction mechanisms A and B.
These interaction phases appear as a function of time and also as a function of the peak intensity of the laser beam. The limit peak intensity is the value that divides interaction mechanisms A and B from one-phase interaction into dual-phase interaction. Thus all peak intensity values below the limit peak intensity belong to MAOM (interaction mechanism A one-phase mode) or to MBOM (interaction mechanism B one-phase mode), and values above it belong to MADM (interaction mechanism A dual-phase mode) or to MBDM (interaction mechanism B dual-phase mode). The decomposition process of cellulose is the evolution of hydrocarbons when the temperature is between 380-500°C. This means that the long cellulose molecule is split into smaller volatile hydrocarbons in this temperature range. As the temperature increases, the decomposition process of the cellulose molecule changes. In the range of 700-900°C, the cellulose molecule is mainly decomposed into H2 gas, which is why this range is called the evolution of hydrogen. Interaction in this range starts (as in the range of MAOM and MBOM) when a small, good-quality hole is formed. This is due to the "direct evaporation" of the pulp via the decomposition process of the evolution of hydrogen, and it can be seen in the spectrometer as a high-intensity peak of yellow light (in the range of 588-589 nm), which corresponds to a temperature of ~1750ºC. The pyrometer does not detect this high-intensity peak since it is not able to detect the physical phase change from solid kraft pulp to gaseous compounds. As the interaction time between the laser beam and the dried kraft pulp continues, the hypothesis is that three auto-ignition processes occur. The auto-ignition temperature of a substance is the lowest temperature at which it will spontaneously ignite in a normal atmosphere without an external source of ignition, such as a flame or spark. Three auto-ignition processes appear in the range of MADM and MBDM, namely: 1. the auto-ignition temperature of hydrogen (H2) is 500ºC, 2. the auto-ignition temperature of carbon monoxide (CO) is 609ºC and 3. the auto-ignition temperature of carbon (C) is 700ºC. These three auto-ignition processes lead to the formation of a plasma plume which has strong emission of radiation in the visible light range. The formation of this plasma plume can be seen as an increase of intensity in the wavelength range of ~475-652 nm. The pyrometer shows the maximum temperature just after this ignition. This plasma plume is assumed to scatter the laser beam so that it interacts with a larger area of the dried kraft pulp than the actual area of the beam cross-section. This assumed scattering also reduces the peak intensity. The results thus show that the presumably scattered light, with low peak intensity, interacts with a large area of the hole edges, and due to the low peak intensity this interaction happens at low temperature. The interaction between the laser beam and the dried kraft pulp therefore turns from the evolution of hydrogen to the evolution of hydrocarbons, which leads to the black colour of the hole edges.
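For reference, "peak intensity" above can be read as the on-axis intensity of the focused beam. The expression below is the standard Gaussian-beam form and is an assumption here, since the abstract does not state which definition the thesis uses:

\[
I_{\text{peak}} = \frac{2P}{\pi w^{2}},
\]

where $P$ is the laser power on the work piece and $w$ is the beam radius at the focal plane position in question; moving the focal plane therefore changes the peak intensity even at constant power.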

Relevance:

100.00%

Publisher:

Abstract:

This thesis examines the application of data envelopment analysis (DEA) as an equity portfolio selection criterion in the Finnish stock market during the period 2001-2011. The sample examined consists of publicly traded firms on the Helsinki Stock Exchange and covers the majority of the firms listed there. Data envelopment analysis is used to determine the efficiency of firms using a set of input and output financial parameters. The set of financial parameters consists of asset utilization, liquidity, capital structure, growth, valuation and profitability measures. The firms are divided into artificial industry categories because of the industry-specific nature of the input and output parameters. Comparable portfolios are formed within each industry category according to the efficiency scores given by the DEA, and the performance of the portfolios is evaluated with several measures. The empirical evidence of this thesis suggests that, with certain limitations, data envelopment analysis can successfully be used as a portfolio selection criterion in the Finnish stock market when the portfolios are rebalanced annually according to the efficiency scores given by the data envelopment analysis. However, when the portfolios are rebalanced every two or three years, the results are mixed and inconclusive.

Relevance:

100.00%

Publisher:

Abstract:

The seed bank is characterized by the amount of seeds and other viable reproductive structures in the soil; it is changed by the input and output of seeds and is classified, according to its permanence in the soil, as transient or permanent. The tillage and crops used decisively influence this dynamic, and more disturbed areas tend to have richer seed banks. The purpose of this study was to test different soil tillage and crop systems, aiming to reduce or eliminate the ryegrass in the area. The experiment was conducted from 2010 to 2012. In the first year, the effect of chemical tillage was assessed in comparison with the area without tillage. From the second year on, in the area that had received chemical tillage, the second experiment was installed, in which the effect of soil tillage and crop rotation on the ryegrass seed yield was assessed. The soil tillage treatments were chisel plow and non-chisel plow. The crop rotations were: fallow/soybean, wheat/soybean and black oat/maize. The soil samples were taken three times a year and split into the 0-5, 5-10, 10-15 and 15-20 cm layers. After sampling, the seeds were separated from the soil and sterilized. Afterwards, germination and tetrazolium tests were conducted. In the same plots used for soil sampling, the emergence flow of ryegrass was assessed in the winters of 2011 and 2012. In the first year it was observed that chemical tillage had considerably reduced the amount of ryegrass in the soil. The crop rotations used were more effective than soil tillage in reducing the seed bank in the soil. The oat/maize and wheat/soybean rotations, in only two years, practically eliminated the ryegrass seed bank in the area.