881 results for distributed generation
Abstract:
Recently, Li and Xia proposed a transmission scheme for wireless relay networks based on the Alamouti space-time code and orthogonal frequency division multiplexing (OFDM) to combat the effect of timing errors at the relay nodes. This transmission scheme is remarkably simple and achieves a diversity order of two for any number of relays. Motivated by its simplicity, this scheme is extended to a more general transmission scheme that can achieve full cooperative diversity for any number of relays. The conditions on the distributed space-time block code (DSTBC) structure that admit its application in the proposed transmission scheme are identified, and it is pointed out that the recently proposed full-diversity four-group decodable DSTBCs from precoded coordinate-interleaved orthogonal designs and extended Clifford algebras satisfy these conditions. It is then shown how differential encoding at the source can be combined with the proposed transmission scheme to arrive at a new transmission scheme that achieves full cooperative diversity in asynchronous wireless relay networks with neither channel information nor timing-error knowledge at the destination node. Finally, four-group decodable distributed differential space-time block codes applicable in this new transmission scheme for a power-of-two number of relays are also provided.
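The building block underlying this abstract is the classical Alamouti code. The sketch below shows the basic 2x2 Alamouti encoding and its linear decoding over a flat-fading channel; it is a minimal illustration of the code itself, not the authors' distributed OFDM relay scheme, and all variable names are illustrative.

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Alamouti 2x2 space-time block: rows are time slots, columns are antennas (or relays)."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(r, h1, h2):
    """Linear decoding from two received samples r[0], r[1] and channel gains h1, h2."""
    s1_hat = np.conj(h1) * r[0] + h2 * np.conj(r[1])
    s2_hat = np.conj(h2) * r[0] - h1 * np.conj(r[1])
    norm = abs(h1) ** 2 + abs(h2) ** 2
    return s1_hat / norm, s2_hat / norm

# Noiseless check: the transmitted symbols are recovered exactly, because the
# code's orthogonal structure decouples s1 and s2 at the receiver.
rng = np.random.default_rng(0)
h1 = rng.normal() + 1j * rng.normal()
h2 = rng.normal() + 1j * rng.normal()
s1, s2 = 1 + 1j, -1 + 1j
X = alamouti_encode(s1, s2)
r = X @ np.array([h1, h2])           # received signal over two slots
print(alamouti_decode(r, h1, h2))    # recovers (1+1j, -1+1j)
```

The decoupling seen in the decoder is exactly what gives the diversity order of two mentioned in the abstract.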
Abstract:
Next-generation wireless systems employ an orthogonal frequency division multiplexing (OFDM) physical layer owing to the high data rates it enables without an increase in bandwidth. While TCP performance has been studied extensively in its interaction with link-layer ARQ, little attention has been given to the interaction of TCP with the MAC layer. In this work, we explore cross-layer interactions in an OFDM-based wireless system, focusing specifically on channel-aware resource allocation strategies at the MAC layer and their impact on TCP congestion control. Both efficiency- and fairness-oriented MAC resource allocation strategies were designed to evaluate the performance of TCP. The former schemes exploit channel diversity to maximize system throughput, while the latter provide a fair resource allocation over a sufficiently long time duration. From a TCP goodput standpoint, we show that the class of MAC algorithms that incorporate a fairness metric and consider the backlog outperforms the channel-diversity-exploiting schemes.
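The efficiency/fairness trade-off the abstract describes can be illustrated with two generic schedulers: pure max-rate (channel-diversity exploiting) versus proportional-fair. This is a textbook-style sketch under assumed exponential channel rates, not the authors' actual MAC algorithms.

```python
import numpy as np

def max_rate_schedule(rates):
    """Channel-diversity scheduling: each slot goes to the user with the best channel."""
    return np.argmax(rates, axis=1)

def proportional_fair_schedule(rates, beta=0.1):
    """Proportional-fair: serve the user maximising instantaneous rate / running average."""
    n_slots, n_users = rates.shape
    avg = np.ones(n_users)                 # running throughput estimates
    picks = np.empty(n_slots, dtype=int)
    for t in range(n_slots):
        u = int(np.argmax(rates[t] / avg))
        picks[t] = u
        served = np.zeros(n_users)
        served[u] = rates[t, u]
        avg = (1 - beta) * avg + beta * served
    return picks

rng = np.random.default_rng(1)
# User 0 has a consistently better channel than user 1 (illustrative rates).
rates = rng.exponential(scale=[2.0, 1.0], size=(1000, 2))
print(np.bincount(max_rate_schedule(rates), minlength=2))           # favours user 0
print(np.bincount(proportional_fair_schedule(rates), minlength=2))  # closer to even
```

The fair scheduler gives the weak user far more slots, which is the kind of behaviour that, per the abstract, helps TCP goodput by avoiding long starvation of backlogged flows.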
Abstract:
An imaging technique is developed for the controlled generation of multiple excitation nano-spots for far-field microscopy. The system point spread function (PSF) is obtained by interfering two counter-propagating extended depth-of-focus PSFs (DoF-PSFs), resulting in highly localized multiple excitation spots along the optical axis. The technique permits (1) simultaneous excitation of multiple planes in the specimen; (2) control of the number of spots by confocal detection; and (3) overcoming point-by-point excitation. Fluorescence from the excitation spots can be detected efficiently by Z-scanning the detector/pinhole assembly. The technique complements most bioimaging techniques and may find application in high-resolution fluorescence microscopy and nanoscale imaging.
Abstract:
Knowledge generation and innovation have been a priority for global city administrators, particularly during the last couple of decades. This is mainly due to the growing consensus in identifying knowledge-based urban development as a panacea for burgeoning economic problems. Place making has become a critical element of success in knowledge-based urban development, as planning and branding places are claimed to be effective marketing tools for attracting investment and talent. This paper aims to investigate the role of planning and branding in place making by assessing the effectiveness of planning and branding strategies in the development of knowledge and innovation milieus. The methodology of the study comprises reviewing the literature thoroughly, developing an analysis framework, and utilizing this framework in analyzing Brisbane's knowledge community precincts, namely the Boggo Road Knowledge Precinct, Kelvin Grove Urban Knowledge Village, and Sippy Downs Knowledge Town. The findings generate invaluable insights into Brisbane's journey in place making for knowledge and innovation milieus and communities. The results suggest that, beyond good planning and branding strategies and practice, external and internal conditions also need to be met for successful place making in knowledge community precincts.
Abstract:
A novel dodecagonal space vector structure for an induction motor drive is presented in this paper. It consists of two dodecagons, with the radius of the outer one twice that of the inner one. Compared to existing dodecagonal space vector structures, the proposed topology lowers the switching frequency of the inverters and halves the device ratings while achieving the same PWM output voltage quality. At the same time, the benefits of the existing dodecagonal space vector structure are retained, including the extension of the linear modulation range and the elimination of all 6n±1 harmonics (n odd) from the phase voltage. The proposed structure is realized by feeding an open-end winding induction motor with two conventional three-level inverters. A detailed calculation of the PWM timings for switching the space vector points is also presented. Simulation and experimental results indicate the suitability of the proposed idea for high-power drives.
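The geometry and the harmonic claim above can be made concrete with a few lines. The sketch below enumerates the two concentric dodecagons (with illustrative normalised radii, not the paper's DC-link voltages) and lists which low-order harmonics a dodecagonal structure suppresses versus which survive.

```python
import numpy as np

# Tips of the two concentric dodecagons: 12 vectors each, 30 degrees apart,
# with the outer radius twice the inner one (normalised, illustrative values).
angles = np.deg2rad(30 * np.arange(12))
inner = 1.0 * np.exp(1j * angles)
outer = 2.0 * np.exp(1j * angles)

# A dodecagonal structure suppresses the 6n±1 (n odd) harmonics, i.e. the
# 5th, 7th, 17th, 19th, ...; the surviving phase-voltage harmonics are of
# order 12n±1: 11th, 13th, 23rd, 25th, ...
suppressed = sorted(6 * n + s for n in (1, 3) for s in (-1, 1))
surviving = sorted(12 * n + s for n in (1, 2) for s in (-1, 1))
print(suppressed)  # [5, 7, 17, 19]
print(surviving)   # [11, 13, 23, 25]
```

Suppressing the 5th and 7th harmonics is what extends the linear modulation range relative to a hexagonal structure.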
Abstract:
Frequency multiplication (FM) can be used to design low-power frequency synthesizers: the VCO runs at a much lower frequency while a power-efficient frequency multiplier restores the output frequency, thereby also eliminating the first few dividers. Quadrature signals can be generated by frequency-multiplying low-frequency I/Q signals, but this also multiplies the quadrature error of those signals. Another approach generates additional edges from the low-frequency oscillator (LFO) to build a quadrature frequency multiplier, which makes the I/Q precision heavily dependent on process mismatches in the ring oscillator. In this paper, we examine the use of fewer edges from the LFO and a single-stage polyphase filter to generate approximate quadrature signals, followed by an injection-locked quadrature VCO that produces high-precision I/Q signals. Simulation comparisons with the existing approach show that the proposed method offers a phase accuracy of 0.5° with only a modest increase in power dissipation for the 2.4 GHz IEEE 802.15.4 standard in UMC 0.13 µm RFCMOS technology.
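The single-stage polyphase idea rests on a basic RC-CR property, sketched below with illustrative component values (not the paper's circuit): the low-pass and high-pass outputs are 90° apart at every frequency, and at the pole frequency f0 = 1/(2πRC) their amplitudes are also equal, which is what makes the network usable for approximate quadrature generation.

```python
import cmath, math

def rc_lowpass(f, r, c):
    """Complex transfer function of a first-order RC low-pass."""
    return 1 / (1 + 1j * 2 * math.pi * f * r * c)

def cr_highpass(f, r, c):
    """Complex transfer function of the complementary CR high-pass."""
    wrc = 2 * math.pi * f * r * c
    return 1j * wrc / (1 + 1j * wrc)

# Illustrative values placing the pole near 2.4 GHz: R = 1 kOhm, C = 66 fF.
r, c = 1e3, 66e-15
f0 = 1 / (2 * math.pi * r * c)

diff = cmath.phase(cr_highpass(f0, r, c)) - cmath.phase(rc_lowpass(f0, r, c))
amp_lp = abs(rc_lowpass(f0, r, c))
amp_hp = abs(cr_highpass(f0, r, c))
print(math.degrees(diff), amp_lp, amp_hp)  # 90 degrees apart, equal amplitudes ~0.707
```

Away from f0 the amplitudes diverge, which is one source of the residual I/Q error that the injection-locked quadrature VCO then cleans up.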
Abstract:
We consider a scenario in which a wireless sensor network is formed by randomly deploying n sensors to measure some spatial function over a field, with the objective of computing a function of the measurements and communicating it to an operator station. We restrict ourselves to the class of type-threshold functions (as defined in the work of Giridhar and Kumar, 2005), of which max, min, and indicator functions are important examples; our discussion is couched in terms of the max function. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous: the sensors synchronously measure values and then collaborate to compute and deliver the function of these values to the operator station. Computation algorithms differ in (1) the communication topology assumed and (2) the messages that the nodes need to exchange in order to carry out the computation. The focus of our paper is to establish (in probability) scaling laws for the time and energy complexity of distributed function computation over random wireless networks, under the assumption of centralized contention-free scheduling of packet transmissions. First, without any constraint on the computation algorithm, we establish scaling laws for the computation time and energy expenditure of one-time maximum computation. We show that for an optimal algorithm, the computation time and energy expenditure scale, respectively, as Θ(√(n/log n)) and Θ(n) asymptotically as the number of sensors n → ∞. Second, we analyze the performance of three specific computation algorithms that may be used in practical situations, namely the tree algorithm, multihop transmission, and the Ripple algorithm (a type of gossip algorithm), and obtain scaling laws for the computation time and energy expenditure as n → ∞. In particular, we show that the computation time for these algorithms scales as Θ(√(n/log n)), Θ(n), and Θ(√(n log n)), respectively, whereas the energy expended scales as Θ(n), Θ(√(n/log n)), and Θ(√(n log n)), respectively. Finally, simulation results are provided to show that our analysis indeed captures the correct scaling; the simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized optimal scheduler, and hence our results can be viewed as providing bounds for the performance with practical distributed schedulers.
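A toy version of the tree algorithm makes the Θ(n) energy law intuitive: each node transmits its partial max exactly once up an aggregation tree, so transmissions grow linearly with n. This sketch ignores the spatial geometry and contention-free scheduling that produce the Θ(√(n/log n)) time law, and the fan-out is an arbitrary illustrative choice.

```python
import random

def tree_max(values, fanout=4):
    """One-shot max computation up an aggregation tree: each node forwards
    only the max of its children's reports, so the number of transmissions
    is Theta(n) while the tree depth is O(log n)."""
    transmissions = 0
    layer = list(values)
    while len(layer) > 1:
        next_layer = []
        for i in range(0, len(layer), fanout):
            group = layer[i:i + fanout]
            transmissions += len(group)      # each child sends one packet upward
            next_layer.append(max(group))
        layer = next_layer
    return layer[0], transmissions

random.seed(0)
vals = [random.random() for _ in range(10_000)]
m, tx = tree_max(vals)
print(m == max(vals), tx)   # correct max; transmissions stay within a constant factor of n
```

With fan-out 4, the total transmission count is n + n/4 + n/16 + ... < 4n/3, consistent with the Θ(n) energy scaling quoted above.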
Abstract:
A measurable electrical signal is generated when a gas flows over a variety of solids, including doped semiconductors, even at modest speeds of a few meters per second. The underlying mechanism is an interesting interplay of Bernoulli's principle and the Seebeck effect: the electrical signal depends on the square of the Mach number (M) and is proportional to the Seebeck coefficient (S) of the solid. Here we present an experimental estimate of the response time of the signal rise and fall, i.e., how fast the semiconductor material responds when a steady flow is switched on or off. A theoretical model is also presented to explain the process and the dependence of the response time on the nature and physical dimensions of the semiconductor material, and its predictions are compared with the experimental observations. (c) 2007 Elsevier B.V. All rights reserved.
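The S·M² dependence can be sanity-checked with an order-of-magnitude sketch. Here the Bernoulli-induced temperature drop is approximated by the stagnation-temperature relation ΔT ≈ T(γ−1)M²/2 and converted to a voltage via V = S·ΔT; all numerical values are illustrative assumptions, not the paper's data.

```python
# Order-of-magnitude sketch of a flow-induced Seebeck voltage.
GAMMA = 1.4          # heat-capacity ratio of a diatomic gas (air/nitrogen)
T = 300.0            # ambient temperature, K
S = 400e-6           # assumed Seebeck coefficient of a doped semiconductor, V/K

def flow_voltage(u, sound_speed=343.0):
    """Voltage from a gas stream of speed u (m/s): V = S * T * (gamma-1)/2 * M^2."""
    mach = u / sound_speed
    d_temp = T * (GAMMA - 1) / 2 * mach ** 2
    return S * d_temp

for u in (2.0, 5.0, 10.0):   # a few metres per second, as in the abstract
    print(f"u = {u:4.1f} m/s  ->  V ~ {flow_voltage(u) * 1e6:.2f} uV")
```

Doubling the flow speed quadruples the voltage, reproducing the quadratic Mach-number dependence stated in the abstract, with signals in the microvolt range at these speeds.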
Abstract:
Construction and demolition (C&D) waste has negative impacts on the environment. As a significant proportion of C&D waste is related to the design stage of a project, architects have an opportunity to reduce the waste. However, research suggests that many architects do not understand the impact of their designs on waste generation. Training and education have been proposed by current researchers to improve architects' knowledge; however, this has not been adequately validated as a viable approach to solving waste issues. This research investigates architects' perceptions of waste management in the design phase and determines whether they feel adequately skilled to reduce C&D waste. Questionnaire surveys were distributed to architects from 98 architectural firms, and 25 completed surveys were returned. The results show that while architects are aware of the relationship between design and waste, 'extra time' and 'lack of knowledge' are the key barriers to implementing waste reduction strategies. In addition, the majority of respondents acknowledge that they lack the skills to reduce waste through design evaluation. Training programmes can therefore be a viable strategy to enable them to address the pressing issue of C&D waste reduction.
Abstract:
A key trait of Free and Open Source Software (FOSS) development is its distributed nature. Nevertheless, two project-level operations, the fork and the merge of program code, are among the least well understood events in the lifespan of a FOSS project. Some projects have explicitly adopted these operations as the primary means of concurrent development. In this study, we examine the effect of highly distributed software development, as found in the Linux kernel project, on the collection and modelling of software development data. We find that distributed development calls for sophisticated temporal modelling techniques, since several versions of the source code tree can exist at once. Attention must also be turned towards the methods of quality assurance and peer review that projects employ to manage these parallel source trees. Our analysis indicates that two new metrics, fork rate and merge rate, could be useful for determining the role of distributed version control systems in FOSS projects. The study presents a preliminary data set consisting of version control and mailing list data.
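The two proposed metrics are simple to operationalise. The sketch below computes per-month fork and merge counts from a list of (timestamp, event) records such as might be extracted from a distributed version control history; the record format and event names are illustrative assumptions, not the study's actual schema.

```python
from collections import Counter
from datetime import datetime

# Hypothetical event records extracted from version control / mailing list data.
events = [
    ("2008-01-04", "fork"), ("2008-01-20", "merge"),
    ("2008-02-02", "fork"), ("2008-02-11", "fork"),
    ("2008-02-25", "merge"), ("2008-03-07", "merge"),
]

def monthly_rates(events):
    """Count fork and merge events per calendar month."""
    rates = {}
    for stamp, kind in events:
        month = datetime.strptime(stamp, "%Y-%m-%d").strftime("%Y-%m")
        rates.setdefault(month, Counter())[kind] += 1
    return rates

for month, counts in sorted(monthly_rates(events).items()):
    print(month, dict(counts))
```

A rising merge rate relative to fork rate over time would indicate parallel source trees being reconciled rather than diverging, which is the kind of signal the metrics are meant to capture.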
Abstract:
We combine results from searches by the CDF and D0 collaborations for a standard model Higgs boson (H) in the process gg → H → W⁺W⁻ in pp̄ collisions at the Fermilab Tevatron Collider at √s = 1.96 TeV. With 4.8 fb⁻¹ of integrated luminosity analyzed at CDF and 5.4 fb⁻¹ at D0, the 95% confidence level upper limit on σ(gg → H) × B(H → W⁺W⁻) is 1.75 pb at m_H = 120 GeV, 0.38 pb at m_H = 165 GeV, and 0.83 pb at m_H = 200 GeV. Assuming the presence of a fourth sequential generation of fermions with large masses, we exclude at the 95% confidence level a standard-model-like Higgs boson with a mass between 131 and 204 GeV.
Abstract:
An optical technique is proposed for obtaining multiple excitation spots. Phase-matched counter-propagating extended depth-of-focus fields are superimposed along the optical axis to generate multiple localized excitation spots. Moreover, the filtering effect of the optical mask increases the lateral resolution. The proposed technique introduces the concept of simultaneous multiplane excitation and improves three-dimensional resolution. (C) 2010 American Institute of Physics.
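The core interference effect can be seen in a one-dimensional plane-wave toy model: superposing counter-propagating fields exp(ikz) and exp(−ikz) gives |E|² = 4 cos²(kz), a train of intensity maxima spaced λ/2 apart along the optical axis. Real extended depth-of-focus beams add an axial envelope that this sketch omits, and the wavelength is an illustrative assumption.

```python
import numpy as np

wavelength = 500e-9                      # illustrative excitation wavelength, m
k = 2 * np.pi / wavelength
z = np.linspace(0, 2e-6, 4001)           # 2 micrometres along the optical axis

# Counter-propagating superposition: a standing wave with period lambda/2.
intensity = np.abs(np.exp(1j * k * z) + np.exp(-1j * k * z)) ** 2

peaks = z[np.isclose(intensity, intensity.max())]
print(len(peaks))                        # axial excitation spots spaced lambda/2 apart
```

Over 2 µm at λ = 500 nm this yields spots every 250 nm, illustrating how the superposition produces the multiple localized excitation spots described in the abstract.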