192 results for Pilots.
Abstract:
Objects viewed through transparent sheets with residual non-parallelism and irregularity appear shifted and distorted. This distortion is quantified in terms of the angular and binocular deviation of an object viewed through the transparent sheet. The angular and binocular deviations introduced are particularly important in the context of aircraft windscreens and canopies, as they can interfere with pilots' decision making, especially during landing, and lead to accidents. In this work, we have developed an instrument that measures both the angular and binocular deviations introduced by transparent sheets; it is especially useful in the qualification of aircraft windscreens and canopies. The instrument measures the deviation of the geometrical shadow cast by a periodic dot pattern trans-illuminated by the distorted light beam emerging from the transparent test specimen, relative to the reference pattern. Accurate quantification of the shift in the pattern is obtained by cross-correlating the reference shadow pattern with the specimen shadow pattern and locating the correlation peak. The developed instrument is easy to use, computes both angular and binocular deviation with an accuracy of less than +/- 0.1 mrad (approximately 0.036 mrad), and has excellent repeatability, with an error of less than 2%. (C) 2012 American Institute of Physics. http://dx.doi.org/10.1063/1.4769756
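As a rough illustration of the cross-correlation step described in this abstract, the following minimal sketch estimates the shift between two shadow patterns from the location of their 2-D correlation peak. The function names, pixel-pitch/distance conversion, and small-angle formula are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def pattern_shift(reference, specimen):
    """Estimate the (row, col) shift between two shadow patterns by
    locating the peak of their 2-D cross-correlation."""
    ref = reference - reference.mean()
    spec = specimen - specimen.mean()
    corr = fftconvolve(spec, ref[::-1, ::-1], mode="same")  # cross-correlation via FFT
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = (np.array(corr.shape) - 1) / 2.0
    return np.array(peak) - center                          # shift in pixels

def angular_deviation_mrad(shift_px, pixel_pitch_mm, screen_distance_mm):
    """Hypothetical conversion: pixel shift -> displacement -> small-angle deviation."""
    dy, dx = shift_px
    displacement_mm = np.hypot(dx, dy) * pixel_pitch_mm
    return 1e3 * displacement_mm / screen_distance_mm       # milliradians
```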
Abstract:
Training for receive antenna selection (AS) differs from that for conventional multiple-antenna systems because of the limited hardware usage inherent in AS. We analyze and optimize the performance of a novel energy-efficient training method tailored for receive AS. In this method, the transmitter sends not only the pilots that enable the selection process, but also an extra pilot that yields an accurate channel estimate for the selected antenna that actually receives data. For time-varying channels, we propose a novel antenna selection rule and prove that it minimizes the symbol error probability (SEP). We also derive closed-form expressions for the SEP of MPSK, and show that the considered training method is significantly more energy-efficient than the conventional AS training method.
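For context, a standard starting point for such SEP analyses is Craig's integral form of the exact SEP of coherent M-ary PSK at instantaneous SNR gamma in AWGN; the paper's closed-form results additionally average this over the selection and channel-estimation statistics, which is not reproduced here.

```latex
P_s(\gamma) \;=\; \frac{1}{\pi}\int_{0}^{\pi-\pi/M}
\exp\!\left(-\,\frac{\gamma\,\sin^{2}(\pi/M)}{\sin^{2}\theta}\right)\mathrm{d}\theta .
```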
Abstract:
Single receive antenna selection (AS) is a popular method for obtaining diversity benefits without the additional cost of multiple radio receiver chains. Since only one antenna receives at any time, the transmitter sends a pilot multiple times to enable the receiver to estimate the channel gains from the transmitter to its N antennas and select an antenna. In time-varying channels, the channel estimates of different antennas are outdated to different extents. We analyze the symbol error probability (SEP) of the N-pilot and (N+1)-pilot AS training schemes in time-varying channels. In the former, the transmitter sends one pilot for each receive antenna. In the latter, the transmitter sends one additional pilot, which helps sample the channel fading process of the selected antenna twice. We present several new results on the SEP, the optimal energy allocation across pilots and data, and the optimal selection rule in time-varying channels for the two schemes. We show that, owing to the unique nature of AS, the (N+1)-pilot scheme, despite its longer training duration, is much more energy-efficient than the conventional N-pilot scheme. An extension to a practical scenario in which all data symbols of a packet are received by the same antenna is also investigated.
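A simplified Monte Carlo sketch of the N-pilot versus (N+1)-pilot comparison is given below. It assumes a first-order Gauss-Markov channel with correlation rho, BPSK data, a naive "largest estimated magnitude" selection rule rather than the optimal rule derived in the paper, and uniform rather than optimized energy allocation; it only illustrates why re-estimating the selected antenna's channel close to data time helps.

```python
import numpy as np

rng = np.random.default_rng(0)

def run(N=4, rho=0.98, snr_db=10.0, n_trials=200_000, extra_pilot=False):
    """SEP of BPSK with single receive-antenna selection from noisy,
    outdated pilot-based channel estimates (simplified model)."""
    snr = 10.0 ** (snr_db / 10.0)
    cg = lambda shape: (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
    h_pilot = cg((n_trials, N))                       # channel at pilot time
    h_data = rho * h_pilot + np.sqrt(1 - rho**2) * cg((n_trials, N))  # channel at data time
    h_est = h_pilot + cg((n_trials, N)) / np.sqrt(snr)                # noisy estimates for selection
    sel = np.argmax(np.abs(h_est), axis=1)
    idx = np.arange(n_trials)
    h_sel = h_data[idx, sel]
    if extra_pilot:
        # Extra pilot re-estimates the selected antenna's channel close to data time.
        h_hat = h_sel + cg(n_trials) / np.sqrt(snr)
    else:
        h_hat = h_est[idx, sel]                       # outdated estimate reused for detection
    s = np.where(rng.random(n_trials) < 0.5, 1.0, -1.0)   # BPSK symbols
    y = h_sel * s + cg(n_trials) / np.sqrt(snr)
    s_hat = np.sign(np.real(np.conj(h_hat) * y))
    return np.mean(s_hat != s)

for extra in (False, True):
    print(f"extra pilot = {extra}:  SEP ~ {run(extra_pilot=extra):.4f}")
```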
Abstract:
This paper considers antenna selection (AS) at a receiver equipped with multiple antenna elements but only a single radio frequency chain for packet reception. As information about the channel state is acquired using training symbols (pilots), the receiver makes its AS decisions based on noisy channel estimates. Additional information that can be exploited for AS includes the time-correlation of the wireless channel and the results of the link-layer error checks upon receiving the data packets. In this scenario, the task of the receiver is to sequentially select (a) the pilot symbol allocation, i.e., how to distribute the available pilot symbols among the antenna elements, for channel estimation on each of the receive antennas; and (b) the antenna to be used for data packet reception. The goal is to maximize the expected throughput, based on the past history of allocation and selection decisions, and the corresponding noisy channel estimates and error check results. Since the channel state is only partially observed through the noisy pilots and the error checks, the joint problem of pilot allocation and AS is modeled as a partially observed Markov decision process (POMDP). The solution to the POMDP yields the policy that maximizes the long-term expected throughput. Using the Finite State Markov Chain (FSMC) model for the wireless channel, the performance of the POMDP solution is compared with that of other existing schemes, and it is illustrated through numerical evaluation that the POMDP solution significantly outperforms them.
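The POMDP formulation above maintains a belief (posterior distribution) over the partially observed channel state. The toy sketch below shows only the belief-update step for a hypothetical 3-state FSMC with known transition matrix and Gaussian pilot observations; the per-state gains, noise level, reward structure, and policy optimization are assumptions and are not taken from the paper.

```python
import numpy as np

# Toy 3-state FSMC: assumed per-state channel gain levels and transition matrix.
state_gain = np.array([0.2, 1.0, 2.5])
P = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])     # row-stochastic transition matrix
obs_std = 0.4                          # assumed pilot measurement noise std

def belief_update(belief, observation):
    """One POMDP belief update: predict through the Markov chain, then
    apply Bayes' rule with a Gaussian likelihood for the noisy pilot."""
    predicted = belief @ P
    likelihood = np.exp(-0.5 * ((observation - state_gain) / obs_std) ** 2)
    posterior = predicted * likelihood
    return posterior / posterior.sum()

belief = np.array([1/3, 1/3, 1/3])     # uniform prior over channel states
for z in [0.9, 1.1, 2.3]:              # hypothetical noisy pilot observations
    belief = belief_update(belief, z)
    print(np.round(belief, 3))
```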
Abstract:
This paper studies a pilot-assisted physical layer data fusion technique known as Distributed Co-Phasing (DCP). In this two-phase scheme, the sensors first estimate the channel to the fusion center (FC) using pilots sent by the latter; and then they simultaneously transmit their common data by pre-rotating them by the estimated channel phase, thereby achieving physical layer data fusion. First, by analyzing the symmetric mutual information of the system, it is shown that the use of higher order constellations (HOC) can improve the throughput of DCP compared to the binary signaling considered heretofore. Using an HOC in the DCP setting requires the estimation of the composite DCP channel at the FC for data decoding. To this end, two blind algorithms are proposed: 1) power method, and 2) modified K-means algorithm. The latter algorithm is shown to be computationally efficient and converges significantly faster than the conventional K-means algorithm. Analytical expressions for the probability of error are derived, and it is found that even at moderate to low SNRs, the modified K-means algorithm achieves a probability of error comparable to that achievable with a perfect channel estimate at the FC, while requiring no pilot symbols to be transmitted from the sensor nodes. Also, the problem of signal corruption due to imperfect DCP is investigated, and constellation shaping to minimize the probability of signal corruption is proposed and analyzed. The analysis is validated, and the promising performance of DCP for energy-efficient physical layer data fusion is illustrated, using Monte Carlo simulations.
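The paper's modified K-means algorithm is not reproduced here. The sketch below only illustrates the conventional K-means baseline it improves upon: blindly clustering received samples to recover the constellation cluster centers seen over a hypothetical composite DCP channel (the symbol-to-cluster labeling ambiguity is ignored).

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans_complex(y, k, n_iter=50):
    """Conventional K-means on complex samples, treated as 2-D (real, imag)
    points; returns the k estimated cluster centers."""
    pts = np.column_stack([y.real, y.imag])
    centers = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return centers[:, 0] + 1j * centers[:, 1]

# Hypothetical composite DCP channel at the fusion center with 4-level signaling.
true_points = np.array([-3, -1, 1, 3], dtype=complex) * (0.8 + 0.3j)
symbols = rng.choice(true_points, size=5000)
y = symbols + 0.2 * (rng.standard_normal(5000) + 1j * rng.standard_normal(5000))
print(np.round(np.sort_complex(kmeans_complex(y, k=4)), 2))
```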
Abstract:
Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization, and receiver signal processing. By interacting with communication theory and system implementation technologies, signal processing specialists develop efficient schemes for a wide range of communication problems by judiciously exploiting mathematical tools such as analysis, probability theory, matrix theory, and optimization theory, among others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communication channels. Using elegant matrix-vector notation, many MIMO transceiver (including precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, researchers showed that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), the geometric mean decomposition (GMD), and the generalized triangular decomposition (GTD), provide unified frameworks for solving many point-to-point MIMO transceiver design problems.
In this thesis, we consider the transceiver design problems for linear time invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the uses of the matrix decompositions and majorization theory toward the practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.
The first part of the thesis focuses on transceiver design for LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, the generalized geometric mean decomposition (GGMD), is always less than or equal to that of the geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative detection algorithm for this receiver is also proposed. For application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be computed easily, the proposed GGMD transceiver has a K/log_2(K) complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate on the subchannels.
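The GGMD algorithm itself is not reproduced here. The small numeric sketch below only illustrates the defining property that GMD-family decompositions target and that underlies the equal-SINR subchannels mentioned above: the equal diagonal of the triangular factor equals the geometric mean of the channel's singular values. Only this target value is computed, not the decomposition.

```python
import numpy as np

rng = np.random.default_rng(2)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)

s = np.linalg.svd(H, compute_uv=False)       # singular values of the MIMO channel
sigma_bar = np.exp(np.mean(np.log(s)))       # geometric mean of the singular values

# In a GMD-type factorization H = Q R P^H with R upper triangular and equal
# diagonal, every diagonal entry of R equals sigma_bar, so each of the K
# parallel subchannels sees the same gain (hence the same SINR and BER).
print("singular values:", np.round(s, 3))
print("common subchannel gain (geometric mean):", round(float(sigma_bar), 3))
```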
In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels under the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal, since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR, and channel prediction are available, we use the proposed ST-GTD to develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks) and the average per-ST-block BER in the moderate-to-high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST block. In general, the newly proposed transceivers perform better than the GGMD-based systems, since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD-based system which does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.
The third part of the thesis considers two quality-of-service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum-power JT broadcast DFE transceiver (MPJT) and the maximum-rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on the QR decomposition are proposed; they are realizable for an arbitrary number of users.
Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) over LTV scalar channels. For both the case of known LTV channels and that of unknown wide-sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. In addition, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multipath Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and the number of identifiable paths is, theoretically, up to O(M^2). With the delay information, an MMSE estimator of the frequency response is derived. Simulations show that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
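A small sketch of the difference co-array idea behind this pilot-aided estimator is given below, using a hypothetical (not the thesis's alternating) placement of M pilot subcarriers. It only counts the distinct tone-index differences ("co-pilots"), which is what allows up to O(M^2) multipath delays to be identified from M physical pilots.

```python
import numpy as np

pilot_tones = np.array([0, 1, 4, 9, 15, 22])   # hypothetical pilot subcarrier indices (M = 6)

# Difference co-array: all pairwise differences of pilot tone indices.
diffs = pilot_tones[:, None] - pilot_tones[None, :]
co_array = np.unique(diffs)

print("M physical pilots     :", len(pilot_tones))
print("distinct co-pilot lags:", len(co_array))  # at most M*(M-1) + 1, i.e. O(M^2)
print("co-array lags         :", co_array)
```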
Abstract:
Unlike most psychiatric disorders, post-traumatic stress disorder (PTSD) has a necessary, though not sufficient, causal factor: exposure to a traumatic event. As a consequence of this event, three dimensions of symptoms develop: re-experiencing, avoidance/emotional numbing, and hyperarousal. One of the most relevant findings of epidemiological research on PTSD is that, although most individuals are exposed to a traumatic event at some point in their lives, only a minority of them develop the disorder. Most exposed individuals can therefore be considered resilient. Resilience thus consists of the capacity for effective adaptation in the face of disturbance, stress, or adversity. A systematic review with meta-analysis was conducted of longitudinal studies investigating predictors of resilience to the development of PTSD. The absence of PTSD was taken as a proxy for resilience. Given the large number of studies identified by the search strategy, we decided post hoc to restrict the predictor variables to social support, personality, self-esteem, and potentially stressful life events (PSLE). Only twenty articles met the eligibility criteria. Thirteen studies assessed social support, nine assessed personality, and two assessed PSLE. None of the studies that investigated self-esteem was eligible. Sixteen of the twenty studies included in this review assessed the association of interest in the general population. Most studies assessed the exposure of interest after the traumatic event. Even though some of them attempted to capture information about the exposure before the event, given the retrospective nature of this measurement, the potential effect of the trauma on the obtained result cannot be ruled out. In addition, considerable heterogeneity was observed across studies, limiting the number of studies included in the meta-analyses. Neuroticism was the only personality dimension assessed by more than one study. The summary measures resulting from the combination of these studies revealed that greater positive social support predicts resilience to PTSD, whereas neuroticism reduces the odds of resilience. The two studies that investigated PSLE could not be combined: one was inconclusive and the other showed an association between a smaller number of PSLE and resilience. We emphasize that the summary measures should be interpreted with caution owing to the large heterogeneity across studies. Heterogeneity in how predictors of resilience to PTSD are assessed is understandable given the complexity of the constructs involved. However, the lack of standardization in how they are operationalized reduces the comparability of results.
Abstract:
Through the mid-1990s, the bait purse-seine fishery for Atlantic menhaden, Brevoortia tyrannus, in the Virginia portion of Chesapeake Bay was essentially undocumented. Beginning in 1995, captains of Virginia bait vessels maintained deck logs of their daily fishing activities; concurrently, we sampled the bait landings for size and age composition of the catch. Herein, we summarize 15 years (1995–2009) of data from the deck logbooks, including information on total bait landings by purse seine, the proportion of fishing to nonfishing days, the proportion of purse-seine sets assisted by spotter pilots, nominal fishing effort, median catches, and temporal and areal trends in catch. Age and size composition of the catch are described, as are vessel and gear characteristics and the disposition of the catch.
Abstract:
Post-traumatic stress disorder (PTSD) is an anxiety disorder that may develop after the occurrence of a traumatic event and is usually accompanied by significant impairment of quality of life. Individuals diagnosed with PTSD show higher heart rate levels when exposed to stressor events, such as sounds and images that recall the traumatic experience. However, studies that evaluated heart rate at the time of the trauma as a predictor of PTSD development have not produced consistent results. The objectives of this work were to verify whether resting peritraumatic heart rate (HR), measured after exposure to the trauma, is a predictor of the development of PTSD and of the severity of PTSD symptoms in adults. A systematic review followed by meta-analysis was carried out using the PUBMED, LILACS, PILOTS, PsycINFO, and Web of Science electronic databases. Seventeen studies were included in this systematic review. The results of ten studies were used in the meta-analysis of the pooled mean differences in HR, and eight studies were used in the meta-analysis of the correlations between HR and the severity of PTSD symptoms. Meta-regression models were fitted to identify variables that could explain the heterogeneity across studies. Peritraumatic HR in the group of patients with PTSD is, on average, 3.98 beats per minute (bpm) higher (p = 0.04) than in those without the disorder, and the pooled Pearson correlation coefficient was 0.14 (p = 0.05). Consistent with the hypothesis raised, resting peritraumatic heart rate was higher in individuals who developed PTSD. However, measurement closer to the traumatic event and the exclusion of dissociative cases may increase the magnitude of the effect found, making this simple and easily obtained biomarker a clinically useful predictor of PTSD development.
Abstract:
Wydział Historyczny
Abstract:
At 8.18 pm on 2 September 1998, Swissair Flight 111 (SR 111) took off from New York's JFK airport bound for Geneva, Switzerland. Tragically, the MD-11 aircraft never arrived. According to the crash investigation report, published on 27 March 2003, electrical arcing in the ceiling-void cabling was the most likely cause of the fire that brought down the aircraft. No one on board was aware of the disaster unfolding in the ceiling of the aircraft, and when a strange odour entered the cockpit, the pilots thought it was a problem with the air-conditioning system. Twenty minutes later, Swissair Flight 111 plunged into the Atlantic Ocean five nautical miles southwest of Peggy's Cove, Nova Scotia, with the loss of all 229 lives on board. In this paper, the Computational Fluid Dynamics (CFD) analysis of the in-flight fire that brought down SR 111 is described. Reconstruction of the wreckage disclosed that the fire pattern was extensive and complex in nature. The fire damage created significant challenges in identifying the origin of the fire and in explaining the heat damage observed. The SMARTFIRE CFD software was used to predict the “possible” behaviour of the airflow as well as the spread of fire and smoke within SR 111. The main aim of the CFD analysis was to develop a better understanding of the possible effects, or lack thereof, of numerous variables relating to the in-flight fire. Possible fire and smoke spread scenarios were studied to see what the associated outcomes would be. This assisted investigators in the Fire & Explosion Group of the Transportation Safety Board (TSB) of Canada in assessing the fire dynamics for cause and origin determination.
Abstract:
The established (digital) leisure game industry has historically been dominated by large international hardware vendors (e.g. Sony, Microsoft and Nintendo) and major publishers, supported by a complex network of development studios, distributors and retailers. New modes of digital distribution and development practice are challenging this business model, and the leisure games industry landscape is experiencing rapid change. The established (digital) leisure games industry, at least anecdotally, appears reluctant to participate actively in the applied games sector (Stewart et al., 2013). There are a number of potential explanations as to why this may be the case, including a concentration on large-scale consolidation of their (proprietary) platforms, content, entertainment brand and credibility, which arguably could be weakened by association with the conflicting notion of purposefulness (in applied games) in market niches without clear business models or quantifiable returns on investment. In contrast, the applied games industry exhibits the characteristics of an emerging, immature industry, namely: weak interconnectedness, limited knowledge exchange, an absence of harmonising standards, limited specialisations, limited division of labour and, arguably, insufficient evidence of the products' efficacies (Stewart et al., 2013; Garcia Sanchez, 2013), and could arguably be characterised as a dysfunctional market. To test these assertions, the Realising an Applied Gaming Ecosystem (RAGE) project will develop a number of self-contained gaming assets to be actively employed in the creation of a number of applied games, which will be implemented and evaluated as regional pilots across a variety of European educational, training and vocational contexts. RAGE is a European Commission Horizon 2020 project with twenty (pan-European) partners from industry, research and education, with the aim of developing, transforming and enriching advanced technologies from the leisure games industry into self-contained gaming assets (i.e. solutions showing economic value potential) that could support a variety of stakeholders, including teachers, students and, significantly, game studios interested in developing applied games. RAGE will provide these assets, together with a large quantity of high-quality knowledge resources, through a self-sustainable ecosystem: a social space that connects research, the gaming industries, intermediaries, education providers, policy makers and end-users in order to stimulate the development and application of applied games in educational, training and vocational contexts. The authors identify barriers (real and perceived) and opportunities facing stakeholders in engaging with, and exploring new emergent business models for, developing, establishing and sustaining an applied gaming ecosystem in Europe.
Abstract:
Purpose: This paper reports the findings of the evaluation of the Supporting People Health Pilots programme, which was established to demonstrate the policy links between housing support services and health and social care services by encouraging the development of integrated services. The paper highlights the challenges. Method: The evaluation of the six health pilots rested on two main sources of data collection: quarterly Project Evaluation Reports, which collected process data as well as reporting progress against aims and objectives, and semi-structured interviews—conducted across all key professional stakeholder groups and agencies and with people who used services—which explored their experiences of these new services. Results: The ability of pilots to work across organisational boundaries to achieve their aims and objectives was associated not only with agencies sharing an understanding of the purpose of the joint venture, a history of joint working, and clear and efficient governance arrangements, but also with two other characteristics: the extent and nature of statutory sector participation, and whether or not the service is defined by a history of voluntary sector involvement. In particular, the pilots demonstrated how voluntary sector agencies appeared to be less constrained by organisational priorities and professional agendas and more able to respond flexibly to meet the complex needs of individuals. Conclusion and discussion: The pilots demonstrate that integrating services to support people with complex needs works best
Abstract:
The increasing demand for fast air transportation around the clock has increased the number of night flights in civil aviation over the past few decades. In night aviation, to land an aircraft, a pilot needs to be able to identify an airport. The approach lighting system (ALS) at an airport is used to provide identification and guidance to pilots from a distance. An ALS consists of more than 100 luminaires installed in a defined pattern following strict guidelines issued by the International Civil Aviation Organization (ICAO). ICAO also has strict regulations for maintaining the performance level of the luminaires. However, once the luminaires are installed, there is to date no automated technique by which to monitor their performance. We suggest using images of the lighting pattern captured with a camera placed inside an aircraft. Based on the information contained in these images, the performance of the luminaires has to be evaluated, which requires identification of over 100 luminaires within the ALS image pattern. This research proposes analysis of the pattern using morphology filters with a variable length structuring element (VLSE). The dimension of the VLSE changes continuously within an image and varies between images. A novel technique for automatic determination of the VLSE is proposed; it allows successful identification of the luminaires from the image data, as verified using simulated and real data.
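The VLSE-determination procedure is the paper's own contribution and is not reproduced here. The sketch below only illustrates the basic mechanism it builds on: grey-scale morphological opening of a bright-spot image with a structuring element whose size varies across the image (here, crudely, by horizontal band, as a stand-in for perspective-dependent luminaire spacing). All sizes and the synthetic image are assumptions.

```python
import numpy as np
from scipy.ndimage import grey_opening

def opening_with_variable_se(img, n_bands=4, min_size=1, max_size=5):
    """Grey-scale opening whose square structuring element grows from
    min_size to max_size down the image, processed in horizontal bands
    (a crude stand-in for a true variable length structuring element)."""
    out = np.zeros_like(img)
    rows = np.array_split(np.arange(img.shape[0]), n_bands)
    sizes = np.linspace(min_size, max_size, n_bands).round().astype(int)
    for band, size in zip(rows, sizes):
        out[band] = grey_opening(img, size=(size, size))[band]
    return out

# Tiny synthetic example: bright luminaire-like spots on a dark, noisy background.
rng = np.random.default_rng(3)
img = rng.random((64, 64)) * 0.1
img[10:12, 10:12] = 1.0      # small spot near the top of the frame
img[50:55, 40:45] = 1.0      # larger spot near the bottom
filtered = opening_with_variable_se(img)
print("bright pixels before:", int((img > 0.5).sum()), " after:", int((filtered > 0.5).sum()))
```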
Abstract:
We consider a multipair decode-and-forward relay channel, where multiple sources simultaneously transmit their signals to multiple destinations with the help of a full-duplex relay station. We assume that the relay station is equipped with massive arrays, while all sources and destinations have a single antenna. The relay station uses channel estimates obtained from received pilots and zero-forcing (ZF) or maximum-ratio combining/maximum-ratio transmission (MRC/MRT) to process the signals. To significantly reduce the loop interference effect, we propose two techniques: i) using a massive receive antenna array; or ii) using a massive transmit antenna array together with very low transmit power at the relay station. We derive an exact achievable rate in closed form for MRC/MRT processing and an analytical approximation of the achievable rate for ZF processing. This approximation is very tight, especially for a large number of relay station antennas. These closed-form expressions enable us to determine the regions where the full-duplex mode outperforms the half-duplex mode, as well as to design an optimal power allocation scheme. This optimal power allocation scheme aims to maximize the energy efficiency for a given sum spectral efficiency and under peak power constraints at the relay station and sources. Numerical results verify the effectiveness of the optimal power allocation scheme. Furthermore, we show that, by doubling the number of transmit/receive antennas at the relay station, the transmit power of each source and of the relay station can be reduced by 1.5 dB if the pilot power is equal to the signal power, and by 3 dB if the pilot power is kept fixed, while maintaining a given quality of service.
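A toy single-pair sketch of the MRC idea with pilot-estimated channels is given below, assuming i.i.d. Rayleigh fading and least-squares estimation from a single pilot; loop interference and the multipair setting are omitted. It only illustrates the fixed-pilot-power case behind the power-scaling claim above: cutting the source power in proportion to 1/M keeps the post-MRC SNR roughly constant as the number M of relay receive antennas grows. All parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def mrc_snr(M, p_source, p_pilot, n_trials=2000):
    """Average post-MRC SNR at an M-antenna relay for one source, using
    LS channel estimates from a single pilot of power p_pilot."""
    snrs = []
    for _ in range(n_trials):
        h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
        # LS estimate from one pilot symbol: h_hat = h + noise / sqrt(p_pilot).
        h_hat = h + (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2 * p_pilot)
        # MRC with the estimated channel; unit noise power per antenna.
        signal = p_source * np.abs(np.vdot(h_hat, h)) ** 2
        noise = np.linalg.norm(h_hat) ** 2
        snrs.append(signal / noise)
    return np.mean(snrs)

for M in (32, 64, 128):
    # Scaling the source power as 1/M keeps the post-MRC SNR roughly constant.
    print(M, round(mrc_snr(M, p_source=10.0 / M, p_pilot=10.0), 2))
```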