535 results for Hyperspaces Topologies
Abstract:
In the digital age, the hyperspace of virtual reality systems stands out as a new spatial concept creating a parallel realm to "real" space. Virtual reality influences one’s experience of and interaction with architectural space. This "otherworld" invites criticism of the existing conceptions of space, time and the body. Hyperspaces are relatively new to designers but not to filmmakers, and their cinematic representations help in comprehending the outcomes of these new spaces. Visualisation of futuristic ideas on the big screen turns film into a medium for spatial experimentation. Creating a possible future, The Matrix (Andy and Larry Wachowski, 1999) takes the concept of hyperspace to a level not yet realised but imagined. With a critical gaze at the existing norms of architecture, the film opens new horizons in terms of space. In this context, this study introduces science fiction cinema as a discussion medium for understanding the potential of virtual reality systems for the architecture of the twenty-first century. As a "role model", cinema helps us better understand technological and spatial shifts; it acts as a vehicle for going beyond the spatial theories and designs of the twentieth century and for defining the conception of space in contemporary architecture.
Abstract:
Reconfigurable bi-state interwoven spiral FSSs are explored in this work. Their switching capability is realised by PIN diodes that change the electromagnetic response between transparent and reflecting modes at the specified frequencies, in both singly and dually polarised unit-cell configurations. The proposed topologies are single-layer FSSs whose elements also act as dc current-carrying conductors supplying the bias signal for switching the PIN diodes between the on and off states, thus avoiding the need for external bias lines, which can cause parasitic resonances and affect the response at oblique incidence. The presented simulation results show that such active FSSs have potentially good isolation between the transmission and reflection states, while retaining the substantially subwavelength unit-cell response with the large fractional bandwidths (FBWs) inherent to the original passive FSSs.
Abstract:
Credal networks are graph-based statistical models whose parameters take values in a set, instead of being sharply specified as in traditional statistical models (e.g., Bayesian networks). The computational complexity of inferences on such models depends on the irrelevance/independence concept adopted. In this paper, we study inferential complexity under the concepts of epistemic irrelevance and strong independence. We show that inferences under strong independence are NP-hard even in trees with binary variables except for a single ternary one. We prove that under epistemic irrelevance the polynomial-time complexity of inferences in credal trees is not likely to extend to more general models (e.g., singly connected topologies). These results clearly separate networks that admit efficient inferences from those where inferences are most likely hard, and settle several open questions regarding their computational complexity. We show that these results remain valid even if we disallow the use of zero probabilities. We also show that the computation of bounds on the probability of the future state in a hidden Markov model is the same whether we assume epistemic irrelevance or strong independence, and we prove an analogous result for inference in Naive Bayes structures. These inferential equivalences are important for practitioners, as hidden Markov models and Naive Bayes networks are used in real applications of imprecise probability.
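As an illustration of the kind of computation the abstract refers to, the sketch below bounds the probability of the next state in a tiny two-state Markov chain with interval-valued (credal) parameters. The chain, its interval values, and the vertex-enumeration shortcut are illustrative assumptions, not taken from the paper (whose point is precisely that such enumeration does not scale to general credal networks):

```python
from itertools import product

# Interval-valued ("credal") parameters for a two-state Markov chain.
# Hypothetical numbers chosen for illustration only.
p0 = (0.4, 0.6)        # P(X_t = 1) lies in [0.4, 0.6]
p_stay = (0.7, 0.8)    # P(X_{t+1} = 1 | X_t = 1) lies in [0.7, 0.8]
p_flip = (0.1, 0.3)    # P(X_{t+1} = 1 | X_t = 0) lies in [0.1, 0.3]

def next_state_prob(p1, stay, flip):
    """P(X_{t+1} = 1) for one sharp (precise) choice of parameters."""
    return p1 * stay + (1 - p1) * flip

# The objective is linear in each parameter, so the bounds are attained
# at endpoints of the intervals; enumerate all vertex combinations.
values = [next_state_prob(a, b, c) for a, b, c in product(p0, p_stay, p_flip)]
lower, upper = min(values), max(values)
print(f"P(X_t+1 = 1) lies in [{lower:.3f}, {upper:.3f}]")
```

For this toy chain the lower and upper bounds coincide under epistemic irrelevance and strong independence, which is the equivalence the abstract states for hidden Markov model forecasting.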
Abstract:
Mollusks are the most morphologically disparate living animal phylum: they have diversified into all habitats and have a deep fossil record. Monophyly and identity of their eight living classes is undisputed, but relationships between these groups and patterns of their early radiation have remained elusive. Arguments about traditional morphological phylogeny focus on a small number of topological concepts but often without regard to proximity of the individual classes. In contrast, molecular studies have proposed a number of radically different, inherently contradictory, and controversial sister relationships. Here, we assembled a dataset of 42 unique published trees describing molluscan interrelationships. We used these data to ask several questions about the state of resolution of molluscan phylogeny compared to a null model of the variation possible in random trees constructed from a monophyletic assemblage of eight terminals. Although 27 different unique trees have been proposed from morphological inference, the majority of these are not statistically different from each other. Within the available molecular topologies, only four studies to date have included the deep-sea class Monoplacophora, but 36.4% of all trees are not significantly different. We also present supertrees derived from two data partitions and three methods, including all available molecular molluscan phylogenies, which will form the basis for future hypothesis testing. The supertrees presented here were not constructed to provide yet another hypothesis of molluscan relationships, but rather to algorithmically evaluate the relationships present in the disparate published topologies. Based on the totality of available evidence, certain patterns of relatedness among constituent taxa become clear. The internodal distance is consistently short between a few taxon pairs, particularly supporting the relatedness of Monoplacophora and the chitons, Polyplacophora.
Other taxon pairs are rarely or never found in close proximity, such as the vermiform Caudofoveata and Bivalvia. Our results have specific utility for guiding constructive research planning in order to better test relationships in Mollusca as well as other problematic groups. Taxa with consistently proximate relationships should be the focus of a combined approach in a concerted assessment of potential genetic and anatomical homology, while unequivocally distant taxa will make the most constructive choices for exemplar selection in higher-level phylogenomic analyses.
Abstract:
Motivated by the need for designing efficient and robust fully-distributed computation in highly dynamic networks such as Peer-to-Peer (P2P) networks, we study distributed protocols for constructing and maintaining dynamic network topologies with good expansion properties. Our goal is to maintain a sparse (bounded-degree) expander topology despite heavy churn (i.e., nodes joining and leaving the network continuously over time). We assume that the churn is controlled by an adversary that has complete knowledge and control of what nodes join and leave and at what time and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is a randomized distributed protocol that guarantees with high probability the maintenance of a constant-degree graph with high expansion even under continuous, high adversarial churn. Our protocol can tolerate a churn rate of up to O(n/polylog(n)) per round (where n is the stable network size). Our protocol is efficient, lightweight, and scalable, and it incurs only O(polylog(n)) overhead for topology maintenance: only polylogarithmic (in n) bits need to be processed and sent by each node per round, and any node's computation cost per round is also polylogarithmic. The given protocol is a fundamental ingredient that is needed for the design of efficient fully-distributed algorithms for solving fundamental distributed computing problems such as agreement, leader election, search, and storage in highly dynamic P2P networks, and enables fast and scalable algorithms for these problems that can tolerate a large amount of churn.
Abstract:
The increasing scale of Multiple-Input Multiple-Output (MIMO) topologies employed in forthcoming wireless communications standards presents a substantial implementation challenge to designers of embedded baseband signal processing architectures for MIMO transceivers. Specifically, the increased scale of such systems has a substantial impact on the performance/cost balance of detection algorithms for these systems. Whilst in small-scale systems Sphere Decoding (SD) algorithms offer the best quasi-ML performance/cost balance, in larger systems heuristic detectors, such as Tabu-Search (TS) detectors, are superior. This paper addresses a dearth of research in architectures for TS-based MIMO detection, presenting the first known realisations of TS detectors for 4 × 4 and 10 × 10 MIMO systems. To the best of the authors’ knowledge, these are the largest single-chip detectors on record.
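A minimal sketch of the Tabu-Search detection principle the abstract builds on: single-symbol flips over a BPSK candidate vector, a short tabu list of recently flipped indices, and an aspiration rule. The 2 × 2 channel, the pure-Python cost evaluation, and all parameter values are illustrative assumptions; real TS detector architectures operate on much larger, complex-valued systems:

```python
import random

def tabu_detect(H, y, iters=200, tabu_len=5):
    """Toy tabu-search MIMO detector for BPSK symbols in {-1, +1}.
    Minimises the Euclidean cost ||y - H x||^2 by single-symbol flips."""
    n = len(H[0])
    cost = lambda x: sum((yi - sum(h[j] * x[j] for j in range(n))) ** 2
                         for yi, h in zip(y, H))
    x = [random.choice((-1, 1)) for _ in range(n)]
    best, best_cost = x[:], cost(x)
    tabu = []
    for _ in range(iters):
        # Evaluate all single-flip neighbours; skip tabu moves unless
        # they improve on the best solution found so far (aspiration).
        moves = []
        for j in range(n):
            x[j] = -x[j]
            c = cost(x)
            x[j] = -x[j]
            if j not in tabu or c < best_cost:
                moves.append((c, j))
        if not moves:
            break
        c, j = min(moves)
        x[j] = -x[j]                       # accept the best admissible move
        tabu = (tabu + [j])[-tabu_len:]
        if c < best_cost:
            best, best_cost = x[:], c
    return best, best_cost

random.seed(1)
H = [[1.0, 0.2], [0.1, 1.0]]               # toy 2x2 channel matrix
x_true = [1, -1]
y = [sum(h[j] * x_true[j] for j in range(2)) for h in H]   # noiseless receive
x_hat, c = tabu_detect(H, y)
print(x_hat, c)
```

The tabu list is what lets the search climb out of local minima that would trap a plain greedy flipper, which is why TS scales better than sphere decoding on large constellations.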
Abstract:
This work addresses the study and design of optimised receivers for very high bit-rate optical-fibre communication systems (10 Gb/s and 40 Gb/s), with integrated post-detection adaptive compensation of the distortion caused by the chromatic and polarisation dispersion of the optical channel. Chapter 1 details the scope of applicability of these receivers in current wavelength-division-multiplexed (WDM) optical communication systems, and also presents the objectives and main contributions of this thesis. Chapter 2 details the design of a post-detection amplifier suitable for 10 Gb/s optical communication systems. The most suitable topologies for post-detection amplifiers are discussed, along with the criteria that dictated the choice of the transimpedance topology and the conditions that optimise its performance in terms of bandwidth, gain and noise. Aspects related to the physical implementation in monolithic microwave integrated circuit (MMIC) technology are also addressed, focusing in particular on their impact on circuit performance, such as the effect of components extrinsic to the monolithic circuit, in particular the wire bonds connecting the die to the external circuit. This amplifier was designed and produced in Gallium Arsenide pHEMT technology and implemented as an MMIC. The prototype was characterised at the factory, still on-wafer, yielding characterisation data for 80 prototype circuits, which were compared with simulation results and with the performance of the prototype mounted on a test vehicle.
Chapter 3 presents the design of two adjustable electrical compensators capable of mitigating the effects of chromatic dispersion and polarisation dispersion in optical systems at bit rates of 10 Gb/s and 40 Gb/s, with double-sideband and single-sideband modulation. Two possible topologies for this type of compensator (the Feed-Forward Equaliser and the Decision Feedback Equaliser) are presented and compared. The Feed-Forward Equaliser topology, which served as the basis for the compensators implemented, is analysed in more detail, and modifications that enable its practical implementation are proposed. The chapter describes in detail how these compensators were implemented as distributed circuits in MMIC technology, proposing two ways of implementing the variable-gain cells: using the cascode configuration or using the Gilbert-cell configuration. Simulation and experimental results (from the produced prototypes) are also presented, allowing conclusions to be drawn about the performance of the gain cells in the two configurations. Finally, the chapter includes performance results of the compensators tested on an electrical signal affected by distortion. Chapter 4 analyses the impact of double-sideband (DSB) modulation compared with single-sideband (SSB) modulation in an optical system affected by chromatic and polarisation dispersion. It is shown that with SSB modulation, since there is no beating between the carriers of the two sidebands resulting from the quadratic detection process, and since the channel's chromatic-distortion information is preserved (in the signal phase), this type of modulation provides greater tolerance to chromatic dispersion and makes electrical compensators much more efficient.
The chapter also presents test results for the developed compensators in laboratory scenarios representative of 10 Gb/s and 40 Gb/s optical systems. The results compare the performance of these scenarios with and without optimised electrical compensation, for SSB and DSB modulation, also considering the effects of group-velocity dispersion and differential group delay. It is shown that SSB modulation combined with adaptive electrical compensation yields far superior performance to the DSB modulation widely used in current communication systems. Finally, Chapter 5 summarises and presents the main conclusions of this work.
Abstract:
This work presents a study on the dimensioning of optical networks, aiming at a dimensioning model for survivable transport networks. The study adopts a statistical rather than a deterministic approach. First, the main technologies and different architectures used in optical transport networks are presented, together with the main survivability schemes and transport modes. The necessary variables are identified and a dimensioning model for transport networks is presented, with emphasis on mesh topologies and considering the opaque, transparent and translucent transport modes. A rigorous analysis of the characteristics of real transport-network topologies is carried out, and a transport-network topology generator is developed to test the validity of the models. A genetic algorithm is also implemented to obtain an optimised topology for a given traffic demand. Expressions are proposed for computing non-deterministic variables, namely the mean number of hops of a demand, the protection coefficient and the restoration coefficient; for the last two, the impact of the traffic model is also analysed. The results obtained with the proposed expressions are shown to be similar to those obtained by numerical computation, and the traffic model does not significantly influence the values obtained for the coefficients. Finally, the proposed model is shown to be useful for dimensioning and for computing the capital costs of networks with incomplete information.
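One of the non-deterministic variables mentioned in this abstract, the mean number of hops of a demand, can be computed exactly on a small topology by breadth-first search; this is the kind of numerical reference against which closed-form expressions are typically compared. The 6-node ring-with-chord topology below is an illustrative assumption, not one of the work's networks:

```python
from collections import deque
from itertools import combinations

def mean_hops(adj):
    """Mean shortest-path hop count over all unordered node pairs,
    computed exactly by BFS from each source."""
    total, pairs = 0, 0
    for s, t in combinations(adj, 2):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += dist[t]
        pairs += 1
    return total / pairs

# Hypothetical 6-node ring topology with one chord, for illustration.
ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
ring[0].add(3); ring[3].add(0)     # chord 0-3
print(round(mean_hops(ring), 3))
```

On real mesh topologies with hundreds of nodes this exhaustive computation becomes the baseline that a statistical dimensioning expression is meant to approximate without full topology knowledge.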
Abstract:
In modern society, new devices, applications and technologies with sophisticated capabilities are converging on the same network infrastructure. Users are also increasingly demanding in their personal preferences and expectations, desiring Internet connectivity anytime and everywhere. These aspects have triggered many research efforts, since the current Internet is reaching a breaking point in trying to provide enough flexibility for users and profits for operators while dealing with the complex requirements raised by this recent evolution. Fully aligned with future Internet research, many solutions have been proposed to enhance current Internet-based architectures and protocols so that they become context-aware, that is, dynamically adapted to changes in the information characterizing any network entity. In this sense, this Thesis proposes a new architecture that allows several networks with different characteristics to be created according to their context, on top of a single Wireless Mesh Network (WMN), whose infrastructure and protocols are highly flexible and self-adaptable. More specifically, this Thesis models the context of users, which can span their security, cost and mobility preferences, their devices’ capabilities, or their services’ quality requirements, in order to turn a WMN into a set of logical networks. Each logical network is configured to meet a set of user context needs (for instance, support for high mobility and low security). To implement this user-centric architecture, this Thesis uses network virtualization, which has often been advocated as a means to deploy independent network architectures and services towards the future Internet while allowing dynamic resource management. In this way, network virtualization enables a flexible and programmable configuration of a WMN, so that it can be shared by multiple logical networks (or virtual networks, VNs).
Moreover, the high level of isolation introduced by network virtualization can be used to differentiate the protocols and mechanisms of each context-aware VN. This architecture raises several challenges in controlling and managing the VNs on demand, in response to user and WMN dynamics. In this context, we target mechanisms to: (i) discover and select the VN to assign to a user; and (ii) create, adapt and remove the VN topologies and routes. We also explore how the rate of variation of the user context requirements can be taken into account to improve the performance and reduce the complexity of VN control and management. Finally, due to the scalability limitations of centralized control solutions, we propose a mechanism to distribute the control functionalities along the architectural entities, which can cooperate to control and manage the VNs in a distributed way.
Abstract:
This paper presents the design analysis of novel tunable narrow-band bandpass sigma-delta modulators, which can achieve concurrent multiple noise-shaping for multi-tone input signals. Four different design methodologies based on the noise transfer functions of comb filters, slink filters, multi-notch filters and fractional delay comb filters are applied for the design of these multiple-band sigma-delta modulators. The latter approach utilises conventional comb filters in conjunction with FIR, or allpass IIR fractional delay filters, to deliver the desired nulls for the quantisation noise transfer function. Detailed simulation results show that FIR fractional delay comb filter-based sigma-delta modulators tune accurately to most centre frequencies, but suffer from degraded resolution at frequencies close to Nyquist. However, superior accuracies are obtained from their allpass IIR fractional delay counterpart at the expense of a slight shift in noise-shaping bands at very high frequencies. The merits and drawbacks of each technique for the various sigma-delta topologies are assessed in terms of in-band signal-to-noise ratios, accuracy of tunability and coefficient complexity for ease of implementation.
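The nulls that comb-filter-based noise transfer functions place on the quantisation noise can be checked numerically: for NTF(z) = 1 - z^-N the magnitude vanishes at integer multiples of fs/N. A minimal sketch (N = 8 and fs = 1 are illustrative choices, not values from the paper):

```python
import cmath, math

def ntf_comb(f, fs, N):
    """Magnitude of the comb noise transfer function NTF(z) = 1 - z^-N
    evaluated on the unit circle at frequency f (sampling rate fs)."""
    z = cmath.exp(1j * 2 * math.pi * f / fs)
    return abs(1 - z ** (-N))

fs, N = 1.0, 8
nulls = [k * fs / N for k in range(N // 2 + 1)]   # nulls up to Nyquist
for f in nulls:
    print(f"f = {f:.4f}  |NTF| = {ntf_comb(f, fs, N):.2e}")
```

Replacing the integer delay z^-N with a fractional delay, as the abstract describes, moves these nulls off the fixed k·fs/N grid, which is what makes the noise-shaping bands tunable.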
Abstract:
In a world where telecommunication networks are constantly evolving and growing, their energy consumption is also increasing. With the evolution of both the networks and their equipment, the cost of deploying a network has fallen to the point where the main obstacle to network growth is now the cost of maintenance and operation. In recent decades, efforts have been made to make networks ever more energy efficient, thereby reducing their operational costs as well as the problems related to the energy sources that power them. In this context, the main objective of this work is the study of the energy consumption of IP-over-WDM networks, namely of routing methods that are energy efficient. We formalised an optimisation model that was evaluated on different network topologies. The analysis showed that in most cases a consumption reduction of around 25% can be achieved.
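The energy saving reported in this abstract comes from routing demands so that traffic is consolidated onto fewer powered-on links. The toy sketch below contrasts plain shortest-path routing with a variant that discounts the cost of already-lit links; the 5-node topology, demand set and discount value are illustrative assumptions, not the work's optimisation model:

```python
import heapq

def route(adj, src, dst, active, reuse_discount=0.1):
    """Dijkstra where already-lit links cost `reuse_discount` instead of 1,
    so demands are steered onto links that are powered on anyway."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist[u]:
            continue                      # stale queue entry
        for v in adj[u]:
            w = reuse_discount if frozenset((u, v)) in active else 1.0
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (d + w, v))
    path, n = [dst], dst
    while n != src:
        n = prev[n]
        path.append(n)
    return list(reversed(path))

def lit_links(adj, demands, discount):
    """Route the demands in order and count how many links end up lit."""
    active = set()
    for s, t in demands:
        p = route(adj, s, t, active, discount)
        active |= {frozenset(e) for e in zip(p, p[1:])}
    return len(active)

# Hypothetical 5-node topology and demand set, for illustration only.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4}, 3: {1, 4}, 4: {2, 3}}
demands = [(0, 3), (0, 4), (3, 4)]
plain = lit_links(adj, demands, 1.0)   # ordinary shortest-path routing
green = lit_links(adj, demands, 0.1)   # energy-aware consolidation
print(plain, green)
```

With the discount, later demands reuse the links lit by earlier ones, so fewer line cards need to stay powered; real formulations express the same trade-off as an integer linear program over link and chassis power.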
Abstract:
As wind power generation undergoes rapid growth, new technical challenges emerge: dynamic stability and power quality. The influence of wind speed disturbances and a pitch control malfunction on the quality of the energy injected into the electric grid is studied for variable-speed wind turbines with different power-electronic converter topologies. Additionally, a new control strategy is proposed for the variable-speed operation of wind turbines with permanent magnet synchronous generators. The performance of disturbance attenuation and system robustness is ascertained. Simulation results are presented and conclusions are duly drawn. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
Dissertation for obtaining the degree of Master in Electrical Engineering, Automation and Industrial Electronics branch
Abstract:
Screening of topologies developed by hierarchical heuristic procedures can be carried out by comparing their optimal performance. In this work we explore mono-objective process optimization using two algorithms, simulated annealing and tabu search, and four different objective functions: two of the net-present-value type (one of them including environmental costs) and two of the global-potential-impact type. The hydrodealkylation of toluene to produce benzene was used as a case study, considering five topologies of different complexity, obtained mainly by including or excluding liquid recycling and heat integration. The performance of the algorithms together with the objective functions was observed, analyzed and discussed from various perspectives: average deviation of the results for each algorithm, capacity for producing a high-purity product, screening of topologies, robustness of the objective functions in screening topologies, trade-offs between economic and environmental objective functions, and variability of the optimum solutions.
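A minimal sketch of one of the two algorithms this abstract compares, simulated annealing, applied to a stand-in one-dimensional objective. The quadratic-plus-sine function, the cooling schedule and all parameter values are illustrative assumptions; the actual objectives are the NPV- and potential-impact-type functions evaluated on the process topologies:

```python
import math, random

def simulated_annealing(f, x0, step=0.5, t0=1.0, alpha=0.95, iters=2000):
    """Generic simulated-annealing minimiser of a scalar objective f.
    Accepts worse moves with probability exp(-delta/T); T decays geometrically."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)   # local random perturbation
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha                               # cooling schedule
    return best, fbest

# Stand-in multimodal objective; its global minimum lies near x = 2.2.
random.seed(0)
f = lambda x: (x - 2) ** 2 + math.sin(5 * x)
x_star, f_star = simulated_annealing(f, x0=-3.0)
print(round(x_star, 2), round(f_star, 3))
```

The early high-temperature phase lets the search hop between local minima of the sine ripple before the geometric cooling freezes it into a basin, which is the same mechanism that lets the algorithm escape poor flowsheet configurations in topology screening.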
Abstract:
Final Master's project for obtaining the degree of Master in Electronics and Telecommunications Engineering