911 results for GNSS, Ambiguity resolution, Regularization, Ill-posed problem, Success probability


Relevance:

100.00%

Publisher:

Abstract:

Network traffic analysis is one of the most crucial techniques for managing a large-scale IP backbone network. Despite its importance, large-scale network traffic monitoring suffers from technical and commercial issues that make precise network traffic data hard to obtain. Although traffic estimation has been the most prevalent technique for acquiring network traffic data, it still has many unsolved problems. As networks grow in scale, the ill-posedness of the traffic estimation problem becomes more severe. Moreover, the statistical features of network traffic have changed greatly under current network architectures and applications. Motivated by this, we propose both a network traffic prediction method and an estimation method. We first use a deep learning architecture to explore the dynamic properties of network traffic, and then propose a novel traffic prediction approach based on a deep belief network. We further propose a traffic estimation method that uses the deep belief network together with link counts and routing information. We validate the effectiveness of our methods on real data sets from the Abilene and GÉANT backbone networks.
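The estimation step rests on the classic ill-posed relation between measured link counts and unknown origin-destination flows, y = Ax, where A is the routing matrix. As a minimal illustration of why some form of regularization is needed (plain Tikhonov here, standing in for the paper's deep-belief-network prior; the matrices are toy values, not Abilene data):

```python
import numpy as np

# Toy routing matrix A (links x origin-destination flows): each OD flow
# traverses a subset of links, so observed link counts are y = A @ x.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])   # 2 links, 3 OD flows -> underdetermined
x_true = np.array([3.0, 1.0, 2.0])
y = A @ x_true                    # link byte counts we can actually measure

# Plain least squares is ill-posed (infinitely many x satisfy y = A x);
# Tikhonov regularization selects a unique, stable solution:
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ y)

# The estimate reproduces the link counts even though x_hat != x_true.
assert np.allclose(A @ x_hat, y, atol=1e-2)
```

The same structure (few link measurements, many OD unknowns) is what any estimator, learned or not, must confront.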

Relevance:

100.00%

Publisher:

Abstract:

Under certain conditions, the mathematical models governing the melting of nano-sized particles predict unphysical results, which suggests these models are incomplete. This thesis studies the addition of different physical effects to these models, using analytic and numerical techniques to obtain realistic and meaningful results. In particular, the mathematical "blow-up" of solutions to ill-posed Stefan problems is examined, and the regularisation of this blow-up via kinetic undercooling. Other effects such as surface tension, density change and size-dependent latent heat of fusion are also analysed.
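The blow-up and its regularisation can be illustrated with a toy interface law (not the thesis's actual PDE formulation): the unregularised front speed scales like 1/R and diverges as the particle radius R vanishes, while a kinetic-undercooling-like term bounds it.

```python
def melt(radius0, k=1.0, c=0.0, dt=1e-4, steps=20000):
    """Integrate a toy interface law dR/dt = -k/(R + c).

    c = 0 mimics the unregularised model, whose interface speed 1/R
    blows up as R -> 0; c > 0 plays the role of kinetic undercooling,
    bounding the speed by k/c. (Illustrative ODE only.)
    """
    R, max_speed = radius0, 0.0
    for _ in range(steps):
        if R <= 0:
            break
        speed = k / (R + c)
        max_speed = max(max_speed, speed)
        R -= speed * dt
    return max(R, 0.0), max_speed

_, v_unreg = melt(1.0, c=0.0)   # speed diverges near total melt
_, v_reg = melt(1.0, c=0.1)     # speed stays below k/c = 10
assert v_reg <= 10.0 and v_unreg > v_reg
```

The qualitative point matches the thesis: the unregularised solution "blows up" in finite time, and the regularised one remains physically meaningful.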

Relevance:

100.00%

Publisher:

Abstract:

The BeiDou system is the first global navigation satellite system in which all satellites transmit triple-frequency signals that can provide positioning, navigation, and timing independently. A benefit of triple-frequency signals is that more useful combinations can be formed, including some extrawide-lane combinations whose ambiguities can generally be fixed instantaneously without distance restriction, although narrow-lane ambiguity resolution (NL AR) still depends on the interreceiver distance or requires a long time to achieve. In this paper, we synthetically study decimeter and centimeter kinematic positioning using BeiDou triple-frequency signals. It starts with AR of two extrawide-lane signals based on the ionosphere-free or ionosphere-reduced geometry-free model. For decimeter positioning, one can immediately use two ambiguity-fixed extrawide-lane observations without pursuing NL AR. To achieve higher accuracy, NL AR is the necessary next step. Although long-baseline NL AR is still challenging, some NL ambiguities can indeed be fixed with high reliability. Partial AR for NL signals is acceptable, because as long as some NL ambiguities are fixed, positioning accuracy will certainly be improved. With the accumulation of observations, more and more NL ambiguities are fixed and the positioning accuracy continues to improve. An efficient Kalman-filtering system is established to implement the whole process. The formulated system is flexible, since additional constraints can easily be applied to enhance the model's strength.
Numerical results from a set of real triple-frequency BeiDou data on a 50 km baseline show that decimeter positioning is achievable instantaneously. With only five data epochs, 84% of NL ambiguities can be fixed, so that the real-time kinematic accuracies are 4.5, 2.5, and 16 cm for the north, east, and height components, respectively, while with 10 data epochs more than 90% of NL ambiguities are fixed and the real-time kinematic solutions improve to centimeter level for all three coordinate components.
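The extra-wide-lane advantage comes directly from the combination wavelengths: a combination (i, j, k) of the three carriers has frequency i·f1 + j·f2 + k·f3 and wavelength c/|i·f1 + j·f2 + k·f3|. A quick check, assuming the BDS-2 B1I/B2I/B3I carrier frequencies (1561.098, 1207.140 and 1268.520 MHz):

```python
# Wavelengths of BeiDou triple-frequency linear carrier combinations.
C = 299_792_458.0                                  # speed of light, m/s
f1, f2, f3 = 1561.098e6, 1207.140e6, 1268.520e6    # B1, B2, B3 carriers, Hz

def comb_wavelength(i, j, k):
    """Wavelength of the (i, j, k) carrier combination."""
    return C / abs(i * f1 + j * f2 + k * f3)

lam_ewl = comb_wavelength(0, -1, 1)   # extra-wide-lane (0,-1,1): ~4.9 m
lam_wl = comb_wavelength(1, 0, -1)    # wide-lane (1,0,-1): ~1.0 m
lam_b1 = comb_wavelength(1, 0, 0)     # single B1 carrier: ~0.19 m

# The longer the wavelength, the easier the integer ambiguity is to fix,
# which is why EWL ambiguities can be fixed instantaneously.
assert lam_ewl > lam_wl > lam_b1
```

At ~4.9 m, an extra-wide-lane ambiguity is robust to meter-level noise and biases, whereas the ~0.19 m carrier ambiguity is not.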

Relevance:

100.00%

Publisher:

Abstract:

The authors present the simulation of the tropical Pacific surface wind variability by a low-resolution (R15 horizontal resolution and 18 vertical levels) version of the Center for Ocean-Land-Atmosphere Interactions, Maryland, general circulation model (GCM) when forced by observed global sea surface temperature. The authors have examined the monthly mean surface winds and precipitation simulated by the model, which was integrated from January 1979 to March 1992. Analyses of the climatological annual cycle and interannual variability over the Pacific are presented. The annual means of the simulated zonal and meridional winds agree well with observations. The only appreciable difference is in the region of strong trade winds, where the simulated zonal winds are about 15%-20% weaker than observed. The amplitudes of the annual harmonics are weaker than observed over the intertropical convergence zone and the South Pacific convergence zone regions. The amplitudes of the interannual variation of the simulated zonal and meridional winds are close to those of the observed variation. The first few dominant empirical orthogonal functions (EOF) of the simulated, as well as the observed, monthly mean winds are found to contain a large amount of high-frequency intraseasonal variations. While the statistical properties of the high-frequency modes, such as their amplitude and geographical locations, agree with observations, their detailed time evolution does not. When the data are subjected to a 5-month running-mean filter, the first two dominant EOFs of the simulated winds, representing the low-frequency El Niño-Southern Oscillation fluctuations, compare quite well with observations. However, the location of the center of the westerly anomalies associated with the warm episodes is simulated about 15 degrees west of the observed locations. The model simulates well the progress of the westerly anomalies toward the eastern Pacific during the evolution of a warm event.
The simulated equatorial wind anomalies are comparable in magnitude to the observed anomalies. An intercomparison of the simulation of the interannual variability by a few other GCMs with comparable resolution is also presented. The success in simulation of the large-scale low-frequency part of the tropical surface winds by the atmospheric GCM seems to be related to the model's ability to simulate the large-scale low-frequency part of the precipitation. Good correspondence between the simulated precipitation and the highly reflective cloud anomalies is seen in the first two EOFs of the 5-month running means. Moreover, the strong correlation found between the simulated precipitation and the simulated winds in the first two principal components indicates the primary role of model precipitation in driving the surface winds. The surface winds simulated by a linear model forced by the GCM-simulated precipitation show good resemblance to the GCM-simulated winds in the equatorial region. This result supports the recent findings that the large-scale part of the tropical surface winds is primarily linear.
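Computationally, the EOF analysis referred to above is an SVD of the mean-removed space-time field: the right singular vectors are the spatial EOFs and the squared singular values give the variance explained by each mode. A minimal sketch on synthetic data with one planted spatial mode:

```python
import numpy as np

# EOF (principal component) analysis via SVD: rows are time samples,
# columns are grid points; EOFs are the right singular vectors.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
pattern = np.sin(np.linspace(0, np.pi, 50))    # one dominant spatial mode
field = np.outer(np.sin(t), pattern) + 0.1 * rng.standard_normal((200, 50))

anom = field - field.mean(axis=0)              # remove the time mean
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
explained = s**2 / np.sum(s**2)                # fraction of variance per mode

# the planted mode dominates the variance, and EOF1 recovers its shape
assert explained[0] > 0.5
corr = abs(np.corrcoef(Vt[0], pattern)[0, 1])
assert corr > 0.8
```

Filtering the field (e.g. a 5-month running mean, as in the paper) before the SVD is what isolates the low-frequency ENSO-like modes from the intraseasonal ones.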

Relevance:

100.00%

Publisher:

Abstract:

380 p. : ill., graphs.

Relevance:

100.00%

Publisher:

Abstract:

Measuring electrical activity in large numbers of cells with high spatial and temporal resolution is a fundamental problem for the study of neural development and information processing. To address this problem, we have constructed FlaSh: a novel, genetically-encoded probe that can be used to measure trans-membrane voltage in single cells. We fused a modified green fluorescent protein (GFP) into a voltage-sensitive potassium channel so that voltage dependent rearrangements in the potassium channel induce changes in the fluorescence of GFP. A voltage sensor encoded into DNA has the advantage that it may be introduced into an organism non-invasively and targeted to specific developmental stages, brain regions, cell types, and sub-cellular compartments.

We also describe modifications to FlaSh that shift its color, kinetics, and dynamic range. We used multiple green fluorescent proteins to produce variants of the FlaSh sensor that generate ratiometric signal output via fluorescence resonance energy transfer (FRET). Finally, we describe initial work toward FlaSh variants that are sensitive to G-protein coupled receptor (GPCR) activation. These sensors can be used to design functional assays for receptor activation in living cells.

Relevance:

100.00%

Publisher:

Abstract:

The search for reliable proxies of past deep ocean temperature and salinity has proved difficult, thereby limiting our ability to understand the coupling of ocean circulation and climate over glacial-interglacial timescales. Previous inferences of deep ocean temperature and salinity from sediment pore fluid oxygen isotopes and chlorinity indicate that the deep ocean density structure at the Last Glacial Maximum (LGM, approximately 20,000 years BP) was set by salinity, and that the density contrast between northern and southern sourced deep waters was markedly greater than in the modern ocean. High density stratification could help explain the marked contrast in carbon isotope distribution recorded in the LGM ocean relative to that we observe today, but what made the ocean's density structure so different at the LGM? How did it evolve from one state to another? Further, given the sparsity of the LGM temperature and salinity data set, what else can we learn by increasing the spatial density of proxy records?

We investigate the cause and feasibility of a highly salinity-stratified deep ocean at the LGM, and we work to increase the amount of information we can glean about the past ocean from pore fluid profiles of oxygen isotopes and chloride. Using a coupled ocean--sea ice--ice shelf cavity model we test whether the deep ocean density structure at the LGM can be explained by ice--ocean interactions over the Antarctic continental shelves, and show that a large contribution of the LGM salinity stratification can be explained through lower ocean temperature. In order to extract the maximum information from pore fluid profiles of oxygen isotopes and chloride we evaluate several inverse methods for ill-posed problems and their ability to recover bottom water histories from sediment pore fluid profiles. We demonstrate that Bayesian Markov Chain Monte Carlo parameter estimation techniques enable us to robustly recover the full solution space of bottom water histories, not only at the LGM, but through the most recent deglaciation and the Holocene up to the present. Finally, we evaluate a non-destructive pore fluid sampling technique, Rhizon samplers, in comparison to traditional squeezing methods and show that despite their promise, Rhizons are unlikely to be a good sampling tool for pore fluid measurements of oxygen isotopes and chloride.
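The Bayesian recovery of bottom water histories can be sketched with a scalar toy problem: a hypothetical exponential forward model stands in for the pore-fluid diffusion equation, and a Metropolis-Hastings random walk recovers the posterior of the unknown bottom-water value.

```python
import numpy as np

# Minimal Metropolis-Hastings sketch: recover an unknown bottom-water
# value b from a noisy pore-fluid-like profile. (Toy forward model;
# the real one is a diffusion equation.)
rng = np.random.default_rng(1)
z = np.linspace(0, 1, 30)                    # depth

def forward(b):                              # toy relaxation toward b
    return b * (1 - np.exp(-3 * z))

b_true = 1.2
obs = forward(b_true) + 0.05 * rng.standard_normal(z.size)

def log_post(b):                             # flat prior + Gaussian likelihood
    r = obs - forward(b)
    return -0.5 * np.sum(r**2) / 0.05**2

samples, b = [], 0.0
lp = log_post(b)
for _ in range(5000):
    prop = b + 0.1 * rng.standard_normal()   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
        b, lp = prop, lp_prop
    samples.append(b)

post = np.array(samples[1000:])              # drop burn-in
assert abs(post.mean() - b_true) < 0.15
```

The payoff of the MCMC framing, as in the thesis, is that the retained samples describe the full solution space (spread, multimodality) rather than a single best-fit history.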

Relevance:

100.00%

Publisher:

Abstract:

This work analyzes various aspects of public international law concerning surface and underground freshwater resources. Power generation, water supply, fishing, navigation, leisure, agriculture and industry: human uses of fresh water are many, but above all water is essential for sustaining every form of life on Earth. The relations between States, and the relations that take shape within States, aimed at using, controlling and preserving freshwater sources are complex and open to many kinds of analysis; the one undertaken here is a legal analysis, set in the political context of the expansion of liberal capitalism. The aim is to identify and analyze legal norms produced at the multilateral international level, considering their form, content and possible effects: in resolving conflicts between States over the control and use of fresh water, in setting parameters for addressing the environmental crisis, and in overcoming problems of access to water. The first part of the work identifies the relevant norms of public international law, describing, first, the historical evolution of international fluvial law up to the doctrinal studies of international law and the 1997 New York Convention. The second chapter presents the topic of fresh water in the context of the emergence of international environmental law, the holding of conferences and creation of international forums on water, and the development of a human right to water. The third chapter takes up the still-incipient question of regulating the uses of groundwater, analyzing the work of the United Nations International Law Commission, which culminated in the adoption by that organization's General Assembly of a Resolution on the Law of Transboundary Aquifers.

The second part of the work analyzes the application of the rules and principles set out in international law texts to concrete cases, comparing them with the solutions proposed in paradigmatic water conflicts, such as the Gabcikovo-Nagymaros case and the Pulp Mills ("Papeleras") case between Argentina and Uruguay, both decided by the International Court of Justice. The second part also examines the case of the Guarani aquifer, a system of interconnected aquifers extending beneath the subsoil of Argentina, Brazil, Paraguay and Uruguay, which in August 2010 became the subject of an international treaty signed within Mercosul. Finally, the research develops ideas and explanations for the existence (or not) and the effectiveness (or lack thereof) of international law norms on water resources, considering the concept of State sovereignty, which is at times the scapegoat for the absence of signatures on treaties or of votes on declarations, and at times the very foundation for States' adoption of commitments. It concludes by attempting to answer the following questions: Is there an international law of fresh water? Are its norms effective? What purpose do these norms of international law serve, beyond affirming their own existence as goals to be attained?

Relevance:

100.00%

Publisher:

Abstract:

This dissertation considers two approaches to vehicle traffic: the macroscopic and the microscopic. Macroscopically, traffic is described by three interrelated physical quantities, namely velocity, density and flow, expressing conservation laws for the number of vehicles. There are several models for macroscopic vehicle traffic. Most treat vehicle traffic as a compressible fluid, translating the law of mass conservation to vehicles, and require an equation of state for the velocity-density pair establishing a relation between them. The microscopic approach, in contrast, treats vehicles as individual particles. We consider models of the car-following class, which rest on the principle that the (n - 1)-th vehicle (the "following car") accelerates in response to the stimulus it receives from the n-th vehicle. We analyze the vehicle-conservation equation in macroscopic traffic flow models. We then solve this equation by linearizing the model, studying its characteristic lines, and present the solution of the nonlinear problem on bounded domains using the method of characteristics.
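A minimal sketch of the microscopic approach, using the classic car-following stimulus-response law in which the follower's acceleration is proportional to the velocity difference with its leader (illustrative parameter values):

```python
# Car-following sketch: dv_f/dt = lam * (v_leader - v_f), i.e. the
# follower accelerates in proportion to the stimulus (velocity
# difference) it receives from the vehicle ahead.
lam, dt = 0.5, 0.01
x_l, v_l = 50.0, 10.0   # leader: constant speed, starts 50 m ahead
x_f, v_f = 0.0, 0.0     # follower starts at rest

for _ in range(2000):   # 20 s of forward-Euler integration
    a_f = lam * (v_l - v_f)
    v_f += a_f * dt
    x_f += v_f * dt
    x_l += v_l * dt

# the follower's speed relaxes exponentially toward the leader's
assert abs(v_f - v_l) < 0.1
```

The parameter lam sets the relaxation time (1/lam = 2 s here); in the macroscopic picture, aggregating many such interacting vehicles yields the density-velocity-flow relations the dissertation analyzes.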

Relevance:

100.00%

Publisher:

Abstract:

Fouling is an undesirable phenomenon that builds up on heat exchanger surfaces over the course of operation, reducing thermal effectiveness and increasing flow resistance in this equipment. These effects carry major economic and environmental consequences, owing to higher operating costs (additional energy is required), higher design costs (equipment with larger heat transfer area is needed), hydraulic limitations (which can reduce the processed throughput) and increased emissions (more fossil fuel burned to supply the additional energy required). In this context, the present work aims to provide robust computational tools that apply optimization techniques to fouling management in heat exchanger networks, seeking to minimize its negative effects. These tools were developed using mathematical programming in the GAMS computational environment, and three distinct approaches to the fouling problem were investigated. One consists of identifying the optimal set of heat exchangers to be cleaned during a plant maintenance shutdown, so as to restore the heat duty of this equipment by removing the existing deposits. The other two approaches optimize the distribution of stream flow rates across parallel branches, one in steady-state form and the other dynamically, in order to maximize energy recovery across the network. The performance of these three approaches is illustrated through a set of heat exchanger network examples, in which the real gains obtained with the optimization tools developed are demonstrated.
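The first approach, choosing which exchangers to clean during a shutdown, is at heart a combinatorial optimization. A toy version with hypothetical recovered duties (the work solves the real problem with mathematical programming in GAMS; brute-force enumeration stands in here):

```python
from itertools import combinations

# Hypothetical heat duty (kW) each fouled exchanger would recover if
# cleaned, and a limit on how many can be cleaned during the shutdown.
recovered_duty = {"E1": 120.0, "E2": 300.0, "E3": 90.0, "E4": 210.0}
max_cleaned = 2

# Pick the subset of exchangers that maximizes total recovered duty.
best = max(combinations(recovered_duty, max_cleaned),
           key=lambda subset: sum(recovered_duty[e] for e in subset))

assert set(best) == {"E2", "E4"}   # 300 + 210 = 510 kW recovered
```

In a real network, the recovered duties interact through the stream temperatures, which is why a full mathematical-programming model is needed rather than this independent-duty simplification.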

Relevance:

100.00%

Publisher:

Abstract:

The authors demonstrate that a widely proposed method of robot dynamic control can be inherently unstable, due to an algebraic feedback loop condition causing an ill-posed feedback system. By focussing on the concept of ill-posedness a necessary and sufficient condition is derived for instability in robot manipulator systems which incorporate online acceleration cross-coupling control. Also demonstrated is a quasilinear multivariable control framework useful for assessing the robustness of this type of control when the instability condition is not obeyed.
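The algebraic-loop condition can be made concrete: with acceleration cross-coupling feedback u = r + K·q̈ and dynamics q̈ = M⁻¹·u, the loop equation is (I − K·M⁻¹)·u = r, which loses a unique solution exactly when I − K·M⁻¹ is singular. A sketch with illustrative matrices (not the paper's manipulator model):

```python
import numpy as np

# Illustrative inverse inertia matrix M^{-1} (2-joint toy system).
Minv = np.array([[0.5, 0.0],
                 [0.0, 0.5]])

K_ok = np.eye(2)        # I - K Minv = 0.5 I  -> loop is well posed
K_bad = 2 * np.eye(2)   # I - K Minv = 0     -> loop is ill posed

def loop_matrix(K):
    """Matrix multiplying u in the algebraic loop (I - K Minv) u = r."""
    return np.eye(2) - K @ Minv

assert abs(np.linalg.det(loop_matrix(K_ok))) > 1e-9    # solvable
assert abs(np.linalg.det(loop_matrix(K_bad))) < 1e-9   # singular: ill posed
```

Near the singular case the loop gain becomes arbitrarily large, which is the mechanism behind the instability the authors analyze.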

Relevance:

100.00%

Publisher:

Abstract:

An implementation of the inverse vector Jiles-Atherton model for the solution of non-linear hysteretic finite element problems is presented. The implementation applies the fixed point method with differential reluctivity values obtained from the Jiles-Atherton model. Differential reluctivities are usually computed using numerical differentiation, which is ill-posed and amplifies small perturbations causing large sudden increases or decreases of differential reluctivity values, which may cause numerical problems. A rule based algorithm for conditioning differential reluctivity values is presented. Unwanted perturbations on the computed differential reluctivity values are eliminated or reduced with the aim to guarantee convergence. Details of the algorithm are presented together with an evaluation of the algorithm by a numerical example. The algorithm is shown to guarantee convergence, although the rate of convergence depends on the choice of algorithm parameters. © 2011 IEEE.
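A hypothetical rule in the spirit of the conditioning algorithm (not the paper's exact rules): clamp each differentiated value to a bounded ratio of the previously accepted one, so that isolated spikes from numerical differentiation cannot propagate into the fixed-point iteration.

```python
def condition(raw, max_ratio=3.0):
    """Clamp sudden jumps in a sequence of differential reluctivity values.

    Each value may differ from the previously accepted one by at most a
    factor of max_ratio (hypothetical rule, illustrative only).
    """
    out = [raw[0]]
    for v in raw[1:]:
        prev = out[-1]
        lo, hi = prev / max_ratio, prev * max_ratio
        out.append(min(max(v, lo), hi))   # clamp into [lo, hi]
    return out

raw = [1.0, 1.1, 40.0, 1.2, 0.01, 1.3]    # two numerical-noise spikes
cond = condition(raw)
assert cond[2] < 40.0 and cond[4] > 0.01  # both spikes suppressed
```

The trade-off noted in the paper appears here too: tighter clamping (smaller max_ratio) gives smoother values but slows how quickly the sequence can track genuine changes, affecting the convergence rate.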

Relevance:

100.00%

Publisher:

Abstract:

Current research into the process of engineering design is extending the use of computers towards the acquisition, representation and application of design process knowledge, in addition to the existing storage and manipulation of product-based models of design objects. This is a difficult task because the design of mechanical systems is a complex, often unpredictable process involving ill-structured problem solving skills and large amounts of knowledge, some of which may be of an incomplete and subjective nature. Design problems require the integration of a variety of modes of working, such as numerical, graphical, algorithmic or heuristic, and demand that products be developed through synthesis, analysis and evaluation activities.

This report presents the results of a feasibility study into the blackboard approach and discusses the development of an initial prototype system that will enable an alphanumeric design dialogue between a designer and an expert to be analysed in a formal way, thus providing real-life protocol data on which to base the blackboard message structures.
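The blackboard approach under study can be sketched as a shared data structure plus knowledge sources that post contributions when their trigger conditions hold (the design content below is hypothetical, purely to show the control structure):

```python
# Minimal blackboard sketch: knowledge sources inspect a shared
# blackboard and contribute when their trigger condition is met.
blackboard = {"requirement": "shaft diameter", "proposals": []}

def designer(bb):
    """Knowledge source: posts an initial sizing if none exists yet."""
    if not bb["proposals"]:
        bb["proposals"].append("d = 20 mm (initial sizing)")

def expert(bb):
    """Knowledge source: reviews an existing proposal once."""
    if bb["proposals"] and "checked" not in bb:
        bb["proposals"].append("d = 25 mm (fatigue margin added)")
        bb["checked"] = True

# A simple control loop fires each knowledge source until quiescence.
for _ in range(3):
    designer(blackboard)
    expert(blackboard)

assert len(blackboard["proposals"]) == 2 and blackboard["checked"]
```

The message structures the report derives from protocol data would play the role of the entries posted on the blackboard here.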

Relevance:

100.00%

Publisher:

Abstract:

The present work examines the beginnings of ancient hermeneutics. More specifically, it discusses the connection between the rise of the practice of allegoresis, on the one hand, and the emergence of the first theory of figurative language, on the other. Thus, this book investigates the specific historical and cultural circumstances that enabled the ancient Greeks not only to discover the possibility of allegorical interpretation, but also to treat figurative language as a philosophical problem. By posing difficulties in understanding the enigmatic sense of various esoteric doctrines, poems, oracles and riddles, figurative language created the context for theoretical reflection on the meaning of these “messages”. Hence, ancient interpreters began to ponder over the nature and functions of figurative (“enigmatic”) language as well as over the techniques of its proper use and interpretation. Although the practice of allegorical interpretation was closely linked to the development of the whole of ancient philosophy, the present work covers only the period from the 6th to the 4th century B.C. It concentrates, then, on the philosophical and cultural consequences of allegoresis in the classical age. The main thesis advocated here has it that the ancient Greeks were inclined to regard allegory as a cognitive problem rather than merely as a stylistic or a literary one. When searching for the hidden meanings of various esoteric doctrines, poems, oracles and riddles, ancient interpreters of these “messages” assumed allegory to be the only tool suitable for articulating certain matters. In other words, it was their belief that the use of figurative language resulted from the necessity of expressing things that were otherwise inexpressible. The present work has been organized in the following manner. The first part contains historical and philological discussions that provide the point of departure for more philosophical considerations.
This part consists of two introductory chapters. Chapter one situates the practice of allegorical interpretation at the borderline of two different traditions: the rhetorical-grammatical and the hermeneutical. In order to clearly differentiate between the two, chapter one distinguishes between allegory and allegoresis, on the one hand, and allegoresis and exegesis, on the other. While pointing to the conventionality (and even arbitrariness) of such distinctions, the chapter argues, nevertheless, for their heuristic usefulness. The remaining part of chapter one focuses on a historical and philological reconstruction of the most important conceptual tools of ancient hermeneutics. Discussing the semantics of such terms as allēgoría, hypónoia, ainigma and symbolon proves important for at least two crucial reasons. Firstly, it reveals the mutual affinity between allegoresis and divination, i.e., practices that are inherently connected with the need to discover the latent meaning of the “message” in question (whether poem or oracle). Secondly, these philological analyses bring to light the specificity of the ancient understanding of such concepts as allegory or symbol. It goes without saying that antiquity employed these terms in a manner quite disparate from modernity. Chapter one concludes with a discussion of ancient views on the cognitive value of figurative (“enigmatic”) language. Chapter two focuses on the role that allegoresis played in the process of transforming mythos into logos. It is suggested here that it was the practice of allegorical interpretation that made it possible to preserve the traditional myths as an important point of reference for the whole of ancient philosophy. 
Thus, chapter two argues that the existence of a clear opposition between mythos and logos in Preplatonic philosophy is highly questionable in light of the indisputable fact that the Presocratics, Sophists and Cynics were profoundly convinced about the cognitive value of mythos (this conviction was also shared by Plato and Aristotle, but their attitude towards myth was more complex). Consequently, chapter two argues that in Preplatonic philosophy, myth played a function analogous to the concepts discussed in chapter one (i.e., hidden meanings, enigmas and symbols), for in all these cases, ancient interpreters found tools for conveying issues that were otherwise difficult to convey. Chapter two concludes with a classification of various types of allegoresis. Whilst chapters one and two serve as a historical and philological introduction, the second part of this book concentrates on the close relationship between the development of allegoresis, on the one hand, and the flowering of philosophy, on the other. Thus, chapter three discusses the crucial role that allegorical interpretation came to play in Preplatonic philosophy, chapter four deals with Plato’s highly complex and ambivalent attitude to allegoresis, and chapter five has been devoted to Aristotle’s original approach to the practice of allegorical interpretation. It is evident that allegoresis was of paramount importance for the ancient thinkers, irrespective of whether they would value it positively (Preplatonic philosophers and Aristotle) or negatively (Plato). Beginning with the 6th century B.C., the ancient practice of allegorical interpretation is motivated by two distinct interests. On the one hand, the practice of allegorical interpretation reflects the more or less “conservative” attachment to the authority of the poet (whether Homer, Hesiod or Orpheus).
The purpose of this apologetic allegoresis is to exonerate poetry from the charges leveled at it by the first philosophers and, though to a lesser degree, historians. Generally, these allegorists seek to save the traditional paideia that builds on the works of the poets. On the other hand, the practice of allegorical interpretation reflects also the more or less “progressive” desire to make original use of the authority of the poet (whether Homer, Hesiod or Orpheus) so as to promote a given philosophical doctrine. The objective of this instrumental allegoresis is to exculpate philosophy from the accusations brought against it by the more conservative circles. Needless to say, these allegorists significantly contribute to the process of the gradual replacing of the mythical view of the world with its more philosophical explanation. The present book suggests that it is the philosophy of Aristotle that should be regarded as a sort of acme in the development of ancient hermeneutics. The reasons for this are twofold. On the one hand, the Stagirite positively values the practice of allegoresis, rehabilitating, thus, the tradition of Preplatonic philosophy against Plato. And, on the other hand, Aristotle initiates the theoretical reflection on figurative (“enigmatic”) language. Hence, in Aristotle we encounter not only the practice of allegoresis, but also the theory of allegory (although the philosopher does not use the term allēgoría). With the situation being as it is, the significance of Aristotle’s work cannot be overestimated. First of all, the Stagirite introduces the concept of metaphor into the then philosophical considerations. From that moment onwards, the phenomenon of figurative language becomes an important philosophical issue. After Aristotle, the preponderance of thinkers would feel obliged to specify the rules for the appropriate use of figurative language and the techniques of its correct interpretation.
Furthermore, Aristotle ascribes to metaphor (and to various other “excellent” sayings) the function of increasing and enhancing our knowledge. Thus, according to the Stagirite, figurative language is not only an ornamental device, but it can also have a significant explanatory power. Finally, Aristotle observes that figurative expressions cause words to become ambiguous. In this context, the philosopher notices that ambiguity can enrich the language of a poet, but it can also hinder a dialectical discussion. Accordingly, Aristotle is inclined to value polysemy either positively or negatively. Importantly, however, the Stagirite is perfectly aware of the fact that in natural languages ambiguity is unavoidable. This is why Aristotle initiates a systematic reflection on the phenomenon of ambiguity and distinguishes its various kinds. In Aristotle, ambiguity is, then, both a problem that needs to be identified and a tool that can help in elucidating intricate philosophical issues. This unique approach to ambiguity and figurative (“enigmatic”) language enabled Aristotle to formulate invaluable intuitions that still await appropriate recognition.

Relevance:

100.00%

Publisher:

Abstract:

Detecting and understanding anomalies in IP networks is an open and ill-defined problem. Toward this end, we have recently proposed the subspace method for anomaly diagnosis. In this paper we present the first large-scale exploration of the power of the subspace method when applied to flow traffic. An important aspect of this approach is that it fuses information from flow measurements taken throughout a network. We apply the subspace method to three different types of sampled flow traffic in a large academic network: multivariate timeseries of byte counts, packet counts, and IP-flow counts. We show that each traffic type brings into focus a different set of anomalies via the subspace method. We illustrate and classify the set of anomalies detected. We find that almost all of the anomalies detected represent events of interest to network operators. Furthermore, the anomalies span a remarkably wide spectrum of event types, including denial of service attacks (single-source and distributed), flash crowds, port scanning, downstream traffic engineering, high-rate flows, worm propagation, and network outage.
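Computationally, the subspace method is a PCA decomposition of the link-traffic timeseries: time bins whose energy in the residual (off-principal) subspace is large are flagged as anomalous. A sketch on synthetic low-rank "normal" traffic with one injected anomaly (not Abilene or GÉANT data):

```python
import numpy as np

# Synthetic "normal" traffic: 200 time bins x 40 links, low-rank by
# construction, plus one injected anomalous time bin.
rng = np.random.default_rng(2)
normal = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 40))
data = normal.copy()
data[100] += 3 * rng.standard_normal(40)     # anomaly at time bin 100

mean = data.mean(axis=0)
U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
P = Vt[:10].T                                # top-10 principal subspace

# Residual-subspace projection and squared prediction error per time bin.
resid = (data - mean) - (data - mean) @ P @ P.T
spe = np.sum(resid**2, axis=1)

assert int(np.argmax(spe)) == 100            # anomaly has the largest residual
```

In practice a threshold on the squared prediction error (the Q-statistic) replaces the argmax, and, as the paper shows, running the same decomposition on byte, packet, and IP-flow counts surfaces different classes of anomalies.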