990 results for multidimensional systems


Relevance:

30.00%

Publisher:

Abstract:

Modeling the performance behavior of parallel applications to predict their execution times for larger problem sizes and numbers of processors has been an active area of research for several years. Existing curve-fitting strategies for performance modeling use data from experiments conducted under uniform loading conditions, so the accuracy of these models degrades when the load conditions on the machines and network change. In this paper, we analyze a curve-fitting model that attempts to predict execution times for any load conditions that may exist on the systems during application execution. Based on experiments conducted with the model for a parallel eigenvalue problem, we propose a multidimensional curve-fitting model based on rational polynomials for performance prediction of parallel applications in non-dedicated environments. We used the rational-polynomial-based model to predict execution times for two other parallel applications on systems with large load dynamics. In all cases, the model gave good predictions of execution times, with average percentage prediction errors below 20%.
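
The abstract does not give the model's functional form; as a hedged sketch (the rational form, the load variable, and all data below are assumptions, not the paper's), a rational-polynomial execution-time model can be fitted with nonlinear least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical rational-polynomial model of execution time T as a function of
# problem size n and available CPU fraction l (the paper's exact form is not given):
#   T(n, l) = (a0 + a1*n + a2*n**2) / (1 + b1*l)
def rational_model(X, a0, a1, a2, b1):
    n, l = X                       # problem size, average available CPU fraction
    return (a0 + a1 * n + a2 * n**2) / (1.0 + b1 * l)

# Toy training data: measured execution times under two different load conditions.
n = np.array([100, 200, 400, 800, 100, 200, 400, 800], dtype=float)
l = np.array([0.9, 0.9, 0.9, 0.9, 0.5, 0.5, 0.5, 0.5])  # available CPU fraction
t = np.array([1.2, 3.9, 14.1, 55.0, 2.1, 7.0, 25.5, 99.0])

params, _ = curve_fit(rational_model, (n, l), t, p0=[1, 0.01, 1e-4, 1])
prediction = rational_model((np.array([1600.0]), np.array([0.7])), *params)
print(f"predicted time for n=1600 under 70% available CPU: {prediction[0]:.1f} s")
```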

Relevance:

30.00%

Publisher:

Abstract:

In this work, computationally efficient approximate methods are developed for analyzing uncertain dynamical systems. Uncertainties in both the excitation and the modeling are considered and examples are presented illustrating the accuracy of the proposed approximations.

For nonlinear systems under uncertain excitation, methods are developed to approximate the stationary probability density function and statistical quantities of interest. The methods are based on approximating solutions to the Fokker-Planck equation for the system, and differ from traditional methods, in which approximate solutions to stochastic differential equations are found. The new methods require little computational effort, and examples are presented for which the accuracy of the proposed approximations compares favorably with results obtained by existing methods. The most significant improvements are made in approximating quantities related to the extreme values of the response, such as expected outcrossing rates, which are crucial for evaluating the reliability of the system.
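
As a minimal point of reference for such approximations (the thesis's actual schemes are not reproduced here), the scalar case has a closed-form stationary Fokker-Planck solution that can be evaluated directly; the drift below is an illustrative choice:

```python
import numpy as np

# For a scalar Ito SDE  dX = f(X) dt + sigma dW,  the stationary Fokker-Planck
# equation has the closed-form solution
#   p(x) ∝ exp( (2 / sigma**2) * ∫ f(u) du ),
# which is the exact target that approximate schemes can be measured against.
sigma = 0.8
f = lambda x: x - x**3          # bistable drift, f(x) = -V'(x); illustrative only

x = np.linspace(-3, 3, 2001)
dx = x[1] - x[0]
drift_integral = np.cumsum(f(x)) * dx       # numerical ∫ f(u) du
p = np.exp(2.0 * drift_integral / sigma**2)
p /= np.trapz(p, x)                         # normalize to a probability density

mean_sq = np.trapz(x**2 * p, x)             # a stationary statistic of interest
print(f"stationary E[X^2] = {mean_sq:.3f}")
```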

Laplace's method of asymptotic approximation is applied to approximate the probability integrals that arise when analyzing systems with modeling uncertainty. The asymptotic approximation reduces the problem of evaluating a multidimensional integral to solving a minimization problem, and the results become asymptotically exact as the uncertainty in the modeling goes to zero. The method is found to provide good approximations for the moments and outcrossing rates of systems with uncertain parameters under stochastic excitation, even when there is a large amount of uncertainty in the parameters. The method is also applied to classical reliability integrals, providing approximations in both the transformed (independent, normally distributed) variables and the original variables. In the transformed variables, the asymptotic approximation yields a very simple formula for approximating the value of second-order reliability method (SORM) integrals. In many cases it may be computationally expensive to transform the variables, so an approximation is also developed in the original variables. Examples are presented illustrating the accuracy of the approximations, and results are compared with existing approximations.
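
A sketch of the underlying reduction, with an assumed integrand (nothing below is taken from the thesis beyond the method itself): Laplace's method replaces the integral with a minimization plus a Hessian determinant,

```python
import numpy as np
from scipy.optimize import minimize

# Laplace's method: I(eps) = ∫ exp(-g(x)/eps) dx is approximated by
#   (2*pi*eps)^(d/2) * |det H(x*)|^(-1/2) * exp(-g(x*)/eps),
# where x* minimizes g and H is the Hessian of g at x*.
def g(x):   # illustrative integrand, not from the thesis
    return (x[0] - 1.0)**2 + 0.5 * (x[1] + 0.5)**2 + 0.1 * x[0]**2 * x[1]**2

def hessian(fun, x, h=1e-4):
    """Central finite-difference Hessian."""
    d = len(x)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            e_i, e_j = np.eye(d)[i] * h, np.eye(d)[j] * h
            H[i, j] = (fun(x + e_i + e_j) - fun(x + e_i - e_j)
                       - fun(x - e_i + e_j) + fun(x - e_i - e_j)) / (4 * h**2)
    return H

eps = 0.05
res = minimize(g, x0=np.zeros(2))      # the multidimensional integral becomes
H = hessian(g, res.x)                  # a minimization problem
d = len(res.x)
laplace = ((2 * np.pi * eps)**(d / 2)
           / np.sqrt(np.linalg.det(H))) * np.exp(-res.fun / eps)
print(f"Laplace approximation of the integral: {laplace:.4e}")
```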

Relevance:

30.00%

Publisher:

Abstract:

A new neural network architecture is introduced for incremental supervised learning of recognition categories and multidimensional maps in response to arbitrary sequences of analog or binary input vectors. The architecture, called Fuzzy ARTMAP, achieves a synthesis of fuzzy logic and Adaptive Resonance Theory (ART) neural networks by exploiting a close formal similarity between the computations of fuzzy subsethood and ART category choice, resonance, and learning. Fuzzy ARTMAP also realizes a new Minimax Learning Rule that conjointly minimizes predictive error and maximizes code compression, or generalization. This is achieved by a match-tracking process that increases the ART vigilance parameter by the minimum amount needed to correct a predictive error. As a result, the system automatically learns a minimal number of recognition categories, or "hidden units", to meet accuracy criteria. Category proliferation is prevented by normalizing input vectors at a preprocessing stage. A normalization procedure called complement coding leads to a symmetric theory in which the MIN operator (∧) and the MAX operator (∨) of fuzzy logic play complementary roles. Complement coding uses on-cells and off-cells to represent the input pattern, and preserves individual feature amplitudes while normalizing the total on-cell/off-cell vector. Learning is stable because all adaptive weights can only decrease in time. Decreasing weights correspond to increasing sizes of category "boxes"; smaller vigilance values lead to larger category boxes. Improved prediction is achieved by training the system several times using different orderings of the input set. This voting strategy can also be used to assign probability estimates to competing predictions given small, noisy, or incomplete training sets. Four classes of simulations illustrate Fuzzy ARTMAP performance compared to benchmark backpropagation and genetic algorithm systems. These simulations include (i) finding points inside vs. outside a circle; (ii) learning to tell two spirals apart; (iii) incremental approximation of a piecewise continuous function; and (iv) a letter recognition database. The Fuzzy ARTMAP system is also compared to Salzberg's NGE system and to Simpson's FMMC system.
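
The complement-coding, category-choice, and vigilance computations can be stated compactly; the sketch below assumes illustrative inputs and prototype weights, not the paper's simulations:

```python
import numpy as np

def complement_code(a):
    """Complement coding: input a in [0,1]^M becomes I = (a, 1 - a), so |I| = M
    regardless of feature amplitudes (this prevents category proliferation)."""
    return np.concatenate([a, 1.0 - a])

def category_choice(I, weights, alpha=0.001):
    """Fuzzy ART choice function T_j = |I ∧ w_j| / (alpha + |w_j|), where ∧ is
    the component-wise minimum and |.| the L1 norm."""
    fuzzy_and = np.minimum(I, weights)            # shape (n_categories, 2M)
    return fuzzy_and.sum(axis=1) / (alpha + weights.sum(axis=1))

def vigilance_test(I, w_j, rho):
    """Resonance occurs when |I ∧ w_j| / |I| >= rho (the vigilance parameter)."""
    return np.minimum(I, w_j).sum() / I.sum() >= rho

# Tiny demonstration with two existing category weight vectors.
I = complement_code(np.array([0.2, 0.7]))
W = np.array([complement_code(np.array([0.25, 0.65])),
              complement_code(np.array([0.9, 0.1]))])
T = category_choice(I, W)
j = int(np.argmax(T))                             # winning category
print(f"chosen category: {j}, resonates: {vigilance_test(I, W[j], rho=0.8)}")
```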

Relevance:

30.00%

Publisher:

Abstract:

This paper, chosen as a best paper from the 2004 SAMOS Workshop on Computer Systems, describes a novel, efficient methodology for automatically creating embedded DSP computer systems. The novelty is that embedded electronic signal processing systems, such as radar or sonar, can now be designed by anyone from the algorithm level, i.e. no low-level system design experience is required, whilst still achieving low, controllable implementation overheads and high real-time performance. In the chosen design example, a bank of Normalised Lattice Filter (NLF) components is created which achieves a four-fold reduction in the required processing resources with no performance decrease.

Relevance:

30.00%

Publisher:

Abstract:

Based on an algorithm for pattern matching in character strings, we implement a pattern-matching machine that searches for occurrences of patterns in multidimensional time series. Before the search process takes place, time series are encoded in user-designed alphabets. The patterns, in turn, are formulated as regular expressions composed of letters from these alphabets and operators. Furthermore, we develop a genetic algorithm to breed patterns that maximize a user-defined fitness function. In an application to financial data, we show that patterns bred to predict high exchange-rate volatility in training samples retain statistically significant predictive power in validation samples.
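
A minimal sketch of the encode-then-match idea, assuming an illustrative three-letter alphabet and pattern (the paper's alphabets and bred patterns are not reproduced):

```python
import re
import numpy as np

# Encode a real-valued series into a user-designed alphabet, then search for
# occurrences of a pattern written as a regular expression over that alphabet.
def encode(series, bins=(-0.01, 0.01), letters="dfu"):
    """Map each return to a letter: d = down, f = flat, u = up."""
    idx = np.digitize(series, bins)
    return "".join(letters[i] for i in idx)

returns = np.diff(np.log([100, 101, 101.1, 99.8, 99.7, 100.9, 102.0, 101.9]))
text = encode(returns)
print("encoded series:", text)

# Pattern: a down move, one or more flats, then two consecutive up moves.
pattern = re.compile(r"df+uu")
for m in pattern.finditer(text):
    print(f"match at positions {m.start()}-{m.end() - 1}: {m.group()}")
```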

Relevance:

30.00%

Publisher:

Abstract:

The main topic of this thesis is the interference cancellation problem in multi-user systems with distributed antennas. Accordingly, it opens with an overview of the main properties of a distributed antenna system, including an analytical study of the impact of connecting the system's users to additional distributed antennas. This analysis shows that the most important system property for obtaining the maximum gain from connecting additional transmit antennas is spatial symmetry, and that users at the cell edges benefit the most; these results are confirmed by simulation. The multi-user interference cancellation problem is considered for both the one-dimensional (i.e. uncoded) and the multidimensional (i.e. coded) case. For the one-dimensional case, a nonlinear precoding algorithm that minimizes the bit error rate is proposed and evaluated. Both single-carrier and multi-carrier transmission are addressed, as well as co-located and distributed antenna scenarios. It is shown that the proposed scheme can be viewed as an extension of the well-known zero-forcing scheme, whose performance is proved to be a lower bound for the generalized scheme. The algorithm is evaluated by simulation for different scenarios, indicating near-optimal performance at low complexity. For the multidimensional case, a scheme for binary dirty paper coding based on two-layer codes is proposed. In developing this scheme, lossy data compression is considered as a subproblem. Simulation results indicate reliable transmission close to the Shannon limit.
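
The abstract identifies zero forcing as the baseline that the proposed nonlinear precoder generalizes; a minimal sketch of that baseline, with an assumed random channel, is:

```python
import numpy as np

# Baseline linear zero-forcing (ZF) precoder, the lower-bound reference scheme:
# with channel H (users x antennas), W = H^H (H H^H)^{-1} pre-inverts the
# channel so each user receives only its own symbol (plus noise).
rng = np.random.default_rng(0)
n_users, n_antennas = 3, 4
H = (rng.standard_normal((n_users, n_antennas))
     + 1j * rng.standard_normal((n_users, n_antennas))) / np.sqrt(2)

W = H.conj().T @ np.linalg.inv(H @ H.conj().T)          # right pseudo-inverse of H
s = np.array([1 + 1j, -1 + 1j, 1 - 1j]) / np.sqrt(2)    # QPSK user symbols
x = W @ s                                               # transmitted vector
print("interference-free reception:", np.allclose(H @ x, s))
```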

Relevance:

30.00%

Publisher:

Abstract:

Introduction: The profusion of information in the medical area creates management problems, making systematic methods for storage and retrieval necessary. When the information belongs to the context of clinical records, these methods must integrate controlled biomedical terminologies as well as the desirable characteristics regarding structure, content, and clinical results. The objective of this article is to test the applicability and retrieval capacity of a multidimensional system developed for the classification and management of health information. Methods: From the questions received over six years (Medicines Information Service, Pharmaceutical Department, Coimbra University Hospitals), 300 questions on clinical information were selected by a computerized random method. They were characterized, and applicability was evaluated by the amount classified and by the need to alter the system, which is composed of several independent dimensions incorporating concepts that are sometimes hierarchical. Question retrieval was tested by searching for information within one dimension or by crossing dimensions. Results: All questions were classified: 53% are clinical cases, concentrated in genitourinary diseases; metabolic, nutritional, and endocrine diseases; neoplasms; infections; and diseases of the nervous system. In 81%, the object is a drug, mostly anti-infective and anti-neoplastic agents. The therapeutics and safety areas were the most requested, mainly covering the subjects of drug use, adverse reactions, drug identification, and pharmaceutical technology. Regarding applicability, it was necessary to add some concepts and modify some hierarchical groups, which neither changed the basic structure nor collided with the desirable characteristics. The limitations were related to the integrated external classification systems. A search in the subject dimension for the concept of drug administration retrieved 19 questions. Crossing two dimensions, anti-infectives (external) and teratogenicity (subject), retrieved three questions. In both examples, information is retrieved from any level of the hierarchy, from the most general to the most specific, and even from external dimensions. Conclusions: The use of the system on this sample demonstrated its applicability to the classification and filing of clinical information, its retrieval capacity, and its flexibility, accommodating changes without interfering with the desirable characteristics. This tool allows retrieval of the patient-oriented evidence that matters.
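
The article does not publish the system's schema; a hypothetical sketch of the classify-then-cross-dimensions retrieval it describes (all dimension and concept names below are invented) might look like:

```python
# Each question is tagged with one concept per independent dimension, and a
# query either searches a single dimension or crosses several.
questions = [
    {"id": 1, "subject": "teratogenicity", "drug_class": "anti-infectives"},
    {"id": 2, "subject": "adverse reactions", "drug_class": "anti-neoplastics"},
    {"id": 3, "subject": "drug administration", "drug_class": "anti-infectives"},
]

def retrieve(records, **criteria):
    """Return records matching every (dimension, concept) pair in the query."""
    return [r for r in records
            if all(r.get(dim) == concept for dim, concept in criteria.items())]

print(retrieve(questions, subject="drug administration"))        # one dimension
print(retrieve(questions, drug_class="anti-infectives",
               subject="teratogenicity"))                         # crossed dimensions
```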

Relevance:

30.00%

Publisher:

Abstract:

In this study we analyse the emerging patterns of regional collaboration on innovation projects in China, using official government statistics for 30 Chinese regions. We propose the use of ordinal multidimensional scaling and cluster analysis as a robust method for studying regional innovation systems. Our results show that regional collaborations amongst organisations can be characterised along the following dimensions: public versus private organisational mindset; public versus private resources; innovation capacity versus available infrastructures; innovation input (allocated resources) versus innovation output; knowledge production versus knowledge dissemination; and collaborative capacity versus collaboration output. Collaborations aimed at generating innovation fell into four categories: those related to highly specialised public research institutions, public universities, private firms, and governmental intervention. By comparing representative cases of regions in terms of these four innovation actors, we propose policy measures for improving regional innovation collaboration within China.
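
A sketch of the ordinal-MDS-plus-clustering pipeline on placeholder data (the regional indicators, their number, and the cluster count are assumptions):

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances

# Rows are regions, columns are innovation indicators; the values here are
# random placeholders standing in for the official statistics.
rng = np.random.default_rng(42)
indicators = rng.random((30, 8))                  # 30 regions x 8 indicators

dissimilarity = pairwise_distances(indicators)    # region-by-region distances
mds = MDS(n_components=2, metric=False,           # metric=False = ordinal MDS
          dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(coords)
print("cluster of each region:", clusters)
```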

Relevance:

30.00%

Publisher:

Abstract:

The goal of this study is the analysis of the dynamical properties of financial data series from worldwide stock market indexes during the period 2000–2009. We analyze, under a regional criterion, ten main indexes at a daily time horizon. The methods and algorithms that have been explored for the description of dynamical phenomena provide an effective background for the analysis of economic data. We start by applying the classical concepts of signal analysis, the fractional Fourier transform, and methods of fractional calculus. In a second phase we adopt the multidimensional scaling approach. Stock market indexes are examples of complex interacting systems for which a huge amount of data exists; viewed from different perspectives, these indexes lead to new classification patterns.
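
As one concrete instance of the fractional-calculus toolkit mentioned above (the paper's exact variant and the financial series used are not specified, so the signal here is synthetic), the Grünwald-Letnikov formula estimates a fractional-order derivative directly from samples:

```python
import numpy as np
from math import gamma
from scipy.special import binom

def gl_fractional_derivative(f, alpha, h):
    """Grünwald-Letnikov estimate:
    D^alpha f(t_k) ≈ h^(-alpha) * sum_{j<=k} (-1)^j * C(alpha, j) * f(t_{k-j})."""
    n = len(f)
    w = (-1.0) ** np.arange(n) * binom(alpha, np.arange(n))  # GL weights
    return np.array([np.dot(w[: k + 1], f[k::-1]) for k in range(n)]) / h**alpha

t = np.linspace(0.0, 1.0, 101)
signal = t**2                                  # test signal with a known result
d_half = gl_fractional_derivative(signal, alpha=0.5, h=t[1] - t[0])

# The exact half-derivative of t^2 is Gamma(3)/Gamma(2.5) * t^1.5.
print(f"numeric {d_half[-1]:.3f} vs exact {2.0 / gamma(2.5):.3f} at t = 1")
```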

Relevance:

30.00%

Publisher:

Abstract:

Earthquakes are associated with negative events, such as large numbers of casualties, destruction of buildings and infrastructure, or the emergence of tsunamis. In this paper, we apply multidimensional scaling (MDS) analysis to earthquake data. MDS is a set of techniques that produce spatial or geometric representations of complex objects, such that objects perceived to be similar or distinct in some sense are placed nearby or far apart on the MDS maps. The interpretation of the charts is based on the resulting clusters, since MDS produces a different locus for each similarity measure. In this study, over three million seismic occurrences, covering the period from January 1, 1904 up to March 14, 2012, are analyzed. The events, characterized by their magnitude and spatiotemporal distributions, are divided into groups, either according to the Flinn–Engdahl seismic regions of the Earth or using a rectangular grid based on latitude and longitude coordinates. Space-time and space-frequency correlation indices are proposed to quantify the similarities among events. MDS has the advantage of avoiding sensitivity to the non-uniform spatial distribution of seismic data resulting from poorly instrumented areas, and is well suited for assessing the dynamics of complex systems. MDS maps prove to be an intuitive and useful visual representation of the complex relationships among seismic events, which may not be perceived on traditional geographic maps. Therefore, MDS constitutes a valid alternative to classic visualization tools for understanding the global behavior of earthquakes.
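
The proposed correlation indices are not reproduced in the abstract; a hypothetical stand-in, building a region-by-region dissimilarity from summary statistics and embedding it with MDS, illustrates the workflow:

```python
import numpy as np
from sklearn.manifold import MDS

# Each seismic region is summarized by a feature vector; regions with similar
# space-time statistics get small dissimilarities. The features and values are
# illustrative placeholders, not the paper's correlation indices.
rng = np.random.default_rng(1)
n_regions = 50
features = np.column_stack([
    rng.uniform(4.0, 7.0, n_regions),     # mean magnitude per region
    rng.uniform(0.1, 5.0, n_regions),     # events per year
    rng.uniform(0.0, 700.0, n_regions),   # mean hypocentre depth (km)
])
z = (features - features.mean(0)) / features.std(0)   # standardize the units
D = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
print("map locus of region 0:", coords[0])   # nearby loci = similar event groups
```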

Relevance:

30.00%

Publisher:

Abstract:

Forest fire dynamics is often characterized by the absence of a characteristic length scale, long-range correlations in space and time, and long memory, features that are also associated with fractional-order systems. In this paper a public-domain forest fire catalogue, containing information on events in Portugal covering the period from 1980 up to 2012, is tackled. The events are modelled as time series of Dirac impulses with amplitude proportional to the burnt area. The time series are viewed as the system output and are interpreted as a manifestation of the system dynamics. In the first phase we use the pseudo phase plane (PPP) technique to describe forest fire dynamics; in the second phase we use multidimensional scaling (MDS) visualization tools. The PPP allows the representation of forest fire dynamics in a two-dimensional space, taking time series representative of the phenomena. The MDS approach generates maps in which objects that are perceived to be similar to each other form clusters. The results are analysed in order to extract relationships among the data and to better understand forest fire behaviour.
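
A minimal PPP sketch on a simulated impulse series (the catalogue data, delay value, and impulse model are assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

# Pseudo phase plane (PPP): plot a scalar time series x(t) against a delayed
# copy x(t + tau), reconstructing a two-dimensional portrait of the dynamics.
rng = np.random.default_rng(7)
t = np.arange(0, 2000)
x = np.where(rng.random(t.size) < 0.02,        # sparse Dirac-like impulses
             rng.pareto(1.5, t.size), 0.0)     # heavy-tailed "burnt areas"

tau = 5                                        # delay chosen for illustration
plt.scatter(x[:-tau], x[tau:], s=4)
plt.xlabel("x(t)")
plt.ylabel(f"x(t + {tau})")
plt.title("Pseudo phase plane of the impulse series")
plt.show()
```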

Relevance:

30.00%

Publisher:

Abstract:

This paper examines modern economic growth using the multidimensional scaling (MDS) method and state space portrait (SSP) analysis. Taking GDP per capita as the main indicator of economic growth and prosperity, the long-run perspective from 1870 to 2010 identifies the main similarities in the modern economic growth of 34 world partners and exemplifies the historical wave mechanics of the largest world economy, the USA. MDS reveals two main clusters, among the European countries and their old offshore territories, and SSP identifies the Great Depression as a mild challenge to American global performance when compared to the Second World War and the 2008 crisis.

Relevance:

30.00%

Publisher:

Abstract:

Modern societies depend more and more on computer systems, and there is thus increasing pressure on development teams to produce high-quality software. Many companies use quality models, suites of programs that analyze and evaluate the quality of other programs, but building quality models is difficult because several questions remain unanswered in the literature. We studied quality-modelling practices at a large company and identified three dimensions where additional research is desirable: support for the subjectivity of quality, techniques for tracking quality as software evolves, and the composition of quality across levels of abstraction. Regarding subjectivity, we proposed the use of Bayesian models because they can handle ambiguous data. We applied our models to the problem of detecting design defects. In a study of two open-source systems, we found that our approach outperforms the rule-based techniques described in the state of the art. To support software evolution, we treated the scores produced by a quality model as signals that can be analyzed with data-mining techniques to identify patterns of quality evolution, and we studied how design defects appear in and disappear from software systems. Software is typically designed as a hierarchy of components, but quality models do not take this organization into account. In the last part of the dissertation, we present a two-level quality model. Such models have three parts: a model at the component level, a model that evaluates the importance of each component, and a model that evaluates the quality of a composite by combining the quality of its components. The approach was tested on predicting change-prone classes from the quality of their methods, and we found that our two-level models identify change-prone classes better. Finally, we applied our two-level models to evaluating the navigability of web sites from the quality of their pages; our models were able to distinguish between very high-quality sites and randomly chosen sites. Throughout the dissertation we present not only theoretical problems and their solutions, but also experiments demonstrating the advantages and limitations of our solutions. Our results indicate that the state of the art can be improved in the three dimensions presented. In particular, our work on quality composition and importance modelling is the first to target this problem, and we believe our two-level models are an interesting starting point for further research.
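
A minimal sketch of the two-level composition idea, with invented scores and importance weights (the dissertation's actual models are not reproduced):

```python
from dataclasses import dataclass

@dataclass
class Method:
    quality: float      # score from the component-level (method) quality model
    importance: float   # output of the importance model (e.g. call frequency)

def class_quality(methods: list[Method]) -> float:
    """Composition model: importance-weighted mean of the method scores."""
    total = sum(m.importance for m in methods)
    return sum(m.quality * m.importance for m in methods) / total

methods = [Method(quality=0.9, importance=5.0),   # hot, well-written method
           Method(quality=0.3, importance=1.0)]   # rarely used, low quality
print(f"class-level quality: {class_quality(methods):.2f}")
```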

Relevance:

30.00%

Publisher:

Abstract:

We describe ncWMS, an implementation of the Open Geospatial Consortium's Web Map Service (WMS) specification for multidimensional gridded environmental data. ncWMS can read data in a large number of common scientific data formats – notably the NetCDF format with the Climate and Forecast conventions – then efficiently generate map imagery in thousands of different coordinate reference systems. It is designed to require minimal configuration from the system administrator and, when used in conjunction with a suitable client tool, provides end users with an interactive means for visualizing data without the need to download large files or interpret complex metadata. It is also used as a "bridging" tool providing interoperability between the environmental science community and users of geographic information systems. ncWMS implements a number of extensions to the WMS standard in order to fulfil some common scientific requirements, including the ability to generate plots representing time series and vertical sections. We discuss these extensions and their impact upon present and future interoperability. We discuss the conceptual mapping between the WMS data model and the data models used by gridded data formats, highlighting areas in which the mapping is incomplete or ambiguous. We discuss the architecture of the system and particular technical innovations of note, including the algorithms used for fast data reading and image generation. ncWMS has been widely adopted within the environmental data community and we discuss some of the ways in which the software is integrated within data infrastructures and portals.
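
As an illustration of the WMS request model being discussed (the endpoint, layer name, and dimension values below are placeholders, not ncWMS defaults), a standard WMS 1.3.0 GetMap request using the optional TIME and ELEVATION dimensions, which address the extra axes of multidimensional gridded data, can be built like this:

```python
from urllib.parse import urlencode

# Standard WMS 1.3.0 GetMap parameters; host and layer are hypothetical.
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "ocean/sea_water_temperature",   # hypothetical layer name
    "STYLES": "",
    "CRS": "EPSG:4326",
    "BBOX": "-90,-180,90,180",                 # WMS 1.3.0 axis order for EPSG:4326
    "WIDTH": 1024,
    "HEIGHT": 512,
    "FORMAT": "image/png",
    "TIME": "2010-01-01T00:00:00Z",            # WMS time dimension
    "ELEVATION": "-5.0",                       # WMS elevation dimension (depth, m)
}
print("https://example.org/ncWMS/wms?" + urlencode(params))
```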