912 results for multi-dimensional
Abstract:
Before the rise of Multidimensional Protein Identification Technology (MudPIT), protein and peptide mixtures were resolved using traditional proteomic techniques such as gel-based two-dimensional electrophoresis, which separates proteins by isoelectric point and molecular weight. This technique was tedious and limited, since the characterization of single proteins required isolation of protein gel spots, their subsequent proteolysis, and analysis by matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry.
Abstract:
Urban regeneration is increasingly a "universal issue" and a crucial factor in the new trends of urban planning. It is no longer only an area of study and research; it has become part of new urban and housing policies. Urban regeneration involves complex decisions as a consequence of the multiple dimensions of the problems, which include specific technical requirements, safety concerns, and socio-economic, environmental, aesthetic, and political impacts, among others. This multi-dimensional nature of urban regeneration projects and their large capital investments justify the development and use of state-of-the-art decision support methodologies to assist decision makers. This research focuses on the development of a multi-attribute approach for evaluating building conservation status in urban regeneration projects, thus supporting decision makers in their analysis of the problem and in the definition of strategies and priorities of intervention. The methods presented can be embedded into a Geographical Information System for visualization of the results. A real-world case study was used to test the methodology, and its results are also presented.
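The abstract does not detail the aggregation model, but a common baseline for this kind of multi-attribute evaluation is a weighted additive score over normalized attribute values. A minimal sketch; the attributes, weights, and 0-1 scales below are illustrative assumptions, not taken from the study:

```python
# Hypothetical multi-attribute scoring of building conservation status.
# Attribute names, weights, and value scales are illustrative only.

ATTRIBUTES = {          # weight of each evaluation criterion (sums to 1)
    "structural_safety": 0.35,
    "facade_condition": 0.25,
    "roof_condition": 0.20,
    "installations": 0.20,
}

def conservation_score(levels: dict) -> float:
    """Weighted additive aggregation of attribute levels in [0, 1],
    where 0 = worst observed state and 1 = fully conserved."""
    return sum(w * levels[a] for a, w in ATTRIBUTES.items())

# Example: two buildings assessed by a surveyor; lower scores would be
# flagged as higher intervention priorities.
buildings = {
    "rua_A_12": {"structural_safety": 0.4, "facade_condition": 0.6,
                 "roof_condition": 0.5, "installations": 0.7},
    "rua_B_03": {"structural_safety": 0.9, "facade_condition": 0.8,
                 "roof_condition": 0.9, "installations": 0.6},
}

for name, levels in sorted(buildings.items(),
                           key=lambda kv: conservation_score(kv[1])):
    print(f"{name}: {conservation_score(levels):.2f}")
```

The per-building scores could then be joined to building footprints in a GIS layer for the kind of map-based visualization the abstract mentions.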
Abstract:
Thesis submitted to Faculdade de Ciências e Tecnologia of Universidade Nova de Lisboa in partial fulfilment of the requirements for the degree of Master in Computer Science
Abstract:
The ICTM (Interval Categorizer Tessellation Model), the subject of this thesis, is a general model for the analysis of spaces of a geometric nature, based on tessellations, which is capable of producing a reliable categorization of the points of a given space according to multiple characteristics of those points, each characteristic corresponding to a layer of the model. For example, in the analysis of geographic terrains, a geographic region can be analyzed according to its topography, vegetation, demography, economic data, etc., each generating a different subdivision of the region. The general tessellation-based model is not restricted, however, to the analysis of two-dimensional spaces. The set of analyzed points may belong to a multidimensional space, which determines the multi-dimensional character of each layer. A procedure that projects the categorizations obtained in each layer onto a base layer leads to a more significant reliable categorization, which combines into a single classification the analyses obtained for each characteristic. This permits many interesting analyses concerning the mutual dependence of the characteristics. The dimension of the tessellation can be arbitrary or chosen according to some specific criterion established by the application; in this case, the categorization obtained can be refined either by redefining the dimension of the tessellation or by taking each resulting sub-region to be analyzed separately. The formalization in the registers can easily be recovered, at any moment of the execution, simply by indexing the elements of the matrices. The implementation of the model is naturally parallel, since the analysis is carried out essentially by local rules. Since numerical input data are usually subject to errors, the model uses interval arithmetic for automatic error control. The ICTM model also supports the extraction of facts about the regions, either qualitatively, through logical sentences, or quantitatively, through probability analysis. This work was financially supported by CNPq/CTPETRO and FAPERGS.
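The role interval arithmetic plays in controlling measurement error can be shown with a small sketch; the Interval class and the neighbor-comparison rule below are illustrative, not the ICTM formalization:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """Closed interval [lo, hi] enclosing a measured value and its error."""
    lo: float
    hi: float

    def __sub__(self, other: "Interval") -> "Interval":
        # Interval subtraction: the result encloses x - y for every
        # x in self and y in other.
        return Interval(self.lo - other.hi, self.hi - other.lo)

def categorize(cell: Interval, neighbor: Interval) -> str:
    """Categorize a terrain cell relative to its neighbor, conservatively:
    commit to 'ascending'/'descending' only when the intervals prove it."""
    diff = neighbor - cell
    if diff.lo > 0:
        return "ascending"
    if diff.hi < 0:
        return "descending"
    return "undetermined"   # measurement error hides the true relation

# Heights measured with +/- 0.5 m error, represented as intervals.
a = Interval(99.5, 100.5)   # 100.0 +/- 0.5
b = Interval(101.5, 102.5)  # 102.0 +/- 0.5
print(categorize(a, b))     # ascending: guaranteed despite the error
```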
Abstract:
This study sought to verify the influence of supply chain agents on new product development performance when the agents are analyzed jointly. The motivation for this research came from studies that called for treating supply chain integration as a multidimensional construct, encompassing the involvement of manufacturing, suppliers, and customers in new product development, and from the lack of information about the individual influence of these agents on new product development. Under these considerations, we built an analytical model based on Social Capital Theory and Absorptive Capacity, derived hypotheses from the literature review, and connected constructs such as cooperation, supplier involvement in new product development (NPD), customer involvement in NPD, manufacturing involvement in NPD, anticipation of new technologies, continuous improvement, NPD operational performance, NPD market performance, and NPD business performance. To test the hypotheses, three moderating variables were considered: environmental turbulence (low, medium, and high), industry (electronics, machinery, and transport equipment), and location (America, Europe, and Asia). The model was tested with data from the High Performance Manufacturing project, which covers 339 companies in the electronics, machinery, and transport equipment industries located in eleven countries. The hypotheses were tested by means of Confirmatory Factor Analysis (CFA), including multi-group moderation for the three moderating variables mentioned above. The main results indicated that the hypotheses related to cooperation were confirmed in medium-turbulence environments, whereas the hypotheses related to NPD performance were confirmed in low-turbulence environments and in Asian countries. Additionally, under the same conditions, suppliers, customers, and manufacturing influence new product performance differently. Supplier involvement influences operational performance directly, and market and business performance indirectly, at low levels of environmental turbulence, in the transport equipment industry, and in American and European countries. Likewise, customer involvement influenced operational performance directly, and market and business performance indirectly, at medium levels of environmental turbulence, in the machinery industry, and in Asian countries. Suppliers and customers do not influence market and business performance directly, nor operational performance indirectly. Manufacturing involvement did not influence any type of new product development performance in any of the tested scenarios.
Abstract:
This paper proposes a new multi-objective estimation of distribution algorithm (EDA) based on the joint modeling of objectives and variables. This EDA uses a multi-dimensional Bayesian network as its probabilistic model. In this way it can capture the dependencies between objectives and between variables and objectives, in addition to the dependencies between variables captured in other Bayesian network-based EDAs. This model leads to a problem decomposition that helps the proposed algorithm find better trade-off solutions to the multi-objective problem. In addition to approximating the Pareto set, the algorithm is also able to estimate the structure of the multi-objective problem. To apply the algorithm to many-objective problems, it includes four different ranking methods proposed in the literature for this purpose. The algorithm is applied to the set of walking fish group (WFG) problems, and its optimization performance is compared with that of an evolutionary algorithm and another multi-objective EDA. The experimental results show that the proposed algorithm performs significantly better on many of the problems and across different objective space dimensions, and achieves results comparable to the other algorithms on the rest.
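The overall loop of such an algorithm can be sketched as follows. For brevity, this sketch replaces the paper's multi-dimensional Bayesian network with an independent Gaussian model and uses plain Pareto-dominance selection; all names and parameters are illustrative:

```python
import numpy as np

def dominates(f, g):
    """Pareto dominance for minimization: f dominates g."""
    return bool(np.all(f <= g) and np.any(f < g))

def nondominated(F):
    """Indices of the non-dominated rows of objective matrix F."""
    return [i for i in range(len(F))
            if not any(dominates(F[j], F[i]) for j in range(len(F)) if j != i)]

def moeda(objective, dim, pop=100, sel=30, gens=40, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(pop, dim))                  # initial population
    for _ in range(gens):
        F = np.array([objective(x) for x in X])
        elite = X[nondominated(F)][:sel]              # dominance-based selection
        if len(elite) < 2:                            # fall back if front is tiny
            elite = X[np.argsort(F.sum(axis=1))[:sel]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
        X = rng.normal(mu, sigma, size=(pop, dim)).clip(0.0, 1.0)  # sample model
    F = np.array([objective(x) for x in X])
    return X[nondominated(F)]

# Toy bi-objective problem: both objectives minimized over x in [0, 1].
front = moeda(lambda x: np.array([x[0] ** 2, (x[0] - 1.0) ** 2]), dim=1)
print(f"{len(front)} non-dominated solutions approximating the Pareto set")
```

With a Bayesian-network model in place of the independent Gaussian, the learned structure would additionally expose the objective-variable dependencies the abstract refers to.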
Abstract:
Probabilistic modeling is the defining characteristic of estimation of distribution algorithms (EDAs), determining their behavior and performance in optimization. Regularization is a well-known statistical technique used for obtaining an improved model by reducing the generalization error of estimation, especially in high-dimensional problems. ℓ1-regularization is a type of this technique with the appealing variable selection property, which results in sparse model estimations. In this thesis, we study the use of regularization techniques for model learning in EDAs. Several methods for regularized model estimation in continuous domains based on a Gaussian distribution assumption are presented, and analyzed from different aspects when used for optimization in a high-dimensional setting, where the population size of the EDA scales logarithmically with the number of variables. The optimization results obtained for a number of continuous problems with an increasing number of variables show that the proposed EDA based on regularized model estimation performs a more robust optimization, and is able to achieve significantly better results for larger dimensions than other Gaussian-based EDAs. We also propose a method for learning a marginally factorized Gaussian Markov random field model using regularization techniques and a clustering algorithm. The experimental results show notable optimization performance on continuous additively decomposable problems when using this model estimation method. Our study also covers multi-objective optimization, and we propose joint probabilistic modeling of variables and objectives in EDAs based on Bayesian networks, specifically models inspired by multi-dimensional Bayesian network classifiers. It is shown that with this approach to modeling, two new types of relationships are encoded in the estimated models in addition to the variable relationships captured in other EDAs: objective-variable and objective-objective relationships. An extensive experimental study shows the effectiveness of this approach for multi- and many-objective optimization. With the proposed joint variable-objective modeling, in addition to the Pareto set approximation, the algorithm is also able to obtain an estimation of the multi-objective problem structure. Finally, the study of multi-objective optimization based on joint probabilistic modeling is extended to noisy domains, where the noise in objective values is represented by intervals. A new version of the Pareto dominance relation for ordering the solutions in these problems, namely α-degree Pareto dominance, is introduced and its properties are analyzed. We show that ranking methods based on this dominance relation can result in competitive performance of EDAs with respect to the quality of the approximated Pareto sets. This dominance relation is then used together with a method for joint probabilistic modeling based on ℓ1-regularization for multi-objective feature subset selection in classification, where six different measures of accuracy are considered as objectives with interval values. The individual assessment of the proposed joint probabilistic modeling and solution ranking methods on datasets of small to medium dimensionality, using two different Bayesian classifiers, shows that comparable or better Pareto sets of feature subsets are approximated in comparison with standard methods.
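The ℓ1-regularized Gaussian model estimation step can be illustrated with scikit-learn's graphical lasso, which fits a sparse precision matrix to the selected population of one EDA generation; a minimal sketch of the idea, not the thesis' implementation, with arbitrary dimensions and penalty:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Stand-in for the selected individuals of one EDA generation:
# few samples (logarithmic population size) relative to the dimension.
n_vars, n_sel = 30, 25
selected = rng.normal(size=(n_sel, n_vars))

# L1-penalized maximum-likelihood estimate of the inverse covariance;
# the penalty drives many off-diagonal entries (dependencies) to zero,
# which is what makes estimation feasible when n_sel < n_vars.
model = GraphicalLasso(alpha=0.4).fit(selected)
precision = model.precision_

nonzero = np.count_nonzero(np.abs(precision) > 1e-8) - n_vars
print(f"non-zero off-diagonal entries: {nonzero} of {n_vars * (n_vars - 1)}")

# New candidate solutions are then sampled from the regularized model.
offspring = rng.multivariate_normal(model.location_, model.covariance_,
                                    size=100)
print(offspring.shape)
```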
Abstract:
This master's thesis presents methods for the intelligent analysis and visualization of 3D ECG, with the aim of increasing the efficiency of ECG analysis by extracting additional data. Visualization is treated as part of the signal-analysis task; imaging techniques and their mathematical description are considered. Algorithms for computing and visualizing signal attributes were developed and are described using mathematical methods and signal-mining tools. A pattern-search model was constructed to compare the accuracy of the methods, clustering and classification problems were solved, and a data visualization program was developed. This approach gives the highest accuracy in the intelligent-analysis task, as confirmed in this work. The visualization and analysis techniques considered are also applicable to multi-dimensional signals of other kinds.
Abstract:
The notorious "dimensionality curse" is a well-known phenomenon for any multi-dimensional index attempting to scale up to high dimensions. One well-known approach to overcoming the degradation in performance with increasing dimensionality is to reduce the dimensionality of the original dataset before constructing the index. However, identifying the correlations among the dimensions and effectively reducing them are challenging tasks. In this paper, we present an adaptive Multi-level Mahalanobis-based Dimensionality Reduction (MMDR) technique for high-dimensional indexing. Our MMDR technique has four notable features compared to existing methods. First, it discovers elliptical clusters for more effective dimensionality reduction, using only the low-dimensional subspaces. Second, data points in the different axis systems are indexed using a single B+-tree. Third, our technique is highly scalable in terms of data size and dimensionality. Finally, it is also dynamic and adaptive to insertions. An extensive performance study was conducted using both real and synthetic datasets; the results show that our technique not only achieves higher precision, but also enables queries to be processed efficiently. Copyright Springer-Verlag 2005
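The general idea behind cluster-wise dimensionality reduction for indexing, finding elliptical clusters and reducing each one in its own axis system, can be sketched as follows. This is a simplified illustration using k-means and per-cluster PCA, not the MMDR algorithm itself:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 32))          # stand-in high-dimensional dataset

# 1. Partition the data; each cluster approximates one elliptical region.
labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(X)

# 2. Reduce each cluster in its own axis system (principal components),
#    keeping only a few dimensions per cluster.
reduced, pcas = {}, {}
for c in range(4):
    pca = PCA(n_components=4).fit(X[labels == c])
    pcas[c] = pca
    reduced[c] = pca.transform(X[labels == c])   # low-dim points to index

# 3. A query is mapped into each cluster's axis system before searching
#    the low-dimensional index (a single B+-tree in the paper).
q = rng.normal(size=(1, 32))
q_low = {c: pcas[c].transform(q) for c in pcas}
print({c: v.shape for c, v in q_low.items()})
```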
Abstract:
Marusia N. Slavchova-Bozhkova - In the present work, a limit theorem for a subcritical multi-dimensional branching process, dependent on the age of the particles and with two types of immigration, is generalized. The aim is to generalize an analogous result from the one-dimensional case by applying the coupling method, renewal theory, and regenerative processes.
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2014
Abstract:
The research described in this thesis was motivated by the need for a robust model capable of representing 3D data obtained with 3D sensors, which are inherently noisy. In addition, time constraints have to be considered, as these sensors are capable of providing a 3D data stream in real time. This thesis proposes the use of Self-Organizing Maps (SOMs) as a 3D representation model. In particular, we propose the use of the Growing Neural Gas (GNG) network, which has been successfully used for clustering, pattern recognition, and topology representation of multi-dimensional data. Until now, Self-Organizing Maps have primarily been computed offline, and their application to 3D data has mainly focused on noise-free models, without considering time constraints. A hardware implementation is proposed that leverages the computing power of modern GPUs, taking advantage of a new paradigm coined General-Purpose Computing on Graphics Processing Units (GPGPU). The proposed methods were applied to different problems and applications in the area of computer vision, such as object recognition and localization, visual surveillance, and 3D reconstruction.
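The adaptation principle behind GNG-style networks, moving reference vectors toward each input by an amount that decays with their distance rank, can be illustrated with a plain neural-gas update on a synthetic 3D point cloud (GNG's edge insertion and node growth are omitted; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy 3D point cloud clustered at cube corners (stand-in for one
# frame of sensor data).
cloud = (rng.uniform(size=(2000, 3)).round()
         + rng.normal(scale=0.05, size=(2000, 3)))

n_units, n_steps = 50, 5000
W = rng.uniform(size=(n_units, 3))           # reference vectors (the "map")

for t in range(n_steps):
    x = cloud[rng.integers(len(cloud))]       # draw one input point
    ranks = np.argsort(np.linalg.norm(W - x, axis=1)).argsort()
    eps = 0.5 * (0.01 / 0.5) ** (t / n_steps)     # decaying step size
    lam = 10.0 * (0.1 / 10.0) ** (t / n_steps)    # decaying neighborhood
    W += eps * np.exp(-ranks / lam)[:, None] * (x - W)  # rank-based update

print("learned 3D representation:", W.shape)
```

The inner update is independent per unit, which is what makes this family of algorithms amenable to the GPU parallelization the thesis pursues.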
Abstract:
Over the last decade, the combined use of chemometrics and molecular spectroscopic techniques has become a new alternative for direct drug determination, without the need for physical separation. Among the new methodologies developed, the application of PARAFAC to the decomposition of spectrofluorimetric data should be highlighted. The first objective of this article is to describe the theoretical basis of PARAFAC. For this purpose, a discussion of the order of the chemometric methods used in multivariate calibration and of the development of multi-dimensional methods is presented first. The other objective of this article is to present to the Brazilian chemical community the potential of the PARAFAC/spectrofluorimetry combination for the determination of drugs in complex biological matrices. To this end, two applications aimed at determining doxorubicin and salicylate, respectively, in human plasma are presented.
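PARAFAC decomposes a three-way data array (e.g., samples x excitation x emission) into trilinear factors. A minimal sketch on synthetic data, assuming the tensorly library; the rank and array sizes are illustrative:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)

# Synthetic trilinear data: 10 samples x 20 excitation x 30 emission
# wavelengths, built from 2 "analytes" (rank 2) plus noise.
A = rng.uniform(size=(10, 2))   # relative concentrations
B = rng.uniform(size=(20, 2))   # excitation profiles
C = rng.uniform(size=(30, 2))   # emission profiles
X = (tl.cp_to_tensor((None, [A, B, C]))
     + rng.normal(scale=0.01, size=(10, 20, 30)))

# PARAFAC recovers the factor matrices (up to scaling and permutation);
# the sample-mode factors play the role of relative concentrations.
weights, factors = parafac(tl.tensor(X), rank=2, n_iter_max=200)
print([f.shape for f in factors])   # [(10, 2), (20, 2), (30, 2)]
```

The uniqueness of the trilinear decomposition is what allows quantification of an analyte in a complex matrix without physically separating the interferents.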
Abstract:
This paper describes U2DE, a finite-volume code that numerically solves the Euler equations. The code was used to perform multi-dimensional simulations of the gradual opening of a primary diaphragm in a shock tube. From the simulations, the speed of the developing shock wave was recorded and compared with other estimates. The ability of U2DE to compute shock speed was confirmed by comparing numerical results with the analytic solution for an ideal shock tube. For high initial pressure ratios across the diaphragm, previous experiments have shown that the measured shock speed can exceed the shock speed predicted by one-dimensional models. The shock speeds computed with the present multi-dimensional simulation were higher than those estimated by previous one-dimensional models and, thus, were closer to the experimental measurements. This indicates that multi-dimensional flow effects were partly responsible for the relatively high shock speeds measured in the experiments.
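For reference, the finite-volume approach to the Euler equations can be illustrated in one dimension with the classic Sod shock-tube problem and a simple Rusanov flux; this is a textbook sketch, far simpler than U2DE's multi-dimensional scheme:

```python
import numpy as np

GAMMA = 1.4

def flux(U):
    """Physical flux of the 1D Euler equations; U rows = [rho, rho*u, E]."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def rusanov(UL, UR):
    """Rusanov (local Lax-Friedrichs) numerical flux between two states."""
    def max_speed(U):
        rho, mom, E = U
        u = mom / rho
        p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
        return np.abs(u) + np.sqrt(GAMMA * p / rho)
    s = np.maximum(max_speed(UL), max_speed(UR))
    return 0.5 * (flux(UL) + flux(UR)) - 0.5 * s * (UR - UL)

# Sod problem: high pressure and density left of the diaphragm at x = 0.5.
n = 400
x = (np.arange(n) + 0.5) / n
rho = np.where(x < 0.5, 1.0, 0.125)
p = np.where(x < 0.5, 1.0, 0.1)
U = np.array([rho, np.zeros(n), p / (GAMMA - 1.0)])   # u = 0 initially

t, t_end, dx = 0.0, 0.2, 1.0 / n
while t < t_end:
    a = (np.abs(U[1] / U[0])
         + np.sqrt(GAMMA * (GAMMA - 1.0)
                   * (U[2] - 0.5 * U[1]**2 / U[0]) / U[0]))
    dt = min(0.4 * dx / a.max(), t_end - t)            # CFL time step
    F = rusanov(U[:, :-1], U[:, 1:])                   # interface fluxes
    U[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])     # conservative update
    t += dt

# The right-moving shock sits near the strongest density gradient in the
# right half of the tube (the shock stays sharper than the smeared contact).
j = n // 2 + np.argmax(np.abs(np.diff(U[0, n // 2:])))
print(f"shock front near x = {x[j]:.3f} at t = {t:.2f}")
```

Tracking the position of this front over successive time steps yields the shock-speed estimate that the paper compares against one-dimensional models and experiments; an instantaneously removed diaphragm, as here, corresponds to the ideal shock-tube limit.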