850 results for High-dimensional data visualization
Abstract:
Gabor features have been recognized as one of the most successful face representations. Encouraged by the results of this approach, other kinds of facial representations, based on steerable Gaussian first-order kernels and the Harris corner detector, are proposed in this paper. To reduce the high-dimensional feature space, PCA and LDA techniques are employed. Once the features have been extracted, the AdaBoost learning algorithm is used to select and combine the most representative features. The experimental results on the XM2VTS database show an encouraging recognition rate, with an important improvement over face descriptors based only on Gabor filters.
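A minimal sketch of the kind of pipeline described above: unsupervised PCA compression, a supervised LDA projection, and AdaBoost to select and combine features. The data, labels and parameter values are illustrative stand-ins, and scikit-learn replaces the paper's own implementation:

```python
# Illustrative sketch (not the paper's code): PCA + LDA reduce a high-dimensional
# filter-response space, then AdaBoost selects/combines discriminative features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 1024))   # stand-in for Gabor/steerable/Harris responses
y = rng.integers(0, 5, size=300)   # hypothetical subject identities

model = make_pipeline(
    PCA(n_components=50),                        # unsupervised compression
    LinearDiscriminantAnalysis(n_components=4),  # supervised projection (<= classes - 1)
    AdaBoostClassifier(n_estimators=100),        # boosted selection/combination
)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```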
Abstract:
In this paper, we introduce an efficient method for particle selection when tracking objects in complex scenes. First, we improve the proposal distribution of the tracking algorithm by including the current observation, reducing the cost of evaluating particles with very low likelihood. In addition, we use a partitioned sampling approach to decompose the dynamic state into several stages, which makes it possible to handle high-dimensional states without excessive computational cost. To represent the color distribution, the appearance of the tracked object is modelled by sampled pixels. Based on this representation, the probability of any observation is estimated using non-parametric techniques in color space. As a result, we obtain a Probability color Density Image (PDI) in which each pixel indicates its membership of the target color model. In this way, the evaluation of all particles is accelerated by computing the likelihood p(z|x) using the Integral Image of the PDI.
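For illustration, the integral-image trick that makes the particle evaluation cheap can be sketched as follows; the PDI here is random noise standing in for the real probability map, and the box coordinates are arbitrary:

```python
# Sketch of the integral-image trick for fast particle evaluation: summing a
# probability (color) density image over any axis-aligned box costs O(1).
import numpy as np

def integral_image(pdi):
    """Cumulative 2-D sum with a zero row/column prepended."""
    ii = np.zeros((pdi.shape[0] + 1, pdi.shape[1] + 1))
    ii[1:, 1:] = pdi.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum of the PDI over rows [top, bottom) and columns [left, right)."""
    return ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left]

# Hypothetical usage: a particle's likelihood taken as proportional to the
# average target-color probability inside its bounding box.
pdi = np.random.rand(240, 320)            # stand-in for the PDI
ii = integral_image(pdi)
area = (150 - 100) * (130 - 80)
likelihood = box_sum(ii, 100, 80, 150, 130) / area
print(likelihood)
```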
Abstract:
When examining complex problems, such as the folding of proteins, coarse-grained descriptions of the system drive our investigation and help us to rationalize the results. Oftentimes collective variables (CVs), derived through some chemical intuition about the process of interest, serve this purpose. Because finding these CVs is the most difficult part of any investigation, we recently developed a dimensionality reduction algorithm, sketch-map, that can be used to build a low-dimensional map of a high-dimensional phase space. In this paper we discuss how these machine-generated CVs can be used to accelerate the exploration of phase space and to reconstruct free-energy landscapes. To do so, we develop a formalism in which high-dimensional configurations are no longer represented by low-dimensional position vectors. Instead, for each configuration we calculate a probability distribution, which has a domain that encompasses the entirety of the low-dimensional space. To construct a biasing potential, we exploit an analogy with metadynamics and use the trajectory to adaptively construct a repulsive, history-dependent bias from the distributions that correspond to the previously visited configurations. This potential forces the system to explore more of phase space by making it desirable to adopt configurations whose distributions do not overlap with the bias. We apply this algorithm to a small model protein and succeed in reproducing the free-energy surface that we obtain from a parallel tempering calculation.
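A toy one-dimensional analogue of such a history-dependent bias (ordinary metadynamics-style Gaussians rather than the full sketch-map distributions described above; all values are illustrative):

```python
# Toy 1-D illustration of a history-dependent repulsive bias: each previously
# visited configuration deposits a Gaussian, and the bias felt at a new point
# grows with its overlap with everything deposited so far.
import numpy as np

def bias(cv, visited, height=1.0, width=0.2):
    """Metadynamics-like bias: sum of Gaussians centred on visited CV values."""
    visited = np.asarray(visited)
    return height * np.exp(-0.5 * ((cv - visited) / width) ** 2).sum()

history = [0.0, 0.05, 0.4]     # CV values already explored
print(bias(0.02, history))     # large: overlaps the history, so disfavoured
print(bias(1.50, history))     # ~0: unexplored region, so favourable
```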
Abstract:
Manipulator motion planning is a task that relies heavily on the construction of a configuration space prior to path planning. However, when fast real-time motion is needed, the full construction of the manipulator's high-dimensional configuration space can be too slow and expensive. Alternative planning methods, which avoid this full construction of the manipulator's configuration space, are needed to solve this problem. Here, an existing local planning method for manipulators based on configuration sampling and subgoal selection has been extended. Using a modified Artificial Potential Fields (APF) function, goal-configuration sampling and a novel subgoal selection method, it provides faster, more optimal paths than the previously proposed work. Simulation results show a decrease in both runtime and path length, along with fewer unexpected local-minimum and crashing issues.
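For reference, a bare-bones attractive/repulsive APF gradient descent in a two-dimensional configuration space might look like the following; the paper's modified APF, goal-configuration sampling and subgoal selection are not reproduced, and all gains and positions are placeholders:

```python
# Minimal APF sketch: quadratic attraction to the goal plus a repulsive term
# that activates only inside an influence radius rho0 around each obstacle.
import numpy as np

def apf_gradient(q, q_goal, obstacles, k_att=1.0, k_rep=0.5, rho0=1.0):
    grad = k_att * (q - q_goal)                       # attractive component
    for q_obs in obstacles:
        d = np.linalg.norm(q - q_obs)
        if 0 < d < rho0:                              # repulsion only when close
            grad += k_rep * (1.0 / rho0 - 1.0 / d) / d**3 * (q - q_obs)
    return grad

# Hypothetical 2-DOF configuration-space descent
q = np.array([0.0, 0.0])
goal = np.array([2.0, 1.0])
obstacles = [np.array([1.0, 0.8])]
for _ in range(500):
    q = q - 0.01 * apf_gradient(q, goal, obstacles)
print(q)   # final configuration (near the goal unless trapped in a local minimum)
```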
Abstract:
Within the last few years, the field of personalized medicine has entered the stage. Accompanied by great hopes and expectations, it is believed that this field may have the potential to revolutionize medical and clinical care by utilizing genomic information about the individual patients themselves. In this paper, we reconstruct the early footprints of personalized medicine as reflected by information retrieved from PubMed and Google Scholar. That is, we provide a data-driven perspective on this field to estimate its current status and potential problems.
Abstract:
The speeds of sound in dibromomethane, bromochloromethane, and dichloromethane have been measured in the temperature range from 293.15 to 313.15 K and at pressures up to 100 MPa. Densities and isobaric heat capacities at atmospheric pressure have also been determined. The experimental results were used to calculate densities and isobaric heat capacities as functions of temperature and pressure by means of a numerical integration technique. Moreover, the experimental data at atmospheric pressure were then used to determine the SAFT-VR Mie molecular parameters for these liquids. The accuracy of the model was then evaluated by comparing the derived experimental high-pressure data with those predicted by SAFT. It was found that the model also makes it possible to predict the isobaric heat capacity of all the selected haloalkanes within an error of up to 6%.
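The numerical integration mentioned above is usually based on the standard acoustic relations linking the speed of sound u to density and isobaric heat capacity (quoted here for reference; the paper's exact working equations may differ):

```latex
\left(\frac{\partial \rho}{\partial p}\right)_{T} = \frac{1}{u^{2}} + \frac{T\,\alpha_p^{2}}{c_p},
\qquad
\left(\frac{\partial c_p}{\partial p}\right)_{T} = -\frac{T}{\rho}\left[\alpha_p^{2} + \left(\frac{\partial \alpha_p}{\partial T}\right)_{p}\right],
```

where α_p = -(1/ρ)(∂ρ/∂T)_p is the isobaric thermal expansivity and the integration is started from the measured atmospheric-pressure densities and heat capacities.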
Abstract:
Inter-dealer trading in US Treasury securities is almost equally divided between two electronic trading platforms that have only slight differences in terms of their relative liquidity and transparency. BrokerTec is more active in the trading of 2-, 5-, and 10-year T-notes, while eSpeed has more active trading in the 30-year bond. Over the period studied, eSpeed provides a more pre-trade transparent platform than BrokerTec. We examine the contribution of activity on the two platforms to price discovery using high-frequency data. We find that price discovery does not derive equally from the two platforms and that the shares vary across term to maturity. This can be traced to the differential trading activity and transparency of the two platforms.
Abstract:
The introduction of the Tesla in 2008 demonstrated to the public the potential of electric vehicles for reducing fuel consumption and greenhouse-gas emissions from the transport sector. It brought electric vehicles back into the spotlight worldwide at a moment when fossil fuel prices were reaching unexpected highs due to increased demand and strong economic growth. The energy storage capabilities of fleets of electric vehicles, as well as their potentially random discharging and charging, pose challenges to the grid in terms of operation and control. Optimal scheduling strategies are key to integrating large numbers of electric vehicles into the smart grid. In this paper, state-of-the-art optimization methods for scheduling strategies for the grid integration of electric vehicles are reviewed. The paper starts with a concise introduction to analytical charging strategies, followed by a review of a number of classical numerical optimization methods, including linear programming, non-linear programming and dynamic programming, as well as other approaches such as queuing theory. Meta-heuristic techniques are then discussed as a way to deal with the complex, high-dimensional and multi-objective scheduling problem associated with the stochastic charging and discharging of electric vehicles. Finally, future research directions are suggested.
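As a concrete, if toy, example of the classical methods reviewed above, a valley-filling charging schedule can be posed as a linear program; the formulation, loads, prices and limits below are illustrative and are not taken from the reviewed works:

```python
# Toy valley-filling LP: schedule each EV's charging power over 24 hourly slots
# to minimise a cost that follows the background load, subject to meeting each
# EV's energy demand and the charger power limit (illustrative only).
import numpy as np
from scipy.optimize import linprog

hours, n_ev = 24, 3
base_load = 2.0 + np.sin(np.linspace(0, 2 * np.pi, hours))  # kW background demand
price = base_load / base_load.max()                         # cost tracks the base load
demand = np.array([10.0, 7.0, 12.0])                        # kWh required per EV
p_max = 3.3                                                 # kW charger limit

# Decision variables: x[i, t] = charging power of EV i in hour t, flattened.
c = np.tile(price, n_ev)                                    # minimise cost-weighted energy
A_eq = np.zeros((n_ev, n_ev * hours))
for i in range(n_ev):
    A_eq[i, i * hours:(i + 1) * hours] = 1.0                # power summed over 1 h slots = demand

res = linprog(c, A_eq=A_eq, b_eq=demand, bounds=[(0, p_max)] * (n_ev * hours))
schedule = res.x.reshape(n_ev, hours)
print(schedule.round(2))   # charging concentrated in the cheapest (off-peak) hours
```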
Abstract:
The properties of Ellerman bombs (EBs), small-scale brightenings in the Hα line wings, have proved difficult to establish because their size is close to the spatial resolution of even the most advanced telescopes. Here, we aim to infer the size and lifetime of EBs using high-resolution data of an emerging active region collected using the Interferometric BIdimensional Spectrometer (IBIS) and Rapid Oscillations of the Solar Atmosphere (ROSA) instruments as well as the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO). We develop an algorithm to track EBs through their evolution, finding that EBs can often be much smaller (around 0.3″) and shorter-lived (less than one minute) than previous estimates. A correlation between G-band magnetic bright points and EBs is also found. Combining SDO/HMI and G-band data gives a good proxy for the polarity of the vertical magnetic field. It is found that EBs often occur both over regions of opposite-polarity flux and over strong unipolar fields, possibly hinting at magnetic reconnection as a driver of these events. The energetics of EB events is found to follow a power-law distribution in the nanoflare range (10²²-10²⁵ ergs).
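A bare-bones version of the kind of detection-and-tracking step described above (threshold each frame, label contiguous bright kernels, link them between frames by nearest centroid); the data, threshold and matching radius are placeholders rather than the authors' pipeline:

```python
# Bare-bones tracking sketch: label bright kernels in each frame and link them
# across frames by nearest-centroid matching (placeholder thresholds/sizes).
import numpy as np
from scipy import ndimage

def detect(frame, threshold):
    """Return centroids of contiguous regions brighter than `threshold`."""
    labels, n = ndimage.label(frame > threshold)
    return np.array(ndimage.center_of_mass(frame, labels, range(1, n + 1)))

def link(prev, curr, max_shift=3.0):
    """Match current detections to previous ones within `max_shift` pixels."""
    pairs = []
    for j, c in enumerate(curr):
        d = np.linalg.norm(prev - c, axis=1)
        if d.size and d.min() < max_shift:
            pairs.append((int(d.argmin()), j))
    return pairs

frames = np.random.rand(2, 128, 128)       # stand-in for Halpha wing images
prev, curr = detect(frames[0], 0.995), detect(frames[1], 0.995)
print(link(prev, curr))                    # (previous index, current index) pairs
```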
Abstract:
Rapid in situ diagnosis of damage is a key issue in the preservation of stone-built cultural heritage. This is evident in the increasing number of congresses, workshops and publications dealing with this issue. With this increased activity has come, however, the realisation that for many culturally significant artefacts it is not possible either to remove samples for analysis or to affix surface markers for measurement. It is for this reason that there has been a growth of interest in non-destructive and minimally invasive techniques for characterising internal and external stone condition. With this interest has come the realisation that no single technique can adequately encompass the wide variety of parameters to be assessed or provide the range of information required to identify appropriate conservation. In this paper we describe a strategy to address these problems through the development of an integrated 'tool kit' of measurement and analytical techniques aimed specifically at linking object-specific research to appropriate intervention. The strategy is based initially upon the acquisition of accurate three-dimensional models of stone-built heritage at different scales using a combination of millimetre-accurate LiDAR and sub-millimetre-accurate Object Scanning that can be exported into a GIS or directly into CAD. These are currently used to overlay information on stone characteristics obtained through a combination of Ground Penetrating Radar, Surface Permeametry, Colorimetry and X-ray Fluorescence, but the possibility exists for adding to this array of techniques as appropriate. In addition to the integrated three-dimensional data array provided by superimposition upon Digital Terrain Models, there is the capability of accurate re-measurement to show patterns of surface loss and changes in material condition over time. Thus it is possible both to record and baseline condition and to identify areas that require either preventive maintenance or more significant pre-emptive intervention. In pursuit of these goals the authors are developing, through a UK Government-supported collaboration between university researchers and conservation architects, commercially viable protocols for damage diagnosis, condition monitoring and, eventually, mechanisms for prioritizing repairs to stone-built heritage. The understanding is, however, that such strategies are not age-constrained and can ultimately be applied to structures of any age.
Abstract:
We explore the challenges posed by the violation of Bell-like inequalities by d-dimensional systems exposed to imperfect state preparation and measurement settings. We address, in particular, the limit of high-dimensional systems, which arises naturally when exploring the quantum-to-classical transition. We show that, although suitable Bell inequalities can in principle be violated for any dimension of the given subsystems, it is in practice increasingly challenging to detect such violations, even if the system is prepared in a maximally entangled state. We characterize the effects of random perturbations on the state or on the measurement settings, also quantifying the effort needed to certify the possible violations in the case of complete ignorance of the system state at hand.
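For concreteness, imperfect preparation of a maximally entangled pair of d-level systems is often benchmarked with an isotropic-noise (visibility) model; this is a standard convention and not necessarily the exact perturbation model analysed above:

```latex
|\Phi_d\rangle = \frac{1}{\sqrt{d}}\sum_{j=0}^{d-1}|jj\rangle,
\qquad
\rho(V) = V\,|\Phi_d\rangle\langle\Phi_d| + (1-V)\,\frac{I}{d^{2}},
```

where I is the identity on the two-qudit space and the visibility V ∈ [0, 1] measures how much white noise the state can tolerate before a given Bell-type inequality ceases to be violated.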
Abstract:
We present a homological characterisation of those chain complexes of modules over a Laurent polynomial ring in several indeterminates which are finitely dominated over the ground ring (that is, are a retract up to homotopy of a bounded complex of finitely generated free modules). The main tools, which we develop in the paper, are a non-standard totalisation construction for multi-complexes based on truncated products, and a high-dimensional mapping torus construction employing a theory of cubical diagrams that commute up to specified coherent homotopies.
Abstract:
One of the greatest scientific advances of the twentieth century was the development of technology that allows large-scale genome sequencing. However, the information produced by sequencing does not, by itself, explain a genome's primary structure, evolution and functioning. To that end, new areas such as molecular biology, genetics and bioinformatics are used to study the various properties and workings of genomes. In this work we are particularly interested in understanding in detail the decoding of the genome carried out in the ribosome and in extracting general rules through the analysis of the genome's primary structure, namely codon context and codon distribution. These rules are little studied and poorly understood, and it is not known whether they can be obtained through statistics and bioinformatics tools. Traditional methods for studying the distribution of codons in the genome and their context do not provide the tools needed to study these properties at the genomic scale. Count tables with codon distributions, as well as absolute metrics, are currently available in databases. Several applications for characterizing genetic sequences are also available. However, other kinds of statistical approaches and other information-visualization methods were clearly missing. In the present work, mathematical and computational methods were developed for the analysis of codon context and for identifying regions where codon repetitions occur. New forms of information visualization were also developed to allow interpretation of the information obtained. The statistical tools included in the model, such as clustering, residual analysis and codon adaptation indices, proved important for characterizing the coding sequences of some genomes. The ultimate goal is for the information obtained to make it possible to identify the general rules that govern codon context in any genome.
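A minimal sketch of the kind of codon-context statistic discussed above, counting ordered in-frame codon pairs along a coding sequence (the toy sequence and any downstream clustering or residual analysis are illustrative):

```python
# Minimal codon-context sketch: count ordered pairs (codon i, codon i+1)
# along a coding sequence read in frame.
from collections import Counter

def codon_pairs(cds):
    """Return counts of adjacent in-frame codon pairs in a coding sequence."""
    codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    return Counter(zip(codons, codons[1:]))

cds = "ATGGCTGCTAAAGCTTAA"          # toy ORF, not a real gene
print(codon_pairs(cds))
```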
Abstract:
In recent years, the number of traffic accident victims per million inhabitants in Portugal has been higher than the European Union average. At the national level, a better understanding of accident data, and of the effect of the vehicle on accident severity, is urgently needed. The main objective of this research was to develop models for predicting accident severity, both for the case of a single vehicle involved and for the case of a collision involving two vehicles. In addition, this research comprised the development of an integrated analysis to evaluate vehicle performance in terms of safety, energy efficiency and pollutant emissions. Accident data were collected from the Portuguese National Republican Guard (Guarda Nacional Republicana) for the Porto metropolitan area for the period 2006-2010. A total of 1,374 accidents were collected: 500 accidents involving a single vehicle and 874 collisions. For the safety analysis, logistic regression models were used. For single-vehicle accidents, the effect of vehicle characteristics on the risk of serious injuries and/or fatalities (a binary response variable) was explored. For two-vehicle collisions, two additional binary variables were created: one to predict the probability of serious injuries and/or fatalities in one of the vehicles (designated vehicle V1) and another to predict the probability of serious injuries and/or fatalities in the other vehicle involved (designated vehicle V2). To overcome the challenges and limitations related to sample size and the imbalance between the cases analysed (only 5.1% serious accidents), a methodology based on a resampling strategy was developed, and 10 randomly generated, stratified samples were used for model validation. During the modelling phase, the effect of vehicle characteristics such as weight, engine displacement, wheelbase and vehicle age was analysed. For the analysis of fuel consumption and emissions, the CORINAIR methodology was applied. The emissions data were subsequently fitted with linear regressions. Finally, an integrated-analysis indicator (named "SEG") was developed, providing a classification method to evaluate vehicle performance in terms of road safety, fuel consumption and pollutant emissions. Regarding the results obtained, for single-vehicle accidents, the severity-risk prediction model identified vehicle age and engine displacement as statistically significant for predicting the occurrence of serious injuries and/or fatalities, at the 5% significance level. The accuracy of the model was 58.0% (standard deviation (S.D.) 3.1). For two-vehicle collisions, when predicting the probability of serious injuries and/or fatalities in vehicle V1, the engine displacement of the opposing vehicle (vehicle V2) increased the risk for the occupants of vehicle V1, at the 10% significance level. The model for predicting the severity risk in vehicle V1 showed good performance, with an accuracy of 61.2% (S.D. 2.4). When predicting the probability of serious injuries and/or fatalities in vehicle V2, the engine displacement of vehicle V1 increased the risk for the occupants of vehicle V2, at the 5% significance level. The model for predicting the severity risk in vehicle V2 also showed satisfactory performance, with an accuracy of 40.5% (S.D. 2.1).
The results of the integrated SEG indicator revealed that more recent vehicles obtain a better classification in all three domains: safety, fuel consumption and emissions. This research demonstrates that there is no conflict between the safety component, energy efficiency and emissions with respect to vehicle performance.
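A minimal sketch of the modelling and validation scheme described above: logistic regression of severity on vehicle characteristics, evaluated over 10 stratified resamples to cope with the rare severe-outcome class. The data are synthetic and the variable names are illustrative; scikit-learn stands in for the original statistical software:

```python
# Sketch: logistic regression of injury severity on vehicle characteristics,
# validated over repeated stratified splits to handle the rare-event class.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 1374
X = np.column_stack([
    rng.normal(1200, 300, n),   # vehicle weight (kg)
    rng.normal(1600, 400, n),   # engine displacement (cc)
    rng.integers(0, 25, n),     # vehicle age (years)
])
y = rng.random(n) < 0.051       # ~5.1% severe outcomes, as in the data set

scores = []
splitter = StratifiedShuffleSplit(n_splits=10, test_size=0.3, random_state=0)
for train, test in splitter.split(X, y):
    model = make_pipeline(StandardScaler(),
                          LogisticRegression(class_weight="balanced"))
    model.fit(X[train], y[train])
    scores.append(model.score(X[test], y[test]))
print(np.mean(scores), np.std(scores))   # mean accuracy and its spread
```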
Abstract:
The ever-growing energy consumption in mobile networks, stimulated by the expected growth in data traffic, has provided the impetus for mobile operators to refocus network design, planning and deployment towards reducing the cost per bit, whilst at the same time providing a significant step towards reducing their operational expenditure. As a step towards a cost-effective mobile system, 3GPP LTE-Advanced has adopted the coordinated multi-point (CoMP) transmission technique due to its ability to mitigate and manage inter-cell interference (ICI). Using CoMP, the cell-average and cell-edge throughput are boosted. However, there is room for reducing energy consumption further by exploiting the inherent flexibility of dynamic resource allocation protocols. To this end, the packet scheduler plays the central role in determining the overall performance of 3GPP long-term evolution (LTE), which is based on packet-switched operation, and provides a potential research playground for optimizing energy consumption in future networks. In this thesis we investigate the baseline performance for downlink CoMP using traditional scheduling approaches, and subsequently go beyond this and propose novel energy-efficient scheduling (EES) strategies that can achieve power-efficient transmission to the UEs whilst enabling both system energy-efficiency gains and fairness improvement. However, ICI can still be prominent when multiple nodes use common resources with different power levels inside the cell, as in the so-called heterogeneous network (HetNet) environment. HetNets are comprised of two or more tiers of cells. The first, or higher, tier is a traditional deployment of cell sites, often referred to in this context as macrocells. The lower tiers are termed small cells, and can appear as microcells, picocells or femtocells. The HetNet has attracted significant interest from key manufacturers as one of the enablers for high-speed data at low cost. Research until now has revealed several key hurdles that must be overcome before HetNets can achieve their full potential: bottlenecks in the backhaul must be alleviated, as well as their seamless interworking with CoMP. In this thesis we explore exactly the latter hurdle, and present innovative ideas on advancing CoMP to work in synergy with HetNet deployment, complemented by a novel resource allocation policy for tighter HetNet interference management. A system-level simulator has been used to analyze the proposed algorithms/protocols, and the results show that an energy gain of up to 20% can be observed.
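As a rough illustration of the trade-off such an energy-efficient scheduler manages, each resource block below is assigned to the user with the best rate-per-watt score weighted by a proportional-fairness term; this is a generic metric for illustration only, not the EES algorithm proposed in the thesis:

```python
# Generic illustration of an energy-aware scheduling metric: per resource block,
# pick the user maximising spectral efficiency per watt, tempered by fairness.
import numpy as np

def schedule(rates, tx_power, avg_throughput, alpha=1.0):
    """rates: (users, blocks) achievable rates; returns block -> user mapping."""
    energy_eff = rates / tx_power[:, None]           # bits per joule per block
    fairness = 1.0 / (avg_throughput[:, None] ** alpha)
    return np.argmax(energy_eff * fairness, axis=0)

rng = np.random.default_rng(2)
rates = rng.uniform(0.5, 5.0, size=(4, 10))          # 4 UEs, 10 resource blocks
tx_power = np.array([1.0, 0.8, 1.2, 1.0])            # watts per UE
avg_throughput = np.array([2.0, 1.0, 3.0, 1.5])      # past average rates
print(schedule(rates, tx_power, avg_throughput))     # chosen UE per block
```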