901 results for Subfractals, Subfractal Coding, Model Analysis, Digital Imaging, Pattern Recognition
Abstract:
This paper presents a novel theory for performing multi-agent activity recognition without requiring large training corpora. The reduced need for data means that robust probabilistic recognition can be performed within domains where annotated datasets are traditionally unavailable. Complex human activities are composed of sequences of underlying primitive activities. We do not assume that the exact temporal ordering of primitives is necessary, and so represent a complex activity as an unordered bag of primitives. Our three-tier architecture comprises low-level video tracking, event analysis and high-level inference. High-level inference is performed using a new, cascading extension of the Rao–Blackwellised Particle Filter. Simulated annealing is used to identify pairs of agents involved in multi-agent activity. We validate our framework using the benchmark PETS 2006 video surveillance dataset and our own sequences, achieving a mean recognition F-Score of 0.82. Our approach achieves a mean improvement of 17% over a Hidden Markov Model baseline.
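As a minimal illustration of the unordered-bag idea (not the paper's inference machinery), a complex activity can be represented as a multiset of primitive labels and compared against a model bag; the primitive names below are hypothetical:

```python
from collections import Counter

# Hypothetical primitive-activity labels; the actual primitive vocabulary is
# not given in the abstract.
observed = ["approach", "put_down_bag", "approach", "walk_away"]

# An unordered bag (multiset) keeps counts but discards temporal ordering.
bag = Counter(observed)

# A candidate complex-activity model expressed the same way.
model = Counter({"approach": 2, "put_down_bag": 1, "walk_away": 1})

# One simple, illustrative match score: multiset overlap with the model.
overlap = sum((bag & model).values()) / sum(model.values())
print(dict(bag), f"overlap={overlap:.2f}")
```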
Abstract:
This paper describes an analysis performed for facial description in static images and video streams. The still-image context is first analyzed in order to decide the optimal classifier configuration for each problem: gender recognition, race classification, and the presence of glasses and a moustache. These results are later applied to significant samples automatically extracted in real time from video streams, achieving promising results in the facial description of 70 individuals in terms of gender, race, and the presence of glasses and a moustache.
Abstract:
Mainstream cinema is to an ever-increasing degree deploying digital imaging technologies to work with the human form: expanding on it, morphing its features, or providing new ways of presenting it. This has prompted theories of simulation and virtualisation to explore the cultural and aesthetic implications, anxieties, and possibilities of a loss of the ‘real’, in turn often defined in terms of the photographic trace. This thesis aims to provide another perspective. Following instead some recent imperatives in art theory, this study seeks to introduce and expand on the notion of the human figure as pertaining to processes of figuration rather than (only) representation. The notions of the figure and figuration have an extended history in the fields of hermeneutics, aesthetics, and philosophy, through which they have come to stand for particular theories and methodologies with regard to images and their communication of meaning. The objective of this study is to appropriate these for film theory, culminating in two case studies that demonstrate how formal parameters present and organise ideas of the body and the human. The aim is to develop a material approach to contemporary digital practices, where bodies have not ceased to matter but are framed in new ways by new technologies.
Abstract:
Social robots are receiving much interest in the robotics community. The most important goal for such robots lies in their interaction capabilities. An attention system is crucial, both as a filter to center the robot’s perceptual resources and as a means of letting the observer know that the robot has intentionality. In this paper, a simple but flexible and functional attentional model is described. The model, which has been implemented in an interactive robot currently under development, fuses visual and auditory information extracted from the robot’s environment, and can incorporate knowledge-based influences on attention.
Abstract:
The work presented in this paper relates to Depth Recovery from Focus. The approach starts by calibrating the focal length of the camera using the Gaussian lens law for the thin lens camera model. Two approaches are presented, based on the availability of the internal distance of the lens.
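For reference, the Gaussian lens law for the thin lens model relates the focal length $f$, the object distance $u$, and the image distance $v$:

\[
  \frac{1}{f} = \frac{1}{u} + \frac{1}{v}
\]

With $f$ calibrated, determining the lens-to-image distance at the sharpest focus setting constrains the object distance, which is the relationship a depth-from-focus approach exploits.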
Abstract:
With the world of professional sports shifting towards employing better sports analytics, the demand for vision-based performance analysis has grown rapidly in recent years. In addition, the nature of many sports does not allow the use of sensors or other wearable markers attached to players for monitoring their performance during competitions. This opens up a potential application for systematic observations, such as player tracking information, to help coaches develop the visual skills and perceptual awareness needed to make decisions about team strategy or training plans. My PhD project is part of a larger ongoing project between sport scientists and computer scientists, also involving industry partners and sports organisations. The overall idea is to investigate the contribution technology can make to the analysis of sports performance, using the example of team sports such as rugby, football or hockey. A particular focus is on vision-based tracking, so that information about the location and dynamics of the players can be gained without any additional sensors on the players. To start with, prior approaches to visual tracking are extensively reviewed and analysed. In this thesis, methods are proposed to deal with the difficulties in visual tracking, in particular target appearance changes caused by intrinsic factors (e.g. pose variation) and extrinsic factors such as occlusion. The proposed visual tracking algorithms address these challenges and provide robust and accurate frameworks for estimating the target state in complex tracking scenarios such as sports scenes, thereby facilitating the tracking process. Next, a framework for continuously tracking multiple targets is proposed. Compared to single-target tracking, multi-target tracking, such as tracking the players on a sports field, poses an additional difficulty, namely data association, which needs to be addressed. Here, the aim is to locate all targets of interest, infer their trajectories and decide which observation corresponds to which target trajectory. In this thesis, an efficient framework is proposed to handle this particular problem, especially in sports scenes, where players of the same team tend to look similar and exhibit complex interactions and unpredictable movements, resulting in matching ambiguity between players. The presented approach is evaluated on different sports datasets and shows promising results. Finally, information from the proposed tracking system is utilised as the basic input for further higher-level performance analysis, such as tactics and team formations, which can help coaches design better training plans. Due to the continuous nature of many team sports (e.g. soccer, hockey), it is not straightforward to infer high-level team behaviours such as player interactions. The proposed framework relies on two distinct levels of performance analysis: low-level analysis, such as identifying player positions on the field of play, and high-level analysis, where the aim is to estimate the density of player locations or to detect possible interaction groups. The related experiments show that the proposed approach can effectively extract this high-level information, which has many potential applications.
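One common way to pose the data-association step described above is as a minimum-cost assignment between predicted track positions and new detections. The sketch below uses the Hungarian algorithm with a Euclidean-distance cost purely for illustration; the thesis's actual cost model is not specified in this abstract:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical predicted track positions and new detections (x, y) in metres.
tracks = np.array([[10.0, 5.0], [22.0, 30.0], [40.0, 12.0]])
detections = np.array([[10.5, 5.2], [39.0, 12.5], [21.0, 29.0]])

# Cost matrix: distance between every track and every detection.
cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)

# The Hungarian algorithm gives the minimum-cost one-to-one assignment.
track_idx, det_idx = linear_sum_assignment(cost)
for t, d in zip(track_idx, det_idx):
    print(f"track {t} <- detection {d} (distance {cost[t, d]:.2f} m)")
```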
Abstract:
Fire is a frequent process in the landscapes of northern Portugal. Previous studies have shown that holm oak (Quercus rotundifolia) woodlands persist after fire and help to reduce its intensity and rate of spread. The main objectives of this study were to understand and model the effect of holm oak woodlands on fire behaviour at the landscape level in the upper Sabor river basin, located in northeastern Portugal. The impact of holm oak woodlands on fire behaviour was tested in terms of area and configuration according to scenarios simulating the possible distribution of these vegetation units in the landscape, considering holm oak cover percentages of 2.2% (Low), 18.1% (Moderate), 26.0% (High), and 39.8% (Rivers). These scenarios were mainly intended to test 1) the role of holm oak woodlands in fire behaviour and 2) how the configuration of holm oak patches can help to reduce fireline intensity and burned area. Fire behaviour was modelled with FlamMap, simulating fireline intensity and rate of fire spread based on fuel models associated with each land use and land cover class present in the study area, as well as on topographic (elevation, slope, and aspect) and climatic (humidity and wind speed) factors. Two fuel models were used for the holm oak cover (interior and edge areas), developed from field data collected in the region. The FRAGSTATS software was used to analyse the spatial patterns of the fireline intensity classes, using the metrics Class Area (CA), Number of Patches (NP), and Largest Patch Index (LPI). The results indicated that fireline intensity and rate of fire spread varied between scenarios and between fuel models for the holm oak woodland. Mean fireline intensity and mean rate of fire spread decreased as the percentage of holm oak woodland area in the landscape increased. The CA, NP, and LPI metrics also varied between scenarios and fuel models for the holm oak woodland, decreasing as the percentage of holm oak woodland area increased. This study showed that variation in the cover percentage and spatial configuration of holm oak woodlands influences fire behaviour, reducing, on average, fireline intensity and rate of spread, and suggesting that holm oak woodlands can be used as a preventive silvicultural measure to reduce fire risk in this region.
Abstract:
Background: Understanding transcriptional regulation through genome-wide microarray studies can help to unravel complex relationships between genes. Attempts to standardize the annotation of microarray data include the Minimum Information About a Microarray Experiment (MIAME) recommendations, the MAGE-ML format for data interchange, and the use of controlled vocabularies or ontologies. Existing software systems for microarray data analysis implement these standards only partially and are often hard to use and extend. Integration of genomic annotation data and other sources of external knowledge using open standards is therefore a key requirement for future integrated analysis systems. Results: The EMMA 2 software has been designed to resolve shortcomings with respect to full MAGE-ML and ontology support and makes use of modern data integration techniques. We present a software system that features comprehensive data analysis functions for spotted arrays and for the most common synthesized oligo arrays such as Agilent, Affymetrix and NimbleGen. The system is based on the full MAGE object model. Analysis functionality is based on R and Bioconductor packages and can make use of a compute cluster for distributed services. Conclusion: Our model-driven approach for automatically implementing a full MAGE object model provides high flexibility and compatibility. Data integration via SOAP-based web services is advantageous in a distributed client-server environment, as the collaborative analysis of microarray data is gaining increasing relevance in international research consortia. The adequacy of the EMMA 2 software design and implementation has been proven by its application in many distributed functional genomics projects. Its scalability makes the current architecture suited for extensions towards future transcriptomics methods based on high-throughput sequencing approaches, which have much higher computational requirements than microarrays.
Abstract:
Over the past several decades, thousands of otoliths, bivalve shells, and scales have been collected for the purposes of age determination and remain archived in European and North American fisheries laboratories. Advances in digital imaging and computer software, combined with techniques developed by tree-ring scientists, provide a means by which to extract additional levels of information from these calcified structures and generate annually resolved (one value per year), multidecadal time-series of population-level growth anomalies. Chemical and isotopic properties may also be extracted to provide additional information regarding the environmental conditions these organisms experienced. Given that they are exactly placed in time, chronologies can be directly compared to instrumental climate records, chronologies from other regions or species, or time-series of other biological phenomena. In this way, chronologies may be used to reconstruct historical ranges of environmental variability, identify climatic drivers of growth, establish linkages within and among species, and generate ecosystem-level indicators. Following the first workshop in Hamburg, Germany, in December 2014, the second workshop on Growth increment Chronologies in Marine Fish: climate-ecosystem interactions in the North Atlantic (WKGIC2) met at the Mediterranean Institute for Advanced Studies headquarters in Esporles, Spain, on 18–22 April 2016, chaired by Bryan Black (USA) and Christoph Stransky (Germany). Thirty-six participants from fifteen different countries attended. Objectives were to i) review the applications of chronologies developed from growth-increment widths in the hard parts (otoliths, shells, scales) of marine fish and bivalve species, ii) review the fundamentals of crossdating and chronology development, iii) discuss assumptions and limitations of these approaches, iv) measure otolith growth-increment widths in image analysis software, v) learn software to statistically check increment dating accuracy, vi) generate a growth-increment chronology and relate it to climate indices, and vii) initiate cooperative projects or training exercises to commence after the workshop. The workshop began with an overview of tree-ring techniques of chronology development, including a hands-on exercise in crossdating. Next, we discussed the applications of fish and bivalve biochronologies and the range of issues that could be addressed. We then reviewed key assumptions and limitations, especially those associated with short-lived species for which there are numerous and extensive otolith archives in European fisheries labs. Next, participants were provided with images of European plaice otoliths from the North Sea and taught to measure increment widths in image analysis software. Upon completion of measurements, techniques of chronology development were discussed and contrasted with those that have been applied for long-lived species. Plaice growth time-series were then related to environmental variability using the KNMI Climate Explorer. Finally, potential future collaborations and funding opportunities were discussed, and there was a clear desire to meet again to compare various statistical techniques for chronology development using a range of existing fish, bivalve, and tree growth-increment datasets.
Overall, we hope to increase the use of these techniques, and over the long term, develop networks of biochronologies for integrative analyses of ecosystem functioning and relationships to long-term climate variability and fishing pressure.
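A minimal sketch of the chronology-building step discussed at the workshop, assuming hypothetical increment-width data and a hypothetical climate index; a real analysis would also detrend for age-related growth decline before averaging:

```python
import numpy as np
import pandas as pd

# Illustrative only: hypothetical per-fish increment-width series indexed by
# calendar year (columns = individual otoliths), not real plaice data.
years = np.arange(1990, 2011)
rng = np.random.default_rng(0)
widths = pd.DataFrame(
    rng.lognormal(mean=0.0, sigma=0.2, size=(len(years), 5)),
    index=years,
    columns=[f"fish_{i}" for i in range(5)],
)

# Standardise each individual series (removing level differences between fish),
# then average across fish within each year to form the master chronology.
standardised = (widths - widths.mean()) / widths.std()
chronology = standardised.mean(axis=1)

# Relate the chronology to a climate index (hypothetical anomalies here;
# WKGIC2 used the KNMI Climate Explorer for the real indices).
climate_index = pd.Series(rng.normal(size=len(years)), index=years)
print("correlation with climate index:", round(chronology.corr(climate_index), 2))
```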
Abstract:
(Deep) neural networks are increasingly being used for various computer vision and pattern recognition tasks due to their strong ability to learn highly discriminative features. However, quantitative analysis of their classification ability and their design philosophies remain nebulous. In this work, we use information theory to analyze concatenated restricted Boltzmann machines (RBMs) and propose a mutual-information-based RBM neural network (MI-RBM). We develop a novel pretraining algorithm to maximize the mutual information between RBMs. Extensive experimental results on various classification tasks show the effectiveness of the proposed approach.
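The abstract does not detail the MI-RBM pretraining algorithm; the sketch below only illustrates the two building blocks it names, a Bernoulli RBM trained with one step of contrastive divergence and a plug-in mutual-information estimate between the binarized hidden activations of two stacked RBMs, on toy data:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal Bernoulli-Bernoulli RBM trained with CD-1 (illustrative only;
    this is not the paper's MI-RBM pretraining)."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # One step of contrastive divergence on a batch of visible vectors.
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

def binary_mi(x, y):
    """Plug-in mutual information (in nats) between two binary vectors."""
    joint = np.zeros((2, 2))
    for a in (0, 1):
        for b in (0, 1):
            joint[a, b] = np.mean((x == a) & (y == b))
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            if joint[a, b] > 0:
                mi += joint[a, b] * np.log(joint[a, b] / (px[a] * py[b]))
    return mi

# Toy binary data and two stacked (concatenated) RBMs.
data = (rng.random((200, 20)) < 0.5).astype(float)
rbm1, rbm2 = RBM(20, 10), RBM(10, 5)
for _ in range(50):
    rbm1.cd1_step(data)
h1 = (rbm1.hidden_probs(data) > 0.5).astype(float)
for _ in range(50):
    rbm2.cd1_step(h1)
h2 = (rbm2.hidden_probs(h1) > 0.5).astype(float)

# MI between one unit from each layer: an example of the kind of quantity an
# MI-based pretraining objective would seek to increase between RBMs.
print("MI(h1[0]; h2[0]) =", round(binary_mi(h1[:, 0], h2[:, 0]), 4))
```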
Abstract:
Master's dissertation, Universidade de Brasília, Faculdade de Educação, Programa de Pós-Graduação em Educação, 2016.
Abstract:
The main objectives of this thesis are to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition and image detection using a principal components analysis (PCA) algorithm and the IPCA algorithm; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA algorithm with that of the IPCA algorithm in compression, recognition and detection; and to compare the performance of the digital model with that of the optical model in recognition and detection. The MATLAB software was used for simulating the models. PCA is a technique for identifying patterns in data and representing the data so as to highlight similarities and differences. Identifying patterns in high-dimensional data (more than three dimensions) is difficult because graphical representation of the data is impossible; PCA is therefore a powerful method for analyzing data. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve PCA. The joint transform correlator (JTC) is an optical correlator used for synthesizing a frequency-plane filter for coherent optical systems. In general, the IPCA algorithm behaves better than the PCA algorithm in most applications. It is better than the PCA algorithm in image compression because it obtains higher compression, more accurate reconstruction, and faster processing with acceptable errors; it is also better than the PCA algorithm in real-time image detection because it achieves the smallest error rate as well as remarkable speed. On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers an acceptable error rate, easy calculation, and reasonable speed. Finally, in detection and recognition, the digital model performs better than the optical model.
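As a rough illustration of the PCA compression and reconstruction pipeline the thesis evaluates (the IPCA variant and the JTC optical model are not reproduced here), assuming hypothetical flattened grayscale images:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: 100 flattened 32x32 grayscale images (random stand-ins).
rng = np.random.default_rng(0)
images = rng.random((100, 32 * 32))

pca = PCA(n_components=20)                    # keep 20 principal components
codes = pca.fit_transform(images)             # compressed representation
reconstructed = pca.inverse_transform(codes)  # decompression

mse = np.mean((images - reconstructed) ** 2)
ratio = images.shape[1] / codes.shape[1]
print(f"compression ratio ~{ratio:.0f}:1, reconstruction MSE = {mse:.4f}")
```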
Abstract:
To analyze the characteristics and predict the dynamic behaviors of complex systems over time, comprehensive research is crucially needed to enable the development of systems that can intelligently adapt to evolving conditions and infer new knowledge with algorithms that are not predesigned. This dissertation research studies the integration of techniques and methodologies from the fields of pattern recognition, intelligent agents, artificial immune systems, and distributed computing platforms to create technologies that can more accurately describe and control the dynamics of real-world complex systems. The need for such technologies is emerging in manufacturing, transportation, hazard mitigation, weather and climate prediction, homeland security, and emergency response. Motivated by the ability of mobile agents to dynamically incorporate additional computational and control algorithms into executing applications, mobile agent technology is employed in this research for adaptive sensing and monitoring in a wireless sensor network. Mobile agents are software components that can travel from one computing platform to another in a network and carry the programs and data states needed for performing their assigned tasks. To support the generation, migration, communication, and management of mobile monitoring agents, an embeddable mobile agent system (Mobile-C) is integrated with sensor nodes. Mobile monitoring agents visit distributed sensor nodes, read real-time sensor data, and perform anomaly detection using the equipped pattern recognition algorithms. The optimal control of agents is achieved by mimicking the adaptive immune response and by applying multi-objective optimization algorithms. The mobile agent approach has the potential to reduce the communication load and energy consumption in monitoring networks. The major research work of this dissertation project includes: (1) studying effective feature extraction methods for time-series measurement data; (2) investigating the impact of the feature extraction methods and dissimilarity measures on the performance of pattern recognition; (3) researching the effects of environmental factors on the performance of pattern recognition; (4) integrating an embeddable mobile agent system with wireless sensor nodes; (5) optimizing agent generation and distribution using artificial immune system concepts and multi-objective algorithms; (6) applying mobile agent technology and pattern recognition algorithms to adaptive structural health monitoring and driving cycle pattern recognition; (7) developing a web-based monitoring network to enable the visualization and analysis of real-time sensor data remotely. Techniques and algorithms developed in this dissertation project will contribute to research advances in networked distributed systems operating under changing environments.
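A minimal sketch of the kind of per-window feature extraction and baseline-deviation anomaly check a mobile monitoring agent might carry, assuming synthetic sensor windows; the dissertation's actual feature sets and dissimilarity measures are not reproduced here:

```python
import numpy as np

def extract_features(window):
    """Simple time- and frequency-domain features from one window of sensor
    measurements (illustrative choices only)."""
    spectrum = np.abs(np.fft.rfft(window))
    return np.array([
        window.mean(),
        window.std(),
        window.max() - window.min(),
        spectrum[1:4].mean(),   # coarse low-frequency energy
    ])

rng = np.random.default_rng(1)

# Baseline statistics learned from windows of normal operation.
baseline = np.array([extract_features(rng.normal(0.0, 1.0, 256)) for _ in range(50)])
mu, sigma = baseline.mean(axis=0), baseline.std(axis=0)

# An agent visiting a node could flag a window whose features deviate strongly
# from the baseline statistics it carries with it.
new_window = rng.normal(0.0, 3.0, 256)       # simulated anomalous reading
z = np.abs((extract_features(new_window) - mu) / sigma)
print("anomaly" if z.max() > 4.0 else "normal", "| max z-score:", round(float(z.max()), 1))
```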
Abstract:
Improved clinical care for Bipolar Disorder (BD) relies on the identification of diagnostic markers that can reliably detect disease-related signals in clinically heterogeneous populations. At the very least, diagnostic markers should be able to differentiate patients with BD from healthy individuals and from individuals at familial risk for BD who either remain well or develop other psychopathology, most commonly Major Depressive Disorder (MDD). These issues are particularly pertinent to the development of translational applications of neuroimaging, as they represent challenges for which clinical observation alone is insufficient. We therefore applied pattern classification to task-based functional magnetic resonance imaging (fMRI) data from the n-back working memory task to test their predictive value in differentiating patients with BD (n=30) from healthy individuals (n=30) and from patients' relatives who were either diagnosed with MDD (n=30) or were free of any personal lifetime history of psychopathology (n=30). Diagnostic stability in these groups was confirmed with 4-year prospective follow-up. Task-based activation patterns from the fMRI data were analyzed with Gaussian Process Classifiers (GPC), a machine learning approach to detecting multivariate patterns in neuroimaging datasets. Consistent, significant classification results were obtained only with data from the 3-back versus 0-back contrast. Using this contrast, patients with BD were correctly classified relative to unrelated healthy individuals with an accuracy of 83.5%, sensitivity of 84.6% and specificity of 92.3%. Classification accuracy, sensitivity and specificity when comparing patients with BD to their relatives with MDD were respectively 73.1%, 53.9% and 94.5%. Classification accuracy, sensitivity and specificity when comparing patients with BD to their healthy relatives were respectively 81.8%, 72.7% and 90.9%. We show that significant individual classification can be achieved using whole-brain pattern analysis of task-based working memory fMRI data. The high accuracy and specificity achieved by all three classifiers suggest that multivariate pattern recognition analyses can aid clinicians in the clinical care of BD in situations of true clinical uncertainty regarding diagnosis and prognosis.
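A hedged sketch of the kind of pattern-classification pipeline described above, using scikit-learn's GaussianProcessClassifier with leave-one-out cross-validation on synthetic stand-in features; the real fMRI contrast data and the study's exact validation scheme are not reproduced:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for per-subject activation features from the 3-back vs
# 0-back contrast: 30 controls (label 0) and 30 patients (label 1).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (30, 50)), rng.normal(0.5, 1.0, (30, 50))])
y = np.array([0] * 30 + [1] * 30)

# Leave-one-out cross-validation, as is common for small neuroimaging samples.
preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = GaussianProcessClassifier().fit(X[train_idx], y[train_idx])
    preds[test_idx] = clf.predict(X[test_idx])

tn, fp, fn, tp = confusion_matrix(y, preds).ravel()
accuracy = (tp + tn) / len(y)
sensitivity = tp / (tp + fn)                 # correctly identified patients
specificity = tn / (tn + fp)                 # correctly identified controls
print(f"acc={accuracy:.2f} sens={sensitivity:.2f} spec={specificity:.2f}")
```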
Abstract:
For several decades the EU has been promoting the use of alternative dispute resolution (ADR) systems to foster consumers' access to justice. This thesis provides a comprehensive overview of the "first generation" of ADR rules, with the aim of investigating the structural reasons why this regulatory framework has failed to close the gap with the commercial dispute-resolution practice observable in digital markets. The emergence of the platform organisational model in digital markets has highlighted the urgency of a new wave of regulation. In particular, very large online platforms (VLOPs) are positioned to exercise dispute-adjudication functions previously performed exclusively by national legal systems or by ADR institutions. The second part of the thesis analyses the digital platform phenomenon from a civil law perspective, considering the evolution of EU law in this area and the scholarly debate on contractual relationships in the platform economy. The analysis focuses on the internal complaint-handling systems used by VLOPs to resolve their own conflicts with users or to adjudicate disputes between users. These systems are framed as platform online dispute resolution (ODR). To support the analysis, the thesis presents four case studies of ODR systems currently offered by VLOPs of different categories. Overall, the thesis aims to give a new dimension to the notion of ODR, offering a detailed picture of the role of digital platforms in dispute resolution, also in light of the Platform-to-Business Regulation (EU 1150/2019) and the Digital Services Act (EU 2065/2022). The study shows the need for civil procedure scholars to pay attention to this emerging phenomenon, also in order to prevent dispute resolution by digital platforms from becoming a substantial obstacle to citizens' access to justice.