906 results for Adaptive object model


Relevance:

30.00%

Publisher:

Abstract:

In this thesis, we propose to infer pixel-level labelling in video using only object category information, exploiting the intrinsic structure of video data. Our motivation is the observation that image-level labels are much easier to acquire than pixel-level labels, and it is natural to seek a link between image-level recognition and pixel-level classification in video data that would transfer learned recognition models from one domain to the other. To this end, this thesis proposes two domain adaptation approaches that adapt a deep convolutional neural network (CNN) image recognition model trained on labelled image data to the target domain, exploiting both the semantic evidence learned by the CNN and the intrinsic structure of unlabelled video data. Our proposed approaches explicitly model and compensate for the shift from the source domain to the target domain, which in turn underpins a robust semantic object segmentation method for natural videos. We demonstrate the superior performance of our methods through extensive evaluations on challenging datasets against state-of-the-art methods.
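The thesis's CNN-based approach cannot be reproduced in a few lines, but the general flavour of adapting a model across domains can be illustrated with a minimal sketch of feature-distribution alignment; the feature values below are hypothetical, not from the thesis:

```python
import statistics

def align_features(source, target):
    """Shift and rescale 1-D source-domain features so their mean and
    spread match the target domain -- a tiny stand-in for the
    distribution-alignment idea behind domain adaptation."""
    mu_s, mu_t = statistics.mean(source), statistics.mean(target)
    sd_s, sd_t = statistics.stdev(source), statistics.stdev(target)
    return [(x - mu_s) / sd_s * sd_t + mu_t for x in source]

# hypothetical feature values from labelled images vs unlabelled video
source = [1.0, 2.0, 3.0, 4.0]
target = [10.0, 12.0, 14.0, 16.0]
adapted = align_features(source, target)
```

After alignment, a classifier trained on the source features can be applied to target-domain data whose first- and second-order statistics now match.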

Relevance:

30.00%

Publisher:

Abstract:

The major function of this model is to access the UCI Wisconsin Breast Cancer data-set [1] and classify the data items into two categories, normal and anomalous. This kind of classification is referred to as anomaly detection, which discriminates anomalous behaviour from normal behaviour in computer systems. One popular solution for anomaly detection is Artificial Immune Systems (AIS). AIS are adaptive systems inspired by theoretical immunology and observed immune functions, principles and models, applied to problem solving. The Dendritic Cell Algorithm (DCA) [2] is an AIS algorithm developed specifically for anomaly detection, and it has been successfully applied to intrusion detection in computer security. Agent-based modelling is believed to be an ideal approach for implementing AIS, as intelligent agents can naturally represent the immune entities in AIS. This model evaluates the feasibility of re-implementing the DCA in an agent-based simulation environment called AnyLogic, where the immune entities of the DCA are represented by intelligent agents. A successful implementation would make it possible to build more complicated and adaptive AIS models in the agent-based simulation environment.
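As a rough illustration of the signal-fusion idea behind the DCA (with made-up weights and threshold, not the published algorithm), each data item's "safe" and "danger" signals can be fused into a context value that decides its label:

```python
def dca_context(safe, danger, w_safe=-2.0, w_danger=1.0):
    """Fuse one item's 'safe' and 'danger' signals into a context value
    (toy weights standing in for the DCA's signal-fusion matrix)."""
    return w_safe * safe + w_danger * danger

def classify(items, threshold=0.0):
    """Label each (safe, danger) pair: anomalous if the fused danger
    context outweighs the safe context."""
    return ["anomalous" if dca_context(s, d) > threshold else "normal"
            for s, d in items]

# (safe, danger) signal pairs for three hypothetical data items
labels = classify([(0.9, 0.1), (0.2, 0.8), (0.5, 0.4)])
```

In the agent-based re-implementation, each dendritic-cell agent would maintain such a running fused context over the items it samples.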

Relevance:

30.00%

Publisher:

Abstract:

A new type of space debris was recently discovered by Schildknecht in near-geosynchronous orbit (GEO). These objects were later identified as exhibiting the properties of High Area-to-Mass Ratio (HAMR) objects. According to their brightness magnitudes (light curves), high rotation rates and composition properties (albedo, amount of specular and diffuse reflection, colour, etc.), it is thought that these objects are multilayer insulation (MLI). Observations have shown that this debris type is very sensitive to environmental disturbances, particularly solar radiation pressure, because their shapes are easily deformed, leading to changes in the area-to-mass ratio (AMR) over time. This thesis proposes a simple, effective flexible model of the thin, deformable membrane, using two different methods. First, the debris is modelled with Finite Element Analysis (FEA) using Euler-Bernoulli beam theory (the "Bernoulli model"). The Bernoulli model is built from beam elements with two nodes each, where each node has six degrees of freedom (DoF), and the mass of the membrane is distributed over the beam elements. Second, the debris is modelled using multibody dynamics theory (the "Multibody model") as a series of lumped masses connected through flexible joints, representing the flexibility of the membrane itself. The mass of the membrane, albeit low, is accounted for by lumped masses at the joints. The dynamic equations for the masses, including the constraints defined by the connecting rigid rods, are derived using fundamental Newtonian mechanics. The physical properties required by both flexible models (membrane density, reflectivity, composition, etc.) are assumed to be those of multilayer insulation.
Both flexible membrane models are then propagated together with classical orbital and attitude equations of motion in the near-GEO region to predict the orbital evolution under the perturbations of solar radiation pressure, the Earth's gravity field, luni-solar gravitational fields and the self-shadowing effect. The results are compared with two rigid-body models (a cannonball and a flat rigid plate). Compared with the rigid models, the orbital elements of the flexible models show different inclination and secular eccentricity evolutions, rapid irregular attitude motion, and an unstable cross-section area caused by deformation over time. Monte Carlo simulations varying the initial attitude dynamics and deformation angle are then carried out and compared with the rigid models over 100 days. The simulations show that different initial conditions produce unique orbital motions, significantly different from the orbital motions of both rigid models. Furthermore, this thesis presents a methodology to determine the dynamic material properties of thin membranes and validates the deformation of the multibody model with real MLI materials. Experiments are performed in a high-vacuum chamber (10^-4 mbar) replicating the space environment, with a thin membrane hinged at one end and free at the other. The first experiment, a free-vibration test, determines the damping coefficient and natural frequency of the thin membrane: the membrane is allowed to fall freely in the chamber, with the motion tracked and captured in high-speed video frames. A Kalman filter is implemented in the tracking algorithm to reduce noise and increase the tracking accuracy of the oscillating motion.
The second experiment, a forced-motion test, determines the deformation characteristics of the object: a high-power spotlight (500-2000 W) illuminates the MLI and the displacements are measured with a high-resolution laser sensor. Finite Element Analysis (FEA) and multibody dynamics models of the experimental setups are used to validate the flexible model against the measured displacements and natural frequencies.
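The tracking step above relies on a Kalman filter to smooth the oscillating motion; a minimal 1-D constant-velocity filter of the kind commonly used for such tracking might look like this (generic noise parameters assumed, not the thesis's values):

```python
import math, random

def kalman_1d(measurements, dt=0.1, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter for noisy 1-D position data.
    State is [position, velocity]; p_xx, p_xv, p_vv are the entries of
    the symmetric 2x2 state covariance."""
    x, v = measurements[0], 0.0
    p_xx, p_xv, p_vv = 1.0, 0.0, 1.0
    smoothed = []
    for z in measurements:
        # predict: x' = x + v*dt, P' = F P F^T + Q
        x += v * dt
        p_xx += dt * (2.0 * p_xv + dt * p_vv) + q
        p_xv += dt * p_vv
        p_vv += q
        # update with measurement z (H = [1, 0], noise variance r)
        s = p_xx + r
        k_x, k_v = p_xx / s, p_xv / s
        innov = z - x
        x += k_x * innov
        v += k_v * innov
        p_vv -= k_v * p_xv
        p_xv -= k_v * p_xx
        p_xx -= k_x * p_xx
        smoothed.append(x)
    return smoothed

# smooth a noisy signal resembling a decaying free vibration
random.seed(0)
true = [math.exp(-0.02 * k) * math.sin(0.4 * k) for k in range(100)]
noisy = [s + random.gauss(0.0, 0.2) for s in true]
est = kalman_1d(noisy)
```

The process-noise `q` and measurement-noise `r` values trade smoothness against responsiveness; in practice they would be tuned to the camera's frame rate and pixel noise.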

Relevance:

30.00%

Publisher:

Abstract:

Supply chains are ubiquitous in commercial delivery systems. The exchange of goods and services, from different supply points to distinct destinations scattered over a given geographical area, requires the management of stocks and vehicle fleets in order to minimise costs while maintaining good service quality. Even if the operating conditions remain constant over a given time horizon, managing a supply chain is a very complex task. Its complexity increases exponentially with both the number of network nodes and the dynamics of operational changes. Moreover, the management system must be adaptive in order to cope easily with disturbances such as machinery and vehicle breakdowns or changes in demand. This work proposes the use of a model predictive control paradigm to tackle these issues. The simulation results obtained suggest that this strategy enables easy task rescheduling in the event of disturbances or anticipated changes in operating conditions. © Springer International Publishing Switzerland 2017
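The model predictive control idea, optimising over a forecast horizon but applying only the first decision and then re-planning, can be sketched for a single-product stock with hypothetical costs and demand figures (a toy sketch, not the paper's controller):

```python
from itertools import product

def mpc_order(stock, demand_forecast, actions=(0, 5, 10, 15, 20),
              hold_cost=1.0, short_cost=10.0):
    """Receding-horizon (MPC-style) ordering: enumerate every order
    sequence over the forecast horizon, score it by holding plus
    shortage cost, and return only the FIRST order of the best plan."""
    best_cost, best_first = float("inf"), actions[0]
    for plan in product(actions, repeat=len(demand_forecast)):
        s, cost = stock, 0.0
        for u, d in zip(plan, demand_forecast):
            s = s + u - d
            cost += hold_cost * max(s, 0) + short_cost * max(-s, 0)
            s = max(s, 0)            # unmet demand is lost, not carried
        if cost < best_cost:
            best_cost, best_first = cost, plan[0]
    return best_first

# with 5 units in stock and a forecast demand of 10 per period
order = mpc_order(stock=5, demand_forecast=[10, 10, 10])
```

Re-planning at every step is what lets such a controller absorb disturbances (a breakdown, a demand spike) simply by feeding it the updated state and forecast.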

Relevance:

30.00%

Publisher:

Abstract:

The understanding and proper evaluation of flow and mixing behaviour at the microscale is therefore a very important issue. In this study, the diffusion behaviour of two reacting solutions, HCl and NaOH, was directly observed in a glass/polydimethylsiloxane microfluidic device using adaptive coatings based on the conductive polymer polyaniline, covalently attached to the microchannel walls. The two liquid streams were combined at the junction of a Y-shaped microchannel and allowed to diffuse into each other and react. The results showed excellent correlation between optical observation of the diffusion process and the numerical results. A numerical model based on finite volume method (FVM) discretisation of the steady Navier-Stokes (fluid flow) and mass transport equations, without reactions, was used to calculate the flow variables at discrete points of the finite volume mesh. The high correlation between theory and experimental data indicates the potential of such coatings to monitor diffusion processes and mixing behaviour inside microfluidic channels in a dye-free environment.
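The study's FVM model is multi-dimensional; a minimal 1-D analogue of a finite-volume diffusion update conveys the idea of flux exchange between cells (illustrative diffusion number and grid, not the paper's model):

```python
def diffuse(conc, d=0.1, steps=200):
    """One-dimensional explicit finite-volume diffusion: each interior
    face exchanges flux d*(c[i+1] - c[i]) between neighbouring cells
    (zero-flux walls; stable for d <= 0.5)."""
    c = list(conc)
    for _ in range(steps):
        flux = [d * (c[i + 1] - c[i]) for i in range(len(c) - 1)]
        for i, f in enumerate(flux):
            c[i] += f
            c[i + 1] -= f
    return c

# two streams meeting at a Y-junction: left half 1.0, right half 0.0
profile = diffuse([1.0] * 10 + [0.0] * 10, steps=200)
```

Because each face's flux is added to one cell and subtracted from its neighbour, total mass is conserved exactly, and the profile relaxes toward the well-mixed value.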

Relevance:

30.00%

Publisher:

Abstract:

Mental stress is known to disrupt the execution of motor performance and can degrade its quality; however, individuals differ significantly in how fast and how well they can perform a skilled task, according to how well they manage stress and emotion. The purpose of this study was to advance our understanding of how the brain modulates emotional reactivity under different motivational states to achieve differential performance in a target shooting task requiring precise visuomotor coordination. To study the interactions between emotion-regulatory brain areas (i.e. the ventral striatum, amygdala and prefrontal cortex) and the autonomic nervous system, reward and punishment interventions were employed, and the resulting behavioural and physiological responses were contrasted to observe changes in shooting performance (i.e. shooting accuracy and stability of aim) and neuro-cognitive processes (i.e. cognitive load and reserve) during the shooting task. Thirty-five participants, aged 18 to 38 years, from the Reserve Officers' Training Corps (ROTC) at the University of Maryland were recruited to take 30 shots at a bullseye target in three experimental conditions. In the reward condition, $1 was added to their total balance for every 10-point shot. In the punishment condition, $1 was deducted from their total balance if they did not hit the 10-point area. In the neutral condition, no money was added or deducted. In the reward condition, reportedly the most enjoyable and least stressful, heart rate variability was positively related to shooting scores, inversely related to variability in shooting performance, and positively related to alpha power (i.e. less activation) in the left temporal region. In the punishment (and most stressful) condition, an increase in sympathetic response (i.e. increased LF/HF ratio) was positively related to jerking movements as well as to the variability of shot placement on the target. This, coupled with error-monitoring activity in the anterior cingulate cortex, suggests that evaluation of self-efficacy might drive arousal regulation, thus affecting shooting performance. Better performers showed variable, increasing high-alpha power in the temporal region while aiming towards the shot, which could indicate an adaptive strategy of engagement. They also showed lower coherence during hit shots than during missed shots, coupled with reduced jerking movements and better precision and accuracy. Frontal asymmetry measures revealed a possible influence of the prefrontal lobe in driving this effect in the reward and neutral conditions. The possible interactions, the reasons behind these findings, and their implications are discussed.
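The LF/HF ratio used above is a standard heart-rate-variability index; a self-contained sketch of its computation from an evenly resampled RR series, using a plain DFT and the conventional band limits (the input signal below is synthetic, not study data):

```python
import math, cmath

def band_power(signal, fs, lo, hi):
    """Summed periodogram power of `signal` (sample rate `fs` Hz) over
    the frequency band [lo, hi), via a plain DFT (no SciPy needed)."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]                 # remove the DC offset
    power = 0.0
    for k in range(1, n // 2):
        if lo <= k * fs / n < hi:
            coef = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                       for t in range(n))
            power += abs(coef) ** 2 / n
    return power

def lf_hf_ratio(rr, fs=4.0):
    """LF/HF ratio with the conventional HRV bands:
    LF 0.04-0.15 Hz (sympathetic-leaning), HF 0.15-0.40 Hz."""
    return band_power(rr, fs, 0.04, 0.15) / band_power(rr, fs, 0.15, 0.40)

# synthetic series: strong 0.1 Hz (LF) plus weak 0.3 Hz (HF) component
fs = 4.0
rr = [2.0 * math.sin(2 * math.pi * 0.1 * t / fs)
      + 0.5 * math.sin(2 * math.pi * 0.3 * t / fs) for t in range(400)]
ratio = lf_hf_ratio(rr, fs)
```

A ratio well above 1, as for this synthetic signal, is the kind of LF-dominated spectrum the study interprets as increased sympathetic response.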

Relevance:

30.00%

Publisher:

Abstract:

Master's dissertation, Informatics Engineering, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2014

Relevance:

30.00%

Publisher:

Abstract:

Generating sample models for testing a model transformation is no easy task. This paper explores the use of classifying terms and stratified sampling for developing richer test cases for model transformations. Classifying terms are used to define the equivalence classes that characterize the relevant subgroups for the test cases. From each equivalence class of object models, several representative models are chosen depending on the required sample size. We compare our results with test suites developed using random sampling, and conclude that by using an ordered and stratified approach the coverage and effectiveness of the test suite can be significantly improved.
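The classifying-term idea, partitioning models into equivalence classes and then sampling each class, can be sketched as follows (the model representation and the term itself are hypothetical):

```python
import random

def stratified_sample(models, classifying_term, per_class=2, seed=42):
    """Partition models into equivalence classes by a classifying term
    (any function of the model), then draw up to `per_class`
    representatives from every class."""
    classes = {}
    for m in models:
        classes.setdefault(classifying_term(m), []).append(m)
    rng = random.Random(seed)
    suite = []
    for key in sorted(classes):
        members = classes[key]
        suite.extend(rng.sample(members, min(per_class, len(members))))
    return suite

# hypothetical object models characterised only by a node count
models = [{"nodes": n} for n in range(12)]
term = lambda m: "small" if m["nodes"] < 6 else "large"
suite = stratified_sample(models, term, per_class=2)
```

Unlike plain random sampling, this guarantees every equivalence class contributes test cases, which is the coverage property the paper argues for.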

Relevance:

30.00%

Publisher:

Abstract:

Metaheuristics are widely used in discrete optimisation. They make it possible to obtain a good-quality solution in reasonable time for problems that are large, complex, and hard to solve. Metaheuristics often have many parameters that the user must tune manually for a given problem. The goal of an adaptive metaheuristic is to let the method adjust some of these parameters automatically, based on the instance being solved. By drawing on prior knowledge of the problem, on machine learning concepts, and on related fields, an adaptive metaheuristic yields a more general and automatic problem-solving method. Global optimisation of mining complexes aims to establish the movement of materials in the mines and the processing flows so as to maximise the economic value of the system. Because of the large number of integer variables in the model and the presence of complex and non-linear constraints, it is often prohibitive to solve these models with the optimisers available in industry. Metaheuristics are therefore commonly used to optimise mining complexes. This thesis improves a simulated annealing procedure developed by Goodfellow & Dimitrakopoulos (2016) for the stochastic optimisation of mining complexes. The method developed by those authors requires many parameters to operate; one of them governs how the simulated annealing method searches the local neighbourhood of solutions. This thesis implements an adaptive neighbourhood search method to improve solution quality. Numerical results show an increase of up to 10% in the value of the economic objective function.
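An adaptive neighbourhood in simulated annealing can be illustrated with a generic scheme that resizes the move step from the recent acceptance rate; this is a minimal sketch on a toy 1-D objective, not the Goodfellow & Dimitrakopoulos procedure:

```python
import math, random

def adaptive_sa(f, x0, iters=2000, seed=1):
    """Simulated annealing with an adaptive neighbourhood: the move size
    grows when many recent moves were accepted and shrinks when few
    were, steering the acceptance rate toward a moderate level."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    step, temp, accepted = 1.0, 1.0, 0
    for k in range(1, iters + 1):
        cand = x + rng.uniform(-step, step)   # neighbourhood move
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
            accepted += 1
            if fc < fbest:
                best, fbest = cand, fc
        if k % 100 == 0:                      # adapt the neighbourhood size
            step *= 1.5 if accepted / 100 > 0.5 else 0.7
            accepted = 0
        temp *= 0.999                         # geometric cooling
    return best, fbest

best, fbest = adaptive_sa(lambda x: (x - 3.0) ** 2, x0=-10.0)
```

The adaptation rule removes one hand-tuned parameter (the fixed neighbourhood size), which is the spirit of the adaptive method the thesis develops for the mining model.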

Relevance:

30.00%

Publisher:

Abstract:

Mobile sensor networks have unique advantages over wireless sensor networks: mobility enables mobile sensors to flexibly reconfigure themselves to meet sensing requirements. This dissertation presents an adaptive sampling method for mobile sensor networks. In view of sensing resource constraints, computing abilities, and onboard energy limitations, the adaptive sampling method follows a down-sampling scheme, which reduces the total number of measurements and lowers the sampling cost. Compressive sensing is a recently developed down-sampling method that uses a small number of randomly distributed measurements for signal reconstruction. However, original signals cannot be reconstructed from such condensed measurements in the manner addressed by Shannon sampling theory. Measurements have to be processed in a sparse domain, and convex optimisation methods applied to reconstruct the original signals; the restricted isometry property guarantees that signals can be recovered with little information loss. While compressive sensing can effectively lower the sampling cost, signal reconstruction remains a great research challenge. Compressive sensing collects random measurements, whose information content cannot be determined a priori. If each measurement is instead optimised to be the most informative one, reconstruction can perform much better. Based on this consideration, this dissertation focuses on an adaptive sampling approach that finds the most informative measurements in unknown environments and reconstructs the original signals. With mobile sensors, measurements are collected sequentially, giving the chance to optimise each of them individually. When a mobile sensor is about to collect a new measurement from the surrounding environment, existing information is shared among the networked sensors so that each sensor has a global view of the entire environment.
The shared information is analysed in the Haar wavelet domain, in which most natural signals appear sparse, to infer a model of the environment. The most informative measurements are then determined by optimising the model parameters. As a result, all measurements collected by the mobile sensor network are the most informative ones given the existing information, and a perfect reconstruction would be expected. To present the adaptive sampling method, a series of research issues is addressed, including measurement evaluation and collection, mobile network establishment, data fusion, sensor motion, and signal reconstruction. A two-dimensional scalar field is reconstructed using the proposed method; both single mobile sensors and mobile sensor networks are deployed in the environment, and their reconstruction performance is compared. In addition, a particular mobile sensor, a quadrotor UAV, is developed so that the adaptive sampling method can be used in three-dimensional scenarios.
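The "most informative measurement" idea can be caricatured in 1-D: always sample where the current model is most uncertain, here simplified to the widest gap between samples, with linear interpolation standing in for the wavelet model (a toy sketch, not the dissertation's method):

```python
def most_informative_index(known):
    """Next sample position: midpoint of the widest unsampled gap,
    standing in for 'where the model is most uncertain'."""
    xs = sorted(known)
    return max((b - a, (a + b) // 2) for a, b in zip(xs, xs[1:]))[1]

def adaptive_sample(field, budget):
    """Sequentially measure `field` (a list) at the most informative
    positions, then reconstruct by linear interpolation."""
    known = {0: field[0], len(field) - 1: field[-1]}   # endpoints first
    for _ in range(budget):
        i = most_informative_index(known)
        known[i] = field[i]                            # take the measurement
    xs = sorted(known)
    recon = []
    for i in range(len(field)):
        if i in known:
            recon.append(known[i])
            continue
        lo = max(x for x in xs if x < i)
        hi = min(x for x in xs if x > i)
        w = (i - lo) / (hi - lo)
        recon.append((1 - w) * known[lo] + w * known[hi])
    return recon

recon = adaptive_sample([0.5 * i for i in range(33)], budget=5)
```

Each new measurement is chosen using everything measured so far, which is the sequential-optimisation property that distinguishes this approach from fixed random sampling.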

Relevance:

30.00%

Publisher:

Abstract:

A comprehensive user model, built by monitoring a user's current use of applications, can be an excellent starting point for building adaptive user-centred applications. The BaranC framework monitors all user interaction with a digital device (e.g. a smartphone) and collects all available context data (such as from sensors in the device itself, in a smart watch, or in smart appliances) in order to build a full model of the user's application behaviour. The model built from the collected data, called the UDI (User Digital Imprint), is further augmented by analysis services, for example a service that produces activity profiles from smartphone sensor data. The enhanced UDI model can then be the basis for building an appropriate adaptive application that is user-centred, since it is based on an individual user model. As BaranC supports continuous user monitoring, an application can adapt dynamically in real time to the current context (e.g. time, location or activity). Furthermore, since BaranC continuously augments the user model with newly monitored data, the user model changes over time, and the adaptive application can adapt gradually to changing user behaviour patterns. BaranC has been implemented as a service-oriented framework in which the collection of data for the UDI, and all sharing of UDI data, are kept strictly under the user's control. In addition, being service-oriented allows its monitoring and analysis services to be easily used (with the user's permission) by third parties to provide third-party adaptive assistant services. An example third-party service demonstrator, built on top of BaranC, proactively assists a user by dynamically predicting, based on the current context, which apps and contacts the user is likely to need. BaranC introduces an innovative user-controlled, unified service model for monitoring and using personal digital activity data in order to provide adaptive user-centred applications. This aims to improve on the current situation, where the diversity of adaptive applications leads to a proliferation of applications monitoring and using personal data, with a consequent lack of clarity, dispersal of data, and diminution of user control.

Relevance:

30.00%

Publisher:

Abstract:

Comparative and evolutionary developmental analyses seek to discover the similarities and differences between humans and non-human species that illuminate both the evolutionary foundations of our nature, which we share with other animals, and the distinctive characteristics that make human development unique. As our closest animal relatives, with whom we most recently shared common ancestry, non-human primates have been particularly important in this endeavour. Studies focused on social learning, traditions, and culture have discovered much about the 'how' of social learning, concerned with key underlying processes such as imitation and emulation. One core discovery is that the adaptive adjustment of social-learning options to different contexts is not unique to human infants; multiple new strands of research have therefore begun to focus on more subtle questions about when, from whom, and why such learning occurs. Here we review illustrative studies on human infants and young children and on non-human primates to identify the similarities shared more broadly across the primate order, and the apparent specialisms that distinguish human development. The adaptive biases in social learning discussed include those modulated by task comprehension, experience, conformity to majorities, and the age, skill, proficiency and familiarity of potential alternative cultural models.

Relevance:

30.00%

Publisher:

Abstract:

Near-infrared polarimetry is a powerful tool for studying the central sources at the centre of the Milky Way. The aim of this thesis is to analyse the polarized emission in the central few light years of the Galactic Centre region, in particular the non-thermal polarized emission of Sagittarius A* (Sgr A*), the electromagnetic manifestation of the super-massive black hole, and the polarized emission of an infrared-excess source referred to in the literature as DSO/G2, which is in orbit about Sgr A*. In this thesis I focus on Galactic Centre observations at a wavelength of 2.2 μm (Ks-band) in polarimetry mode during several epochs from 2004 to 2012. The near-infrared polarized observations were carried out using the adaptive optics instrument NAOS/CONICA and a Wollaston prism at the Very Large Telescope of ESO (European Southern Observatory). Linear polarization at 2.2 μm, its flux statistics and time variation, can be used to constrain the physical conditions of the accretion process onto the central super-massive black hole. I present a statistical analysis of the polarized Ks-band emission from Sgr A* and investigate the most comprehensive sample of near-infrared polarimetric light curves of this source to date. I find several polarized flux excursions over the years and obtain an exponent of about 4 for the power law fitted to the polarized flux density distribution above 5 mJy; this distribution is therefore closely linked to the single-state power-law distribution of the total Ks-band flux densities we reported earlier. I find polarization degrees of the order of 20% ± 10% and a preferred polarization angle of 13° ± 15°.
Based on simulations of polarimetric measurements, given the observed flux density and its uncertainty in orthogonal polarimetry channels, I find that the uncertainties of the polarization parameters below a total flux density of about 2 mJy are probably dominated by observational uncertainties, while at higher flux densities there are intrinsic variations of polarization degree and angle within rather well-constrained ranges. Since the emission is most likely optically thin synchrotron radiation, the preferred polarization angle very likely reflects the intrinsic orientation of the Sgr A* system, i.e. an accretion disk or a jet/wind scenario coupled to the super-massive black hole. Our polarization statistics show that Sgr A* must be a stable system, both in its geometry and in its accretion process. I also investigate the infrared-excess source called G2 or Dusty S-cluster Object (DSO), moving on a highly eccentric orbit around the Galaxy's central black hole, Sgr A*. I use near-infrared polarimetric imaging data for the first time to determine the nature and properties of the DSO, and obtain an improved Ks-band identification of this source in median polarimetry images of different observing years. The source emerges from the stellar confusion in the 2008 data and shows no flux density variability in our data set. Furthermore, I measure the polarization degree and angle of this source and conclude, based on simulations of the polarization parameters, that it is an intrinsically polarized source with a polarization angle that varies as it approaches the position of Sgr A*. I use the DSO polarimetry measurements to assess its possible properties.
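The polarization degree and angle derived from Wollaston-prism channel pairs follow from the normalised Stokes parameters; a minimal sketch, assuming ideal noise-free channels:

```python
import math

def polarization(i0, i90, i45, i135):
    """Linear polarization degree and angle from the four Wollaston
    prism channel fluxes, via the normalised Stokes parameters q and u."""
    q = (i0 - i90) / (i0 + i90)
    u = (i45 - i135) / (i45 + i135)
    degree = math.hypot(q, u)                      # polarized flux fraction
    angle = 0.5 * math.degrees(math.atan2(u, q))   # position angle, degrees
    return degree, angle

# synthetic channel fluxes for a source with 20% polarization at 13 deg
p_true, theta = 0.20, math.radians(13.0)
q = p_true * math.cos(2 * theta)
u = p_true * math.sin(2 * theta)
deg, ang = polarization((1 + q) / 2, (1 - q) / 2, (1 + u) / 2, (1 - u) / 2)
```

With real data, photometric noise in each channel propagates into `q` and `u`, which is why the thesis simulates the measurement process to separate observational from intrinsic variations.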