813 results for feature based modelling
Abstract:
Results from two studies on longitudinal friendship networks are presented, exploring the impact of a gratitude intervention on positive and negative affect dynamics in a social network. The gratitude intervention had previously been shown to increase positive affect and decrease negative affect in individuals, but dynamic group effects had not been considered. In the first study the intervention was administered to the whole network. In the second study two social networks are considered, and in each only a subset of individuals, initially low or high in negative affect respectively, received the intervention as 'agents of change'. Data were analyzed using stochastic actor-based modelling techniques to identify resulting network changes, the impact on positive and negative affect, and potential contagion of mood within the group. The first study found a group-level increase in positive affect and a decrease in negative affect. Homophily was detected with regard to positive and negative affect, but no evidence of contagion was found. The network itself became more volatile, along with a fall in the rate of change of negative affect. Centrality measures indicated that the best broadcasters were the individuals with the lowest negative affect levels at the beginning of the study. In the second study, the positive and negative affect levels for the whole group depended on the initial levels of negative affect of the intervention recipients. There was evidence of positive affect contagion in the group where intervention recipients had a low initial level of negative affect, and of negative affect contagion in the group where recipients had an initially high level of negative affect.
Abstract:
Atmospheric transport and suspension of dust frequently brings electrification, which may be substantial. Electric fields of 10 kV m-1 to 100 kV m-1 have been observed at the surface beneath suspended dust in the terrestrial atmosphere, and some electrification has been observed to persist in dust at levels up to 5 km, as well as in volcanic plumes. The interaction between individual particles which causes the electrification is incompletely understood, and multiple processes are thought to be acting. A variation in particle charge with particle size, together with the effect of gravitational separation, explains, to some extent, the charge structures observed in terrestrial dust storms. More extensive flow-based modelling demonstrates that bulk electric fields in excess of 10 kV m-1 can be obtained rapidly (in less than 10 s) from rotating dust systems (dust devils) and that terrestrial breakdown fields can be obtained. Modelled profiles of electrical conductivity in the Martian atmosphere suggest the possibility of dust electrification, and dust devils have been suggested as a mechanism of charge separation able to maintain current flow between one region of the atmosphere and another, through a global circuit. Fundamental new understanding of Martian atmospheric electricity will result from the ExoMars mission, which carries the DREAMS (Dust characterization, Risk Assessment, and Environment Analyser on the Martian Surface)-MicroARES (Atmospheric Radiation and Electricity Sensor) instrumentation to Mars in 2016 for the first in situ measurements.
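The observed field strengths can be linked to the net charge the dust must carry. As a rough, hypothetical illustration (my own back-of-envelope sketch, not from the paper), treating a charged dust layer as an idealized infinite sheet with field E = σ/(2ε₀) gives the surface charge density implied by the observed 10-100 kV m-1 range:

```python
# Back-of-envelope sketch (illustrative only, not from the paper):
# surface charge density an idealized infinite charged sheet would need
# to produce a given field, using E = sigma / (2 * eps0).
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def sheet_charge_density(e_field):
    """Charge density (C/m^2) of an infinite sheet producing field e_field (V/m)."""
    return 2.0 * EPS0 * e_field

sigma_low = sheet_charge_density(10e3)    # field of 10 kV/m
sigma_high = sheet_charge_density(100e3)  # field of 100 kV/m
```

Even the upper end of the observed range corresponds to only a couple of microcoulombs per square metre, illustrating how little net charge separation is needed to produce substantial fields.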
Abstract:
The environment where galaxies are found heavily influences their evolution. Close groupings, like the ones in the cores of galaxy clusters or compact groups, evolve in ways far more dramatic than their isolated counterparts. We have conducted a multi-wavelength study of Hickson Compact Group 7 (HCG 7), consisting of four giant galaxies: three spirals and one lenticular. We use Hubble Space Telescope (HST) imaging to identify and characterize the young and old star cluster populations. We find young massive clusters (YMCs) mostly in the three spirals, while the lenticular features a large, unimodal population of globular clusters (GCs) but no detectable clusters with ages less than a few Gyr. The spatial and approximate age distributions of the ~300 YMCs and ~150 GCs thus hint at a regular star formation history in the group over a Hubble time. While at first glance the HST data show the galaxies as undisturbed, our deep ground-based, wide-field imaging that extends the HST coverage reveals faint signatures of stellar material in the intragroup medium (IGM). We do not, however, detect the IGM in H I or Chandra X-ray observations, signatures that would be expected to arise from major mergers. Despite this fact, we find that the H I gas content of the individual galaxies and of the group as a whole is a third of the expected abundance. The appearance of quiescence is challenged by spectroscopy that reveals an intense ionization continuum in one galaxy nucleus, and post-burst characteristics in another. Our spectroscopic survey of dwarf galaxy members yields a single dwarf elliptical galaxy in an apparent stellar tidal feature. Based on all this information, we suggest an evolutionary scenario for HCG 7, whereby the galaxies convert most of their available gas into stars without the influence of major mergers and ultimately result in a dry merger. As the conditions governing compact groups are reminiscent of galaxies at intermediate redshift, we propose that HCGs are appropriate for studying galaxy evolution at z ~ 1-2.
Abstract:
Small and medium-sized enterprises (SMEs) around the world are exposed to flood risk, and many of the UK's 4.5 million SMEs are at risk. As SMEs represent almost half of total business turnover in the UK, their protection is a vital part of the drive for greater climate change resilience. However, few have measures in place to ensure the continuity of their activities during a flood and its aftermath. The SESAME project aims to develop tools that encourage businesses to discover ways of becoming more resilient to floods and to appreciate how much better off they will be once they have adapted to the ongoing risk. By taking some of the mystery out of flooding and flood risk, it aims to make it susceptible to the same business acumen that enables the UK's SMEs to deal with the many other challenges they face. In this paper we report on the different aspects of the research in the project: understanding behaviour, changing behaviour, modelling impacts, and economic impacts. Through the above, the project will advise government, local authorities and other public bodies on how to improve their responses to floods, and will enable them to recommend ways to improve the guidelines provided to SMEs in flood risk areas.
Abstract:
The Agent-Based Modelling approach is used to tackle complex problems, in which results are obtained from the analysis and construction of components and of the interactions between them. The results observed in the simulations are aggregates of the combination of actions and interferences that occur at the microscopic level of the model, leading to a simulation from the micro to the macro level. Financial markets are ideal systems for these models because they meet all of their requirements. This work implements an Agent-Based Financial Market Model composed of several agents that interact with each other through a Trading Engine that operates with two assets and relies on market makers to provide market liquidity, as observed in real markets. To operate this model, two types of agents were developed, which simultaneously manage portfolios holding both assets. The first type uses the Markowitz model, while the second uses spread-analysis techniques between assets. Another contribution of this model is the analysis of using an objective function over asset returns instead of analyses over prices.
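The two decision rules attributed to the agents above can be sketched generically. This is a minimal, hypothetical illustration of the ideas (mean-variance weights from the Markowitz model, and a z-score on the log-price spread between the two assets), not the thesis implementation; all names, parameters, and the simulated price process are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def markowitz_weights(returns, risk_aversion=2.0):
    """Mean-variance portfolio: w proportional to inv(Cov) @ mu, normalized."""
    mu = returns.mean(axis=0)
    cov = np.cov(returns.T)
    raw = np.linalg.solve(cov, mu) / risk_aversion
    return raw / raw.sum()

def spread_signal(price_a, price_b, window=20):
    """Trade when the log-price spread deviates from its recent mean."""
    spread = np.log(price_a[-window:]) - np.log(price_b[-window:])
    z = (spread[-1] - spread.mean()) / (spread.std() + 1e-12)
    return -z  # positive -> buy asset A / sell asset B

# Two correlated price paths standing in for the quotes of the two assets
steps = 200
shocks = rng.multivariate_normal([0.0004, 0.0002],
                                 [[1e-4, 4e-5], [4e-5, 9e-5]], size=steps)
prices = 100 * np.exp(np.cumsum(shocks, axis=0))

rets = np.diff(np.log(prices), axis=0)   # log-returns, per the objective-function idea
w = markowitz_weights(rets)              # Markowitz agent's target weights
sig = spread_signal(prices[:, 0], prices[:, 1])  # spread agent's trade signal
```

Note that both rules operate on returns rather than raw prices, in line with the abstract's point about using an objective function over asset returns.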
Abstract:
Visual odometry is the process that estimates camera position and orientation based solely on images and on features (projections of visual landmarks present in the scene) extracted from them. With the increasing advance of computer vision algorithms and computer processing power, the subarea known as Structure from Motion (SFM) started to supply mathematical tools for composing localization systems for robotics and Augmented Reality applications, in contrast with its initial purpose of being used in inherently offline solutions aimed at 3D reconstruction and image-based modelling. Accordingly, this work proposes a pipeline to obtain relative position featuring a previously calibrated camera as a positional sensor, based entirely on models and algorithms from SFM. Techniques usually applied in camera localization systems, such as Kalman filters and particle filters, are not used, making additional information such as probabilistic models for camera state transition unnecessary. Experiments assessing both the 3D reconstruction quality and the camera position estimated by the system were performed, in which image sequences captured in realistic scenarios were processed and compared to localization data gathered from a mobile robotic platform.
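A core SFM building block in any such pipeline is the triangulation of a landmark from its projections in calibrated views. The following is a minimal sketch (illustrative, not the thesis code) of linear DLT triangulation; the intrinsic matrix, camera poses, and test point are all invented:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one point from two 3x4 projection matrices
    and the point's pixel coordinates in each view."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)        # null-space of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# Assumed calibration: identity pose and a 1 m baseline along x
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 4.0])    # landmark 4 m in front of the cameras
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)    # should recover X_true (noise-free case)
```

With noisy feature coordinates the same linear system is solved in a least-squares sense, which is why feature extraction quality directly affects reconstruction accuracy.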
Abstract:
Attention is a phenomenon that allows the selection of relevant stimuli, prioritizing them and improving their processing. This modulation could occur at any step of the process, at an early or a late stage, that is, at a perceptual or a motor stage. However, even with a rich literature on attention in time, there are still divergences about how this modulation occurs. One hypothesis holds that temporal attention is only able to prepare the motor system to respond; perceptual modulation would occur only when temporal expectation is combined with the expectation of another property that has a neuronal receptive field. In that situation, pre-activation of the receptive field explains how temporal attention could modulate perceptual processing. The crucial objective was to test this hypothesis, that is, to verify whether the expectation of a stimulus feature (Gabor orientation) together with its temporal expectation affects perceptual quality. Two experiments were conducted: the first tested voluntary temporal expectation, and the second tested automatic temporal expectation. Our data show that both feature-based attention and temporal attention improve perceptual processing, and that temporal expectation effects occur only in a competitive environment. Verification of the hypothesis was not conclusive because of methodological problems.
Abstract:
The regional ocean off southeast Brazil (20°S-28°S) is known as a current-eddy-upwelling region. The proximity of the Brazil Current to the coast in the Cape Sao Tome vicinity, as well as of its quasi-stationary unstable meanders, suggests the possibility of background eddy-induced upwelling. Such a phenomenon can intensify the prevalent coastal upwelling due to wind and topographic effects. In this paper, with the help of a numerical simulation, we provide evidence that eddy-induced upwelling in the absence of wind is possible in this region. The simulation was conducted with a regional configuration of the 3-D Princeton Ocean Model initialized by a feature-based implementation of the Brazil Current and the Cape Frio eddy, blended with climatology.
Abstract:
Total ankle arthroplasty (TAA) is still not as satisfactory as total hip and total knee arthroplasty. For TAA to be considered a valuable alternative to ankle arthrodesis, an effective range of ankle mobility must be recovered. The disappointing clinical results of the current generation of TAA are mostly related to poor understanding of the structures guiding ankle joint mobility. A new design (BOX Ankle) has been developed, uniquely able to restore physiologic ankle mobility and a natural relationship between the implanted components and the retained ligaments. For the first time, the shapes of the tibial and talar components in the sagittal plane were designed to be compatible with the demonstrated isometric rotation of the ligaments. This results in a unique motion at the replaced ankle in which natural sliding as well as rolling occurs, while full conformity is maintained between the three components throughout the flexion arc. According to prior research, the design features a spherical convex tibial component, a talar component with a sagittal-plane radius of curvature longer than that of the natural talus, and a fully conforming meniscal component. After computer-based modelling and preliminary observations in several trial implantations in specimens, 126 patients were implanted in the period July 2003 - December 2008; the 75 patients with at least 6 months of follow-up are reported here. Mean age was 62.6 years (range 22-80) and mean follow-up 20.2 months. The AOFAS clinical score system was used to assess patient outcome. Radiographs at maximal dorsiflexion and maximal plantar flexion confirmed that the meniscal-bearing component moves anteriorly during dorsiflexion and posteriorly during plantarflexion. Frontal and lateral radiographs show good alignment of the components and no signs of radiolucency or loosening.
The mean AOFAS score improved from 41 pre-operatively to 74.6 at the 6-month follow-up, with further improvement at subsequent follow-ups. These early results reveal satisfactory clinical scores, with good recovery of range of motion and reduction of pain. Radiographic assessment reveals good osteointegration. All these preliminary results confirm the biomechanical studies and the validity of this novel ligament-compatible prosthesis design. It will be important to re-evaluate these patients at longer follow-up.
Abstract:
3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and avoiding the soft tissue artefact that limits the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications, but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, has slowed down the translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, the fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated with in-silico preliminary studies as: (a) geometric distortion and calibration errors, (b) 2D image and 3D model resolutions, (c) incorrect contour extraction, (d) bone model symmetries, (e) optimization algorithm limitations, (f) user errors. The effect of each criticality was quantified and verified with an in-vivo preliminary study on the elbow joint. The dominant source of error was identified as the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process.
To solve this problem, two different approaches were followed: to increase the optimal-pose convergence basin, the local approach used sequential alignments of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated with a series of in-silico studies and validated in-vitro with a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for the out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as for methodological research studies; the mono-planar analysis may be enough for clinical applications where analysis time and cost are an issue. A further reduction of the user interaction was obtained for prosthetic joint kinematics: a mixed region-growing and level-set segmentation method was proposed that halved the analysis time, delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semiautomatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study on foot kinematics. Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
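The memetic idea invoked above, a population-based global search hybridized with local refinement of individuals, can be sketched generically. This is a toy illustration on a standard multimodal test function (Rastrigin) with invented parameters, not the unsupervised algorithm developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(x):
    """Rastrigin function: many local minima, global minimum 0 at x = 0."""
    return float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10))

def local_refine(x, step=0.1, iters=50):
    """Coordinate-wise hill climbing with a shrinking step (the 'meme')."""
    best, fbest = x.copy(), cost(x)
    for _ in range(iters):
        for i in range(len(x)):
            for d in (-step, step):
                trial = best.copy()
                trial[i] += d
                if cost(trial) < fbest:
                    best, fbest = trial, cost(trial)
        step *= 0.7
    return best, fbest

def memetic_minimize(dim=2, pop=20, gens=30):
    """Evolutionary global search with local refinement of the elite."""
    P = rng.uniform(-5, 5, size=(pop, dim))
    for _ in range(gens):
        fitness = np.array([cost(x) for x in P])
        parents = P[np.argsort(fitness)[:pop // 2]]      # select the best half
        children = parents + rng.normal(0, 0.5, size=parents.shape)
        P = np.vstack([parents, children])
        P[0], _ = local_refine(P[0])                     # refine the elite
    # final local refinement of every survivor; keep the best
    return min((local_refine(x) for x in P), key=lambda t: t[1])

x_best, f_best = memetic_minimize()   # typically approaches the global minimum
```

The point of the hybrid, as in the thesis, is that the population explores beyond the narrow convergence basin that defeats purely local optimizers, while the local step supplies the final accuracy.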
Abstract:
An algorithm for the real-time registration of a retinal video sequence captured with a scanning digital ophthalmoscope (SDO) to a retinal composite image is presented. This method is designed for a computer-assisted retinal laser photocoagulation system to compensate for retinal motion and hence enhance the accuracy, speed, and patient safety of retinal laser treatments. The procedure combines intensity- and feature-based registration techniques. For the registration of an individual frame, the translational frame-to-frame motion between the preceding and current frame is detected by normalized cross correlation. Next, vessel points on the current video frame are identified, and an initial transformation estimate is constructed from the calculated translation vector and the quadratic registration matrix of the previous frame. The vessel points are then iteratively matched to the segmented vessel centerline of the composite image to refine the initial transformation and register the video frame to the composite image. Criteria for image quality and algorithm convergence are introduced, which govern the exclusion of individual frames from the registration process and signal a loss of tracking if necessary. The algorithm was successfully applied to ten different video sequences recorded from patients. It revealed an average accuracy of 2.47 ± 2.0 pixels (∼23.2 ± 18.8 μm) for 2764 evaluated video frames and demonstrated that it meets the clinical requirements.
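The first step described above, recovering the translational frame-to-frame motion by normalized cross correlation, can be sketched in a few lines. This is a pure-NumPy toy on synthetic images with an exhaustive integer-shift search, not the published implementation:

```python
import numpy as np

def ncc_shift(ref, cur, max_shift=5):
    """Integer (dy, dx) shift of `cur` relative to `ref` maximizing the
    normalized cross correlation over the central region."""
    best, best_score = (0, 0), -np.inf
    h, w = ref.shape
    m = max_shift
    a = ref[m:h - m, m:w - m]                 # fixed central patch of ref
    a0 = a - a.mean()
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            b = cur[m + dy:h - m + dy, m + dx:w - m + dx]
            b0 = b - b.mean()
            score = (a0 * b0).sum() / (np.linalg.norm(a0) * np.linalg.norm(b0))
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, shift=(2, -3), axis=(0, 1))  # known synthetic motion

dy, dx = ncc_shift(frame, shifted)   # recovers the (2, -3) displacement
```

In practice FFT-based correlation replaces the exhaustive search for speed, and the recovered translation only seeds the subsequent vessel-point matching that refines the full quadratic transformation.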
Abstract:
The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter-dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org) and has been successfully applied to different case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM and that avoids over-channelization, and so produces more realistic extents. The choice of datasets and algorithms is open to the user, which makes the model adaptable to various applications and levels of dataset availability. Amongst the possible datasets, the DEM is the only one that is really needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results.
We consider a 10 m DEM resolution a good compromise between processing time and quality of results. However, valuable results have still been obtained on the basis of lower-quality DEMs with 25 m resolution.
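Spreading algorithms of the family Flow-R builds on distribute flow from a cell to its downslope neighbors in proportion to a power of the slope. The following toy sketch shows a Holmgren-style weighting for a single DEM cell; it is illustrative only (Flow-R's improved version differs, notably in its reduced sensitivity to small DEM variations), and the elevations and exponent are invented:

```python
import numpy as np

def holmgren_weights(z_center, z_neighbors, distances, x=4.0):
    """Fraction of flow sent to each of the 8 neighbors of a DEM cell.

    z_neighbors / distances: length-8 arrays, with distance 1 for cardinal
    neighbors and sqrt(2) for diagonals (in cell units). The exponent x
    controls divergence: x = 1 spreads widely, large x converges toward
    single-direction (D8-like) flow.
    """
    tan_beta = (z_center - np.asarray(z_neighbors)) / np.asarray(distances)
    tan_beta = np.clip(tan_beta, 0.0, None)   # only downslope neighbors receive flow
    w = tan_beta ** x
    total = w.sum()
    return w / total if total > 0 else w      # flat or pit cell: no outflow

# Alternating cardinal/diagonal neighbor layout, center elevation 10.0
dists = np.array([1, np.sqrt(2)] * 4)
z_nb = np.array([9.0, 9.5, 10.0, 10.5, 9.2, 11.0, 9.8, 10.0])
w = holmgren_weights(10.0, z_nb, dists, x=4.0)
```

Iterating this weighting cell by cell from the delineated source areas, and stopping propagation with a frictional law, yields the susceptibility extent; the choice of x trades off channelized versus spread-out runout.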
Abstract:
Individuals differ in their preference for processing information on the basis of taxonomic, feature-based similarity, or thematic, relation-based similarity. These differences, which have been investigated in a recently emerging research stream in cognitive psychology, affect innovative behavior and thus constitute an important antecedent of individual performance in research and development (R&D) that has been overlooked so far in the literature on innovation management. To fill this research gap, survey and test data from the employees of a multinational information technology services firm are used to examine the relationship between thematic thinking and R&D professionals' individual performance. A moderated mediation model is applied to investigate the proposed relationships of thematic thinking and individual-level performance indicators. Results show a positive relationship between thematic thinking and innovativeness, as well as individual job performance. While the results do not support the postulated moderation of the innovativeness–job performance relationship by employees' political skill, they show that the relationship between thematic thinking and job performance is fully mediated by R&D professionals' innovativeness. The present study is thus the first to reveal a positive relationship between thematic thinking and innovative performance.
Abstract:
In 2008, the Oceanography Center at the University of Cyprus acquired two underwater gliders in the framework of a nationally-managed infrastructure upgrade program. The gliders were purchased from the Seaglider Fabrication Center at the University of Washington. Both gliders are rated to 1000 m and carry a typical sensor payload: a non-pumped conductivity-temperature-depth (CTD) sensor, a dissolved oxygen sensor, and an optical triplet measuring optical backscatter at 400 nm and 700 nm and chlorophyll-a fluorescence. Since March 2009, the gliders have been used in a long-term observing program of the Cypriot EEZ, and by September 2015 had covered more than 15300 km over ground and 3500 dive cycles in 940 glider days. Butterfly patterns have been flown in two configurations, on either the western or the eastern side of the EEZ south of Cyprus. The glider endurance lines criss-cross the region in order to more accurately locate and investigate the mesoscale structures south of Cyprus, in particular the Cyprus eddy, which is often the dominant feature. Based on the near-real-time observations, the glider mission was sometimes altered in order to more fully sample the Cyprus eddy, or to locate its center or extent. A summary of the raw and processed data collected and the quality control procedures is presented, so that future users can take advantage of this unique data set.
Abstract:
Most human-designed environments present specific geometrical characteristics. In them it is easy to find polygonal, rectangular and circular shapes, with a series of typical relations between different elements of the environment. Introducing this kind of knowledge in the mapping process of mobile robots can notably improve the quality and accuracy of the resulting maps, and can also make them more suitable for higher-level reasoning applications. When mapping is formulated in a Bayesian probabilistic framework, a complete specification of the problem requires considering a prior for the environment. The prior over the structure of the environment can be applied in several ways; this dissertation presents two different frameworks, one using a feature-based approach and another employing a dense representation close to the measurement space. A feature-based approach implicitly imposes a prior for the environment. In this sense, feature-based graph SLAM was a first step towards a new mapping solution for structured scenarios. In the first framework, the prior is inferred by the system from a wide collection of feature-based priors, following an Expectation-Maximization approach to obtain the most probable structure and the most probable map.
The representation of the structure of the environment is based on a hierarchical model with different levels of abstraction for the geometrical elements describing it. Various experiments were conducted to show the versatility and the good performance of the proposed method. In the second framework, different priors can be defined by the user as sets of local constraints and energies for consecutive points in a range scan from a given environment. The set of constraints applied to each group of points depends on the topology, which is inferred by the system. This way, flexible and generic priors can be incorporated very easily. Several tests were carried out to demonstrate the flexibility and the good results of the proposed approach.
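The graph-SLAM backbone mentioned above reduces mapping to least-squares optimization over a graph of relative constraints. A deliberately tiny 1-D sketch (invented numbers; no geometric features, priors, or EM) shows the mechanics: odometry edges accumulate drift and a strongly weighted loop-closure edge corrects it:

```python
import numpy as np

def solve_pose_graph(n_poses, edges, anchor=0.0):
    """Solve a 1-D pose graph by accumulating the normal equations.

    edges: list of (i, j, z, w) constraints meaning x_j - x_i should equal
    the measurement z, with information (weight) w. Pose 0 is softly
    anchored to fix the gauge freedom.
    """
    H = np.zeros((n_poses, n_poses))
    b = np.zeros(n_poses)
    H[0, 0] += 1e6                 # anchor pose 0 at `anchor`
    b[0] += 1e6 * anchor
    for i, j, z, w in edges:
        # cost term w * (z - (x_j - x_i))**2 contributes:
        H[i, i] += w; H[j, j] += w
        H[i, j] -= w; H[j, i] -= w
        b[i] -= w * z
        b[j] += w * z
    return np.linalg.solve(H, b)

edges = [
    (0, 1, 1.1, 1.0),    # odometry steps, each slightly overestimated
    (1, 2, 1.1, 1.0),
    (2, 3, 1.1, 1.0),
    (0, 3, 3.0, 10.0),   # loop closure: confident that x3 - x0 = 3.0
]
x = solve_pose_graph(4, edges)   # drift is redistributed along the chain
```

The same machinery, with poses and geometric features as multi-dimensional variables, underlies the feature-based graph SLAM the dissertation builds on; the structural priors enter as additional edges in the graph.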