825 results for Tracking and trailing.
Abstract:
This paper describes a simple method for internal camera calibration for computer vision. The method is based on tracking image features through a sequence of images while the camera undergoes pure rotation. The locations of the features relative to the camera or to each other need not be known, so the method can be used both for laboratory calibration and for self-calibration in autonomous robots working in unstructured environments. A second method of calibration is also presented. This method uses simple geometric objects such as spheres and straight lines to determine the camera parameters. Calibration is performed using both methods and the results are compared.
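To illustrate the principle behind rotation-based calibration: under pure rotation about the optical centre, a feature at image coordinate x = f·tan(φ) moves to x' = f·tan(φ + θ), so the focal length can be recovered from tracked positions alone. The sketch below solves this single-feature, known-angle case by bisection; the function name and the simplified setup are assumptions for illustration, not the paper's algorithm, which tracks many features and recovers the full internal parameters.

```python
import math

def recover_focal_length(x_before, x_after, theta, f_lo=100.0, f_hi=5000.0):
    """Recover the focal length f (in pixels) from one feature tracked
    across a pure rotation by angle theta (radians) about the camera's
    y-axis. Under pure rotation, x = f*tan(phi) before and
    x' = f*tan(phi + theta) after, so f is the root of
        g(f) = f * tan(atan(x/f) + theta) - x'.
    Solved by bisection; assumes g changes sign on [f_lo, f_hi]."""
    def g(f):
        return f * math.tan(math.atan(x_before / f) + theta) - x_after
    lo, hi = f_lo, f_hi
    if g(lo) * g(hi) > 0:
        raise ValueError("no sign change on the bracket [f_lo, f_hi]")
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For example, a feature tracked from x = 800·tan(0.10) to x' = 800·tan(0.15) under a 0.05 rad rotation yields f close to 800.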
Abstract:
This paper describes the successful implementation of a prototype software application that independently and proactively detects whether a mobile phone is lost or misused. When the mobile phone is detected as being lost or misused, the application takes steps to mitigate the impact of loss and to gather evidence. The goal is to aid in the recovery of the mobile phone. The prototype works regardless of the cellular infrastructure the mobile phone is operating in and makes minimum demands on the owner of the mobile phone. The prototype was developed on Nokia 6600 mobile phones that run Symbian Operating System 7.0s. Development was done using Nokia’s Series 60 Developer’s Platform 2.0.
Abstract:
The discontinuities in the solutions of systems of conservation laws are widely considered one of the difficulties in numerical simulation. A numerical method is proposed for solving these partial differential equations with discontinuities in the solution. The method is able to track these sharp discontinuities or interfaces while still fully maintaining the conservation property. The motion of the front is obtained by solving a Riemann problem based on the state values on both of its sides, which are reconstructed using a weighted essentially non-oscillatory (WENO) scheme. The propagation of the front is coupled with the evaluation of "dynamic" numerical fluxes. Some numerical tests in 1D and preliminary results in 2D are presented.
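As a minimal sketch of the front-propagation step, consider a single discontinuity of the inviscid Burgers equation: the Riemann problem at the front yields the shock speed through the Rankine-Hugoniot condition. The constant-state setup and the function name below are illustrative assumptions; the paper's method reconstructs the side states with WENO and couples the front motion to the dynamic fluxes.

```python
def track_burgers_shock(u_left, u_right, x0, t_end, dt=1e-3):
    """Advance a single discontinuity of the inviscid Burgers equation
    u_t + (u^2/2)_x = 0. The Riemann problem at the front gives the
    shock speed via the Rankine-Hugoniot condition:
        s = (f(u_L) - f(u_R)) / (u_L - u_R) = (u_L + u_R) / 2.
    With constant left/right states the front moves at constant speed."""
    flux = lambda u: 0.5 * u * u
    s = (flux(u_left) - flux(u_right)) / (u_left - u_right)
    x, t = x0, 0.0
    while t < t_end - 1e-12:
        x += s * dt      # propagate the tracked front position
        t += dt
    return x
```

With u_L = 2 and u_R = 0 the shock speed is 1, so a front starting at x = 0 reaches x = 1 at t = 1.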
Abstract:
In this paper a precorrected-FFT Fast Multipole Tree (pFFT-FMT) method for solving the potential flow around arbitrary three-dimensional bodies is presented. The method takes advantage of the efficiency of the pFFT and FMT algorithms to facilitate more demanding computations such as automatic wake generation and hands-off steady and unsteady aerodynamic simulations. The velocity potential on the body surfaces and in the domain is determined using a pFFT Boundary Element Method (BEM) approach based on the Green's Theorem Boundary Integral Equation. The vorticity trailing all lifting surfaces in the domain is represented using a Fast Multipole Tree, time-advected vortex particle method. Some simple steady-state flow solutions are performed to demonstrate the basic capabilities of the solver. Although this paper focuses primarily on steady-state solutions, it should be noted that this approach is designed to be a robust and efficient unsteady potential flow simulation tool, useful for rapid computational prototyping.
Abstract:
Bone morphogenetic protein-2 (BMP-2) has the ability to induce osteoblast differentiation of undifferentiated cells, resulting in the healing of skeletal defects when delivered with a suitable carrier. We have applied a versatile delivery platform comprising a novel composite of two biomaterials with proven track records, apatite and poly(lactic-co-glycolic acid) (PLGA), to the delivery of BMP-2. Sustained release of this growth factor was tuned with variables that affect polymer degradation and/or apatite dissolution, such as polymer molecular weight, polymer composition, apatite loading, and apatite particle size. The effect of released BMP-2 on C3H10T1/2 murine pluripotent mesenchymal cells was assessed by tracking the expression of the osteoblastic markers alkaline phosphatase (ALP) and osteocalcin. Release media collected over 100 days induced elevated ALP activity in C3H10T1/2 cells. The expression of osteocalcin was also significantly upregulated. These results demonstrate the potential of apatite-PLGA composite particles for releasing protein in bioactive form over extended periods of time.
Abstract:
The study of granular material is of great interest to many researchers in both the engineering and science communities. The importance of such a study derives from its complex rheological character and its significant role in a wide range of industrial applications, such as coal, food, plastics, pharmaceuticals, powder metallurgy and mineral processing. A number of recent reports have focused on the physics of non-cohesive granular material subjected to vertical vibration, in either experimental or theoretical approaches. Such a system can be used to separate, mix and dry granular materials in industry. It exhibits different instability behaviours on its surface under vertical vibration, for example avalanching, surface fluidization and surface waves, and these phenomena have attracted the particular interest of many researchers. However, the underlying instability mechanism is not yet well understood. This paper therefore studies the dynamics of granular motion in such a system using Positron Emission Particle Tracking (PEPT), which allows the motion of a single tracer particle to be followed in a non-invasive way. Features of the solids motion such as cycle frequency and dispersion index were investigated by means of the authors' specially written programmes. Regardless of the surface behaviour, particles are found to travel in a rotational movement in the horizontal plane. Particle cycle frequency is found to increase strongly with increasing vibration amplitude, as does particle dispersion. Horizontal dispersion is observed to always exceed vertical dispersion.
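The abstract does not define the dispersion index, so as a hypothetical stand-in the sketch below takes per-axis dispersion to be the variance of a tracer's coordinate samples; comparing horizontal and vertical components of PEPT trajectory data in this way reflects the kind of analysis described.

```python
def dispersion(positions):
    """Per-axis dispersion of a tracer trajectory, taken here as the
    variance of the coordinate samples. This is a simplified stand-in
    for the paper's dispersion index, whose exact definition is not
    given in the abstract. `positions` is a list of coordinate tuples
    sampled along the trajectory."""
    n = len(positions)
    dims = len(positions[0])
    out = []
    for d in range(dims):
        vals = [p[d] for p in positions]
        mean = sum(vals) / n
        out.append(sum((v - mean) ** 2 for v in vals) / n)
    return out
```

For a trajectory that sweeps widely in x but oscillates narrowly in y, the x-dispersion dominates, mirroring the horizontal-versus-vertical comparison reported above.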
Abstract:
Memory errors are a common cause of incorrect software execution and security vulnerabilities. We have developed two new techniques that help software continue to execute successfully through memory errors: failure-oblivious computing and boundless memory blocks. The foundation of both techniques is a compiler that generates code that checks accesses via pointers to detect out-of-bounds accesses. Instead of terminating or throwing an exception, the generated code takes another action that keeps the program executing without memory corruption. Failure-oblivious code simply discards invalid writes and manufactures values to return for invalid reads, enabling the program to continue its normal execution path. Code that implements boundless memory blocks stores invalid writes away in a hash table to return as the values for corresponding out-of-bounds reads. The net effect is to (conceptually) give each allocated memory block unbounded size and to eliminate out-of-bounds accesses as a programming error. We have implemented both techniques and acquired several widely used open source servers (Apache, Sendmail, Pine, Mutt, and Midnight Commander). With standard compilers, all of these servers are vulnerable to buffer overflow attacks, as documented at security tracking web sites. Both failure-oblivious computing and boundless memory blocks eliminate these security vulnerabilities (as well as other memory errors). Our results show that our compiler enables the servers to execute successfully through buffer overflow attacks and to continue to correctly service user requests without security vulnerabilities.
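A minimal model of the boundless-memory-block bookkeeping, assuming a dictionary as the hash table: in-bounds accesses go to a fixed-size array, out-of-bounds writes are stored by offset, and out-of-bounds reads return the stored value or a manufactured default. The paper implements this in a compiler at the level of C pointer accesses; the class below only models the idea.

```python
class BoundlessBlock:
    """Toy model of a boundless memory block: in-bounds accesses use a
    fixed-size array; out-of-bounds writes are stored in a hash table
    keyed by offset, and out-of-bounds reads return the stored value
    (or a manufactured default), so no access corrupts adjacent
    memory. Names and the default value are illustrative assumptions."""
    def __init__(self, size, default=0):
        self.data = [default] * size
        self.overflow = {}          # hash table for out-of-bounds slots
        self.default = default

    def write(self, i, value):
        if 0 <= i < len(self.data):
            self.data[i] = value
        else:
            self.overflow[i] = value  # kept here, never touches real memory

    def read(self, i):
        if 0 <= i < len(self.data):
            return self.data[i]
        return self.overflow.get(i, self.default)
```

A write at offset 10 of a 4-slot block lands in the hash table and is faithfully returned by a later read at offset 10, while a never-written out-of-bounds read yields the manufactured default.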
Abstract:
This paper proposes a field application of a high-level reinforcement learning (RL) control system for solving the action selection problem of an autonomous robot in a cable tracking task. The learning system is characterized by the use of a direct policy search method for learning the internal state/action mapping. Policy-only algorithms may suffer from long convergence times when dealing with real robots. In order to speed up the process, the learning phase was carried out in a simulated environment and, in a second step, the policy was transferred to and tested successfully on a real robot. Future work plans to continue the learning process online on the real robot while it performs the task. We demonstrate the feasibility of the approach with real experiments on the underwater robot ICTINEU AUV.
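As an illustration of direct policy search (not the paper's algorithm, which learns a state/action mapping on a robot), the sketch below performs deterministic gradient ascent on the expected reward of a softmax policy over a handful of actions with known rewards; the function name and the bandit-style setup are assumptions made for the sake of a small, self-contained example.

```python
import math

def policy_search(rewards, steps=500, lr=0.5):
    """Minimal direct policy search: gradient ascent on the expected
    reward J(theta) = sum_a pi_theta(a) * r(a) of a softmax policy,
    using the exact gradient dJ/dtheta_a = pi(a) * (r(a) - J).
    Returns the final action probabilities."""
    theta = [0.0] * len(rewards)
    for _ in range(steps):
        z = [math.exp(t) for t in theta]
        s = sum(z)
        pi = [v / s for v in z]                      # softmax policy
        j = sum(p * r for p, r in zip(pi, rewards))  # expected reward
        theta = [t + lr * p * (r - j)                # ascend the gradient
                 for t, p, r in zip(theta, pi, rewards)]
    z = [math.exp(t) for t in theta]
    s = sum(z)
    return [v / s for v in z]
```

With rewards [1.0, 0.0], the policy concentrates almost all probability on the first action, showing the search directly shaping the policy parameters rather than a value function.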
Abstract:
Photo-mosaicing techniques have become popular for seafloor mapping in various marine science applications. However, the common methods cannot accurately map regions with high relief and topographical variations. Ortho-mosaicing, borrowed from photogrammetry, is an alternative technique that makes it possible to take into account the 3-D shape of the terrain. A serious bottleneck is the volume of elevation information that needs to be estimated from the video data, fused, and processed for the generation of a composite ortho-photo that covers a relatively large seafloor area. We present a framework that combines the advantages of dense depth-map and 3-D feature estimation techniques based on visual motion cues. The main goal is to identify and reconstruct certain key terrain feature points that adequately represent the surface with minimal complexity in the form of piecewise planar patches. The proposed implementation utilizes local depth maps for feature selection, while tracking over several views enables 3-D reconstruction by bundle adjustment. Experimental results with synthetic and real data validate the effectiveness of the proposed approach.
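A piecewise planar patch of the kind mentioned above can be obtained by least-squares fitting a plane to reconstructed terrain points. The sketch below fits z = ax + by + c via the normal equations solved by Cramer's rule; it illustrates the patch-fitting idea only and is not the paper's pipeline (which a real implementation would back with a linear algebra library).

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to 3-D terrain points,
    the kind of piecewise planar patch used to approximate relief
    with minimal complexity. Returns (a, b, c)."""
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points); n = float(len(points))
    sxx = sum(p[0] * p[0] for p in points)
    syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points)
    syz = sum(p[1] * p[2] for p in points)
    # Normal equations M @ [a, b, c] = v, solved by Cramer's rule.
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    v = [sxz, syz, sz]
    det = lambda m: (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                     - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                     + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(M)
    sol = []
    for j in range(3):
        mj = [[v[i] if k == j else M[i][k] for k in range(3)] for i in range(3)]
        sol.append(det(mj) / d)
    return tuple(sol)
```

Points sampled exactly from z = 2x - y + 3 recover (a, b, c) = (2, -1, 3); noisy depth estimates would instead yield the best-fitting patch in the least-squares sense.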
Abstract:
Introduction: Echocardiography is currently the most widely used diagnostic imaging technique for evaluating cardiovascular anatomy and function. Speckle-tracking echocardiography is now in use, allowing a more objective and reliable evaluation of ventricular function; however, reference values are required so that the values obtained are valid and useful for deciding on interventions in a more timely manner, before ventricular function deteriorates. General objective: To determine reference values for left ventricular mechanics by two-dimensional speckle-tracking echocardiography with a Toshiba Artida system and a 3 MHz multifrequency transducer in patients without known cardiac pathology at the Fundación Clínica Shaio in 2014. Methodology: Analysis of a prospective cohort of all patients without known cardiac pathology admitted to the Fundación Clínica Shaio for echocardiographic evaluation between August and December 2014. Results: This study of the evaluation of left ventricular mechanics in healthy adults is presented; the results are similar to those obtained in reference studies. They are nevertheless considered of great importance because, according to the current guideline on the evaluation of ventricular mechanics by strain rate, each system must be standardized in order to obtain results that are valid for the different pathologies to which the technique can be applied and for our population.
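For context, the quantity speckle tracking ultimately reports is Lagrangian strain, the fractional length change of a tracked myocardial segment, and a global value averages over segments. The sketch below shows only this final arithmetic; the function names and the two-function split are illustrative, and obtaining the segment lengths is the hard part that the speckle-tracking software performs.

```python
def lagrangian_strain(l0, l):
    """Lagrangian strain (as a percentage) of a myocardial segment
    tracked from end-diastole (length l0) to end-systole (length l).
    Speckle tracking obtains these lengths by following acoustic
    speckle patterns from frame to frame."""
    return 100.0 * (l - l0) / l0

def global_longitudinal_strain(segment_lengths):
    """Average segmental strain; `segment_lengths` is a list of
    (end-diastolic length, end-systolic length) pairs."""
    strains = [lagrangian_strain(l0, l) for l0, l in segment_lengths]
    return sum(strains) / len(strains)
```

A segment shortening from 10 mm to 8 mm has a strain of -20%; healthy longitudinal strain is negative because the ventricle shortens in systole, which is why reference values per system and population matter.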
Abstract:
In November 2008, Colombian authorities dismantled a network of Ponzi schemes, causing hundreds of thousands of investors throughout the country to lose tens of millions of dollars. Using original data on the geographical incidence of the Ponzi schemes, this paper estimates the impact of their breakdown on crime. We find that the crash of the Ponzi schemes differentially exacerbated crime in affected districts. Confirming the intuition of the standard economic model of crime, this effect is present only in places with relatively weak judicial and law enforcement institutions and with little access to consumption-smoothing mechanisms such as microcredit. In addition, we show that, with the exception of economically motivated felonies such as robbery, violent crime is not affected by the negative shock.
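The abstract does not spell out the estimation strategy; as a hypothetical simplification, a difference-in-differences comparison of affected and unaffected districts before and after the crash captures what "differentially exacerbated" means in this setting.

```python
def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences estimate: the change in mean crime in
    districts exposed to the Ponzi crash minus the change in unexposed
    districts. A hypothetical simplification for illustration; the
    paper's actual specification is not given in the abstract."""
    mean = lambda xs: sum(xs) / len(xs)
    return ((mean(treated_post) - mean(treated_pre))
            - (mean(control_post) - mean(control_pre)))
```

If crime rises by 10 units in affected districts but only 2 in comparable unaffected ones, the differential effect attributed to the crash is 8.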
Abstract:
Diffusion Tensor Imaging (DTI) is a new magnetic resonance imaging modality capable of producing quantitative maps of the microscopic natural displacements of water molecules that occur in brain tissues as part of the physical diffusion process. This technique has become a powerful tool in the investigation of brain structure and function because it allows for in vivo measurements of white matter fiber orientation. The application of DTI in clinical practice requires specialized processing and visualization techniques to extract and represent the acquired information in a comprehensible manner. Tracking techniques are used to infer patterns of continuity in the brain by following, in a step-wise manner, the path of a set of particles dropped into a vector field. In this way, white matter fiber maps can be obtained.
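The step-wise tracking described above is, in essence, streamline integration: repeatedly stepping a particle along the principal diffusion direction. A minimal sketch, assuming `field` is a function mapping a position to a unit direction vector (Euler steps only; real tractography adds stopping criteria such as anisotropy and curvature thresholds):

```python
def track_streamline(field, seed, step=0.5, n_steps=100):
    """Step-wise fiber tracking: repeatedly move a particle a small
    step along the direction returned by `field` at its current
    position. This is the basic streamline scheme behind DTI
    tractography; `field` would come from the principal eigenvector
    of the diffusion tensor at each voxel."""
    path = [seed]
    x, y, z = seed
    for _ in range(n_steps):
        dx, dy, dz = field((x, y, z))
        x, y, z = x + step * dx, y + step * dy, z + step * dz
        path.append((x, y, z))
    return path
```

In a uniform field pointing along x, a particle seeded at the origin simply marches down the x-axis; in real data the field bends, and the accumulated path traces a candidate fiber.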
Abstract:
One of the main challenges for developers of new human-computer interfaces is to provide a more natural way of interacting with computer systems, avoiding excessive use of hand and finger movements. In this way, a valuable alternative communication pathway is also provided to people suffering from motor disabilities. This paper describes the construction of a low-cost eye tracker using a fixed-head setup. To this end, a webcam, a laptop and an infrared lighting source were used, together with a simple frame to fix the head of the user. Furthermore, detailed information is given on the various image processing techniques used for extracting the centre of the pupil, and different methods to calculate the point of gaze are discussed. An overall accuracy of 1.5 degrees was obtained while keeping the hardware cost of the device below 100 euros.
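One of the simplest ways to locate the pupil centre, and a plausible first stage of such a pipeline (the paper's exact filtering chain is not given in the abstract), is to threshold dark pixels and take their centroid:

```python
def pupil_centre(gray, threshold=50):
    """Estimate the pupil centre in a grayscale image, given as a list
    of rows of 0-255 intensities: threshold dark pixels (the pupil is
    the darkest region under infrared illumination) and return the
    centroid of those pixels, or None if no pixel is dark enough.
    Illustrative only; the threshold value is an assumption."""
    xs, ys = [], []
    for y, row in enumerate(gray):
        for x, v in enumerate(row):
            if v < threshold:       # pupil pixels are dark
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

In practice the centroid feeds a calibration mapping from pupil position to point of gaze, which is where the different gaze-calculation methods the paper compares come in.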
Abstract:
Taking into account the study of Luegi (2006), where eye movements of 20 Portuguese university students while reading text passages were analyzed, in this article we discuss some methodological issues concerning eye tracking measures to evaluate reading difficulties. Relating syntactic complexity, grammaticality and ambiguity to eye movements, we will discuss the use of many different dependent variables that indicate the immediate and delayed processes in text processing. We propose a new measure that we called Progression-Path which permits analyzing, in the critical region, what happens when the reader proceeds on the sentence instead of going backwards to solve a problem that s/he found (which is the most common expected behavior but not the only one, as is illustrated by some of our examples).
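For reference, standard eye-tracking measures of this kind are computed from an ordered list of fixations. The sketch below computes first-pass reading time for a region, one of the common dependent variables indexing immediate processing; the Progression-Path measure the article proposes is not defined in the abstract, so it is deliberately not reproduced here.

```python
def first_pass_time(fixations, region):
    """Sum of fixation durations in `region` from first entering it
    until any other region is fixated: the standard first-pass
    reading-time measure. `fixations` is a chronological list of
    (region, duration_ms) pairs."""
    total, entered = 0, False
    for r, ms in fixations:
        if r == region:
            entered = True
            total += ms
        elif entered:
            break               # left the region: first pass is over
    return total
```

Later re-fixations of the region (after the reader has moved on or regressed) are excluded by design; delayed measures such as total reading time or regression-path duration would include them.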
Abstract:
In this study we evaluate the processing costs of different types of anaphoric expressions during reading. We consider three types of anaphoric expressions in sentential Subject position: a null pronoun (pro) and two gaps produced by syntactic movement: a WH-variable and an NP copy. Given that coreferential pro exhibits more referential weight than WH- and NP-gaps, and grounded in theories of referential processing based on relations of hierarchy and accessibility of the antecedent, we raise the hypothesis that the more dependent the anaphoric null constituent is on its antecedent, and the smaller the distance in terms of hierarchical structure between the anaphoric null element and its antecedent, the lower the cognitive costs of processing. To test our hypothesis, we recorded the eye movements of 20 Portuguese adult native speakers with an R6-HS ASL system. Text regions including the selected anaphoric expressions were delimited and tagged. We analyzed the reading time of each region, taking into account the number and duration of eye fixations per region; we used the reading time per character in milliseconds in order to compare values between regions of different lengths. We found a significant advantage in the reading time of the gaps arising from movement over the reading time of pro.