911 results for moving particle tracking
Abstract:
Retrograde transport of NF-κB from the synapse to the nucleus in neurons is mediated by the dynein/dynactin motor complex and can be triggered by synaptic activation. The calibre of axons is highly variable, ranging down to 100 nm, which complicates the investigation of transport processes in neurites of living neurons using conventional light microscopy. In this study we quantified for the first time the transport of the NF-κB subunit p65 using high-density single-particle tracking in combination with photoactivatable fluorescent proteins in living mouse hippocampal neurons. We detected an increase of the mean diffusion coefficient (Dmean) in neurites from 0.12 ± 0.05 µm²/s to 0.61 ± 0.03 µm²/s after stimulation with glutamate. We further observed that the relative amount of retrogradely transported p65 molecules is increased after stimulation. Glutamate treatment resulted in an increase of the mean retrograde velocity from 10.9 ± 1.9 to 15 ± 4.9 µm/s, whereas a velocity increase from 9 ± 1.3 to 14 ± 3 µm/s was observed for anterogradely transported p65. This study demonstrates for the first time that glutamate stimulation leads to an increased mobility of single NF-κB p65 molecules in neurites of living hippocampal neurons.
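Mean diffusion coefficients like the Dmean values quoted above are typically extracted from single-particle trajectories via the mean squared displacement (MSD), which for pure 2D diffusion grows as MSD = 4·D·Δt. A minimal sketch of that estimate on synthetic Brownian data (the trajectory, frame interval, and parameters here are illustrative, not the paper's data):

```python
import numpy as np

def diffusion_coefficient(xy, dt, max_lag=10):
    """Estimate D from a 2D trajectory via MSD = 4*D*lag (pure diffusion)."""
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((xy[lag:] - xy[:-lag])**2, axis=1))
                    for lag in lags])
    # Linear fit of MSD versus lag time; the slope is 4*D in two dimensions
    slope, _ = np.polyfit(lags * dt, msd, 1)
    return slope / 4.0

# Synthetic Brownian trajectory with D = 0.12 um^2/s, 50 ms frame interval
rng = np.random.default_rng(0)
D_true, dt = 0.12, 0.05
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(5000, 2))
xy = np.cumsum(steps, axis=0)
print(diffusion_coefficient(xy, dt))  # close to 0.12
```

Restricting the fit to short lags, as here, reduces the bias from correlated MSD estimates at long lags.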
Abstract:
The U.S. Geological Survey (USGS) is committed to providing the Nation with credible scientific information that helps to enhance and protect the overall quality of life and that facilitates effective management of water, biological, energy, and mineral resources (http://www.usgs.gov/). Information on the Nation's water resources is critical to ensuring long-term availability of water that is safe for drinking and recreation and is suitable for industry, irrigation, and fish and wildlife. Population growth and increasing demands for water make the availability of that water, now measured in terms of quantity and quality, even more essential to the long-term sustainability of our communities and ecosystems. The USGS implemented the National Water-Quality Assessment (NAWQA) Program in 1991 to support national, regional, State, and local information needs and decisions related to water-quality management and policy (http://water.usgs.gov/nawqa). The NAWQA Program is designed to answer the following questions: What is the condition of our Nation's streams and ground water? How are conditions changing over time? How do natural features and human activities affect the quality of streams and ground water, and where are those effects most pronounced? By combining information on water chemistry, physical characteristics, stream habitat, and aquatic life, the NAWQA Program aims to provide science-based insights for current and emerging water issues and priorities. From 1991 to 2001, the NAWQA Program completed interdisciplinary assessments and established a baseline understanding of water-quality conditions in 51 of the Nation's river basins and aquifers, referred to as Study Units (http://water.usgs.gov/nawqa/studyu.html).
Abstract:
Diffusion is a common phenomenon in nature and is generally associated with a system trying to reach a local or global equilibrium state as a result of highly irregular individual particle motion; it is therefore of fundamental importance in physics, chemistry and biology. Particle tracking in a complex fluid can reveal important characteristics of its properties. In living cells, we coat the microbead with a peptide (RGD) that binds to integrin receptors at the plasma membrane, which connect to the cytoskeleton (CSK). This procedure is based on the hypothesis that the microsphere can move only if the structure to which it is attached moves as well. The observed trajectory of the microbeads is then a probe of the CSK, whose motion is governed by several factors, including thermal diffusion, pressure gradients and molecular motors. The possibility of separating the trajectories into passive and active diffusion may give information about the viscoelasticity of the cell structure and the activity of molecular motors. The motion can also be analyzed via the generalized Stokes-Einstein relation, avoiding the use of any active techniques. Usually a 12 to 16 frames-per-second (FPS) system is used to track the microbeads in a cell for about 5 minutes. Several factors impose this FPS limit: camera-computer communication, light, and computer speed for online analysis, among others. Here we used a high-quality camera and our own software, developed in C++ under Linux, to reach high FPS. Measurements were conducted with samples under 10× and 20× objectives. We acquired image sequences at different intervals, all with 2 µs exposure: intervals of 4-5 ms (maximum speed), and frame rates of 14, 25, 50 and 100 FPS. Our preliminary results highlight the difference between passive and active diffusion, since passive diffusion is represented by a Gaussian distribution of the displacements of the centre of mass of individual beads between consecutive frames. The active process, or anomalous diffusion, by contrast, shows up as long tails in the distribution of displacements.
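The Gaussian-versus-long-tail distinction described above can be quantified, for example, by the excess kurtosis of the frame-to-frame displacement distribution, which is near zero for a Gaussian and positive for heavy tails. A small sketch on synthetic data (the mixture model for "active" steps is an assumption for illustration, not the authors' analysis):

```python
import numpy as np

def excess_kurtosis(dx):
    """Excess kurtosis of a displacement sample: ~0 for Gaussian, >0 for long tails."""
    dx = np.asarray(dx, float)
    z = (dx - dx.mean()) / dx.std()
    return np.mean(z**4) - 3.0

rng = np.random.default_rng(1)
# Purely thermal (passive) steps: Gaussian
passive = rng.normal(0, 0.05, 100_000)
# Hypothetical active steps: occasional large motor-driven jumps on top of thermal noise
active = np.where(rng.random(100_000) < 0.05,
                  rng.laplace(0, 0.5, 100_000), passive)

print(excess_kurtosis(passive))  # near 0
print(excess_kurtosis(active))   # clearly positive: long tails
```

A histogram of the two samples shows the same picture: the passive distribution is a parabola on a log scale, while the active one develops straight-line tails.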
Abstract:
The AEgIS experiment is an interdisciplinary collaboration between atomic, plasma and particle physicists, with the scientific goal of performing the first precision measurement of the Earth's gravitational acceleration on antimatter. The principle of the experiment is as follows: cold antihydrogen atoms are synthesized in a Penning-Malmberg trap, are Stark accelerated towards a moiré deflectometer, the classical counterpart of an atom interferometer, and annihilate on a position-sensitive detector. Crucial to the success of the experiment is an antihydrogen detector that will be used to demonstrate the production of antihydrogen and also to measure the temperature of the anti-atoms and the creation of a beam. The operating requirements for the detector are very challenging: it must operate at close to 4 K inside a 1 T solenoid magnetic field and identify the annihilation of the antihydrogen atoms that are produced during the 1 μs period of antihydrogen production. Our solution, called the FACT detector, is based on a novel multi-layer scintillating fiber tracker with SiPM readout and an off-the-shelf FPGA-based readout system. This talk will present the design of the FACT detector and detail the operation of the detector in the context of the AEgIS experiment.
Abstract:
This paper presents a new method to measure the sinking rates of individual phytoplankton "particles" (cells, chains, colonies, and aggregates) in the laboratory. Conventional particle tracking and high-resolution video imaging were used to measure particle sinking rates and particle size. The stabilizing force of a very mild linear salinity gradient (1 ppt over 15 cm) prevented the formation of convection currents in the laboratory settling chamber. Whereas bulk settling methods such as SETCOL provide a single value of sinking rate for a population, this method allows the measurement of sinking rate and particle size for a large number of individual particles or phytoplankton within a population. The method has applications where sinking rates vary within a population, or where sinking rate-size relationships are important. Preliminary data from experiments with both laboratory and field samples of marine phytoplankton are presented here to illustrate the use of the technique, its applications, and its limitations. Whereas this paper deals only with sinking phytoplankton, the method is equally valid for positively buoyant species, as well as for nonbiological particles.
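Once an individual particle has been tracked, its sinking rate follows from a least-squares slope of depth versus time. A minimal sketch on hypothetical track data (the units, rate, and noise level below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def sinking_rate(t, depth):
    """Sinking rate (positive downward) from a tracked particle's depth
    time series, via the least-squares slope of depth versus time."""
    slope, _ = np.polyfit(t, depth, 1)
    return slope

# Hypothetical tracked particle: ~0.8 mm/s sinking plus tracking noise
t = np.linspace(0, 60, 121)  # s
depth = 0.8 * t + np.random.default_rng(2).normal(0, 0.5, t.size)  # mm
print(sinking_rate(t, depth))  # close to 0.8 mm/s
```

Fitting each tracked particle separately is what gives the per-individual sinking-rate distribution that bulk methods such as SETCOL cannot provide.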
Abstract:
Mono-camera tracking systems have demonstrated a remarkable capability for analysing the trajectories of moving objects and for monitoring scenes of interest; however, both their robustness and their possibilities for semantic understanding of the scene are strongly limited by their local and monocular nature, which makes them insufficient for realistic video-surveillance applications. The goal of this thesis is to extend the possibilities of moving-object tracking systems to achieve a greater degree of robustness and scene understanding. The proposed extension is divided into two separate directions. The first can be considered local, since it is aimed at improving and enriching the positions estimated for the moving objects directly observed by the cameras of the system; this extension is achieved through the development of a multi-camera 3D tracking system capable of consistently providing the 3D positions of multiple objects from the observations captured by a set of calibrated sensors with overlapping fields of view. The second extension can be considered global, since its goal is to provide a global context relating the local observations made by one camera to a much larger scene; to this end, an automatic camera-localization system is proposed, based on the observed trajectories of several moving objects and on a schematic map of the monitored global scene.
Both lines of research are addressed using Bayesian estimation techniques as a common framework: this choice is justified by the versatility and flexibility of that statistical framework, which allows the natural combination of multiple sources of information about the parameters to be estimated, as well as a rigorous treatment of the uncertainty associated with them through the inclusion of specifically designed observation models. In addition, the selected framework opens up great operational possibilities, since it permits the creation of different numerical methods adapted to the specific needs and characteristics of the different problems addressed. The proposed multi-camera 3D tracking system is specifically designed to admit schematic descriptions of the measurements made individually by each camera of the system: this design choice therefore assumes no specific 2D detection or tracking algorithm at any of the sensors of the network, and makes the proposed system applicable to real surveillance networks with limited processing and transmission capabilities. The robust combination of the observations captured individually by the cameras, which are noisy, incomplete and probably contaminated by false detections, relies on a Bayesian association method based on geometry and colour: the results of this association enable the 3D tracking of the objects in the scene using a particle filter. The main features of the proposed observation-fusion system are high accuracy in terms of 3D object localization, and a remarkable ability to recover from occasional errors due to an insufficient amount of input data.
The automatic camera-localization system relies on the observation of multiple moving objects and on a schematic map of the passable areas of the monitored environment to infer the absolute position of the sensor. For this purpose, a novel Bayesian framework is proposed that combines map-induced dynamic models for the moving objects present in the scene with the trajectories observed by the camera, an approach never before used in the existing literature. The localization system is divided into two distinct sub-tasks, since each requires the design of specific sampling algorithms to fully exploit the characteristics of the developed framework: on the one hand, analysis of the ambiguity of the specific case at hand and approximate estimation of the camera location; on the other, refinement of the camera location. The complete system, designed and tested for the specific case of camera localization in urban traffic environments, could also be applied to other environments and to sensors of different modalities after certain adaptations. ABSTRACT: Mono-camera tracking systems have proved their capabilities for moving-object trajectory analysis and scene monitoring, but their robustness and semantic possibilities are strongly limited by their local and monocular nature and are often insufficient for realistic surveillance applications. This thesis is aimed at extending the possibilities of moving-object tracking systems to a higher level of scene understanding. The proposed extension comprises two separate directions.
The first one is local, since it is aimed at enriching the inferred positions of the moving objects within the area of the monitored scene directly covered by the cameras of the system; this task is achieved through the development of a multi-camera system for robust 3D tracking, able to provide 3D tracking information on multiple simultaneous moving objects from the observations reported by a set of calibrated cameras with semi-overlapping fields of view. The second extension is global, as it is aimed at providing local observations performed within the field of view of one camera with a global context relating them to a much larger scene; to this end, an automatic camera positioning system relying only on observed object trajectories and a scene map is designed. The two lines of research in this thesis are addressed using Bayesian estimation as a general unifying framework. Its suitability for these two applications is justified by the flexibility and versatility of that stochastic framework, which allows the combination of multiple sources of information about the parameters to estimate in a natural and elegant way, addressing at the same time the uncertainty associated with those sources through the inclusion of models designed to this end. In addition, it opens multiple possibilities for the creation of different numerical methods for achieving satisfactory and efficient practical solutions to each addressed application. The proposed multi-camera 3D tracking method is specifically designed to work on schematic descriptions of the observations performed by each camera of the system: this choice allows the use of unspecific off-the-shelf 2D detection and/or tracking subsystems running independently at each sensor, and makes the proposal suitable for real surveillance networks with moderate computational and transmission capabilities.
The robust combination of such noisy, incomplete and possibly unreliable schematic descriptors relies on a Bayesian association method, based on geometry and color, whose results allow the tracking of the targets in the scene with a particle filter. The main features exhibited by the proposal are, first, a remarkable accuracy in terms of target 3D positioning, and second, a great recovery ability after tracking losses due to insufficient input data. The proposed system for visual-based camera self-positioning uses the observations of moving objects and a schematic map of the passable areas of the environment to infer the absolute sensor position. To this end, a new Bayesian framework combining trajectory observations and map-induced dynamic models for moving objects is designed, which represents an approach to camera positioning never addressed before in the literature. This task is divided into two different sub-tasks, namely setting-ambiguity analysis and approximate position estimation, on the one hand, and position refining, on the other, since they require the design of specific sampling algorithms to correctly exploit the discriminative features of the developed framework. This system, designed for camera positioning and demonstrated in urban traffic environments, can also be applied to different environments and sensors of other modalities after certain required adaptations.
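The particle filter at the core of the 3D tracker can be illustrated in miniature. The sketch below runs a generic bootstrap particle filter on a single 2D target with a simple diffusion motion model and a Gaussian observation likelihood; it is only a stand-in for the thesis's method, which uses a geometry-and-colour association step over multiple cameras that is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

def particle_filter_step(particles, weights, z, motion_std=0.5, meas_std=1.0):
    """One predict/update/resample cycle of a bootstrap particle filter
    for a 2D position target with a random-walk motion model."""
    # Predict: diffuse the particles according to the motion model
    particles = particles + rng.normal(0, motion_std, particles.shape)
    # Update: reweight each particle by the Gaussian likelihood of observation z
    d2 = np.sum((particles - z) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std**2)
    weights /= weights.sum()
    # Resample (multinomial) to avoid weight degeneracy
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Hypothetical target drifting along a line, observed with noise
particles = rng.uniform(-10, 10, (2000, 2))
weights = np.full(2000, 1 / 2000)
for k in range(30):
    true_pos = np.array([0.2 * k, 0.1 * k])
    z = true_pos + rng.normal(0, 1.0, 2)
    particles, weights = particle_filter_step(particles, weights, z)

estimate = particles.mean(axis=0)
print(estimate)  # close to the final true position (5.8, 2.9)
```

The resampling step is what gives the recovery ability mentioned above: particles that drift away from the target are replaced by copies of well-matching ones.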
Abstract:
The pyrolysis of a freely moving cellulosic particle inside a continuously fed fluid-bed reactor (feed rate 41.7 mg/s) subjected to convective heat transfer is modelled. The Lagrangian approach is adopted for the particle tracking inside the reactor, while the flow of the inert gas is treated with the standard Eulerian method for gases. The model incorporates the thermal degradation of cellulose to char with simultaneous evolution of gases and vapours from discrete cellulosic particles. The reaction kinetics are represented according to the Broido-Shafizadeh scheme. The convective heat transfer to the surface of the particle is solved by two means, namely the Ranz-Marshall correlation and the limit case of infinitely fast external heat transfer rates. The results from both approaches are compared and discussed. The effect of the different heat transfer rates on the discrete-phase trajectory is also considered.
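The Ranz-Marshall correlation used for the convective boundary condition gives the Nusselt number for a sphere as Nu = 2 + 0.6·Re^(1/2)·Pr^(1/3), from which the heat-transfer coefficient follows as h = Nu·k/d. A short sketch (the gas properties and particle size below are assumed, illustrative values, not those of the paper's reactor):

```python
import math

def ranz_marshall_h(d_p, rel_vel, rho_g, mu_g, cp_g, k_g):
    """Convective heat-transfer coefficient (W/m^2/K) for a sphere from the
    Ranz-Marshall correlation: Nu = 2 + 0.6 * Re**0.5 * Pr**(1/3)."""
    Re = rho_g * rel_vel * d_p / mu_g          # particle Reynolds number
    Pr = cp_g * mu_g / k_g                     # gas Prandtl number
    Nu = 2.0 + 0.6 * math.sqrt(Re) * Pr ** (1.0 / 3.0)
    return Nu * k_g / d_p

# Illustrative values: ~0.5 mm particle in a hot inert gas (assumed properties)
h = ranz_marshall_h(d_p=5e-4, rel_vel=0.5, rho_g=0.4, mu_g=3e-5, cp_g=1100, k_g=0.06)
print(h)
```

At zero relative velocity the correlation reduces to the conduction limit Nu = 2; the paper's second case, infinitely fast external heat transfer, corresponds instead to taking the particle surface temperature equal to the gas temperature.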
Abstract:
The study of granular material is of great interest to many researchers in both the engineering and science communities. The importance of such a study derives from its complex rheological character and also from its significant role in a wide range of industrial applications, such as coal, food, plastics, pharmaceuticals, powder metallurgy and mineral processing. A number of recent reports have focused on the physics of non-cohesive granular material subjected to vertical vibration, in either experimental or theoretical approaches. Such a system can be used to separate, mix and dry granular materials in industry. It exhibits different instability behaviours on its surface under vertical vibration, for example avalanching, surface fluidization and surface waves, and these phenomena have attracted the particular interest of many researchers. However, the underlying instability mechanism is not yet well understood. This paper therefore studies the dynamics of granular motion in such a system using Positron Emission Particle Tracking (PEPT), which allows the motion of a single tracer particle to be followed in a non-invasive way. Features of the solids motion, such as cycle frequency and dispersion index, were investigated by means of the authors' specially written programmes. Regardless of the surface behaviour, particles are found to travel in a rotational movement in the horizontal plane. Particle cycle frequency is found to increase strongly with increasing vibration amplitude, as does particle dispersion. Horizontal dispersion is observed to always exceed vertical dispersion.
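One simple way to compare horizontal and vertical dispersion from a PEPT tracer trajectory is the mean squared displacement at a fixed lag, split by direction. The sketch below applies that measure to a synthetic trajectory with anisotropic step sizes; definitions of "dispersion index" vary, and this is only one plausible measure, not necessarily the one computed by the authors' programmes:

```python
import numpy as np

def dispersion(track, lag):
    """Mean squared displacement at a fixed lag, split into horizontal
    (x, y) and vertical (z) components, as one simple dispersion measure."""
    d = track[lag:] - track[:-lag]
    horiz = np.mean(d[:, 0]**2 + d[:, 1]**2)
    vert = np.mean(d[:, 2]**2)
    return horiz, vert

rng = np.random.default_rng(4)
# Hypothetical tracer track: stronger horizontal than vertical random motion
track = np.cumsum(rng.normal(0, [1.0, 1.0, 0.4], (10_000, 3)), axis=0)
h, v = dispersion(track, lag=50)
print(h, v)  # horizontal dispersion exceeds vertical, as in the PEPT observations
```

Cycle frequency would be extracted from the same track by counting, for example, zero crossings or spectral peaks of the angular coordinate in the horizontal plane.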
Abstract:
Micro-scale, two-phase flow is found in a variety of devices such as lab-on-a-chip systems, bio-chips, micro-heat exchangers, and fuel cells. Knowledge of the fluid behavior near the dynamic gas-liquid interface is required for developing accurate predictive models. Light is distorted near a curved gas-liquid interface, preventing accurate measurement of interfacial shape and internal liquid velocities. This research focused on the development of experimental methods designed to isolate and probe dynamic liquid films and measure velocity fields near a moving gas-liquid interface. A high-speed, reflectance, swept-field confocal (RSFC) imaging system was developed for imaging near curved surfaces. Experimental studies of the dynamic gas-liquid interface of micro-scale, two-phase flow were conducted in three phases. Dynamic liquid film thicknesses of segmented, two-phase flow were measured using the RSFC and compared to a classic film-thickness deposition model. Flow fields near a steadily moving meniscus were measured using the RSFC and particle tracking velocimetry. The RSFC provided high-speed imaging near the menisci without distortion caused by the gas-liquid interface. Finally, interfacial morphology for internal two-phase flow and droplet evaporation was measured using interferograms produced by the RSFC imaging technique. Each technique can be used independently or simultaneously.
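Particle tracking velocimetry of the kind used near the meniscus reduces, in its simplest form, to matching each particle in one frame to its nearest neighbour in the next and reading off the displacement. A minimal sketch on synthetic seed-particle positions (the drift, noise, and search radius are illustrative assumptions, not values from this study):

```python
import numpy as np

def ptv_match(p0, p1, max_disp):
    """Nearest-neighbour particle matching between two frames; returns the
    displacement vectors of particles whose best match lies within max_disp."""
    disps = []
    for p in p0:
        d = np.linalg.norm(p1 - p, axis=1)
        j = np.argmin(d)
        if d[j] <= max_disp:
            disps.append(p1[j] - p)
    return np.array(disps)

rng = np.random.default_rng(5)
p0 = rng.uniform(0, 100, (50, 2))                 # frame 1 particle positions
p1 = p0 + np.array([1.2, 0.3]) + rng.normal(0, 0.05, p0.shape)  # uniform drift
v_mean = ptv_match(p0, p1, max_disp=3.0).mean(axis=0)
print(v_mean)  # recovers the imposed drift, about [1.2, 0.3]
```

Nearest-neighbour matching works when inter-frame displacements are small compared with the particle spacing, which is why a high frame rate matters near a fast-moving meniscus.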