994 results for Partial observations


Relevance: 60.00%

Abstract:

In recent years, rapid advances in information technology have led to various data collection systems that are enriching the sources of empirical data for use in transport systems. Currently, traffic data are collected through various sensors including loop detectors, probe vehicles, cell phones, Bluetooth, video cameras, remote sensing and public transport smart cards. It has been argued that combining the complementary information from multiple sources will generally result in better accuracy, increased robustness and reduced ambiguity. Although there have been substantial advances in data assimilation techniques to reconstruct and predict the traffic state from multiple data sources, such methods are generally data-driven and do not fully utilize the power of traffic models. Furthermore, the existing methods are still limited to freeway networks and are not yet applicable in the urban context due to the greater complexity of the flow behavior. The main traffic phenomena on urban links are generally caused by the boundary conditions at intersections, signalized or not, where the switching of traffic lights and the turning maneuvers of road users generate shock waves that propagate upstream of the intersections. This paper develops a new model-based methodology to build a real-time traffic prediction model for arterial corridors using data from multiple sources, particularly loop detectors and partial observations from Bluetooth and GPS devices.
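The abstract does not give the assimilation equations, so the following is only a minimal sketch of the kind of multi-source fusion involved: a model-forecast link density corrected sequentially by a loop-detector measurement and a sparser, noisier Bluetooth-derived measurement via a scalar Kalman update. Variable names and numbers are invented for illustration and are not the paper's model.

import numpy as np

# Minimal sketch (not the paper's method): fuse a traffic-model forecast of
# link density with two noisy observations -- one from a loop detector and
# one implied by Bluetooth travel times -- using scalar Kalman updates.
def kalman_fuse(x_pred, p_pred, observations):
    """x_pred: forecast density [veh/km]; p_pred: its variance.
    observations: list of (value, variance) pairs from different sensors."""
    x, p = x_pred, p_pred
    for z, r in observations:
        k = p / (p + r)          # gain for a direct observation of the state
        x = x + k * (z - x)      # corrected estimate
        p = (1.0 - k) * p        # corrected variance
    return x, p

x_model, p_model = 42.0, 25.0    # illustrative model forecast and variance
loop_obs = (47.0, 9.0)           # density from loop-detector occupancy
bt_obs = (38.0, 36.0)            # density implied by BT travel time (noisier)
print(kalman_fuse(x_model, p_model, [loop_obs, bt_obs]))

Each additional source, however partial, reduces the posterior variance, which is the sense in which combining complementary sources improves accuracy and robustness.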

Relevance: 60.00%

Abstract:

The low-thrust guidance problem is defined as the minimum terminal variance (MTV) control of a space vehicle subjected to random perturbations of its trajectory. To accomplish this control task, only bounded thrust level and thrust angle deviations are allowed, and these must be calculated based solely on the information gained from noisy, partial observations of the state. In order to establish the validity of various approximations, the problem is first investigated under the idealized conditions of perfect state information and negligible dynamic errors. To check each approximate model, an algorithm is developed to facilitate the computation of the open loop trajectories for the nonlinear bang-bang system. Using the results of this phase in conjunction with the Ornstein-Uhlenbeck process as a model for the random inputs to the system, the MTV guidance problem is reformulated as a stochastic, bang-bang, optimal control problem. Since a complete analytic solution seems to be unattainable, asymptotic solutions are developed by numerical methods. However, it is shown analytically that a Kalman filter in cascade with an appropriate nonlinear MTV controller is an optimal configuration. The resulting system is simulated using the Monte Carlo technique and is compared to other guidance schemes of current interest.
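As a toy illustration of the cascade highlighted above (a state estimator feeding a bounded correction law), the sketch below simulates a scalar state disturbed by an Ornstein-Uhlenbeck process, estimates it with a Kalman filter from noisy observations, and applies a thrust command saturated at a fixed bound. The dynamics, gains and bound are invented and do not reproduce the paper's MTV controller.

import numpy as np

# Toy sketch (not the paper's guidance law): Ornstein-Uhlenbeck disturbance,
# Kalman filter on noisy observations, bang-bang-style bounded correction.
rng = np.random.default_rng(0)
dt, n = 0.1, 500
theta, sigma = 0.5, 0.2          # OU mean-reversion rate and noise strength
q = sigma**2 * dt                # crude process-noise variance per step
r = 0.05**2                      # measurement-noise variance
u_max = 0.1                      # thrust bound

x, w = 0.0, 0.0                  # true trajectory deviation and OU disturbance
x_hat, p = 0.0, 1.0              # filter estimate and its covariance
for _ in range(n):
    w += -theta * w * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    u = -u_max * np.sign(x_hat)                  # bounded correction on the estimate
    x += (u + w) * dt                            # true dynamics
    z = x + np.sqrt(r) * rng.standard_normal()   # noisy partial observation
    x_hat, p = x_hat + u * dt, p + q             # Kalman predict
    k = p / (p + r)
    x_hat, p = x_hat + k * (z - x_hat), (1 - k) * p   # Kalman update
print(abs(x))    # terminal deviation for this single Monte Carlo draw

Repeating such runs over many random draws is the Monte Carlo comparison the abstract refers to.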

Relevance: 60.00%

Abstract:

The large number of vehicles on the road network can lead to congestion and safety problems. The road users of interest here are truck drivers transporting goods, who may drive non-compliant vehicles or take prohibited roads to save time. The transport of dangerous goods is regulated, and access to certain locations, especially bridges and tunnels, is forbidden to them. To help enforce the laws in force, a system of road controls exists, composed of fixed structures and mobile patrols. The strategic deployment of these enforcement resources relies on knowledge of truck drivers' behavior, which we study through the analysis of their route choices. A route choice problem can be modeled using discrete choice theory, itself founded on random utility theory. Treating this type of problem with that theory is complex. The models we use are such that we have to deal with correlation issues, since several routes are likely to share arcs. Moreover, since we work on the Quebec road network, the route choice is made from a set of routes whose number is potentially infinite if routes containing loops are considered. Finally, studying the choices made by a human is not trivial. With the chosen route choice model, we can compute an expression for the probability that a given route is taken by a truck driver. We approached this behavioral study by first describing the collected data. The questionnaire used by the inspectors collects data about the truck drivers, their vehicles and the location of the control. Describing the observed data is an essential step, because it presents clearly to a potential analyst what is available for studying truck drivers' behavior. The data observed during a control constitute what we will call an observation. With the attributes of the network, it is possible to model the Quebec road network. Selecting certain attributes makes it possible to specify the utility function and, consequently, the function used to compute a truck driver's route choice probabilities. It then becomes possible to study behavior on the basis of observations. The observations coming from the field do not currently provide enough information and, even with a well-specified model, parameter estimation is not possible; estimation is based on the maximum likelihood method. We have the tool, but we lack the raw material, namely the observations, needed to continue the study. The idea is therefore to proceed with synthetic observations. We carry out estimations with complete observations and then, to come closer to real conditions, we continue with partial observations, which constitutes a major challenge. For the latter, we propose to use the results of (Bierlaire and Frejinger, 2008), combined with those of (Fosgerau, Frejinger and Karlström, 2013). Although they are synthetic in nature, the observations we use lead to results such that we are able to make a concrete proposal that could help optimize the decisions of those responsible for road controls. Indeed, we were able to estimate, on the real Quebec network and at a significance level of 0.05, the parameter values of a discrete route choice model, even when the observations are partial. These results give rise to recommendations on changes to be made to the data collection questionnaire.
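The abstract does not spell out the estimator, so the sketch below shows only the generic machinery underneath: multinomial logit route-choice probabilities and the log-likelihood handed to a maximum likelihood optimizer. The attributes and observations are invented, and the path-overlap and sampling corrections of (Bierlaire and Frejinger, 2008) and (Fosgerau, Frejinger and Karlström, 2013) are deliberately omitted.

import numpy as np

# Minimal sketch: multinomial logit route-choice probabilities and the
# log-likelihood maximized during estimation. Data are hypothetical.
def route_probabilities(beta, attrs):
    """attrs: (n_routes, n_attributes) matrix for one choice set."""
    v = attrs @ beta                    # deterministic utilities
    expv = np.exp(v - v.max())          # numerically stabilized softmax
    return expv / expv.sum()

def log_likelihood(beta, choice_sets, chosen):
    ll = 0.0
    for attrs, j in zip(choice_sets, chosen):
        ll += np.log(route_probabilities(beta, attrs)[j])
    return ll

# Two hypothetical observations: candidate routes described by
# [length_km, n_intersections] and the index of the route actually taken.
choice_sets = [np.array([[12.0, 3.0], [15.0, 1.0], [11.0, 6.0]]),
               np.array([[30.0, 2.0], [28.0, 5.0]])]
chosen = [0, 1]
beta = np.array([-0.1, -0.3])           # trial parameter vector
print(log_likelihood(beta, choice_sets, chosen))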

Relevance: 60.00%

Abstract:

Variational data assimilation in continuous time is revisited. The central techniques applied in this paper are in part adopted from the theory of optimal nonlinear control. Alternatively, the investigated approach can be considered as a continuous time generalization of what is known as weakly constrained four-dimensional variational assimilation (4D-Var) in the geosciences. The technique makes it possible to assimilate trajectories in the case of partial observations and in the presence of model error. Several mathematical aspects of the approach are studied. Computationally, it amounts to solving a two-point boundary value problem. For imperfect models, the trade-off between small dynamical error (i.e. the trajectory obeys the model dynamics) and small observational error (i.e. the trajectory closely follows the observations) is investigated. This trade-off turns out to be trivial if the model is perfect. However, even in this situation, allowing for minute deviations from the perfect model is shown to have positive effects, namely to regularize the problem. The presented formalism is dynamical in character. No statistical assumptions on dynamical or observational noise are imposed.
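A compact way to state the trade-off discussed above, in standard weak-constraint notation (not quoted from the paper), is the continuous-time cost functional

$$ J[x] \;=\; \frac{1}{2}\int_{0}^{T} \big\lVert \dot{x}(t) - f(x(t)) \big\rVert^{2}_{Q^{-1}} \,\mathrm{d}t \;+\; \frac{1}{2}\int_{0}^{T} \big\lVert y(t) - h(x(t)) \big\rVert^{2}_{R^{-1}} \,\mathrm{d}t, $$

where the first term penalizes the dynamical (model) error, the second the observational misfit, $h$ maps the state to the partially observed quantities, and the weights $Q^{-1}$ and $R^{-1}$ set the trade-off. Letting $Q^{-1}\to\infty$ enforces the perfect-model (strong-constraint) limit, while keeping it finite provides the regularization mentioned above; the stationarity conditions of $J$ are what lead to the two-point boundary value problem.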

Relevance: 40.00%

Abstract:

The application of Bluetooth (BT) technology to transportation has enabled researchers to make accurate travel time observations on freeways and arterial roads. Bluetooth traffic data are generally incomplete, as they relate only to vehicles that are equipped with Bluetooth devices and that are detected by the Bluetooth sensors of the road network. The fraction of detected vehicles out of the total number of transiting vehicles is often referred to as the Bluetooth Penetration Rate (BTPR). The aim of this study is to precisely define the spatio-temporal relationship between the quantities that become available through the partial, noisy BT observations and the hidden variables that describe the actual dynamics of vehicular traffic. To do so, we propose to incorporate a multi-class traffic model into a sequential Monte Carlo estimation algorithm. Our framework has been applied to empirical travel time investigations in the Brisbane metropolitan region.
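The abstract names a sequential Monte Carlo estimator without giving its form; the sketch below is only the generic bootstrap particle filter skeleton such a framework rests on, with a placeholder random-walk speed state and an illustrative travel-time likelihood in place of the authors' multi-class traffic model.

import numpy as np

# Generic bootstrap particle filter sketch (not the paper's model): hidden
# state = mean link speed [km/h]; a Bluetooth travel time is observed only
# on the sparse steps where an equipped vehicle is matched by the sensors.
rng = np.random.default_rng(1)
link_km, n_particles = 2.0, 500
particles = rng.uniform(20, 80, n_particles)             # initial speed hypotheses
weights = np.full(n_particles, 1.0 / n_particles)

def pf_step(particles, weights, bt_travel_time=None):
    particles = np.clip(particles + rng.normal(0, 2.0, particles.size), 5.0, 110.0)
    if bt_travel_time is not None:                        # partial observation available
        predicted_tt = link_km / particles * 3600.0       # seconds to traverse the link
        lik = np.exp(-0.5 * ((bt_travel_time - predicted_tt) / 15.0) ** 2)
        weights = weights * lik
        weights /= weights.sum()
        idx = rng.choice(particles.size, particles.size, p=weights)   # resample
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights

for t in range(20):
    obs = 180.0 if t % 5 == 0 else None                   # sparse BT detections
    particles, weights = pf_step(particles, weights, obs)
print(np.average(particles, weights=weights))             # posterior mean speed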

Relevance: 40.00%

Abstract:

We have obtained H$\alpha$ high spatial and time resolution observations of the upper solar chromosphere and supplemented these with multi-wavelength observations from the Solar Dynamics Observatory (SDO) and the {\it Hinode} Extreme-Ultraviolet Imaging Spectrometer (EIS). The H$\alpha$ observations were conducted on 11 February 2012 with the Hydrogen-Alpha Rapid Dynamics Camera (HARDcam) instrument at the National Solar Observatory's Dunn Solar Telescope. Our H$\alpha$ observations found large downflows of chromospheric material returning from coronal heights following a failed prominence eruption. We have detected several large condensations ("blobs") returning to the solar surface at velocities of $\approx$200 km s$^{-1}$ in both H$\alpha$ and several SDO AIA band passes. The average derived size of these "blobs" in H$\alpha$ is 500 km by 3000 km in the directions perpendicular and parallel to the direction of travel, respectively. A comparison of our "blob" widths with those found for coronal rain indicates that there are additional smaller, unresolved "blobs", in agreement with previous studies and recent numerical simulations. Our observed velocities and decelerations of the "blobs" in both H$\alpha$ and SDO bands are less than those expected for gravitational free-fall and imply additional magnetic or gas pressure impeding the flow. We derived a kinetic energy for the main eruption $\approx$2 orders of magnitude lower than that of a typical CME, which may explain its partial nature.
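For reference, the free-fall benchmark invoked here is the standard one (values not quoted from the paper): the solar surface gravitational acceleration is $g_{\odot}\approx274$ m s$^{-2}$ ($\approx$0.27 km s$^{-2}$), and material released from rest at a height $h$ above the surface would reach $v\approx\sqrt{2\,g_{\odot}\,h}$ if the variation of gravity with height is neglected; speeds and decelerations below these values are what indicate an additional retarding magnetic or gas-pressure force.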

Relevance: 30.00%

Abstract:

Reliable ambiguity resolution (AR) is essential to Real-Time Kinematic (RTK) positioning and its applications, since incorrect ambiguity fixing can lead to largely biased positioning solutions. A partial ambiguity fixing technique is developed to improve the reliability of AR, involving partial ambiguity decorrelation (PAD) and partial ambiguity resolution (PAR). Decorrelation transformation can substantially amplify the biases in the phase measurements. The purpose of PAD is to find the optimum trade-off between decorrelation and worst-case bias amplification. The concept of PAR refers to the case where only a subset of the ambiguities can be fixed correctly to their integers in the integer least-squares (ILS) estimation system at high success rates. As a result, RTK solutions can be derived from these integer-fixed phase measurements. This is meaningful provided that the number of reliably resolved phase measurements is also sufficiently large for least-squares estimation of the RTK solutions. Considering the GPS constellation alone, partially fixed measurements are often insufficient for positioning. AR reliability is usually characterised by the AR success rate. In this contribution an AR validation decision matrix is first introduced to understand the impact of the success rate. Moreover, the AR risk probability is included in a more complete evaluation of AR reliability. We use 16 ambiguity variance-covariance matrices with different levels of success rate to analyse the relation between the success rate and the AR risk probability. Next, the paper examines how, during the PAD process, a bias in one measurement is propagated and amplified onto many others, leading to more than one wrong integer and affecting the success probability. Furthermore, the paper proposes a partial ambiguity fixing procedure with a predefined success rate criterion and a ratio test in the ambiguity validation process. In this paper, the procedure is tested with simulated observations of the Galileo constellation. Numerical results from our experiment clearly demonstrate that only when the computed success rate is very high can the AR validation provide decisions about the correctness of AR that are close to the real world, with both low AR risk and low false alarm probabilities. The results also indicate that the PAR procedure can automatically choose an adequate number of ambiguities to fix at a given high success rate from the multiple constellations, instead of fixing all the ambiguities. This is a benefit that multiple GNSS constellations can offer.
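As an illustration of the "fix only a subset at a predefined success rate" idea, the sketch below applies the standard bootstrapping success-rate bound $P_{s}=\prod_{i}\big(2\Phi(\tfrac{1}{2\sigma_{\hat z_{i|I}}})-1\big)$ to the conditional standard deviations of decorrelated ambiguities and keeps the largest leading subset that meets a threshold. The variances are invented, and this is not claimed to reproduce the paper's PAD/PAR procedure or its ratio test.

import numpy as np
from scipy.stats import norm

# Sketch of partial ambiguity resolution driven by a success-rate criterion.
# Conditional standard deviations (in cycles) are hypothetical.
def bootstrap_success_rate(cond_std):
    """Bootstrapping success-rate bound from conditional std devs."""
    return float(np.prod(2.0 * norm.cdf(1.0 / (2.0 * cond_std)) - 1.0))

def partial_fix_subset(cond_std, p_min=0.999):
    order = np.argsort(cond_std)                 # most precise ambiguities first
    for k in range(len(cond_std), 0, -1):
        if bootstrap_success_rate(cond_std[order[:k]]) >= p_min:
            return order[:k]                     # indices of ambiguities to fix
    return order[:0]                             # fix none

cond_std = np.array([0.04, 0.06, 0.05, 0.20, 0.12, 0.35])
subset = partial_fix_subset(cond_std)
print(subset, bootstrap_success_rate(cond_std[subset]))

With these numbers only four of the six ambiguities satisfy the 0.999 criterion, which is exactly the situation in which a PAR solution is formed from the reliably fixed subset rather than from all ambiguities.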

Relevance: 30.00%

Abstract:

Generalized planar fault energy (GPFE) curves have been used to predict partial-dislocation-mediated processes in nanocrystalline materials, but their validity has not been evaluated experimentally. We report experimental observations of a large quantity of both stacking faults and twins in nanocrystalline (nc) Ni deformed at relatively low stresses in a tensile test. The experimental findings indicate that the GPFE curves can reasonably explain the formation of stacking faults, but they alone cannot adequately predict the propensity for deformation twinning.

Relevance: 30.00%

Abstract:

Deformation twins have been observed in nanocrystalline (nc) fcc metals with medium-to-high stacking fault energies such as aluminum, copper, and nickel. These metals in their coarse-grained states rarely deform by twinning at room temperature and low strain rates. Several twinning mechanisms have been reported that are unique to nc metals. This paper reviews experimental evidence on deformation twinning and partial dislocation emissions from grain boundaries, twinning mechanisms, and twins with zero macro-strain. Factors that affect the twinning propensity and recent analytical models on the critical grain sizes for twinning are also discussed. Current open issues on deformation twinning in nanocrystalline metals are listed.

Relevance: 30.00%

Abstract:

Previous experiments on nanocrystalline Ni were conducted under quasistatic strain rates ($\sim$3$\times$10$^{-3}$ s$^{-1}$), which are much lower than those used in typical molecular dynamics simulations (>3$\times$10$^{7}$ s$^{-1}$), making direct comparison of modeling and experiments very difficult. In this study, split Hopkinson bar tests revealed that nanocrystalline Ni prefers twinning to extended partials, especially under higher strain rates (10$^{3}$ s$^{-1}$). These observations contradict some reported molecular dynamics simulation results, where only extended partials, but no twins, were observed. The accuracy of the generalized planar fault energies is partly responsible but cannot fully account for this difference. (C) 2007 American Institute of Physics.

Relevance: 30.00%

Abstract:

Gaussian processes are gaining increasing popularity in the control community, in particular for the modelling of discrete-time state space systems. However, it has not been clear how to incorporate model information, in the form of known state relationships, when using a Gaussian process as a predictive model. An obvious example of such prior information is the relationship between position and velocity states. Incorporating such information would be beneficial both computationally and for faster dynamics learning. This paper introduces a method of achieving this, yielding faster dynamics learning and a reduction in computational effort from $O(Dn^2)$ to $O((D-F)n^2)$ in the prediction stage for a system with $D$ states, $F$ known state relationships and $n$ observations. The effectiveness of the method is demonstrated through its inclusion in the PILCO learning algorithm with application to the swing-up and balance of a torque-limited pendulum and the balancing of a robotic unicycle in simulation. © 2012 IEEE.
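A minimal illustration of the idea (not the paper's formulation): if position is known to be the integral of velocity, only the velocity transition needs a learned GP, and the position prediction follows deterministically, removing one of the $D$ output dimensions from the per-output $O(n^2)$ prediction cost. Kernel, data and dynamics below are invented for the sketch.

import numpy as np

# Sketch: exploit a known state relationship (position = integral of velocity)
# so that only the velocity transition is modelled by a GP regressor.
def rbf(a, b, ell=1.0, sf=1.0):
    d = a[:, None, :] - b[None, :, :]
    return sf**2 * np.exp(-0.5 * np.sum(d**2, axis=2) / ell**2)

rng = np.random.default_rng(2)
dt = 0.1
X = rng.uniform(-1, 1, (30, 2))                 # training states (pos, vel)
y_vel_next = X[:, 1] + dt * (-2.0 * X[:, 0])    # next velocity (toy linear oscillator)

K = rbf(X, X) + 1e-4 * np.eye(len(X))           # kernel matrix plus noise jitter
alpha = np.linalg.solve(K, y_vel_next)          # GP posterior weights

def predict_next_state(state):
    pos, vel = state
    vel_next = float(rbf(state[None, :], X) @ alpha)   # GP prediction: velocity only
    pos_next = pos + dt * vel                          # known relationship, no GP needed
    return np.array([pos_next, vel_next])

print(predict_next_state(np.array([0.3, -0.1])))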

Relevance: 30.00%

Abstract:

We present the Pan-STARRS1 discovery of the long-lived and blue transient PS1-11af, which was also detected by the Galaxy Evolution Explorer with coordinated observations in the near-ultraviolet (NUV) band. PS1-11af is associated with the nucleus of an early-type galaxy at redshift z = 0.4046 that exhibits no evidence for star formation or active galactic nucleus activity. Four epochs of spectroscopy reveal a pair of transient broad absorption features in the UV on otherwise featureless spectra. Despite the superficial similarity of these features to P-Cygni absorptions of supernovae (SNe), we conclude that PS1-11af is not consistent with the properties of known types of SNe. Blackbody fits to the spectral energy distribution are inconsistent with the cooling, expanding ejecta of a SN, and the velocities of the absorption features are too high to represent material in homologous expansion near a SN photosphere. However, the constant blue colors and slow evolution of the luminosity are similar to previous optically selected tidal disruption events (TDEs). The shape of the optical light curve is consistent with models for TDEs, but the minimum accreted mass necessary to power the observed luminosity is only 0.002 M$_{\odot}$, which points to a partial disruption model. A full disruption model predicts higher bolometric luminosities, which would require most of the radiation to be emitted in a separate component at high energies where we lack observations. In addition, the observed temperature is lower than that predicted by pure accretion disk models for TDEs and requires reprocessing to a constant, lower temperature. Three deep non-detections in the radio with the Very Large Array over the first two years after the event set strict limits on the production of any relativistic outflow comparable to Swift J1644+57, even if off-axis.
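For orientation only (the radiative efficiency and the integration of the light curve are not given in the abstract), the minimum accreted mass quoted above follows from energy bookkeeping of the form $M_{\rm acc} \gtrsim E_{\rm rad}/(\eta c^{2}) = \big(\int L_{\rm bol}\,\mathrm{d}t\big)/(\eta c^{2})$ for an assumed accretion efficiency $\eta$; a modest integrated luminosity therefore implies a small accreted mass for any plausible $\eta$, which is why a value of order 0.002 M$_{\odot}$ favors a partial rather than a full disruption.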

Relevance: 30.00%

Abstract:

Expression of the SS18/SYT-SSX fusion protein is believed to underlie the pathogenesis of synovial sarcoma (SS). Recent evidence suggests that deregulation of the Wnt pathway may play an important role in SS but the mechanisms whereby SS18-SSX might affect Wnt signaling remain to be elucidated. Here, we show that SS18/SSX tightly regulates the elevated expression of the key Wnt target AXIN2 in primary SS. SS18-SSX is shown to interact with TCF/LEF, TLE and HDAC but not β-catenin in vivo and to induce Wnt target gene expression by forming a complex containing promoter-bound TCF/LEF and HDAC but lacking β-catenin. Our observations provide a tumor-specific mechanistic basis for Wnt target gene induction in SS that can occur in the absence of Wnt ligand stimulation.

Relevance: 30.00%

Abstract:

Broadband shortwave and longwave radiative fluxes observed both at the surface and from space during the Radiative Atmospheric Divergence using ARM Mobile Facility, GERB data and AMMA Stations (RADAGAST) experiment in Niamey, Niger, in 2006 are presented. The surface fluxes were measured by the Atmospheric Radiation Measurement (ARM) Program Mobile Facility (AMF) at Niamey airport, while the fluxes at the top of the atmosphere (TOA) are from the Geostationary Earth Radiation Budget (GERB) instrument on the Meteosat-8 satellite. The data are analyzed as daily averages, in order to minimize sampling differences between the surface and top of atmosphere instruments, while retaining the synoptic and seasonal changes that are the main focus of this study. A cloud mask is used to identify days with cloud versus those with predominantly clear skies. The influence of temperature, water vapor, aerosols, and clouds is investigated. Aerosols are ubiquitous throughout the year and have a significant impact on both the shortwave and longwave fluxes. The large and systematic seasonal changes in temperature and column integrated water vapor (CWV) through the dry and wet seasons are found to exert strong influences on the longwave fluxes. These influences are often in opposition to each other, because the highest temperatures occur at the end of the dry season when the CWV is lowest, while in the wet season the lowest temperatures are associated with the highest values of CWV. Apart from aerosols, the shortwave fluxes are also affected by clouds and by the seasonal changes in CWV. The fluxes are combined to provide estimates of the divergence of radiation across the atmosphere throughout 2006. The longwave divergence shows a relatively small variation through the year, because of a partial compensation between the seasonal variations in the outgoing longwave radiation (OLR) and surface net longwave radiation. A simple model of the greenhouse effect is used to interpret this result in terms of the dependence of the normalized greenhouse effect at the TOA and of the effective emissivity of the atmosphere at the surface on the CWV. It is shown that, as the CWV increases, the atmosphere loses longwave energy to the surface with about the same increasing efficiency with which it traps the OLR. When combined with the changes in temperature, this maintains the atmospheric longwave divergence within the narrow range that is observed. The shortwave divergence is mainly determined by the CWV and aerosol loadings and the effect of clouds is much smaller than on the component fluxes.
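In terms of the daily mean fluxes described above (sign conventions chosen here for clarity, not quoted from the paper), the divergences being discussed are column budgets of the form

$$ D_{\rm LW} = \mathrm{OLR} - \big(F^{\uparrow}_{\rm LW,sfc} - F^{\downarrow}_{\rm LW,sfc}\big), \qquad D_{\rm SW} = \big(F^{\downarrow}_{\rm SW,TOA} - F^{\uparrow}_{\rm SW,TOA}\big) - \big(F^{\downarrow}_{\rm SW,sfc} - F^{\uparrow}_{\rm SW,sfc}\big), $$

so the near-constancy of $D_{\rm LW}$ follows when the seasonal swings of the OLR and of the surface net longwave flux largely cancel. The normalized greenhouse effect used in the interpretation can be written $g = 1 - \mathrm{OLR}/(\sigma T_s^{4})$, with the effective emissivity of the atmosphere playing the analogous role for the downward longwave flux at the surface; both quantities increase with the column integrated water vapor, which is the compensation mechanism described above.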