906 results for state estimation
Abstract:
2000 Mathematics Subject Classification: 60J80.
Abstract:
Crash reduction factors (CRFs) are used to estimate the number of traffic crashes expected to be prevented by investment in safety improvement projects. The method used to develop CRFs in Florida has been based on the commonly used before-and-after approach. This approach suffers from a widely recognized problem known as regression-to-the-mean (RTM). The Empirical Bayes (EB) method has been introduced as a means of addressing the RTM problem. This method requires information from both the treatment and reference sites in order to predict the expected number of crashes had the safety improvement projects at the treatment sites not been implemented. The information from the reference sites is estimated from a safety performance function (SPF), a mathematical relationship that links crashes to traffic exposure. The objective of this dissertation was to develop SPFs for different functional classes of the Florida State Highway System. Crash data from years 2001 through 2003, along with traffic and geometric data, were used in the SPF model development. SPFs were developed for both rural and urban roadway categories. The modeling data were based on one-mile segments with homogeneous traffic and geometric conditions within each segment; segments involving intersections were excluded. Scatter plots of the data show that the relationships between crashes and traffic exposure are nonlinear, with crashes increasing with traffic exposure at an increasing rate. Four regression models, namely Poisson (PRM), Negative Binomial (NBRM), zero-inflated Poisson (ZIP), and zero-inflated Negative Binomial (ZINB), were fitted to the one-mile segment records for individual roadway categories. The best model was selected for each category based on a combination of the likelihood ratio test, the Vuong statistical test, and Akaike's Information Criterion (AIC). The NBRM was found to be appropriate for only one category, while the ZINB model was found to be more appropriate for six other categories. The overall results show that the Negative Binomial distribution model generally provides a better fit for the data than the Poisson distribution model. In addition, the ZINB model was found to give the best fit when the count data exhibit excess zeros and over-dispersion, which was the case for most of the roadway categories. While model validation shows that most data points fall within the 95% prediction intervals of the models developed, the Pearson goodness-of-fit measure does not show statistical significance. This is expected, as traffic volume is only one of the many factors contributing to the overall crash experience, and the SPFs are to be applied in conjunction with Accident Modification Factors (AMFs) to further account for the safety impacts of major geometric features before arriving at the final crash prediction. However, with improved traffic and crash data quality, the crash prediction power of SPF models may be further improved.
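A minimal sketch of how such an SPF might be fitted, assuming one-mile segment records with hypothetical columns `crashes` and `aadt`; the negative binomial GLM below follows the common power-form SPF E[crashes] = exp(b0) * AADT^b1, not necessarily the dissertation's exact specification:

```python
# Hedged sketch: fit a power-form SPF, E[crashes] = exp(b0) * AADT^b1,
# to one-mile homogeneous segments with a negative binomial GLM.
# The file name and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

segments = pd.read_csv("segments.csv")          # one row per one-mile segment
X = sm.add_constant(np.log(segments["aadt"]))   # log link => power form in AADT
y = segments["crashes"]                         # observed crash counts

nb_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
print(nb_fit.summary())

# Expected crashes per mile at AADT = 20,000 vehicles/day
print(nb_fit.predict([[1.0, np.log(20_000)]]))
```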
Abstract:
As congestion management strategies begin to put more emphasis on person trips than vehicle trips, the need for vehicle occupancy data has become more critical. The traditional methods of collecting these data include the roadside windshield method and the carousel method, both of which are labor-intensive and expensive. An alternative is to use the vehicle occupancy information in traffic accident records. This method is cost-effective and may provide better spatial and temporal coverage than the traditional methods. However, it is subject to potential biases resulting from the under- and over-involvement of certain population sectors and certain types of accidents in traffic accident records. In this dissertation, three such potential biases, namely accident severity, driver's age, and driver's gender, were investigated, and the corresponding bias factors were developed as needed. The results show that although multi-occupant vehicles are involved in higher percentages of severe accidents than are single-occupant vehicles, multi-occupant vehicles were not overrepresented in the accident database as a whole. On the other hand, a significant difference was found between the distributions of the ages and genders of drivers involved in accidents and those of the general driving population. An information system that incorporates adjustments for the potential biases was developed to estimate the average vehicle occupancies (AVOs) for different types of roadways on the Florida state roadway system. A reasonableness check of the results from the system shows AVO estimates that are highly consistent with expectations. In addition, comparisons of AVOs from accident data with the field estimates show that the two data sources produce relatively consistent results. While accident records can be used to obtain historical AVO trends and field data can be used to estimate current AVOs, no known methods have been developed to project future AVOs. Four regression models for predicting weekday AVOs at different levels of geographic area and roadway type were therefore developed as part of this dissertation. The models show that socioeconomic factors such as income, vehicle ownership, and employment have a significant impact on AVOs.
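A toy illustration of the bias-factor idea, with entirely made-up group shares and per-group occupancies (none of these figures come from the dissertation): weighting group AVOs by their share of the driving population, rather than by their share of accident records, removes the over-representation bias.

```python
# Hedged sketch: adjust an accident-derived AVO for driver-age bias by
# reweighting group means from accident shares to driving-population shares.
# All numbers are illustrative, not Florida data.
import numpy as np

avo_by_group = np.array([1.60, 1.25, 1.40])      # mean AVO: young/middle/senior
share_accidents = np.array([0.35, 0.50, 0.15])   # group share in accident records
share_population = np.array([0.20, 0.60, 0.20])  # group share of driving population

naive_avo = np.average(avo_by_group, weights=share_accidents)
adjusted_avo = np.average(avo_by_group, weights=share_population)
print(f"naive: {naive_avo:.3f}, bias-adjusted: {adjusted_avo:.3f}")
```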
Abstract:
Annual Average Daily Traffic (AADT) is a critical input to many transportation analyses. By definition, AADT is the average 24-hour volume at a highway location over a full year. Traditionally, AADT is estimated using a mix of permanent and temporary traffic counts. Because field collection of traffic counts is expensive, it is usually done only for major roads, leaving most local roads without any AADT information. However, AADTs are needed for local roads in many applications. For example, AADTs are used by state Departments of Transportation (DOTs) to calculate the crash rates of all local roads in order to identify the top five percent of hazardous locations for annual reporting to the U.S. DOT. This dissertation develops a new method for estimating AADTs for local roads using travel demand modeling. A major component of the new method is a parcel-level trip generation model that estimates the trips generated by each parcel, using tax parcel data together with the trip generation rates and equations provided by the ITE Trip Generation Report. The generated trips are then distributed to existing traffic count sites using a parcel-level gravity-model trip distribution. The all-or-nothing assignment method is then used to assign the trips onto the roadway network to estimate the final AADTs. The entire process was implemented in the Cube demand modeling system, with extensive spatial data processing in ArcGIS. To evaluate the performance of the new method, data from several study areas in Broward County, Florida were used. The estimated AADTs were compared with those from two existing methods, using actual traffic counts as the ground truth. The results show that the new method performs better than both existing methods. One limitation of the new method is that it relies on Cube, which limits the number of zones to 32,000; a study area exceeding this limit must therefore be partitioned into smaller areas. Because AADT estimates for roads near the boundary areas were found to be less accurate, further research could examine the best way to partition a study area to minimize this impact.
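A stripped-down sketch of the trip distribution step, assuming parcel-level productions, site attractions, and a travel-cost matrix are already in hand; the power-function impedance and all values are assumptions for illustration, not necessarily the calibrated form used in the dissertation:

```python
# Hedged sketch: distribute parcel-generated trips to traffic count sites
# with a gravity model. Values and the impedance exponent are made up.
import numpy as np

productions = np.array([120.0, 80.0, 200.0])   # trips per parcel (from ITE rates)
attractions = np.array([150.0, 250.0])         # attraction at each count site
cost = np.array([[2.0, 5.0],
                 [4.0, 3.0],
                 [6.0, 1.5]])                  # travel cost, parcel -> site

weights = attractions * cost ** -2.0           # attraction damped by impedance
shares = weights / weights.sum(axis=1, keepdims=True)
trips = productions[:, None] * shares          # parcel-to-site trip table
print(trips.round(1))
```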
Abstract:
This paper presents a novel method for determining the temperature of a radiating body. The experimental method requires only very common instrumentation. It is based on the measurement of the stationary temperature of an object placed at different distances from the body and on the application of the energy balance equation in a stationary state. The method allows one to obtain the temperature of an inaccessible radiating body when radiation measurements are not available. The method has been applied to the determination of the filament temperature of incandescent lamps of different powers.
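A hedged sketch of the kind of stationary-state balance the method rests on, with our own symbols (the paper's exact formulation may differ): for a small probe of area A, absorptivity α, and emissivity ε placed at distance d from a filament radiating total power P,

```latex
% Stationary state of the probe: absorbed flux = net re-emitted flux
\alpha\,\frac{P}{4\pi d^{2}}\,A
  \;=\; \varepsilon\,\sigma\,A\,\bigl(T_{s}^{4}-T_{\mathrm{amb}}^{4}\bigr)
% Measuring the stationary temperature T_s at several distances d fixes P;
% the Stefan-Boltzmann law  P = \varepsilon_{f}\,\sigma\,A_{f}\,T_{f}^{4}
% then yields the filament temperature T_f from the filament's area A_f
% and emissivity \varepsilon_f.
```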
Abstract:
Single-particle mixing state information can be a powerful tool for assessing the relative impact of local and regional sources of ambient particulate matter in urban environments. However, quantitative mixing state data are challenging to obtain using single-particle mass spectrometers. In this study, the quantitative chemical composition of carbonaceous single particles has been determined using an aerosol time-of-flight mass spectrometer (ATOFMS) as part of the MEGAPOLI 2010 winter campaign in Paris, France. Relative peak areas of marker ions for elemental carbon (EC), organic aerosol (OA), ammonium, nitrate, sulfate and potassium were compared with concurrent measurements from an Aerodyne high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS), a thermal-optical OC/EC analyser and a particle-into-liquid sampler coupled with ion chromatography (PILS-IC). ATOFMS-derived estimated mass concentrations reproduced the variability of these species well (R² = 0.67-0.78), and 10 discrete mixing states for carbonaceous particles were identified and quantified. The chemical mixing state of HR-ToF-AMS organic aerosol factors, resolved using positive matrix factorisation, was also investigated through comparison with the ATOFMS dataset. The results indicate that hydrocarbon-like OA (HOA) detected in Paris is associated with two EC-rich mixing states which differ in their relative sulfate content, while fresh biomass burning OA (BBOA) is associated with two mixing states which differ significantly in their OA/EC ratios. Aged biomass burning OA (OOA2-BBOA) was found to be significantly internally mixed with nitrate, while secondary, oxidised OA (OOA) was associated with five particle mixing states, each exhibiting different relative secondary inorganic ion content. Externally mixed secondary organic aerosol was not observed. These findings demonstrate the range of primary and secondary organic aerosol mixing states in Paris. Examination of the temporal behaviour and chemical composition of the ATOFMS classes also enabled estimation of the relative contribution of transported emissions of each chemical species and total particle mass in the size range investigated. Only 22% of the total ATOFMS-derived particle mass was apportioned to fresh, local emissions, with 78% apportioned to regional/continental-scale emissions.
Abstract:
The map representation of an environment should be selected based on its intended application. For example, a geometrically accurate map describing the Euclidean space of an environment is not necessarily the best choice if only a small subset of its features is required. One possible subset is the orientations of the flat surfaces in the environment, represented by a special parameterization of normal vectors called axes. Devoid of positional information, the entries of an axis map form a non-injective relationship with the flat surfaces in the environment, so that physically distinct flat surfaces are represented by a single axis. This drastically reduces the complexity of the map while retaining important information about the environment that can be used in meaningful applications in both two and three dimensions. This thesis presents axis mapping, an algorithm that accurately and automatically estimates an axis map of an environment from sensor measurements collected by a mobile platform. Furthermore, two major applications of axis maps are developed and implemented. First, the LiDAR compass is a heading estimation algorithm that compares measurements of axes with an axis map of the environment. Pairing the LiDAR compass with simple translation measurements forms the basis for an accurate two-dimensional localization algorithm. It is shown that this algorithm eliminates the growth of heading error in both indoor and outdoor environments, resulting in accurate localization over long distances. Second, in the context of geotechnical engineering, a three-dimensional axis map is called a stereonet, a tool used to examine the strength and stability of a rock face. Axis mapping provides a novel approach to creating accurate stereonets safely, rapidly, and inexpensively compared to established methods. The non-injective property of axis maps is leveraged to probabilistically describe the relationships between non-sequential measurements of the rock face. The automatic estimation of stereonets was tested in three separate outdoor environments. It is shown that axis mapping can accurately estimate stereonets while improving safety, requiring significantly less time and effort, and lowering costs compared to traditional and current state-of-the-art approaches.
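A small sketch of the non-injective axis parameterization described above, under the assumption (ours, not the thesis's convention) that axes are canonicalised by flipping normals into one hemisphere, so that a surface and its antipodal twin share one map entry:

```python
# Hedged sketch: map a surface normal to its axis so that n and -n
# coincide. The hemisphere-flip rule is an illustrative convention.
import numpy as np

def to_axis(normal):
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    for component in reversed(n):          # flip so last nonzero entry > 0
        if abs(component) > 1e-12:
            return n if component > 0 else -n
    raise ValueError("zero vector has no axis")

print(to_axis([0.0, 0.0, 1.0]))    # [0. 0. 1.]
print(to_axis([0.0, 0.0, -1.0]))   # [0. 0. 1.]  -- same axis as above
```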
Abstract:
In this paper we present a convolutional neural network (CNN)-based model for human head pose estimation in low-resolution multi-modal RGB-D data. We pose the problem as one of classification of human gazing direction. We further fine-tune a regressor based on the learned deep classifier. Next we combine the two models (classification and regression) to estimate approximate regression confidence. We present state-of-the-art results on datasets that span the range from high-resolution human-robot interaction data (close-up faces plus depth information) to challenging low-resolution outdoor surveillance data. We build upon our robust head-pose estimation and further introduce a new visual attention model to recover interaction with the environment. Using this probabilistic model, we show that many higher-level scene understanding tasks, such as human-human/scene interaction detection, can be achieved. Our solution runs in real-time on commercial hardware.
Abstract:
We apply the formalism of quantum estimation theory to extract information about potential collapse mechanisms of the continuous spontaneous localisation (CSL) form. In order to estimate the strength with which the field responsible for the CSL mechanism couples to massive systems, we consider the optomechanical interaction between a mechanical resonator and a cavity field. Our estimation strategy proceeds by probing either the state of the oscillator or that of the electromagnetic field that drives its motion. In particular, we concentrate on all-optical measurements, such as homodyne and heterodyne measurements. We also compare the performance of these strategies with that of a spin-assisted optomechanical system, where the estimation of the CSL parameter is performed through time-gated spin-like measurements.
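In quantum estimation theory, the figure of merit behind such comparisons can be sketched as follows (our notation, not the paper's): for M repetitions, any unbiased estimator of the CSL coupling λ satisfies

```latex
\operatorname{Var}(\hat{\lambda})
  \;\ge\; \frac{1}{M\,F(\lambda)}
  \;\ge\; \frac{1}{M\,H(\lambda)},
% F: classical Fisher information of the chosen measurement (homodyne,
% heterodyne, or spin-like); H: quantum Fisher information, the optimum
% of F over all measurements on the probe state.
```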
Abstract:
The present paper describes a system for the construction of visual maps ("mosaics") and motion estimation for a set of AUVs (Autonomous Underwater Vehicles). The robots are equipped with down-looking cameras, which are used to estimate their motion with respect to the seafloor and to build an online mosaic. As the mosaic increases in size, a systematic bias is introduced in its alignment, resulting in an erroneous output. The theoretical concepts associated with the use of an Augmented State Kalman Filter (ASKF) were applied to optimally estimate both the visual map and the fleet positions.
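A minimal sketch of the state-augmentation idea behind an ASKF, under illustrative assumptions (2-D positions, identity dynamics, made-up noise levels): each newly anchored mosaic pose is cloned into the state, so a later overlap measurement can correct both the current and the past pose.

```python
# Hedged sketch of an augmented-state Kalman update for mosaic alignment.
# Dimensions, dynamics, and noise levels are illustrative only.
import numpy as np

x = np.zeros(2)                        # current vehicle position (x, y)
P = np.eye(2) * 0.1                    # position covariance

# Augment: clone the current pose when a new mosaic image is anchored
J = np.vstack([np.eye(2), np.eye(2)])  # duplication Jacobian
x, P = J @ x, J @ P @ J.T              # state: (x, y, x_anchor, y_anchor)

# Later observation: displacement of current pose relative to the anchor
H = np.hstack([np.eye(2), -np.eye(2)])
R = np.eye(2) * 0.05                   # measurement noise
z = np.array([1.0, 0.2])               # measured relative displacement

innovation = z - H @ x
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
x = x + K @ innovation                 # corrects current AND anchored pose
P = (np.eye(4) - K @ H) @ P
print(x.round(3))
```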
Abstract:
This work presents a periodic state space model for monthly temperature data. Additionally, issues such as parameter estimation and the Kalman filter recursions adapted to a periodic model are discussed. The framework is applied to a long-term monthly temperature time series for Lisbon.
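A hedged sketch of the periodic linear state-space form implied here, in our notation, with period s = 12 for monthly data so that the system matrices repeat with the season:

```latex
x_{t+1} = F_{t}\,x_{t} + w_{t}, \quad w_{t}\sim N(0,Q_{t}), \qquad
y_{t}   = H_{t}\,x_{t} + v_{t}, \quad v_{t}\sim N(0,R_{t}),
% with F_{t+12}=F_{t}, Q_{t+12}=Q_{t}, etc.; the Kalman prediction step
% then simply uses the season-indexed matrices:
\hat{x}_{t+1\mid t} = F_{t}\,\hat{x}_{t\mid t}, \qquad
P_{t+1\mid t} = F_{t}\,P_{t\mid t}\,F_{t}^{\top} + Q_{t}.
```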
Abstract:
Recent developments in automation, robotics, and artificial intelligence have pushed these technologies into wider use, and driverless transport systems are already state of the art on certain legs of transportation. This has prompted the maritime industry to join the advancement. The case organisation, the AAWA initiative, is a joint industry-academia research consortium with the objective of developing readiness for the first commercial autonomous solutions, exploiting state-of-the-art autonomous and remote technology. The initiative develops both autonomous and remote operation technology for navigation, machinery, and all on-board operating systems. The aim of this study is to develop a model with which to estimate and forecast operational costs, and thus enable comparisons between manned and autonomous cargo vessels. The building process of the model is also described and discussed. Furthermore, the model aims to track and identify the critical success factors of the chosen ship design, and to enable monitoring and tracking of the incurred operational costs as the life cycle of the vessel progresses. The study adopts the constructive research approach, as the aim is to develop a construct to meet the needs of the case organisation. Data were collected through discussions and meetings with consortium members and researchers, as well as through written and internal communications material. The model itself is built using activity-based life cycle costing, which enables both realistic cost estimation and forecasting as well as the identification of critical success factors, thanks to the process orientation adopted from activity-based costing and the statistical nature of Monte Carlo simulation techniques. As the model met the multiple aims set for it and the case organisation was satisfied with it, it can be argued that activity-based life cycle costing is a suitable method for cost estimation and forecasting for autonomous cargo vessels. The model was able to perform the cost analysis and forecasting, as well as to trace the critical success factors. Later on, it also enabled, albeit hypothetically, monitoring and tracking of the incurred costs. By collecting costs this way, the activity-based LCC model can facilitate learning from, and continuous improvement of, the autonomous vessel. As for the building process of the model, an individual approach was chosen, while still using the implementation and model-building steps presented in the existing literature. This was due to two factors: the nature of the model and, perhaps even more importantly, the nature of the case organisation. The loosely organised network structure means that knowing the case organisation and its aims is of great importance when conducting constructive research.
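A toy sketch of the Monte Carlo side of activity-based life cycle costing, with entirely invented activities and cost distributions (the consortium's actual cost structure is not given in this abstract): annual activity costs are sampled and accumulated over the vessel's life, yielding a cost distribution rather than a point estimate.

```python
# Hedged sketch: Monte Carlo activity-based life cycle costing.
# Activities, distributions, and parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
years, runs = 25, 10_000

def annual_cost(rng, runs):
    maintenance = rng.triangular(0.8, 1.0, 1.5, runs)  # MEUR/year
    remote_ops = rng.normal(0.6, 0.1, runs)
    insurance = rng.uniform(0.2, 0.4, runs)
    return maintenance + remote_ops + insurance

lcc = sum(annual_cost(rng, runs) for _ in range(years))  # per-run totals
print(f"mean LCC: {lcc.mean():.1f} MEUR, "
      f"95th percentile: {np.percentile(lcc, 95):.1f} MEUR")
```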
Abstract:
Due to the increasing integration density and operating frequency of today's high performance processors, the temperature of a typical chip can easily exceed 100 degrees Celsius. However, the runtime thermal state of a chip is very hard to predict and manage due to the random nature of computing workloads, as well as process, voltage, and ambient temperature variability (together called PVT variability). The uneven nature (both in time and space) of the chip's heat dissipation could lead to severe reliability issues and error-prone chip behavior (e.g., timing errors). Many dynamic power/thermal management techniques have been proposed to address this issue, such as dynamic voltage and frequency scaling (DVFS) and clock gating. However, most such techniques require accurate knowledge of the runtime thermal state of the chip to make efficient and effective control decisions. In this work we address the problem of tracking and managing the temperature of microprocessors, which includes the following sub-problems: (1) how to design an efficient sensor-based thermal tracking system on a given design that could provide accurate real-time temperature feedback; (2) what statistical techniques could be used to estimate the full-chip thermal profile based on very limited (and possibly noise-corrupted) sensor observations; (3) how to adapt to changes in the underlying system's behavior, since such changes could impact the accuracy of the thermal estimation. The thermal tracking methodology proposed in this work is enabled by on-chip sensors, which are already implemented in many modern processors. We first investigate the underlying relationship between heat distribution and power consumption, then introduce an accurate thermal model for the chip system. Based on this model, we characterize the temperature correlation that exists among different chip modules and explore statistical approaches (such as those based on the Kalman filter) that could exploit this correlation to estimate accurate chip-level thermal profiles in real time. Such estimation is performed from limited sensor information, because sensors are usually resource-constrained and noise-corrupted. We further extend the standard Kalman filter approach to account for (1) nonlinear effects such as the leakage-temperature interdependency and (2) varying statistical characteristics in the underlying system model. The proposed thermal tracking infrastructure and estimation algorithms consistently generate accurate thermal estimates even when the system is switching among workloads with very distinct characteristics. Through experiments, our approaches have demonstrated promising results, with much higher accuracy than existing approaches. Such results can be used to ensure thermal reliability and improve the effectiveness of dynamic thermal management techniques.
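A compact sketch of the correlation-exploiting estimation step, under a toy spatial covariance (ours, not the dissertation's calibrated thermal model): given a few noisy sensor readings, the remaining modules' temperatures are estimated as the conditional Gaussian mean.

```python
# Hedged sketch: estimate a full-chip thermal profile from two noisy
# sensors via the conditional Gaussian mean. The covariance, sensor
# placement, and numbers are illustrative assumptions only.
import numpy as np

n_modules = 6
sensors = [0, 3]                                   # instrumented modules
idx = np.arange(n_modules)
Sigma = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 2.0)  # toy correlation
mean = np.full(n_modules, 70.0)                    # prior mean temperature (C)

readings = np.array([82.0, 76.0])                  # noisy sensor outputs
noise = np.eye(len(sensors)) * 0.5                 # sensor noise covariance

S_oo = Sigma[np.ix_(sensors, sensors)] + noise     # observed-observed block
S_ao = Sigma[:, sensors]                           # all-observed block
profile = mean + S_ao @ np.linalg.solve(S_oo, readings - mean[sensors])
print(profile.round(1))                            # per-module estimates (C)
```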
Abstract:
In this paper we show how to accurately perform a quasi-a priori estimation of the truncation error of steady-state solutions computed by a discontinuous Galerkin spectral element method. We estimate the spatial truncation error using the τ-estimation procedure. While most works in the literature rely on fully time-converged solutions on grids with different spacing to perform the estimation, we use non time-converged solutions on one grid with different polynomial orders. The quasi-a priori approach estimates the error while the residual of the time-iterative method is not negligible. Furthermore, the method permits one to decouple the surface and the volume contributions of the truncation error, and provides information about the anisotropy of the solution as well as its rate of convergence in polynomial order. First, we focus on the analysis of one dimensional scalar conservation laws to examine the accuracy of the estimate. Then, we extend the analysis to two dimensional problems. We demonstrate that this quasi-a priori approach yields a spectrally accurate estimate of the truncation error.
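A hedged sketch of the τ-estimation identity underlying such approaches, in our notation (the paper's exact operators may differ): with R_N the DG discrete operator at polynomial order N, I_N the interpolation to order N, and u_P a higher-order solution (P > N) that need not be fully time-converged,

```latex
\tau^{N} \;\approx\; R_{N}\bigl(I_{N}\,u_{P}\bigr)
         \;-\; I_{N}\,R_{P}\bigl(u_{P}\bigr),
% the second term is the quasi-a priori correction for the nonzero residual
% of the not-yet-converged fine solution; it vanishes once u_P is converged.
```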