929 results for Remediation time estimation
Abstract:
This study focuses on explicitly quantifying the sediment budget of deeply incised ravines in the lower Le Sueur River watershed in southern Minnesota. High-rate gully erosion equations, along with the Universal Soil Loss Equation (USLE), were implemented in a numerical modeling approach based on time-integration of the sediment balance equations. The model estimates the rates of ravine width and depth change and the amount of sediment periodically flushed from the ravines. Components of the sediment budget of the ravines were simulated with the model, and the results suggest that the ravine walls are the major sediment source. A sensitivity analysis revealed that the erodibility coefficients of the gully bed and wall, the local slope angle, and Manning's roughness coefficient are the key parameters controlling the rate of sediment production. Recommendations to guide further monitoring efforts in the watershed and more detailed modeling approaches are highlighted as a result of this modeling effort.
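As a rough illustration of the time-integration idea described above, the sketch below steps a single ravine cross-section forward in time with an excess-shear-stress erosion law for the bed and walls. The rate law, parameter values, and geometry are illustrative assumptions, not the study's calibrated model.

```python
import numpy as np

# Illustrative time-integration of a sediment balance for one ravine
# cross-section. The excess-shear-stress rate law and all parameter
# values are assumptions, not the study's calibrated model.

def erosion_rate(k, tau, tau_c):
    """Erosion rate [m/s] from an excess-shear-stress law."""
    return k * max(tau - tau_c, 0.0)

dt = 86400.0                 # time step: one day [s]
n_steps = 365
width, depth = 2.0, 1.5      # initial ravine geometry [m]
k_wall, k_bed = 1e-7, 5e-8   # erodibility coefficients [m/s/Pa] (assumed)
tau_c = 2.0                  # critical shear stress [Pa] (assumed)
rho_s = 1600.0               # bulk sediment density [kg/m3]
length = 100.0               # ravine reach length [m]

sediment_out = 0.0
for _ in range(n_steps):
    tau = 5.0                                          # boundary shear stress [Pa]; a real model would route flow
    dw = 2.0 * erosion_rate(k_wall, tau, tau_c) * dt   # both walls retreat
    dd = erosion_rate(k_bed, tau, tau_c) * dt
    width += dw
    depth += dd
    # mass balance: eroded cross-sectional area times reach length
    sediment_out += (dw * depth + dd * width) * length * rho_s

print(f"width={width:.2f} m, depth={depth:.2f} m, yield={sediment_out/1e3:.1f} t/yr")
```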
Abstract:
This thesis stems from a project with the real-time environmental monitoring company EMSAT Corporation, which was looking for methods to automatically flag spikes and other anomalies in its environmental sensor data streams. The problem presents several challenges: near real-time anomaly detection, absence of labeled data, and time-changing data streams. Here, we address this problem using both a statistical parametric approach and a non-parametric approach, Kernel Density Estimation (KDE). The main contribution of this thesis is extending KDE to work more effectively for evolving data streams, particularly in the presence of concept drift. To address that, we have developed a framework for integrating the Adaptive Windowing (ADWIN) change detection algorithm with KDE. We have tested this approach on several real-world data sets and received positive feedback from our industry collaborator. Some results appearing in this thesis have been presented at the ECML PKDD 2015 Doctoral Consortium.
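A minimal sketch of the idea, assuming a sliding-window KDE anomaly scorer in which a fixed-size window stands in for the adaptive window that ADWIN would maintain; the thresholds and window sizes are placeholders, not the thesis's implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Sliding-window KDE anomaly scoring over a data stream. A real
# implementation would replace the fixed-size window below with the ADWIN
# change detector, which shrinks the window when it detects concept drift.

class StreamingKDEDetector:
    def __init__(self, window_size=500, density_threshold=1e-3):
        self.window = []
        self.window_size = window_size
        self.density_threshold = density_threshold

    def update(self, x):
        is_anomaly = False
        if len(self.window) > 50:
            kde = gaussian_kde(np.asarray(self.window))
            is_anomaly = kde.evaluate([x])[0] < self.density_threshold
        self.window.append(x)
        # Drop the oldest points; ADWIN would instead adapt this cut-off.
        if len(self.window) > self.window_size:
            self.window.pop(0)
        return is_anomaly

rng = np.random.default_rng(0)
detector = StreamingKDEDetector()
stream = np.concatenate([rng.normal(20, 1, 1000), [35.0], rng.normal(20, 1, 100)])
flags = [detector.update(x) for x in stream]
print(f"{sum(flags)} points flagged")
```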
Abstract:
The quantitative diatom analysis of 218 surface sediment samples recovered in the Atlantic and western Indian sectors of the Southern Ocean is used to define a reference database for paleotemperature estimation from diatom assemblages using the Imbrie and Kipp transfer function method. The criteria that justify the exclusion of samples and species from the raw data set in order to define a reference database are outlined and discussed. Sensitivity tests with eight data sets were carried out to evaluate the effects of overall dominance of single species, different methods of species abundance ranking, and no-analog conditions (e.g., Eucampia antarctica) on the estimated paleotemperatures. The defined transfer functions were applied to a sediment core from the northern Antarctic zone. Overall dominance of Fragilariopsis kerguelensis in the diatom assemblages resulted in a close affinity between the paleotemperature curve and the downcore relative abundance pattern of this species. Logarithmic conversion of the counting data, applied together with other ranking methods to compensate for the dominance of F. kerguelensis, yielded the best statistical results. A reliable diatom transfer function for future paleotemperature estimation is presented.
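For readers unfamiliar with the Imbrie and Kipp approach, the following sketch shows its general shape: reduce the core-top species abundance matrix to a few factors, regress modern sea surface temperature on the factor scores, and apply the regression downcore. Plain PCA stands in here for the varimax-rotated factor analysis of the original method, and the data are random placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Imbrie-and-Kipp-style transfer function sketch with placeholder data:
# factor-analyse core-top assemblages, calibrate against modern SST,
# then predict paleotemperatures for downcore samples.

rng = np.random.default_rng(1)
core_top_counts = rng.dirichlet(np.ones(30), size=218)   # 218 samples x 30 species (relative abundances)
modern_sst = rng.uniform(-1.0, 18.0, size=218)           # calibration temperatures (placeholder)

pca = PCA(n_components=6)                                # varimax-rotated factors in the original method
scores = pca.fit_transform(np.sqrt(core_top_counts))     # sqrt/log ranking dampens dominant species
reg = LinearRegression().fit(scores, modern_sst)

downcore_counts = rng.dirichlet(np.ones(30), size=50)
paleo_sst = reg.predict(pca.transform(np.sqrt(downcore_counts)))
print(paleo_sst[:5])
```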
Abstract:
ODP Site 1089 is optimally located to monitor the occurrence of maxima in Agulhas heat and salt spillage from the Indian to the Atlantic Ocean. Radiolarian-based paleotemperature transfer functions allowed the climatic history of the last 450 kyr to be reconstructed at this location. A warm sea surface temperature anomaly during Marine Isotope Stage (MIS) 10 was recognized and traced to other oceanic records along the surface branch of the global thermohaline circulation (THC) system; it is particularly marked at locations where a strong interaction between oceanic and atmospheric overturning cells and fronts occurs. This anomaly is absent in the Vostok ice core deuterium record and in oceanic records from the Antarctic Zone. However, it is present in the deuterium excess record from the Vostok ice core, interpreted as reflecting the temperature at the moisture source site for the snow precipitated at Vostok Station. As atmospheric models predict a subtropical Indian source for such moisture, this provides the necessary teleconnection between East Antarctica and ODP Site 1089, since the subtropical Indian Ocean is also the source area of the Agulhas Current, the main climate agent at our study location. The presence of the MIS 10 anomaly in the delta13C foraminiferal records from the same core supports its connection to oceanic mechanisms, linking stronger Agulhas spillover intensity to increased productivity in the study area. We suggest, in analogy to modern oceanographic observations, that this is a consequence of a shallow nutricline induced by eddy mixing and baroclinic tide generation, which are in turn connected to the flow geometry and intensity of the Agulhas Current as it flows past the Agulhas Bank. We interpret the intensified inflow of the Agulhas Current into the South Atlantic as responding to the switch between lower and higher amplitude in the insolation forcing in the Agulhas Current source area. This would result in higher SSTs in the Cape Basin during the glacial MIS 10, due to the release into the South Atlantic of the heat previously accumulated in the subtropical and equatorial Indian and Pacific Oceans. If our explanation of the MIS 10 anomaly in terms of an insolation variability switch is correct, we might expect a future Agulhas SST anomaly event to further delay the onset of the next glacial age. In fact, the insolation forcing conditions for the Holocene (the current interglacial) are very similar to those present during MIS 11 (the interglacial preceding MIS 10), as both periods are characterized by low insolation variability in the Agulhas Current source area. Natural climatic variability would thus force the Earth system in the same direction as the anthropogenic global warming trend, leading to even warmer than expected global temperatures in the near future.
Abstract:
Finite-Difference Time-Domain (FDTD) algorithms are well-established tools of computational electromagnetism. Because of their practical implementation as computer codes, they are affected by numerical artefacts and noise. In order to obtain better results we propose using Principal Component Analysis (PCA), a multivariate statistical technique. PCA has been successfully used for the analysis of noise and spatio-temporal structure in sequences of images. It allows a straightforward discrimination between the numerical noise and the actual electromagnetic variables, and a quantitative estimation of their respective contributions. Moreover, the FDTD results can be filtered to remove the effect of the noise. In this contribution we show how the method can be applied to several FDTD simulations: the propagation of a pulse in vacuum and the analysis of two-dimensional photonic crystals. In the latter case, PCA has revealed hidden electromagnetic structures related to actual modes of the photonic crystal.
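A minimal sketch of PCA filtering of FDTD output, assuming the field snapshots are stacked as rows of a matrix and decomposed with an SVD; the number of retained components and the synthetic data are placeholders.

```python
import numpy as np

# PCA-style filtering of FDTD field snapshots: flatten each time snapshot
# into one row, decompose the snapshot matrix with an SVD, and rebuild the
# field from the leading components only, treating the rest as noise.
# The "fields" array is a random placeholder for real FDTD output.

rng = np.random.default_rng(0)
n_snapshots, nx, ny = 200, 64, 64
fields = rng.normal(size=(n_snapshots, nx, ny))

snapshots = fields.reshape(n_snapshots, -1)
mean = snapshots.mean(axis=0)
u, s, vt = np.linalg.svd(snapshots - mean, full_matrices=False)

k = 5                                                    # principal components kept
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
filtered = (u[:, :k] * s[:k]) @ vt[:k] + mean
filtered_fields = filtered.reshape(n_snapshots, nx, ny)
print(f"variance captured by {k} components: {explained:.1%}")
```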
Abstract:
Prospective estimation of patient CT organ dose prior to examination can help technologists adjust CT scan settings to reduce the radiation dose to the patient while maintaining image quality. One possible way to achieve this is to match the patient to digital models precisely. In previous work, patient matching was performed manually by matching the trunk height, defined as the distance from the top of the clavicle to the bottom of the pelvis. However, this matching method is time consuming and impractical for scout images in which the entire trunk is not included. The purpose of this work was to develop an automatic patient matching strategy and verify its accuracy.
Abstract:
This dissertation contributes to the rapidly growing empirical research area in the field of operations management. It contains two essays, tackling two different sets of operations management questions which are motivated by and built on field data sets from two very different industries --- air cargo logistics and retailing.
The first essay, based on the data set obtained from a world-leading third-party logistics company, develops a novel and general Bayesian hierarchical learning framework for estimating customers' spillover learning, that is, customers' learning about the quality of a service (or product) from their previous experiences with similar yet not identical services. We then apply our model to the data set to study how customers' experiences from shipping on a particular route affect their future decisions about shipping not only on that route, but also on other routes serviced by the same logistics company. We find that customers indeed borrow experiences from similar but different services to update the quality beliefs that determine future purchase decisions, and that service quality beliefs have a significant impact on their future purchasing decisions. Moreover, customers are risk averse: they are averse not only to experience variability but also to belief uncertainty (i.e., customers' uncertainty about their own beliefs). Finally, belief uncertainty affects customers' utilities more than experience variability does.
The second essay is based on a data set obtained from a large Chinese supermarket chain, which contains sales as well as both wholesale and retail prices of un-packaged perishable vegetables. Recognizing the special characteristics of this particular product category, we develop a structural estimation model in a discrete-continuous choice framework. Building on this framework, we then study an optimization model for joint pricing and inventory management of multiple products, which aims to improve the company's profit from direct sales while reducing food waste and thereby improving social welfare.
Collectively, the studies in this dissertation provide useful modeling ideas, decision tools, insights, and guidance for firms to utilize vast sales and operations data to devise more effective business strategies.
Abstract:
Spectral albedo has been measured at Dome C since December 2012 in the visible and near infrared (400-1050 nm) at sub-hourly resolution using a home-made spectral radiometer. Superficial specific surface area (SSA) has been estimated by fitting the observed albedo spectra to the analytical Asymptotic Approximation Radiative Transfer theory (AART). The dataset includes fully-calibrated albedo and SSA that pass several quality checks, as described in the companion article. Only data for solar zenith angles less than 75° have been included, which theoretically spans the period October-March. In addition, to correct for residual errors still affecting the data after calibration, especially at solar zenith angles higher than 60°, we produced a higher-quality albedo time series as follows: in the SSA estimation process described in the companion paper, a scaling coefficient A between the observed albedo and the theoretical model predictions was introduced to cope with these errors. This coefficient thus provides a first-order estimate of the residual error. By dividing the albedo by this coefficient, we produced the "scaled fully-calibrated albedo". We strongly recommend using the latter for most applications because it generally remains in the physical range 0-1. The former albedo is provided for reference to the companion paper and because it does not depend on the SSA estimation process and its underlying assumptions.
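The sketch below illustrates, under strong simplifying assumptions, how SSA and the scaling coefficient A might be retrieved jointly by least-squares fitting of an AART-style albedo model to one spectrum. The model form, the ice absorption spectrum, and all constants are placeholders; the companion paper defines the exact expressions actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

# Joint retrieval of SSA and scaling coefficient A by fitting a simplified
# AART-style albedo model to one observed spectrum (synthetic here).
# All constants and the absorption spectrum are illustrative placeholders.

rho_ice = 917.0            # ice density [kg/m3]
B, g = 1.6, 0.86           # absorption enhancement and asymmetry parameters (assumed)

def ice_absorption(wavelength_nm):
    # crude placeholder, increasing toward the near infrared [1/m]
    return 1e-3 * (wavelength_nm / 400.0) ** 4

def aart_albedo(wavelength_nm, ssa, scale, mu0=0.5):
    gamma = ice_absorption(wavelength_nm)
    u = (3.0 / 7.0) * (1.0 + 2.0 * mu0)              # escape function for direct sunlight
    y = np.sqrt(4.0 * B * gamma * 6.0 / (rho_ice * ssa * 3.0 * (1.0 - g)))
    return scale * np.exp(-u * y)

wavelengths = np.linspace(400, 1050, 200)
observed = aart_albedo(wavelengths, ssa=30.0, scale=0.97) \
    + np.random.default_rng(0).normal(0, 0.002, 200)

(ssa_fit, a_fit), _ = curve_fit(aart_albedo, wavelengths, observed, p0=[20.0, 1.0])
scaled_albedo = observed / a_fit        # the "scaled fully-calibrated albedo"
print(f"SSA = {ssa_fit:.1f} m2/kg, A = {a_fit:.3f}")
```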
Abstract:
This paper introduces the LiDAR compass, a bounded and extremely lightweight heading estimation technique that combines a two-dimensional laser scanner with axis maps, which represent the orientations of flat surfaces in the environment. Although suitable for a variety of indoor and outdoor environments, the LiDAR compass is especially useful for embedded and real-time applications requiring low computational overhead. For example, when combined with a sensor that can measure translation (e.g., wheel encoders), the LiDAR compass can be used to yield accurate, lightweight, and very easily implementable localization that requires no prior mapping phase. The utility of using the LiDAR compass as part of a localization algorithm was tested on a widely available open-source data set, an indoor environment, and a larger-scale outdoor environment. In all cases, it was shown that the growth in heading error was bounded, which significantly reduced the position error to less than 1% of the distance travelled.
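A minimal sketch of the heading-update idea, assuming a rectilinear axis map and pre-segmented wall orientations; the paper's formulation and weighting differ in detail.

```python
import numpy as np

# LiDAR-compass-style heading update: orientations of flat surfaces seen in
# a 2D scan are compared against an axis map (dominant surface orientations
# of the environment), and the mean angular offset corrects the heading.
# The axis map and scan segmentation below are placeholders.

def wrap(angle, period):
    """Wrap an angle into (-period/2, period/2]."""
    return (angle + period / 2.0) % period - period / 2.0

def heading_from_scan(segment_orientations, axis_map, heading_guess):
    """Estimate heading given surface orientations measured in the robot frame."""
    offsets = []
    for theta in segment_orientations:
        world_theta = theta + heading_guess
        # residual to the nearest axis; axes are defined modulo 90 degrees
        residuals = [wrap(world_theta - a, np.pi / 2.0) for a in axis_map]
        offsets.append(min(residuals, key=abs))
    return heading_guess - np.mean(offsets)

axis_map = [0.0, np.pi / 2.0]                 # rectilinear environment (assumed)
measured = np.radians([31.0, 120.5, 29.5])    # wall orientations seen in the robot frame
print(np.degrees(heading_from_scan(measured, axis_map, heading_guess=np.radians(-32.0))))
```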
Abstract:
When we study the variables that affect survival time, we usually estimate their effects by the Cox regression model. In biomedical research, the effects of the covariates are often modified by a biomarker variable, which leads to covariate-biomarker interactions. Here the biomarker is an objective measurement of patient characteristics at baseline. Liu et al. (2015) built a local partial likelihood bootstrap model to estimate and test this interaction effect of covariates and biomarker, but the R code developed by Liu et al. (2015) can only handle one variable and one interaction term and cannot fit the model with adjustment for nuisance variables. In this project, we expand the model to allow adjustment for nuisance variables, expand the R code to take any chosen interaction terms, and set up many parameters for users to customize their research. We also build an R package called "lplb" to integrate the complex computations into a simple interface. We conduct numerical simulations to show that the new method has excellent finite sample properties under both the null and alternative hypotheses. We also apply the method to analyze data from a prostate cancer clinical trial with the acid phosphatase (AP) biomarker.
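Since lplb is an R package, the Python sketch below only illustrates the modeling setup it generalizes: a Cox model with a covariate-biomarker interaction and a nuisance covariate, fitted with a plain linear interaction term via the lifelines library rather than the local partial likelihood bootstrap that the project implements. All data are simulated.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Cox model with a covariate-biomarker interaction term and a nuisance
# covariate on simulated data. This is a standard partial-likelihood fit
# with a linear interaction, not the nonparametric local partial likelihood
# bootstrap estimator implemented in lplb.

rng = np.random.default_rng(42)
n = 400
treatment = rng.integers(0, 2, n)            # covariate of interest
biomarker = rng.normal(0, 1, n)              # baseline biomarker (e.g. AP)
age = rng.normal(65, 8, n)                   # nuisance variable
risk = 0.5 * treatment + 0.3 * biomarker + 0.6 * treatment * biomarker + 0.02 * age
time = rng.exponential(np.exp(-risk))
event = (time < np.quantile(time, 0.8)).astype(int)   # roughly 20% marked as censored

df = pd.DataFrame({
    "time": time, "event": event, "treatment": treatment,
    "biomarker": biomarker, "age": age,
    "treatment_x_biomarker": treatment * biomarker,
})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()
```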
Abstract:
The map representation of an environment should be selected based on its intended application. For example, a geometrically accurate map describing the Euclidean space of an environment is not necessarily the best choice if only a small subset of its features is required. One possible subset is the orientations of the flat surfaces in the environment, represented by a special parameterization of normal vectors called axes. Devoid of positional information, the entries of an axis map form a non-injective relationship with the flat surfaces in the environment, which results in physically distinct flat surfaces being represented by a single axis. This drastically reduces the complexity of the map, but retains important information about the environment that can be used in meaningful applications in both two and three dimensions. This thesis presents axis mapping, an algorithm that accurately and automatically estimates an axis map of an environment based on sensor measurements collected by a mobile platform. Furthermore, two major applications of axis maps are developed and implemented. First, the LiDAR compass is a heading estimation algorithm that compares measurements of axes with an axis map of the environment. Pairing the LiDAR compass with simple translation measurements forms the basis for an accurate two-dimensional localization algorithm. It is shown that this algorithm eliminates the growth of heading error in both indoor and outdoor environments, resulting in accurate localization over long distances. Second, in the context of geotechnical engineering, a three-dimensional axis map is called a stereonet, which is used as a tool to examine the strength and stability of a rock face. Axis mapping provides a novel approach to creating accurate stereonets safely, rapidly, and inexpensively compared with established methods. The non-injective property of axis maps is leveraged to probabilistically describe the relationships between non-sequential measurements of the rock face. The automatic estimation of stereonets was tested in three separate outdoor environments. It is shown that axis mapping can accurately estimate stereonets while improving safety, requiring significantly less time and effort, and lowering costs compared with traditional and current state-of-the-art approaches.
Abstract:
SELECTOR is a software package for studying the evolution of multiallelic genes under balancing or positive selection while simulating complex evolutionary scenarios that integrate demographic growth and migration in a spatially explicit population framework. Parameters can be varied both in space and time to account for geographical, environmental, and cultural heterogeneity. SELECTOR can be used within an approximate Bayesian computation estimation framework. We first describe the principles of SELECTOR and validate the algorithms by comparing its outputs for simple models with theoretical expectations. Then, we show how it can be used to investigate genetic differentiation of loci under balancing selection in interconnected demes with spatially heterogeneous gene flow. We identify situations in which balancing selection reduces genetic differentiation between population groups compared with neutrality and explain conflicting outcomes observed for human leukocyte antigen loci. These results and three previously published applications demonstrate that SELECTOR is efficient and robust for building insight into human settlement history and evolution.
Abstract:
In this paper we present a convolutional neural network (CNN)-based model for human head pose estimation in low-resolution multi-modal RGB-D data. We pose the problem as one of classification of human gazing direction. We further fine-tune a regressor based on the learned deep classifier, and then combine the two models (classification and regression) to estimate an approximate regression confidence. We present state-of-the-art results on datasets that span the range from high-resolution human-robot interaction data (close-up faces plus depth information) to challenging low-resolution outdoor surveillance data. We build upon our robust head-pose estimation and further introduce a new visual attention model to recover interaction with the environment. Using this probabilistic model, we show that many higher-level scene understanding tasks, such as human-human/scene interaction detection, can be achieved. Our solution runs in real-time on commercial hardware.
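A minimal sketch of the classify-then-regress idea, assuming a toy CNN backbone with a gazing-direction classification head, an angle regression head, and the classifier's softmax maximum used as a crude confidence; the architecture and the 8-bin discretisation are placeholders, not the network described in the paper.

```python
import torch
import torch.nn as nn

# Toy CNN with a coarse gazing-direction classifier and a fine angle
# regressor sharing one backbone; the softmax maximum of the classifier
# serves as a rough confidence for the regressed pose.

class HeadPoseNet(nn.Module):
    def __init__(self, n_bins=8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # RGB-D input: 4 channels
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
        )
        self.classifier = nn.Linear(32 * 16, n_bins)    # coarse gazing-direction bins
        self.regressor = nn.Linear(32 * 16, 2)          # fine yaw/pitch angles

    def forward(self, x):
        feats = self.backbone(x)
        logits = self.classifier(feats)
        angles = self.regressor(feats)
        confidence = torch.softmax(logits, dim=1).max(dim=1).values
        return logits, angles, confidence

model = HeadPoseNet()
batch = torch.randn(2, 4, 64, 64)            # low-resolution RGB-D crops (placeholder)
logits, angles, conf = model(batch)
print(angles.shape, conf)
```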
Abstract:
In-situ characterisation of thermocouple sensors is a challenging problem. Recently the authors presented a blind characterisation technique based on the cross-relation method of blind identification. The method allows in-situ identification of two thermocouple probes, each with a different dynamic response, using only sampled sensor measurement data. While the technique offers certain advantages over alternative methods, including low estimation variance and the ability to compensate for noise-induced bias, the robustness of the method is limited by the multimodal nature of the cost function. In this paper, a normalisation term is proposed which improves the convexity of the cost function. Further, a normalisation and bias compensation hybrid approach is presented that exploits the advantages of both normalisation and bias compensation. It is found that the optimum of the hybrid cost function is less biased and more stable than when only normalisation is applied. All results were verified by simulation.
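A minimal sketch of a cross-relation cost for two first-order thermocouple models, with an assumed (illustrative) normalisation term; the paper's actual normalisation and bias-compensation terms are not reproduced here.

```python
import numpy as np
from scipy.signal import lfilter

# Cross-relation idea for two first-order sensors: filtering each measured
# output through the *other* sensor's candidate model should give matching
# signals at the true time constants. The cost below is the squared
# mismatch, optionally divided by a simple normalisation term (assumed form).

def first_order(x, tau, dt=0.01):
    a = dt / (tau + dt)                       # discrete first-order lag, unit DC gain
    return lfilter([a], [1.0, a - 1.0], x)

def cross_relation_cost(params, y1, y2, normalise=True):
    tau1, tau2 = params
    e = first_order(y1, tau2) - first_order(y2, tau1)
    cost = np.sum(e ** 2)
    if normalise:
        cost /= (tau1 ** 2 + tau2 ** 2)       # illustrative normalisation, not the paper's
    return cost

rng = np.random.default_rng(0)
t = np.arange(0, 20, 0.01)
temperature = np.cumsum(rng.normal(0, 0.05, t.size))     # unknown measurand (random walk)
y1 = first_order(temperature, 0.2) + rng.normal(0, 0.01, t.size)
y2 = first_order(temperature, 0.8) + rng.normal(0, 0.01, t.size)

grid = [(a, b) for a in np.linspace(0.05, 1.5, 30) for b in np.linspace(0.05, 1.5, 30)]
best = min(grid, key=lambda p: cross_relation_cost(p, y1, y2))
print(f"estimated time constants: {best[0]:.2f} s, {best[1]:.2f} s")
```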