929 results for Remediation time estimation
Abstract:
We apply the formalism of quantum estimation theory to extract information about potential collapse mechanisms of the continuous spontaneous localisation (CSL) form.
In order to estimate the strength with which the field responsible for the CSL mechanism couples to massive systems, we consider the optomechanical interaction
between a mechanical resonator and a cavity field. Our estimation strategy relies on probing either the state of the oscillator or that of the electromagnetic field that drives its motion. In particular, we concentrate on all-optical measurements, such as homodyne and heterodyne measurements.
We also compare the performances of such strategies with those of a spin-assisted optomechanical system, where the estimation of the CSL parameter is performed
through time-gated spin-like measurements.
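As a toy illustration of the estimation-theory machinery involved (not the paper's actual optomechanical model), the sketch below computes the Fisher information and Cramér–Rao bound for a hypothetical coupling parameter that shifts the variance of Gaussian homodyne outcomes; the linear variance model v(λ) = 1 + λ is an assumption chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: homodyne quadrature outcomes are Gaussian with zero
# mean and variance v(lam) = 1 + lam, where lam is the (assumed)
# collapse-strength parameter imprinted on the probe state.
def variance_model(lam):
    return 1.0 + lam

def fisher_info(lam):
    # Fisher information of N(0, v(lam)) w.r.t. lam: (v')^2 / (2 v^2)
    v = variance_model(lam)
    dv = 1.0  # derivative of v w.r.t. lam in this linear model
    return dv**2 / (2.0 * v**2)

lam_true, M = 0.5, 20000
samples = rng.normal(0.0, np.sqrt(variance_model(lam_true)), size=M)

# Method-of-moments estimator: lam_hat = sample variance - 1
lam_hat = samples.var() - 1.0

# Cramer-Rao lower bound on the variance of any unbiased estimator of lam
crb = 1.0 / (M * fisher_info(lam_true))
print(lam_hat, crb)
```

Any concrete measurement strategy (homodyne, heterodyne, or spin-like) would replace `variance_model` with the actual dependence of its outcome statistics on the CSL coupling; the bound computation itself is unchanged.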
Abstract:
In recent years, depth cameras have been widely utilized in camera tracking for augmented and mixed reality. Many studies focus on methods that generate the reference model simultaneously with the tracking, which allows operation in unprepared environments. However, methods that rely on predefined CAD models have their advantages: measurement errors do not accumulate in the model, they tolerate inaccurate initialization, and the tracking is always performed directly in the reference model's coordinate system. In this paper, we present a method for tracking a depth camera with existing CAD models and the Iterative Closest Point (ICP) algorithm. In our approach, we render the CAD model using the latest pose estimate and construct a point cloud from the corresponding depth map. We construct another point cloud from the currently captured depth frame, and find the incremental change in the camera pose by aligning the two point clouds. We utilize a GPGPU-based implementation of ICP which efficiently uses all the depth data in the process. The method runs in real time, is robust to outliers, and does not require any preprocessing of the CAD models. We evaluated the approach using the Kinect depth sensor, and compared the results to a 2D edge-based method, to a depth-based SLAM method, and to ground truth. The results show that the approach is more stable than the edge-based method and suffers less from drift than the depth-based SLAM method.
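A minimal, CPU-only sketch of the core ICP alignment step described above (the paper's GPGPU implementation and depth-map rendering are not reproduced; brute-force nearest-neighbour matching and a Kabsch SVD solve stand in for them):

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(source, target, iters=20):
    """Point-to-point ICP with brute-force nearest-neighbour correspondences."""
    src = source.copy()
    for _ in range(iters):
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(axis=1)]   # closest target point for each source point
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t                   # apply the incremental pose update
    return src

# Toy usage: recover a small rigid perturbation of a random cloud.
rng = np.random.default_rng(1)
target = rng.normal(size=(200, 3))
theta = 0.05
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ Rz.T + np.array([0.05, -0.02, 0.01])
aligned = icp(source, target)
print(np.abs(aligned - target).max())   # small residual after alignment
```

In the paper's setting, `target` would be the point cloud rendered from the CAD model at the latest pose estimate and `source` the cloud from the current depth frame, with the recovered transform giving the incremental camera motion.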
Abstract:
This master's thesis proposes a solution to the approach problem for a fixed-wing aircraft in the case of unknown severe microburst wind shear, accounting for both longitudinal and lateral dynamics. The design of an adaptive controller for wind rejection is also addressed, exploiting the wind estimates provided by suitable estimators. The controller is able to successfully complete the final approach phase even in the presence of wind shear while retaining aerodynamic envelope protection. The adaptive controller for wind compensation has been designed using a backstepping approach and feedback linearization for time-varying systems; the wind shear components have been estimated by higher-order sliding mode schemes. Finally, results are presented: an autonomous final approach in the presence of a microburst is discussed, performance is analyzed, and estimation of the microburst characteristics from telemetry data is examined.
Abstract:
This analysis estimates several economic benefits derived from national implementation of the National Oceanic and Atmospheric Administration’s Physical Oceanographic Real-Time System (PORTS®) at the 175 largest ports in the United States. Significant benefits were observed owing to: (1) lower commercial marine accident rates and resultant reductions in morbidity, mortality and property damage; (2) reduced pollution remediation costs; and (3) increased productivity associated with operation of more fully loaded commercial vessels. Evidence also suggested additional benefits from heightened commercial and recreational fish catch and diminished recreational boating accidents. Annual gross benefits from the 58 current PORTS® locations exceeded $217 million, with an additional $83 million possible if installed at the largest remaining 117 ports in the United States. Over the ten-year economic life of PORTS® instruments, the present value of installation at all 175 ports could approach $2.5 billion.
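The quoted figures are consistent with a standard discounted-annuity calculation; the 3.5% discount rate below is an assumption chosen only to show how a roughly $300M/yr benefit stream over a ten-year life approaches the stated $2.5 billion present value:

```python
# Present value of a constant annual benefit stream over the instruments'
# ten-year economic life. The 3.5% discount rate is an illustrative
# assumption, not a figure from the analysis itself.
def present_value(annual_benefit, rate, years):
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

annual = 217e6 + 83e6      # current locations plus possible expansion, $/yr
pv = present_value(annual, 0.035, 10)
print(round(pv / 1e9, 2))  # ~2.5 ($ billions)
```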
Abstract:
Classical survival analysis methods, notably the nonparametric method of Kaplan and Meier (1958), assume independence between the variable of interest and the censoring variable. Since this independence assumption is not always tenable, several authors have developed methods to account for dependence, most of which make assumptions about that dependence. In this thesis, we propose a method for estimating the dependence in the presence of dependent censoring that uses the copula-graphic estimator for Archimedean copulas (Rivest and Wells, 2001) and assumes that the distribution of the censoring variable is known. We then study the consistency of this estimator through simulations before applying it to a real data set.
Abstract:
In recent papers, Wied and his coauthors have introduced change-point procedures to detect and estimate structural breaks in the correlation between time series. To prove the asymptotic distributions of the test statistic and stopping time, as well as the change-point estimation rate, they use an extended functional delta method and assume nearly constant expectations and variances of the time series. In this thesis, we allow asymptotically infinitely many structural breaks in the means and variances of the time series. For this setting, we present test statistics and stopping times which are used to determine whether the correlation between two time series is and stays constant, respectively. Additionally, we consider estimators for change-points in the correlations. The employed nonparametric statistics depend on the means and variances. These (nuisance) parameters are replaced by estimates in the course of this thesis. We avoid assuming a fixed form of these estimates; rather, we use "black-box" estimates, i.e. we derive results under assumptions that these estimates fulfill. These results are supplemented with examples. This thesis is organized into seven sections. In Section 1, we motivate the issue and present the mathematical model. In Section 2, we consider a posteriori and sequential testing procedures, and investigate convergence rates for change-point estimation, always assuming that the means and the variances of the time series are known. In the following sections, the assumptions of known means and variances are relaxed. In Section 3, we present the assumptions for the mean and variance estimates that we will use for the mean in Section 4, for the variance in Section 5, and for both parameters in Section 6. Finally, in Section 7, a simulation study illustrates the finite-sample behavior of some testing procedures and estimates.
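A simplified, unnormalised version of such a correlation change-point statistic (not Wied et al.'s exact scaling, and with means and variances estimated naively from the data rather than by the black-box estimates discussed above) can be sketched as:

```python
import numpy as np

def corr_changepoint_stat(x, y):
    """CUSUM-type statistic on successively computed sample correlations:
    max over k of (k / sqrt(n)) * |rho_hat(1..k) - rho_hat(1..n)|."""
    n = len(x)
    rho_n = np.corrcoef(x, y)[0, 1]
    stats = []
    for k in range(10, n + 1):          # skip very short prefixes
        rho_k = np.corrcoef(x[:k], y[:k])[0, 1]
        stats.append(k / np.sqrt(n) * abs(rho_k - rho_n))
    return max(stats), 10 + int(np.argmax(stats))   # statistic, estimated break index

rng = np.random.default_rng(2)
n = 600
x = rng.normal(size=n)
y_indep = rng.normal(size=n)                     # no correlation anywhere
y_break = np.concatenate([rng.normal(size=n // 2),                 # rho = 0 before the break
                          0.9 * x[n // 2:] + 0.4 * rng.normal(size=n // 2)])  # strong rho after
s0, _ = corr_changepoint_stat(x, y_indep)
s1, k1 = corr_changepoint_stat(x, y_break)
print(s0, s1, k1)   # s1 clearly exceeds s0; k1 lands near the true break at n/2
```

Large values of the statistic indicate a break in the correlation, and the maximising index serves as the change-point estimate; the actual procedures additionally require a proper long-run variance normalisation to obtain critical values.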
Abstract:
Studies of fluid-structure interactions associated with flexible structures such as flapping wings require the capture and quantification of large motions of bodies that may be opaque. Motion capture of a free-flying insect is considered by using three synchronized high-speed cameras. A solid finite element representation is used as a reference body, and successive snapshots in time of the displacement fields are reconstructed via an optimization procedure. An objective function is formulated, and various shape difference definitions are considered. The proposed methodology is first studied for a synthetic case of a flexible cantilever structure undergoing large deformations, and then applied to a Manduca sexta (hawkmoth) in free flight. The three-dimensional motions of this flapping system are reconstructed from image data collected using the three cameras. The complete deformation geometry of this system is analyzed. Finally, a computational investigation is carried out to understand the flow physics and aerodynamic performance by prescribing the body and wing motions in a fluid-body code. This thesis contains one of the first such sets of motion visualization and deformation analyses carried out for a hawkmoth in free flight. The tools and procedures used in this work are widely applicable to studies of other flying animals with flexible wings, as well as synthetic systems with flexible body elements.
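The objective-function idea can be illustrated with a deliberately minimal shape-difference measure (sum of squared distances between displaced finite element nodes and observed 3-D points; the node coordinates and observations below are invented for illustration, and the thesis' actual shape-difference definitions may differ):

```python
import numpy as np

# A minimal shape-difference objective for such reconstructions: given
# reference finite element nodes X and candidate nodal displacements u
# (the optimisation variables), penalise the squared distance from each
# displaced node to its observed 3-D point.
def shape_difference(u_flat, X, observed):
    u = u_flat.reshape(X.shape)
    return np.sum((X + u - observed) ** 2)

X = np.array([[0.0, 0.0, 0.0],      # reference node positions (illustrative)
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
observed = X + 0.1                  # hypothetical tracked 3-D points
u0 = np.zeros(X.size)               # initial guess: no displacement
print(shape_difference(u0, X, observed))   # 3 nodes * 3 axes * 0.1^2 = 0.09
```

In practice this objective would be minimised over `u` (with smoothness or elasticity regularisation) once per camera snapshot, yielding the displacement field for that instant.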
Abstract:
We present new methodologies to generate rational function approximations of broadband electromagnetic responses of linear and passive networks of high-speed interconnects, and to construct SPICE-compatible, equivalent circuit representations of the generated rational functions. These new methodologies are driven by the desire to improve the computational efficiency of the rational function fitting process, and to ensure enhanced accuracy of the generated rational function interpolation and its equivalent circuit representation. Toward this goal, we propose two new methodologies for rational function approximation of high-speed interconnect network responses. The first relies on the use of both time-domain and frequency-domain data, obtained either through measurement or numerical simulation, to generate a rational function representation that extrapolates the input, early-time transient response data to the late-time response while at the same time providing a means to both interpolate and extrapolate the frequency-domain data used. This hybrid methodology can be considered a generalization of frequency-domain rational function fitting, which utilizes frequency-domain response data only, and of time-domain rational function fitting, which utilizes transient response data only. In this context, a guideline is proposed for estimating the order of the rational function approximation from transient data. The availability of such an estimate expedites the time-domain rational function fitting process. The second approach relies on the extraction of the delay associated with causal electromagnetic responses of interconnect systems to provide a more stable rational function fitting process utilizing a lower-order rational function interpolation. A distinctive feature of the proposed methodology is its utilization of scattering parameters. For both methodologies, the approach of fitting the electromagnetic network matrix one element at a time is applied.
It is shown that, with regard to the computational cost of the rational function fitting process, such an element-by-element rational function fitting is more advantageous than full matrix fitting for systems with a large number of ports. Despite the disadvantage that different sets of poles are used in the rational function of different elements in the network matrix, such an approach provides for improved accuracy in the fitting of network matrices of systems characterized by both strongly coupled and weakly coupled ports. Finally, in order to provide a means for enforcing passivity in the adopted element-by-element rational function fitting approach, the methodology for passivity enforcement via quadratic programming is modified appropriately for this purpose and demonstrated in the context of element-by-element rational function fitting of the admittance matrix of an electromagnetic multiport.
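The linear core of such element-by-element rational function fitting, i.e. solving for residues and the constant term with the poles held fixed, can be sketched as follows (pole relocation, passivity enforcement, and delay extraction are omitted; the pole and residue values are invented for illustration):

```python
import numpy as np

# One element of a network matrix is fitted independently: given frequency
# samples H(s_k) and a fixed set of stable poles, the residues r_i and the
# constant term d in H(s) = sum_i r_i / (s - p_i) + d follow from a single
# linear least-squares solve. This is the linear step inside
# vector-fitting-style algorithms.
poles = np.array([-1.0, -3.0, -10.0])     # assumed (already relocated) poles
res_true = np.array([2.0, -0.5, 4.0])     # "unknown" residues to recover
d_true = 0.1

s = 1j * np.logspace(-1, 2, 200)          # jw frequency samples
H = (res_true / (s[:, None] - poles)).sum(axis=1) + d_true

# Build the basis [1/(s - p_1), 1/(s - p_2), 1/(s - p_3), 1] and stack
# real and imaginary parts so the unknowns stay real-valued.
A = np.hstack([1.0 / (s[:, None] - poles), np.ones((len(s), 1))])
A_ri = np.vstack([A.real, A.imag])
b_ri = np.concatenate([H.real, H.imag])
coef, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
print(coef)   # recovered [r1, r2, r3, d]
```

Repeating this solve per matrix element is what makes the element-by-element approach cheap for many-port systems, at the cost of each element carrying its own pole set.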
Abstract:
Power system engineers face a double challenge: to operate electric power systems within narrow stability and security margins, and to maintain high reliability. There is an acute need to better understand the dynamic nature of power systems in order to be prepared for critical situations as they arise. Innovative measurement tools, such as phasor measurement units, can capture not only the slow variation of the voltages and currents but also the underlying oscillations in a power system. Such dynamic data accessibility provides strong motivation and a useful tool to explore dynamic-data-driven applications in power systems. To fulfill this goal, this dissertation focuses on the following three areas: developing accurate dynamic load models and updating variable parameters based on the measurement data; applying advanced nonlinear filtering concepts and technologies to real-time identification of power system models; and addressing computational issues by implementing the balanced truncation method. By obtaining more realistic system models, together with timely updated parameters and consideration of stochastic influences, we can form an accurate portrait of the ongoing phenomena in an electrical power system, and thus further improve state estimation, stability analysis and real-time operation.
Abstract:
Understanding how aquatic species grow is fundamental in fisheries because stock assessment often relies on growth dependent statistical models. Length-frequency-based methods become important when more applicable data for growth model estimation are either not available or very expensive. In this article, we develop a new framework for growth estimation from length-frequency data using a generalized von Bertalanffy growth model (VBGM) framework that allows for time-dependent covariates to be incorporated. A finite mixture of normal distributions is used to model the length-frequency cohorts of each month with the means constrained to follow a VBGM. The variances of the finite mixture components are constrained to be a function of mean length, reducing the number of parameters and allowing for an estimate of the variance at any length. To optimize the likelihood, we use a minorization–maximization (MM) algorithm with a Nelder–Mead sub-step. This work was motivated by the decline in catches of the blue swimmer crab (BSC) (Portunus armatus) off the east coast of Queensland, Australia. We test the method with a simulation study and then apply it to the BSC fishery data.
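The constrained mixture construction can be sketched as follows; the parameter values, the linear mean-variance relationship σ² = a + bμ, and the three-cohort setup are illustrative assumptions, not the fitted BSC values (the paper constrains variance as a function of mean length, whose exact functional form may differ):

```python
import numpy as np

def vbgm(age, Linf, K, t0):
    """von Bertalanffy mean length at age: Linf * (1 - exp(-K (age - t0)))."""
    return Linf * (1.0 - np.exp(-K * (age - t0)))

def mixture_pdf(x, ages, props, Linf, K, t0, a, b):
    """Finite normal mixture for one month's length frequencies: component
    means follow the VBGM, and component variances are tied to mean length
    (here sigma^2 = a + b*mu), so variance can be evaluated at any length."""
    mus = vbgm(ages, Linf, K, t0)
    sig2 = a + b * mus
    comps = (props / np.sqrt(2 * np.pi * sig2)
             * np.exp(-(x[:, None] - mus) ** 2 / (2 * sig2)))
    return comps.sum(axis=1)

# Hypothetical parameter values for illustration only.
ages = np.array([1.0, 2.0, 3.0])          # cohort ages (years)
props = np.array([0.5, 0.3, 0.2])         # cohort mixing proportions
x = np.linspace(0, 20, 401)               # length grid (cm)
pdf = mixture_pdf(x, ages, props, Linf=18.0, K=0.8, t0=0.0, a=0.2, b=0.05)
area = pdf.sum() * (x[1] - x[0])          # Riemann sum of the density
print(area)                               # ~1 since the proportions sum to one
```

Tying the component variances to the VBGM means is what keeps the parameter count low: the fit then optimises only the growth parameters, the variance coefficients, and the monthly proportions, here via the MM algorithm with a Nelder–Mead sub-step.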
Abstract:
Markov chain analysis was recently proposed to assess the time scales and preferential pathways in biological or physical networks by computing residence times, first passage times, rates of transfer between nodes, and numbers of passages through a node. We propose to adapt an algorithm already published for simple systems to physical systems described with a high-resolution hydrodynamic model. The method is applied to bays and estuaries on the eastern coast of Canada, chosen for their interest in shellfish aquaculture. Current velocities were computed on a two-dimensional grid of elements, and circulation patterns were summarized by averaging Eulerian flows between adjacent elements. Flows and volumes allow computing the probabilities of transition between elements, the average time needed by virtual particles to move from one element to another, the rate of transfer between two elements, and the average residence time of each system. We also combined transfer rates and times to assess the main pathways of virtual particles released in farmed areas and the potential influence of farmed areas on other areas. We suggest that Markov chain analysis is complementary to other sets of ecological indicators proposed to analyse the interactions between farmed areas, e.g. the depletion index and carrying capacity assessment. Markov chain analysis has several advantages with respect to the estimation of connectivity between pairs of sites: it makes it possible to estimate transfer rates and times at once in a very quick and efficient way, without the need to perform long-term simulations of particle trajectories or tracer concentrations.
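A minimal sketch of the quantities involved, on an invented three-element system (the transition probabilities below are illustrative, not derived from a hydrodynamic model):

```python
import numpy as np

# Toy 3-element system: per-time-step transition probabilities between
# model elements, as would be derived from inter-element flows and volumes.
P = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.85, 0.10],
              [0.00, 0.04, 0.96]])

# Mean first-passage time into element 2 from elements 0 and 1:
# make the target absorbing and solve (I - Q) m = 1 on the transient part.
Q = P[:2, :2]                       # rows/cols excluding the target element
m = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print(m)                            # expected steps to first reach element 2

# Mean residence time in element 0 before first leaving it: the holding
# time is geometric with mean 1 / (1 - P[0, 0]).
print(1.0 / (1.0 - P[0, 0]))
```

These closed-form solves are the reason the Markov chain approach avoids long particle-tracking simulations: first-passage and residence times come directly from small linear systems built on the transition matrix.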
Abstract:
Due to the increasing integration density and operating frequency of today's high-performance processors, the temperature of a typical chip can easily exceed 100 degrees Celsius. However, the runtime thermal state of a chip is very hard to predict and manage due to the random nature of computing workloads, as well as process, voltage and ambient temperature variability (together called PVT variability). The uneven nature (both in time and space) of the chip's heat dissipation can lead to severe reliability issues and error-prone chip behavior (e.g. timing errors). Many dynamic power/thermal management techniques have been proposed to address this issue, such as dynamic voltage and frequency scaling (DVFS) and clock gating. However, most such techniques require accurate knowledge of the runtime thermal state of the chip to make efficient and effective control decisions. In this work we address the problem of tracking and managing the temperature of microprocessors, which includes the following sub-problems: (1) how to design an efficient sensor-based thermal tracking system for a given design that provides accurate real-time temperature feedback; (2) what statistical techniques can be used to estimate the full-chip thermal profile based on very limited (and possibly noise-corrupted) sensor observations; (3) how to adapt to changes in the underlying system's behavior, since such changes could impact the accuracy of the thermal estimates. The thermal tracking methodology proposed in this work is enabled by on-chip sensors, which are already implemented in many modern processors. We first investigate the underlying relationship between heat distribution and power consumption, then introduce an accurate thermal model for the chip system.
Based on this model, we characterize the temperature correlation that exists among different chip modules and explore statistical approaches (such as those based on the Kalman filter) that utilize this correlation to estimate accurate chip-level thermal profiles in real time. The estimation is performed from limited sensor information because sensors are usually resource-constrained and noise-corrupted. We also extend the standard Kalman filter approach to account for (1) nonlinear effects such as the leakage-temperature interdependency and (2) varying statistical characteristics in the underlying system model. The proposed thermal tracking infrastructure and estimation algorithms consistently generate accurate thermal estimates even when the system is switching among workloads that have very distinct characteristics. In our experiments, these approaches demonstrated promising results with much higher accuracy than existing approaches. Such results can be used to ensure thermal reliability and improve the effectiveness of dynamic thermal management techniques.
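A minimal sketch of sensor-based thermal tracking with a standard (linear) Kalman filter, on an invented two-module thermal model with a single noisy sensor; the dynamics, noise levels, and power trace are assumptions for illustration, and the leakage-temperature extension is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two thermally coupled modules; one noisy on-chip sensor on module 0.
A = np.array([[0.90, 0.08],       # discrete-time thermal dynamics (illustrative)
              [0.06, 0.92]])
B = np.array([[1.0], [0.5]])      # power-input coupling
C = np.array([[1.0, 0.0]])        # sensor observes module 0 only
Qn, Rn = 0.01 * np.eye(2), np.array([[0.25]])

x = np.array([40.0, 40.0])        # true module temperatures (deg C)
xh = np.array([35.0, 35.0])       # filter's state estimate
P = 10.0 * np.eye(2)              # estimate covariance

for k in range(300):
    u = np.array([2.0 + np.sin(0.05 * k)])          # workload power trace
    x = A @ x + B @ u + rng.multivariate_normal(np.zeros(2), Qn)
    y = C @ x + rng.normal(0.0, 0.5, size=1)        # noisy sensor reading
    # Kalman predict/update
    xh = A @ xh + B @ u
    P = A @ P @ A.T + Qn
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Rn)
    xh = xh + K @ (y - C @ xh)
    P = (np.eye(2) - K @ C) @ P

print(np.abs(x - xh))   # both modules tracked from a single sensor
```

The unobserved module's temperature is recovered through the off-diagonal coupling in `A`, which is exactly the inter-module correlation the filter exploits; the thesis' extensions replace this fixed linear model with leakage-aware and time-varying variants.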
Abstract:
Objective: The aim of this work is to study the correlations between iris patterns, time perception, and eye blink rate, in relation to cigarette addiction. Methodology: Review of the existing literature; experiments on a cohort of at least thirty smoking/non-smoking subjects; statistical analyses. Results: Our results confirm that relationships exist between impulsivity, iris patterns, spontaneous eye blink rate, and time perception. We also observe that cigarette addiction and its level of dependence influence these measures. Indeed, smokers tend to have a more impulsive personality than control subjects. We also note a marked decrease in eye blink rate in the smoking group and a tendency to overestimate elapsed time. Conclusion: This work allows us to better understand the correlations between the variables we measured (iris patterns, impulsivity score, and eye blink rate) and their relationship to cigarette addiction. Given that smokers can show altered time perception compared with the control group, it would be interesting to study its long-term evolution (worsening with the duration of active smoking) and its real-world consequences by means of longitudinal and field studies.
Abstract:
It has been proposed that long-term consumption of diets rich in non-digestible carbohydrates (NDCs), such as cereals, fruit and vegetables, might protect against several chronic diseases; however, it has been difficult to fully establish their impact on health in epidemiological studies. The wide-ranging properties of the different NDCs may dilute their impact when they are combined into one category for statistical comparisons in correlation or multivariate analyses. Several mechanisms have been suggested to explain the protective effects of NDCs, including increased stool bulk, dilution of carcinogens in the colonic lumen, reduced transit time, lowered pH, and bacterial fermentation to short chain fatty acids (SCFA) in the colon. However, it is very difficult to measure SCFA in humans in vivo with any accuracy, so epidemiological studies on the impact of SCFA are not feasible. Most studies use dietary fibre (DF) or non-starch polysaccharide (NSP) intake to estimate the levels, but not all fibres or NSP are equally fermentable.
The first aim of this thesis was the development of equations to estimate the amount of fermentable carbohydrate (FC) that reaches the human colon and is fermented fully to SCFA by the colonic bacteria. Several studies were therefore examined for evidence to determine the percentage of each type of NDC that should be included in the final model, based on how much of the NDC entered the colon intact and to what extent it was fermented to SCFA in vivo. Our model equations are:

FC-DF* or NSP$ 1: 100% soluble + 10% insoluble + 100% NDOs¥ + 5% TS**
FC-DF or NSP 2: 100% soluble + 50% insoluble + 100% NDOs + 5% TS
FC-DF or NSP 3: 100% soluble + 10% insoluble + 100% NDOs + 10% TS
FC-DF or NSP 4: 100% soluble + 50% insoluble + 100% NDOs + 10% TS

(*DF: dietary fibre; **TS: total starch; $NSP: non-starch polysaccharide; ¥NDOs: non-digestible oligosaccharides)

The second study of this thesis aimed to test all four predicted FC-DF and FC-NSP equations, which estimate FC from dietary records, against urinary biomarkers of colonic NDC fermentation. The main finding of a cross-sectional comparison of habitual diet with urinary excretion of SCFA products was a weak but significant correlation between the 24 h urinary excretion of SCFA and acetate and the estimated FC-DF 4 and FC-NSP 4 when considering all of the study participants (n = 122). Similar correlations were observed for the valid participants (n = 78). It was also observed that FC-DF and FC-NSP correlated more positively with 24 h urinary acetate and SCFA than did DF and NSP alone. Hence, it could be hypothesised that using the developed index to estimate FC in the diet from dietary records might predict SCFA production in the colon in vivo in humans.
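Coded directly from the four model definitions, the FC estimate is a weighted sum of the fractions taken from a dietary record (the example intake values below are hypothetical):

```python
# The four fermentable-carbohydrate (FC) prediction equations as weight
# tables: fraction of each dietary component assumed to reach the colon
# and be fermented fully to SCFA. Inputs are grams from a dietary record.
FC_WEIGHTS = {
    1: {"soluble": 1.00, "insoluble": 0.10, "ndo": 1.00, "total_starch": 0.05},
    2: {"soluble": 1.00, "insoluble": 0.50, "ndo": 1.00, "total_starch": 0.05},
    3: {"soluble": 1.00, "insoluble": 0.10, "ndo": 1.00, "total_starch": 0.10},
    4: {"soluble": 1.00, "insoluble": 0.50, "ndo": 1.00, "total_starch": 0.10},
}

def fermentable_carbohydrate(model, soluble, insoluble, ndo, total_starch):
    """Estimated FC (g/day) under one of the four weighting models."""
    w = FC_WEIGHTS[model]
    return (w["soluble"] * soluble + w["insoluble"] * insoluble
            + w["ndo"] * ndo + w["total_starch"] * total_starch)

# Hypothetical intake: 6 g soluble fibre, 12 g insoluble fibre,
# 3 g non-digestible oligosaccharides, 150 g total starch.
print(fermentable_carbohydrate(4, 6, 12, 3, 150))   # 6 + 6 + 3 + 15 = ~30 g/day
```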
The next study in this thesis aimed to validate the developed FC equations using in vitro models of small intestinal digestion and human colonic fermentation. The main finding of these in vitro studies was strong agreement between the amounts of SCFA produced after actual in vitro fermentation of single fibres and of different mixtures of NDCs, and those predicted by the estimated FC from our developed equation FC-DF 4. These results, which demonstrate a strong relationship between SCFA production in vitro from a range of fermentations of single fibres and mixtures of NDCs and that from the predicted FC equation, support the use of the FC equation for estimating FC from dietary records. We therefore conclude that the newly developed prediction equations are a valid and practical tool for assessing SCFA production in in vitro fermentation.
Abstract:
Background: In sub-Saharan African countries, the chance of a child dying before the age of five years is high. The problem is similar in Ethiopia, although it shows a decrease over the years. Methods: The 2000, 2005 and 2011 Ethiopian Demographic and Health Survey results were used for this work. The purpose of the study is to detect the pattern of under-five child mortality over time. An indirect child mortality estimation technique was adopted to examine the under-five child mortality trend in Ethiopia. Results: The results reveal the trend of under-five child mortality in Ethiopia, which shows a decline. Conclusion: The study also indicates a positive correlation between mother and child survival, which is almost certain in any population. Overall, this study shows the trend of under-five mortality in Ethiopia and its decline over time.