929 results for Remediation time estimation
Abstract:
Kalman inverse filtering is used to develop a methodology for real-time estimation of the forces acting at the tyre-road interface of large off-highway mining trucks. The system model formulated can estimate the three components of tyre-force at each wheel of the truck from a practical set of measurements and inputs. The estimated tyre-forces track well those simulated by an ADAMS virtual-truck model. A sensitivity analysis determines the susceptibility of the tyre-force estimates to uncertainties in the truck's parameters.
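As an aside, the recursive core of a Kalman-filter-based force estimator can be sketched with a minimal scalar filter. This is illustrative only: the paper's model is multivariate and vehicle-specific, and the process/measurement noise values and the 10 kN "true force" below are assumptions, not values from the study.

```python
import numpy as np

def kalman_force_estimate(measurements, q=1e-3, r=0.25):
    """Minimal scalar Kalman filter: track a slowly varying force
    from noisy measurements (random-walk process model)."""
    x, p = 0.0, 1.0          # state estimate and its variance
    estimates = []
    for z in measurements:
        p += q               # predict: process noise inflates variance
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update with the measurement residual
        p *= (1.0 - k)       # posterior variance
        estimates.append(x)
    return estimates

# Noisy readings around an assumed true force of 10 kN
rng = np.random.default_rng(0)
zs = 10.0 + rng.normal(0.0, 0.5, size=200)
est = kalman_force_estimate(zs)
```

The same predict/update structure carries over to the multivariate case, with matrices in place of the scalars.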
Foveation time measure in Congenital Nystagmus through second order approximation of the slow phases
Abstract:
Congenital Nystagmus (CN) is an ocular-motor disorder characterised by involuntary, conjugated ocular oscillations, and its pathogenesis is still unknown. The pathology is defined as "congenital" because of its onset time, which can be at birth or in the first months of life. Visual acuity in CN subjects is often diminished by the continuous nystagmus oscillations, mainly on the horizontal plane, which disturb image fixation on the retina. However, during short periods in which eye velocity slows down while the target image is placed on the fovea (called foveation intervals), the image of a given target can still be stable, allowing a subject to reach higher visual acuity. In CN subjects, visual acuity is usually assessed both with typical measurement techniques (e.g. the Landolt C test) and with eye movement recordings in different gaze positions. Offline study of eye movement recordings allows physicians to analyse the main features of nystagmus, such as waveform shape, amplitude and frequency, and to compute estimated visual acuity predictors. These analytical functions estimate the best corrected visual acuity from foveation time and foveation position variability, so a reliable estimation of these two parameters is a fundamental factor in assessing visual acuity. This work aims to enhance foveation time estimation in CN eye movement recordings by computing a second order approximation of the slow phase components of nystagmus oscillations. Nineteen infrared-oculographic eye-movement recordings from 10 CN subjects were acquired, and the visual acuity assessed with an acuity predictor was compared to that measured in primary position. Results suggest that visual acuity measurements based on foveation time estimation obtained from interpolated data are closer to the values obtained during Landolt C tests. © 2010 IEEE.
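The idea of a second-order approximation of a slow phase can be sketched as follows: fit a quadratic to the eye-position samples of one slow-phase segment, differentiate it analytically, and count the time the fitted velocity stays below a foveation threshold. The 4 deg/s threshold, the units, and the synthetic segment are assumptions for illustration, not the article's parameters.

```python
import numpy as np

def foveation_time(t, position, vel_thresh=4.0):
    """Fit a second-order polynomial to one slow-phase segment and
    return the time (s) its analytic velocity stays below vel_thresh."""
    a, b, _ = np.polyfit(t, position, 2)   # position ~ a*t^2 + b*t + c
    velocity = 2.0 * a * t + b             # derivative of the fit
    dt = t[1] - t[0]                       # uniform sampling step
    return np.count_nonzero(np.abs(velocity) < vel_thresh) * dt

# Synthetic 300 ms slow phase: position (deg) drifting quadratically
t = np.linspace(0.0, 0.3, 301)
pos = 30.0 * t**2            # fitted velocity is 60*t deg/s
ft = foveation_time(t, pos)  # below 4 deg/s while t < ~66.7 ms
```

Fitting before thresholding smooths out sample noise, which is the practical motivation for interpolating the slow phases rather than differentiating raw samples.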
Abstract:
2000 Mathematics Subject Classification: Primary 60G55; secondary 60G25.
Abstract:
The position of a stationary target can be determined by triangulation combined with time-of-arrival measurements at several sensors. In urban environments, non-line-of-sight (NLOS) propagation leads to biased time estimation and thus to inaccurate position estimates. Here, a semi-parametric approach is proposed to mitigate the effects of NLOS propagation. The degree of contamination of the observations by NLOS components, which results in asymmetric noise statistics, is determined and incorporated into the estimator. The proposed method is suited to environments where the NLOS error plays a dominant role, and it outperforms previous approaches that assume symmetric noise statistics.
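The line-of-sight baseline that such methods improve upon can be sketched as a Gauss-Newton least-squares position fix from time-of-arrival ranges. This is only the plain LOS estimator under noise-free assumed geometry; the semi-parametric NLOS modelling described above is deliberately omitted.

```python
import numpy as np

def toa_position(sensors, ranges, iters=25):
    """Gauss-Newton least-squares fix from range (time-of-arrival)
    measurements; assumes unbiased (LOS) ranges."""
    x = sensors.mean(axis=0)                     # initial guess: centroid
    for _ in range(iters):
        d = np.linalg.norm(sensors - x, axis=1)  # predicted ranges
        J = (x - sensors) / d[:, None]           # Jacobian d(range)/d(x)
        x = x + np.linalg.lstsq(J, ranges - d, rcond=None)[0]
    return x

# Assumed geometry: four sensors at the corners of a 10x10 area
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(sensors - true_pos, axis=1)  # noise-free ranges
est_pos = toa_position(sensors, ranges)
```

An NLOS bias added to one of the ranges would pull this estimate away from the true position, which is the failure mode the asymmetric-noise estimator addresses.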
Abstract:
Airflow rate is one of the most important parameters in the soil vapor extraction of contaminated sites, due to its direct influence on the mass transfer occurring during the remediation process. This work reports a study of the influence of airflow rate on soil vapor extractions performed in sandy soils contaminated with benzene, toluene, ethylbenzene, xylene, trichloroethylene and perchloroethylene. The objectives were: (i) to analyze the influence of airflow rate on the process; (ii) to develop a methodology to predict the remediation time and the remediation efficiency; and (iii) to select the most efficient airflow rate. For dry sandy soils with negligible contents of clay and natural organic matter, containing the contaminants cited above, it was concluded that: (i) if equilibrium between the pollutants and the different phases present in the soil matrix was reached and slow diffusion effects did not occur, higher airflow rates gave the fastest remediations; (ii) it was possible to predict the remediation time and efficiency with errors below 14%; and (iii) the most efficient remediations were achieved with airflow rates below 1.2 cm³ s⁻¹ at standard temperature and pressure conditions.
Abstract:
Soil vapor extraction (SVE) and bioremediation (BR) are two of the most common soil remediation technologies. Their application is widespread; however, both present limitations, related respectively to the efficiency of SVE in organic soils and to the remediation times of some BR processes. This work studied the combination of these two technologies to verify whether the legal clean-up goals can be achieved in soil remediation projects involving seven different simulated soils separately contaminated with toluene and xylene. The remediations consisted of SVE followed by biostimulation. The results show that the combination of the two technologies is effective and achieves the clean-up goals imposed by Spanish legislation. Under the experimental conditions used in this work, SVE alone is sufficient for the remediation of soils, contaminated separately with toluene and xylene, with organic matter contents (OMC) below 4%. In soils with higher OMC, using BR as a complementary technology once the contaminant concentration in the soil gas phase reaches values near 1 mg/L allows the clean-up goals to be achieved. The OMC was a key parameter because it hindered SVE through adsorption phenomena but enhanced the BR process by acting as a source of microorganisms and nutrients.
Abstract:
The subtribe Gentianinae comprises ca. 425 species, most of them within the well-studied genus Gentiana and mainly distributed over the Eurasian continent. Phylogenetic relationships between Gentiana and its closest relatives, the climbing gentians (Crawfurdia, Tripterospermum) and the new genus Metagentiana, remain unclear. All three genera were recently found to be polyphyletic, possibly because of poor sampling of Tripterospermum and Crawfurdia. The highest diversity of Gentianinae occurs in the western Himalaya, but the absence of uncontroversial fossil evidence limits our understanding of its biogeography. In the present study, we generated ITS and atpB-rbcL sequences for 19 species of Tripterospermum, 9 of Crawfurdia and 11 of Metagentiana, together representing about 60 percent of the species diversity of these genera. Our results show that only Metagentiana is polyphyletic, dividing into three monophyletic entities, and no unambiguous synapomorphies were associated with these entities. Different combinations of three approximate calibration points were used to generate three divergence-time estimation scenarios. Although the dating hypotheses were mostly inconsistent, they concurred in associating the radiation of Gentiana with an orogenic phase of the Himalaya between 15 and 10 million years ago. Our study illustrates the conceptual difficulties of establishing a time frame of diversification in a group lacking fossils of sufficient number and quality.
Abstract:
It is essential for organizations to compress detailed sets of information into more comprehensible sets, thereby establishing sharp data compression and good decision-making. In chapter 1, I review and structure the literature on information aggregation in management accounting research. I outline the cost-benefit trade-off that management accountants need to consider when they decide on the optimal levels of information aggregation. Beyond the fundamental information content perspective, organizations also have to account for cognitive and behavioral perspectives. I elaborate on these aspects, differentiating between research in cost accounting, budgeting and planning, and performance measurement. In chapter 2, I focus on a specific bias that arises when probabilistic information is aggregated. In budgeting and planning, for example, organizations need to estimate mean costs and durations of projects, as the mean is the only measure of central tendency that is linear. Unlike the mean, measures such as the mode or median cannot simply be added up. Given the specific shape of cost and duration distributions, estimating mode or median values will result in underestimations of total project costs and durations. In two experiments, I find that participants tend to estimate mode values rather than mean values, resulting in large distortions of estimates for total project costs and durations. I also provide a strategy that partly mitigates this bias. In the third chapter, I conduct an experimental study to compare two approaches to time estimation for cost accounting, i.e., traditional activity-based costing (ABC) and time-driven ABC (TD-ABC). Contrary to claims made by proponents of TD-ABC, I find that TD-ABC is not necessarily suitable for capacity computations.
However, I also provide evidence that TD-ABC seems better suited to cost allocations than traditional ABC.
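The aggregation bias described in chapter 2 can be made concrete with a right-skewed distribution: for a lognormal task duration the mode lies below the mean, so summing per-task modes understates the total. The lognormal choice and its parameters below are illustrative assumptions, not the dissertation's data.

```python
import numpy as np

# Right-skewed task costs/durations modelled as lognormal(mu, sigma)
mu, sigma, n_tasks = 2.0, 0.6, 10            # assumed parameters
mean_each = np.exp(mu + sigma**2 / 2.0)      # lognormal mean
mode_each = np.exp(mu - sigma**2)            # lognormal mode

# Expectation is linear, so per-task means sum to the correct total;
# per-task modes sum to a systematic underestimate.
total_by_means = n_tasks * mean_each
total_by_modes = n_tasks * mode_each
shortfall = 1.0 - total_by_modes / total_by_means   # ~42% here
```

With these parameters the mode-based total falls short of the mean-based total by roughly 42%, illustrating why estimating "the most likely value" per task distorts aggregate project budgets.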
Abstract:
Today, cloud computing is the next stage in the development of the information-oriented society in the field of information technology. Much attention is paid to cloud computing in general, but the lack of scientific consideration of its components means that not all aspects are well examined. This thesis is an attempt to consider Platform as a Service (a technology for providing a development environment through the Internet) from divergent angles. Technical characteristics, costs, time, effectiveness estimation, risks, applicable strategies, the migration process, advantages and disadvantages, and the future of the approach are examined to obtain an overall picture of cloud platforms. A literature study was used to examine Platform as a Service, the characteristics of existing cloud platforms were explored, and a model of a typical software development company was developed to create a scenario of migration to cloud technologies. The research showed that, despite all their virtues in reducing costs and time, cloud platforms face some significant obstacles to adoption: privacy, security and insufficient legislation impede the widespread uptake of the concept.
Abstract:
The freezing times of fruit pulp models packed and conditioned in multi-layered boxes were evaluated under conditions similar to those employed commercially. Estimating the freezing time is difficult due to the presence of significant voids in the boxes, whose influence may be analyzed by various methods. In this study, a procedure for estimating freezing time using models described in the literature was compared with experimental time/temperature measurements. The results show that the airflow through the packages is a significant parameter for freezing time estimation. When the presence of preferential channels was considered, the freezing time predicted by the models could be 10% lower than the experimental values, depending on the method. The isotherms, traced as a function of the location of the samples inside the boxes, showed a displacement of the thermal center relative to the geometric center of the product.
Abstract:
The modeling work was carried out using EGSnrc, a software package developed by the National Research Council of Canada.
Abstract:
The emergence of new applications and services (such as multimedia applications, voice over IP, IPTV, video on demand, etc.) and users' growing need for mobility lead to ever-increasing bandwidth demand and to difficulty in managing that bandwidth in wireless cellular networks (WCNs), causing a degradation of quality of service. This thesis therefore addresses resource management, and more precisely bandwidth management, in WCNs. The first part of the thesis focuses on predicting the mobility of WCN users. In this context, we propose a relatively accurate mobility prediction model that predicts the final or intermediate destination and, subsequently, the paths of mobile users toward their predicted destination. The model is based on: (a) the user's movement habits (filtered by day type and time of day); (b) the user's current movement; (c) the user's knowledge; (d) the direction toward an estimated destination; and (e) the spatial structure of the movement area. Simulation results show that this model is considerably more accurate than existing approaches. The second part of the thesis addresses admission control and bandwidth management in WCNs.
Specifically, we propose a bandwidth management approach comprising: (1) an approach for estimating inter-cell handoff times that accounts for the user density of the movement area, user mobility characteristics and traffic lights; (2) an approach for estimating the bandwidth available in advance in the cells, taking into account the bandwidth requirements and lifetimes of ongoing sessions; and (3) an approach for passive bandwidth reservation in the cells that ongoing sessions will visit, and for admission control of new session requests, taking into account user mobility and cell behavior. Simulation results indicate that this approach greatly reduces abrupt terminations of ongoing sessions while providing an acceptable rejection rate for new connection requests and a high rate of bandwidth utilization. The third part of the thesis addresses the main limitation of the first two parts, namely scalability (with respect to the number of users), and proposes a framework that integrates mobility prediction models with models for predicting available bandwidth. Indeed, in the two previous parts of the thesis, mobility predictions are made for each individual user. To make the proposed framework scalable, we therefore propose group-based mobility prediction models built on: (a) user profiles (i.e., their preferences in terms of route characteristics); (b) road traffic conditions and user behavior; and (c) the spatial structure of the movement area.
Simulation results show that the proposed framework improves network performance compared with existing frameworks that use group-based mobility prediction models for bandwidth reservation.
Abstract:
Interference with time estimation from concurrent nontemporal processing has been shown to depend on the short-term memory requirements of the concurrent task (Fortin & Breton, 1995; Fortin, Rousseau, Bourque, & Kirouac, 1993). In particular, it has been claimed that active processing of information in short-term memory produces interference, whereas simply maintaining information does not. Here, four experiments are reported in which subjects were trained to produce a 2,500-msec interval and then perform concurrent memory tasks. Interference with timing was demonstrated for concurrent memory tasks involving only maintenance. In one experiment, increasing the set size in a pitch memory task systematically lengthened temporal production. Two further experiments suggested that this was due to a specific interaction between the short-term memory requirements of the pitch task and those of temporal production. In the final experiment, subjects performed temporal production while concurrently remembering the durations of a set of tones. Interference with interval production was comparable to that produced by the pitch memory task. The results are discussed in terms of a pacemaker-counter model of temporal processing, in which the counter component is supported by short-term memory.
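The pacemaker-counter account can be sketched as a toy simulation: pulses are emitted at a fixed rate and each is registered with some probability, so missed counts (as might happen under concurrent memory load) lengthen the produced interval. All numbers and the miss-probability mechanism are illustrative assumptions, not the article's model parameters.

```python
import random

def produced_interval_ms(target_count=250, pulse_ms=10.0, miss_prob=0.0,
                         seed=0):
    """Toy pacemaker-counter: accumulate pulses until target_count is
    reached; each pulse is registered with probability 1 - miss_prob."""
    rng = random.Random(seed)
    counted, elapsed = 0, 0.0
    while counted < target_count:
        elapsed += pulse_ms               # pacemaker ticks regardless
        if rng.random() >= miss_prob:     # counter may miss a tick
            counted += 1
    return elapsed

baseline = produced_interval_ms(miss_prob=0.0)   # 2,500 ms as trained
loaded = produced_interval_ms(miss_prob=0.2)     # lengthened production
```

The qualitative prediction matches the reported finding: any manipulation that degrades the counter (here, a nonzero miss probability) lengthens temporal production.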
Abstract:
Soil contamination with petroleum is one of the major concerns of industries operating in the field and of environmental agencies. Petroleum consists mainly of alkanes and aromatic hydrocarbons. The most common polyaromatic hydrocarbons are naphthalene, anthracene, phenanthrene, benzopyrene and their various isomers. These substances cause adverse effects on humans and the environment. The main objective of this work is therefore to study an advanced oxidation process using the oxidant potassium permanganate (KMnO4) for the remediation of soils contaminated with two polyaromatic hydrocarbons (PAHs): anthracene and phenanthrene. The study was conducted at bench scale. The first stage was a batch experiment in which the variables were time and the oxidant dosage in the soil. The second stage was a continuous remediation in a fixed column, for which the only variable was remediation time; the oxidant concentration in this stage, 2,464 mg/L, was based on the best result obtained in the batch tests. The degradation results for these contaminants were satisfactory at the following dosages and times: (a) with 5 g of oxidant per kg of soil for 48 hours, residual contents of 28 mg of phenanthrene and 1.25 mg of anthracene per kg of soil were obtained; and (b) with 7 g of oxidant per kg of soil for 48 hours, 24 mg of phenanthrene and 0.77 mg of anthracene per kg of soil remained, and therefore below the residential and industrial intervention limits proposed by the São Paulo State Environmental Company (CETESB).
Abstract:
The purpose of this work is to compare two methods for estimating the useful lifetime of a small, shallow reservoir located in Pirassununga, São Paulo State, Brazil: 1) sedimentometry; and 2) bathymetry. The model indicated a useful lifetime of around 50 years for the studied dam. The values range between 51 years for the initial years of 1998/1999 and 46 years for the final years of 2004/2005, with oscillations of 27 years between 1999/2000 and 76 years between 2000/2001. The results show that it is possible to estimate the useful lifetime of small dams through the sedimentometric method by knowing the value of the correction coefficient (K'). The coefficient is calculated from the useful-lifetime values obtained by the sedimentometric and bathymetric methods simultaneously over the span of one year.
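One plausible reading of the K' calibration can be sketched as a worked example: take one year in which both methods were applied, form K' as the ratio of the two lifetime estimates, and use it to correct later sedimentometric estimates. The abstract does not give K''s exact definition, so both the ratio form and every number below are hypothetical.

```python
# Hypothetical calibration year (assumed values, not from the study)
life_bathymetric = 51.0       # years, reference bathymetric estimate
life_sedimentometric = 60.0   # years, same-year sedimentometric estimate

# Assumed ratio form of the correction coefficient
k_prime = life_bathymetric / life_sedimentometric   # K' = 0.85 here

# Apply K' to a later, uncorrected sedimentometric estimate (assumed 54 y)
corrected = k_prime * 54.0
```

The point of the sketch is only the workflow: one dual-method year fixes K', after which the cheaper sedimentometric method can be used alone.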