897 results for lifetime of isomer
Abstract:
Four 100 m lengths of both monofilament gill nets and trammel nets were deployed at depths between 15 and 18 m off the coast of the Algarve (southern Portugal) between April 1995 and June 1996. The nets were set on a natural rocky bottom with one end cut loose to simulate lost nets. Changes in net structure (net height, effective fishing area, movement, colonisation, wear and tear) and their catches (species, sizes, numbers, and biomass) were monitored by divers. Similar patterns were observed in all the nets, with a sharp decrease in net height and effective fishing area, and an increase in visibility within the first few weeks. Net movement was negligible except in the case of interference from other fishing gears. Catch rates were initially comparable to those of normally fished gill nets and trammel nets in this area, but decreased steadily over time. No sea birds, reptiles or mammals were caught in any of the 8 nets. Catches were dominated by fish (89% by number, at least 27 species), in particular by sea breams (Sparidae) and wrasses (Labridae). Under the conditions experienced throughout the study, the fishing lifetime of a 'lost' net is between 15 and 20 wk. Based on an exponential model, we estimated that 100 m lengths of gill net and trammel net will catch 314 and 221 fish respectively over a 17 wk period. However, we consider this to be an underestimate due to high rates of predation and scavenging by octopuses, cuttlefish, moray eels, conger eels, and other fish such as the wrasse Coris julis. When the nets were surveyed in the following spring, 8 to 11 mo after being deployed, they were found to be completely destroyed or heavily colonised by algae and had become incorporated into the reef.
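The exponentially declining catch rate described above can be sketched as a simple model. The initial catch rate `c0` and decay constant `lam` below are hypothetical placeholders, not values from the study; the point is only the shape of the cumulative-catch curve.

```python
import math

def cumulative_catch(c0, lam, weeks):
    """Cumulative catch of a net whose catch rate decays exponentially,
    c(t) = c0 * exp(-lam * t); integrating over [0, T] gives
    (c0 / lam) * (1 - exp(-lam * T))."""
    return (c0 / lam) * (1.0 - math.exp(-lam * weeks))

# Hypothetical parameters: initial rate of 40 fish/wk, decay constant 0.2/wk.
total_17wk = cumulative_catch(40.0, 0.2, 17.0)
```

As the deployment time grows, the cumulative catch saturates at `c0/lam`, consistent with the observation that catches decrease steadily until the net effectively stops fishing.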
Abstract:
The initial aim of the CareMan project was to develop a joint degree programme that combined and utilised the strengths of the five collaborating universities already involved in delivering social and health care management education. Because the project was to be implemented in collaboration between educational institutions, the collaboration had to be based on a detailed understanding of the national and institutional specifics of each of the individual academic entities. During this process it was recognised that, due to a number of regulation issues, achieving the original aim would not be possible; ultimately, following a series of analytical works, which are presented below, it was decided that a set of three master's level modules should be developed. One of the reasons was that the Finnish law on master's degrees at universities of applied sciences (UAS) stated that the requirement for entry to a UAS master's programme was a bachelor's degree from a UAS or equivalent, plus a minimum of three years of work experience in an appropriate field. The three years' work experience is also required of international students. In practice this meant that the participating Finnish UASs, Lahti and HAMK, could not award a diploma to foreign students without this work experience. The other European universities do not have the work experience requirement, although some count it as a bonus for admission (FHS UK). There were also other differences in law (e.g., requirements for minimum standards in Social Work education at FHS UK) that could not be overcome during the period of project realisation. Consequently, the outcome was the development of only three common educational modules, each of 10 ECTS, which were developed, delivered and assessed during the lifetime of the project. The intention was that these would be integrated into the current master's level provision in each of the universities.
Abstract:
This report documents work carried out to develop and prove a model for predicting the lifetime of painted metal components, with a particular emphasis on Colorbond® due to its prominent use throughout Australia. This work continues from previous developments reported in 2002-059-B No. 12 [1]. The work was extended with the following research: (1) experimental proving of the leaching of chromate inhibitors from Colorbond® materials; (2) updated models for the accumulation of salts and the time of wetness for gutters, based upon field observations; (3) Electrochemical Impedance Spectroscopy investigations aimed at correlating the corrosion rates of weathered Colorbond® with those predicted by modelling.
Abstract:
Modern Engineering Asset Management (EAM) requires the accurate assessment of current asset health condition and the prediction of future condition. Suitable mathematical models that are capable of predicting Time-to-Failure (TTF) and the probability of failure in future time are essential. In traditional reliability models, the lifetime of assets is estimated using failure time data. However, in most real-life situations and industry applications, the lifetime of assets is influenced by different risk factors, which are called covariates. The fundamental notion in reliability theory is the failure time of a system and its covariates. These covariates change stochastically and may influence and/or indicate the failure time. Many statistical models have been developed to estimate the hazard of assets or individuals with covariates, and an extensive literature on hazard models with covariates (also termed covariate models), including theory and practical applications, has emerged. This paper is a state-of-the-art review of the existing literature on these covariate models in both the reliability and biomedical fields. One major purpose of this expository paper is to synthesise these models from both the industrial reliability and biomedical fields and then contextually group them into non-parametric and semi-parametric models. Comments on their merits and limitations are also presented. Another main purpose is to comprehensively review and summarise the current research on the development of covariate models so as to facilitate the application of more covariate modelling techniques in prognostics and asset health management.
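The semi-parametric covariate models surveyed here descend largely from the Proportional Hazard Model, in which covariates scale a baseline hazard multiplicatively: h(t|z) = h0(t)·exp(β·z). A minimal sketch, assuming a Weibull baseline (the shape and scale values below are arbitrary illustrations, not from the paper):

```python
import math

def weibull_baseline_hazard(t, shape, scale):
    # h0(t) = (shape/scale) * (t/scale)^(shape-1)
    return (shape / scale) * (t / scale) ** (shape - 1.0)

def proportional_hazard(t, z, beta, shape=2.0, scale=100.0):
    # PHM: covariates scale the baseline hazard multiplicatively via exp(beta . z)
    return weibull_baseline_hazard(t, shape, scale) * math.exp(
        sum(b * x for b, x in zip(beta, z))
    )
```

A defining property of this family, and the one the proportionality assumption rests on, is that the hazard ratio between two covariate vectors, exp(β·(z1 − z2)), is constant in time.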
Abstract:
Modern Engineering Asset Management (EAM) requires the accurate assessment of current asset health condition and the prediction of future condition. Appropriate mathematical models that are capable of estimating time to failure and the probability of failure in the future are essential in EAM. In most real-life situations, the lifetime of an engineering asset is influenced and/or indicated by different factors that are termed covariates. Hazard prediction with covariates is a fundamental notion in reliability theory: it estimates the tendency of an engineering asset to fail instantaneously beyond the current time, given that it has survived up to the current time. A number of statistical covariate-based hazard models have been developed. However, none of them explicitly incorporates both external and internal covariates into one model. This paper introduces a novel covariate-based hazard model to address this concern. The model is named the Explicit Hazard Model (EHM). Both the semi-parametric and non-parametric forms of this model are presented in the paper. The major purpose of this paper is to illustrate the theoretical development of EHM. Due to page limitations, a case study with reliability field data is presented in the applications part of this study.
Abstract:
Hazard and reliability prediction of an engineering asset is one of the significant fields of research in Engineering Asset Health Management (EAHM). In real-life situations, where an engineering asset operates under dynamic operational and environmental conditions, the lifetime of an engineering asset can be influenced and/or indicated by different factors that are termed covariates. The Explicit Hazard Model (EHM), a covariate-based hazard model, is a new approach to hazard prediction which explicitly incorporates both internal and external covariates into one model. EHM is an appropriate model for the analysis of lifetime data in the presence of both internal and external covariates in the reliability field. This paper presents applications of the methodology introduced and illustrated in the theory part of this study. Here, the semi-parametric EHM is applied to a case study to predict the hazard and reliability of resistance elements on a Resistance Corrosion Sensor Board (RCSB).
Abstract:
Free-radical processes underpin the thermo-oxidative degradation of polyolefins. Thus, to extend the lifetime of these polymers, stabilizers are generally added during processing to scavenge the free radicals formed as the polymer degrades. Nitroxide radical precursors, such as hindered amine stabilizers (HAS) [1,2], are common polypropylene additives, as the nitroxide moiety is a potent scavenger of polymer alkyl radicals (R•). Oxidation of HAS by radicals formed during polypropylene degradation yields nitroxide radicals (R2NO•), which rapidly trap the polymer degradation species to produce alkoxyamines, thus retarding oxidative polymer degradation. This increase in polymer stability is demonstrated by a lengthening of the "induction period" of the polymer (the time prior to a sharp rise in the oxidation of the polymer). Instrumental techniques such as chemiluminescence or infrared spectroscopy are somewhat limited in detecting changes in the polymer during the initial stages of degradation. Therefore, other methods for observing polymer degradation have been sought, as the useful life of a polymer does not extend far beyond its "induction period".
Abstract:
The challenge of persistent navigation and mapping is to develop an autonomous robot system that can simultaneously localize, map and navigate over the lifetime of the robot with little or no human intervention. Most solutions to the simultaneous localization and mapping (SLAM) problem aim to produce highly accurate maps of areas that are assumed to be static. In contrast, solutions for persistent navigation and mapping must produce reliable goal-directed navigation outcomes in an environment that is assumed to be in constant flux. We investigate the persistent navigation and mapping problem in the context of an autonomous robot that performs mock deliveries in a working office environment over a two-week period. The solution was based on the biologically inspired visual SLAM system, RatSLAM. RatSLAM performed SLAM continuously while interacting with global and local navigation systems, and a task selection module that selected between exploration, delivery, and recharging modes. The robot performed 1,143 delivery tasks to 11 different locations with only one delivery failure (from which it recovered), traveled a total distance of more than 40 km over 37 hours of active operation, and recharged autonomously a total of 23 times.
Abstract:
In wireless mobile ad hoc networks (MANETs), packet transmission is impaired by radio link fluctuations. This paper proposes a novel channel adaptive routing protocol which extends the Ad-hoc On-Demand Multipath Distance Vector routing protocol (AOMDV) to accommodate channel fading. Specifically, the proposed Channel Aware AOMDV (CA-AOMDV) uses the channel average non-fading duration as a routing metric to select stable links for path discovery, and applies a preemptive handoff strategy to maintain reliable connections by exploiting channel state information. Using the same information, paths can be reused when they become available again, rather than being discarded. We provide new theoretical results for the downtime and lifetime of a live-die-live multiple path system, as well as detailed theoretical expressions for common network performance measures, providing useful insights into the differences in performance between CA-AOMDV and AOMDV. Simulation and theoretical results show that CA-AOMDV has greatly improved network performance over AOMDV.
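The routing-metric step described above can be sketched as follows: each link carries an average non-fading duration (ANFD), and candidate paths are compared on that metric to select stable links. The bottleneck (minimum-link) aggregation used here is an assumption for illustration; the abstract does not specify how per-link ANFD values combine into a path metric.

```python
def path_stability(link_anfds):
    # A path is only as stable as its weakest link: take the minimum
    # average non-fading duration (seconds) over the path's links.
    return min(link_anfds)

def select_path(candidate_paths):
    """Pick the path whose bottleneck ANFD is largest.

    candidate_paths: mapping of path id -> list of per-link ANFD values.
    """
    return max(candidate_paths, key=lambda p: path_stability(candidate_paths[p]))
```

Under this sketch, a path of uniformly moderate links beats a path with one deeply faded link, which matches the stated goal of preferring stable links during path discovery.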
Abstract:
Special collections, because of the issues associated with conservation and use (a feature they share with archives), tend to be the most digitized areas in libraries. The Nineteenth Century Schoolbooks collection comprises rarely held nineteenth-century schoolbooks painstakingly collected over a lifetime of work by Prof. John A. Nietz and donated to the Hillman Library at the University of Pittsburgh in 1958; the original donation of 9,000 volumes has since grown to 15,000. About 140 of these texts are completely digitized and showcased on a publicly accessible website through the University of Pittsburgh's Library, along with a searchable bibliography of the entire collection, which has expanded the awareness of this collection and its user base beyond the academic community. The URL for the website is http://digital.library.pitt.edu/nietz/. The collection is a rich resource for researchers studying the intellectual, educational, and textbook publishing history of the United States. In this study, we examined several existing records collected by the Digital Research Library at the University of Pittsburgh in order to determine the identity and searching behaviors of the users of this collection. The records examined include: 1) the results of a 3-month-long user survey; 2) user access statistics, including search queries, for a period of one year, a year after the digitized collection became publicly available in 2001; and 3) e-mail input received by the website over the 4 years from 2000 to 2004. The results of the study demonstrate the differences in online retrieval strategies used by academic researchers and historians, archivists, avocationists, and the general public, and the importance of facilitating the discovery of digitized special collections through the use of electronic finding aids and an interactive interface with detailed metadata.
Abstract:
One of the major challenges in achieving long term robot autonomy is the need for a SLAM algorithm that can perform SLAM over the operational lifetime of the robot, preferably without human intervention or supervision. In this paper we present insights gained from a two week long persistent SLAM experiment, in which a Pioneer robot performed mock deliveries in a busy office environment. We used the biologically inspired visual SLAM system, RatSLAM, combined with a hybrid control architecture that selected between exploring the environment, performing deliveries, and recharging. The robot performed more than a thousand successful deliveries with only one failure (from which it recovered), travelled more than 40 km over 37 hours of active operation, and recharged autonomously 23 times. We discuss several issues arising from the success (and limitations) of this experiment and two subsequent avenues of work.
Abstract:
The recent expansion of prediction markets provides a great opportunity to test the market efficiency hypothesis and the calibration of trader judgements. Using a large database of observed prices, this article studies the calibration of prediction market prices on sporting events using both nonparametric and parametric methods. While only minor bias can be observed during most of the lifetime of the contracts, the calibration of prices deteriorates very significantly in the last moments of the contracts' lives. Traders tend to overestimate the probability that the losing team will reverse the situation in the last minutes of the game.
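A nonparametric calibration check of the kind described can be sketched by binning contract prices and comparing each bin's mean price (the implied probability) with the observed outcome frequency; well-calibrated prices put the two close together in every bin. Function and variable names below are illustrative, not from the article.

```python
def calibration_table(prices, outcomes, n_bins=10):
    """Group (price, outcome) pairs into equal-width price bins and report,
    per non-empty bin, the mean price and the observed win frequency."""
    bins = [[] for _ in range(n_bins)]
    for p, won in zip(prices, outcomes):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[i].append((p, won))
    table = []
    for cell in bins:
        if cell:
            mean_price = sum(p for p, _ in cell) / len(cell)
            win_freq = sum(w for _, w in cell) / len(cell)
            table.append((mean_price, win_freq))
    return table
```

The late-game miscalibration the article reports would show up here as bins (restricted to end-of-contract trades) where the mean price of the trailing team noticeably exceeds its win frequency.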
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure in future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is the modelling of condition indicators and operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were developed on the underlying theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternative models to PHM have been suggested. The existing covariate-based hazard models neglect to fully utilise three types of asset health information (including failure event data (i.e. 
observed and/or suspended), condition data, and operating environment data) in one model to achieve more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into the covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, which is termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three available types of asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; therefore they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few.
Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard from the baseline hazard. These indicators are caused by the environment in which an asset operates and have not been explicitly identified by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of the operating environment indicators could be nought in EHM, the condition indicators are always present, because these indicators are observed and measured for as long as an asset is operational and surviving. EHM has several advantages over the existing covariate-based hazard models. One is that this model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators and the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. According to the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) for the form of the baseline hazard. However, in many industry applications, due to sparse failure event data of assets, the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of the semi-parametric EHM of a specified lifetime distribution for failure event histories, the non-parametric EHM, which is a distribution-free model, has been developed. The development of EHM into two forms is another merit of the model.
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models was appraised by comparing their estimates with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including a new parameter estimation method for the case of time-dependent covariate effects and missing data, application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
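Combining the ingredients described above (a baseline hazard that depends on both time and a condition indicator, plus a multiplicative covariate function for the operating environment indicators), a semi-parametric EHM-style hazard might be sketched as follows. The Weibull baseline and the exponential condition-scaling term `alpha` are assumptions for illustration, not the thesis's exact formulation.

```python
import math

def ehm_hazard(t, condition, env, beta, shape=2.0, scale=100.0, alpha=0.05):
    """EHM-style sketch: the baseline hazard is a function of both time and
    the condition indicator, while operating environment indicators enter a
    multiplicative covariate function, as described in the text."""
    # Weibull baseline, updated/reformed by the observed condition indicator
    # (exponential scaling by alpha is an assumed functional form).
    baseline = (shape / scale) * (t / scale) ** (shape - 1.0) * math.exp(alpha * condition)
    # Environment covariates accelerate/decelerate the hazard from the baseline.
    return baseline * math.exp(sum(b * x for b, x in zip(beta, env)))
```

Note the two properties the text emphasises: a worse condition reading raises the baseline itself, and if the operating environment effects are nought (empty or zero covariates), the hazard reduces to the condition-updated baseline.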
Abstract:
The challenge of persistent appearance-based navigation and mapping is to develop an autonomous robotic vision system that can simultaneously localize, map and navigate over the lifetime of the robot. However, the computation time and memory requirements of current appearance-based methods typically scale not only with the size of the environment but also with the operation time of the platform; also, repeated revisits to locations will develop multiple competing representations which reduce recall performance. In this paper we present a solution to the persistent localization, mapping and global path planning problem in the context of a delivery robot in an office environment over a one-week period. Using a graphical appearance-based SLAM algorithm, CAT-Graph, we demonstrate constant time and memory loop closure detection with minimal degradation during repeated revisits to locations, along with topological path planning that improves over time without using a global metric representation. We compare the localization performance of CAT-Graph to openFABMAP, an appearance-only SLAM algorithm, and the path planning performance to occupancy-grid based metric SLAM. We discuss the limitations of the algorithm with regard to environment change over time and illustrate how the topological graph representation can be coupled with local movement behaviors for persistent autonomous robot navigation.
Abstract:
It is widely recognised that defining trade-offs between greenhouse gas emissions using 'emission equivalence' based on global warming potentials (GWPs) referenced to carbon dioxide produces anomalous results when applied to methane. The short atmospheric lifetime of methane, compared to the timescales of CO2 uptake, leads to the greenhouse warming depending strongly on the temporal pattern of emission substitution. We argue that a more appropriate way to consider the relationship between the warming effects of methane and carbon dioxide is to define a 'mixed metric' that compares ongoing methane emissions (or reductions) to one-off emissions (or reductions) of carbon dioxide. Quantifying this approach, we propose that a one-off sequestration of 1 t of carbon would offset an ongoing methane emission in the range 0.90–1.05 kg CH4 per year. We present an example of how our approach would apply to rangeland cattle production, and consider the broader context of mitigation of climate change, noting that the reverse trade-off would raise significant challenges in managing the risk of non-compliance. Our analysis is consistent with other approaches to addressing the criticisms of GWP-based emission equivalence, but provides a simpler and more robust approach while still achieving close equivalence of climate mitigation outcomes ranging over decadal to multi-century timescales.
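The proposed trade-off is quoted per tonne of carbon; restating it per tonne of CO2 is a simple molecular-weight conversion (44/12 t CO2 per t C). A minimal sketch:

```python
CO2_PER_C = 44.0 / 12.0  # t CO2 per t C (molecular-weight ratio CO2:C)

def methane_offset_per_t_co2(offset_per_t_c):
    """Convert the quoted trade-off (kg CH4/yr of ongoing emission offset by a
    one-off sequestration of 1 t of carbon) to a per-tonne-of-CO2 basis."""
    return offset_per_t_c / CO2_PER_C
```

This puts the quoted 0.90–1.05 kg CH4/yr range at roughly 0.25–0.29 kg CH4/yr offset per one-off tonne of CO2 sequestered.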