969 results for Motion estimation


Relevance:

20.00%

Publisher:

Abstract:

Continuous passive motion (CPM) is currently a part of patient rehabilitation regimens after a variety of orthopedic surgical procedures. While CPM can enhance the joint healing process, the direct effects of CPM on cartilage metabolism remain unknown. Recent in vivo and in vitro observations suggest that mechanical stimuli can regulate articular cartilage metabolism of proteoglycan 4 (PRG4), a putative lubricating and chondroprotective molecule found in synovial fluid and at the articular cartilage surface.

Objectives: (1) Determine the topographical variation in intrinsic cartilage PRG4 secretion. (2) Apply a CPM device to whole joints in bioreactors and assess effects of CPM on PRG4 biosynthesis.

Methods: A bioreactor was developed to apply CPM to bovine stifle joints in vitro. Effects of 24 h of CPM on PRG4 biosynthesis were determined.

Results: PRG4 secretion rate varied markedly over the joint surface. Rehabilitative joint motion applied in the form of CPM regulated PRG4 biosynthesis, in a manner dependent on the duty cycle of cartilage sliding against opposing tissues. Specifically, in certain regions of the femoral condyle that were continuously or intermittently sliding against meniscus and tibial cartilage during CPM, chondrocyte PRG4 synthesis was higher with CPM than without.

Conclusions: Rehabilitative joint motion, applied in the form of CPM, stimulates chondrocyte PRG4 metabolism. The stimulation of PRG4 synthesis is one mechanism by which CPM may benefit cartilage and joint health in post-operative rehabilitation. (C) 2006 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

This thesis is concerned with the sloshing motion of water in a moonpool. It is a relatively new problem that is particularly prevalent in moonpools of relatively large dimensions, and it is further complicated by the additional behaviour of vertical oscillation. It is inevitable that large moonpools will be needed as offshore technology advances, making the problem an important one. The research involves two parts: a theoretical study and an experimental study. The theoretical study idealises the moonpool as a two-dimensional system, represented by two surface-piercing parallel barriers a distance 2a apart. The barriers are forced to undergo roll motion, which in turn generates waves. These waves travel in opposite directions, have the same amplitude and period, and can therefore be expressed in terms of a standing wave. This is achieved mathematically by applying wavemaker theory, so the wave amplitude at the side wall can be evaluated at near-resonant conditions. The experimental study compares the results obtained from the tank and moonpool experiments. Rolling motion creates the sloshing waves in both cases; in addition, the vertical oscillation of the moonpool is produced by generating waves at one end of the towing tank. Apart from highlighting influencing parameters, the resonant frequencies obtained from these experiments are compared with the theoretical values. Experiments demonstrating the effect of increasing damping with the aid of baffles are also conducted; added damping is necessary if launching and retrieval operations are to be carried out efficiently and safely.
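The superposition described in this abstract has a standard one-line derivation; the following is a generic sketch in linear wave notation (the symbols A, k and ω are illustrative, not taken from the thesis):

    \eta(x,t) = A\sin(kx - \omega t) + A\sin(kx + \omega t) = 2A\sin(kx)\cos(\omega t)

That is, two waves of equal amplitude and period travelling in opposite directions combine into a standing wave, whose antinodes set where the largest free-surface amplitudes, such as those at the side wall, occur.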

Relevance:

20.00%

Publisher:

Abstract:

The ability to accurately predict the remaining useful life of machine components is critical for continuous machine operation, and it can also improve productivity and enhance system safety. In condition-based maintenance (CBM), maintenance is performed based on information collected through condition monitoring and assessment of machine health. Effective diagnostics and prognostics are important aspects of CBM, enabling maintenance engineers to schedule repairs and to acquire replacement components before the components actually fail. Although a variety of prognostic methodologies have been reported recently, their application in industry is still relatively new and mostly focused on the prediction of specific component degradations. Furthermore, these methodologies require a significant number of fault indicators to accurately predict component faults, so effective use of health indicators for interpreting the machine degradation process is still required. Major challenges for accurate long-term prediction of remaining useful life (RUL) remain to be addressed. Continuous development and improvement of machine health management systems, and of accurate long-term prediction of machine remnant life, is therefore required in real industrial applications.

This thesis presents an integrated diagnostics and prognostics framework based on health state probability estimation for accurate, long-term prediction of machine remnant life. In the proposed model, prior empirical (historical) knowledge is embedded in the integrated diagnostics and prognostics system for classification of impending faults and accurate probability estimation of discrete degradation stages (health states). The methodology assumes that machine degradation consists of a series of degraded states (health states) that effectively represent the dynamic and stochastic process of machine failure. The probabilities of these discrete health states, used for predicting machine remnant life, are estimated by classification algorithms. To select an appropriate classifier for health state probability estimation, comparative intelligent diagnostic tests were conducted using five different classifiers applied to progressive fault data for three different faults in a high-pressure liquefied natural gas (HP-LNG) pump. As a result of this comparison, support vector machines (SVMs) were employed for health state probability estimation in this research.

The proposed prognostic methodology has been tested and validated using a number of case studies, from simulation tests to real industry applications. The results from two actual failure case studies, using simulations and experiments, indicate that accurate estimation of health states is achievable and that the proposed method provides accurate long-term prediction of machine remnant life. In addition, the experimental results show that the proposed model can provide early warning of abnormal machine operating conditions by identifying the transitional states of machine fault conditions. Finally, the proposed prognostic model is validated through two industrial case studies. The optimal number of health states, which minimises the model training error without significantly decreasing prediction accuracy, was also examined across several health states of bearing failure. The results were very encouraging and show that the proposed prognostic model based on health state probability estimation has the potential to be used as a generic and scalable asset health estimation tool in industrial machinery.
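To illustrate the core mechanism (a classifier that outputs health state probabilities, combined into a remnant life estimate), here is a minimal sketch; the features, state labels, kernel choice, and per-state life figures are illustrative assumptions, not values from the thesis:

    import numpy as np
    from sklearn.svm import SVC

    # Illustrative training data: condition monitoring features labelled
    # with discrete health states 0..3 (0 = healthy, 3 = near failure).
    # Real labels would come from historical run-to-failure data.
    rng = np.random.default_rng(0)
    y_train = np.repeat(np.arange(4), 50)
    X_train = rng.normal(size=(200, 5)) + y_train[:, None]

    # SVM with probability outputs for health state probability
    # estimation (the RBF kernel and parameters are assumptions).
    clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

    # Assumed mean remaining life (hours) associated with each health
    # state; illustrative figures only.
    rul_per_state = np.array([1000.0, 600.0, 250.0, 50.0])

    x_now = rng.normal(size=(1, 5)) + 2.0    # current observation
    p = clf.predict_proba(x_now)[0]          # P(health state | features)
    rul = float(p @ rul_per_state)           # probability-weighted RUL
    print(f"state probabilities: {p}, estimated RUL: {rul:.0f} h")

The key design point is that the classifier's per-state probabilities, rather than a hard state label, carry the degradation information forward into the life prediction.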

Relevance:

20.00%

Publisher:

Abstract:

The State Library of Queensland is delighted to present Lumia: art/light/motion, the culmination of many years of collaboration by the Kuuki collective, led by Priscilla Bracks and Gavin Sade. This extraordinary exhibition not only showcases the unique talent of these Queenslanders, it also opens up a world of future possibilities while re-presenting the past and present. These contemporary new media installations sit comfortably within the walls of the library because they are the distinctive products of inquisitive and philosophical minds. In a sense, the exhibition highlights the longevity and purposefulness of a cultural learning institution through the non-traditional use of data, information, research and collection interpretation. The exhibition simultaneously articulates one of our key objectives: to progress the state's digital agenda. Two academic essays have been commissioned for this joint Kuuki and State Library of Queensland publication. The first is by artist and writer Paul Brown, who has specialised in art, science and technology since the late 1960s and in computational and generative art since the mid 1970s. Brown investigates the history of new media, which is celebrating its 60th anniversary, and clearly places Sade and Bracks at the forefront of this genre nationally. The second essay is by arts writer Linda Carroli, who has delved deeply into the thoughts and processes of the artists to bring to light the complex workings of their minds. The publication also features an interview Carroli conducted with the artists. This exhibition is playful, informative and contemplative. The audience is invited to play, and consequently to ponder the way we live and the environmental and social implications of our choices. The exhibition tempts us to travel deep into the Antarctic, plunge into the Great Barrier Reef, be swamped by an orchestra of crickets, enter the Charmed world, and travel back in time to a Victorian parlour where you can interact with a 'new-world' lyrebird and consider a brave new world where our only link to the animal world is through robotic representations. In essence, this exhibition is about ideas and knowledge, and what better institution than the State Library of Queensland to partner such a project? The State Library is committed to preserving culture, exploring new media and creating new content as a lasting legacy of Queensland for all Queenslanders.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a method for measuring the in-bucket payload volume on a dragline excavator for the purpose of estimating the material's bulk density in real time. Knowledge of the payload's bulk density can provide feedback to mine planning and scheduling to improve blasting, and therefore provide a more uniform bulk density across the excavation site. This allows a single optimal bucket size to be used for maximum overburden removal per dig, in turn reducing costs and emissions in dragline operation and maintenance. The proposed solution uses a range-bearing laser to locate and scan full buckets between the lift and dump stages of the dragline cycle. The bucket is segmented from the scene using cluster analysis, and the pose of the bucket is calculated using the Iterative Closest Point (ICP) algorithm. Payload points are identified using a known bucket model and subsequently converted into a height grid for volume estimation. Results from both scaled and full-scale implementations show that this method can achieve an accuracy above 95%.
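As a rough illustration of the final step, here is a minimal sketch of height-grid volume estimation from already-segmented payload points; the grid resolution and synthetic data are assumptions for the sketch, not the paper's parameters:

    import numpy as np

    def payload_volume(points: np.ndarray, cell: float = 0.05) -> float:
        # Estimate payload volume from 3-D points (N x 3, metres) already
        # segmented and transformed into the bucket frame, by rasterising
        # them into a height grid and summing cell volumes.
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        ix = ((x - x.min()) / cell).astype(int)
        iy = ((y - y.min()) / cell).astype(int)
        grid = np.full((ix.max() + 1, iy.max() + 1), -np.inf)
        # Keep the highest point per cell (the payload surface).
        np.maximum.at(grid, (ix, iy), z)
        heights = np.where(np.isinf(grid), 0.0, grid - z.min())
        return float(heights.sum() * cell * cell)

    # Illustrative usage with synthetic points forming a mound of material:
    rng = np.random.default_rng(1)
    xy = rng.uniform(-1.0, 1.0, size=(5000, 2))
    zz = np.clip(0.5 - 0.3 * (xy ** 2).sum(axis=1), 0.0, None)
    print(f"volume ~ {payload_volume(np.column_stack([xy, zz])):.3f} m^3")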

Relevance:

20.00%

Publisher:

Abstract:

Over recent years a significant amount of research has been undertaken to develop prognostic models that can be used to predict the remaining useful life of engineering assets. Implementations by industry have had only limited success. By design, models are subject to specific assumptions and approximations, some of which are mathematical, while others relate to practical implementation issues such as the amount of data required to validate and verify a proposed model. Therefore, appropriate model selection for successful practical implementation requires not only a mathematical understanding of each model type, but also an appreciation of how a particular business intends to utilise a model and its outputs. This paper discusses business issues that need to be considered when selecting an appropriate modelling approach for trial. It also presents classification tables and process flow diagrams to assist industry and research personnel in selecting appropriate prognostic models for predicting the remaining useful life of engineering assets within their specific business environment. The paper then explores the strengths and weaknesses of the main classes of prognostic models to establish what makes them better suited to certain applications than to others, and summarises how each has been applied to engineering prognostics. Consequently, this paper should provide a starting point for young researchers first considering options for remaining useful life prediction. The models described in this paper are knowledge-based (expert and fuzzy), life expectancy (stochastic and statistical), artificial neural network, and physical models.

Relevance:

20.00%

Publisher:

Abstract:

Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, or the crack depth on a gear), which directly relate to a failure mechanism; and (2) indirect indicators (e.g. indicators extracted from vibration signals and oil analysis data), which can only partially reveal a failure mechanism. While direct indicators enable more precise assessment of asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators from indirect indicators. However, existing state space models for estimating direct indicators largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are inconsistent with the nonlinear and irreversible degradation processes of most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated for performance using numerical simulations in MATLAB. The results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox. In this application, the new state space model gives a better fit than a state space model with linear and Gaussian assumptions.
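The Monte Carlo estimation idea can be illustrated with a basic particle filter; the degradation function, noise levels, and measurement model below are generic assumptions for the sketch, not the paper's model:

    import numpy as np

    rng = np.random.default_rng(2)

    # Illustrative nonlinear, irreversible degradation dynamics (an
    # assumption for this sketch, not the paper's model).
    def g(x):
        return x + 0.02 * np.exp(0.1 * x)

    n = 1000
    particles = np.abs(rng.normal(0.1, 0.02, n))  # initial crack-depth guesses
    proc_noise, obs_noise = 0.01, 0.05

    def step(particles, y_obs):
        # Propagate each particle through the nonlinear state equation.
        particles = g(particles) + rng.normal(0.0, proc_noise, n)
        # Weight by the likelihood of the indirect measurement; the
        # measurement model h(x) = x is an assumption for the sketch.
        w = np.exp(-0.5 * ((y_obs - particles) / obs_noise) ** 2)
        w /= w.sum()
        # Resample to avoid weight degeneracy.
        return rng.choice(particles, size=n, p=w)

    for y in [0.12, 0.15, 0.19, 0.25]:  # illustrative indirect measurements
        particles = step(particles, y)
    print(f"estimated direct indicator (crack depth): {particles.mean():.3f}")

Because the particle cloud approximates the full posterior rather than a Gaussian, no linearity or Gaussianity is required of the degradation process.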

Relevance:

20.00%

Publisher:

Abstract:

Many traffic situations require drivers to cross or merge into a stream with higher priority. Gap acceptance theory enables us to model such processes to analyse traffic operation. This discussion demonstrates that a numerical search, fine-tuned by statistical analysis, can be used to determine the most likely critical gap for a sample of drivers, based on each driver's largest rejected gap and accepted gap. The method shares some common features with the maximum likelihood estimation technique (Troutbeck 1992) but lends itself well to contemporary analysis tools such as spreadsheets, and it is particularly transparent analytically. The method is considered not to bias the critical gap estimate as a result of very small or very large rejected gaps. However, it requires a sample large enough that largest rejected gap/accepted gap pairs are reasonably represented within a fairly narrow highest-likelihood search band.
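A minimal sketch of this kind of numerical likelihood search, assuming lognormally distributed critical gaps (the data and the distributional choice are illustrative, not taken from the paper):

    import numpy as np
    from scipy.stats import lognorm

    # Illustrative (largest rejected gap, accepted gap) pairs in seconds.
    rejected = np.array([2.1, 3.0, 2.5, 3.8, 1.9, 3.2])
    accepted = np.array([4.5, 5.2, 4.0, 6.1, 3.9, 5.5])

    def log_likelihood(mu, sigma):
        # Each driver's critical gap lies between their largest rejected
        # gap and their accepted gap; critical gaps assumed lognormal.
        dist = lognorm(s=sigma, scale=np.exp(mu))
        p = dist.cdf(accepted) - dist.cdf(rejected)
        return np.sum(np.log(np.clip(p, 1e-12, None)))

    # Numerical grid search over the distribution parameters -- the kind
    # of search that is easy to reproduce in a spreadsheet.
    mus = np.linspace(0.5, 2.0, 60)
    sigmas = np.linspace(0.05, 0.8, 60)
    ll = np.array([[log_likelihood(m, s) for s in sigmas] for m in mus])
    i, j = np.unravel_index(ll.argmax(), ll.shape)
    mean_gap = np.exp(mus[i] + sigmas[j] ** 2 / 2)  # lognormal mean
    print(f"most likely mean critical gap: {mean_gap:.2f} s")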

Relevance:

20.00%

Publisher:

Abstract:

Markov chain Monte Carlo (MCMC) estimation provides a solution to the complex integration problems faced in the Bayesian analysis of statistical problems. The implementation of MCMC algorithms is, however, code-intensive and time-consuming. We have developed a Python package called PyMCMC that aids in the construction of MCMC samplers, helps to substantially reduce the likelihood of coding errors, and minimises repetitive code. PyMCMC contains classes for Gibbs, Metropolis-Hastings, independent Metropolis-Hastings, random walk Metropolis-Hastings, orientational bias Monte Carlo, and slice samplers, as well as specific modules for common models, such as a module for Bayesian regression analysis. PyMCMC is straightforward to optimise, taking advantage of the Python libraries NumPy and SciPy, and is readily extensible with C or Fortran.
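To illustrate the kind of sampler PyMCMC packages up, here is a plain-NumPy random walk Metropolis-Hastings sketch; this is generic code, not PyMCMC's API, and the target posterior is illustrative:

    import numpy as np

    rng = np.random.default_rng(3)

    def log_post(theta):
        # Illustrative target: a standard bivariate normal posterior.
        return -0.5 * theta @ theta

    def rw_metropolis(log_post, theta0, n_iter=5000, step=0.5):
        # Random walk Metropolis-Hastings: propose a Gaussian step and
        # accept with probability min(1, posterior ratio).
        theta = np.asarray(theta0, dtype=float)
        samples = np.empty((n_iter, theta.size))
        lp = log_post(theta)
        for i in range(n_iter):
            prop = theta + step * rng.normal(size=theta.size)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:  # MH acceptance test
                theta, lp = prop, lp_prop
            samples[i] = theta
        return samples

    draws = rw_metropolis(log_post, np.zeros(2))
    print(draws[1000:].mean(axis=0), draws[1000:].std(axis=0))

Writing this boilerplate correctly for every new model is exactly the repetitive, error-prone work that a sampler-construction package aims to remove.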

Relevance:

20.00%

Publisher:

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
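The flipped-label construction can be made concrete; a minimal sketch, using a logistic-loss surrogate for 0-1 empirical risk minimisation over linear classifiers (the data and the function class are illustrative assumptions):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic binary classification data (illustrative).
    rng = np.random.default_rng(4)
    n = 400
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

    # Flip the labels on the second half of the training data.
    y_flipped = y.copy()
    y_flipped[n // 2:] = 1 - y_flipped[n // 2:]

    # ERM on the half-flipped data approximates the class member that
    # maximises the discrepancy between the two halves' errors; logistic
    # loss stands in here for exact 0-1 minimisation.
    f = LogisticRegression().fit(X, y_flipped)
    pred = f.predict(X)
    err1 = np.mean(pred[: n // 2] != y[: n // 2])
    err2 = np.mean(pred[n // 2:] != y[n // 2:])
    print(f"maximal discrepancy penalty (estimate): {err2 - err1:.3f}")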

Relevance:

20.00%

Publisher:

Abstract:

Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a biased estimate of the gradient of the average reward in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter β ∈ [0,1) (which has a natural interpretation in terms of the bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP and show how the correct choice of the parameter β is related to the mixing time of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains; continuous state, observation, and control spaces; multiple agents; higher-order derivatives; and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward. ©2001 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.
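The estimator itself maintains a discounted eligibility trace and a running average of reward-weighted traces; a minimal sketch on a toy two-action problem (the environment and rewards are illustrative stand-ins, only the trace and averaging steps follow the GPOMDP recursion):

    import numpy as np

    rng = np.random.default_rng(5)

    def gpomdp(theta, beta=0.9, T=20000):
        # GPOMDP on a toy two-action problem with a softmax policy;
        # observations are omitted for brevity.
        z = np.zeros_like(theta)      # eligibility trace z_t
        delta = np.zeros_like(theta)  # running gradient estimate Delta_t
        for t in range(T):
            p = np.exp(theta - theta.max())
            p /= p.sum()                          # softmax action probabilities
            a = rng.choice(2, p=p)
            grad_log = -p
            grad_log[a] += 1.0                    # grad of log pi(a | theta)
            z = beta * z + grad_log               # discounted trace, beta in [0, 1)
            r = 1.0 if a == 0 else rng.normal(0.5, 1.0)  # illustrative reward
            delta += (r * z - delta) / (t + 1)    # running average of r * z
        return delta

    print("average-reward gradient estimate:", gpomdp(np.zeros(2)))

Note the storage claim in the abstract is visible here: beyond the parameters themselves, the algorithm keeps only z and delta, each the size of theta.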

Relevance:

20.00%

Publisher:

Abstract:

We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
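Schematically, the resulting guarantee has the familiar oracle-inequality shape; the notation below is generic rather than the paper's exact statement (C is an unspecified constant, \mathcal{F}_1 \subseteq \mathcal{F}_2 \subseteq \cdots the sequence of models ordered by inclusion, R the true risk, and \mathrm{pen}_n(k) the data-based complexity penalty):

    R(\hat{f}) \le \min_{k} \left\{ \inf_{f \in \mathcal{F}_k} R(f) + C \, \mathrm{pen}_n(k) \right\}

The point of the paper is that \mathrm{pen}_n(k) can be a tight bound on the estimation error itself, rather than on the maximal deviation between empirical and true risks.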