993 results for consuming


Relevance: 10.00%

Abstract:

With the advances in computer hardware and software development techniques in the past 25 years, digital computer simulation of train movement and traction systems has been widely adopted as a standard computer-aided engineering tool [1] during the design and development stages of existing and new railway systems. Simulators of different approaches and scales are used extensively to investigate various kinds of system studies. Simulation is now proven to be the cheapest means to carry out performance prediction and system behaviour characterisation. When computers were first used to study railway systems, they were mainly employed to perform repetitive but time-consuming computational tasks, such as matrix manipulations for power network solutions and exhaustive searches for optimal braking trajectories. With only simple high-level programming languages available at the time, full advantage of the computing hardware could not be taken. Hence, structured simulations of the whole railway system were not very common. Most applications focused on isolated parts of the railway system, and it is more appropriate to regard them as primarily mechanised calculations rather than simulations. However, a railway system consists of a number of subsystems, such as train movement, power supply and traction drives, which inevitably contain many complexities and diversities. These subsystems interact frequently with each other while the trains are moving, and they have their own special features in different railway systems. To further complicate the simulation requirements, constraints such as track geometry, speed restrictions and friction have to be considered, not to mention possible non-linearities and uncertainties in the system. In order to provide a comprehensive and accurate account of system behaviour through simulation, a large amount of data has to be organised systematically to ensure easy access and efficient representation, and the interactions and relationships among the subsystems should be defined explicitly. These requirements call for sophisticated and effective simulation models for each component of the system. The software development techniques available nowadays allow the evolution of such simulation models. Not only can the applicability of the simulators be greatly enhanced by advanced software design, but maintainability, modularity for easy understanding and further development, and portability across hardware platforms are also encouraged. The objective of this paper is to review the development of a number of approaches to simulation models, with particular attention given to models for train movement, power supply systems and traction drives. These models have been successfully used to resolve various 'what-if' issues effectively in a wide range of applications, such as speed profiles, energy consumption and run times.
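
To make the notion of a train-movement model concrete, here is a minimal point-mass sketch: tractive effort minus a Davis-type quadratic resistance, integrated with Euler steps. All coefficients are hypothetical, and the simulators reviewed in the paper model far more (power networks, traction drives, track geometry).

```python
# Minimal point-mass train movement model (illustrative only).
MASS = 400e3                 # train mass in kg (hypothetical)
A, B, C = 3e3, 60.0, 12.0    # Davis resistance coefficients (hypothetical)

def tractive_effort(v):
    """Constant-power traction curve capped by adhesion (hypothetical values)."""
    return min(300e3, 4.5e6 / max(v, 1.0))   # newtons

def simulate(distance, dt=0.5):
    """Integrate v' = (F_tract - F_resist)/m until `distance` is covered."""
    t, x, v = 0.0, 0.0, 0.0
    while x < distance:
        f = tractive_effort(v) - (A + B * v + C * v**2)
        v = max(v + f / MASS * dt, 0.0)
        x += v * dt
        t += dt
    return t, v

if __name__ == "__main__":
    run_time, final_speed = simulate(2000.0)   # a 2 km inter-station run
    print(f"run time {run_time:.1f} s, final speed {final_speed * 3.6:.1f} km/h")
```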

Relevance: 10.00%

Abstract:

Recent years have seen an increased uptake of business process management technology in industry. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business process model repositories. For example, in some cases new process models may be derived from existing models, so finding and adapting these models may be more effective and less error-prone than developing them from scratch. Since process model repositories may be large, query evaluation may be time-consuming. Hence, we investigate the use of indexes to speed up this evaluation process. To make our approach more widely applicable, we consider the semantic similarity between labels. Experiments are conducted to demonstrate that our approach is efficient.
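
A toy sketch of the idea: an index from activity labels to model identifiers, queried with a similarity threshold. The paper's actual index structure and semantic similarity measure are not specified here, so plain string similarity from `difflib` stands in for them.

```python
from collections import defaultdict
from difflib import SequenceMatcher

class LabelIndex:
    """Toy label index for process-model retrieval (illustrative only)."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.postings = defaultdict(set)   # label -> set of model ids

    def add(self, model_id, labels):
        for label in labels:
            self.postings[label.lower()].add(model_id)

    def _similar(self, a, b):
        # Stand-in for the semantic similarity between labels.
        return SequenceMatcher(None, a, b).ratio() >= self.threshold

    def query(self, labels):
        """Return ids of models with a match for every query label."""
        result = None
        for q in labels:
            hits = set()
            for label, ids in self.postings.items():
                if self._similar(q.lower(), label):
                    hits |= ids
            result = hits if result is None else result & hits
        return result or set()

index = LabelIndex()
index.add("m1", ["check invoice", "approve payment"])
index.add("m2", ["check order", "ship goods"])
print(index.query(["check invoices"]))   # fuzzy match -> {'m1'}
```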

Relevance: 10.00%

Abstract:

Particulate pollution has been widely recognised as an important risk factor for human health. In addition to increases in respiratory and cardiovascular morbidity associated with exposure to particulate matter (PM), the WHO estimates that urban PM causes 0.8 million premature deaths globally and that 1.5 million people die prematurely from exposure to indoor smoke generated by the combustion of solid fuels. Despite the availability of a huge body of research, the underlying toxicological mechanisms by which particles induce adverse health effects are not yet entirely understood. Oxidative stress caused by the generation of free radicals and related reactive oxygen species (ROS) at the sites of deposition has been proposed as a mechanism for many of the adverse health outcomes associated with exposure to PM. In addition to particle-induced generation of ROS in lung tissue cells, several recent studies have shown that particles may also contain ROS. As such, they present a direct cause of oxidative stress and related adverse health effects. Cellular responses to oxidative stress have been widely investigated using various cell exposure assays. However, for rapid screening of the oxidative potential of PM, less time-consuming and less expensive cell-free assays are needed.

The main aim of this research project was to investigate the application of a novel profluorescent nitroxide probe, synthesised at QUT, as a rapid screening assay for assessing the oxidative potential of PM. Given that this was the first time a profluorescent nitroxide probe was applied to investigating the oxidative stress potential of PM, the proof of concept regarding the detection of PM-derived ROS by such probes needed to be demonstrated and a sampling methodology needed to be developed. Sampling through an impinger containing a profluorescent nitroxide solution was chosen as the means of particle collection, as it allowed particles to react with the probe during sampling, thereby avoiding any chemical changes resulting from delays between the sampling and the analysis of the PM. Among the several profluorescent nitroxide probes available at QUT, bis(phenylethynyl)anthracene-nitroxide (BPEAnit) was found to be the most suitable, mainly due to its relatively long excitation and emission wavelengths (λex = 430 nm; λem = 485 and 513 nm). These wavelengths are long enough to avoid overlap with the background fluorescence of light-absorbing compounds which may be present in PM (e.g. polycyclic aromatic hydrocarbons and their derivatives).

Given that combustion, in general, is one of the major sources of ambient PM, this project aimed to gain insight into the oxidative stress potential of combustion-generated PM, namely cigarette smoke, diesel exhaust and wood smoke PM. During the course of this research project, it was demonstrated that the BPEAnit probe-based assay is sufficiently sensitive and robust to be applied as a rapid screening test for PM-derived ROS detection. Since the same assay was applied to all three aerosol sources (i.e. cigarette smoke, diesel exhaust and wood smoke), the results presented in this thesis allow direct comparison of the oxidative potential measured for all three sources of PM. In summary, there was a substantial difference between the amounts of ROS per unit of PM mass (ROS concentration) for particles emitted by different combustion sources. For example, particles from cigarette smoke were found to have up to 80 times less ROS per unit of mass than particles produced during logwood combustion. For both diesel and wood combustion, it was demonstrated that the type of fuel significantly affects the oxidative potential of the particles emitted. Similarly, the operating conditions of the combustion source were also found to affect the oxidative potential of particulate emissions. Moreover, this project demonstrated a strong link between semivolatile (i.e. organic) species and ROS, clearly highlighting the importance of semivolatile species in particle-induced toxicity.

Relevance: 10.00%

Abstract:

Background and Significance: Venous leg ulcers are a significant cause of chronic ill-health for 1–3% of those aged over 60 years, increasing in incidence with age. The condition is difficult and costly to heal, consuming 1–2.5% of total health budgets in developed countries and up to 50% of community nursing time. Unfortunately, after healing there is a recurrence rate of 60 to 70%, frequently within the first 12 months after healing. Although some risk factors associated with higher recurrence rates have been identified (e.g. prolonged ulcer duration, deep vein thrombosis), in general there is limited evidence on treatments to effectively prevent recurrence. Patients are generally advised to undertake activities which aim to improve the impaired venous return (e.g. compression therapy, leg elevation, exercise). However, only compression therapy has some evidence to support its effectiveness in prevention, and problems with adherence to this strategy are well documented.

Aim: The aim of this research was to identify factors associated with recurrence by determining relationships between recurrence and demographic factors, health, physical activity, psychosocial factors and self-care activities to prevent recurrence.

Methods: Two studies were undertaken: a retrospective study of participants diagnosed with a venous leg ulcer which healed 12 to 36 months prior to the study (n=122); and a prospective longitudinal study of participants recruited as their ulcer healed, with data collected for 12 months following healing (n=80). Data were collected from medical records on demographics, medical history, and ulcer history and treatments; and from self-report questionnaires on physical activity, nutrition, psychosocial measures, ulcer history, compression and other self-care activities. Follow-up data for the prospective study were collected every three months for 12 months after healing. For the retrospective study, a logistic regression model determined the independent influences of variables on recurrence. For the prospective study, median time to recurrence was calculated using the Kaplan-Meier method, and a Cox proportional-hazards regression model was used to adjust for potential confounders and determine the effects of preventive strategies and psychosocial factors on recurrence.

Results: In total, 68% of participants in the retrospective study and 44% of participants in the prospective study suffered a recurrence. After mutual adjustment for all variables in multivariable regression models, leg elevation, compression therapy, self-efficacy and physical activity were found to be consistently related to recurrence in both studies. In the retrospective study, leg elevation, wearing Class 2 or 3 compression hosiery, the level of physical activity, cardiac disease and self-efficacy scores remained significantly associated (p<0.05) with recurrence. The model was significant (p<0.001), with an R2 equivalent of 0.62. Examination of relationships between psychosocial factors and adherence to wearing compression hosiery found that wearing compression hosiery was significantly positively associated with participants' knowledge of the cause of their condition (p=0.002), higher self-efficacy scores (p=0.026) and lower depression scores (p=0.009). Analysis of data from the prospective study found there were 35 recurrences (44%) in the 12 months following healing, and median time to recurrence was 27 weeks. After adjustment for potential confounders, a Cox proportional-hazards regression model found that at least an hour/day of leg elevation, six or more days/week in Class 2 (20–25 mmHg) or 3 (30–40 mmHg) compression hosiery, higher social support scale scores and higher General Self-Efficacy scores remained significantly associated (p<0.05) with a lower risk of recurrence, while male gender and a history of DVT remained significant risk factors for recurrence. Overall the model was significant (p<0.001), with an R2 equivalent of 0.72.

Conclusions: The high rates of recurrence found in these studies highlight the urgent need for further information in this area to support the development of effective strategies for prevention. Overall, results indicate that leg elevation, physical activity, compression hosiery and strategies to improve self-efficacy are likely to prevent recurrence. In addition, optimal management of depression and strategies to improve patient knowledge and self-efficacy may positively influence adherence to compression therapy. This research provides important information for the development of strategies to prevent recurrence of venous leg ulcers, with the potential to improve health and decrease health care costs in this population.
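
A hedged sketch of the kind of Cox proportional-hazards analysis reported above, using the open-source `lifelines` package. The column names and data are hypothetical stand-ins, not the study's.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical follow-up data: weeks to recurrence, whether recurrence
# occurred, and two candidate predictors from the abstract.
df = pd.DataFrame({
    "weeks_to_recurrence": [27, 52, 14, 49, 33, 8, 52, 40, 21, 52],
    "recurred":            [1,  0,  1,  1,  1,  1, 0,  1,  1,  0],
    "compression_days":    [3,  7,  1,  6,  4,  0, 7,  2,  1,  6],
    "self_efficacy":       [28, 35, 22, 34, 30, 20, 36, 25, 24, 34],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="weeks_to_recurrence", event_col="recurred")
cph.print_summary()   # hazard ratios for each predictor
```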

Relevance: 10.00%

Abstract:

There are many reasons why interface design for interactive courseware fails to support quality learning experiences. The most commonly acknowledged causes are the level of interactivity, the availability of the interfaces to end users, and a lack of deep knowledge about the role of interface design among the designers in the development process. As the creators of interactive courseware, developers generally expect the resources they produce to be effective, accurate and robust. However, developers rarely have the opportunity to create good interfaces, given the constraints of time, money and skill, and the challenges they face in interface design development cannot be underestimated. Their perspective on interactive courseware is therefore important to ensure that the material and the features of the courseware can facilitate teaching and learning activity. With this context in mind, this paper highlights the challenges faced by Malaysian developers, drawing on data from ten face-to-face interviews. It discusses the perspectives of the Malaysian developers involved in the development of interface design for interactive courseware for the Smart School Project. In particular, the challenges of creating good interfaces are presented within the constraints of time, curriculum demands, and the competencies of the development team.

Relevance: 10.00%

Abstract:

In a previous chapter (Dean and Kavanagh, Chapter 37), the authors made a case for applying low-intensity (LI) cognitive behaviour therapy (CBT) to people with serious mental illness (SMI). As in other populations, LI CBT interventions typically deal with circumscribed problems or behaviours. LI CBT retains an emphasis on self-management, has restricted content and segment length, and does not necessarily require extensive CBT training. In applying these interventions to SMI, adjustments may be needed to address the cognitive and symptomatic difficulties often faced by these groups. What may take a single session in a less affected population may require several sessions or a thematic application of the strategy within case management. In some cases, the LI CBT may begin to appear more like a high-intensity (HI) intervention, albeit simple and with many LI CBT characteristics still retained. So, if goal setting were introduced in one or two sessions, it could clearly be seen as an LI intervention. When applied to several different situations and across many sessions, it may be indistinguishable from a simple HI treatment, even if it retains the same format and is effectively applied by a practitioner with limited CBT training.

In some ways, LI CBT should be well suited to case management of patients with SMI. Treating staff typically have heavy workloads and find it difficult to apply time-consuming treatments (Singh et al. 2003). LI CBT may allow provision of support to greater numbers of service users, and allow staff to spend more time on those who need intensive and sustained support. However, the introduction of any change in practice has to address significant challenges, and LI CBT is no exception.

Many of the issues that we face in applying LI CBT to routine case management in a mental health service, and their potential solutions, are essentially the same as in a range of other problem domains (Turner and Sanders 2006), and, indeed, are similar to those in any adoption of innovation (Rogers 2003). Over the last 20 years, several commentators have described barriers to implementing evidence-based innovations in mental health services (Corrigan et al. 1992; Deane et al. 2006; Kavanagh et al. 1993). The aim of the current chapter is to present a cognitive behavioural conceptualisation of problems and potential solutions for the dissemination of LI CBT.

Relevance: 10.00%

Abstract:

In this paper, we present the application of a non-linear dimensionality reduction technique for the learning and probabilistic classification of hyperspectral images. Hyperspectral image spectroscopy is an emerging technique for geological investigations from airborne or orbital sensors. It gives much greater information content per pixel than a normal colour image, which should greatly help with the autonomous identification of natural and man-made objects in unfamiliar terrains for robotic vehicles. However, the large information content of such data makes interpretation of hyperspectral images time-consuming and user-intensive. We propose the use of Isomap, a non-linear manifold learning technique, combined with Expectation Maximisation in graphical probabilistic models for learning and classification. Isomap is used to find the underlying manifold of the training data. This low-dimensional representation of the hyperspectral data facilitates the learning of a Gaussian Mixture Model representation, whose joint probability distributions can be calculated offline. The learnt model is then applied to the hyperspectral image at runtime and data classification can be performed.
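
As a hedged illustration of this pipeline, the sketch below uses scikit-learn's Isomap and GaussianMixture (fitted by EM) on synthetic stand-in spectra; the data, dimensions and parameters are hypothetical choices, not the paper's.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

# Synthetic stand-ins for hyperspectral pixels (rows = pixels, cols = bands).
rng = np.random.default_rng(0)
train = np.vstack([rng.normal(loc=m, scale=0.1, size=(100, 50))
                   for m in (0.2, 0.5, 0.8)])      # three spectral classes

# Learn the low-dimensional manifold of the training spectra.
embedding = Isomap(n_neighbors=10, n_components=3)
low_dim = embedding.fit_transform(train)

# Fit a Gaussian Mixture Model on the embedded data via EM.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(low_dim)

# At runtime, new pixels are mapped onto the manifold and classified
# by their posterior probabilities under the mixture.
new_pixels = rng.normal(loc=0.5, scale=0.1, size=(5, 50))
posteriors = gmm.predict_proba(embedding.transform(new_pixels))
print(posteriors.round(3))
```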

Relevance: 10.00%

Abstract:

Digital forensic examiners often need to identify the type of a file or file fragment based only on its content. Content-based file type identification schemes typically use a byte frequency distribution with statistical machine learning to classify file types. Most algorithms analyze the entire file content to obtain the byte frequency distribution, a technique that is inefficient and time-consuming. This paper proposes two techniques for reducing the classification time. The first technique selects a subset of features based on the frequency of occurrence. The second speeds up classification by sampling several blocks from the file. Experimental results demonstrate that up to a fifteen-fold reduction in analysis time can be achieved with limited impact on accuracy.
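
A minimal sketch of the two speed-ups described above: estimate the byte frequency distribution from a few randomly sampled blocks instead of the whole file, and keep only the most frequent byte values as features. The block size and feature count here are hypothetical choices, not the paper's.

```python
import collections
import os
import random

BLOCK_SIZE = 4096    # bytes per sampled block (hypothetical)
NUM_BLOCKS = 8       # blocks sampled per file (hypothetical)
NUM_FEATURES = 64    # byte values kept as features (hypothetical)

def sampled_byte_histogram(path):
    """Byte frequency distribution estimated from randomly sampled blocks."""
    counts = collections.Counter()
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        for _ in range(NUM_BLOCKS):
            f.seek(random.randrange(max(size - BLOCK_SIZE, 1)))
            counts.update(f.read(BLOCK_SIZE))
    total = sum(counts.values())
    return {b: counts[b] / total for b in counts}

def feature_vector(hist):
    """Keep only the most frequently occurring byte values as features."""
    top = sorted(hist, key=hist.get, reverse=True)[:NUM_FEATURES]
    return [(b, hist[b]) for b in top]

# The resulting vectors would feed a statistical classifier
# (e.g. any scikit-learn model) trained on files of known type.
```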

Relevance: 10.00%

Abstract:

When crest-fixed thin trapezoidal steel cladding with closely spaced ribs is subjected to wind uplift/suction forces, local dimpling or pull-through failures occur prematurely at the screw connections because of the large stress concentrations in the cladding under the screw heads. Currently, the design of crest-fixed profiled steel cladding is mainly based on time-consuming and expensive laboratory tests, owing to the lack of adequate design rules. In this research, a shell finite element model of crest-fixed trapezoidal steel cladding with closely spaced ribs was developed and validated using experimental results. The finite element model included a recently developed splitting criterion and other advanced features, including geometric imperfections, buckling effects, contact modelling and the hyperelastic behaviour of neoprene washers, and was used in a detailed parametric study to develop suitable design formulae for local failures. This paper presents the details of the finite element analyses, the large-scale experiments and their results, including the new wind uplift design strength formulae for trapezoidal steel cladding with closely spaced ribs. The new design formulae can be used to achieve both safe and optimised solutions.

Relevance: 10.00%

Abstract:

Markov chain Monte Carlo (MCMC) estimation provides a solution to the complex integration problems that are faced in the Bayesian analysis of statistical problems. The implementation of MCMC algorithms is, however, code-intensive and time-consuming. We have developed a Python package, called PyMCMC, that aids in the construction of MCMC samplers, helps to substantially reduce the likelihood of coding error, and aids in the minimisation of repetitive code. PyMCMC contains classes for Gibbs, Metropolis-Hastings, independent Metropolis-Hastings, random walk Metropolis-Hastings, orientational bias Monte Carlo and slice samplers, as well as specific modules for common models, such as a module for Bayesian regression analysis. PyMCMC is straightforward to optimise, taking advantage of the Python libraries Numpy and Scipy, as well as being readily extensible with C or Fortran.
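
Since PyMCMC's own API is not shown here, the following is a generic random-walk Metropolis-Hastings sampler in plain NumPy, illustrating the kind of building block such a package wraps; it is not PyMCMC code.

```python
import numpy as np

def rw_metropolis(log_post, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: Gaussian proposals, log-scale accept test."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = np.empty((n_samples, x.size))
    for i in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

# Example: sample a standard bivariate normal posterior.
draws = rw_metropolis(lambda x: -0.5 * x @ x, x0=[0.0, 0.0], n_samples=5000)
print(draws.mean(axis=0), draws.std(axis=0))
```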

Relevance: 10.00%

Abstract:

Stem cells have attracted tremendous interest in recent times due to their promise in providing innovative new treatments for a great range of currently debilitating diseases. This is due to their potential ability to regenerate and repair damaged tissue, and hence restore lost body function, in a manner beyond the body's usual healing process. Bone marrow-derived mesenchymal stem cells, or bone marrow stromal cells, are one type of adult stem cell of particular interest. Since they are derived from a living human adult donor, they do not have the ethical issues associated with the use of human embryonic stem cells. They can also be taken from a patient or other donors with relative ease and then grown readily in the laboratory for clinical application.

Despite the attractive properties of bone marrow stromal cells, there is presently no quick and easy way to determine the quality of a sample of such cells. At present, a sample must be grown for weeks and subjected to various time-consuming assays, under the direction of an expert cell biologist, to determine whether it will be useful. Hence there is a great need for innovative new ways to assess the quality of cell cultures for research and potential clinical application. The research presented in this thesis investigates the use of computerised image processing and pattern recognition techniques to provide a quicker and simpler method for the quality assessment of bone marrow stromal cell cultures. In particular, the aim of this work is to find out whether it is possible, through the use of image processing and pattern recognition techniques, to predict the growth potential of a culture of human bone marrow stromal cells at early stages, before it is readily apparent to a human observer.

With the above aim in mind, a computerised system was developed to classify the quality of bone marrow stromal cell cultures based on phase contrast microscopy images. Our system was trained and tested on mixed images of both healthy and unhealthy bone marrow stromal cell samples taken from three different patients. This system, when presented with 44 previously unseen bone marrow stromal cell culture images, outperformed human experts in the ability to correctly classify healthy and unhealthy cultures. The system correctly classified the health status of an image 88% of the time, compared to an average of 72% of the time for human experts. Extensive training and testing of the system on a set of 139 normal-sized images and 567 smaller image tiles showed an average performance of 86% and 85% correct classifications, respectively.

The contributions of this thesis include demonstrating the applicability and potential of computerised image processing and pattern recognition techniques for the quality assessment of bone marrow stromal cell cultures. As part of this system, an image normalisation method has been suggested and a new segmentation algorithm has been developed for locating cell regions of irregularly shaped cells in phase contrast images. Importantly, we have validated the efficacy of both the normalisation and segmentation methods by demonstrating that both quantitatively improve the classification performance of subsequent pattern recognition algorithms in discriminating between cell cultures of differing health status. We have shown that the quality of a cell culture of bone marrow stromal cells may be assessed without the need either to segment individual cells or to use time-lapse imaging. Finally, we have proposed a set of features that, when extracted from the cell regions of segmented input images, can be used to train current state-of-the-art pattern recognition systems to predict the quality of bone marrow stromal cell cultures earlier and more consistently than human experts.
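
A hedged sketch of a segment-then-classify pipeline of the kind described, using scikit-image and scikit-learn. The threshold-based segmentation, the region features and the classifier are generic stand-ins, not the thesis's exact normalisation, segmentation or feature choices.

```python
import numpy as np
from skimage import filters, measure
from sklearn.ensemble import RandomForestClassifier

def cell_region_features(image):
    """Threshold-based segmentation followed by simple per-region features."""
    mask = image > filters.threshold_otsu(image)
    labels = measure.label(mask)
    feats = []
    for region in measure.regionprops(labels, intensity_image=image):
        feats.append([region.area, region.eccentricity,
                      region.mean_intensity, region.solidity])
    # Summarise all regions in the image by their mean feature values.
    return np.mean(feats, axis=0) if feats else np.zeros(4)

# With labelled training images (1 = healthy, 0 = unhealthy):
# X = np.array([cell_region_features(img) for img in train_images])
# clf = RandomForestClassifier().fit(X, y)
# health = clf.predict([cell_region_features(new_image)])
```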

Relevance: 10.00%

Abstract:

The major purpose of Vehicular Ad Hoc Networks (VANETs) is to provide motorists with access to safety-related messages so that they can react and make life-critical decisions, enhancing road safety. Access to safety-related information through VANET communications must therefore be protected, as motorists may make critical decisions based on it in response to emergency situations. If introducing security services into VANETs causes considerable transmission latency or processing delays, this would defeat the purpose of using VANETs to improve road safety. Current research in secure messaging for VANETs appears to focus on employing a certificate-based Public Key Cryptosystem (PKC) to support security. The security overhead of such a scheme, however, creates transmission delays and introduces a time-consuming verification process to VANET communications. This paper proposes an efficient public key management system for VANETs: the Public Key Registry (PKR) system. Not only does this paper demonstrate that the proposed PKR system can maintain security, but it also asserts that it can improve overall performance and scalability at a lower cost compared to the certificate-based PKC scheme. It is believed that the proposed PKR system will add a new dimension to the key management and verification services for VANETs.
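
A purely schematic sketch of the registry idea: vehicles look up a peer's current public key in a trusted registry instead of verifying a certificate chain per message. Storage, transport, registration policy and revocation are all hypothetical details not specified by this abstract.

```python
class PublicKeyRegistry:
    """Toy in-memory public key registry (illustrative only)."""

    def __init__(self):
        self._keys = {}          # vehicle_id -> (public_key, expiry)

    def register(self, vehicle_id, public_key, expiry):
        self._keys[vehicle_id] = (public_key, expiry)

    def lookup(self, vehicle_id, now):
        """Return the registered key if present and unexpired, else None."""
        entry = self._keys.get(vehicle_id)
        if entry and now < entry[1]:
            return entry[0]
        return None

registry = PublicKeyRegistry()
registry.register("veh-42", public_key=b"...", expiry=1_700_000_000)
print(registry.lookup("veh-42", now=1_650_000_000))   # b'...'
```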

Relevance: 10.00%

Abstract:

The uniformization method (also known as randomization) is a numerically stable algorithm for computing transient distributions of a continuous time Markov chain. When the solution is needed after a long run or when the convergence is slow, the uniformization method involves a large number of matrix-vector products. Despite this, the method remains very popular due to its ease of implementation and its reliability in many practical circumstances. Because calculating the matrix-vector product is the most time-consuming part of the method, overall efficiency in solving large-scale problems can be significantly enhanced if the matrix-vector product is made more economical. In this paper, we incorporate a new relaxation strategy into the uniformization method to compute the matrix-vector products only approximately. We analyze the error introduced by these inexact matrix-vector products and discuss strategies for refining the accuracy of the relaxation while reducing the execution cost. Numerical experiments drawn from computer systems and biological systems are given to show that significant computational savings are achieved in practical applications.
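
To make the structure of the computation concrete, here is a minimal NumPy sketch of plain uniformization (without the inexact matrix-vector relaxation the paper introduces): pi(t) = sum_k Poisson(k; L*t) * pi(0) P^k, with P = I + Q/L.

```python
import numpy as np

def uniformization(Q, pi0, t, tol=1e-10):
    """Transient distribution of a CTMC with generator Q at time t."""
    L = max(-Q.diagonal())                 # uniformization rate
    P = np.eye(Q.shape[0]) + Q / L         # DTMC transition matrix
    v = np.asarray(pi0, dtype=float)
    weight = np.exp(-L * t)                # Poisson(0; L*t)
    result = weight * v
    k, accumulated = 0, weight
    while 1.0 - accumulated > tol:
        k += 1
        v = v @ P                          # the dominant cost per step
        weight *= L * t / k                # Poisson(k; L*t), recursively
        result += weight * v
        accumulated += weight
    return result

# Two-state example: rates 1.0 (state 0 -> 1) and 2.0 (state 1 -> 0).
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
print(uniformization(Q, [1.0, 0.0], t=5.0))   # approaches [2/3, 1/3]
```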

Relevance: 10.00%

Abstract:

As the popularity of video as an information medium rises, the amount of video content that we produce and archive keeps growing. This creates a demand for shorter representations of videos in order to assist the task of video retrieval. The traditional solution is to have humans watch these videos and write textual summaries based on what they saw. This summarisation process, however, is time-consuming, and a lot of useful audio-visual information contained in the original video can be lost. Video summarisation aims to turn a full-length video into a more concise version that preserves as much information as possible. The problem of video summarisation is to minimise the trade-off between how concise and how representative a summary is. There are also usability concerns that need to be addressed in a video summarisation scheme. To solve these problems, this research aims to create an automatic video summarisation framework that combines and improves on existing video summarisation techniques, with a focus on practicality and user satisfaction. We also investigate the need for different summarisation strategies in different kinds of videos, for example news, sports, or TV series. Finally, we develop a video summarisation system based on the framework, which is validated by subjective and objective evaluation. The evaluation results show that the proposed framework is effective for creating video skims, producing a high user satisfaction rate and having reasonably low computing requirements. We also demonstrate that the techniques presented in this research can be used for visualising video summaries in the form of web pages showing various useful information, both from the video itself and from external sources.
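
A minimal keyframe-extraction sketch using OpenCV, shown as one common building block of video skimming rather than the framework's actual algorithm; the threshold and input file name are hypothetical.

```python
import cv2

def extract_keyframes(path, threshold=30.0):
    """Keep a frame whenever it differs enough from the last kept frame."""
    cap = cv2.VideoCapture(path)
    keyframes, last = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if last is None or cv2.absdiff(gray, last).mean() > threshold:
            keyframes.append(frame)
            last = gray
    cap.release()
    return keyframes

frames = extract_keyframes("episode.mp4")   # hypothetical input file
print(f"kept {len(frames)} keyframes")
```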

Relevance: 10.00%

Abstract:

Manually constructing domain-specific sentiment lexicons is extremely time-consuming, and may not even be feasible for domains where linguistic expertise is unavailable; research on the automatic construction of domain-specific sentiment lexicons has therefore become a hot topic in recent years. The main contribution of this paper is the illustration of a novel semi-supervised learning method which exploits both term-to-term and document-to-term relations hidden in a corpus for the construction of domain-specific sentiment lexicons. More specifically, the proposed two-pass pseudo-labeling method combines shallow linguistic parsing and corpus-based statistical learning to make domain-specific sentiment extraction scalable with respect to the sheer volume of opinionated documents archived on the Internet these days. Another novelty of the proposed method is that it can utilize readily available user-contributed labels of opinionated documents (e.g., the user ratings of product reviews) to bootstrap the performance of sentiment lexicon construction. Our experiments show that the proposed method can generate high-quality domain-specific sentiment lexicons, as directly assessed by human experts. Moreover, the system-generated domain-specific sentiment lexicons can improve polarity prediction tasks at the document level by 2.18% when compared to other well-known baseline methods. Our research opens the door to the development of practical and scalable methods for domain-specific sentiment analysis.
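
A simplified sketch in the spirit of using user-contributed ratings to pseudo-label terms: score each term by how strongly it skews toward positively or negatively rated documents. The scoring rule is a stand-in, not the paper's two-pass procedure.

```python
from collections import defaultdict

def build_lexicon(reviews, min_count=2):
    """reviews: list of (tokens, rating) with rating on a 1-5 star scale."""
    score = defaultdict(float)
    count = defaultdict(int)
    for tokens, rating in reviews:
        polarity = (rating - 3) / 2.0          # map stars to [-1, 1]
        for tok in set(tokens):
            score[tok] += polarity
            count[tok] += 1
    # Average polarity per term, keeping only terms seen often enough.
    return {tok: score[tok] / count[tok]
            for tok in score if count[tok] >= min_count}

reviews = [
    (["battery", "lasts", "great"], 5),
    (["great", "screen"], 4),
    (["battery", "died", "awful"], 1),
    (["awful", "support"], 2),
]
print(build_lexicon(reviews))   # e.g. great -> +0.75, awful -> -0.75
```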