923 results for automated full waveform logging system


Relevance: 30.00%

Abstract:

Background: An estimated 285 million people worldwide have diabetes, and its prevalence is predicted to increase to 439 million by 2030. For the year 2010, an estimated 3.96 million excess deaths in the age group 20-79 years are attributable to diabetes around the world. Self-management is recognised as an integral part of diabetes care. This paper describes the protocol of a randomised controlled trial of an automated interactive telephone system aiming to improve the uptake and maintenance of essential diabetes self-management behaviours.

Methods/Design: A total of 340 individuals with type 2 diabetes will be randomised either to the routine care arm or to the intervention arm, in which participants receive the Telephone-Linked Care (TLC) Diabetes program in addition to their routine care. The intervention requires participants to telephone the TLC Diabetes phone system weekly for 6 months. They receive the study handbook and a glucose meter linked to a data uploading device. The TLC system consists of a computer with software designed to provide monitoring, tailored feedback and education on key aspects of diabetes self-management, based on answers voiced or entered during the current or previous conversations. Data collection is conducted at baseline (Time 1), 6-month follow-up (Time 2) and 12-month follow-up (Time 3). The primary outcomes are glycaemic control (HbA1c) and quality of life (Short Form-36 Health Survey version 2). Secondary outcomes include anthropometric measures, blood pressure, blood lipid profile and psychosocial measures, as well as measures of diet, physical activity, blood glucose monitoring, foot care and medication taking. Information on utilisation of healthcare services, including hospital admissions, medication use and costs, is collected. An economic evaluation is also planned.

Discussion: Outcomes will provide evidence concerning the efficacy of a telephone-linked care intervention for self-management of diabetes. Furthermore, the study will provide insight into the potential for more widespread uptake of automated telehealth interventions globally.

Relevance: 30.00%

Abstract:

Generative systems are now being proposed for addressing major ecological problems. The Complex Urban Systems Project (CUSP), founded in 2008 at the Queensland University of Technology, emphasises the ecological significance of the generative global networking of urban environments. It argues that the natural planetary systems for balancing global ecology are no longer able to respond sufficiently rapidly to the ecological damage caused by humankind, and by dense urban conurbations in particular, as evidenced by impacts such as climate change. This research project proposes to provide a high-speed generative nervous system for the planet by connecting major cities globally to interact directly with natural ecosystems and so engender rapid ecological response. This would be achieved through active interaction of the global urban network with the natural ecosystem, following the ecological principle of entropy. The key goal is to achieve ecologically positive cities by activating self-organising cities capable of full integration into natural ecosystems, and to network the cities globally to provide the planet with a nervous system.

Relevance: 30.00%

Abstract:

Future air traffic management concepts often propose automated separation management algorithms that replace human air traffic controllers. This paper proposes a new type of automated separation management algorithm (based on the satisficing approach) that utilizes inter-aircraft communication and a track file manager (a bank of Kalman filters), and that is capable of resolving conflicts during periods of communication failure. The proposed separation management algorithm is tested in a range of flight scenarios involving periods of communication failure, in both simulation and flight test (the flight tests were conducted as part of the Smart Skies project). The intention of the flight tests was to investigate the benefits of using inter-aircraft communication to provide an extra layer of safety protection for air traffic management during periods of failure of the communication network. These benefits were confirmed.
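
The abstract gives no implementation detail for the track file manager, so the following is only a minimal sketch of the underlying idea: one constant-velocity Kalman filter per aircraft that keeps predicting ("coasting") a track when position reports stop arriving over the link. All names and parameter values are illustrative assumptions, not from the paper.

import numpy as np

class TrackFile:
    """Constant-velocity Kalman filter for one aircraft track.
    State vector: [x, y, vx, vy]."""
    def __init__(self, x0, dt=1.0, p0=100.0, q=0.5, r=25.0):
        self.x = np.array(x0, dtype=float)       # state estimate
        self.P = np.eye(4) * p0                  # state covariance
        self.F = np.eye(4)                       # constant-velocity motion model
        self.F[0, 2] = self.F[1, 3] = dt
        self.Q = np.eye(4) * q                   # process noise
        self.H = np.zeros((2, 4))                # we observe position only
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.R = np.eye(2) * r                   # measurement noise

    def predict(self):
        """Coast the track one step; this is all that runs during a comms outage."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Fuse a received position report (only possible while the link is up)."""
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

track = TrackFile([0.0, 0.0, 60.0, 0.0])   # aircraft heading east at 60 m/s
for _ in range(10):                         # link down: no update() calls
    predicted = track.predict()             # separation logic checks these coasted estimates

The growing covariance P during an outage also gives the separation logic a natural way to widen protection buffers as the coasted estimates become less certain.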

Relevance: 30.00%

Abstract:

Automobiles have deeply impacted the way in which we travel, but they have also contributed to many deaths and injuries due to crashes. Researchers have pointed out a number of reasons for these crashes, and inexperience has been identified as a contributing factor. A driver's abilities also play a vital role in judging the road environment and reacting in time to avoid any possible collision; a driver's perceptual and motor skills therefore remain key factors impacting road safety. Our failure to understand what is really important for learners, in terms of competent driving, is one of the many challenges in building better training programs. Driver training is one of the interventions aimed at decreasing the number of crashes that involve young drivers, and there is currently a need to develop a comprehensive driver evaluation system that benefits from the advances in Driver Assistance Systems. A multidisciplinary approach is necessary to explain how driving abilities evolve with on-road driving experience. To our knowledge, driver assistance systems have never been comprehensively used in a driver training context to assess the safety aspect of driving. The aim and novelty of this thesis is to develop and evaluate an Intelligent Driver Training System (IDTS) as an automated assessment tool that helps drivers and their trainers to comprehensively view complex driving manoeuvres and potentially provides effective feedback by post-processing the data recorded during driving. The system is designed to help driver trainers accurately evaluate driver performance and has the potential to provide valuable feedback to the drivers. Since driving depends on fuzzy inputs from the driver (e.g. approximate distance calculation from other vehicles, approximate assumptions about other vehicles' speed), the evaluation system must be based on criteria and rules that handle the uncertain and fuzzy characteristics of the driving tasks; the proposed IDTS therefore utilizes fuzzy set theory for the assessment of driver performance. The research program focuses on integrating the multi-sensory information acquired from the vehicle, driver and environment to assess driving competencies, and then on automated segmentation of the selected manoeuvres from the driving scenario. This leads to a model that determines a "competency" criterion through the driving performance protocol used by driver trainers (i.e. expert knowledge) to assess drivers. This is achieved by comprehensively evaluating and assessing the data stream acquired from multiple in-vehicle sensors using fuzzy rules, and classifying the driving manoeuvres (overtake, lane change, T-crossing and turn) as low or high competency. The fuzzy rules use parameters such as following distance, gaze depth and scan area, distance with respect to lanes, and excessive acceleration or braking during the manoeuvres to assess competency. The rules that identify driving competency were initially designed with the help of experts' knowledge (i.e. driver trainers). In order to fine-tune these rules and the parameters that define them, a driving experiment was conducted to identify the empirical differences between novice and experienced drivers.
The results from the driving experiment indicated that significant differences existed between novice and experienced drivers in terms of their gaze pattern and duration, speed, stop time at the T-crossing, lane keeping, and the time spent in lanes while performing the selected manoeuvres. These differences were used to refine the fuzzy membership functions and rules that govern the assessment of the driving tasks. Next, this research focused on providing an integrated visual assessment interface to both driver trainers and their trainees. By providing a rich set of interactive graphical interfaces displaying information about the driving tasks, the IDTS visualisation module has the potential to give empirical feedback to its users. Lastly, the IDTS assessments were validated by comparing the system's objective assessments for the driving experiment with the driver trainers' subjective assessments of particular manoeuvres. Results show that IDTS not only matched the subjective assessments made by the driver trainers during the driving experiment but also identified some additional manoeuvres performed with low competency that the trainers had missed, owing to their increased mental workload when assessing the multiple variables that constitute driving. The validation of IDTS emphasised the need for an automated assessment tool that can segment the manoeuvres from the driving scenario, investigate the variables within each manoeuvre to determine its competency, and provide integrated visualisation of the manoeuvre to its users (i.e. trainers and trainees). Through analysis and validation it was shown that IDTS is a useful assistance tool for driver trainers to empirically assess, and potentially provide feedback on, the manoeuvres undertaken by drivers.
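
The abstract names the rule inputs (following distance, gaze scan area, and so on) but not the rule base itself, so the sketch below only illustrates how such fuzzy competency rules can be written. The membership functions and thresholds are hypothetical stand-ins for the values tuned from the trainers' expertise and the novice-versus-experienced experiment.

def tri(x, a, b, c):
    """Triangular membership function peaking at b (shoulder functions
    at the extremes are omitted for brevity)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy sets over two of the abstract's parameters.
def short_gap(headway_s):  return tri(headway_s, 0.0, 0.5, 1.5)   # tailgating
def safe_gap(headway_s):   return tri(headway_s, 1.0, 2.0, 4.0)
def narrow_scan(deg):      return tri(deg, 0.0, 10.0, 30.0)       # poor mirror checks
def wide_scan(deg):        return tri(deg, 20.0, 60.0, 120.0)

def lane_change_competency(headway_s, scan_deg):
    """Mamdani-style rule evaluation: AND = min, rules combined by max."""
    low = max(short_gap(headway_s), narrow_scan(scan_deg))
    high = min(safe_gap(headway_s), wide_scan(scan_deg))
    return ("high" if high > low else "low"), low, high

print(lane_change_competency(2.1, 70.0))   # -> ('high', 0.0, 0.833...)

In a full Mamdani system the rule outputs would themselves be fuzzy sets that are aggregated and defuzzified into a score; the min/max shortcut above only conveys the flavour of the rule evaluation.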

Relevance: 30.00%

Abstract:

Item folksonomy, or tag information, is now widely available on the web. However, since tags are arbitrary words chosen by users, they contain a lot of noise, such as tag synonyms, semantic ambiguities and personal tags. Such noise makes it difficult to improve the accuracy of item recommendations. In this paper, we propose to combine item taxonomy and folksonomy to reduce the noise of tags and make personalized item recommendations. Experiments conducted on a dataset collected from Amazon.com demonstrate the effectiveness of the proposed approaches. The results suggest that recommendation accuracy can be further improved if we consider the viewpoints and the vocabularies of both experts and users.
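
The abstract does not state how taxonomy and folksonomy are combined, so the following sketch shows one simple, hypothetical realisation of the idea: fold each tagged item's taxonomy categories (the expert vocabulary) into the user's tag profile (the user vocabulary), so that noisy personal tags are counterbalanced by stable categories, then rank candidate items by cosine similarity. All data and names are invented for illustration.

from collections import Counter
import math

item_taxonomy = {                                  # expert vocabulary
    "item1": ["Books/Sci-Fi"],
    "item2": ["Books/Sci-Fi", "Books/Classics"],
    "item3": ["Electronics/Audio"],
}
user_tags = {                                      # noisy user vocabulary
    "alice": {"item1": ["space", "scifi"], "item2": ["classic"]},
}

def profile(user):
    """Blend taxonomy categories and raw tags into one weighted profile."""
    p = Counter()
    for item, tags in user_tags[user].items():
        p.update(tags)
        p.update(item_taxonomy.get(item, []))
    return p

def cosine(p, q):
    dot = sum(p[k] * q[k] for k in p)
    norm = (math.sqrt(sum(v * v for v in p.values())) *
            math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

def score(user, item):
    """Similarity between the user's blended profile and the item's categories."""
    return cosine(profile(user), Counter(item_taxonomy.get(item, [])))

print(score("alice", "item2"), score("alice", "item3"))   # item2 ranks higher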

Relevance: 30.00%

Abstract:

We present an automated verification method for the security of Diffie–Hellman-based key exchange protocols. The method combines a Hoare-style logic with syntactic checking, and is applied to protocols in a simplified version of the Bellare–Rogaway–Pointcheval model (2000). The security of a protocol in the complete model can then be established automatically via the modular proof technique of Kudla and Paterson (2005).
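
The abstract concerns proof automation rather than the protocol itself, but for context, here is a minimal sketch of the unauthenticated Diffie–Hellman exchange that such protocols build on. The parameters are deliberately toy-sized: real deployments use standardised, vetted groups and add authentication, which is precisely what the verified protocols address.

import secrets

# Toy Diffie-Hellman exchange. p is the Mersenne prime 2^127 - 1, far too
# small for real use; it only keeps the example self-contained.
p = 2**127 - 1
g = 3

a = secrets.randbelow(p - 2) + 2     # Alice's ephemeral secret exponent
b = secrets.randbelow(p - 2) + 2     # Bob's ephemeral secret exponent

A = pow(g, a, p)                     # Alice sends A = g^a mod p
B = pow(g, b, p)                     # Bob sends B = g^b mod p

k_alice = pow(B, a, p)               # (g^b)^a
k_bob   = pow(A, b, p)               # (g^a)^b
assert k_alice == k_bob              # both sides derive the same shared secret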

Relevance: 30.00%

Abstract:

Bana et al. proposed the formal indistinguishability relation (FIR), an equivalence between two terms built from an abstract algebra. Later, Ene et al. extended it to cover active adversaries and random oracles. This notion enables a framework to verify computational indistinguishability while still offering the simplicity and formality of symbolic methods. We are in the process of building an automated tool for checking FIR between two terms. First, we extend the work of Ene et al. further, by covering ordered sorts and simplifying the treatment of random oracles. Second, we investigate the possibility of combining algebras, since this makes the tool scalable and able to cover a wide class of cryptographic schemes. Specifically, we show that the combined algebra is still computationally sound as long as each component algebra is sound. Third, we design proving strategies and implement the tool. Essentially, the strategies allow us to find a sequence of intermediate terms, each formally indistinguishable from the next, between the two given terms; FIR between the two given terms is then guaranteed by the transitivity of FIR. Finally, we show applications of the work, e.g. to key exchanges and encryption schemes. In the future, the tool should be easily extensible to cover many more schemes. This work continues our previous research on the use of compilers to aid automated proofs for key exchange.
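
The abstract only outlines the proving strategies, so the sketch below merely illustrates the chaining idea in miniature: search for a path of indistinguishability-preserving rewrites between two terms, with FIR following by transitivity when a path exists. The terms and rewrite rules here are invented stand-ins, not the tool's actual algebra.

from collections import deque

# Purely illustrative rewrite steps standing in for the tool's proving
# strategies: each maps a term to terms formally indistinguishable from it.
RULES = {
    "enc(k, m) || g^x": ["r1 || g^x"],    # ciphertext indistinguishable from random
    "r1 || g^x":        ["r1 || r2"],     # DDH-style replacement of g^x
    "r1 || r2":         ["r"],            # concatenation of randoms is random
}

def fir_chain(t1, t2):
    """Breadth-first search for a chain of intermediate terms; FIR between
    t1 and t2 then follows by the transitivity of the relation."""
    seen, queue = {t1}, deque([[t1]])
    while queue:
        path = queue.popleft()
        if path[-1] == t2:
            return path
        for nxt in RULES.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                           # no proof found

print(fir_chain("enc(k, m) || g^x", "r"))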

Relevance: 30.00%

Abstract:

Inspection of solder joints has been a critical process in the electronic manufacturing industry to reduce manufacturing cost, improve yield, and ensure product quality and reliability. This paper proposes two inspection modules for an automatic solder joint classification system. The “front-end” inspection system includes illumination normalisation, localisation and segmentation. The “back-end” inspection involves the classification of solder joints using the Log Gabor filter and classifier fusion. Five different levels of solder quality, with respect to the amount of solder paste, have been defined. The Log Gabor filter has been demonstrated to achieve high recognition rates and is resistant to misalignment. The proposed system needs no special illumination, and the images are acquired by an ordinary digital camera. This system could contribute to the development of automated, non-contact, non-destructive and low-cost solder joint quality inspection systems.
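
The paper does not publish its filter parameters, so the following is a minimal sketch of the standard 2D log-Gabor construction (a Gaussian on the log-frequency axis multiplied by an angular Gaussian), with illustrative values for centre frequency, bandwidth and orientation. Responses of this kind would feed the classifiers in the back-end module.

import numpy as np

def log_gabor(size, f0=0.1, sigma_ratio=0.55, theta0=0.0, sigma_theta=0.4):
    """Build a 2D log-Gabor filter in the frequency domain.
    f0: centre frequency (cycles/pixel); sigma_ratio: sigma/f0 bandwidth term."""
    rows, cols = size
    U, V = np.meshgrid(np.fft.fftfreq(cols), np.fft.fftfreq(rows))
    radius = np.hypot(U, V)
    radius[0, 0] = 1.0                                    # avoid log(0) at DC
    radial = np.exp(-(np.log(radius / f0) ** 2) /
                    (2 * np.log(sigma_ratio) ** 2))
    radial[0, 0] = 0.0                                    # log-Gabor has no DC component
    angle = np.arctan2(V, U)
    dtheta = np.arctan2(np.sin(angle - theta0), np.cos(angle - theta0))
    angular = np.exp(-(dtheta ** 2) / (2 * sigma_theta ** 2))
    return radial * angular

def filter_response(image, **kw):
    """Magnitude of the filtered image; such responses feed the classifiers."""
    spectrum = np.fft.fft2(image)
    return np.abs(np.fft.ifft2(spectrum * log_gabor(image.shape, **kw)))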

Relevance: 30.00%

Abstract:

Circuit-breakers (CBs) are subject to electrical stresses from restrikes during capacitor bank operation. These stresses are caused by the overvoltages across the CB, the interrupting currents and the rate of rise of recovery voltage (RRRV), and they also depend on the type of system grounding and the type of dielectric strength curve. The aim of this study is to demonstrate a restrike waveform predictive model for an SF6 CB that considers both grounded and non-grounded systems, and to compare computation accuracy when applying the cold withstand dielectric strength curve versus the hot recovery dielectric strength curve, including point-on-wave (POW) recommendations, in order to assess increasing the CB's remaining life. Stresses on an SF6 CB in a typical 400 kV system were simulated and the results are presented. The simulated restrike waveforms, with features identified using the wavelet transform, can be used to develop a wavelet-based restrike diagnostic algorithm for locating a substation with breaker restrikes. The study found that the hot withstand dielectric strength curve yields lower magnitudes than the cold withstand curve in the restrike simulation results. Computation accuracy improved with the hot withstand dielectric strength curve, and POW-controlled switching can increase the life of an SF6 CB.
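
The abstract does not name the wavelet or the extracted features, so the sketch below assumes a common choice (a Daubechies-4 discrete wavelet decomposition with per-band energies) purely to illustrate how restrike transients separate from the power-frequency component. It uses the PyWavelets library and a synthetic waveform.

import numpy as np
import pywt  # PyWavelets

def restrike_features(waveform, wavelet="db4", level=5):
    """Decompose a voltage waveform and return per-band energies.
    db4 and band energy are illustrative assumptions, not the paper's choices."""
    coeffs = pywt.wavedec(waveform, wavelet, level=level)
    return [float(np.sum(c ** 2)) for c in coeffs]   # [approx, detail_L .. detail_1]

# Toy waveform: 50 Hz recovery voltage with a short high-frequency burst
# standing in for a restrike transient.
fs = 20_000
t = np.arange(0, 0.1, 1 / fs)
v = np.sin(2 * np.pi * 50 * t)
v[1000:1100] += 0.5 * np.sin(2 * np.pi * 5_000 * t[1000:1100])
print(restrike_features(v))   # the restrike energy shows up in the fine-detail bands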

Relevance: 30.00%

Abstract:

Single particle analysis (SPA) coupled with high-resolution electron cryo-microscopy is emerging as a powerful technique for the structure determination of membrane protein complexes and soluble macromolecular assemblies. Current estimates suggest that ∼10^4–10^5 particle projections are required to attain a 3 Å resolution 3D reconstruction (symmetry dependent). Selecting this number of molecular projections differing in size, shape and symmetry is a rate-limiting step for the automation of 3D image reconstruction. Here, we present SwarmPS, a feature-rich, GUI-based software package to manage large-scale, semi-automated particle picking projects. The software provides cross-correlation and edge-detection algorithms. Algorithm-specific parameters are transparently and automatically determined through user interaction with the image, rather than by trial and error. Other features include multiple image handling (∼10^2 images), local and global particle selection options, interactive image freezing, automatic particle centering, and full manual override to correct false positives and negatives. SwarmPS is user friendly, flexible, extensible, fast, and capable of exporting boxed-out projection images, or particle coordinates, compatible with downstream image processing suites.
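
Cross-correlation picking is one of the two algorithms the paper provides; the sketch below shows the bare idea using normalised cross-correlation and peak detection from scikit-image. The thresholds are hard-coded assumptions here, whereas SwarmPS determines its algorithm-specific parameters interactively.

import numpy as np
from skimage.feature import match_template, peak_local_max

def pick_particles(micrograph, template, threshold=0.4, min_distance=20):
    """Return (row, col) coordinates of normalised cross-correlation peaks."""
    ncc = match_template(micrograph, template, pad_input=True)
    return peak_local_max(ncc, min_distance=min_distance, threshold_abs=threshold)

# Toy example: plant two bright discs in noise and pick them back out.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (256, 256))
yy, xx = np.mgrid[-8:9, -8:9]
disc = (yy**2 + xx**2 <= 64).astype(float) * 3.0
for r, c in [(60, 80), (180, 150)]:
    img[r-8:r+9, c-8:c+9] += disc
print(pick_particles(img, disc))   # coordinates near (60, 80) and (180, 150)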

Relevance: 30.00%

Abstract:

Data preprocessing is widely recognized as an important stage in anomaly detection. This paper reviews the data preprocessing techniques used by anomaly-based network intrusion detection systems (NIDS), concentrating on which aspects of the network traffic are analyzed and what feature construction and selection methods have been used. Motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based NIDS. The review finds that many NIDS limit their view of network traffic to the TCP/IP packet headers. Time-based statistics can be derived from these headers to detect network scans, network worm behavior, and denial of service attacks. A number of other NIDS perform deeper inspection of request packets to detect attacks against network services and network applications. More recent approaches analyze full service responses to detect attacks targeting clients. The review covers a wide range of NIDS, highlighting which classes of attack are detectable by each of these approaches. Data preprocessing is found to rely predominantly on expert domain knowledge for identifying the most relevant parts of network traffic and for constructing the initial candidate set of traffic features. On the other hand, automated methods have been widely used for feature extraction to reduce data dimensionality, and for feature selection to find the most relevant subset of features from this candidate set. The review shows a trend toward deeper packet inspection to construct more relevant features through targeted content parsing. These context-sensitive features are required to detect current attacks.
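
As a concrete illustration of the time-based header statistics the review describes, the sketch below derives one classic scan-detection feature (distinct destination ports contacted per source within a time window) from a toy packet-header log. The field names, data and window size are assumptions for illustration, not taken from the paper.

import pandas as pd

# Hypothetical packet-header log.
packets = pd.DataFrame({
    "ts":    pd.to_datetime([1.0, 1.2, 1.3, 1.4, 1.5, 9.0], unit="s"),
    "src":   ["10.0.0.5"] * 5 + ["10.0.0.9"],
    "dport": [22, 23, 80, 443, 8080, 80],
})

# Time-based statistic derived from TCP/IP headers: distinct destination
# ports contacted per source within a 2-second window.
features = (packets.set_index("ts")
                   .groupby("src")
                   .resample("2s")["dport"]
                   .nunique()
                   .rename("distinct_ports"))
print(features)   # 10.0.0.5 touching 5 ports in one window suggests a port scan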

Relevance: 30.00%

Abstract:

Recent surveys of information technology management professionals show that understanding business domains in terms of business productivity and cost reduction potential, knowledge of different vertical industry segments and their information requirements, understanding of business processes, and client-facing skills are more critical for Information Systems personnel than ever before. In an attempt to restructure the information systems curriculum accordingly, our view is that information systems students need to develop an appreciation for organizational work systems in order to understand the operation and significance of information systems within such work systems.

Relevance: 30.00%

Abstract:

Masonry is one of the most ancient construction materials in the world. Compared to other civil engineering practices, masonry construction is highly labour intensive, which can adversely affect quality and productivity. With a view to improving quality, and in light of the limited skilled labour available in recent times, several innovative masonry construction methods, such as dry stack and thin bed masonry, have been developed. This paper focuses on the thin bed masonry system, which is used in many parts of Europe. The system utilises a thin layer of polymer-modified mortar to connect accurately dimensioned and/or interlockable units. This assembly process lends itself to an automated panelised construction system in an industry setting, or to on-site adoption using less skilled labour, without sacrificing quality. This is because, unlike conventional masonry construction, the thin bed technology uses a thinner mortar (or glue) layer, which can be controlled easily through some novel methods described in this paper. Structurally, reducing the thickness of the mortar joint has beneficial effects; for example, it increases the compressive strength of the masonry, and the polymer-added glue mortar enhances lateral load capacity relative to conventional masonry. This paper reviews recent research outcomes on the structural characteristics and construction practices of thin bed masonry. Finally, the suitability of thin bed masonry for developing countries, where masonry remains the most common material for residential building construction, is discussed.

Relevance: 30.00%

Abstract:

At NTCIR-9, we participated in the cross-lingual link discovery (Crosslink) task. In this paper we describe our approaches to discovering Chinese, Japanese and Korean (CJK) cross-lingual links for English documents in Wikipedia. Our experimental results show that a link mining approach that mines the existing link structure for anchor probabilities, and relies on “translation” using cross-lingual document name triangulation, performs very well. The evaluation shows encouraging results for our system.
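
The abstract does not spell out the mining step, but anchor probability is a standard link-mining statistic: how often a given anchor text links to a given article in the existing Wikipedia link structure. The sketch below computes it over a toy set of mined anchor-target pairs; the data is invented. The chosen English target would then be mapped to its CJK counterpart via document name triangulation, e.g. through existing inter-language title links.

from collections import Counter

# Toy mined (anchor text, linked article) pairs; the counts are invented.
anchor_links = [
    ("QUT", "Queensland University of Technology"),
    ("QUT", "Queensland University of Technology"),
    ("QUT", "Qut (goddess)"),
]

anchor_count = Counter(anchor for anchor, _ in anchor_links)
pair_count = Counter(anchor_links)

def anchor_probability(anchor, target):
    """P(target | anchor): how often this anchor text points at this article
    in the existing link structure."""
    return pair_count[(anchor, target)] / anchor_count[anchor]

print(anchor_probability("QUT", "Queensland University of Technology"))  # ~0.67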

Relevance: 30.00%

Abstract:

Recommender systems are one of the recent inventions for dealing with the ever-growing information overload in relation to the selection of goods and services in a global economy. Collaborative Filtering (CF) is one of the most popular techniques in recommender systems. CF recommends items to a target user based on the preferences of a set of similar users, known as the neighbours, generated from a database made up of the preferences of past users. With sufficient background information on item ratings its performance is promising, but research shows that it performs very poorly in a cold-start situation, where there is not enough previous rating data. As an alternative to ratings, trust between users can be used to choose neighbours for making recommendations. Better recommendations can be achieved using an inferred trust network, which mimics real-world "friend of a friend" recommendations. To extend the boundaries of the neighbourhood, an effective trust inference technique is required. This thesis proposes a trust inference technique called the Directed Series Parallel Graph (DSPG), which performs better than other popular trust inference algorithms such as TidalTrust and MoleTrust. Another problem is that reliable explicit trust data is not always available. In real life, people trust "word of mouth" recommendations made by people with similar interests; this is often assumed in recommender systems. Through a survey, we confirm that interest similarity has a positive relationship with trust, and this can be used to generate a trust network for recommendation. In this research, we also propose a new method, called SimTrust, for developing trust networks based on users' interest similarity in the absence of explicit trust data. To identify interest similarity, we use users' personalised tagging information. However, we are interested in which resources a user chooses to tag, rather than in the text of the tags applied. The commonalities of the resources tagged by users can be used to form the neighbours used in the automated recommender system. Our experimental results show that our proposed tag-similarity-based method outperforms the traditional collaborative filtering approach, which usually uses rating data.
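
The abstract says SimTrust compares which resources users tag rather than the tag text, but does not give the similarity measure, so the sketch below assumes a simple Jaccard overlap of tagged-resource sets as a stand-in. The data and the neighbour threshold are invented for illustration.

tagged = {                       # hypothetical user -> set of tagged resources
    "alice": {"url1", "url2", "url3"},
    "bob":   {"url2", "url3", "url4"},
    "carol": {"url9"},
}

def interest_similarity(u, v):
    """Jaccard overlap of the resources two users chose to tag."""
    a, b = tagged[u], tagged[v]
    return len(a & b) / len(a | b) if a | b else 0.0

def trust_neighbours(user, threshold=0.3):
    """Users whose tagging overlap clears the threshold become trust-network
    neighbours in place of explicit trust data."""
    return [v for v in tagged if v != user
            and interest_similarity(user, v) >= threshold]

print(trust_neighbours("alice"))   # -> ['bob']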