156 results for Wearable Computing, Hands-Free systems, voice recognition, framework, Android
at University of Queensland eSpace - Australia
Abstract:
As the first step in developing a protocol for the use of video-phones in community health, we carried out a feasibility study among clients with a range of health needs. Clients were equipped with a commercially available video-phone connected using the client's home telephone line. A hands-free speaker-phone and a miniature video-camera (for close-up views) were connected to the video-phone. Ten clients participated: five required wound care, two palliative care, two long-term therapy monitoring and one was a rural client. All but two were aged 75 years or more. Each client had a video-phone for an average of two to three weeks. During the six months of the study, 43 client calls were made, of which 36 (84%) were converted to video-calls. The speaker-phone was used on 24 occasions (56%) and the close-up camera on 23 occasions (53%). Both clients and nurses rated the equipment as satisfactory or better in questionnaires. None of the nurses felt that the equipment was difficult to use, including unpacking it and setting it up; only one client found it difficult. Taking into account the clients' responses, including their free-text comments, a judgement was made as to whether the video-phone had been useful to their nursing care. In seven cases it was felt to be unhelpful and in three cases it was judged helpful. Although the study sample was small, the results suggest that home telenursing is likely to be useful for rural clients in Australia, not least because of the distances involved.
Abstract:
Fundamental principles of precaution are legal maxims that ask for preventive actions, perhaps as contingent interim measures while relevant information about causality and harm remains unavailable, to minimize the societal impact of potentially severe or irreversible outcomes. Such principles do not explain how to make choices or how to identify what is protective when incomplete and inconsistent scientific evidence of causation characterizes the potential hazards. Rather, they entrust lower jurisdictions, such as agencies or authorities, to make current decisions while recognizing that future information can contradict the scientific basis that supported the initial decision. After reviewing and synthesizing national and international legal aspects of precautionary principles, this paper addresses the key question: How can society manage potentially severe, irreversible or serious environmental outcomes when variability, uncertainty, and limited causal knowledge characterize decision-making? A decision-analytic solution is outlined that focuses on risky decisions and accounts for prior states of information and scientific beliefs that can be updated as subsequent information becomes available. As a practical and established approach to causal reasoning and decision-making under risk, inherent to precautionary decision-making, these (Bayesian) methods help decision-makers and stakeholders because they formally account for probabilistic outcomes and new information, and are consistent and replicable. Rational choice of an action from among various alternatives, defined as a choice that makes preferred consequences more likely, requires accounting for costs, benefits and the change in risks associated with each candidate action. Decisions under any form of the precautionary principle reviewed must account for the contingent nature of scientific information, creating a link to the decision-analytic principle of expected value of information (VOI), to show the relevance of new information relative to the initial (and smaller) set of data on which the decision was based. We exemplify this seemingly simple situation using risk management of BSE. As an integral aspect of causal analysis under risk, the methods developed in this paper permit the addition of non-linear, hormetic dose-response models to the current set of regulatory defaults such as the linear, non-threshold models. This increase in the number of defaults is an important improvement because most of the variants of the precautionary principle require cost-benefit balancing. Specifically, increasing the set of causal defaults accounts for beneficial effects at very low doses. We also show and conclude that quantitative risk assessment dominates qualitative risk assessment, supporting the extension of the set of default causal models.
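The expected value of information idea this abstract leans on can be made concrete with a stylized two-state, two-action example. The loss table and prior below are hypothetical numbers chosen for illustration, not figures from the paper; they show why a positive VOI favors contingent interim measures while evidence is gathered.

```python
# Stylized precautionary decision (numbers assumed, not from the paper):
# the hazard is either real (H) or not (not_H); the regulator can
# restrict now or allow the activity.
loss = {
    "restrict": {"H": 10, "not_H": 10},   # restriction cost is paid either way
    "allow":    {"H": 100, "not_H": 0},   # severe loss only if the hazard is real
}
p_h = 0.3  # prior probability that the hazard is real (assumed)

def expected_loss(action: str, p: float) -> float:
    return p * loss[action]["H"] + (1 - p) * loss[action]["not_H"]

# Best action under current information.
prior_best = min(loss, key=lambda a: expected_loss(a, p_h))
prior_loss = expected_loss(prior_best, p_h)

# Expected value of perfect information: expected loss if we could act
# after learning the true state, versus committing now.
loss_with_info = (p_h * min(l["H"] for l in loss.values())
                  + (1 - p_h) * min(l["not_H"] for l in loss.values()))
evpi = prior_loss - loss_with_info
print(prior_best, prior_loss, evpi)  # restrict 10.0 7.0
```

Here the positive EVPI of 7 loss units is the quantitative reading of the paper's point: better causal information has measurable value, so an interim restriction paired with data collection can rationally dominate an immediate, final choice.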
Abstract:
There is growing interest in the use of context-awareness as a technique for developing pervasive computing applications that are flexible, adaptable, and capable of acting autonomously on behalf of users. However, context-awareness introduces various software engineering challenges, as well as privacy and usability concerns. In this paper, we present a conceptual framework and software infrastructure that together address known software engineering challenges, and enable further practical exploration of social and usability issues by facilitating the prototyping and fine-tuning of context-aware applications.
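The paper's own framework is not reproduced here, but the pattern it targets can be sketched generically: applications declare the situations they care about, and an infrastructure component evaluates them as context changes. All class and method names below are illustrative assumptions, not the paper's API.

```python
# Generic context-awareness sketch (illustrative names, not the paper's API).
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Context:
    location: str
    activity: str

class ContextManager:
    """Gathers context updates and notifies applications whose
    declared situation of interest currently holds."""
    def __init__(self) -> None:
        self._subscriptions: list[tuple[Callable[[Context], bool],
                                        Callable[[Context], None]]] = []

    def subscribe(self, situation: Callable[[Context], bool],
                  action: Callable[[Context], None]) -> None:
        self._subscriptions.append((situation, action))

    def update(self, ctx: Context) -> None:
        for situation, action in self._subscriptions:
            if situation(ctx):   # evaluate each declared situation
                action(ctx)      # let the application adapt autonomously

mgr = ContextManager()
mgr.subscribe(lambda c: c.location == "meeting_room",
              lambda c: print("silencing phone:", c))
mgr.update(Context(location="meeting_room", activity="presenting"))
```

Separating situation evaluation from application logic is what makes such applications easy to prototype and fine-tune, which is the engineering concern the abstract raises.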
Abstract:
Grid computing is an emerging technology that provides high-performance computing capability and a collaboration mechanism for solving complex, collaborative problems with existing resources. In this paper, a grid-computing-based framework is proposed for probabilistic power system reliability and security analysis. The suggested name of this computing grid is the Reliability and Security Assessment Grid (RSA-Grid). The architecture of this grid is then presented. A prototype system has been built for further development of grid-based services for power system reliability and security assessment based on probabilistic techniques, which require high-performance computing and large amounts of memory. Preliminary results based on the prototype of this grid show that RSA-Grid can provide comprehensive assessment results for real power systems efficiently and economically.
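The kind of workload such a grid farms out can be illustrated with a toy probabilistic reliability estimate; the generating-unit data, load distribution, and loss-of-load criterion below are assumptions for illustration, and a process pool stands in for grid worker nodes.

```python
# Illustrative sketch (not the RSA-Grid implementation): estimating a
# loss-of-load probability (LOLP) by Monte Carlo, with batches farmed
# out to parallel workers the way a computing grid would.
import random
from concurrent.futures import ProcessPoolExecutor

def lolp_batch(n_trials: int, seed: int) -> int:
    """Count trials in which available generation fails to cover load."""
    rng = random.Random(seed)
    units = [(100.0, 0.05)] * 8          # (capacity MW, forced-outage rate), assumed
    failures = 0
    for _ in range(n_trials):
        capacity = sum(c for c, q in units if rng.random() > q)
        load = rng.gauss(600.0, 50.0)    # assumed load distribution
        failures += capacity < load
    return failures

if __name__ == "__main__":
    batches, n = 8, 50_000
    with ProcessPoolExecutor() as pool:
        counts = pool.map(lolp_batch, [n] * batches, range(batches))
    print("Estimated LOLP:", sum(counts) / (batches * n))
```

Because the batches are independent, the estimate scales almost linearly with the number of workers, which is why probabilistic assessment is a natural fit for grid resources.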
Abstract:
We explore the feasibility of the computationally oriented institutional agency framework proposed by Governatori and Rotolo by testing it against an industrial-strength scenario. In particular, we show how to encode in defeasible logic the dispute resolution policy described in Article 67 of FIDIC.
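To give a flavor of what such an encoding involves, here is a minimal propositional defeasible-logic engine with rules loosely in the spirit of a dispute-resolution clause. The rules, facts, and superiority relation are invented for illustration; they are not the paper's actual Article 67 encoding, and the engine omits strict rules, defeaters, and cyclic theories.

```python
# Minimal propositional defeasible-logic sketch (illustrative only).
# Rules are (id, premises, conclusion); 'sup' records which rule wins
# when two applicable rules support complementary literals.
facts = {"dispute_notified", "engineer_decision_issued"}

defeasible_rules = [
    ("r1", {"dispute_notified"}, "refer_to_engineer"),
    ("r2", {"engineer_decision_issued"}, "decision_binding"),
    ("r3", {"notice_of_dissatisfaction"}, "-decision_binding"),  # attacks r2
]
sup = {("r3", "r2")}  # r3 overrides r2 when both fire

def neg(lit: str) -> str:
    return lit[1:] if lit.startswith("-") else "-" + lit

def defeasibly_provable(lit: str) -> bool:
    supporting = [r for r in defeasible_rules if r[2] == lit and r[1] <= facts]
    attacking = [r for r in defeasible_rules if r[2] == neg(lit) and r[1] <= facts]
    # lit is provable if some applicable rule supports it and that rule
    # is superior to every applicable attacking rule.
    return any(all((s[0], a[0]) in sup for a in attacking) for s in supporting)

print(defeasibly_provable("decision_binding"))   # True: r3 is not applicable
facts.add("notice_of_dissatisfaction")
print(defeasibly_provable("decision_binding"))   # False: r3 now defeats r2
print(defeasibly_provable("-decision_binding"))  # True: r3 is superior
```

The attraction of defeasible logic for contract clauses is visible even at this scale: exceptions override defaults through the superiority relation rather than by rewriting the rules.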
Abstract:
Expokit provides a set of routines aimed at computing matrix exponentials. More precisely, it computes either a small matrix exponential in full, the action of a large sparse matrix exponential on an operand vector, or the solution of a system of linear ODEs with constant inhomogeneity. The backbone of the sparse routines consists of matrix-free Krylov subspace projection methods (Arnoldi and Lanczos processes), which is why the toolkit is capable of coping with sparse matrices of large dimension. The software handles real and complex matrices and provides specific routines for symmetric and Hermitian matrices. The computation of matrix exponentials is a numerical issue of critical importance in the area of Markov chains, where, furthermore, the computed solution is subject to probabilistic constraints. In addition to addressing general matrix exponentials, particular attention is given to the computation of transient states of Markov chains.
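The Krylov idea behind the sparse routines fits in a few lines: project the matrix onto a small Krylov subspace with the Arnoldi process, exponentiate the small projected matrix in full, and lift the result back. The sketch below shows that core approximation only; unlike the real toolkit it has no error control or time-step adaptation.

```python
# Sketch of the Krylov projection behind Expokit's sparse routines:
# exp(t*A) @ v  ~=  beta * V_m @ expm(t*H_m) @ e_1.
import numpy as np
from scipy.linalg import expm

def expm_krylov(A, v, t=1.0, m=30):
    """Approximate exp(t*A) @ v via an m-step Arnoldi projection."""
    n = len(v)
    m = min(m, n)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = w @ V[:, i]
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # happy breakdown: exact answer
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m); e1[0] = 1.0
    return beta * V[:, :m] @ (expm(t * H[:m, :m]) @ e1)

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)) / 20
v = rng.standard_normal(200)
print(np.allclose(expm_krylov(A, v, m=60), expm(A) @ v, atol=1e-6))  # True
```

Only matrix-vector products with A are needed, which is exactly why the method copes with large sparse matrices where forming exp(A) itself would be hopeless.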
Abstract:
Continuous-valued recurrent neural networks can learn mechanisms for processing context-free languages. The dynamics of such networks are usually based on damped oscillation around fixed points in state space and require that the dynamical components are arranged in certain ways. It is shown that qualitatively similar dynamics with similar constraints hold for a^nb^nc^n, a context-sensitive language. The additional difficulty with a^nb^nc^n, compared with the context-free language a^nb^n, consists of 'counting up' and 'counting down' letters simultaneously. The network solution is to oscillate in two principal dimensions, one for counting up and one for counting down. This study focuses on the dynamics employed by the sequential cascaded network, in contrast to the simple recurrent network, and the use of backpropagation through time. Found solutions generalize well beyond the training data; however, learning is not reliable. The contribution of this study lies in demonstrating how the dynamics in recurrent neural networks that process context-free languages can also be employed in processing some context-sensitive languages (traditionally thought of as requiring additional computation resources). This continuity of mechanism between language classes contributes to our understanding of neural networks in modelling language learning and processing.
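The two-dimensional counting solution the abstract describes can be written down by hand rather than learned. The sketch below is a hand-built idealization of that dynamics, not the trained sequential cascaded network itself: one state dimension contracts on a's and expands on b's, a second contracts on b's and expands on c's, and a string is accepted exactly when both return to their starting values.

```python
# Hand-built sketch of the two-counter dynamics for a^n b^n c^n
# (an idealization, not the paper's trained network).
def accepts(s: str, k: float = 0.5) -> bool:
    order = {"a": 0, "b": 1, "c": 2}
    x = y = 1.0
    prev = 0
    for ch in s:
        if ch not in order or order[ch] < prev:
            return False                # enforce the a* b* c* shape
        prev = order[ch]
        if ch == "a":
            x *= k                      # 'count up' a's in dimension 1
        elif ch == "b":
            x /= k                      # 'count down' b's in dimension 1 ...
            y *= k                      # ... while counting them up in dim 2
        else:
            y /= k                      # 'count down' c's in dimension 2
    return abs(x - 1.0) < 1e-9 and abs(y - 1.0) < 1e-9

print(accepts("aaabbbccc"), accepts("aabbbccc"))  # True False
```

Counting b's in both dimensions at once is the extra work that separates a^nb^nc^n from a^nb^n, and it is exactly the simultaneous up/down counting the networks discover.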
Abstract:
In the past century, the debate over whether or not density-dependent factors regulate populations has generally focused on changes in mean population density, ignoring the spatial variance around the mean as unimportant noise. In an attempt to provide a different framework for understanding population dynamics based on individual fitness, this paper discusses the crucial role of spatial variability itself on the stability of insect populations. The advantages of this method are the following: (1) it is founded on evolutionary principles rather than post hoc assumptions; (2) it erects hypotheses that can be tested; and (3) it links disparate ecological schools, including spatial dynamics, behavioral ecology, preference-performance, and plant apparency into an overall framework. At the core of this framework, habitat complexity governs insect spatial variance, which in turn determines population stability. First, the minimum risk distribution (MRD) is defined as the spatial distribution of individuals that results in the minimum number of premature deaths in a population given the distribution of mortality risk in the habitat (and, therefore, leading to maximized population growth). The greater the divergence of actual spatial patterns of individuals from the MRD, the greater the reduction of population growth and size from high, unstable levels. Then, based on extensive data from 29 populations of the processionary caterpillar, Ochrogaster lunifer, four steps are used to test the effect of habitat interference on population growth rates. (1) The costs (increasing the risk of scramble competition) and benefits (decreasing the risk of inverse density-dependent predation) of egg and larval aggregation are quantified. (2) These costs and benefits, along with the distribution of resources, are used to construct the MRD for each habitat. (3) The MRD is used as a benchmark against which the actual spatial pattern of individuals is compared. The degree of divergence of the actual spatial pattern from the MRD is quantified for each of the 29 habitats. (4) Finally, indices of habitat complexity are used to provide highly accurate predictions of spatial divergence from the MRD, showing that habitat interference reduces population growth rates from high, unstable levels. The reason for the divergence appears to be that high levels of background vegetation (vegetation other than host plants) interfere with female host-searching behavior. This leads to a spatial distribution of egg batches with high mortality risk, and therefore lower population growth. Knowledge of the MRD in other species should be a highly effective means of predicting trends in population dynamics. Species with high divergence between their actual spatial distribution and their MRD may display relatively stable dynamics at low population levels. In contrast, species with low divergence should experience high levels of intragenerational population growth leading to frequent habitat-wide outbreaks and unstable dynamics in the long term. Six hypotheses, erected under the framework of spatial interference, are discussed, and future tests are suggested.
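Read as an optimization problem, the MRD can be stated schematically as below; the notation is ours, introduced only to make the definition concrete, and is not the paper's.

```latex
% Schematic statement of the minimum risk distribution (notation assumed):
% allocate N individuals over sites x so as to minimize premature deaths,
\mathrm{MRD} \;=\; \underset{n(\cdot)}{\arg\min} \;\sum_{x} n(x)\, m\big(x, n(x)\big)
\quad \text{subject to} \quad \sum_{x} n(x) = N,
% where m(x, n) is the per-capita mortality risk at site x under local
% density n, increasing in n through scramble competition and decreasing
% in n through inverse density-dependent predation.
```

The divergence measure used in steps (3) and (4) then compares the observed allocation n(x) against this benchmark.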
Abstract:
Fault diagnosis has become an important component in intelligent systems, such as intelligent control systems and intelligent eLearning systems. Reiter's diagnosis theory, described by first-order sentences, has been attracting much attention in this field. However, descriptions and observations of most real-world situations are related to fuzziness because of the incompleteness and the uncertainty of knowledge, e.g., the fault diagnosis of student behaviors in eLearning processes. In this paper, an extension of Reiter's consistency-based diagnosis methodology, Fuzzy Diagnosis, is proposed, which is able to deal with incomplete or fuzzy knowledge. A number of important properties of the fuzzy diagnosis schemes have also been established. Computing fuzzy diagnoses is mapped to solving a system of inequalities. Some special cases, abstracted from real-world situations, are discussed. In particular, the fuzzy diagnosis problem in which fuzzy observations are represented by clause-style fuzzy theories is presented, and its solving method is given. A student fault diagnostic problem abstracted from a simplified real-world eLearning case is described to demonstrate the application of our diagnostic framework.
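The mapping from diagnosis to inequalities can be illustrated with a toy instance; the two components, the constraints, and the minimality criterion below are hypothetical, chosen only to show the shape of the problem, and the paper's formal scheme is considerably richer.

```python
# Toy sketch of fuzzy diagnosis as inequality solving (illustrative only).
# Each component i has a fault degree d_i in [0, 1]; fuzzy observations
# constrain the degrees, and a diagnosis is any feasible assignment.
from itertools import product

# Hypothetical constraints from two observations over components c1, c2:
#   max(d1, d2) >= 0.7   (something is quite faulty)
#   d1 <= 0.4            (c1 looks mostly healthy)
constraints = [
    lambda d1, d2: max(d1, d2) >= 0.7,
    lambda d1, d2: d1 <= 0.4,
]

grid = [i / 10 for i in range(11)]
diagnoses = [(d1, d2) for d1, d2 in product(grid, grid)
             if all(c(d1, d2) for c in constraints)]

# Prefer the least total fault degree that explains the observations
# (a fuzzy analogue of Reiter's minimal diagnoses).
best = min(diagnoses, key=lambda d: sum(d))
print(best)  # (0.0, 0.7): blame c2 at degree 0.7
```

Enumerating a grid is of course only for illustration; the point is that once observations become inequalities over fault degrees, any feasible-point method applies.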
Abstract:
Mixed ammonia-water vapor postsynthesis treatment provides a simple and convenient method for stabilizing mesostructured silica films. X-ray diffraction, transmission electron microscopy, nitrogen adsorption/desorption, and solid-state NMR (C-13, Si-29) were applied to study the effects of mixed ammonia-water vapor at 90 °C on the mesostructure of the films. An increased cross-linking of the silica network was observed. Subsequent calcination of the silica films was seen to cause a bimodal pore-size distribution, with an accompanying increase in the volume and surface area ratios of the primary (d = 3 nm) to secondary (d = 5-30 nm) pores. Additionally, mixed ammonia-water treatment was observed to cause a narrowing of the primary pore-size distribution. These findings have implications for thin-film-based applications and devices, such as sensors, membranes, or surfaces for heterogeneous catalysis.
Abstract:
Adsorption of pure nitrogen, argon, acetone, chloroform and an acetone-chloroform mixture on graphitized thermal carbon black is considered at sub-critical conditions by means of the molecular layer structure theory (MLST). In the present version of the MLST, an adsorbed fluid is considered as a sequence of 2D molecular layers, whose Helmholtz free energies are obtained directly from the analysis of experimental adsorption isotherms of the pure components. The interaction of the nearest layers is accounted for in the framework of a mean-field approximation. This approach allows quantitative correlation of experimental nitrogen and argon adsorption isotherms both in the monolayer region and in the range of multi-layer coverage up to 10 molecular layers. In the case of acetone and chloroform the approach also leads to excellent quantitative correlation of adsorption isotherms, while molecular approaches such as the non-local density functional theory (NLDFT) fail to describe those isotherms. We extend our new method to calculate the Helmholtz free energy of an adsorbed mixture using a simple mixing rule, which allows us to predict mixture adsorption isotherms from pure-component adsorption isotherms. The approach, which accounts for the difference in composition in different molecular layers, is tested against experimental data for acetone-chloroform mixture (a non-ideal mixture) adsorption on graphitized thermal carbon black at 50 °C.
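Schematically, the layered mean-field bookkeeping the abstract describes can be written as below; the symbols and the specific nearest-layer coupling form are our assumptions for illustration, not the paper's equations.

```latex
% Schematic layered free-energy functional (symbols assumed for illustration):
% the adsorbate is a stack of 2D layers j = 1..N with densities \rho_j,
\Omega(\{\rho_j\}) = \sum_{j=1}^{N} \Big[\, f_j(\rho_j)
  - \tfrac{\varepsilon}{2}\,\rho_j\,(\rho_{j-1} + \rho_{j+1})
  + \rho_j\, u_{\mathrm{ext}}(z_j) - \mu\,\rho_j \,\Big],
% with f_j the per-layer Helmholtz free energy fitted from pure-component
% isotherms, \varepsilon the nearest-layer mean-field coupling, u_ext the
% solid-fluid potential, and equilibrium from \partial\Omega/\partial\rho_j = 0.
```

Fitting f_j directly to measured isotherms, rather than deriving it from an intermolecular potential, is what lets the method succeed where NLDFT fails for acetone and chloroform.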
Abstract:
This paper is an expanded and more detailed version of the work [1] in which the Operator Quantum Error Correction formalism was introduced. This is a new scheme for the error correction of quantum operations that incorporates the known techniques (the standard error correction model, the method of decoherence-free subspaces, and the noiseless subsystem method) as special cases, and relies on a generalized mathematical framework for noiseless subsystems that applies to arbitrary quantum operations. We also discuss a number of examples and introduce the notion of unitarily noiseless subsystems.
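For orientation, the noiseless-subsystem setting the framework generalizes can be stated in the standard form below; the notation is ours, and this is a sketch of the familiar condition rather than the paper's generalized formalism.

```latex
% Standard noiseless-subsystem setting (our notation): fix a decomposition
\mathcal{H} = (\mathcal{H}_A \otimes \mathcal{H}_B) \oplus \mathcal{K},
% and let P be the projector onto \mathcal{H}_A \otimes \mathcal{H}_B.
% The B factor is noiseless for \mathcal{E}(\rho) = \sum_k E_k \rho E_k^\dagger
% when every Kraus operator factorizes on that sector,
E_k\, P = (A_k \otimes I_B)\, P \quad \text{for some operators } A_k \text{ on } \mathcal{H}_A,
% so that \mathcal{E}(\rho_A \otimes \rho_B) = \tilde{\rho}_A \otimes \rho_B
% and the information stored in the B subsystem is untouched.
```

Standard error-correcting codes correspond to a trivial A factor, and decoherence-free subspaces to A_k proportional to the identity, which is the sense in which the known techniques arise as special cases.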
Abstract:
As process management projects have increased in size due to globalised and company-wide initiatives, a corresponding growth in the size of process modeling projects can be observed. Despite advances in languages, tools and methodologies, several aspects of these projects have been largely ignored by the academic community. This paper makes a first contribution to a potential research agenda in this field by defining the characteristics of large-scale process modeling projects and proposing a framework of related issues. These issues are derived from a semi-structured interview and six focus groups conducted in Australia, Germany and the USA with enterprise and modeling software vendors and customers. The focus groups confirm the existence of unresolved problems in business process modeling projects. The outcomes provide a research agenda which directs researchers into further studies in global process management, process model decomposition and the overall governance of process modeling projects. It is expected that this research agenda will provide guidance to researchers and practitioners by focusing on areas of high theoretical and practical relevance.