88 results for Automation
Abstract:
Permanent hearing loss is a leading global health care burden, with 1 in 10 people affected to a mild or greater degree. A shortage of trained health care professionals and associated infrastructure and resource limitations mean that hearing health services are unavailable to the majority of the world population. Utilizing information and communication technology in hearing health care, or tele-audiology, combined with automation offers unique opportunities for improved clinical care, widespread access to services, and more cost-effective and sustainable hearing health care. Tele-audiology demonstrates significant potential in areas such as education and training of hearing health care professionals, paraprofessionals, parents, and adults with hearing disorders; screening for auditory disorders; diagnosis of hearing loss; and intervention services. Global connectivity is growing rapidly, with increasingly widespread distribution into underserved communities where audiological services may be facilitated through telehealth models. Although many questions related to aspects such as quality control, licensure, jurisdictional responsibility, certification, and reimbursement still need to be addressed, no alternative strategy can currently offer the same potential reach for impacting the global burden of hearing loss in the near and foreseeable future.
Abstract:
A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well-known that the usual large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of linear dynamics generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically by using a back-propagation-through-time technique. The expressions for the Kautz basis and for generalized orthonormal bases of functions (GOBF) are addressed; the ones related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
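The linear dynamic part of an OBF Volterra model is easiest to see with the simplest orthonormal basis, the Laguerre basis, realized as one low-pass section followed by repeated all-pass sections sharing a single pole. The sketch below (function name, pole value, and filter count are illustrative, not from the paper) computes the filter-bank outputs that would feed the static polynomial mapping; the impulse responses it produces are orthonormal, which can be checked numerically.

```python
import numpy as np

def laguerre_outputs(u, pole, n_filters):
    """Outputs of a discrete-time Laguerre orthonormal filter bank.

    The bank is a low-pass section L0(z) = c / (1 - a z^-1), c = sqrt(1 - a^2),
    followed by repeated all-pass sections (z^-1 - a) / (1 - a z^-1)."""
    u = np.asarray(u, float)
    N = len(u)
    a = pole
    c = np.sqrt(1.0 - a * a)
    Y = np.zeros((n_filters, N))
    # first section: y[n] = a*y[n-1] + c*u[n]
    for n in range(N):
        y_prev = Y[0, n - 1] if n > 0 else 0.0
        Y[0, n] = a * y_prev + c * u[n]
    # remaining sections are identical all-pass filters in cascade
    for k in range(1, n_filters):
        for n in range(N):
            y_prev = Y[k, n - 1] if n > 0 else 0.0
            x_prev = Y[k - 1, n - 1] if n > 0 else 0.0
            # all-pass: y[n] = a*y[n-1] + x[n-1] - a*x[n]
            Y[k, n] = a * y_prev + x_prev - a * Y[k - 1, n]
    return Y
```

Feeding these outputs into a static polynomial map yields the Wiener-structured OBF model; the pole `a` plays the role of the parameter that the paper's analytic gradients are used to optimize.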
Abstract:
Let M be a finite-dimensional manifold and Σ a driftless control system on M of full rank. We prove that, for a given initial state x ∈ M, the covering space Γ(Σ, x) for monotonic homotopy of trajectories of Σ, recently constructed in [1], coincides with the simply connected universal covering manifold of M, and that the terminal projection ε_x : Γ(Σ, x) → M given by ε_x([α]) = α(1) is a covering mapping.
Abstract:
Two fundamental processes usually arise in the production planning of many industries. The first consists of deciding how many final products of each type have to be produced in each period of a planning horizon, the well-known lot sizing problem. The other consists of cutting raw materials in stock in order to produce smaller parts used in the assembly of final products, the well-studied cutting stock problem. In this paper the decision variables of these two problems are made dependent on each other in order to obtain a globally optimal solution. Setups that are typically present in lot sizing problems are relaxed together with the integer frequencies of cutting patterns in the cutting problem. As a result, a large-scale linear optimization problem arises, which is solved exactly by a column generation technique. It is worth noting that this new combined problem still takes into account the trade-off between storage costs (for final products and parts) and trim losses (in the cutting process). We present several sets of computational tests, analyzed over three different scenarios. These results show that, by combining the problems and using an exact method, it is possible to obtain significant gains when compared with the usual industrial practice, which solves them in sequence. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
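In a column generation scheme for cutting stock, each new column is a cutting pattern priced against the dual values of the master linear program. The sketch below (problem data and function name are made up for illustration) shows only that pricing subproblem: an unbounded knapsack solved by dynamic programming, returning the pattern of maximum total dual value that fits in one stock object. The pattern enters the master problem when its value exceeds the column cost (here 1).

```python
def best_pattern(stock_len, part_lens, duals):
    """Pricing subproblem of column generation for cutting stock.

    Maximize sum(duals[i] * x[i]) subject to sum(part_lens[i] * x[i]) <= stock_len,
    x[i] nonnegative integers (an unbounded knapsack).
    Returns (best value, pattern as a count per part type)."""
    # dp[c] = best dual value achievable with capacity c;
    # choice[c] remembers which part was added last, for reconstruction
    dp = [0.0] * (stock_len + 1)
    choice = [None] * (stock_len + 1)
    for c in range(1, stock_len + 1):
        for i, (l, d) in enumerate(zip(part_lens, duals)):
            if l <= c and dp[c - l] + d > dp[c]:
                dp[c] = dp[c - l] + d
                choice[c] = i
    # walk back through the choices to recover the pattern itself
    pattern = [0] * len(part_lens)
    c = stock_len
    while choice[c] is not None:
        pattern[choice[c]] += 1
        c -= part_lens[choice[c]]
    return dp[stock_len], pattern
```

For example, `best_pattern(10, [3, 4, 5], duals)` returns the most attractive pattern for a stock length of 10; if its value exceeds 1, adding the column improves the master LP, otherwise the current LP solution is optimal.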
Abstract:
In this paper, we study binary differential equations a(x, y) dy² + 2b(x, y) dx dy + c(x, y) dx² = 0, where a, b, and c are real analytic functions. Following the geometric approach of Bruce and Tari in their work on multiplicity of implicit differential equations, we introduce a definition of the index for this class of equations that coincides with the classical Hopf's definition for positive binary differential equations. Our results also apply to implicit differential equations F(x, y, p) = 0, where F is an analytic function, p = dy/dx, F_p = 0, and F_pp ≠ 0 at the singular point. For these equations, we relate the index of the equation at the singular point with the index of the gradient of F and the index of the 1-form ω = dy − p dx defined on the singular surface F = 0.
Abstract:
OWL-S is an application of OWL, the Web Ontology Language, that describes the semantics of Web Services so that their discovery, selection, invocation, and composition can be automated. The research literature reports the use of UML diagrams for the automatic generation of Semantic Web Service descriptions in OWL-S. This paper demonstrates a higher level of automation by generating complete Web applications from OWL-S descriptions that have themselves been generated from UML. Previously, we proposed an approach for processing OWL-S descriptions in order to produce MVC-based skeletons for Web applications. The OWL-S ontology undergoes a series of transformations in order to generate a Model-View-Controller application implemented by a combination of Java Beans, JSP, and Servlet code, respectively. In this paper, we show in detail the documents produced at each processing step. We highlight the connections between OWL-S specifications and executable code in the various Java dialects and show the Web interfaces that result from this process.
Abstract:
Planning to reach a goal is an essential capability for rational agents. In general, a goal specifies a condition to be achieved at the end of plan execution. In this article, we introduce nondeterministic planning for extended reachability goals (i.e., goals that also specify a condition to be preserved during plan execution). We show that, when this kind of goal is considered, the temporal logic CTL turns out to be inadequate to formalize plan synthesis and plan validation algorithms. This is mainly because CTL's semantics cannot discern among the various actions that produce state transitions. To overcome this limitation, we propose a new temporal logic called α-CTL. Then, based on this new logic, we implement a planner capable of synthesizing reliable plans for extended reachability goals, as a side effect of model checking.
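As an illustration only (a generic backward fixpoint, not the authors' α-CTL model-checking algorithm), plan synthesis for an extended reachability goal can be sketched as follows: a state is won when some action sends all of its nondeterministic successors into already-won states, and only states satisfying the preservation condition are ever admitted, so every execution of the resulting policy reaches the goal while preserving the condition.

```python
def strong_plan(actions, goal, preserve):
    """Synthesize a policy that guarantees reaching `goal` while every
    visited state satisfies `preserve`.

    actions: dict mapping (state, action_name) -> set of possible successors
    goal, preserve: sets of states
    Returns (policy dict state -> action_name, set of winning states)."""
    policy = {}
    won = set(goal)                     # states from which success is guaranteed
    changed = True
    while changed:                      # backward fixpoint iteration
        changed = False
        for (s, a), succs in actions.items():
            if s in won or s not in preserve:
                continue
            if succs and succs <= won:  # every nondeterministic outcome is won
                policy[s] = a
                won.add(s)
                changed = True
    return policy, won
```

States outside the returned winning set have no strong plan under these semantics; a weak-planning variant would instead require only *some* successor to be won.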
Abstract:
In chemical analyses performed by laboratories, one faces the problem of determining the concentration of a chemical element in a sample. In practice, this problem is handled with the so-called linear calibration model, which assumes that the errors associated with the independent variables are negligible compared with those of the dependent variable. In this work, a new linear calibration model is proposed, assuming that the independent variables are subject to heteroscedastic measurement errors. A simulation study is carried out in order to verify some properties of the estimators derived for the new model, and the usual calibration model is also considered for comparison with the new approach. Three applications are considered to verify the performance of the new approach. Copyright (C) 2010 John Wiley & Sons, Ltd.
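To fix ideas, the classical linear calibration workflow that the paper takes as its starting point can be sketched in two steps: fit a straight line to the standards, then invert it to estimate the concentration of an unknown. The optional per-point variances are a crude nod to heteroscedasticity; this is a generic weighted least squares baseline, not the measurement-error estimators derived in the paper, and the function names are illustrative.

```python
import numpy as np

def fit_line(x, y, var=None):
    """Weighted least squares fit of y ≈ a + b*x.

    var optionally gives per-point response variances (weights w = 1/var)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = np.ones_like(x) if var is None else 1.0 / np.asarray(var, float)
    # weighted normal equations for intercept a and slope b
    Sw, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    b = (Sw * Sxy - Sx * Sy) / (Sw * Sxx - Sx ** 2)
    a = (Sy - b * Sx) / Sw
    return a, b

def invert(a, b, y_new):
    """Classical calibration step: concentration giving response y_new."""
    return (y_new - a) / b
```

The paper's point is precisely that this baseline ignores errors in x; its model treats those errors explicitly and heteroscedastically.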
Abstract:
The main objective of this paper is to study a logarithmic extension of the bimodal skew-normal model introduced by Elal-Olivero et al. [1]. The model can then be seen as an alternative to the log-normal model typically used for fitting positive data. We study some basic properties, such as the distribution function and moments, and discuss maximum likelihood parameter estimation. We report results of an application to a real data set on nickel concentration in soil samples. Model fitting comparison with several alternative models indicates that the proposed model presents the best fit, and so it can be quite useful in real applications for chemical data on substance concentration. Copyright (C) 2011 John Wiley & Sons, Ltd.
Abstract:
In this paper, we present a Bayesian approach for estimation in the skew-normal calibration model, together with the conditional posterior distributions that are useful for implementing the Gibbs sampler. The proposed methodology thus avoids data transformation. Model fit is assessed with the proposed asymmetric deviance information criterion (ADIC), a modification of the ordinary DIC. We also report an application of the model studied using a real data set on the relationship between the resistance and the elasticity of a sample of concrete beams. Copyright (C) 2008 John Wiley & Sons, Ltd.
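A Gibbs sampler of the kind referred to above alternates draws from each conditional posterior in turn. As a generic, self-contained illustration (a conjugate normal model with improper reference priors, not the skew-normal calibration model of the paper), the two conditionals below are normal for the mean and scaled inverse chi-square for the variance.

```python
import numpy as np

def gibbs_normal(y, n_iter=2000, seed=0):
    """Gibbs sampling for y_i ~ N(mu, sigma2) with a flat prior on mu and
    p(sigma2) proportional to 1/sigma2 (illustrative choices)."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, float)
    n = len(y)
    sigma2 = y.var()                    # starting value
    mus, sig2s = [], []
    for _ in range(n_iter):
        # conditional posterior of mu | sigma2 is N(ybar, sigma2 / n)
        mu = rng.normal(y.mean(), np.sqrt(sigma2 / n))
        # conditional posterior of sigma2 | mu is scaled inverse chi-square:
        # sigma2 = sum((y - mu)^2) / chi2_n
        ss = ((y - mu) ** 2).sum()
        sigma2 = ss / rng.chisquare(n)
        mus.append(mu)
        sig2s.append(sigma2)
    return np.array(mus), np.array(sig2s)
```

In the paper's setting the conditionals involve the skew-normal likelihood and are correspondingly more elaborate, but the alternating structure is the same.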
Abstract:
This work describes the development and optimization of a sequential injection method to automate the determination of paraquat by square-wave voltammetry employing a hanging mercury drop electrode. Automation by sequential injection enhanced the sampling throughput, improving the sensitivity and precision of the measurements as a consequence of the highly reproducible and efficient conditions of mass transport of the analyte toward the electrode surface. For instance, 212 analyses can be made per hour if the sample/standard solution is prepared off-line and the sequential injection system is used just to inject the solution towards the flow cell. In-line sample conditioning reduces the sampling frequency to 44 h⁻¹. Experiments were performed in 0.10 M NaCl, which was the carrier solution, using a frequency of 200 Hz, a pulse height of 25 mV, a potential step of 2 mV, and a flow rate of 100 µL s⁻¹. For a concentration range between 0.010 and 0.25 mg L⁻¹, the current (i_p, µA) read at the potential corresponding to the peak maximum fitted the following linear equation with the paraquat concentration (mg L⁻¹): i_p = (−20.5 ± 0.3)C_paraquat − (0.02 ± 0.03). The limits of detection and quantification were 2.0 and 7.0 µg L⁻¹, respectively. The accuracy of the method was evaluated by recovery studies using spiked water samples that were also analyzed by molecular absorption spectrophotometry after reduction of paraquat with sodium dithionite in an alkaline medium. No evidence of statistically significant differences between the two methods was observed at the 95% confidence level.
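Assuming the calibration reported above, converting a measured peak current back to a paraquat concentration is a one-line inversion of the fitted line; the sketch simply reuses the published slope and intercept (uncertainties ignored), so this is an arithmetic illustration, not new chemistry.

```python
# Reported calibration: i_p (µA) = -20.5 * C (mg/L) - 0.02,
# valid over roughly 0.010-0.25 mg/L (values taken from the abstract).
SLOPE, INTERCEPT = -20.5, -0.02

def concentration(i_p_uA):
    """Invert the calibration line to get paraquat concentration in mg/L."""
    return (i_p_uA - INTERCEPT) / SLOPE
```

Results below the 0.010 mg/L lower limit of the linear range (or below the 7.0 µg/L limit of quantification) should of course not be reported quantitatively.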
Abstract:
This paper describes the automation of a fully electrochemical system for preconcentration, cleanup, separation, and detection, comprising the hyphenation of a thin-layer electrochemical flow cell with CE coupled with contactless conductivity detection (CE-C⁴D). Traces of heavy metal ions were extracted from the pulsed-flowing sample and accumulated on a glassy carbon working electrode by electroreduction for some minutes. Anodic stripping of the accumulated metals was synchronized with hydrodynamic injection into the capillary. The effects of the angle of the slant-polished tip of the CE capillary and its orientation against the working electrode in the electrochemical preconcentration (EPC) flow cell, and of the accumulation time, were studied, aiming at maximum CE-C⁴D signal enhancement. After 6 min of EPC, enhancement factors close to 50 times were obtained for thallium, lead, cadmium, and copper ions, and about 16 for zinc ions. Limits of detection below 25 nmol/L were estimated for all target analytes but zinc. A second separation dimension was added to the CE separation capabilities by staircase scanning of the potentiostatic deposition and/or stripping potentials of the metal ions, as implemented with the EPC-CE-C⁴D flow system. A matrix exchange between the deposition and stripping steps, highly valuable for sample cleanup, can be straightforwardly programmed with the multi-pumping flow management system. The automated simultaneous determination of traces of five accumulable heavy metals together with four non-accumulated alkali and alkaline earth metals in a single run was demonstrated, to highlight the potential of the system.
Abstract:
Direct analysis, with minimal sample pretreatment, of the antidepressant drugs fluoxetine, imipramine, desipramine, amitriptyline, and nortriptyline in biofluids was developed, with a total run time of 8 min. The setup consists of two HPLC pumps, an injection valve, a capillary RAM-ADS-C18 pre-column, and a capillary analytical C18 column connected by means of a six-port valve in backflush mode. Detection was performed with ESI-MS/MS, and only 1 µL of sample was injected. Validation was adequately carried out using FLU-d5 as internal standard. Calibration curves were constructed over a linear range of 1-250 ng mL⁻¹ in plasma, with the limit of quantification (LOQ) determined as 1 ng mL⁻¹ for all the analytes. With the described approach it was possible to reach a quantified mass sensitivity of 0.3 pg for each analyte (equivalent to 1.1-1.3 fmol), translating to lower sample consumption (on the order of 10³ less sample than with conventional methods). (C) 2008 Elsevier B.V. All rights reserved.
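The stated mass-to-mole equivalence (0.3 pg corresponding to roughly 1.1-1.3 fmol) is just a molar-mass conversion. The sketch below uses approximate molar masses for the five analytes (values looked up independently, not taken from the paper, so treat them as assumptions) and reproduces the order of magnitude.

```python
# Approximate molar masses in g/mol (assumed, not from the paper)
MOLAR_MASS = {
    "fluoxetine": 309.3,
    "imipramine": 280.4,
    "desipramine": 266.4,
    "amitriptyline": 277.4,
    "nortriptyline": 263.4,
}

def pg_to_fmol(mass_pg, molar_mass):
    """Convert picograms to femtomoles: (pg * 1e-12 g) / (g/mol) = 1e-12 mol
    per unit of mass_pg/molar_mass, i.e. mass_pg / molar_mass * 1e3 fmol."""
    return mass_pg / molar_mass * 1e3
```

With these molar masses, 0.3 pg works out to roughly 1.0-1.1 fmol per analyte, consistent in order of magnitude with the 1.1-1.3 fmol quoted above.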