953 results for Artificial Intelligence, Constraint Programming, set variables, representation
Abstract:
In chemical analyses performed by laboratories, one faces the problem of determining the concentration of a chemical element in a sample. In practice, the problem is handled with the so-called linear calibration model, which assumes that the errors associated with the independent variables are negligible compared with those in the response variable. In this work, a new linear calibration model is proposed, assuming that the independent variables are subject to heteroscedastic measurement errors. A simulation study is carried out to verify some properties of the estimators derived for the new model, and the usual calibration model is also considered for comparison with the new approach. Three applications are considered to verify the performance of the new approach. Copyright (C) 2010 John Wiley & Sons, Ltd.
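Purely as a reading aid, the sketch below shows a minimal weighted least-squares calibration fit in which each observation of the independent variable carries its own (assumed known) error variance. It is a generic stand-in for a heteroscedastic-error setting and does not reproduce the estimators proposed in the paper; all variable names and numerical values are invented.

    import numpy as np

    # Toy illustration only: straight-line calibration fitted by weighted least
    # squares, with per-observation error variances standing in for the
    # heteroscedastic measurement-error setting (not the paper's estimators).
    rng = np.random.default_rng(0)
    conc = np.linspace(1.0, 10.0, 20)                       # reference concentrations
    sigma = 0.05 * conc                                     # heteroscedastic error level (assumed known)
    x_obs = conc + rng.normal(0.0, sigma)                   # measured concentrations
    y = 2.0 + 1.5 * conc + rng.normal(0.0, 0.2, conc.size)  # instrument response

    W = np.diag(1.0 / sigma**2)                             # weights: inverse error variances
    X = np.column_stack([np.ones_like(x_obs), x_obs])
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)        # weighted least-squares estimate
    print("estimated intercept and slope:", beta)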
Abstract:
In this paper, we present a Bayesian approach to estimation in the skew-normal calibration model, together with the conditional posterior distributions needed to implement the Gibbs sampler. The proposed methodology thus avoids data transformation. Model fit is assessed with a proposed asymmetric deviance information criterion, ADIC, a modification of the ordinary DIC. We also report an application of the model to a real data set on the relationship between the resistance and the elasticity of a sample of concrete beams. Copyright (C) 2008 John Wiley & Sons, Ltd.
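For readers unfamiliar with the Gibbs sampler mentioned above, the skeleton below alternates draws from the full conditionals of an ordinary normal linear regression. It only illustrates the sampling structure; the skew-normal calibration conditionals of the paper are different, and the priors and data here are standard textbook choices assumed for illustration.

    import numpy as np

    # Gibbs-sampler skeleton for a plain normal linear regression with a flat
    # prior on beta and a Jeffreys-type prior on sigma^2; shown only to
    # illustrate alternating draws from conditional posteriors.
    rng = np.random.default_rng(1)
    n = 50
    x = rng.uniform(0.0, 10.0, n)
    y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)
    X = np.column_stack([np.ones(n), x])

    beta, sigma2, draws = np.zeros(2), 1.0, []
    for _ in range(2000):
        V = np.linalg.inv(X.T @ X) * sigma2                  # beta | sigma2, y ~ Normal
        m = np.linalg.solve(X.T @ X, X.T @ y)
        beta = rng.multivariate_normal(m, V)
        resid = y - X @ beta                                 # sigma2 | beta, y ~ Inverse-Gamma
        sigma2 = 1.0 / rng.gamma(n / 2.0, 2.0 / (resid @ resid))
        draws.append(np.append(beta, sigma2))
    print(np.mean(draws, axis=0))                            # posterior means of (beta0, beta1, sigma2)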
Abstract:
A decision support system (DSS) based on a fuzzy logic inference system (FIS) was implemented to assist with dose adjustment of Duodopa infusion in patients with advanced Parkinson's disease, using data from motor state assessments and dosage. A three-tier architecture with an object-oriented approach was used. The DSS has a web-enabled graphical user interface that presents alerts indicating non-optimal dosage and motor states, new recommendations (typical advice with a typical dose), and statistical measurements. One data set was used for design and tuning of the FIS and another data set was used to evaluate performance against the actual given dose. Overall goodness-of-fit was 0.65 for the new patients (design data) and 0.98 for the ongoing patients (evaluation data). User evaluation is now ongoing. The system could work as an assistant to clinical staff for Duodopa treatment in advanced Parkinson's disease.
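As a very rough sketch of what a fuzzy inference rule base of the kind mentioned above can look like, the toy function below fuzzifies two inputs with simple ramp membership functions and combines two Mamdani-style rules. The membership functions, scales, rule base and variable names are invented for illustration and are not taken from the DSS.

    # Toy fuzzy inference sketch; all scales and rules below are hypothetical.
    def ramp(x, a, b):
        # Membership grading linearly from 0 at a to 1 at b, clamped to [0, 1].
        return max(0.0, min(1.0, (x - a) / (b - a)))

    def recommend_dose_change(motor_score, current_dose):
        # Fuzzify the inputs (assumed scales: motor_score runs from -3, very
        # bradykinetic/"off", to +3, very dyskinetic; current_dose is in mL/h).
        too_slow = ramp(motor_score, 0.0, -2.0)
        dyskinetic = ramp(motor_score, 0.0, 2.0)
        dose_high = ramp(current_dose, 3.0, 6.0)
        # Two Mamdani-style rules, defuzzified by a weighted average of rule outputs.
        increase = too_slow                        # rule 1: "off" state -> raise dose
        decrease = min(dyskinetic, dose_high)      # rule 2: dyskinesia at high dose -> lower dose
        if increase + decrease == 0.0:
            return 0.0                             # no rule fires: keep current dose
        return (0.5 * increase - 0.5 * decrease) / (increase + decrease)

    print(recommend_dose_change(motor_score=-1.5, current_dose=4.0))   # -> 0.5 (suggest increase)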
Abstract:
A challenge for the clinical management of advanced Parkinson's disease (PD) patients is the emergence of fluctuations in motor performance, which represents a significant source of disability during the patients' activities of daily living. There is a lack of objective measurement of treatment effects for in-clinic and at-home use that can provide an overview of the treatment response. The objective of this paper was to develop a method for objective quantification of advanced PD motor symptoms related to off episodes and peak-dose dyskinesia, using spiral data gathered by a touch-screen telemetry device. More specifically, the aim was to objectively characterize motor symptoms (bradykinesia and dyskinesia) in order to help automate the visual interpretation of movement anomalies in spirals as rated by movement disorder specialists. Digitized upper-limb movement data of 65 advanced PD patients and 10 healthy (HE) subjects were recorded as they performed spiral drawing tasks on a touch-screen device in their home environments. Several spatiotemporal features were extracted from the time series and used as inputs to machine learning methods. The methods were validated against ratings on animated spirals scored by four movement disorder specialists who visually assessed a set of kinematic features and the motor symptom. The ability of the method to discriminate between PD patients and HE subjects and the test-retest reliability of the computed scores were also evaluated. Computed scores correlated well with mean visual ratings of individual kinematic features. The best performing classifier (Multilayer Perceptron) classified the motor symptom (bradykinesia or dyskinesia) with an accuracy of 84% and an area under the receiver operating characteristic curve of 0.86 relative to the visual classifications of the raters. In addition, the method provided high discriminating power when distinguishing between PD patients and HE subjects and had good test-retest reliability. This study demonstrated the potential of using digital spiral analysis for objective quantification of PD-specific and/or treatment-induced motor symptoms.
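The pipeline described above (spatiotemporal features extracted from digitized spiral drawings and fed to machine learning classifiers) can be sketched roughly as follows. The feature definitions, simulated spirals and labels are placeholders rather than the study's data, and scikit-learn's MLPClassifier merely stands in for the multilayer perceptron mentioned in the abstract.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Rough sketch: spatiotemporal features from (x, y, t) spiral samples, then
    # a multilayer perceptron.  Data and labels below are simulated placeholders.
    def spiral_features(x, y, t):
        dx, dy, dt = np.diff(x), np.diff(y), np.diff(t)
        speed = np.hypot(dx, dy) / dt
        return np.array([speed.mean(), speed.std(),       # drawing-speed statistics
                         np.hypot(dx, dy).sum(),          # total path length
                         t[-1] - t[0]])                   # completion time

    rng = np.random.default_rng(0)
    X, labels = [], []
    for i in range(200):
        t = np.linspace(0.0, 10.0, 300)
        r = t * (1.0 + 0.1 * rng.standard_normal(t.size)) # noisy Archimedean spiral
        x, y = r * np.cos(2 * t), r * np.sin(2 * t)
        X.append(spiral_features(x, y, t))
        labels.append(i % 2)                              # placeholder class labels
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X, labels)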
Abstract:
This paper is a preliminary investigation into the application of the formal-logical theory of normative positions to the characterisation of normative-informational positions, pertaining to rules that are meant to regulate the supply of information. First, we present the proposed framework. Next, we identify the kinds of nuances and distinctions that can be articulated in such a logical framework. Finally, we show how such nuances can arise in specific regulations. Reference is made to Data Protection Law and Contract Law, among others. The proposed approach is articulated around two essential steps. The first involves identifying the set of possible interpretations that can be given to a particular norm. This is done by using formal methods. The second involves picking out one of these interpretations as the most likely one. This second step can be resolved only by using further information (e.g., the context or other parts of the regulation).
Abstract:
We prove the completeness of the regular strategy of derivations for superposition-based calculi. The regular strategy was pioneered by Kanger in [Kan63], who proposed that all equality inferences take place before all other steps in the proof. We show that the strategy is complete in combination with tautology elimination. Our result implies the completeness of non-standard selection functions under which, in non-relational clauses, only equality literals (and all of them) are selected.
Abstract:
First-order temporal logic is a concise and powerful notation, with many potential applications in both Computer Science and Artificial Intelligence. While the full logic is highly complex, recent work on monodic first-order temporal logics has identified important enumerable and even decidable fragments, including the guarded fragment with equality. In this paper, we specialise the monodic resolution method to the guarded monodic fragment of first-order temporal logic with equality over expanding domains. We introduce novel resolution calculi that can be applied to formulae in the normal form associated with the clausal resolution method, and state correctness and completeness results.
Abstract:
First-order temporal logic is a concise and powerful notation, with many potential applications in both Computer Science and Artificial Intelligence. While the full logic is highly complex, recent work on monodic first-order temporal logics has identified important enumerable and even decidable fragments. In this paper, we develop a clausal resolution method for the monodic fragment of first-order temporal logic over expanding domains. We first define a normal form for monodic formulae and then introduce novel resolution calculi that can be applied to formulae in this normal form. We state correctness and completeness results for the method. We illustrate the method on a comprehensive example. The method is based on classical first-order resolution and can, thus, be efficiently implemented.
Abstract:
In this paper, we show how the clausal temporal resolution technique developed for temporal logic provides an effective method for searching for invariants, and so is suitable for mechanising a wide class of temporal problems. We demonstrate that this scheme of searching for invariants can also be applied to a class of multi-predicate induction problems represented by mutually recursive definitions. Completeness of the approach, examples of the application of the scheme, and an overview of the implementation are described.
Abstract:
The Rational Agent model has been a foundational basis for theoretical models in Economics, Management Science, Artificial Intelligence and Game Theory, mainly through the "maximization under constraints" principle, e.g. the Expected Utility models, among them Savage's Subjective Expected Utility (SEU) theory, the most influential over the theoretical models we see nowadays, even though many other developments have been made, including in the field of non-expected utility theories. Whether under the "full rationality" assumption, under Simon's less idealistic "bounded rationality", or in classical anomaly studies such as the "heuristics and biases" analysis by Kahneman and Tversky, Prospect Theory also by Kahneman and Tversky, or Thaler's anomalies, among many others, what we see now is that the Rational Agent model is an example of "management by exceptions": for each newly presented anomaly, a corresponding "problem solving" development is needed. This work is a theoretical essay that tries to establish: 1) the rational model as a "set of exceptions"; 2) the unfeasibility of the current situation, since once an anomaly is identified its specific solution must be developed, and since the number of anomalies increases every year, the rational model becomes very difficult to manage; 3) that behaviors judged "irrational" or deviant by the rational model are truly not; 4) that this is the right moment for a theory to emerge that includes the mental processes used in decision making; and 5) the presentation of an alternative model, based on analyses from cognitive and experimental psychology, such as conscious and unconscious processes, cognition, intuition, analogy-making, abstract roles, and others. Finally, we present conclusions and future research, which call for deeper studies of this work's themes, for mathematical modelling, and for studies on the possible integration of rational analysis and cognitive models.
Abstract:
Although progress in computing has been quite significant and fast, much remains to be done in machine translation. Since the mid-1940s there has been interest, especially among the Americans and the British, in faster and more efficient translation of Russian documents, yet what we see today in machine translation still falls short of what could be called a good translation. In the pursuit of efficient machine translation, scientists have relied mainly on statistical approaches to the problem. This work aims to give the question a new focus, drawing on cognitive science as its main source of inspiration. The conclusion reached here is that statistics should indeed remain a source of support, especially in the definition of patterns. However, the work also sets out to raise semantic overlap as a possible solution that may assist, or even speed up, machine translation. In the organizational field it raises an interesting question: the value of experience as an intelligent means of seeking better results for companies.
Abstract:
The advantages offered by the LED (Light Emitting Diode) electronic component have led to the rapid and widespread adoption of this device as a replacement for incandescent lamps. However, in its combined application, the relationship between the design variables and the desired effect or result is very complex, and it becomes difficult to model by conventional techniques. This work consists of the development of a technique, through comparative analysis of neuro-fuzzy architectures, to make it possible to obtain the luminous intensity values of LED brake lights from design data.
Abstract:
This paper presents an efficient neural network for solving constrained nonlinear optimization problems. More specifically, a two-stage neural network architecture is developed and its internal parameters are computed using the valid-subspace technique. The main advantage of the developed network is that it treats the optimization and constraint terms in different stages, without interference between them. Moreover, the proposed approach does not require specification of penalty or weighting parameters for its initialization.
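The two-stage idea summarized above (handle the constraints in one stage and the objective in another) can be loosely mimicked with the numerical sketch below, which alternates a projection onto the subspace of points satisfying linear equality constraints with a gradient step on the objective. This is a simplified analogue under those assumptions; it does not reproduce the paper's neural network or its valid-subspace parameter computation.

    import numpy as np

    # Loose two-stage analogue: project onto {v : A v = b}, then step on the objective.
    def two_stage_minimize(grad_f, A, b, v0, lr=0.01, steps=5000):
        pinv = A.T @ np.linalg.inv(A @ A.T)     # pieces of the affine projection onto {v : A v = b}
        T = np.eye(A.shape[1]) - pinv @ A
        s = pinv @ b
        v = v0
        for _ in range(steps):
            v = T @ v + s                       # constraint stage: project onto the valid subspace
            v = v - lr * grad_f(v)              # optimization stage: gradient step on the objective
        return T @ v + s                        # final projection keeps the constraints exact

    # Example: minimize ||v - c||^2 subject to sum(v) = 1.
    c = np.array([0.2, 0.8, 0.5])
    A, b = np.ones((1, 3)), np.array([1.0])
    print(two_stage_minimize(lambda v: 2.0 * (v - c), A, b, np.zeros(3)))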
Abstract:
A novel approach to solving robust parameter estimation problems is presented for processes with unknown-but-bounded errors and uncertainties. An artificial neural network is developed to calculate a membership set for the model parameters. Techniques of fuzzy logic control lead the network to its equilibrium points. Simulated examples are presented as an illustration of the proposed technique. The results represent a significant improvement over previously proposed methods. (C) 1999 IMACS/Elsevier B.V. All rights reserved.
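As background for the unknown-but-bounded error setting, the snippet below computes the exact feasible parameter interval for a one-parameter linear model, which is the classical set-membership calculation that a learned membership-set approach generalizes. The model, error bound and data are invented for illustration; the paper's neural-network construction is not reproduced here.

    import numpy as np

    # Scalar model y = theta * x + e with |e| <= eps and positive x: the exact
    # membership set of theta is the intersection of per-sample intervals.
    rng = np.random.default_rng(0)
    theta_true, eps = 2.0, 0.3
    x = rng.uniform(1.0, 5.0, 30)
    y = theta_true * x + rng.uniform(-eps, eps, 30)
    lower = np.max((y - eps) / x)               # tightest lower bound over all samples
    upper = np.min((y + eps) / x)               # tightest upper bound over all samples
    print(f"feasible parameter interval: [{lower:.3f}, {upper:.3f}]")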
Abstract:
This article is an attempt to outline the main characteristics of research in a new field of study, so-called Artificial Intelligence (AI). Sections 1 and 2 give a brief history of AI and its basic assumptions. Section 3 deals with the theory of problem solving developed by A. Newell and H. Simon. Section 4 seeks to show the relevance of AI to Philosophy, in particular to the philosophy of Mind and to the Theory of Knowledge.