891 results for rule-based logic
Abstract:
New evidence shows that older adults need more dietary protein than younger adults do to support good health, promote recovery from illness, and maintain functionality. Older people need to make up for age-related changes in protein metabolism, such as high splanchnic extraction and declining anabolic responses to ingested protein. They also need more protein to offset the inflammatory and catabolic conditions associated with the chronic and acute diseases that occur commonly with aging. With the goal of developing updated, evidence-based recommendations for optimal protein intake by older people, the European Union Geriatric Medicine Society (EUGMS), in cooperation with other scientific organizations, appointed an international study group to review dietary protein needs with aging (the PROT-AGE Study Group). To help older people (>65 years) maintain and regain lean body mass and function, the PROT-AGE Study Group recommends an average daily intake of at least 1.0 to 1.2 g of protein per kilogram of body weight. Both endurance- and resistance-type exercise are recommended at individualized levels that are safe and tolerated, and higher protein intake (ie, ≥1.2 g/kg body weight/d) is advised for those who are exercising and otherwise active. Most older adults who have acute or chronic diseases need even more dietary protein (ie, 1.2-1.5 g/kg body weight/d). Older people with severe kidney disease (ie, estimated GFR <30 mL/min/1.73 m²) who are not on dialysis are an exception to this rule; these individuals may need to limit protein intake. Protein quality, timing of ingestion, and intake of other nutritional supplements may be relevant, but evidence is not yet sufficient to support specific recommendations. Older people are vulnerable to losses in physical function capacity, and such losses predict loss of independence, falls, and even mortality.
Thus, future studies aimed at pinpointing optimal protein intake in specific populations of older people need to include measures of physical function.
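The g/kg/day ranges quoted in this abstract can be turned into a small helper for illustration. The function below is a sketch only (not clinical advice); the condition labels are hypothetical names chosen here, and the severe-kidney-disease exception noted in the abstract is deliberately left out of the table.

```python
def daily_protein_g(weight_kg, condition="healthy"):
    """Return (min, max) grams of protein per day for an adult >65,
    using the g/kg/day ranges quoted in the abstract.
    Illustrative only -- not clinical advice. Severe kidney disease
    (eGFR < 30, not on dialysis) is an exception per the abstract
    and is intentionally not covered here."""
    g_per_kg = {
        "healthy": (1.0, 1.2),                   # baseline recommendation
        "acute_or_chronic_disease": (1.2, 1.5),  # higher needs when ill
    }
    lo, hi = g_per_kg[condition]
    return lo * weight_kg, hi * weight_kg

# A 70 kg healthy older adult: roughly 70-84 g/day
print(daily_protein_g(70))
```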
Abstract:
Background: The Pulmonary Embolism Rule-out Criteria (PERC) rule is a clinical diagnostic rule designed to exclude pulmonary embolism (PE) without further testing. We sought to externally validate the diagnostic performance of the PERC rule alone and combined with clinical probability assessment based on the revised Geneva score. Methods: The PERC rule was applied retrospectively to consecutive patients who presented with a clinical suspicion of PE to six emergency departments, and who were enrolled in a randomized trial of PE diagnosis. Patients who met all eight PERC criteria [PERC(-)] were considered to be at a very low risk for PE. We calculated the prevalence of PE among PERC(-) patients according to their clinical pretest probability of PE. We estimated the negative likelihood ratio of the PERC rule to predict PE. Results: Among 1675 patients, the prevalence of PE was 21.3%. Overall, 13.2% of patients were PERC(-). The prevalence of PE was 5.4% [95% confidence interval (CI): 3.1-9.3%] among PERC(-) patients overall and 6.4% (95% CI: 3.7-10.8%) among those PERC(-) patients with a low clinical pretest probability of PE. The PERC rule had a negative likelihood ratio of 0.70 (95% CI: 0.67-0.73) for predicting PE overall, and 0.63 (95% CI: 0.38-1.06) in low-risk patients. Conclusions: Our results suggest that the PERC rule alone or even when combined with the revised Geneva score cannot safely identify very low risk patients in whom PE can be ruled out without additional testing, at least in populations with a relatively high prevalence of PE.
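As a reminder of what "meeting all eight PERC criteria" means in code, here is a minimal sketch. The criteria values follow Kline et al.'s published rule (the abstract applies but does not enumerate them); this is an illustration, not clinical software.

```python
def perc_negative(age, heart_rate, sao2_percent, hemoptysis,
                  estrogen_use, prior_vte, unilateral_leg_swelling,
                  recent_surgery_or_trauma):
    """True if the patient meets all eight PERC criteria [PERC(-)].
    Thresholds follow Kline et al.'s published rule; illustrative
    sketch only, not a clinical tool."""
    return (age < 50
            and heart_rate < 100
            and sao2_percent >= 95           # on room air
            and not hemoptysis
            and not estrogen_use
            and not prior_vte
            and not unilateral_leg_swelling
            and not recent_surgery_or_trauma)

# A 40-year-old with normal vitals and no risk factors is PERC-negative:
print(perc_negative(40, 80, 98, False, False, False, False, False))
```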
Abstract:
The objective of this article is to demonstrate the effects of different decision styles on strategic decisions and, likewise, on an organization. The technique presented in the study is based on the transformation of linguistic variables into numerical value intervals. The model draws on fuzzy logic methodology and fuzzy numbers. This fuzzy approach allows us to examine the relations between decision-making styles and strategic management processes under uncertainty. The purpose is to provide results that may help companies exercise the most appropriate decision-making style for their different strategic management processes. The study leaves open further research topics that may be applied to other decision-making areas within the strategic management process.
Abstract:
This study looks at how increased memory utilisation affects throughput and energy consumption in scientific computing, especially in high-energy physics. Our aim is to minimise the energy consumed by a set of jobs without increasing the processing time. Earlier tests indicated that, especially in data analysis, throughput can increase by over 100% and energy consumption decrease by 50% when multiple jobs are processed in parallel per CPU core. Since jobs are heterogeneous, it is not possible to find an optimum value for the number of parallel jobs. A better solution is based on memory utilisation, but finding an optimum memory threshold is not straightforward. Therefore, a fuzzy logic-based algorithm was developed that can dynamically adapt the memory threshold based on the overall load. In this way, it is possible to keep memory consumption stable under different workloads while achieving significantly higher throughput and energy efficiency than with a traditional fixed number of jobs or a fixed memory threshold.
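The idea of a load-adaptive threshold can be sketched in a few lines. The membership breakpoints (`low`, `high`) and the step size below are hypothetical values chosen for illustration; the abstract does not give the paper's actual rule base or defuzzification method.

```python
def adapt_threshold(mem_load, current_threshold,
                    low=0.6, high=0.85, step=0.05):
    """Nudge a memory threshold based on overall memory load (0..1),
    loosely mimicking the fuzzy controller described in the abstract.
    `low`/`high`/`step` are hypothetical parameters, not the paper's."""
    # Degree to which the load is "high" (simple ramp membership);
    # "low" is its complement.
    mu_high = min(1.0, max(0.0, (mem_load - low) / (high - low)))
    mu_low = 1.0 - mu_high
    # High load pushes the threshold down (fewer parallel jobs admitted);
    # low load pushes it up so spare memory is used for extra jobs.
    delta = step * (mu_low - mu_high)
    return min(0.95, max(0.1, current_threshold + delta))
```

Because the memberships are graded, the adjustment shrinks smoothly to zero near the crossover point instead of oscillating the way a single hard threshold would.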
Abstract:
To cite this article: Ponvert C, Perrin Y, Bados-Albiero A, Le Bourgeois M, Karila C, Delacourt C, Scheinmann P, De Blic J. Allergy to betalactam antibiotics in children: results of a 20-year study based on clinical history, skin and challenge tests. Pediatr Allergy Immunol 2011; 22: 411-418. ABSTRACT: Studies based on skin and challenge tests have shown that 12-60% of children with suspected betalactam hypersensitivity were allergic to betalactams. Responses in skin and challenge tests were studied in 1865 children with suspected betalactam allergy (i) to confirm or rule out the suspected diagnosis; (ii) to evaluate diagnostic value of immediate and non-immediate responses in skin and challenge tests; (iii) to determine frequency of betalactam allergy in those children, and (iv) to determine potential risk factors for betalactam allergy. The work-up was completed in 1431 children, of whom 227 (15.9%) were diagnosed allergic to betalactams. Betalactam hypersensitivity was diagnosed in 50 of the 162 (30.9%) children reporting immediate reactions and in 177 of the 1087 (16.7%) children reporting non-immediate reactions (p < 0.001). The likelihood of betalactam hypersensitivity was also significantly higher in children reporting anaphylaxis, serum sickness-like reactions, and (potentially) severe skin reactions such as acute generalized exanthematic pustulosis, Stevens-Johnson syndrome, and drug reaction with systemic symptoms than in other children (p < 0.001). Skin tests diagnosed 86% of immediate and 31.6% of non-immediate sensitizations. Cross-reactivity and/or cosensitization among betalactams was diagnosed in 76% and 14.7% of the children with immediate and non-immediate hypersensitivity, respectively. 
The number of children diagnosed allergic to betalactams decreased with time between the reaction and the work-up, probably because the majority of children with severe and worrying reactions were referred for allergological work-up more promptly than the other children. Sex, age, and atopy were not risk factors for betalactam hypersensitivity. In conclusion, we confirm in numerous children that (i) only a few children with suspected betalactam hypersensitivity are allergic to betalactams; (ii) the likelihood of betalactam allergy increases with earliness and/or severity of the reactions; (iii) although non-immediate-reading skin tests (intradermal and patch tests) may diagnose non-immediate sensitizations in children with non-immediate reactions to betalactams (maculopapular rashes and potentially severe skin reactions especially), the diagnostic value of non-immediate-reading skin tests is far lower than the diagnostic value of immediate-reading skin tests, most non-immediate sensitizations to betalactams being diagnosed by means of challenge tests; (iv) cross-reactivity and/or cosensitizations among betalactams are much more frequent in children reporting immediate and/or anaphylactic reactions than in the other children; (v) age, sex and personal atopy are not significant risk factors for betalactam hypersensitivity; and (vi) the number of children with diagnosed allergy to betalactams (of the immediate-type hypersensitivity especially) decreases with time between the reaction and allergological work-up. Finally, based on our experience, we also propose a practical diagnostic approach in children with suspected betalactam hypersensitivity.
Abstract:
The atomic force microscope is not only a very convenient tool for studying the topography of different samples; it can also be used to measure specific binding forces between molecules. For this purpose, one type of molecule is attached to the tip and the other to the substrate. Bringing the tip close to the substrate allows the molecules to bind together; retracting the tip breaks the newly formed bond. The rupture of a specific bond appears in the force-distance curve as a spike from which the binding force can be deduced. In this article we present an algorithm to automatically process force-distance curves in order to obtain bond-strength histograms. The algorithm is based on a fuzzy logic approach that permits an evaluation of "quality" for every event and makes the detection procedure much faster than manual selection. The software has been applied to measure the binding strength between tubulin and microtubule-associated proteins.
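A bare-bones version of the event-detection step might look like the following. This is a toy stand-in: a single hard threshold `min_jump` (in hypothetical force units) replaces the graded fuzzy "quality" score that the paper actually assigns to each candidate event.

```python
def detect_rupture(force, min_jump=50.0):
    """Locate the largest abrupt force jump in the retract trace of a
    force-distance curve; return (index, jump) or None if no event.
    Toy stand-in for the paper's fuzzy scoring: one hard threshold
    `min_jump` (hypothetical units) replaces the graded 'quality'."""
    if len(force) < 2:
        return None
    # Rupture of an adhesive bond shows up as a sudden jump of the
    # (negative) pulling force back toward zero: a large positive step.
    jumps = [b - a for a, b in zip(force, force[1:])]
    i = max(range(len(jumps)), key=jumps.__getitem__)
    return (i, jumps[i]) if jumps[i] >= min_jump else None

# A curve with growing adhesion that snaps back at sample 3:
print(detect_rupture([0, -20, -60, -120, 0, 0]))  # (3, 120)
```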
Abstract:
Background: Johanson-Blizzard syndrome (JBS; OMIM 243800) is an autosomal recessive disorder that includes congenital exocrine pancreatic insufficiency, facial dysmorphism with the characteristic nasal wing hypoplasia, multiple malformations, and frequent mental retardation. Our previous work has shown that JBS is caused by mutations in human UBR1, which encodes one of the E3 ubiquitin ligases of the N-end rule pathway. The N-end rule relates the regulation of the in vivo half-life of a protein to the identity of its N-terminal residue. One class of degradation signals (degrons) recognized by UBR1 are destabilizing N-terminal residues of protein substrates. Methodology/Principal Findings: Most JBS-causing alterations of UBR1 are nonsense, frameshift or splice-site mutations that abolish UBR1 activity. We report here missense mutations of human UBR1 in patients with milder variants of JBS. These single-residue changes, including a previously reported missense mutation, involve positions in the RING-H2 and UBR domains of UBR1 that are conserved among eukaryotes. Taking advantage of this conservation, we constructed alleles of the yeast Saccharomyces cerevisiae UBR1 that were counterparts of missense JBS-UBR1 alleles. Among these yeast Ubr1 mutants, one (H160R) was inactive in yeast-based activity assays, another (Q1224E) had detectable but weak activity, and the third (V146L) exhibited decreased but significant activity, in agreement with the manifestations of JBS in the corresponding patients. Conclusions/Significance: These results, made possible by modeling defects of a human ubiquitin ligase in its yeast counterpart, verified and confirmed the relevance of specific missense UBR1 alleles to JBS, and suggested that residual activity of a missense allele is causally associated with milder variants of JBS.
Abstract:
In this paper we propose an endpoint detection system based on several features extracted from each speech frame, followed by a robust classifier (AdaBoost and Bagging of decision trees, and a multilayer perceptron) and a finite state automaton (FSA). We compare four different classifiers in this task. The FSA module consists of a 4-state decision logic that filters out false alarms and false positives. The look-ahead of the proposed method is 7 frames, the value that maximized the accuracy of the system. The system was tested with real signals recorded inside a car, with signal-to-noise ratios ranging from 6 dB to 30 dB. Finally, we present experimental results demonstrating that the system yields robust endpoint detection.
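A 4-state decision logic of this kind can be sketched as a small state machine over the classifier's per-frame outputs. The abstract does not give the paper's transition rules or state names, so the hangover scheme below, and its `on_count`/`off_count` parameters, are a generic illustration of the idea.

```python
def smooth_decisions(frames, on_count=3, off_count=3):
    """Filter raw per-frame speech/non-speech decisions (1/0) with a
    4-state machine, suppressing short false alarms and brief dropouts.
    Generic hangover sketch; the paper's actual FSA transitions are
    not given in the abstract, and on_count/off_count are hypothetical."""
    SIL, RISE, SPEECH, FALL = range(4)   # the four states
    state, run, out = SIL, 0, []
    for f in frames:
        if state == SIL:
            state, run = (RISE, 1) if f else (SIL, 0)
        elif state == RISE:              # candidate speech onset
            run = run + 1 if f else 0
            if run == 0:
                state = SIL              # short burst: false alarm rejected
            elif run >= on_count:
                state = SPEECH
        elif state == SPEECH:
            state, run = (SPEECH, 0) if f else (FALL, 1)
        else:                            # FALL: candidate speech offset
            run = run + 1 if not f else 0
            if run == 0:
                state = SPEECH           # brief dropout ignored
            elif run >= off_count:
                state = SIL
        out.append(1 if state in (SPEECH, FALL) else 0)
    return out
```

Requiring `on_count` consecutive speech frames before committing to SPEECH is one way the look-ahead mentioned in the abstract trades a few frames of latency for fewer false alarms.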
Abstract:
A table showing a comparison and classification of tools (intelligent tutoring systems) for e-learning of Logic at a college level.
Abstract:
A power electronics device is a control and regulation system that converts electricity from its available form into a desired new form while managing the flow of electric power from the source to the load. This differs from signal electronics, where electricity is typically used to transfer information by exploiting different signal states. When comparing power electronics devices, one usually considers their reliability, size, efficiency, control accuracy and, of course, price. Typical power electronics devices include frequency converters, UPS (Uninterruptible Power Supply) units, welding machines, induction heaters, and various power supplies. Traditionally, the control of these devices has been implemented using microprocessors, ASICs (Application Specific Integrated Circuits), IC (Integrated Circuit) chips, and analog controllers. This study analyses the suitability of FPGAs (Field Programmable Gate Arrays) for power electronics control. An FPGA consists of various logic elements and the interconnections between them. The logic elements are gate circuits and flip-flops. The interconnections and logic elements are fixed on the chip; their composition and number cannot be changed afterwards. Programmability arises from the connections between the elements. The chip contains numerous switches, up to millions, whose states can be set, so a vast number of different functional units can be formed from the chip's basic elements. FPGAs have long been used in communications products, and their development has therefore been rapid in recent years while prices have fallen. As a result, the FPGA has become an attractive option for controlling power electronics devices as well. In this doctoral work, the suitability of FPGAs was investigated using two demanding and different practical power electronics devices: a frequency converter and a welding machine.
Suitable prototypes of both test cases were built together with Finnish industrial companies in the field, and their control electronics were converted to be FPGA-based. In addition, new types of control methods exploiting this new technology were developed. The operation of the prototypes was compared with corresponding commercial products controlled by traditional methods, and the benefits brought by the parallel computation enabled by FPGAs were observed in the performance of both power electronics devices. The work also presents new methods and tools for developing and testing an FPGA-based control system; with these methods, product development can be made as fast and efficient as possible. In addition, an internal FPGA control and communication bus structure serving power electronics control applications was developed. The new communication structure also promotes the reuse of already implemented subsystems in future applications and product generations.
Abstract:
Pulsewidth-modulated (PWM) rectifier technology is increasingly used in industrial applications such as variable-speed motor drives, since it offers several desired features: sinusoidal input currents, controllable power factor, bidirectional power flow, and high-quality DC output voltage. To achieve these features, however, an effective control system with fast and accurate current and DC-voltage responses is required. Of the various control strategies proposed to meet these objectives, in most cases the commonly known principle of synchronous-frame current vector control, together with some space-vector PWM scheme, has been applied. Recently, however, new control approaches analogous to the well-established direct torque control (DTC) method for electrical machines have also emerged to implement a high-performance PWM rectifier. In this thesis the concepts of classical synchronous-frame current control and DTC-based PWM rectifier control are combined and a new converter-flux-based current control (CFCC) scheme is introduced. To achieve sufficient dynamic performance and to ensure stable operation, the proposed control system is thoroughly analysed and simple rules for the controller design are suggested. Special attention is paid to the estimation of the converter flux, which is the key element of converter-flux-based control. Discrete-time implementation is also discussed. Line-voltage-sensorless reactive power control methods for the L- and LCL-type line filters are presented. For the L-filter, an open-loop control law for the d-axis current reference is proposed. In the case of the LCL-filter, a combination of open-loop and feedback control is proposed. The influence of erroneous filter parameter estimates on the accuracy of the developed control schemes is also discussed. A new zero-vector selection rule for suppressing the zero-sequence current in parallel-connected PWM rectifiers is proposed.
With this method, truly standalone and independent control of the converter units is possible, and traditional transformer-isolation and synchronised-control-based solutions are avoided. The implementation requires only one additional current sensor. The proposed schemes are evaluated by simulations and laboratory experiments. Satisfactory performance and good agreement between theory and practice are demonstrated.
Abstract:
Fuzzy set theory and fuzzy logic are studied from a mathematical point of view. The main goal is to investigate common mathematical structures in various fuzzy logical inference systems and to establish a general mathematical basis for fuzzy logic when considered as multi-valued logic. The study is composed of six distinct publications. The first paper deals with Mattila's LPC+Ch Calculus. This fuzzy inference system is an attempt to introduce linguistic objects to mathematical logic without defining these objects mathematically. LPC+Ch Calculus is analyzed from an algebraic point of view, and it is demonstrated that a suitable factorization of the set of well-formed formulae (in fact, the Lindenbaum algebra) leads to a structure called an ET-algebra, introduced at the beginning of the paper. On its basis, all the theorems presented by Mattila, and many others, can be proved in a simple way, as demonstrated in Lemmas 1 and 2 and Propositions 1-3. The conclusion critically discusses some other issues of LPC+Ch Calculus, especially that no formal semantics is given for it. In the second paper, Sanchez's characterization of the solvability of the relational equation RoX=T, where R, X, T are fuzzy relations, X the unknown one, and o the minimum-induced composition, is extended to compositions induced by more general products in a general value lattice. Moreover, the procedure also applies to systems of equations. In the third publication, common features of various fuzzy logical systems are investigated. It turns out that adjoint couples and residuated lattices are very often present, though not always explicitly expressed. Some minor new results are also proved. The fourth study concerns Novak's paper, in which Novak introduced first-order fuzzy logic and proved, among other things, the semantico-syntactical completeness of this logic. He also demonstrated that the algebra of his logic is a generalized residuated lattice.
It is proved that the examination of Novak's logic can be reduced to the examination of locally finite MV-algebras. In the fifth paper, a multi-valued sentential logic with truth values in an injective MV-algebra is introduced and the axiomatizability of this logic is proved. The paper develops some ideas of Goguen and generalizes the results of Pavelka on the unit interval. Our proof of completeness is purely algebraic. A corollary of the Completeness Theorem is that fuzzy logic on the unit interval is semantically complete if, and only if, the algebra of truth values is a complete MV-algebra. The Compactness Theorem holds in our well-defined fuzzy sentential logic, while the Deduction Theorem and the Finiteness Theorem do not. Because of its generality and good behaviour, MV-valued logic can be regarded as a mathematical basis of fuzzy reasoning. The last paper is a continuation of the fifth study. The semantics and syntax of fuzzy predicate logic with truth values in an injective MV-algebra are introduced, and a list of universally valid sentences is established. The system is proved to be semantically complete. The proof is based on an idea utilizing some elementary properties of injective MV-algebras and MV-homomorphisms, and is purely algebraic.
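For readers unfamiliar with the terminology, the adjointness condition behind the "adjoint couples and residuated lattices" recurring in these systems can be stated in one line. This is the standard textbook definition, not anything specific to the six papers:

```latex
% Residuation (adjoint couple) between the monoidal product and its residuum:
x \otimes y \le z \quad \Longleftrightarrow \quad y \le (x \rightarrow z)
% e.g. on [0,1] with x \otimes y = \min(x, y), the residuum is the
% Gödel implication: x \rightarrow z = 1 if x \le z, and z otherwise.
```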
Abstract:
In the last decade, defeasible argumentation frameworks have evolved into a sound setting for formalizing commonsense, qualitative reasoning. The logic programming paradigm has proven particularly useful for developing different argument-based frameworks on the basis of different variants of logic programming that incorporate defeasible rules. Most such frameworks, however, can deal neither with explicit uncertainty nor with vague knowledge, as defeasibility is directly encoded in the object language. This paper presents Possibilistic Defeasible Logic Programming (P-DeLP), a new logic programming language which combines features from argumentation theory and logic programming, incorporating as well the treatment of possibilistic uncertainty. These features are formalized on the basis of PGL, a possibilistic logic based on Gödel fuzzy logic. One application of P-DeLP is providing an intelligent agent with non-monotonic, argumentative inference capabilities. In this paper we also provide a better understanding of such capabilities by defining two non-monotonic operators which model the expansion of a given program P by adding new weighted facts associated with argument conclusions and warranted literals, respectively. Different logical properties of the proposed operators are studied.
Abstract:
OBJECTIVES: To investigate the frequency of interim analyses, stopping rules, and data safety and monitoring boards (DSMBs) in protocols of randomized controlled trials (RCTs); to examine these features across different reasons for trial discontinuation; and to identify discrepancies in reporting between protocols and publications. STUDY DESIGN AND SETTING: We used data from a cohort of RCT protocols approved between 2000 and 2003 by six research ethics committees in Switzerland, Germany, and Canada. RESULTS: Of 894 RCT protocols, 289 (32.3%) prespecified interim analyses, 153 (17.1%) stopping rules, and 257 (28.7%) DSMBs. Overall, 249 of 894 RCTs (27.9%) were prematurely discontinued, mostly for reasons such as poor recruitment, administrative reasons, or unexpected harm. Forty-six of 249 RCTs (18.4%) were discontinued for early benefit or futility; of those, 37 (80.4%) were stopped outside a formal interim analysis or stopping rule. Of 515 published RCTs, there were discrepancies between protocols and publications for interim analyses (21.1%), stopping rules (14.4%), and DSMBs (19.6%). CONCLUSION: Two-thirds of RCT protocols did not consider interim analyses, stopping rules, or DSMBs. Most RCTs discontinued for early benefit or futility were stopped without a prespecified mechanism. When assessing trial manuscripts, journals should require access to the protocol.
Abstract:
BACKGROUND: The diagnosis of Pulmonary Embolism (PE) in the emergency department (ED) is crucial. As emergency physicians fear missing this potentially life-threatening condition, PE tends to be over-investigated, exposing patients to unnecessary risks and uncertain benefit in terms of outcome. The Pulmonary Embolism Rule-out Criteria (PERC) is an eight-item block of clinical criteria that can identify patients who can safely be discharged from the ED without further investigation for PE. The endorsement of this rule could markedly reduce the number of irradiative imaging studies, ED length of stay, and rate of adverse events resulting from both diagnostic and therapeutic interventions. Several retrospective and prospective studies have shown the safety and benefits of the PERC rule for PE diagnosis in low-risk patients, but the validity of this rule is still controversial. We hypothesize that in European patients with a low gestalt clinical probability and who are PERC-negative, PE can be safely ruled out and the patient discharged without further testing. METHODS/DESIGN: This is a controlled, cluster-randomized trial in 15 centers in France. Each center will be randomized for the sequence of intervention periods: a 6-month intervention period (PERC-based strategy) followed by a 6-month control period (usual care), or in reverse order, with 2 months of "wash-out" between the 2 periods. Adult patients presenting to the ED with a suspicion of PE and a low pretest probability estimated by clinical gestalt will be eligible. The primary outcome is the percentage of failures resulting from the diagnostic strategy, defined as diagnosed venous thromboembolic events at 3-month follow-up among patients for whom PE was initially ruled out. DISCUSSION: The PERC rule has the potential to decrease the number of irradiative imaging studies in the ED, and is reported to be safe. However, no randomized study has ever validated the safety of PERC.
Furthermore, some studies have challenged the safety of a PERC-based strategy to rule out PE, especially in Europe where the prevalence of PE diagnosed in the ED is high. The PROPER study should provide high-quality evidence to settle this issue. If it confirms the safety of the PERC rule, physicians will be able to reduce the number of investigations, associated subsequent adverse events, costs, and ED length of stay for patients with a low clinical probability of PE. TRIAL REGISTRATION: NCT02375919.