769 results for Cloud computing
Abstract:
Different procedures to obtain atom-condensed Fukui functions are described. It is shown how the resulting values may differ depending on the exact approach to atom-condensed Fukui functions. The condensed Fukui function can be computed using either the fragment of molecular response approach or the response of molecular fragment approach. The two approaches are nonequivalent; only the latter corresponds in general with a population difference expression. Values condensed with the Mulliken scheme do not depend on which approach is taken, but that scheme has some computational drawbacks. The different resulting expressions are tested for a wide set of molecules. In practice one must make seemingly arbitrary choices about how to compute condensed Fukui functions, which suggests questioning the role of these indicators in conceptual density-functional theory.
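As a brief aside, a minimal sketch of the population-difference expressions usually associated with condensed Fukui functions, assuming hypothetical atomic populations from three single-point calculations at fixed geometry; the partitioning scheme and the numbers are illustrative only, not taken from the paper.

```python
import numpy as np

# Hypothetical condensed atomic electron populations q_A for the N-electron
# molecule and for the (N+1)- and (N-1)-electron systems at the same geometry
# (from any population analysis: Mulliken, Hirshfeld, ...).
q_N       = np.array([6.32, 0.84, 0.84])
q_Nplus1  = np.array([6.95, 1.02, 1.03])
q_Nminus1 = np.array([5.88, 0.56, 0.56])

# Population-difference ("response of molecular fragment") expressions:
f_plus  = q_Nplus1 - q_N            # f+(A): susceptibility to nucleophilic attack
f_minus = q_N - q_Nminus1           # f-(A): susceptibility to electrophilic attack
f_zero  = 0.5 * (f_plus + f_minus)  # f0(A): radical attack

print(f_plus, f_minus, f_zero)
```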
Abstract:
The alignment between competences, teaching-learning methodologies and assessment is a key element of the European Higher Education Area. This paper presents the efforts carried out by six Telematics, Computer Science and Electronic Engineering Education teachers towards achieving this alignment in their subjects. In joint work with pedagogues, a set of recommended actions was identified. A selection of these actions was applied and evaluated in the six subjects. The cross-analysis of the results indicates that the actions allow students to better understand the methodologies and assessment planned for the subjects, facilitate (self-)regulation and increase students' involvement in the subjects.
Abstract:
Classical planning has been notably successful in synthesizing finite plans to achieve states where propositional goals hold. In the last few years, classical planning has also been extended to incorporate temporally extended goals, expressed in temporal logics such as LTL, to impose restrictions on the state sequences generated by finite plans. In this work, we take the next step and consider the computation of infinite plans for achieving arbitrary LTL goals. We show that infinite plans can also be obtained efficiently by calling a classical planner once over a classical planning encoding that represents and extends the composition of the planning domain and the Büchi automaton representing the goal. This compilation scheme has been implemented and a number of experiments are reported.
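The compilation itself is not reproduced here; as a rough sketch of the idea behind it (toy transition system, toy Büchi automaton for "infinitely often p", all names invented), an infinite plan exists exactly when the product of the domain and the automaton contains a reachable accepting lasso:

```python
# Toy deterministic transition system (the "planning domain") and a toy
# Buchi automaton for the LTL goal "infinitely often p"; all names invented.
ts_trans = {("s0", "a"): "s1", ("s1", "b"): "s0", ("s1", "c"): "s1"}
label = {"s0": frozenset(), "s1": frozenset({"p"})}

ba_trans = {("q0", frozenset()): {"q0"},
            ("q0", frozenset({"p"})): {"q0", "q1"},
            ("q1", frozenset()): {"q0"},
            ("q1", frozenset({"p"})): {"q0", "q1"}}
ba_accepting = {"q1"}

def successors(node):
    """Successors of a (domain state, automaton state) pair in the product."""
    s, q = node
    for (src, _act), dst in ts_trans.items():
        if src == s:
            for q2 in ba_trans.get((q, label[dst]), ()):
                yield (dst, q2)

def reachable(start):
    seen, stack = {start}, [start]
    while stack:
        for nxt in successors(stack.pop()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def has_accepting_lasso(init=("s0", "q0")):
    """An infinite plan exists iff a reachable accepting product state lies on a cycle."""
    for node in reachable(init):
        if node[1] in ba_accepting:
            back = set()
            for nxt in successors(node):
                back |= reachable(nxt)
            if node in back:
                return True
    return False

print(has_accepting_lasso())  # True: looping s0 -a-> s1 -b-> s0 satisfies "infinitely often p"
```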
Abstract:
The aim of this research was to structure a conceptual model of hope and hopelessness based on dictionary definitions, and to verify this model on the basis of the experiences of the severely depressive and non-depressive elderly. This research has produced a substantive theory of hope and hopelessness which is based on the experiences of the depressive and non-depressive elderly, and on the concept analysis of hope and hopelessness based on English dictionary definitions. The patients who participated in the research were men and women aged 65 years and older (n=22) who had been admitted to a psychiatric hospital because of major depression, and another group, the non-depressive elderly (n=21), who were recruited from pensioners' clubs. The data were collected in interviews using the Clinical Assessment Tool, developed by Farran, Salloway and Clark (1990) and Farran, Wilken and Popovich (1992), and produced 553 pages of written text, which were analysed using the ATLAS/ti programme. ATLAS/ti is a tool for analysing qualitative data and is based on Grounded Theory. The medical and nursing records of the depressive elderly completed source triangulation. The concept analysis of hope and hopelessness was made on the basis of the definitions of English dictionaries (n=103), using semantic analysis and the ATLAS/ti programme. The most important hope-promoting factors were human relations, health and managing in everyday living. Autonomy, self-determination and a feeling of security were highly appreciated among the elderly. Hopelessness, on the other hand, was most often associated with the same factors: human relations, health and everyday living. In particular, losses of significant others were experienced as strongly hope-diminishing. Old age had brought freedom from duties concerning others, but now, when one finally had an opportunity to enjoy oneself, one could not accomplish anything; one was clasped in the arms of total inability, for depression had come. The most obvious difference in the life course of the depressive and non-depressive elderly was the abundance of traumatic experiences in the childhood and youth of the depressive elderly. The continuous circulation of fearful thoughts was almost palpable, and suicidality was described in connection with these thoughts. One was afraid to be awake and also to go to sleep. Managing day by day was the goal. The research produced the Basic Social Process (BSP) of hope: achieving - maintaining - losing, which expresses a continuous balancing between Being without and Being with. The importance of the object of hope was combined with the amount of hope and disappointment. The process of approaching defined the realisation of hope and the process of withdrawal that of losing. Joy and security versus grief and insecurity defined Being with and Being without. Two core categories were found. The first one, “If only I could”, reflects lack of energy, lack of knowledge, lack of courage and lack of ability. The other one, “There is always a loophole”, reflects deliberate tracing of possibilities, the belief in finding solutions, and managing.
Abstract:
BACKGROUND: Complex foot and ankle fractures, such as calcaneum fractures or Lisfranc dislocations, are often associated with a poor outcome, especially in terms of gait capacity. Indeed, degenerative changes often lead to chronic pain and chronic functional limitations. Prescription footwear represents an important therapeutic tool during the rehabilitation process. Local Dynamic Stability (LDS) is the ability of the locomotor system to maintain continuous walking by accommodating the small perturbations that occur naturally during walking. Because it reflects the degree of control over the gait, LDS has been advocated as a relevant indicator for evaluating different conditions and pathologies. The aim of this study was to analyze changes in LDS induced by orthopaedic shoes in patients with persistent foot and ankle injuries. We hypothesised that footwear adaptation might help patients to improve gait control, which could lead to higher LDS. METHODS: Twenty-five middle-aged inpatients (5 females, 20 males) participated in the study. They were treated for chronic post-traumatic disabilities following ankle and/or foot fractures in a Swiss rehabilitation clinic. During their stay, the included inpatients received orthopaedic shoes with custom-made orthoses (insoles). They performed two 30-s walking trials with standard shoes and two 30-s trials with orthopaedic shoes. A triaxial motion sensor recorded 3D accelerations at the lower back level. LDS was assessed by computing divergence exponents (maximal Lyapunov exponents) in the acceleration signals. Pain was evaluated with a Visual Analogue Scale (VAS). LDS and pain differences between the trials with standard shoes and the trials with orthopaedic shoes were assessed. RESULTS: Orthopaedic shoes significantly improved LDS along the three axes (medio-lateral: 10% relative change, paired t-test p < 0.001; vertical: 9%, p = 0.03; antero-posterior: 7%, p = 0.04). A significant decrease in pain level (VAS score -29%) was observed. CONCLUSIONS: Footwear adaptation led to pain relief and to improved foot and ankle proprioception. It is likely that this enhancement allows patients to better control foot placement; as a result, higher dynamic stability was observed. LDS therefore seems to be a valuable index that could be used for the early evaluation of footwear outcome in clinical settings.
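To make the stability index concrete, here is a rough sketch of a Rosenstein-style divergence-exponent (maximal Lyapunov exponent) estimate from a single acceleration trace; the embedding parameters, preprocessing and surrogate signal are assumptions, not the study's actual processing chain.

```python
import numpy as np
from scipy.spatial.distance import cdist

def delay_embed(x, dim, tau):
    """Time-delay embedding of a 1-D signal into dim-dimensional state vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def divergence_exponent(x, fs, dim=5, tau=10, theiler=100, horizon=60):
    """Slope of the mean log-divergence curve (Rosenstein-style estimate)."""
    emb = delay_embed(np.asarray(x, dtype=float), dim, tau)
    n = len(emb)
    dists = cdist(emb, emb)
    for i in range(n):  # exclude temporally close points (Theiler window) and self-pairs
        dists[i, max(0, i - theiler):min(n, i + theiler + 1)] = np.inf
    nn = np.argmin(dists, axis=1)
    log_div = []
    for k in range(horizon):  # track how initially nearest trajectories diverge
        d = np.array([np.linalg.norm(emb[i + k] - emb[nn[i] + k])
                      for i in range(n - k) if nn[i] + k < n])
        log_div.append(np.mean(np.log(d[d > 0])))
    t = np.arange(horizon) / fs
    slope, _ = np.polyfit(t, log_div, 1)  # divergence exponent, per second
    return slope

# Surrogate 10 s signal standing in for a trunk acceleration trace sampled at 100 Hz.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
sig = np.sin(2 * np.pi * t) + 0.05 * np.random.randn(t.size)
print(divergence_exponent(sig, fs))
```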
Abstract:
Statistical computing when input/output is driven by a Graphical User Interface is considered. A proposal is made for automatic control of computational flow to ensure that only strictly required computations are actually carried out. The computational flow is modeled by a directed graph for implementation in any object-oriented programming language with symbolic manipulation capabilities. A complete implementation example is presented to compute and display frequency-based piecewise linear density estimators such as histograms or frequency polygons.
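A small, hypothetical illustration of the idea (the paper's own object-oriented design is not reproduced): computations are nodes of a directed graph, nodes cache their values, and changing an input only marks its descendants for recomputation, so that only strictly required computations are carried out.

```python
import numpy as np

class Node:
    """A computation in a directed dependency graph, recomputed only when needed."""
    def __init__(self, func, *parents):
        self.func, self.parents, self.children = func, list(parents), []
        self.value, self.dirty = None, True
        for p in self.parents:
            p.children.append(self)

    def invalidate(self):
        if not self.dirty:          # mark this node and everything downstream
            self.dirty = True
            for c in self.children:
                c.invalidate()

    def get(self):
        if self.dirty:              # recompute only if an input has changed
            self.value = self.func(*(p.get() for p in self.parents))
            self.dirty = False
        return self.value

class Source(Node):
    """A GUI-controlled input value."""
    def __init__(self, value):
        super().__init__(None)
        self.value, self.dirty = value, False

    def set(self, value):
        self.value = value
        for c in self.children:
            c.invalidate()

# Example: data and bin width feed breakpoints, which feed histogram counts.
data = Source(np.random.randn(500))
width = Source(0.5)
breaks = Node(lambda x, w: np.arange(x.min(), x.max() + w, w), data, width)
counts = Node(lambda x, b: np.histogram(x, bins=b)[0], data, breaks)

counts.get()        # everything is computed once
counts.get()        # cached: nothing is recomputed
width.set(0.25)     # only breaks and counts become dirty
counts.get()        # recomputed with the new bin width
```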
Abstract:
This article starts a computational study of congruences of modular forms and modular Galois representations modulo prime powers. Algorithms are described that compute the maximum integer modulo which two monic coprime integral polynomials have a root in common, in a sense that is defined. These techniques are applied to the study of congruences of modular forms and modular Galois representations modulo prime powers. Finally, some computational results with implications on the (non-)liftability of modular forms modulo prime powers and possible generalisations of level raising are presented.
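As a loose, hedged illustration of the kind of quantity involved (not the article's algorithms, whose precise notion of a common root modulo an integer is defined there): for monic integral polynomials, any prime modulo which they share a root must divide their resultant, which the following SymPy snippet checks on invented examples.

```python
from sympy import resultant, symbols

x = symbols('x')
f = x**2 + 1             # monic, coprime over Q (invented examples)
g = x**2 - x - 1

res = resultant(f, g, x)     # here res = 5

def common_root_mod(f, g, p):
    """Brute-force check: do f and g have a common root modulo the prime p?"""
    return any(f.subs(x, a) % p == 0 and g.subs(x, a) % p == 0 for a in range(p))

for p in (2, 3, 5, 7, 11):
    print(p, res % p == 0, common_root_mod(f, g, p))
# Only p = 5 divides the resultant, and indeed x = 3 is a common root mod 5.
```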
Abstract:
In recent years, the world of computing has evolved in ways that were unimaginable, both in hardware and in software. This evolution has led to the creation of many companies dedicated to programming, and one of their main lines of work has been the development of business management software. Often, however, standard programs cannot satisfy all of a client's needs, only some of them, and developing a custom program is expensive. In the case of Pastisseria Mas de Navàs, a family business, administrative tasks are carried out with spreadsheets, specifically Microsoft Excel, which allows the accounts to be kept in a more or less simple way, since only basic computer skills are required. The same goes for supplier data, which is kept in a Microsoft Access database. Another shortcoming is the handling of orders, which is done manually. The objective of this project is therefore to develop a program that makes their work easier. This program will allow them to manage the data they use, such as information about customers, staff, orders, and so on. A website has also been developed that provides information about the orders that have been placed. The application is designed to run on Windows XP and was developed with the CodeGear Rad Studio compiler, specifically C++ Builder 2009. For the database I used MySQL, and for the website, PHP with the same database. The analysis and design were done in UML.
Abstract:
In general terms, Energy Efficiency can be defined as the reduction of energy consumption while maintaining the same energy services, without diminishing our comfort and quality of life, protecting the environment, securing supply and encouraging sustainable behaviour in energy use. The main objective of this work is to reduce the energy consumption and the contracted power term at the Universitat de Vic by applying a savings programme with corrective measures in the operation of its facilities and spaces. To reach this objective, a careful study was first carried out to obtain all the information needed to apply the corrective measures to the largest pocket of consumption. Once it was identified, a feasibility study of the investment in the most efficient corrective measures was performed, optimizing the resources allocated. The study was carried out in building F of the Miramarges Campus, following the indications of Arnau Bardolet (Head of Maintenance at UVic). This building consists of a mezzanine, a ground floor and four upper floors. The measuring equipment used for the study was a Circutor AR5-L series analyser; these are programmable devices that measure, calculate and store in memory the main electrical parameters of three-phase networks. Complementary future projects that could follow on from this one include installing sensors, installing TCP/IP converter modules, making use of the intranet and building a SCADA system with a control and management synoptic operated from a workstation. Such an application would make it possible to view on a PC screen the state of all the controlled elements through a synoptic (manually switching classroom lighting and sockets on and off, the status of classroom lighting and sockets, instantaneous and accumulated energy consumption, the status of the corridors, among others) and to exploit the data collected in the database. Each space would have its own specific automatic operating logic. Among the most relevant conclusions obtained in this work: · the contracted power on the bill can be reduced, since the power actually consumed is below it; · there are no penalties on the bill for reactive power consumption, because the compensator works correctly; · the schedule for the start of energy consumption can be shortened, since it does not correspond to teaching activity; · the voltage and frequency values are within normal limits; · the harmonics are at the maximum threshold. Analysing these conclusions, the most important corrective measures that can be carried out are: switching to LED technology, automatically scheduling the switching on and off of the fluorescent lights and computer equipment in the classrooms following the academic calendar, and installing motion sensors with light-level detection in the corridors. All the conclusions drawn from this work can be applied to all the buildings of the faculty, after carrying out an individual study of each one following the same criteria, in order to optimize the investment.
Abstract:
OBJECTIVE: Before a patient can be connected to a mechanical ventilator, the controls of the apparatus need to be set up appropriately. Today, this is done by the intensive care professional. With the advent of closed-loop controlled mechanical ventilation, methods will be needed to select appropriate start-up settings automatically. The objective of our study was to test such a computerized method, which could eventually be used as a start-up procedure (first 5-10 minutes of ventilation) for closed-loop controlled ventilation. DESIGN: Prospective study. SETTINGS: ICUs in two adult hospitals and one children's hospital. PATIENTS: 25 critically ill adult patients (age ≥ 15 y) and 17 critically ill children selected at random were studied. INTERVENTIONS: To simulate 'initial connection', the patients were disconnected from their ventilator and transiently connected to a modified Hamilton AMADEUS ventilator for at most one minute. During that time they were ventilated with a fixed and standardized breath pattern (Test Breaths) based on pressure-controlled synchronized intermittent mandatory ventilation (PCSIMV). MEASUREMENTS AND MAIN RESULTS: Measurements of airway flow, airway pressure and instantaneous CO2 concentration using a mainstream CO2 analyzer were made at the mouth during application of the Test Breaths. Test Breaths were analyzed in terms of tidal volume, expiratory time constant and series dead space. Using these data, an initial ventilation pattern consisting of respiratory frequency and tidal volume was calculated. This ventilation pattern was compared to the one measured prior to the onset of the study using a two-tailed paired t-test. Additionally, it was compared to a conventional method for setting up ventilators. The computer-proposed ventilation pattern did not differ significantly from the actual pattern (p > 0.05), while the conventional method did. However, the scatter was large, and in 6 cases deviations in the minute ventilation of more than 50% were observed. CONCLUSIONS: The analysis of standardized Test Breaths allows automatic determination of an initial ventilation pattern for intubated ICU patients. While this pattern does not seem to be superior to the one chosen by the conventional method, it is derived fully automatically and without the need for manual patient data entry such as weight or height. This makes the method potentially useful as a start-up procedure for closed-loop controlled ventilation.
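A hedged sketch of how two of the breath parameters mentioned above (tidal volume and expiratory time constant) could be extracted from a flow signal; the flow convention, synthetic test breath and fitting approach are assumptions, and the study's actual breath analysis (including the series dead space derived from the CO2 signal) is not reproduced.

```python
import numpy as np

def breath_parameters(flow, fs):
    """Tidal volume and expiratory time constant from a single-breath flow signal.
    flow > 0 is taken as inspiration, flow < 0 as expiration (L/s); fs in Hz."""
    volume = np.cumsum(flow) / fs                 # volume above end-expiratory level (L)
    tidal_volume = volume.max()
    expiration = flow < 0                         # passive expiration samples
    # During passive expiration, volume ~ tau * (-flow); tau from a least-squares fit.
    tau = np.polyfit(-flow[expiration], volume[expiration], 1)[0]
    return tidal_volume, tau

# Synthetic test breath: 1 s square-wave inspiration, then passive exponential expiration.
fs = 100.0
t_in = np.arange(0, 1, 1 / fs)
t_ex = np.arange(0, 3, 1 / fs)
flow = np.concatenate([0.5 * np.ones_like(t_in),             # 0.5 L/s for 1 s -> VT = 0.5 L
                       -(0.5 / 0.7) * np.exp(-t_ex / 0.7)])   # time constant tau = 0.7 s
print(breath_parameters(flow, fs))   # ~ (0.5, 0.7)
```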
Abstract:
For the last two decades, supertree reconstruction has been an active field of research and has seen the development of a large number of major algorithms. Because of the growing popularity of supertree methods, it has become necessary to evaluate the performance of these algorithms to determine which are the best options (especially with regard to the widely used supermatrix approach). In this study, seven of the most commonly used supertree methods are investigated using a large empirical data set (in terms of number of taxa and molecular markers) from the worldwide flowering plant family Sapindaceae. Supertree methods were evaluated using several criteria: similarity of the supertrees with the input trees, similarity between the supertrees and the total evidence tree, level of resolution of the supertree, and computational time required by the algorithm. Additional analyses were also conducted on a reduced data set to test whether the performance levels were affected by the heuristic searches rather than by the algorithms themselves. Based on our results, two main groups of supertree methods were identified: on the one hand, the matrix representation with parsimony (MRP), MinFlip, and MinCut methods performed well according to our criteria, whereas, on the other hand, the average consensus, split fit, and most similar supertree methods showed a poorer performance or at least did not behave the same way as the total evidence tree. Results for the super distance matrix, that is, the most recent approach tested here, were promising, with at least one derived method performing as well as MRP, MinFlip, and MinCut. The output of each method was only slightly improved when applied to the reduced data set, suggesting a correct behavior of the heuristic searches and a relatively low sensitivity of the algorithms to data set sizes and missing data. Results also showed that the MRP analyses could reach a high level of quality even when using a simple heuristic search strategy, with the exception of MRP with the Purvis coding scheme and reversible parsimony. The future of supertrees lies in the implementation of a standardized heuristic search for all methods and in the increase in computing power to handle large data sets. The latter would prove particularly useful for promising approaches such as the maximum quartet fit method, which still requires substantial computing power.
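As an illustration of the most widely used of these methods, a toy sketch of matrix representation with parsimony (MRP) coding (standard Baum/Ragan-style coding on invented trees; not the study's pipeline): every clade of every source tree becomes a binary character, with '?' for taxa absent from that tree, and the resulting matrix is then handed to a parsimony program.

```python
def clades(tree):
    """Non-trivial clades (as frozensets of taxa) of a tree given as nested tuples."""
    found, leaves = [], set()
    def walk(node):
        if isinstance(node, tuple):
            below = set()
            for child in node:
                below |= walk(child)
            found.append(frozenset(below))
            return below
        leaves.add(node)
        return {node}
    walk(tree)
    return [c for c in found if 1 < len(c) < len(leaves)], leaves

def mrp_matrix(trees):
    """Baum/Ragan-style MRP coding of a collection of source trees."""
    all_taxa = sorted(set().union(*(clades(t)[1] for t in trees)))
    columns = []
    for t in trees:
        tree_clades, tree_taxa = clades(t)
        for c in tree_clades:
            columns.append({tax: (('1' if tax in c else '0') if tax in tree_taxa else '?')
                            for tax in all_taxa})
    return {tax: ''.join(col[tax] for col in columns) for tax in all_taxa}

# Two toy source trees with overlapping taxon sets.
t1 = ((("A", "B"), "C"), "D")
t2 = ((("A", "B"), "E"), "C")
for taxon, row in sorted(mrp_matrix([t1, t2]).items()):
    print(taxon, row)   # e.g. D -> '00??', E -> '??01'
```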
Abstract:
Intracardiac organization indices such as the atrial fibrillation (AF) cycle length (AFCL) have been used to track the efficiency of stepwise catheter ablation (step-CA) of long-standing persistent AF, however with limited success. The morphology of AF activation waves reflects the underlying activation patterns, and its temporal evolution is a local organization indicator that could potentially be used for tracking the efficiency of step-CA. We report a new method for characterizing the structure of the temporal evolution of activation wave morphology. Using recurrence plots, novel organization indices are proposed. By computing their relative evolution during the first step of ablation versus baseline, we found that these new parameters are superior to AFCL for tracking the effect of step-CA "en route" to AF termination.
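For context, a generic sketch of a recurrence plot and two standard recurrence-quantification indices computed from a sequence of wave-morphology feature vectors; the feature extraction and the specific organization indices proposed in the paper are not reproduced, and the toy data below are random placeholders.

```python
import numpy as np

def recurrence_plot(features, eps):
    """Binary recurrence matrix: R[i, j] = 1 if feature vectors i and j are within eps."""
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    return (d <= eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent points, main diagonal excluded."""
    n = len(R)
    return (R.sum() - n) / (n * (n - 1))

def determinism(R, lmin=2):
    """Fraction of recurrent points lying on diagonal lines of length >= lmin."""
    n = len(R)
    on_lines = recurrent = 0
    for k in range(1, n):                 # upper diagonals; doubled for symmetry
        line = np.diag(R, k)
        recurrent += 2 * line.sum()
        run = 0
        for v in np.append(line, 0):      # trailing 0 flushes the last run
            if v:
                run += 1
            else:
                if run >= lmin:
                    on_lines += 2 * run
                run = 0
    return on_lines / recurrent if recurrent else 0.0

# Toy stand-in: one feature vector per detected atrial activation wave.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 8))
R = recurrence_plot(feats, eps=3.5)
print(recurrence_rate(R), determinism(R))
```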
Abstract:
INTRODUCTION: Triple-negative breast cancers (TNBCs) are characterised by lack of expression of hormone receptors and of epidermal growth factor receptor 2 (HER-2). As they frequently express epidermal growth factor receptors (EGFRs), anti-EGFR therapies are currently being assessed for this breast cancer subtype as an alternative to treatments that target HER-2 or hormone receptors. Recently, EGFR-activating mutations have been reported in TNBC specimens from an East Asian population. Because variations in the frequency of EGFR-activating mutations between East Asian and other patients with lung cancer have been described, we evaluated the EGFR mutational profile in tumour samples from European patients with TNBC. METHODS: We selected from a DNA tumour bank 229 DNA samples isolated from frozen, histologically proven and macrodissected invasive TNBC specimens from European patients. PCR and high-resolution melting (HRM) analyses were used to detect mutations in exons 19 and 21 of EGFR. The results were then confirmed by bidirectional sequencing of all samples. RESULTS: HRM analysis allowed the detection of three EGFR exon 21 mutations, but no exon 19 mutations. There was 100% concordance between the HRM and sequencing results. The three patients with abnormal EGFR exon 21 HRM profiles harboured the rare R836R SNP, but no EGFR-activating mutation was identified. CONCLUSIONS: This study highlights variations in the prevalence of EGFR mutations in TNBC. These variations have crucial implications for the design of clinical trials involving anti-EGFR treatments in TNBC and for identifying the potential target population.