890 results for Knowledge-based Potentials
Abstract:
The lives of humans and most living beings depend on sensation and perception for the best assessment of the surrounding world. Sensory organs acquire a variety of stimuli that are interpreted and integrated in our brain for immediate use or stored in memory for later recall. In reasoning, a person has to decide what to do with the available information. Emotions classify collected information, assigning a personal meaning to objects, events and individuals, and thus form part of our own identity. Emotions play a decisive role in cognitive processes such as reasoning, decision-making and memory by assigning relevance to collected information. Access to pervasive computing devices, empowered by the ability to sense and perceive the world, provides new ways of acquiring and integrating information. But before data can be assessed for usefulness, systems must capture it and ensure that it is properly managed for diverse possible goals. Portable and wearable devices are now able to gather and store information from the environment and from our body, using cloud-based services and Internet connections. The limitations of such systems in handling sensory data, compared with our own sensory capabilities, constitute one identified problem. Another is the lack of interoperability between humans and devices, as devices do not properly understand human emotional states and needs. Addressing these problems motivates the present research work. The mission assumed here is to bring sensory and physiological data into a Framework able to manage collected data in the manner of human cognitive functions, supported by a new data model. By learning from selected human functional and behavioural models and reasoning over collected data, the Framework aims to evaluate a person's emotional state, empowering human-centric applications, and to store episodic information about a person's life, with physiological indicators of emotional state, for use by new-generation applications.
Abstract:
INTRODUCTION: This study investigated the knowledge of users of primary healthcare services living in Ribeirão Preto, Brazil, about dengue and its vector. METHODS: A cross-sectional survey of 605 people was conducted following a major dengue outbreak in 2013. RESULTS: Participants with higher levels of education were more likely to correctly identify the vector of the disease. CONCLUSIONS: The results emphasize the relevance of health education programs, the continuous promotion of educational campaigns in the media, the role of television as a source of information, and the importance of motivating the population to control the vector.
Abstract:
In recent years, a set of production paradigms has been proposed to enable manufacturers to meet new market requirements, such as the shift in demand towards highly customized products with shorter life cycles, rather than traditional mass-produced standardized goods. These new paradigms advocate solutions capable of meeting these requirements, endowing manufacturing systems with a high capacity to adapt, along with elevated flexibility and robustness, in order to deal with disturbances such as unexpected orders or malfunctions. Evolvable Production Systems propose a solution based on modularity and self-organization at a fine level of granularity, supporting pluggability and thereby allowing companies to add and/or remove components during execution without any extra re-programming effort. However, current monitoring software was not designed to fully support these characteristics, being commonly based on centralized SCADA systems that cannot re-adapt during execution to the unexpected plugging/unplugging of devices or to changes in the overall system topology. Considering these aspects, the work developed for this thesis encompasses a fully distributed agent-based architecture capable of performing knowledge extraction at different levels of abstraction without sacrificing the capacity to add and/or remove, at runtime, the monitoring entities responsible for data extraction and analysis.
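To illustrate the pluggability idea at the heart of this abstract, here is a minimal Python sketch of a monitoring registry whose agents can be plugged and unplugged at runtime; all class and method names (MonitoringAgent, MonitorRegistry, plug, unplug) are hypothetical and do not reflect the thesis's actual architecture.

```python
# Minimal sketch of runtime-pluggable monitoring agents.
# All names here (MonitoringAgent, MonitorRegistry, plug, unplug)
# are hypothetical illustrations, not the thesis's actual API.

class MonitoringAgent:
    """An agent that extracts and analyses data from one device."""
    def __init__(self, device_id):
        self.device_id = device_id

    def sample(self, raw_value):
        # Placeholder analysis: flag values outside a nominal band.
        return {"device": self.device_id, "value": raw_value,
                "alarm": not (0.0 <= raw_value <= 1.0)}

class MonitorRegistry:
    """Holds the currently plugged agents; no re-programming is needed
    when devices appear or disappear during execution."""
    def __init__(self):
        self._agents = {}

    def plug(self, agent):
        self._agents[agent.device_id] = agent

    def unplug(self, device_id):
        self._agents.pop(device_id, None)

    def poll(self, readings):
        # readings: {device_id: raw_value}
        return [self._agents[d].sample(v)
                for d, v in readings.items() if d in self._agents]

registry = MonitorRegistry()
registry.plug(MonitoringAgent("conveyor-1"))
registry.plug(MonitoringAgent("gripper-2"))
print(registry.poll({"conveyor-1": 0.4, "gripper-2": 1.7}))
registry.unplug("gripper-2")          # device removed at runtime
print(registry.poll({"conveyor-1": 0.5}))
```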
Abstract:
OBJECTIVE: The initial site of myocardial infarction (MI) may influence the prevalence of ventricular late potentials (VLP), high-frequency signals, owing to the time course of ventricular activation. The prevalence of VLP more than 2 years after acute MI was assessed with a focus on the initially injured wall. METHODS: The prevalence of VLP in a late phase after MI (median of 924 days) in anterior/antero-septal and inferior/infero-dorsal wall lesions was analyzed using the signal-averaged electrocardiogram in the time domain. The diagnostic performance of the filters employed for the analysis was tested at high-pass cut-off frequencies of 25 Hz, 40 Hz and 80 Hz. RESULTS: The duration of the ventricular activation and of its terminal portion was longer in inferior than in anterior infarction at high-pass cut-off frequencies of 40 Hz and 80 Hz. In patients with ventricular tachycardia, these differences were more marked. The prevalence of ventricular late potentials was three times greater in inferior than in anterior infarction. CONCLUSION: Late after myocardial infarction, the prevalence and duration of ventricular late potentials are greater in lesions of the inferior/infero-dorsal wall than of the anterior/antero-septal wall, consistent with their temporal course and reflecting their high-frequency content.
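For readers unfamiliar with the filtering step, the following is a minimal sketch of applying the high-pass cut-offs named above to a signal-averaged ECG trace, using a generic SciPy Butterworth filter on synthetic data; the filter order and the 1 kHz sampling rate are assumptions, not the study's settings.

```python
# Hedged sketch: high-pass filtering of a signal-averaged ECG trace.
# The 4th-order Butterworth filter and the 1 kHz sampling rate are
# assumptions for illustration, not the study's actual settings.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                       # assumed sampling rate, Hz
t = np.arange(0, 0.6, 1 / fs)     # 600 ms analysis window

# Synthetic "averaged beat": a low-frequency wave plus a small
# high-frequency terminal component standing in for late potentials.
beat = np.sin(2 * np.pi * 5 * t) + 0.05 * np.sin(2 * np.pi * 60 * t) * (t > 0.4)

def highpass(signal, cutoff_hz, fs, order=4):
    """Zero-phase Butterworth high-pass, as used in time-domain SAECG analysis."""
    b, a = butter(order, cutoff_hz, btype="highpass", fs=fs)
    return filtfilt(b, a, signal)

for cutoff in (25, 40, 80):       # the cut-offs compared in the study
    filtered = highpass(beat, cutoff, fs)
    print(f"{cutoff:>2} Hz cut-off: residual RMS = {np.sqrt(np.mean(filtered**2)):.4f}")
```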
Abstract:
First and second instar larvae of some Sarcophagidae (Diptera) of the tribe Raviniini are described based on observations with a scanning electron microscope.
Abstract:
L1, L2 and L3 of Oxysarcodexia paulistanensis (Mattos), L3 of O. confusa Lopes, L2 of Ravinia belforti (Prado & Fonseca) and L2 of Oxyvinia excisa (Lopes) were described and figured using a scanning electron microscope.
Abstract:
Gestures are the first forms of conventional communication that young children develop in order to intentionally convey a specific message. However, at first, infants rarely communicate successfully with their gestures, prompting caregivers to interpret them. Although the role of caregivers in early communication development has been examined, little is known about how caregivers attribute a specific communicative function to infants' gestures. In this study, we argue that caregivers rely on the knowledge about the referent that is shared with infants in order to interpret what communicative function infants wish to convey with their gestures. We videotaped interactions from six caregiver-infant dyads playing with toys when infants were 8, 10, 12, 14, and 16 months old. We coded infants' gesture production and we determined whether caregivers interpreted those gestures as conveying a clear communicative function or not; we also coded whether infants used objects according to their conventions of use as a measure of shared knowledge about the referent. Results revealed an association between infants' increasing knowledge of object use and maternal interpretations of infants' gestures as conveying a clear communicative function. Our findings emphasize the importance of shared knowledge in shaping infants' emergent communicative skills.
Abstract:
Introduction: Non-invasive brain imaging techniques often contrast experimental conditions across a cohort of participants, obscuring distinctions in individual performance and brain mechanisms that are better characterised by the inter-trial variability. To overcome such limitations, we developed topographic analysis methods for single-trial EEG data [1]. Single-trial analysis has so far typically been based on time-frequency analysis of single-electrode data or of single independent components. The method's efficacy is demonstrated for event-related responses to environmental sounds, hitherto studied at the average event-related potential (ERP) level. Methods: Nine healthy subjects participated in the experiment. Auditory meaningful sounds of common objects were used for a target detection task [2]. In each block, subjects were asked to discriminate target sounds, which were living or man-made auditory objects. Continuous 64-channel EEG was acquired during the task. Two datasets were considered for each subject, comprising the single trials of the two conditions, living and man-made. The analysis comprised two steps. In the first step, a mixture of Gaussians analysis [3] provided representative topographies for each subject. In the second step, conditional probabilities for each Gaussian provided statistical inference on the structure of these topographies across trials, time, and experimental conditions. A similar analysis was conducted at the group level. Results: The results show that the occurrence of each map is structured in time and consistent across trials at both the single-subject and the group level. By conducting separate analyses of ERPs at single-subject and group levels, we could quantify the consistency of the identified topographies and their time course of activation within and across participants, as well as across experimental conditions. A general agreement was found with previous analyses at the average ERP level. Conclusions: This novel approach to single-trial analysis promises to have an impact on several domains. In clinical research, it makes it possible to statistically evaluate single-subject data, an essential tool for analysing patients with specific deficits and impairments and their deviation from normative standards. In cognitive neuroscience, it provides a novel tool for understanding behaviour and brain activity interdependencies at both the single-subject and group levels. In basic neurophysiology, it provides a new representation of ERPs and promises to cast light on the mechanisms of their generation and on inter-individual variability.
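To make the two-step analysis concrete, here is a minimal sketch, on synthetic data, of fitting a mixture of Gaussians to single-trial topographies and then reading off the per-sample conditional (posterior) probabilities. It uses scikit-learn's GaussianMixture rather than the authors' own implementation [3], and the array shapes and number of components are assumptions.

```python
# Sketch of the two analysis steps on synthetic single-trial EEG:
# (1) fit a mixture of Gaussians to topographies, (2) use posterior
# probabilities for inference across trials and time. scikit-learn's
# GaussianMixture stands in for the method of reference [3].
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_trials, n_times, n_channels = 100, 120, 64   # assumed dimensions

# Each sample is one scalp topography (one trial at one time point).
topographies = rng.normal(size=(n_trials * n_times, n_channels))

# Step 1: representative topographies = the Gaussian means.
gmm = GaussianMixture(n_components=4, covariance_type="diag",
                      random_state=0).fit(topographies)
template_maps = gmm.means_                     # shape (4, 64)

# Step 2: conditional probability of each map for every sample,
# reshaped back to (trials, time) for across-trial statistics.
posteriors = gmm.predict_proba(topographies)
posteriors = posteriors.reshape(n_trials, n_times, -1)
print("mean map occupancy over time:", posteriors.mean(axis=(0, 1)))
```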
Abstract:
The aim of this study is to perform a thorough comparison of quantitative susceptibility mapping (QSM) techniques and their dependence on the assumptions made. The compared methodologies were: two iterative single-orientation methodologies minimizing the l2 or l1TV norm of the prior knowledge of the edges of the object; one over-determined multiple-orientation method (COSMOS); and a newly proposed modulated closed-form solution (MCF). The performance of these methods was compared using a numerical phantom and in-vivo high-resolution (0.65 mm isotropic) brain data acquired at 7 T using a new coil combination method. For all QSM methods, the relevant regularization and prior-knowledge parameters were systematically varied in order to evaluate the optimal reconstruction in the presence and absence of a ground truth. Additionally, the QSM contrast was compared to conventional gradient recalled echo (GRE) magnitude and R2* maps obtained from the same dataset. The QSM reconstruction results of the single-orientation methods show comparable performance. The MCF method has the highest correlation (corrMCF = 0.95, r²MCF = 0.97) with the state-of-the-art method (COSMOS), with the additional advantage of extremely fast computation. The L-curve method gave the visually most satisfactory balance between reduction of streaking artifacts and over-regularization, the latter being overemphasized when using the COSMOS susceptibility maps as ground truth. R2* and susceptibility maps calculated from the same datasets, although based on distinct features of the data, have a comparable ability to distinguish deep gray matter structures.
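The single-orientation methods compared here all solve a regularized dipole inversion. As a point of reference, the sketch below implements the generic closed-form l2-regularized (Tikhonov) inversion on synthetic data; it is not the paper's MCF method, and the regularization weight and grid size are assumptions.

```python
# Sketch of a generic closed-form, l2-regularized single-orientation
# dipole inversion (Tikhonov), illustrating the kind of reconstruction
# the compared QSM methods perform. This is NOT the paper's MCF method;
# the regularization weight and grid size are assumptions.
import numpy as np

def dipole_kernel(shape, voxel_size=(0.65, 0.65, 0.65)):
    """D(k) = 1/3 - kz^2 / |k|^2 for B0 along z."""
    ks = [np.fft.fftfreq(n, d=v) for n, v in zip(shape, voxel_size)]
    kx, ky, kz = np.meshgrid(*ks, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # avoid division by zero at DC
    return 1.0 / 3.0 - kz**2 / k2

def qsm_l2(field_map, lam=0.05):
    """chi = F^-1 [ conj(D) * F(phi) / (|D|^2 + lambda) ]."""
    D = dipole_kernel(field_map.shape)
    chi_k = np.conj(D) * np.fft.fftn(field_map) / (np.abs(D)**2 + lam)
    return np.real(np.fft.ifftn(chi_k))

# Tiny synthetic example: a susceptibility "blob" forward-simulated
# through the dipole kernel, then inverted.
chi_true = np.zeros((32, 32, 32))
chi_true[12:20, 12:20, 12:20] = 1.0
phi = np.real(np.fft.ifftn(dipole_kernel(chi_true.shape) * np.fft.fftn(chi_true)))
chi_rec = qsm_l2(phi)
print("reconstruction correlation:",
      np.corrcoef(chi_true.ravel(), chi_rec.ravel())[0, 1])
```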
Abstract:
Report on a scientific sojourn at the University of California at Berkeley, USA, from September 2007 until July 2008. Communities of Learning Practice is an innovative paradigm focused on providing appropriate technological support to both formal and, especially, informal learning groups, which are chiefly formed by non-technical people who lack the necessary resources to acquire such systems. Typically, students who are often separated by geography and/or time need to meet each other after classes in small study groups to carry out specific learning activities assigned during the formal learning process. However, the lack of suitable and available groupware applications makes it difficult for these groups of learners to collaborate and achieve their specific learning goals. In addition, the lack of democratic decision-making mechanisms is a major obstacle to replacing the central authority of knowledge found in formal learning.
Abstract:
Aims and objectives: This study aimed to determine the discriminant validity and the test-retest reliability of a questionnaire assessing the impact of evidence-based medicine (EBM) training on doctors' knowledge and skills. Methods: Questionnaires were sent electronically to all doctors working as residents and chief residents in two French-speaking hospital networks in Switzerland. Participants completed the questionnaire twice, within a 4-week interval. Discriminant validity was examined by comparing doctors' performance according to their reported previous EBM training. The proportion of agreement between the two sessions of the questionnaire, Cohen's kappa and the 'uniform kappa' determined its test-retest reliability. Results: The participation rate was 9.8% for the first session and 7.1% for the second. Performance increased with the level of doctors' previous training in EBM. The observed proportion of agreement between the two sessions was over 70% for 14/19 questions, and the 'uniform kappa' was above 0.60 for 15/19 questions. Conclusion: The discriminant validity and test-retest reliability of the questionnaire were satisfactory. The low participation rate did not prevent the study from achieving its aims.
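For reference, the reliability statistics named above can be computed as in the following sketch. The answer vectors are hypothetical, Cohen's kappa comes from scikit-learn, and the 'uniform kappa' is implemented as the Brennan-Prediger style uniform chance correction, which is an assumption about the authors' exact definition.

```python
# Sketch of the test-retest agreement statistics. The answer vectors
# are hypothetical; 'uniform kappa' is implemented here as the
# Brennan-Prediger coefficient (p_o - 1/q) / (1 - 1/q), one common
# reading of that term -- an assumption about the authors' definition.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical answers of 20 doctors to one question, sessions 1 and 2.
session1 = np.array(["a","a","b","c","a","b","b","a","c","a",
                     "b","a","a","c","b","a","a","b","c","a"])
session2 = np.array(["a","a","b","c","b","b","b","a","c","a",
                     "b","a","c","c","b","a","a","b","c","a"])

p_observed = np.mean(session1 == session2)       # raw proportion of agreement
kappa = cohen_kappa_score(session1, session2)    # chance-corrected (Cohen)

q = len(np.union1d(session1, session2))          # number of answer categories
uniform_kappa = (p_observed - 1/q) / (1 - 1/q)   # uniform chance correction

print(f"agreement={p_observed:.2f}  Cohen kappa={kappa:.2f}  "
      f"uniform kappa={uniform_kappa:.2f}")
```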
Abstract:
The pace of development of new healthcare technologies and related knowledge is very fast. Implementation of high-quality evidence-based knowledge is thus mandatory to ensure an effective healthcare system and patient safety. However, even though only a small fraction of the approximately 2,500 scientific publications indexed daily in Medline is actually useful to clinical practice, the amount of new information is much too large for busy healthcare professionals to stay aware of all potentially important evidence-based information.
Abstract:
Since its creation, the Internet has permeated our daily life. The web is omnipresent for communication, research and organization, and this widespread use has driven the Internet's rapid development. Nowadays, the Internet is the biggest container of resources. Information databases such as Wikipedia, Dmoz and the open data available on the net represent a great informational potential for mankind. Easy and free web access is one of the major features characterizing Internet culture. Ten years ago, the web was completely dominated by English; today, the web community is no longer only English-speaking but is becoming a genuinely multilingual community. The availability of content is intertwined with the availability of logical organizations (ontologies), for which multilinguality plays a fundamental role. In this work we introduce a very high-level logical organization fully based on semiotic assumptions. We present its theoretical foundations as well as the ontology itself, named the Linguistic Meta-Model. The most important feature of the Linguistic Meta-Model is its ability to support the representation of different knowledge sources developed according to different underlying semiotic theories. This is possible because most knowledge representation schemata, whether formal or informal, can be placed in the context of the so-called semiotic triangle. In order to show the main characteristics of the Linguistic Meta-Model from a practical point of view, we developed VIKI (Virtual Intelligence for Knowledge Induction). VIKI is a work-in-progress system aimed at exploiting the Linguistic Meta-Model structure for knowledge expansion. It is a modular system in which each module accomplishes a natural language processing task, from terminology extraction to knowledge retrieval. VIKI is a supporting system for the Linguistic Meta-Model, and its main task is to give some empirical evidence regarding the use of the Linguistic Meta-Model, without claiming to be exhaustive.
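As a toy illustration of the semiotic triangle on which the Linguistic Meta-Model rests, the sketch below represents a concept, its referent and its multilingual signs as a simple structure; all class and field names are hypothetical and do not reflect LMM's actual schema.

```python
# Toy illustration of the semiotic triangle (sign - concept - referent)
# underlying knowledge representation schemata such as LMM. All names
# here are hypothetical; this is not LMM's actual schema.
from dataclasses import dataclass, field

@dataclass
class SemioticTriangle:
    concept: str                               # the mental/logical notion
    referent: str                              # the thing in the world
    signs: dict = field(default_factory=dict)  # language -> written sign

    def sign_for(self, lang):
        """Return the sign expressing the concept in a given language."""
        return self.signs.get(lang, "<no sign recorded>")

dog = SemioticTriangle(
    concept="domestic canine",
    referent="the class of animals Canis familiaris",
    signs={"en": "dog", "it": "cane", "pt": "cão"},
)
print(dog.sign_for("it"))   # -> cane  (multilinguality at the sign level)
print(dog.sign_for("de"))   # -> <no sign recorded>
```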
Abstract:
Cape Verde is considered part of Sahelian Africa, where drought and desertification are common occurrences. The main activity of the rural population is rain-fed agriculture, which over time has been increasingly challenged by high temporal and spatial rainfall variability, lack of inputs, limited land area, fragmentation of land, steep slopes, pests, lack of mechanization and loss of topsoil by water erosion. Human activities, largely through poor farming practices and deforestation (Gomez, 1989), have accelerated natural erosion processes, shifting the balance between soil erosion and soil formation (Norton, 1987). According to previous studies, vegetation cover is one of the most important factors in controlling soil loss (Cyr et al., 1995; Hupy, 2004; Zhang et al., 2004; Zhou et al., 2006). For this reason, reforestation is a touchstone of Cape Verdean policy to combat desertification. After independence in 1975, the Cape Verde government had pressing and closely entangled environmental and socio-economic issues to address, as long-term desertification had resulted in a lack of soil cover, severe soil erosion and a scarcity of water resources and fuel wood. Across the archipelago, desertification resulted from a variety of processes, including poor farming practices, soil erosion by water and wind, soil and water salinity in coastal areas due to over-pumping and seawater intrusion, drought and unplanned urbanization (DGA-MAAP, 2004). All these issues directly affected socio-economic vulnerability in rural areas, where about 70% of people depended directly or indirectly on agriculture in 1975. By joining the Inter-State Committee for the Fight against Drought in the Sahel in 1975, the government of Cape Verde gained structured support to address these issues more efficiently. Present-day policies and strategies were defined on the basis of rational use of resources and human effort and were incorporated into three subsequent national plans: the National Action Plan for Development (NDP) (1982–1986), the NDP (1986–1990) and the NDP (1991–1995) (Carvalho
Abstract:
PURPOSE: Neurophysiological monitoring aims to improve the safety of pedicle screw placement, but few quantitative studies assess its specificity and sensitivity. In this study, screw placement within the pedicle was measured on post-operative CT scans (horizontal and vertical distance from the screw edge to the surface of the pedicle) and correlated with intraoperative neurophysiological stimulation thresholds. METHODS: A single surgeon placed 68 thoracic and 136 lumbar screws in 30 consecutive patients during instrumented fusion under EMG control. The female-to-male ratio was 1.6 and the average age was 61.3 years (SD 17.7). Radiological measurements, blinded to stimulation threshold, were made on reformatted CT reconstructions using OsiriX software. A standard deviation of the screw position of 2.8 mm was determined from pilot measurements, and a screw-pedicle edge distance of 1 mm was considered a difference of interest (standardised difference of 0.35), giving the study a power of 75% at a significance level of 0.05. RESULTS: Correct placement and stimulation thresholds above 10 mA were found in 71% of screws. Twenty-two percent of screws caused a cortical breach, and 80% of these had stimulation thresholds above 10 mA (sensitivity 20%, specificity 90%). True prediction of correct screw position was more frequent for lumbar than for thoracic screws. CONCLUSION: A screw stimulation threshold of >10 mA does not indicate correct pedicle screw placement. The hypothesised gradual decrease of screw stimulation thresholds as screw placement approaches the nerve root was not observed. Aside from a robust threshold of 2 mA indicating direct contact with nervous tissue, a secondary threshold appears to depend on the patient's pathology and surgical conditions.
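The reported sensitivity and specificity can be re-derived from the abstract's percentages (204 screws in total, a 22% breach rate, and 80% of breached screws above 10 mA). The sketch below does so with a generic 2x2 table; the counts are reconstructed and rounded from those percentages, not the study's raw data.

```python
# Sketch of the sensitivity/specificity computation for the screw test
# "stimulation threshold <= 10 mA predicts cortical breach".
# The counts below are reconstructed from the abstract's percentages
# (204 screws, ~45 breached), rounded; they are not the raw data.

def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# positives = screws with cortical breach, negatives = correctly placed
tp, fn = 9, 36     # breached screws: 9 flagged (<= 10 mA), 36 missed
tn, fp = 143, 16   # correct screws: 143 above 10 mA, 16 flagged

sens, spec = sens_spec(tp, fn, tn, fp)
print(f"sensitivity={sens:.0%}  specificity={spec:.0%}")  # -> 20% / 90%
```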