872 results for integration of care
Abstract:
The direction of care delivery goes from the action to the being: a process built from professional experience, which gains special characteristics when the service is delivered by telephone. The goal of this research was to understand the interaction between professionals and users in a remote care service; to that end, a study is presented that uses Grounded Theory and Symbolic Interactionism as theoretical references. Data were collected through eight interviews with professionals who deliver care by telephone. The theoretical understanding permitted the creation of the theoretical model of the Imaginative Construction of Care, which shows the interaction processes the professional experiences when delivering care by telephone. In this model, individual and social facts are combined, showing the links between the concepts, with special emphasis on uncertainty, sensitivity, and professional responsibility as essential components of this experience.
Abstract:
Introduction: Human T-cell lymphotropic virus type 1 (HTLV-1) infection is intractable and endemic in many countries. Although a few individuals have severe symptoms, most patients remain asymptomatic throughout their lives, and their infections may be unknown to many health professionals. HTLV-1 can be considered a neglected public health problem, and there are few studies specifically on patients' needs and emotional experiences. Objective: To better understand how women and men living with HTLV-1 experience the disease and what issues exist in their healthcare processes. Methods: A qualitative study using participant observation and life-story interview methods was conducted with 13 symptomatic and asymptomatic patients at the outpatient clinic of the Emilio Ribas Infectious Diseases Institute in Sao Paulo, Brazil. Results and Discussion: The interviewees stated that HTLV-1 is an infection largely unknown to society and to health professionals. Counseling is rare and, when it occurs, focuses on the low probability of developing HTLV-1-related diseases without adequately addressing the risk of transmitting the infection or reproductive decisions. The diagnosis of HTLV-1 can remain a stigmatized secret, as patients deny their situations. As a consequence, the disease remains invisible, with potentially negative implications for patient self-care and for the identification of infected relatives. This perception seems to be shared by some health professionals, who do not appear to understand the importance of preventing new infections. Conclusions: Patients and medical staff reported that the main focus was the risk of illness, not the identification of infected relatives to prevent new infections. This biomedical model of care makes prevention difficult, contributes to the neglect of HTLV-1 in public health, and further perpetuates the infection among populations. Thus, HTLV-1 patients experience an "invisibility" of their complex demands and feel that their rights as citizens are ignored.
Abstract:
Objective: to analyze the impact and burden of care on the Health-Related Quality of Life (HRQOL) of caregivers of individuals with a spinal cord injury (SCI). Method: cross-sectional observational study carried out by reviewing medical records and applying questionnaires. The Short Form 36 (SF-36) scale was used to assess HRQOL, and the Caregiver Burden Scale (CBScale) to assess care burden; results were analyzed quantitatively. Most patients with an SCI were male, aged 35.4 years on average, with a predominance of thoracic injuries followed by cervical injuries; most caregivers were female, aged 44.8 years on average. Results: tetraplegia and secondary complications stood out among the clinical characteristics contributing to greater care burden and worse HRQOL. The association between care burden and HRQOL revealed that the greater the burden, the worse the HRQOL. Conclusion: preventing care burden through strategies that prepare patients for hospital discharge, integrate the support network, and enable access to health care services could minimize the effects arising from care burden and contribute to improving HRQOL.
Abstract:
Produced water in oil fields is one of the main sources of wastewater generated in the industry. It contains several organic compounds, such as benzene, toluene, ethylbenzene and xylene (BTEX), whose disposal is regulated by law. The aim of this study is to investigate a treatment of produced water integrating two processes: induced air flotation (IAF) and photo-Fenton. The experiments were conducted in a flotation column and an annular lamp reactor for the flotation and photodegradation steps, respectively. The first-order kinetic constant of IAF for the wastewater studied was determined to be 0.1765 min^-1 for the surfactant EO 7. Degradation efficiencies of the organic loading were assessed using a factorial design. Statistical analysis of the data shows that H2O2 concentration is a determining factor in process efficiency. Degradations above 90% were reached in all cases after 90 min of reaction, attaining 100% mineralization at the optimized concentrations of Fenton reagents. Process integration was adequate, with 100% organic load removal in 20 min. The integration of IAF with photo-Fenton thus made it possible to meet the effluent disposal limits established by Brazilian legislation.
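As a worked illustration of the reported rate constant (the calculation is ours, using only the constant quoted above), first-order removal kinetics give

    \frac{C(t)}{C_0} = e^{-kt}, \qquad k = 0.1765\ \mathrm{min^{-1}}

so flotation alone would leave C/C_0 = e^{-0.1765 \times 20} \approx 0.03 after 20 min, i.e. roughly 97% removal, consistent with the integrated IAF/photo-Fenton process reaching 100% organic load removal on the same time scale.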
Abstract:
Background The use of the knowledge produced by the sciences to promote human health is the main goal of translational medicine. To make this feasible, we need computational methods to handle the large amount of information that arises from bench to bedside, and to deal with its heterogeneity. A computational challenge that must be faced is the integration of clinical, socio-demographic and biological data. In this effort, ontologies play an essential role as a powerful artifact for knowledge representation. Chado is a modular, ontology-oriented database model that gained popularity due to its robustness and flexibility as a generic platform for storing biological data; however, it lacks support for representing clinical and socio-demographic information. Results We have implemented an extension of Chado, the Clinical Module, to allow the representation of this kind of information. Our approach consists of a framework for data integration through the use of a common reference ontology. The design of this framework has four levels: the data level, to store the data; the semantic level, to integrate and standardize the data through ontologies; the application level, to manage clinical databases, ontologies and the data integration process; and the web interface level, to allow interaction between the user and the system. The Clinical Module was built on the Entity-Attribute-Value (EAV) model. We also propose a methodology to migrate data from legacy clinical databases to the integrative framework. A Chado instance was initialized using a relational database management system, the Clinical Module was implemented, and the framework was loaded with data from a factual clinical research database. Clinical and demographic data, as well as biomaterial data, were obtained from patients with tumors of the head and neck. We implemented the IPTrans tool, a complete environment for data migration, which comprises: the construction of an ontology-based model to describe the legacy clinical data; the Extraction, Transformation and Load (ETL) process to extract the data from the source clinical database and load it into the Clinical Module of Chado; and the development of a web tool and a Bridge Layer to adapt the web tool to Chado, as well as other applications. Conclusions Open-source computational solutions currently available for translational science do not have a model to represent biomolecular information and are not integrated with existing bioinformatics tools. On the other hand, existing genomic data models do not represent clinical patient data. A framework was developed to support translational research by integrating biomolecular information coming from different “omics” technologies with patients' clinical and socio-demographic data. This framework should present some features: flexibility, compression and robustness. The experiments conducted on a use case demonstrated that the proposed system meets the requirements of flexibility and robustness, leading to the desired integration. The Clinical Module can be accessed at http://dcm.ffclrp.usp.br/caib/pg=iptrans.
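To illustrate the Entity-Attribute-Value idea underlying the Clinical Module (a minimal sketch in Python; the identifiers and attribute names below are hypothetical, not Chado's actual schema):

    # Minimal EAV sketch: each clinical fact is stored as one row
    # (entity, attribute, value) instead of one column per attribute.
    # In the real system, attribute names come from a reference ontology.
    from collections import defaultdict

    eav_rows = [
        # entity (patient id),  attribute (ontology term),  value
        ("patient:001", "tumor_site",       "larynx"),
        ("patient:001", "smoking_status",   "former smoker"),
        ("patient:001", "age_at_diagnosis", 58),
        ("patient:002", "tumor_site",       "oral cavity"),
    ]

    def pivot(rows):
        """Rebuild a conventional one-record-per-entity view from EAV rows."""
        records = defaultdict(dict)
        for entity, attribute, value in rows:
            records[entity][attribute] = value
        return dict(records)

    print(pivot(eav_rows)["patient:001"]["tumor_site"])  # -> larynx

The EAV layout lets new clinical attributes be added without altering the database schema, which is what makes it suitable for heterogeneous legacy clinical databases.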
Abstract:
Background The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data are available, biologists are faced with the task of extracting (new) knowledge associated with the underlying biological phenomenon. Most often, to perform this task, biologists execute a number of analysis activities on the available gene expression dataset rather than a single one. The integration of heterogeneous tools and data sources to create an integrated analysis environment represents a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing the exchange of data in a semantically consistent and meaningful way. This work aims at developing an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only access to heterogeneous data sources but also the definition of transformation rules on exchanged data. Results We have studied the different challenges involved in the integration of computer systems and the role software connectors play in this task. We have also studied a number of gene expression technologies, analysis tools and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. We have then defined a number of activities and associated guidelines that prescribe how the development of connectors should be carried out. Finally, we have applied the proposed methodology in the construction of three different integration scenarios involving the use of different tools for the analysis of different types of gene expression data. Conclusions The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools and data sources. The methodology can be used in the development of connectors supporting both simple and nontrivial processing requirements, thus assuring accurate data exchange and information interpretation from exchanged data.
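A minimal sketch of the software-connector idea (illustrative only; the class, field and term names are our assumptions, not taken from the paper):

    # Illustrative connector: exposes a source tool's records under the
    # terms of a shared reference ontology and applies a transformation
    # rule to exchanged values on the way through.
    import math
    from typing import Callable, Dict, Optional

    class Connector:
        def __init__(self, term_map: Dict[str, str],
                     rules: Optional[Dict[str, Callable]] = None):
            self.term_map = term_map  # source field -> reference-ontology term
            self.rules = rules or {}  # reference term -> value transformation

        def translate(self, record: dict) -> dict:
            out = {}
            for field, value in record.items():
                term = self.term_map.get(field, field)
                rule = self.rules.get(term, lambda v: v)
                out[term] = rule(value)
            return out

    # Example: a microarray tool reports "fold_chg" on a linear scale, while
    # the integrated environment expects log2 ratios under a shared term.
    connector = Connector(
        term_map={"probe": "gene_probe_id", "fold_chg": "log2_expression_ratio"},
        rules={"log2_expression_ratio": lambda v: math.log2(v)},
    )
    print(connector.translate({"probe": "A_23_P100001", "fold_chg": 4.0}))
    # -> {'gene_probe_id': 'A_23_P100001', 'log2_expression_ratio': 2.0}

The connector thus handles both concerns the methodology names: access to a heterogeneous source (the term mapping) and transformation rules on the exchanged data.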
Abstract:
In today's ubiquitous computing scenarios, users increasingly expect to exploit online content and services from any device at hand, regardless of their physical location, with content and service access personalized and tailored to their own requirements. The coordinated provisioning of content tailored to user context and preferences, and support for mobile multimodal and multichannel interaction, are of paramount importance in providing truly effective ubiquitous support. So far, however, the intrinsic heterogeneity of these scenarios and the lack of an integrated approach have led to several proposals that are either too vertical or practically unusable, resulting in poor, inflexible support platforms for ubiquitous computing. This work investigates and promotes design principles for coping with these ever-changing and inherently dynamic scenarios. Following the outlined principles, we have designed and implemented a middleware platform supporting the provisioning of ubiquitous mobile services and content. To prove the viability of our approach, we have realized and stress-tested, on top of this platform, a number of complex and heterogeneous content and service provisioning scenarios. The encouraging results are pushing our research further, toward a dynamic platform that not only supports novel ubiquitous application scenarios by tailoring extremely diverse services and content to heterogeneous user needs, but can also reconfigure and adapt itself to provide truly optimized, tailored support for ubiquitous service provisioning.
Abstract:
This interdisciplinary translation-studies investigation provides a theoretical foundation for the integration of curriculum and assessment in interpreter education and examines it empirically in a case study. Interpreting competence is regarded as an outcome of curriculum implementation that is documented through valid and reliable assessment methods. Definitions, foundations, approaches, and training and learning objectives are described on the basis of curriculum theory and interpreting studies. Traditional and alternative assessment methods are tested for their applicability to interpreter education. In the case study, the examination results of two master's programs, the MA Konferenzdolmetschen (conference interpreting) and the MA Dolmetschen und Übersetzen (interpreting and translation), are analyzed quantitatively. The assessment methodology used to document the examination results is examined qualitatively and related to the quantitative analysis. The case study consists of 1) a chi-square analysis of final examination grades, broken down by language combination and examination category (n = 260); 2) a survey of jury members regarding assessment approaches, procedures, and criteria (n = 45; 62.22% response rate); and 3) an analysis of the source-language examination material, likewise by language combination and examination category. It is shown that students in the MA Dolmetschen und Übersetzen tend to perform worse in the examinations than students in the MA Konferenzdolmetschen; however, these results are considered to have limited explanatory power owing to a lack of assessment validity. Steps toward optimizing the curriculum and the assessment, as well as a more efficient curriculum model, are derived from the theoretical approaches. The role of ethics in assessment methodology is also noted.
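For reference, the chi-square statistic underlying step 1 of the case study is the standard test of association between grade outcomes and the grouping variables (general formula, not specific to this study):

    \chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i}

where O_i and E_i are the observed and expected frequencies in each cell of the grades-by-category table.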
Abstract:
The relationship between emotion and cognition is a topic of great interest in research. Recently, a view of these two processes as interactive and mutually influencing has become predominant. This dissertation investigates the reciprocal influences of emotion and cognition, at both the behavioral and the neural level, in two specific fields: attention and decision-making. Experimental evidence is reported on how emotional responses may affect perceptual and attentional processes. In addition, the impact of three factors (personality traits, motivational needs, and social context) in modulating the influence that emotion exerts on perception and attention is investigated. Moreover, the influence of cognition on emotional responses in decision-making is demonstrated. The experimental evidence showed that cognitive brain regions such as the dorsolateral prefrontal cortex are causally implicated in the regulation of emotional responses, with effects at both pre- and post-decisional stages. The dissertation reaches two main conclusions: first, emotion exerts a strong influence on perceptual and attentional processes, but this influence may itself be modulated by other factors internal and external to the individual; second, cognitive processes may modulate prepotent emotional responses, serving a regulative function critical to driving and shaping human behavior in line with current goals.
Abstract:
Visual perception relies on a two-dimensional projection of the viewed scene on the retinas of both eyes. Thus, visual depth has to be reconstructed from a number of different cues that are subsequently integrated to obtain robust depth percepts. Existing models of sensory integration are mainly based on the reliabilities of individual cues and disregard potential cue interactions. In the current study, an extended Bayesian model is proposed that takes into account both cue reliability and cue consistency. Four experiments were carried out to test this model's predictions. Observers had to judge visual displays of hemi-cylinders with an elliptical cross-section, constructed to allow for an orthogonal variation of several competing depth cues. In Experiments 1 and 2, observers estimated the cylinder's depth as defined by shading, texture, and motion gradients. The degree of consistency among these cues was systematically varied. The extended Bayesian model provided a better fit to the empirical data than the traditional model, which disregards covariations among cues. To circumvent the potentially problematic assessment of single-cue reliabilities, Experiment 3 used a multiple-observation task, which allowed perceptual weights to be estimated from multiple-cue stimuli. Using the same multiple-observation task, the integration of stereoscopic disparity, shading, and texture gradients was examined in Experiment 4. Less reliable cues were downweighted in the combined percept. Moreover, a specific influence of cue consistency was revealed: shading and disparity seemed to be processed interactively, while other cue combinations could be well described by additive integration rules. These results suggest that cue combination in visual depth perception is highly flexible and depends on single-cue properties as well as on interrelations among cues. The extension of the traditional cue combination model is defended in terms of the necessity for robust perception in ecologically valid environments, and the current findings are discussed in the light of emerging computational theories and neuroscientific approaches.
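For context, the traditional reliability-based combination rule that the extended model generalizes can be written as follows (standard formulation; the notation here is illustrative, not quoted from the dissertation):

    \hat{s} = \sum_i w_i \hat{s}_i, \qquad w_i = \frac{1/\sigma_i^2}{\sum_j 1/\sigma_j^2}

where \hat{s}_i is the depth estimate delivered by cue i and \sigma_i^2 its variance, so more reliable cues receive larger weights. The extension adds covariance (consistency) terms, so that correlated cues are no longer treated as independent.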
Abstract:
MFA and LCA methodologies were applied to analyse the anthropogenic aluminium cycle in Italy, with focus on the historical evolution of stocks and flows of the metal, embodied GHG emissions, and recycling potentials, to provide Italy with key inputs for prioritizing industrial policy toward low-carbon technologies and materials. Historical time series were collected from 1947 to 2009 and balanced with data from the production, manufacturing and waste management of aluminium-containing products, using a 'top-down' approach to quantify the contemporary in-use stock of the metal and helping to identify 'applications where aluminium is not yet being recycled to its full potential' as well as 'present and future recycling flows'. The MFA results were used as a basis for the LCA, aimed at evaluating the evolution of the carbon footprint embodied in Italian aluminium, covering primary and electrical energy, the smelting process, and transportation. The study also discusses how the main factors of the Kaya Identity influenced the Italian GHG emissions pattern over time, and which levers could mitigate it. The contemporary anthropogenic reservoir of aluminium was estimated at about 320 kg per capita, mainly embedded in the transportation and building and construction sectors. The cumulative in-use stock represents approximately 11 years of supply at current usage rates (about 20 Mt versus 1.7 Mt/year) and would imply a potential of about 160 Mt of CO2eq emission savings. A discussion of the criticality of aluminium waste recovery from the transportation and the containers-and-packaging sectors is also included, providing an example of how MFA and LCA may support decision-making at the sectorial or regional level. The research constitutes the first attempt at an integrated MFA-LCA approach applied to the aluminium cycle in Italy.
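For reference, the Kaya Identity referred to above decomposes national CO2 emissions as (standard form):

    \mathrm{CO_2} = P \times \frac{\mathrm{GDP}}{P} \times \frac{E}{\mathrm{GDP}} \times \frac{\mathrm{CO_2}}{E}

that is, population, affluence (GDP per capita), energy intensity of the economy, and carbon intensity of energy. Aluminium recycling acts chiefly on the intensity terms, since remelting scrap avoids most of the energy required for primary smelting.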
Abstract:
In my PhD work I concentrated on three elementary questions that are essential to understanding the interactions between the different neuronal cell populations in the developing neocortex. The questions regarded the identity of Cajal-Retzius (CR) cells, the ubiquitous expression of glycine receptors in all major cell populations of the immature neocortex, and the role of taurine in the modulation of immature neocortical network activity.

To unravel whether CR cells of different ontogenetic origin have divergent functions, I investigated the electrophysiological properties of YFP+ CR cells (derived from the septum and borders of the pallium) and YFP− CR cells (derived from other neocortical origins). This study demonstrated that the passive and active electrophysiological properties, as well as features of GABAergic PSCs and glutamatergic currents, are similar between both CR cell populations. These findings suggest that CR cells of different origins most probably support similar functions within the neuronal networks of the early postnatal cerebral cortex.

To elucidate whether glycine receptors are expressed in all major cell populations of the developing neocortex, I analyzed the functional expression of glycine receptors on subplate (SP) cells. Activation of glycine receptors by glycine, β-alanine and taurine elicited membrane responses that could be blocked by the selective glycinergic antagonist strychnine. Pharmacological experiments suggest that SP cells express functional heteromeric glycine receptors that do not contain α1 subunits. The activation of glycine receptors by glycine and taurine induced a membrane depolarization, which mediated excitatory effects. Considering the key role of SP cells in immature cortical networks and in the development of thalamocortical connections, this glycinergic excitation may influence the properties of early cortical networks and the formation of cortical circuits.

In the third part of my project I demonstrated that tonic taurine application induced a massive increase in the frequency of PSCs. Based on their reversal potential and their pharmacological properties, these taurine-induced PSCs are exclusively transmitted via GABAA receptors to the pyramidal neurons, while both GABAA and glycine receptors were implicated in the generation of the presynaptic activity. Accordingly, whole-cell and cell-attached recordings from genetically labeled interneurons revealed the expression of glycine and GABAA receptors, which mediated an excitatory action on these cells. These findings suggest that low taurine concentrations can tonically activate exclusively GABAergic networks. The activity level maintained by this GABAergic activity in the immature nervous system may contribute to network properties and can facilitate the activity-dependent formation of adequate synaptic projections.

In summary, the results of my studies complement the knowledge about neuronal interactions in the immature neocortex and improve our understanding of the cellular processes that guide neuronal development and thus shape the brain.
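The depolarizing (excitatory) action of glycine and GABA in immature neurons follows from the chloride reversal potential; as an illustration with typical textbook values (not measurements from this work):

    E_{\mathrm{Cl}} = \frac{RT}{zF}\,\ln\frac{[\mathrm{Cl}^-]_o}{[\mathrm{Cl}^-]_i}, \qquad z = -1

With the elevated intracellular chloride characteristic of immature neurons, e.g. [Cl−]_i ≈ 25 mM and [Cl−]_o ≈ 130 mM at 37 °C (RT/F ≈ 26.7 mV), E_Cl ≈ −26.7 × ln(130/25) ≈ −44 mV, well above a resting potential near −60 to −70 mV, so opening chloride-permeable glycine or GABAA channels depolarizes the cell.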
Abstract:
The full blood count (FBC) is the most common indicator of disease. At present, hematology analyzers are used for blood cell characterization, but there has recently been interest in techniques that take advantage of microscale devices and the intrinsic properties of cells for increased automation and decreased cost. Microfluidic technologies offer solutions for handling and processing small volumes of blood (2-50 µL taken by finger prick) for point-of-care (PoC) applications. Several PoC blood analyzers are in use and may have applications in telemedicine, outpatient monitoring, and medical care in resource-limited settings. They have the advantage of being portable and much cheaper than traditional analyzers, which require bulky instruments and consume large amounts of reagents. The development of miniaturized point-of-care diagnostic tests may be enabled by chip-based technologies for cell separation and sorting. Many current diagnostic tests depend on fractionated blood components: plasma, red blood cells (RBCs), white blood cells (WBCs), and platelets. Specifically, white blood cell differentiation and counting provide valuable information for diagnostic purposes. For example, a low number of WBCs, called leukopenia, may indicate bone marrow deficiency or failure, collagen-vascular diseases, or disease of the liver or spleen. Leukocytosis, a high number of WBCs, may be due to anemia, infectious diseases, leukemia, or tissue damage. In the Hybrid Biodevices laboratory at the University of Southampton, a functioning micro impedance cytometer technology for WBC differentiation and counting was developed. It can classify cells and particles on the basis of their dielectric properties, in addition to their size, without the need for labeling, in a flow format similar to that of a traditional flow cytometer. It was demonstrated that the micro impedance cytometer system can detect and differentiate monocytes, neutrophils and lymphocytes, the three major human leukocyte populations. The simplicity and portability of the microfluidic impedance chip offer a range of potential applications in cell analysis, including point-of-care diagnostic systems. The microfluidic device has been integrated into a sample preparation cartridge that semi-automatically performs erythrocyte lysis before leukocyte analysis. Erythrocytes are generally lysed manually according to a specific chemical lysis protocol, but this process has been automated in the cartridge. In this research work, the chemical lysis protocol defined in patent US 5155044 A was optimized in order to improve the white blood cell differentiation and count performed by the integrated cartridge.
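A minimal sketch of how label-free impedance-based leukocyte differentiation is often framed in the literature (illustrative only; the two-frequency 'opacity' gating and all thresholds below are assumptions, not the cartridge's actual algorithm):

    # Illustrative two-frequency gating: the low-frequency impedance
    # magnitude tracks cell size, while "opacity" (high-frequency /
    # low-frequency magnitude ratio) reflects membrane and internal
    # dielectric properties, letting simple gates separate subtypes.
    from dataclasses import dataclass

    @dataclass
    class ImpedanceEvent:
        z_low: float   # |Z| at a low probe frequency (size-dominated)
        z_high: float  # |Z| at a high probe frequency (membrane-sensitive)

    def classify(event: ImpedanceEvent) -> str:
        size = event.z_low
        opacity = event.z_high / event.z_low
        # Hypothetical gates for a lysed-blood sample; real gates are
        # calibrated per chip and per lysis protocol.
        if size < 0.4:
            return "debris/RBC ghost"
        if size < 0.7:
            return "lymphocyte"  # smallest of the three leukocyte types
        return "monocyte" if opacity > 0.9 else "neutrophil"

    print(classify(ImpedanceEvent(z_low=0.55, z_high=0.45)))  # -> lymphocyte

Because the gates sit in the size-opacity plane rather than on fluorescent labels, the quality of the erythrocyte lysis step directly affects how cleanly the three leukocyte populations separate, which is why the lysis protocol was the optimization target of this work.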