879 results for Image Processing in Molecular Biology Research


Relevance: 100.00%

Publisher:

Abstract:

The purpose of this paper is to explore a number of tensions arising in the presentation of autoethnographical research. The paper provides a reflexive autoethnographical account of undertaking and publicly presenting autoethnographical research. The paper problematises the extent and form of disclosure; the voice and representation of the researcher; the difficulties in dealing with sensitive subjects; conflicts between public and private domains; questions of validity; the extent and form of theorisation of autoethnographical narratives; and emotion and performativity in presenting autoethnographical research. The paper provides an analysis of the potential of autoethnography, while exploring the presentational and performative context of academia. © 2011, Emerald Group Publishing Limited.

Relevance: 100.00%

Publisher:

Abstract:

The structural characteristics of liposomes have been widely investigated and there is certainly a strong understanding of their morphological characteristics. Imaging of these systems, using techniques such as freeze-fracturing methods, transmission electron microscopy, and cryo-electron imaging, has allowed us to appreciate their bilayer structures and the factors which can influence them. However, there are few methods which allow us to study these systems in their natural hydrated state; commonly the liposomes are visualized after drying, staining, and/or fixation of the vesicles. Environmental Scanning Electron Microscopy (ESEM) offers the ability to image a liposome in its hydrated state without the need for prior sample preparation. Within our studies we were the first to use ESEM to study liposomes and niosomes, and we have been able to dynamically follow the hydration of lipid films and changes in liposome suspensions as water condenses on to, or evaporates from, the sample in real time. This provides insight into the resistance of liposomes to coalescence during dehydration, thereby providing an alternative assay of liposome formulation and stability.

Relevance: 100.00%

Publisher:

Abstract:

Event-related potentials (ERPs) have been proposed to improve the differential diagnosis of non-responsive patients. We investigated the potential of the P300 as a reliable marker of conscious processing in patients with locked-in syndrome (LIS). Eleven chronic LIS patients and 10 healthy subjects (HS) listened to a complex-tone auditory oddball paradigm, first in a passive condition (listening to the sounds) and then in an active condition (counting the deviant tones). Seven out of nine HS displayed a P300 waveform in the passive condition, and all did in the active condition. HS showed statistically significant changes in peak and area amplitude between conditions. Three out of seven LIS patients showed the P300 waveform in the passive condition and five of seven in the active condition. In this group, no changes in peak amplitude, and a significant difference in area amplitude at only one electrode, were observed between conditions. We conclude that, although LIS patients retain full consciousness and intact or nearly intact cortical functions, they present less reliable ERP results than HS, specifically in the passive condition. We thus strongly recommend applying ERP paradigms in an active condition when evaluating consciousness in non-responsive patients.
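The peak- and area-amplitude measures compared between conditions can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: the function name, the toy waveform, and the 250-500 ms latency window are our own assumptions.

```python
import numpy as np

def p300_measures(erp, times, window=(0.25, 0.5)):
    """Peak and area amplitude of an averaged ERP within a latency window.

    erp    : 1-D array of averaged voltages (uV), one value per sample
    times  : 1-D array of sample times (s), same length as erp
    window : latency window (s) in which the P300 is expected
    """
    mask = (times >= window[0]) & (times <= window[1])
    seg = erp[mask]
    peak = float(seg.max())  # peak amplitude (uV)
    # Trapezoidal area under the curve within the window (uV*s).
    dt = np.diff(times[mask])
    area = float(((seg[:-1] + seg[1:]) / 2 * dt).sum())
    return peak, area

# Toy averaged ERP: a Gaussian "P300" peaking at 350 ms on a flat baseline.
times = np.linspace(0.0, 0.8, 801)
erp = 5.0 * np.exp(-((times - 0.35) ** 2) / (2 * 0.05 ** 2))
peak, area = p300_measures(erp, times)
```

In practice such measures would be computed per subject and per condition, then compared statistically, as the study does.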

Relevance: 100.00%

Publisher:

Abstract:

Healthcare systems have assimilated information and communication technologies in order to improve the quality of healthcare and the patient experience at reduced cost. The increasing digitalization of people's health information, however, raises new threats regarding information security and privacy. Accidental or deliberate breaches of health data may lead to societal pressure, embarrassment, and discrimination. Information security and privacy are paramount to achieving high-quality healthcare services and, further, to not harming individuals when providing care. With that in mind, we give special attention to the category of Mobile Health (mHealth) systems: the use of mobile devices (e.g., mobile phones, sensors, PDAs) to support medical and public health practice. Such systems have been particularly successful in developing countries, taking advantage of the flourishing mobile market and the need to expand the coverage of primary healthcare programs. Many mHealth initiatives, however, fail to address security and privacy issues. This, coupled with the lack of specific legislation for privacy and data protection in these countries, increases the risk of harm to individuals. The overall objective of this thesis is to enhance knowledge regarding the design of security and privacy technologies for mHealth systems. In particular, we deal with mHealth Data Collection Systems (MDCSs), which consist of mobile devices for collecting and reporting health-related data, replacing paper-based approaches for health surveys and surveillance. This thesis consists of publications contributing to mHealth security and privacy in various ways: a comprehensive literature review about mHealth in Brazil; the design of a security framework for MDCSs (SecourHealth); the design of an MDCS (GeoHealth); the design of a Privacy Impact Assessment template for MDCSs; and the study of ontology-based obfuscation and anonymisation functions for health data.
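As a hedged sketch of what an ontology-based obfuscation function might look like (the ontology, the terms, and the function are illustrative assumptions, not the thesis's actual design), a sensitive term can be generalized by climbing a parent hierarchy:

```python
# Illustrative mini-ontology: each term maps to its parent (None = root).
ONTOLOGY = {
    "disorder": None,
    "infectious disease": "disorder",
    "tuberculosis": "infectious disease",
    "pulmonary tuberculosis": "tuberculosis",
}

def obfuscate(term, levels, ontology=ONTOLOGY):
    """Replace a term by an ancestor `levels` steps up the hierarchy.

    Climbing stops at the root, so large `levels` values degrade
    gracefully to the most general term rather than failing.
    """
    for _ in range(levels):
        parent = ontology.get(term)
        if parent is None:
            break
        term = parent
    return term

obfuscate("pulmonary tuberculosis", 2)  # → "infectious disease"
```

The privacy/utility trade-off is then controlled by `levels`: more generalization discloses less but also reduces the usefulness of the reported data.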

Relevance: 100.00%

Publisher:

Abstract:

The work of knowledge organization requires a particular set of tools. For instance, we need standards of content description such as the Anglo-American Cataloguing Rules (2nd edition), Resource Description and Access (RDA), Cataloging Cultural Objects, and Describing Archives: A Content Standard. When we intellectualize the process of knowledge organization, that is, when we do basic theoretical research in it, we need another set of tools. For this latter exercise we need constructs. Constructs are ideas with many conceptual elements, largely considered subjective. They allow us to be inventive, and to see a particular point of view in knowledge organization. For example, Patrick Wilson's ideas of exploitative control and descriptive control, or S. R. Ranganathan's fundamental categories, are constructs. They allow us to identify functional requirements, or operationalizations of functional requirements, or at least to come close to them for our systems and schemes. They also allow us to carry out meaningful evaluation. What is even more interesting, from a research point of view, is that constructs, once offered to the community, can be contested and reinterpreted, and this has an effect on how we view knowledge organization systems and processes. Fundamental categories are again a good example, in that some members of the Classification Research Group (CRG) argued against Ranganathan's point of view. The CRG posited more fundamental categories than Ranganathan's five of Personality, Matter, Energy, Space, and Time (Ranganathan, 1967); the CRG needed significantly more fundamental categories for their work. And these are just two voices in this space: we can also consider the fundamental categories of Johannes Kaiser (1911), Shera and Egan, Barbara Kyle (Vickery, 1960), and Eric de Grolier (1962). We can also reference contemporary work that continues the comparison and analysis of fundamental categories (e.g., Dousa, 2011). In all these cases we are discussing a construct. The fundamental category is not discovered; it is constructed by a classificationist, because it is useful in engaging in the act of classification. And while we are accustomed to using constructs, or debating their merit, in one knowledge organization activity or another, we have not analyzed their structure, nor have we created a typology. In an effort to probe the epistemological dimension of knowledge organization, we think doing so would be a fruitful exercise, because we might benefit from clarity not only around our terminology, but around the manner in which we talk about that terminology. We are all creative workers examining what is available to us, but doing so through particular lenses (constructs), identifying particular constructs. Knowing these constructs, and being able to refer to them, is what we would consider a core competency for knowledge organization researchers.
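As a toy illustration of how a construct such as Ranganathan's fundamental categories can be operationalized in a system, the PMEST citation order might be encoded as follows. The function name and the example facets are purely illustrative, not part of any cited scheme.

```python
# Ranganathan's five fundamental categories, in PMEST citation order.
PMEST = ("Personality", "Matter", "Energy", "Space", "Time")

def facet_formula(**facets):
    """Compose a faceted subject string from PMEST facets.

    Facets are emitted in citation order; absent facets are skipped,
    and names outside the construct are rejected.
    """
    for name in facets:
        if name not in PMEST:
            raise ValueError(f"not a PMEST category: {name}")
    return " : ".join(facets[c] for c in PMEST if c in facets)

facet_formula(Personality="textiles", Energy="weaving", Space="India")
# → "textiles : weaving : India"
```

A CRG-style scheme would simply swap in a longer tuple of categories, which is precisely the sense in which the construct is contestable and replaceable.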

Relevance: 100.00%

Publisher:

Abstract:

The author discusses the Eurocentric bias reflected in the majority of historical-genealogical studies in Spanish America, and presents the new tendencies in genealogical research related to advances in molecular biology and the relevance of interdisciplinary studies.

Relevance: 100.00%

Publisher:

Abstract:

Molecular radiotherapy (MRT) is a fast-developing and promising treatment for metastasised neuroendocrine tumours. The efficacy of MRT rests on its capability to selectively deliver radiation to tumour cells, minimising the dose administered to normal tissues. The outcome of MRT depends on individual patient characteristics; for that reason, personalised treatment planning is important to improve outcomes of therapy. Dosimetry plays a key role in this setting, as absorbed dose is the main physical quantity related to radiation effects on cells. Dosimetry in MRT consists of a complex series of procedures, ranging from imaging quantification to dose calculation. This doctoral thesis focused on several aspects of the clinical implementation of absorbed dose calculation in MRT. The accuracy of SPECT/CT quantification was assessed in order to determine the optimal reconstruction parameters. A model of partial volume effect (PVE) correction was developed to improve activity quantification in small volumes, such as lesions in clinical patients. Advanced dosimetric methods were compared with the aim of defining the most accurate modality applicable in clinical routine. Also, for the first time on a large number of clinical cases, the overall uncertainty of tumour dose calculation was assessed. As part of the MRTDosimetry project, protocols for the calibration of SPECT/CT systems and the implementation of dosimetry were drawn up in order to provide standard guidelines to clinics offering MRT. Estimating the risk of radio-toxicity side effects and the chance of inducing damage to neoplastic cells is crucial for patient selection and treatment planning. In this thesis, NTCP and TCP models were derived from clinical data to help clinicians decide the pharmaceutical dosage in relation to therapy control and the limitation of damage to healthy tissues. Moreover, a model for tumour response prediction based on machine-learning analysis was developed.
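As a hedged illustration of the general shape NTCP and TCP dose-response models take, a common logistic parametrisation can be sketched as below. This is a generic textbook form with made-up parameter values, not the models fitted from clinical data in the thesis.

```python
import math

def logistic_tcp_ntcp(dose, d50, gamma50):
    """Logistic dose-response curve, a common parametric form used for
    both tumour control probability (TCP) and normal tissue
    complication probability (NTCP).

    dose    : absorbed dose to the tumour or organ at risk (Gy)
    d50     : dose giving a 50% response probability (Gy)
    gamma50 : normalised slope of the curve at d50
    """
    return 1.0 / (1.0 + math.exp(4.0 * gamma50 * (1.0 - dose / d50)))

# Illustrative values only: probability of response at 50 Gy for a
# curve with D50 = 40 Gy and gamma50 = 1.5.
p = logistic_tcp_ntcp(50.0, 40.0, 1.5)
```

Treatment planning then amounts to choosing an administered activity whose predicted tumour dose keeps TCP high while the corresponding organ-at-risk dose keeps NTCP acceptably low.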

Relevance: 100.00%

Publisher:

Abstract:

Ill-conditioned inverse problems frequently arise in life sciences, particularly in the context of image deblurring and medical image reconstruction. These problems have been addressed through iterative variational algorithms, which regularize the reconstruction by adding prior knowledge about the problem's solution. Despite the theoretical reliability of these methods, their practical utility is constrained by the time required to converge. Recently, the advent of neural networks allowed the development of reconstruction algorithms that can compute highly accurate solutions with minimal time demands. Regrettably, it is well-known that neural networks are sensitive to unexpected noise, and the quality of their reconstructions quickly deteriorates when the input is slightly perturbed. Modern efforts to address this challenge have led to the creation of massive neural network architectures, but this approach is unsustainable from both ecological and economic standpoints. The recently introduced GreenAI paradigm argues that developing sustainable neural network models is essential for practical applications. In this thesis, we aim to bridge the gap between theory and practice by introducing a novel framework that combines the reliability of model-based iterative algorithms with the speed and accuracy of end-to-end neural networks. Additionally, we demonstrate that our framework yields results comparable to state-of-the-art methods while using relatively small, sustainable models. In the first part of this thesis, we discuss the proposed framework from a theoretical perspective. We provide an extension of classical regularization theory, applicable in scenarios where neural networks are employed to solve inverse problems, and we show there exists a trade-off between accuracy and stability. Furthermore, we demonstrate the effectiveness of our methods in common life science-related scenarios. 
In the second part of the thesis, we begin extending the proposed method into the probabilistic domain. We analyze some properties of deep generative models, revealing their potential applicability to ill-posed inverse problems.
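A minimal sketch of the model-based iterative approach described above is Landweber (gradient) iteration on a Tikhonov-regularised least-squares objective. The toy problem, parameter values, and function name here are illustrative assumptions, not the thesis's actual algorithms.

```python
import numpy as np

def landweber_tikhonov(A, y, lam=1e-3, iters=2000):
    """Gradient descent (Landweber iteration) on the Tikhonov-regularised
    objective  0.5 * ||A x - y||^2 + 0.5 * lam * ||x||^2.

    The regularisation term lam * ||x||^2 encodes prior knowledge that
    the solution should have moderate norm, stabilising the inversion.
    """
    # Step size below the inverse Lipschitz constant of the gradient.
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + lam * x
        x = x - step * grad
    return x

# Mildly ill-conditioned toy problem: y = A x_true + noise.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 20)) @ np.diag(np.linspace(1.0, 0.3, 20))
x_true = np.ones(20)
y = A @ x_true + 0.01 * rng.normal(size=30)
x_rec = landweber_tikhonov(A, y)
```

The slow convergence of such schemes on large images is exactly the cost that the proposed neural-network framework aims to avoid while retaining the regularised formulation's stability.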