8 results for Refinement of (SOR1NM2)
in CORA - Cork Open Research Archive - University College Cork - Ireland
Abstract:
The healthcare industry is beginning to appreciate the benefits which can be obtained from using Mobile Health Systems (MHS) at the point-of-care. As a result, healthcare organisations are investing heavily in mobile health initiatives with the expectation that users will employ the system to enhance performance. Despite widespread endorsement and support for the implementation of MHS, empirical evidence surrounding the benefits of MHS remains to be fully established. For MHS to be truly valuable, it is argued that the technological tool must be infused within healthcare practitioners' work practices and used to its full potential in post-adoptive scenarios. Yet, there is a paucity of research focusing on the infusion of MHS by healthcare practitioners. To address this gap in the literature, the objective of this study is to explore the determinants and outcomes of MHS infusion by healthcare practitioners. This research study adopts a post-positivist theory-building approach to MHS infusion. Existing literature is utilised to develop a conceptual model through which the research objective is explored. Employing a mixed-method approach, this conceptual model is first advanced through a case study in the UK, whereby propositions established from the literature are refined into testable hypotheses. The final phase of this research study involves the collection of empirical data from a Canadian hospital, which supports the refined model and its associated hypotheses. The results from both phases of data collection are employed to develop a model of MHS infusion.
The study contributes to IS theory and practice by: (1) developing a model with six determinants (Availability, MHS Self-Efficacy, Time-Criticality, Habit, Technology Trust, and Task Behaviour) and individual performance-related outcomes of MHS infusion (Effectiveness, Efficiency, and Learning), (2) examining undocumented determinants and relationships, (3) identifying prerequisite conditions that both healthcare practitioners and organisations can employ to assist with MHS infusion, (4) developing a taxonomy that provides conceptual refinement of IT infusion, and (5) informing healthcare organisations and vendors as to the performance of MHS in post-adoptive scenarios.
Abstract:
Modern neuroscience relies heavily on sophisticated tools that allow us to visualize and manipulate cells with precise spatial and temporal control. Transgenic mouse models, for example, can be used to manipulate cellular activity in order to draw conclusions about the molecular events responsible for the development, maintenance and refinement of healthy and/or diseased neuronal circuits. Although it is fairly well established that circuits respond to activity-dependent competition between neurons, we have yet to understand either the mechanisms underlying these events or the higher-order plasticity that synchronizes entire circuits. In this thesis we aimed to develop and characterize transgenic mouse models that can be used to directly address these outstanding biological questions in different ways. We present SLICK-H, a Cre-expressing mouse line that can achieve drug-inducible, widespread, neuron-specific manipulations in vivo. This model is a clear improvement over existing models because of its particularly strong, widespread, and even distribution pattern that can be tightly controlled in the absence of drug induction. We also present SLICK-V::Ptox, a mouse line that, through expression of the tetanus toxin light chain, allows long-term inhibition of neurotransmission in a small subset (<1%) of fluorescently labeled pyramidal cells. This model, which can be used to study how a silenced cell performs in a wildtype environment, greatly facilitates the in vivo study of activity-dependent competition in the mammalian brain. As an initial application we used this model to show that tetanus toxin-expressing CA1 neurons experience a 15-19% decrease in apical dendritic spine density. Finally, we also describe the attempt to create additional Cre-driven mouse lines that would allow conditional alteration of neuronal activity either by hyperpolarization or inhibition of neurotransmission.
Overall, the models characterized in this thesis expand upon the wealth of tools available that aim to dissect neuronal circuitry by genetically manipulating neurons in vivo.
Abstract:
Prior work by our research group, which quantified the alarming levels of radiation dose to patients with Crohn’s disease from medical imaging and the notable shift towards CT imaging that makes these patients an at-risk group, provided context for this work. CT delivers some of the highest doses of ionising radiation in diagnostic radiology. Once a medical imaging examination is deemed justified, there is an onus on the imaging team to endeavour to produce diagnostic-quality CT images at the lowest possible radiation dose to that patient. The fundamental limitation of conventional CT raw data reconstruction was the inherent coupling of administered radiation dose with observed image noise – the lower the radiation dose, the noisier the image. The renaissance, rediscovery and refinement of iterative reconstruction removes this limitation, allowing either an improvement in image quality without increasing radiation dose or maintenance of image quality at a lower radiation dose compared with traditional image reconstruction. This thesis is fundamentally an exercise in optimisation of clinical CT practice, with the objectives of assessing iterative reconstruction as a method for improving image quality in CT, exploring the associated potential for radiation dose reduction, and developing a new split-dose CT protocol with the aim of achieving and validating diagnostic-quality submillisievert CT imaging in patients with Crohn’s disease. In this study, we investigated the interplay of user-selected parameters on radiation dose and image quality in phantoms and cadavers, comparing traditional filtered back projection (FBP) with iterative reconstruction algorithms. This resulted in the development of an optimised, refined and appropriate split-dose protocol for CT of the abdomen and pelvis in clinical patients with Crohn’s disease, allowing contemporaneous acquisition of both modified and conventional dose CT studies.
This novel algorithm was then applied to 50 patients with a suspected acute complication of known Crohn’s disease, and the raw data were reconstructed with FBP, adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR). Conventional dose CT images with FBP reconstruction were used as the reference standard against which the modified dose CT images were compared in terms of radiation dose, diagnostic findings and image quality indices. As there are multiple possible user-selected strengths of ASiR available, these were compared in terms of image quality to determine the optimal strength for this modified dose CT protocol. Modified dose CT images with MBIR were also compared with contemporaneous abdominal radiographs, where performed, in terms of diagnostic yield and radiation dose. Finally, attenuation measurements in organs and tissues with each reconstruction algorithm were compared to assess for preservation of tissue characterisation capabilities. In the phantom and cadaveric models, both forms of iterative reconstruction examined (ASiR and MBIR) were superior to FBP across a wide variety of imaging protocols, with MBIR superior to ASiR in all areas other than reconstruction speed. We established that ASiR appears to work to a target percentage noise reduction, whilst MBIR works to a target residual level of absolute noise in the image. Modified dose CT images reconstructed with both ASiR and MBIR were non-inferior to conventional dose CT with FBP in terms of diagnostic findings, despite reduced subjective and objective indices of image quality. Mean dose reductions of 72.9-73.5% were achieved with the modified dose protocol, with a mean effective dose of 1.26 mSv. MBIR was again demonstrated to be superior to ASiR in terms of image quality.
The overall optimal ASiR strength for the modified dose protocol used in this work is ASiR 80%, as this provides the most favourable balance of peak subjective image quality indices with less objective image noise than the corresponding conventional dose CT images reconstructed with FBP. Despite guidelines to the contrary, abdominal radiographs are still often used in the initial imaging of patients with a suspected complication of Crohn’s disease. We confirmed the superiority of modified dose CT with MBIR over abdominal radiographs at comparable doses in the detection of Crohn’s disease and non-Crohn’s disease related findings. Finally, we demonstrated (in phantoms, cadavers and in vivo) that attenuation values do not change significantly across reconstruction algorithms, meaning tissue characterisation capabilities are preserved with iterative reconstruction. Both adaptive statistical and model-based iterative reconstruction algorithms represent feasible methods of facilitating the acquisition of diagnostic-quality CT images of the abdomen and pelvis in patients with Crohn’s disease at markedly reduced radiation doses. Our modified dose CT protocol allows dose savings of up to 73.5% compared with conventional dose CT, meaning submillisievert imaging is possible in many of these patients.
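As a quick sanity check on the reported figures, the conventional dose implied by the abstract can be back-calculated from the mean modified effective dose and the stated percentage saving. This is a hedged arithmetic sketch only; the function and variable names are illustrative, not from the thesis:

```python
# Illustrative back-calculation using figures quoted in the abstract.
def percent_dose_reduction(conventional_msv, modified_msv):
    """Percentage of radiation dose saved by the modified protocol."""
    return 100.0 * (1.0 - modified_msv / conventional_msv)

modified = 1.26  # mean effective dose of the modified protocol, mSv
# A 73.5% reduction implies a conventional dose of modified / (1 - 0.735):
conventional = modified / (1.0 - 0.735)
print(round(conventional, 2))                                   # 4.75
print(round(percent_dose_reduction(conventional, modified), 1)) # 73.5
```

At these figures the implied conventional dose is roughly 4.75 mSv, which is consistent with the scale of dose savings described.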
Abstract:
Background: The Early Development Instrument (EDI) is a population-level measure of five developmental domains at school-entry age. The overall aim of this thesis was to explore the potential of the EDI as an indicator of early development in Ireland. Methods: A cross-sectional study was conducted in 47 primary schools in 2011 using the EDI and a linked parental questionnaire. EDI (teacher-completed) scores were calculated for 1,344 children in their first year of full-time education. Those scoring in the lowest 10% of the sample population in one or more domains were deemed to be 'developmentally vulnerable'. Scores were correlated with contextual data from the parental questionnaire and with indicators of area- and school-level deprivation. Rasch analysis was used to determine the validity of the EDI. Results: Over one quarter (27.5%) of all children in the study were developmentally vulnerable. Individual characteristics associated with increased risk of vulnerability were being male, being under 5 years old, and having English as a second language. Adjusted for these demographics, low birth weight, poor parent/child interaction and a lower level of maternal education showed the most significant odds ratios for developmental vulnerability. Vulnerability did not follow the area-level deprivation gradient as measured by a composite index of material deprivation. Children considered by the teacher to be in need of assessment also had lower scores, which were not significantly different from those of children with a clinical diagnosis of special needs. All domains showed at least reasonable fit to the Rasch model, supporting the validity of the instrument. However, there was a need for further refinement of the instrument in the Irish context. Conclusion: This thesis provides a unique snapshot of early development in Ireland. The EDI and linked parental questionnaires are promising indicators of the extent, distribution and determinants of developmental vulnerability.
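The lowest-decile cut-off described in the Methods can be sketched as follows. This is an illustrative simulation under assumed data, not the study's code or data; the domain scores here are random numbers, so the simulated vulnerability rate will not match the study's 27.5%:

```python
import numpy as np

# Simulated stand-in for the EDI data: 1,344 children x 5 developmental domains.
rng = np.random.default_rng(0)
scores = rng.normal(size=(1344, 5))

# 10th-percentile cut-off computed per domain across the sample.
cutoffs = np.percentile(scores, 10, axis=0)

# A child is flagged 'developmentally vulnerable' if they fall below the
# cut-off in one or more domains.
vulnerable = (scores < cutoffs).any(axis=1)
print(round(vulnerable.mean() * 100, 1))  # % vulnerable in this simulated sample
```

Because the five simulated domains are independent, the flagged share here exceeds 10%; in real EDI data the domains are correlated, which is why a plausible overall rate such as the study's 27.5% sits well below the naive upper bound of 50%.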
Abstract:
The organisational decision-making environment is complex, and decision makers must deal with uncertainty and ambiguity on a continuous basis. Managing and handling decision problems and implementing a solution requires an understanding of the complexity of the decision domain to the point where the problem and its complexity, as well as the requirements for supporting decision makers, can be described. Research in the Decision Support Systems domain has been extensive over the last thirty years, with an emphasis on the development of further technology and better applications on the one hand, and, on the other hand, a social approach focusing on understanding what decision making is about and how developers and users should interact. This research project considers a combined approach that endeavours to understand the thinking behind managers’ decision making, as well as their informational and decisional guidance and decision support requirements. This research utilises a cognitive framework, developed in 1985 by Humphreys and Berkeley, that juxtaposes the mental processes and ideas of decision problem definition and problem solution, which are developed in tandem through cognitive refinement of the problem, based on the analysis and judgement of the decision maker. The framework facilitates the separation of what is essentially a continuous process into five distinct levels of abstraction of managers’ thinking, and suggests a structure for the underlying cognitive activities. Alter (2004) argues that decision support provides a richer basis than decision support systems, in both practice and research. The constituent literature on decision support, especially in regard to modern high-profile systems, including Business Intelligence and Business Analytics, can give the impression that all ‘smart’ organisations utilise decision support and data analytics capabilities for all of their key decision-making activities.
However, this empirical investigation indicates a very different reality.
Abstract:
The observation chart is for many health professionals (HPs) the primary source of objective information relating to the health of a patient. Information Systems (IS) research has demonstrated the positive impact of good interface design on decision making, and it follows that good observation chart design can positively impact healthcare decision making. Despite this potential, there is a paucity of observation chart design literature, with the primary source of literature leveraging Human Computer Interaction (HCI) literature to design better charts. While this approach has been successful, it introduces a gap between understanding of the tasks performed by HPs when using charts and the design features implemented in the chart. Good IS allow for the collection and manipulation of data so that it can be presented in a timely manner that supports specific tasks. Good interface design should therefore consider the specific tasks being performed prior to designing the interface. This research adopts a Design Science Research (DSR) approach to formalise a framework of design principles that incorporates knowledge of the tasks performed by HPs when using observation charts and knowledge pertaining to visual representations of data and the semiology of graphics. This research is presented in three phases; the initial two phases seek to discover and formalise design knowledge embedded in two situated observation charts: the paper-based NEWS chart developed by the Health Service Executive in Ireland and the electronically generated eNEWS chart developed by the Health Information Systems Research Centre in University College Cork. A comparative evaluation of each chart is also presented in the respective phases.
Throughout each of these phases, tentative versions of a design framework for electronic vital sign observation charts are presented, with each subsequent iteration of the framework (versions Alpha, Beta, V0.1 and V1.0) representing a refinement of the design knowledge. The design framework is named the framework for the Retrospective Evaluation of Vital Sign Information from Early Warning Systems (REVIEWS). Phase 3 of the research presents the deductive process for designing and implementing V0.1 of the framework, with evaluation of the instantiation allowing for the final iteration, V1.0, of the framework. This study makes a number of contributions to academic research. First, the research demonstrates that the cognitive tasks performed by nurses during clinical reasoning can be supported through good observation chart design. Second, the research establishes the utility of electronic vital sign observation charts in terms of supporting the cognitive tasks performed by nurses during clinical reasoning. Third, the REVIEWS framework represents a comprehensive set of design principles which, if applied to chart design, will improve the usefulness of the chart in terms of supporting clinical reasoning. Fourth, the electronic observation chart that emerges from this research is demonstrated to be significantly more useful than previously designed charts and represents a significant contribution to practice. Finally, the research presents a research design that employs a combination of inductive and deductive design activities to iterate on the design of situated artefacts.
Development of large-scale colloidal crystallisation methods for the production of photonic crystals
Abstract:
Colloidal photonic crystals have potential light-manipulation applications, including the fabrication of efficient lasers and LEDs, improved optical sensors and interconnects, and improved photovoltaic efficiencies. One road-block of colloidal self-assembly is inherent defects; however, such films can be manufactured cost-effectively over large areas compared with micro-fabrication methods. This thesis investigates the production of ‘large-area’ colloidal photonic crystals by sonication, under-oil co-crystallisation and controlled evaporation, with a view to reducing cracking and other defects. A simple monotonic Stöber particle synthesis method was developed, producing silica particles in the range of 80 to 600 nm in a single step. An analytical method that assesses the quality of surface particle ordering in a semi-quantitative manner was also developed: using fast Fourier transform (FFT) spot intensities, a grey-scale symmetry area method was used to quantify the FFT profiles. Adding ultrasonic vibrations during film formation demonstrated that large areas could be assembled rapidly; however, film ordering suffered as a result. Under-oil co-crystallisation results in the particles being bound together during film formation. While it has the potential to form large areas, it requires further refinement to be established as a production technique. Achieving high-quality photonic crystals bonded with low concentrations (<5%) of polymeric adhesives while maintaining refractive index contrast proved difficult and degraded the film’s uniformity. A controlled evaporation method, using a mixed-solvent suspension, represents the most promising method to produce high-quality films over large areas, 75 mm x 25 mm. During this mixed-solvent approach, the film is kept in the wet state longer, thus reducing cracks developing during the drying stage.
These films are crack-free up to a critical thickness and show very large domains, which are visible in low-magnification SEM images as Moiré fringe patterns. Higher magnification reveals that the separations between alternate fringe patterns are domain boundaries between individual crystalline growth fronts.
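The idea behind FFT-based ordering assessment can be illustrated with a minimal sketch. This is an assumed implementation for illustration only, not the thesis's grey-scale symmetry area method: a well-ordered particle array concentrates spectral power in sharp FFT spots, so the fraction of power held by the brightest pixels serves as a crude order metric.

```python
import numpy as np

def order_metric(image):
    """Fraction of 2-D FFT power in the brightest 1% of pixels.

    Sharper diffraction-like spots (better surface ordering) give a
    higher score; diffuse spectra (disorder) give a lower one.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    spectrum[spectrum.shape[0] // 2, spectrum.shape[1] // 2] = 0  # drop DC term
    flat = np.sort(spectrum.ravel())[::-1]
    top = flat[: max(1, flat.size // 100)]
    return top.sum() / flat.sum()

# A periodic (ordered) pattern scores higher than random noise:
x = np.arange(128)
ordered = np.sin(2 * np.pi * x[:, None] / 8) * np.sin(2 * np.pi * x[None, :] / 8)
noise = np.random.default_rng(1).normal(size=(128, 128))
print(order_metric(ordered) > order_metric(noise))  # True
```

A real implementation would instead integrate intensity over the symmetry-related spot positions, as the thesis describes, rather than simply ranking pixels.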
Abstract:
The original solution to the high failure rate of software development projects was the imposition of an engineering approach to software development, with processes aimed at providing a repeatable structure to maintain a consistency in the ‘production process’. Despite these attempts at addressing the crisis in software development, others have argued that the rigid processes of an engineering approach did not provide the solution. The Agile approach to software development strives to change how software is developed. It does this primarily by relying on empowered teams of developers who are trusted to manage the necessary tasks, and who accept that change is a necessary part of a development project. The use of, and interest in, Agile methods in software development projects has expanded greatly, yet this has been predominantly practitioner driven. There is a paucity of scientific research on Agile methods and how they are adopted and managed. This study aims to address this paucity by examining the adoption of Agile through a theoretical lens. The lens used in this research is that of double loop learning theory. The behaviours required in an Agile team are the same behaviours required in double loop learning; therefore, a transition to double loop learning is required for a successful Agile adoption. The theory of triple loop learning highlights that power factors (or power mechanisms in this research) can inhibit the attainment of double loop learning. This study identifies the negative behaviours - potential power mechanisms - that can inhibit the double loop learning inherent in an Agile adoption, to determine how the Agile processes and behaviours can create these power mechanisms, and how these power mechanisms impact on double loop learning and the Agile adoption. This is a critical realist study, which acknowledges that the real world is a complex one, hierarchically structured into layers.
An a priori framework is created to represent these layers, which are categorised as: the Agile context, the power mechanisms, and double loop learning. The aim of the framework is to explain how the Agile processes and behaviours, through the teams of developers and project managers, can ultimately impact on the double loop learning behaviours required in an Agile adoption. Four case studies provide further refinement to the framework, with changes required due to observations that were often different from what existing literature would have predicted. The study concludes by explaining how the teams of developers, the individual developers, and the project managers, working with the Agile processes and required behaviours, can inhibit the double loop learning required in an Agile adoption. A solution is then proposed to mitigate these negative impacts. Additionally, two new research processes are introduced to add to the Information Systems research toolkit.