818 results for Hiking -- Tools and equipment
Abstract:
The training and ongoing education of medical practitioners has undergone major, incremental change over the past 15 years, driven by patient-safety, educational, economic and legislative/regulatory factors. In the near future, training in procedural skills will undergo a paradigm shift to proficiency-based progression, with associated requirements for competence-based programmes; valid, reliable assessment tools; and simulation technology. Before training begins, the learning outcomes require clear definition, and any form of assessment applied should measure these outcomes. Currently, training in a procedural skill often takes place on an ad hoc basis, and the number of attempts necessary to attain a defined degree of proficiency varies from procedure to procedure. Convincing evidence exists that simulation training helps trainees acquire skills more efficiently than relying on opportunities arising in their clinical practice. Simulation provides a safe, stress-free environment in which trainees can acquire, generalize and transfer skills via deliberate practice. The work described in this thesis contributes to a greater understanding of how medical procedures can be performed more safely and effectively through education. Feedback based on knowledge of performance, provided to novices in a standardized setting on a bench model, was associated with faster skill acquisition and a lower error rate during initial learning. The timing of feedback was also associated with effective learning of the skill. A marked attrition of skills (independent of the type of feedback provided) was demonstrable 24 hours after they were first learned. Applying these feedback principles, we studied the effect of an intensive one-day training programme covering one or more procedures (the format of many current courses) on novices with varying years of experience in anaesthesia. There was marked attrition of skill at 24 hours, which correlated significantly with increasing years of experience; there also appeared to be an inverse relationship between years of experience in anaesthesia and performance: the greater the number of years of practice experience, the longer a learner required to acquire a new skill. The findings of the studies described in this thesis may have important implications for trainers, trainees and training bodies in the design and implementation of training courses and in the delivery formats of changing curricula. Both curricula and training modalities will need to take account of the characteristics of individual learners and the dynamic nature of procedural healthcare.
Abstract:
While bonobos and chimpanzees are both genetically and behaviorally very similar, they also differ in significant ways. Bonobos are more cautious and socially tolerant, while chimpanzees are more dependent on extractive foraging, which requires tools. The similarities suggest the two species should be cognitively similar, while the behavioral differences predict where the two species should differ cognitively. We compared both species on a wide range of cognitive problems testing their understanding of the physical and social world. Bonobos were more skilled at solving tasks related to theory of mind or an understanding of social causality, while chimpanzees were more skilled at tasks requiring the use of tools and an understanding of physical causality. These species differences support the role of ecological and socio-ecological pressures in shaping cognitive skills over relatively short periods of evolutionary time.
Abstract:
It is increasingly evident that evolutionary processes play a role in how ecological communities are assembled. However, the extent to which evolution influences how plants respond to spatial and environmental gradients and interact with each other is less clear. In this dissertation I leverage evolutionary tools and thinking to understand how space and environment affect community composition and patterns of gene flow in a unique system of Atlantic rainforest and restinga (sandy coastal plain) habitats in southeastern Brazil.
In chapter one I investigate how space and environment affect the population genetic structure and gene flow of Aechmea nudicaulis, a bromeliad species that co-occurs in forest and restinga habitats. I genotyped seven microsatellite loci and sequenced one chloroplast DNA region for individuals collected in seven pairs of forest/restinga sites. Bayesian genetic clustering analyses show that populations of A. nudicaulis are geographically structured into northern and southern groups, a pattern consistent with broader-scale phylogeographic dynamics of the Atlantic rainforest. On the other hand, explicit coalescent-based migration models estimate that inter-habitat gene flow is less common than gene flow between populations in the same habitat type, despite their geographic discontinuity. I conclude that there is evidence for repeated colonization of the restingas from forest populations, and that the steep environmental gradient between habitats is a stronger barrier to gene flow than geographic distance.
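The clustering and migration analyses above are model-based, but the microsatellite data they rest on also support simpler summaries of differentiation. As a hedged illustration only (not a method stated in the chapter), the sketch below computes Nei's G_ST from per-population allele frequencies at a single locus; the input layout and example frequencies are assumptions for demonstration.

```python
import numpy as np

def gst(pop_freqs):
    """Nei's G_ST from per-population allele frequencies at one locus.

    pop_freqs -- (n_populations, n_alleles) array; each row sums to 1.
    H_S is the mean within-population expected heterozygosity; H_T is
    the expected heterozygosity of the pooled allele frequencies.
    """
    pop_freqs = np.asarray(pop_freqs, dtype=float)
    h_s = (1.0 - (pop_freqs ** 2).sum(axis=1)).mean()
    p_bar = pop_freqs.mean(axis=0)
    h_t = 1.0 - (p_bar ** 2).sum()
    return (h_t - h_s) / h_t

# two populations sharing two alleles unevenly -> moderate differentiation
print(gst([[0.9, 0.1], [0.4, 0.6]]))  # ~0.27
```

Values near 0 indicate freely exchanging populations; values approaching 1 indicate strong differentiation, the kind of signal a steep habitat barrier would be expected to leave.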
In chapter two I use data on 2,800 individual plants finely mapped in a restinga plot, and on first-year survival of 500 seedlings, to understand the roles of phylogeny, functional traits and abiotic conditions in the spatial structuring of that community. I demonstrate that phylogeny is a poor predictor of functional traits and that convergence in these traits is pervasive. In general, the community is not phylogenetically structured, with at best 14% of the plots deviating significantly from the null model. The functional traits specific leaf area (SLA), leaf dry matter content (LDMC) and maximum height likewise showed no clear pattern of spatial structuring. On the other hand, leaf area is strongly overdispersed across all spatial scales. Although leaf area overdispersion would generally be taken as evidence of competition, I argue that this interpretation is probably misleading. Finally, I show that seedling survival increases dramatically when seedlings grow shaded by an adult individual, suggesting that seedlings are being facilitated; phylogenetic distance to the adult neighbor, however, has no influence on survival rates. Taken together, these results indicate that phylogeny has very limited influence on the fine-scale assembly of restinga communities.
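The plot-level tests above are run against a null model. As a minimal sketch of how such a test is commonly implemented (an assumption about the general approach, not the dissertation's actual code), the following computes the standardized effect size of mean pairwise phylogenetic distance (MPD) for one plot against a label-shuffling null; the function names and inputs are illustrative.

```python
import numpy as np

def mpd(dist, present):
    """Mean pairwise phylogenetic distance among species present in a plot."""
    idx = np.flatnonzero(present)
    sub = dist[np.ix_(idx, idx)]
    return sub[np.triu_indices_from(sub, k=1)].mean()  # off-diagonal pairs only

def ses_mpd(dist, present, n_null=999, rng=None):
    """Standardized effect size of MPD against a label-shuffling null.

    dist    -- square matrix of pairwise phylogenetic distances
    present -- boolean vector marking which species occur in the plot
    """
    rng = rng or np.random.default_rng(0)
    obs = mpd(dist, present)
    null = np.empty(n_null)
    for i in range(n_null):
        null[i] = mpd(dist, rng.permutation(present))  # shuffle species labels
    return (obs - null.mean()) / null.std()  # < 0: clustering; > 0: overdispersion
```

A plot would be flagged as deviating significantly, as for the 14% of plots reported above, when its standardized effect size falls in the tails of the null distribution.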
Abstract:
Histopathology is the clinical standard for tissue diagnosis. However, it has several limitations: it requires tissue processing, which can take 30 minutes or more, and it requires a highly trained pathologist to diagnose the tissue. Additionally, the diagnosis is qualitative, and the lack of quantitation can lead to observer-specific diagnoses. Taken together, these factors make it difficult to diagnose tissue at the point of care using histopathology.
Several clinical situations could benefit from more rapid and automated histological processing, which could reduce the time and the number of steps required between obtaining a fresh tissue specimen and rendering a diagnosis. For example, there is a need for rapid detection of residual cancer on the surface of tumor resection specimens during excisional surgeries, known as intraoperative tumor margin assessment. Additionally, rapid assessment of biopsy specimens at the point of care could enable clinicians to confirm that a suspicious lesion has been successfully sampled, thus preventing unnecessary repeat biopsy procedures. Rapid, low-cost histological processing could also be useful in settings lacking the human resources and equipment necessary to perform standard histologic assessment. Lastly, automated interpretation of tissue samples could reduce inter-observer error, particularly in the diagnosis of borderline lesions.
To address these needs, high-quality microscopic images of the tissue must be obtained rapidly for a pathologic assessment to be useful in guiding the intervention. Optical microscopy is a powerful technique for obtaining high-resolution images of tissue morphology in real time at the point of care, without the need for tissue processing. In particular, a number of groups have combined fluorescence microscopy with vital fluorescent stains to visualize micro-anatomical features of thick (i.e., unsectioned or unprocessed) tissue. However, robust methods for segmentation and quantitative analysis of heterogeneous images are essential to enable automated diagnosis. Thus, the goal of this work was to image tissue morphology at high resolution using fluorescence microscopy and vital fluorescent stains, and to develop a quantitative strategy to segment and quantify tissue features in heterogeneous images, such as nuclei and the surrounding stroma, thereby enabling automated diagnosis of thick tissues.
To achieve these goals, three specific aims were proposed. The first aim was to develop an image-processing method that can differentiate nuclei from background tissue heterogeneity and enable automated diagnosis of thick tissue at the point of care. A computational technique called sparse component analysis (SCA) was adapted to isolate features of interest, such as nuclei, from the background. SCA has been used previously in the image-processing community for image compression, enhancement, and restoration, but had never been applied to separate distinct tissue types in a heterogeneous image. In combination with a high-resolution fluorescence microendoscope (HRME) and the contrast agent acriflavine, the utility of this technique was demonstrated by imaging preclinical sarcoma tumor margins. Acriflavine localizes to the nuclei of cells, where it reversibly associates with RNA and DNA; it also shows some affinity for collagen and muscle. SCA was adapted to isolate acriflavine-positive features (APFs, which correspond to RNA and DNA) from background tissue heterogeneity. The circle transform (CT) was applied to the SCA output to quantify the size and density of overlapping APFs. The sensitivity of the SCA+CT approach to variations in APF size, density and background heterogeneity was demonstrated through simulations; specifically, SCA+CT achieved the lowest errors for higher contrast ratios and larger APF sizes. When applied to tissue images of excised sarcoma margins, SCA+CT correctly isolated APFs and showed consistently increased APF density in tumor and tumor + muscle images compared to images containing muscle alone. Next, variables were quantified from images of resected primary sarcomas and used to optimize a multivariate model. The sensitivity and specificity for differentiating positive from negative ex vivo resected tumor margins were 82% and 75%, respectively. The utility of this approach was further tested by imaging the in vivo tumor cavities of 34 mice after sarcoma resection, with local recurrence as a benchmark. When applied prospectively to images from the tumor cavity, the sensitivity and specificity for differentiating local recurrence were 78% and 82%. These results indicate that SCA+CT can accurately delineate APFs in heterogeneous tissue, which is essential to enable automated and rapid surveillance of tissue pathology.
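The abstract does not give implementation details for the circle transform, so the following is only a sketch of how one might quantify the size and density of overlapping APFs from a binary SCA output, using scikit-image's Hough circle transform as a stand-in; the radii range, peak count, and function names are assumptions.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def quantify_apfs(mask, radii=np.arange(3, 12), max_peaks=500):
    """Estimate size and density of overlapping APFs in an SCA output mask.

    mask  -- 2-D boolean array of acriflavine-positive features from SCA
    radii -- candidate nuclear radii in pixels (illustrative assumption)
    """
    edges = canny(mask.astype(float))        # edge map feeds the circle transform
    accum = hough_circle(edges, radii)       # one accumulator plane per radius
    _, cx, cy, r = hough_circle_peaks(accum, radii, total_num_peaks=max_peaks)
    density = len(cx) / mask.size            # detected APFs per pixel of the FOV
    mean_radius = r.mean() if len(r) else 0.0
    return mean_radius, density
```

Because the transform votes on circle centers rather than labeling connected components, overlapping nuclei can still be counted separately, which is the property the abstract relies on.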
Two primary challenges were identified in the work in aim 1. First, while SCA can be used to isolate features, such as APFs, from heterogeneous images, its performance is limited by the contrast between APFs and the background. Second, while it is feasible to create mosaics by scanning a sarcoma tumor bed in a mouse, which is on the order of 3-7 mm in any one dimension, it is not feasible to evaluate an entire human surgical margin. Thus, improvements to the microscopic imaging system were made to (1) improve image contrast through rejecting out-of-focus background fluorescence and to (2) increase the field of view (FOV) while maintaining the sub-cellular resolution needed for delineation of nuclei. To address these challenges, a technique called structured illumination microscopy (SIM) was employed in which the entire FOV is illuminated with a defined spatial pattern rather than scanning a focal spot, such as in confocal microscopy.
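The abstract does not specify the demodulation scheme. Assuming the classic optically sectioned SIM approach of Neil et al., in which three grid-illuminated images are acquired at phase shifts of 0, 2π/3 and 4π/3, the in-focus section can be recovered by square-law detection, as sketched below (up to a constant scale factor).

```python
import numpy as np

def sim_section(i1, i2, i3):
    """Optically sectioned image from three phase-shifted grid images.

    Out-of-focus light is nearly identical in all three frames and cancels
    in the pairwise differences; the in-focus, grid-modulated component
    survives. The result is defined up to a constant scale factor.
    """
    return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)

def widefield_equivalent(i1, i2, i3):
    """Conventional uniform-illumination image, for contrast comparison."""
    return (i1 + i2 + i3) / 3.0
```

Comparing sim_section against widefield_equivalent on the same three frames illustrates the contrast improvement the next aim quantifies.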
Thus, the second aim was to improve image contrast and increase the FOV by employing wide-field, non-contact structured illumination microscopy, and to optimize the segmentation algorithm for the new imaging modality. Both image contrast and FOV were increased through the development of a wide-field fluorescence SIM system. Clear improvement in image contrast was seen in structured illumination images compared to uniform illumination images. Additionally, the FOV is over 13X larger than that of the fluorescence microendoscope used in aim 1. Initial segmentation results on SIM images revealed that SCA is unable to segment large numbers of APFs in the tumor images: because the FOV of the SIM system is over 13X larger than that of the fluorescence microendoscope, dense collections of APFs commonly seen in tumor images could no longer be sparsely represented, and the fundamental sparsity assumption of SCA was no longer met. Thus, an algorithm called maximally stable extremal regions (MSER) was investigated as an alternative approach for APF segmentation in SIM images. MSER was able to accurately segment large numbers of APFs in SIM images of tumor tissue. In addition to optimizing MSER for SIM image segmentation, the frequency of the illumination pattern used in SIM was carefully selected, because the image signal-to-noise ratio (SNR) depends on the grid frequency. A grid frequency of 31.7 mm⁻¹ led to the highest SNR and the lowest percent error in MSER segmentation.
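As a rough illustration of the MSER step, using OpenCV's detector as a stand-in (the parameter values below are assumptions, not the tuned values from this work), one might segment APFs in a grayscale SIM image as follows.

```python
import cv2
import numpy as np

def segment_apfs_mser(gray):
    """Segment bright APF-like regions in an 8-bit grayscale SIM image.

    Parameter values are illustrative assumptions, not this work's settings.
    """
    mser = cv2.MSER_create()
    mser.setDelta(5)        # stability threshold across intensity levels
    mser.setMinArea(20)     # reject regions smaller than a nucleus
    mser.setMaxArea(2000)   # reject regions larger than a cell cluster
    regions, _ = mser.detectRegions(gray)
    mask = np.zeros(gray.shape, dtype=np.uint8)
    for pts in regions:     # each region is an array of (x, y) pixel coordinates
        mask[pts[:, 1], pts[:, 0]] = 255
    return regions, mask
```

Unlike SCA, MSER makes no sparsity assumption: it keeps any intensity-stable region within the area bounds, so dense fields of APFs remain segmentable.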
Once MSER was optimized for SIM image segmentation and the optimal grid frequency was selected, a quantitative model was developed to diagnose mouse sarcoma tumor margins imaged ex vivo with SIM. Tumor margins were stained with acridine orange (AO) in aim 2 because AO was found to stain the sarcoma tissue more brightly than acriflavine. Both acriflavine and AO are intravital dyes that have been shown to stain nuclei, skeletal muscle, and collagenous stroma. A tissue-type classification model was developed to differentiate localized regions (75 × 75 µm) of tumor from skeletal muscle and adipose tissue based on the MSER segmentation output. Specifically, a logistic regression model was used to classify each localized region, yielding an output in terms of the probability (0-100%) that tumor was located within each 75 × 75 µm region. Model performance was tested using receiver operating characteristic (ROC) curve analysis, which revealed 77% sensitivity and 81% specificity. For margin classification, the whole margin image was divided into localized regions and the tissue-type classification model was applied. In a subset of 6 margins (3 negative, 3 positive), with a tumor probability threshold of 50%, 8% of all regions from negative margins exceeded the threshold, while over 17% of all regions in the positive margins did. Thus, 8% of regions in negative margins were considered false positives. These false-positive regions are likely due to the high density of APFs present in normal tissues, which clearly demonstrates a challenge in implementing this automatic algorithm based on AO staining alone.
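A minimal sketch of the per-region classification step, assuming scikit-learn's logistic regression; the feature names and layout are illustrative, not the variables actually used in the model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def fit_region_classifier(features, is_tumor):
    """Fit a tissue-type classifier on MSER-derived region variables.

    features -- (n_regions, n_features) array, e.g. APF density/size/shape
    is_tumor -- binary label for each 75 x 75 um region
    """
    model = LogisticRegression().fit(features, is_tumor)
    auc = roc_auc_score(is_tumor, model.predict_proba(features)[:, 1])
    return model, auc  # a real ROC analysis would use held-out data

def fraction_above_threshold(model, margin_features, threshold=0.5):
    """Share of a margin's regions whose tumor probability exceeds threshold."""
    p = model.predict_proba(margin_features)[:, 1]
    return float((p > threshold).mean())
```

Margin-level classification then reduces to comparing this fraction across margins, as in the 8% versus 17% figures above.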
Thus, the third aim was to improve the specificity of the diagnostic model by leveraging other sources of contrast. Modifications were made to the SIM system to enable fluorescence imaging at a variety of wavelengths. Specifically, the SIM system was modified to enable imaging of red fluorescent protein (RFP)-expressing sarcomas, which were used to delineate the location of tumor cells within each image. Initial analysis of AO-stained panels confirmed that there was room for improvement in tumor detection, particularly with regard to false-positive regions that were negative for RFP. One approach to improving the specificity of the diagnostic model was to investigate a fluorophore more specific to tumor. Tetracycline was selected because it appeared to specifically stain freshly excised tumor tissue in a matter of minutes, and was non-toxic and stable in solution. The results indicate that tetracycline staining holds promise for increasing the specificity of tumor detection in SIM images of a preclinical sarcoma model, and further investigation is warranted.
In conclusion, this work presents the development of a combination of tools that is capable of automated segmentation and quantification of micro-anatomical images of thick tissue. When compared to the fluorescence microendoscope, wide-field multispectral fluorescence SIM imaging provided improved image contrast, a larger FOV with comparable resolution, and the ability to image a variety of fluorophores. MSER was an appropriate and rapid approach to segment dense collections of APFs from wide-field SIM images. Variables that reflect the morphology of the tissue, such as the density, size, and shape of nuclei and nucleoli, can be used to automatically diagnose SIM images. The clinical utility of SIM imaging and MSER segmentation to detect microscopic residual disease has been demonstrated by imaging excised preclinical sarcoma margins. Ultimately, this work demonstrates that fluorescence imaging of tissue micro-anatomy combined with a specialized algorithm for delineation and quantification of features is a means for rapid, non-destructive and automated detection of microscopic disease, which could improve cancer management in a variety of clinical scenarios.
Abstract:
BACKGROUND/AIMS: The obesity epidemic has spread to young adults, and obesity is a significant risk factor for cardiovascular disease. The prominence and increasing functionality of mobile phones may provide an opportunity to deliver longitudinal, scalable weight-management interventions to young adults. The aim of this article is to describe the design and development of the intervention tested in the Cell Phone Intervention for You study and to highlight the importance of the adaptive intervention design that made it possible. The Cell Phone Intervention for You study was a National Heart, Lung, and Blood Institute-sponsored, controlled, 24-month randomized clinical trial comparing two active interventions to a usual-care control group. Participants were 365 overweight or obese (body mass index ≥ 25 kg/m²) young adults. METHODS: Both active interventions were designed based on social cognitive theory and incorporated techniques for behavioral self-management and motivational enhancement. Initial intervention development occurred during a 1-year formative phase utilizing focus groups and iterative, participatory design. During intervention testing, adaptive intervention design, in which an intervention is updated or extended throughout a trial while assuring delivery of exactly the same intervention to each cohort, was employed. This strategy distributed the technical work and allowed novel components to be introduced in phases intended to help promote and sustain participant engagement. Adaptive intervention design was made possible by exploiting the mobile phone's remote data capabilities, so that adoption of particular application components could be continuously monitored and components subsequently added or updated remotely. RESULTS: The cell phone intervention was delivered almost entirely via cell phone and was always present, proactive, and interactive, providing passive and active reminders, frequent opportunities for knowledge dissemination, and multiple tools for self-tracking and receiving tailored feedback. The intervention changed over 2 years to promote and sustain engagement. The personal coaching intervention, in contrast, consisted primarily of coaching by trained coaches based on a proven intervention, enhanced with a mobile application with which all interactions were participant-initiated. CONCLUSION: The complexity and length of this technology-based randomized clinical trial created challenges in engagement and technology adaptation, which were generally discovered using novel remote-monitoring technology and addressed through adaptive intervention design. Investigators should plan to develop tools and procedures that explicitly support continuous remote monitoring of interventions, enabling adaptive intervention design in long-term, technology-based studies, in addition to developing the interventions themselves.
Abstract:
With increasing recognition of the roles RNA molecules and RNA/protein complexes play in an unexpected variety of biological processes, understanding of RNA structure-function relationships is of high current importance. To make clean biological interpretations from three-dimensional structures, it is imperative to have high-quality, accurate RNA crystal structures available, and the community has thoroughly embraced that goal. However, due to the many degrees of freedom inherent in RNA structure (especially for the backbone), it is a significant challenge to succeed in building accurate experimental models for RNA structures. This chapter describes the tools and techniques our research group and our collaborators have developed over the years to help RNA structural biologists both evaluate and achieve better accuracy. Expert analysis of large, high-resolution, quality-conscious RNA datasets provides the fundamental information that enables automated methods for robust and efficient error diagnosis in validating RNA structures at all resolutions. The even more crucial goal of correcting the diagnosed outliers has steadily developed toward highly effective, computationally based techniques. Automation enables solving complex issues in large RNA structures, but cannot circumvent the need for thoughtful examination of local details, and so we also provide some guidance for interpreting and acting on the results of current structure validation for RNA.
Abstract:
Despite the apparent simplicity of the OpenMP directive-based shared-memory programming model and the sophisticated dependence analysis and code generation capabilities of the ParaWise/CAPO tools, experience shows that a level of expertise is required to produce efficient parallel code. In a real-world application, the investigation of a single loop in generated parallel code can soon become an in-depth inspection of numerous dependencies across many routines. An additional understanding of dependencies is also needed to interpret the information provided effectively and to supply the required feedback. The ParaWise Expert Assistant has been developed to automate this investigation and to present questions to the user about, and in the context of, their application code. In this paper, we demonstrate that knowledge of dependence information and OpenMP is no longer essential to produce efficient parallel code with the Expert Assistant. It is hoped that this will enable a far wider audience to use the tools and, subsequently, to exploit the benefits of large parallel systems.
Abstract:
Problems in preserving the quality of granular material products are complex and arise from a series of sources during transport and storage. Whether designing a new plant or, more likely, analysing problems that cause product quality degradation in existing operations, the process engineer requires practical measurement and simulation tools and technologies, both to identify the source of such problems and to design them out. As part of a major research programme on quality in particulate manufacturing, computational models have been developed for segregation in silos, degradation in pneumatic conveyors, and the development of caking during storage, which use, where possible, micro-mechanical relationships to characterize the behaviour of granular materials. The objective of the work presented here is to demonstrate the use of these computational models of unit processes in the analysis of large-scale processes involving the handling of granular materials. This paper presents a set of simulations of a complete large-scale granular materials handling operation, involving the discharge of material from a silo, its transport through a dilute-phase pneumatic conveyor, and its storage in a big bag under varying environmental temperature and humidity conditions. Conclusions are drawn on the capability of the computational models to represent key granular processes, including particle-size segregation, degradation, and moisture-migration caking.
Abstract:
Today most IC and board designs are undertaken using two-dimensional graphics tools and rule checks. System-in-package is driving three-dimensional design concepts, and this poses a number of challenges for electronic design automation (EDA) software vendors. System-in-package requires three-dimensional EDA tools and design collaboration systems with appropriate manufacturing and assembly rules for these expanding technologies. Simulation and analysis tools today focus on one aspect of the design requirement, for example thermal, electrical or mechanical. System-in-package requires analysis and simulation tools that can easily capture complex three-dimensional structures and provide integrated, fast solutions to issues such as thermal management, reliability, and electromagnetic interference. This paper discusses some of the challenges faced by the design and analysis community in providing appropriate tools to engineers for system-in-package design.
Abstract:
Fuel-only algal systems are not economically feasible because yields are too low and the cost of producing microalgal biomass is too high compared to using agricultural residues such as straw. Biorefineries, which integrate biomass conversion processes and equipment to produce fuels, power and chemicals from biomass, offer a solution. The CO2 microalgae biorefinery (D-Factory) is a 10 million euro FP7-funded project that will cultivate the microalga Dunaliella in highly saline non-potable waters in photobioreactors and open raceways, and apply biorefinery concepts and European innovations in biomass processing technologies to develop a basket of compounds from Dunaliella biomass, including the high-value nutraceutical β-carotene and glycerol. Glycerol now finds markets both as a green chemical intermediate and, as a result of novel combustion technology, as a biofuel in CHP applications. Driving down costs by recovering the entire biomass of Dunaliella cells from saline cultivation water poses one of the many challenges for the D-Factory, because Dunaliella cells are motile and lack an external cell wall, making them highly susceptible to rupture. Controlling expression of desired metabolic pathways to deliver the desired portfolio of compounds flexibly and sustainably to meet market demand is another. The first prototype D-Factory in Europe will be operational in 48 months and will serve as a robust manifestation of the business case for global investment in algae biorefineries and in large-scale production of microalgae.
Abstract:
This study evaluated the effectiveness of four assistive technology (AT) tools for literacy: (1) speech synthesis, (2) a spellchecker, (3) a homophone tool, and (4) a dictionary. All four programs are featured in TextHelp's Read&Write Gold software package. A total of 93 secondary-level students with reading disabilities participated in the study. The participants completed a number of computer-based literacy tests after being assigned to a Read&Write group or a control group that used Microsoft Word. The results indicated improvements for the Read&Write group in: (1) reading comprehension, (2) homophone error detection, (3) spelling error detection, and (4) word meanings. The Microsoft Word group also improved in the areas of word meanings and error detection, though it performed worse on homophone error detection. The authors contend that these results indicate that speech synthesis, spellcheckers, homophone tools, and dictionary programs have a positive effect on literacy among students with reading disabilities. The study was conducted by researchers at Queen's University Belfast.
Abstract:
A method extending narrative analysis with grounded theory analysis is proposed to bridge the gap between breadth and depth in IS narrative research. The purpose of the method is not to develop a theory but to make narrative analysis more accessible, transparent and accountable; and the resultant narrative more contextually grounded. The method is aimed particularly at inexperienced narrative researchers who currently lack guidance through the complexity of narrative analysis, but may also benefit experienced narrative researchers who may not be familiar with the applicability of grounded theory tools and techniques in this area.
Abstract:
Exam timetabling is one of the most important administrative activities in academic institutions. In this paper we present a critical discussion of the research on exam timetabling over the last decade or so. These ten years have seen an increased level of attention to this important topic, with a range of significant contributions to the scientific literature on both theoretical and practical aspects. The main aim of this survey is to highlight the new trends and key research achievements of the last decade. We also aim to outline a range of relevant and important research issues and challenges that have been generated by this body of work.
We first define the problem and review previous survey papers. Algorithmic approaches are then classified and discussed. These include early techniques (e.g. graph heuristics) and state-of-the-art approaches including meta-heuristics, constraint-based methods, multi-criteria techniques, hybridisations, and recent new trends concerning neighbourhood structures, which are motivated by raising the generality of the approaches. Summarising tables are presented to provide an overall view of these techniques. We discuss issues of decomposition techniques, system tools and languages, models and complexity. We also present and discuss some important issues which have come to light concerning the public benchmark exam timetabling data: different versions of problem datasets with the same name have circulated in the scientific community over the last ten years, generating a significant amount of confusion. We clarify the situation and present a renaming of the widely studied datasets to avoid future confusion, and we highlight which research papers have dealt with which dataset. Finally, we draw upon our discussion of the literature to present a (non-exhaustive) range of potential future research directions and open issues in exam timetabling research.
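As a concrete illustration of the "early techniques (e.g. graph heuristics)" mentioned above, here is a minimal largest-degree-first greedy coloring of an exam conflict graph. This is a generic textbook heuristic, not an algorithm from any particular surveyed paper, and all names are illustrative: exams that share a student receive different timeslots (colors).

```python
def greedy_timetable(conflicts):
    """Largest-degree-first greedy coloring of an exam conflict graph.

    conflicts -- dict mapping each exam to the set of exams sharing a student.
    Returns a dict mapping each exam to a timeslot index such that no two
    conflicting exams share a slot.
    """
    slots = {}
    # schedule the most constrained exams first
    for exam in sorted(conflicts, key=lambda e: len(conflicts[e]), reverse=True):
        taken = {slots[n] for n in conflicts[exam] if n in slots}
        slot = 0
        while slot in taken:  # take the lowest slot free of conflicts
            slot += 1
        slots[exam] = slot
    return slots

# tiny illustrative instance
exams = {"math": {"physics", "history"},
         "physics": {"math"},
         "history": {"math"}}
print(greedy_timetable(exams))  # {'math': 0, 'physics': 1, 'history': 1}
```

The meta-heuristic and constraint-based approaches surveyed in the paper can be read as increasingly sophisticated replacements for this ordering-and-assignment core.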
Abstract:
Functional and non-functional concerns require different programming effort, different techniques and different methodologies when attempting to program efficient parallel/distributed applications. In this work we present a "programmer oriented" methodology based on formal tools that permits reasoning about parallel/distributed program development and refinement. The proposed methodology is semi-formal in that it does not require the exploitation of highly formal tools and techniques, while providing a palatable and effective support to programmers developing parallel/distributed applications, in particular when handling non-functional concerns.
Abstract:
Background: For families of children diagnosed with autism spectrum disorder (ASD), getting a diagnosis is a traumatic experience on which future care and education plans for the child depend. In this paper, parental experiences of diagnosis and forward planning for children with ASD are reported. Method: This paper is part of a large cross-sectional study conducted in Northern Ireland and the Republic of Ireland that assessed the needs and experiences of parents of children diagnosed with ASD. Questionnaires were designed and completed by 95 parents, reporting on 100 children, as well as by 67 multi-disciplinary professionals. Results: The findings confirm that diagnostic and planning processes are extremely stressful for parents, that statutory diagnosis takes a long time, that care and education plans do not include full parental participation, and that reviews of plans do not consistently include intervention data. Conclusion: The policy and practice implications of these findings are important for future revisions of diagnostic tools and manuals.