882 results for Methods: data analysis
Abstract:
Objective To understand differences in the managerial ethical decision-making styles of Australian healthcare managers through the exploratory use of the Managerial Ethical Profiles (MEP) Scale. Background Healthcare managers (doctors, nurses, allied health practitioners and non-clinically trained professionals) face a raft of variables when making decisions in the workplace. In the absence of clear protocols and policies, healthcare managers rely on a range of personal experiences, personal ethical philosophies, personal factors and organisational factors to arrive at a decision. Understanding the dominant approaches to managerial ethical decision-making, particularly for clinically trained healthcare managers, is a fundamental step not only in raising awareness of how managers make decisions but also in providing a basis for the ongoing development of healthcare managers. Design Cross-sectional. Methods The study adopts a taxonomic approach that simultaneously considers multiple ethical factors that potentially influence managerial ethical decision-making. These factors are used as inputs to cluster analysis to identify distinct patterns of influence on managerial ethical decision-making. Results Data analysis from the participants (n=441) showed a similar spread of the five managerial ethical profiles (Knights, Guardian Angels, Duty Followers, Defenders and Chameleons) across clinically trained and non-clinically trained healthcare managers. There was no substantial statistical difference between the two manager types (clinical and non-clinical) across the five profiles. Conclusion This paper demonstrates that managers from clinical backgrounds have ethical decision-making profiles similar to those of non-clinically trained managers. This finding is important for manager development and for how organisations understand the various approaches to managerial decision-making across the different ethical profiles.
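A minimal sketch of the clustering step described in Methods: respondent factor scores are standardised and partitioned into five clusters. The four factor columns, the simulated data and the cluster-to-profile mapping below are illustrative assumptions, not the MEP Scale's actual items.

```python
# Hypothetical sketch: clustering ethical-factor scores into five profiles.
# Factor columns, data and the cluster-to-profile mapping are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
scores = rng.normal(size=(441, 4))  # simulated scores for n=441 respondents

# Standardise so each factor contributes equally to the distance metric.
X = StandardScaler().fit_transform(scores)

# Five clusters, matching the five profiles reported in the study; in practice
# each cluster is labelled by interpreting its centroid, not by its index.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
profiles = ["Knights", "Guardian Angels", "Duty Followers", "Defenders", "Chameleons"]
for k, name in enumerate(profiles):
    print(name, "->", int((kmeans.labels_ == k).sum()), "managers")
```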
Abstract:
Standardised time series of fishery catch rates require the collation of fishing power data on vessel characteristics. Linear mixed models were used to quantify fishing power trends and to study the effect of missing data encountered when relying on commercial logbooks. For this, Australian eastern king prawn (Melicertus plebejus) harvests were analysed with historical (from vessel surveys) and current (from commercial logbooks) vessel data. Between 1989 and 2010, fishing power increased by up to 76%. To date, forward-filling missing vessel information in commercial logbooks and, alternatively, omitting records with missing information produce broadly similar fishing power increases and standardised catch rates, owing to the strong influence of years with complete vessel data (16 of the 23 years of data). However, if gaps in vessel information had not originated randomly and skippers of the most efficient vessels were the most diligent at filling in logbooks, considerable errors would be introduced. Moreover, the buffering effect of complete years would be short-lived as years with missing data accumulate. Given ongoing changes in fleet profile, with high-catching vessels fishing proportionately more of the fleet's effort, compliance with logbook completion, or alternatively ongoing vessel gear surveys, is required to generate accurate estimates of fishing power and standardised catch rates.
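One plausible form of such a standardisation is a linear mixed model with year and vessel characteristics as fixed effects and vessel as a random effect; a minimal sketch follows. The variable names (log_catch, hull_length, vessel_id) and the simulated data are assumptions for illustration, not the study's actual model specification.

```python
# Minimal sketch of catch-rate standardisation with a linear mixed model.
# Variable names and simulated data are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "log_catch": rng.normal(3.0, 0.5, n),
    "year": rng.integers(1989, 2011, n).astype(str),  # year treated as a factor
    "hull_length": rng.normal(15.0, 2.0, n),          # a vessel characteristic
    "vessel_id": rng.integers(0, 60, n).astype(str),  # random effect: vessel
})

# Fixed effects for year and vessel characteristics; random intercept per vessel.
fit = smf.mixedlm("log_catch ~ C(year) + hull_length", df,
                  groups=df["vessel_id"]).fit()
# The year coefficients trace the standardised (vessel-adjusted) catch-rate trend.
print(fit.params.filter(like="C(year)").head())
```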
Abstract:
Patterns of movement in aquatic animals reflect ecologically important behaviours. Cyclical changes in the abiotic environment influence these movements, but when multiple processes occur simultaneously, identifying which is responsible for the observed movement can be complex. Here we used acoustic telemetry and signal processing to define the abiotic processes responsible for movement patterns in freshwater whiprays (Himantura dalyensis). Acoustic transmitters were implanted into the whiprays and their movements detected over 12 months by an array of passive acoustic receivers deployed throughout 64 km of the Wenlock River, Qld, Australia. The times of an individual's arrival at and departure from each receiver's detection field were used to estimate whipray location continuously throughout the study. This created a linear-movement waveform for each whipray, and signal processing revealed periodic components within the waveform. Correlation of movement periodograms with those of abiotic processes clearly showed that the diel cycle dominated the pattern of whipray movement during the wet season, whereas tidal and lunar cycles dominated during the dry season. The study methodology represents a valuable tool for objectively defining the relationship between abiotic processes and the movement patterns of free-ranging aquatic animals, and is particularly expedient when periods of no detection exist within the animal location data.
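The periodogram step can be sketched as follows: a location time series is decomposed into its periodic components, whose periods are then compared against known abiotic cycles. The hourly sampling rate and the synthetic diel-plus-tidal signal below are assumptions for illustration, not the study's data.

```python
# Sketch: extract periodic components from a linear-movement waveform.
# The signal is synthetic: a diel (24 h) plus semidiurnal tidal (~12.42 h) mix.
import numpy as np
from scipy.signal import periodogram

fs = 1.0                              # one location estimate per hour
t = np.arange(0, 24 * 365, 1 / fs)    # a year of hourly position estimates
position = (np.sin(2 * np.pi * t / 24.0)            # diel cycle
            + 0.5 * np.sin(2 * np.pi * t / 12.42)   # tidal cycle
            + 0.3 * np.random.default_rng(2).normal(size=t.size))

freqs, power = periodogram(position, fs=fs)
top = np.argsort(power[1:])[::-1][:3] + 1  # skip the zero-frequency bin
for i in top:
    print(f"period ~ {1 / freqs[i]:.2f} h, relative power {power[i]:.1f}")
```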
ssSNPer: identifying statistically similar SNPs to aid interpretation of genetic association studies
Abstract:
ssSNPer is a novel, user-friendly web interface that provides easy determination of the number and location of untested HapMap SNPs, in the region surrounding a tested HapMap SNP, that are statistically similar to it and would thus produce comparable, and perhaps more significant, association results. Identification of ssSNPs can have crucial implications for the interpretation of initial association results and the design of follow-up studies. AVAILABILITY: http://fraser.qimr.edu.au/general/daleN/ssSNPer/
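Statistical similarity between SNPs is closely related to linkage disequilibrium. A hedged sketch of the underlying idea, flagging surrounding SNPs whose genotypes are highly correlated (r²) with a tested SNP, follows; ssSNPer's actual similarity criterion may differ, and all genotypes below are simulated.

```python
# Illustrative sketch: flag region SNPs in high LD (r^2) with a tested SNP.
# ssSNPer's actual criterion may differ; genotypes are 0/1/2 allele counts.
import numpy as np

rng = np.random.default_rng(3)
tested = rng.integers(0, 3, 200)            # tested SNP, n=200 samples
region = rng.integers(0, 3, (50, 200))      # 50 untested SNPs in the region

# Plant one SNP in near-perfect LD with the tested SNP (5% of calls perturbed).
noisy = tested.copy()
flip = rng.random(200) < 0.05
noisy[flip] = rng.integers(0, 3, int(flip.sum()))
region[10] = noisy

for idx, snp in enumerate(region):
    r = np.corrcoef(tested, snp)[0, 1]
    if r ** 2 > 0.8:                        # a common LD threshold
        print(f"SNP {idx}: r^2 = {r ** 2:.2f} -> statistically similar")
```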
Abstract:
It has been known for decades that particles can cause adverse health effects as they are deposited within the respiratory system. Atmospheric aerosol particles influence climate by scattering solar radiation, but they also act as the nuclei around which cloud droplets form. The principal objectives of this thesis were to investigate the chemical composition and the sources of fine particles in different environments (traffic, urban background, remote) as well as during some specific air pollution situations. Quantifying the climate and health effects of atmospheric aerosols is not possible without detailed information on the aerosol chemical composition. Aerosol measurements were carried out at nine sites in six countries (Finland, Germany, the Czech Republic, the Netherlands, Greece and Italy). Several different instruments were used to measure both the particulate matter (PM) mass and its chemical composition. In the off-line measurements the samples were first collected on a substrate or filter, and gravimetric and chemical analyses were conducted in the laboratory. In the on-line measurements the sampling and analysis were either a combined procedure or performed successively within the same instrument. Results from the impactor samples were analyzed by statistical methods. This thesis also includes work in which a method was developed for determining the size distribution of carbonaceous matter using a multistage impactor. It was found that the chemistry of PM usually has strong spatial, temporal and size-dependent variability. At the Finnish sites most of the fine PM consisted of organic matter, whereas in Greece sulfate dominated the fine PM and in Italy nitrate made the largest contribution. Regarding the size-dependent chemical composition, organic components were likely to be enriched in smaller particles than inorganic ions. Data analysis showed that organic carbon (OC) had four major sources in Helsinki. Secondary production was the major source in Helsinki during spring, summer and fall, whereas in winter biomass combustion dominated OC. A significant impact of biomass combustion on OC concentrations was also observed in the measurements performed in Central Europe. In this thesis aerosol samples were collected mainly by conventional filter and impactor methods, which suffer from long integration times. However, with filter and impactor measurements chemical mass closure was achieved accurately, and simple filter sampling was found to be useful for explaining the sources of PM on a seasonal basis. The on-line instruments gave additional information on the temporal variations of the sources and the atmospheric mixing conditions.
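Chemical mass closure, mentioned above, checks how much of the gravimetrically weighed PM mass the analysed chemical components account for. The component list, the implied OM/OC scaling and all concentrations in the sketch below are illustrative assumptions, not the thesis's measurements.

```python
# Sketch of a chemical mass closure check: the sum of analysed components
# is compared with the gravimetric PM mass. All values are illustrative.
components_ug_m3 = {
    "organic matter": 4.2,    # OC scaled by an assumed OM/OC factor
    "sulfate": 1.8,
    "nitrate": 0.9,
    "ammonium": 0.7,
    "elemental carbon": 0.5,
    "sea salt": 0.3,
    "crustal matter": 0.4,
}
gravimetric_mass = 9.3  # ug/m3 from the weighed filter

identified = sum(components_ug_m3.values())
closure = identified / gravimetric_mass
print(f"identified {identified:.1f} of {gravimetric_mass:.1f} ug/m3 "
      f"({closure:.0%} mass closure)")
```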
Abstract:
Pressurised hot water extraction (PHWE) exploits the unique temperature-dependent solvent properties of water, minimising the use of harmful organic solvents. Water is an environmentally friendly, cheap and easily available extraction medium. The effects of temperature, pressure and extraction time in PHWE have often been studied, but here the emphasis was on other parameters important for the extraction, most notably the dimensions of the extraction vessel and the stability and solubility of the analytes to be extracted. Non-linear data analysis and self-organising maps were employed to obtain correlations between the parameters studied, the recoveries and the relative errors. First, PHWE was combined on-line with liquid chromatography-gas chromatography (LC-GC), and the system was applied to the extraction and analysis of polycyclic aromatic hydrocarbons (PAHs) in sediment. The method is of superior sensitivity compared with traditional methods, and only a small 10 mg sample was required for analysis. The commercial extraction vessels were replaced by laboratory-made stainless steel vessels because of some problems that arose. The performance of the laboratory-made vessels was comparable to that of the commercial ones. In an investigation of the effect of thermal desorption in PHWE, it was found that at lower temperatures (200°C and 250°C) the effect of thermal desorption is smaller than the effect of the solvating property of hot water. At 300°C, however, thermal desorption is the main mechanism. The effect of the geometry of the extraction vessel on recoveries was studied with five specially constructed extraction vessels. In addition to the extraction vessel geometry, the sediment packing style and the direction of water flow through the vessel were investigated. The geometry of the vessel was found to have only a minor effect on the recoveries, and the same was true of the sediment packing style and the direction of water flow. These are good results because these parameters do not have to be carefully optimised before the start of extractions. Liquid-liquid extraction (LLE) and solid-phase extraction (SPE) were compared as trapping techniques for PHWE. LLE was more robust than SPE and provided better recoveries and repeatabilities. Problems related to blocking of the Tenax trap and unrepeatable trapping of the analytes were encountered in SPE. Thus, although LLE is more labour-intensive, it can be recommended over SPE. The stabilities of the PAHs in aqueous solutions were measured using a batch-type reaction vessel. Degradation was observed at 300°C even with the shortest heating time, and ketones, quinones and other oxidation products were observed. Although the conditions of the stability studies differed considerably from the extraction conditions in PHWE, the results indicate that the risk of analyte degradation must be taken into account in PHWE. The aqueous solubilities of acenaphthene, anthracene and pyrene were measured, first below and then above the melting points of the analytes. Measurements below the melting point were made to check that the equipment was working, and the results were compared with those obtained earlier. Good agreement was found between the measured and literature values. A new saturation cell was constructed for the solubility measurements above the melting point because the flow-through saturation cell could not be used there. An exponential relationship was found between temperature and the solubilities measured for pyrene and anthracene.
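An exponential solubility-temperature relation of this kind can be recovered by linear regression on the logarithm of the solubility. The sketch below assumes the form S(T) = a·exp(bT) and uses synthetic data points, not the thesis's measurements.

```python
# Sketch: fitting an exponential solubility-temperature relation,
# S(T) = a * exp(b * T), by linear regression on ln(S). Data are synthetic.
import numpy as np

T = np.array([150.0, 200.0, 250.0, 300.0])  # temperature, degrees C
S = np.array([0.8, 6.5, 52.0, 410.0])       # solubility, mg/L (illustrative)

b, ln_a = np.polyfit(T, np.log(S), 1)       # ln S = ln a + b*T
a = np.exp(ln_a)
print(f"S(T) ~ {a:.3g} * exp({b:.3g} * T)")
for Ti in T:
    print(f"T = {Ti:.0f} C: predicted {a * np.exp(b * Ti):.1f} mg/L")
```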
Abstract:
Comprehensive two-dimensional gas chromatography (GC×GC) offers enhanced separation efficiency, reliability in qualitative and quantitative analysis, the capability to detect low quantities, and information on the whole sample and its components. These features are essential in the analysis of complex samples, in which the number of compounds may be large or the analytes of interest are present at trace level. This study involved the development of instrumentation, data analysis programs and methodologies for GC×GC, and their application in studies on qualitative and quantitative aspects of GC×GC analysis. Environmental samples were used as model samples. Instrumental development comprised the construction of three versions of a semi-rotating cryogenic modulator, in which modulation was based on two-step cryogenic trapping with continuously flowing carbon dioxide as coolant. Two-step trapping was achieved by rotating the nozzle spraying the carbon dioxide with a motor. The fastest rotation and highest modulation frequency were achieved with a permanent magnetic motor, and modulation was most accurate when the motor was controlled with a microcontroller containing a quartz crystal. Heated wire resistors were unnecessary for the desorption step when liquid carbon dioxide was used as coolant. With the modulators developed in this study, the narrowest peaks were 75 ms at base. Three data analysis programs were developed, allowing basic, comparison and identification operations. The basic operations enabled the visualisation of two-dimensional plots and the determination of retention times, peak heights and volumes. The overlaying feature in the comparison program allowed easy comparison of 2D plots. An automated identification procedure based on mass spectra and retention parameters allowed the qualitative analysis of data obtained by GC×GC with time-of-flight mass spectrometry. In the methodological development, sample preparation (extraction and clean-up) and GC×GC methods were developed for the analysis of atmospheric aerosol and sediment samples. Dynamic sonication-assisted extraction was well suited for atmospheric aerosols collected on a filter. A clean-up procedure utilising normal-phase liquid chromatography with ultraviolet detection worked well in removing aliphatic hydrocarbons from a sediment extract. GC×GC with flame ionisation detection or quadrupole mass spectrometry provided good reliability in the qualitative analysis of target analytes; however, GC×GC with time-of-flight mass spectrometry was needed for the analysis of unknowns. The automated identification procedure was efficient in the analysis of large data files, but manual searching and analyst knowledge remain invaluable. Quantitative analysis was examined in terms of calibration procedures and the effect of matrix compounds on GC×GC separation. In addition to calibration in GC×GC with summed peak areas or peak volumes, a simplified area calibration based on the normal GC signal can be used to quantify compounds in samples analysed by GC×GC, so long as certain qualitative and quantitative prerequisites are met. In a study of the effect of matrix compounds on GC×GC separation, it was shown that the quality of the PAH separation is not significantly disturbed by the amount of matrix, and that quantitative performance suffers only slightly in the presence of matrix when the amount of target compounds is low. The benefits of GC×GC in the analysis of complex samples easily outweigh the technique's minor drawbacks. The developed instrumentation and methodologies performed well for environmental samples, but they could also be applied to other complex samples.
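The core data operation behind the 2D plots and peak volumes mentioned above is folding the modulated detector trace into a retention plane using the modulation period. The sampling rate, period, data and peak-region indices in the sketch below are assumptions for illustration, not details of the thesis's software.

```python
# Sketch of the core GCxGC data step: fold the 1D detector trace into a 2D
# retention plane using the modulation period. Data and indices are synthetic.
import numpy as np

fs = 100.0       # detector sampling rate, Hz (assumed)
period_s = 4.0   # modulation period, s (assumed)
samples_per_mod = int(fs * period_s)
signal = np.random.default_rng(4).random(samples_per_mod * 300)  # 300 modulations

# Rows index first-dimension retention time; columns index the second dimension.
plane = signal.reshape(-1, samples_per_mod)

# A peak "volume" is the integral over a rectangular region of the plane.
volume = plane[120:140, 50:90].sum() / fs
print(f"plane shape {plane.shape}, example peak volume {volume:.1f} (a.u. * s)")
```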
Abstract:
Being at the crossroads of the Old World continents, Western Asia has a unique position through which the dispersal and migration of mammals and the interaction of faunal bioprovinces occurred. Despite this critical position, the record of Miocene mammals in Western Asia is sporadic, and there are large spatial and temporal gaps between the known fossil localities. Although the development of the mammalian faunas in the Miocene of the Old World is well known and there is ample evidence for environmental shifts in this epoch, efforts to quantify habitat changes and to trace the development of chronofaunas from faunal compositions have been largely neglected. Advances in chronological, paleoclimatological and paleogeographical reconstruction tools and techniques, together with the increased number of new discoveries in recent decades, have made it necessary to update and refine our understanding. We undertook fieldwork and a systematic study of mammalian trace and body fossils from the northwestern parts of Iran, along with analysis of large mammal data from the NOW database. The data analysis was used to study the provinciality, relative abundance, and distribution history of the closed- and open-adapted taxa and chronofaunas in the Miocene of the Old World and Western Asia. The provinciality analysis was carried out using locality clustering, and the relative abundance of the closed- and open-adapted taxa was surveyed at the family level. The distribution history of the chronofaunas was studied using faunal resemblance indices and new mapping techniques, together with humidity analysis based on mean ordinated hypsodonty. Paleoichnological studies revealed the abundance of mammalian footprints in several parts of the basins studied, which are normally not fossiliferous in terms of body fossils. The systematic study and biochronology of the newly discovered mammalian fossils in northwestern Iran indicate close affinities with middle Turolian faunas. Large cranial remains of hipparionine horses, previously unknown in Iran and Western Asia, are among the material studied. The initiation of a new field project in the famous Maragheh locality also brings new opportunities to address questions regarding the chronology and paleoenvironment of this classical site. The provinciality analysis refined our previous understanding, indicating the interaction of four provinces in Western Asia. The development of these provinces was apparently due to the presence of high mountain ranges in the area, which affected the dispersal of mammals as well as climatic patterns. Higher temperatures and possibly higher CO2 levels during the Middle Miocene Climatic Optimum apparently favored the development of the closed, forested environments that supported the dominance of the closed-adapted taxa. The increased seasonality and the progressive cooling and drying of the midlatitudes toward the Late Miocene maintained the dominance of open-adapted faunas. It appears that the late Middle Miocene was the time of transition from a more forested to a less forested world. The distribution history of the closed- and open-adapted chronofaunas shows the presence of cosmopolitan and endemic faunas in Western Asia. The closed-adapted faunas, such as the Arabian chronofauna of the late Early to early Middle Miocene, demonstrated a rapid buildup and gradual decline. The open-adapted chronofaunas, such as the Late Miocene Maraghean fauna, climaxed gradually by filling the opening environments and moving in response to changes in humidity patterns, then declined abruptly with the demise of their favored environments. The Siwalikan chronofauna of the early Late Miocene remained endemic and restricted throughout its history. This study highlights the importance of field investigations and indicates that new surveys in the vast, poorly sampled areas of Western Asia can still be promising. Clustering of the localities supports the consistency of formerly known patterns and augments them. Although the quantitative approach to the relative abundance history of closed- and open-adapted mammals harks back more than half a century, it remains a technique that provides robust results, and tracking the history of the chronofaunas in space and time by means of new computational and illustration methods is a new practice that can be expanded to new areas and time spans.
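The abstract does not name the faunal resemblance index used; one standard choice in mammal biochronology is the Simpson coefficient (shared taxa divided by the richness of the smaller fauna), sketched below with illustrative taxa lists.

```python
# Sketch of a faunal resemblance index. The thesis's actual index may differ;
# the Simpson coefficient is a common choice. Taxa lists are illustrative.
def simpson_resemblance(fauna_a: set, fauna_b: set) -> float:
    """Shared taxa divided by the richness of the smaller fauna."""
    shared = len(fauna_a & fauna_b)
    return shared / min(len(fauna_a), len(fauna_b))

maragheh = {"Hipparion", "Gazella", "Adcrocuta", "Choerolophodon", "Palaeotragus"}
samos = {"Hipparion", "Gazella", "Adcrocuta", "Microstonyx"}
print(f"Simpson FRI = {simpson_resemblance(maragheh, samos):.2f}")
```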
Abstract:
Background: The development of a horse vaccine against Hendra virus has been hailed as a good example of a One Health approach to the control of human disease. Although there is little doubt that this is true, it is clear from the underwhelming uptake of the vaccine by horse owners to date (approximately 10%) that realisation of a One Health approach requires more than just a scientific solution. As emerging infectious diseases may often be linked to the development and implementation of novel vaccines, this presentation will discuss factors influencing their uptake, using Hendra virus in Australia as a case study. Methods: This presentation will draw on data collected from the Horse owners and Hendra virus: A Longitudinal cohort study To Evaluate Risk (HHALTER) study. The HHALTER study is a mixed-methods research study comprising a two-year survey-based longitudinal cohort study and a qualitative interview study with horse owners in Australia. The HHALTER study has investigated and tracked changes in a broad range of issues around early uptake of vaccination, horse-owner uptake of other recommended disease risk mitigation strategies, and attitudes to government policy and disease response. Interviews provide further insights into attitudes towards risk and decision-making in relation to vaccine uptake. A combination of quantitative and qualitative data analysis will be reported. Results: Data collected from more than 1100 horse owners shortly after vaccine introduction indicated that vaccine uptake and intention to vaccinate were associated with a number of risk perception factors and financial cost factors. In addition, concerns about side effects and about veterinarians refusing to treat unvaccinated horses were linked to uptake. Across the study period vaccine uptake in the study cohort increased to more than 50%; however, concerns around side effects, equine performance and breeding impacts, delays to full vaccine approvals, and attempts to mandate vaccination by horse associations and event organisers have all affected acceptance. Conclusion: Despite being provided with a safe and effective vaccine for Hendra virus that can protect horses and break the transmission cycle of the virus to humans, Australian horse owners have been reluctant to commit to it. General issues pertinent to novel vaccines, combined with challenges in the implementation of the vaccine, have led to mistrust and misconceptions among some horse owners. Moreover, factors such as cost, booster dose schedules, complexities around perceived risk, and ulterior motives attributed to veterinarians have only served to polarise attitudes to vaccine acceptance.
Abstract:
This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. The problems studied here belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem in which the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important because the direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem: HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point-mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population; thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point-mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes; therefore, it can be seen as a probabilistic model of recombinations and point-mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability. BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference. Local alignment significance is a computational problem in which one is interested in whether the local similarities of two sequences are due to the sequences being related or merely due to chance. Similarity of sequences is measured by their best local alignment score, from which a p-value is computed. This p-value is the probability of picking two sequences from the null model that have an equally good or better best local alignment score. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
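The p-value definition above can be illustrated directly: score the alignment under test, then estimate how often two null-model sequences score at least as well. The thesis derives a tight analytic upper bound without sampling; the brute-force sketch below, with assumed scoring parameters, only demonstrates the definition.

```python
# Illustrative sketch of the local-alignment p-value *definition*: the chance
# that two null-model sequences score at least as well as the observed pair.
# The thesis computes a tight analytic bound; sampling here is only a demo.
import random

def best_local_score(a: str, b: str, match=1, mismatch=-1, gap=-2) -> int:
    """Smith-Waterman best local alignment score with a linear gap penalty."""
    n, best = len(b), 0
    prev = [0] * (n + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (n + 1)
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            cur[j] = max(0, prev[j - 1] + s, prev[j] + gap, cur[j - 1] + gap)
            best = max(best, cur[j])
        prev = cur
    return best

random.seed(5)
observed = 9  # score of the alignment under test (assumed)
null = lambda k: "".join(random.choice("ACGT") for _ in range(k))
hits = sum(best_local_score(null(100), null(100)) >= observed for _ in range(200))
print(f"empirical p-value ~ {hits / 200:.3f}")
```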
Abstract:
This thesis presents a highly sensitive genome-wide search method for recessive mutations. The method is suitable for distantly related samples that are divided into phenotype positives and negatives. High-throughput genotype arrays are used to identify and compare homozygous regions between the cohorts. The method is demonstrated by comparing colorectal cancer patients against unaffected references. The objective is to find homozygous regions and alleles that are more common in cancer patients. We have designed and implemented software tools to automate the data analysis from genotypes to lists of candidate genes and their properties. The programs have been designed around a pipeline architecture that allows their integration with other programs, such as biological databases and copy number analysis tools. The integration of the tools is crucial, as the genome-wide analysis of the cohort differences produces many candidate regions not related to the studied phenotype. CohortComparator is a genotype comparison tool that detects homozygous regions and compares their loci and allele constitutions between two sets of samples. The data are visualised in chromosome-specific graphs illustrating the homozygous regions and alleles of each sample. The genomic regions that may harbour recessive mutations are emphasised with different colours, and a scoring scheme is given for these regions. The detection of homozygous regions, the cohort comparisons and the result annotations all rest on assumptions, many of which have been parameterised in our programs. The effect of these parameters and the suitable scope of the methods have been evaluated. Samples with different resolutions can be balanced using the genotype estimates of their haplotypes, allowing them to be used within the same study.
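The core detection step can be sketched as scanning genotype calls for runs of homozygosity. The 0/1/2 encoding (1 = heterozygous) and the minimum run length below are assumptions for illustration; CohortComparator's actual parameters may differ.

```python
# Sketch: detect runs of homozygosity (ROH) in 0/1/2 genotype calls.
# The encoding (1 = heterozygous) and min_len threshold are assumptions.
import numpy as np

def homozygous_runs(genotypes: np.ndarray, min_len: int = 25):
    """Yield (start, end) index pairs of homozygous stretches >= min_len SNPs."""
    start = None
    for i, g in enumerate(genotypes):
        if g != 1 and start is None:
            start = i                       # homozygous stretch begins
        elif g == 1 and start is not None:
            if i - start >= min_len:
                yield (start, i)
            start = None                    # heterozygous call ends the run
    if start is not None and len(genotypes) - start >= min_len:
        yield (start, len(genotypes))

rng = np.random.default_rng(6)
sample = rng.integers(0, 3, 1000)
sample[300:360] = 2                         # plant a homozygous run
print(list(homozygous_runs(sample)))        # regions to compare across cohorts
```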
Abstract:
Many educational researchers conducting studies in non-English-speaking settings attempt to report on their projects in English to boost their scholarly impact. This requires preparing and presenting translations of data collected from interviews and observations. This paper discusses the process and the ethical considerations involved in this invisible methodological phase. The process includes activities, prior to data analysis and to its presentation, to be undertaken by the bilingual researcher as translator in order to convey participants' original meanings and to establish and fulfil translation ethics. This paper offers strategies to address such issues, identifies the most appropriate translation method for qualitative studies, and suggests approaches to the political issues that arise when presenting such data.
Abstract:
Background Despite potential benefits, some patients decide not to use their custom-made orthopaedic shoes (OS). Factors in the domains 'usability', 'communication and service', and 'opinion of others' are known to influence a patient's decision to use OS. However, the interplay between these factors has never been investigated. The aim of this study was to explore the interplay between factors concerning OS and its influence on a patient's decision to use OS. Methods A mixed-methods design was used, combining qualitative and quantitative data by means of sequential data analysis and triangulation, with priority given to the qualitative part. Qualitative data were gathered with a semi-structured interview covering the three domains and analysed using the framework approach. Quantitative data concerned the interplay between factors and the determination of a rank order for the importance of the 'usability' factors. Results A patient's decision to use OS was influenced by various factors indicated as being important and by acceptance of their OS. Factors of 'usability' were more important than factors of 'communication'; the 'opinion of others' was of limited importance. An improvement in walking was indicated as the most important factor of 'usability'. The importance of other factors (cosmetic appearance and ease of use) was determined by reaching a compromise between these factors and an improvement in walking. Conclusions A patient's decision to use OS is influenced by various factors indicated as being important and by acceptance of their OS. An improvement in walking is the most important factor of 'usability'; the importance of other factors (cosmetic appearance and ease of use) is determined by reaching compromises between these factors and an improvement in walking. Communication is essential to gain insight into a patient's acceptance and into the compromises they are willing to reach. This makes communication the key for clinicians seeking to influence a patient's decision to use OS.
Abstract:
Compositional data analysis usually deals with relative information between parts where the total (abundance, mass, amount, etc.) is unknown or uninformative. This article addresses the question of what to do when the total is known and is of interest. Tools used in this case are reviewed and analysed, in particular the relationship between the positive orthant of D-dimensional real space, the product space of the real line times the D-part simplex, and their Euclidean space structures. The first alternative corresponds to analysing the data after taking logarithms of each component; the second treats a log-transformed total jointly with a composition describing the distribution of the component amounts. Real data on total abundances of phytoplankton in an Australian river motivated the present study and are used for illustration.
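The second representation can be made concrete: split a vector of amounts into its log total and the composition of parts, with the composition expressed in log-ratio coordinates. A minimal sketch, with illustrative abundances and centred log-ratio (clr) coordinates assumed as the compositional representation, follows.

```python
# Sketch of the second representation: a log-transformed total analysed
# jointly with the composition of parts. Abundances are illustrative only.
import numpy as np

amounts = np.array([120.0, 45.0, 8.0, 27.0])  # e.g. counts of 4 phytoplankton taxa

total = amounts.sum()
composition = amounts / total                  # a point in the 4-part simplex

# Centred log-ratio (clr) coordinates of the composition, plus the log total.
clr = np.log(composition) - np.log(composition).mean()
print("log total:", round(float(np.log(total)), 3))
print("clr coordinates:", np.round(clr, 3))
# Together, (log total, clr) carry the same information as the raw amounts,
# but in a Euclidean space suited to standard statistical tools.
```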
Abstract:
Background Around the world, guidelines and clinical practice for the prevention of complications associated with central venous catheters (CVC) vary greatly. To prevent occlusion, most institutions recommend the use of heparin when the CVC is not in use. However, there is debate regarding the need for heparin, and evidence to suggest that normal saline may be as effective. The use of heparin is not without risk, may be unnecessary and is associated with increased costs. Objectives To assess the clinical effects (benefits and harms) of heparin versus normal saline to prevent occlusion in long-term central venous catheters in infants, children and adolescents. Design A Cochrane systematic review of randomised controlled trials was undertaken. Data sources The Cochrane Vascular Group Specialised Register (including MEDLINE, CINAHL, EMBASE and AMED) and the Cochrane Register of Studies were searched. Hand searching of relevant journals and the reference lists of retrieved articles was also undertaken. Review methods Data were extracted and appraisal undertaken. We included studies that compared the efficacy of normal saline with that of heparin to prevent occlusion. We excluded temporary CVCs and peripherally inserted central catheters. Rate ratios per 1000 catheter days were calculated for two outcomes: occlusion of the CVC and CVC-associated bloodstream infection. Results Three trials with a total of 245 participants were included in this review. The three trials directly compared the use of normal saline and heparin; however, the studies all used different protocols, with various concentrations of heparin and frequencies of flushing. The quality of the evidence ranged from low to very low. The estimated rate ratio for CVC occlusion per 1000 catheter days between the normal saline and heparin groups was 0.75 (95% CI 0.10 to 5.51; two studies, 229 participants; very low quality evidence). The estimated rate ratio for CVC-associated bloodstream infection was 1.48 (95% CI 0.24 to 9.37; two studies, 231 participants; low quality evidence). Conclusions It remains unclear whether heparin is necessary for CVC maintenance. More well-designed studies are required to answer this relatively simple but clinically important question. Ultimately, if this evidence were available, it would facilitate the development of evidence-based clinical practice guidelines and consistency of practice.
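A rate ratio per 1000 catheter-days is simple arithmetic: each group's event count is divided by its catheter-days of follow-up, scaled to 1000 days, and the two rates are divided. The counts below are invented, chosen only so the result matches the review's reported point estimate of 0.75 for concreteness; they are not data from the included trials.

```python
# Worked example of a rate ratio per 1000 catheter-days. Counts are invented
# (chosen to reproduce the reported 0.75), not data from the review's trials.
saline_events, saline_days = 12, 8000      # occlusions, catheter-days
heparin_events, heparin_days = 15, 7500

saline_rate = saline_events / saline_days * 1000     # 1.50 per 1000 days
heparin_rate = heparin_events / heparin_days * 1000  # 2.00 per 1000 days
rate_ratio = saline_rate / heparin_rate
print(f"saline {saline_rate:.2f} vs heparin {heparin_rate:.2f} "
      f"per 1000 catheter-days; rate ratio = {rate_ratio:.2f}")
```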