897 results for Simulation-based methods
Abstract:
The purpose of this research is to explore the use of modelling in the field of Purchasing and Supply Management (P/SM). We are particularly interested in identifying the specific areas of P/SM where there are opportunities for the use of modelling-based methods. The paper starts with an overview of the main types of modelling and also provides a categorisation of the main P/SM research themes. Our research shows that there are many opportunities for using descriptive, predictive and prescriptive modelling approaches in all areas of P/SM research, from those focused on the actual function from a purely operational and execution perspective (e.g. purchasing processes and behaviour) to those focused on the organisational level from a more strategic perspective (e.g. strategy and policy). We conclude that future P/SM research needs to explore the value of modelling not just at the functional or operational level, but also at the organisational and strategic levels. We also acknowledge that while using empirical results to inform and improve models has advantages, there are also drawbacks, which relate to the value, the practical relevance and the generalisability of modelling-based approaches.
Abstract:
Marine mammals exploit the efficiency of sound propagation in the marine environment for essential activities like communication and navigation. For this reason, passive acoustics has particularly high potential for marine mammal studies, especially those aimed at population management and conservation. Despite the rapid realization of this potential through a growing number of studies, much crucial information remains unknown or poorly understood. This research attempts to address two key knowledge gaps, using the well-studied bottlenose dolphin (Tursiops truncatus) as a model species, and underwater acoustic recordings collected on four fixed autonomous sensors deployed at multiple locations in Sarasota Bay, Florida, between September 2012 and August 2013. Underwater noise can hinder dolphin communication. The ability of these animals to overcome this obstacle was examined using recorded noise and dolphin whistles. I found that bottlenose dolphins are able to compensate for increased noise in their environment using a wide range of strategies employed singly or in various combinations, depending on the frequency content of the noise, the noise source, and the time of day. These strategies include modifying whistle frequency characteristics, increasing whistle duration, and increasing whistle redundancy. Recordings were also used to evaluate the performance of six recently developed passive acoustic abundance estimation methods, by comparing their results to the true abundance of animals, obtained via a census conducted within the same area and time period. The methods employed were broadly divided into two categories: those involving direct counts of animals, and those involving counts of cues (signature whistles). The animal-based methods were traditional capture-recapture, spatially explicit capture-recapture (SECR), and an approach that blends the "snapshot" method and mark-recapture distance sampling, referred to here as SMRDS.
The cue-based methods were conventional distance sampling (CDS), an acoustic modeling approach involving the use of the passive sonar equation, and SECR. In the latter approach, detection probability was modelled as a function of sound transmission loss, rather than the Euclidean distance typically used. Of these methods, while SMRDS produced the most accurate estimate, SECR demonstrated the greatest potential for broad applicability to other species and locations, with minimal to no auxiliary data, such as distance from sound source to detector(s), which is often difficult to obtain. This was especially true when this method was compared to traditional capture-recapture results, which greatly underestimated abundance, despite attempts to account for major unmodelled heterogeneity. Furthermore, the incorporation of non-Euclidean distance significantly improved model accuracy. The acoustic modelling approach performed similarly to CDS, but both methods also strongly underestimated abundance. In particular, CDS proved to be inefficient. This approach requires at least 3 sensors for localization at a single point. It was also difficult to obtain accurate distances, and the sample size was greatly reduced by the failure to detect some whistles on all three recorders. As a result, this approach is not recommended for marine mammal abundance estimation when few recorders are available, or in high sound attenuation environments with relatively low sample sizes. It is hoped that these results lead to more informed management decisions, and therefore, more effective species conservation.
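The non-Euclidean SECR variant described above replaces Euclidean distance with sound transmission loss inside the detection function. A minimal sketch of that idea follows; the half-normal form, the absorption coefficient, and all parameter values are illustrative assumptions, not the study's fitted model:

```python
import math

def transmission_loss_db(r_m, alpha_db_per_km=0.05):
    """Sonar-equation transmission loss: spherical spreading plus absorption."""
    return 20.0 * math.log10(max(r_m, 1.0)) + alpha_db_per_km * r_m / 1000.0

def detect_prob_euclidean(r_m, sigma_m=500.0):
    """Half-normal detection function on Euclidean distance."""
    return math.exp(-r_m ** 2 / (2.0 * sigma_m ** 2))

def detect_prob_tl(r_m, sigma_db=20.0):
    """Same half-normal form, but evaluated on transmission loss,
    as in the non-Euclidean SECR variant."""
    tl = transmission_loss_db(r_m)
    return math.exp(-tl ** 2 / (2.0 * sigma_db ** 2))
```

Because transmission loss grows only logarithmically with range at short distances, the two parameterizations imply very different detection surfaces, which is why the substitution can improve model accuracy in high-attenuation environments.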
Abstract:
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].
Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.
As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus a shared responsibility to optimize the amount of radiation dose for CT examinations. The key to dose optimization is to determine the minimum amount of radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would significantly benefit from effective metrics to characterize radiation dose and image quality for a CT exam. Moreover, if accurate predictions of the radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models to prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models into clinical practice by developing an organ-based dose monitoring system and an image-based noise addition software for protocol optimization.
More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current condition. The study effectively modeled the anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distribution. The dependence of organ dose coefficients on patient size and scanner models was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date with representative age, weight percentile, and body mass index (BMI) range.
With effective quantification of organ dose under constant tube current condition, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated based on Monte Carlo simulations with TCM function explicitly modeled.
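The convolution-based estimation technique mentioned above can be illustrated in miniature: a z-axis tube-current profile is convolved with a normalized longitudinal dose-spread kernel, so each slice's value includes scatter contributions from neighbouring slices. The profile values and the Gaussian kernel width below are illustrative assumptions, not the thesis's actual kernels:

```python
import numpy as np

def estimate_dose_profile(tube_current_ma, kernel):
    """Convolve a z-axis tube-current profile with a normalized
    longitudinal dose-spread kernel, so each slice's value includes
    scatter contributions from its neighbours."""
    kernel = np.asarray(kernel, dtype=float)
    kernel = kernel / kernel.sum()  # normalize so total weight is 1
    return np.convolve(tube_current_ma, kernel, mode="same")

# Illustrative TCM profile (mA per slice) and Gaussian spread kernel.
z = np.arange(-10, 11)
kernel = np.exp(-z ** 2 / (2.0 * 4.0 ** 2))
profile = np.array([100.0] * 20 + [250.0] * 30 + [150.0] * 20)
dose = estimate_dose_profile(profile, kernel)
```

In the interior of a constant-current region the convolution reproduces the local value, while at transitions it blends neighbouring levels, mimicking the smoothing effect of scattered radiation along the patient axis.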
Chapter 5 aims to implement the organ dose-estimation framework in clinical practice to develop an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study, we focused on body CT examinations; the patient's major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. Chapter 6 outlines the method that was developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.
Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
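Step (1), image-based noise addition, typically exploits the fact that quantum noise variance scales inversely with dose: to simulate a dose fraction f from a full-dose image, zero-mean noise with standard deviation sigma_full * sqrt(1/f - 1) is added. The sketch below makes the simplifying assumptions of Gaussian, spatially uncorrelated noise (real CT noise is textured and correlated) and is not the thesis's actual software:

```python
import numpy as np

def simulate_reduced_dose(image_hu, sigma_full, dose_fraction, seed=None):
    """Add zero-mean Gaussian noise so the total quantum noise matches
    the target dose level: noise variance scales as 1/dose, so the
    added component has std sigma_full * sqrt(1/f - 1)."""
    rng = np.random.default_rng(seed)
    sigma_add = sigma_full * np.sqrt(1.0 / dose_fraction - 1.0)
    return image_hu + rng.normal(0.0, sigma_add, image_hu.shape)
```

For example, simulating quarter-dose (f = 0.25) from an image with 10 HU of quantum noise requires adding noise with a standard deviation of 10 * sqrt(3), about 17.3 HU, so that the combined noise reaches the 20 HU expected at quarter dose.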
Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.
Abstract:
The microbially mediated anaerobic oxidation of methane (AOM) is the major biological sink of the greenhouse gas methane in marine sediments (doi:10.1007/978-94-009-0213-8_44) and serves as an important control for emission of methane into the hydrosphere. The AOM metabolic process is assumed to be a reversal of methanogenesis coupled to the reduction of sulfate to sulfide, involving methanotrophic archaea (ANME) and sulfate-reducing bacteria (SRB) as syntrophic partners, which were described, amongst others, in Boetius et al. (2000; doi:10.1038/35036572). In this study, 16S rRNA-based methods were used to investigate the distribution and biomass of archaea in samples from (i) sediments above outcropping methane hydrate at Hydrate Ridge (Cascadia margin off Oregon) and (ii) massive microbial mats enclosing carbonate reefs (Crimea area, Black Sea). Sediment samples from Hydrate Ridge were obtained during R/V SONNE cruises SO143-2 in August 1999 and SO148-1 in August 2000 at the crest of southern Hydrate Ridge at the Cascadia convergent margin off the coast of Oregon. The second study area is located in the Black Sea and represents a field in which there is active seepage of free gas on the slope of the northwestern Crimea area. Here, a field of conspicuous microbial reefs forming chimney-like structures was discovered at a water depth of 230 m in anoxic waters. The microbial mats were sampled by using the manned submersible JAGO during the R/V Prof. LOGACHEV cruise in July 2001. At Hydrate Ridge the surface sediments were dominated by aggregates consisting of ANME-2 and members of the Desulfosarcina-Desulfococcus branch (DSS) (ANME-2/DSS aggregates), which accounted for >90% of the total cell biomass. The numbers of ANME-1 cells increased strongly with depth; these cells accounted for 1% of all single cells at the surface and more than 30% of all single cells (5% of the total cells) in 7- to 10-cm sediment horizons that were directly above layers of gas hydrate.
In the Black Sea microbial mats ANME-1 accounted for about 50% of all cells. ANME-2/DSS aggregates occurred in microenvironments within the mat but accounted for only 1% of the total cells. FISH probes for the ANME-2a and ANME-2c subclusters were designed based on a comparative 16S rRNA analysis. In Hydrate Ridge sediments ANME-2a/DSS and ANME-2c/DSS aggregates differed significantly in morphology and abundance. The relative abundance values for these subgroups were remarkably different at Beggiatoa sites (80% ANME-2a, 20% ANME-2c) and Calyptogena sites (20% ANME-2a, 80% ANME-2c), indicating that there was preferential selection of the groups in the two habitats.
Abstract:
We calculate net community production (NCP) during summer 2005-2006 and spring 2006 in the Ross Sea using multiple approaches to determine the magnitude and consistency of rates. Water column carbon and nutrient inventories and surface ocean O2/Ar data are compared to satellite-derived primary productivity (PP) estimates and 14C uptake experiments. In spring, NCP was related to stratification proximal to upper ocean fronts. In summer, the most intense C drawdown was in shallow mixed layers affected by ice melt; depth-integrated C drawdown, however, increased with mixing depth. Δ(O2/Ar)-based methods, relying on gas exchange reconstructions, underestimate NCP due to seasonal variations in surface Δ(O2/Ar) and NCP rates. Mixed layer Δ(O2/Ar) requires approximately 60 days to reach steady state, starting from early spring. Additionally, cold temperatures prolong the sensitivity of gas exchange reconstructions to past NCP variability. Complex vertical structure, in addition to the seasonal cycle, affects interpretations of surface-based observations, including those made from satellites. During both spring and summer, substantial fractions of NCP were below the mixed layer. Satellite-derived estimates tended to overestimate PP relative to 14C-based estimates, most severely in locations of stronger upper water column stratification. Biases notwithstanding, NCP-PP comparisons indicated that community respiration was of similar magnitude to NCP. We observed that a substantial portion of NCP remained as suspended particulate matter in the upper water column, demonstrating a lag between production and export. Resolving the dynamic physical processes that structure variance in NCP and its fate will enhance the understanding of carbon cycling in highly productive Antarctic environments.
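The steady-state relation underlying the Δ(O2/Ar) method estimates NCP as the biological oxygen flux out of the mixed layer. A minimal sketch follows; the function name and example values are illustrative, and a full calculation would weight the gas transfer velocity over the preceding weeks rather than use an instantaneous value:

```python
def ncp_from_o2ar(delta_o2ar, o2_sat_mmol_m3, k_m_per_day):
    """Steady-state mixed-layer NCP (mmol O2 m^-2 d^-1) from the
    biological oxygen supersaturation:
    NCP ~ k * [O2]sat * Delta(O2/Ar)."""
    return k_m_per_day * o2_sat_mmol_m3 * delta_o2ar
```

For example, a gas transfer velocity of 3 m/d, an oxygen saturation concentration of 350 mmol m^-3, and a biological supersaturation Δ(O2/Ar) of 0.05 (5%) yield an NCP of 52.5 mmol O2 m^-2 d^-1. The abstract's caveats about the ~60-day spin-up and sub-mixed-layer production are precisely the situations where this steady-state formula breaks down.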
Abstract:
Abstract Molecular probe-based methods (fluorescence in situ hybridisation or FISH, Next Generation Sequencing or NGS) have proved successful in improving both the efficiency and accuracy of the identification of microorganisms, especially those that lack distinct morphological features, such as picoplankton. However, FISH methods have the major drawback that they can only identify one or just a few species at a time because of the reduced number of available fluorochromes that can be added to the probe. Although the length of sequence that can be obtained is continually improving, NGS still requires a great deal of handling time, its analysis still takes months, and with a PCR step it will always be sensitive to natural enzyme inhibitors. With the use of DNA microarrays, it is possible to identify large numbers of taxa on a single glass slide, the so-called phylochip, which can be semi-quantitative. This review details the major steps in probe design, design and production of a phylochip and validation of the array. Finally, major microarray studies in the phytoplankton community are reviewed to demonstrate the scope of the method.
Abstract:
The article presents a study of a CEFR B2-level reading subtest that is part of the Slovenian national secondary school leaving examination in English as a foreign language, and compares the test-takers' actual performance (objective difficulty) with the test-takers' and experts' perceptions of item difficulty (subjective difficulty). The study also analyses the test-takers' comments on item difficulty obtained from a while-reading questionnaire. The results are discussed in the framework of the existing research in the field of (the assessment of) reading comprehension, and are addressed with regard to their implications for item-writing, FL teaching and curriculum development.
Abstract:
In the past decade, several major food safety crises originated from problems with feed. Consequently, there is an urgent need for early detection of fraudulent adulteration and contamination in the feed chain. Strategies are presented for two specific cases, viz. adulterations of (i) soybean meal with melamine and other types of adulterants/contaminants and (ii) vegetable oils with mineral oil, transformer oil or other oils. These strategies comprise screening at the feed mill or port of entry with non-destructive spectroscopic methods (NIRS and Raman), followed by post-screening and confirmation in the laboratory with MS-based methods. The spectroscopic techniques are suitable for on-site and on-line applications. Currently they are suited to detect fraudulent adulteration at relatively high levels but not to detect low level contamination. The potential use of the strategies for non-targeted analysis is demonstrated.
Abstract:
Background: Esophageal adenocarcinoma (EA) is one of the fastest-rising cancers in western countries. Barrett's Esophagus (BE) is the premalignant precursor of EA. However, only a subset of BE patients develop EA, which complicates clinical management in the absence of valid predictors. Genetic risk factors for BE and EA are incompletely understood. This study aimed to identify novel genetic risk factors for BE and EA. Methods: Within an international consortium of groups involved in the genetics of BE/EA, we performed the first meta-analysis of all genome-wide association studies (GWAS) available, involving 6,167 BE patients, 4,112 EA patients, and 17,159 representative controls, all of European ancestry, genotyped on Illumina high-density SNP arrays, collected from four separate studies within North America, Europe, and Australia. Meta-analysis was conducted using the fixed-effects inverse variance-weighting approach. We used the standard genome-wide significance threshold of 5×10^-8 for this study. We also conducted an association analysis following re-weighting of loci using an approach that investigates annotation enrichment among the genome-wide significant loci. The entire GWAS data set was also analyzed using bioinformatics approaches, including functional annotation databases as well as gene-based and pathway-based methods, in order to identify pathophysiologically relevant cellular pathways. Findings: We identified eight new associated risk loci for BE and EA, within or near the CFTR (rs17451754, P=4·8×10^-10), MSRA (rs17749155, P=5·2×10^-10), BLK (rs10108511, P=2·1×10^-9), KHDRBS2 (rs62423175, P=3·0×10^-9), TPPP/CEP72 (rs9918259, P=3·2×10^-9), TMOD1 (rs7852462, P=1·5×10^-8), SATB2 (rs139606545, P=2·0×10^-8), and HTR3C/ABCC5 (rs9823696, P=1·6×10^-8) genes. A further novel risk locus at LPA (rs12207195, posterior probability=0·925) was identified after re-weighting using significantly enriched annotations. This study thereby doubled the number of known risk loci. The strongest disease pathways identified (P<10^-6) belong to muscle cell differentiation and to mesenchyme development/differentiation, which fit with current pathophysiological concepts of BE/EA. To our knowledge, this study identified for the first time an EA-specific association (rs9823696, P=1·6×10^-8) near HTR3C/ABCC5 which is independent of BE development (P=0·45). Interpretation: The identified disease loci and pathways reveal new insights into the etiology of BE and EA. Furthermore, the EA-specific association at HTR3C/ABCC5 may constitute a novel genetic marker for the prediction of transition from BE to EA. Mutations in CFTR, one of the new risk loci identified in this study, cause cystic fibrosis (CF), the most common recessive disorder in Europeans. Gastroesophageal reflux (GER) belongs to the phenotypic CF spectrum and represents the main risk factor for BE/EA. Thus, the CFTR locus may trigger a common GER-mediated pathophysiology.
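The fixed-effects inverse variance-weighting approach used for the meta-analysis can be sketched as follows; this is the textbook formulation, not the consortium's actual pipeline:

```python
import math

def fixed_effects_meta(betas, ses):
    """Inverse-variance-weighted fixed-effects meta-analysis of
    per-study effect estimates (e.g. GWAS log-odds ratios):
    pooled beta = sum(w_i * beta_i) / sum(w_i), with w_i = 1/se_i^2."""
    weights = [1.0 / se ** 2 for se in ses]
    beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return beta, se, beta / se  # pooled estimate, its SE, z-score
```

Studies with smaller standard errors (larger samples) dominate the pooled estimate, and the pooled z-score is what is compared against the 5×10^-8 genome-wide significance threshold.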
Abstract:
Estimates of HIV prevalence are important for policy in order to establish the health status of a country's population and to evaluate the effectiveness of population-based interventions and campaigns. However, participation rates in testing for surveillance conducted as part of household surveys, on which many of these estimates are based, can be low. HIV positive individuals may be less likely to participate because they fear disclosure, in which case estimates obtained using conventional approaches to deal with missing data, such as imputation-based methods, will be biased. We develop a Heckman-type simultaneous equation approach which accounts for non-ignorable selection, but unlike previous implementations, allows for spatial dependence and does not impose a homogeneous selection process on all respondents. In addition, our framework addresses the issue of separation, where for instance some factors are severely unbalanced and highly predictive of the response, which would ordinarily prevent model convergence. Estimation is carried out within a penalized likelihood framework where smoothing is achieved using a parametrization of the smoothing criterion which makes estimation more stable and efficient. We provide the software for straightforward implementation of the proposed approach, and apply our methodology to estimating national and sub-national HIV prevalence in Swaziland, Zimbabwe and Zambia.
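The classical two-step Heckman correction, a simpler relative of the simultaneous-equation model described above, can be sketched as follows. The data, the first-step selection index, and the function names are illustrative; the paper's actual model additionally handles spatial dependence, heterogeneous selection, and penalized smoothing:

```python
import math
import numpy as np

def inverse_mills(z):
    """phi(z) / Phi(z): the Heckman selection-bias correction term."""
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return pdf / cdf

def heckman_second_step(y, x, selection_index, observed):
    """Second step of the classical Heckman correction: regress the
    observed outcomes on x plus the inverse Mills ratio evaluated at
    the first-step (probit) selection index."""
    mask = np.asarray(observed, dtype=bool)
    lam = np.array([inverse_mills(z) for z in np.asarray(selection_index)[mask]])
    design = np.column_stack([np.ones(mask.sum()), np.asarray(x)[mask], lam])
    coef, *_ = np.linalg.lstsq(design, np.asarray(y)[mask], rcond=None)
    return coef  # [intercept, slope, coefficient on the Mills ratio]
```

A significant coefficient on the Mills-ratio term is evidence that the missingness (here, refusal to test) is non-ignorable, which is exactly the concern with HIV-positive individuals declining to participate.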
Abstract:
Face-to-face interviews are a fundamental research tool in qualitative research. Whilst this form of data collection can provide many valuable insights, it can often fall short of providing a complete picture of a research subject's experiences. Point of view (PoV) interviewing is an elicitation technique used in the social sciences as a means of enriching data obtained from research interviews. Recording research subjects' first-person perspectives, for example by wearing digital video glasses, can afford deeper insights into their experiences. PoV interviewing can make visible what is difficult to verbalize, and it relies less on memory than the traditional interview. The use of such relatively inexpensive technology is gaining interest in health profession educational research and pedagogy, such as dynamic simulation-based learning and research activities. In this interview, Dr Gerry Gormley (a medical education researcher) talks to Dr Jonathan Skinner (an anthropologist with an interest in PoV interviewing), exploring some of the many crossover implications of PoV interviewing for medical education research and practice.
Abstract:
BACKGROUND AND OBJECTIVE: The main difficulty of PCR-based clonality studies for B-cell lymphoproliferative disorders (B-LPD) is discrimination between monoclonal and polyclonal PCR products, especially when there is a high background of polyclonal B cells in the tumor sample. In practice, PCR-based methods for clonality assessment require additional analysis of the PCR products in order to discern between monoclonal and polyclonal samples. Heteroduplex analysis represents an attractive approach since it is easy to perform and avoids the use of radioactive substrates or expensive equipment. DESIGN AND METHODS: We studied the sensitivity and specificity of heteroduplex PCR analysis for detecting monoclonality in samples from 90 B-cell non-Hodgkin's lymphoma (B-NHL) patients and in 28 individuals without neoplastic B-cell disorders (negative controls). Furthermore, in 42 B-NHL cases and in the same 28 negative controls, we compared heteroduplex analysis versus the classical PCR technique. We also compared ethidium bromide (EtBr) vs. silver nitrate (AgNO(3)) staining, as well as agarose vs. polyacrylamide gel electrophoresis (PAGE). RESULTS: Using two pairs of consensus primers targeting VH (FR3 and FR2) and JH, 91% of B-NHL samples displayed monoclonal products after heteroduplex PCR analysis using PAGE and AgNO(3) staining. Moreover, no polyclonal sample showed a monoclonal PCR product. By contrast, false positive results were obtained when using agarose (5/28) and PAGE without heteroduplex analysis: 2/28 and 8/28 with EtBr and AgNO(3) staining, respectively. In addition, false negative results only appeared with EtBr staining: 13/42 in agarose, 4/42 in PAGE without heteroduplex analysis and 7/42 in PAGE after heteroduplex analysis.
INTERPRETATION AND CONCLUSIONS: We conclude that AgNO(3) stained PAGE after heteroduplex analysis is the most suitable strategy for detecting monoclonal rearrangements in B-NHL samples because it does not produce false-positive results and the risk of false-negative results is very low.
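The comparison underlying this conclusion reduces to simple proportions. A minimal sketch, using the approximate counts implied by the abstract for the AgNO(3)-stained PAGE protocol after heteroduplex analysis (roughly 82 of 90 B-NHL samples detected, and none of the 28 negative controls called monoclonal); the exact counts are an assumption derived from the reported 91%:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec

# Approximate counts implied by the abstract (hypothetical exact values):
sens, spec = sensitivity_specificity(tp=82, fn=8, tn=28, fp=0)
```

With zero false positives, specificity is 1.0, which is why the authors emphasize the absence of false-positive results over the modest false-negative risk.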
Abstract:
Enabling natural human-robot interaction using computer vision based applications requires fast and accurate hand detection. However, previous works in this field assume different constraints, like a limitation in the number of detected gestures, because hands are highly complex objects that are difficult to locate. This paper presents an approach which integrates temporal coherence cues and wrist-based hand detection using a cascade classifier. With this approach, we introduce three main contributions: (1) a transparent initialization mechanism without user participation for segmenting hands independently of their gesture, (2) a larger number of detected gestures as well as a faster training phase than previous cascade classifier based methods, and (3) near real-time performance for hand pose detection in video streams.
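The temporal coherence cue in contribution (1) can be illustrated with a simple frame-to-frame smoother that stabilizes detected bounding boxes and bridges missed detections; this is a generic sketch of the idea, not the paper's actual mechanism:

```python
def smooth_detections(frames, alpha=0.6):
    """Temporal-coherence cue: exponentially smooth the detected hand
    bounding box (x, y, w, h) across frames, and reuse the previous
    box when the per-frame detector misses (box is None)."""
    smoothed, prev = [], None
    for box in frames:
        if box is None:
            smoothed.append(prev)  # bridge a missed detection
            continue
        if prev is None:
            prev = box  # first detection initializes the track
        else:
            prev = tuple(alpha * b + (1.0 - alpha) * p
                         for b, p in zip(box, prev))
        smoothed.append(prev)
    return smoothed
```

Coupling a per-frame detector with even this simple temporal filter reduces jitter and tolerates occasional detector failures, which is what makes near real-time hand tracking in video streams practical.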
Abstract:
BACKGROUND: Post-abortion contraceptive use in India is low and the use of modern methods of contraception is rare, especially in rural areas. This study primarily compares contraceptive use among women whose abortion outcome was assessed in-clinic with women who assessed their abortion outcome at home, in a low-resource, primary health care setting. Moreover, it investigates how background characteristics and abortion service provision influence contraceptive use post-abortion. METHODS: A randomized controlled, non-inferiority trial (RCT) compared clinic follow-up with home-assessment of abortion outcome at 2 weeks post-abortion. Additionally, contraceptive use at 3 months post-abortion was investigated through a cross-sectional follow-up interview with a largely urban sub-sample of women from the RCT. Women seeking abortion with a gestational age of up to 9 weeks who agreed to a 2-week follow-up were included (n = 731). Women with known contraindications to medical abortion, Hb < 85 mg/l, or aged below 18 were excluded. Data were collected between April 2013 and August 2014 in six primary health-care clinics in Rajasthan. A computerised random number generator created the randomisation sequence (1:1) in blocks of six. Contraceptive use was measured at 2 weeks among women successfully followed up (n = 623) and at 3 months in the sub-set of women who were included if they were recruited at one of the urban study sites, owned a phone and agreed to a 3-month follow-up (n = 114). RESULTS: There were no differences in contraceptive use or continuation between study groups at 3 months (76% clinic follow-up, 77% home-assessment); however, women in the clinic follow-up group were most likely to adopt a contraceptive method at 2 weeks (62 ± 12%), while women in the home-assessment group were most likely to adopt a method after their next menstruation (60 ± 13%).
Fifty-two per cent of women who initiated a method at 2 weeks chose the 3-month injection or the copper intrauterine device. Only 4% of women preferred sterilization. Caste, educational attainment, and type of residence did not influence contraceptive use. CONCLUSIONS: Simplified follow-up after early medical abortion will not change women's opportunities to access contraception in a low-resource setting, if contraceptive services are provided as intra-abortion services as early as on day one. Women's post-abortion contraceptive use at 3 months is unlikely to be affected by the mode of follow-up after medical abortion, also in a low-resource setting. Clinical guidelines need to encourage intra-abortion contraception, offering the full spectrum of evidence-based methods, especially long-acting reversible methods. TRIAL REGISTRATION: Clinicaltrials.gov NCT01827995.