34 results for semi-empirical methods
at Université de Lausanne, Switzerland
Abstract:
Background: Despite the increasing number of published assessment frameworks intended to establish "standards" for the quality of qualitative research, research conducted using such empirical methods still faces difficulties in being published or recognised by funding agencies. Methods: We conducted a thematic content analysis of eight frameworks from psychology/psychiatry and general medicine and compared the frameworks and their criteria against each other. Findings: The results illustrate the difficulty of reaching consensus on the definition of quality criteria and show differences between the frameworks both in their underlying epistemology and in the criteria they suggest. Discussion: These differences reflect the diversity of paradigms implicitly referred to by the authors of the frameworks, although rarely mentioned explicitly in the text. We conclude that the increase in qualitative research and publications has not overcome the difficulty of establishing shared criteria, and that the great heterogeneity of concepts raises methodological and epistemological problems.
Abstract:
The n-octanol/water partition coefficient (log Po/w) is a key physicochemical parameter for drug discovery, design, and development. Here, we present a physics-based approach that shows a strong linear correlation between the computed solvation free energy in implicit solvents and the experimental log Po/w on a cleansed data set of more than 17,500 molecules. After internal validation by five-fold cross-validation and data randomization, the predictive power of the most interesting multiple linear model, based solely on two GB/SA parameters, was tested on two different external sets of molecules. On the Martel druglike test set, the predictive power of the best model (N = 706, r = 0.64, MAE = 1.18, and RMSE = 1.40) is similar to that of six well-established empirical methods. On the 17-drug test set, our model outperformed all compared empirical methodologies (N = 17, r = 0.94, MAE = 0.38, and RMSE = 0.52). The physical basis of our original GB/SA approach, together with its predictive capacity, computational efficiency (1 to 2 s per molecule), and tridimensional molecular graphics capability, lays the foundations for a promising predictor, the implicit log P method (iLOGP), to complement the portfolio of drug design tools developed and provided by the SIB Swiss Institute of Bioinformatics.
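As an illustration only (not the authors' iLOGP implementation), the sketch below fits a two-descriptor multiple linear model for log Po/w and checks it with five-fold cross-validation; the descriptor names and the data file are assumptions.

```python
# Minimal sketch, assuming a cleansed training table with two GB/SA-style
# descriptors; column and file names are hypothetical, not from the paper.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

data = pd.read_csv("logp_training_set.csv")      # hypothetical cleansed data set
X = data[["gbsa_solvation_energy", "sasa"]]      # two assumed GB/SA descriptors
y = data["log_po_w"]                             # experimental log Po/w

model = LinearRegression()
print("5-fold CV r^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())

model.fit(X, y)                                  # final model on the full set
print("coefficients:", model.coef_, "intercept:", model.intercept_)
```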
Abstract:
Object The goal of this study was to establish whether clear patterns of initial pain freedom could be identified when treating patients with classic trigeminal neuralgia (TN) by using Gamma Knife surgery (GKS). The authors compared hypesthesia and pain recurrence rates to see if statistically significant differences could be found. Methods Between July 1992 and November 2010, 737 patients presenting with TN underwent GKS and prospective evaluation at Timone University Hospital in Marseille, France. In this study the authors analyzed the cases of 497 of these patients, who had follow-up longer than 1 year, did not have megadolichobasilar artery- or multiple sclerosis-related TN, and underwent GKS only once; in other words, the focus was on cases of classic TN with a single radiosurgical treatment. Radiosurgery was performed with a Leksell Gamma Knife (model B, C, or Perfexion) using both MR and CT imaging targeting. A single 4-mm isocenter was positioned in the cisternal portion of the trigeminal nerve at a median distance of 7.8 mm (range 4.5-14 mm) anterior to the emergence of the nerve. A median maximum dose of 85 Gy (range 70-90 Gy) was delivered. Using empirical methods, assisted by a chart with clear cut-off points in the distribution of time to pain freedom, the authors divided patients who experienced freedom from pain into 3 separate groups: patients who became pain free within the first 48 hours post-GKS; those who became pain free between 48 hours and 30 days post-GKS; and those who became pain free more than 30 days after GKS. Results The median age of the 497 patients was 68.3 years (range 28.1-93.2 years). The median follow-up period was 43.75 months (range 12-174.41 months). Four hundred fifty-four patients (91.34%) were initially pain free within a median time of 10 days (range 1-459 days) after GKS. One hundred sixty-nine patients (37.2%) became pain free within the first 48 hours (Group PF(≤ 48 hours)), 194 patients (42.8%) between posttreatment Day 3 and Day 30 (Group PF(>48 hours, ≤ 30 days)), and 91 patients (20%) after 30 days post-GKS (Group PF(>30 days)). Differences in postoperative hypesthesia were found: in Group PF(≤ 48 hours), 18 patients (13.7%) developed postoperative hypesthesia, compared with 30 patients (19%) in Group PF(>48 hours, ≤ 30 days) and 22 patients (30.6%) in Group PF(>30 days) (p = 0.014). One hundred fifty-seven patients (34.4%) who initially became free from pain experienced a recurrence of pain with a median delay of 24 months (range 0.62-150.06 months). There were no statistically significant differences between the patient groups with respect to pain recurrence: 66 patients (39%) in Group PF(≤ 48 hours) experienced pain recurrence, compared with 71 patients (36.6%) in Group PF(>48 hours, ≤ 30 days) and 27 patients (29.7%) in Group PF(>30 days) (p = 0.515). Conclusions A substantial number of patients (169 cases, 37.2%) became pain free within the first 48 hours. The rate of hypesthesia was higher in patients who became pain free more than 30 days after GKS, with a statistically significant difference between patient groups (p = 0.014).
Abstract:
In occupational exposure assessment of airborne contaminants, exposure levels can either be estimated through repeated measurements of the pollutant concentration in air, expert judgment or through exposure models that use information on the conditions of exposure as input. In this report, we propose an empirical hierarchical Bayesian model to unify these approaches. Prior to any measurement, the hygienist conducts an assessment to generate prior distributions of exposure determinants. Monte-Carlo samples from these distributions feed two level-2 models: a physical, two-compartment model, and a non-parametric, neural network model trained with existing exposure data. The outputs of these two models are weighted according to the expert's assessment of their relevance to yield predictive distributions of the long-term geometric mean and geometric standard deviation of the worker's exposure profile (level-1 model). Bayesian inferences are then drawn iteratively from subsequent measurements of worker exposure. Any traditional decision strategy based on a comparison with occupational exposure limits (e.g. mean exposure, exceedance strategies) can then be applied. Data on 82 workers exposed to 18 contaminants in 14 companies were used to validate the model with cross-validation techniques. A user-friendly program running the model is available upon request.
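The sketch below illustrates only the combination step described above, with stand-in functions for the two level-2 models and assumed prior distributions and weights; it is not the published model.

```python
# Sketch of the model-combination step: Monte Carlo draws of exposure
# determinants feed two level-2 predictors whose outputs are mixed with
# expert-assigned weights. Both predictor functions are toy stand-ins, not
# the published two-compartment or neural-network models.
import numpy as np

rng = np.random.default_rng(0)
n_draws = 10_000

# Prior distributions of exposure determinants elicited from the hygienist (assumed).
emission_rate = rng.lognormal(mean=1.0, sigma=0.5, size=n_draws)   # mg/min
room_volume = rng.uniform(50, 200, size=n_draws)                   # m^3
air_exchange = rng.uniform(1, 10, size=n_draws)                    # 1/h

def physical_model(q, v, n):
    # Toy well-mixed-room concentration, standing in for the two-compartment model.
    return q * 60.0 / (v * n)

def data_driven_model(q, v, n):
    # Stand-in for the neural network trained on existing exposure data.
    return 0.8 * physical_model(q, v, n) + 0.05

w_physical, w_data = 0.6, 0.4        # expert-assigned relevance weights (assumed)
pred = (w_physical * physical_model(emission_rate, room_volume, air_exchange)
        + w_data * data_driven_model(emission_rate, room_volume, air_exchange))

# Predictive estimate of the long-term geometric mean of exposure.
print("GM estimate:", np.exp(np.mean(np.log(pred))))
```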
Multimodel inference and multimodel averaging in empirical modeling of occupational exposure levels.
Abstract:
Empirical modeling of exposure levels has been popular for identifying exposure determinants in occupational hygiene. Traditional data-driven methods used to choose a model on which to base inferences have typically not accounted for the uncertainty linked to the process of selecting the final model. Several new approaches propose making statistical inferences from a set of plausible models rather than from a single model regarded as 'best'. This paper introduces the multimodel averaging approach described in the monograph by Burnham and Anderson. In their approach, a set of plausible models is defined a priori by taking into account the sample size and previous knowledge of variables that influence exposure levels. The Akaike information criterion is then calculated to evaluate the relative support of the data for each model, expressed as an Akaike weight, interpreted as the probability of the model being the best approximating model given the model set. The model weights can then be used to rank models, quantify the evidence favoring one over another, perform multimodel prediction, estimate the relative influence of the potential predictors and estimate multimodel-averaged effects of determinants. The whole approach is illustrated with the analysis of a data set of 1500 volatile organic compound exposure levels collected by the Institute for Work and Health (Lausanne, Switzerland) over 20 years, each concentration having been divided by the relevant Swiss occupational exposure limit and log-transformed before analysis. Multimodel inference represents a promising procedure for modeling exposure levels: it incorporates the notion that several models can be supported by the data and makes it possible to evaluate, to some extent, model selection uncertainty, which is seldom mentioned in current practice.
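For reference, the Akaike weight of model i is w_i = exp(-delta_i/2) / sum_j exp(-delta_j/2), where delta_i = AIC_i - min AIC. The sketch below computes these weights for a few candidate linear models and averages their fitted values; the candidate predictor sets and the data file are assumptions, not the actual Institute data.

```python
# Sketch of Burnham-Anderson multimodel averaging with Akaike weights:
#   delta_i = AIC_i - min(AIC),  w_i = exp(-delta_i/2) / sum_j exp(-delta_j/2)
# Candidate formulas and the data file are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("voc_exposures.csv")    # log-transformed, OEL-standardized levels (assumed)

candidate_models = [
    "log_exposure ~ year + sector",
    "log_exposure ~ year + sector + task",
    "log_exposure ~ year + task",
]

fits = [smf.ols(f, data=data).fit() for f in candidate_models]
aic = np.array([f.aic for f in fits])
delta = aic - aic.min()
weights = np.exp(-delta / 2)
weights /= weights.sum()                   # Akaike weights

for formula, w in zip(candidate_models, weights):
    print(f"{formula}: weight = {w:.3f}")

# Multimodel-averaged fitted values, weighted by the Akaike weights.
averaged_fit = sum(w * f.fittedvalues for w, f in zip(weights, fits))
```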
Abstract:
Ultrasound segmentation is a challenging problem due to the inherent speckle and artifacts such as shadows, attenuation and signal dropout. Existing methods need to include strong priors such as shape priors or analytical intensity models to succeed in the segmentation. However, such priors tend to limit these methods to a specific target or imaging setting, and they are not always applicable to pathological cases. This work introduces a semi-supervised segmentation framework for ultrasound imaging that alleviates the limitations of fully automatic segmentation, that is, it is applicable to any kind of target and imaging setting. Our methodology uses a graph of image patches to represent the ultrasound image and user-assisted initialization with labels, which act as soft priors. The segmentation problem is formulated as a continuous minimum cut problem and solved with an efficient optimization algorithm. We validate our segmentation framework on clinical ultrasound imaging (prostate, fetus, and tumors of the liver and eye). We obtain high similarity agreement with the ground truth provided by medical expert delineations in all applications (94% Dice values on average), and the proposed algorithm performs favorably compared with the literature.
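The sketch below conveys the general idea of patch-graph segmentation with user seeds, using an ordinary discrete s-t minimum cut from networkx as a stand-in for the continuous minimum cut solved in the paper; patch size, similarity measure and seed indices are all assumptions.

```python
# Simplified stand-in: nodes are image patches, edge capacities encode patch
# similarity, user seeds are tied to source/sink terminals, and a discrete
# s-t minimum cut splits the patch graph. The paper instead solves a
# continuous minimum cut; everything below is illustrative only.
import numpy as np
import networkx as nx
from skimage.util import view_as_blocks

image = np.random.rand(64, 64)                       # stand-in for an ultrasound image
patches = view_as_blocks(image, (8, 8)).reshape(-1, 64)
n = len(patches)

G = nx.Graph()
for i in range(n):
    for j in range(i + 1, n):
        w = np.exp(-np.sum((patches[i] - patches[j]) ** 2) / patches.shape[1])
        G.add_edge(i, j, capacity=float(w))

fg_seeds, bg_seeds = [0, 1], [n - 2, n - 1]          # user-provided labels (assumed)
for s in fg_seeds:
    G.add_edge("src", s)                             # no capacity attribute = infinite
for t in bg_seeds:
    G.add_edge(t, "sink")

cut_value, (src_side, sink_side) = nx.minimum_cut(G, "src", "sink")
labels = np.zeros(n, dtype=int)
labels[[i for i in src_side if i != "src"]] = 1      # 1 = foreground patches
print("cut value:", cut_value, "foreground patches:", int(labels.sum()))
```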
Abstract:
This dissertation investigates empirical evidence on the importance and influence of the attractiveness of nations in global competition. The notion of country attractiveness, widely developed in the research areas of international business, tourism and migration, is a multi-dimensional construct measuring the characteristics of a country's market or destination that attract international investors, tourists and migrants. This analytical concept accounts for the mechanism by which potential stakeholders evaluate more attractive countries against certain criteria. In the field of international sport-event bidding, then, do international sport event owners likewise weigh country attractiveness when choosing their event hosts? The dissertation addresses this research question by statistically assessing the effects of country attractiveness on the success of strategies for hosting international sports events. Based on theories of signaling and soft power, country attractiveness is defined and measured along the three dimensions of sustainable development: economic, social, and environmental attractiveness. The thesis then examines the concept of sport-event-hosting strategy and explores multi-level factors affecting success in international sport-event bidding. Drawing on the history of the Olympic Movement from theoretical perspectives, it proposes and tests the hypotheses that the economic, social and environmental attractiveness of a country may be correlated with its bid wins, that is, with the success of its sport-event-hosting strategy. Quantitative analytical methods with various robustness checks are applied to collected data on bidding results for major events in Olympic sports from 1990 to 2012. The results reveal that the owners of international Olympic sports events tend to prefer countries with higher economic, social, and environmental attractiveness. The empirical assessment of this thesis suggests that high country attractiveness can be an essential prerequisite for a city or country to secure in order to bid with an increased chance of success.
Abstract:
This article uses a mixed methods design to investigate the effects of social influence on family formation in a sample of eastern and western German young adults at an early stage of their family formation. Theoretical propositions on the importance of informal interaction for fertility and family behavior are still rarely supported by systematic empirical evidence. Major problems are the correct identification of salient relationships and the comparability of social networks across population subgroups. This article addresses the two issues through a combination of qualitative and quantitative data collection and analysis. In-depth interviewing, network charts, and network grids are used to map individual personal relationships and their influence on family formation decisions. In addition, an analysis of friendship dyads is provided.
Abstract:
OBJECTIVE: The principal aim of this study was to develop a Swiss Food Frequency Questionnaire (FFQ) for the elderly population for use in a study investigating the influence of nutritional factors on bone health. The secondary aim was to assess its validity and both its short-term and long-term reproducibility. DESIGN: A 4-day weighed record (4-d WR) was applied to 51 randomly selected women with a mean age of 80.3 years. Subsequently, a detailed FFQ was developed, cross-validated against a further 44 4-d WRs, and its short-term (1 month, n = 15) and long-term (12 months, n = 14) reproducibility examined. SETTING: French-speaking part of Switzerland. SUBJECTS: The subjects were randomly selected women recruited from the Swiss Evaluation of the Methods of Measurement of Osteoporotic Fracture cohort study. RESULTS: Mean energy intakes estimated by the 4-d WR and the FFQ showed no significant difference [1564.9 kcal (SD 351.1) and 1641.3 kcal (SD 523.2), respectively]. Mean crude nutrient intakes were also similar, with nonsignificant P-values for the differences in intake ranging from 0.13 (potassium) to 0.48 (magnesium). Similar results were found in the reproducibility studies. CONCLUSION: These findings provide evidence that this FFQ adequately estimates nutrient intakes and can be used to rank individuals within distributions of intake in specific populations.
Abstract:
Background: In patients with cervical spine injury, a cervical collar may prevent cervical spine movements but renders tracheal intubation with a standard laryngoscope difficult if not impossible. We hypothesized that, despite the presence of a semi-rigid cervical collar and with the patient's head taped to the trolley, we would be able to intubate all patients with the GlideScope® and its dedicated stylet. Methods: 50 adult patients (ASA 1 or 2, BMI ≤35 kg/m²) scheduled for elective surgical procedures requiring tracheal intubation were included. After standardized induction of general anesthesia and neuromuscular blockade, the neck was immobilized with an appropriately sized semi-rigid Philadelphia Patriot® cervical collar and the head was taped to the trolley. Laryngoscopy was attempted using a Macintosh laryngoscope blade 4 and the modified Cormack-Lehane grade was noted. Subsequently, laryngoscopy with the GlideScope® was graded and followed by oro-tracheal intubation. Results: All patients were successfully intubated with the GlideScope® and its dedicated stylet. The median intubation time was 50 sec [43; 61]. The modified Cormack-Lehane grade was 3 or 4 at direct laryngoscopy. It was significantly reduced with the GlideScope® (p <0.0001), reaching 2a in most patients. Maximal mouth opening was significantly reduced with the cervical collar applied, 4.5 cm [4.5; 5.0] vs. 2.0 cm [1.8; 2.0] (p <0.0001). Conclusions: The GlideScope® allows oro-tracheal intubation in patients whose cervical spine is immobilized by a semi-rigid collar and whose head is taped to the trolley. It furthermore significantly decreases the modified Cormack-Lehane grade.
Abstract:
This review paper reports the consensus of a technical workshop hosted by the European network, NanoImpactNet (NIN). The workshop aimed to review the collective experience of working at the bench with manufactured nanomaterials (MNMs), and to recommend modifications to existing experimental methods and OECD protocols. Current procedures for cleaning glassware are appropriate for most MNMs, although interference with electrodes may occur. Maintaining exposure is more difficult with MNMs compared to conventional chemicals. A metal salt control is recommended for experiments with metallic MNMs that may release free metal ions. Dispersing agents should be avoided, but if they must be used, then natural or synthetic dispersing agents are possible, and dispersion controls essential. Time constraints and technology gaps indicate that full characterisation of test media during ecotoxicity tests is currently not practical. Details of electron microscopy, dark-field microscopy, a range of spectroscopic methods (EDX, XRD, XANES, EXAFS), light scattering techniques (DLS, SLS) and chromatography are discussed. The development of user-friendly software to predict particle behaviour in test media according to DLVO theory is in progress, and simple optical methods are available to estimate the settling behaviour of suspensions during experiments. However, for soil matrices such simple approaches may not be applicable. Alternatively, a Critical Body Residue approach may be taken in which body concentrations in organisms are related to effects, and toxicity thresholds derived. For microbial assays, the cell wall is a formidable barrier to MNMs and end points that rely on the test substance penetrating the cell may be insensitive. Instead, assays based on the cell envelope should be developed for MNMs. In algal growth tests, the abiotic factors that promote particle aggregation in the media (e.g. ionic strength) are also important in providing nutrients, and manipulation of the media to control the dispersion may also inhibit growth. Controls to quantify shading effects, and precise details of lighting regimes, shaking or mixing should be reported in algal tests. Photosynthesis may be more sensitive than traditional growth end points for algae and plants. Tests with invertebrates should consider non-chemical toxicity from particle adherence to the organisms. The use of semi-static exposure methods with fish can reduce the logistical issues of waste water disposal and facilitate aspects of animal husbandry relevant to MNMs. There are concerns that the existing bioaccumulation tests are conceptually flawed for MNMs and that new test(s) are required. In vitro testing strategies, as exemplified by genotoxicity assays, can be modified for MNMs, but the risk of false negatives in some assays is highlighted. In conclusion, most protocols will require some modifications and recommendations are made to aid the researcher at the bench.
Abstract:
Aim Recently developed parametric methods in historical biogeography allow researchers to integrate temporal and palaeogeographical information into the reconstruction of biogeographical scenarios, thus overcoming a known bias of parsimony-based approaches. Here, we compare a parametric method, dispersal-extinction-cladogenesis (DEC), against a parsimony-based method, dispersal-vicariance analysis (DIVA), which does not incorporate branch lengths but accounts for phylogenetic uncertainty through a Bayesian empirical approach (Bayes-DIVA). We analyse the benefits and limitations of each method using the cosmopolitan plant family Sapindaceae as a case study. Location World-wide. Methods Phylogenetic relationships were estimated by Bayesian inference on a large dataset representing generic diversity within Sapindaceae. Lineage divergence times were estimated by penalized likelihood over a sample of trees from the posterior distribution of the phylogeny to account for dating uncertainty in biogeographical reconstructions. We compared biogeographical scenarios between Bayes-DIVA and two different DEC models: one with no geological constraints and another that employed a stratified palaeogeographical model in which dispersal rates were scaled according to area connectivity across four time slices, reflecting the changing continental configuration over the last 110 million years. Results Despite differences in the underlying biogeographical model, Bayes-DIVA and DEC inferred similar biogeographical scenarios. The main differences were: (1) in the timing of dispersal events - which in Bayes-DIVA sometimes conflicts with palaeogeographical information, and (2) in the lower frequency of terminal dispersal events inferred by DEC. Uncertainty in divergence time estimations influenced both the inference of ancestral ranges and the decisiveness with which an area can be assigned to a node. Main conclusions By considering lineage divergence times, the DEC method gives more accurate reconstructions that are in agreement with palaeogeographical evidence. In contrast, Bayes-DIVA showed the highest decisiveness in unequivocally reconstructing ancestral ranges, probably reflecting its ability to integrate phylogenetic uncertainty. Care should be taken in defining the palaeogeographical model in DEC because of the possibility of overestimating the frequency of extinction events, or of inferring ancestral ranges that are outside the extant species ranges, owing to dispersal constraints enforced by the model. The wide-spanning spatial and temporal model proposed here could prove useful for testing large-scale biogeographical patterns in plants.
Abstract:
Segmenting ultrasound images is a challenging problem where standard unsupervised segmentation methods such as the well-known Chan-Vese method fail. We propose in this paper an efficient segmentation method for this class of images. Our proposed algorithm is based on a semi-supervised approach (user labels) and the use of image patches as data features. We also consider the Pearson distance between patches, which has been shown to be robust w.r.t. speckle noise present in ultrasound images. Our results on phantom and clinical data show a very high similarity agreement with the ground truth provided by a medical expert.
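One common definition of the Pearson distance between two patches is one minus their Pearson correlation; the minimal sketch below is illustrative only and is not the paper's full patch-based algorithm.

```python
# Minimal sketch: Pearson distance between two image patches,
# d(p, q) = 1 - corr(p, q), a measure insensitive to affine intensity
# changes and therefore robust to speckle-like variations.
import numpy as np

def pearson_distance(patch_a, patch_b):
    a = patch_a.ravel().astype(float)
    b = patch_b.ravel().astype(float)
    return 1.0 - np.corrcoef(a, b)[0, 1]

p = np.random.rand(7, 7)
q = 1.5 * p + 0.05 * np.random.rand(7, 7)   # rescaled, slightly noisy copy
print(pearson_distance(p, q))               # close to 0 for similar patches
```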
Abstract:
A semisupervised support vector machine is presented for the classification of remote sensing images. The method exploits the wealth of unlabeled samples for regularizing the training kernel representation locally by means of cluster kernels. The method learns a suitable kernel directly from the image and thus avoids assuming a priori signal relations by using a predefined kernel structure. Good results are obtained in image classification examples when few labeled samples are available. The method scales almost linearly with the number of unlabeled samples and provides out-of-sample predictions.
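A simplified way to build such a cluster kernel is sketched below: an RBF kernel on the labeled samples is blended with a kernel counting co-membership in k-means clusters fitted on labeled plus unlabeled pixels, and an SVM is trained on the precomputed kernel. This is a generic illustration on synthetic data, not the exact kernel formulation of the paper.

```python
# Simplified cluster-kernel sketch: blend an RBF kernel on the labeled
# samples with a cluster co-membership kernel learned from labeled and
# unlabeled pixels, then train an SVM on the precomputed kernel.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(40, 5))              # few labeled spectra (synthetic)
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_unlabeled = rng.normal(size=(400, 5))           # abundant unlabeled pixels

# Cluster structure learned from all available samples.
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(
    np.vstack([X_labeled, X_unlabeled]))
z = km.predict(X_labeled)
K_cluster = (z[:, None] == z[None, :]).astype(float)   # 1 if same cluster

K = 0.5 * rbf_kernel(X_labeled) + 0.5 * K_cluster      # blended kernel
clf = SVC(kernel="precomputed").fit(K, y_labeled)
print("training accuracy:", clf.score(K, y_labeled))
```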
Abstract:
Fluvial deposits are a challenge for modelling flow in sub-surface reservoirs. Connectivity and continuity of permeable bodies have a major impact on fluid flow in porous media. Contemporary object-based and multipoint statistics methods face a problem of robust representation of connected structures. An alternative approach to modelling petrophysical properties is based on a machine learning algorithm, Support Vector Regression (SVR). Semi-supervised SVR is able to establish spatial connectivity by taking into account prior knowledge of natural similarities. As a learning algorithm, SVR is robust to noise and captures dependencies from all available data. Semi-supervised SVR applied to a synthetic fluvial reservoir demonstrated robust results that matched the flow performance well.
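As a rough illustration of semi-supervised regression on spatial data (a generic self-training stand-in, not the authors' algorithm), the sketch below fits an SVR on labeled well locations, pseudo-labels unlabeled grid nodes, and refits on the augmented set; all data and parameters are synthetic assumptions.

```python
# Generic self-training stand-in for semi-supervised SVR on spatial data:
# an SVR fitted on labeled well locations pseudo-labels unlabeled grid
# points, and the model is refitted on the augmented set.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_wells = rng.uniform(0, 1, size=(30, 2))                # labeled (x, y) well locations
perm = np.sin(4 * X_wells[:, 0]) + 0.1 * rng.normal(size=30)   # synthetic permeability
X_grid = rng.uniform(0, 1, size=(500, 2))                # unlabeled grid nodes

base = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X_wells, perm)
pseudo = base.predict(X_grid)                            # pseudo-labels for unlabeled nodes

X_aug = np.vstack([X_wells, X_grid])
y_aug = np.concatenate([perm, pseudo])
semi = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X_aug, y_aug)

rmse = float(np.sqrt(np.mean((semi.predict(X_wells) - perm) ** 2)))
print("refit RMSE on wells:", rmse)
```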