914 results for Using Lean tools


Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To better understand the structure of the Patient Assessment of Chronic Illness Care (PACIC) instrument and, more specifically, to test all published validation models using one single data set and appropriate statistical tools. DESIGN: Validation study using data from a cross-sectional survey. PARTICIPANTS: A population-based sample of non-institutionalized adults with diabetes residing in Switzerland (canton of Vaud). MAIN OUTCOME MEASURE: French version of the 20-item PACIC instrument (5-point response scale). We conducted validation analyses using confirmatory factor analysis (CFA). The original five-dimension model and other published models were tested with three types of CFA, based on (i) a Pearson estimator of the variance-covariance matrix, (ii) a polychoric correlation matrix and (iii) likelihood estimation with a multinomial distribution for the manifest variables. All models were assessed using loadings and goodness-of-fit measures. RESULTS: The analytical sample included 406 patients. Mean age was 64.4 years and 59% were men. Medians of item responses varied between 1 and 4 (range 1-5), and the proportion of missing values ranged from 5.7 to 12.3%. Strong floor and ceiling effects were present. Even though loadings of the tested models were relatively high, the only model showing acceptable fit was the 11-item single-dimension model. PACIC was associated with the expected variables of the field. CONCLUSIONS: Our results showed that the model considering 11 items in a single dimension exhibited the best fit for our data. A single score, complementing the consideration of single-item results, might be used instead of the five dimensions usually described.
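The single-dimension finding can be illustrated with a minimal sketch, assuming simulated item responses driven by one latent trait: a principal-axis approximation of one-factor loadings from a Pearson correlation matrix. This is a simplification for illustration, not the authors' CFA estimation procedure.

```python
import numpy as np

def single_factor_loadings(data: np.ndarray) -> np.ndarray:
    """Principal-axis approximation of single-factor loadings:
    the dominant eigenvector of the Pearson correlation matrix,
    scaled by the square root of its eigenvalue."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)       # ascending order
    lead = eigvecs[:, -1] * np.sqrt(eigvals[-1])  # dominant factor
    return np.abs(lead)                           # sign is arbitrary

# Simulated responses: 406 patients x 11 items, one latent trait
rng = np.random.default_rng(0)
latent = rng.normal(size=(406, 1))
items = 0.8 * latent + 0.6 * rng.normal(size=(406, 11))
loadings = single_factor_loadings(items)
print(loadings.round(2))  # all items load substantially on one factor
```

With a single underlying trait, every item loads on the same dimension, mirroring the 11-item single-dimension model favored by the fit statistics.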

Relevance:

30.00%

Publisher:

Abstract:

Ligands and receptors of the TNF superfamily are therapeutically relevant targets in a wide range of human diseases. This chapter describes assays based on ELISA, immunoprecipitation, FACS, and reporter cell lines to monitor interactions of tagged receptors and ligands in both soluble and membrane-bound forms using unified detection techniques. A reporter cell assay that is sensitive to ligand oligomerization can identify ligands with high probability of being active on endogenous receptors. Several assays are also suitable to measure the activity of agonist or antagonist antibodies, or to detect interactions with proteoglycans. Finally, self-interaction of membrane-bound receptors can be evidenced using a FRET-based assay. This panel of methods provides a large degree of flexibility to address questions related to the specificity, activation, or inhibition of TNF-TNF receptor interactions in independent assay systems, but does not substitute for further tests in physiologically relevant conditions.

Relevance:

30.00%

Publisher:

Abstract:

Identification of post-translational modifications of proteins in biological samples often requires access to preanalytical purification and concentration methods. In the purification step, high or low molecular weight substances can be removed by size-exclusion filters, and highly abundant proteins can be removed, or low-abundance proteins enriched, by specific capturing tools. This paper describes the experience and results obtained with a recently emerged and easy-to-use affinity purification kit for enrichment of the low amounts of EPO found in urine and plasma specimens. The kit can be used as a pre-step in the EPO doping control procedure, as an alternative to the commonly used ultrafiltration, for detecting aberrantly glycosylated isoforms. The commercially available affinity purification kit contains small disposable anti-EPO monolith columns (6 µL volume, Ø 7 mm, length 0.15 mm) together with all required buffers. A 24-channel vacuum manifold was used for simultaneous processing of samples. The column concentrated EPO from 20 mL urine down to a 55 µL eluate, a concentration factor of 240 times, while roughly 99.7% of non-relevant urine proteins were removed. The recoveries of Neorecormon (epoetin beta) and the EPO analogues Aranesp and Mircera applied to buffer were high: 76%, 67% and 57%, respectively. The recovery of endogenous EPO from human urine was 65%. High recoveries were also obtained when purifying human, mouse and equine EPO from serum, and human EPO from cerebrospinal fluid. Evaluation with the accredited EPO doping control method based on isoelectric focusing (IEF) showed that the affinity purification procedure did not change the isoform distribution for rhEPO, Aranesp, Mircera or endogenous EPO. The kit should be particularly useful for applications in which it is essential to avoid carry-over effects, a problem commonly encountered with conventional particle-based affinity columns.
The encouraging results with EPO propose that similar affinity monoliths, with the appropriate antibodies, should constitute useful tools for general applications in sample preparation, not only for doping control of EPO and other hormones such as growth hormone and insulin but also for the study of post-translational modifications of other low abundance proteins in biological and clinical research, and for sample preparation prior to in vitro diagnostics.
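As a back-of-the-envelope check, the reported 240-fold concentration factor is consistent with the 20 mL to 55 µL volume reduction scaled by the 65% recovery; this recovery-adjusted reading is an assumption, not stated explicitly in the abstract.

```python
# Volumes and recovery as reported for the EPO affinity column.
v_in_uL = 20_000   # 20 mL urine applied to the column
v_out_uL = 55      # eluate volume
recovery = 0.65    # endogenous EPO recovered from urine

volume_factor = v_in_uL / v_out_uL           # ~364x at 100% recovery
effective_factor = volume_factor * recovery  # recovery-adjusted
print(round(effective_factor))               # ~236, close to the reported ~240x
```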

Relevance:

30.00%

Publisher:

Abstract:

We describe an original case of disseminated infection with Histoplasma capsulatum (Hc) var. duboisii in an African patient with AIDS who had migrated to Switzerland. The diagnosis of histoplasmosis was suggested by direct examination of tissues and confirmed within 24 h with a panfungal polymerase chain reaction assay. The variety duboisii of Hc was established by DNA sequencing of the polymorphic genomic region OLE. Molecular tools allow diagnosis of histoplasmosis within 24 h, which is drastically shorter than culture procedures.

Relevance:

30.00%

Publisher:

Abstract:

MHC-peptide tetramers have become essential tools for T-cell analysis, but few MHC class II tetramers incorporating peptides from human tumor and self-antigens have been developed. Among the limiting factors are the high polymorphism of class II molecules and the low binding capacity of the peptides. Here, we report the generation of molecularly defined tetramers using His-tagged peptides and isolation of folded MHC/peptide monomers by affinity purification. Using this strategy we generated tetramers of DR52b (DRB3*0202), an allele expressed by approximately half of Caucasians, incorporating an epitope from the tumor antigen NY-ESO-1. Molecularly defined tetramers avidly and stably bound to specific CD4(+) T cells with negligible background on nonspecific cells. Using molecularly defined DR52b/NY-ESO-1 tetramers, we could demonstrate that in DR52b(+) cancer patients immunized with a recombinant NY-ESO-1 vaccine, vaccine-induced tetramer-positive cells represent, ex vivo, on average 1 in 5,000 circulating CD4(+) T cells, include central and transitional memory polyfunctional populations, and do not include CD4(+)CD25(+)CD127(-) regulatory T cells. This approach may significantly accelerate the development of reliable MHC class II tetramers to monitor immune responses to tumor and self-antigens.

Relevance:

30.00%

Publisher:

Abstract:

Protein-protein interactions encode the wiring diagram of cellular signaling pathways, and their deregulation underlies a variety of diseases, such as cancer. Inhibiting protein-protein interactions with peptide derivatives is a promising way to develop new biological and therapeutic tools. Here, we develop a general framework to computationally handle hundreds of non-natural amino acid sidechains and predict the effect of inserting them into peptides or proteins. We first generate all structural files (pdb and mol2), as well as parameters and topologies for standard molecular mechanics software (CHARMM and Gromacs). Accurate predictions of rotamer probabilities are provided using a novel combined knowledge- and physics-based strategy. Non-natural sidechains are useful for increasing peptide ligand binding affinity. Our results obtained on non-natural mutants of a BCL9 peptide targeting beta-catenin show very good correlation between predicted and experimental binding free energies, indicating that such predictions can be used to design new inhibitors. Data generated in this work, as well as PyMOL and UCSF Chimera plug-ins for user-friendly visualization of non-natural sidechains, are all available at http://www.swisssidechain.ch. Our results enable researchers to rapidly and efficiently work with hundreds of non-natural sidechains.
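The physics-based half of such a rotamer-probability strategy can be sketched as a Boltzmann weighting of per-rotamer energies; this is an illustrative simplification under assumed energies, not the SwissSideChain implementation.

```python
import numpy as np

def rotamer_probabilities(energies_kcal, temperature=300.0):
    """Boltzmann-weight rotamer energies (kcal/mol) into probabilities:
    lower-energy rotamers receive exponentially higher weight."""
    kT = 0.0019872041 * temperature  # Boltzmann constant, kcal/(mol K)
    e = np.asarray(energies_kcal, dtype=float)
    w = np.exp(-(e - e.min()) / kT)  # shift by minimum for stability
    return w / w.sum()

# Three hypothetical rotamers with relative energies in kcal/mol
probs = rotamer_probabilities([0.0, 0.5, 2.0])
print(probs.round(3))
```

A knowledge-based term would typically multiply these weights by backbone-dependent rotamer frequencies before renormalizing.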

Relevance:

30.00%

Publisher:

Abstract:

The n-octanol/water partition coefficient (log Po/w) is a key physicochemical parameter for drug discovery, design, and development. Here, we present a physics-based approach that shows a strong linear correlation between the computed solvation free energy in implicit solvents and the experimental log Po/w on a cleansed data set of more than 17,500 molecules. After internal validation by five-fold cross-validation and data randomization, the predictive power of the most interesting multiple linear model, based solely on two GB/SA parameters, was tested on two different external sets of molecules. On the Martel druglike test set, the predictive power of the best model (N = 706, r = 0.64, MAE = 1.18, and RMSE = 1.40) is similar to that of six well-established empirical methods. On the 17-drug test set, our model outperformed all compared empirical methodologies (N = 17, r = 0.94, MAE = 0.38, and RMSE = 0.52). The physical basis of our original GB/SA approach, together with its predictive capacity, computational efficiency (1 to 2 s per molecule), and three-dimensional molecular graphics capability, lays the foundations for a promising predictor, the implicit log P method (iLOGP), to complement the portfolio of drug design tools developed and provided by the SIB Swiss Institute of Bioinformatics.
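A two-descriptor multiple linear model with the r/MAE/RMSE metrics quoted above can be sketched as follows; the data are synthetic stand-ins, since the actual GB/SA descriptors and training set are not reproduced here.

```python
import numpy as np

# Synthetic "solvation descriptors" and log P values with a linear signal
rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 2))                 # two GB/SA-style descriptors
logp = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.3 + 0.2 * rng.normal(size=n)

A = np.column_stack([X, np.ones(n)])        # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, logp, rcond=None)
pred = A @ coef

mae = np.mean(np.abs(pred - logp))
rmse = np.sqrt(np.mean((pred - logp) ** 2))
r = np.corrcoef(pred, logp)[0, 1]
print(f"r={r:.2f} MAE={mae:.2f} RMSE={rmse:.2f}")
```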

Relevance:

30.00%

Publisher:

Abstract:

Hazard mapping in mountainous areas at the regional scale has changed greatly since the 1990s thanks to improved digital elevation models (DEM). It is now possible to model slope mass movements and floods with a high level of detail in order to improve geomorphologic mapping. We present examples of regional multi-hazard susceptibility mapping through two Swiss case studies, covering landslides, rockfall, debris flows, snow avalanches and floods, in addition to several original methods and software tools. The aim of these recent developments is to take advantage of the availability of high-resolution DEMs (HRDEM) for better mass movement modeling. Our results indicate a good correspondence between inventories of hazardous zones based on historical events and model predictions. This paper demonstrates that by adapting tools and methods derived from modern technologies, it is possible to obtain reliable documents for land planning purposes over large areas.
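A basic HRDEM-derived product in such susceptibility mapping is the slope map. A minimal finite-difference sketch with numpy (not the authors' software, which is not named here):

```python
import numpy as np

def slope_degrees(dem: np.ndarray, cell_size: float) -> np.ndarray:
    """Slope map (degrees) from a gridded DEM via finite differences."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Tilted plane rising 1 m per metre in x -> uniform 45-degree slope
x = np.arange(50, dtype=float)
dem = np.tile(x, (50, 1))   # elevation equals the x coordinate
slopes = slope_degrees(dem, cell_size=1.0)
print(slopes[25, 25])  # 45.0 in the grid interior
```

Debris-flow and rockfall models build on such rasters (slope, aspect, curvature) computed from the HRDEM grid.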

Relevance:

30.00%

Publisher:

Abstract:

The calculation of elasticity parameters by sonic and ultrasonic wave propagation in saturated soils using Biot's theory requires the following variables: formation density and porosity (ρ, φ); compressional and shear wave velocities (Vp, Vs); fluid density, viscosity and compressibility (ρf, ηf, Kf); and matrix density and compressibility (ρm, Km). The first four parameters can be determined in situ using logging probes. Because fluid and matrix characteristics are not modified during core extraction, they can be obtained through laboratory measurements. All parameters require precise calibrations in various environments and for the specific ranges of values encountered in soils. The slim diameter of boreholes in shallow geophysics and the high cost of petroleum equipment demand the use of specific probes, which usually give only qualitative results. The measurement of density is done with a gamma-gamma probe, and the measurement of the hydrogen index, related to porosity, with a neutron probe. The first step of this work was carried out in synthetic formations in the laboratory using homogeneous media of known density and porosity. To establish borehole corrections, different casings were used. Finally, a comparison between laboratory and in situ data in cored holes of known geometry and casing was performed.
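In the isotropic elastic limit, the moduli follow directly from density and the two wave velocities. A sketch of these standard relations (the full Biot poroelastic treatment, which also involves the fluid and matrix compressibilities above, is more involved):

```python
def elastic_moduli(rho, vp, vs):
    """Bulk and shear moduli (Pa) from density (kg/m^3) and wave
    velocities (m/s), assuming an isotropic, homogeneous medium."""
    g = rho * vs ** 2                            # shear modulus
    k = rho * (vp ** 2 - (4.0 / 3.0) * vs ** 2)  # bulk modulus
    return k, g

# Illustrative values for a water-saturated sand (not from the paper)
k, g = elastic_moduli(rho=2000.0, vp=1800.0, vs=600.0)
print(f"K = {k/1e9:.2f} GPa, G = {g/1e9:.2f} GPa")
```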

Relevance:

30.00%

Publisher:

Abstract:

The aim of this project is to get used to another kind of programming. Until now, I have used very complex programming languages to develop applications or even to program microcontrollers, but the PicoCricket system is evidence that we do not need such complex development tools to build functional devices. The PicoCricket system is a clear example of simple programming that makes devices work the way we programmed them. It offers an easy but effective way to program small devices by simply saying what we want them to do. We cannot implement complex algorithms and mathematical operations, but we can program these devices in a short time. Nowadays, the easier and faster we produce, the more we earn, so the tendency is to develop fast, cheaply and easily, and the PicoCricket system makes that possible.

Relevance:

30.00%

Publisher:

Abstract:

Background: Variable definitions of outcome (Constant score, Simple Shoulder Test [SST]) have been used to assess outcome after shoulder treatment, although none has been accepted as the universal standard. Physicians lack an objective method to reliably assess the activity of their patients in dynamic conditions. Our purpose was to clinically validate the shoulder kinematic scores given by a portable movement analysis device, using the activities of daily living described in the SST as a reference. The secondary objective was to determine whether this device could be used to document the effectiveness of shoulder treatments (for glenohumeral osteoarthritis and rotator cuff disease) and detect early failures. Methods: A clinical trial including 34 patients and a control group of 31 subjects over an observation period of 1 year was set up. Evaluations were made at baseline and 3, 6, and 12 months after surgery by 2 independent observers. Miniature sensors (3-dimensional gyroscopes and accelerometers) allowed kinematic scores to be computed. They were compared with the regular outcome scores: SST; Disabilities of the Arm, Shoulder and Hand; American Shoulder and Elbow Surgeons; and Constant. Results: Good to excellent correlations (0.61-0.80) were found between kinematic and clinical scores. Significant differences were found at each follow-up in comparison with the baseline status for all the kinematic scores (P < .015). The kinematic scores were able to point out abnormal patient outcomes at the first postoperative follow-up. Conclusion: Kinematic scores add information to the regular outcome tools. They offer an effective way to measure the functional performance of patients with shoulder pathology and have the potential to detect early treatment failures. Level of evidence: Level II, Development of Diagnostic Criteria, Diagnostic Study. (C) 2011 Journal of Shoulder and Elbow Surgery Board of Trustees.
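The reported correlations between kinematic and clinical scores are plain Pearson coefficients across patients; an illustrative sketch with made-up paired scores (the study's data are not reproduced here):

```python
import numpy as np

# Hypothetical per-patient scores; the study reports r in the 0.61-0.80 range
kinematic = np.array([55., 62., 48., 70., 66., 58., 73., 60.])
clinical = np.array([50., 65., 45., 72., 60., 55., 78., 63.])
r = np.corrcoef(kinematic, clinical)[0, 1]  # Pearson correlation
print(round(r, 2))
```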

Relevance:

30.00%

Publisher:

Abstract:

The current research project is both a process and impact evaluation of community policing in Switzerland's five major urban areas - Basel, Bern, Geneva, Lausanne, and Zurich. Community policing is both a philosophy and an organizational strategy that promotes a renewed partnership between the police and the community to solve problems of crime and disorder. The process evaluation data on police internal reforms were obtained through semi-structured interviews with key administrators from the five police departments as well as from police internal documents and additional public sources.
The impact evaluation uses official crime records and census statistics as contextual variables as well as Swiss Crime Survey (SCS) data on fear of crime, perceptions of disorder, and public attitudes towards the police as outcome measures. The SCS is a standing survey instrument that has polled residents of the five urban areas repeatedly since the mid-1980s. The process evaluation produced a "Calendar of Action" to create panel data to measure community policing implementation progress over six evaluative dimensions in intervals of five years between 1990 and 2010. The impact evaluation, carried out ex post facto, uses an observational design that analyzes the impact of the different community policing models between matched comparison areas across the five cities. Using ZIP code districts as proxies for urban neighborhoods, geospatial data mining algorithms serve to develop a neighborhood typology in order to match the comparison areas. To this end, both unsupervised and supervised algorithms are used to analyze high-dimensional data on crime, the socio-economic and demographic structure, and the built environment in order to classify urban neighborhoods into clusters of similar type. In a first step, self-organizing maps serve as tools to develop a clustering algorithm that reduces the within-cluster variance in the contextual variables and simultaneously maximizes the between-cluster variance in survey responses. The random forests algorithm then serves to assess the appropriateness of the resulting neighborhood typology and to select the key contextual variables in order to build a parsimonious model that makes a minimum of classification errors. 
Finally, for the impact analysis, propensity score matching methods are used to match the survey respondents of the pretest and posttest samples on age, gender, and their level of education for each neighborhood type identified within each city, before conducting a statistical test of the observed difference in the outcome measures. Moreover, all significant results were subjected to a sensitivity analysis to assess the robustness of these findings in the face of potential bias due to some unobserved covariates. The study finds that over the last fifteen years, all five police departments have undertaken major reforms of their internal organization and operating strategies and forged strategic partnerships in order to implement community policing. The resulting neighborhood typology reduced the within-cluster variance of the contextual variables and accounted for a significant share of the between-cluster variance in the outcome measures prior to treatment, suggesting that geocomputational methods help to balance the observed covariates and hence to reduce threats to the internal validity of an observational design. Finally, the impact analysis revealed that fear of crime dropped significantly over the 2000-2005 period in the neighborhoods in and around the urban centers of Bern and Zurich. These improvements are fairly robust in the face of bias due to some unobserved covariate and covary temporally and spatially with the implementation of community policing. The alternative hypothesis that the observed reductions in fear of crime were at least in part a result of community policing interventions thus appears at least as plausible as the null hypothesis of absolutely no effect, even if the observational design cannot completely rule out selection and regression to the mean as alternative explanations.
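The propensity-score balancing step can be sketched with a small numpy-only logistic fit followed by greedy 1:1 nearest-neighbour matching; this is an illustration on synthetic covariates (age, gender, education), not the study's exact procedure.

```python
import numpy as np

def propensity_scores(X, treated, iters=500, lr=0.1):
    """Estimate P(treated | X) with a logistic regression fitted by
    plain gradient descent (numpy only, for illustration)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))
        w -= lr * X1.T @ (p - treated) / len(X)
    return 1.0 / (1.0 + np.exp(-X1 @ w))

def nn_match(scores, treated):
    """Greedy 1:1 nearest-neighbour match of treated to control units."""
    controls = list(np.flatnonzero(treated == 0))
    pairs = []
    for i in np.flatnonzero(treated == 1):
        j = min(controls, key=lambda c: abs(scores[c] - scores[i]))
        controls.remove(j)          # match without replacement
        pairs.append((i, j))
    return pairs

# Synthetic survey sample: covariates age, gender, education level
rng = np.random.default_rng(2)
n = 200
X = np.column_stack([
    rng.normal(45, 12, n),        # age
    rng.integers(0, 2, n),        # gender
    rng.integers(1, 4, n),        # education level
])
Xs = (X - X.mean(0)) / X.std(0)   # standardize for the fit
treated = (rng.random(n) < 0.3).astype(float)

scores = propensity_scores(Xs, treated)
pairs = nn_match(scores, treated)
print(len(pairs), "matched pairs")
```

Balance on the matched pairs would then be checked before testing outcome differences, as the study does before its sensitivity analysis.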

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Growth is a central process in paediatrics. Weight and height evaluation are therefore routine exams for every child, but in some situations, particularly inflammatory bowel disease (IBD), a wider evaluation of nutritional status needs to be performed. Objectives: To assess the accuracy of bio-impedance analysis (BIA) compared to the gold standard, dual-energy X-ray absorptiometry (DEXA), in estimating percentage body fat (fat mass; FM) and lean body mass (fat-free mass; FFM) in children with inflammatory bowel disease (IBD), and to compare FM and FFM levels between patients with IBD and healthy controls. Methods: Twenty-nine healthy controls (12 females; mean age: 12.7 ± 1.9 years) and 21 patients (11 females; 14.3 ± 1.3 years) were recruited from August 2011 to October 2012 at our institution. BIA was performed in all children and DEXA in patients only. Concordance between BIA and DEXA was assessed using Lin's concordance correlation and the Bland-Altman method. Between-group comparisons were made using analysis of variance adjusting for age. Results: BIA-derived FM% showed good concordance with DEXA-derived values, while BIA-derived FFM% tended to be slightly higher than DEXA-derived values (table). No differences were found between patients and controls regarding body mass index (mean ± SD: 19.3 ± 3.3 vs. 20.1 ± 2.8 kg/m2, respectively; age-adjusted P = 0.08) and FM% (boys: 25.3 ± 10.2 vs. 22.6 ± 7.1%, for patients and controls, respectively; P = 0.20; girls: 28.2 ± 5.7 vs. 26.4 ± 7.7%; P = 0.91). Also, no differences were found regarding FFM% in boys (74.9 ± 10.2 vs. 77.4 ± 7.1%; P = 0.22) or girls (71.8 ± 5.6 vs. 73.5 ± 7.7%; P = 0.85). Conclusion: BIA adequately assesses body composition (FM%) in children with IBD and could advantageously replace DEXA, which is more expensive and less available. No differences in body composition were found between children with IBD and healthy controls.
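Lin's concordance correlation coefficient, used here to quantify BIA-DEXA agreement, can be computed directly from paired measurements; the values below are hypothetical, not the study's data.

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two methods:
    2*cov / (var_x + var_y + (mean_x - mean_y)^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()              # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical BIA- vs DEXA-derived fat mass % for six children
bia = [22.0, 25.5, 30.1, 18.9, 27.3, 24.0]
dexa = [21.5, 26.0, 29.4, 19.8, 26.5, 23.2]
print(round(lin_ccc(bia, dexa), 3))
```

Unlike Pearson's r, the CCC penalizes both location and scale shifts between the two methods, which is why it suits method-agreement studies.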

Relevance:

30.00%

Publisher:

Abstract:

ABSTRACT: In recent years, geotechnologies such as remote and proximal sensing, together with attributes derived from digital terrain elevation models, have proved very useful for describing soil variability. However, these information sources are rarely used together. Therefore, a methodology for assessing and spatializing soil classes using information obtained from remote/proximal sensing, GIS and technical knowledge was applied and evaluated. Two study areas in the State of São Paulo, Brazil, totaling approximately 28,000 ha, were used for this work. First, in one area (area 1), conventional pedological mapping was carried out, and from the soil classes found, patterns were obtained from the following information: a) spectral information (shapes of features and absorption intensity of spectral curves over the 350-2,500 nm range) of soil samples collected at specific points in the area (according to each soil type); b) equations for determining chemical and physical properties of the soil, derived from the relationship between the laboratory results obtained by the conventional method and the spectral data; c) supervised classification of Landsat TM 5 images, in order to detect changes in the size of the soil particles (soil texture); d) the relationship between soil classes and relief attributes. Subsequently, the patterns obtained were applied in area 2 to derive the pedological classification of soils, but within a GIS (ArcGIS). Finally, a conventional pedological map was produced for area 2 and compared with the digital map, i.e., the one obtained only with the predetermined patterns. The proposed methodology achieved 79% accuracy at the first categorical level of the Soil Classification System, 60% accuracy at the second categorical level, and became less useful at categorical level 3 (37% accuracy).
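A minimal stand-in for the supervised spectral classification step is a nearest-centroid classifier on reflectance spectra; the toy spectra and class names below are invented, and the actual study used Landsat TM 5 imagery and richer models.

```python
import numpy as np

def nearest_centroid_fit(spectra, labels):
    """Mean spectrum per soil class, learned from labeled samples."""
    classes = sorted(set(labels))
    mask = np.array(labels)
    return {c: spectra[mask == c].mean(axis=0) for c in classes}

def nearest_centroid_predict(spectra, centroids):
    """Assign each spectrum to the class with the closest mean spectrum."""
    names = list(centroids)
    d = np.stack([np.linalg.norm(spectra - centroids[c], axis=1) for c in names])
    return [names[i] for i in d.argmin(axis=0)]

# Toy reflectance spectra (5 bands) for two hypothetical soil classes
rng = np.random.default_rng(3)
sandy = 0.40 + 0.02 * rng.normal(size=(20, 5))   # brighter spectra
clayey = 0.20 + 0.02 * rng.normal(size=(20, 5))  # darker spectra
spectra = np.vstack([sandy, clayey])
labels = ["sandy"] * 20 + ["clayey"] * 20

centroids = nearest_centroid_fit(spectra, labels)
pred = nearest_centroid_predict(spectra, centroids)
accuracy = np.mean([p == t for p, t in zip(pred, labels)])
print(f"training accuracy: {accuracy:.0%}")
```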

Relevance:

30.00%

Publisher:

Abstract:

Background: Molecular tools may help to uncover closely related and still diverging species from a wide variety of taxa and provide insight into the mechanisms, pace and geography of marine speciation. There is some controversy about the phylogeography and speciation modes of species groups with an Eastern Atlantic-Western Indian Ocean distribution, with previous studies suggesting that older events (Miocene) and/or more recent (Pleistocene) oceanographic processes could have influenced the phylogeny of marine taxa. The spiny lobster genus Palinurus allows for testing among speciation hypotheses, since it has a particular distribution with two groups of three species each in the Northeastern Atlantic (P. elephas, P. mauritanicus and P. charlestoni) and the Southeastern Atlantic and Southwestern Indian Oceans (P. gilchristi, P. delagoae and P. barbarae). In the present study, we obtain a more complete understanding of the phylogenetic relationships among these species through a combined dataset with both nuclear and mitochondrial markers, by testing alternative hypotheses on both the mutation rate and tree topology under the recently developed approximate Bayesian computation (ABC) methods. Results: Our analyses support a North-to-South speciation pattern in Palinurus, with all the South African species forming a monophyletic clade nested within the Northern Hemisphere species. Coalescent-based ABC methods allowed us to reject the previously proposed hypothesis of a Middle Miocene speciation event related to the closure of the Tethyan Seaway. Instead, divergence times obtained for Palinurus species using the combined mtDNA-microsatellite dataset and standard mutation rates for mtDNA agree with known glaciation-related processes occurring during the last 2 million years. Conclusion: The Palinurus speciation pattern is a typical example of a series of rapid speciation events occurring within a group, with very short branches separating different species.
Our results support the hypothesis that recent climate change-related oceanographic processes have influenced the phylogeny of marine taxa, with most Palinurus species originating during the last two million years. The present study highlights the value of new coalescent-based statistical methods such as ABC for testing different speciation hypotheses using molecular data.
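The rejection flavour of approximate Bayesian computation can be sketched in a few lines: draw divergence times from a prior, simulate a summary statistic, and keep the draws whose statistic lands close to the observed value. This is a toy one-parameter model under an assumed mutation rate, not the study's coalescent simulations.

```python
import numpy as np

def rejection_abc(observed_stat, simulate, prior_draw, n_sims=20_000, eps=0.005):
    """Minimal rejection ABC: keep parameter draws whose simulated
    summary statistic falls within eps of the observed one."""
    accepted = []
    for _ in range(n_sims):
        theta = prior_draw()
        if abs(simulate(theta) - observed_stat) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy model: divergence time scales mean pairwise mtDNA distance
rng = np.random.default_rng(4)
mu = 0.01                       # substitutions per site per My (assumed)
true_theta = 2.0                # "true" divergence time, My
observed = 2 * mu * true_theta  # observed mean genetic distance

posterior = rejection_abc(
    observed_stat=observed,
    simulate=lambda t: 2 * mu * t + rng.normal(0, 0.005),  # noisy simulator
    prior_draw=lambda: rng.uniform(0, 20),                 # flat prior, 0-20 My
)
print(f"posterior mean divergence time: {posterior.mean():.2f} My")
```

Model comparison, as used to reject the Tethyan-closure hypothesis, extends this idea by tallying acceptance rates under competing scenarios.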