930 results for Precision-recall analysis
Abstract:
A new online method to analyse water isotopes of speleothem fluid inclusions using a wavelength-scanned cavity ring-down spectroscopy (WS-CRDS) instrument is presented. This novel technique allows us to measure hydrogen and oxygen isotopes simultaneously for a released aliquot of water. To do so, we designed a new, simple line that allows the online water extraction and isotope analysis of speleothem samples. The specificity of the method lies in the fact that the fluid inclusion water is released onto a standard water background, which mainly improves the δD robustness. To saturate the line, a peristaltic pump continuously injects standard water into the line, which is permanently heated to 140 °C and flushed with dry nitrogen gas. This permits instantaneous and complete vaporisation of the standard water, resulting in an artificial water background with well-known δD and δ18O values. The speleothem sample is placed in a copper tube attached to the line and, after system stabilisation, it is crushed using a simple hydraulic device to liberate the speleothem fluid inclusion water. The released water is carried by the nitrogen/standard water gas stream directly to a Picarro L1102-i for isotope determination. To test the accuracy and reproducibility of the line and to measure standard water during speleothem measurements, a syringe injection unit was added to the line. Peak evaluation is done as in gas chromatography to obtain the δD and δ18O isotopic compositions of the measured water aliquots. Precision is better than 1.5 ‰ for δD and 0.4 ‰ for δ18O over an extended range (−210 to 0 ‰ for δD and −27 to 0 ‰ for δ18O), depending primarily on the amount of water released from the speleothem fluid inclusions and secondarily on the isotopic composition of the sample. The results show that WS-CRDS technology is suitable for speleothem fluid inclusion measurements and gives results that are comparable to the isotope ratio mass spectrometry (IRMS) technique.
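As a rough illustration of the kind of peak evaluation described above, the following sketch integrates a crushing-release peak riding on the constant standard-water background and recovers the δ value of the inclusion water by isotope mass balance. This is a minimal sketch, not the authors' code; the function and variable names, the trapezoidal integration, and the assumption of a perfectly constant background are all illustrative assumptions.

```python
import numpy as np

def delta_of_released_water(h2o_ppm, delta_meas, t, bg_h2o_ppm, bg_delta):
    """Recover the delta value (per mil) of water released during a crushing peak.

    h2o_ppm    : measured water mixing ratio time series (background + peak)
    delta_meas : measured delta time series of the mixed gas stream (per mil)
    t          : time axis (s)
    bg_h2o_ppm, bg_delta : amount and delta of the standard-water background

    To a very good approximation the amount-weighted delta of the mixture is the
    weighted mean of background and peak contributions, so the peak delta follows
    from subtracting the background contribution from the integrals.
    """
    total = np.trapz(h2o_ppm * delta_meas, t)      # amount-weighted delta integral
    amount = np.trapz(h2o_ppm, t)                  # total water over the window
    bg_amount = bg_h2o_ppm * (t[-1] - t[0])        # background water over the window
    peak_amount = amount - bg_amount               # water released by crushing
    peak_delta = (total - bg_amount * bg_delta) / peak_amount
    return peak_amount, peak_delta
```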
Abstract:
OBJECTIVE The cost-effectiveness of cast nonprecious frameworks has increased their prevalence in cemented implant crowns. The purpose of this study was to assess the effect of the design and height of the retentive component of a standard titanium implant abutment on the fit, possible horizontal rotation and retention forces of cast nonprecious alloy crowns prior to cementation. MATERIALS AND METHODS Two abutment designs were examined: Type A with a 6° taper and 8 antirotation planes (Straumann Tissue-Level RN) and Type B with a 7.5° taper and 1 antirotation plane (SICace implant). Both types were analyzed using 60 crowns: 20 with a full abutment height (6 mm), 20 with a medium abutment height (4 mm), and 20 with a minimal (2.5 mm) abutment height. The marginal and internal fit and the degree of possible rotation were evaluated using polyvinylsiloxane impressions under a light microscope (magnification of ×50). To measure the retention force, a custom force-measuring device was employed. STATISTICAL ANALYSIS One-sided Wilcoxon rank-sum tests with Bonferroni-Holm corrections, Fisher's exact tests, and Spearman's rank correlation coefficient were used. RESULTS Type A exhibited increased marginal gaps (primary end-point: 55 ± 20 μm vs. 138 ± 59 μm, P < 0.001) but less rotation (P < 0.001) than Type B. The internal fit was also better for Type A than for Type B (P < 0.001). The retention force of Type A (2.49 ± 3.2 N) was higher (P = 0.019) than that of Type B (1.27 ± 0.84 N). Reduction in abutment height did not affect the variables observed. CONCLUSION Less-tapered abutments with more antirotation planes increase the retention force and limit horizontal rotation, but widen the marginal gaps of the crowns. Thus, casting of nonprecious crowns with Type A abutments may result in clinically unfavorable marginal gaps.
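To make the statistical workflow above concrete, here is a minimal sketch (with fabricated retention-force data, not the study's measurements) of one-sided Wilcoxon rank-sum (Mann-Whitney U) tests per abutment height followed by a Bonferroni-Holm correction, using SciPy and statsmodels.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
heights = ["6 mm", "4 mm", "2.5 mm"]
p_values = []
for h in heights:
    # hypothetical retention forces (N) for 20 crowns per abutment type
    type_a = rng.normal(2.5, 1.0, 20)
    type_b = rng.normal(1.3, 0.8, 20)
    # one-sided Wilcoxon rank-sum test: is Type A retention greater than Type B?
    _, p = mannwhitneyu(type_a, type_b, alternative="greater")
    p_values.append(p)

# Bonferroni-Holm correction across the three height comparisons
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="holm")
for h, p, pa, r in zip(heights, p_values, p_adj, reject):
    print(f"{h}: raw p = {p:.4f}, Holm-adjusted p = {pa:.4f}, significant = {r}")
```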
Abstract:
Recurrent wheezing or asthma is a common problem in children that has increased considerably in prevalence in the past few decades. The causes and underlying mechanisms are poorly understood, and it is thought that a number of distinct diseases causing similar symptoms are involved. Due to the lack of a biologically founded classification system, children are classified according to their observed disease-related features (symptoms, signs, measurements) into phenotypes. The objectives of this PhD project were a) to develop tools for analysing phenotypic variation of a disease, and b) to examine phenotypic variability of wheezing among children by applying these tools to existing epidemiological data. A combination of graphical methods (multivariate correspondence analysis) and statistical models (latent variable models) was used. In a first phase, a model for discrete variability (latent class model) was applied to data on symptoms and measurements from an epidemiological study to identify distinct phenotypes of wheezing. In a second phase, the modelling framework was expanded to include continuous variability (e.g. along a severity gradient) and combinations of discrete and continuous variability (factor models and factor mixture models). The third phase focused on validating the methods using simulation studies. The main body of this thesis consists of 5 articles (3 published, 1 submitted and 1 to be submitted) including applications, methodological contributions and a review. The main findings and contributions were: 1) The application of a latent class model to epidemiological data (symptoms and physiological measurements) yielded plausible phenotypes of wheezing with distinguishing characteristics that have previously been used as phenotype-defining characteristics. 2) A method was proposed for including responses to conditional questions (e.g. questions on severity or triggers of wheezing asked only of children with wheeze) in multivariate modelling. 3) A panel of clinicians was set up to agree on a plausible model for wheezing diseases. The model can be used to generate datasets for testing the modelling approach. 4) A critical review of methods for defining and validating phenotypes of wheeze in children was conducted. 5) The simulation studies showed that a parsimonious parameterisation of the models is required to identify the true underlying structure of the data. The developed approach can deal with some challenges of real-life cohort data such as variables of mixed mode (continuous and categorical), missing data and conditional questions. If carefully applied, the approach can be used to identify whether the underlying phenotypic variation is discrete (classes), continuous (factors) or a combination of these. These methods could help improve the precision of research into causes and mechanisms and contribute to the development of a new classification of wheezing disorders in children and of other diseases which are difficult to classify.
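As an illustration of the latent class component of the modelling framework described above, the sketch below fits a latent class model to binary symptom indicators with a basic EM algorithm. The number of classes, the symptom set, and the simulated data are illustrative assumptions and are not taken from the thesis.

```python
import numpy as np

def fit_latent_class(X, n_classes=2, n_iter=200, seed=0):
    """EM for a latent class model of binary items (rows: children, cols: symptoms)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)               # class probabilities
    theta = rng.uniform(0.25, 0.75, size=(n_classes, d))   # P(symptom | class)
    for _ in range(n_iter):
        # E-step: responsibility of each class for each child
        log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
        log_lik -= log_lik.max(axis=1, keepdims=True)
        resp = np.exp(log_lik)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update class sizes and conditional symptom probabilities
        pi = resp.mean(axis=0)
        theta = (resp.T @ X) / resp.sum(axis=0)[:, None]
        theta = np.clip(theta, 1e-6, 1 - 1e-6)
    return pi, theta, resp

# hypothetical data: 300 children with frequent symptoms and 300 with rare symptoms,
# four binary indicators (e.g. wheeze, night cough, atopy, exercise trigger)
rng = np.random.default_rng(1)
X = np.vstack([
    (rng.random((300, 4)) < [0.8, 0.7, 0.6, 0.5]).astype(float),
    (rng.random((300, 4)) < [0.3, 0.2, 0.1, 0.2]).astype(float),
])
pi, theta, resp = fit_latent_class(X, n_classes=2)
print("class sizes:", np.round(pi, 2))
print("P(symptom | class):\n", np.round(theta, 2))
```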
Abstract:
The development and improvement of MC-ICP-MS instruments have fueled the growth of Lu–Hf geochronology over the last two decades, but some limitations remain. Here, we present improvements in chemical separation and mass spectrometry that allow accurate and precise measurements of 176Hf/177Hf and 176Lu/177Hf in high-Lu/Hf samples (e.g., garnet and apatite), as well as for samples containing sub-nanogram quantities of Hf. When such samples are spiked, correcting for the isobaric interference of 176Lu on 176Hf is not always possible if the separation of Lu and Hf is insufficient. To improve the purification of Hf, the high field strength elements (HFSE, including Hf) are first separated from the rare earth elements (REE, including Lu) on a first-stage cation column modified after Patchett and Tatsumoto (Contrib. Mineral. Petrol., 1980, 75, 263–267). Hafnium is further purified on an Ln-Spec column adapted from the procedures of Münker et al. (Geochem., Geophys., Geosyst., 2001, DOI: 10.1029/2001gc000183) and Wimpenny et al. (Anal. Chem., 2013, 85, 11258–11264), typically resulting in Lu/Hf < 0.0001, Zr/Hf < 1, and Ti/Hf < 0.1. In addition, Sm–Nd and Rb–Sr separations can easily be added to the described two-stage ion-exchange procedure for Lu–Hf. The isotopic compositions are measured on a Thermo Scientific Neptune Plus MC-ICP-MS equipped with three 10^12 Ω resistors. Multiple 176Hf/177Hf measurements of international reference rocks yield a precision of 5–20 ppm for solutions containing 40 ppb of Hf, and 50–180 ppm for 1 ppb solutions (= 0.5 ng sample Hf in 0.5 ml). The routine analysis of sub-ng amounts of Hf will facilitate Lu–Hf dating of low-concentration samples.
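For readers unfamiliar with the interference correction mentioned above, the sketch below shows the usual arithmetic: the 176Lu contribution to the mass-176 beam is estimated from the monitored 175Lu signal, and the resulting 176Hf/177Hf is mass-bias corrected with the exponential law normalised to 179Hf/177Hf = 0.7325. The beam intensities are invented, the Yb interference is ignored, and the reference values (natural 176Lu/175Lu ≈ 0.02656 and the atomic masses) are commonly used values quoted here as assumptions rather than taken from this paper.

```python
import math

# approximate atomic masses (u) and commonly used reference ratios (assumptions)
M175, M176, M177, M179 = 174.9408, 175.9414, 176.9432, 178.9458
R179_177_TRUE = 0.7325        # accepted 179Hf/177Hf used for mass-bias normalisation
R176_175_LU = 0.02656         # natural 176Lu/175Lu

def corrected_176hf_177hf(i175, i176, i177, i179):
    """Lu-interference- and mass-bias-corrected 176Hf/177Hf from raw beam intensities (V)."""
    # exponential-law mass-bias factor from the measured 179Hf/177Hf
    beta = math.log(R179_177_TRUE / (i179 / i177)) / math.log(M179 / M177)
    # expected measured 176Lu/175Lu after instrumental fractionation
    r176_175_meas = R176_175_LU * (M176 / M175) ** (-beta)
    i176_hf = i176 - i175 * r176_175_meas          # strip the 176Lu contribution
    r176_177_meas = i176_hf / i177
    return r176_177_meas * (M176 / M177) ** beta   # mass-bias-corrected ratio

# invented beam intensities for a well-separated (low-Lu) Hf cut
print(corrected_176hf_177hf(i175=0.0005, i176=1.471, i177=5.25, i179=3.92))
```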
Abstract:
The lexical items like and well can serve as discourse markers (DMs), but can also play numerous other roles, such as verb or adverb. Identifying the occurrences that function as DMs is an important step for language understanding by computers. In this study, automatic classifiers using lexical, prosodic/positional and sociolinguistic features are trained over transcribed dialogues, manually annotated with DM information. The resulting classifiers improve state-of-the-art performance of DM identification, at about 90% recall and 79% precision for like (84.5% accuracy, κ = 0.69), and 99% recall and 98% precision for well (97.5% accuracy, κ = 0.88). Automatic feature analysis shows that lexical collocations are the most reliable indicators, followed by prosodic/positional features, while sociolinguistic features are marginally useful for the identification of DM like and not useful for well. The differentiated processing of each type of DM improves classification accuracy, suggesting that these types should be treated individually.
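Since the abstract reports recall, precision, accuracy and κ for each marker, the sketch below shows how such figures are computed for a binary DM/non-DM classification with scikit-learn; the label vectors are fabricated for illustration.

```python
from sklearn.metrics import precision_score, recall_score, accuracy_score, cohen_kappa_score

# hypothetical gold-standard and predicted labels for occurrences of "like"
# (1 = functions as a discourse marker, 0 = other use such as verb or adverb)
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0]

print("precision:", precision_score(y_true, y_pred))   # of predicted DMs, how many are DMs
print("recall:   ", recall_score(y_true, y_pred))       # of true DMs, how many were found
print("accuracy: ", accuracy_score(y_true, y_pred))
print("kappa:    ", cohen_kappa_score(y_true, y_pred))  # agreement corrected for chance
```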
Abstract:
The isotope composition of selenium (Se) can provide important constraints on biological, geochemical, and cosmochemical processes taking place in different reservoirs on Earth and during planet formation. To provide precise qualitative and quantitative information on these processes, accurate and highly precise isotope data need to be obtained. The currently applied ICP-MS methods for Se isotope measurements are compromised by the necessity to perform a large number of interference corrections. Differences in these correction methods can lead to discrepancies between published Se isotope values of rock standards that are significantly larger than the claimed precision. An independent analytical approach applying a double spike (DS) and state-of-the-art TIMS may yield better precision owing to its smaller number of interferences and could test the accuracy of data obtained by ICP-MS approaches. This study shows that the precision of Se isotope measurements performed with two different Thermo Scientific™ Triton™ Plus TIMS instruments is distinctly deteriorated, to about ±1 ‰ (2 s.d.) in δ80/78Se, by a memory Se signal of up to several millivolts and by an additional minor residual mass bias that could not be corrected for with the common isotope fractionation laws. This memory Se has a variable isotope composition with a DS fraction of up to 20% and accumulates with an increasing number of measurements. It thus represents an accumulation of Se from previous Se measurements with a potential addition from a sample or machine blank. Several techniques for cleaning the mass spectrometer parts were tried to decrease the memory signal, but none was sufficient to allow precise Se isotope analysis. If these serious memory problems can be overcome in the future, the precision and accuracy of Se isotope analysis with TIMS should be significantly better than those of the current ICP-MS approaches.
Abstract:
In situ and simultaneous measurement of the three most abundant isotopologues of methane using mid-infrared laser absorption spectroscopy is demonstrated. A field-deployable, autonomous platform is realized by coupling a compact quantum cascade laser absorption spectrometer (QCLAS) to a preconcentration unit, called a trace gas extractor (TREX). This unit enhances CH4 mole fractions by a factor of up to 500 above ambient levels and quantitatively separates interfering trace gases such as N2O and CO2. The analytical precision of the QCLAS isotope measurement on the preconcentrated (750 ppm, parts per million, µmol mol−1) methane is 0.1 and 0.5 ‰ for δ13C- and δD-CH4 at 10 min averaging time. Based on repeated measurements of compressed air during a 2-week intercomparison campaign, the repeatability of the TREX–QCLAS was determined to be 0.19 and 1.9 ‰ for δ13C- and δD-CH4, respectively. In this intercomparison campaign the new in situ technique is compared to isotope-ratio mass spectrometry (IRMS) based on glass flask and bag sampling and to real-time CH4 isotope analysis by two commercially available laser spectrometers. Both laser-based analyzers were limited to methane mole fraction and δ13C-CH4 analysis, and only one of them, a cavity ring-down spectrometer, was capable of delivering meaningful data for the isotopic composition. After correcting for scale offsets, the average differences between TREX–QCLAS data and bag/flask sampling–IRMS values are within the extended WMO compatibility goals of 0.2 and 5 ‰ for δ13C- and δD-CH4, respectively. This also demonstrates the potential to improve interlaboratory compatibility based on the analysis of a reference air sample with an accurately determined isotopic composition.
Abstract:
BACKGROUND Panic disorder is characterised by the presence of recurrent unexpected panic attacks, discrete periods of fear or anxiety that have a rapid onset and include symptoms such as racing heart, chest pain, sweating and shaking. Panic disorder is common in the general population, with a lifetime prevalence of 1% to 4%. A previous Cochrane meta-analysis suggested that psychological therapy (either alone or combined with pharmacotherapy) can be chosen as a first-line treatment for panic disorder with or without agoraphobia. However, it is not yet clear whether certain psychological therapies can be considered superior to others. In order to answer this question, in this review we performed a network meta-analysis (NMA), in which we compared eight different forms of psychological therapy and three forms of a control condition. OBJECTIVES To assess the comparative efficacy and acceptability of different psychological therapies and different control conditions for panic disorder, with or without agoraphobia, in adults. SEARCH METHODS We conducted the main searches in the CCDANCTR electronic databases (studies and references registers), all years to 16 March 2015. We conducted complementary searches in PubMed and trials registries. Supplementary searches included reference lists of included studies, citation indexes, personal communication to the authors of all included studies and grey literature searches in OpenSIGLE. We applied no restrictions on date, language or publication status. SELECTION CRITERIA We included all relevant randomised controlled trials (RCTs) focusing on adults with a formal diagnosis of panic disorder with or without agoraphobia. We considered the following psychological therapies: psychoeducation (PE), supportive psychotherapy (SP), physiological therapies (PT), behaviour therapy (BT), cognitive therapy (CT), cognitive behaviour therapy (CBT), third-wave CBT (3W) and psychodynamic therapies (PD). We included both individual and group formats. Therapies had to be administered face-to-face. The comparator interventions considered for this review were: no treatment (NT), wait list (WL) and attention/psychological placebo (APP). For this review we considered four short-term (ST) outcomes (ST-remission, ST-response, ST-dropouts, ST-improvement on a continuous scale) and one long-term (LT) outcome (LT-remission/response). DATA COLLECTION AND ANALYSIS As a first step, we conducted a systematic search of all relevant papers according to the inclusion criteria. For each outcome, we then constructed a treatment network in order to clarify the extent to which each type of therapy and each comparison had been investigated in the available literature. Then, for each available comparison, we conducted a random-effects meta-analysis. Subsequently, we performed a network meta-analysis in order to synthesise the available direct evidence with indirect evidence, and to obtain an overall effect size estimate for each possible pair of therapies in the network. Finally, we calculated a probabilistic ranking of the different psychological therapies and control conditions for each outcome. MAIN RESULTS We identified 1432 references; after screening, we included 60 studies in the final qualitative analyses. Among these, 54 (including 3021 patients) were also included in the quantitative analyses. 
With respect to the analyses for the first of our primary outcomes (short-term remission), the most studied of the included psychological therapies was CBT (32 studies), followed by BT (12 studies), PT (10 studies), CT (three studies), SP (three studies) and PD (two studies). The quality of the evidence for the entire network was found to be low for all outcomes. The quality of the evidence for CBT vs NT, CBT vs SP and CBT vs PD was low to very low, depending on the outcome. The majority of the included studies were at unclear risk of bias with regard to the randomisation process. We found almost half of the included studies to be at high risk of attrition bias and detection bias. We also found selective outcome reporting bias to be present and we strongly suspected publication bias. Finally, we found almost half of the included studies to be at high risk of researcher allegiance bias. Overall, the networks appeared to be well connected, but were generally underpowered to detect any important disagreement between direct and indirect evidence. The results showed the superiority of psychological therapies over the WL condition, although this finding was amplified by evident small study effects (SSE). The NMAs for ST-remission, ST-response and ST-improvement on a continuous scale showed well-replicated evidence in favour of CBT, as well as some sparse but relevant evidence in favour of PD and SP, over other therapies. In terms of ST-dropouts, PD and 3W showed better tolerability over other psychological therapies in the short term. In the long term, CBT and PD showed the highest level of remission/response, suggesting that the effects of these two treatments may be more stable with respect to other psychological therapies. However, all the mentioned differences among active treatments must be interpreted while taking into account that in most cases the effect sizes were small and/or results were imprecise. AUTHORS' CONCLUSIONS There is no high-quality, unequivocal evidence to support one psychological therapy over the others for the treatment of panic disorder with or without agoraphobia in adults. However, the results show that CBT - the most extensively studied among the included psychological therapies - was often superior to other therapies, although the effect size was small and the level of precision was often insufficient or clinically irrelevant. In the only two studies available that explored PD, this treatment showed promising results, although further research is needed in order to better explore the relative efficacy of PD with respect to CBT. Furthermore, PD appeared to be the best tolerated (in terms of ST-dropouts) among psychological treatments. Unexpectedly, we found some evidence in support of the possible viability of non-specific supportive psychotherapy for the treatment of panic disorder; however, the results concerning SP should be interpreted cautiously because of the sparsity of evidence regarding this treatment and, as in the case of PD, further research is needed to explore this issue. Behaviour therapy did not appear to be a valid alternative to CBT as a first-line treatment for patients with panic disorder with or without agoraphobia.
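As background to the pairwise step that feeds such a network meta-analysis, the sketch below pools a few study effect sizes with a DerSimonian-Laird random-effects model. The effect sizes and variances are invented, and the full NMA machinery (combining direct and indirect evidence and ranking treatments) is deliberately not reproduced here.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate for one pairwise comparison."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect (inverse-variance) weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q heterogeneity statistic
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# invented log odds ratios (e.g. CBT vs wait list) and their variances from five studies
pooled, se, tau2 = random_effects_pool([-0.8, -1.1, -0.5, -0.9, -0.7],
                                       [0.10, 0.15, 0.08, 0.20, 0.12])
print(f"pooled effect = {pooled:.2f} +/- {1.96 * se:.2f} (95% CI), tau^2 = {tau2:.3f}")
```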
Abstract:
Measurements of energetic neutral atoms (ENAs) have been extremely successful in providing very important information on the physical processes inside and outside of our heliosphere. For instance, recent Interstellar Boundary Explorer (IBEX) observations have provided new insights into the local interstellar environment and improved measurements of the interstellar He temperature, velocity, and direction of the interstellar flow vector. Since particle collisions are rare and radiation pressure is negligible for these neutrals, gravitational forces mainly determine the trajectories of neutral He atoms. Depending on the distance of an ENA to the source of a gravitational field and its relative speed and direction, this can result in significant deflection and acceleration. In this paper, we investigate the impact of the gravitational effects of Earth, the Moon, and Jupiter on ENA measurements performed in Earth's orbit. The results show that the current analysis of the interstellar neutral parameters by IBEX is not significantly affected by planetary gravitational effects. We further studied whether the helium focusing cones of the Sun and Jupiter could be measured by IBEX and whether these cones could be used as an independent measure of the temperature of interstellar helium.
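To make the gravitational-deflection argument concrete, the sketch below evaluates the standard two-body hyperbolic-flyby formula for a neutral He atom passing a planet with a given impact parameter and asymptotic speed. The numerical inputs are illustrative choices, not values from the paper.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def flyby_deflection(mass_kg, v_inf_m_s, impact_parameter_m):
    """Deflection angle (deg) of a neutral atom on a hyperbolic pass by a body of given mass."""
    mu = G * mass_kg
    e = math.sqrt(1.0 + (impact_parameter_m * v_inf_m_s ** 2 / mu) ** 2)  # eccentricity
    return math.degrees(2.0 * math.asin(1.0 / e))                         # turning angle

# illustrative cases for an interstellar He atom moving at ~25 km/s far from the body
M_EARTH, M_JUPITER = 5.972e24, 1.898e27
print("Earth,   b = 1e7 m:", flyby_deflection(M_EARTH, 25e3, 1e7), "deg")
print("Jupiter, b = 1e9 m:", flyby_deflection(M_JUPITER, 25e3, 1e9), "deg")
```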
Abstract:
Introduction. Food frequency questionnaires (FFQ) are used to study the association between dietary intake and disease. An instructional video may potentially offer a low-cost, practical method of dietary assessment training for participants, thereby reducing recall bias in FFQs. There is little evidence in the literature of the effect of using instructional videos on FFQ-based intake. Objective. This analysis compared the reported energy and macronutrient intake of two groups that were randomized either to watch an instructional video before completing an FFQ or to view the same instructional video after completing the same FFQ. Methods. In the parent study, a diverse group of students, faculty and staff from Houston Community College were randomized to two groups, stratified by ethnicity, and completed an FFQ. The "video before" group watched an instructional video about completing the FFQ prior to answering the FFQ. The "video after" group watched the instructional video after completing the FFQ. The two groups were compared on mean daily energy (kcal/day), fat (g/day), protein (g/day), carbohydrate (g/day) and fiber (g/day) intakes using descriptive statistics and one-way ANOVA. Demographic, height, and weight information was collected. Dietary intakes were adjusted for total energy intake before the comparative analysis. BMI and age were ruled out as potential confounders. Results. There were no significant differences between the two groups in mean daily dietary intakes of energy, total fat, protein, carbohydrates and fiber. However, a pattern of higher energy intake and lower fiber intake was reported in the group that viewed the instructional video before completing the FFQ compared to those who viewed the video after. Discussion. Analysis of the difference between reported intake of energy and macronutrients showed an overall pattern, albeit not statistically significant, of higher intake in the video-before versus the video-after group. Application of instructional videos for dietary assessment may require further research to address the validity of reported dietary intakes in those who are randomized to watch an instructional video before reporting diet compared to a control group that does not view a video.
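The group comparison described above can be sketched as a one-way ANOVA on energy-adjusted intakes. The intake values below are fabricated, and the energy adjustment is reduced to a simple per-1000-kcal scaling purely for illustration; the study's actual adjustment method may differ.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)

# fabricated daily fiber (g/day) and energy (kcal/day) intakes for the two groups
video_before_fiber = rng.normal(16, 5, 60)
video_before_kcal = rng.normal(2100, 450, 60)
video_after_fiber = rng.normal(18, 5, 60)
video_after_kcal = rng.normal(2000, 450, 60)

# crude energy adjustment: express fiber per 1000 kcal before comparing groups
adj_before = video_before_fiber / video_before_kcal * 1000
adj_after = video_after_fiber / video_after_kcal * 1000

f_stat, p_value = f_oneway(adj_before, adj_after)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # with two groups this is equivalent to a t-test
```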
Abstract:
The relative influence of race, income, education, and Food Stamp Program participation/nonparticipation on the food and nutrient intake of 102 fecund women ages 18-45 years in a Florida urban clinic population was assessed using the technique of multiple regression analysis. Study subgroups were defined by race and Food Stamp Program participation status. Education was found to have the greatest influence on food and nutrient intake. Race was the next most influential factor, followed in order by Food Stamp Program participation and income. The combined effect of the four independent variables explained no more than 19 percent of the variance for any of the food and nutrient intake variables. This would indicate that a more complex model of influences is needed if variations in food and nutrient intake are to be fully explained. A socioeconomic questionnaire was administered to investigate other factors of influence. The influence of the mother, frequency and type of restaurant dining, and perceptions of food intake and weight were found to be factors deserving further study. Dietary data were collected using the 24-hour recall and food frequency checklist. Descriptive dietary findings indicated that iron and calcium were nutrients whose adequacy was of concern for all study subgroups. White Food Stamp Program participants had the greatest number of mean nutrient intake values falling below the 1980 Recommended Dietary Allowances (RDAs). When Food Stamp Program participants were contrasted with nonparticipants, mean intakes of six nutrients (kilocalories, calcium, iron, vitamin A, thiamin, and riboflavin) were below the 1980 RDA, compared to five mean nutrient intakes (kilocalories, calcium, iron, thiamin and riboflavin) for the nonparticipants. Use of the Index of Nutritional Quality (INQ), however, revealed that the quality of the diet of Food Stamp Program participants per 1000 kilocalories was adequate with the exception of calcium and iron. Intakes of these nutrients were also not adequate on a 1000-kilocalorie basis for the nonparticipant group. When mean nutrient intakes of the groups were compared using Student's t-test, oleic acid intake was the only significant difference found. Being a nonparticipant in the Food Stamp Program was found to be associated with more frequent consumption of cookies, sweet rolls, doughnuts, and honey. The findings of this study contradict the negative image of the Food Stamp Program participant and emphasize the importance of education.
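A hedged sketch of the multiple-regression step described above: the code regresses a nutrient-intake variable on race, education, income and Food Stamp Program participation and reports R², the analogue of the "no more than 19 percent of the variance" figure in the abstract. All data and codings are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 102  # same sample size as the study; the data themselves are invented

# invented predictors: race (0/1), education (years), income ($/month), FSP participation (0/1)
race = rng.integers(0, 2, n)
education = rng.normal(12, 2, n)
income = rng.normal(900, 300, n)
fsp = rng.integers(0, 2, n)

# invented outcome: daily iron intake (mg), weakly related to education
iron = 9 + 0.4 * education + rng.normal(0, 4, n)

X = sm.add_constant(np.column_stack([race, education, income, fsp]))
model = sm.OLS(iron, X).fit()
print("R^2 =", round(model.rsquared, 3))   # proportion of intake variance explained
print(model.params)                         # intercept and regression coefficients
```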
New methods for quantification and analysis of quantitative real-time polymerase chain reaction data
Abstract:
Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantitation method that has been widely used in the biological and biomedical fields. The currently used methods for PCR data analysis, including the threshold cycle (CT) method and linear and non-linear model fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence is usually inaccurate and can therefore distort results. Here, we propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each pair of consecutive PCR cycles, we subtracted the fluorescence of the earlier cycle from that of the later cycle, transforming the n cycles of raw data into n−1 cycles of differenced data. Linear regression was then applied to the natural logarithm of the transformed data. Finally, amplification efficiencies and initial DNA molecule numbers were calculated for each PCR run. To evaluate this new method, we compared it in terms of accuracy and precision with the original linear regression method using three background corrections: the mean of cycles 1-3, the mean of cycles 3-7, and the minimum. Three criteria, namely threshold identification, max R2, and max slope, were employed to search for target data points. Considering that PCR data are time series data, we also applied linear mixed models. Collectively, when the threshold identification criterion was applied and when the linear mixed model was adopted, the taking-difference linear regression method was superior, as it gave an accurate estimation of the initial DNA amount and a reasonable estimation of PCR amplification efficiencies. When the criteria of max R2 and max slope were used, the original linear regression method gave an accurate estimation of the initial DNA amount. Overall, the taking-difference linear regression method avoids the error of subtracting an unknown background and is thus theoretically more accurate and reliable. This method is easy to perform, and the taking-difference strategy can be extended to all current methods for qPCR data analysis.
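A minimal sketch of the taking-difference idea described above: because a constant background cancels when consecutive-cycle fluorescence readings are subtracted, the logarithm of the differences is linear in cycle number during exponential amplification, with slope ln(E) and an intercept from which the initial signal can be recovered. The window selection and the simulated data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def taking_difference_fit(fluorescence, window):
    """Estimate amplification efficiency E and initial signal F0 from raw qPCR fluorescence.

    The constant background B cancels in consecutive-cycle differences:
        F_i = B + F0 * E**i   =>   d_i = F_{i+1} - F_i = F0 * (E - 1) * E**i
    so ln(d_i) is linear in cycle number i with slope ln(E).
    """
    f = np.asarray(fluorescence, dtype=float)
    diffs = np.diff(f)                       # n cycles of raw data -> n-1 differences
    cycles = np.arange(len(diffs))[window]   # restrict to the exponential-phase window
    slope, intercept = np.polyfit(cycles, np.log(diffs[window]), 1)
    efficiency = np.exp(slope)               # ideally close to 2 (perfect doubling)
    f0 = np.exp(intercept) / (efficiency - 1.0)
    return efficiency, f0

# simulated run: background 50, initial signal 1e-4, efficiency 1.9, 40 cycles (no plateau modelled)
cycles = np.arange(40)
raw = 50 + 1e-4 * 1.9 ** cycles
E, F0 = taking_difference_fit(raw, window=slice(15, 30))
print(f"estimated efficiency = {E:.3f}, estimated initial signal = {F0:.2e}")
```

With these simulated inputs the fit recovers the efficiency of 1.9 and the initial signal of 1e-4 without any explicit background subtraction, which is the point of the taking-difference strategy.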
Abstract:
Bulk chemical fine-grained sediment compositions from southern Victoria Land glacimarine sediments provide significant constraints on the reconstruction of sediment provenance models in McMurdo Sound during Late Cenozoic time. High-resolution (~1 ka) geochemical data were obtained with a non-destructive AVAATECH XRF Core Scanner (XRF-CS) on the 1285 m long ANDRILL McMurdo Ice Shelf Project (MIS) sediment core AND-1B. This data set is complemented by high-precision chemical analyses (XRF and ICP-OES) on discrete samples. Statistical analyses reveal three geochemical facies, which are interpreted to represent the following sources for the sediments recovered in the AND-1B core: 1) local McMurdo Volcanic Group (MVG) rocks, 2) Transantarctic Mountain rocks west of Ross Island (W TAM), and 3) Transantarctic Mountain rocks from more southerly areas (S TAM). In combination with other sediment facies analyses (McKay et al., 2009, doi:10.1130/B26540.1) and provenance scenarios (Talarico and Sandroni, 2009, doi:10.1016/j.gloplacha.2009.04.007), the data indicate that diamictites at the drill site are largely dominated by local sources (MVG). The MVG facies is interpreted to indicate cold polar conditions with dry-based ice, a mixture of MVG and W TAM is interpreted to represent polar conditions, and the S TAM facies is interpreted to represent open-marine conditions. Down-core variations in geochemical facies in the AND-1B core are interpreted to represent five major paleoclimate phases over the past 14 Ma. Cold polar conditions with major MVG influence occur below 1045 mbsf and above 120 mbsf. The rest of the core is characterized by warmer climate conditions with extensive peaks of S TAM influence, interrupted by a section from 525 to 855 mbsf with alternating influences of MVG and W TAM.
Abstract:
The grain size of deep-sea sediments provides an apparently simple proxy for current speed. However, grain size-based proxies may be ambiguous when the size distribution reflects a combination of processes, with current sorting only one of them. In particular, such sediment mixing hinders reconstruction of deep circulation changes associated with ice-rafting events in the glacial North Atlantic because variable ice-rafted detritus (IRD) input may falsely suggest current speed changes. Inverse modeling has been suggested as a way to overcome this problem. However, this approach requires high-precision size measurements that register small changes in the size distribution. Here we show that such data can be obtained using electrosensing and laser diffraction techniques, despite issues previously raised on the low precision of electrosensing methods and potential grain shape effects on laser diffraction. Down-core size patterns obtained from a sediment core from the North Atlantic are similar for both techniques, reinforcing the conclusion that both techniques yield comparable results. However, IRD input leads to a coarsening that spuriously suggests faster current speed. We show that this IRD influence can be accounted for using inverse modeling as long as wide size spectra are taken into account. This yields current speed variations that are in agreement with other proxies. Our experiments thus show that for current speed reconstruction, the choice of instrument is subordinate to a proper recognition of the various processes that determine the size distribution and that by using inverse modeling meaningful current speed reconstructions can be obtained from mixed sediments.
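As a toy version of the unmixing behind the inverse modelling mentioned above, the sketch below decomposes a measured grain-size distribution into non-negative contributions of two assumed end members (a current-sorted sortable-silt component and a coarser IRD-like component) using non-negative least squares. The end-member spectra and the "measurement" are fabricated; real applications estimate end members from the data and work with much wider size spectra.

```python
import numpy as np
from scipy.optimize import nnls

def gaussian_spectrum(size_phi, mode, width):
    """Normalised synthetic grain-size spectrum on a phi scale."""
    g = np.exp(-0.5 * ((size_phi - mode) / width) ** 2)
    return g / g.sum()

phi = np.linspace(0, 10, 101)                          # grain-size classes (phi units)
em_sortable_silt = gaussian_spectrum(phi, 6.0, 0.8)    # current-sorted fine end member
em_ird = gaussian_spectrum(phi, 2.0, 1.2)              # coarse, ice-rafted end member

# fabricated "measured" distribution: 70 % sortable silt + 30 % IRD plus noise
rng = np.random.default_rng(3)
measured = 0.7 * em_sortable_silt + 0.3 * em_ird + rng.normal(0, 1e-4, phi.size)
measured = np.clip(measured, 0, None)

# non-negative least squares unmixing: measured ≈ A @ fractions
A = np.column_stack([em_sortable_silt, em_ird])
fractions, residual = nnls(A, measured)
fractions /= fractions.sum()                           # normalise to proportions
print("sortable silt fraction:", round(fractions[0], 2), "IRD fraction:", round(fractions[1], 2))
```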