942 results for Modal interval analysis
Abstract:
BACKGROUND: Normal pregnancy depends on pronounced adaptations in steroid hormone concentrations. Although the understanding of these hormones in pregnancy has improved in recent years, interpretation is hampered by insufficient reference values. Our aim was to establish gestation-specific reference intervals for spot urinary steroid hormone levels in normal singleton pregnancies and at 6 weeks postpartum. METHODS: Cross-sectional multicentre observational study. Women were recruited between 2008 and 2013 at three university hospitals in Switzerland (Bern), Scotland (Glasgow), and Austria (Graz). Spot urine was collected from healthy women undergoing a normal pregnancy (age, 16-45 years; mean, 31 years) attending routine antenatal clinics at gestation weeks 11, 20, and 28 and approximately 6 weeks postpartum. Urinary steroid hormone levels were analysed using gas chromatography-mass spectrometry. Creatinine was also measured by routine analysis and used for normalisation. RESULTS: A reference interval was calculated for each hormone metabolite at each trimester and at 6 weeks postpartum. Changes in these concentrations between trimesters and postpartum were observed for several steroid hormones and followed the changes proposed for index steroid hormones. CONCLUSIONS: Normal gestation-specific reference values for spot urinary steroid hormones throughout pregnancy and the early postpartum period are now available to facilitate clinical management and research approaches to steroid hormone metabolism in pregnancy and the early postpartum period.
Abstract:
Studies have shown that the discriminability of successive time intervals depends on the presentation order of the standard (St) and the comparison (Co) stimuli. Also, this order affects the point of subjective equality. The first effect is here called the standard-position effect (SPE); the latter is known as the time-order error. In the present study, we investigated how these two effects vary across interval types and standard durations, using Hellström’s sensation-weighting model to describe the results and relate them to stimulus comparison mechanisms. In Experiment 1, four modes of interval presentation were used, factorially combining interval type (filled, empty) and sensory modality (auditory, visual). For each mode, two presentation orders (St–Co, Co–St) and two standard durations (100 ms, 1,000 ms) were used; half of the participants received correctness feedback, and half of them did not. The interstimulus interval was 900 ms. The SPEs were negative (i.e., a smaller difference limen for St–Co than for Co–St), except for the filled-auditory and empty-visual 100-ms standards, for which a positive effect was obtained. In Experiment 2, duration discrimination was investigated for filled auditory intervals with four standards between 100 and 1,000 ms, an interstimulus interval of 900 ms, and no feedback. Standard duration interacted with presentation order, here yielding SPEs that were negative for standards of 100 and 1,000 ms, but positive for 215 and 464 ms. Our findings indicate that the SPE can be positive as well as negative, depending on the interval type and standard duration, reflecting the relative weighting of the stimulus information, as is described by the sensation-weighting model.
Abstract:
BACKGROUND Non-steroidal anti-inflammatory drugs (NSAIDs) are the backbone of osteoarthritis pain management. We aimed to assess the effectiveness of different preparations and doses of NSAIDs on osteoarthritis pain in a network meta-analysis. METHODS For this network meta-analysis, we considered randomised trials comparing any of the following interventions: NSAIDs, paracetamol, or placebo, for the treatment of osteoarthritis pain. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) and the reference lists of relevant articles for trials published between Jan 1, 1980, and Feb 24, 2015, with at least 100 patients per group. The prespecified primary and secondary outcomes were pain and physical function, and were extracted in duplicate for up to seven timepoints after the start of treatment. We used an extension of multivariable Bayesian random-effects models for mixed multiple treatment comparisons with a random effect at the level of trials. For the primary analysis, a first-order random walk was used to account for multiple follow-up outcome data within a trial. Preparations that used different total daily doses were considered separately in the analysis. To assess a potential dose-response relation, we used preparation-specific covariates, assuming linearity on log relative dose. FINDINGS We identified 8973 manuscripts from our search, of which 74 randomised trials with a total of 58 556 patients were included in this analysis. 23 nodes concerning seven different NSAIDs or paracetamol at specific daily doses, or placebo, were considered. All preparations, irrespective of dose, improved point estimates of pain symptoms when compared with placebo.
For six interventions (diclofenac 150 mg/day, etoricoxib 30 mg/day, 60 mg/day, and 90 mg/day, and rofecoxib 25 mg/day and 50 mg/day), the probability that the difference from placebo is at or below a prespecified minimum clinically important effect for pain reduction (effect size [ES] -0·37) was at least 95%. Among maximally approved daily doses, diclofenac 150 mg/day (ES -0·57, 95% credibility interval [CrI] -0·69 to -0·46) and etoricoxib 60 mg/day (ES -0·58, -0·73 to -0·43) had the highest probability of being the best intervention, both with 100% probability of reaching the minimum clinically important difference. Treatment effects increased as drug dose increased, but the corresponding tests for a linear dose effect were significant only for celecoxib (p=0·030), diclofenac (p=0·031), and naproxen (p=0·026). We found no evidence that treatment effects varied over the duration of treatment. Model fit was good, and between-trial heterogeneity and inconsistency were low in all analyses. All trials were deemed to have a low risk of bias for blinding of patients. Effect estimates did not change in sensitivity analyses with two additional statistical models and accounting for methodological quality criteria in meta-regression analysis. INTERPRETATION On the basis of the available data, we see no role for single-agent paracetamol for the treatment of patients with osteoarthritis, irrespective of dose. We provide sound evidence that diclofenac 150 mg/day is the most effective NSAID available at present, in terms of improving both pain and function. Nevertheless, in view of the safety profile of these drugs, physicians need to consider our results together with all known safety information when selecting the preparation and dose for individual patients. FUNDING Swiss National Science Foundation (grant number 405340-104762) and Arco Foundation, Switzerland.
Abstract:
BACKGROUND Evidence suggests that EMS-physician-guided cardiopulmonary resuscitation (CPR) in out-of-hospital cardiac arrest (OOHCA) may be associated with improved outcomes, yet randomized controlled trials are not available. The goal of this meta-analysis was to determine the association between EMS-physician- versus paramedic-guided CPR and survival after OOHCA. METHODS AND RESULTS Studies that compared EMS-physician- versus paramedic-guided CPR in OOHCA published until June 2014 were systematically searched in the MEDLINE, EMBASE, and Cochrane databases. All studies were required to contain survival data. Data on study characteristics, methods, and survival outcomes were extracted. A random-effects model was used for the meta-analysis due to a high degree of heterogeneity among the studies (I² = 44%). Return of spontaneous circulation (ROSC), survival to hospital admission, and survival to hospital discharge were the outcome measures. Out of 3,385 potentially eligible studies, 14 met the inclusion criteria. In the pooled analysis (n = 126,829), EMS-physician-guided CPR was associated with significantly improved outcomes compared to paramedic-guided CPR: ROSC 36.2% (95% confidence interval [CI] 31.0-41.7%) vs. 23.4% (95% CI 18.5-29.2%) (pooled odds ratio [OR] 1.89, 95% CI 1.36-2.63, p < 0.001); survival to hospital admission 30.1% (95% CI 24.2-36.7%) vs. 19.2% (95% CI 12.7-28.1%) (pooled OR 1.78, 95% CI 0.97-3.28, p = 0.06); and survival to discharge 15.1% (95% CI 14.6-15.7%) vs. 8.4% (95% CI 8.2-8.5%) (pooled OR 2.03, 95% CI 1.48-2.79, p < 0.001). CONCLUSIONS This systematic review suggests that EMS-physician-guided CPR in out-of-hospital cardiac arrest is associated with improved survival outcomes.
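The random-effects pooling step described here can be sketched with a DerSimonian-Laird estimator. This is a generic illustration of the technique; the three studies' log odds ratios and variances below are invented and are not data from this meta-analysis:

```python
import math

def dersimonian_laird(log_ors, variances):
    """Pool study-level log odds ratios under a random-effects model.

    DerSimonian-Laird: estimate the between-study variance tau^2 from
    Cochran's Q, then pool with weights 1 / (v_i + tau^2).
    """
    w = [1.0 / v for v in variances]
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))
    df = len(log_ors) - 1
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                   # between-study variance
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0   # heterogeneity I^2
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, i2

# Hypothetical log ORs and variances from three studies
pooled, se, i2 = dersimonian_laird([0.1, 0.9, 0.5], [0.04, 0.04, 0.04])
odds_ratio = math.exp(pooled)  # back-transform to the OR scale
```

When tau² > 0 the weights become more equal across studies than in a fixed-effect analysis, which is why a random-effects model is preferred in the presence of heterogeneity such as the I² = 44% reported above.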
Abstract:
All forms of Kaposi sarcoma (KS) are more common in men than in women. It is unknown whether this is due to a higher prevalence of human herpesvirus 8 (HHV-8), the underlying cause of KS, in men compared to women. We did a systematic review and meta-analysis to examine the association between HHV-8 seropositivity and gender in the general population. Studies in selected populations, for example blood donors, hospital patients, and men who have sex with men, were excluded. We searched Medline and Embase from January 1994 to February 2015. We included observational studies that recruited participants from the general population and reported HHV-8 seroprevalence for men and women or boys and girls. We used random-effects meta-analysis to pool odds ratios (OR) for the association between HHV-8 and gender. We used meta-regression to identify effect modifiers, including age, geographical region, and type of HHV-8 antibody test. We included 22 studies with 36,175 participants. Men from sub-Saharan Africa (SSA) (OR 1.21, 95% confidence interval [CI] 1.09-1.34), but not men from elsewhere (OR 0.94, 95% CI 0.83-1.06), were more likely to be HHV-8 seropositive than women (p value for interaction = 0.010). There was no difference in HHV-8 seroprevalence between boys and girls from SSA (OR 0.90, 95% CI 0.72-1.13). The type of HHV-8 assay did not affect the overall results. A higher HHV-8 seroprevalence in men than women in SSA may partially explain why men have a higher KS risk in this region.
Abstract:
INTRODUCTION Although hepatitis C virus (HCV) screening is recommended for all HIV-infected patients initiating antiretroviral therapy, data on the epidemiologic characteristics of HCV infection in resource-limited settings are scarce. METHODS We searched PubMed and EMBASE for studies assessing the prevalence of HCV infection among HIV-infected individuals in Africa and extracted data on the laboratory methods used. Prevalence estimates from individual studies were combined for each country using random-effects meta-analysis. The importance of study design, population, and setting, as well as type of test (anti-HCV antibody tests and polymerase chain reaction), was examined with meta-regression. RESULTS Three randomized controlled trials, 28 cohort studies, and 121 cross-sectional analyses with 108,180 HIV-infected individuals from 35 countries were included. The majority of data came from outpatient populations (55%), followed by blood donors (15%) and pregnant women (14%). Based on estimates from 159 study populations, anti-HCV positivity prevalence ranged between 3.3% (95% confidence interval (CI) 1.8-4.7) in Southern Africa and 42.3% (95% CI 4.1-80.5) in North Africa. Study design, type of setting, and age distribution did not influence this prevalence significantly. The prevalence of replicating HCV infection, estimated from data of 29 cohorts, was 2.0% (95% CI 1.5-2.6). Ten studies from nine countries reported the HCV genotype of 74 samples: 53% were genotype 1, 24% genotype 2, 14% genotype 4, and 9% genotypes 3, 5, or 6. CONCLUSIONS The prevalence of anti-HCV antibodies is high in HIV-infected patients in Africa, but replicating HCV infection is rare and varies widely across countries.
Abstract:
The Data Envelopment Analysis (DEA) efficiency score obtained for an individual firm is a point estimate without any confidence interval around it. In recent years, researchers have resorted to bootstrapping in order to generate empirical distributions of efficiency scores. This procedure assumes that all firms have the same probability of getting an efficiency score from any specified interval within the [0,1] range. We propose a bootstrap procedure that empirically generates the conditional distribution of efficiency for each individual firm given systematic factors that influence its efficiency. Instead of resampling directly from the pooled DEA scores, we first regress these scores on a set of explanatory variables not included at the DEA stage and bootstrap the residuals from this regression. These pseudo-efficiency scores incorporate the systematic effects of unit-specific factors along with the contribution of the randomly drawn residual. Data from the U.S. airline industry are utilized in an empirical application.
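The two-stage procedure described above can be sketched as follows. This is a minimal illustration that takes the DEA scores as given and uses a single explanatory variable with ordinary least squares; the clamping of pseudo-scores to [0, 1] is a simplification of my own, not a detail from the paper:

```python
import random

def conditional_bootstrap(scores, x, n_boot=1000, seed=1):
    """Second-stage residual bootstrap of DEA efficiency scores.

    Regress the scores on an explanatory variable not used at the
    DEA stage, then resample the regression residuals to build a
    firm-specific (conditional) distribution of pseudo-scores.
    """
    rng = random.Random(seed)
    n = len(scores)
    # Ordinary least squares of scores on a single covariate
    mx, my = sum(x) / n, sum(scores) / n
    beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, scores)) / \
        sum((xi - mx) ** 2 for xi in x)
    alpha = my - beta * mx
    fitted = [alpha + beta * xi for xi in x]
    resid = [yi - fi for yi, fi in zip(scores, fitted)]
    # Pseudo-score = systematic part + randomly drawn residual,
    # kept inside the admissible [0, 1] efficiency range
    draws = [[min(1.0, max(0.0, fi + rng.choice(resid))) for fi in fitted]
             for _ in range(n_boot)]
    # One sorted empirical distribution per firm
    return [sorted(d[i] for d in draws) for i in range(n)]

# Hypothetical efficiency scores for four firms and one covariate
dist = conditional_bootstrap([0.6, 0.7, 0.8, 0.9], [1.0, 2.0, 3.0, 4.0],
                             n_boot=200)
```

Percentiles of each firm's sorted pseudo-scores then serve as an empirical confidence interval conditional on that firm's covariate, rather than one interval shared by all firms.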
Abstract:
Hepatocellular carcinoma (HCC) has been ranked as the top cause of death due to neoplasm malignancy in Taiwan for years. The high incidence of HCC in Taiwan is primarily attributed to the high prevalence of hepatitis viral infection. Screening subjects with liver cirrhosis for HCC was widely recommended by many previous studies. The latest practice guideline for the management of HCC, released by the American Association for the Study of Liver Diseases (AASLD) in 2005, recommended that high risk groups, including cirrhotic patients, chronic HBV/HCV carriers, and subjects with a family history of HCC, should undergo surveillance. This study aims to investigate (1) whether the HCC screening program can prolong the survival period of the high risk group, (2) the incremental cost-effectiveness ratio (ICER) of the HCC screening program in Taiwan, as compared with a non-screening strategy from the payer perspective, (3) which high risk group has the lowest ICER for the HCC screening program from the insurer's perspective, in comparison with no screening for each group, and (4) the estimated total cost of providing the HCC screening program to all high risk groups. The high risk subjects in the study were identified from communities with high prevalence of hepatitis viral infection and classified into three groups (cirrhosis group, early cirrhosis group, and no cirrhosis group) at different levels of risk for HCC by status of liver disease at the time of enrollment. Repeated ultrasound screenings at intervals of 3, 6, and 12 months were applied to the cirrhosis group, early cirrhosis group, and no cirrhosis group, respectively. A Markov-based decision model was constructed to simulate the progression of HCC and to estimate the ICER for each group of subjects. The screening group had longer survival in both the statistical results and the model outcomes.
Owing to the low HCC incidence rate in the community-based screening program, screening services had only a limited effect on survival in the screening group. The incremental cost-effectiveness ratio of the HCC screening program was $3,834 per year of life saved compared with the non-screening strategy. The estimated total cost of each group from the screening model over 13.5 years consumes approximately 0.13%, 1.06%, and 0.71% of the total adjusted National Health Expenditure (NHE) from January 1992 to June 2005. Subjects at high risk of developing HCC who underwent repeated ultrasound screenings had longer survival than those without screening, but screening was not the only factor producing longer survival in the screening group. The incremental cost-effectiveness ratio of the 2-stage community-based HCC screening program in Taiwan was small, and the program was worthy of investment. Compared with the early cirrhosis and no cirrhosis groups, the cirrhosis group has the lowest ICER when the screening period is less than 19 years. Providing the HCC screening program to all high risk groups would consume approximately 1.90% of the total adjusted 13.5-year NHE in Taiwan.
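A Markov-based decision model of the kind described can be sketched with a three-state cohort (well / HCC / dead). Every transition probability and cost below is invented for illustration and is not taken from the Taiwanese program:

```python
def run_cohort(p_hcc, p_die_hcc, p_die_other, cycle_cost, n_cycles):
    """One arm of a simple three-state Markov cohort model
    (well -> HCC -> dead). Returns (life-years, cost) per subject."""
    well, hcc, dead = 1.0, 0.0, 0.0
    life_years, cost = 0.0, 0.0
    for _ in range(n_cycles):
        life_years += well + hcc           # person-years lived this cycle
        cost += (well + hcc) * cycle_cost  # costs accrue to the living
        new_hcc = well * p_hcc             # incident HCC cases
        deaths_well = well * p_die_other
        deaths_hcc = hcc * p_die_hcc
        well -= new_hcc + deaths_well
        hcc += new_hcc - deaths_hcc
        dead += deaths_well + deaths_hcc
    return life_years, cost

# Screening arm: higher per-cycle cost, lower HCC mortality because
# tumours are detected earlier (all numbers hypothetical)
ly_s, c_s = run_cohort(0.02, 0.15, 0.01, 120.0, 25)
ly_n, c_n = run_cohort(0.02, 0.40, 0.01, 40.0, 25)
icer = (c_s - c_n) / (ly_s - ly_n)  # cost per life-year saved
```

A real analysis would add discounting, half-cycle correction, and calibrated transition probabilities, but the ICER computation itself is this cost difference divided by the life-year difference between the two arms.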
Abstract:
Introduction and objective. A number of prognostic factors have been reported for predicting survival in patients with renal cell carcinoma, yet few studies have analyzed the effects of those factors at different stages of the disease process. In this study, different stages of disease progression were evaluated: from nephrectomy to metastasis, from metastasis to death, and from evaluation to death. Methods. In this retrospective follow-up study, records of 97 deceased renal cell carcinoma (RCC) patients were reviewed between September 2006 and October 2006. Patients with TNM stage IV disease before nephrectomy or with cancer diagnoses other than RCC were excluded, leaving 64 records for analysis. TNM stage, Fuhrman grade, age, tumor size, tumor volume, histology, and patient gender were analyzed in relation to time to metastasis. Time from nephrectomy to metastasis, together with the same factors, was tested for significance in relation to time from metastasis to death. Finally, laboratory values at the time of evaluation, Eastern Cooperative Oncology Group (ECOG) performance status, UCLA Integrated Staging System (UISS), time from nephrectomy to metastasis, and the same clinicopathological factors were tested for significance in relation to time from evaluation to death. Linear regression and Cox proportional hazards models (univariate and multivariate) were used to test significance. The Kaplan-Meier log-rank test was used to detect differences between groups at various endpoints. Results. Compared to negative lymph nodes at the time of nephrectomy, a single positive lymph node was associated with significantly shorter time to metastasis (p<0.0001). Compared to other histological types, clear cell histology had significantly longer metastasis-free survival (p=0.003).
Clear cell histology compared to other types (p=0.0002 univariate, p=0.038 multivariate) and log-transformed time to metastasis (p=0.028) significantly affected time from metastasis to death. Metastasis-free intervals greater than one year and greater than two years were associated with statistically significant survival benefits compared with metastasis within one and two years (p=0.004 and p=0.0318, respectively). Time from evaluation to death was affected by a greater-than-one-year metastasis-free interval (p=0.0459), alcohol consumption (p=0.044), LDH (p=0.006), ECOG performance status (p<0.001), and hemoglobin level (p=0.0092). The UISS risk-stratified the patient population for survival in a statistically significant manner (p=0.001). No other factors were found to be significant. Conclusion. Clear cell histology is predictive of both time to metastasis and time from metastasis to death. Nodal status at the time of nephrectomy may predict risk of metastasis. The time interval to metastasis significantly predicts time from metastasis to death and time from evaluation to death. ECOG performance status and hemoglobin level predict survival outcome at evaluation. Finally, the UISS appropriately stratifies risk in our population.
Abstract:
Background. A community-wide outbreak of cryptosporidiosis occurred in Dallas County during the summer of 2008. A subset of cases with onset of illness within a 2-week interval was epidemiologically linked to 2 neighborhood interactive water fountain parks. Methods. A case-control study was conducted to evaluate risk factors associated with developing cryptosporidiosis at the fountain parks. Cases were selected from a line list from the epidemiological study. Controls were either healthy family members or attendees of a nearby daycare center. Cases and controls were not matched. Results. Interviews were completed for 44 fountain park attendees who met the case definition and 54 community controls. 27.3% of cases and 13.0% of controls were aged 0-4 years; 38.6% of cases and 24.1% of controls were aged 5-13 years; 13.6% of cases and 33.3% of controls were aged 14-31 years; and 20.5% of cases and 29.6% of controls were aged 32-63 years. 47.7% of the cases and 42.6% of the controls were male. Fountain park attendees who reported having been splashed in the face with water were 10 times more likely to become ill than controls (OR = 10.0, 95% CI = 2.8-35.1). Persons who reported having swallowed water from the interactive fountains were 34 times more likely to become ill than controls (OR = 34.3, 95% CI = 9.3-125.7). Conclusion. Prompt reporting of cases, identification of outbreak sources, and immediate implementation of remediation measures were critical in curtailing further transmission from these sites through the remainder of the season. This investigation underscores the potential for cryptosporidiosis outbreaks to occur at interactive fountain parks and the need for enhanced preventive measures in these settings.
Education of the public regarding avoidance of behaviors such as drinking water from interactive fountains is also an important component of public health prevention efforts.
Perinatal mortality and quality of care at the National Institute of Perinatology: A 3-year analysis
Abstract:
Quality of medical care has been indirectly assessed through the collection of negative outcomes. A preventable death is one that could have been avoided if optimum care had been offered. The general objective of the present project was to analyze perinatal mortality at the National Institute of Perinatology (located in Mexico City) by social, biological, and available quality-of-care components such as avoidability, provider responsibility, and structure and process deficiencies in the delivery of medical care. A Perinatal Mortality Committee database was utilized. The study population consisted of all singleton perinatal deaths occurring between January 1, 1988 and June 30, 1991 (n = 522). A proportionate study was designed. The population studied mostly corresponded to married young adult mothers who were residents of urban areas, with an educational level of junior high school or more, two to three pregnancies, and intermediate prenatal care. The mean gestational age at birth was 33.4 ± 3.9 completed weeks and the mean birthweight was 1,791.9 ± 853.1 grams. Thirty-five percent of perinatal deaths were categorized as avoidable. Postnatal infection and premature rupture of membranes were the most frequent primary causes of avoidable perinatal death. The avoidable perinatal mortality rate was 8.7 per 1,000 and declined significantly during the study period (p < .05). Aggregated data on preventable perinatal mortality suggested that at least part of the mortality decline for amenable conditions was due to better medical care. Structure deficiencies were present in 35% of avoidable deaths and process deficiencies in 79%. Structure deficiencies remained constant over time. Process deficiencies consisted of diagnosis failures (45.8%) and treatment failures (87.3%); these also remained constant through the years.
Responsibility was distributed as follows: obstetric (35.4%), pediatric (41.4%), institutional (26.5%), and patient (6.6%). Obstetric responsibility increased significantly during the study period (p < .05). Pediatric responsibility declined only for newborns under 1,500 g (p < .05). Institutional responsibility remained constant. Process deficiencies increased the risk of an avoidable death eightfold (confidence interval 1.7-41.4, p < .01) and provider responsibility ninety-fivefold (confidence interval 14.8-612.1, p < .001), after adjustment for several confounding variables. Perinatal mortality due to prematurity, barotrauma, and nosocomial infection was highly preventable, but not that due to transpartum asphyxia. Once specific deficiencies in the quality of care have been identified, quality assurance actions should begin.
Abstract:
Traditional comparison of standardized mortality ratios (SMRs) can be misleading if the age-specific mortality ratios are not homogeneous. For this reason, a regression model has been developed which incorporates the mortality ratio as a function of age. This model is then applied to mortality data from an occupational cohort study. The nature of the occupational data necessitates the investigation of mortality ratios which increase with age. These occupational data are used primarily to illustrate and develop the statistical methodology. The age-specific mortality ratio (MR) for the covariates of interest can be written as MR_ij...m = μ_ij...m / θ_ij...m = r · exp(Z′_ij...m β), where μ_ij...m and θ_ij...m denote the force of mortality in the study and chosen standard populations in the ij...m-th stratum, respectively, r is the intercept, Z_ij...m is the vector of covariables associated with the i-th age interval, and β is a vector of regression coefficients associated with these covariables. A Newton-Raphson iterative procedure has been used for determining the maximum likelihood estimates of the regression coefficients. This model provides a statistical method for a logical and easily interpretable explanation of an occupational cohort mortality experience. Since it gives a reasonable fit to the mortality data, it can also be concluded that the model is fairly realistic. The traditional statistical method for the analysis of occupational cohort mortality data is to present a summary index such as the SMR under the assumption of constant (homogeneous) age-specific mortality ratios. Since the mortality ratios for occupational groups usually increase with age, the homogeneity assumption of the age-specific mortality ratios is often untenable.
The traditional method of comparing SMRs under the homogeneity assumption is a special case of this model, without age as a covariate. This model also provides a statistical technique to evaluate the relative risk between two SMRs or a dose-response relationship among several SMRs. The model presented has applications in the medical, demographic, and epidemiologic areas. The methods developed in this thesis are suitable for future analyses of mortality or morbidity data when the age-specific mortality or morbidity experience is a function of age, or when an interaction effect between confounding variables needs to be evaluated.
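The model and its Newton-Raphson fit can be sketched for a single covariate by treating the observed deaths in each stratum as Poisson with the standard population's expected deaths as a multiplicative offset. The data in the demonstration are synthetic, chosen only to show that the fit recovers the parameters:

```python
import math

def smr_poisson_fit(deaths, expected, z, n_iter=25):
    """Fit MR_i = mu_i / theta_i = r * exp(beta * z_i) by maximum
    likelihood, i.e. Poisson regression with log(expected) offset,
    using Newton-Raphson on (log r, beta)."""
    b0, b1 = 0.0, 0.0  # b0 = log r
    for _ in range(n_iter):
        mu = [e * math.exp(b0 + b1 * zi) for e, zi in zip(expected, z)]
        # Score vector of the Poisson log-likelihood
        u0 = sum(d - m for d, m in zip(deaths, mu))
        u1 = sum((d - m) * zi for d, m, zi in zip(deaths, mu, z))
        # Negative Hessian (observed information matrix)
        h00 = sum(mu)
        h01 = sum(m * zi for m, zi in zip(mu, z))
        h11 = sum(m * zi * zi for m, zi in zip(mu, z))
        det = h00 * h11 - h01 * h01
        b0 += (h11 * u0 - h01 * u1) / det  # Newton step: info^{-1} * score
        b1 += (h00 * u1 - h01 * u0) / det
    return math.exp(b0), b1  # (r, beta)

# Synthetic strata: expected deaths, an age covariate, and observed
# deaths generated from r = 1.2, beta = 0.3 without noise
expected = [10.0, 20.0, 30.0]
z = [0.0, 1.0, 2.0]
deaths = [e * 1.2 * math.exp(0.3 * zi) for e, zi in zip(expected, z)]
r_hat, beta_hat = smr_poisson_fit(deaths, expected, z)
```

Setting beta = 0 collapses the model to a single constant mortality ratio r, which corresponds to the traditional SMR comparison the abstract identifies as a special case.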
Abstract:
A multiproxy study of palaeoceanographic and climatic changes in northernmost Baffin Bay shows that major environmental changes have occurred since the deglaciation of the area at about 12 500 cal. yr BP. The interpretation is based on sedimentology, benthic and planktonic foraminifera and their isotopic composition, as well as diatom assemblages in the sedimentary records at two core sites, one located in the deeper central part of northernmost Baffin Bay and one in a separate trough closer to the Greenland coast. A revised chronology for the two records is established on the basis of 15 previously published AMS 14C age determinations. A basal diamicton is overlain by laminated, fossil-free sediments. Our data from the early part of the fossiliferous record (12 300 - 11 300 cal. yr BP), which is also initially laminated, indicate extensive seasonal sea-ice cover and brine release. There is indication of a cooling event between 11 300 and 10 900 cal. yr BP, and maximum Atlantic Water influence occurred between 10 900 and 8200 cal. yr BP (no sediment recovery between 8200 and 7300 cal. yr BP). A gradual, but fluctuating, increase in sea-ice cover is seen after 7300 cal. yr BP. Sea-ice diatoms were particularly abundant in the central part of northernmost Baffin Bay, presumably due to the inflow of Polar waters from the Arctic Ocean, and less sea ice occurred at the near-coastal site, which was under continuous influence of the West Greenland Current. Our data from the deep, central part show a fluctuating degree of upwelling after c. 7300 cal. yr BP, culminating between 4000 and 3050 cal. yr BP. There was a gradual increase in the influence of cold bottom waters from the Arctic Ocean after about 3050 cal. yr BP, when agglutinated foraminifera became abundant. A superimposed short-term change in the sea-surface proxies is correlated with the Little Ice Age cooling.
Abstract:
Early and Mid-Pleistocene climate, ocean hydrography, and ice sheet dynamics have been reconstructed using a high-resolution data set (planktonic and benthic δ18O time series, faunal-based sea surface temperature (SST) reconstructions, and an ice-rafted debris (IRD) record) from a high-deposition-rate sedimentary succession recovered at the Gardar Drift formation in the subpolar North Atlantic (Integrated Ocean Drilling Program Leg 306, Site U1314). Our sedimentary record spans from late in Marine Isotope Stage (MIS) 31 to MIS 19 (1069-779 ka). Different trends in the benthic and planktonic oxygen isotope, SST, and IRD records before and after MIS 25 (~940 ka) evidence the large increase in Northern Hemisphere ice volume linked to the change from 41-kyr to 100-kyr cyclicity that occurred during the Mid-Pleistocene Transition (MPT). Besides longer glacial-interglacial (G-IG) variability, millennial-scale fluctuations were a pervasive feature across our study. Negative excursions in the benthic δ18O time series observed at the times of IRD events may be related to glacio-eustatic changes due to ice sheet retreats and/or to changes in deep hydrography. Time series analysis of surface water proxies (IRD, SST, and planktonic δ18O) for the interval between MIS 31 and MIS 26 shows that the timing of these millennial-scale climate changes is related to half-precessional (10 kyr) components of the insolation forcing, which are interpreted as cross-equatorial heat transport toward high latitudes during both equinox insolation maxima at the equator.