911 results for time-to-rehospitalization
Abstract:
AIMS To determine efficacy of a minimally invasive (MI) surgical approach using a human MI lumbar retractor for canine lumbosacral dorsal laminectomy and partial discectomy and to compare this technique to the standard open surgical (OS) approach. METHODS Lumbosacral dorsal laminectomy and partial discectomy were performed on 16 large-breed canine cadavers using either a standard OS (n=8) or MI (n=8) approach. Skin and fascial incision length, procedure time, and intraoperative complications were recorded. Postoperatively, specimens were evaluated for laminectomy and discectomy dimensions, and for visible damage to the cauda equina and exiting nerve roots. RESULTS Median lengths of the skin and fascial incisions in the OS group were longer than in the MI group (p<0.001). Median laminectomy length was similar between the two approaches (p=0.234), but width was greater for the MI than the OS approach (p=0.002). Both approaches achieved similar partial discectomy width (p=0.279). Overall surgical time was longer for the MI approach than for OS, with a median of 18.5 (min 15.5, max 21.8) minutes for MI compared to 14.6 (min 13.1, max 16.9) minutes for OS (p=0.001). CONCLUSIONS The MI approach reduced incision lengths while retaining comparable laminectomy and discectomy dimensions. For this in vitro model the MI approach required more time to complete, but this difference may not be relevant in clinical cases. CLINICAL RELEVANCE Dogs undergoing lumbosacral dorsal laminectomy are commonly large-breed dogs. The traditional open approach requires a large skin incision and soft tissue dissection, especially in overweight animals. An MI approach accomplishing the same surgical result while minimising soft tissue trauma could reduce post-operative pain and recovery time, and may lower wound-related complications. Clinical studies are needed to confirm postoperative benefit and assess operating times in vivo.
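The group comparisons reported above (medians from n=8 cadavers per arm, with p-values) are the kind of result a rank-based two-sample test produces. As a hedged illustration only, since the abstract does not name the test used, a minimal sketch with SciPy's Mann-Whitney U test on hypothetical surgical-time data:

```python
# Hedged illustration: rank-based comparison of surgical times between two small
# groups. The values below are hypothetical and are not the study's data.
import numpy as np
from scipy.stats import mannwhitneyu

open_surgery = np.array([13.1, 13.8, 14.2, 14.6, 14.9, 15.4, 16.1, 16.9])        # minutes, n=8
minimally_invasive = np.array([15.5, 17.0, 17.9, 18.5, 19.2, 20.1, 21.0, 21.8])  # minutes, n=8

stat, p = mannwhitneyu(minimally_invasive, open_surgery, alternative="two-sided")
print(f"median MI = {np.median(minimally_invasive):.1f} min, "
      f"median OS = {np.median(open_surgery):.1f} min, p = {p:.4f}")
```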
Abstract:
BACKGROUND Long-term hormone therapy has been the standard of care for advanced prostate cancer since the 1940s. STAMPEDE is a randomised controlled trial using a multiarm, multistage platform design. It recruits men with high-risk, locally advanced, metastatic or recurrent prostate cancer who are starting first-line long-term hormone therapy. We report primary survival results for three research comparisons testing the addition of zoledronic acid, docetaxel, or their combination to standard of care versus standard of care alone. METHODS Standard of care was hormone therapy for at least 2 years; radiotherapy was encouraged for men with N0M0 disease to November, 2011, then mandated; radiotherapy was optional for men with node-positive non-metastatic (N+M0) disease. Stratified randomisation (via minimisation) allocated men 2:1:1:1 to standard of care only (SOC-only; control), standard of care plus zoledronic acid (SOC + ZA), standard of care plus docetaxel (SOC + Doc), or standard of care with both zoledronic acid and docetaxel (SOC + ZA + Doc). Zoledronic acid (4 mg) was given for six 3-weekly cycles, then 4-weekly until 2 years, and docetaxel (75 mg/m(2)) for six 3-weekly cycles with prednisolone 10 mg daily. There was no blinding to treatment allocation. The primary outcome measure was overall survival. Pairwise comparisons of research versus control had 90% power at 2·5% one-sided α for hazard ratio (HR) 0·75, requiring roughly 400 control arm deaths. Statistical analyses were undertaken with standard log-rank-type methods for time-to-event data, with hazard ratios (HRs) and 95% CIs derived from adjusted Cox models. This trial is registered at ClinicalTrials.gov (NCT00268476) and ControlledTrials.com (ISRCTN78818544). FINDINGS 2962 men were randomly assigned to four groups between Oct 5, 2005, and March 31, 2013. Median age was 65 years (IQR 60-71). 1817 (61%) men had M+ disease, 448 (15%) had N+/X M0, and 697 (24%) had N0M0. 165 (6%) men were previously treated with local therapy, and median prostate-specific antigen was 65 ng/mL (IQR 23-184). Median follow-up was 43 months (IQR 30-60). There were 415 deaths in the control group (347 [84%] prostate cancer). Median overall survival was 71 months (IQR 32 to not reached) for SOC-only, not reached (32 to not reached) for SOC + ZA (HR 0·94, 95% CI 0·79-1·11; p=0·450), 81 months (41 to not reached) for SOC + Doc (0·78, 0·66-0·93; p=0·006), and 76 months (39 to not reached) for SOC + ZA + Doc (0·82, 0·69-0·97; p=0·022). There was no evidence of heterogeneity in treatment effect (for any of the treatments) across prespecified subsets. Grade 3-5 adverse events were reported for 399 (32%) patients receiving SOC, 197 (32%) receiving SOC + ZA, 288 (52%) receiving SOC + Doc, and 269 (52%) receiving SOC + ZA + Doc. INTERPRETATION Zoledronic acid showed no evidence of survival improvement and should not be part of standard of care for this population. Docetaxel chemotherapy, given at the time of long-term hormone therapy initiation, showed evidence of improved survival accompanied by an increase in adverse events. Docetaxel treatment should become part of standard of care for adequately fit men commencing long-term hormone therapy. FUNDING Cancer Research UK, Medical Research Council, Novartis, Sanofi-Aventis, Pfizer, Janssen, Astellas, NIHR Clinical Research Network, Swiss Group for Clinical Cancer Research.
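The power statement above (90% power at a one-sided 2.5% alpha to detect HR 0.75, requiring roughly 400 control-arm deaths) maps onto the standard Schoenfeld approximation for the number of events needed in a time-to-event comparison. The sketch below only illustrates that generic approximation under an assumed 2:1 control:research allocation; it is not the trial's own sample-size calculation.

```python
# Hedged illustration: Schoenfeld approximation for the total number of events
# required in a two-arm survival comparison. Not STAMPEDE's actual calculation.
import numpy as np
from scipy.stats import norm

alpha_one_sided = 0.025
power = 0.90
hr = 0.75
p_research = 1 / 3          # assumed 2:1 control:research allocation

z_a = norm.ppf(1 - alpha_one_sided)
z_b = norm.ppf(power)
events_total = (z_a + z_b) ** 2 / (p_research * (1 - p_research) * np.log(hr) ** 2)
print(f"approximate total events needed across the two arms: {events_total:.0f}")
```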
Abstract:
INTRODUCTION Monitoring breathing pattern is especially relevant in infants with lung disease. Recently, a vest-based inductive plethysmograph system (FloRight®) has been developed for tidal breathing measurement in infants. We investigated the accuracy of tidal breathing flow volume loop (TBFVL) measurements in healthy term-born infants and infants with lung disease by the vest-based system in comparison to an ultrasonic flowmeter (USFM) with a face mask. We also investigated whether the system discriminates between healthy infants and those with lung disease. METHODS FloRight® measures changes in thoracoabdominal volume during tidal breathing through magnetic field changes generated by current-carrying conductor coils in an elastic vest. Simultaneous TBFVL measurements by the vest-based system and the USFM were performed at 44 weeks corrected postmenstrual age during quiet unsedated sleep. TBFVL parameters derived by both techniques and within both groups were compared. RESULTS We included 19 healthy infants and 18 infants with lung disease. Tidal volume per body weight derived by the vest-based system was significantly lower, with a mean difference (95% CI) of -1.33 ml/kg (-1.73 to -0.92), P < 0.001. Respiratory rate and the ratio of time to peak tidal expiratory flow over total expiratory time (tPTEF/tE) did not differ between the two techniques. Both systems were able to discriminate between healthy infants and those with lung disease using tPTEF/tE. CONCLUSION FloRight® accurately measures time indices and may discriminate between healthy infants and those with lung disease, but demonstrates differences in tidal volume measurements. It may be better suited to monitoring breathing pattern than to TBFVL measurements.
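The agreement statistic quoted above (a mean difference of -1.33 ml/kg with a 95% CI of -1.73 to -0.92) is a paired-difference summary of the kind used in Bland-Altman-style device comparisons. A minimal sketch of that computation, using hypothetical paired tidal-volume readings rather than study data:

```python
# Hedged illustration: mean paired difference and its 95% CI for two devices
# measuring the same infants. The values are hypothetical, not study data.
import numpy as np
from scipy.stats import t

vest = np.array([4.8, 5.1, 5.6, 4.9, 5.3, 6.0, 5.5, 5.2])        # ml/kg, hypothetical
flowmeter = np.array([6.2, 6.4, 6.8, 6.3, 6.6, 7.4, 6.9, 6.5])   # ml/kg, hypothetical

diff = vest - flowmeter
n = len(diff)
mean_diff = diff.mean()
se = diff.std(ddof=1) / np.sqrt(n)
half_width = t.ppf(0.975, df=n - 1) * se
print(f"mean difference = {mean_diff:.2f} ml/kg "
      f"(95% CI {mean_diff - half_width:.2f} to {mean_diff + half_width:.2f})")
```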
Abstract:
Objective: Colorectal cancer (CRC) can be largely prevented or effectively treated in its early stages, yet disparities exist in timely screening. The aim of this study was to explore disparities in CRC screening on the basis of health insurance status, including private, Medicare, Medicaid, and State Administered General Assistance (SAGA). Methods: A retrospective chart review for the period January 2000 to May 2007 (95 records) was conducted at two clinic sites: a private clinic and a university hospital clinic. All individuals at these sites who met study criteria (>50 years old with screening colonoscopy) were included. Age, gender, date of the first clinic visit when the screening referral was made, and date of the completed procedure (screening colonoscopy) were recorded. Groups were dichotomized into individuals with private health insurance and individuals with public health insurance. Individuals with any history of CRC, known pre-cancerous conditions, or a family history of CRC requiring frequent colonoscopy were excluded from the study. Linear model analysis was performed to compare the average waiting time to receiving screening colonoscopy between the groups. A t-test was performed to analyze age- or gender-related differences between the two groups as well as within each group. Results: The average waiting time for screening colonoscopy in privately insured individuals (33 days) was significantly shorter than in publicly insured individuals (200 days). The time difference between the first clinic visit and the procedure was statistically significant (p < 0.0001) between the two groups. There was no statistically significant difference (p=0.089) in gender between the groups (public vs. private). There were also no statistically significant gender- or age-related differences within each group. Conclusions: Disparities exist in timely screening for CRC, and one of the barriers leading to delayed CRC screening is an individual's health insurance status. Even within the insured group, the type of insurance plays a major role. There is a negative correlation between public health insurance status and timely screening. Differences in access to medical care and delivery of care experienced by patients who are publicly insured through Medicaid, Medicare, and SAGA suggest that the State of Connecticut needs to implement changes in health care policies to provide timely screening colonoscopy. It is evident that health insurance coverage facilitates timely access to healthcare. Therefore, there is a need for increased efforts in advocacy for policy, payment and physician participation in public insurance programs. A state-wide comprehensive program involving multiple components targeting different levels of change, such as providers, patients and the community, should help reduce some of the observed causes of healthcare disparities based on insurance status.
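The central quantity in the analysis above is the waiting time from the first clinic visit to the completed colonoscopy, compared between insurance groups with a linear model. A hedged sketch of that kind of computation on invented records (the variable names and values are mine, not the study's):

```python
# Hedged illustration: waiting time (days) from referral visit to colonoscopy,
# compared between insurance groups with a simple linear model. Invented records.
import pandas as pd
import statsmodels.formula.api as smf

records = pd.DataFrame({
    "visit":     pd.to_datetime(["2005-01-10", "2005-02-03", "2005-03-15", "2005-04-20"]),
    "procedure": pd.to_datetime(["2005-02-12", "2005-08-25", "2005-04-18", "2005-11-02"]),
    "insurance": ["private", "public", "private", "public"],
})
records["wait_days"] = (records["procedure"] - records["visit"]).dt.days

model = smf.ols("wait_days ~ C(insurance)", data=records).fit()
print(records.groupby("insurance")["wait_days"].mean())
print(model.params)
```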
Abstract:
Various airborne aldehydes and ketones (i.e., airborne carbonyls) present in outdoor, indoor, and personal air pose a risk to human health at present environmental concentrations. To date, there is no adequate, simple-to-use sampler for monitoring carbonyls at parts per billion concentrations in personal air. The Passive Aldehydes and Ketones Sampler (PAKS) originally developed for this purpose has been found to be unreliable in a number of relatively recent field studies. The PAKS method uses dansylhydrazine, DNSH, as the derivatization agent to produce aldehyde derivatives that are analyzed by HPLC with fluorescence detection. The reasons for the poor performance of the PAKS are not known, but it is hypothesized that the chemical derivatization conditions and reaction kinetics, combined with a relatively low sampling rate, may play a role. This study evaluated the effect of excitation and emission wavelengths, pH of the DNSH coating solution, extraction solvent, and time post-extraction on the yield and stability of the formaldehyde, acetaldehyde, and acrolein DNSH derivatives. The results suggest the following optimum conditions for the analysis of DNSHydrazones. The excitation and emission wavelengths for HPLC analysis should be 250 nm and 500 nm, respectively. The optimal pH of the coating solution appears to be pH 2, because it improves the formation of di-derivatized acrolein DNSHydrazones without affecting the response of the formaldehyde and acetaldehyde derivatives. Acetonitrile is the preferable extraction solvent, and the optimal time to analyze the aldehyde derivatives is 72 hours post-extraction.
Abstract:
Research studies on the association between exposures to air contaminants and disease frequently use worn dosimeters to measure the concentration of the contaminant of interest. But investigation of exposure determinants requires additional knowledge beyond concentration, i.e., knowledge about personal activity such as whether the exposure occurred in a building or outdoors. Current studies frequently depend upon manual activity logging to record location. This study's purpose was to evaluate the use of a worn data logger recording three environmental parameters (temperature, humidity, and light intensity) as well as time of day to determine indoor or outdoor location, with the ultimate aim of eliminating the need to manually log location, or at least providing a method to verify such logs. For this study, data collection was limited to a single geographical area (Houston, Texas metropolitan area) during a single season (winter) using a HOBO H8 four-channel data logger. Data for development of a Location Model were collected using the logger for deliberate sampling of programmed activities in outdoor, building, and vehicle locations at various times of day. The Model was developed by analyzing the distributions of environmental parameters by location and time to establish a prioritized set of cut points for assessing locations. The final Model consisted of four "processors" that varied these priorities and cut points. Data to evaluate the Model were collected by wearing the logger during "typical days" while maintaining a location log. The Model was tested by feeding the typical day data into each processor and generating assessed locations for each record. These assessed locations were then compared with true locations recorded in the manual log to determine accurate versus erroneous assessments. The utility of each processor was evaluated by calculating overall error rates across all times of day, and calculating individual error rates by time of day. Unfortunately, the error rates were large, such that there would be no benefit in using the Model. Another analysis, in which assessed locations were classified as either indoor (including both building and vehicle) or outdoor, yielded slightly lower error rates that still precluded any benefit from the Model's use.
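The Location Model described above is, in effect, a rule-based classifier: prioritized cut points on temperature, humidity, light intensity, and time of day assign each logged record to a location, and the assessed locations are scored against a manual log to obtain error rates. A minimal sketch of that idea, with made-up thresholds rather than the study's cut points:

```python
# Hedged illustration of one cut-point "processor" for indoor/outdoor assessment
# from logger records, scored against manually logged truth. Thresholds and
# records below are invented for illustration only.
def assess_location(temp_c, rh_pct, light_lux, hour):
    night = hour < 7 or hour >= 19
    if light_lux > 1000 and not night:        # bright daylight suggests outdoors
        return "outdoor"
    if 20 <= temp_c <= 26 and rh_pct < 60:    # conditioned air suggests a building
        return "building"
    return "outdoor"

records = [  # (temp_c, rh_pct, light_lux, hour, true_location) -- invented
    (22.0, 45, 300, 10, "building"),
    (12.5, 70, 5000, 14, "outdoor"),
    (23.5, 50, 150, 21, "building"),
    (15.0, 65, 20, 22, "outdoor"),
]
errors = sum(assess_location(t, rh, lx, h) != truth
             for t, rh, lx, h, truth in records)
print(f"error rate = {errors / len(records):.2f}")
```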
Abstract:
The Food and Drug Administration (FDA) and the Centers for Medicare and Medicaid Services (CMS) play key roles in making Class III medical devices available to the public, and they are required by law to meet statutory deadlines for applications under review. Historically, both agencies have failed to meet their respective statutory requirements. Since these failures affect patient access and may adversely impact public health, Congress has enacted several “modernization” laws. However, the effectiveness of these modernization laws has not been adequately studied or established for Class III medical devices. The aim of this research study was, therefore, to analyze how these modernization laws may have affected public access to medical devices. Two questions were addressed: (1) How have the FDA modernization laws affected the time to approval for medical device premarket approval applications (PMAs)? (2) How has the CMS modernization law affected the time to approval for national coverage decisions (NCDs)? The data for this research study were collected from publicly available databases for the period January 1, 1995, through December 31, 2008. These dates were selected to ensure that a sufficient period of time was captured to measure pre- and post-modernization effects on time to approval. All records containing original PMAs were obtained from the FDA database, and all records containing NCDs were obtained from the CMS database. Source documents, including FDA premarket approval letters and CMS national coverage decision memoranda, were reviewed to obtain additional data not found in the search results. Analyses were conducted to determine the effects of the pre- and post-modernization laws on time to approval. Secondary analyses of FDA subcategories were conducted to uncover any causal factors that might explain differences in time to approval and to compare with the primary trends. The primary analysis showed that the FDA modernization laws of 1997 and 2002 initially reduced PMA time to approval; after the 2002 modernization law, time to approval began increasing and continued to increase through December 2008. The non-combined subcategory approval trends were similar to the primary analysis trends. The combined subcategory analysis showed no clear trends, with the exception of non-implantable devices, for which time to approval trended down after 1997. The CMS modernization law of 2003 reduced NCD time to approval, a trend that continued through December 2008. This study also showed that approximately 86% of PMA devices do not receive NCDs. As a result of this research study, recommendations are offered to help resolve statutory non-compliance and access issues, as follows: (1) Authorities should examine underlying causal factors for the observed trends; (2) Process improvements should be made to better coordinate FDA and CMS activities, including sharing data, reducing duplication, and establishing clear criteria for “safe and effective” and “reasonable and necessary”; (3) A common identifier should be established to allow tracking and trending of applications between the FDA and CMS databases; (4) Statutory requirements may need to be revised; and (5) An investigation should be undertaken to determine why NCDs are not issued for the majority of PMAs. Any process improvements should be made without creating additional safety risks or adversely impacting public health. Finally, additional studies are needed to fully characterize and better understand the trends identified in this research study.
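The core measurement in the study above is time to approval, derived from application and decision dates and examined as a trend across the pre- and post-modernization periods. A hedged sketch of such a computation, with invented dates standing in for database records:

```python
# Hedged illustration: median PMA time to approval by decision year, the kind of
# trend examined above. Dates are invented, not FDA or CMS database records.
import pandas as pd

pma = pd.DataFrame({
    "received": pd.to_datetime(["1996-03-01", "1999-06-15", "2004-01-10", "2007-05-20"]),
    "approved": pd.to_datetime(["1997-04-20", "2000-01-30", "2005-02-28", "2008-11-05"]),
})
pma["days_to_approval"] = (pma["approved"] - pma["received"]).dt.days
pma["decision_year"] = pma["approved"].dt.year
print(pma.groupby("decision_year")["days_to_approval"].median())
```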
Abstract:
A life table methodology was developed which estimates the expected remaining Army service time and the expected remaining Army sick time by years of service for the United States Army population. A measure of illness impact was defined as the ratio of expected remaining Army sick time to the expected remaining Army service time. The variances of the resulting estimators were developed on the basis of current data. The theory of partial and complete competing risks was considered for each type of decrement (death, administrative separation, and medical separation) and for the causes of sick time. The methodology was applied to world-wide U.S. Army data for calendar year 1978. A total of 669,493 enlisted personnel and 97,704 officers were reported on active duty as of 30 September 1978. During calendar year 1978, the Army Medical Department reported 114,647 inpatient discharges and 1,767,146 sick days. Although the methodology is completely general with respect to the definition of sick time, only sick time associated with an inpatient episode was considered in this study. Since the temporal measure was years of Army service, an age-adjusting process was applied to the life tables for comparative purposes. Analyses were conducted by rank (enlisted and officer), race and sex, and were based on the ratio of expected remaining Army sick time to expected remaining Army service time. Seventeen major diagnostic groups, classified by the Eighth Revision, International Classification of Diseases, Adapted for Use in the United States, were ranked according to their cumulative (across years of service) contribution to expected remaining sick time. The study results indicated that enlisted personnel tend to have more expected hospital-associated sick time relative to their expected Army service time than officers. Non-white officers generally have more expected sick time relative to their expected Army service time than white officers. This racial differential was not supported within the enlisted population. Females tend to have more expected sick time relative to their expected Army service time than males. This tendency remained after diagnostic groups 580-629 (Genitourinary System) and 630-678 (Pregnancy and Childbirth) were removed. Problems associated with the circulatory system, digestive system and musculoskeletal system were among the three leading causes of cumulative sick time across years of service.
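The central quantities above are life-table expectations: expected remaining Army service time and expected remaining Army sick time by years of service, with their ratio as the illness-impact measure. A minimal sketch of that computation under made-up decrement and sick-day rates (the 1978 Army rates are not reproduced here), treating each year served as a full year of exposure:

```python
# Hedged illustration of the life-table quantities described above: expected
# remaining service time, expected remaining sick time, and their ratio, by
# completed years of service. The rates below are invented, not 1978 Army data.
import numpy as np

q = np.array([0.25, 0.18, 0.15, 0.12, 0.10, 0.10, 0.08, 0.08])      # annual prob. of leaving service
sick_per_year = np.array([3.0, 2.8, 2.6, 2.5, 2.4, 2.4, 2.3, 2.3])  # expected sick days per year served

surv = np.cumprod(1 - q)                           # P(still serving after year k)
at_start = np.concatenate(([1.0], surv))[:-1]      # fraction reaching the start of year k

for start in range(len(q)):
    cond = at_start[start:] / at_start[start]      # conditional exposure, crude full-year approximation
    remaining_service = cond.sum()                                   # years
    remaining_sick = (cond * sick_per_year[start:]).sum() / 365.0    # years
    print(f"year {start}: E[remaining service] = {remaining_service:.2f} yr, "
          f"illness-impact ratio = {remaining_sick / remaining_service:.4f}")
```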
Abstract:
Breast cancer is the most common non-skin cancer and the second leading cause of cancer-related death in women in the United States. Studies on ipsilateral breast tumor relapse (IBTR) status and disease-specific survival will help guide clinical treatment and predict patient prognosis. After breast conservation therapy, patients with breast cancer may experience breast tumor relapse. This relapse is classified into two distinct types: true local recurrence (TR) and new ipsilateral primary tumor (NP). However, the methods used to classify the relapse types are imperfect and are prone to misclassification. In addition, some observed survival data (e.g., time to relapse and time from relapse to death) are strongly correlated with relapse types. The first part of this dissertation presents a Bayesian approach to (1) modeling the potentially misclassified relapse status and the correlated survival information, (2) estimating the sensitivity and specificity of the diagnostic methods, and (3) quantifying the covariate effects on event probabilities. A shared frailty was used to account for the within-subject correlation between survival times. The inference was conducted in a Bayesian framework via Markov chain Monte Carlo simulation implemented in the software WinBUGS. Simulation was used to validate the Bayesian method and assess its frequentist properties. The new model has two important innovations: (1) it utilizes the additional survival times correlated with the relapse status to improve the parameter estimation, and (2) it provides tools to address the correlation between the two diagnostic methods conditional on the true relapse types. Prediction of patients at highest risk for IBTR after local excision of ductal carcinoma in situ (DCIS) remains a clinical concern. The goals of the second part of this dissertation were to evaluate a published nomogram from Memorial Sloan-Kettering Cancer Center, to determine the risk of IBTR in patients with DCIS treated with local excision, and to determine whether there is a subset of patients at low risk of IBTR. Patients who had undergone local excision from 1990 through 2007 at MD Anderson Cancer Center with a final diagnosis of DCIS (n=794) were included in this part. Clinicopathologic factors and the performance of the Memorial Sloan-Kettering Cancer Center nomogram for prediction of IBTR were assessed for 734 patients with complete data. The nomogram for prediction of 5- and 10-year IBTR probabilities was found to demonstrate imperfect calibration and discrimination, with an area under the receiver operating characteristic curve of 0.63 and a concordance index of 0.63. In conclusion, predictive models for IBTR in DCIS patients treated with local excision are imperfect. Our current ability to accurately predict recurrence based on clinical parameters is limited. The American Joint Committee on Cancer (AJCC) staging of breast cancer is widely used to determine prognosis, yet survival within each AJCC stage shows wide variation and remains unpredictable. For the third part of this dissertation, biologic markers were hypothesized to be responsible for some of this variation, and the addition of biologic markers to current AJCC staging was examined as a possible means of improving prognostication. The initial cohort included patients treated with surgery as the first intervention at MDACC from 1997 to 2006. Cox proportional hazards models were used to create prognostic scoring systems.
AJCC pathologic staging parameters and biologic tumor markers were investigated to devise the scoring systems. Surveillance, Epidemiology, and End Results (SEER) data were used as the external cohort to validate the scoring systems. Binary indicators for pathologic stage (PS), estrogen receptor status (E), and tumor grade (G) were summed to create PS+EG scoring systems devised to predict 5-year patient outcomes. These scoring systems facilitated separation of the study population into more refined subgroups than the current AJCC staging system. The ability of the PS+EG score to stratify outcomes was confirmed in both internal and external validation cohorts. The current study proposes and validates a new staging system by incorporating tumor grade and ER status into current AJCC staging. We recommend that biologic markers be incorporated into revised versions of the AJCC staging system for patients receiving surgery as the first intervention. Chapter 1 focuses on developing a Bayesian method to handle misclassified relapse status and its application to breast cancer data. Chapter 2 focuses on evaluation of a breast cancer nomogram for predicting the risk of IBTR in patients with DCIS after local excision and states the problem addressed in that clinical research. Chapter 3 focuses on validation of a novel staging system for disease-specific survival in patients with breast cancer treated with surgery as the first intervention.
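Two of the performance measures reported above, the area under the receiver operating characteristic curve and the concordance index (both about 0.63), are standard discrimination metrics for a predicted risk. A hedged sketch of how such numbers are computed, with invented predictions and outcomes rather than patient data:

```python
# Hedged illustration: ROC AUC for a binary relapse outcome and Harrell's
# concordance index for censored time-to-event data. All values are invented.
from sklearn.metrics import roc_auc_score
from lifelines.utils import concordance_index

predicted_risk = [0.05, 0.12, 0.30, 0.22, 0.40, 0.08]
relapsed       = [0,    0,    1,    0,    1,    0]
print("AUC:", roc_auc_score(relapsed, predicted_risk))

follow_up_years = [9.5, 8.0, 3.2, 7.1, 2.4, 10.0]
event_observed  = [0,   0,   1,   0,   1,   0]
# higher predicted risk should correspond to shorter time to event, hence the sign flip
print("c-index:", concordance_index(follow_up_years,
                                    [-r for r in predicted_risk],
                                    event_observed))
```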
Abstract:
Prevalent sampling is an efficient and focused approach to the study of the natural history of disease. Right-censored time-to-event data observed in prospective prevalent cohort studies are often subject to left-truncated sampling. Left-truncated samples are not randomly selected from the population of interest and carry a selection bias. Extensive studies have focused on estimating the unbiased distribution given left-truncated samples. However, in many applications, the exact date of disease onset is not observed. For example, in an HIV infection study, the exact HIV infection time is not observable; it is only known that the infection occurred between two observable dates. Meeting these challenges motivated our study. We propose parametric models to estimate the unbiased distribution of left-truncated, right-censored time-to-event data with uncertain onset times. We first consider data from length-biased sampling, a special case of left-truncated sampling. We then extend the proposed method to general left-truncated sampling. With a parametric model, we construct the full likelihood given a biased sample with unobservable onset of disease. The parameters are estimated by maximizing the constructed likelihood, adjusting for the selection bias and the unobservable exact onset. Simulations are conducted to evaluate the finite-sample performance of the proposed methods. We apply the proposed method to an HIV infection study, estimating the unbiased survival function and covariate coefficients.
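For an exactly observed onset, the likelihood contribution of a left-truncated, right-censored observation has a standard form, with the density or survival function at the observed time divided by the survival function at the truncation time; the method above extends this to interval-uncertain onsets by integrating over the onset window. A minimal sketch of the simpler exact-onset version under an exponential model (my notation and invented data, not the authors'):

```python
# Hedged sketch: maximum likelihood for left-truncated, right-censored data under
# an exponential model with exactly observed onsets. The interval-uncertain onset
# extension described above would additionally integrate over the onset window.
import numpy as np
from scipy.optimize import minimize_scalar

t = np.array([5.2, 3.1, 7.8, 2.5, 9.0])   # time from onset (invented)
a = np.array([1.0, 0.5, 2.0, 1.5, 3.0])   # truncation time: onset to study entry (invented)
d = np.array([1,   1,   0,   1,   0])     # 1 = event observed, 0 = right-censored

def neg_log_lik(rate):
    log_f = np.log(rate) - rate * t        # log density at t
    log_S = -rate * t                      # log survival at t
    log_S_trunc = -rate * a                # log survival at the truncation time
    return -np.sum(d * log_f + (1 - d) * log_S - log_S_trunc)

fit = minimize_scalar(neg_log_lik, bounds=(1e-6, 10), method="bounded")
print("MLE of the exponential rate:", fit.x)
```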
Abstract:
The extent to which the spatial distribution of marine planktonic microbes is controlled by local environmental selection or dispersal is poorly understood. Our ability to separate the effects of these two biogeographic controls is limited by the enormous environmental variability both in space and through time. To circumvent this limitation, we analyzed fossil diatom assemblages over the past ~1.5 million years from the world oceans and show that these eukaryotic microbes are not limited by dispersal. The lack of dispersal limitation in marine diatoms suggests that the biodiversity at the microbial level fundamentally differs from that of macroscopic animals and plants for which geographic isolation is a common component of speciation.
Abstract:
It is still an open question how equilibrium warming in response to increasing radiative forcing - the specific equilibrium climate sensitivity S - depends on background climate. We here present palaeodata-based evidence on the state dependency of S, by using CO2 proxy data together with a 3-D ice-sheet-model-based reconstruction of land ice albedo over the last 5 million years (Myr). We find that the land ice albedo forcing depends non-linearly on the background climate, while any non-linearity of CO2 radiative forcing depends on the CO2 data set used. This non-linearity has not, so far, been accounted for in similar approaches due to previously more simplistic approximations, in which land ice albedo radiative forcing was a linear function of sea level change. The latitudinal dependency of ice-sheet area changes is important for the non-linearity between land ice albedo and sea level. In our set-up, in which the radiative forcing of CO2 and of the land ice albedo (LI) is combined, we find a state dependence in the calculated specific equilibrium climate sensitivity, S[CO2,LI], for most of the Pleistocene (last 2.1 Myr). During Pleistocene intermediate glaciated climates and interglacial periods, S[CO2,LI] is on average ~ 45 % larger than during Pleistocene full glacial conditions. In the Pliocene part of our analysis (2.6-5 Myr BP) the CO2 data uncertainties prevent a well-supported calculation for S[CO2,LI], but our analysis suggests that during times without a large land ice area in the Northern Hemisphere (e.g. before 2.82 Myr BP), the specific equilibrium climate sensitivity, S[CO2,LI], was smaller than during interglacials of the Pleistocene. We thus find support for a previously proposed state change in the climate system with the widespread appearance of northern hemispheric ice sheets. This study points for the first time to a so far overlooked non-linearity in the land ice albedo radiative forcing, which is important for similar palaeodata-based approaches to calculate climate sensitivity. However, the implications of this study for a suggested warming under CO2 doubling are not yet entirely clear since the details of necessary corrections for other slow feedbacks are not fully known and the uncertainties that exist in the ice-sheet simulations and global temperature reconstructions are large.
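The specific equilibrium climate sensitivity used above is, by construction, the global temperature anomaly divided by the radiative forcing considered, here CO2 plus land ice albedo. The sketch below illustrates S[CO2,LI] with the widely used simplified CO2 forcing expression and purely illustrative, roughly glacial-like numbers; it does not reproduce the paper's reconstructed forcing or temperature series.

```python
# Hedged worked example: S[CO2,LI] = dT / (dR_CO2 + dR_LI). All numbers are
# illustrative, not values from the reconstruction discussed above.
import numpy as np

co2_ref, co2 = 278.0, 190.0                  # ppm: pre-industrial vs. an illustrative glacial value
dR_co2 = 5.35 * np.log(co2 / co2_ref)        # W m-2, simplified CO2 forcing expression
dR_land_ice = -3.2                           # W m-2, illustrative land ice albedo forcing
dT = -5.0                                    # K, illustrative global temperature anomaly

S = dT / (dR_co2 + dR_land_ice)
print(f"S[CO2,LI] = {S:.2f} K per W m-2")
```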
Abstract:
Basalts from Hole 534A are among the oldest recovered from the ocean bottom, dating from the opening of the Atlantic 155 Ma. Upon exposure to a 1-Oe field for one week, these basalts acquire a viscous remanent magnetization (VRM), which ranges from 4 to 223% of their natural remanent magnetization (NRM). A magnetic field of similar magnitude is observed in the paleomagnetic lab of the Glomar Challenger, and it is therefore doubtful whether accurate measurements of magnetic moment in such rocks can be made on board unless the paleomagnetic area is magnetically shielded. No correlation is observed between the Koenigsberger ratio (Q), which is usually less than 3, and the ability to acquire a VRM. The VRM shows both a log t dependence and a Richter aftereffect. Both of these, but especially the log t dependence, will cause the susceptibility measurements (made by applying a magnetic field for a very short time) to be minimum values. The susceptibility and derived Q should therefore be used cautiously for magnetic anomaly interpretation, because they can cause the importance of the induced magnetization to be underestimated.
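For reference, the Koenigsberger ratio cited above is conventionally the ratio of remanent to induced magnetization; the symbols below follow that usual convention rather than notation taken from this abstract:

\[ Q = \frac{J_{\mathrm{NRM}}}{\chi H} \]

where J_NRM is the natural remanent magnetization, χ the susceptibility and H the ambient field. A susceptibility measured as a minimum value (as argued above from the log t viscous effect) therefore inflates the apparent Q and understates the induced contribution to a magnetic anomaly.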
Abstract:
To understand the validity of d18O proxy records as indicators of past temperature change, a series of experiments was conducted using an atmospheric general circulation model fitted with water isotope tracers (Community Atmosphere Model version 3.0, IsoCAM). A pre-industrial simulation was performed as the control experiment, as well as a simulation with all the boundary conditions set to Last Glacial Maximum (LGM) values. Results from the pre-industrial and LGM simulations were compared to experiments in which the individual boundary conditions (greenhouse gases, ice sheet albedo and topography, sea surface temperature (SST), and orbital parameters) were changed one at a time to assess their individual impact. The experiments were designed to analyze the spatial variations of the oxygen isotopic composition of precipitation (d18Oprecip) in response to individual climate factors. The change in topography (due to the change in land ice cover) played a significant role in reducing the surface temperature and d18Oprecip over North America. Exposed shelf areas and the ice sheet albedo reduced the Northern Hemisphere surface temperature and d18Oprecip further. A global mean cooling of 4.1 °C was simulated with combined LGM boundary conditions compared to the control simulation, in agreement with previous experiments using the fully coupled Community Climate System Model (CCSM3). Large reductions in d18Oprecip over the LGM ice sheets were strongly linked to the temperature decrease over them. The SST and ice sheet topography changes were responsible for most of the changes in the climate, and hence the d18Oprecip distribution, among the simulations.
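For readers unfamiliar with the shorthand, d18Oprecip above is the standard delta notation for the oxygen isotope composition of precipitation, conventionally reported in per mil relative to a reference standard (VSMOW for waters); this definition reflects the usual convention and is not quoted from the abstract:

\[ \delta^{18}\mathrm{O} = \left( \frac{({}^{18}\mathrm{O}/{}^{16}\mathrm{O})_{\mathrm{sample}}}{({}^{18}\mathrm{O}/{}^{16}\mathrm{O})_{\mathrm{standard}}} - 1 \right) \times 1000 \]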
Abstract:
Heavy metals pollution in marine environments has caused great damage to marine biological and ecological systems. Heavy metals accumulate in marine organisms and are then transferred to higher trophic levels through the marine food chain, causing serious harm to marine biological systems and human health. Additionally, excess carbon dioxide in the atmosphere has caused ocean acidification. Indeed, about one third of the CO2 released into the atmosphere by anthropogenic activities since the beginning of the industrial revolution has been absorbed by the world's oceans, which play a key role in moderating climate change. Modeling has shown that, if current trends in CO2 emissions continue, the average pH of the ocean will reach 7.8 by the end of this century, corresponding to 0.5 units below the pre-industrial level, or a three-fold increase in H+ concentration. Ocean pH has not been at this level for several million years, and these changes are occurring at speeds 100 times greater than ever previously observed. As a result, several marine species, communities and ecosystems might not have time to acclimate or adapt to these fast changes in ocean chemistry. In addition, decreasing ocean pH has the potential to seriously affect the growth, development and reproductive processes of marine organisms, as well as threaten normal development of the marine ecosystem. Copepods are an important part of the meiofauna and play a key role in the marine ecosystem. Pollution of the marine environment can influence their growth and development, as well as the ecological processes they are involved in. Accordingly, there is important scientific value in investigating the response of copepods to ocean acidification and heavy metals pollution. In the present study, we evaluated the effects of simulated future ocean acidification and the toxicological interaction between ocean acidity and the heavy metals Cu and Cd on T. japonicus. To accomplish this, harpacticoids were exposed to seawater with gradients of Cu and Cd concentrations that had been equilibrated with CO2 and air to reach pH 8.0, 7.7, 7.3 and 6.5 for 96 h. Survival was not significantly suppressed under seawater acidification alone, and the final survival rates were greater than 93% in both the experimental groups and the controls. The toxicity of Cu to T. japonicus was significantly affected by seawater acidification, with the 96 h LC50 decreasing nearly threefold, from 1.98 to 0.64 mg/L, with decreasing pH. The 96 h LC50 of Cd decreased with decreasing pH, but there was no significant difference in mortality among pH treatments. The results of the present study demonstrated that the predicted future ocean acidification has the potential to negatively affect survival of T. japonicus by exacerbating the toxicity of Cu. The calculated safe concentrations of Cu were 11.9 (pH 7.7) and 10.5 (pH 7.3) µg/L, which were below the class I value and very close to the class II level of the China National Quality Standard for Sea Water. Overall, these results indicate that the Chinese coastal sea will face a
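The 96 h LC50 values above come from fitting a concentration-mortality relationship; a hedged sketch of one common approach, a two-parameter logistic fit on log10 concentration with the LC50 read off as the midpoint, using invented data rather than the study's:

```python
# Hedged illustration: estimating a 96 h LC50 from a two-parameter logistic
# concentration-mortality fit. The data below are invented, not study results.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.25, 0.5, 1.0, 2.0, 4.0])           # mg/L Cu, invented
mortality = np.array([0.05, 0.15, 0.45, 0.80, 0.97])  # proportion dead at 96 h, invented

def logistic(log_c, log_lc50, slope):
    return 1.0 / (1.0 + np.exp(-slope * (log_c - log_lc50)))

params, _ = curve_fit(logistic, np.log10(conc), mortality, p0=[0.0, 2.0])
print(f"estimated 96 h LC50 = {10 ** params[0]:.2f} mg/L")
```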