927 results for Multiple Change-point Analysis
Abstract:
Objective. In 2003, the State of Texas instituted the Driver Responsibility Program (TDRP), a program consisting of a driving infraction point system coupled with a series of graded fines and annual surcharges for specific traffic violations such as driving while intoxicated (DWI). Approximately half of the revenues generated are earmarked for disbursement to the state's trauma system to cover uncompensated trauma care costs. This study examined initial program implementation, the impact of trauma system funding, and the initial impact on impaired driving knowledge, attitudes and behaviors. A model for targeted media campaigns to improve the program's deterrence effects was developed.

Methods. Data from two independent driver survey samples (conducted in 1999 and 2005), department of public safety records, state health department data and a state auditor's report were used to evaluate the program's initial implementation, impact and outcome with respect to drivers' impaired driving knowledge, attitudes and behavior (based on constructs of social cognitive theory) and hospital uncompensated trauma care funding. Survey results were used to develop a regression model of high-risk drivers who should be targeted to improve program outcome with respect to deterring impaired driving.

Results. Low driver compliance with fee payment (28%) and program implementation problems were associated with lower surcharge revenues in the first two years ($59.5 million versus $525 million predicted). Program revenue distribution to trauma hospitals was associated with a 16% increase in designated trauma centers. Survey data demonstrated that only 28% of drivers are aware of the TDRP and that there has been no initial impact on impaired driving behavior. Logistic regression modeling suggested that targeted media campaigns highlighting the likelihood of DWI detection by law enforcement and the increased surcharges associated with the TDRP are required to deter impaired driving.

Conclusions. Although the TDRP raised nearly $60 million in surcharge revenue for the Texas trauma system over the first two years, this study did not find evidence of a change in impaired driving knowledge, attitudes or behaviors from 1999 to 2005. Further research is required to measure whether the program is associated with decreased alcohol-related traffic fatalities.
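The abstract above refers to a logistic regression model of high-risk drivers built from survey responses. The sketch below is a minimal, hypothetical illustration of that kind of model in Python; the file name and predictor names (TDRP awareness, perceived likelihood of DWI detection, perceived surcharge severity, self-reported impaired driving) are assumptions for illustration, not the study's actual survey items or code.

```python
# Hypothetical sketch: logistic regression for identifying high-risk drivers
# to target with media campaigns. File and column names are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

survey = pd.read_csv("survey.csv")  # assumed: one row per surveyed driver

# Outcome: self-reported impaired driving (1 = yes, 0 = no).
# Predictors: TDRP awareness, perceived likelihood of DWI detection,
# perceived surcharge severity, plus basic demographics.
model = smf.logit(
    "impaired_driving ~ tdrp_aware + perceived_detection"
    " + perceived_surcharge + age + C(gender)",
    data=survey,
)
result = model.fit()
print(result.summary())
print(np.exp(result.params))  # odds ratios for easier interpretation
```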
Abstract:
Purpose. Understanding siblings' experiences after a major childhood burn injury was the purpose of this mixed method, qualitative dominant study. The following research questions guided this project: How do siblings describe the impact of a major childhood burn injury experience? How do sibling relationship factors of warmth/closeness, relative status/power, conflict, and rivalry further clarify their relationship and their experience after a major burn injury?

Methods. A mixed method, qualitative dominant design was implemented to understand the sibling experiences in a family with a child suffering from a major burn injury. Informants were selected from patients with childhood burn injuries attending the reconstructive clinic at a Gulf coast children's specialty hospital. The qualitative portion used the life story method, a narrative process, to portray the long-term impact on sibling relationships. A "case" represents a family unit and could be composed of one or multiple family members. Participants from 22 cases (N = 40 participants) were interviewed. Interviews were conducted in person and via telephone. The quantitative portion, or the embedded part of this mixed method design, used the Sibling Relationship Questionnaire Revised (SRQ-R) to conduct an additional structured interview and acquire scoring data. It was postulated that the SRQ-R would provide another perspective on the sibling experience and expand the qualitative data analysis. Thematic analysis was implemented on the qualitative interview data, including the qualitative data from the interviews structured on the SRQ-R. Additionally, scores on the SRQ-R were tabulated to further describe the cases.

Results. The overall thematic pattern for the sibling relationship in families having a child with a major burn injury was that of normalization. Areas of normalization as well as the process of adjustment were the major themes. Areas of normalization were found in play and other activities, in school and work, and in family relations with their siblings and their parents. The process of adjustment in the sibling relationship was described as varied; it involved school and work re-entry and could even change a sibling's life perspective. Further analysis included an examination of the cases in which more than one person was interviewed and completed the SRQ-R. Participants from five (n = 11) of six cases (n = 14) scored above 3.0 on the five-point scale on the Warmth/Closeness construct, indicating they perceived the sibling relationship as close. Five participants scored high on the Conflict construct and four participants scored high on the Rivalry construct. Finally, Relative Status/Power was low or negative in the six cases (n = 13).

Conclusions/implications. These findings suggest the importance of returning to normalcy for many of the families and the significance of sibling relationships in that process. Some of these families were able to use this major life event in a positive way to promote normalization.
Abstract:
Candida albicans is the most common opportunistic fungal pathogen of humans. The balance between commensal and pathogenic C. albicans is maintained largely by phagocytes of the innate immune system. Analysis of transcriptional changes after macrophage phagocytosis indicates the C. albicans response is broadly similar to starvation, including up-regulation of alternate carbon metabolism. Systems known and suspected to be part of acetate/acetyl-CoA metabolism were also up-regulated, importantly the ACH and ACS genes, which manage acetate/acetyl-CoA interconversion, and the nine-member ATO gene family, thought to participate in transmembrane acetate transport and also linked to the process of environmental alkalinization.

Studies of the roles of Ach, Acs1 and Acs2 in alternate carbon metabolism revealed a substantial role for Acs2 and lesser, but distinct, roles for Ach and Acs1. Deletion mutants were made in C. albicans and were phenotypically evaluated both in vitro and in vivo. Loss of Ach function resulted in mild growth defects on ethanol and acetate and no significant attenuation of virulence in a disseminated mouse model of infection. While loss of Acs1 did not produce any significant phenotypes, loss of Acs2 greatly impaired growth on multiple carbon sources, including glucose, ethanol and acetate. We also concluded that ACS1 and ACS2 likely comprise an essential gene pair. Expression analyses indicated that ACS2 is the predominant form under most growth conditions.

ATO gene function had been linked to the process of environmental alkalinization, an ammonium-mediated phenomenon described here for the first time in C. albicans. During growth in glucose-poor, amino acid-rich conditions, C. albicans can rapidly change its extracellular pH. This process was glucose-repressible and was accompanied by hyphal formation and changes in colony morphology. We showed that introduction of the ATO1(G53D) point mutant to C. albicans blocked alkalinization, as did over-expression of C. albicans ATO2, the only C. albicans ATO gene to lack the conserved N-terminal domain. A screen for alkalinization-deficient mutants revealed that ACH1 is essential for alkalinization. However, addition of acetate to the media restored alkalinization to the ach1 mutant. We proposed a model of ATO function in which Ato proteins regulate the cellular co-export of ammonium and acetate.
Abstract:
Approximately one-third of US adults have metabolic syndrome, the clustering of cardiovascular risk factors that includes hypertension, abdominal adiposity, elevated fasting glucose, low high-density lipoprotein (HDL) cholesterol and elevated triglyceride levels. While the definition of metabolic syndrome continues to be much debated among leading health research organizations, individuals with metabolic syndrome have an increased risk of developing cardiovascular disease and/or type 2 diabetes. A recent report by the Henry J. Kaiser Family Foundation found that the US spent $2.2 trillion (16.2% of the Gross Domestic Product) on healthcare in 2007 and cited that, among other factors, chronic diseases, including type 2 diabetes and cardiovascular disease, are large contributors to this growing national expenditure. Employers, the leading providers of health insurance, bear a substantial portion of this cost. In light of this, many employers have begun implementing health promotion efforts to counteract these rising costs. However, evidence-based practices, uniform guidelines and policy do not exist for this setting with regard to the prevention of metabolic syndrome risk factors as defined by the National Cholesterol Education Program (NCEP) Adult Treatment Panel III (ATP III). Therefore, the aim of this review was to determine the effects of worksite-based behavior change programs on reducing the risk factors for metabolic syndrome in adults. Using relevant search terms, OVID MEDLINE was used to search the peer-reviewed literature published since 1998, resulting in 23 articles meeting the inclusion criteria for the review. The American Dietetic Association's Evidence Analysis Process was used to abstract data from selected articles, assess the quality of each study, compile the evidence, develop a summarized conclusion, and assign a grade based upon the strength of supporting evidence. The results revealed that participating in a worksite-based behavior change program may be associated with improvement in one or more metabolic syndrome risk factors. Programs that delivered a higher dose (>22 hours) over a shorter duration (<2 years) using two or more behavior-change strategies were associated with more metabolic risk factors being positively impacted. A Conclusion Grade of III was obtained for the evidence, indicating that studies were of weak design or results were inconclusive due to inadequate sample sizes, bias and lack of generalizability. These results provide some support for the continued use of worksite-based health promotion; further research is needed to determine if multi-strategy, intense behavior change programs targeting multiple risk factors are able to sustain health improvements in the long term.
Abstract:
Objective: In this secondary data analysis, three statistical methodologies were implemented to handle cases with missing data in a motivational interviewing and feedback study. The aim was to evaluate the impact these methodologies have on the data analysis.

Methods: We first evaluated whether the assumption of missing completely at random held for this study. We then conducted a secondary data analysis using a mixed linear model, handling missing data with three methodologies: (a) complete-case analysis, (b) multiple imputation with an explicit model containing the outcome variables, time, and the time-by-treatment interaction, and (c) multiple imputation with an explicit model containing the outcome variables, time, the time-by-treatment interaction, and additional covariates (e.g., age, gender, smoking status, years in school, marital status, housing, race/ethnicity, and whether participants played on an athletic team). Several comparisons were conducted, including: 1) the motivational interviewing with feedback group (MIF) vs. the assessment-only group (AO), the motivational interviewing only group (MIO) vs. AO, and the feedback-only group (FBO) vs. AO; 2) MIF vs. FBO; and 3) MIF vs. MIO.

Results: We first evaluated the patterns of missingness in this study, which indicated that about 13% of participants showed monotone missing patterns and about 3.5% showed non-monotone missing patterns. We then evaluated the assumption of missing completely at random with Little's MCAR test, which yielded a chi-square statistic of 167.8 with 125 degrees of freedom (p = 0.006), indicating that the data could not be assumed to be missing completely at random. We then compared whether the three strategies reached the same results. For the comparison between MIF and AO, as well as the comparison between MIF and FBO, only the multiple imputation with additional covariates, under uncongenial and congenial models, reached different results. For the comparison between MIF and MIO, all the methodologies for handling missing values produced different results.

Discussion: The study indicated, first, that missingness was crucial in this study and, second, that understanding the model's assumptions was important, since we could not determine whether the data were missing at random or missing not at random. Future research should therefore focus on sensitivity analyses under the missing-not-at-random assumption.
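As an illustration of the pooled multiple-imputation workflow described above (an imputation model containing the outcomes, time, and the time-by-treatment interaction, analyzed with a mixed linear model), the following Python sketch uses statsmodels' chained-equations imputation and pools estimates with Rubin's rules. The file name, column names, and number of imputations are hypothetical placeholders; this is not the authors' actual analysis code.

```python
# Minimal sketch: multiple imputation by chained equations + mixed linear model,
# pooled with Rubin's rules. Data layout and column names are assumed.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

df = pd.read_csv("mi_feedback.csv")      # assumed long format: one row per visit
ids = df["subject_id"]                   # grouping variable, kept out of imputation
imp = mice.MICEData(df.drop(columns=["subject_id"]))

M = 20                                   # number of imputed datasets (assumed)
params, variances = [], []

for m in range(M):
    imp.update_all()                     # one sweep of the chained equations
    completed = imp.data.copy()
    completed["subject_id"] = ids.values # row order is preserved by MICEData
    res = sm.MixedLM.from_formula(
        "outcome ~ time * treatment",    # explicit model (b) from the abstract
        groups="subject_id",
        data=completed,
    ).fit()
    params.append(res.params)
    variances.append(res.bse ** 2)

# Rubin's rules: pooled estimate, within- and between-imputation variance.
params = pd.DataFrame(params)
variances = pd.DataFrame(variances)
pooled = params.mean()
total_var = variances.mean() + (1 + 1 / M) * params.var(ddof=1)
print(pd.DataFrame({"estimate": pooled, "se": np.sqrt(total_var)}))
```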
Abstract:
Life expectancy has consistently increased over the last 150 years due to improvements in nutrition, medicine, and public health. Several studies found that in many developed countries life expectancy continued to rise following a nearly linear trend, contrary to the common belief that the rate of improvement in life expectancy would decelerate and should be fit with an S-shaped curve. Using samples of countries that exhibited a wide range of economic development levels, we explored the change in life expectancy over time by employing both nonlinear and linear models. We then assessed whether there were any significant differences in estimates between linear models when an autocorrelated error structure was assumed. When data did not have a sigmoidal shape, nonlinear growth models sometimes failed to provide meaningful parameter estimates. The existence of an inflection point and asymptotes in the growth models made them inflexible for life expectancy data. In linear models, there was no significant difference in the life expectancy growth rate or future estimates between ordinary least squares (OLS) and generalized least squares (GLS). However, the generalized least squares model was more robust because the data involved time-series variables and the residuals were positively correlated.
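To make the OLS-versus-GLS comparison concrete, here is a minimal Python sketch assuming a simple file with one country's annual life expectancy. statsmodels' GLSAR re-estimates the linear trend under an AR(1) error structure, which is the kind of autocorrelated-error linear model the abstract describes; the file and column names are placeholders.

```python
# Sketch: comparing OLS with GLS under AR(1) errors for a linear
# life-expectancy trend. The data file and column names are assumed.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("life_expectancy.csv")   # assumed columns: year, life_exp
X = sm.add_constant(df["year"])
y = df["life_exp"]

ols = sm.OLS(y, X).fit()

# GLSAR estimates the AR(1) autocorrelation of the residuals and
# re-fits the linear trend under that error structure.
glsar = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=10)

print("OLS trend estimates:")
print(ols.params)
print("GLSAR trend estimates under AR(1) errors:")
print(glsar.params)
print("Estimated AR(1) coefficient:", glsar.model.rho)
```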
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU," lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables.

Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of the time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature.

In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one-value-per-variable paradigm and is widely employed in a host of clinical models and tools; these values are often represented by a number in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to the time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable and constitute the measured observations that are typically available to end users when they review time series data; they are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed to maximize the likelihood of producing a representation of the time series data elements that can distinguish between two or more classes of outcomes.
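As a concrete illustration of turning a time series analysis result into a latent candidate feature, the sketch below computes a simple linear-trend (slope) feature over a fixed observation window for each physiologic variable. The window length, sampling resolution, file layout, and variable names are assumptions for illustration; the manuscripts do not prescribe these exact choices.

```python
# Sketch: deriving a linear-trend latent feature from raw time series
# observations. Window length, resolution, and column names are assumed.
import numpy as np
import pandas as pd

def trend_feature(values, minutes):
    """Least-squares slope (units per minute) over the observation window."""
    values = np.asarray(values, dtype=float)
    minutes = np.asarray(minutes, dtype=float)
    mask = ~np.isnan(values)
    if mask.sum() < 2:
        return np.nan            # not enough points to characterize a trend
    return np.polyfit(minutes[mask], values[mask], deg=1)[0]

# vitals.csv is assumed to hold one row per observation:
# patient_id, minutes_before_reference, variable, value
vitals = pd.read_csv("vitals.csv")
window = vitals[vitals["minutes_before_reference"] <= 60]   # assumed 60-min window

latent = (
    window
    .groupby(["patient_id", "variable"])
    .apply(lambda g: trend_feature(g["value"], g["minutes_before_reference"]))
    .unstack("variable")
    .add_suffix("_trend")
)
print(latent.head())
```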
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU," provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time series based models are infeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit.

Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances.

The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit," presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the receiver operating characteristic curve increased from a baseline of 87% to 98% when the trend analysis was included. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy as compared to the baseline multivariate model, but diminished classification accuracy as compared to when just the trend analysis features were added (i.e., without adding the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that which was achieved by exclusion of the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
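The following sketch illustrates the kind of comparison reported above: a baseline classifier trained on snapshot (one-value-per-variable) features versus one that also receives trend features, evaluated by area under the ROC curve. The feature table, column names, and the choice of logistic regression are assumptions for illustration, not the dissertation's actual pipeline or results.

```python
# Sketch: comparing a snapshot-only model with a snapshot + trend model
# by ROC AUC. Feature table and column names are assumed placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

features = pd.read_csv("features.csv")   # assumed: one row per patient/reference time
snapshot_cols = ["heart_rate", "systolic_bp", "spo2"]
trend_cols = ["heart_rate_trend", "systolic_bp_trend", "spo2_trend"]
y = features["cardiac_arrest"]

def auc_for(columns):
    X_train, X_test, y_train, y_test = train_test_split(
        features[columns], y, test_size=0.3, stratify=y, random_state=0
    )
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X_train, y_train)
    return roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])

print("Snapshot features only:    AUC =", auc_for(snapshot_cols))
print("Snapshot + trend features: AUC =", auc_for(snapshot_cols + trend_cols))
```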
Abstract:
At issue is whether or not isolated DNA is patent eligible under U.S. patent law, and the implications of that determination for public health. The U.S. Patent and Trademark Office has issued patents on DNA since the 1980s, and scientists and researchers have proceeded under that milieu since that time. Today, genetic research and testing related to the human breast cancer genes BRCA1 and BRCA2 is conducted within the framework of seven patents that were issued to Myriad Genetics and the University of Utah Research Foundation between 1997 and 2000. In 2009, suit was filed on behalf of multiple researchers, professional associations and others to invalidate fifteen of the claims underlying those patents. The Court of Appeals for the Federal Circuit, which hears patent cases, has invalidated claims for analyzing and comparing isolated DNA but has upheld claims to isolated DNA. The specific issue of whether isolated DNA is patent eligible is now before the Supreme Court, which is expected to decide the case by year's end. In this work, a systematic review was performed to determine the effects of DNA patents on various stakeholders and, ultimately, on public health, and to provide a legal analysis of the patent eligibility of isolated DNA and the likely outcome of the Supreme Court's decision.

A literature review was conducted to, first, identify the principal stakeholders with an interest in the patent eligibility of the isolated DNA sequences BRCA1 and BRCA2, and second, determine the effect of the case on those stakeholders. Published reports that addressed gene patents, the Myriad litigation, and the implications of gene patents for stakeholders were included. Next, an in-depth legal analysis of the patent eligibility of isolated DNA and methods for analyzing it was performed pursuant to accepted methods of legal research and analysis, based on legal briefs, federal law and jurisprudence, scholarly works and standard-practice legal analysis.

Biotechnology, biomedical and clinical research, access to health care, and personalized medicine were identified as the principal stakeholders and interests. Many experts believe that the patent eligibility of isolated DNA will not greatly affect the biotechnology industry insofar as genetic testing is concerned; unlike therapeutics, genetic testing does not require tremendous resources or lead time. The actual impact on biomedical researchers is uncertain, with greater impact expected for researchers whose work is intended for commercial purposes (versus basic science). The impact on access to health care has been surprisingly difficult to assess; while invalidating gene patents might be expected to decrease the cost of genetic testing and improve access through more laboratories and physicians' offices that provide the test, a 2010 study on the actual impact was inconclusive. As for personalized medicine, many experts believe that the availability of personalized medicine is ultimately a public policy issue for Congress, not the courts.

Based on the legal analysis performed in this work, this writer believes the Supreme Court is likely to invalidate patents on isolated DNA whose sequences are found in nature, because these gene sequences are a basic tool of scientific and technologic work and patents on isolated DNA would unduly inhibit their future use. Patents on complementary DNA (cDNA) are expected to stand, however, based on the human intervention required to craft cDNA and the product's distinction from the DNA found in nature.
In the end, the solution as to how to address gene patents may lie not in jurisprudence but in a fundamental change in business practices to provide expanded licenses that better address the interests of the several stakeholders.
Abstract:
A multivariable approach utilising bulk sediment, planktonic Foraminifera and siliceous phytoplankton has been used to reconstruct rapid variations in palaeoproductivity in the Peru-Chile Current System off northern Chile for the past 19,000 cal. yr. During the early deglaciation (19,000-16,000 cal. yr BP), our data point to the strongest upwelling intensity and highest productivity of the past 19,000 cal. yr. The late deglaciation (16,000-13,000 cal. yr BP) is characterised by a major change in the oceanographic setting, warmer water masses and weaker upwelling at the study site. Lowest productivity and weakest upwelling intensity are observed from the early to the middle Holocene (13,000-4,000 cal. yr BP), and the beginning of the late Holocene (<4,000 cal. yr BP) is marked by increasing productivity, mainly driven by silicate-producing organisms. Changes in productivity and upwelling intensity in our record may have resulted from a large-scale compression and/or displacement of the South Pacific subtropical gyre during more productive periods, in line with a northward extension of the Antarctic Circumpolar Current and increased advection of Antarctic water masses with the Peru-Chile Current. The corresponding increase in hemispheric thermal gradient and wind stress induced stronger upwelling. During the periods of lower productivity, this scenario probably reversed.
Abstract:
In the Persian Gulf and the Gulf of Oman, marl forms the primary sediment cover, particularly on the Iranian side. A detailed quantitative description of the sediment components > 63 µm has been attempted in order to establish the regional distribution of the most important constituents as well as the criteria governing marl sedimentation in general. During the course of the analysis, the sand fraction from about 160 bottom-surface samples was split into five phi fractions, and 500 to 800 grains were counted in each individual fraction. The grains were cataloged in up to 40 grain-type categories. The gravel fraction was counted separately and the values calculated as weight percent.

Basic to understanding the mode of formation of the marl sediment is the "rule" of independent availability of component groups. It states that the sedimentation of different component groups takes place independently, and that variation in the quantity of one component is independent of the presence or absence of other components. This means, for example, that different grain size spectra are not necessarily developed through transport sorting. In the Persian Gulf they are more likely the result of differences in the amount of clay-rich fine sediment brought into the restricted mouth areas of the Iranian rivers. These local increases in clayey sediment dilute the autochthonous, for the most part carbonate, coarse fraction. This also explains the frequent facies changes from carbonate to clayey marl.

The main constituent groups of the coarse fraction are faecal pellets and lumps, the non-carbonate mineral components, the Pleistocene relict sediment, the benthonic biogenic components and the plankton. Faecal pellets and lumps are formed through grain size transformation of fine sediment. Higher percentages of these components can be correlated with large amounts of fine sediment and organic C. No discernible change takes place in carbonate minerals as a result of digestion and faecal pellet formation. The non-carbonate sand components originate from several unrelated sources and can be distinguished by their different grain size spectra as well as by other characteristics. The Iranian rivers supply the greatest amounts (well sorted fine sand), and their quantitative variations can be used to trace fine-sediment transport directions. Similar mineral maxima in the sediment of the Gulf of Oman mark the path of the Persian Gulf outflow water. Far out from the coast, the basin bottoms in places contain abundant relict minerals (poorly sorted medium sand) and localized areas of reworked salt dome material (medium sand to gravel). Wind transport produces only a minimal "background value" of mineral components (very fine sand).

Biogenic and non-biogenic relict sediments can be placed in separate component groups with the help of several petrographic criteria. Part of the relict sediment (well sorted fine sand) is allochthonous and was derived from the terrigenous sediment of river mouths. The main part (coarse, poorly sorted sediment), however, was derived from the late Pleistocene and forms a quasi-autochthonous cover over wide areas which receive little recent sedimentation. Bioturbation results in a mixing of the relict sediment with the overlying younger sediment; vertical sediment displacements of more than 2.5 m have been observed. This vertical mixing of relict sediment is also partially responsible for the present-day grain size anomalies (coarse sediment in deep water) found in the Persian Gulf.
The mainly aragonitic components forming the relict sediment show a finely subdivided facies pattern reflecting the paleogeography of carbonate tidal flats dating from the post-Pleistocene transgression. Standstill periods are reflected at 110-125 m (shelf break), 64-61 m and 53-41 m (e.g., coarse-grained quartz and oolite concentrations), and at 25-30 m. Comparing these depths to similar occurrences on other shelf regions (e.g., the Timor Sea) leads to the conclusion that at this time minimal tectonic activity was taking place in the Persian Gulf. The Pleistocene climate, as evidenced by the absence of Iranian river sediment, was probably drier than the present-day Persian Gulf climate.

Foremost among the benthonic biogenic components are the foraminifera and mollusks. When a ratio is set up between the two, it can be seen that each group is very sensitive to bottom type, i.e., the production of benthonic mollusks increases when a stable (hard) bottom is present, whereas the foraminifera favour a soft bottom. In this way, regardless of the grain size, areas with high and low rates of recent sedimentation can be sharply defined. The almost complete absence of mollusks in water deeper than 200 to 300 m gives a rough sedimentologic water-depth indicator. The sum of the benthonic foraminifera and mollusks was used as a relatively constant reference value for the investigation of many other sediment components. The ratio between arenaceous foraminifera and those with carbonate shells shows a direct relationship to the amount of coarse-grained material in the sediment, as the frequency of arenaceous foraminifera depends heavily on the availability of sand grains. The nearness of "open" coasts (Iranian river mouths) is reflected directly in the high percentage of plant remains, and indirectly by the increased numbers of ostracods and vertebrates. Plant fragments do not reach their ultimate point of deposition in a free-swimming state, but are transported along with the remainder of the terrigenous fine sediment. The echinoderms (mainly echinoids in the West Basin and ophiuroids in the Central Basin) attain their maximum development at the greatest depth reached by the action of the largest waves. This depth varies, depending on the exposure of the slope to the waves, between 12 to 14 m and 30 to 35 m. Corals and bryozoans have proved to be good indicators of stable, unchanging bottom conditions. Although bryozoans and alcyonarian spicules are independent of water depth, scleractinians thrive only above 25 to 30 m. The beginning of recent reef growth (restricted by low winter temperatures) was seen in only a single area, on a shoal under 16 m of water.

The coarse plankton fraction was studied primarily through the use of a plankton-benthos ratio. The increase in planktonic foraminifera with increasing water depth is here heavily masked by the "adjacent sea effect" of the Persian Gulf: for the most part the foraminifera have drifted in from the Gulf of Oman. In contrast, the planktonic mollusks are able to colonize the entire Persian Gulf water body. Their share of the plankton-benthos ratio always increases with water depth and thereby gives a reliable picture of local water-depth variations. This holds true to a depth of around 400 m (corresponding to 80-90% plankton). This water-depth effect can be removed by graphical analysis, allowing the percentage of planktonic mollusks per total sample to be used as a reference base for relative sedimentation rate (sedimentation index).
These sedimentation-index values vary between 1 and >1000 and thereby agree well with all the other lines of evidence. The "pteropod ooze" facies is thus markedly dependent on the sedimentation rate and can theoretically develop at any depth greater than 65 m (proven at 80 m); it should certainly no longer be thought of as a "deep sea" sediment. Based on the component distribution diagrams, grain size and carbonate content, the sediments of the Persian Gulf and the Gulf of Oman can be grouped into five provisional facies divisions (Chapt. 19). Particularly noteworthy among these are, first, the fine-grained clayey marl facies occupying the nine narrow outflow areas of rivers and, second, the coarse-grained, high-carbonate marl facies rich in relict sediment, which covers wide sediment-poor areas of the basin bottoms. Sediment transport is for the most part restricted to grain sizes < 150 µm and in shallow water is largely coast-parallel, driven by wave action and at times supplemented by tidal currents. Below the wave base, gravity transport prevails. The only current capable of moving sediment is the Persian Gulf outflow water in the Gulf of Oman.
Abstract:
We present a 3000-yr rainfall reconstruction from the Galápagos Islands that is based on paired biomarker records from the sediment of El Junco Lake. The Galápagos Islands are located in the eastern equatorial Pacific, and their climate is governed by movements of the Intertropical Convergence Zone (ITCZ) and the El Niño-Southern Oscillation (ENSO). We use a novel method for reconstructing past ENSO- and ITCZ-related rainfall changes through analysis of molecular and isotopic biomarker records representing several types of plants and algae that grow under differing climatic conditions. We propose that δD values of dinosterol, a sterol produced by dinoflagellates, record changes in mean rainfall in El Junco Lake, while δD values of C34 botryococcene, a hydrocarbon unique to the green alga Botryococcus braunii, record changes in rainfall associated with moderate-to-strong El Niño events. We use these proxies to infer changes in mean rainfall and El Niño-related rainfall over the past 3000 yr. During periods in which the inferred change in El Niño-related rainfall opposed the change in mean rainfall, we infer changes in the amount of ITCZ-related rainfall. Simulations with an idealized isotope hydrology model of El Junco Lake help illustrate the interpretation of these proxy reconstructions. Opposing changes in El Niño- and ITCZ-related rainfall appear to account for several of the largest inferred hydrologic changes in El Junco Lake. We propose that these reconstructions can be used to infer changes in the frequency and/or intensity of El Niño events and changes in the position of the ITCZ in the eastern equatorial Pacific over the past 3000 yr. Comparison with El Junco Lake sediment grain size records indicates general agreement of inferred rainfall changes over the late Holocene.