952 results for log-ratio analysis


Relevance:

30.00%

Publisher:

Abstract:

Hepatitis B virus (HBV) is a significant cause of liver disease and related complications worldwide. Both injecting and non-injecting drug users are at increased risk of contracting HBV infection. Scientific evidence suggests that drug users have a subnormal response to HBV vaccination and that their seroprotection rates are lower than those in the general population, potentially due to vaccine factors, host factors, or both. The purpose of this systematic review is to examine the rates of seroprotection following HBV vaccination in drug-using populations and to conduct a meta-analysis to identify the factors associated with varying seroprotection rates. Seroprotection is defined as developing an anti-HBs antibody level of ≥ 10 mIU/ml after receiving the HBV vaccine. Original research articles were searched using online databases and the reference lists of shortlisted articles. HBV vaccine intervention studies reporting seroprotection rates in drug users and published in English during or after 1989 were eligible. Of 235 citations reviewed, 11 studies were included in this review. The reported seroprotection rates ranged from 54.5% to 97.1%. Combination vaccine (HAV and HBV) (RR 12.91, 95% CI 2.98-55.86, p = 0.003), measurement of anti-HBs with microparticle immunoassay (RR 3.46, 95% CI 1.11-10.81, p = 0.035), and anti-HBs antibody measurement at 2 months after the last HBV vaccine dose (RR 4.11, 95% CI 1.55-10.89, p = 0.009) were significantly associated with higher seroprotection rates. Although statistically nonsignificant, mean age > 30 years, higher prevalence of anti-HBc and anti-HIV antibodies in the sample population, and current drug use (not in drug rehabilitation treatment) were strongly associated with decreased seroprotection rates. The proportion of injecting drug users, vaccine dose, and accelerated vaccine schedule were not predictors of heterogeneity across studies.
The studies examined in this review were significantly heterogeneous (Q = 180.850, p < 0.001), and the factors identified should be considered when comparing immune response across studies. The combination vaccine showed promising results; however, its effectiveness compared to the standard HBV vaccine needs to be examined systematically. Immune response in drug users can possibly be improved by the use of bivalent vaccines, booster doses, and improved vaccine completion rates through integrated public programs and incentives.
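The heterogeneity statistic quoted above (Q = 180.850) is Cochran's Q. As a minimal sketch of how it is computed under a fixed-effect inverse-variance model, the snippet below pools hypothetical log risk ratios (invented for illustration, not data from the review):

```python
import numpy as np

def cochran_q(effects, variances):
    """Cochran's Q statistic for between-study heterogeneity
    under a fixed-effect inverse-variance model."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)       # fixed-effect pooled estimate
    q = np.sum(w * (effects - pooled) ** 2)        # weighted squared deviations
    return pooled, q

# Hypothetical log risk ratios and their variances for three studies
log_rr = [np.log(1.2), np.log(2.0), np.log(0.8)]
var = [0.05, 0.10, 0.08]
pooled, q = cochran_q(log_rr, var)
```

Under homogeneity Q follows a chi-square distribution with (number of studies − 1) degrees of freedom, which is how the review's p-value would be obtained.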

Relevance:

30.00%

Publisher:

Abstract:

The determination of the size and power of a test is a vital part of clinical trial design. This research focuses on the simulation of clinical trial data with time-to-event as the primary outcome, and investigates the impact of different recruitment patterns and time-dependent hazard structures on the size and power of the log-rank test. A non-homogeneous Poisson process is used to simulate entry times according to the different accrual patterns, and a Weibull distribution is employed to simulate survival times according to the different hazard structures. The size of the log-rank test is estimated by simulating survival times with identical hazard rates in the treatment and control arms, yielding a hazard ratio of one. Powers of the log-rank test at specific values of the hazard ratio (≠ 1) are estimated by simulating survival times with different, but proportional, hazard rates for the two arms. Different shapes (constant, decreasing, or increasing) of the Weibull hazard function are also considered to assess the effect of hazard structure on the size and power of the log-rank test.
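A minimal sketch of the null-case simulation described above, under simplified assumptions (simultaneous accrual rather than a non-homogeneous Poisson entry process, and uniform censoring invented for illustration): survival times are drawn from Weibull distributions with identical hazards in both arms, and the empirical size of the log-rank test is the rejection fraction at the nominal 5% level.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(42)

def logrank_stat(times, events, group):
    """Two-sample log-rank chi-square statistic."""
    order = np.argsort(times)
    times, events, group = times[order], events[order], group[order]
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(times[events == 1]):
        at_risk = times >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((times == t) & (events == 1)).sum()
        d1 = ((times == t) & (events == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n          # observed minus expected events
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e ** 2 / var

def simulate_trial(n_per_arm, shape, scale_ratio=1.0):
    """Weibull survival times with uniform censoring; scale_ratio = 1
    gives a hazard ratio of one (the null case)."""
    raw = np.concatenate([rng.weibull(shape, n_per_arm),
                          rng.weibull(shape, n_per_arm) * scale_ratio])
    censor = rng.uniform(0.5, 3.0, 2 * n_per_arm)
    events = (raw <= censor).astype(int)
    return np.minimum(raw, censor), events, np.repeat([0, 1], n_per_arm)

# Empirical size: rejection rate under the null at the nominal 5% level
crit = chi2.ppf(0.95, df=1)
size = np.mean([logrank_stat(*simulate_trial(50, shape=1.5)) > crit
                for _ in range(200)])
```

Setting `scale_ratio` away from one in `simulate_trial` would give the empirical power at that hazard ratio, and varying `shape` explores the constant, decreasing, and increasing hazard cases.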

Relevance:

30.00%

Publisher:

Abstract:

An interim analysis is usually applied in later phase II or phase III trials to find convincing evidence of a significant treatment difference, which may allow the trial to be terminated earlier than planned. This can save patient resources and shorten drug development and approval time; ethical and economic considerations also motivate stopping a trial early. In clinical trials involving eyes, ears, knees, arms, kidneys, lungs, and other clustered treatments, data may include distribution-free random variables with both matched and unmatched subjects in one study. It is important to properly include both types of subjects in the interim and final analyses so that the maximum efficiency of statistical and clinical inference can be obtained at the different stages of the trial. So far, no publication has applied a statistical method for distribution-free data with matched and unmatched subjects to the interim analysis of clinical trials. In this simulation study, the hybrid statistic was used to estimate empirical powers and empirical type I errors among simulated datasets with different sample sizes, effect sizes, correlation coefficients for matched pairs, and data distributions in the interim and final analyses, using four different group sequential methods. Empirical powers and empirical type I errors were also compared to those estimated by the meta-analysis t-test on the same simulated datasets. Results from this simulation study show that, compared to the meta-analysis t-test commonly used for normally distributed observations, the hybrid statistic has greater power for data observed from normally, log-normally, and multinomially distributed random variables with matched and unmatched subjects and with outliers. Power rose with increasing sample size, effect size, and correlation coefficient for the matched pairs.
In addition, lower type I errors were observed when estimated using the hybrid statistic, which indicates that this test is also conservative for data with outliers in the interim analysis of clinical trials.
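The hybrid statistic itself is not specified in the abstract, so the sketch below uses an ordinary two-sample t-test as a stand-in to illustrate the general recipe for estimating empirical type I error and power by simulation; the sample size, effect size, and number of simulations are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def empirical_rate(effect, n=60, n_sims=500, alpha=0.05):
    """Fraction of simulated trials rejecting H0. With effect = 0 this
    estimates the empirical type I error; with effect > 0, the empirical
    power. The t-test is a stand-in for the paper's hybrid statistic."""
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(effect, 1.0, n)
        if ttest_ind(control, treated).pvalue < alpha:
            hits += 1
    return hits / n_sims

type1 = empirical_rate(effect=0.0)  # should hover near alpha
power = empirical_rate(effect=0.5)  # rises with effect size and n
```

Repeating this over different distributions, correlation structures, and group sequential stopping boundaries is the pattern the study describes.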

Relevance:

30.00%

Publisher:

Abstract:

Background. Breast cancer is the most frequently diagnosed cancer and the leading cause of cancer death among females, accounting for 23% (1.38 million) of total new cancer cases and 14% (458,400) of total cancer deaths in 2008. [1] Triple-negative breast cancer (TNBC) is an aggressive phenotype comprising 10-20% of all breast cancers (BCs). [2-4] TNBCs show absence of estrogen, progesterone, and HER2/neu receptors on the tumor cells; because these receptors are absent, TNBCs are not candidates for targeted therapies. Circulating tumor cells (CTCs) are observed in the blood of breast cancer patients even at early stages (stage I and II) of the disease, and immunological and molecular analyses can be used to detect them. These cells may explain relapses in early-stage breast cancer patients even after adequate local control. CTC detection may be useful in identifying patients at risk for disease progression, and therapies targeting CTCs may improve outcomes in patients harboring them. Methods. In this study we evaluated 80 patients with TNBC enrolled in a larger prospective study conducted at MD Anderson Cancer Center, in order to determine whether the presence of circulating tumor cells is a significant prognostic factor for relapse-free and overall survival. Patients with metastatic disease at presentation were excluded. CTCs were assessed using the CellSearch System™ (Veridex, Raritan, NJ) and were defined as nucleated cells lacking CD45 but expressing cytokeratins 8, 18, or 19. The distribution of patient and tumor characteristics was analyzed using the chi-square test and Fisher's exact test. The log-rank test and Cox regression analysis were applied to establish the association of circulating tumor cells with relapse-free and overall survival. Results. The median age of the study participants was 53 years.
The median duration of follow-up was 40 months. Eighty-eight percent of patients were newly diagnosed (without a previous history of breast cancer), and 60% were chemo-naïve (had not received chemotherapy at the time of their blood draw for CTC analysis). Tumor characteristics such as stage (P = 0.40), tumor size (P = 0.69), sentinel nodal involvement (P = 0.87), axillary lymph node involvement (P = 0.13), adjuvant therapy (P = 0.83), and high histological grade (P = 0.26) did not predict the presence of CTCs. However, CTCs predicted worse relapse-free survival (log-rank P = 0.04 at ≥1 CTC, P = 0.02 at ≥2 CTCs, and P < 0.0001 at ≥3 CTCs) and overall survival (log-rank P = 0.08 at ≥1 CTC, P = 0.01 at ≥2 CTCs, and P = 0.0001 at ≥3 CTCs). Conclusions. The number of circulating tumor cells predicted worse relapse-free survival and overall survival in TNBC patients.
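The chi-square and Fisher's exact tests named in the Methods are available in scipy; a hedged sketch on a hypothetical 2x2 contingency table (invented numbers, not the study's data — rows: nodal involvement absent/present, columns: CTC absent/present):

```python
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 contingency table (not the study's data)
table = [[30, 10],   # node-negative: CTC absent / CTC present
         [25, 15]]   # node-positive: CTC absent / CTC present

chi2_stat, chi2_p, dof, expected = chi2_contingency(table)
odds_ratio, fisher_p = fisher_exact(table)
```

Fisher's exact test is typically preferred when expected cell counts are small; for a 2x2 table the chi-square test has one degree of freedom.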

Relevance:

30.00%

Publisher:

Abstract:

The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU" lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is in defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. 
The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one-value-per-variable paradigm and is widely employed in a host of clinical models and tools; these values are often represented by a number in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The latter two classes are unique to time series data elements. The first of these is the raw data elements: multiple values per variable, constituting the measured observations typically available to end users when they review time series data, often represented as dots on a graph. The final class of data results from performing time series analysis. This class represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood of producing a representation of the time series data elements that can distinguish between two or more classes of outcomes. The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU", provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time-series-based models are infeasible due to the relatively large number of data elements and the complexity of the preprocessing that must occur before data can be presented to the model.
Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each step, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data to conform to a predefined structure specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit", presents the results obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%, and the area under the receiver operating characteristic curve increased from a baseline of 87% to 98% when the trend analysis was included.
In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy compared to the baseline multivariate model, but diminished classification accuracy compared to adding just the trend analysis features (i.e., without the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve performance beyond that achieved by excluding the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
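As a toy sketch of the core idea — collapsing a window of raw time series observations into a single trend latent feature — a least-squares slope can stand in for the trend analysis. The manuscripts' actual operations are not specified here, and the vital-sign values below are invented for illustration:

```python
import numpy as np

def trend_feature(values, times=None):
    """Least-squares slope over a window of observations: one latent
    'trend' feature summarizing many raw time series data points."""
    values = np.asarray(values, dtype=float)
    if times is None:
        times = np.arange(len(values), dtype=float)
    slope, _intercept = np.polyfit(times, values, 1)
    return slope

# Hypothetical systolic blood pressure readings over a monitoring window
sbp = [112, 110, 109, 107, 104, 101]
print(trend_feature(sbp))  # negative slope: values are deteriorating
```

In the framework described above, this scalar would join the multivariate data and clinical latent features as one candidate input to the predictive model, with the window duration and resolution fixed during the design phase.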

Relevance:

30.00%

Publisher:

Abstract:

The genomic era brought about by recent advances in next-generation sequencing technology makes genome-wide scans for natural selection a reality. Currently, almost all statistical tests and analytical methods for identifying genes under selection operate on an individual-gene basis. Although these methods have the power to identify genes subject to strong selection, they have limited power for discovering genes targeted by moderate or weak selection forces, which are crucial for understanding the molecular mechanisms of complex phenotypes and diseases. The recent availability and rapidly growing completeness of gene network and protein-protein interaction databases open avenues for enhancing the power of discovering genes under natural selection. The aim of this thesis is to explore and develop normal-mixture-model-based methods that leverage gene network information to enhance the power of discovering the targets of natural selection. The results show that the developed statistical method, which combines the posterior log odds of the standard normal mixture model and the Guilt-By-Association score of the gene network in a naïve Bayes framework, has the power to discover genes under moderate or weak selection that bridge the genes under strong selection, aiding our understanding of the biology underlying complex diseases and related natural selection phenotypes.
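The naïve Bayes combination described above can be sketched in a few lines: under conditional independence, the network evidence enters as an additive log likelihood ratio on top of the mixture-model log odds. The numbers are hypothetical, and the Guilt-By-Association log likelihood ratio is assumed to be precomputed from the network:

```python
import numpy as np

def combined_log_odds(mixture_log_odds, gba_log_lr):
    """Naive-Bayes combination: assuming the mixture-model evidence and
    the network (Guilt-By-Association) evidence are conditionally
    independent, the posterior log odds of 'under selection' is their sum."""
    return mixture_log_odds + gba_log_lr

def to_posterior_prob(log_odds):
    """Convert log odds back to a posterior probability (logistic map)."""
    return 1.0 / (1.0 + np.exp(-log_odds))

# Hypothetical gene: weak evidence on its own, strengthened by neighbors
alone = combined_log_odds(0.4, 0.0)
with_network = combined_log_odds(0.4, 1.2)
```

This additivity is what lets a gene with only moderate per-gene evidence cross the discovery threshold when its network neighbors are themselves under strong selection.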

Relevance:

30.00%

Publisher:

Abstract:

It is well known that an identification problem exists in the analysis of age-period-cohort data because of the relationship among the three factors (date of birth + age at death = date of death). There are numerous suggestions about how to analyze such data, but no single solution has been satisfactory. The purpose of this study is to provide another analytic method by extending Cox's life-table regression model with time-dependent covariates. The new approach has the following features: (1) it is based on the conditional maximum likelihood procedure using the proportional hazards function described by Cox (1972), treating the age factor as the underlying hazard in order to estimate the parameters for the cohort and period factors; (2) the model is flexible, so that both the cohort and period factors can be treated as dummy or continuous variables, and parameter estimates can be obtained for numerous combinations of variables, as in a regression analysis; and (3) the model is applicable even when the time periods are unequally spaced. Two specific models are considered to illustrate the new approach and are applied to U.S. prostate cancer data. We find significant differences between all cohorts and a significant period effect for both whites and nonwhites. The underlying hazard increases exponentially with age, indicating that old people have a much higher risk than young people. A log transformation of relative risk shows that prostate cancer risk declined in recent cohorts under both models. However, prostate cancer risk declined 5 cohorts (25 years) earlier for whites than for nonwhites under the period factor model (0 0 0 1 1 1 1). These results are similar to the previous study by Holford (1983). The new approach offers a general method for analyzing age-period-cohort data without imposing any arbitrary constraint in the model.
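The identification problem can be demonstrated numerically: because cohort = period − age, a design matrix containing all three as linear terms is rank-deficient, which is what motivates absorbing the age factor into the baseline hazard. A small sketch with synthetic data (invented for illustration):

```python
import numpy as np

# cohort = period - age makes the three linear effects unidentifiable:
# the design matrix with all three terms loses a rank.
rng = np.random.default_rng(1)
age = rng.integers(40, 80, size=100).astype(float)       # age at death
period = rng.integers(1950, 2000, size=100).astype(float)  # date of death
cohort = period - age                                    # date of birth

X_full = np.column_stack([np.ones(100), age, period, cohort])
X_two = np.column_stack([np.ones(100), age, period])

rank_full = np.linalg.matrix_rank(X_full)  # 3, not 4: rank-deficient
rank_two = np.linalg.matrix_rank(X_two)    # 3: full column rank
```

Dropping age from the linear predictor and letting it define the baseline hazard, as the Cox extension above does, restores a full-rank estimation problem for the cohort and period effects.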

Relevance:

30.00%

Publisher:

Abstract:

Stable isotope analysis was performed on the structural carbonate of fish bone apatite from early and early middle Eocene samples (~55 to ~45 Ma) recently recovered from the Lomonosov Ridge by Integrated Ocean Drilling Program Expedition 302 (the Arctic Coring Expedition). The δ18O values of the Eocene samples ranged from -6.84 per mil to -2.96 per mil (Vienna Pee Dee Belemnite), with a mean value of -4.89 per mil, compared to 2.77 per mil for a Miocene sample in the overlying section. An average salinity of 21 to 25 per mil was calculated for the Eocene Arctic, compared to 35 per mil for the Miocene, with lower salinities during the Paleocene-Eocene Thermal Maximum, the Azolla event at ~48.7 Ma, and a third, previously unidentified event at ~47.6 Ma. At the Azolla event, where the organic carbon content of the sediment reaches a maximum, a positive δ13C excursion was observed, indicating unusually high productivity in the surface waters.
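The salinity estimate above rests on a δ18O-salinity relation. As a heavily simplified stand-in for the paper's actual calculation (which also involves temperature and carbonate-water fractionation), a linear two-endmember mixing line can convert a water δ18O value to salinity; the endmember values below are illustrative assumptions only, not the study's calibration:

```python
def salinity_from_d18o(d18o_water, fresh_d18o=-25.0, fresh_sal=0.0,
                       ocean_d18o=0.0, ocean_sal=35.0):
    """Linear two-endmember mixing: interpolate salinity between an
    assumed freshwater endmember and open-ocean seawater. All endmember
    values are illustrative assumptions, not the study's calibration."""
    frac_ocean = (d18o_water - fresh_d18o) / (ocean_d18o - fresh_d18o)
    return fresh_sal + frac_ocean * (ocean_sal - fresh_sal)

print(salinity_from_d18o(0.0))    # open-ocean endmember -> 35.0
print(salinity_from_d18o(-10.0))  # a brackish, Arctic-like value
```

The more negative the water δ18O, the larger the inferred freshwater fraction and the lower the salinity, which is the qualitative logic behind the Eocene-versus-Miocene contrast reported above.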

Relevance:

30.00%

Publisher:

Abstract:

This dataset characterizes the evolution of western African precipitation as indicated by marine sediment geochemical records, in comparison to transient simulations with the CCSM3 global climate model throughout the Last Interglacial (130-115 ka). It contains (1) defined tie-points (age models), newly published stable isotopes of benthic foraminifera, and Al/Si log-ratios for eight marine sediment cores from the western African margin, and (2) annual and seasonal rainfall anomalies (relative to pre-industrial values) for six characteristic latitudinal bands in western Africa simulated by CCSM3 (two transient simulations: one non-accelerated and one accelerated experiment).
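The Al/Si log-ratios in the dataset are an instance of compositional log-ratio analysis: because scanner intensities are relative, the log of an element ratio is the scale-invariant quantity to compare between samples. A minimal sketch with invented count rates (not values from the dataset):

```python
import numpy as np

def log_ratio(al_counts, si_counts):
    """Elemental log-ratio ln(Al/Si). Compositional data sum to a
    constant, so ratios (and their logs) are the scale-invariant way to
    compare element intensities between samples; the log makes the
    measure symmetric around zero."""
    return np.log(np.asarray(al_counts, float) / np.asarray(si_counts, float))

# Hypothetical XRF count rates down-core
al = [1200, 900, 1500]
si = [3000, 3000, 3000]
print(log_ratio(al, si))
```

Higher ln(Al/Si) is commonly read as relatively more terrigenous, Al-bearing input per unit Si, which is why such ratios serve as runoff or dust proxies in records like these.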

Relevance:

30.00%

Publisher:

Abstract:

We present uranium-thorium chronology for a 102 m core through a Pleistocene reef at Tahiti (French Polynesia) sampled during IODP Expedition 310 "Tahiti Sea Level". We employ total and partial dissolution procedures on the older coral samples to investigate the diagenetic overprint of the uranium-thorium system. Although alteration of the U-Th system cannot be robustly corrected, diagenetic trends in the U-Th data, combined with sea-level and subsidence constraints on the growth of the corals, enable the age of critical samples to be constrained to marine isotope stage 9. We use the ages of the corals, together with δ18O-based sea-level histories, to place maximum constraints on possible paleo water depths. These depth constraints are then compared to independent depth estimates based on algal and foraminiferal assemblages, microbioerosion patterns, and sedimentary facies, confirming the accuracy of these paleo water-depth estimates. We also use the fact that corals could not have grown above sea level to place a maximum constraint on the subsidence rate of Tahiti of 0.39 m ka^-1, with the most likely rate being close to the existing minimum estimate of 0.25 m ka^-1.

Relevance:

30.00%

Publisher:

Abstract:

The Greenland ice sheet is accepted as a key factor controlling the Quaternary glacial scenario. However, the origin and mechanisms of the major Arctic glaciation starting at 3.15 Ma and culminating at 2.74 Ma are still controversial. For this phase of intense cooling, Ravelo et al. proposed a complex gradual forcing mechanism. In contrast, our new submillennial-scale paleoceanographic records from the Pliocene North Atlantic suggest a far more precise timing and forcing for the initiation of northern hemisphere glaciation (NHG), since it was linked to a 2-3 °C surface water warming during warm stages from 2.95 to 2.82 Ma. These records support previous models claiming that the final closure of the Panama Isthmus (3.0 to ~2.5 Ma) induced increased poleward salt and heat transport. The associated strengthening of North Atlantic thermohaline circulation and, in turn, an intensified moisture supply to northern high latitudes resulted in the build-up of NHG, finally culminating in the great, irreversible climate crash at marine isotope stage G6 (2.74 Ma). In summary, a two-step threshold mechanism marked the onset of NHG, with glacial-to-interglacial cycles quasi-persistent until today.

Relevance:

30.00%

Publisher:

Abstract:

This study presents the results of high-resolution sedimentological and clay mineralogical investigations of sediments from ODP Sites 908A and 909A/C, located in the central Fram Strait. The objective was to reconstruct the paleoclimate and paleoceanography of the high northern latitudes since the middle Miocene. The sediments are characterised in particular by a distinctive input of ice-rafted material, which has occurred most probably since 6 Ma and very likely since 15 Ma. A change in the source area at 11.2 Ma is clearly marked by variations in clay mineral composition and increasing accumulation rates; this is interpreted as the result of an increase in water mass exchange through the Fram Strait. A further period of increasing exchange, between 4 and 3 Ma, is identified by granulometric investigations and points to a synchronous intensification of deep water production in the North Atlantic during this time interval. A comparison of the components of the coarse and clay fractions clearly shows that the two are not delivered by the same transport process. The input of the clay fraction can be related to transport through sea ice and glaciers, and very likely also through oceanic currents. A reconstruction of source areas for the clay minerals is possible only with some restrictions. High smectite contents in middle and late Miocene sediments indicate a background signal produced by soil formation together with sediment input, possibly originating from the Greenland-Scotland Ridge. The applicability of clay mineral distribution as a climate proxy for the high northern latitudes can be confirmed. Based on a comparison of sediments from Site 909C, characterised by the smectite/illite and chlorite ratios, with regional and global climatic records (oxygen isotopes), a middle Miocene cooling phase between 14.8 and 14.6 Ma can be proposed.
A further cooling phase, between 10 and 9 Ma, parallels the drastic decrease in carbonate sedimentation and preservation in the eastern equatorial Pacific; the modification of seawater and atmosphere chemistry associated with the build-up of equatorial carbonate platforms may represent a possible link. Between 4.8 and 4.6 Ma, the clay mineral distribution indicates a distinct cooling trend in the Fram Strait region that is not accompanied by relevant glaciation, which would otherwise be indicated by the coarse fraction. The intensification of glaciation in the northern hemisphere is distinctly documented by a rapid increase of illite and chlorite starting at 3.3 Ma, which corresponds to trends in oxygen isotope data from the North Atlantic.

Relevance:

30.00%

Publisher:

Abstract:

The Sarcya 1 dive explored a previously unknown, 12-Myr-old submerged volcano, labelled Cornacya. A well-developed fracture pattern is characterised by the following directions: N 170 to N-S, N 20 to N 40, N 90 to N 120, and N 50 to N 70, which corresponds to the fracturation pattern of the Sardinian margin. The sampled lavas exhibit features of shoshonitic suites of intermediate composition and include amphibole- and mica-bearing lamprophyric xenoliths that are geochemically similar to Ti-poor lamproites. Mica compositions reflect chemical exchanges between the lamprophyre and its shoshonitic host rock, suggesting their simultaneous emplacement. Nd compositions of the Cornacya K-rich suite indicate that continental crust was largely involved in the genesis of these rocks. The spatial association of the lamprophyre with the shoshonitic rocks is geochemically similar to K-rich and TiO2-poor igneous suites emplaced in post-collisional settings. Among the shoshonitic rocks, sample SAR 1-01 has been dated at 12.6 ± 0.3 Myr using the 40Ar/39Ar method with a laser microprobe on single grains. The age of the Cornacya shoshonitic suite is similar to that of the Sisco lamprophyre from Corsica, which is likewise located on the western margin of the Tyrrhenian Sea. Thus, the Cornacya shoshonitic rocks, their lamprophyric xenoliths, and the Sisco lamprophyre could represent post-collisional suites emplaced during the lithospheric extension of the Corsica-Sardinia block, just after its rotation and before the opening of the Tyrrhenian Sea. Drilling on the Sardinia margin (ODP Leg 107) shows that the upper levels of the present-day margin (Hole 654) underwent tectonic subsidence before the lower part (Hole 652). The structure of this lower part is interpreted as the result of an eastward migration of the extension during Late Miocene and Early Pliocene times. Data from the Cornacya volcano are in good agreement with this model and provide good chronological constraints on the onset of the phenomenon.