950 results for Lead-time reduction


Relevance: 30.00%

Abstract:

OBJECTIVES The aim of this prospective cohort trial was to perform a cost/time analysis for implant-supported single-unit reconstructions in the digital workflow compared to the conventional pathway. MATERIALS AND METHODS A total of 20 patients were included for rehabilitation with 2 × 20 implant crowns in a crossover study design and treated consecutively, each with customized titanium abutments plus CAD/CAM zirconia suprastructures (test: digital) and with standardized titanium abutments plus PFM crowns (control: conventional). Starting with the prosthetic treatment, clinical and laboratory work steps were analyzed, including costs in Swiss francs (CHF), productivity rates, and cost minimization for first-line therapy. Statistical calculations were performed with the Wilcoxon signed-rank test. RESULTS Both protocols worked successfully for all test and control reconstructions. Direct treatment costs were significantly lower for the digital workflow (1815.35 CHF) than for the conventional pathway (2119.65 CHF; P = 0.0004). In the subprocess evaluation, total laboratory costs were 941.95 CHF for the test group and 1245.65 CHF for the control group (P = 0.003). The clinical dental productivity rate amounted to 29.64 CHF/min (digital) and 24.37 CHF/min (conventional) (P = 0.002). Overall, the cost minimization analysis showed an 18% cost reduction for the digital process. CONCLUSION The digital workflow was more efficient than the established conventional pathway for implant-supported crowns in this investigation.
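
As a concrete illustration of the paired comparison described above, here is a minimal Python sketch of a Wilcoxon signed-rank test on per-patient cost pairs, followed by the cost-minimization arithmetic. The values are hypothetical placeholders, not the trial's data.

# Paired cost comparison with the Wilcoxon signed-rank test.
# The values below are hypothetical per-crown costs in CHF.
from scipy.stats import wilcoxon

digital      = [1790.0, 1822.5, 1805.0, 1831.0, 1799.5, 1840.0]
conventional = [2101.0, 2150.5, 2089.0, 2136.0, 2120.5, 2098.0]

stat, p = wilcoxon(digital, conventional)
print(f"W = {stat:.1f}, p = {p:.4f}")

# Cost-minimization view: relative saving of the digital workflow
saving = 1 - sum(digital) / sum(conventional)
print(f"cost reduction ≈ {saving:.0%}")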

Relevance: 30.00%

Abstract:

Background: Concomitant injuries can occur in up to 90% of cases after a first-time shoulder dislocation. Even though they do not always influence the choice of therapy, careful diagnostic workup is essential. Classification: In the acute situation, conventional imaging in at least two planes (a.p./Neer/possibly axial) before and after reduction is mandatory. Fracture-dislocations must not be overlooked, nor secondarily displaced by the closed reduction maneuver. If osseous glenoid, humeral or combined injuries are present, they should be treated according to stability criteria. This can be done immediately or, after manifest decentering or instability, either by osteosynthesis or as glenohumeral stabilization in the further course. In instability, arthro-CT is in principle the examination of choice for assessing an osseous cause; it also allows evaluation of capsulolabral-ligamentous injury and of a traumatic rotator cuff lesion. The latter, however, is better assessed by arthro-MRI. Discussion: A significant fresh rotator cuff tear, usually a larger or massive one, should be repaired surgically without delay. Medially extending "off-track" Hill-Sachs lesions can be treated with a Hill-Sachs remplissage or, like glenoid defects, with bone augmentation. Long-term results of the Latarjet procedure, 25 years after surgery, show the lowest redislocation rate (<4%), good external rotation, very high patient satisfaction, and degenerative changes comparable to the natural course after a first-time shoulder dislocation without recurrence.

Relevance: 30.00%

Abstract:

The aetiology of childhood cancers remains largely unknown. It has been hypothesized that infections may be involved and that mini-epidemics thereof could result in space-time clustering of incident cases. Most previous studies support spatio-temporal clustering for leukaemia, while results for other diagnostic groups remain mixed. Few studies have corrected for uneven regional population shifts, which can lead to spurious detection of clustering. We examined whether there is space-time clustering of childhood cancers in Switzerland, identifying cases diagnosed at age <16 years between 1985 and 2010 from the Swiss Childhood Cancer Registry. Knox tests were performed on geocoded residence at birth and at diagnosis, separately for leukaemia, acute lymphoid leukaemia (ALL), lymphomas, tumours of the central nervous system, neuroblastomas and soft tissue sarcomas. We used Baker's Max statistic to correct for multiple testing, and randomly sampled time-, sex- and age-matched controls from the resident population to correct for uneven regional population shifts. We observed space-time clustering of childhood leukaemia at birth (Baker's Max p = 0.045) but not at diagnosis (p = 0.98). Clustering was strongest for a spatial lag of <1 km and a temporal lag of <2 years (observed/expected close pairs: 124/98; Knox test p = 0.003). A similar clustering pattern was observed for ALL, though overall evidence was weaker (Baker's Max p = 0.13). Little evidence of clustering was found for other diagnostic groups (p > 0.2). Our study suggests that childhood leukaemia tends to cluster in space-time due to an etiologic factor present in early life.
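
The Knox test itself is simple to state: count case pairs that are close in both space and time, and compare that count to its expectation under no space-time interaction. Below is a minimal Python sketch of a permutation version, with thresholds taken from the abstract (<1 km, <2 years); the coordinates and times are synthetic inputs for illustration, not registry data.

import numpy as np

rng = np.random.default_rng(0)

def knox_stat(x, y, t, d_max=1.0, t_max=2.0):
    """Count pairs of cases closer than d_max (km) and t_max (years)."""
    n = len(t)
    close = 0
    for i in range(n):
        for j in range(i + 1, n):
            near_space = np.hypot(x[i] - x[j], y[i] - y[j]) < d_max
            near_time = abs(t[i] - t[j]) < t_max
            close += near_space and near_time
    return close

def knox_test(x, y, t, n_perm=199):
    """Permutation p-value: shuffle times relative to locations."""
    observed = knox_stat(x, y, t)
    perms = [knox_stat(x, y, rng.permutation(t)) for _ in range(n_perm)]
    p = (1 + sum(s >= observed for s in perms)) / (n_perm + 1)
    return observed, float(np.mean(perms)), p

# Synthetic example: 200 cases over a 50 x 50 km area, 1985-2010
x, y = rng.uniform(0, 50, 200), rng.uniform(0, 50, 200)
t = rng.uniform(1985, 2010, 200)
obs, expected, p = knox_test(x, y, t)
print(f"close pairs: observed {obs}, expected {expected:.1f}, p = {p:.3f}")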

Relevance: 30.00%

Abstract:

Much of the literature on disparities in access to health care among children has focused on measuring absolute and relative differences experienced by racial/ethnic groups and, to a lesser extent, socioeconomic groups. However, it is not clear from the existing literature how disparities in access to care may have changed over time for children, especially following implementation of the State Children's Health Insurance Program (SCHIP). The primary objective of this research was to determine whether disparities in access to care for children across socioeconomic and racial/ethnic groups decreased after SCHIP implementation. Methods commonly used to measure 'health inequalities' were used to measure disparities in access to care, including the population-attributable risk (PAR) and the relative index of inequality (RII). Using these measures, there is evidence of a substantial decrease in socioeconomic disparities in health insurance coverage and, to a lesser extent, in having a usual source of care since the SCHIP program began. There is also evidence of a considerable decrease in non-Hispanic Black disparities in access to care. However, there appears to be a slight increase in disparities in access to care among Hispanic compared to non-Hispanic White children. While there were great improvements in disparities in access to care with the introduction of the SCHIP program, continuing progress may depend on continuation of the SCHIP program or similar targeted health policy programs.
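
For readers unfamiliar with the two summary measures, the sketch below computes them under common textbook definitions, which we are assuming here since the abstract does not spell out its exact formulas: PAR contrasts the overall rate of poor access with the rate in the most-advantaged (reference) group, and RII is obtained by regressing group rates on each group's relative rank in the socioeconomic distribution. All numbers are hypothetical.

import numpy as np

# Hypothetical SES groups ordered from lowest to highest
pop_share = np.array([0.3, 0.4, 0.3])      # population shares (sum to 1)
uninsured = np.array([0.25, 0.15, 0.08])   # rate lacking coverage

# Population-attributable risk: overall rate minus reference-group rate
p_total = float(np.sum(pop_share * uninsured))
par = p_total - uninsured[-1]              # reference = most advantaged
print(f"PAR = {par:.3f} ({par / p_total:.0%} of the overall rate)")

# Relative index of inequality via ridit scores (midpoints of the
# cumulative population distribution)
ridit = np.cumsum(pop_share) - pop_share / 2
slope, intercept = np.polyfit(ridit, uninsured, 1)
rii = intercept / (intercept + slope)      # predicted rate at rank 0 vs rank 1
print(f"RII = {rii:.2f}")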

Relevance: 30.00%

Abstract:

Recent data have shown that the percentage of time spent preparing food has decreased during the past few years, and little information is known about how much time people spend grocery shopping. Food that is pre-prepared is often higher in calories and fat than foods prepared at home from scratch. It has been suggested that, because of the higher energy and total fat levels, increased consumption of pre-prepared foods compared to home-cooked meals can lead to weight gain, which in turn can lead to obesity. Nevertheless, to date no study has examined this relationship. The purpose of this study was to determine (i) the association between adult body mass index (BMI) and time spent preparing meals, and (ii) the association between adult BMI and time spent shopping for food. Data on food habits and body size were collected with a self-report survey of ethnically diverse adults between the ages of 17 and 70 at a large university. The survey was used to recruit people to participate in nutrition or appetite studies. Among other data, the survey collected demographic information (gender, race/ethnicity), minutes per week spent preparing meals and minutes per week spent grocery shopping. Height and weight were self-reported and used to calculate BMI. The study population consisted of 689 subjects, of which 276 were male and 413 were female. The mean age was 23.5 years, with a median age of 21 years. The fraction of subjects with BMI less than 24.9 was 65%; between 25 and 29.9, 26%; and 30 or greater, 9%. Analysis of variance was used to examine associations between food preparation time and BMI.

The results showed no statistically significant association of healthy weight, overweight, or obesity with either food preparation time or grocery shopping time. Among those in the sample who reported preparing food, the mean food preparation times per week for the healthy weight, overweight, and obese groups were 12.8 minutes, 12.3 minutes, and 11.6 minutes, respectively. Similarly, the mean weekly grocery shopping times for the healthy weight, overweight, and obese groups were 60.3 minutes (8.6 min/day), 61.4 minutes (8.8 min/day), and 57.3 minutes (8.2 min/day), respectively. Since this study was conducted on a university campus, it is assumed that most of the sample were students, and a percentage might have been using meal plans on campus and thus would have reported little meal preparation or grocery shopping time. Further research should examine the relationships between meal preparation time and time spent shopping for food in a sample that is more representative of the general public. In addition, most people spent very little time preparing food; thus, health promotion programs for this population need to focus on strategies for preparing quick meals or eating in restaurants/cafeterias.
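
The analysis named above is a one-way ANOVA across BMI categories; a minimal Python sketch follows. Group values are illustrative placeholders, not the survey data.

from scipy.stats import f_oneway

prep_healthy    = [10, 15, 12, 14, 13]   # min/week, BMI < 25
prep_overweight = [11, 13, 12, 14, 11]   # BMI 25-29.9
prep_obese      = [ 9, 13, 12, 11, 13]   # BMI >= 30

F, p = f_oneway(prep_healthy, prep_overweight, prep_obese)
print(f"F = {F:.2f}, p = {p:.3f}")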

Relevance: 30.00%

Abstract:

Objective. Congenital limb defects are common birth defects, occurring in approximately 2-7/10,000 live births. Because congenital limb defects are pervasive throughout all populations, and the conditions profoundly affect quality of life, they represent a significant public health concern. Currently there is a paucity of etiologic information in the literature regarding congenital limb reduction defects (CLRDs), which represents a major limitation in developing treatment strategies as well as identifying high-risk pregnancies.

Additionally, despite the fact that the majority of CLRDs are isolated, most previous studies have not separated them from those occurring as part of a known syndrome or with multiple additional congenital anomalies of unknown etiology. It stands to reason that factors responsible for multiple congenital anomalies that happen to include limb reduction defects may be quite different from those leading to an isolated defect.

As a first step toward gaining etiologic understanding, this cross-sectional study was undertaken to determine the birth prevalence and obtain demographic information about non-syndromic (isolated) CLRDs that occurred in Texas from 1999-2001.

Methods. The study population included all infants/fetuses with isolated CLRDs born in Texas during 1999-2001; the comparison population was all infants born to mothers who were residents of Texas during the same period. The overall birth prevalence of limb reduction defects was determined and adjusted for ethnicity, gender, site of defect (upper limb versus lower limb), county of residence, maternal age and maternal education.

Results. In Texas, the overall birth prevalence of isolated CLRDs was 2.1/10,000 live births (1.5 and 0.6/10,000 live births for upper limb and lower limb, respectively).

The risk of isolated lower limb CLRDs in Texas was significantly lower in females when gender was examined individually (crude prevalence odds ratio of 0.57, 95% CI: 0.36-0.91) as well as in relation to all other variables used in the analysis (adjusted prevalence odds ratio of 0.58, 95% CI: 0.36-0.93).

Harris County (which includes the Houston metropolitan area) had significantly lower risks of all (upper limb and lower limb combined) isolated CLRDs when examined in relation to other counties in Texas, with a crude prevalence odds ratio of 0.4 (95% CI: 0.29-0.72) and an adjusted prevalence odds ratio of 0.50 (95% CI: 0.31-0.80). The risk of isolated upper limb CLRDs was significantly lower in Harris County (crude prevalence odds ratio of 0.45, CI: 0.26-0.76; adjusted prevalence odds ratio of 0.49, CI: 0.28-0.84). This trend toward decreased risk in Harris County was not observed for isolated lower limb reduction defects (adjusted prevalence odds ratio of 0.50, 95% CI: 0.22-1.12).

Conclusions. The birth prevalence of isolated CLRDs in Texas is at the lower end of the range of rates reported by other authors for other states (Alabama, Arkansas, California, Georgia, Hawaii, Iowa, Maryland, Massachusetts, North Carolina, Oklahoma, Utah, Washington) and other countries (Argentina, Australia, Austria, Bolivia, Brazil, Canada, Chile, China, Colombia, Costa Rica, Croatia, Denmark, Ecuador, England, Finland, France, Germany, Hungary, Ireland, Israel, Italy, Lithuania, Mexico, Norway, Paraguay, Peru, Spain, Scotland, Sweden, Switzerland, Uruguay, and Venezuela). In Texas, the birth prevalence of isolated lower limb CLRDs was greater for males than females, while the birth prevalence of isolated upper limb CLRDs was not significantly different between males and females. The reduced rates of limb reduction defects in Harris County warrant further investigation. This study provides an important first step toward etiologic understanding of isolated congenital limb reduction defects.
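
The prevalence odds ratios reported above come from 2x2 tables; a minimal sketch of the calculation with a Wald 95% confidence interval follows. The counts are hypothetical, chosen only to give an odds ratio near the reported Harris County estimate.

import math

# rows: Harris County / rest of Texas; columns: cases / non-cases
a, b = 40, 180_000     # hypothetical cases, non-cases in Harris County
c, d = 210, 420_000    # hypothetical cases, non-cases elsewhere

odds_ratio = (a * d) / (b * c)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_lo = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"POR = {odds_ratio:.2f} (95% CI: {ci_lo:.2f}-{ci_hi:.2f})")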

Relevance: 30.00%

Abstract:

Can the early identification of the species of Staphylococcus responsible for infection by the use of real-time PCR technology influence the approach to the treatment of these infections?

This study was a retrospective cohort study in which two groups of patients were compared. The first group, 'Physician Aware', consisted of patients whose physicians were informed of the specific staphylococcal species and antibiotic sensitivity (using RT-PCR) at the time of notification of the Gram stain. The second group, 'Physician Unaware', consisted of patients whose treating physicians received the same information 24-72 hours later as a result of blood culture and antibiotic sensitivity determination.

The approach to treatment was compared between the 'Physician Aware' and 'Physician Unaware' groups for three different microbiological diagnoses, namely MRSA, MSSA and no-SA (coagulase-negative staphylococci).

For a diagnosis of MRSA, the mean time to initiation of vancomycin therapy was 1.08 hours in the 'Physician Aware' group compared to 5.84 hours in the 'Physician Unaware' group (p = 0.34).

For a diagnosis of MSSA, the mean time to initiation of specific anti-MSSA therapy with nafcillin was 5.18 hours in the 'Physician Aware' group compared to 49.8 hours in the 'Physician Unaware' group (p = 0.007). Also, for the same diagnosis, the mean duration of empiric therapy was 19.68 hours in the 'Physician Aware' group compared to 80.75 hours in the 'Physician Unaware' group (p = 0.003).

For a diagnosis of no-SA (coagulase-negative staphylococci), the mean duration of empiric therapy was 35.65 hours in the 'Physician Aware' group compared to 44.38 hours in the 'Physician Unaware' group (p = 0.07). However, when treatment was considered as a categorical variable, and after exclusion of all cases where anti-MRS therapy was used for unrelated conditions, only 20 of 72 cases in the 'Physician Aware' group received treatment compared to 48 of 106 cases in the 'Physician Unaware' group.

Conclusions. Earlier diagnosis of MRSA may not alter final treatment outcomes. However, earlier identification may lead to the earlier institution of measures to limit the spread of infection. Early diagnosis of MSSA infection does lead to specific antibiotic therapy at an earlier stage of treatment, and the duration of empiric therapy is greatly reduced by early diagnosis. Early diagnosis of coagulase-negative staphylococcal infection leads to a lower rate of unnecessary treatment for these infections, as the organisms are commonly considered contaminants.
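
The categorical comparison in the last result (20 of 72 treated versus 48 of 106) can be tested with a chi-square test on the 2x2 table; the choice of test here is ours, as the abstract does not name one.

from scipy.stats import chi2_contingency

table = [[20, 72 - 20],    # 'Physician Aware': treated, not treated
         [48, 106 - 48]]   # 'Physician Unaware': treated, not treated
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")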

Relevance: 30.00%

Abstract:

Introduction. Despite the ban of lead-containing gasoline and paint, childhood lead poisoning remains a public health issue. Furthermore, a Medicaid-eligible child is 8 times more likely to have an elevated blood lead level (EBLL) than a non-Medicaid child, which is the primary reason for the early-detection lead screening mandate at ages 12 and 24 months in the Medicaid population. Based on field observations, there was evidence suggesting a screening compliance issue. Objective. The purpose of this study was to analyze blood lead screening compliance in previously lead-poisoned Medicaid children and to test for an association between timely lead screening and timely childhood immunizations. The mean months between follow-up tests were also examined for a significant difference between non-compliant and compliant lead-screened children. Methods. Access to the surveillance data on all childhood lead poisoning cases in Bexar County was granted by the San Antonio Metropolitan Health District. A database was constructed and analyzed using descriptive statistics, logistic regression methods and non-parametric tests. Lead screening at 12 months of age was analyzed separately from lead screening at 24 months. The small portion of the population who were related were included in one analysis and removed from a second analysis to check for significance. Gender, ethnicity, age of home, and having a sibling with an EBLL were ruled out as confounders for the association tests, but ethnicity and age of home were adjusted for in the non-parametric tests. Results. There was a strong significant association between lead screening compliance at 12 months and childhood immunization compliance, with or without including related children (p < 0.001). However, there was no significant association between the two variables at the age of 24 months. Furthermore, there was no significant difference in the median of the mean months between follow-up blood tests among the non-compliant and compliant lead-screened population for the 12-month screening group, but there was a significant difference for the 24-month screening group (p < 0.01). Discussion. Descriptive statistics showed that 61% and 56% of the previously lead-poisoned Medicaid population did not receive their mandated 12- and 24-month lead screenings on time, respectively. This suggests that their elevated blood lead levels might have been detected earlier in childhood had screening been performed on time. Furthermore, a child who is compliant with lead screening at 12 months of age is 2.36 times more likely to also receive childhood immunizations on time than a child who was not compliant with the 12-month screening. Even though no statistically significant association was found for the 24-month group, the public health significance of a screening compliance issue is no less important. The Texas Medicaid program needs to enforce lead screening compliance, because it is evident that there has been no monitoring system in place. Further recommendations include an increased focus on parental education and on the importance of taking children for wellness exams on time.
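
The abstract's headline estimate (a 2.36-fold higher likelihood of timely immunization among screening-compliant children) is the kind of odds ratio a logistic regression produces. A minimal sketch with statsmodels follows; the simulated data are a stand-in for the surveillance database, not the study's data.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
screened = rng.integers(0, 2, 300)               # 12-month screening on time?
logits = -0.5 + np.log(2.36) * screened          # build in a true OR of 2.36
immunized = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X = sm.add_constant(screened)
fit = sm.Logit(immunized, X).fit(disp=0)
print(f"estimated OR = {np.exp(fit.params[1]):.2f}")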

Relevance: 30.00%

Abstract:

The relationship between degree of diastolic blood pressure (DBP) reduction and mortality was examined among hypertensives, ages 30-69, in the Hypertension Detection and Follow-up Program (HDFP). The HDFP was a multi-center community-based trial which followed 10,940 hypertensive participants for five years. One-year survival was required for inclusion in this investigation, since the one-year annual visit was the first occasion at which change in blood pressure could be measured on all participants. During the subsequent four years of follow-up of 10,052 participants, 568 deaths occurred. For levels of change in DBP and for categories of variables related to mortality, the crude mortality rate was calculated. Time-dependent life tables were also calculated so as to utilize the available blood pressure data over time. In addition, the Cox life table regression model, extended to take into account both time-constant and time-dependent covariates, was used to examine the relationship between change in blood pressure over time and mortality.

The results of the time-dependent life table and time-dependent Cox regression analyses supported the existence of a quadratic function modeling the relationship between DBP reduction and mortality, even after adjusting for other risk factors. The minimum mortality hazard ratio, based on a particular model, occurred at a DBP reduction of 22.6 mm Hg (standard error = 10.6) in the whole population and 8.5 mm Hg (standard error = 4.6) in the baseline DBP stratum 90-104. Beyond this reduction, there was a small increase in the risk of death. There was no evidence of the quadratic function after fitting the same model using systolic blood pressure. Methodologic issues involved in studying a particular degree of blood pressure reduction were considered. The confidence interval around the change corresponding to the minimum hazard ratio was wide, and the obtained blood pressure level should not be interpreted as a goal for treatment. Blood pressure reduction was attributed not only to pharmacologic therapy, but also to regression to the mean and to other unknown factors unrelated to treatment. Therefore, the surprising results of this study do not provide direct implications for treatment, but strongly suggest replication in other populations.
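
In standard notation, the quadratic relationship the abstract describes corresponds to a Cox model in the DBP change; the following is our reconstruction from the abstract, not the paper's exact specification:

\[ h(t \mid \Delta, z) = h_0(t)\,\exp\!\left(\beta_1 \Delta + \beta_2 \Delta^2 + \gamma^{\top} z\right), \]

where \Delta is the DBP reduction and z collects the other covariates. With \beta_2 > 0, the hazard is minimized at

\[ \Delta^{*} = -\frac{\beta_1}{2\beta_2}, \]

the quantity estimated as 22.6 mm Hg in the whole population and 8.5 mm Hg in the baseline DBP stratum 90-104.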

Relevance: 30.00%

Abstract:

The infant mortality rate (IMR) is considered one of the most important indices of a country's well-being. Countries around the world and health organizations such as the World Health Organization are dedicating their resources, knowledge and energy to reducing infant mortality rates. The well-known Millennium Development Goal 4 (MDG 4), whose aim is to achieve a two-thirds reduction of the under-five mortality rate between 1990 and 2015, is an example of this commitment.

In this study our goal is to model the trends of IMR from the 1950s to the 2010s for selected countries. We would like to know how the IMR is changing over time and how it differs across countries.

IMR data collected over time form a time series, and the repeated observations are not statistically independent. So in modeling the trend of IMR, it is necessary to account for these correlations. We proposed to use the generalized least squares method in a general linear models setting to deal with the variance-covariance structure in our model. To estimate the variance-covariance matrix, we referred to time-series models, especially the autoregressive and moving average models. Furthermore, we compared results from the general linear model with a correlation structure to those from the ordinary least squares method, which ignores the correlation structure, to check how much the estimates change.
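
A minimal sketch of the comparison the abstract describes: fitting the same linear trend by OLS (which ignores serial correlation) and by GLS with an AR(1) error structure. The statsmodels GLSAR class estimates the autoregressive parameter iteratively; the IMR series below is simulated for illustration, not the study's data.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
years = np.arange(1950, 2011)

# Simulated declining IMR with AR(1) errors
e = np.zeros(years.size)
for i in range(1, e.size):
    e[i] = 0.7 * e[i - 1] + rng.normal(0, 2)
imr = 120.0 - 1.5 * (years - 1950) + e

X = sm.add_constant(years - 1950)
ols = sm.OLS(imr, X).fit()
gls = sm.GLSAR(imr, X, rho=1).iterative_fit(maxiter=10)
print(f"OLS slope:   {ols.params[1]:.3f} (se {ols.bse[1]:.3f})")
print(f"GLSAR slope: {gls.params[1]:.3f} (se {gls.bse[1]:.3f})")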

Relevance: 30.00%

Abstract:

This study evaluated a modified home-based model of family preservation services, the long-term community case management model, as operationalized by a private child welfare agency that serves as the last resort for hard-to-serve families with children at severe risk of out-of-home placement. The evaluation used a one-group pretest-posttest design with a modified time-series component to determine whether the intervention produced a change over time in the composite score of each family's Child Well-Being Scales (CWBS). A comparison of the mean CWBS scores of the 208 families, and of subsets of these families, at the pretest and various posttests showed a statistically significant decrease in CWBS scores, indicating reduced risk factors. The longer the duration of services, the greater the statistically significant risk reduction. The results support the conclusion that families who participate in empowerment-oriented community case management, with the option to extend service duration to resolve or ameliorate chronic family problems, experience effective strengthening of family functioning.

Relevance: 30.00%

Abstract:

Tuberculosis is a major cause of death due to infection in humans. The BCG vaccine protects against childhood tuberculosis, although it fails to protect against adult tuberculosis. BCG localizes to immature phagosomes of macrophages and avoids lysosomal fusion, which decreases peptide antigen production. Peptides are essential for macrophage-mediated priming of CD4 and CD8 T cells through the MHC-II and MHC-I pathways, respectively. Furthermore, BCG reduces the expression of MHC-II in macrophages of mice after infection, through Toll-like receptor-1/2 (TLR-1/2) mediated signaling. In my first aim, I hypothesized that BCG-induced reduction of MHC-II levels in macrophages can decrease CD4 T cell function, while activation of other surface Toll-like receptors (TLRs) can enhance CD4 T cell function. An in vitro antigen presentation model was used in which TLR-activated macrophages presented an epitope of Ag85B, a major immunogen of BCG, to CD4 T cells, and T cell-derived IL-2 was quantitated as a measure of antigen presentation. Macrophages with BCG were poor presenters of Ag85B, while TLR-7/9/5/4 and TLR-1/2 activation led to enhanced antigen presentation. Furthermore, TLR-7/9 activation was found to down-regulate the degradation of MHC-II through the ubiquitin ligase MARCH1, and also to stimulate MHC-II expression through activation of the AP-1 and CREB transcription elements via the p38 and ERK1/2 MAP kinases. I conclude from Aim I studies that TLR-7/9 ligands can be used as more effective 'adjuvants' for the BCG vaccine. In Aim II, I evaluated the poor CD8 T cell function in BCG-vaccinated mice, thought to be due to a decreased leak of antigens into the cytosol from immature phagosomes, which reduces MHC-I mediated activation of CD8 T cells. I hypothesized that rapamycin co-treatment could boost CD8 T cell function, since it was known to sort BCG into lysosomes, increasing peptide generation, and it also enhances the longevity of CD8 T cells. Since CD8 T cell function is a dynamic event better measured in vivo, mice were given BCG vaccine with or without rapamycin injections and challenged with virulent Mycobacterium tuberculosis. Organs were analysed for tetramer- or surface marker-stained CD8 T cells using flow cytometry, and for bacterial counts to evaluate BCG-induced protection. Co-administration of rapamycin with BCG significantly increased the numbers of CD8 T cells in mice, which developed into both short-lived effector CD8 T cells (SLECs) and longer-lived memory precursor effector CD8 T cells (MPECs). Increased levels of tetramer-specific CD8 T cells correlated with better protection against tuberculosis in the rapamycin-BCG group compared to BCG-vaccinated mice. When rapamycin-BCG mice were rested and re-challenged with M. tuberculosis, the MPECs underwent stronger recall expansion and protected better against re-infection than in mice vaccinated with BCG alone. Since BCG-induced immunity wanes with time in humans, we made two novel observations in this study: adjuvant activation of the BCG vaccine and rapamycin co-treatment both lead to stronger and longer-lasting vaccine-mediated immunity to tuberculosis.

Relevance: 30.00%

Abstract:

The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU", lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data are represented by the standard one-value-per-variable paradigm and are widely employed in a host of clinical models and tools; they are often represented by a number in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The remaining two classes are unique to time series data elements. The first of these is the raw data elements: multiple values per variable, constituting the measured observations that are typically available to end users when they review time series data, often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood of producing a representation of the time series data elements that is able to distinguish between two or more classes of outcomes.
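
To make the "explicit mathematical or statistical operations" concrete, here is a minimal Python sketch of one such operation: reducing a fixed-duration, fixed-resolution window of observations to a single latent trend feature via a least-squares slope. The window length and the heart-rate example are illustrative assumptions, not design choices taken from the papers.

import numpy as np

def trend_feature(values, minutes_per_sample=1.0):
    """Least-squares slope of a window, in units per minute."""
    t = np.arange(len(values)) * minutes_per_sample
    slope, _intercept = np.polyfit(t, values, 1)
    return slope

# e.g., fifteen 1-minute heart-rate samples before a prediction time
hr_window = [142, 141, 143, 140, 139, 138, 139, 137,
             136, 134, 133, 131, 130, 128, 126]
print(f"HR trend: {trend_feature(hr_window):+.2f} bpm/min")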
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU", provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time series based models are infeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies for each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances (a minimal sketch of these preprocessing steps follows this abstract). The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit", presents the results obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the receiver operating characteristic curve increased from a baseline of 87% to 98% when the trend analysis was included. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy compared to the baseline multivariate model, but diminished classification accuracy compared to adding the trend analysis features alone (i.e., without the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve performance beyond that achieved by exclusion of the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
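
As referenced above, here is a minimal pandas sketch of two of the preprocessing issues the second paper raises: defining a reference time and imputing/reducing irregular observations onto a predefined grid. pandas and the 10-minute window at 1-minute resolution are our assumptions; the papers do not prescribe a particular tool or structure.

import pandas as pd

raw = pd.Series(
    [138.0, 136.0, 131.0, 128.0],
    index=pd.to_datetime(["2024-01-01 10:00", "2024-01-01 10:03",
                          "2024-01-01 10:04", "2024-01-01 10:09"]),
)

reference_time = pd.Timestamp("2024-01-01 10:10")   # e.g., prediction time
window = raw[reference_time - pd.Timedelta("10min"):reference_time]

# Reduce to a fixed 1-minute resolution and fill gaps by interpolation
grid = window.resample("1min").mean().interpolate()
print(grid)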

Relevance: 30.00%

Abstract:

A limiting factor in the accuracy and precision of U-Pb zircon dates is accurate correction for initial disequilibrium in the 238U and 235U decay chains. The longest-lived, and therefore most abundant, intermediate daughter product in the 235U decay chain is 231Pa (T1/2 = 32.71 ka), and the partitioning behavior of Pa in zircon is not well constrained. Here we report high-precision thermal ionization mass spectrometry (TIMS) U-Pb zircon data from two samples from Ocean Drilling Program (ODP) Hole 735B, which show evidence for incorporation of excess 231Pa during zircon crystallization. The most precise analyses from the two samples have consistent Th-corrected 206Pb/238U dates, with weighted means of 11.9325 ± 0.0039 Ma (n = 9) and 11.920 ± 0.011 Ma (n = 4), but distinctly older 207Pb/235U dates that vary from 12.140 ± 0.044 Ma to 12.330 ± 0.048 Ma and from 12.03 ± 0.24 Ma to 12.40 ± 0.27 Ma, respectively. If the excess 207Pb is due to variable initial excess 231Pa, calculated initial (231Pa)/(235U) activity ratios for the two samples range from 5.6 ± 1.0 to 9.6 ± 1.1 and from 3.5 ± 5.2 to 11.4 ± 5.8. The data from the more precisely dated sample yield estimated zircon/melt partition coefficient ratios D(Pa)/D(U) of 2.2-3.8 and 5.6-9.6, assuming a (231Pa)/(235U) of the melt equal to the global average of recently erupted mid-ocean ridge basaltic glasses or to secular equilibrium, respectively. High-precision ID-TIMS analyses from nine additional samples from Hole 735B and nearby Hole 1105A suggest similar partitioning. The lower range of D(Pa)/D(U) is consistent with ion microprobe measurements of 231Pa in zircons from Holocene and Pleistocene rhyolitic eruptions (Schmitt (2007; doi:10.2138/am.2007.2449) and Schmitt (2011; doi:10.1146/annurev-earth-040610-133330)). The data suggest that 231Pa is preferentially incorporated during zircon crystallization over a range of magmatic compositions, and excess initial 231Pa may be more common in zircons than acknowledged. The degree of initial disequilibrium in the 235U decay chain suggested by the data from this study, and by other recent high-precision datasets, leads to resolvable discordance in high-precision dates of Cenozoic to Mesozoic zircons. Minor discordance in zircons of this age may therefore reflect initial excess 231Pa and does not require either inheritance or Pb loss.
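
The size of the effect follows from the standard initial-disequilibrium correction to the 207Pb/235U age equation, written here in our notation (a textbook formulation, not quoted from the paper):

\[ \frac{^{207}\mathrm{Pb}^{*}}{^{235}\mathrm{U}} = \left(e^{\lambda_{235} t} - 1\right) + \frac{\lambda_{235}}{\lambda_{231}}\left(f_{\mathrm{Pa}} - 1\right), \qquad f_{\mathrm{Pa}} = \left[\frac{(^{231}\mathrm{Pa})}{(^{235}\mathrm{U})}\right]_{0}, \]

where f_Pa is the zircon's initial (231Pa)/(235U) activity ratio. At secular equilibrium (f_Pa = 1) the correction vanishes; f_Pa > 1, as inferred for these samples, adds unsupported 207Pb* and makes the apparent 207Pb/235U date older than the crystallization age.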

Relevance: 30.00%

Abstract:

Recent geochemical models invoke ocean alkalinity changes, particularly in the surface Southern Ocean, to explain glacial-age pCO2 reduction. In such models, alkalinity increases in glacial periods are driven by reductions in North Atlantic Deep Water (NADW) supply, which lead to increases in deep-water nutrients and dissolution of carbonate sediments, and to increased alkalinity of Circumpolar Deep Water upwelling in the surface Southern Ocean. We use cores from the Southeast Indian Ridge and from the deep Cape Basin in the South Atlantic to show that carbonate dissolution was enhanced during glacial stages in areas now bathed by Circumpolar Deep Water. This suggests that deep Southern Ocean carbonate ion concentrations were lower in glacial stages than in interglacials, rather than higher as suggested by the polar alkalinity model [Broecker and Peng, 1989, doi:10.1029/GB001i001p00015]. Our observations show that changes in Southern Ocean CaCO3 preservation are coherent with changes in the relative flux of NADW, suggesting that Southern Ocean carbonate chemistry is closely linked to changes in deep-water circulation. The pattern of enhanced dissolution in glacials is consistent with a reduction in the supply of nutrient-depleted water (NADW) to the Southern Ocean and with an increase of nutrients in deep water masses. Carbonate mass accumulation rates on the Southeast Indian Ridge (3200-3800 m) and in relatively shallow cores (<3000 m) from the Kerguelen Plateau and the South Pacific were significantly reduced during glacial stages, by about 50%. The reduced carbonate mass accumulation rates and enhanced dissolution during glacials may be partly due to decreases in CaCO3:Corg flux ratios, acting as another mechanism that would raise the alkalinity of Southern Ocean surface waters. The polar alkalinity model assumes that the ratio of organic carbon to carbonate production is constant; even if overall productivity in the Southern Ocean were held constant, a decrease in the CaCO3:Corg ratio would result in increased alkalinity and reduced pCO2 in Southern Ocean surface waters during glacials. This ecologically driven surface alkalinity change may enhance deepwater-mediated changes in alkalinity and amplify rapid changes in pCO2.