832 results for partial least square modeling


Relevance: 30.00%

Abstract:

Cefepime is a broad-spectrum cephalosporin indicated for in-hospital treatment of severe infections. Acute neurotoxicity, an increasingly recognized adverse effect of this drug in overdose, predominantly affects patients with reduced renal function. Although dialytic approaches have been advocated to treat this condition, their role in this indication remains unclear. We report the case of an 88-year-old female patient with impaired renal function who developed life-threatening neurologic symptoms during cefepime therapy. She was treated with two intermittent 3-hour high-flux, high-efficiency hemodialysis sessions. Serial pre-, post-, and peridialytic (pre- and postfilter) serum cefepime concentrations were measured. Pharmacokinetic modeling showed that this dialytic strategy allowed serum cefepime concentrations to return to the estimated nontoxic range 15 hours earlier than would have been the case without intervention. The patient made a full clinical recovery over the next 48 hours. We conclude that at least one session of intermittent hemodialysis may shorten the time to return to the nontoxic range in severe, clinically patent intoxication, and that it should be considered early in the clinical course, pending chemical confirmation, even in frail elderly patients. Careful dosage adjustment and a high index of suspicion are essential in this population.
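The pharmacokinetic argument can be illustrated with a minimal one-compartment sketch in Python: the drug concentration decays exponentially at a rate set by clearance over volume of distribution, and dialysis sessions temporarily add clearance. Every number below (volume of distribution, clearances, starting concentration, toxicity threshold, session timing) is an illustrative assumption, not a value from the case report.

```python
import numpy as np

# Illustrative one-compartment model: dC/dt = -(CL/V) * C, with extra
# dialytic clearance added during hemodialysis sessions. All numbers are
# hypothetical and chosen only to show the shape of the calculation.
V = 20.0            # volume of distribution (L), assumed
cl_renal = 0.5      # residual renal/nonrenal clearance (L/h), assumed
cl_dialysis = 8.0   # additional clearance during dialysis (L/h), assumed
c0 = 80.0           # starting concentration (mg/L), assumed
threshold = 20.0    # assumed "nontoxic" threshold (mg/L)
sessions = [(2.0, 5.0), (26.0, 29.0)]  # two 3-hour sessions (hours), assumed

def hours_to_nontoxic(with_dialysis, dt=0.01, t_max=120.0):
    """Return the first time the concentration drops below the threshold."""
    c, t = c0, 0.0
    while t < t_max:
        cl = cl_renal
        if with_dialysis and any(a <= t < b for a, b in sessions):
            cl += cl_dialysis
        c *= np.exp(-(cl / V) * dt)   # exact exponential decay over a small step
        t += dt
        if c < threshold:
            return t
    return np.inf

print("hours to nontoxic range, no dialysis  :", round(hours_to_nontoxic(False), 1))
print("hours to nontoxic range, with dialysis:", round(hours_to_nontoxic(True), 1))
```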

Relevance: 30.00%

Abstract:

OBJECTIVE To determine neurologic outcome and factors influencing outcome after thoracolumbar partial lateral corpectomy (PLC) in dogs with intervertebral disc disease (IVDD) causing ventral spinal cord compression. STUDY DESIGN Retrospective case series. ANIMALS Dogs with IVDD (n = 72; 87 PLCs). METHODS Dogs with IVDD between T9 and L5 were included if treated by at least 1 PLC. Exclusion criteria were previous spinal surgery and combination of PLC with another surgical procedure. Neurologic outcome was assessed by: (1) modified Frankel score (MFS) based on neurologic examinations at 4 time points (before surgery, immediately after PLC, at discharge, and 4 weeks after PLC); and (2) owner questionnaire. The association of the following factors with neurologic outcome was analyzed: age, body weight, duration of current neurologic dysfunction (acute, chronic), IVDD localization, breed (chondrodystrophic, nonchondrodystrophic), number of PLCs, degree of presurgical spinal cord compression and postsurgical decompression, slot depth, and presurgical MFS. Presurgical spinal cord compression was determined by CT myelography (71 dogs) or MRI (1 dog), whereas postsurgical decompression and slot depth were determined by CT myelography (69 dogs). RESULTS MFS was improved in 18.7%, 31.7%, and 64.2% of dogs at the 3 postsurgical assessments, whereas it was unchanged in 62.6%, 52.8%, and 32.0% at the corresponding time points. Based on the owner questionnaire, 91.4% of dogs were ambulatory 6 months postsurgically, with 74.5% having a normal gait. Most improvement in neurologic function developed within 6 months after surgery. Presurgical MFS was the only variable significantly associated with several neurologic outcome measurements (P < .01). CONCLUSIONS PLC is an option for decompression in ventrally compressing thoracolumbar IVDD. Prognosis is associated with presurgical neurologic condition.

Relevance: 30.00%

Abstract:

OBJECTIVE The aim was to develop a delineation guideline for target definition for APBI or boost irradiation by consensus of the Breast Working Group of GEC-ESTRO. PROPOSED RECOMMENDATIONS Appropriate delineation of the CTV (PTV) with low inter- and intra-observer variability in clinical practice is complex and requires several steps: (1) Detailed knowledge of the primary surgical procedure, of all details of the pathology, and of the preoperative imaging. (2) Definition of the tumour localization inside the breast before breast conserving surgery, and translation of this information into the postoperative CT imaging data set. (3) Calculation of the size of the total safety margins; the size should be at least 2 cm. (4) Definition of the target. (5) Delineation of the target according to defined rules. CONCLUSION Providing guidelines based on the consensus of a group of experts should make it possible to achieve a reproducible and consistent definition of the CTV (PTV) for Accelerated Partial Breast Irradiation (APBI) or boost irradiation after breast conserving closed cavity surgery, and helps to define it in selected cases of oncoplastic surgery.

Relevance: 30.00%

Abstract:

The objective of this study was to assess implant therapy after a staged guided bone regeneration procedure in the anterior maxilla by lateralization of the nasopalatine nerve and vessel bundle. The primary outcome variable was neurosensory function following the augmentative procedures and implant placement, assessed using a standardized questionnaire and clinical examination. This retrospective study included patients with a bone defect in the anterior maxilla in need of horizontal and/or vertical ridge augmentation prior to dental implant placement. The surgical sites were allowed to heal for at least 6 months before placement of dental implants. All patients received fixed implant-supported restorations and entered into a tightly scheduled maintenance program. In addition to the maintenance program, patients were recalled for a clinical examination and a questionnaire to assess any changes in the neurosensory function of the nasopalatine nerve after at least 6 months of function. Twenty patients were included in the study from February 2001 to December 2010. They received a total of 51 implants after augmentation of the alveolar crest and lateralization of the nasopalatine nerve. The follow-up examination for the questionnaire and neurosensory assessment was scheduled after a mean period of 4.18 years of function. None of the patients examined reported pain, diminished or altered sensation, or a "foreign body" feeling in the area of surgery. Overall, 6 of 20 patients (30%) showed palatal sensibility alterations of the soft tissues in the region of the maxillary canines and incisors, corresponding to a risk of neurosensory change of 0.45 mucosal tooth regions per patient after ridge augmentation with lateralization of the nasopalatine nerve. Regeneration of bone defects in the anterior maxilla by horizontal and/or vertical ridge augmentation and lateralization of the nasopalatine nerve prior to dental implant placement is a predictable surgical technique. Where clinically measurable impairments of neurosensory function were present, patients either did not report them or were not bothered by them.

Relevance: 30.00%

Abstract:

Animal guts have been idealized as axially uniform plug-flow reactors (PFRs) without significant axial mixing, or as combinations in series of such PFRs with other reactor types. To relax these often unrealistic assumptions, and to provide a means for relaxing others, I approximated an animal gut as a series of n continuously stirred tank reactors (CSTRs) and examined its performance as a function of n. For the digestion problem of hydrolysis and absorption in series, I suggest as a first approximation that a tubular gut of length L and diameter D comprises n = L/D tanks in series. For n greater than or equal to 10, there is little difference between the performance of the n-CSTR model and an ideal PFR in the coupled tasks of hydrolysis and absorption. Relatively thinner and longer guts, characteristic of animals feeding on poorer forage, prove more efficient in both conversion and absorption by restricting axial mixing; for the same total volume, they also give a higher rate of absorption. I then asked how a fixed number of absorptive sites should be distributed among the n compartments. Absorption rate generally is maximized when absorbers are concentrated in the hindmost few compartments, but high food quality or suboptimal ingestion rates decrease the advantage of highly concentrated absorbers. This modeling approach connects gut function and structure at multiple scales and can be extended to include other nonideal reactor behaviors observed in real animals.
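As a rough illustration of the tanks-in-series idea, the sketch below compares the steady-state conversion of a substrate undergoing first-order hydrolysis in n equal CSTRs with that of an ideal PFR of the same total residence time. It uses the standard textbook expressions for a single first-order reaction rather than the study's coupled hydrolysis-absorption model, and the rate constant and residence time are arbitrary assumptions.

```python
import numpy as np

# First-order reaction (rate constant k) with total mean residence time tau.
# For n equal CSTRs in series: X = 1 - 1 / (1 + k*tau/n)**n
# For an ideal PFR:            X = 1 - exp(-k*tau)
# k and tau are arbitrary illustrative values.
k, tau = 1.0, 3.0

def conversion_ncstr(n):
    return 1.0 - 1.0 / (1.0 + k * tau / n) ** n

pfr = 1.0 - np.exp(-k * tau)
for n in (1, 2, 5, 10, 20):   # n ~ L/D in the gut analogy
    print(f"n = {n:2d}: X = {conversion_ncstr(n):.3f}  (PFR: {pfr:.3f})")
```

For n of about 10 and above, the tanks-in-series conversion is already close to the PFR value, which is the behavior the abstract describes.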

Relevance: 30.00%

Abstract:

Kriging is a widely employed method for interpolating and estimating elevations from digital elevation data. Its place of prominence is due to its elegant theoretical foundation and its convenient practical implementation. From an interpolation point of view, kriging is equivalent to a thin-plate spline and is one species among the many in the genus of weighted inverse distance methods, albeit with attractive properties. However, from a statistical point of view, kriging is a best linear unbiased estimator and, consequently, has a place of distinction among all spatial estimators because any other linear estimator that performs as well as kriging (in the least squares sense) must be equivalent to kriging, assuming that the parameters of the semivariogram are known. Therefore, kriging is often held to be the gold standard of digital terrain model elevation estimation. However, I prove that, when used with local support, kriging creates discontinuous digital terrain models, which is to say, surfaces with “rips” and “tears” throughout them. This result is general; it is true for ordinary kriging, kriging with a trend, and other forms. A U.S. Geological Survey (USGS) digital elevation model was analyzed to characterize the distribution of the discontinuities. I show that the magnitude of the discontinuity does not depend on surface gradient but is strongly dependent on the size of the kriging neighborhood.
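The mechanism behind these discontinuities can be seen in a toy one-dimensional ordinary-kriging interpolator with local support: as the prediction point slides along the profile, the set of k nearest data points changes abruptly, and the predicted surface can jump wherever it does. The sketch below is a generic implementation with an assumed spherical semivariogram and synthetic data; it is not the USGS analysis from the abstract.

```python
import numpy as np

def gamma(h, sill=1.0, rnge=10.0):
    """Spherical semivariogram (no nugget); parameters are assumed, not fitted."""
    h = np.abs(h)
    g = np.where(h >= rnge, sill, sill * (1.5 * h / rnge - 0.5 * (h / rnge) ** 3))
    return np.where(h == 0, 0.0, g)

def krige_local(x0, xs, zs, k=8):
    """Ordinary kriging at x0 using only the k nearest data points."""
    idx = np.argsort(np.abs(xs - x0))[:k]
    xn, zn = xs[idx], zs[idx]
    n = len(xn)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(xn[:, None] - xn[None, :])
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(xn - x0)
    w = np.linalg.solve(A, b)          # kriging weights plus Lagrange multiplier
    return w[:n] @ zn

rng = np.random.default_rng(0)
xs = np.sort(rng.uniform(0, 100, 60))
zs = np.cumsum(rng.normal(size=60))    # synthetic elevation profile
grid = np.linspace(0, 100, 2001)
zhat = np.array([krige_local(g, xs, zs) for g in grid])
steps = np.abs(np.diff(zhat))
print("largest jump between adjacent grid nodes:", round(steps.max(), 3))
```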

Relevance: 30.00%

Abstract:

Anticancer drugs typically are administered in the clinic in the form of mixtures, sometimes called combinations. Only in rare cases, however, are mixtures approved as drugs. Rather, research on mixtures tends to occur after single drugs have been approved. The goal of this research project was to develop modeling approaches that would encourage rational preclinical mixture design. To this end, a series of models were developed. First, several QSAR classification models were constructed to predict the cytotoxicity, oral clearance, and acute systemic toxicity of drugs. The QSAR models were applied to a set of over 115,000 natural compounds in order to identify promising ones for testing in mixtures. Second, an improved method was developed to assess synergistic, antagonistic, and additive effects between drugs in a mixture. This method, dubbed the MixLow method, is similar to the Median-Effect method, the de facto standard for assessing drug interactions. The primary difference between the two is that the MixLow method uses a nonlinear mixed-effects model to estimate parameters of concentration-effect curves, rather than an ordinary least squares procedure. Parameter estimators produced by the MixLow method were more precise than those produced by the Median-Effect method, and coverage of Loewe index confidence intervals was superior. Third, a model was developed to predict drug interactions based on scores obtained from virtual docking experiments. This represents a novel approach for modeling drug mixtures and was more useful for the data modeled here than competing approaches. The model was applied to cytotoxicity data for 45 mixtures, each composed of up to 10 selected drugs. One drug, doxorubicin, was a standard chemotherapy agent; the others were well-known natural compounds, including curcumin, EGCG, quercetin, and rhein. Predictions of synergism/antagonism were made for all possible fixed-ratio mixtures, cytotoxicities of the 10 best-scoring mixtures were tested, and drug interactions were assessed. Predicted and observed responses were highly correlated (r² = 0.83). Results suggested that some mixtures allowed up to an 11-fold reduction of doxorubicin concentrations without sacrificing efficacy. Taken together, the models developed in this project present a general approach to rational design of mixtures during preclinical drug development.
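For context on the baseline that MixLow is compared against, the sketch below fits the classical Median-Effect (linearized Hill) model to single-drug concentration-effect data by ordinary least squares and then computes a Loewe-additivity combination index for a fixed-ratio mixture. It illustrates only the Median-Effect/OLS approach; the MixLow nonlinear mixed-effects estimation is not reproduced here, and all data are synthetic.

```python
import numpy as np

def fit_median_effect(dose, fa):
    """OLS fit of log(fa/fu) = m*log(D) - m*log(Dm); returns (m, Dm)."""
    y = np.log(fa / (1.0 - fa))
    x = np.log(dose)
    m, b = np.polyfit(x, y, 1)
    Dm = np.exp(-b / m)          # median-effect dose (IC50)
    return m, Dm

def dose_for_effect(fa, m, Dm):
    """Dose of a single drug needed to reach effect level fa."""
    return Dm * (fa / (1.0 - fa)) ** (1.0 / m)

# Synthetic single-drug concentration-effect data (illustrative only).
doses = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
fa_1 = np.array([0.10, 0.22, 0.45, 0.70, 0.88])   # drug 1
fa_2 = np.array([0.05, 0.15, 0.35, 0.62, 0.84])   # drug 2
m1, Dm1 = fit_median_effect(doses, fa_1)
m2, Dm2 = fit_median_effect(doses, fa_2)

# Suppose a 1:1 mixture at doses (0.8, 0.8) produced fa = 0.6 (synthetic).
d1, d2, fa_mix = 0.8, 0.8, 0.6
ci = d1 / dose_for_effect(fa_mix, m1, Dm1) + d2 / dose_for_effect(fa_mix, m2, Dm2)
print(f"Combination index at fa = {fa_mix}: {ci:.2f} "
      "(<1 synergy, =1 additivity, >1 antagonism under Loewe additivity)")
```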

Relevance: 30.00%

Abstract:

In population studies, most current methods focus on identifying one outcome-related SNP at a time by testing for differences in genotype frequencies between disease and healthy groups or among different population groups. However, testing a great number of SNPs simultaneously raises a multiple-testing problem and will give false-positive results. Although this problem can be dealt with effectively through several approaches such as Bonferroni correction, permutation testing, and false discovery rates, patterns of joint effects of several genes, each with a weak effect, may not be detectable. With the availability of high-throughput genotyping technology, searching for multiple scattered SNPs over the whole genome and modeling their joint effect on the target variable has become possible. Exhaustive search of all SNP subsets is computationally infeasible for millions of SNPs in a genome-wide study. Several effective feature selection methods combined with classification functions have been proposed to search for an optimal SNP subset in large data sets where the number of feature SNPs far exceeds the number of observations.

In this study, we took two steps to achieve this goal. First, we selected 1000 SNPs through an effective filter method and then performed feature selection wrapped around a classifier to identify an optimal SNP subset for predicting disease. We also developed a novel classification method, the sequential information bottleneck (sIB) method, wrapped inside different search algorithms to identify an optimal subset of SNPs for classifying the outcome variable. This new method was compared with classical linear discriminant analysis (LDA) in terms of classification performance. Finally, we performed a chi-square test to examine the relationship between each SNP and disease from another point of view.

In general, our results show that filtering features using the harmonic mean of sensitivity and specificity (HMSS) through LDA is better than using LDA training accuracy or mutual information in our study. Our results also demonstrate that exhaustive search of small subsets (one SNP, two SNPs, or three SNPs drawn from the best 100 composite 2-SNP combinations) can find an optimal subset, and that further inclusion of more SNPs through a heuristic algorithm does not always increase the performance of SNP subsets. Although sequential forward floating selection can be applied to prevent the nesting effect of forward selection, it does not always outperform the latter, owing to overfitting from observing more complex subset states.

Our results also indicate that HMSS, as a criterion for evaluating the classification ability of a function, can be used with imbalanced data without modifying the original dataset, unlike classification accuracy. Our four studies suggest that the sequential information bottleneck (sIB), a new unsupervised technique, can be adopted to predict the outcome, and its ability to detect the target status is superior to traditional LDA in this study.

From our results, the best test probability-HMSS for predicting CVD, stroke, CAD, and psoriasis through sIB is 0.59406, 0.641815, 0.645315, and 0.678658, respectively. In terms of group prediction accuracy, the highest test accuracy of sIB for diagnosing a normal status among controls can reach 0.708999, 0.863216, 0.639918, and 0.850275, respectively, in the four studies if the test accuracy among cases is required to be not less than 0.4. On the other hand, the highest test accuracy of sIB for diagnosing disease among cases can reach 0.748644, 0.789916, 0.705701, and 0.749436, respectively, in the four studies if the test accuracy among controls is required to be at least 0.4.

A further genome-wide association study using a chi-square test shows that no significant SNPs were detected at the cut-off level 9.09451E-08 in the Framingham Heart Study of CVD. In the WTCCC study, only two significant SNPs associated with CAD were detected. In the genome-wide study of psoriasis, most of the top 20 SNP markers with impressive classification accuracy were also significantly associated with the disease by chi-square test at the cut-off value 1.11E-07.

Although our classification methods can achieve high accuracy in this study, complete descriptions of those classification results (95% confidence intervals or statistical tests of differences) require more cost-effective methods or an efficient computing system, neither of which could be accomplished in our genome-wide study. We should also note that the purpose of this study is to identify subsets of SNPs with high prediction ability; SNPs with good discriminant power are not necessarily causal markers for the disease.
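The HMSS criterion referred to above is simply the harmonic mean of sensitivity and specificity, which, unlike overall accuracy, gives no credit to a classifier that ignores the minority class. A minimal sketch (not the dissertation's code; the example labels are hypothetical) is:

```python
import numpy as np

def hmss(y_true, y_pred):
    """Harmonic mean of sensitivity and specificity for binary labels (0/1)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return 2 * sens * spec / (sens + spec) if (sens + spec) else 0.0

# Imbalanced toy example: 90 controls, 10 cases; a classifier that calls
# everything "control" has 90% accuracy but an HMSS of 0.
y_true = np.array([0] * 90 + [1] * 10)
y_all_control = np.zeros(100, dtype=int)
print("HMSS of the trivial classifier:", hmss(y_true, y_all_control))
```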

Relevance: 30.00%

Abstract:

The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU," lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is in defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one value per variable paradigm and is widely employed in a host of clinical models and tools. These are often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to the time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable, and constitute the measured observations that are typically available to end users when they review time series data. These are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood that a representation of the time series data elements is produced that is able to distinguish between two or more classes of outcomes.
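One concrete example of an "explicit mathematical or statistical operation that produces a time series analysis result" is fitting a least-squares slope over a fixed window of observations and using that slope as a latent candidate feature. The sketch below is a generic illustration with invented vital-sign values; the variable, window duration, and resolution are assumptions, not the feature set used in the manuscripts.

```python
import numpy as np

def trend_slope(times_min, values):
    """Least-squares slope of the values over the window (units per minute)."""
    slope, _intercept = np.polyfit(np.asarray(times_min, float),
                                   np.asarray(values, float), 1)
    return slope

# Hypothetical heart-rate samples over a 30-minute window, one every 5 minutes.
times = np.arange(0, 31, 5)                               # minutes
heart_rate = [128, 126, 125, 121, 118, 112, 104]          # bpm
print("HR trend (bpm/min):", round(trend_slope(times, heart_rate), 2))
# The single slope value, rather than the seven raw readings, would enter the
# predictive model as a latent candidate feature characterizing deterioration.
```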
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU," provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time series-based models are infeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit," presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% by including the trend analysis. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy as compared to the baseline multivariate model, but diminished classification accuracy as compared to when just the trend analysis features were added (i.e., without adding the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that which was achieved by exclusion of the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
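A minimal sketch of the kind of preprocessing described in the second manuscript (aligning observations to a reference time, then imputing and reducing an irregular series to a predefined fixed-resolution structure) might look like the following. The pandas-based binning, the 5-minute resolution, and the carry-forward imputation are all assumptions made for illustration, not the authors' actual pipeline.

```python
import pandas as pd

# Hypothetical irregular observations, timestamped in minutes relative to a
# chosen reference time (here, minutes before the prediction point).
obs = pd.DataFrame(
    {"minutes": [-28, -21, -13, -6, -1], "sbp": [96, 94, 90, 85, 78]}
)

# Reduce to a predefined structure: one value per 5-minute bin over 30 minutes,
# imputing empty bins by carrying the last available value forward.
bins = pd.interval_range(start=-30, end=0, freq=5)
obs["bin"] = pd.cut(obs["minutes"], bins=bins)
reduced = obs.groupby("bin", observed=False)["sbp"].mean().ffill()
print(reduced)
```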

Relevance: 30.00%

Abstract:

The purpose of this research is to develop a new statistical method to determine the minimum set of rows (R) in an R x C contingency table of discrete data that explains the dependence of observations. The statistical power of the method will be determined empirically by computer simulation to judge its efficiency relative to presently existing methods. The method will be applied to data on DNA fragment length variation at six VNTR loci in over 72 populations from five major human racial groups (total sample size over 15,000 individuals, with each sample having at least 50 individuals). DNA fragment lengths grouped in bins will form the basis of studying inter-population DNA variation within the racial groups. If such variation is significant, the method will provide a rigorous re-binning procedure for forensic computation of DNA profile frequencies that takes into account intra-racial DNA variation among populations.
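For context, the standard test of dependence in an R x C contingency table is Pearson's chi-square test of independence; the proposed method would, in effect, search for the smallest set of rows that accounts for a significant result. The sketch below runs the standard test on a small hypothetical table of binned fragment-length counts (the counts are invented for illustration).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical R x C table: rows = populations, columns = fragment-length bins.
table = np.array([
    [30, 45, 25, 10],
    [28, 47, 22, 13],
    [12, 30, 40, 28],   # a row with a visibly different bin profile
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
# The proposed method would ask: what is the minimum subset of rows
# (populations) whose removal renders the remaining table homogeneous?
```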

Relevance: 30.00%

Abstract:

It is well known that ocean acidification can have profound impacts on marine organisms. However, we know little about the direct and indirect effects of ocean acidification and also how these effects interact with other features of environmental change such as warming and declining consumer pressure. In this study, we tested whether the presence of consumers (invertebrate mesograzers) influenced the interactive effects of ocean acidification and warming on benthic microalgae in a seagrass community mesocosm experiment. Net effects of acidification and warming on benthic microalgal biomass and production, as assessed by analysis of variance, were relatively weak regardless of grazer presence. However, partitioning these net effects into direct and indirect effects using structural equation modeling revealed several strong relationships. In the absence of grazers, benthic microalgae were negatively and indirectly affected by sediment-associated microalgal grazers and macroalgal shading, but directly and positively affected by acidification and warming. Combining indirect and direct effects yielded no or weak net effects. In the presence of grazers, almost all direct and indirect climate effects were nonsignificant. Our analyses highlight that (i) indirect effects of climate change may be at least as strong as direct effects, (ii) grazers are crucial in mediating these effects, and (iii) effects of ocean acidification may be apparent only through indirect effects and in combination with other variables (e.g., warming). These findings highlight the importance of experimental designs and statistical analyses that allow us to separate and quantify the direct and indirect effects of multiple climate variables on natural communities.
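The core of the structural-equation decomposition described above is that an indirect effect is the product of the path coefficients along a mediated path, and the total effect is the direct path plus the sum of the indirect paths. A stripped-down, single-mediator sketch with simulated data (none of the variables or coefficients come from the study) is:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulated system: driver -> mediator -> response, plus a direct path.
driver = rng.normal(size=n)                        # e.g., warming
mediator = 0.6 * driver + rng.normal(size=n)       # e.g., macroalgal shading
response = 0.3 * driver - 0.5 * mediator + rng.normal(size=n)

# Path a: mediator regressed on driver (with intercept).
Xa = np.column_stack([np.ones(n), driver])
a = np.linalg.lstsq(Xa, mediator, rcond=None)[0][1]

# Direct path and path b: response regressed on driver and mediator jointly.
Xb = np.column_stack([np.ones(n), driver, mediator])
beta = np.linalg.lstsq(Xb, response, rcond=None)[0]
direct, b = beta[1], beta[2]

indirect = a * b
print(f"direct = {direct:.2f}, indirect = {indirect:.2f}, total = {direct + indirect:.2f}")
# A weak net (total) effect can hide strong direct and indirect effects of
# opposite sign, which is the pattern described in the abstract.
```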

Relevance: 30.00%

Abstract:

Seventeen basalts from Ocean Drilling Program (ODP) Leg 183 to the Kerguelen Plateau (KP) were analyzed for the platinum-group elements (PGEs: Ir, Ru, Rh, Pt, and Pd), and 15 were analyzed for trace elements. Relative concentrations of the PGEs ranged from ~0.1 (Ir, Ru) to ~5 (Pt) times primitive mantle. These relatively high PGE abundances and fractionated patterns are not accounted for by the presence of sulfide minerals; there are only trace sulfides present in thin section. Sulfur saturation models applied to the KP basalts suggest that the parental magmas may have never reached sulfide saturation, despite large degrees of partial melting (~30%) and fractional crystallization (~45%). First-order approximations of the fractionation required to produce the KP basalts from an ~30% partial melt of a spinel peridotite were determined using the PELE program. The model was adapted to better fit the physical and chemical observations from the KP basalts, and requires an initial crystal fractionation stage of at least 30% olivine plus Cr-spinel (49:1), followed by magma replenishment and fractional crystallization (RFC) that included clinopyroxene, plagioclase, and titanomagnetite (15:9:1). The low Pd values ([Pd/Pt]_pm < 1.7) for these samples are not predicted by currently available Kd values. These Pd values are lowest in samples with relatively higher degrees of alteration as indicated by petrographic observations. Positive anomalies are a function of the behavior of the PGEs; they can be reproduced by Cr-spinel and titanomagnetite crystallization, followed by titanomagnetite resorption during the final stages of crystallization. Our modeling shows that it is difficult to reproduce the PGE abundances by either depleted upper or even primitive mantle sources. Crustal contamination, while indicated at certain sites by the isotopic compositions of the basalts, appears to have had a minimal effect on the PGEs. The PGE abundances measured in the Kerguelen Plateau basalts are best modeled by melting a primitive mantle source to which was added up to 1% of outer core material, followed by fractional crystallization of the melt produced. This reproduces both the abundances and patterns of the PGEs in the Kerguelen Plateau basalts. An alternative model for outer core PGE abundances requires only 0.3% of outer core material to be mixed into the primitive mantle source. While our results are clearly model dependent, they indicate that an outer core component may be present in the Kerguelen plume source.
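The style of modeling described (a large-degree batch partial melt of a mixed source, followed by fractional crystallization) can be written down with the standard trace-element mass-balance equations: two-component mixing, batch melting C_l = C_0 / [D + F(1 - D)], and Rayleigh fractional crystallization C = C_0 * f^(D - 1). The sketch below chains these together in Python; all inputs (bulk partition coefficients, source and outer-core concentrations, melt and crystallization fractions) are placeholders for illustration, not the study's PELE-based values.

```python
# Standard trace-element mass-balance equations, with placeholder inputs.
def batch_melt(c0, D, F):
    """Concentration in a batch partial melt (melt fraction F, bulk D)."""
    return c0 / (D + F * (1.0 - D))

def rayleigh_fc(c_liq, D, f_remaining):
    """Residual-liquid concentration after Rayleigh fractional crystallization."""
    return c_liq * f_remaining ** (D - 1.0)

def mix(c_mantle, c_core, x_core):
    """Two-component mixing of primitive mantle with a small core fraction."""
    return (1.0 - x_core) * c_mantle + x_core * c_core

# Placeholder values for a compatible PGE (Ir-like behavior), all assumed.
c_pm, c_core, x_core = 3.2e-3, 3.0, 0.01   # concentrations in ppm; core fraction
D_melt, F = 5.0, 0.30                      # bulk D during melting; melt fraction
D_fc, f_left = 3.0, 0.55                   # bulk D during crystallization; liquid left

c_source = mix(c_pm, c_core, x_core)
c_primary = batch_melt(c_source, D_melt, F)
c_evolved = rayleigh_fc(c_primary, D_fc, f_left)
print(f"source {c_source:.4f} -> primary melt {c_primary:.4f} -> evolved liquid {c_evolved:.4f} ppm")
```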