968 results for: Fatigue. Composites. Modular Network. S-N Curves Probability. Weibull Distribution


Relevance: 30.00%

Abstract:

PURPOSE: To assess the outcomes and patterns of failure in solitary plasmacytoma (SP). METHODS AND MATERIALS: The data from 258 patients with bone (n = 206) or extramedullary (n = 52) SP without evidence of multiple myeloma (MM) were collected. A histopathologic diagnosis was obtained for all patients. Most (n = 214) of the patients received radiotherapy (RT) alone; 34 received chemotherapy and RT, and 8 underwent surgery alone. The median radiation dose was 40 Gy. The median follow-up was 56 months (range 7-245). RESULTS: The median time to MM development was 21 months (range 2-135), with a 5-year probability of 45%. The 5-year overall survival, disease-free survival, and local control rates were 74%, 50%, and 86%, respectively. On multivariate analyses, the favorable factors were younger age and tumor size <4 cm for survival; younger age, extramedullary localization, and RT for disease-free survival; and small tumor size and RT for local control. Bone localization was the only predictor of MM development. No dose-response relationship was found for doses >30 Gy, even for larger tumors. CONCLUSION: Progression to MM remains the main problem. Patients with extramedullary SP had the best outcomes, especially when treated with moderate-dose RT. Chemotherapy and/or novel therapies should be investigated for bone or bulky extramedullary SP.

Relevance: 30.00%

Abstract:

Mendelian models can predict who carries an inherited deleterious mutation of known disease genes based on family history. For example, the BRCAPRO model is commonly used to identify families who carry mutations of BRCA1 and BRCA2, based on familial breast and ovarian cancers. These models incorporate the age of diagnosis of diseases in relatives and current age or age of death. We develop a rigorous foundation for handling multiple diseases with censoring. We prove that any disease unrelated to mutations can be excluded from the model, unless it is sufficiently common and dependent on a mutation-related disease time. Furthermore, if a family member has a disease with higher probability density among mutation carriers, but the model does not account for it, then the carrier probability is deflated. However, even if a family only has diseases the model accounts for, if the model excludes a mutation-related disease, then the carrier probability will be inflated. In light of these results, we extend BRCAPRO to account for surviving all non-breast/ovary cancers as a single outcome. The extension also enables BRCAPRO to extract more useful information from male relatives. Using 1500 families from the Cancer Genetics Network, accounting for surviving other cancers improves BRCAPRO's concordance index from 0.758 to 0.762 (p = 0.046), improves its positive predictive value from 35% to 39% (p < 10^-6) without impacting its negative predictive value, and improves its overall calibration, although calibration slightly worsens for those with carrier probability < 10%. Copyright © 2000 John Wiley & Sons, Ltd.
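To make the Mendelian-model idea concrete, here is a minimal Bayes-rule sketch in Python with invented prior and penetrance numbers (illustrative assumptions only, not BRCAPRO's actual parameter tables):

# Minimal Bayes sketch (illustrative numbers, not BRCAPRO's penetrance tables):
# how a Mendelian model turns an observed family event into a carrier probability.
prior = 0.01            # assumed prior probability of carrying a BRCA1/2 mutation
p_bc_carrier = 0.40     # assumed P(breast cancer by the observed age | carrier)
p_bc_noncarrier = 0.05  # assumed P(breast cancer by the observed age | non-carrier)

def carrier_posterior(prior, p_event_carrier, p_event_noncarrier):
    num = prior * p_event_carrier
    return num / (num + (1 - prior) * p_event_noncarrier)

print(carrier_posterior(prior, p_bc_carrier, p_bc_noncarrier))          # ~0.075
# A second event with higher probability density among carriers raises the
# posterior further, but only if the model actually accounts for it:
print(carrier_posterior(prior, p_bc_carrier * 0.10, p_bc_noncarrier * 0.02))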

Relevance: 30.00%

Abstract:

The effect of shot particles on the high-temperature, low-cycle fatigue of a hybrid fiber/particulate metal-matrix composite (MMC) was studied. Two hybrid composites with the general composition A356/35%SiC particle/5%fiber (one without shot) were tested. It was found that shot particles acting as stress concentrators had little effect on the fatigue performance. It appears that fibers with a high silica content were more likely to debond from the matrix. Final failure of the composite was found to occur preferentially in the matrix. SiC particles fracture progressively during fatigue testing, leading to higher stress in the matrix and final failure by matrix overload. A continuum-mechanics-based model was developed to predict failure in fatigue based on the tensile properties of the matrix and particles. By accounting for matrix yielding and recovery, composite creep, and the particle strength distribution, failure of the composite was predicted.
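As a toy illustration of the particle-strength-distribution ingredient mentioned above (my own assumed Weibull parameters, not the paper's calibrated values), the fraction of particles fractured at a given local stress can be written as F(s) = 1 - exp(-(s/s0)^m):

# Toy sketch with assumed Weibull parameters (not the paper's calibrated model):
# fraction of SiC particles fractured at a given local particle stress.
import numpy as np

m, s0 = 6.0, 2500.0                                   # assumed Weibull modulus and scale (MPa)
stress = np.array([800.0, 1200.0, 1600.0, 2000.0])    # assumed local particle stresses (MPa)

fraction_fractured = 1.0 - np.exp(-(stress / s0) ** m)
for s, f in zip(stress, fraction_fractured):
    print(f"{s:6.0f} MPa -> {f:.1%} of particles fractured")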

Relevance: 30.00%

Abstract:

The developmental processes and functions of an organism are controlled by its genes and the proteins derived from those genes. The identification of key genes and the reconstruction of gene networks can provide a model to help us understand the regulatory mechanisms behind the initiation and progression of biological processes or functional abnormalities (e.g. diseases) in living organisms. In this dissertation, I have developed statistical methods to identify the genes and transcription factors (TFs) involved in biological processes, constructed their regulatory networks, and also evaluated some existing association methods to find robust methods for coexpression analyses. Two kinds of data sets were used for this work: genotype data and gene expression microarray data. On the basis of these data sets, this dissertation has two major parts, together forming six chapters. The first part deals with developing association methods for rare variants using genotype data (chapters 4 and 5). The second part deals with developing and/or evaluating statistical methods to identify genes and TFs involved in biological processes, and with constructing their regulatory networks using gene expression data (chapters 2, 3, and 6). For the first part, I have developed two methods to find the groupwise association of rare variants with given diseases or traits. The first method is based on kernel machine learning and can be applied to both quantitative and qualitative traits. Simulation results showed that the proposed method has improved power over the existing weighted sum (WS) method in most settings. The second method uses multiple phenotypes to select a few top significant genes. It then finds the association of each gene with each phenotype while controlling for population stratification by adjusting the data for ancestry using principal components. This method was applied to the GAW 17 data and was able to find several disease risk genes. For the second part, I have worked on three problems. The first problem involved the evaluation of eight gene association methods; a comprehensive comparison with further analysis demonstrates where these eight methods behave similarly and where their performance differs. For the second problem, an algorithm named the bottom-up graphical Gaussian model was developed to identify the TFs that regulate pathway genes and to reconstruct their hierarchical regulatory networks; this algorithm produced highly significant results, and this is the first report to produce such hierarchical networks for these pathways. The third problem dealt with developing another algorithm, called the top-down graphical Gaussian model, that identifies the network governed by a specific TF. The network produced by this algorithm was shown to be highly accurate.
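For readers unfamiliar with graphical Gaussian models, the following minimal Python sketch (an illustration of the general idea, not the dissertation's bottom-up or top-down algorithms) shows how partial correlations read off the inverse covariance matrix distinguish direct from indirect gene-gene connections:

# Sketch of the basic graphical-Gaussian-model step: partial correlations from the
# precision (inverse covariance) matrix decide which pairs are directly connected.
# Synthetic data: two genes regulated by the same transcription factor.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
tf = rng.normal(size=n)                          # transcription factor expression
g1 = 0.9 * tf + rng.normal(scale=0.4, size=n)    # regulated gene 1
g2 = 0.9 * tf + rng.normal(scale=0.4, size=n)    # regulated gene 2 (linked to g1 only via tf)
X = np.column_stack([tf, g1, g2])

precision = np.linalg.inv(np.cov(X, rowvar=False))
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

print(np.round(partial_corr, 2))
# The g1-g2 partial correlation is near zero once tf is conditioned on,
# so no direct edge is drawn between them.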

Relevance: 30.00%

Abstract:

BACKGROUND Several treatment strategies are available for adults with advanced-stage Hodgkin's lymphoma, but studies assessing two alternative standards of care, escalated-dose bleomycin, etoposide, doxorubicin, cyclophosphamide, vincristine, procarbazine, and prednisone (BEACOPPescalated) and doxorubicin, bleomycin, vinblastine, and dacarbazine (ABVD), were not powered to test differences in overall survival. To guide treatment decisions in this population of patients, we did a systematic review and network meta-analysis to identify the best initial treatment strategy. METHODS We searched the Cochrane Library, Medline, and conference proceedings for randomised controlled trials published between January, 1980, and June, 2013, that assessed overall survival in patients with advanced-stage Hodgkin's lymphoma given BEACOPPbaseline, BEACOPPescalated, BEACOPP variants, ABVD, cyclophosphamide (mechlorethamine), vincristine, procarbazine, and prednisone (C[M]OPP), hybrid or alternating chemotherapy regimens with ABVD as the backbone (eg, COPP/ABVD, MOPP/ABVD), or doxorubicin, vinblastine, mechlorethamine, vincristine, bleomycin, etoposide, and prednisone combined with radiation therapy (the Stanford V regimen). We assessed studies for eligibility, extracted data, and assessed their quality. We then pooled the data and used a Bayesian random-effects model to combine direct comparisons with indirect evidence. We also reconstructed individual patient survival data from published Kaplan-Meier curves and did standard random-effects Poisson regression. Results are reported relative to ABVD. The primary outcome was overall survival. FINDINGS We screened 2055 records and identified 75 papers covering 14 eligible trials that assessed 11 different regimens in 9993 patients, providing 59 651 patient-years of follow-up. 1189 patients died, and the median follow-up was 5·9 years (IQR 4·9-6·7). Included studies were of high methodological quality, and between-trial heterogeneity was negligible (τ²=0·01). Overall survival was highest in patients who received six cycles of BEACOPPescalated (HR 0·38, 95% credibility interval [CrI] 0·20-0·75). Compared with a 5-year survival of 88% for ABVD, the survival benefit for six cycles of BEACOPPescalated is 7% (95% CrI 3-10), ie, a 5-year survival of 95%. Reconstructed individual survival data showed that, at 5 years, BEACOPPescalated has a 10% (95% CI 3-15) advantage over ABVD in overall survival. INTERPRETATION Six cycles of BEACOPPescalated significantly improves overall survival compared with ABVD and other regimens, and thus we recommend this treatment strategy as standard of care for patients with access to the appropriate supportive care.
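As a quick check of the arithmetic relating the hazard ratio to the quoted absolute benefit: under a proportional-hazards assumption, the 5-year survival on BEACOPPescalated is the ABVD survival raised to the power of the hazard ratio. A minimal sketch in Python, using the values reported in the abstract:

# Worked check of the abstract's survival-benefit arithmetic (illustrative only).
# Under proportional hazards, S_treat(t) = S_control(t) ** HR.
s_abvd_5y = 0.88   # 5-year overall survival with ABVD (from the abstract)
hr = 0.38          # hazard ratio for six cycles of BEACOPPescalated vs ABVD

s_beacopp_5y = s_abvd_5y ** hr
benefit = s_beacopp_5y - s_abvd_5y
print(f"BEACOPPescalated 5-year survival ~ {s_beacopp_5y:.2f}")   # ~0.95
print(f"Absolute benefit ~ {benefit:.2%}")                        # ~7 percentage points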

Relevance: 30.00%

Abstract:

This paper considers a framework where data from correlated sources are transmitted with the help of network coding in ad hoc network topologies. The correlated data are encoded independently at the sensors, and network coding is employed at the intermediate nodes in order to improve the data delivery performance. In such settings, we focus on the problem of reconstructing the sources at the decoder when perfect decoding is not possible due to losses or bandwidth variations. We show that source data similarity can be exploited at the decoder to permit decoding based on a novel and simple approximate decoding scheme. We analyze the influence of the network coding parameters, in particular the size of the finite coding field, on the decoding performance. We further determine the optimal field size that maximizes the expected decoding performance as a trade-off between the information loss incurred by limiting the resolution of the source data and the error probability in the reconstructed data. Moreover, we show that the performance of approximate decoding improves when the accuracy of the source model increases, even with simple approximate decoding techniques. We provide illustrative examples showing how the proposed algorithm can be deployed in sensor networks and distributed imaging applications.
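The field-size trade-off can be made concrete with a toy model (entirely my own assumptions, not the paper's analysis): a larger field reduces the resolution loss of quantizing the sources onto field elements, while the probability of reconstructing a wrong symbol is assumed here to grow with the number of bits per symbol:

# Toy model of the field-size trade-off (my own assumptions, not the paper's):
# expected distortion = quantization error when decoding succeeds
#                     + a larger error when a reconstructed symbol is wrong.
import numpy as np

bits = np.arange(2, 13)
q = 2.0 ** bits
p_bit = 0.01                                    # assumed per-bit error probability
quantization_mse = 1.0 / (12 * q ** 2)          # uniform quantizer MSE on [0, 1]
p_symbol_err = 1.0 - (1.0 - p_bit) ** bits      # assumed symbol error probability
wrong_symbol_mse = 1.0 / 6.0                    # MSE against a uniformly wrong value

expected_mse = (1 - p_symbol_err) * quantization_mse + p_symbol_err * wrong_symbol_mse
best = bits[np.argmin(expected_mse)]
print(dict(zip(bits.tolist(), np.round(expected_mse, 5))))
print("best field size under this toy model: GF(2^%d)" % best)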

Relevance: 30.00%

Abstract:

In this paper, we show statistical analyses of several types of traffic sources in a 3G network, namely voice, video, and data sources. For each traffic source type, measurements were collected in order to, on the one hand, gain a better understanding of the statistical characteristics of the sources and, on the other hand, enable forecasting of traffic behaviour in the network. The latter can be used to estimate service times and quality of service parameters. The probability density function, mean, variance, mean square deviation, skewness, and kurtosis of the interarrival times are estimated with the Wolfram Mathematica and Crystal Ball statistical tools. Based on the evaluation of packet interarrival times, we show how the gamma distribution can be used in network simulations and in the evaluation of available capacity in opportunistic systems. As a result of our analyses, shape and scale parameters of the gamma distribution are generated. The data can also be applied in dynamic network configuration in order to avoid potential network congestion or overflows. Copyright © 2013 John Wiley & Sons, Ltd.
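A minimal Python equivalent of the gamma-fitting step (with synthetic placeholder measurements rather than the paper's 3G traces) might look like this:

# Minimal sketch: fit a gamma distribution to packet interarrival times and read
# off its shape/scale parameters, plus the summary statistics named above.
# The data below are synthetic placeholders, not the paper's measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
interarrival = rng.gamma(shape=0.8, scale=12.0, size=5000)   # placeholder times (ms)

shape, loc, scale = stats.gamma.fit(interarrival, floc=0)     # fix location at 0
print(f"shape k ~ {shape:.3f}, scale theta ~ {scale:.3f}")

print("mean", interarrival.mean(), "variance", interarrival.var())
print("skewness", stats.skew(interarrival), "kurtosis", stats.kurtosis(interarrival))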

Relevance: 30.00%

Abstract:

BACKGROUND Up to 40% of ischaemic strokes are cryptogenic. A strong association between cryptogenic stroke and the prevalence of patent foramen ovale (PFO) suggests paradoxical embolism via PFO as a potential cause. Randomized trials failed to demonstrate superiority of PFO closure over medical therapy. METHODS AND RESULTS Randomized trials comparing percutaneous PFO closure against medical therapy, or comparing devices head-to-head, published or presented by March 2013 were identified through a systematic search. We performed a network meta-analysis to determine the effectiveness and safety of PFO closure with different devices when compared with medical therapy. We included four randomized trials (2963 patients with 9309 patient-years). Investigated devices were Amplatzer (AMP), STARFlex (STF), and HELEX (HLX). Patients allocated to PFO closure with AMP were less likely to experience a stroke than patients allocated to medical therapy [rate ratio (RR) 0.39; 95% CI: 0.17-0.84]. No significant differences were found for STF (RR 1.01; 95% CI: 0.44-2.41) and HLX (RR 0.71; 95% CI: 0.17-2.78) when compared with medical therapy. The probability of being best in preventing strokes was 77.1% for AMP, 20.9% for HLX, 1.7% for STF, and 0.4% for medical therapy. No significant differences were found for transient ischaemic attack and death. The risk of new-onset atrial fibrillation was more pronounced for STF (RR 7.67; 95% CI: 3.25-19.63) than for AMP (RR 2.14; 95% CI: 1.00-4.62) and HLX (RR 1.33; 95% CI: 0.33-4.50), when compared with medical therapy. CONCLUSIONS The effectiveness of PFO closure depends on the device used. PFO closure with AMP appears superior to medical therapy in preventing strokes in patients with cryptogenic embolism.

Relevance: 30.00%

Abstract:

Let Y be a stochastic process on [0,1] satisfying dY(t) = n^{1/2} f(t) dt + dW(t), where n ≥ 1 is a given scale parameter ("sample size"), W is standard Brownian motion and f is an unknown function. Utilizing suitable multiscale tests, we construct confidence bands for f with guaranteed given coverage probability, assuming that f is isotonic or convex. These confidence bands are computationally feasible and shown to be asymptotically sharp optimal in an appropriate sense.

Relevance: 30.00%

Abstract:

BACKGROUND Non-steroidal anti-inflammatory drugs (NSAIDs) are the backbone of osteoarthritis pain management. We aimed to assess the effectiveness of different preparations and doses of NSAIDs on osteoarthritis pain in a network meta-analysis. METHODS For this network meta-analysis, we considered randomised trials comparing any of the following interventions: NSAIDs, paracetamol, or placebo, for the treatment of osteoarthritis pain. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) and the reference lists of relevant articles for trials published between Jan 1, 1980, and Feb 24, 2015, with at least 100 patients per group. The prespecified primary and secondary outcomes were pain and physical function, and were extracted in duplicate for up to seven timepoints after the start of treatment. We used an extension of multivariable Bayesian random-effects models for mixed multiple treatment comparisons with a random effect at the level of trials. For the primary analysis, a random walk of first order was used to account for multiple follow-up outcome data within a trial. Preparations with different total daily doses were considered separately in the analysis. To assess a potential dose-response relation, we used preparation-specific covariates, assuming linearity on log relative dose. FINDINGS We identified 8973 manuscripts from our search, of which 74 randomised trials with a total of 58 556 patients were included in this analysis. 23 treatment nodes, covering seven different NSAIDs or paracetamol at specific daily doses, plus placebo, were considered. All preparations, irrespective of dose, improved point estimates of pain symptoms when compared with placebo. For six interventions (diclofenac 150 mg/day; etoricoxib 30 mg/day, 60 mg/day, and 90 mg/day; and rofecoxib 25 mg/day and 50 mg/day), the probability that the difference from placebo is at or below a prespecified minimum clinically important effect for pain reduction (effect size [ES] -0·37) was at least 95%. Among maximally approved daily doses, diclofenac 150 mg/day (ES -0·57, 95% credibility interval [CrI] -0·69 to -0·46) and etoricoxib 60 mg/day (ES -0·58, -0·73 to -0·43) had the highest probability of being the best intervention, both with 100% probability of reaching the minimum clinically important difference. Treatment effects increased as drug dose increased, but the corresponding tests for a linear dose effect were significant only for celecoxib (p=0·030), diclofenac (p=0·031), and naproxen (p=0·026). We found no evidence that treatment effects varied over the duration of treatment. Model fit was good, and between-trial heterogeneity and inconsistency were low in all analyses. All trials were deemed to have a low risk of bias for blinding of patients. Effect estimates did not change in sensitivity analyses with two additional statistical models and accounting for methodological quality criteria in meta-regression analysis. INTERPRETATION On the basis of the available data, we see no role for single-agent paracetamol for the treatment of patients with osteoarthritis, irrespective of dose. We provide sound evidence that diclofenac 150 mg/day is the most effective NSAID available at present, in terms of improving both pain and function. Nevertheless, in view of the safety profile of these drugs, physicians need to consider our results together with all known safety information when selecting the preparation and dose for individual patients.
FUNDING Swiss National Science Foundation (grant number 405340-104762) and Arco Foundation, Switzerland.
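The log-linear dose-response assumption can be illustrated with a small sketch (invented effect sizes, not trial data): the placebo-adjusted effect size is regressed on the log of the relative dose for a given preparation:

# Toy sketch of the dose-response assumption: effect size modelled as linear in
# log relative dose. The doses and effect sizes below are invented for illustration.
import numpy as np

doses = np.array([50.0, 100.0, 150.0])          # daily doses (mg), illustrative
effect_sizes = np.array([-0.30, -0.45, -0.57])  # assumed placebo-adjusted effect sizes

x = np.log(doses / doses.max())                 # log relative dose
a, b = np.polyfit(x, effect_sizes, 1)[::-1]     # ES ~ a + b * log(relative dose)
print(f"intercept a ~ {a:.3f}, slope b ~ {b:.3f}")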

Relevance: 30.00%

Abstract:

The selection of metrics for ecosystem restoration programs is critical for improving the quality of monitoring programs and characterizing project success. Moreover, it is often difficult to balance the importance of multiple ecological, social, and economic metrics. The metric selection process is complex and must simultaneously take into account monitoring data, environmental models, socio-economic considerations, and stakeholder interests. We propose multicriteria decision analysis (MCDA) methods, broadly defined, for the selection of optimal sets of metrics to enhance the evaluation of ecosystem restoration alternatives. Two MCDA methods, a multiattribute utility analysis (MAUT) and a probabilistic multicriteria acceptability analysis (ProMAA), are applied and compared for a hypothetical case study of a river restoration involving multiple stakeholders. Overall, the MCDA results in a systematic, unbiased, and transparent solution that informs the evaluation of restoration alternatives. The two methods provide comparable results in terms of selected metrics. However, because ProMAA can consider probability distributions for the weights and utility values of the metrics for each criterion, it is suggested as the best option if data uncertainty is high. Despite the added complexity of the metric selection process, MCDA improves upon the current ad hoc decision practice based on consultations with stakeholders and experts, and encourages transparent and quantitative aggregation of data and judgement, increasing the transparency of decision making in restoration projects. We believe that MCDA can enhance the overall sustainability of ecosystem restoration by addressing both ecological and societal needs.
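As a minimal illustration of the multiattribute utility step (made-up metrics and weights, not the case-study values), each restoration alternative is scored as a weighted sum of normalized single-attribute utilities:

# Minimal MAUT sketch with invented numbers: weighted additive aggregation of
# normalized single-attribute utilities for three restoration alternatives.
import numpy as np

# rows: alternatives, columns: metrics (e.g. habitat area, cost, stakeholder acceptance)
scores = np.array([
    [0.70, 0.40, 0.90],
    [0.55, 0.80, 0.60],
    [0.90, 0.30, 0.50],
])
weights = np.array([0.5, 0.3, 0.2])   # assumed stakeholder weights, summing to 1

utilities = scores @ weights
best = int(np.argmax(utilities))
print("overall utilities:", np.round(utilities, 3), "-> best alternative:", best)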

Relevance: 30.00%

Abstract:

This paper proposes a new mechanism linking innovation and networks in developing economies to detect explicit production and information linkages, and investigates the testable implications of these linkages using survey data gathered from manufacturing firms in East Asia. We found that firms with more information linkages tend to innovate more: they have a higher probability of introducing new goods, of introducing new goods to new markets using new technologies, and of finding new partners located in remote areas. We also found that firms that dispatched engineers to customers achieved more innovations than firms that did not. These findings support the hypothesis that production linkages and face-to-face communication encourage product and process innovation.

Relevance: 30.00%

Abstract:

Learning the structure of a graphical model from data is a common task in a wide range of practical applications. In this paper, we focus on Gaussian Bayesian networks, i.e., on continuous data and directed acyclic graphs with a joint probability density of all variables given by a Gaussian. We propose to work in an equivalence class search space, specifically using the k-greedy equivalence search algorithm. This, combined with regularization techniques to guide the structure search, can learn sparse networks close to the one that generated the data. We provide results on some synthetic networks and on modeling the gene network of the two biological pathways regulating the biosynthesis of isoprenoids for the Arabidopsis thaliana plant.
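The role of a penalized score in steering the structure search towards sparse networks can be sketched as follows (a generic linear-Gaussian BIC illustration, not the paper's k-greedy equivalence search or its regularization scheme):

# Sketch: scoring candidate DAGs for continuous data with a linear-Gaussian BIC,
# the kind of penalized score that guides a structure search towards sparse networks.
import numpy as np

def gaussian_bic(data, dag):
    """data: (n, p) array; dag: dict mapping node index -> list of parent indices."""
    n, _ = data.shape
    score = 0.0
    for node, parents in dag.items():
        y = data[:, node]
        X = np.column_stack([np.ones(n)] + [data[:, p] for p in parents])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / n
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        score += loglik - 0.5 * (len(parents) + 2) * np.log(n)   # BIC penalty per node
    return score

rng = np.random.default_rng(1)
x0 = rng.normal(size=500)
x1 = 0.8 * x0 + rng.normal(scale=0.5, size=500)   # true structure: 0 -> 1
data = np.column_stack([x0, x1])

print(gaussian_bic(data, {0: [], 1: [0]}))   # true DAG scores higher
print(gaussian_bic(data, {0: [], 1: []}))    # empty DAG scores lower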

Relevance: 30.00%

Abstract:

This paper carries out an aerodynamic optimization of train characteristics in terms of sensitivity to front wind action. In particular, a genetic algorithm (GA) is used to perform a shape optimization study of a high-speed train nose. The nose is parametrically defined via Bézier curves, so that the design space includes as wide a range of geometries as possible among the candidate optimal solutions. The main disadvantage of using a GA is the large number of evaluations needed before such an optimum is found. Here, the use of metamodels to replace the Navier-Stokes solver is proposed. Among the possibilities, response surface models and artificial neural networks (ANN) are considered. The best prediction and generalization results are obtained with the ANN, which is therefore applied in the GA code. The paper shows the feasibility of using a GA in combination with an ANN for this problem, and the solutions achieved are included.
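A compact sketch of the surrogate-assisted GA loop described above (a toy analytic objective stands in for the CFD evaluation, sklearn's MLPRegressor plays the ANN, and all parameter values are illustrative assumptions):

# Surrogate-assisted GA sketch: train an ANN on a batch of "expensive" evaluations,
# then run a simple real-coded GA on the cheap surrogate instead of the solver.
# The design variables stand in for Bézier control-point ordinates of the nose.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
dim = 4                                     # number of control-point ordinates

def expensive_objective(x):                 # placeholder for the CFD evaluation
    return np.sum((x - 0.3) ** 2, axis=-1) + 0.1 * np.sin(5 * x).sum(axis=-1)

# 1) Sample the design space and train the ANN surrogate
X_train = rng.uniform(0, 1, size=(200, dim))
y_train = expensive_objective(X_train)
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
surrogate.fit(X_train, y_train)

# 2) Evolve designs against the surrogate (minimization)
pop = rng.uniform(0, 1, size=(40, dim))
for _ in range(60):
    fitness = surrogate.predict(pop)
    parents = pop[np.argsort(fitness)[:20]]                                    # selection
    idx = rng.integers(0, 20, size=(40, 2))
    alpha = rng.uniform(size=(40, 1))
    children = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]   # blend crossover
    children += rng.normal(scale=0.02, size=children.shape)                    # mutation
    pop = np.clip(children, 0, 1)

best = pop[np.argmin(surrogate.predict(pop))]
print("best design (surrogate):", np.round(best, 3),
      "true objective:", round(float(expensive_objective(best)), 4))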