40 results for: Accelerated failure time model. Correlated data. Imputation. Residuals analysis
Abstract:
Context: Caveolin-1 (CAV1) is an inhibitor of tissue fibrosis.
Objective: To study the association of CAV1 gene variation with kidney transplant outcome, using kidney transplantation as a model of accelerated fibrosis.
Design, Setting, and Patients: Candidate gene association and validation study. Genomic DNA from 785 white kidney transplant donors and their respective recipients (transplantations in Birmingham, England, between 1996 and 2006; median follow-up, 81 months) was analyzed for common variation in CAV1 using a single-nucleotide polymorphism (SNP) tagging approach. Validation of positive findings was sought in an independent kidney transplant donor-recipient cohort (transplantations in Belfast, Northern Ireland, between 1986 and 2005; n=697; median follow-up, 69 months). Association between genotype and allograft failure was initially assessed by Kaplan-Meier analysis, then in an adjusted Cox model.
Main Outcome Measure: Death-censored allograft failure, defined as a return to dialysis or retransplantation.
Results: The presence of donor AA genotype for the CAV1 rs4730751 SNP was associated with increased risk of allograft failure in the Birmingham group (donor AA vs non-AA genotype in adjusted Cox model, hazard ratio [HR], 1.97; 95% confidence interval [CI], 1.29-3.16; P=.002). No other tag SNPs showed a significant association. This finding was validated in the Belfast cohort (in adjusted Cox model, HR, 1.56; 95% CI, 1.07-2.27; P=.02). Overall graft failure rates were as follows: for the Birmingham cohort, donor genotype AA, 22 of 57 (38.6%); genotype CC, 96 of 431 (22.3%); and genotype AC, 66 of 297 (22.2%); and for the Belfast cohort, donor genotype AA, 32 of 48 (67%); genotype CC, 150 of 358 (42%); and genotype AC, 119 of 273 (44%).
Conclusion: Among kidney transplant donors, the CAV1 rs4730751 SNP was significantly associated with allograft failure in 2 independent cohorts.
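As an illustration of the survival analysis this abstract describes (Kaplan-Meier curves followed by an adjusted Cox model for death-censored allograft failure), here is a minimal Python sketch using the lifelines library. The file name, column names, and adjustment covariates are hypothetical placeholders; the cohort data are not public.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("birmingham_cohort.csv")  # hypothetical file

# Kaplan-Meier curves stratified by donor genotype (AA vs non-AA)
kmf = KaplanMeierFitter()
for label, grp in df.groupby("donor_AA"):
    kmf.fit(grp["time_months"], event_observed=grp["graft_failed"],
            label=f"donor_AA={label}")
    kmf.plot_survival_function()

# Adjusted Cox proportional hazards model; the covariates are placeholders
cph = CoxPHFitter()
cph.fit(df[["time_months", "graft_failed", "donor_AA",
            "donor_age", "recipient_age", "hla_mismatch"]],
        duration_col="time_months", event_col="graft_failed")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs per covariate
```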
Abstract:
In studies of radiation-induced DNA fragmentation and repair, analytical models may provide rapid and easy-to-use methods to test simple hypotheses regarding the breakage and rejoining mechanisms involved. The random breakage model, according to which lesions are distributed uniformly and independently of each other along the DNA, has been the model most used to describe the spatial distribution of radiation-induced DNA damage. Recently, several mechanistic approaches have been proposed that model clustered damage to DNA. In general, such approaches focus on the study of initial radiation-induced DNA damage and repair, without considering the effects of additional (unwanted and unavoidable) fragmentation that may take place during the experimental procedures. While most approaches, including measurement of total DNA mass below a specified value, allow for the occurrence of background experimental damage by means of simple subtractive procedures, a more detailed analysis of DNA fragmentation necessitates a more accurate treatment. We have developed a new, relatively simple model of DNA breakage and the resulting rejoining kinetics of broken fragments. Initial radiation-induced DNA damage is simulated using a clustered breakage approach, with three free parameters: the number of independently located clusters, each containing several DNA double-strand breaks (DSBs); the average number of DSBs within a cluster (the multiplicity of the cluster); and the maximum allowed radius within which DSBs belonging to the same cluster are distributed. Random breakage is simulated as a special case of the DSB clustering procedure. When the model is applied to the analysis of DNA fragmentation as measured with pulsed-field gel electrophoresis (PFGE), the hypothesis that DSBs in proximity rejoin at a different rate from that of sparse isolated breaks can be tested, since the kinetics of rejoining of fragments of varying size may be followed by means of computer simulations. The problem of how to account for background damage from experimental handling is also carefully considered. We have shown that the conventional procedure of subtracting the background damage from the experimental data may lead to erroneous conclusions during the analysis of both initial fragmentation and DSB rejoining. Despite its relative simplicity, the method presented allows both the quantitative and qualitative description of radiation-induced DNA fragmentation and subsequent rejoining of double-stranded DNA fragments. (C) 2004 by Radiation Research Society.
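The clustered-breakage simulation has a simple core that can be sketched directly from the three free parameters named above. The following Python sketch is illustrative only: the genome length and parameter values are arbitrary, and the within-cluster multiplicity is assumed Poisson-distributed, which the abstract does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)

def clustered_dsbs(genome_bp, n_clusters, mean_multiplicity, max_radius_bp):
    """Sorted DSB positions (bp) from a clustered-breakage model."""
    centers = rng.uniform(0, genome_bp, size=n_clusters)
    positions = []
    for c in centers:
        # assumed: Poisson multiplicity per cluster, at least one DSB
        m = max(1, rng.poisson(mean_multiplicity))
        # DSBs scattered uniformly within max_radius_bp of the cluster centre
        positions.extend(c + rng.uniform(-max_radius_bp, max_radius_bp, size=m))
    return np.sort(np.clip(positions, 0, genome_bp))

# Random breakage is recovered as the special case of multiplicity 1
dsbs = clustered_dsbs(genome_bp=3e9, n_clusters=200,
                      mean_multiplicity=3, max_radius_bp=5e5)
fragments = np.diff(np.concatenate(([0.0], dsbs, [3e9])))  # fragment sizes
print(len(dsbs), "DSBs ->", len(fragments), "fragments")
```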
Abstract:
An intralaminar damage model, based on a continuum damage mechanics approach, is presented to model the damage mechanisms occurring in carbon fibre composite structures, incorporating fibre tensile and compressive breakage, matrix tensile and compressive fracture, and shear failure. The damage model, together with interface elements for capturing interlaminar failure, is implemented in a finite element package and used in a detailed finite element model to simulate the response of a stiffened composite panel to low-velocity impact. Contact algorithms and friction between delaminated plies were included to better simulate the impact event. Analyses were executed on a high-performance computing (HPC) cluster to reduce the actual time required for this detailed numerical analysis. Numerical results relating to the various observed interlaminar damage mechanisms, delamination initiation and propagation, as well as the model's ability to capture post-impact permanent indentation in the panel, are discussed. Very good agreement was achieved with experimentally obtained data for energy absorbed and impactor force versus time. The extent of damage predicted around the impact site also corresponded well with the damage detected by non-destructive evaluation of the tested panel.
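The abstract does not state which intralaminar failure criteria the model evaluates; a Hashin-type criterion is a common choice in this class of continuum damage models, and the sketch below shows such a check in Python purely for illustration. The ply strengths are generic carbon/epoxy values, not the paper's data.

```python
def hashin_2d(s11, s22, t12, Xt=2000.0, Xc=1200.0, Yt=50.0, Yc=200.0, S=80.0):
    """Plane-stress Hashin-type failure indices (>= 1 flags mode onset).

    s11, s22, t12: ply stresses (MPa); Xt/Xc, Yt/Yc, S: strengths (MPa).
    """
    modes = {}
    if s11 >= 0:  # fibre tension
        modes["fibre_tension"] = (s11 / Xt) ** 2 + (t12 / S) ** 2
    else:         # fibre compression
        modes["fibre_compression"] = (s11 / Xc) ** 2
    if s22 >= 0:  # matrix tension
        modes["matrix_tension"] = (s22 / Yt) ** 2 + (t12 / S) ** 2
    else:         # matrix compression (classic Hashin 1980 form)
        modes["matrix_compression"] = ((s22 / (2 * S)) ** 2
                                       + ((Yc / (2 * S)) ** 2 - 1) * s22 / Yc
                                       + (t12 / S) ** 2)
    return modes

print(hashin_2d(s11=1800.0, s22=30.0, t12=60.0))
```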
Plasma total homocysteine and carotid intima-media thickness in type 1 diabetes: A prospective study
Abstract:
Objective: Plasma total homocysteine (tHcy) has been positively associated with carotid intima-media thickness (IMT) in non-diabetic populations and in a few cross-sectional studies of diabetic patients. We investigated cross-sectional and prospective associations of a single measure of tHcy with common and internal carotid IMT over a 6-year period in type 1 diabetes. Research design and methods: tHcy levels were measured once, in plasma obtained in 1997–1999 from patients (n = 599) in the Epidemiology of Diabetes Interventions and Complications (EDIC) study, the observational follow-up of the Diabetes Control and Complications Trial (DCCT). Common and internal carotid IMT were determined twice, in EDIC “Year 6” (1998–2000) and “Year 12” (2004–2006), using B-mode ultrasonography. Results: After adjustment, plasma tHcy [median (interquartile range): 6.2 (5.1, 7.5) μmol/L] was significantly correlated with age, diastolic blood pressure, renal dysfunction, and smoking (all p < 0.05). In unadjusted models only, increasing quartiles of tHcy correlated with common and internal carotid IMT at both EDIC time-points (p < 0.01). However, multivariate logistic regression revealed no significant associations between increasing quartiles of tHcy and the 6-year change in common and internal carotid IMT (highest vs. lowest quartile) when adjusted for conventional risk factors. Conclusions: In a type 1 diabetes cohort from the EDIC study, plasma tHcy measured in samples drawn in 1997–1999 was associated with measures of common and internal carotid IMT made both one and seven years later, but not with IMT progression between the two time-points. The data do not support routine measurement of tHcy in people with type 1 diabetes.
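A minimal sketch of the quartile-based regression analysis described above, using pandas and statsmodels. The file, variable names, outcome coding, and adjustment set are illustrative assumptions; the EDIC data are not reproduced here.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("edic_subset.csv")  # hypothetical file

# Quartiles of plasma tHcy, coded 1-4
df["thcy_q"] = pd.qcut(df["thcy"], 4, labels=[1, 2, 3, 4]).astype(int)

# Outcome assumed binary here: upper-quartile 6-year IMT progression
y = df["imt_prog_upper_q"]
X = sm.add_constant(
    pd.get_dummies(df["thcy_q"], prefix="q", drop_first=True)
      .join(df[["age", "dbp", "smoker", "ldl"]]))

fit = sm.Logit(y, X.astype(float)).fit()
print(fit.summary())  # odds ratios are exp(fit.params)
```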
Abstract:
A significant portion of the UK's infrastructure earthworks was built more than 100 years ago, without modern construction standards: poor maintenance and the changes in precipitation patterns experienced over the past decades are currently compromising their stability, leading to an increasing number of failures. To address the need for reliable and time-efficient monitoring of earthworks at risk of failure, we propose here the use of two established techniques for near-surface seismic characterization: multichannel analysis of surface waves (MASW) and P-wave refraction. We collected MASW and P-wave refraction data at regular intervals, from March 2014 to February 2015, along 4 reduced-scale seismic lines located on the flanks of a heritage railway embankment in Broadway, south-west England. We observed a definite temporal variability in the phase velocities of the surface-wave dispersion curves and in the P-wave travel times. The careful choice of ad hoc inversion strategies allowed us to reconstruct reliable VP and VS models through which it is potentially possible to track temporal variations in the geomechanical properties of the embankment slopes. The variability of the seismic data and seismic velocities over time appears to correlate well with rainfall recorded in the days immediately preceding each acquisition date.
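Alongside the MASW dispersion analysis, the P-wave refraction data lend themselves to a classical two-layer intercept-time interpretation, sketched below in Python. The offsets, travel-time picks, and crossover split are invented illustrative values, not the Broadway measurements.

```python
import numpy as np

offsets = np.array([2, 4, 6, 8, 10, 12, 14, 16, 18, 20.0])          # m
t_picks = np.array([5.0, 10, 15, 20, 23.5, 25.5, 27.5,
                    29.5, 31.5, 33.5]) * 1e-3                        # s

direct, refracted = slice(0, 4), slice(4, None)                      # crossover guess
s1, _ = np.polyfit(offsets[direct], t_picks[direct], 1)
s2, ti = np.polyfit(offsets[refracted], t_picks[refracted], 1)

v1, v2 = 1 / s1, 1 / s2                                              # layer velocities
depth = ti * v1 * v2 / (2 * np.sqrt(v2**2 - v1**2))                  # intercept-time method
print(f"V1 = {v1:.0f} m/s, V2 = {v2:.0f} m/s, refractor at {depth:.1f} m")
```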
Abstract:
Since 1999, Th/K and Th/U ratios from rapid, inexpensive and non-destructive spectral gamma ray measurements have been used as a proxy for changes in palaeo-hinterland weathering. This model is tested here by analysis of in situ palaeoweathering horizons where clay mineral contents are well known. A residual palaeoweathered horizon of Palaeogene laterite (developed on basalt) has been logged at 14 locations across Northern Ireland using spectral gamma ray detectors. The results are compared to published elemental and mineralogical data. While the model of K and U loss during the early stages of weathering to smectite and kaolinite is supported, the formation (during progressively more advanced weathering) of gibbsite and iron oxides has reversed the predicted pattern and caused U and Th retention in the weathering profile. The severity (duration, humidity) of weathering and palaeoweathering may be estimated using Th/K ratios as a proxy. The use of Th/U ratios is more problematic should detrital gibbsite (or similar clays) or iron oxides be present. Mineralogical analysis is needed in order to evaluate the hosts for K, U and Th; nonetheless, the spectral gamma ray detector offers a real-time, inexpensive and effective tool for the preliminary or conjunctive assessment of degrees of weathering or palaeoweathering.
Abstract:
The application of custom classification techniques and posterior probability modeling (PPM) using Worldview-2 multispectral imagery to archaeological field survey is presented in this paper. Research is focused on the identification of Neolithic felsite stone tool workshops in the North Mavine region of the Shetland Islands in Northern Scotland. Sample data from known workshops surveyed using differential GPS are used alongside known non-sites to train a linear discriminant analysis (LDA) classifier based on a combination of datasets including Worldview-2 bands, band difference ratios (BDR) and topographical derivatives. Principal components analysis is further used to test and reduce dimensionality caused by redundant datasets. Probability models were generated by LDA using principal components and tested with sites identified through geological field survey. Testing demonstrates the prospective ability of this technique, with significance between 0.05 and 0.01 and gain statistics between 0.90 and 0.94, higher than those obtained using maximum likelihood and random forest classifiers. Results suggest that this approach is best suited to relatively homogeneous site types, and performs better with correlated data sources. Finally, by combining posterior probability models and least-cost analysis, a survey least-cost efficacy model is generated, showing the utility of such approaches to archaeological field survey.
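The classification workflow summarized above can be sketched with scikit-learn, where LDA's predict_proba output plays the role of the posterior probability model. The feature files are placeholders; the actual Worldview-2 band stack, BDRs, and terrain derivatives are not public.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# X: (n_pixels, n_features) stack of bands, band-difference ratios and
# topographic derivatives; y: 1 = felsite workshop, 0 = non-site
X_train = np.load("features.npy")        # hypothetical files
y_train = np.load("labels.npy")

pca = PCA(n_components=0.95)             # keep 95% of the variance
Z = pca.fit_transform(X_train)

lda = LinearDiscriminantAnalysis().fit(Z, y_train)

X_scene = np.load("scene_features.npy")  # full survey area, hypothetical
posterior = lda.predict_proba(pca.transform(X_scene))[:, 1]
print("mean posterior probability:", posterior.mean())
```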
Abstract:
A method for the simulation of acoustical bores, useful in the context of sound synthesis by physical modeling of woodwind instruments, is presented. As with previously developed methods, such as digital waveguide modeling (DWM) [Smith, Comput. Music J. 16, pp. 74-91 (1992)] and the multiconvolution algorithm (MCA) [Martinez et al., J. Acoust. Soc. Am. 84, pp. 1620-1627 (1988)], the approach is based on a one-dimensional model of wave propagation in the bore. Both the DWM method and the MCA explicitly compute the transmission and reflection of wave variables that represent actual traveling pressure waves. The method presented in this report, the wave digital modeling (WDM) method, avoids the typical limitations associated with these methods by using a more general definition of the wave variables. An efficient and spatially modular discrete-time model is constructed from the digital representations of elemental bore units such as cylindrical sections, conical sections, and toneholes. Frequency-dependent phenomena, such as boundary losses, are approximated with digital filters. The stability of a simulation of a complete acoustic bore is investigated empirically. Results of the simulation of a full clarinet show that very good concordance with classic transmission-line theory is obtained.
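The "classic transmission-line theory" used as the benchmark above has a compact computational form: chain the ABCD matrices of cylindrical sections and read off the input impedance. The Python sketch below does this for a lossless two-section toy bore; the geometry is illustrative, not a clarinet model.

```python
import numpy as np

RHO, C = 1.2, 343.0                          # air density (kg/m^3), sound speed (m/s)

def cylinder_matrix(length, radius, omega):
    """Lossless transmission-line (ABCD) matrix of a cylindrical section."""
    Zc = RHO * C / (np.pi * radius**2)       # characteristic impedance
    kL = omega / C * length
    return np.array([[np.cos(kL), 1j * Zc * np.sin(kL)],
                     [1j * np.sin(kL) / Zc, np.cos(kL)]])

def input_impedance(sections, omega, z_load=0.0):
    """Chain section matrices; z_load = 0 models an ideal open end."""
    M = np.eye(2, dtype=complex)
    for length, radius in sections:
        M = M @ cylinder_matrix(length, radius, omega)
    return (M[0, 0] * z_load + M[0, 1]) / (M[1, 0] * z_load + M[1, 1])

bore = [(0.3, 0.0075), (0.3, 0.009)]         # (length m, radius m) per section
freqs = np.linspace(50, 2000, 2000)
z_in = np.array([input_impedance(bore, 2 * np.pi * f) for f in freqs])
print(f"strongest impedance peak near {freqs[np.argmax(abs(z_in))]:.0f} Hz")
```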
Abstract:
We present a numerical and theoretical study of intense-field single-electron ionization of helium at 390 nm and 780 nm. Accurate ionization rates (over an intensity range of (0.175-34) × 10^14 W/cm^2 at 390 nm, and (0.275-14.4) × 10^14 W/cm^2 at 780 nm) are obtained from full-dimensionality integrations of the time-dependent helium-laser Schrödinger equation. We show that the power law of lowest-order perturbation theory, modified with a ponderomotive-shifted ionization potential, is capable of modelling the ionization rates over an intensity range that extends up to two orders of magnitude higher than that applicable to perturbation theory alone. Writing the modified perturbation theory in terms of scaled wavelength and intensity variables, we obtain to first approximation a single ionization law for both the 390 nm and 780 nm cases. To model the data in the high-intensity limit as well as in the low, a new function is introduced for the rate. This function has, in part, a resemblance to that derived from tunnelling theory but, importantly, retains the correct frequency dependence and scaling behaviour derived from the perturbative-like models at lower intensities. Comparison with the predictions of classical ADK tunnelling theory confirms that ADK performs poorly in the frequency and intensity domain treated here.
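The ponderomotive shift in the modified power law is easy to evaluate numerically. The sketch below uses the standard quiver-energy formula Up[eV] = 9.33e-14 · I[W/cm^2] · (λ[μm])^2 and helium's ionization potential to get the lowest-order photon number at each wavelength; the paper's fitted rate function itself is not reproduced.

```python
import math

IP_HELIUM = 24.59                                  # ionization potential (eV)

def lopt_order(wavelength_nm, intensity_wcm2):
    """LOPT photon number with a ponderomotive-shifted ionization potential."""
    photon_ev = 1239.84 / wavelength_nm            # photon energy (eV)
    up_ev = 9.33e-14 * intensity_wcm2 * (wavelength_nm / 1000.0) ** 2
    n = math.ceil((IP_HELIUM + up_ev) / photon_ev)
    return photon_ev, up_ev, n

for wl, inten in [(390.0, 1e14), (780.0, 1e14)]:
    e_ph, up, n = lopt_order(wl, inten)
    print(f"{wl:.0f} nm @ {inten:.0e} W/cm^2: photon {e_ph:.2f} eV, "
          f"Up {up:.2f} eV, LOPT order n = {n}")
```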
Abstract:
An artificial neural network (ANN) model is developed for the analysis and simulation of the correlation between the properties of maraging steels and their composition, processing, and working conditions. The input parameters of the model consist of alloy composition, processing parameters (including cold deformation degree, ageing temperature, and ageing time), and working temperature. The outputs of the ANN model include the following property parameters: ultimate tensile strength, yield strength, elongation, reduction in area, hardness, notched tensile strength, Charpy impact energy, fracture toughness, and martensitic transformation start temperature. Good performance of the ANN model is achieved. The model can be used to calculate properties of maraging steels as functions of alloy composition, processing parameters, and working conditions. The combined influence of Co and Mo on the properties of maraging steels is simulated using the model. The results are in agreement with experimental data. An explanation of the calculated results from the metallurgical point of view is attempted. The model can be used as a guide for further alloy development.
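A property-prediction network of the kind described can be sketched in a few lines of scikit-learn; the original framework, architecture, and dataset are not specified in the abstract, so the file, feature, and target names below are placeholders.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("maraging_steels.csv")          # hypothetical dataset
inputs = ["Ni", "Co", "Mo", "Ti", "cold_work_pct",
          "ageing_temp_C", "ageing_time_h", "working_temp_C"]
target = "yield_strength_MPa"                    # one of the listed outputs

X_tr, X_te, y_tr, y_te = train_test_split(df[inputs], df[target],
                                          test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20, 20),
                                   max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```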
Abstract:
This study explores using artificial neural networks to predict the rheological and mechanical properties of underwater concrete (UWC) mixtures and to evaluate the sensitivity of such properties to variations in mixture ingredients. Artificial neural networks (ANNs) mimic the structure and operation of biological neurons and have the unique ability of self-learning, mapping, and functional approximation. Details of the development of the proposed neural network model, its architecture, training, and validation are presented in this study. A database incorporating 175 UWC mixtures from nine different studies was developed to train and test the ANN model. The data are arranged in a patterned format. Each pattern contains an input vector that includes quantity values of the mixture variables influencing the behavior of UWC mixtures (that is, cement, silica fume, fly ash, slag, water, coarse and fine aggregates, and chemical admixtures) and a corresponding output vector that includes the rheological or mechanical property to be modeled. Results show that the ANN model thus developed is not only capable of accurately predicting the slump, slump-flow, washout resistance, and compressive strength of the underwater concrete mixtures used in the training process, but can also effectively predict the aforementioned properties for new mixtures designed within the practical range of the input parameters used in the training process, with absolute errors of 4.6, 10.6, 10.6, and 4.4%, respectively.
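The UWC model is a similar regression network; the sketch below differs from the previous one in predicting several properties at once and scoring each with the absolute percentage error the abstract reports. All column names are placeholders for the published mixture database.

```python
import pandas as pd
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("uwc_mixtures.csv")             # hypothetical file
inputs = ["cement", "silica_fume", "fly_ash", "slag", "water",
          "coarse_agg", "fine_agg", "admixture"]
targets = ["slump_mm", "slump_flow_mm", "washout_pct", "fc28_MPa"]

X_tr, X_te, y_tr, y_te = train_test_split(df[inputs], df[targets],
                                          test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(15,), max_iter=5000,
                                   random_state=0))
model.fit(X_tr, y_tr)                            # multi-output is supported natively
pred = model.predict(X_te)
for i, name in enumerate(targets):
    mape = mean_absolute_percentage_error(y_te[name], pred[:, i])
    print(f"{name}: {100 * mape:.1f}% absolute percentage error")
```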
Abstract:
This paper discusses the monitoring of complex nonlinear and time-varying processes. Kernel principal component analysis (KPCA) has gained significant attention as a monitoring tool for nonlinear systems in recent years, but relies on a fixed model that cannot be employed for time-varying systems. The contribution of this article is the development of a numerically efficient and memory-saving moving-window KPCA (MWKPCA) monitoring approach. The proposed technique incorporates an up- and downdating procedure to (i) adapt the data mean and covariance matrix in the feature space and (ii) approximate the eigenvalues and eigenvectors of the Gram matrix. The article shows that the proposed MWKPCA algorithm has a computational complexity of O(N^2), whilst batch techniques, e.g. the Lanczos method, are of O(N^3). Including the adaptation of the number of retained components and an l-step-ahead application of the MWKPCA monitoring model, the paper finally demonstrates the utility of the proposed technique using a simulated nonlinear time-varying system and recorded data from an industrial distillation column.
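Conceptually, MWKPCA slides a window over the data stream and monitors new samples against a KPCA model of that window. The sketch below naively refits KPCA at each step, i.e. the batch route whose cost the paper's O(N^2) up/downdating is designed to avoid; the data and the T^2-like statistic are illustrative.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
stream = rng.normal(size=(500, 5))               # stand-in for process data
stream[300:] += 0.01 * np.arange(200)[:, None]   # slow time-varying drift

W, K = 100, 3                                    # window length, retained PCs
for t in range(W, len(stream), 50):
    window = stream[t - W:t]
    kpca = KernelPCA(n_components=K, kernel="rbf", gamma=0.1)
    scores = kpca.fit_transform(window)          # batch refit: O(N^3) per step
    z = kpca.transform(stream[t:t + 1])[0]       # project the newest sample
    t2 = np.sum(z**2 / scores.var(axis=0))       # simple T^2-like statistic
    print(f"t={t}: T2 = {t2:.2f}")
```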
Abstract:
This work analyzes the relationship between large food webs describing potential feeding relations between species and smaller sub-webs thereof describing relations actually realized in local communities of various sizes. Special attention is given to the relationships between patterns of phylogenetic correlations encountered in large webs and sub-webs. Based on the current theory of food-web topology as implemented in the matching model, it is shown that food webs are scale invariant in the following sense: given a large web described by the model, a smaller, randomly sampled sub-web thereof is described by the model as well. A stochastic analysis of model steady states reveals that such a change in scale goes along with a renormalization of model parameters. Explicit formulae for the renormalized parameters are derived. Thus, the topology of food webs at all scales follows the same patterns, and these can be revealed by data and models referring to the local scale alone. As a by-product of the theory, a fast algorithm is derived which yields sample food webs from the exact steady state of the matching model for a high-dimensional trophic niche space in finite time. (C) 2008 Elsevier B.V. All rights reserved.
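The sub-web sampling operation at the heart of the scale-invariance statement is simply an induced subgraph on a random species subset, as in this Python sketch. The adjacency matrix here is random illustrative data, not a matching-model steady state.

```python
import numpy as np

rng = np.random.default_rng(1)
S = 200                                          # species in the large web
A = (rng.random((S, S)) < 0.05).astype(int)      # who-eats-whom adjacency

def sample_subweb(A, n_local):
    """Induced sub-web on a uniformly sampled subset of n_local species."""
    idx = rng.choice(A.shape[0], size=n_local, replace=False)
    return A[np.ix_(idx, idx)]

sub = sample_subweb(A, 40)
print("links in large web:", A.sum(), "| links in sub-web:", sub.sum())
```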
Abstract:
Recently, polymeric adsorbents have been emerging as highly effective alternatives to activated carbons for pollutant removal from industrial effluents. Poly(methyl methacrylate) (PMMA), polymerized using the atom transfer radical polymerization (ATRP) technique, has been investigated for its feasibility to remove phenol from aqueous solution. Adsorption equilibrium and kinetic investigations were undertaken to evaluate the effects of contact time, initial concentration (10-90 mg/L), and temperature (25-55 °C). Phenol uptake was found to increase with increasing initial concentration and agitation time. The adsorption kinetics were found to follow the pseudo-second-order kinetic model. The intra-particle diffusion analysis indicated that film diffusion may be the rate-controlling step in the removal process. Experimental equilibrium data were fitted to five different isotherm models, namely Langmuir, Freundlich, Dubinin-Radushkevich, Temkin and Redlich-Peterson, by non-linear least squares regression, and their goodness-of-fit was evaluated in terms of mean relative error (MRE) and standard error of estimate (SEE). The adsorption equilibrium data were best represented by the Freundlich and Redlich-Peterson isotherms. Thermodynamic parameters such as ΔG° and ΔH° indicated that the sorption process is exothermic and spontaneous in nature and that higher ambient temperature results in more favourable adsorption. (C) 2011 Elsevier B.V. All rights reserved.
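The non-linear least squares fitting described above is straightforward with scipy; the sketch below fits two of the five isotherms (Langmuir and Freundlich) and reports the mean relative error. The equilibrium data points are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([2.0, 5, 10, 20, 35, 50, 70])               # equilibrium conc. (mg/L)
qe = np.array([4.1, 8.2, 13.0, 19.5, 25.0, 28.5, 32.0])   # uptake (mg/g)

def langmuir(c, qm, b):
    return qm * b * c / (1 + b * c)

def freundlich(c, kf, n):
    return kf * c ** (1 / n)

for name, f, p0 in [("Langmuir", langmuir, (40.0, 0.05)),
                    ("Freundlich", freundlich, (3.0, 2.0))]:
    popt, _ = curve_fit(f, Ce, qe, p0=p0)
    mre = 100 * np.mean(np.abs((qe - f(Ce, *popt)) / qe))  # mean relative error
    print(f"{name}: params = {np.round(popt, 3)}, MRE = {mre:.1f}%")
```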
Abstract:
A novel non-linear dimensionality reduction method, called Temporal Laplacian Eigenmaps, is introduced to process time series data efficiently. In this embedding-based approach, temporal information is intrinsic to the objective function, which produces descriptions of low-dimensional spaces with time coherence between data points. Since the proposed scheme also includes bidirectional mapping between the data and embedded spaces and automatic tuning of key parameters, it offers the same benefits as mapping-based approaches. Experiments on a couple of computer vision applications demonstrate the superiority of the new approach to other dimensionality reduction methods in terms of accuracy. Moreover, its lower computational cost and generalisation abilities suggest it is scalable to larger datasets. © 2010 IEEE.
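The abstract does not give the objective function, so the sketch below shows only the underlying idea: standard Laplacian eigenmaps on a kNN graph whose affinity is augmented with links between temporally consecutive samples, which is what gives the embedding time coherence. It is not the authors' algorithm.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 300)
X = np.c_[np.sin(t), np.cos(t), 0.1 * rng.normal(size=300)]  # toy time series

W = kneighbors_graph(X, n_neighbors=8, mode="connectivity").toarray()
W = np.maximum(W, W.T)                           # symmetrise the kNN graph
i = np.arange(len(X) - 1)
W[i, i + 1] = W[i + 1, i] = 1.0                  # temporal adjacency links

D = np.diag(W.sum(axis=1))
L = D - W                                        # graph Laplacian
vals, vecs = eigh(L, D)                          # generalised eigenproblem
embedding = vecs[:, 1:3]                         # drop the trivial eigenvector
print("embedded shape:", embedding.shape)
```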