922 results for truncated regression
Abstract:
Purpose: To determine whether constriction of proximal arterial vessels precedes involution of the distal hyaloid vasculature in the mouse under normal conditions, and whether this vasoconstriction is less pronounced when the distal hyaloid network persists, as it does in oxygen-induced retinopathy (OIR). Methods: Photomicrographs of the vasa hyaloidea propria were analysed from pre-term pups (1-2 days prior to birth) and on Days 1-11 post-birth. The OIR model involved exposing pups to ~90% O2 from D1-5, followed by return to ambient air. At sampling times pups were anaesthetised and perfused with India ink. Retinal flatmounts were also incubated with FITC-lectin (BS-1, G. simplicifolia); this labels all vessels, allowing identification of vessels not patent to the perfusate. Results: Mean diameter of proximal hyaloid vessels in preterm pups was 25.44 ± 1.98 µm (±1 SEM). Within 3-12 hrs of birth, significant vasoconstriction was evident (diameter: 12.45 ± 0.88 µm), and normal hyaloid regression subsequently occurred. Similar vasoconstriction occurred in the O2-treated group, but this was reversed upon return to room air, with significant dilation of proximal vessels by D7 (diameter: 31.75 ± 11.99 µm), and distal hyaloid vessels subsequently became enlarged and tortuous. Conclusions: Under normal conditions, vasoconstriction of proximal hyaloid vessels occurs at birth, preceding attenuation of distal hyaloid vessels. Vasoconstriction also occurs in O2-treated pups during treatment, but upon return to room air, the remaining hyaloid vessels dilate proximally, and the distal vessels become dilated and tortuous. These observations support the contention that regression of the hyaloid network is dependent, in the first instance, on proximal arterial vasoconstriction.
Abstract:
Recombinant forms of the dengue 2 virus NS3 protease linked to a 40-residue co-factor, corresponding to part of NS2B, have been expressed in Escherichia coli and shown to be active against para-nitroanilide substrates comprising the P6-P1 residues of four substrate cleavage sequences. The enzyme is inactive alone or after the addition of a putative 13-residue co-factor peptide but is active when fused to the 40-residue co-factor, by either a cleavable or a noncleavable glycine linker. The NS4B/NS5 cleavage site was processed most readily, with optimal processing conditions being pH 9, I = 10 mM, 1 mM CHAPS, 20% glycerol. A longer 10-residue peptide corresponding to the NS2B/NS3 cleavage site (P6-P4') was a poorer substrate than the hexapeptide (P6-P1) para-nitroanilide substrate under these conditions, suggesting that the prime side substrate residues did not contribute significantly to protease binding. We also report the first inhibitors of a co-factor-complexed, catalytically active flavivirus NS3 protease. Aprotinin was the only standard serine protease inhibitor to be active, whereas a number of peptide substrate analogues were found to be competitive inhibitors at micromolar concentrations.
Abstract:
Binning and truncation of data are common in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (Biometrics, 44: 2, 571-578, 1988) for the univariate case is generalized to multivariate measurements. The multivariate solution requires the evaluation of multidimensional integrals over each bin at each iteration of the EM procedure. A naive implementation of the procedure can be computationally inefficient. To reduce the computational cost, a number of straightforward numerical techniques are proposed. Results on simulated data indicate that the proposed methods can achieve significant computational gains with no loss in the accuracy of the final parameter estimates. Furthermore, experimental results suggest that with a sufficient number of bins and data points it is possible to estimate the true underlying density almost as well as if the data were not binned. The paper concludes with a brief description of an application of this approach to the diagnosis of iron deficiency anemia, in the context of binned and truncated bivariate measurements of volume and hemoglobin concentration from an individual's red blood cells.
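As a rough univariate illustration of the idea (a sketch, not the authors' multivariate implementation), the E-step below evaluates each component's density integral over every bin as an exact CDF difference — the one-dimensional analogue of the per-bin multidimensional integrals the abstract describes — while the M-step uses bin midpoints, a simple approximation in the spirit of the numerical shortcuts proposed:

```python
import numpy as np
from scipy.stats import norm

def em_binned_mixture(edges, counts, K=2, iters=200):
    """Fit a K-component Gaussian mixture to binned counts via EM.

    edges: bin edges (B+1,); counts: observed counts per bin (B,).
    """
    lo, hi = edges[:-1], edges[1:]
    mids = 0.5 * (lo + hi)
    expanded = np.repeat(mids, counts.astype(int))
    # simple deterministic initialisation from the binned sample
    w = np.full(K, 1.0 / K)
    mu = np.quantile(expanded, np.linspace(0.25, 0.75, K))
    sd = np.full(K, expanded.std())
    for _ in range(iters):
        # E-step: P(bin b | component k) is the integral of the component
        # density over the bin, here an exact CDF difference
        pb = np.stack([norm.cdf(hi, m, s) - norm.cdf(lo, m, s)
                       for m, s in zip(mu, sd)])           # (K, B)
        resp = w[:, None] * np.clip(pb, 1e-12, None)
        resp /= resp.sum(axis=0, keepdims=True)            # responsibilities
        # M-step approximated with bin midpoints (the exact step would use
        # truncated-normal moments over each bin)
        nk = resp @ counts
        w = nk / counts.sum()
        mu = (resp * mids) @ counts / nk
        sd = np.sqrt((resp * (mids - mu[:, None]) ** 2) @ counts / nk)
    return w, mu, sd
```

With reasonably narrow bins the midpoint approximation costs little accuracy, which mirrors the abstract's observation that enough bins recover the density almost as well as unbinned data.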
Abstract:
To address the hypothesis that certain disease-associated mutants of the breast-ovarian cancer susceptibility gene BRCA1 have biological activity in vivo, we have expressed a truncated Brca1 protein (trBrca1) in cell-lines and in the mammary gland of transgenic mice. Immunofluorescent analysis of transfected cell-lines indicates that trBRCA1 is a stable protein and that it is localized in the cell cytoplasm. Functional analysis of these cell-lines indicates that expression of trBRCA1 confers an increased radiosensitivity phenotype on mammary epithelial cells, consistent with abrogation of the BRCA1 pathway. MMTV-trBrca1 transgenic mice from two independent lines displayed a delay in lactational mammary gland development, as demonstrated by altered histological profiles of lobuloalveolar structures. Cellular and molecular analyses indicate that this phenotype results from a defect in differentiation, rather than altered rates of proliferation or apoptosis. The results presented in this paper are consistent with trBrca1 possessing dominant-negative activity and playing an important role in regulating normal mammary development. They may also have implications for germline carriers of BRCA1 mutations.
Abstract:
In this paper, we consider testing for additivity in a class of nonparametric stochastic regression models. Two test statistics are constructed and their asymptotic distributions are established. We also conduct a small sample study for one of the test statistics through a simulated example. (C) 2002 Elsevier Science (USA).
Abstract:
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright (C) 2003 John Wiley & Sons, Ltd.
Abstract:
Long-term contractual decisions are the basis of efficient risk management. However, such decisions have to be supported by a robust price-forecast methodology. This paper reports a different approach to long-term price forecasting that tries to answer that need. Making use of regression models, the proposed methodology has as its main objective to find the maximum and minimum Market Clearing Price (MCP) for a specific programming period, with a desired confidence level α. Due to the problem's complexity, the meta-heuristic Particle Swarm Optimization (PSO) was used to find the best regression parameters, and the results were compared with those obtained using a Genetic Algorithm (GA). To validate these models, results from realistic data are presented and discussed in detail.
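The PSO mechanics can be sketched generically. The toy below fits a straight line by minimising squared error with a standard global-best PSO; it illustrates the optimisation loop only, and does not reproduce the paper's MCP regression models or confidence-level machinery:

```python
import numpy as np

def pso_fit(x, y, n_particles=30, iters=200, seed=0):
    """Fit y ~ a*x + b by minimising squared error with a basic PSO."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, (n_particles, 2))   # particle positions (a, b)
    vel = np.zeros_like(pos)

    def sse(p):  # sum of squared residuals for each particle
        return ((p[:, 0:1] * x + p[:, 1:2] - y) ** 2).sum(axis=1)

    pbest, pbest_f = pos.copy(), sse(pos)
    gbest = pbest[pbest_f.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 2))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        f = sse(pos)
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest  # best (slope, intercept) found
```

A GA comparison, as in the paper, would swap the velocity update for selection, crossover and mutation over the same parameter vectors.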
Abstract:
We propose a 3D-2D image registration method that relates image features of 2D projection images to the transformation parameters of the 3D image by nonlinear regression. The method is compared with a conventional registration method based on iterative optimization. For evaluation, simulated X-ray images (DRRs) were generated from coronary artery tree models derived from 3D CTA scans. Registration of nine vessel trees was performed, and the alignment quality was measured by the mean target registration error (mTRE). The regression approach was shown to be slightly less accurate, but much more robust, than the iterative optimization method.
Abstract:
A multi-residue methodology based on solid phase extraction followed by gas chromatography–tandem mass spectrometry was developed for trace analysis of 32 compounds in water matrices, including estrogens and several pesticides from different chemical families, some of them with endocrine disrupting properties. Matrix standard calibration solutions were prepared by adding known amounts of the analytes to a residue-free sample to compensate for the matrix-induced chromatographic response enhancement observed for certain pesticides. Validation was done mainly according to the International Conference on Harmonisation recommendations, as well as some European and American validation guidelines with specifications for pesticide analysis and/or GC–MS methodology. As the assumption of homoscedasticity was not met for the analytical data, a weighted least squares linear regression procedure was applied as a simple and effective way to counteract the greater influence of the greater concentrations on the fitted regression line, improving accuracy at the lower end of the calibration curve. The method was considered validated for 31 compounds after consistent evaluation of the key analytical parameters: specificity, linearity, limits of detection and quantification, range, precision, accuracy, extraction efficiency, stability and robustness.
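The weighted least squares step can be written compactly. Below is a generic sketch; inverse-variance weights (1/s_i²) are a common choice in calibration work, not necessarily the exact weighting scheme used in this study:

```python
import numpy as np

def wls_line(x, y, w):
    """Weighted least squares fit of y = b0 + b1*x with weights w.

    With w_i = 1/s_i^2 (inverse variance at each concentration level),
    the imprecise high-concentration points no longer dominate the fit,
    which improves accuracy at the low end of the calibration curve.
    """
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta  # (intercept, slope)
```

Note that `np.polyfit` applies its `w` argument to the residuals, so the equivalent call would pass `np.sqrt(w)` for inverse-variance weights.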
Abstract:
The prediction of the time and the efficiency of the remediation of contaminated soils using soil vapor extraction remains a difficult challenge to the scientific community and consultants. This work reports the development of multiple linear regression and artificial neural network models to predict the remediation time and efficiency of soil vapor extractions performed in soils contaminated separately with benzene, toluene, ethylbenzene, xylene, trichloroethylene, and perchloroethylene. The results demonstrated that the artificial neural network approach performs better than the multiple linear regression models. The artificial neural network model allowed an accurate prediction of remediation time and efficiency based only on soil and pollutant characteristics, thus allowing a simple and quick preliminary evaluation of the process viability.
Abstract:
Radiotherapy is one of the main treatments used against cancer. Radiotherapy uses radiation to destroy cancerous cells while trying, at the same time, to minimize damage to healthy tissues. The planning of a radiotherapy treatment is patient dependent, resulting in a lengthy trial and error procedure until a treatment complying as closely as possible with the medical prescription is found. Intensity Modulated Radiation Therapy (IMRT) is a radiation treatment technique that allows a high degree of conformity between the area to be treated and the dose absorbed by healthy tissues. Nevertheless, it is still not possible to completely eliminate potential side-effects of treatment. In this retrospective study we use clinical data from patients with head-and-neck cancer treated at the Portuguese Institute of Oncology of Coimbra and explore the possibility of classifying new and untreated patients according to the probability of xerostomia 12 months after the beginning of IMRT treatment, using a logistic regression approach. The results obtained show that the classifier presents a high discriminative ability in predicting the binary response "at risk for xerostomia at 12 months".
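A logistic regression classifier of this kind can be sketched with a standard Newton-Raphson (IRLS) fit. The predictors below are placeholders, not the study's actual dosimetric or clinical variables, and IRLS is the usual fitting algorithm rather than necessarily the software used by the authors:

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Logistic regression by Newton-Raphson (IRLS).

    X: (n, p) design matrix, including a column of ones for the intercept;
    y: binary outcome, e.g. "at risk for xerostomia at 12 months" (0/1).
    """
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))       # fitted probabilities
        W = p * (1.0 - p)                         # IRLS weights
        # Newton step: beta += (X' W X)^{-1} X' (y - p)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return beta

def predict_risk(X, beta):
    """Predicted probability of the adverse outcome for new patients."""
    return 1.0 / (1.0 + np.exp(-X @ beta))
```

Discriminative ability of the fitted classifier would then be summarised with a measure such as the area under the ROC curve.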
Abstract:
An individual experiences double coverage when he benefits from more than one health insurance plan at the same time. This paper examines the impact of such supplementary insurance on the demand for health care services. Its novelty is that, within the context of count data modelling and without imposing restrictive parametric assumptions, the analysis is carried out for different points of the conditional distribution, not only for its mean location. Results indicate that moral hazard is present across the whole outcome distribution for both public and private second layers of health insurance coverage, but with greater magnitude in the latter group. By looking at different points we unveil that double coverage effects are smaller for high levels of usage. We use data for Portugal, taking advantage of particular features of the public and private protection schemes on top of the statutory National Health Service. By exploring the last Portuguese Health Survey, we were able to evaluate their impacts on the consumption of doctor visits.
Abstract:
In the last two decades, the small strain shear modulus became one of the most important geotechnical parameters for characterizing soil stiffness. Finite element analyses have shown that the in-situ stiffness of soils and rocks is much higher than previously thought, and that the stress-strain behaviour of these materials is non-linear in most cases at small strain levels, especially in the ground around retaining walls, foundations and tunnels, typically in the order of 10−2 to 10−4 of strain. Although the best approach to estimate the shear modulus seems to be based on measuring seismic wave velocities, deriving the parameter through correlations with in-situ tests is usually considered very useful for design practice. The use of Neural Networks for modeling systems has been widespread, in particular within areas where the great amount of available data and the complexity of the systems make the problem very difficult to treat with traditional data analysis methodologies. In this work, the use of Neural Networks and Support Vector Regression is proposed to estimate the small strain shear modulus of sedimentary soils from the basic or intermediate parameters derived from the Marchetti Dilatometer Test. The results are discussed and compared with some of the most common available methodologies for this evaluation.
Abstract:
In health-related research it is common to have multiple outcomes of interest in a single study. These outcomes are often analysed separately, ignoring the correlation between them. One would expect a multivariate approach to be a more efficient alternative to individual analyses of each outcome. Surprisingly, this is not always the case. In this article we discuss different settings of linear models and compare the multivariate and univariate approaches. We show that for linear regression models, the estimates of the regression parameters associated with covariates that are shared across the outcomes are the same for the multivariate and univariate models, while for outcome-specific covariates the multivariate model performs better in terms of efficiency.
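The equal-estimates case is easy to verify numerically: when every outcome shares the same design matrix, the multivariate least-squares solution B = (X'X)⁻¹X'Y coincides column-by-column with the stacked univariate fits. The data below are simulated purely for illustration:

```python
import numpy as np

# Shared design matrix: intercept plus two covariates for all outcomes.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
Y = rng.normal(size=(100, 3))                 # three outcomes, same covariates

# Multivariate fit: all outcomes solved in one shot.
B_multi = np.linalg.lstsq(X, Y, rcond=None)[0]            # shape (3, 3)

# Univariate fits: one regression per outcome, stacked as columns.
B_uni = np.column_stack([np.linalg.lstsq(X, Y[:, j], rcond=None)[0]
                         for j in range(3)])

assert np.allclose(B_multi, B_uni)            # identical point estimates
```

The efficiency gains the abstract describes arise only for outcome-specific covariates, where the multivariate model exploits the residual correlation between outcomes.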