929 results for Single Equation Models


Relevance: 30.00%

Publisher:

Abstract:

The immune system comprises an integrated network of cellular interactions. Some responses are predictable, while others are more stochastic. While in vitro the outcome of stimulating a single type of cell may be stereotyped and reproducible, in vivo this is often not the case. This phenomenon often merits the use of animal models in predicting the impact of immunosuppressant drugs. A heavy burden of responsibility lies on the shoulders of the investigator when using animal models to study immunosuppressive agents. The principles of the three Rs must be applied: refine (less suffering), reduce (lower animal numbers) and replace (alternative in vitro assays), as described elsewhere in this issue. Well-designed animal model experiments have allowed us to develop all the immunosuppressive agents currently available for treating autoimmune disease and transplant recipients. In this review, we examine the common animal models used in developing immunosuppressive agents, focusing on drugs used in transplant surgery. Autoimmune diseases, such as multiple sclerosis, are covered elsewhere in this issue. We look at the utility and limitations of small and large animal models in measuring the potency and toxicity of immunosuppressive therapies.


Colorectal cancer is the second most common cause of cancer-related death in the United States. Recent studies showed that interleukin-8 (IL-8) and its receptors (CXCR1 and CXCR2) are significantly upregulated in both the tumor and its microenvironment, where they act as key regulators of proliferation, angiogenesis, and metastasis. Our previous study showed that IL-8 overexpression in colorectal cancer cells triggers the upregulation of the CXCR2-mediated proliferative pathway. The aim of this study was to investigate whether the CXCR2 antagonist SCH-527123 inhibits colorectal cancer proliferation and whether it can sensitize colorectal cancer cells to oxaliplatin both in vitro and in vivo. SCH-527123 showed concentration-dependent antiproliferative effects in the HCT116 and Caco2 colorectal cancer cell lines and their respective IL-8-overexpressing variants. Moreover, SCH-527123 was able to suppress CXCR2-mediated signal transduction, as shown through decreased phosphorylation in the NF-κB/mitogen-activated protein kinase (MAPK)/AKT pathway. These findings corresponded with decreased cell migration and invasion and increased apoptosis in colorectal cancer cell lines. In vivo results verified that SCH-527123 treatment decreased tumor growth and microvessel density when compared with vehicle-treated tumors. Importantly, these preclinical studies showed that the combination of SCH-527123 and oxaliplatin decreased cell proliferation, tumor growth, and angiogenesis, and increased apoptosis, to a degree superior to single-agent treatment. Taken together, these findings suggest that targeting CXCR2 may block tumor proliferation, migration, invasion, and angiogenesis. In addition, CXCR2 blockade may further sensitize colorectal cancer to oxaliplatin treatment.


We show that the X-ray line flux of the Mn Kα line at 5.9 keV from the decay of 55Fe is a promising diagnostic to distinguish between Type Ia supernova (SN Ia) explosion models. Using radiation transport calculations, we compute the line flux for two three-dimensional explosion models: a near-Chandrasekhar mass delayed detonation and a violent merger of two (1.1 and 0.9 M⊙) white dwarfs. Both models are based on solar metallicity zero-age main-sequence progenitors. Due to explosive nuclear burning at higher density, the delayed-detonation model synthesizes ~3.5 times more radioactive 55Fe than the merger model. As a result, we find that the peak Mn Kα line flux of the delayed-detonation model exceeds that of the merger model by a factor of ~4.5. Since in both models the 5.9-keV X-ray flux peaks five to six years after the explosion, a single measurement of the X-ray line emission at this time can place a constraint on the explosion physics that is complementary to those derived from earlier-phase optical spectra or light curves. We perform detector simulations of current and future X-ray telescopes to investigate the possibilities of detecting the X-ray line at 5.9 keV. Of the currently existing telescopes, XMM-Newton/pn is the best instrument for close (≲1-2 Mpc), non-background-limited SNe Ia because of its large effective area. Due to its low instrumental background, Chandra/ACIS is currently the best choice for SNe Ia at distances above ~2 Mpc. For the delayed-detonation scenario, a line detection is feasible with Chandra up to ~3 Mpc for an exposure time of 10^6 s. We find that it should be possible with currently existing X-ray instruments (with exposure times ≲5 × 10^5 s) to detect both of our models at sufficiently high S/N to distinguish between them for hypothetical events within the Local Group. The prospects for detection will be better with future missions. For example, the proposed Athena/X-IFU instrument could detect our delayed-detonation model out to a distance of ~5 Mpc. This would make it possible to study future events occurring during its operational life at distances comparable to those of the recent supernovae SN 2011fe (~6.4 Mpc) and SN 2014J (~3.5 Mpc).


Introduction: Age-related macular degeneration (AMD) is a leading cause of vision loss in the elderly, mostly due to the development of neovascular AMD (nAMD) or geographic atrophy (GA). Intravitreal injections of anti-vascular endothelial growth factor (VEGF) agents are an effective therapeutic option for nAMD. Following anti-VEGF treatments, increased atrophy of the retinal pigment epithelium (RPE) and choriocapillaris that resembles GA has been reported. We sought to evaluate the underlying genetic influences that may contribute to this process. Methods: We selected 68 single nucleotide polymorphisms (SNPs) from genes previously identified as susceptibility factors in AMD, along with 43 SNPs from genes encoding the VEGF protein and its cognate receptors, as this pathway is targeted by treatment. We enrolled 467 consecutive patients (February 2009 to October 2011) with nAMD who received anti-VEGF therapy. The acutely presenting eye was designated as the study eye, and retinal tomograms were graded for macular atrophy at study exit. Statistical analysis was performed using PLINK to identify SNPs with a P value < 0.01. Logistic regression models with macular atrophy as the dependent variable were fitted with age, gender, smoking status, common genetic risk factors and the identified SNPs as explanatory variables. Results: Grading for macular atrophy was available in 304 study eyes, and 70% (214) were classified as showing macular atrophy. In the unadjusted analysis we observed significant associations between macular atrophy and two independent SNPs in the APCS gene (rs6695377: odds ratio (OR) = 1.98, 95% confidence interval (CI): 1.23, 3.19, P = 0.004; rs1446965: OR = 2.49, CI: 1.29, 4.82, P = 0.006), and these associations remained significant after adjustment for covariates. Conclusions: VEGF is a mitogen and growth factor for choroidal blood vessels and the RPE, and its inhibition could lead to atrophy of these key tissues. Anti-VEGF treatment can interfere with ocular vascular maintenance and may be associated with RPE and choroidal atrophy. As such, these medications, which block the effects of VEGF, may influence the development of GA. The top associated SNPs are found in the APCS gene, which encodes serum amyloid P (SAP), a highly conserved glycoprotein that opsonizes apoptotic cells. SAP can bind to and activate complement components via binding to C1q, a mechanism by which SAP may remove cellular debris, affecting regulation of the three complement pathways.


Hidden Markov models (HMMs) are widely used probabilistic models of sequential data. As with other probabilistic models, they require the specification of local conditional probability distributions, whose assessment can be too difficult and error-prone, especially when data are scarce or costly to acquire. The imprecise HMM (iHMM) generalizes HMMs by allowing the quantification to be done by sets of, instead of single, probability distributions. iHMMs have the ability to suspend judgment when there is not enough statistical evidence, and can serve as a sensitivity analysis tool for standard non-stationary HMMs. In this paper, we consider iHMMs under the strong independence interpretation, for which we develop efficient inference algorithms to address standard HMM usage such as the computation of likelihoods and most probable explanations, as well as performing filtering and predictive inference. Experiments with real data show that iHMMs produce more reliable inferences without compromising the computational efficiency.
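The baseline computation that the iHMM generalizes is the standard (single-distribution) HMM forward recursion for the likelihood of an observation sequence. A minimal sketch in Python; the two-state model at the bottom is purely illustrative and not taken from the paper:

```python
import numpy as np

def hmm_likelihood(pi, A, B, obs):
    """Forward algorithm: P(obs) for an HMM with initial distribution
    pi, transition matrix A[i, j] = P(next = j | current = i), and
    emission matrix B[i, k] = P(symbol k | state i)."""
    alpha = pi * B[:, obs[0]]          # alpha_1(i) = pi_i * b_i(o_1)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
    return alpha.sum()                 # marginalize over the final state

# Illustrative two-state, two-symbol model
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],
              [0.3, 0.7]])
print(hmm_likelihood(pi, A, B, [0, 1, 0]))  # ≈ 0.1048
```

An iHMM would carry this same recursion over a set of transition and emission distributions, returning lower and upper bounds on the likelihood instead of a single number.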


Statistical techniques are fundamental in science, and linear regression analysis is perhaps one of the most widely used methodologies. It is well known from the literature that, under certain conditions, linear regression is an extremely powerful statistical tool. Unfortunately, in practice, some of these conditions are rarely satisfied, and the regression models become ill-posed, making the application of traditional estimation methods unfeasible. This work presents some contributions to maximum entropy theory in the estimation of ill-posed models, in particular in the estimation of linear regression models with small samples affected by collinearity and outliers. The research is developed along three lines, namely the estimation of technical efficiency with state-contingent production frontiers, the estimation of the ridge parameter in ridge regression and, finally, new developments in maximum entropy estimation. In the estimation of technical efficiency with state-contingent production frontiers, the work shows a better performance of the maximum entropy estimators compared with the maximum likelihood estimator. This good performance is notable in models with few observations per state and in models with a large number of states, which are commonly affected by collinearity. It is hoped that the use of maximum entropy estimators will contribute to the much-desired increase in empirical work with these production frontiers. In ridge regression, the greatest challenge is the estimation of the ridge parameter. Although countless procedures are available in the literature, none outperforms all the others. In this work a new estimator of the ridge parameter is proposed, combining ridge trace analysis and maximum entropy estimation. The results obtained in the simulation studies suggest that this new estimator is one of the best procedures available in the literature for estimating the ridge parameter. The Leuven maximum entropy estimator is based on the least squares method, on Shannon entropy and on concepts from quantum electrodynamics. This estimator overcomes the main criticism levelled at the generalized maximum entropy estimator, since it dispenses with the supports for the parameters and errors of the regression model. This work presents new contributions to maximum entropy theory in the estimation of ill-posed models, based on the Leuven maximum entropy estimator, information theory and robust regression. The estimators developed show good performance in linear regression models with small samples affected by collinearity and outliers. Finally, some computational codes for maximum entropy estimation are presented, thereby adding to the scarce computational resources currently available.
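To make the object under study concrete: the ridge estimator β(k) = (XᵀX + kI)⁻¹Xᵀy, whose dependence on k is what a ridge trace displays, can be sketched in a few lines of Python. The collinear toy data below are invented for illustration and are unrelated to the thesis's simulation studies:

```python
import numpy as np

def ridge(X, y, k):
    """Ridge estimator: beta(k) = (X'X + k I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# Illustrative collinear design: two nearly identical columns
rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
X = np.column_stack([x1, x1 + 1e-3 * rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=50)

# A rudimentary ridge trace: coefficients shrink and stabilize as k grows
for k in [0.0, 0.1, 1.0]:
    print(k, ridge(X, y, k))
```

The estimator proposed in the thesis selects k by combining this trace analysis with maximum entropy estimation; the sketch only shows the shrinkage behaviour that the trace inspects.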


Communication and cooperation between billions of neurons underlie the power of the brain. How do complex functions of the brain arise from its cellular constituents? How do groups of neurons self-organize into patterns of activity? These are crucial questions in neuroscience. In order to answer them, it is necessary to have a solid theoretical understanding of how single neurons communicate at the microscopic level, and how cooperative activity emerges. In this thesis we aim to understand how complex collective phenomena can arise in a simple model of neuronal networks. We use a model with balanced excitation and inhibition and complex network architecture, and we develop analytical and numerical methods for describing its neuronal dynamics. We study how interaction between neurons generates various collective phenomena, such as the spontaneous appearance of network oscillations and seizures, and early warnings of these transitions in neuronal networks. Within our model, we show that phase transitions separate various dynamical regimes, and we investigate the corresponding bifurcations and critical phenomena. This permits us to suggest a qualitative explanation of the Berger effect, and to investigate phenomena such as avalanches, band-pass filtering, and stochastic resonance. The role of modular structure in the detection of weak signals is also discussed. Moreover, we find nonlinear excitations that can describe paroxysmal spikes observed in electroencephalograms from epileptic brains. This allows us to propose a method to predict epileptic seizures. Memory and learning are key functions of the brain. There is evidence that these processes result from dynamical changes in the structure of the brain. At the microscopic level, synaptic connections are plastic and are modified according to the dynamics of neurons. Thus, we generalize our cortical model to take into account synaptic plasticity, and we show that the repertoire of dynamical regimes becomes richer.
In particular, we find mixed-mode oscillations and a chaotic regime in neuronal network dynamics.


The design of neuro-fuzzy models is still a complex problem, as it involves not only the determination of the model parameters, but also its structure. Of special importance is the incorporation of a priori information in the design process. In this paper two known design algorithms for B-spline models will be updated to account for function and derivatives equality restrictions, which are important when the neural model is used for performing single or multi-objective optimization on-line.
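For a model that is linear in its parameters, the function and derivative equality restrictions described here reduce to least squares with linear equality constraints, solvable through the KKT system. The sketch below uses a simple quadratic basis as a stand-in for the B-spline basis; all data and constraint values are invented:

```python
import numpy as np

def constrained_lsq(A, b, C, d):
    """Minimize ||A w - b||^2 subject to C w = d via the KKT system:
       [A'A  C'] [w     ]   [A'b]
       [C    0 ] [lambda] = [d  ]"""
    n, m = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    return np.linalg.solve(K, rhs)[:n]

# Illustrative: fit y = w0 + w1*x + w2*x^2 to noisy data, forcing the
# model to pass through (0, 1) exactly -- a function equality
# restriction, analogous to those imposed on the B-spline model.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 30)
A = np.column_stack([np.ones_like(x), x, x**2])
b = 1 + 2 * x - x**2 + 0.05 * rng.normal(size=30)
C = np.array([[1.0, 0.0, 0.0]])  # basis evaluated at x = 0
d = np.array([1.0])
w = constrained_lsq(A, b, C, d)
print(w)  # w[0] equals 1 to machine precision
```

A derivative restriction would simply add a row to C containing the basis derivatives evaluated at the point of interest.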


Adhesively-bonded joints are extensively used in several fields of engineering. Cohesive Zone Models (CZM) have been used for the strength prediction of adhesive joints, as an add-in to Finite Element (FE) analyses that allows simulation of damage growth by consideration of energetic principles. A useful feature of CZM is that different shapes can be developed for the cohesive laws, depending on the nature of the material or interface to be simulated, allowing an accurate strength prediction. This work studies the influence of the CZM shape (triangular, exponential or trapezoidal) used to model a thin adhesive layer in single-lap adhesive joints, to estimate its influence on the strength prediction under different material conditions. By performing this study, guidelines are provided on the possibility of using a CZM shape that may not be the most suited for a particular adhesive, but that may be more straightforward to use/implement and have fewer convergence problems (e.g. the triangular CZM), thus attaining the solution faster. The overall results showed that joints bonded with ductile adhesives are highly influenced by the CZM shape, and that the trapezoidal shape best fits the experimental data. Moreover, the smaller the overlap length (LO), the greater the influence of the CZM shape. On the other hand, the influence of the CZM shape can be neglected when using brittle adhesives, without overly compromising the accuracy of the strength predictions.
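As a concrete reference for the shapes being compared, a triangular cohesive law is fully determined by an initial stiffness, a peak traction and the fracture toughness (the area under the traction-separation curve). A sketch in Python; the parameter values are invented and not those of the adhesives tested:

```python
import numpy as np

def triangular_czm(delta, K, t0, Gc):
    """Triangular traction-separation law: elastic rise at stiffness K
    up to peak traction t0, then linear softening to zero traction,
    with the area under the curve equal to the toughness Gc."""
    d0 = t0 / K          # separation at damage onset
    df = 2.0 * Gc / t0   # separation at complete failure
    if delta <= d0:
        return K * delta                      # undamaged branch
    if delta < df:
        return t0 * (df - delta) / (df - d0)  # softening branch
    return 0.0                                # fully failed

# Illustrative parameters: K in MPa/mm, t0 in MPa, Gc in N/mm
K, t0, Gc = 1.0e4, 10.0, 0.5
delta = np.linspace(0.0, 0.12, 2001)
traction = np.array([triangular_czm(x, K, t0, Gc) for x in delta])
area = np.sum(0.5 * (traction[1:] + traction[:-1]) * np.diff(delta))
print(area)  # ≈ Gc = 0.5
```

The trapezoidal law adds a constant-traction plateau between the rise and the softening branch, which is what captures the plastic behaviour of ductile adhesives.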


Joining of components with structural adhesives is currently one of the most widespread techniques for advanced structures (e.g., aerospace or aeronautical). Adhesive bonding does not involve drilling operations, and it distributes the load over a larger area than mechanical joints. However, peak stresses tend to develop near the overlap edges because of differential straining of the adherends and load asymmetry. As a result, premature failures can be expected, especially for brittle adhesives. Moreover, bonded joints are very sensitive to the surface treatment of the material, service temperature, humidity and ageing. To surpass these limitations, the combination of adhesive bonding with spot-welding is a choice to be considered, adding advantages such as superior static strength and stiffness, higher peel and fatigue strength, and easier fabrication, as fixtures are not needed during adhesive curing. The experimental and numerical study presented here evaluates hybrid spot-welded/bonded single-lap joints in comparison with the purely spot-welded and bonded equivalents. A parametric study on the overlap length (LO) allowed achieving different strength advantages, up to 58% compared with spot-welded joints and 24% over bonded joints. The Finite Element Method (FEM) and Cohesive Zone Models (CZM) for damage growth were also tested in Abaqus® to evaluate this technique for strength prediction, showing accurate estimations for all kinds of joints.


The structural integrity of multi-component structures is usually determined by the strength and durability of their unions. Adhesive bonding is often chosen over welding, riveting and bolting due to the reduction of stress concentrations, reduced weight penalty and easy manufacturing, amongst other advantages. In the past decades, the Finite Element Method (FEM) has been used for the simulation and strength prediction of bonded structures, through strength-of-materials or fracture-mechanics-based criteria. Cohesive-zone models (CZMs) have already proved to be an effective tool in modelling damage growth, surpassing a few limitations of the aforementioned techniques. Despite this fact, they still suffer from the restriction that damage can grow only along predefined paths. The eXtended Finite Element Method (XFEM) is a recent improvement of the FEM, developed to allow the growth of discontinuities within bulk solids along an arbitrary path, by enriching degrees of freedom with special displacement functions, thus overcoming the main restriction of CZMs. These two techniques were tested to simulate adhesively bonded single- and double-lap joints. The comparative evaluation of the two methods showed their capabilities and/or limitations for this specific purpose.


Dissertation to obtain the degree of Doctor in Electrical and Computer Engineering, specialization in Collaborative Networks.


AIM: To confirm the accuracy of the sentinel node biopsy (SNB) procedure and its morbidity, and to investigate predictive factors for SN status and prognostic factors for disease-free survival (DFS) and disease-specific survival (DSS). MATERIALS AND METHODS: Between October 1997 and December 2004, 327 consecutive patients in one centre with clinically node-negative primary skin melanoma underwent an SNB by the triple technique, i.e. lymphoscintigraphy, blue dye and gamma probe. Multivariate logistic regression analyses as well as Kaplan-Meier analyses were performed. RESULTS: Twenty-three percent of the patients had at least one metastatic SN, which was significantly associated with Breslow thickness (p<0.001). The success rate of SNB was 99.1% and its morbidity was 7.6%. With a median follow-up of 33 months, the 5-year DFS/DSS were 43%/49% for patients with positive SN and 83.5%/87.4% for patients with negative SN, respectively. The false-negative rate of SNB was 8.6% and its sensitivity 91.4%. On multivariate analysis, DFS was significantly worsened by Breslow thickness (RR=5.6, p<0.001), positive SN (RR=5.0, p<0.001) and male sex (RR=2.9, p=0.001). The presence of a metastatic SN (RR=8.4, p<0.001), male sex (RR=6.1, p<0.001), Breslow thickness (RR=3.2, p=0.013) and ulceration (RR=2.6, p=0.015) were significantly associated with a poorer DSS. CONCLUSION: SNB is a reliable procedure with high sensitivity (91.4%) and low morbidity. Breslow thickness was the only statistically significant parameter predictive of SN status. DFS was worsened, in decreasing order, by Breslow thickness, metastatic SN and male sex. Similarly, DSS was significantly worsened by a metastatic SN, male sex, Breslow thickness and ulceration. These data reinforce SN status as a powerful staging tool.


This investigation comprises a comparison of experimental and theoretical dechanneling of MeV protons in copper single crystals. Dechanneling results when an ion's transverse energy increases to the value where the ion can undergo small-impact-parameter collisions with individual atoms. Depth-dependent dechanneling rates were determined as functions of lattice temperature, ion beam energy and crystal axis orientation. Ion beam energies were 1 MeV and 2 MeV, temperatures ranged from 35 K to 280 K, and the experiment was carried out along both the ⟨100⟩ and ⟨110⟩ axes. Experimental data took the form of aligned and random Rutherford backscattered energy spectra. Dechanneling rates were extracted from these spectra using a single scattering theory that took explicit account of the different stopping powers experienced by channeled and dechanneled ions, and also included a correction factor to take into account multiple scattering effects along the ion's trajectory. The assumption of statistical equilibrium and small-angle scattering of the channeled ions allows a description of dechanneling in terms of the solution of a diffusion-like equation which contains a so-called diffusion function. The diffusion function is shown to be related to the increase in average transverse energy. Theoretical treatments of the increase in average transverse energy due to collisions of projectiles with channel electrons and thermal perturbations in the lattice potential are reviewed. Using the diffusion equation, and the electron density in the channel centre as a fitting parameter, dechanneling rates are extracted. Excellent agreement between theory and experiment has been demonstrated. Electron densities determined in the fitting procedure appear to be realistic. The surface parameters show themselves to be good indicators of the quality of the crystal.
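The diffusion description can be sketched numerically: evolve the transverse-energy distribution f(E⊥) under a diffusion equation and read off the dechanneled fraction as the probability mass above a critical transverse energy. Everything below (a constant diffusion function, the Gaussian initial distribution, the chosen critical energy) is an illustrative assumption, not the paper's fitted model:

```python
import numpy as np

def diffuse(f, coef, steps):
    """Explicit finite differences for df/dt = D d2f/dE2 with
    reflecting boundaries; coef = D*dt/dE**2 (must be < 0.5)."""
    f = f.copy()
    for _ in range(steps):
        lap = np.empty_like(f)
        lap[1:-1] = f[2:] - 2.0 * f[1:-1] + f[:-2]
        lap[0] = f[1] - f[0]      # reflecting wall at E = 0
        lap[-1] = f[-2] - f[-1]   # reflecting wall at E_max
        f += coef * lap
    return f

n = 200
E = np.linspace(0.0, 1.0, n)                # transverse energy (arb. units)
dE = E[1] - E[0]
f = np.exp(-0.5 * ((E - 0.1) / 0.05) ** 2)  # well-channeled initial beam
f /= f.sum() * dE                           # unit probability mass
E_crit = 0.5                                # illustrative critical energy

frac = []
for _ in range(3):                          # three successive depths
    frac.append(f[E > E_crit].sum() * dE)
    f = diffuse(f, coef=0.4, steps=400)
print(frac)  # dechanneled fraction grows with depth
```

In the paper's treatment the diffusion function depends on E⊥ and encodes the electronic and thermal scattering contributions; a constant D only reproduces the qualitative growth of the dechanneled fraction with depth.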