975 results for Matrix Analytic Methods
Abstract:
"December 22, 1959."
Abstract:
Purpose: The phenotype of vascular smooth muscle cells (SMCs) is altered in several arterial pathologies, including the neointima formed after acute arterial injury. This study examined the time course of this phenotypic change in relation to changes in the amount and distribution of matrix glycosaminoglycans. Methods: The immunochemical staining of heparan sulphates (HS) and chondroitin sulphates (CS) in the extracellular matrix of the arterial wall was examined at early time points after balloon catheter injury of the rabbit carotid artery. SMC phenotype was assessed by means of ultrastructural morphometry of the cytoplasmic volume fraction of myofilaments. The proportions of cell and matrix components in the media were analyzed with similar morphometric techniques. Results: HS and CS were found in close association with SMCs of the uninjured arterial media, as well as more widely distributed within the matrix. Within 6 hours after arterial injury, there was loss of the regular pericellular distribution of both HS and CS, which was associated with a significant expansion of the extracellular space. This preceded the change in ultrastructural phenotype of the SMCs. The glycosaminoglycan loss was most pronounced at 4 days, after which time the HS and CS reappeared around the medial SMCs. SMCs of the recovering media were able to rapidly replace their glycosaminoglycans, whereas SMCs of the developing neointima failed to produce HS as readily as they produced CS. Conclusions: These studies indicate that changes in glycosaminoglycans of the extracellular matrix precede changes in SMC phenotype after acute arterial injury. In the recovering arterial media, SMCs replace their matrix glycosaminoglycans rapidly, whereas the newly established neointima fails to produce similar amounts of heparan sulphates.
Abstract:
Dissertation submitted to obtain the degree of Master in Informatics Engineering
Abstract:
Background. Case-control studies are widely used by epidemiologists to assess the impact of certain exposures on a particular disease. These exposures may be represented by several time-dependent variables, and new methods are needed to estimate their effects accurately. Indeed, logistic regression, the conventional method for analysing case-control data, does not directly account for changes in covariate values over time. By contrast, survival analysis methods such as the Cox proportional hazards model can directly incorporate time-dependent covariates representing individual exposure histories. However, this requires careful handling of the risk sets because of the over-sampling of cases, relative to controls, in case-control studies. As shown in a previous simulation study, the optimal definition of the risk sets for the analysis of case-control data has yet to be elucidated, and to be studied for time-dependent variables. Objective. The overall objective is to propose and investigate new versions of the Cox model for estimating the impact of time-varying exposures in case-control studies, and to apply them to real case-control data on lung cancer and smoking. Methods. I identified new, potentially optimal risk-set definitions (the Weighted Cox model and the Simple weighted Cox model), in which different weights are assigned to cases and controls so as to reflect the proportions of cases and non-cases in the source population. The properties of the exposure-effect estimators were studied by simulation. Different aspects of exposure were generated (intensity, duration, cumulative exposure). The generated case-control data were then analysed with different versions of the Cox model, including the existing and new risk-set definitions, as well as with conventional logistic regression, for comparison. The different regression models were then applied to real case-control data on lung cancer. The estimated effects of the different smoking variables obtained with the different methods were compared with one another and with the simulation results. Results. The simulation results show that the estimates from the proposed weighted Cox models, especially those of the Weighted Cox model, are much less biased than the estimates from existing Cox models that simply include or exclude future cases from each risk set. Moreover, the estimates from the Weighted Cox model were slightly, but systematically, less biased than those from logistic regression. The application to real data shows larger differences between the estimates from logistic regression and from the weighted Cox models for some time-dependent smoking variables. Conclusions. The results suggest that the proposed new weighted Cox model could be an attractive alternative to logistic regression for estimating the effects of time-dependent exposures in case-control studies.
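As a rough illustration of the re-weighting idea described above (a minimal sketch, not the thesis' exact Weighted Cox or Simple weighted Cox models, and it ignores the time-dependent covariate machinery): controls can be up-weighted so that the weighted case/non-case ratio in the sample matches the source population, and the Cox model is then fitted with those weights. The column names `time` and `event`, the weighting formula, and the use of the lifelines library are assumptions of this sketch.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def fit_reweighted_cox(df: pd.DataFrame, p_case_in_population: float) -> CoxPHFitter:
    """Fit a Cox model to case-control data with controls up-weighted so that the
    weighted case/non-case ratio matches the source population (sketch only)."""
    n_cases = int((df["event"] == 1).sum())
    n_controls = int((df["event"] == 0).sum())
    # weight making (weighted controls) : (cases) = (1 - p) : p, as in the source population
    w_control = (n_cases / n_controls) * (1.0 - p_case_in_population) / p_case_in_population
    data = df.assign(w=np.where(df["event"] == 1, 1.0, w_control))
    cph = CoxPHFitter()
    # robust=True requests a sandwich variance estimate, sensible with non-integer weights
    cph.fit(data, duration_col="time", event_col="event", weights_col="w", robust=True)
    return cph

# Hypothetical usage, e.g. if roughly 1% of the source population are cases:
# model = fit_reweighted_cox(df, p_case_in_population=0.01)
# model.print_summary()
```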
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The use of guided ultrasonic waves (GUW) has increased considerably in the fields of non-destructive evaluation (NDE) and structural health monitoring (SHM) due to their ability to perform long-range inspections, to probe hidden areas, and to provide complete monitoring of the entire waveguide. Guided waves can be fully exploited only once their dispersive properties are known for the given waveguide. In this context, well-established analytical and numerical methods are represented by the Matrix family methods and the Semi-Analytical Finite Element (SAFE) methods. However, while the former are limited to simple geometries of finite or infinite extent, the latter can model arbitrary cross-section waveguides of finite domain only. This thesis is aimed at developing three different numerical methods for modelling wave propagation in complex translationally invariant systems. First, a classical SAFE formulation for viscoelastic waveguides is extended to account for a three-dimensional translationally invariant static prestress state. The effect of prestress, residual stress and applied loads on the dispersion properties of the guided waves is shown. Next, a two-and-a-half dimensional Boundary Element Method (2.5D BEM) for the dispersion analysis of damped guided waves in waveguides and cavities of arbitrary cross-section is proposed. The attenuation dispersive spectrum due to material damping and geometrical spreading of cavities with arbitrary shape is shown for the first time. Finally, a coupled SAFE-2.5D BEM framework is developed to study the dispersion characteristics of waves in viscoelastic waveguides of arbitrary geometry embedded in infinite solid or liquid media. Dispersion of leaky and non-leaky guided waves in terms of speed and attenuation, as well as the radiated wavefields, can be computed. The results obtained in this thesis can be helpful for the design of both actuation and sensing systems in practical applications, as well as for tuning experimental setups.
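For orientation, the generic SAFE dispersion eigenproblem takes the following form (this is the standard textbook formulation, not necessarily the exact operators used in the thesis, e.g. those carrying the prestress terms). Assuming harmonic waves \( \mathbf{U}(x,y)\,e^{i(\xi z - \omega t)} \) along the invariant direction \(z\), the finite element discretization of the cross-section yields

\[
\left[\,\mathbf{K}_1 + i\xi\,\mathbf{K}_2 + \xi^{2}\,\mathbf{K}_3 - \omega^{2}\,\mathbf{M}\,\right]\hat{\mathbf{U}} = \mathbf{0},
\]

whose solution pairs \((\omega,\xi)\) trace the dispersion curves; with material damping the stiffness matrices (and hence \(\xi\)) become complex, the imaginary part of \(\xi\) giving the attenuation, and a static prestress typically enters as an additional geometric stiffness contribution.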
Abstract:
The aim of this thesis is to study the mechanisms of instability that occur in swept wings when the angle of attack increases. For this, a simplified model for the non-orthogonal swept leading-edge boundary layer has been used, as well as different numerical techniques, in order to solve the linear stability problem that describes the behavior of perturbations superposed upon this base flow. Two different approaches, matrix-free and matrix-forming methods, have been validated using direct numerical simulations with spectral resolution. In this way, flow instability in the non-orthogonal swept attachment-line boundary layer is addressed in a linear analysis framework via the solution of the pertinent global (Bi-Global) PDE-based eigenvalue problem. Subsequently, a simple extension of the extended Görtler-Hämmerlin ODE-based polynomial model proposed by Theofilis, Fedorov, Obrist & Dallmann (2003) for orthogonal flow, which includes previous models as particular cases and recovers global instability analysis results, is presented for non-orthogonal flow. Direct numerical simulations have been used to verify the stability results and unravel the limits of validity of the basic flow model analyzed. The effect of the angle of attack, AoA, on the critical conditions of the non-orthogonal problem has been documented; an increase of the angle of attack, from AoA = 0 (orthogonal flow) up to values close to π/2 which make the assumptions under which the basic flow is derived questionable, is found to systematically destabilize the flow. The critical conditions of non-orthogonal flows at 0 ≤ AoA ≤ π/2 are shown to be recoverable from those of orthogonal flow via a simple analytical transformation involving AoA. These results can help to understand the mechanisms of destabilization that occur in the attachment line of wings at finite angles of attack. Studies taking into account variations of the pressure field in the basic flow, or the extension to compressible flows, are issues that remain open.
Abstract:
Mode of access: Internet.
Abstract:
Matrix application continues to be a critical step in sample preparation for matrix-assisted laser desorption/ionization (MALDI) mass spectrometry imaging (MSI). Imaging of small molecules such as drugs and metabolites is particularly problematic because the commonly used washing steps to remove salts are usually omitted as they may also remove the analyte, and analyte spreading is more likely with conventional wet matrix application methods. We have developed a method which uses the application of matrix as a dry, finely divided powder, here referred to as dry matrix application, for the imaging of drug compounds. This appears to offer a complementary method to wet matrix application for the MALDI-MSI of small molecules, with the alternative matrix application techniques producing different ion profiles, and allows the visualization of compounds not observed using wet matrix application methods. We demonstrate its value in imaging clozapine from rat kidney and 4-bromophenyl-1,4-diazabicyclo(3.2.2)nonane-4-carboxylic acid from rat brain. In addition, exposure of the dry matrix coated sample to a saturated moist atmosphere appears to enhance the visualization of a different set of molecules.
Abstract:
This research began with an attempt to solve a practical problem, namely, the prediction of the rate at which an operator will learn a task. From a review of the literature, communications with researchers in this area, and the study of psychomotor learning in factories, it was concluded that a more fundamental approach was required which included the development of a task taxonomy. This latter objective had been researched for over twenty years by E. A. Fleishman, and his approach was adopted. Three studies were carried out to develop and extend Fleishman's approach to the industrial area. However, the results of these studies were not in accord with Fleishman's conclusions and suggested that a critical re-assessment was required of the arguments, methods and procedures used by Fleishman and his co-workers. It was concluded that Fleishman's findings were to some extent an artifact of the approximate methods and procedures which he used in the original factor analyses, and that using more modern computerised factor analytic methods a reliable ability taxonomy could be developed to describe the abilities involved in the learning of psychomotor tasks. The implications for a changing-task or changing-subject model were drawn, and it was concluded that a changing-task and changing-subject model needs to be developed.
Abstract:
The purpose of this paper is to discuss the linear solution of equality-constrained problems by using the Frontal solution method without explicit assembling. Design/methodology/approach - Re-written Frontal solution method with a priori pivot and front sequence. OpenMP parallelization, nearly linear (in elimination and substitution) up to 40 threads. Constraints enforced at the local assembling stage. Findings - When compared with both standard sparse solvers and classical frontal implementations, memory requirements and code size are significantly reduced. Research limitations/implications - Large, non-linear problems with constraints typically make use of the Newton method with Lagrange multipliers. In the context of the solution of problems with a large number of constraints, the matrix transformation methods (MTM) are often more cost-effective. The paper presents a complete solution, with topological ordering, for this problem. Practical implications - A complete software package in Fortran 2003 is described. Examples of clique-based problems are shown, with large systems solved in core. Social implications - More realistic non-linear problems can be solved with this Frontal code at the core of the Newton method. Originality/value - Use of topological ordering of constraints. A priori pivot and front sequences. No need for symbolic assembling. Constraints treated at the core of the Frontal solver. Use of OpenMP in the main Frontal loop, now quantified. Availability of software.
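To make the element-by-element idea behind a frontal solver concrete, a toy dense sketch follows. It only illustrates the basic mechanics (assemble an element, eliminate dofs as soon as they are fully summed, back-substitute); the paper's a priori pivot and front sequencing, constraint treatment at local assembly, topological ordering, and OpenMP parallelization are not reproduced, and the function and variable names are invented for illustration.

```python
import numpy as np

def frontal_solve(elements, b):
    """Toy frontal solve of K x = b, with K given as element matrices: each element is
    assembled into a small dense 'front', and a dof is eliminated as soon as it is
    fully summed (appears in no later element). No pivoting strategy is applied."""
    last = {}                                     # last element touching each dof
    for e, (dofs, _) in enumerate(elements):
        for d in dofs:
            last[d] = e

    front, F, f, elim = [], np.zeros((0, 0)), np.zeros(0), []
    for e, (dofs, Ke) in enumerate(elements):
        for d in dofs:                            # grow the front with new dofs
            if d not in front:
                front.append(d)
                F = np.pad(F, ((0, 1), (0, 1)))
                f = np.append(f, b[d])            # external load enters once, here
        idx = [front.index(d) for d in dofs]
        F[np.ix_(idx, idx)] += Ke                 # local assembly into the front

        for d in [d for d in dofs if last[d] == e]:   # fully summed -> eliminate now
            i = front.index(d)
            row, r = F[i, :] / F[i, i], f[i] / F[i, i]
            col = F[:, i].copy()
            F -= np.outer(col, row)               # static condensation of dof d
            f -= col * r
            F = np.delete(np.delete(F, i, 0), i, 1)
            f = np.delete(f, i)
            front.pop(i)
            elim.append((d, np.delete(row, i), r, list(front)))

    x = {}
    for d, row, r, rest in reversed(elim):        # back-substitution
        x[d] = r - sum(w * x[g] for w, g in zip(row, rest))
    return np.array([x[d] for d in sorted(x)])

# Two-spring chain, left end grounded, unit tip load (solution is [1, 2]):
# elements = [([0], np.array([[1.0]])),
#             ([0, 1], np.array([[1.0, -1.0], [-1.0, 1.0]]))]
# print(frontal_solve(elements, np.array([0.0, 1.0])))
```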
Abstract:
BACKGROUND: Acute renal failure is a serious complication in critically ill patients and frequently requires renal replacement therapy, which alters trace element and vitamin metabolism. OBJECTIVE: The objective was to study trace element balances during continuous renal replacement therapy (CRRT) in intensive care patients. DESIGN: In a prospective randomized crossover trial, patients with acute renal failure received CRRT with either sodium bicarbonate (Bic) or sodium lactate (Lac) as a buffering agent over 2 consecutive 24-h periods. Copper, selenium, zinc, and thiamine were measured with highly sensitive analytic methods in plasma, replacement solutions, and effluent during 8-h periods. Balances were calculated as the difference between fluids administered and effluent losses and were compared with the recommended intakes (RI) from parenteral nutrition. RESULTS: Nineteen sessions were conducted in 11 patients aged 65 +/- 10 y. Baseline plasma concentrations of copper were normal, whereas those of selenium and zinc were below reference ranges; glutathione peroxidase was in the lower range of normal. The replacement solutions contained no detectable copper, 0.01 micromol Se/L (Bic and Lac), and 1.42 (Bic) and 0.85 (Lac) micromol Zn/L. Micronutrients were detectable in all effluents, and losses were stable in each patient; no significant differences were found between the Bic and Lac groups. The 24-h balances were negative for selenium (-0.97 micromol, or 2 times the daily RI), copper (-6.54 micromol, or 0.3 times the daily RI), and thiamine (-4.12 mg, or 1.5 times the RI) and modestly positive for zinc (20.7 micromol, or 0.2 times the RI). CONCLUSIONS: CRRT results in significant losses and negative balances of selenium, copper, and thiamine, which contribute to low plasma concentrations. Prolonged CRRT is likely to result in selenium and thiamine depletion despite supplementation at recommended amounts.
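The balance arithmetic described above (amount delivered with the replacement fluid minus effluent losses, expressed against the recommended intake) can be sketched as follows; the function name and the numbers in the commented example are illustrative only, not values from the study.

```python
def daily_balance(conc_replacement, vol_replacement_L,
                  conc_effluent, vol_effluent_L, recommended_intake):
    """24-h micronutrient balance = amount administered - amount lost in effluent,
    also expressed as a multiple of the recommended daily intake (RI)."""
    intake = conc_replacement * vol_replacement_L     # e.g. micromol/L * L = micromol
    loss = conc_effluent * vol_effluent_L
    balance = intake - loss
    return balance, balance / recommended_intake

# Illustrative call (made-up numbers, not study data):
# bal, times_ri = daily_balance(0.01, 48.0, 0.03, 48.0, 0.5)
# print(bal, times_ri)   # a negative balance, in micromol and in multiples of the RI
```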
Abstract:
A number of experimental methods have been reported for estimating the number of genes in a genome, or the closely related coding density of a genome, defined as the fraction of base pairs in codons. Recently, DNA sequence data representative of the genome as a whole have become available for several organisms, making the problem of estimating coding density amenable to sequence analytic methods. Estimates of coding density for a single genome vary widely, so that methods with characterized error bounds have become increasingly desirable. We present a method to estimate the protein coding density in a corpus of DNA sequence data, in which a ‘coding statistic’ is calculated for a large number of windows of the sequence under study, and the distribution of the statistic is decomposed into two normal distributions, assumed to be the distributions of the coding statistic in the coding and noncoding fractions of the sequence windows. The accuracy of the method is evaluated using known data and application is made to the yeast chromosome III sequence and to C.elegans cosmid sequences. It can also be applied to fragmentary data, for example a collection of short sequences determined in the course of STS mapping.
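A minimal sketch of the decomposition step described above, using scikit-learn's two-component Gaussian mixture; the choice of coding statistic, the synthetic window scores, and the assumption that the higher-mean component is the coding one are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def coding_density_estimate(window_scores):
    """Fit a two-component normal mixture to per-window coding-statistic values and
    return the weight of the component taken to be 'coding' (the higher-mean one)."""
    x = np.asarray(window_scores, dtype=float).reshape(-1, 1)
    gm = GaussianMixture(n_components=2, random_state=0).fit(x)
    coding_component = int(np.argmax(gm.means_.ravel()))
    return float(gm.weights_[coding_component])

# Synthetic check: 30% of windows drawn from a higher-scoring 'coding' distribution
# rng = np.random.default_rng(0)
# scores = np.concatenate([rng.normal(0.2, 0.1, 700), rng.normal(0.8, 0.1, 300)])
# print(coding_density_estimate(scores))   # ~0.3
```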
Abstract:
The method of stochastic dynamic programming is widely used in behavioural ecology, but it has some shortcomings because of its use of finite time horizons. The authors present an alternative approach based on the methods of renewal theory. The suggested method uses the cumulative energy reserves gained per unit time as the criterion, which leads to stationary cycles in the state space. This approach makes it possible to study optimal feeding by analytic methods.
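As a trivial illustration of the criterion mentioned above (long-run energy gained per unit time of a stationary cycle, in the spirit of a renewal-reward argument; the numbers are invented):

```python
def long_run_rate(gains, durations):
    """Long-run average energy gain per unit time of a stationary foraging cycle:
    expected net gain per cycle divided by expected cycle length."""
    return sum(gains) / sum(durations)

# Hypothetical cycle: feed for 3 time units gaining 5, then travel for 1 unit costing 1
# print(long_run_rate([5.0, -1.0], [3.0, 1.0]))   # 1.0 energy units per time unit
```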