Abstract:
BACKGROUND To evaluate, in patients with aggressive periodontitis (AgP), the effect of nonsurgical periodontal treatment in conjunction with either additional administration of systemic antibiotics (AB) or application of photodynamic therapy (PDT) on the gingival crevicular fluid (GCF) concentrations of matrix metalloproteinases 8 and 9 (MMP-8 and -9). METHODS Thirty-six patients with AgP were included in the study. Patients were randomly assigned to treatment with either scaling and root planing (SRP) followed by systemic administration of AB (amoxicillin plus metronidazole) or SRP + PDT. The analysis of MMP-8 and -9 GCF concentrations was performed at baseline and at 3 and 6 months after treatment. The nonparametric Mann-Whitney U test was used for comparisons between groups. Changes from baseline to 3 and 6 months were analyzed with Friedman's ANOVA test with Kendall's index of consistency. RESULTS In the AB group, patients showed a statistically significant (p = 0.01) decrease in MMP-8 GCF levels at both 3 and 6 months post treatment. In the PDT group, the change in MMP-8 GCF levels was not statistically significant. Both groups showed a decrease in MMP-9 levels at 3 and 6 months; however, this change did not reach statistical significance. CONCLUSIONS Within the limits of the present study, it may be suggested that in patients with AgP, nonsurgical periodontal therapy in conjunction with adjunctive systemic administration of amoxicillin and metronidazole is more effective in reducing GCF MMP-8 levels than the adjunctive use of PDT.
Abstract:
INTRODUCTION The transcription factor activating enhancer binding protein 2 epsilon (AP-2ε) was recently shown to be expressed during chondrogenesis as well as in articular chondrocytes of humans and mice. Furthermore, expression of AP-2ε was found to be upregulated in affected cartilage of patients with osteoarthritis (OA). Despite these findings, adult mice deficient for AP-2ε (Tfap2e(-/-)) do not exhibit an obviously abnormal cartilaginous phenotype. We therefore analyzed embryogenesis of Tfap2e(-/-) mice to elucidate potential transient abnormalities that provide information on the influence of AP-2ε on skeletal development. In a second part, we aimed to define potential influences of AP-2ε on articular cartilage function and gene expression, as well as on OA progression, in adult mice. METHODS Murine embryonic development was assessed via in situ hybridization, measurement of skeletal parameters and micromass differentiation of mesenchymal cells. To reveal discrepancies in articular cartilage of adult wild-type (WT) and Tfap2e(-/-) mice, light and electron microscopy, in vitro culture of cartilage explants, and quantification of gene expression via real-time PCR were performed. OA was induced via surgical destabilization of the medial meniscus in both genotypes, and disease progression was monitored on histological and molecular levels. RESULTS Only minor differences between WT and AP-2ε-deficient embryos were observed, suggesting that redundancy mechanisms effectively compensate for the loss of AP-2ε during skeletal development. Surprisingly, though, we found matrix metalloproteinase 13 (Mmp13), a major mediator of cartilage destruction, to be significantly upregulated in articular cartilage of adult Tfap2e(-/-) mice. This finding was further confirmed by increased Mmp13 activity and extracellular matrix degradation in Tfap2e(-/-) cartilage explants.
OA progression was significantly enhanced in the Tfap2e(-/-) mice, which provided evidence for in vivo relevance. This finding is most likely attributable to the increased basal Mmp13 expression level in Tfap2e(-/-) articular chondrocytes that results in a significantly higher total Mmp13 expression rate during OA as compared with the WT. CONCLUSIONS We reveal a novel role of AP-2ε in the regulation of gene expression in articular chondrocytes, as well as in OA development, through modulation of Mmp13 expression and activity.
Abstract:
BACKGROUND Enamel matrix derivatives (EMDs) have been used clinically for more than a decade for the regeneration of periodontal tissues. The aim of the present study is to analyze the effect of EMDs in a gel carrier on cell growth in comparison to EMDs in a liquid carrier. EMDs in a liquid carrier have been shown to adsorb better to bone graft materials. METHODS Primary human osteoblasts and periodontal ligament (PDL) cells were exposed to EMDs in both gel and liquid carriers and compared for their ability to induce cell proliferation and differentiation. Alizarin red staining and real-time polymerase chain reaction for expression of genes encoding collagen 1, osteocalcin, and runt-related transcription factor 2, as well as bone morphogenetic protein 2 (BMP2), transforming growth factor (TGF)-β1, and interleukin (IL)-1β, were assessed. RESULTS EMDs in both carriers significantly increased cell proliferation of both osteoblasts and PDL cells in a similar manner. Both formulations also significantly upregulated the expression of genes encoding BMP2 and TGF-β1 as well as decreased the expression of IL-1β. EMDs in the liquid carrier further retained similar differentiation potential of both osteoblasts and PDL cells by demonstrating increased collagen and osteocalcin gene expression and significantly higher alizarin red staining. CONCLUSIONS The results from the present study indicate that the new formulation of EMDs in a liquid carrier is equally as potent as EMDs in a gel carrier in inducing osteoblast and PDL activity. Future studies combining EMDs in a liquid carrier with bone grafting materials are required to further evaluate its potential for combination therapies.
Abstract:
Based on an order-theoretic approach, we derive sufficient conditions for the existence, characterization, and computation of Markovian equilibrium decision processes and stationary Markov equilibrium on minimal state spaces for a large class of stochastic overlapping generations models. In contrast to all previous work, we consider reduced-form stochastic production technologies that allow for a broad set of equilibrium distortions such as public policy distortions, social security, monetary equilibrium, and production nonconvexities. Our order-based methods are constructive, and we provide monotone iterative algorithms for computing extremal stationary Markov equilibrium decision processes and equilibrium invariant distributions, while avoiding many of the problems associated with the existence of indeterminacies that have been well-documented in previous work. We provide important results for existence of Markov equilibria for the case where capital income is not increasing in the aggregate stock. Finally, we conclude with examples common in macroeconomics such as models with fiat money and social security. We also show how some of our results extend to settings with unbounded state spaces.
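The monotone iterative approach described in this abstract can be illustrated with a toy example: iterating a monotone operator from the bottom and from the top of the lattice brackets the least and greatest fixed points. The operator below is a hypothetical contraction on a discretized state space, not the paper's equilibrium map; it only shows the mechanics of the order-based iteration.

```python
import numpy as np

def extremal_fixed_points(T, lo, hi, tol=1e-10, max_iter=10_000):
    """Iterate a monotone operator T from the lattice bottom (lo) and
    top (hi); the two sequences approach the least and greatest fixed
    points from below and above, respectively."""
    least, greatest = lo, hi
    for _ in range(max_iter):
        nl, ng = T(least), T(greatest)
        gap = max(np.max(np.abs(nl - least)), np.max(np.abs(ng - greatest)))
        least, greatest = nl, ng
        if gap < tol:
            break
    return least, greatest

# Hypothetical monotone operator on a discretized state space:
# T(g)(x) = 0.5*g(x) + 0.25*sqrt(x), increasing in g pointwise.
grid = np.linspace(0.0, 1.0, 50)
T = lambda g: 0.5 * g + 0.25 * np.sqrt(grid)

least, greatest = extremal_fixed_points(T, np.zeros_like(grid), np.ones_like(grid))
```

Because this toy operator is a contraction, both iterations converge to the same fixed point; in models with multiple equilibria the two limits would instead bracket the equilibrium set from below and above.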
Abstract:
With the recognition of the importance of evidence-based medicine, there is an emerging need for methods to systematically synthesize available data. Specifically, methods to provide accurate estimates of test characteristics for diagnostic tests are needed to help physicians make better clinical decisions. To provide more flexible approaches for meta-analysis of diagnostic tests, we developed three Bayesian generalized linear models. Two of these models, a bivariate normal and a binomial model, analyzed pairs of sensitivity and specificity values while incorporating the correlation between these two outcome variables. Noninformative independent uniform priors were used for the variances of sensitivity and specificity and for the correlation. We also applied an inverse Wishart prior to check the sensitivity of the results. The third model was a multinomial model where the test results were modeled as multinomial random variables. All three models can include specific imaging techniques as covariates in order to compare performance. Vague normal priors were assigned to the coefficients of the covariates. The computations were carried out using the 'Bayesian inference using Gibbs sampling' implementation of Markov chain Monte Carlo techniques. We investigated the properties of the three proposed models through extensive simulation studies. We also applied these models to a previously published meta-analysis dataset on cervical cancer as well as to an unpublished melanoma dataset. In general, our findings show that the point estimates of sensitivity and specificity were consistent among Bayesian and frequentist bivariate normal and binomial models. However, in the simulation studies, the estimates of the correlation coefficient from Bayesian bivariate models were not as good as those obtained from frequentist estimation, regardless of which prior distribution was used for the covariance matrix.
The Bayesian multinomial model consistently underestimated the sensitivity and specificity regardless of the sample size and correlation coefficient. In conclusion, the Bayesian bivariate binomial model provides the most flexible framework for future applications because of the following strengths: (1) it facilitates direct comparison between different tests; (2) it captures the variability in both sensitivity and specificity simultaneously, as well as the intercorrelation between the two; and (3) it can be directly applied to sparse data without ad hoc correction.
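As a rough illustration of the MCMC machinery behind such models, the sketch below runs a random-walk Metropolis sampler for pooled sensitivity and specificity on the logit scale. It is deliberately simplified relative to the paper's bivariate binomial model: the per-study counts are invented, flat priors are used, and both between-study heterogeneity and the sensitivity-specificity correlation are ignored.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented per-study 2x2 counts: columns are TP, FN, TN, FP.
studies = np.array([[45,  5, 80, 20],
                    [30, 10, 60, 15],
                    [50,  8, 90, 10]])

def log_post(theta):
    """Log-posterior of pooled (logit sensitivity, logit specificity)
    under flat priors and a binomial likelihood across studies."""
    se = 1.0 / (1.0 + np.exp(-theta[0]))
    sp = 1.0 / (1.0 + np.exp(-theta[1]))
    tp, fn, tn, fp = studies.T
    return float(np.sum(tp * np.log(se) + fn * np.log(1 - se)
                        + tn * np.log(sp) + fp * np.log(1 - sp)))

theta, lp = np.zeros(2), log_post(np.zeros(2))
draws = []
for i in range(20_000):                      # random-walk Metropolis
    prop = theta + 0.15 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:  # accept/reject step
        theta, lp = prop, lp_prop
    if i >= 5_000:                           # discard burn-in
        draws.append(theta)
draws = np.array(draws)
se_hat = 1 / (1 + np.exp(-draws[:, 0].mean()))
sp_hat = 1 / (1 + np.exp(-draws[:, 1].mean()))
```

A full bivariate model would replace the two independent logits with a correlated pair of study-level random effects, which is what allows the correlation between sensitivity and specificity to be estimated.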
Abstract:
The discrete-time Markov chain is commonly used to describe changes of health states for chronic diseases in a longitudinal study. Statistical inferences for comparing treatment effects or finding determinants of disease progression usually require estimation of transition probabilities. In many situations, when the outcome data have some missing observations or the variable of interest (called a latent variable) cannot be measured directly, the estimation of transition probabilities becomes more complicated. In the latter case, a surrogate variable that is easier to access and can gauge the characteristics of the latent one is usually used for data analysis. This dissertation research proposes methods to analyze longitudinal data (1) that have categorical outcomes with missing observations or (2) that use complete or incomplete surrogate observations to analyze a categorical latent outcome. For (1), different missing-data mechanisms were considered for empirical studies using methods that include the EM algorithm, Monte Carlo EM, and a procedure that is not a data augmentation method. For (2), the hidden Markov model with the forward-backward procedure was applied for parameter estimation. This method was also extended to cover the computation of standard errors. The proposed methods were demonstrated with a schizophrenia example. The relevance to public health, the strengths and limitations, and possible future research were also discussed.
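The forward-backward procedure mentioned above can be sketched in a few lines; the initial, transition, emission, and observation values below are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

def forward_backward(pi, A, B, obs):
    """Scaled forward-backward pass: returns the posterior state
    probabilities P(state_t | obs_1..T) and the log-likelihood."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    c = np.zeros(T)                       # per-step scaling factors
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):                 # forward recursion
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):        # backward recursion
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    return gamma, float(np.sum(np.log(c)))

# Illustrative two-state chain with a binary surrogate observation
pi = np.array([0.6, 0.4])                 # initial distribution
A = np.array([[0.8, 0.2], [0.3, 0.7]])    # transition probabilities
B = np.array([[0.9, 0.1], [0.2, 0.8]])    # emission probabilities
gamma, loglik = forward_backward(pi, A, B, [0, 0, 1, 0])
```

The scaling factors `c` keep the recursion numerically stable on long series and yield the log-likelihood as a by-product, which is what an EM (Baum-Welch) outer loop would maximize.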
Abstract:
Two sets of mass spectrometry-based methods were developed specifically for the in vivo study of extracellular neuropeptide biochemistry. First, an integrated micro-concentration/desalting/matrix-addition device was constructed for matrix-assisted laser desorption ionization mass spectrometry (MALDI MS) to achieve attomole sensitivity for microdialysis samples. Second, capillary electrophoresis (CE) was incorporated into the above micro-liquid chromatography (LC) and MALDI MS system to provide two-dimensional separation and identification (i.e. electrophoretic mobility and molecular mass) for the analysis of complex mixtures. The latter technique includes two parts of instrumentation: (1) the coupling of a preconcentration LC column to the inlet of a CE capillary, and (2) the utilization of a matrix-precoated membrane target for continuous CE effluent deposition and for automatic MALDI MS analysis (imaging) of the CE track. Initial in vivo data reveal a carboxypeptidase A (CPA) activity in rat brain involved in extracellular neurotensin metabolism. Benzylsuccinic acid, a CPA inhibitor, inhibited neurotensin metabolite NT1-12 formation by 70%, while inhibitors of other major extracellular peptide-metabolizing enzymes increased NT1-12 formation. CPA activity had not been observed in previous in vitro experiments. Next, the validity of the methodology was demonstrated in the detection and structural elucidation of an endogenous neuropeptide, (L)VV-hemorphin-7, in rat brain upon ATP stimulation. Finally, the combined micro-LC/CE/MALDI MS was used in the in vivo metabolic study of peptide E, a mu-selective opioid peptide with 25 amino acid residues. Profiles of 88 metabolites were obtained, their identities being determined by their mass-to-charge ratios and electrophoretic mobilities. The results indicate that there are several primary in vivo cleavage sites for peptide E in the release of its enkephalin-containing fragments.
Abstract:
Distribution, accumulation and diagenesis of surficial sediments in coastal and continental shelf systems follow complex chains of localized processes and form deposits of great spatial variability. Given the environmental and economic relevance of ocean margins, there is a growing need for innovative geophysical exploration methods to characterize seafloor sediments by more than acoustic properties. A newly conceptualized benthic profiling and data processing approach based on controlled source electromagnetic (CSEM) imaging permits simultaneous quantification of the magnetic susceptibility and the electric conductivity of shallow marine deposits. The two physical properties differ fundamentally insofar as magnetic susceptibility mostly assesses solid particle characteristics such as terrigenous or iron mineral content, redox state and contamination level, while electric conductivity primarily relates to the fluid-filled pore space and detects salinity, porosity and grain-size variations. We develop and validate a layered half-space inversion algorithm for submarine multifrequency CSEM with concentric sensor configuration. Guided by results of modeling, we modified a commercial land CSEM sensor for submarine application, which was mounted into a nonconductive and nonmagnetic bottom-towed sled. This benthic EM profiler Neridis II achieves 25 soundings/second at 3-4 knots over continuous profiles of up to a hundred kilometers. Magnetic susceptibility is determined from the 75 Hz in-phase response (90% of the signal originates from the top 50 cm), while electric conductivity is derived from the 5 kHz out-of-phase (quadrature) component (90% of the signal from the top 92 cm). Exemplary survey data from the north-west Iberian margin underline the excellent sensitivity, functionality and robustness of the system in littoral (~0-50 m) and neritic (~50-300 m) environments. Susceptibility vs. porosity cross-plots successfully identify known lithofacies units and their transitions.
All presently available data indicate the great potential of CSEM profiling for assessing the complex distribution of shallow marine surficial sediments and for revealing the climatic, hydrodynamic, diagenetic and anthropogenic factors governing their formation.
Abstract:
Firms that are expanding their cross-border activities, such as vertical specialization trade, outsourcing, and fragmented production, have brought dramatic changes to the global economy during the last two decades. In an attempt to understand the evolution of the interaction among countries or country groups, many trade-statistics-based indicators have been developed. However, most of these statistics focus on showing the direct trade-specific relationship among countries, rather than considering the roles that intercountry and interindustrial production networks play in a global economy. This paper uses the concept of trade in value added, as measured by the input–output tables of OECD and IDE-JETRO, to provide alternative indicators that show the evolution of regional economic integration and global value chains for more than 50 economies. In addition, this paper provides thoughts on how to evaluate comparative advantages on the basis of value added using an international input–output model.
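The core accounting step behind trade-in-value-added indicators is the Leontief inverse: value added embodied in exports is obtained by scaling the gross output induced by exported final demand with direct value-added coefficients. A minimal sketch with a hypothetical three-sector table (all numbers invented for illustration):

```python
import numpy as np

# Hypothetical 3-sector technical-coefficient matrix A (column j lists
# intermediate inputs per unit of sector-j output) and exported final
# demand f; values are invented for illustration only.
A = np.array([[0.1, 0.2, 0.0],
              [0.2, 0.1, 0.3],
              [0.1, 0.0, 0.2]])
f = np.array([100.0, 50.0, 80.0])

v = 1.0 - A.sum(axis=0)              # direct value-added coefficients
L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse (I - A)^-1
x = L @ f                            # gross output induced by exports
va_by_sector = v * x                 # value added embodied in exports
```

In this closed single-economy toy table `va_by_sector.sum()` equals `f.sum()` exactly, since all income ultimately accrues as value added; multi-regional tables such as those of OECD and IDE-JETRO decompose the same total across countries and industries.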
Abstract:
Attempts to understand China’s role in global value chains have often noted the case of Apple's iPhone production, in particular the fact that the value added during the Chinese portion of the iPhone’s supply chain is no more than 4%. However, when we examine the Chinese economy as a whole in global production networks, China’s share in total induced value added by China’s exports of final products to the USA is about 75% in 2005. This leads us to investigate how Chinese value added is created and distributed not only internationally but also domestically. To elucidate the increasing complexity of China’s domestic production networks, this paper focuses on the measure of Domestic Value Chains (DVCs) across regions and their linkages with global markets. By using China’s 1997 and 2007 interregional input-output tables, we can understand in detail the structural changes in domestic trade in terms of value added, as well as the position and degree of participation of different regions within the DVCs.
Abstract:
In this study, we apply the inter-regional input–output model to explain the relationship between China’s inter-regional spillover of CO2 emissions and domestic supply chains for 2002 and 2007. Based on this model, we propose alternative indicators such as the trade in CO2 emissions, CO2 emissions in trade, regional trade balances, and the comparative advantage of CO2 emissions. The empirical results not only reveal the nature and significance of inter-regional environmental spillover within China’s domestic regions but also demonstrate how CO2 emissions are created and distributed across regions via domestic production networks. The main finding shows that a region’s CO2 emissions depend not only on its intra-regional production techniques and energy-use efficiency but also on its position and degree of participation in domestic and global supply chains.
Abstract:
Despite the fact that input–output (IO) tables form a central part of the System of National Accounts, each individual country's national IO table exhibits somewhat different features and characteristics, reflecting the country's socioeconomic idiosyncrasies. Consequently, the compilers of a multi-regional input–output table (MRIOT) are advised to thoroughly examine the conceptual as well as methodological differences among countries in the estimation of basic statistics for national IO tables and, if necessary, to carry out pre-adjustment of these tables into a common format prior to the MRIOT compilation. The objective of this study is to provide a practical guide for harmonizing national IO tables to construct a consistent MRIOT, referring to the adjustment practices used by the Institute of Developing Economies, JETRO (IDE-JETRO) in compiling the Asian International Input–Output Table.
Abstract:
This article presents a probabilistic method for vehicle detection and tracking through the analysis of monocular images obtained from a vehicle-mounted camera. The method is designed to address the main shortcomings of traditional particle filtering approaches, namely Bayesian methods based on importance sampling, for use in traffic environments. These methods do not scale well when the dimensionality of the feature space grows, which creates significant limitations when tracking multiple objects. Alternatively, the proposed method is based on a Markov chain Monte Carlo (MCMC) approach, which allows efficient sampling of the feature space. The method involves important contributions in both the motion and the observation models of the tracker. Indeed, as opposed to particle filter-based tracking methods in the literature, which typically resort to observation models based on appearance or template matching, in this study a likelihood model that combines appearance analysis with information from motion parallax is introduced. Regarding the motion model, a new interaction treatment is defined based on Markov random fields (MRF) that allows for the handling of possible inter-dependencies in vehicle trajectories. As for vehicle detection, the method relies on a supervised classification stage using support vector machines (SVM). The contribution in this field is twofold. First, a new descriptor based on the analysis of gradient orientations in concentric rectangles is defined. This descriptor involves a much smaller feature space compared to traditional descriptors, which are too costly for real-time applications. Second, a new vehicle image database is generated to train the SVM and made public. The proposed vehicle detection and tracking method is proven to outperform existing methods and to successfully handle challenging situations in the test sequences.
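As a hedged sketch of what a descriptor over gradient orientations in concentric rectangles might look like (the paper's exact construction is not reproduced here), the function below accumulates gradient magnitudes into one orientation histogram per concentric rectangular ring of a grayscale patch:

```python
import numpy as np

def concentric_descriptor(img, n_rings=3, n_bins=8):
    """Histogram of gradient orientations inside concentric rectangular
    rings of a grayscale patch; a compact alternative to dense
    cell-based descriptors (illustrative, not the paper's descriptor)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # distance to the nearest border, quantized into concentric rings
    d = np.minimum(np.minimum(yy, h - 1 - yy), np.minimum(xx, w - 1 - xx))
    ring = np.minimum((d / (d.max() + 1e-9) * n_rings).astype(int), n_rings - 1)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    desc = np.zeros(n_rings * n_bins)
    for r in range(n_rings):
        m = ring == r
        np.add.at(desc, r * n_bins + bins[m], mag[m])  # magnitude-weighted votes
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

patch = np.outer(np.linspace(0, 1, 32), np.ones(32))   # vertical intensity ramp
d = concentric_descriptor(patch)
```

The descriptor length is `n_rings * n_bins` (24 here), far smaller than a dense grid of cells, which is the property the abstract highlights for real-time use.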
Abstract:
Interface discontinuity factors based on the Generalized Equivalence Theory are commonly used in nodal homogenized diffusion calculations so that diffusion average values approximate heterogeneous higher-order solutions. In this paper, an additional form of interface correction factors is presented in the frame of the Analytic Coarse Mesh Finite Difference Method (ACMFD), based on a correction of the modal fluxes instead of the physical fluxes. In the ACMFD formulation, implemented in the COBAYA3 code, the coupled multigroup diffusion equations inside a homogenized region are reduced to a set of uncoupled modal equations through diagonalization of the multigroup diffusion matrix. Then, physical fluxes are transformed into modal fluxes in the eigenspace of the diffusion matrix. It is possible to introduce interface flux discontinuity jumps as the difference of heterogeneous and homogeneous modal fluxes instead of introducing interface discontinuity factors as the ratio of heterogeneous and homogeneous physical fluxes. The formulation in the modal space has been implemented in the COBAYA3 code and assessed by comparison with solutions using classical interface discontinuity factors in the physical space.
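The diagonalization step can be illustrated in a few lines: the multigroup diffusion matrix is decomposed as M = R Λ R⁻¹, and physical group fluxes are mapped to modal fluxes, where the equations uncouple. The 2-group matrix below is a hypothetical example, not COBAYA3 data.

```python
import numpy as np

# Hypothetical 2-group diffusion matrix for one homogenized node
# (illustrative numbers only; real cross sections come from the lattice code).
M = np.array([[ 0.120, -0.020],
              [-0.015,  0.100]])

evals, R = np.linalg.eig(M)   # M = R @ diag(evals) @ R^-1
Rinv = np.linalg.inv(R)

phi = np.array([1.0, 0.4])    # physical group fluxes
xi = Rinv @ phi               # modal fluxes: the equations uncouple here

# In the modal space, applying M reduces to scaling each mode separately:
M_phi_via_modes = R @ (evals * xi)
```

Modal discontinuity jumps as described in the abstract would then be imposed on `xi` (as differences) rather than on `phi` (as ratios), before transforming back with `R`.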
Abstract:
This paper outlines the problems found in the parallelization of SPH (Smoothed Particle Hydrodynamics) algorithms using Graphics Processing Units. Results of several parallel GPU implementations are shown in terms of speed-up and scalability compared to the sequential CPU codes. The most problematic stage in the GPU-SPH algorithms is the one responsible for locating neighboring particles and building the vectors where this information is stored, since these specific algorithms raise many difficulties for a data-level parallelization. Because the neighbor location using linked lists does not expose enough data-level parallelism, two new approaches have been proposed to minimize bank conflicts in the writing and subsequent reading of the neighbor lists. The first strategy proposes an efficient coordination between CPU and GPU, using GPU algorithms for those stages that allow a straightforward parallelization, and sequential CPU algorithms for those instructions that involve some kind of vector reduction. This coordination provides a relatively orderly reading of the neighbor lists in the interactions stage, achieving a speed-up factor of x47 in this stage. However, since the construction of the neighbor lists is quite expensive, the overall speed-up achieved is x41. The second strategy seeks to maximize the use of the GPU in the neighbor-location process by executing a specific vector sorting algorithm that allows some data-level parallelism. Although this strategy has succeeded in improving the speed-up in the neighbor-location stage, the global speed-up in the interactions stage falls, due to inefficient reading of the neighbor vectors. Some changes to these strategies are proposed, aimed at maximizing the computational load of the GPU and using the GPU texture units, in order to reach the maximum speed-up for such codes. Different practical applications have been added to the mentioned GPU codes.
First, the classical dam-break problem is studied. Second, the wave impact of the sloshing fluid contained in LNG vessel tanks is also simulated as a practical example of particle methods.
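The neighbor-location stage that dominates these SPH codes is, in serial form, a cell-linked-list search: particles are binned into cells of side h so that each particle only tests the surrounding 3x3 block of cells. A minimal CPU sketch under these assumptions (the GPU strategies discussed above restructure exactly this step):

```python
import numpy as np

def neighbor_lists(pos, h):
    """Cell-linked-list neighbor search in 2D: bin particles into cells
    of side h, then test only the 3x3 block of surrounding cells.
    Returns, for each particle, the indices of particles within h."""
    keys = np.floor(pos / h).astype(int)
    cells = {}
    for i, k in enumerate(map(tuple, keys)):
        cells.setdefault(k, []).append(i)
    nbrs = [[] for _ in range(len(pos))]
    for i, (cx, cy) in enumerate(map(tuple, keys)):
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in cells.get((cx + dx, cy + dy), ()):
                    if j != i and np.sum((pos[i] - pos[j]) ** 2) < h * h:
                        nbrs[i].append(j)
    return nbrs

rng = np.random.default_rng(1)
pts = rng.random((200, 2))        # particles in the unit square
lists = neighbor_lists(pts, 0.1)  # smoothing length h = 0.1
```

On a GPU, the irregular per-cell lists built here are precisely what causes the bank conflicts discussed above, since threads write and read them in an unpredictable order.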