958 results for Well-Posed Problem


Relevance:

80.00%

Publisher:

Abstract:

The problem of spectra formation in the hydrodynamic approach to A + A collisions is considered within the Boltzmann equations. It is shown analytically, and illustrated by numerical calculations, that the particle momentum spectra can be presented in the Cooper-Frye form even though the freeze-out is not sharp and has a finite temporal width. The latter is equal to the inverse of the particle collision rate at the points (t(sigma)(r, p), r) of maximal emission at a fixed momentum p. The set of these points forms the hypersurfaces t(sigma)(r, p), which strongly depend on the value of p and typically do not completely enclose the initially dense matter. This is an important difference from the standard Cooper-Frye prescription (CFp), with a common freeze-out hypersurface for all p, and it significantly affects the predicted spectra. Also, the well-known problem of the CFp, namely negative contributions to the spectra from non-space-like parts of the freeze-out hypersurface, is naturally eliminated in this improved prescription.
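For context, the standard Cooper-Frye prescription referred to above writes the invariant momentum spectrum as an integral of the distribution function f(x, p) over a single freeze-out hypersurface σ (textbook form, shown here for reference; the abstract's improved prescription replaces σ with p-dependent hypersurfaces of maximal emission):

```latex
E \frac{dN}{d^{3}p} \;=\; \int_{\sigma} f(x,p)\, p^{\mu}\, \mathrm{d}\sigma_{\mu}
```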

Relevance:

80.00%

Publisher:

Abstract:

Background. Researchers have proposed the restoration of abfraction lesions, but limited information is available about the effects of occlusal loading on the margins of such restorations. Because abfraction is a well-recognized problem, the authors conducted a study to assess the effects of occlusal loading on the margins of cervical restorations. Methods. The authors prepared 40 wedge-shaped cavities in extracted premolars and restored them with a resin-based composite. They subjected specimens to occlusal loading (150 newtons, 101 cycles) on the buccal cusp, on the central fossa or on the lingual cusp; control-group specimens were stored in deionized water. The authors used fluorescein to delimit marginal defects and evaluated the defects by using laser scanning confocal microscopy. Results. Results of chi-square and Kruskal-Wallis tests (P < .05) showed that specimens subjected to occlusal loading had a higher percentage of marginal gaps (53.3 percent) than did the control specimens (10.0 percent). There were no differences between groups in marginal defect formation or in defect location, length or width. Conclusions. Occlusal loading led to a significant increase in gap formation at the margins of cervical resin-based composite restorations. Clinical Implications. The clinician must not underestimate the effects of occlusal loading when restoring teeth with cervical wedge-shaped lesions. If occlusal loading is the main factor contributing to lesion formation, the clinician should identify and treat it before placing the restoration, or otherwise run the risk that the restorative treatment will fail because of marginal gap formation.

Relevance:

80.00%

Publisher:

Abstract:

Petrov-Galerkin methods are known to be versatile techniques for the solution of a wide variety of convection-dispersion transport problems, including those involving steep gradients, but they have hitherto received little attention from chemical engineers. We illustrate the technique by means of the well-known problem of simultaneous diffusion and adsorption in a spherical sorbent pellet composed of spherical, non-overlapping microparticles of uniform size and investigate the uptake dynamics. Solutions to adsorption problems exhibit steep gradients when either macropore diffusion or micropore diffusion controls, and the application of classical numerical methods to such problems can present difficulties. In this paper, a semi-discrete Petrov-Galerkin finite element method for numerically solving adsorption problems with steep gradients in bidisperse solids is presented. The numerical solution was found to match the analytical solution when the adsorption isotherm is linear and the diffusivities are constant. Computed results for the Langmuir isotherm and a non-constant diffusivity in the microparticle are numerically evaluated for comparison with the results of a fitted-mesh collocation method proposed by Liu and Bhatia (Comput. Chem. Engng. 23 (1999) 933-943). The new method is simple, highly efficient, and well-suited to a variety of adsorption and desorption problems involving steep gradients. (C) 2001 Elsevier Science Ltd. All rights reserved.
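To make the Petrov-Galerkin idea concrete, here is a minimal sketch (my own illustration, not the paper's semi-discrete bidisperse adsorption solver): SUPG-weighted linear elements for 1-D steady advection-diffusion, where the Petrov-Galerkin test functions w + τ a w′ stabilize the steep boundary layer that plain Galerkin would render oscillatory on a coarse mesh. The function name and parameters are illustrative.

```python
import numpy as np

def advection_diffusion_supg(n_el=10, a=1.0, eps=0.01):
    """Solve -eps*u'' + a*u' = 0 on (0,1) with u(0)=0, u(1)=1, using linear
    elements and SUPG (streamline-upwind Petrov-Galerkin) test functions."""
    h = 1.0 / n_el
    Pe = a * h / (2.0 * eps)                                 # element Peclet number
    tau = (h / (2.0 * a)) * (1.0 / np.tanh(Pe) - 1.0 / Pe)   # "optimal" stabilization
    n = n_el + 1
    A = np.zeros((n, n))
    # element matrices for linear elements on a uniform mesh:
    # diffusion plus the SUPG term tau*a^2 * int(w' u') acts as added diffusion
    K = (eps + tau * a**2) / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    C = (a / 2.0) * np.array([[-1.0, 1.0], [-1.0, 1.0]])     # advection int(w a u')
    for e in range(n_el):
        A[e:e + 2, e:e + 2] += K + C
    b = np.zeros(n)
    # Dirichlet boundary conditions u(0)=0, u(1)=1
    A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = 0.0
    A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = 1.0
    u = np.linalg.solve(A, b)
    return np.linspace(0.0, 1.0, n), u
```

With this choice of τ the scheme is nodally exact for the constant-coefficient problem, so even 10 elements resolve a boundary layer of width ~eps without oscillations.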

Relevance:

80.00%

Publisher:

Abstract:

Cluster analysis for categorical data has been an active area of research. A well-known problem in this area is the determination of the number of clusters, which is unknown and must be inferred from the data. In order to estimate the number of clusters, one often resorts to information criteria, such as BIC (Bayesian information criterion), MML (minimum message length, proposed by Wallace and Boulton, 1968), and ICL (integrated classification likelihood). In this work, we adopt the approach developed by Figueiredo and Jain (2002) for clustering continuous data. They use an MML criterion to select the number of clusters and a variant of the EM algorithm to estimate the model parameters. This EM variant seamlessly integrates model estimation and selection in a single algorithm. For clustering categorical data, we assume a finite mixture of multinomial distributions and implement a new EM algorithm, following a previous version (Silvestre et al., 2008). Results obtained with synthetic datasets are encouraging. The main advantage of the proposed approach, when compared to the criteria referred to above, is the speed of execution, which is especially relevant when dealing with large data sets.

Relevance:

80.00%

Publisher:

Abstract:

Compositional schedulability analysis of hierarchical real-time systems is a well-studied problem. Various techniques have been developed to abstract the resource requirements of components in such systems, and schedulability has been addressed using these abstract representations (also called component interfaces). These approaches for compositional analysis incur resource overheads when they abstract components into interfaces. In this talk, we define notions of resource schedulability and optimality for component interfaces, and compare various approaches.

Relevance:

80.00%

Publisher:

Abstract:

In this paper we study a delay mathematical model for the dynamics of HIV in HIV-specific CD4+ T helper cells. We modify the model presented by Roy and Wodarz in 2012, where the HIV dynamics is studied considering a single CD4+ T cell population. Non-specific helper cells are included as an alternative target cell population, to account for macrophages and dendritic cells. In this paper, we include two types of delay: (1) a latent period between the time target cells are contacted by the virus particles and the time the virions enter the cells; and (2) a virus production period for new virions to be produced within and released from the infected cells. We compute the reproduction number of the model, R0, and the local stability of the disease free equilibrium and of the endemic equilibrium. We find that for values of R0<1, the model approaches asymptotically the disease free equilibrium. For values of R0>1, the model asymptotically approaches the endemic equilibrium. We observe numerically the phenomenon of backward bifurcation for values of R0⪅1. This statement will be proved in future work. We also vary the values of the latent period and the production period of infected cells and free virus. We conclude that increasing these values translates into a decrease of the reproduction number. Thus, a good strategy to control the HIV virus should focus on drugs that prolong the latent period and/or slow down the virus production. These results suggest that the model is mathematically and epidemiologically well-posed.

Relevance:

80.00%

Publisher:

Abstract:

Master's dissertation presented to ISPA - Instituto Universitário

Relevance:

80.00%

Publisher:

Abstract:

The dynamical analysis of large biological regulatory networks requires the development of scalable methods for mathematical modeling. Following the approach initially introduced by Thomas, we formalize the interactions between the components of a network in terms of discrete variables, functions, and parameters. Model simulations result in directed graphs, called state transition graphs. We are particularly interested in reachability properties and asymptotic behaviors, which correspond to terminal strongly connected components (or "attractors") in the state transition graph. A well-known problem is the exponential increase of the size of state transition graphs with the number of network components, in particular when using the biologically realistic asynchronous updating assumption. To address this problem, we have developed several complementary methods enabling the analysis of the behavior of large and complex logical models: (i) the definition of transition priority classes to simplify the dynamics; (ii) a model reduction method preserving essential dynamical properties; and (iii) a novel algorithm to compact state transition graphs and directly generate compressed representations, emphasizing relevant transient and asymptotic dynamical properties. The power of an approach combining these different methods is demonstrated by applying them to a recent multilevel logical model for the network controlling CD4+ T helper cell response to antigen presentation and to a dozen cytokines. This model accounts for the differentiation of canonical Th1 and Th2 lymphocytes, as well as of inflammatory Th17 and regulatory T cells, along with many hybrid subtypes. All these methods have been implemented into the software GINsim, which enables the definition, the analysis, and the simulation of logical regulatory graphs.
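The core objects of the abstract, asynchronous state transition graphs and their terminal strongly connected components, can be sketched on a toy Boolean network (this brute-force illustration is mine, not the GINsim algorithms; it is only practical for tiny networks, which is exactly the scalability problem the abstract addresses):

```python
import itertools
from collections import defaultdict

def async_stg(rules, n):
    """Asynchronous state transition graph of a Boolean network.
    rules[i](state) -> 0/1 gives the target value of component i; under
    asynchronous updating, each state has one outgoing transition per
    component whose target differs from its current value."""
    graph = defaultdict(set)
    for state in itertools.product((0, 1), repeat=n):
        graph[state]  # ensure stable states also appear as nodes
        for i, f in enumerate(rules):
            target = f(state)
            if target != state[i]:
                graph[state].add(state[:i] + (target,) + state[i + 1:])
    return dict(graph)

def reachable(graph, s):
    """States reachable from s (including s itself), by depth-first search."""
    seen, stack = {s}, [s]
    while stack:
        for v in graph[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def attractors(graph):
    """Attractors = terminal SCCs: states s such that every state reachable
    from s can reach s back, grouped into mutually reachable sets.
    (Brute force for small graphs; Tarjan's SCC algorithm scales better.)"""
    reach = {s: reachable(graph, s) for s in graph}
    core = [s for s in graph if all(s in reach[t] for t in reach[s])]
    atts, done = [], set()
    for s in core:
        if s not in done:
            comp = frozenset(t for t in core if t in reach[s])
            atts.append(comp)
            done |= comp
    return atts
```

For the classic toggle switch (x1 = not x2, x2 = not x1) this recovers the two stable states (1,0) and (0,1) as point attractors.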

Relevance:

80.00%

Publisher:

Abstract:

The estimation of camera egomotion is a well-established problem in computer vision. Many approaches have been proposed based on both the discrete and the differential epipolar constraint. The discrete case is mainly used in self-calibrated stereoscopic systems, whereas the differential case deals with a single moving camera. The article surveys several methods for mobile robot egomotion estimation, covering more than 0.5 million samples using synthetic data. Results from real data are also given.
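For reference, the discrete epipolar constraint mentioned above takes the standard textbook form, relating corresponding normalized image points of a calibrated camera pair through the essential matrix built from the rotation R and translation t between views:

```latex
\hat{x}'^{\top} E\, \hat{x} = 0, \qquad E = [\mathbf{t}]_{\times} R
```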

Relevance:

80.00%

Publisher:

Abstract:

In models where privately informed agents interact, agents may need to form higher order expectations, i.e. expectations of other agents' expectations. This paper develops a tractable framework for solving and analyzing linear dynamic rational expectations models in which privately informed agents form higher order expectations. The framework is used to demonstrate that the well-known problem of the infinite regress of expectations identified by Townsend (1983) can be approximated to an arbitrary accuracy with a finite dimensional representation under quite general conditions. The paper is constructive and presents a fixed point algorithm for finding an accurate solution and provides weak conditions that ensure that a fixed point exists. To help intuition, Singleton's (1987) asset pricing model with disparately informed traders is used as a vehicle for the paper.
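The fixed point algorithm the abstract refers to is specific to its state-space representation; as a generic stand-in, the skeleton below shows the basic contraction-mapping iteration that such algorithms build on (this is purely illustrative and not the paper's algorithm):

```python
def fixed_point(f, x0, tol=1e-12, max_iter=10_000):
    """Generic fixed-point iteration x_{t+1} = f(x_t). Converges when f is a
    contraction (Banach fixed-point theorem); stops when successive iterates
    differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")
```

For example, f(x) = 0.5x + 1 is a contraction with fixed point x = 2, which the iteration finds from any starting point.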

Relevance:

80.00%

Publisher:

Abstract:

The singularity in the Hawking-Turok model of open inflation has some appealing properties, such as the fact that its action is integrable. Also, if one thinks of the singularity as the boundary of spacetime, then the Gibbons-Hawking term is nonvanishing and finite. Here, we consider a model where the gravitational and scalar fields are coupled to a dynamical membrane. The singular instanton can then be obtained as the limit of a family of no-boundary solutions where both the geometry and the scalar field are regular. Using this procedure, the contribution of the singularity to the Euclidean action is just 1/3 of the Gibbons-Hawking term. Unrelated to this issue, we also point out that the singularity acts as a reflecting boundary for scalar perturbations and gravity waves. Therefore, the quantization of cosmological perturbations seems to be well posed in this background.

Relevance:

80.00%

Publisher:

Abstract:

This article describes a method for determining the polydispersity index Ip2=Mz/Mw of the molecular weight distribution (MWD) of linear polymeric materials from linear viscoelastic data. The method uses the Mellin transform of the relaxation modulus of a simple molecular rheological model. One of the main features of this technique is that it enables interesting MWD information to be obtained directly from dynamic shear experiments. It is not necessary to compute the relaxation spectrum, so the ill-posed problem is avoided. Furthermore, a particular shape of the continuous MWD does not have to be assumed in order to obtain the polydispersity index. The technique has been developed to deal with entangled linear polymers, whatever the form of the MWD. The rheological information required to obtain the polydispersity index is the storage G′(ω) and loss G″(ω) moduli, extending from the terminal zone to the plateau region. The method shows good agreement between the proposed theoretical approach and the experimental polydispersity indices of several linear polymers for a wide range of average molecular weights and polydispersity indices. It is also applicable to binary blends.
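The averages behind the index Ip2 = Mz/Mw follow the standard moment definitions of a molecular weight distribution; for a discrete MWD they can be computed directly (this helper is illustrative, the article's point being precisely that it recovers Ip2 without knowing the MWD):

```python
import numpy as np

def polydispersity_indices(M, n):
    """Average molecular weights and polydispersity indices from a discrete
    MWD: M[i] are molecular weights, n[i] the corresponding number fractions
    (or counts). Standard moment ratios of the distribution."""
    M, n = np.asarray(M, float), np.asarray(n, float)
    Mn = (n * M).sum() / n.sum()               # number-average
    Mw = (n * M**2).sum() / (n * M).sum()      # weight-average
    Mz = (n * M**3).sum() / (n * M**2).sum()   # z-average
    return {"Mn": Mn, "Mw": Mw, "Mz": Mz,
            "Ip1": Mw / Mn, "Ip2": Mz / Mw}
```

A monodisperse sample gives Ip2 = 1, and any mixture of chain lengths (e.g. a binary blend) gives Ip2 > 1.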

Relevance:

80.00%

Publisher:

Abstract:

Background: Conventional magnetic resonance imaging (MRI) techniques are highly sensitive in detecting multiple sclerosis (MS) plaques, enabling a quantitative assessment of inflammatory activity and lesion load. In quantitative analyses of focal lesions, manual or semi-automated segmentations have been widely used to compute the total number of lesions and the total lesion volume. These techniques, however, are both challenging and time-consuming, and are also prone to intra-observer and inter-observer variability. Aim: To develop an automated approach to segment brain tissues and MS lesions from brain MRI images. The goal is to reduce user interaction and to provide an objective tool that eliminates inter- and intra-observer variability. Methods: Based on the recent methods developed by Souplet et al. and de Boer et al., we propose a novel pipeline which includes the following steps: bias correction, skull stripping, atlas registration, tissue classification, and lesion segmentation. After the initial pre-processing steps, an MRI scan is automatically segmented into 4 classes: white matter (WM), grey matter (GM), cerebrospinal fluid (CSF) and partial volume. An expectation maximisation method which fits a multivariate Gaussian mixture model to T1-w, T2-w and PD-w images is used for this purpose. Based on the obtained tissue masks and using the estimated GM mean and variance, we apply an intensity threshold to the FLAIR image, which provides the lesion segmentation. With the aim of improving this initial result, spatial information coming from the neighbouring tissue labels is used to refine the final lesion segmentation. Results: The experimental evaluation was performed using real 1.5T data sets and the corresponding ground truth annotations provided by expert radiologists. The following values were obtained: a true positive (TP) fraction of 64%, a false positive (FP) fraction of 80%, and an average surface distance of 7.89 mm. The results of our approach were quantitatively compared to our implementations of the works of Souplet et al. and de Boer et al., obtaining higher TP and lower FP values. Conclusion: Promising MS lesion segmentation results have been obtained in terms of TP. However, the high number of FPs, which is still a well-known problem of all automated MS lesion segmentation approaches, has to be reduced before they can be used in standard clinical practice. Our future work will focus on tackling this issue.
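The GM-based intensity thresholding step of the pipeline can be sketched in a few lines. This is a simplified stand-in: the factor k and the function name are hypothetical choices of mine, whereas the pipeline described above derives its threshold from the GM mean and variance estimated by the fitted Gaussian mixture.

```python
import numpy as np

def lesion_threshold_mask(flair, gm_mask, k=3.0):
    """Hyperintense-lesion candidates on a FLAIR image: voxels brighter than
    the grey-matter mean by k standard deviations. gm_mask is a boolean array
    selecting GM voxels; k is an illustrative (hypothetical) choice."""
    gm = flair[gm_mask]
    threshold = gm.mean() + k * gm.std()
    return flair > threshold
```

In the full pipeline this initial mask would then be refined using the spatial information from neighbouring tissue labels.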

Relevance:

80.00%

Publisher:

Abstract:

The regulation of the electricity transmission and distribution business is an essential issue for any electricity market; it is widely implemented in the developed electricity markets of Great Britain, the Scandinavian countries, the United States of America and others. Markets which were liberalized recently also need a well-planned regulation model to be chosen and implemented. In open electricity markets the sectors of electricity distribution and transmission remain monopolies, so-called "natural monopolies", as introducing competition into these sectors in most cases appears to be inefficient. That is why regulation becomes very important, as its main tasks are: to set reasonable tariffs for customers, to ensure a non-discriminating process of electricity transmission and distribution, and at the same time to provide distribution companies with incentives to operate efficiently and the owners of the companies with reasonable profits; the problem of power quality should be solved at the same time. It should also be mentioned that there is no incentive scheme which is suitable for all conditions, which is why it is essential to study different regulation models in order to form the best one for a concrete situation. The aim of this Master's Thesis is to give an overview of the regulation of electricity transmission and distribution in Russia. First, general information about the theory of regulation of natural monopolies is described; the situation in the Russian network business and the importance of the regulation process for it are discussed next. Then there is a detailed description of the existing regulatory system and the process of tariff calculation, with an example. Finally, the work gives a brief analysis of the problems of the present scheme of regulation, an attempt to predict the further development of regulation in Russia, and the perspectives and risks connected to regulation which could face companies that try to enter the Russian electricity market (such as FORTUM OY).

Relevance:

80.00%

Publisher:

Abstract:

In this thesis, X-ray tomography is discussed from the Bayesian statistical viewpoint. The unknown parameters are assumed to be random variables and, in contrast to traditional methods, the solution is obtained as a large sample from the distribution of all possible solutions. As an introduction to tomography, an inversion formula for the Radon transform on a plane is presented. The widely used filtered backprojection algorithm is derived. The traditional regularization methods are presented sufficiently to ground the Bayesian approach. The measurements are photon counts at the detector pixels, so the assumption of a Poisson-distributed measurement error is justified. Often the error is assumed Gaussian, although the electronic noise caused by the measurement device can change the error structure. The assumption of Gaussian measurement error is discussed. The thesis also discusses the use of different prior distributions in X-ray tomography. Especially in severely ill-posed problems, the choice of a suitable prior is the main part of the whole solution process. In the empirical part, the presented prior distributions are tested using simulated measurements, and the effects that different prior distributions produce are shown. The use of a suitable prior is shown to be essential in the case of a severely ill-posed problem.
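The Bayesian setup described above combines a Poisson likelihood for the photon counts with a prior on the image. A minimal sketch of the resulting (unnormalized) log-posterior is given below; the quadratic smoothness prior is one illustrative stand-in for the priors the thesis compares, and A stands for the discretized projection (Radon) operator:

```python
import numpy as np

def log_posterior(x, A, y, alpha):
    """Unnormalized log-posterior for X-ray tomography with a Poisson
    likelihood, y_i ~ Poisson((A x)_i), and an illustrative quadratic
    smoothness prior, log p(x) = -alpha * sum_j (x_{j+1} - x_j)^2 + const.
    A is the projection matrix, y the measured photon counts."""
    mu = A @ x
    if np.any(x < 0) or np.any(mu <= 0):
        return -np.inf                            # attenuation must be non-negative
    log_lik = np.sum(y * np.log(mu) - mu)         # Poisson terms, up to log(y_i!)
    log_prior = -alpha * np.sum(np.diff(x) ** 2)
    return log_lik + log_prior
```

A sampler (e.g. MCMC) would then draw the "large sample of all possible solutions" from this posterior; with alpha = 0 the maximizer is simply the maximum-likelihood reconstruction.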