994 results for quasi-linear utility


Relevance:

20.00%

Publisher:

Abstract:

We present two new stabilized high-resolution numerical methods, one for the convection–diffusion–reaction (CDR) equation and one for the Helmholtz equation. The work begins with an a priori analysis of consistency recovery procedures for several stabilization methods belonging to the Petrov–Galerkin framework. We find that standard tools for designing essentially non-oscillatory numerical methods (e.g. M-matrix theory) are not applicable when consistency recovery methods are employed; hence, with respect to convective stabilization, such recovery methods are not preferred. Next, we present the design of a high-resolution Petrov–Galerkin (HRPG) method for the 1D CDR problem. The problem is studied from a fresh point of view, including practical implications for the maximum principle, M-matrix theory, monotonicity and total variation diminishing (TVD) finite volume schemes. The method follows the line of earlier schemes that may be viewed as an upwinding operator plus a discontinuity-capturing operator, and some remarks are made on its extension to multiple dimensions. Finally, we present a new numerical scheme for the Helmholtz equation that yields quasi-exact solutions. The focus is on approximating the solution to the Helmholtz equation in the interior of the domain using compact stencils. Piecewise linear/bilinear polynomial interpolation is considered on a structured mesh/grid, and the only a priori requirement is a mesh/grid resolution of at least eight elements per wavelength. No stabilization parameters are involved in the definition of the scheme, which consists of taking the average of the equation stencils obtained by the standard Galerkin finite element method and the classical finite difference method. Dispersion analyses in 1D and 2D illustrate the quasi-exact properties of this scheme. Finally, some remarks are made on the extension of the scheme to unstructured meshes by designing a method within the Petrov–Galerkin framework.
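To make the averaging idea concrete, here is a minimal 1D dispersion check in Python (our illustration, not the paper's implementation): for u'' + k²u = 0 on a uniform grid, the interior stencils of the finite difference and linear finite element methods are averaged, and the resulting numerical wavenumber is compared with the exact one at the prescribed eight elements per wavelength.

```python
import numpy as np

# 1D Helmholtz, u'' + k^2 u = 0, on a uniform grid of spacing h.
# Interior stencils have the symmetric form A*u[j-1] + B*u[j] + A*u[j+1] = 0;
# inserting the plane wave u[j] = exp(1j*kn*j*h) gives cos(kn*h) = -B/(2A),
# from which the numerical wavenumber kn follows.

def numerical_wavenumber(A, B, h):
    return np.arccos(np.clip(-B / (2 * A), -1.0, 1.0)) / h

k = 2 * np.pi                 # exact wavenumber (wavelength = 1)
h = (2 * np.pi / k) / 8       # eight elements per wavelength

stencils = {
    # classical central finite differences
    "FDM": (1 / h**2, -2 / h**2 + k**2),
    # Galerkin FEM, linear elements, consistent mass row (1/6, 4/6, 1/6)
    "FEM": (1 / h**2 + k**2 / 6, -2 / h**2 + 4 * k**2 / 6),
}
# the scheme described above: average the two stencils
stencils["averaged"] = tuple((a + b) / 2
                             for a, b in zip(stencils["FDM"], stencils["FEM"]))

for name, (A, B) in stencils.items():
    kn = numerical_wavenumber(A, B, h)
    print(f"{name:9s} relative dispersion error: {abs(kn - k) / k:.1e}")
```

At this resolution the averaged stencil's phase error drops by more than an order of magnitude relative to either parent scheme, consistent with the quasi-exact behaviour described above.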

Relevance:

20.00%

Publisher:

Abstract:

The paper focuses on the argumentative process through which new international norms prohibiting the use of weapons that cause severe civilian harm emerge. It examines the debate surrounding the use and usefulness of landmines and cluster munitions, and traces the process through which NGOs change conceptions of the military utility and effectiveness of certain weapons by highlighting their humanitarian problems and questioning their military value. By challenging military thinking on these issues, NGOs redefine the terms of the debate – from a commonplace practice, the use of such weapons becomes controversial, and military decisions need to be justified. The argument–counterargument dynamic shifts the burden of proving the necessity and safety of the weapons to their users. The process demonstrates the ability of NGOs to influence debates on military issues despite their disadvantaged position in hard security issue areas. It also challenges the realist assumption that only weapons that are obsolete, or that are low-cost force equalizers for weak actors, can be banned. On the contrary, the paper shows that in the case of landmines and cluster munitions, defining the military (in)effectiveness of the weapons is part and parcel of the struggle for their prohibition.

Relevance:

20.00%

Publisher:

Abstract:

In principle, we should be glad that Eric Kmiec and his colleagues published in Science's STKE (1) a detailed experimental protocol of their gene repair method (2, 3). However, a careful reading of their contribution raises further doubts about the method. The research published in Science five years ago by Kmiec and his colleagues was said to demonstrate that chimeric RNA-DNA oligonucleotides could correct the mutation responsible for sickle cell anemia with 50% efficiency (4). Such a remarkable result prompted many laboratories to attempt to replicate the research or apply the method to their own systems. However, if the method worked at all, which it rarely did, the achieved efficiency was usually lower by several orders of magnitude. Now, in the Science's STKE protocol, we are given crucial information about the method and why it is so important to use these expensive chimeric RNA-DNA constructs. In the introduction we are told that the RNA-DNA duplex is more stable than a DNA-DNA duplex and so extends the half-life of the complexes formed between the targeted DNA and the chimeric RNA-DNA oligonucleotides. This logical explanation, however, conflicts with the statement in the section entitled "Transfection with Oligonucleotides and Plasmid DNA" that Kmiec and colleagues have recently demonstrated that classical single-stranded DNA oligonucleotides with a few protective phosphorothioate linkages have a "gene repair conversion frequency rivaling that of the RNA/DNA chimera". Indeed, the research cited for that result actually states that single-stranded DNA oligonucleotides are in fact several-fold more efficient (3.7-fold) than the RNA-DNA chimeric constructs (5). If that is the case, it raises the question of why Kmiec and colleagues emphasize the importance of the RNA in their original chimeric constructs. Their own new results show that modified single-stranded DNA oligonucleotides are more effective than the expensive RNA-DNA hybrids. Moreover, the current efficiency of gene repair by RNA-DNA hybrids, according to Kmiec and colleagues in their recent paper, is only 4×10⁻⁴, even after several hours of pre-selection permitting multiplication of bacterial cells carrying the corrected plasmid (5). This efficiency is much lower than the 50% value reported five years ago, but is assuredly much closer to reality.

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we establish lower and upper Gaussian bounds for the probability density of the mild solution to the stochastic heat equation with multiplicative noise and in any space dimension. The driving perturbation is a Gaussian noise which is white in time with some spatially homogeneous covariance. These estimates are obtained using tools of the Malliavin calculus. The most challenging part is the lower bound, which is obtained by adapting a general method developed by Kohatsu-Higa to the underlying spatially homogeneous Gaussian setting. Both lower and upper estimates have the same form: a Gaussian density with a variance which is equal to that of the mild solution of the corresponding linear equation with additive noise.
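Schematically, two-sided estimates of this type take the following shape, where p denotes the density of the mild solution u(t,x), σ_t² is the variance of the mild solution of the corresponding linear equation with additive noise, and c₁, c₂, C₁, C₂ and the centering m are placeholder constants (the precise statement is in the paper):

```latex
\frac{c_1}{\sigma_t}\,
\exp\!\left(-\frac{(y-m)^2}{c_2\,\sigma_t^{2}}\right)
\;\le\;
p_{u(t,x)}(y)
\;\le\;
\frac{C_1}{\sigma_t}\,
\exp\!\left(-\frac{(y-m)^2}{C_2\,\sigma_t^{2}}\right),
\qquad y \in \mathbb{R}.
```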

Relevance:

20.00%

Publisher:

Abstract:

In the economics literature, information deficiencies and computational complexities have traditionally been addressed through the aggregation of agents and institutions. In input-output modelling, researchers have been interested in the aggregation problem since the beginning of the 1950s. Extending the conventional input-output aggregation approach to social accounting matrix (SAM) models may help to identify the effects caused by the information problems and data deficiencies that usually appear in the SAM framework. This paper develops the theory of aggregation and applies it to the social accounting matrix model of multipliers. First, we define the concept of linear aggregation in a SAM database context. Second, we define the aggregated partitioned matrices of multipliers that are characteristic of the SAM approach. Third, we extend the analysis to related concepts, such as aggregation bias and consistency in aggregation. Finally, we provide an illustrative example that shows the effects of aggregating a social accounting matrix model.
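As a toy illustration of these definitions (made-up numbers, not the paper's data), the following Python sketch aggregates a three-account SAM coefficient matrix into two accounts with a 0-1 grouping matrix and a share-based weighting matrix, and computes the resulting aggregation bias of the multipliers; consistency in aggregation corresponds to this bias vanishing.

```python
import numpy as np

# Toy linear aggregation of a SAM multiplier model (illustrative numbers).
# Accounts 0 and 1 are grouped into one aggregate account; account 2 stays.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.20, 0.10, 0.05]])        # SAM coefficient matrix
M = np.linalg.inv(np.eye(3) - A)          # disaggregated multipliers (I - A)^-1

G = np.array([[1, 1, 0],
              [0, 0, 1]], dtype=float)    # 0-1 grouping (aggregation) matrix

# Column weights: shares of each original account within its group,
# taken here from a hypothetical reference expenditure vector x.
x = np.array([2.0, 1.0, 3.0])
W = (G * x).T / (G @ x)                   # weighting matrix, so that G @ W = I

A_agg = G @ A @ W                         # aggregated coefficient matrix
M_agg = np.linalg.inv(np.eye(2) - A_agg)  # multipliers of the aggregated model

# Aggregation bias: aggregate the disaggregated multipliers and compare.
bias = G @ M @ W - M_agg
print("aggregation bias:\n", bias)
```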

Relevance:

20.00%

Publisher:

Abstract:

Graph pebbling is a network model for studying whether a given supply of discrete pebbles can satisfy a given demand via pebbling moves. A pebbling move across an edge of a graph takes two pebbles from one endpoint and places one pebble at the other endpoint; the other pebble is lost in transit as a toll. It has been shown that deciding whether a supply can meet a demand on a graph is NP-complete. The pebbling number of a graph is the smallest t such that every supply of t pebbles can satisfy every demand of one pebble. Deciding whether the pebbling number is at most k is Π₂^P-complete. In this paper we develop a tool, called the Weight Function Lemma, for computing upper bounds and sometimes exact values for pebbling numbers with the assistance of linear optimization. With this tool we are able to calculate the pebbling numbers of much larger graphs than previous algorithms could handle, and much more quickly as well. We also obtain results for many families of graphs, in many cases by hand, with much simpler and remarkably shorter proofs than the previously existing arguments (certificates typically of size at most the number of vertices times the maximum degree), especially for highly symmetric graphs. Here we apply the Weight Function Lemma to several specific graphs, including the Petersen, Lemke, 4th weak Bruhat, and Lemke squared graphs and two random graphs, as well as to a number of infinite families of graphs, such as trees, cycles, graph powers of cycles, cubes, and some generalized Petersen and Coxeter graphs. This partly answers a question of Pachter et al. by computing the pebbling exponent of cycles to within an asymptotically small range. It is conceivable that this method yields an approximation algorithm for graph pebbling.
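For concreteness, here is a small brute-force Python sketch of the basic definitions (ours, purely for illustration; it is exhaustive search, not the Weight Function Lemma), computing the pebbling number of the 5-cycle. Exhaustive search like this is only feasible on tiny graphs, which is precisely the gap the LP-based approach addresses.

```python
from functools import lru_cache
from itertools import combinations_with_replacement

# The 5-cycle C5 as an adjacency list.
ADJ = {0: (1, 4), 1: (0, 2), 2: (1, 3), 3: (2, 4), 4: (3, 0)}
N = len(ADJ)

@lru_cache(maxsize=None)
def solvable(dist, target):
    """Can some sequence of pebbling moves place a pebble on `target`?"""
    if dist[target] >= 1:
        return True
    for v in range(N):
        if dist[v] >= 2:                 # a move needs two pebbles at v
            for u in ADJ[v]:
                nd = list(dist)
                nd[v] -= 2               # two pebbles leave v ...
                nd[u] += 1               # ... one arrives at u (one is the toll)
                if solvable(tuple(nd), target):
                    return True
    return False

def pebbling_number(max_t=12):
    """Smallest t such that every t-pebble supply meets every 1-pebble demand."""
    for t in range(1, max_t + 1):
        if all(
            solvable(tuple(d.count(v) for v in range(N)), r)
            for d in combinations_with_replacement(range(N), t)
            for r in range(N)
        ):
            return t

print(pebbling_number())   # known value: the pebbling number of C5 is 5
```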

Relevance:

20.00%

Publisher:

Abstract:

The problem of finding a feasible solution to a linear inequality system arises in numerous contexts. In [12], the authors proposed an algorithm, called the extended relaxation method, that solves the feasibility problem, and proved its convergence. In this paper, we consider a class of extended relaxation methods depending on a parameter and prove their convergence. Numerical experiments are provided as well.
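For orientation, here is a minimal Python sketch of the classical relaxation scheme (Agmon, Motzkin–Schoenberg) that such methods extend; the parametrized extended variants of the paper are not reproduced here, and lam below is simply the classical relaxation parameter in (0, 2).

```python
import numpy as np

def relaxation_feasibility(A, b, lam=1.5, tol=1e-10, max_iter=10_000):
    """Seek x with A @ x <= b by relaxed projections onto violated half-spaces."""
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        residuals = A @ x - b
        i = np.argmax(residuals)           # most violated inequality
        if residuals[i] <= tol:
            return x                       # feasible point found
        a = A[i]
        # Relaxed projection onto the half-space a^T x <= b_i.
        x = x - lam * residuals[i] / (a @ a) * a
    raise RuntimeError("no feasible point found within max_iter")

# Tiny example: the triangle x >= 0, y >= 0, x + y <= 1.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])
print(relaxation_feasibility(A, b))
```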

Relevance:

20.00%

Publisher:

Abstract:

IPH welcomes the Regulator’s Social Action Plan as one of a range of policy measures needed to tackle escalating fuel poverty in Northern Ireland. The Social Action Plan relates to how energy suppliers and networks respond to the needs of vulnerable customers. The submission discusses the definition of vulnerable customers used by energy suppliers and calls for special consideration of householders with multiple vulnerabilities. IPH also calls for special attention to be paid to the development of appropriate social tariffs and supports for debt management.

Key messages

•    The Institute of Public Health in Ireland (IPH) views this Social Action Plan as a welcome contribution to the range of policy measures needed to tackle escalating fuel poverty in Northern Ireland.
•    The activities and ethos of energy suppliers play a significant role in alleviating fuel poverty and the threats posed to health by living in a cold, damp and energy-inefficient home.
•    IPH shares the view of the World Health Organisation that more evidence is needed to demonstrate the real impact of corporate social responsibility in the provision of goods and services vital to health and well-being, such as fuel and water.

Relevance:

20.00%

Publisher:

Abstract:

We study preconditioning techniques for discontinuous Galerkin discretizations of isotropic linear elasticity problems in primal (displacement) formulation. We propose subspace correction methods, based on a splitting of the vector-valued piecewise linear discontinuous finite element space, that are optimal with respect to the mesh size and the Lamé parameters. The pure displacement, mixed, and traction-free problems are discussed in detail. We present a convergence analysis of the proposed preconditioners and include numerical examples that validate the theory and assess their performance.
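The general framework can be sketched as follows (a generic additive subspace-correction preconditioner in Python; the paper's specific splitting of the DG elasticity space and its parameter-robustness analysis are not reproduced here). Given matrices P_i whose columns span the subspaces of a splitting V = V_1 + ... + V_m, the preconditioner acts as B r = sum_i P_i (P_i^T A P_i)^{-1} P_i^T r.

```python
import numpy as np

def additive_subspace_preconditioner(A, prolongations):
    """Build r -> sum_i P_i (P_i^T A P_i)^{-1} P_i^T r for subspace matrices P_i."""
    factors = [(P, np.linalg.inv(P.T @ A @ P)) for P in prolongations]
    def apply(r):
        return sum(P @ (Ainv @ (P.T @ r)) for P, Ainv in factors)
    return apply

# Tiny usage example: split R^4 into the even- and odd-index subspaces.
A = np.diag([4.0, 3.0, 2.0, 5.0]) + 0.5 * np.eye(4, k=1) + 0.5 * np.eye(4, k=-1)
P1 = np.eye(4)[:, [0, 2]]
P2 = np.eye(4)[:, [1, 3]]
B = additive_subspace_preconditioner(A, [P1, P2])
print(B(np.ones(4)))   # one application of the preconditioner to a residual
```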

Relevance:

20.00%

Publisher:

Abstract:

PURPOSE: To determine the local control and complication rates for children with papillary and/or macular retinoblastoma progressing after chemotherapy and undergoing stereotactic radiotherapy (SRT) with a micromultileaf collimator. METHODS AND MATERIALS: Between 2004 and 2008, 11 children (15 eyes) with macular and/or papillary retinoblastoma were treated with SRT. The mean age was 19 months (range, 2-111). Of the 15 eyes, 7, 6, and 2 were classified as International Classification of Intraocular Retinoblastoma Group B, C, and E, respectively. The delivered dose of SRT was 50.4 Gy in 28 fractions using a dedicated micromultileaf collimator linear accelerator. RESULTS: The median follow-up was 20 months (range, 13-39). Local control was achieved in 13 eyes (87%). The actuarial 1- and 2-year local control rates were both 82%. SRT was well tolerated. Late adverse events were reported in 4 patients. Of the 4 patients, 2 had developed focal microangiopathy 20 months after SRT; 1 had developed a transient recurrence of retinal detachment; and 1 had developed bilateral cataracts. No optic neuropathy was observed. CONCLUSIONS: Linear accelerator-based SRT for papillary and/or macular retinoblastoma in children resulted in excellent tumor control rates with acceptable toxicity. Additional research regarding SRT and its intrinsic organ-at-risk sparing capability is justified in the framework of prospective trials.

Relevance:

20.00%

Publisher:

Abstract:

Significant progress has been made on the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the scale of a field site represents a major, and as yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The main objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity logged at collocated wells and to surface resistivity measurements, which are available throughout the studied site. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. A stochastic integration of low-resolution, large-scale electrical resistivity tomography (ERT) data with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities is then applied. The overall viability of this downscaling approach is tested and validated by comparing flow and transport simulations through the original and the upscaled hydraulic conductivity fields. Our results indicate that the proposed procedure yields remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
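The kernel-density link at the heart of such a procedure can be sketched in a few lines of Python (synthetic stand-in data and a hypothetical petrophysical relation, purely for illustration): a non-parametric joint density of log electrical and hydraulic conductivity is estimated from collocated logs, and hydraulic conductivity values are then drawn from the conditional distribution given a geophysical estimate.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic collocated "well log" data standing in for real measurements.
log_sigma = rng.normal(-2.0, 0.5, 500)                 # log electrical conductivity
log_K = 1.5 * log_sigma + rng.normal(0.0, 0.3, 500)    # hypothetical relation to log K

# Non-parametric multivariate kernel density of the joint distribution.
joint = gaussian_kde(np.vstack([log_sigma, log_K]))

def sample_logK_given_sigma(s, n=1, grid=np.linspace(-8.0, 2.0, 400)):
    """Draw log-K values from the KDE-based conditional p(log K | log sigma = s)."""
    cond = joint(np.vstack([np.full_like(grid, s), grid]))
    cond /= cond.sum()                                 # normalize on the grid
    return rng.choice(grid, size=n, p=cond)

print(sample_logK_given_sigma(-2.0, n=5))
```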

Relevance:

20.00%

Publisher:

Abstract:

This paper introduces local distance-based generalized linear models. These models extend (weighted) distance-based linear models, first with the generalized linear model concept and then by localizing. Distances between individuals are the only predictor information needed to fit these models; they are therefore applicable to mixed (qualitative and quantitative) explanatory variables, or when the regressor is of functional type. Models can be fitted and analysed with the R package dbstats, which implements several distance-based prediction methods.
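To fix ideas, here is a deliberately simplified Python sketch of prediction from distances alone (a kernel-weighted local average, not the dbstats implementation; the point is only that pairwise distances suffice as predictor information, whatever the type of the underlying explanatory variables).

```python
import numpy as np

def local_distance_prediction(d_new, y, bandwidth=1.0):
    """Predict a response from the distances d_new between a new individual
    and the n training individuals, weighting nearby responses more heavily."""
    w = np.exp(-(d_new / bandwidth) ** 2)   # Gaussian kernel weights
    return np.sum(w * y) / np.sum(w)

# Toy data: the distances could come from any dissimilarity suited to mixed
# or functional data (e.g. Gower's coefficient); here they are made up.
d_new = np.array([0.2, 0.5, 1.3, 2.0])
y = np.array([1.0, 1.5, 3.0, 4.2])
print(local_distance_prediction(d_new, y))
```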