944 results for Almost Optimal Density Function
Abstract:
Vectorial Boolean function, almost bent, almost perfect nonlinear, affine equivalence, CCZ-equivalence
Abstract:
The presence of subcentres cannot be captured by an exponential function. Cubic spline functions seem more appropriate to depict the polycentricity pattern of modern urban systems. Using data from the Barcelona Metropolitan Region, two possible population subcentre delimitation procedures are discussed: one takes an estimated derivative equal to zero, the other a density gradient equal to zero. It is argued that, when using a cubic spline function, a delimitation strategy based on derivatives is more appropriate than one based on gradients, because the estimated density can be negative in sections with very low densities and few observations, leading to sudden changes in estimated gradients. It is also argued that using a second derivative equal to zero as the criterion for subcentre delimitation allows us to capture a more restricted subcentre area than using a first derivative equal to zero. This methodology can also be used for intermediate ring delimitation.
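To make the derivative-based criterion concrete, here is a minimal, hypothetical sketch in Python: a smoothing spline is fitted to a synthetic density-distance profile and subcentre candidates are read off as interior zeros of the first derivative where the second derivative is negative. The data, the smoothing level and the quartic order (chosen so that `roots()` applies to the derivative) are all our assumptions, not the study's specification.

```python
# Illustrative sketch only: synthetic data, not the Barcelona Metropolitan
# Region dataset used in the study.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
distance = np.linspace(0.0, 30.0, 120)                  # km from the CBD
density = (8 * np.exp(-0.15 * distance)                 # monocentric decay
           + 3 * np.exp(-0.5 * (distance - 18) ** 2)    # a subcentre at 18 km
           + rng.normal(0.0, 0.1, distance.size))       # measurement noise

# k=4 (quartic) so that the first derivative is a cubic spline, the only
# order for which scipy's roots() is implemented.
spline = UnivariateSpline(distance, density, k=4, s=2.0)
d1, d2 = spline.derivative(1), spline.derivative(2)

# Subcentre candidates: interior zeros of the first derivative that are
# local maxima (negative second derivative).
subcentres = [r for r in d1.roots() if r > 0 and d2(r) < 0]
print("candidate subcentre locations (km):", np.round(subcentres, 2))
```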
Abstract:
We study the existence theory for parabolic variational inequalities in weighted L2 spaces with respect to excessive measures associated with a transition semigroup. We characterize the value function of optimal stopping problems for finite and infinite dimensional diffusions as a generalized solution of such a variational inequality. The weighted L2 setting allows us to cover some singular cases, such as optimal stopping for stochastic equations with degenerate diffusion coefficient. As an application of the theory, we consider the pricing of American-style contingent claims. Among others, we treat the cases of assets with stochastic volatility and with path-dependent payoffs.
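The American-claim application lends itself to a small illustration. The sketch below prices an American put with the standard Cox-Ross-Rubinstein binomial tree, which solves the same optimal stopping problem by backward induction; it is a textbook stand-in, not the paper's variational-inequality method, and all parameter values are invented.

```python
# Textbook CRR binomial sketch of American put pricing (optimal stopping by
# backward induction); not the weighted-L2 variational approach of the paper.
import math

def american_put_crr(s0, strike, r, sigma, maturity, steps=500):
    dt = maturity / steps
    u = math.exp(sigma * math.sqrt(dt))      # up factor
    d = 1.0 / u                              # down factor
    q = (math.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    disc = math.exp(-r * dt)

    # payoffs at maturity, indexed by number of up moves j
    values = [max(strike - s0 * u**j * d**(steps - j), 0.0)
              for j in range(steps + 1)]
    # backward induction with an early-exercise check at every node
    for n in range(steps - 1, -1, -1):
        for j in range(n + 1):
            cont = disc * (q * values[j + 1] + (1 - q) * values[j])
            exercise = max(strike - s0 * u**j * d**(n - j), 0.0)
            values[j] = max(cont, exercise)
    return values[0]

print(american_put_crr(s0=100, strike=100, r=0.05, sigma=0.2, maturity=1.0))
```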
Abstract:
Measuring productive efficiency provides information on the likely effects of regulatory reform. We present a Data Envelopment Analysis (DEA) of a sample of 38 vehicle inspection units under a concession regime, between the years 2000 and 2004. The differences in efficiency scores show the potential technical efficiency benefit of introducing some form of incentive regulation or of progressing towards liberalization. We also compute scale efficiency scores, showing that only units in territories with very low population density operate at a sub-optimal scale. Among those that operate at an optimal scale, there are significant differences in size; the largest ones operate in territories with the highest population density. This suggests that the introduction of new units in the most densely populated territories (a likely effect of some form of liberalization) would not be detrimental in terms of scale efficiency. We also find that inspection units belonging to a large, diversified firm show higher technical efficiency, reflecting economies of scale or scope at the firm level. Finally, we show that between 2002 and 2004, a period of high regulatory uncertainty in the sample’s region, technical change was almost zero. Regulatory reform should take due account of scale and diversification effects, while at the same time avoiding regulatory uncertainty.
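For readers unfamiliar with DEA, the following is a hedged sketch of the input-oriented CCR efficiency score formulated as a linear program, using `scipy.optimize.linprog` and invented data for four units with two inputs and one output; the study's actual sample of 38 inspection units and its variable definitions are not reproduced here.

```python
# Input-oriented CCR DEA as an LP: minimise theta subject to
#   X @ lam <= theta * x_o   (inputs),   Y @ lam >= y_o   (outputs),  lam >= 0.
# Data below are illustrative, not the study's.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2., 3., 5., 6.],     # inputs  (rows: input types, cols: units)
              [1., 2., 2., 4.]])
Y = np.array([[3., 5., 6., 8.]])    # outputs (rows: output types)

def ccr_efficiency(o):
    m, n = X.shape
    c = np.concatenate(([1.0], np.zeros(n)))          # variables: [theta, lam]
    A_in = np.hstack((-X[:, [o]], X))                 # X lam - theta x_o <= 0
    A_out = np.hstack((np.zeros((Y.shape[0], 1)), -Y))  # -Y lam <= -y_o
    res = linprog(c,
                  A_ub=np.vstack((A_in, A_out)),
                  b_ub=np.concatenate((np.zeros(m), -Y[:, o])),
                  bounds=[(0, None)] * (n + 1))
    return res.fun                                    # optimal theta in (0, 1]

for o in range(X.shape[1]):
    print(f"unit {o}: efficiency = {ccr_efficiency(o):.3f}")
```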
Abstract:
We study the screening problem that arises in a framework where, initially, the agent is privately informed about both the expected production cost and the cost variability and, at a later stage, he learns privately the cost realization. The specific set of relevant incentive constraints, and so the characteristics of the optimal mechanism, depend finely upon the curvature of the principal's marginal surplus function as well as the relative importance of the two initial information problems. Pooling of production levels is optimally induced with respect to the cost variability when the principal's knowledge imperfection about the latter is sufficiently less important than that about the expected cost.
Abstract:
In the theoretical macroeconomics literature, fiscal policy is almost uniformly taken to mean taxing and spending by a ‘benevolent government’ that exploits the potential aggregate demand externalities inherent in the imperfectly competitive nature of goods markets. Whilst shown to raise aggregate output and employment, these policies crowd-out private consumption and hence typically reduce welfare. In this paper we consider the use of ‘tax-and-subsidise’ instead of ‘tax-and-spend’ policies on account of their widespread use by governments, even in the recent recession, to stimulate economic activity. Within a static general equilibrium macro-model with imperfectly competitive good markets we examine the effect of wage and output subsidies and show that, for a small open economy, positive tax and subsidy rates exist which maximise welfare, rendering no intervention as a suboptimal state. We also show that, within a two-country setting, a Nash non-cooperative symmetric equilibrium with positive tax and subsidy rates exists, and that cooperation between trading partners in setting these rates is more expansionary and leads to an improvement upon the non-cooperative solution.
Abstract:
In a market in which sellers compete by posting mechanisms, we study how the properties of the meeting technology affect the mechanism that sellers select. In general, sellers have an incentive to use mechanisms that are socially efficient. In our environment, sellers achieve this by posting an auction with a reserve price equal to their own valuation, along with a transfer that is paid by (or to) all buyers with whom the seller meets. However, we define a novel condition on meeting technologies, which we call “invariance,” and show that the transfer is equal to zero if and only if the meeting technology satisfies this condition.
Abstract:
Time-inconsistency is an essential feature of many policy problems (Kydland and Prescott, 1977). This paper presents and compares three methods for computing Markov-perfect optimal policies in stochastic nonlinear business cycle models. The methods considered include value function iteration, generalized Euler-equations, and parameterized shadow prices. In the context of a business cycle model in which a fiscal authority chooses government spending and income taxation optimally, while lacking the ability to commit, we show that the solutions obtained using value function iteration and generalized Euler equations are somewhat more accurate than that obtained using parameterized shadow prices. Among these three methods, we show that value function iteration can be applied easily, even to environments that include a risk-sensitive fiscal authority and/or inequality constraints on government spending. We show that the risk-sensitive fiscal authority lowers government spending and income taxation, reducing the disincentive households face to accumulate wealth.
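As a point of reference for the first of these methods, here is a minimal value function iteration sketch on the textbook deterministic growth model with log utility; the paper's model, with an optimizing fiscal authority and stochastic shocks, is considerably richer, and all parameters below are illustrative.

```python
# Value function iteration on a deterministic growth model (log utility,
# Cobb-Douglas production) -- a toy stand-in, not the paper's fiscal model.
import numpy as np

alpha, beta = 0.36, 0.95
grid = np.linspace(0.05, 10.0, 300)                 # capital grid
V = np.zeros(grid.size)

# consumption[i, j] = f(k_i) - k'_j for every (state, choice) pair
consumption = grid[:, None] ** alpha - grid[None, :]
utility = np.where(consumption > 0,
                   np.log(np.maximum(consumption, 1e-12)), -np.inf)

for _ in range(1000):                               # Bellman iteration
    V_new = np.max(utility + beta * V[None, :], axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = grid[np.argmax(utility + beta * V[None, :], axis=1)]
idx = np.argmin(np.abs(policy - grid))              # fixed point of the policy
print("approximate steady-state capital:", round(grid[idx], 3))
```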
Abstract:
We study the lysis timing of a bacteriophage population by means of a continuously infection-age-structured population dynamics model. The features of the model are the infection process of bacteria, the natural death process, and the lysis process, i.e. the replication of bacteriophage viruses inside bacteria and their subsequent destruction. We consider that the length of the lysis timing (or latent period) is distributed according to a general probability distribution function. We have carried out an optimization procedure and have found the latent period corresponding to the maximal fitness (i.e. maximal growth rate) of the bacteriophage population.
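The optimization step can be illustrated with a deliberately crude stand-in model: assume burst size rises linearly after an eclipse phase and that the growth rate solves a Euler-Lotka-type equation for a fixed lysis age. Both assumptions are ours; the paper works with a general latent-period distribution.

```python
# Toy fitness maximisation over the latent period. The burst-size law and
# the Euler-Lotka-style growth rate are illustrative assumptions, not the
# paper's infection-age-structured model.
import numpy as np
from scipy.optimize import minimize_scalar

eclipse, rate, mortality = 15.0, 10.0, 0.01   # min, virions/min, 1/min

def burst_size(tau):
    return rate * (tau - eclipse)             # linear rise after eclipse

def growth_rate(tau):
    # r solving b(tau) * exp(-(m + r) * tau) = 1 for a fixed lysis age tau
    return np.log(burst_size(tau)) / tau - mortality

opt = minimize_scalar(lambda t: -growth_rate(t),
                      bounds=(eclipse + 0.1, 120.0), method="bounded")
print(f"optimal latent period ~ {opt.x:.1f} min, fitness ~ {-opt.fun:.4f}/min")
```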
Abstract:
When using a polynomial approximating function, the most contentious aspect of the Heat Balance Integral Method is the choice of power of the highest order term. In this paper we employ a method recently developed for thermal problems, where the exponent is determined during the solution process, to analyse Stefan problems. This is achieved by minimising an error function. The solution requires no knowledge of an exact solution and generally produces significantly better results than all previous HBI models. The method is illustrated by first applying it to standard thermal problems. A Stefan problem with an analytical solution is then discussed and results compared to the approximate solution. An ablation problem is also analysed and results compared against a numerical solution. In both examples the agreement is excellent. A Stefan problem where the boundary temperature increases exponentially is analysed. This highlights the difficulties that can be encountered with a time-dependent boundary condition. Finally, melting with a time-dependent flux is briefly analysed, without analytical or numerical results against which to assess the accuracy.
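A hedged numerical sketch of the idea of determining the exponent during the solution process: for the classic fixed-surface-temperature problem, take the standard HBIM profile u = (1 - x/delta)^n with penetration depth delta = sqrt(2 n (n+1) alpha t), and choose n to minimise the integrated squared residual of the heat equation, with no reference to an exact solution. The specific error function used in the paper may differ.

```python
# Pick the HBIM profile exponent n by minimising the L2 residual of the
# heat equation u_t = alpha * u_xx over the penetration depth.
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize_scalar

alpha, t = 1.0, 1.0

def residual_norm(n):
    delta = np.sqrt(2.0 * n * (n + 1) * alpha * t)
    ddelta_dt = n * (n + 1) * alpha / delta
    x = np.linspace(0.0, delta * (1 - 1e-9), 2000)
    s = 1.0 - x / delta
    u_t = n * s ** (n - 1) * (x / delta**2) * ddelta_dt
    u_xx = n * (n - 1) * s ** (n - 2) / delta**2
    r = u_t - alpha * u_xx
    return trapezoid(r * r, x)

opt = minimize_scalar(residual_norm, bounds=(1.8, 4.0), method="bounded")
print(f"exponent minimising the residual: n = {opt.x:.3f}")
```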
Abstract:
Gastroschisis is a common congenital abdominal wall defect. It is almost always diagnosed prenatally thanks to routine maternal serum screening and ultrasound screening programs. In the majority of cases, the condition is isolated (i.e. not associated with chromosomal or other anatomical anomalies). Prenatal diagnosis allows for planning the timing, mode and location of delivery. Controversies persist concerning the optimal antenatal monitoring strategy. Compelling evidence supports elective delivery at 37 weeks' gestation in a tertiary pediatric center. Cesarean section should be reserved for routine obstetrical indications. Prognosis of infants with gastroschisis is primarily determined by the degree of bowel injury, which is difficult to assess antenatally. Prenatal counseling usually addresses gastroschisis issues. However, parental concerns are mainly focused on long-term postnatal outcomes including gastrointestinal function and neurodevelopment. Although infants born with gastroschisis often endure a difficult neonatal course, they experience few long-term complications. This manuscript, which is structured around common parental questions and concerns, reviews the evidence pertaining to the antenatal, neonatal and long-term implications of a fetal gastroschisis diagnosis and is aimed at helping healthcare professionals counsel expecting parents. © 2013 John Wiley & Sons, Ltd.
Abstract:
High-altitude destinations are visited by increasing numbers of children and adolescents. High-altitude hypoxia triggers pulmonary hypertension that in turn may have adverse effects on cardiac function and may induce life-threatening high-altitude pulmonary edema (HAPE), but there are limited data in this young population. We therefore assessed, in 118 nonacclimatized healthy children and adolescents (mean ± SD; age: 11 ± 2 yr), the effects of rapid ascent to high altitude on pulmonary artery pressure and right and left ventricular function by echocardiography. Pulmonary artery pressure was estimated by measuring the systolic right ventricular to right atrial pressure gradient. The echocardiography was performed at low altitude and 40 h after rapid ascent to 3,450 m. Pulmonary artery pressure was more than twofold higher at high than at low altitude (35 ± 11 vs. 16 ± 3 mmHg; P < 0.0001), and there was wide variability in pulmonary artery pressure at high altitude, with an estimated upper 95% limit of 52 mmHg. Moreover, pulmonary artery pressure and its altitude-induced increase were inversely related to age, resulting in an almost twofold larger increase in the 6- to 9-yr-old than in the 14- to 16-yr-old participants (24 ± 12 vs. 13 ± 8 mmHg; P = 0.004). Even in the children with the most severe altitude-induced pulmonary hypertension, right ventricular systolic function did not decrease but increased, and none of the children developed HAPE. HAPE appears to be a rare event in this young population after rapid ascent to this altitude, at which major tourist destinations are located.
Abstract:
The quantity of interest for high-energy photon beam therapy recommended by most dosimetric protocols is the absorbed dose to water. Thus, ionization chambers are calibrated in absorbed dose to water, which is the same quantity as that calculated by most treatment planning systems (TPS). However, when measurements are performed in a low-density medium, the presence of the ionization chamber generates a perturbation at the level of the secondary particle range. Therefore, the measured quantity is close to the absorbed dose to a volume of water equivalent to the chamber volume. This quantity is not equivalent to the dose calculated by a TPS, which is the absorbed dose to an infinitesimally small volume of water. This phenomenon can lead to an overestimation of the absorbed dose measured with an ionization chamber of up to 40% in extreme cases. In this paper, we propose a method to calculate correction factors based on Monte Carlo simulations. These correction factors are obtained as the ratio of the absorbed dose to water in a low-density medium, $\bar{D}^{\,\mathrm{low}}_{w,Q,V_1}$, averaged over a scoring volume $V_1$ for a geometry where $V_1$ is filled with the low-density medium, to the absorbed dose to water, $\bar{D}^{\,\mathrm{low}}_{w,Q,V_2}$, averaged over a volume $V_2$ for a geometry where $V_2$ is filled with water. In the Monte Carlo simulations, $\bar{D}^{\,\mathrm{low}}_{w,Q,V_2}$ is obtained by replacing the volume of the ionization chamber by an equivalent volume of water, in accordance with the definition of the absorbed dose to water. The method is validated in two different configurations, which allowed us to study the behavior of this correction factor as a function of depth in phantom, photon beam energy, phantom density and field size.
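In the notation restored above, the proposed correction factor is simply the ratio of the two volume-averaged doses; the label $k_Q^{\mathrm{low}}$ below is ours, not the paper's:

```latex
k_Q^{\mathrm{low}} \;=\;
  \frac{\bar{D}^{\,\mathrm{low}}_{w,Q,V_1}}
       {\bar{D}^{\,\mathrm{low}}_{w,Q,V_2}}
```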
Abstract:
Exogenous oxidized cholesterol disturbs both lipid metabolism and immune function; it may therefore perturb the modulations of these systems with ageing. The effects of dietary protein type on oxidized cholesterol-induced modulations of age-related changes in lipid metabolism and immune function were examined in male Sprague-Dawley rats of two ages (4 weeks versus 8 months), with casein, soybean protein or milk whey protein isolate (WPI) as the dietary protein source. The rats were given one of the three proteins in a diet containing a 0.2% oxidized cholesterol mixture. Soybean protein, as compared with the other two proteins, significantly lowered both the serum thiobarbituric acid reactive substances value and cholesterol, and elevated the ratio of high density lipoprotein-cholesterol to total cholesterol, in young rats but not in adults. Moreover, soybean protein, but not casein or WPI, suppressed the elevation of the Delta6 desaturation indices of phospholipids in both liver and spleen, particularly in young rats. On the other hand, WPI, compared with the other two proteins, inhibited leukotriene B4 production in the spleen, irrespective of age. Soybean protein reduced the ratio of CD4(+)/CD8(+) T-cells in splenic lymphocytes. Accordingly, the serum levels of immunoglobulin (Ig)A, IgE and IgG were lowered in rats given soybean protein in both age groups, except for IgA in adults; these effects were not observed in rats given the other proteins. Thus, the various perturbations of lipid metabolism and immune function caused by oxidized cholesterol were modified depending on the type of dietary protein. The moderating effect of soybean protein on the changes in lipid metabolism appears most pronounced in young rats, whose homeostatic ability is immature, and may be exerted through both the promotion of oxidized cholesterol excretion to the feces and changes in hormonal release, while WPI may suppress the disturbance of immune function by oxidized cholesterol at both ages; this alleviation may be associated with the large amount of lactoglobulin in WPI. These results thus suggest that oxidized cholesterol-induced perturbations of age-related changes in lipid metabolism and immune function can be moderated by both the selection and the combination of dietary proteins.
Abstract:
Functional Data Analysis (FDA) deals with samples where a whole function is observed for each individual. A particular case of FDA is when the observed functions are density functions, which are also an example of infinite dimensional compositional data. In this work we compare several methods of dimensionality reduction for this particular type of data: functional principal components analysis (PCA), with or without a previous data transformation, and multidimensional scaling (MDS) for different inter-density distances, one of them taking into account the compositional nature of density functions. The different methods are applied to both artificial and real data (household income distributions).
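One of the compared pipelines, functional PCA after a transformation that respects the compositional constraint, can be sketched as follows; the centred log-ratio transform, the synthetic Gaussian densities and the evaluation grid are our assumptions, and the paper's MDS variants are not shown.

```python
# Functional PCA on density functions after a centred log-ratio (clr)
# transform, the usual device for compositional data. Densities are
# synthetic, not the household-income data of the paper.
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(-4, 4, 200)
dx = grid[1] - grid[0]

# sample of 50 Gaussian densities with varying location and scale
mus, sigmas = rng.normal(0, 0.8, 50), rng.uniform(0.7, 1.5, 50)
dens = np.exp(-(grid - mus[:, None]) ** 2 / (2 * sigmas[:, None] ** 2))
dens /= dens.sum(axis=1, keepdims=True) * dx          # normalise to densities

# clr transform: log density minus its mean log over the grid, mapping the
# (infinite dimensional) simplex to an unconstrained space where ordinary
# functional PCA is legitimate
clr = np.log(dens) - np.log(dens).mean(axis=1, keepdims=True)

centred = clr - clr.mean(axis=0)                      # centre across the sample
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
explained = S**2 / np.sum(S**2)
print("variance explained by first two components:",
      np.round(explained[:2], 3))
```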