955 results for Convex Polygon
Abstract:
In a bankruptcy situation, not all claimants are affected in the same way. In particular, some depositors may themselves face personal bankruptcy if they lose part of their investments, and events of this kind may lead to a social catastrophe. We propose discrimination among the claimants as a possible remedy. Such discrimination is already established in American bankruptcy law (among others), and was practiced by Santander Bank, which in the Madoff case reimbursed only the deposits of its private customers. Moreover, the need to discriminate has already been noted in different contexts by Young (1988), Bossert (1995), Thomson (2003) and Pulido et al. (2002, 2007), for instance. In this paper, we take a bankruptcy solution as the reference point. Given this initial allocation, we make transfers from richer to poorer agents with the purpose of distributing the personal losses incurred as evenly as possible while keeping the transfers progressive. The agents are divided into two groups depending on their personal monetary value (wealth, net income, GDP or any other characteristic). We then impose a set of axioms that bound the maximal transfer each net contributor can make and each net receiver can obtain. Finally, we define a value-discriminant solution, and we characterize it by means of the Lorenz criterion. Endogenous convex combinations between solutions are also considered.
Keywords: Bankruptcy, Discrimination, Compensation, Rules
JEL classification: C71, D63, D71.
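The value-discriminant solution itself is not reproduced here; as a minimal sketch of the two-group transfer idea described above, the following Python snippet takes the proportional rule as the reference allocation and applies a capped transfer from richer to poorer claimants. The cap `alpha`, the grouping flags and the numbers are hypothetical illustrations, not the paper's rule.

```python
# Toy sketch of a discriminating bankruptcy rule (illustrative only).
# Reference point: the proportional rule. A bounded transfer then moves
# value from "rich" claimants to "poor" ones; the cap `alpha` on each
# net-contributor's transfer is a hypothetical parameter, not the paper's.

def proportional_rule(estate, claims):
    total = sum(claims)
    return [estate * c / total for c in claims]

def discriminating_rule(estate, claims, is_rich, alpha=0.1):
    awards = proportional_rule(estate, claims)
    losses = [c - a for c, a in zip(claims, awards)]
    # Each rich claimant gives up at most a fraction alpha of its award.
    pool = sum(alpha * a for a, r in zip(awards, is_rich) if r)
    poor_loss = sum(l for l, r in zip(losses, is_rich) if not r)
    out = []
    for a, l, r in zip(awards, losses, is_rich):
        if r:
            out.append(a - alpha * a)
        else:
            # Split the pool among the poor in proportion to their losses.
            out.append(a + (pool * l / poor_loss if poor_loss > 0 else 0.0))
    return out

estate, claims = 100.0, [80.0, 60.0, 40.0]
print(discriminating_rule(estate, claims, is_rich=[True, True, False]))
# -> [40.0, 30.0, 30.0]; the estate is preserved and losses are more even.
```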
Abstract:
In this paper, we study the average inter-crossing number between two random walks and two random polygons in three-dimensional space. The random walks and polygons in this paper are the so-called equilateral random walks and polygons, in which each segment of the walk or polygon is of unit length. We show that the mean average inter-crossing number (ICN) between two equilateral random walks of the same length n is approximately linear in n, and we were able to determine the prefactor of the linear term, which is $a = (3\ln 2)/8 \approx 0.2599$. In the case of two random polygons of length n, the mean average ICN is also linear, but the prefactor of the linear term differs from that of the random walks. These approximations apply when the starting points of the random walks and polygons are a distance p apart and p is small compared to n. We propose a fitting model that captures the theoretical asymptotic behaviour of the mean average ICN for large values of p. Our simulation results show that the model in fact works very well over the entire range of p. We also study the mean ICN between two equilateral random walks and polygons of different lengths. An interesting result is that even if one random walk (polygon) has a fixed length, the mean average ICN between the two random walks (polygons) still approaches infinity as the length of the other random walk (polygon) approaches infinity. The data provided by our simulations match our theoretical predictions very well.
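The prefactor quoted above is easy to check numerically; the snippet below evaluates $a = (3\ln 2)/8$ and the implied leading-order linear prediction for the mean average ICN (constant terms, which the abstract does not give, are omitted).

```python
import math

# Prefactor of the linear term for two equilateral random walks of
# length n, as quoted in the abstract: a = (3 ln 2) / 8.
a = 3 * math.log(2) / 8
print(f"a = {a:.4f}")  # ~0.2599

# Implied leading-order prediction for the mean average inter-crossing
# number between two length-n walks (lower-order terms omitted).
for n in (50, 100, 200):
    print(n, round(a * n, 2))
```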
Abstract:
The problem of stability analysis for a class of neutral systems with mixed time-varying neutral, discrete and distributed delays and nonlinear parameter perturbations is addressed. By introducing a novel Lyapunov-Krasovskii functional and combining the descriptor model transformation, the Leibniz-Newton formula, some free-weighting matrices, and a suitable change of variables, new sufficient conditions are established for the stability of the considered system, which are neutral-delay-dependent, discrete-delay-range-dependent, and distributed-delay-dependent. The conditions are presented in terms of linear matrix inequalities (LMIs) and can be solved efficiently using convex programming techniques. Two numerical examples are given to illustrate the efficiency of the proposed method.
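The paper's LMIs are not reproduced here; as a minimal sketch of how such conditions are checked with convex programming, the snippet below tests a classical delay-independent Lyapunov-Krasovskii LMI for the retarded system x'(t) = A x(t) + A_d x(t - tau) using cvxpy. The system matrices are toy values chosen to be feasible.

```python
import cvxpy as cp
import numpy as np

# Classical delay-independent test (not the paper's refined conditions):
# find P, Q > 0 with [[A'P + PA + Q, P Ad], [Ad' P, -Q]] < 0.
A = np.array([[-3.0, 0.0], [0.0, -3.0]])   # toy system matrix
Ad = np.array([[0.5, 0.0], [0.0, 0.5]])    # toy delayed-state matrix

P = cp.Variable((2, 2), symmetric=True)
Q = cp.Variable((2, 2), symmetric=True)
M = cp.bmat([[A.T @ P + P @ A + Q, P @ Ad],
             [Ad.T @ P, -Q]])
M = 0.5 * (M + M.T)  # symmetrize so cvxpy accepts the LMI constraint

eps = 1e-6  # small margin to enforce strict inequalities
constraints = [P >> eps * np.eye(2), Q >> eps * np.eye(2),
               M << -eps * np.eye(4)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)  # "optimal" means the LMI is feasible: stability certified
```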
Abstract:
Immobile location-allocation (LA) problems are a type of LA problem in which, given the positions of the facilities and the customers, one must determine the service each facility should offer in order to optimize some criterion (such as the global demand). Due to the complexity of the problem (it is a combinatorial problem, with s possible services and f facilities there are s^f candidate configurations, and its search space is non-convex with several local optima), traditional methods cannot be applied directly. We therefore propose the use of cluster analysis to convert the initial problem into several smaller sub-problems. In this way, we present and analyze the suitability of several clustering methods for partitioning the LA problem described above. We then explore the use of metaheuristic techniques such as genetic algorithms, simulated annealing and cuckoo search to solve the sub-problems after the clustering step.
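As a minimal sketch of the cluster-then-optimize idea (the paper's demand criterion, clustering methods and metaheuristics may differ), the snippet below partitions customers around fixed facilities with k-means and then picks each facility's service by exhaustive search within its own cluster; the `captured_demand` function and all data are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
customers = rng.random((200, 2))   # customer positions (toy data)
facilities = rng.random((5, 2))    # fixed facility positions
services = range(3)                # candidate services per facility

def captured_demand(service, facility, pts):
    # Hypothetical demand captured when `facility` offers `service`;
    # stands in for the paper's (unspecified here) global-demand criterion.
    d = np.linalg.norm(pts - facility, axis=1)
    return float(np.sum(np.exp(-(service + 1) * d)))

# Cluster customers around the fixed facilities, so each facility faces
# its own, much smaller sub-problem.
km = KMeans(n_clusters=len(facilities), init=facilities, n_init=1).fit(customers)
for i, fac in enumerate(facilities):
    pts = customers[km.labels_ == i]
    best = max(services, key=lambda s: captured_demand(s, fac, pts))
    print(f"facility {i}: offer service {best}")
```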
Abstract:
Sackung is a widespread post-glacial morphological feature affecting Alpine mountains and creating a characteristic geomorphological expression that can be detected from topography. Over long time scales, internal deformation can lead to the formation of rapidly moving phenomena such as a rockslide or rock avalanche. In this study, a detailed description of the Sierre rock avalanche (SW Switzerland) is presented. This convex-shaped postglacial instability is one of the largest rock avalanches in the Alps, involving more than 1.5 billion m³ with a run-out distance of about 14 km and an extremely low Fahrböschung angle. This study presents comprehensive analyses of the structural and geological characteristics leading to the development of the Sierre rock avalanche. In particular, by combining field observations, digital elevation model analyses and numerical modelling, the strong influence of both ductile and brittle tectonic structures on the failure mechanism and on the failure surface geometry is highlighted. The detection of pre-failure deformation indicates that the development of the rock avalanche corresponds to the last evolutionary stage of a pre-existing deep-seated gravitational slope instability. These analyses, accompanied by the dating and characterization of the rock avalanche deposits, allow the proposal of a destabilization model that clarifies the different phases leading to the development of the Sierre rock avalanche.
Abstract:
Olivine nephelinites commonly contain macrocrysts of olivine and clinopyroxene. Some of these macrocrysts might represent fragments of the source region of the host magma transported to the Earth's surface. If this hypothesis is correct, these fragments can be used to characterize the composition of the source region and to put constraints on the magma generation process. In this study, we investigate the origin of macrocrysts and mineral aggregates from an olivine nephelinite from the Kaiserstuhl, Germany. We focus on clinopyroxenes (Cpx), which can be divided into three groups. Cpx I is relict Cpx from aggregates with deformed olivine that is depleted in Ca and characterized by strong light rare earth element (LREE) fractionation, low Ti/Eu and negative high field strength element (HFSE) anomalies. Its geochemical signature is consistent with formation by carbonatite metasomatism and with equilibration in the presence of orthopyroxene. Cpx II is Ca-rich Cpx, forming both aggregates with deformed olivine and individual macrocrysts. The LREE, as for Cpx I, are strongly fractionated. Convex REE patterns may be present. The depletion in HFSE is less pronounced. Cpx III is oscillatory-zoned Cpx phenocrysts showing enrichment in Ca, convex REE patterns and no HFSE anomalies. The transition in the trace element abundances between the Cpx of the three groups is gradual. However, Cpx I and II did not crystallize from the host magma, as demonstrated by the presence of kink-bands and undulose extinction in the associated olivine and by the composition of alkali aluminosilicate glass inclusions in Cpx II. Based on the Cpx relationships, we interpret the studied suite of macrocrysts and mineral aggregates as a mixture of disintegrated fragments of the source region of the host olivine nephelinite. The process of melt generation was multi-stage. A primary carbonatite melt ascending from deeper levels in the mantle, probably from the dolomite-garnet peridotite stability field, reacted with mantle peridotite along the solidus ledge in the system lherzolite-CO2 (< 20-22 kbar) and started to crystallize carbonate minerals. Because of its low solidus temperature, the resulting carbonate-wehrlite assemblage melted incongruently with the formation of additional clinopyroxene. The carbonatite melt evolved during crystallization of carbonate minerals and concomitant incongruent melting of the carbonate-wehrlite, accompanied by the segregation of incipient alkali aluminosilicate melts. As a consequence of fast reaction rates in the presence of a carbonatite melt, this process probably took place under disequilibrium conditions. Further melting of the assemblage wehrlite + alkali aluminosilicate melt led to the generation of the olivine nephelinite magma. It entrained fragments of the wehrlite and brought them to the surface.
Abstract:
Richer and healthier agents tend to hold riskier portfolios and to spend proportionally less on health. Potential explanations include health and wealth effects on preferences, expected longevity or disposable total wealth. Using HRS data, we perform a structural estimation of a dynamic model of consumption, portfolio and health expenditure choices with recursive utility, as well as health-dependent income and mortality risk. Our estimates of the deep parameters highlight the importance of health capital, mortality risk control, convex health and mortality adjustment costs and binding liquidity constraints in rationalizing the stylized facts. They also provide new perspectives on expected longevity and on the values of life and health.
Abstract:
We investigate the problem of finding minimum-distortion policies for streaming delay-sensitive but distortion-tolerant data. We consider cross-layer approaches which exploit the coupling between presentation and transport layers. We make the natural assumption that the distortion function is convex and decreasing. We focus on a single source-destination pair and analytically find the optimum transmission policy when the transmission is done over an error-free channel. This optimum policy turns out to be independent of the exact form of the convex and decreasing distortion function. Then, for a packet-erasure channel, we analytically find the optimum open-loop transmission policy, which is also independent of the form of the convex distortion function. We then find computationally efficient closed-loop heuristic policies and show, through numerical evaluation, that they outperform the open-loop policy and have near-optimal performance.
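The paper's policies are not reproduced here, but the form-independence result has a convexity flavor that a toy computation can illustrate: for any convex, decreasing per-packet distortion, spreading a fixed transmission budget evenly is never worse than a skewed allocation (Jensen's inequality). The setup below is illustrative, not the paper's streaming model.

```python
import numpy as np

# Illustrative only: with a convex, decreasing per-packet distortion D(r),
# an even split of a fixed budget minimizes total distortion, whatever the
# exact convex D is; this echoes, in a toy setting, the form-independence
# noted above.
budget, n = 12.0, 4
even = np.full(n, budget / n)
skew = np.array([6.0, 3.0, 2.0, 1.0])  # same budget, uneven split

for name, D in [("exp", lambda r: np.exp(-r)),
                ("inv", lambda r: 1.0 / (1.0 + r))]:
    # Even allocation gives lower total distortion under both functions.
    print(name, round(D(even).sum(), 3), round(D(skew).sum(), 3))
```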
Abstract:
We analyse credit market equilibrium when banks screen loan applicants. When banks have a convex cost function of screening, a pure-strategy equilibrium exists in which banks optimally set interest rates at the same level as their competitors. This result complements Broecker's (1990) analysis, where he demonstrates that no pure-strategy equilibrium exists when banks have zero screening costs. In our setup we show that interest rates on loans are largely independent of marginal costs, a feature consistent with the extant empirical evidence. In equilibrium, banks make positive profits in our model in spite of the threat of entry by inactive banks. Moreover, an increase in the number of active banks increases credit risk and so does not improve credit market efficiency; this point has important regulatory implications. Finally, we extend our analysis to the case where banks have differing screening abilities.
Abstract:
We introduce simple nonparametric density estimators that generalize the classical histogram and frequency polygon. The new estimators are expressed as linear combinations of density functions that are piecewise polynomials, where the coefficients are optimally chosen in order to minimize the integrated square error of the estimator. We establish the asymptotic behaviour of the proposed estimators, and study their performance in a simulation study.
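The optimally weighted piecewise-polynomial estimator is not reproduced here; as a minimal sketch of the two classical building blocks being generalized, the snippet below computes a histogram density estimate and the corresponding frequency polygon (linear interpolation of bin-center heights) on simulated data.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=500)

# Classical histogram density estimate.
counts, edges = np.histogram(x, bins=20, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Frequency polygon: piecewise-linear interpolation of bin-center heights.
def freq_polygon(t):
    return np.interp(t, centers, counts, left=0.0, right=0.0)

grid = np.linspace(-4, 4, 9)
print(np.round(freq_polygon(grid), 3))
```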
Abstract:
We characterize the Walrasian allocations correspondence by means of four axioms: consistency, replica invariance, individual rationality and Pareto optimality. It is shown that, for any given class of exchange economies, any solution that satisfies the axioms is a selection from the Walrasian allocations with slack. Preferences are assumed to be smooth, but may be satiated and non-convex. A class of economies is defined as all economies whose agents' preferences belong to an arbitrary family (finite or infinite) of types. The result can be modified to characterize equal-budget Walrasian allocations with slack by replacing individual rationality with individual rationality from equal division. The results are valid also for classes of economies in which core-Walras equivalence does not hold.
Abstract:
We characterize the Walrasian allocations correspondence, in classes of exchange economies with smooth and convex preferences, by means of consistency requirements and other axioms. We present three characterization results, all of which require consistency, converse consistency and standard axioms. Two characterizations hold also on domains with a finite number of potential agents: one of them requires envy-freeness (with respect to trades) and the other, core selection. A third characterization, which requires core selection, applies only to a variable-number-of-agents domain, but is valid also when the domain includes only a small variety of preferences.
Abstract:
We consider a dynamic multifactor model of investment with financing imperfections, adjustment costs and fixed and variable capital. We use the model to derive a test of financing constraints based on a reduced-form variable capital equation. Simulation results show that this test correctly identifies financially constrained firms even when the estimation of firms' investment opportunities is very noisy. In addition, the test is well specified in the presence of both concave and convex adjustment costs of fixed capital. We confirm empirically the validity of this test on a sample of small Italian manufacturing companies.
Abstract:
We present a new unifying framework for investigating throughput-WIP (Work-in-Process) optimal control problems in queueing systems, based on reformulating them as linear programming (LP) problems with special structure: we show that if a throughput-WIP performance pair in a stochastic system satisfies the Threshold Property we introduce in this paper, then we can reformulate the problem of optimizing a linear objective of throughput-WIP performance as a (semi-infinite) LP problem over a polygon with special structure (a threshold polygon). The strong structural properties of such polygons explain the optimality of threshold policies for optimizing linear performance objectives: their vertices correspond to the performance pairs of threshold policies. We analyze in this framework the versatile input-output queueing intensity control model introduced by Chen and Yao (1990), obtaining a variety of new results, including (a) an exact reformulation of the control problem as an LP problem over a threshold polygon; (b) an analytical characterization of the Min WIP function (giving the minimum WIP level required to attain a target throughput level); (c) an LP Value Decomposition Theorem that relates the objective value under an arbitrary policy with that of a given threshold policy (thus revealing the LP interpretation of Chen and Yao's optimality conditions); (d) diminishing returns and invariance properties of throughput-WIP performance, which underlie threshold optimality; and (e) a unified treatment of the time-discounted and time-average cases.
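The threshold polygon itself depends on the model of Chen and Yao (1990) and is not reconstructed here, but the geometric mechanism behind the reformulation is elementary and easy to illustrate: a linear objective over a polygon is optimized at a vertex, and here the vertices are the performance pairs of threshold policies. The vertex data and objective weights below are hypothetical.

```python
# Illustrative only: a linear objective over a polygon attains its optimum
# at a vertex. If the vertices are the (throughput, WIP) performance pairs
# of threshold policies, then for any linear throughput-WIP objective some
# threshold policy is optimal. The data and weights below are hypothetical.
vertices = [(0.0, 0.0), (0.5, 1.0), (0.8, 2.5), (0.95, 5.0)]  # (throughput, WIP)

def value(throughput, wip):
    return 10.0 * throughput - 1.5 * wip  # toy reward and holding-cost weights

best = max(vertices, key=lambda v: value(*v))
print(best, value(*best))  # -> (0.8, 2.5) 4.25: a threshold policy's pair wins
```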
Abstract:
Although the histogram is the most widely used density estimator, it is well known that the appearance of a constructed histogram for a given bin width can change markedly for different choices of anchor position. In this paper we construct a stability index $G$ that assesses the potential changes in the appearance of histograms for a given data set and bin width as the anchor position changes. If a particular bin width choice leads to an unstable appearance, the arbitrary choice of any one anchor position is dangerous, and a different bin width should be considered. The index is based on the statistical roughness of the histogram estimate. We show via Monte Carlo simulation that densities with more structure are more likely to lead to histograms with unstable appearance. In addition, ignoring the precision to which the data values are provided when choosing the bin width leads to instability. We provide several real data examples to illustrate the properties of $G$. Applications to other binned density estimators are also discussed.
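The index $G$ is defined through the statistical roughness of the histogram estimate, and its formula is not reproduced here. As a rough illustration of the anchor-sensitivity phenomenon it quantifies, the snippet below builds histograms of the same data and bin width at several anchor positions and reports a crude pointwise spread; this dispersion measure is an assumption for illustration, not the paper's $G$.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 0.3, 300), rng.normal(2, 0.3, 300)])
h = 0.5  # bin width

# Same data, same bin width, different anchors: evaluate each histogram
# estimate on a common grid.
grid = np.linspace(x.min(), x.max(), 200)
estimates = []
for anchor in np.linspace(0.0, h, 8, endpoint=False):
    edges = np.arange(x.min() - h + anchor, x.max() + h, h)
    counts, edges = np.histogram(x, bins=edges, density=True)
    idx = np.clip(np.searchsorted(edges, grid, side="right") - 1,
                  0, len(counts) - 1)
    estimates.append(counts[idx])

# Crude instability measure: average pointwise spread across anchors
# (this dispersion statistic is illustrative, not the paper's index G).
spread = np.mean(np.ptp(np.array(estimates), axis=0))
print(f"mean anchor-to-anchor spread: {spread:.3f}")
```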