148 results for Armington Assumption
Abstract:
Supported by IEEE 802.15.4 standardization activities, embedded networks have been gaining popularity in recent years. The focus of this paper is to quantify the behavior of key networking metrics of IEEE 802.15.4 beacon-enabled nodes under typical operating conditions, with the inclusion of packet retransmissions. We corrected and extended previous analyses by scrutinizing the assumptions on which the prevalent Markovian modeling is generally based. By means of a comparative study, we singled out which of the assumptions impact each of the performance metrics (throughput, delay, power consumption, collision probability, and packet-discard probability). In particular, we showed that, unlike what is usually assumed, the probability that a node senses the channel busy is not constant across the stages of the backoff procedure, and that these differences have a noticeable impact on backoff delay, packet-discard probability, and power consumption. Similarly, we showed that, again contrary to common assumption, the probability of obtaining transmission access to the channel depends on the number of nodes that are simultaneously sensing it. We showed that ignoring this dependence has a significant impact on the calculated values of throughput and collision probability. Circumventing these and other assumptions, we rigorously characterize, through a semianalytical approach, the key metrics in a beacon-enabled IEEE 802.15.4 system with retransmissions.
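In symbols (a schematic restatement; the notation is assumed here, not taken from the paper): prevalent Markov models of slotted CSMA/CA decouple the nodes and treat the busy-channel and channel-access probabilities as constants, whereas the paper argues that both are state-dependent,

$$\Pr(\text{CCA busy at backoff stage } i) = \alpha_i \not\equiv \alpha, \qquad \tau = \tau(n),$$

where $i$ indexes the backoff stage and $n$ is the number of nodes simultaneously sensing the channel.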
Abstract:
Background and purpose: Individual rupture risk assessment of intracranial aneurysms is a major issue in the clinical management of asymptomatic aneurysms. Aneurysm rupture occurs when wall tension exceeds the strength limit of the wall tissue. At present, aneurysmal wall mechanics are poorly understood, and risk assessment involving mechanical properties is therefore nonexistent. Computational hemodynamics studies of aneurysms assume rigid walls, an arguable simplification. We therefore aim to assess the mechanical properties of ruptured and unruptured intracranial aneurysms in order to provide the foundation for future patient-specific aneurysmal risk assessment. This work also challenges some of the hypotheses currently held in computational flow hemodynamics research. Methods: A specific conservation protocol was applied to aneurysmal tissues following clipping and resection in order to preserve their mechanical properties. Sixteen intracranial aneurysms (11 from female patients, 5 from male patients), comprising 11 unruptured and 5 ruptured cases, underwent uniaxial mechanical stress tests under physiological conditions (temperature and isotonic saline solution). Stress/strain curves were then obtained for each sample, and a fitting algorithm was applied following a 3-parameter (C10, C01, C11) Mooney-Rivlin hyperelastic model. Each aneurysm was classified according to its biomechanical properties and (un)rupture status. Results: Tissue testing demonstrated three main tissue classes: Soft, Rigid, and Intermediate. Within each gender subgroup, all unruptured aneurysms presented more rigid tissue than ruptured or pre-ruptured aneurysms. Wall thickness was not correlated with aneurysmal status (ruptured/unruptured). An Intermediate subgroup of unruptured aneurysms with softer tissue characteristics was identified and correlated with multiple documented risk factors of rupture. Conclusion: There is a significant difference in biomechanical properties between ruptured aneurysms, which present soft tissue, and unruptured aneurysms, which present rigid material. This finding strongly supports the use of an assessment based on biomechanical risk factors to improve therapeutic decision making.
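For reference, the 3-parameter Mooney-Rivlin strain-energy density mentioned above has the standard form (standard notation, not taken from the paper):

$$W = C_{10}(I_1 - 3) + C_{01}(I_2 - 3) + C_{11}(I_1 - 3)(I_2 - 3),$$

where $I_1$ and $I_2$ are the first and second invariants of the left Cauchy-Green deformation tensor and $C_{10}$, $C_{01}$, $C_{11}$ are the material constants fitted to each stress/strain curve.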
Abstract:
We investigate the problem of finding minimum-distortion policies for streaming delay-sensitive but distortion-tolerant data. We consider cross-layer approaches that exploit the coupling between the presentation and transport layers. We make the natural assumption that the distortion function is convex and decreasing. We focus on a single source-destination pair and analytically find the optimum transmission policy when the transmission is done over an error-free channel. This optimum policy turns out to be independent of the exact form of the convex and decreasing distortion function. Then, for a packet-erasure channel, we analytically find the optimum open-loop transmission policy, which is likewise independent of the form of the convex distortion function. We then find computationally efficient closed-loop heuristic policies and show, through numerical evaluation, that they outperform the open-loop policy and have near-optimal performance.
Abstract:
Wireless MIMO systems, employing multiple transmit and receive antennas, promise a significant increase in channel capacity, while orthogonal frequency-division multiplexing (OFDM) is attracting a good deal of attention due to its robustness to multipath fading. The combination of both techniques is thus an attractive proposition for radio transmission. The goal of this paper is the description and analysis of a novel pilot-aided estimator of multipath block-fading channels. Typical models leading to estimation algorithms assume the number of multipath components and delays to be constant (and often known), while their amplitudes are allowed to vary with time. Our estimator is instead based on the more realistic assumption that the number of channel taps is also unknown and varies with time following a known probabilistic model. The estimation problem arising from these assumptions is solved using Random-Set Theory (RST), whereby one regards the multipath-channel response as a single set-valued random entity. Within this framework, Bayesian recursive equations determine the evolution of the channel estimator with time. Due to the lack of a closed form for the solution of the Bayesian equations, a Rao-Blackwellized particle filter (RBPF) implementation of the channel estimator is advocated. Since the resulting estimator exhibits a complexity that grows exponentially with the number of multipath components, a simplified version is also introduced. Simulation results describing the performance of our channel estimator demonstrate its effectiveness.
Abstract:
In this paper, we introduce a pilot-aided multipath channel estimator for Multiple-Input Multiple-Output (MIMO) Orthogonal Frequency Division Multiplexing (OFDM) systems. Typical estimation algorithms assume the number of multipath components and delays to be known and constant, while their amplitudes may vary in time. In this work, we focus on the more realistic assumption that the number of channel taps is also unknown and time-varying. The estimation problem arising from this assumption is solved using Random Set Theory (RST), which is a probability theory of finite sets. Due to the lack of a closed form of the optimal filter, a Rao-Blackwellized Particle Filter (RBPF) implementation of the channel estimator is derived. Simulation results demonstrate the estimator's effectiveness.
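To make the Rao-Blackwellization concrete, the following is a minimal toy sketch in Python: particles carry only the discrete set of active taps, while the tap amplitudes, which are conditionally linear-Gaussian, are marginalized exactly by a per-particle Kalman filter. The model (AR(1) amplitudes, independent per-step tap toggling, a scalar pilot observation) and every parameter value are illustrative assumptions, not the RST formulation of these papers.

import numpy as np

rng = np.random.default_rng(0)

L, T, N = 4, 200, 300        # candidate taps, time steps, particles
a, q, r = 0.99, 1e-2, 5e-2   # AR(1) coefficient, process/observation noise variances
p_tog = 0.02                 # per-step probability that a tap appears/disappears
phi = rng.standard_normal(L) # known pilot/mixing vector

# Simulate a "true" channel whose active-tap set varies with time.
b_true = np.array([1, 1, 0, 0], dtype=bool)
h_true = np.zeros(L)
ys = np.zeros(T)
for t in range(T):
    b_true ^= rng.random(L) < p_tog
    h_true = a * h_true + np.sqrt(q) * rng.standard_normal(L)
    ys[t] = phi @ (b_true * h_true) + np.sqrt(r) * rng.standard_normal()

# RBPF: sample the discrete tap sets, Kalman-filter the amplitudes.
b = rng.random((N, L)) < 0.5        # particle tap sets
m = np.zeros((N, L))                # Kalman means, one per particle
P = np.tile(np.eye(L), (N, 1, 1))   # Kalman covariances
w = np.full(N, 1.0 / N)             # particle weights

for t in range(T):
    b ^= rng.random((N, L)) < p_tog          # propose tap sets from the prior
    for i in range(N):
        m[i] = a * m[i]                      # Kalman predict
        P[i] = a * a * P[i] + q * np.eye(L)
        H = phi * b[i]                       # effective measurement row
        S = H @ P[i] @ H + r                 # innovation variance
        K = P[i] @ H / S                     # Kalman gain
        e = ys[t] - H @ m[i]                 # innovation
        m[i] += K * e                        # Kalman update
        P[i] -= np.outer(K, H @ P[i])
        # weight by the marginal likelihood (the Rao-Blackwellized part)
        w[i] *= np.exp(-0.5 * e * e / S) / np.sqrt(2 * np.pi * S)
    w /= w.sum()
    if 1.0 / (w ** 2).sum() < N / 2:         # resample on low effective sample size
        idx = rng.choice(N, N, p=w)
        b, m, P = b[idx], m[idx], P[idx]
        w[:] = 1.0 / N

print("posterior tap-activity probabilities:", (w[:, None] * b).sum(0))

Because the amplitudes are integrated out analytically, the particles only have to explore the discrete tap sets, which illustrates why the discrete part dominates the estimator's complexity.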
Abstract:
This paper derives approximations allowing the estimation of outage probability for standard irregular LDPC codes and full-diversity Root-LDPC codes used over nonergodic block-fading channels. Two separate approaches are discussed: a numerical approximation, obtained by curve fitting, for both code ensembles, and an analytical approximation for Root-LDPC codes, obtained under the assumption that the slope of the iterative threshold curve of a given code ensemble matches the slope of the outage capacity curve in the high-SNR regime.
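For context, the quantity being approximated is the standard outage probability of a nonergodic channel with $F$ fading blocks (notation assumed here, not the paper's):

$$P_{\mathrm{out}} = \Pr\!\left(\frac{1}{F}\sum_{f=1}^{F}\log_2\!\left(1+\bar{\gamma}\,|h_f|^2\right) < R\right),$$

where $\bar{\gamma}$ is the average SNR, $h_f$ the fading gain on block $f$, and $R$ the code rate in bits per channel use; the word-error rate of a good code tracks this curve, which motivates approximating it directly.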
Abstract:
This paper explores the existence of negative peer-group pressures derived from the concentration of foreigners in French lower secondary schools. Using different dependent variables (number of years spent in lower secondary education, grades in the 4th and 3rd years, and track selection in upper secondary schooling), the analyses indicate that the much-disputed existence of significant negative effects of the concentration of foreign students in schools depends on the method used for the estimation. If we assume that the concentration of foreigners is a random and exogenous process, then the multivariate analyses confirm negative interactions. If, on the contrary, we question the assumption that this contextual information is not the end result of prior sorting mechanisms of individuals across social spaces, the concentration of foreigners has no statistical impact on attainment.
Abstract:
This paper explores an overlooked issue in the literature on federations and federalism: the relationship between federalism and democracy. Starting from the assumption that federalism per se is not enough to guarantee cooperative intergovernmental dynamics between different levels of government, this article analyzes how democracy reinforces cooperative intergovernmental relations under a federal design. Drawing on empirical evidence from federations in the making (Brazil, India, Malaysia, Mexico, South Africa, and Spain), this article shows that in countries where the federal design was built under democratization, namely Brazil, Spain, and South Africa, intergovernmental dynamics evolved under an increasingly cooperative mode of interaction.
Abstract:
We analyze a standard environment of adverse selection in credit markets. In our environment, entrepreneurs who are privately informed about the quality of their projects need to borrow from banks. As is generally the case in economies with adverse selection, the competitive equilibrium of our economy is shown to be inefficient. Under adverse selection, the choices made by one type of agents limit what can be offered to other types in an incentive-compatible manner. This gives rise to an externality, which cannot be internalized in a competitive equilibrium. We show that, in this type of environment, the inefficiency associated with adverse selection is the consequence of one implicit assumption: entrepreneurs can only borrow from banks. If an additional market is added (say, a "security market"), in which entrepreneurs can obtain funds beyond those offered by banks, we show that the efficient allocation is an equilibrium of the economy. In such an equilibrium, all entrepreneurs borrow at a pooling rate in the security market. When they apply for bank loans, though, only entrepreneurs with good projects pledge these additional funds as collateral. This equilibrium thus simultaneously entails cross-subsidization and separation between different types of entrepreneurs.
Abstract:
We analyze a standard environment of adverse selection in credit markets. In our environment, entrepreneurs who are privately informed about the quality of their projects need to borrow in order to invest. Conventional wisdom says that, in this class of economies, the competitive equilibrium is typically inefficient. We show that this conventional wisdom rests on one implicit assumption: entrepreneurs can only access monitored lending. If a new set of markets is added to provide entrepreneurs with additional funds, efficiency can be attained in equilibrium. An important characteristic of these additional markets is that lending in them must be unmonitored, in the sense that it does not condition total borrowing or investment by entrepreneurs. This makes it possible to attain efficiency by pooling all entrepreneurs in the new markets while separating them in the markets for monitored loans.
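As an illustration of the pooling logic (the notation is ours, not the paper's): if a fraction $\lambda$ of entrepreneurs repay with probability $p_g$ and the remainder repay with probability $p_b < p_g$, a competitive unmonitored market charges a zero-profit pooling rate $r_p$ satisfying

$$\left[\lambda p_g + (1-\lambda) p_b\right](1+r_p) = 1 + r_f,$$

where $r_f$ is the risk-free rate: good types cross-subsidize bad types in this market, while the monitored (bank) market separates the two.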
Abstract:
It has long been standard in agency theory to search for incentive-compatible mechanisms on the assumption that people care only about their own material wealth. However, this assumption is clearly refuted by numerous experiments, and we feel that it may be useful to consider nonpecuniary utility in mechanism design and contract theory. Accordingly, we devise an experiment to explore optimal contracts in an adverse-selection context. A principal proposes one of three contract menus, each of which offers a choice of two incentive-compatible contracts, to two agents whose types are unknown to the principal. The agents know the set of possible menus, and choose either to accept one of the two contracts offered in the proposed menu or to reject the menu altogether; a rejection by either agent leads to lower (and equal) reservation payoffs for all parties. While all three possible menus favor the principal, they do so to varying degrees. We observe numerous rejections of the more lopsided menus, and behavior approaches an equilibrium in which one of the more equitable contract menus (which one depends on the reservation payoffs) is proposed and agents accept a contract, selecting actions according to their types. Behavior is largely consistent with all recent models of social preferences, strongly suggesting there is value in considering nonpecuniary utility in agency theory.
Abstract:
We introduce a variation of the proof for weak approximations that is suitable for studying the densities of stochastic processes which are evaluations of the flow generated by a stochastic differential equation on a random variable that may be anticipating. Our main assumption is that the process and the initial random variable have to be smooth in the Malliavin sense. Furthermore, if the inverse of the Malliavin covariance matrix associated with the process under consideration is sufficiently integrable, then approximations for densities and distributions can also be achieved. We apply these ideas to the case of stochastic differential equations with boundary conditions and the composition of two diffusions.
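The integrability assumption above follows the classical Malliavin-calculus criterion for smooth densities (stated here in standard notation as an aid to the reader):

$$F \in \mathbb{D}^{\infty} \quad \text{and} \quad (\det \gamma_F)^{-1} \in \bigcap_{p \geq 1} L^p(\Omega) \;\Longrightarrow\; \text{the law of } F \text{ admits a smooth density},$$

where $\gamma_F = \bigl(\langle DF^i, DF^j \rangle_H\bigr)_{i,j}$ is the Malliavin covariance matrix of $F$.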
Abstract:
Sequential randomized prediction of an arbitrary binary sequence is investigated. No assumption is made on the mechanism generating the bit sequence. The goal of the predictor is to minimize its relative loss, i.e., to make (almost) as few mistakes as the best "expert" in a fixed, possibly infinite, set of experts. We point out a surprising connection between this prediction problem and empirical process theory. First, in the special case of static (memoryless) experts, we completely characterize the minimax relative loss in terms of the maximum of an associated Rademacher process. Then we show general upper and lower bounds on the minimax relative loss in terms of the geometry of the class of experts. As main examples, we determine the exact order of magnitude of the minimax relative loss for the class of autoregressive linear predictors and for the class of Markov experts.
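The relative loss in question is the usual regret of a randomized predictor against the best expert in a class $\mathcal{F}$ (standard notation, assumed here):

$$R_n = \mathbb{E}\left[\sum_{t=1}^{n} \mathbb{1}\{\hat{y}_t \neq y_t\}\right] - \inf_{f \in \mathcal{F}} \sum_{t=1}^{n} \mathbb{1}\{f_t \neq y_t\},$$

where the expectation is over the predictor's internal randomization and the bit sequence $y_1, \dots, y_n$ is arbitrary (no probabilistic mechanism is assumed).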
Abstract:
In this paper we study the disability transition probabilities (as well as the mortality probabilities) due to factors concurrent with age, such as income, gender, and education. Although it is well known that ageing and socioeconomic status influence the probability of developing functional disorders, surprisingly little attention has been paid to the combined effect of those factors over individuals' lives and how this affects the transition from one degree of disability to another. The assumption that tomorrow's disability state is only a function of today's state is very strong, since disability is a complex variable that depends on several elements other than time. This paper contributes to the field in two ways: (1) by attending to the distinction between the initial disability level and the process that leads to its course, and (2) by addressing whether and how education, age, and income differentially affect disability transitions. Using a discrete Markov chain model and a survival analysis, we estimate, by year and individual characteristics, the probability of a change in disability state and the duration of its progression in each case. We find that people with an initial state of disability have a higher propensity to change and take less time to transition between stages. Men do so more frequently than women. Education and income have negative effects on transition. Moreover, we consider the disability benefits associated with those changes along different stages of disability, and we therefore offer some clues on the potential savings of preventive actions that may delay or avoid those transitions. On pure cost considerations, preventive programs for improvement show higher benefits than those for preventing deterioration, and in general terms, those focusing on individuals below 65 should go first. Finally, the trend of disability in Spain does not seem to change across years, and regional differences are not found.
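Schematically (notation assumed here), the "very strong" assumption criticized above is the homogeneous first-order Markov property, which the paper relaxes by letting transitions depend on individual covariates:

$$\Pr(D_{t+1} = j \mid D_t = i, D_{t-1}, \dots) = \pi_{ij} \quad \text{versus} \quad \pi_{ij}(x_t), \qquad x_t = (\text{age, gender, education, income}),$$

so that tomorrow's disability state depends on today's state and on individual characteristics, not on time alone.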
Abstract:
We extend Aumann's theorem [Aumann 1987], which derives correlated equilibria as a consequence of common priors and common knowledge of rationality, by explicitly allowing for non-rational behavior. We replace the assumption of common knowledge of rationality with a substantially weaker one, joint p-belief of rationality, under which agents believe the other agents are rational with probability p or more. We show that behavior in this case constitutes a kind of correlated equilibrium satisfying certain p-belief constraints, that it varies continuously in the parameter p, and that, for p sufficiently close to one, it is with high probability supported on strategies that survive the iterated elimination of strictly dominated strategies. Finally, we extend the analysis to characterize the rational expectations of interim types, to games of incomplete information, and to the case of non-common priors.
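Schematically, in standard interactive-epistemology notation (assumed here, not the paper's), agent $i$ p-believes an event $E$ at state $\omega$ when

$$\mu\bigl(E \mid \mathcal{P}_i(\omega)\bigr) \geq p,$$

where $\mu$ is the common prior and $\mathcal{P}_i(\omega)$ is agent $i$'s information at $\omega$; rationality is jointly p-believed when every agent p-believes the event that all other agents are rational.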