35 results for Problem of nonadditivity in two-way ANOVA
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
In this paper, I consider a general and informationally efficient approach to determine the optimal access rule and show that there exists a simple rule that achieves the Ramsey outcome as the unique equilibrium when networks compete in linear prices without network-based price discrimination. My approach is informationally efficient in the sense that the regulator is required to know only the marginal cost structure, i.e. the marginal cost of making and terminating a call. The approach is general in that access prices can depend not only on the marginal costs but also on the retail prices, which can be observed by consumers and therefore by the regulator as well. In particular, I consider the set of linear access pricing rules, which includes any fixed access price, the Efficient Component Pricing Rule (ECPR) and the Modified ECPR as special cases. I show that in this set, there is a unique access rule that achieves the Ramsey outcome as the unique equilibrium as long as there exists at least a mild degree of substitutability among networks' services.
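For orientation, the ECPR named above is conventionally written as follows; the notation is generic rather than taken from this paper (a sketch: $c_T$ is the marginal cost of terminating a call, $c$ the marginal cost of a complete call, and $p$ the retail price):

```latex
% Standard ECPR: termination cost plus the forgone retail margin
% (the "opportunity cost" component). Notation assumed, not the paper's.
a_{\mathrm{ECPR}} \;=\; c_T \;+\; \bigl(p - c\bigr)
```

A fixed access price and the Modified ECPR are other members of the linear family in which the access charge depends affinely on the marginal costs and the retail price.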
Abstract:
Biological water quality changes in two Mediterranean river basins, assessed since 1979 through a network of 42 sampling sites, are presented. To characterize the biological quality, the FBILL index, designed to assess the quality of these rivers using aquatic macroinvertebrates, is used. When comparing recent data to older records, only two headwater sites out of the 42 had improved their water quality to good or very good condition. In middle and lower basin sites, and even in headwater localities where river flow is reduced, the important investment in sewage treatment systems and plants (more than 70 in 15 years) allowed a small recovery from poor or very poor conditions to moderate water quality. Nevertheless, a significant number of the localities (25%) still remain in poor condition. The evolution of quality at several points of both basins shows that the main obstacles to the recovery of biological quality are the water diverted for small hydraulic plants, saline pollution in the Llobregat River, and insufficient wastewater treatment. In the smaller rivers, and especially the Besòs, the lack of flow to dilute the effluent of the treatment plants is the main problem for water quality recovery.
Abstract:
This research investigates the phenomenon of translationese in two monolingual comparable corpora of original and translated Catalan texts. Translationese has been defined as the dialect, sub-language or code of translated language. This study aims at giving empirical evidence of translation universals regardless of the source language. Traditionally, research on translation strategies has been mainly intuition-based. Computational Linguistics and Natural Language Processing techniques provide reliable information on lexical frequencies and on morphological and syntactic distributions in corpora, and they have therefore been applied to observe which translation strategies occur in these corpora. Results seem to support the simplification, interference and explicitation hypotheses, whereas no sign of normalization has been detected with the methodology used. The data collected and the resources created for identifying lexical, morphological and syntactic patterns of translations can be useful for Translation Studies teachers, scholars and students: teachers will have more tools to help students avoid reproducing translationese patterns, and the resources developed will help in detecting non-genuine or inadequate structures in the target language. This may improve the stylistic quality of translations. Translation professionals can also take advantage of these resources to improve their translation quality.
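As a concrete illustration of the kind of corpus measure such studies rely on, the sketch below computes two common simplification indicators, type-token ratio and mean sentence length, for two plain-text corpora. The file names are hypothetical placeholders; this is not the instrumentation used in the study.

```python
# Minimal sketch of two simplification indicators often used in
# translationese research: type-token ratio and mean sentence length.
import re

def indicators(path):
    text = open(path, encoding="utf-8").read().lower()
    tokens = re.findall(r"\w+", text)                      # word tokens
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    ttr = len(set(tokens)) / len(tokens)                   # lexical variety
    msl = len(tokens) / len(sentences)                     # mean sentence length
    return ttr, msl

# Hypothetical corpus files: originals vs. translations
for corpus in ("original_ca.txt", "translated_ca.txt"):
    ttr, msl = indicators(corpus)
    print(f"{corpus}: TTR={ttr:.3f}, mean sentence length={msl:.1f}")
```

A lower type-token ratio in the translated corpus is typically read as evidence of lexical simplification.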
Abstract:
We study a retail benchmarking approach to determine access prices for interconnected networks. Instead of considering fixed access charges as in the existing literature, we study access pricing rules that determine the access price that network i pays to network j as a linear function of the marginal costs and the retail prices set by both networks. In the case of competition in linear prices, we show that there is a unique linear rule that implements the Ramsey outcome as the unique equilibrium, independently of the underlying demand conditions. In the case of competition in two-part tariffs, we consider a class of access pricing rules, similar to the optimal one under linear prices but based on average retail prices. We show that firms choose the variable price equal to the marginal cost under this class of rules. Therefore, the regulator (or the competition authority) can choose one among the rules to pursue additional objectives such as consumer surplus, network coverage or investment: for instance, we show that both static and dynamic efficiency can be achieved at the same time.
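Written generically, the retail benchmarking rules studied here take the linear form below; the symbols are illustrative, not the paper's (a sketch: $a_{ij}$ is the access price network $i$ pays network $j$, $c$ collects the marginal cost terms, and $p_i$, $p_j$ are the retail prices):

```latex
% Illustrative linear retail-benchmarking rule (notation assumed)
a_{ij} \;=\; \alpha_0 \;+\; \alpha_1\, c \;+\; \beta_i\, p_i \;+\; \beta_j\, p_j
```

A fixed access charge is the special case $\beta_i = \beta_j = 0$; the paper's result is that exactly one choice of coefficients implements the Ramsey outcome as the unique equilibrium.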
Abstract:
The possibilities of pairing in two-dimensional boson-fermion mixtures are carefully analyzed. It is shown that the boson-induced attraction between two identical fermions dominates the p-wave pairing at low density. For a given fermion density, the pairing gap becomes maximal at a certain optimal boson concentration. The conditions for observing pairing in current experiments are discussed.
Abstract:
This paper provides a natural way of reaching an agreement between two prominent proposals in a bankruptcy problem. In particular, using the fact that such problems can be approached from two different points of view, awards and losses, we justify the average of any pair of dual bankruptcy rules through the definition of a double recursive process. Finally, by considering three possible sets of equity principles that a particular society may agree on, we retrieve the average of old and well-known bankruptcy rules: the Constrained Equal Awards and the Constrained Equal Losses rules, Piniles' rule and its dual rule, and the Constrained Egalitarian rule and its dual rule.
Keywords: Bankruptcy problems, Midpoint, Bounds, Duality, Recursivity.
JEL classification: C71, D63, D71.
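To make the two dual rules named above concrete, here is a minimal sketch (standard textbook definitions, not code from the paper) computing the Constrained Equal Awards and Constrained Equal Losses rules and their average for a toy problem:

```python
# Standard CEA/CEL bankruptcy rules (textbook definitions) and their average.
def cea(estate, claims, tol=1e-9):
    """Constrained Equal Awards: pay min(c_i, lam), with lam chosen by
    bisection so that the payouts sum to the estate."""
    lo, hi = 0.0, max(claims)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(min(c, lam) for c in claims) < estate:
            lo = lam
        else:
            hi = lam
    return [min(c, lo) for c in claims]

def cel(estate, claims):
    """Constrained Equal Losses via duality: CEL(E, c) = c - CEA(sum(c) - E, c)."""
    dual_awards = cea(sum(claims) - estate, claims)
    return [c - a for c, a in zip(claims, dual_awards)]

claims, estate = [100.0, 200.0, 300.0], 300.0
a, l = cea(estate, claims), cel(estate, claims)
print("CEA:", [round(v, 4) for v in a])                    # [100, 100, 100]
print("CEL:", [round(v, 4) for v in l])                    # [0, 100, 200]
print("avg:", [round((x + y) / 2, 4) for x, y in zip(a, l)])  # [50, 100, 150]
```

The `cel` function uses the duality the paper exploits: dividing losses equally is the same as applying CEA to the total shortfall and subtracting.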
Abstract:
When continuous data are coded to categorical variables, two types of coding are possible: crisp coding in the form of indicator, or dummy, variables with values either 0 or 1; or fuzzy coding, where each observation is transformed into a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix; the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories that have the highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resultant orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix. In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded crisply and fuzzily into three categories. Measuring the fit is further discussed when the data set consists of a mixture of discrete and continuous variables.
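As a concrete illustration of the two codings, the sketch below crisp-codes and fuzzy-codes a continuous variable into three categories using triangular membership functions with hinges at the minimum, median and maximum; the hinge choice and the data are illustrative, not the paper's.

```python
import numpy as np

def fuzzy_code(x):
    """Fuzzy-code x into 3 categories with triangular membership functions
    hinged at min, median and max; degrees sum to 1 for every observation."""
    h1, h2, h3 = np.min(x), np.median(x), np.max(x)
    z = np.zeros((len(x), 3))
    left = x <= h2
    z[left, 0] = (h2 - x[left]) / (h2 - h1)    # membership in "low"
    z[left, 1] = 1 - z[left, 0]
    z[~left, 2] = (x[~left] - h2) / (h3 - h2)  # membership in "high"
    z[~left, 1] = 1 - z[~left, 2]
    return z

def crisp_code(x):
    """Crisp (indicator) coding: each observation in exactly one category."""
    z = fuzzy_code(x)
    out = np.zeros_like(z)
    out[np.arange(len(x)), z.argmax(axis=1)] = 1.0
    return out

x = np.array([2.0, 5.0, 7.5, 11.0, 14.0])
print(fuzzy_code(x))   # graded memberships, each row sums to 1
print(crisp_code(x))   # 0/1 indicators (dummy variables)
```

With triangular memberships, the row-wise dot product of the fuzzy codes with the hinge values recovers the original observation exactly, which is what makes the defuzzification and variance decomposition mentioned above workable.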
Abstract:
The present study explores the statistical properties of a randomization test based on the random assignment of the intervention point in a two-phase (AB) single-case design. The focus is on randomization distributions constructed with the values of the test statistic for all possible random assignments and used to obtain p-values. The shape of those distributions is investigated for each specific data division defined by the moment at which the intervention is introduced. A further aim of the study was to test the detection of nonexistent effects (i.e., the production of false alarms) in autocorrelated data series, in which the assumption of exchangeability between observations may be untenable. In this way, it was possible to compare nominal and empirical Type I error rates in order to obtain evidence on the statistical validity of the randomization test for each individual data division. The results suggest that when either of the two phases has considerably fewer measurement times, Type I errors may be too likely and, hence, the decision-making process to be carried out by applied researchers may be jeopardized.
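A minimal sketch of the procedure described above, with the absolute difference in phase means as an illustrative test statistic (the study examines the properties of such tests; this is not its code):

```python
import numpy as np

def ab_randomization_test(y, observed_start, min_phase=3):
    """Randomization test for a two-phase (AB) single-case design.
    The randomization distribution is built from all admissible
    intervention points; statistic = |mean(B) - mean(A)|."""
    y = np.asarray(y, dtype=float)
    starts = range(min_phase, len(y) - min_phase + 1)  # admissible points
    stat = lambda k: abs(y[k:].mean() - y[:k].mean())
    dist = np.array([stat(k) for k in starts])
    observed = stat(observed_start)
    return observed, (dist >= observed).mean()  # p = share as extreme

# Hypothetical series: phase A = first 5 points, phase B = last 5 points
y = [3, 4, 3, 5, 4, 7, 8, 7, 9, 8]
obs, p = ab_randomization_test(y, observed_start=5)
print(f"statistic={obs:.2f}, p={p:.3f}")  # here: statistic=4.00, p=0.200
```

Note that the smallest attainable p-value is 1 over the number of admissible divisions, so designs with few possible intervention points yield coarse randomization distributions.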
Abstract:
In this letter, we obtain the Maximum Likelihood Estimator of position in the framework of Global Navigation Satellite Systems. This theoretical result is the basis of a completely different approach to the positioning problem, in contrast to the conventional two-step position estimation, which consists of estimating the synchronization parameters of the in-view satellites and then performing a position estimation with that information. To the authors' knowledge, this is a novel approach which copes with signal fading and mitigates multipath and jamming interference. Besides, the concept of Position-based Synchronization is introduced, which states that synchronization parameters can be recovered from a user position estimate. We provide computer simulation results showing the robustness of the proposed approach in fading multipath channels. The Root Mean Square Error performance of the proposed algorithm is compared to those achieved with state-of-the-art synchronization techniques. A Sequential Monte Carlo based method is used to deal with the multivariate optimization problem resulting from the ML solution in an iterative way.
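In generic notation (assumed here, not the letter's), the contrast is between the conventional two-step procedure, which first estimates per-satellite delays and then solves the pseudorange equations, and direct ML positioning, which constrains the delays by the receiver geometry:

```latex
% Two-step: estimate delays \hat{\tau}_i first, then position from pseudoranges
\rho_i \;=\; c\,\hat{\tau}_i \;=\; \lVert \mathbf{p} - \mathbf{s}_i \rVert + c\,\delta t + \varepsilon_i
% Direct ML: optimize over position, with delays implied by geometry
\hat{\mathbf{p}} \;=\; \arg\max_{\mathbf{p},\,\delta t}\ \Lambda\bigl(\mathbf{x};\, \boldsymbol{\tau}(\mathbf{p}, \delta t)\bigr)
```

Here $\mathbf{s}_i$ is the $i$-th satellite position, $\delta t$ the receiver clock bias, and $\boldsymbol{\tau}(\mathbf{p}, \delta t)$ the delay vector a candidate position implies; searching over $\mathbf{p}$ directly is what lets the estimator exploit the geometric coupling between channels.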
Abstract:
Report for the scientific sojourn at the Research Institute for Applied Mathematics and Cybernetics, Nizhny Novgorod, Russia, from July to September 2006. Within the project, bifurcations of orbit behavior in area-preserving and reversible maps with a homoclinic tangency were studied. Finitely smooth normal forms for such maps near saddle fixed points were constructed, and it was shown that they coincide in the leading order with the analytical Birkhoff-Moser normal form. Bifurcations of single-round periodic orbits were studied for two-dimensional symplectic maps close to a map with a quadratic homoclinic tangency. The existence of one- and two-parameter cascades of elliptic periodic orbits was proved.
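For orientation, near a saddle fixed point with multipliers $\lambda$ and $\lambda^{-1}$, the analytic Birkhoff-Moser normal form of an area-preserving map can be written as below; this is the standard form from the general theory, quoted here for context rather than from the report:

```latex
\bar{x} = \lambda(xy)\,x, \qquad \bar{y} = \lambda(xy)^{-1}\,y,
\qquad \lambda(s) = \lambda + \beta_1 s + \beta_2 s^2 + \cdots
```

In these coordinates the product $xy$ is invariant under the map; the finitely smooth normal forms mentioned above agree with this form in the leading order.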
Abstract:
In this study I try to explain the systemic problem of the low economic competitiveness of nuclear energy for the production of electricity by carrying out a biophysical analysis of its production process. Given the fact that neither econometric approaches nor one-dimensional methods of energy analysis are effective, I introduce the concept of biophysical explanation as a quantitative analysis capable of handling the inherent ambiguity associated with the concept of energy. In particular, the quantities of energy considered relevant for the assessment can only be measured and aggregated after having agreed on a pre-analytical definition of a grammar characterizing a given set of finite transformations. Using this grammar it becomes possible to provide a biophysical explanation for the low economic competitiveness of nuclear energy in the production of electricity. When comparing the various unit operations of the process of production of electricity with nuclear energy to the analogous unit operations of the process of production of fossil energy, we see that the various phases of the process are the same. The only difference relates to the characteristics of the process associated with the generation of heat, which are completely different in the two systems. Since the cost of production of fossil energy provides the baseline of economic competitiveness of electricity, the (lack of) economic competitiveness of the production of electricity from nuclear energy can be studied by comparing the biophysical costs associated with the different unit operations taking place in nuclear and fossil power plants when generating process heat or net electricity. In particular, the analysis focuses on fossil-fuel requirements and labor requirements for those phases that both nuclear plants and fossil energy plants have in common: (i) mining; (ii) refining/enriching; (iii) generating heat/electricity; (iv) handling the pollution/radioactive wastes. By adopting this approach, it becomes possible to explain the systemic low economic competitiveness of nuclear energy in the production of electricity, because of: (i) its dependence on oil, limiting its possible role as a carbon-free alternative; (ii) the choices made in relation to its fuel cycle, especially whether it includes reprocessing operations or not; (iii) the unavoidable uncertainty in the definition of the characteristics of its process; (iv) its large inertia (lack of flexibility) due to issues of time scale; and (v) its low power level.
Abstract:
The analysis of the phytoplankton and environmental parameters of the time series in Alfacs and Fangar bays (northwestern Mediterranean) from 1990 to 2009 shows some trends. There is an increase in the average water column temperature of 0.11, 0.01, 0.80 and 0.23 ºC for spring, summer, fall and winter, respectively, in Alfacs Bay, and of 1.76, 0.71, 1.33 and 0.89 ºC for spring, summer, fall and winter in Fangar Bay. The trends in phytoplankton populations show a shift in the timing of occurrence of Karlodinium spp. blooms and an increase in the Pseudo-nitzschia spp. abundances. There is a lack of correlation between the average seasonal temperatures and the toxic phytoplankton abundances.
Abstract:
What allows an armed group in a civil war to prevent desertion? This paper addresses this question with a focus on control in the rearguard. Most past studies focus on motivations for desertion. They explain desertion in terms of where soldiers stand in relation to the macro themes of the war, or in terms of an inability to provide positive incentives to overcome the collective action problem. However, since individuals decide whether and how to participate in civil wars for multiple reasons, responding to a variety of local conditions in an environment of threat and violence, a focus only on macro-level motivations is incomplete. The opportunities side of the ledger deserves more attention. I therefore turn my attention to how control by an armed group eliminates soldiers’ opportunities to desert. In particular, I consider the control that an armed group maintains over soldiers’ hometowns, treating geographic terrain as an important exogenous indicator of the ease of control. Rough terrain at home affords soldiers and their families and friends advantages in ease of hiding, the difficulty of using force, and local knowledge. Based on an original dataset of soldiers from Santander Province in the Spanish Civil War, gathered from archival sources, I find statistical evidence that the rougher the terrain in a soldier’s home municipality, the more likely he is to desert. I find complementary qualitative evidence indicating that soldiers from rough-terrain communities took active advantage of their greater opportunities for evasion. This finding has important implications for the way observers interpret different soldiers’ decisions to desert or remain fighting, for the prospect that structural factors may shape the cohesion of armed groups, and for the possibility that local knowledge may be a double-edged sword, making soldiers simultaneously good at fighting and good at deserting.