65 results for SimPly
Abstract:
In this paper, we consider ATM networks in which the virtual path (VP) concept is implemented. The question of how to multiplex two or more diverse traffic classes while providing different quality of service (QOS) requirements is a very complicated open problem. Two distinct options are available: integration and segregation. In an integration approach, all the traffic from different connections is multiplexed onto one VP, which implies that the most restrictive QOS requirements must be applied to all services. Link utilization is therefore decreased, because an unnecessarily stringent QOS is provided to all connections. With the segregation approach, the problem can be much simplified if the different types of traffic are separated by assigning each a VP with dedicated resources (buffers and links). Resources may then not be efficiently utilized, because no sharing of bandwidth can take place across VPs. The probability that the bandwidth required by the accepted connections exceeds the capacity of the link is evaluated as the probability of congestion (PC). Since the PC can be expressed as the cell loss probability (CLP), we shall simply carry out bandwidth allocation using the PC. We first focus on the influence of some parameters (CLP, bit rate and burstiness) on the capacity required by a VP supporting a single traffic class, using the new convolution approach. Numerical results are presented both to compare the required capacity and to observe under which conditions each approach is preferred.
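The abstract does not spell out the convolution approach, so here is a minimal sketch of the general idea only: hypothetical independent on-off sources with integer peak rates (an assumption, not the paper's exact traffic model), whose aggregate bandwidth-demand distribution is built by convolution, with the PC read off as the tail mass above the VP capacity.

```python
import numpy as np

def aggregate_demand_distribution(sources):
    """Convolve the demand distributions of independent on-off sources.
    Each source is (peak, p_active): it needs `peak` bandwidth units with
    probability p_active and 0 otherwise. Returns dist with
    dist[k] = P(total demand == k units)."""
    dist = np.array([1.0])                      # P(demand = 0) = 1
    for peak, p in sources:
        new = np.zeros(len(dist) + peak)
        new[:len(dist)] += dist * (1 - p)       # source silent
        new[peak:] += dist * p                  # source at peak rate
        dist = new
    return dist

def probability_of_congestion(sources, capacity):
    """PC = P(aggregate demand > capacity of the link/VP)."""
    return aggregate_demand_distribution(sources)[capacity + 1:].sum()

# Example: 20 bursty sources (peak 2 units, active 30% of the time)
# multiplexed on a VP of capacity 16 units.
print(probability_of_congestion([(2, 0.3)] * 20, capacity=16))
```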
Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies, ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second how the unit available is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential zero-compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
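As a toy illustration of the two-stage construction (a sketch only: an independent binomial incidence stage plus an additive logistic normal stage on the non-zero parts; all parameter names are invented, and none of the paper's estimation machinery is reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_zero_composition(n, p_present, mu, sigma):
    """Two-stage simulation: (1) independent Bernoulli incidence decides
    which of D parts are non-zero; (2) an additive logistic normal
    composition distributes the unit total over the non-zero parts."""
    D = len(p_present)
    X = np.zeros((n, D))
    for i in range(n):
        present = rng.random(D) < p_present       # stage 1: incidence
        present[rng.integers(D)] = True           # keep at least one part
        k = present.sum()
        if k == 1:
            X[i, present] = 1.0
            continue
        # stage 2: logistic normal on the k non-zero parts
        z = rng.normal(mu, sigma, size=k - 1)     # log-ratios vs last part
        w = np.exp(np.append(z, 0.0))
        X[i, present] = w / w.sum()
    return X

X = simulate_zero_composition(5, p_present=[0.9, 0.7, 0.5, 0.8],
                              mu=0.0, sigma=1.0)
print(X.round(3))   # rows sum to 1; structural zeros where parts are absent
```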
Abstract:
The aim of this project is to develop an application for Android devices. The game is aimed at anyone who wants to learn, or simply enjoy, the game of poker with people they know. The application is intended to make playing poker straightforward for everyone. The system itself is also made responsible for enforcing the rules and, without any need for an internet connection, it allows several people to play at the same time on a single device. This platform was chosen because of its wide reach and ease of use, as well as the possibilities and facilities offered by its compatibility with established technologies such as Java and XML.
Abstract:
In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Quadratic algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring among the highest scoring genes ending at each compatible preceding exon. The algorithm here relies on the simple fact that this highest scoring gene can be stored and updated, which requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Indeed, the definition of valid gene structures is externally defined in the so-called Gene Model. The Gene Model specifies simply which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem. In particular, it allows for multiple-gene two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
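A minimal sketch of the linear-time idea (single strand, no frame compatibility, no Gene Model; all simplifications are mine): exons are scanned by increasing acceptor position while a second scan by increasing donor position folds finished candidates into a running best-prefix score, so each exon is handled in constant time after sorting.

```python
from dataclasses import dataclass

@dataclass
class Exon:
    acceptor: int   # start position
    donor: int      # end position
    score: float

def best_gene_score(exons):
    """Assemble the highest scoring gene from nonoverlapping exons in one
    simultaneous scan by acceptor and donor position."""
    by_acceptor = sorted(exons, key=lambda e: e.acceptor)
    by_donor = sorted(exons, key=lambda e: e.donor)
    best_prev = 0.0      # best score of any gene ending upstream
    best_overall = 0.0
    j = 0                # index into by_donor of already-closed exons
    best_ending = {}     # id(exon) -> best score of a gene ending at exon
    for e in by_acceptor:
        # fold in every exon whose donor lies upstream of this acceptor;
        # such exons were necessarily processed earlier in acceptor order
        while j < len(by_donor) and by_donor[j].donor < e.acceptor:
            best_prev = max(best_prev, best_ending[id(by_donor[j])])
            j += 1
        best_ending[id(e)] = best_prev + e.score
        best_overall = max(best_overall, best_ending[id(e)])
    return best_overall

exons = [Exon(0, 100, 3.2), Exon(150, 300, 1.5), Exon(120, 400, 2.9)]
print(best_gene_score(exons))   # 6.1: first exon chained with third
```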
Abstract:
Business organisations are excellent representations of what in physics and mathematics are designated "chaotic" systems. Because a culture of innovation will be vital for organisational survival in the 21st century, the present paper proposes that viewing organisations in terms of "complexity theory" may assist leaders in fine-tuning managerial philosophies that provide orderly management emphasizing stability within a culture of organised chaos, for it is on the "boundary of chaos" that the greatest creativity occurs. It is argued that 21st century companies, as chaotic social systems, will no longer be effectively managed by rigid objectives (MBO) or by instructions (MBI). Their capacity for self-organisation will be derived essentially from how their members accept a shared set of values or principles for action (MBV). Complexity theory deals with systems that show complex structures in time or space, often hiding simple deterministic rules. This theory holds that once these rules are found, it is possible to make effective predictions and even to control the apparent complexity. The state of chaos that self-organises, thanks to the appearance of the "strange attractor", is the ideal basis for creativity and innovation in the company. In this self-organised state of chaos, members are not confined to narrow roles, and gradually develop their capacity for differentiation and relationships, growing continuously toward their maximum potential contribution to the efficiency of the organisation. In this way, values act as organisers or "attractors" of disorder, which in the theory of chaos are equations represented by unusually regular geometric configurations that predict the long-term behaviour of complex systems. In business organisations (as in all kinds of social systems) the starting principles end up as the final principles in the long term. An attractor is a model representation of the behavioural results of a system. The attractor is not a force of attraction or a goal-oriented presence in the system; it simply depicts where the system is headed based on its rules of motion. Thus, in a culture that cultivates or shares values of autonomy, responsibility, independence, innovation, creativity, and proaction, the risk of short-term chaos is mitigated by an overall long-term sense of direction. A more suitable approach to manage the internal and external complexities that organisations are currently confronting is to alter their dominant culture under the principles of MBV.
Abstract:
When continuous data are coded to categorical variables, two types of coding are possible: crisp coding in the form of indicator, or dummy, variables with values either 0 or 1; or fuzzy coding, where each observation is transformed to a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix; the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories which have the highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resultant orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix. In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded crisply and fuzzily into three categories. Measuring the fit is further discussed when the data set consists of a mixture of discrete and continuous variables.
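For concreteness, a small sketch of fuzzy coding into three categories using triangular membership functions hinged at the minimum, median and maximum of the variable (a common choice; these specific anchors are my assumption, not taken from the paper):

```python
import numpy as np

def fuzzy_code_three(x):
    """Fuzzy-code a continuous variable into three categories with
    triangular ("hinge") membership functions anchored at the minimum,
    median and maximum. Each row of the result sums to 1; crisp coding
    would instead place a single 1 in each row."""
    x = np.asarray(x, dtype=float)
    lo, mid, hi = np.min(x), np.median(x), np.max(x)
    low = np.clip((mid - x) / (mid - lo), 0, 1)     # "low" membership
    high = np.clip((x - mid) / (hi - mid), 0, 1)    # "high" membership
    middle = 1.0 - low - high                       # remainder to "middle"
    return np.column_stack([low, middle, high])

temps = [2.0, 8.5, 14.0, 21.0, 29.5]   # e.g. a meteorological variable
print(fuzzy_code_three(temps).round(2))
```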
Abstract:
Contingent sovereign debt can create important welfare gains. Nonetheless, there is almost no issuance today. Using hand-collected archival data, we examine the first known case of large-scale use of state-contingent sovereign debt in history. Philip II of Spain entered into hundreds of contracts whose value and due date depended on verifiable, exogenous events such as the arrival of silver fleets. We show that this allowed for effective risk-sharing between the king and his bankers. The data also strongly suggest that the defaults that occurred were excusable: they were simply contingencies over which Crown and bankers had not contracted previously.
Abstract:
Does fiscal consolidation lead to social unrest? Using cross-country evidence for the period 1919 to 2008, we examine the extent to which societies become unstable after budget cuts. The results show a clear correlation between fiscal retrenchment and instability. Expenditure cuts are particularly potent in fueling protests; tax rises have only small and insignificant effects. We test if the relationship simply reflects economic downturns, using a recently developed IMF dataset on exogenous expenditure shocks, and conclude that this is not the case. While autocracies and democracies show broadly similar responses to budget cuts, countries with more constraints on the executive are less likely to see unrest after austerity measures. Growing media penetration does not strengthen the effect of cut-backs on the level of unrest. We also find that austerity episodes that result in unrest lead to quick reversals of fiscal policy.
Abstract:
Alan S. Milward was an economic historian who developed an implicit theory of historical change. His interpretation, which was neither liberal nor Marxist, posited that social, political, and economic change, for it to be sustainable, had to be a gradual process rather than one resulting from a sudden, cataclysmic revolutionary event occurring in one sector of the economy or society. Benign change depended much less on natural resource endowment or technological developments than on the ability of state institutions to respond to changing political demands from within each society. State bureaucracies were fundamental to formulating those political demands and advising politicians of ways to meet them. Since each society was different, there was no single model of development to be adopted or which could be imposed successfully by one nation-state on others, either through force or through foreign aid programs. Nor could development be promoted simply by copying the model of a more successful economy. Each nation-state had to find its own response to the political demands arising from within its society. Integration occurred when a number of nation-states shared similar political objectives which they could not meet individually but could meet collectively. It was not simply the result of their increasing interdependence. It was how and whether nation-states responded to these domestic demands which determined the nature of historical change.
Abstract:
Power transformations of positive data tables, prior to applying the correspondence analysis algorithm, are shown to open up a family of methods with direct connections to the analysis of log-ratios. Two variations of this idea are illustrated. The first approach is simply to power the original data and perform a correspondence analysis; this method is shown to converge to unweighted log-ratio analysis as the power parameter tends to zero. The second approach is to apply the power transformation to the contingency ratios, that is, the values in the table relative to expected values based on the marginals; this method converges to weighted log-ratio analysis, or the spectral map. Two applications are described: first, a matrix of population genetic data which is inherently two-dimensional, and second, a larger cross-tabulation with higher dimensionality, from a linguistic analysis of several books.
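A rough numerical check of the first variant (a sketch built from the standard CA definitions, not the authors' code; correlations are compared to sidestep the scale factor involved in the limit): as the power tends to zero, the leading dimension of the CA of the element-wise powered table lines up with that of unweighted log-ratio analysis.

```python
import numpy as np

def ca_row_coords(N):
    """Row principal coordinates of a plain correspondence analysis."""
    P = N / N.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    return (U * s) / np.sqrt(r[:, None])

def lra_row_coords(N):
    """Unweighted log-ratio analysis: SVD of the double-centred log table."""
    L = np.log(N)
    L = L - L.mean(0) - L.mean(1)[:, None] + L.mean()
    U, s, _ = np.linalg.svd(L, full_matrices=False)
    return U * s

N = np.array([[10., 20., 5.], [3., 8., 12.], [9., 4., 6.]])
g = lra_row_coords(N)[:, 0]
for alpha in (1.0, 0.5, 0.1, 0.01):
    f = ca_row_coords(N ** alpha)[:, 0]     # CA of the powered table
    print(f"alpha={alpha}: |corr(CA dim 1, LRA dim 1)| = "
          f"{abs(np.corrcoef(f, g)[0, 1]):.4f}")
```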
Abstract:
This paper proposes a model of financial markets and corporate finance, with asymmetric information and no taxes, where equity issues, bank debt and bond financing may all co-exist in equilibrium. The paper emphasizes the relationship-banking aspect of financial intermediation: firms turn to banks as a source of investment mainly because banks are good at helping them through times of financial distress. The debt-restructuring service that banks may offer, however, is costly. Therefore, the firms which do not expect to be financially distressed prefer to obtain a cheaper market source of funding through bond or equity issues. This explains why bank lending and bond financing may co-exist in equilibrium. The reason why firms or banks also issue equity in our model is simply to avoid bankruptcy. Banks have the additional motive that they need to satisfy minimum capital adequacy requirements. Several types of equilibria are possible, one of which has all the main characteristics of a "credit crunch". This multiplicity implies that the channels of monetary policy may depend on the type of equilibrium that prevails, leading sometimes to support for a "credit view" and other times for the classical "money view".
Abstract:
An important policy issue in recent years concerns the number of people claiming disability benefits for reasons of incapacity for work. We distinguish between work disability, which may have its roots in economic and social circumstances, and health disability, which arises from clearly diagnosed medical conditions. Although there is a link between work and health disability, economic conditions, and in particular the business cycle and variations in the risk of unemployment over time and across localities, may play an important part in explaining both the stock of disability benefit claimants and inflows to and outflows from that stock. We employ a variety of cross-country and country-specific household panel data sets, as well as administrative data, to test whether disability benefit claims rise when unemployment is higher, and also to investigate the impact of unemployment rates on flows on and off the benefit rolls. We find strong evidence that local variations in unemployment have an important explanatory role for disability benefit receipt, with higher total enrolments, lower outflows from rolls and, often, higher inflows into disability rolls in regions and periods of above-average unemployment. Although general subjective measures of self-reported disability and longstanding illness are also positively associated with unemployment rates, inclusion of self-reported health measures does not eliminate the statistical relationship between unemployment rates and disability benefit receipt; indeed, including general measures of health often strengthens that underlying relationship. Intriguingly, we also find some evidence from the United Kingdom and the United States that the prevalence of self-reported objective specific indicators of disability is often pro-cyclical; that is, the incidence of specific forms of disability is pro-cyclical, whereas claims for disability benefits given specific health conditions are counter-cyclical. Overall, the analysis suggests that, for a range of countries and data sets, levels of claims for disability benefits are not simply related to changes in the incidence of health disability in the population and are strongly influenced by prevailing economic conditions. We discuss the policy implications of these various findings.
Abstract:
In the analysis of multivariate categorical data, typically the analysis of questionnaire data, it is often advantageous, for substantive and technical reasons, to analyse a subset of response categories. In multiple correspondence analysis, where each category is coded as a column of an indicator matrix or as a row and column of the Burt matrix, it is not correct to simply analyse the corresponding submatrix of data, since the whole geometric structure is different for the submatrix. A simple modification of the correspondence analysis algorithm allows the overall geometric structure of the complete data set to be retained while calculating the solution for the selected subset of points. This strategy is useful for analysing patterns of response amongst any subset of categories and relating these patterns to demographic factors, especially for studying patterns of particular responses such as missing and neutral responses. The methodology is illustrated using data from the International Social Survey Program on Family and Changing Gender Roles in 1994.
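A compact sketch of what such a modification can look like (my reading of the abstract, not the authors' published algorithm): the masses and centring come from the complete table, and only the selected columns of the standardised residual matrix are decomposed.

```python
import numpy as np

def subset_ca(N, cols):
    """Correspondence analysis of a column subset that retains the
    geometry of the complete table: masses and centring use the FULL
    matrix N; the SVD sees only the selected columns."""
    P = N / N.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)              # full-table masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, s, Vt = np.linalg.svd(S[:, cols], full_matrices=False)
    row_coords = (U * s) / np.sqrt(r[:, None])
    col_coords = (Vt.T * s) / np.sqrt(c[cols][:, None])
    return row_coords, col_coords, s ** 2            # coords and inertias

N = np.array([[25., 10., 5., 8.],
              [12., 20., 9., 3.],
              [ 6.,  7., 18., 11.]])
_, _, inertias = subset_ca(N, cols=[1, 2])           # analyse 2 of 4 columns
print(inertias)
```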
Abstract:
The problem of obesity is alarming public health authorities around the world. Therefore, it is important to study its determinants. In this paper we explore the empirical relationship between household income and body mass index (BMI) in nine European Union countries. Our findings suggest that the association is negative for women, but we find no statistically significant relationship for men. However, we show that the different relationship for men and women appears to be driven by the negative relationship for women between BMI and individual income from work. We tentatively conclude that the negative relationship between household income and BMI for women may simply be capturing the wage penalty that obese women suffer in the labor market.