49 results for "Predicted genotypic values"


Relevance: 20.00%

Abstract:

We prove that any subanalytic locally Lipschitz function has the Sard property. Such functions are typically nonsmooth, and their lack of regularity necessitates the choice of some generalized notion of gradient and of critical point. In our framework these notions are defined in terms of the Clarke and convex-stable subdifferentials. The main result of this note asserts that for any subanalytic locally Lipschitz function the set of its Clarke critical values is locally finite. The proof relies on Pawlucki's extension of the Puiseux lemma. In the last section we give an example of a continuous subanalytic function which is not constant on a segment of "broadly critical" points, that is, points for which we can find arbitrarily short convex combinations of gradients at nearby points.
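
For reference, the Clarke notions involved can be stated as follows (a standard characterization for locally Lipschitz functions, not taken from the abstract itself; the notation is ours):

\[ \partial^{\circ} f(x) \;=\; \mathrm{co}\,\Bigl\{ \lim_{i\to\infty} \nabla f(x_i) \;:\; x_i \to x,\ f \text{ differentiable at } x_i \Bigr\}, \]

and x is a Clarke critical point when 0 \in \partial^{\circ} f(x); the Clarke critical values are the values f(x) attained at such points, and the main result says that this set of values is locally finite.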

Relevance: 20.00%

Abstract:

This text was presented at the 16th International Seminar on Olympic Studies for Postgraduate Students, organised by the International Olympic Academy in Ancient Olympia from 1 to 30 July 2008. It first reviews fundamental Olympic concepts such as the Olympic values and the educational mandate of Pierre de Coubertin, the Olympic brand and symbols, sponsorship, and the Olympic partner programme. A chapter then addresses the TOP sponsors' educational initiatives on Olympic values, describing in particular the Olympic sponsors' involvement in education and the TOP sponsors' educational activities. Finally, the author analyses the role of sponsorship in the promotion of Olympic Values Education, providing conclusions, comments on future perspectives, and some recommendations.

Relevance: 20.00%

Abstract:

This paper was presented at the International Symposium on Toward the Creation of New-Sport Cultures, held in Osaka, Japan, on January 28, 1996. Its main purpose is to interpret the cultural values of sport and Olympism in contemporary society, considering the enormous influence that the media have on them.

Relevance: 20.00%

Abstract:

Using theory and empirical data from social psychology to measure cultural differences between countries, we study the effect of individualism as defined by Hofstede (1980) and egalitarianism as defined by Schwartz (1994, 1999, 2004) on earnings management. We find a significant influence of both cultural measures. In line with Licht et al. (2004), who argue that individualistic societies may be less susceptible to corruption, we find that countries scoring high on individualism tend to have lower levels of earnings management. In addition, we find that egalitarianism, defined as a society's cultural orientation with respect to intolerance of abuses of market and political power, is negatively related to earnings management. Our results are robust to different specifications and controls. The main message of this paper is that, besides formal institutions, cultural differences are relevant for explaining earnings management behaviour. We think that our work adds to the understanding of the importance of cultural values in managerial behaviour across countries, contributing to the literature on earnings management and on law and institutions.
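
A minimal sketch of the kind of cross-country specification such findings suggest (the variable names and functional form are our assumptions, not taken from the paper):

\[ EM_c = \beta_0 + \beta_1\, IDV_c + \beta_2\, EGAL_c + \gamma' X_c + \varepsilon_c, \]

where EM_c is a country-level earnings-management score, IDV_c the Hofstede individualism index, EGAL_c the Schwartz egalitarianism score, and X_c a vector of institutional controls; the reported results correspond to estimates \beta_1 < 0 and \beta_2 < 0.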

Relevance: 20.00%

Abstract:

We consider cooperative environments with externalities (games in partition function form) and provide a recursive definition of dividends for each coalition and any partition of the players it belongs to. We show that, with this definition and equal sharing of these dividends, the averaged sum of dividends for each player, over all the coalitions that contain the player, coincides with the corresponding average value of the player. We then construct weighted Shapley values by departing from equal division of dividends and, finally, for each such value, provide a bidding mechanism implementing it.
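
For the benchmark case without externalities, the dividend recursion and the equal-sharing construction behind this approach can be sketched as follows (a minimal illustration of the classical Harsanyi-dividend idea that the paper extends to partition function form; function names are ours):

```python
from itertools import combinations

def harsanyi_dividends(v, players):
    """Classical (externality-free) Harsanyi dividends:
    d(S) = v(S) - sum of d(T) over all proper non-empty subsets T of S."""
    d = {}
    # iterate coalitions by increasing size so all subsets are already known
    for size in range(1, len(players) + 1):
        for S in combinations(players, size):
            sub = sum(d[T] for r in range(1, size) for T in combinations(S, r))
            d[S] = v(frozenset(S)) - sub
    return d

def shapley_via_dividends(v, players):
    """Shapley value obtained by giving each member an equal share d(S)/|S|
    of the dividend of every coalition S containing the player."""
    d = harsanyi_dividends(v, players)
    return {i: sum(d[S] / len(S) for S in d if i in S) for i in players}

# Example: a 3-player game where only the grand coalition creates value.
v = lambda S: 1.0 if len(S) == 3 else 0.0
print(shapley_via_dividends(v, (1, 2, 3)))  # each player gets 1/3
```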

Relevance: 20.00%

Abstract:

We examine the interactions between individual behavior, sentiments and the social contract in a model of rational voting over redistribution. Agents have moral "work values". Individuals' self-esteem and social consideration of others are endogenously determined by comparing behaviors to moral standards. Attitudes toward redistribution depend on self-interest and social preferences. We characterize the politico-economic equilibria in which sentiments, labor supply and redistribution are determined simultaneously. The equilibria feature different degrees of "social cohesion" and redistribution depending on pre-tax income inequality. In clustered equilibria the poor are held partly responsible for their low income, since they work less than the moral standard, and hence redistribution is low. The paper proposes a novel explanation for the emergence of different sentiments and social contracts across countries. The predictions appear broadly in line with well-documented differences between the United States and Europe.

Relevance: 20.00%

Abstract:

Background and Purpose: Early prediction of motor outcome is of interest in stroke management. We aimed to determine whether lesion location at DTT is predictive of motor outcome after acute stroke and whether this information improves the predictive accuracy of the clinical scores. Methods: We evaluated 60 consecutive patients within 12 hours of MCA stroke onset. We used DTT to evaluate CST involvement in the MC and PMC, CS, CR, and PLIC, and in combinations of these regions, at admission, at day 3, and at day 30. Severity of limb weakness was assessed using the m-NIHSS (items 5a, 5b, 6a, 6b). We calculated infarct volumes and FA values in the CST of the pons. Results: Acute damage to the PLIC was the best predictor associated with poor motor outcome, axonal damage, and clinical severity at admission (P<.001). There was no significant correlation between acute infarct volume and motor outcome at day 90 (P=.176, r=0.485). The sensitivity, specificity, and positive and negative predictive values of acute CST involvement at the level of the PLIC for poor motor outcome at day 90 were 73.7%, 100%, 100%, and 89.1%, respectively. In the acute stage, DTT predicted motor outcome at day 90 better than the clinical scores (R2=75.50, F=80.09, P<.001). Conclusions: In the acute setting, DTT is promising for stroke mapping to predict motor outcome. Acute CST damage at the level of the PLIC is a significant predictor of unfavorable motor outcome.
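
For reference, the reported predictive figures follow the standard definitions (not specific to this study; TP, FP, TN, FN denote true/false positives/negatives with respect to poor outcome):

\[ \mathrm{Sensitivity} = \frac{TP}{TP+FN}, \quad \mathrm{Specificity} = \frac{TN}{TN+FP}, \quad \mathrm{PPV} = \frac{TP}{TP+FP}, \quad \mathrm{NPV} = \frac{TN}{TN+FN}. \]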

Relevance: 20.00%

Abstract:

Given an observed test statistic and its degrees of freedom, one may compute the observed P value with most statistical packages. It is unknown to what extent test statistics and P values are congruent in published medical papers. Methods: We checked the congruence of statistical results reported in all the papers of volumes 409–412 of Nature (2001) and a random sample of 63 results from volumes 322–323 of BMJ (2001). We also tested whether the frequencies of the last digit of a sample of 610 test statistics deviated from a uniform distribution (i.e., equally probable digits). Results: 11.6% (21 of 181) and 11.1% (7 of 63) of the statistical results published in Nature and BMJ, respectively, during 2001 were incongruent, probably mostly due to rounding, transcription, or typesetting errors. At least one such error appeared in 38% and 25% of the papers of Nature and BMJ, respectively. In 12% of the cases, the significance level might change by one or more orders of magnitude. The frequencies of the last digit of the statistics deviated from the uniform distribution and suggested digit preference in rounding and reporting. Conclusions: This incongruence of test statistics and P values is another example that statistical practice is generally poor, even in the most renowned scientific journals, and that the quality of papers should be more controlled and valued.
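
A minimal sketch of the kind of recomputation such a congruence check involves, here for a two-sided t test (the numbers are hypothetical and the function names are ours; the paper's checks covered whatever test statistics the articles reported):

```python
from scipy import stats

def p_from_t(t, df):
    """Two-sided P value implied by a reported t statistic and its df."""
    return 2 * stats.t.sf(abs(t), df)

def congruent(reported_p, t, df, decimals=3):
    """A result is incongruent when the recomputed P value, rounded to the
    reported precision, does not match the published one."""
    return round(p_from_t(t, df), decimals) == round(reported_p, decimals)

# Hypothetical reported result: "t = 2.10, df = 25, P = 0.046"
print(p_from_t(2.10, 25))          # recomputed two-sided P value (about 0.046)
print(congruent(0.046, 2.10, 25))  # does the reported P match it after rounding?
```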

Relevance: 20.00%

Abstract:

All of the imputation techniques usually applied for replacing values below the detection limit in compositional data sets have adverse effects on the variability. In this work we propose a modification of the EM algorithm that is applied using the additive log-ratio transformation. This new strategy is applied to a compositional data set and the results are compared with the usual imputation techniques.
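
For reference, the additive log-ratio (alr) transformation on which the modified EM step operates can be sketched as follows (only the transformation and its inverse are shown, not the imputation algorithm itself; names are ours):

```python
import numpy as np

def alr(x):
    """Additive log-ratio transform of a composition x, using the last part
    as the common divisor."""
    x = np.asarray(x, dtype=float)
    return np.log(x[:-1] / x[-1])

def alr_inv(y):
    """Back-transform from alr coordinates to a composition summing to 1."""
    z = np.append(np.exp(y), 1.0)
    return z / z.sum()

x = np.array([0.6, 0.3, 0.1])
print(alr(x))           # [log(6), log(3)]
print(alr_inv(alr(x)))  # recovers [0.6, 0.3, 0.1]
```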

Relevance: 20.00%

Abstract:

The main objective of this paper is to develop a methodology that takes into account the human factor extracted from the database used by recommender systems and that makes it possible to address the specific problems of prediction and recommendation. In this work, we propose to extract each user's human values scale from the user database in order to improve suitability in open environments such as recommender systems. For this purpose, the methodology is applied to the user's data after interaction with the system. The methodology is illustrated with a case study.

Relevance: 20.00%

Abstract:

We describe a simple method to automate the geometric optimization of molecular orbital calculations of supermolecules on potential surfaces that are corrected for basis set superposition error using the counterpoise (CP) method. This method is applied to the H-bonding complexes HF/HCN, HF/H2O, and HCCH/H2O using the 6-31G(d,p) and D95++(d,p) basis sets at both the Hartree-Fock and second-order Møller-Plesset levels. We report the interaction energies, geometries, and vibrational frequencies of these complexes on the CP-optimized surfaces and compare them with similar values calculated using traditional methods, including the more traditional single-point CP correction. Upon optimization on the CP-corrected surface, the interaction energies become more negative (before vibrational corrections) and the H-bonding stretching vibrations decrease in all cases. The extent of the effects varies from extremely small to quite large depending on the complex and the calculational method. The relative magnitudes of the vibrational corrections cannot be predicted from the H-bond stretching frequencies alone.
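
For readers unfamiliar with the counterpoise scheme, the CP-corrected interaction energy for a dimer AB is conventionally written as (standard Boys–Bernardi form; the notation is ours, and the geometry-optimization variant described above applies this correction at every point of the surface):

\[ \Delta E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}(\chi_{AB}) - E_{A}(\chi_{AB}) - E_{B}(\chi_{AB}), \]

where each monomer energy is evaluated at its geometry within the complex but in the full dimer basis \chi_{AB} (i.e., with ghost functions on the partner), so that the basis set superposition error largely cancels.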

Relevance: 20.00%

Abstract:

The behaviour of a new elastoplastic shear link dissipator has been analysed in the first part of this paper. The second part describes experimental and numerical studies for an SDOF non-standard dual system protected with shear dissipators. High- and intermediate-stiffness dual systems equipped with this device have presented smaller values of the base shear force and the interstory drift when compared with the response of linear elastic systems. It has been observed that most of the input energy is dissipated when a low ratio between the main frame stiffness and the dissipation system stiffness is maintained. It has also been observed that a higher ratio between the dissipator yielding force and the total mass leads to a more reduced structural response. Finally, it has been observed that the absorbed energy can be predicted using the velocity pseudo-spectrum and an effective fundamental period, defined using the minimum secant stiffness of the dual system.
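
One plausible reading of the last statement, using standard SDOF relations (this is our interpretation; the abstract does not spell out the formulas): with M the total mass and K_sec,min the minimum secant stiffness of the dual system,

\[ T_{\mathrm{eff}} = 2\pi \sqrt{\frac{M}{K_{\mathrm{sec,min}}}}, \qquad E_{\mathrm{abs}} \approx \tfrac{1}{2}\, M\, \bigl[S_{pv}(T_{\mathrm{eff}})\bigr]^{2}, \]

where S_pv is the pseudo-velocity spectrum of the ground motion evaluated at the effective period.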

Relevance: 20.00%

Abstract:

In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Polynomial algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring among the highest scoring genes ending at each compatible preceding exon. The algorithm here relies on the simple fact that such a highest scoring gene can be stored and updated. This requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Indeed, the definition of valid gene structures is externally defined in the so-called Gene Model. The Gene Model simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem. In particular, it allows for multiple-gene two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
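
The linear-time idea can be illustrated with a deliberately simplified sketch (frame compatibility, strands and the external Gene Model are omitted, and all names are ours; "compatible" here just means that the previous exon's donor lies before the current exon's acceptor):

```python
def best_gene_score(exons):
    """Highest score of a chain of non-overlapping exons.

    Each exon is a dict with 'acceptor' (start), 'donor' (end) and 'score'.
    The set is scanned simultaneously by increasing acceptor and by
    increasing donor position, so every exon is handled a constant number
    of times and the running time grows linearly after sorting.
    """
    by_acceptor = sorted(exons, key=lambda e: e['acceptor'])
    by_donor = sorted(exons, key=lambda e: e['donor'])

    best_ending_at = {}   # id(exon) -> best score of a gene ending at that exon
    best_closed = 0.0     # best score among exons whose donor is already passed
    j = 0                 # pointer into the donor-sorted list

    for e in by_acceptor:
        # "Close" every exon ending before this acceptor, keeping only the
        # running maximum instead of re-scanning all preceding exons.
        while j < len(by_donor) and by_donor[j]['donor'] < e['acceptor']:
            best_closed = max(best_closed, best_ending_at[id(by_donor[j])])
            j += 1
        best_ending_at[id(e)] = e['score'] + best_closed

    return max(best_ending_at.values(), default=0.0)

exons = [
    {'acceptor': 10,  'donor': 50,  'score': 3.0},
    {'acceptor': 20,  'donor': 70,  'score': 5.0},
    {'acceptor': 60,  'donor': 90,  'score': 2.0},
    {'acceptor': 100, 'donor': 140, 'score': 4.0},
]
print(best_gene_score(exons))  # 9.0, e.g. the chain of the 2nd and 4th exons
```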

Relevance: 20.00%

Abstract:

Business organisations are excellent representations of what in physics and mathematics are designated "chaotic" systems. Because a culture of innovation will be vital for organisational survival in the 21st century, the present paper proposes that viewing organisations in terms of "complexity theory" may assist leaders in fine-tuning managerial philosophies that provide orderly management emphasizing stability within a culture of organised chaos, for it is on the "boundary of chaos" that the greatest creativity occurs. It is argued that 21st century companies, as chaotic social systems, will no longer be effectively managed either by rigid objectives (MBO) or by instructions (MBI). Their capacity for self-organisation will be derived essentially from how their members accept a shared set of values or principles for action (MBV). Complexity theory deals with systems that show complex structures in time or space, often hiding simple deterministic rules. This theory holds that once these rules are found, it is possible to make effective predictions and even to control the apparent complexity. The state of chaos that self-organises, thanks to the appearance of the "strange attractor", is the ideal basis for creativity and innovation in the company. In this self-organised state of chaos, members are not confined to narrow roles, and gradually develop their capacity for differentiation and relationships, growing continuously toward their maximum potential contribution to the efficiency of the organisation. In this way, values act as organisers or "attractors" of disorder, which in the theory of chaos are equations represented by unusually regular geometric configurations that predict the long-term behaviour of complex systems. In business organisations (as in all kinds of social systems) the starting principles end up as the final principles in the long term. An attractor is a model representation of the behavioural results of a system. The attractor is not a force of attraction or a goal-oriented presence in the system; it simply depicts where the system is headed based on its rules of motion. Thus, in a culture that cultivates or shares values of autonomy, responsibility, independence, innovation, creativity, and proaction, the risk of short-term chaos is mitigated by an overall long-term sense of direction. A more suitable approach to managing the internal and external complexities that organisations are currently confronting is to alter their dominant culture under the principles of MBV.

Relevance: 20.00%

Abstract:

When continuous data are coded to categorical variables, two types of coding are possible: crisp coding in the form of indicator, or dummy, variables with values either 0 or 1; or fuzzy coding where each observation is transformed to a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix - the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories which have the highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resultant orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix. In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded crisply and fuzzily into three categories. Measuring the fit is further discussed when the data set consists of a mixture of discrete and continuous variables.
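
As a concrete illustration of the difference between crisp and fuzzy coding into three categories (a minimal sketch with assumed hinge points; the membership functions actually used in the paper may differ):

```python
import numpy as np

def fuzzy_code(x, low, mid, high):
    """Fuzzy-code a continuous value into three membership degrees summing
    to 1, using piecewise-linear membership functions with hinge points
    low < mid < high."""
    if x <= low:
        return np.array([1.0, 0.0, 0.0])
    if x <= mid:
        t = (x - low) / (mid - low)
        return np.array([1.0 - t, t, 0.0])
    if x <= high:
        t = (x - mid) / (high - mid)
        return np.array([0.0, 1.0 - t, t])
    return np.array([0.0, 0.0, 1.0])

def crisp_code(x, low_cut, high_cut):
    """Crisp (indicator/dummy) coding of the same value into three categories."""
    if x < low_cut:
        return np.array([1, 0, 0])
    if x < high_cut:
        return np.array([0, 1, 0])
    return np.array([0, 0, 1])

# A temperature of 17.5 degrees with hinges at 10, 15 and 25:
print(fuzzy_code(17.5, 10, 15, 25))   # [0.   0.75 0.25]
print(crisp_code(17.5, 15, 25))       # [0 1 0]
```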