Abstract:
The atomic mean square displacement (MSD) and the phonon dispersion curves (PDCs) of a number of face-centred cubic (fcc) and body-centred cubic (bcc) materials have been calculated from the quasiharmonic (QH) theory, the lowest-order (λ²) perturbation theory (PT), and a recently proposed Green's function (GF) method by Shukla and Hübschle. The latter method includes certain anharmonic effects to all orders of anharmonicity. In order to determine the effect of the range of the interatomic interaction upon the anharmonic contributions to the MSD, we have carried out our calculations for a Lennard-Jones (L-J) solid in the nearest-neighbour (NN) and next-nearest-neighbour (NNN) approximations. These results can be presented in dimensionless units, but if the NN and NNN results are to be compared with each other they must be converted to those of a real solid. When this is done for Xe, the QH MSDs for the NN and NNN approximations are found to differ from each other by about 2%. For the λ² and GF results this difference amounts to 8% and 7%, respectively. For the NN case we have also compared our PT results, which have been calculated exactly, with PT results calculated using a frequency-shift approximation. We conclude that the frequency-shift approximation is a poor one. We have calculated the MSD of five alkali metals, five bcc transition metals, and seven fcc transition metals. The model potentials we have used include the Morse, modified Morse, and Rydberg potentials. In general, the results obtained from the GF method are in the best agreement with experiment. However, this improvement is mostly qualitative, and the MSD values calculated from the GF method are not in much better agreement with the experimental data than those calculated from the QH theory. We have calculated the PDCs of Na and Cu using the four-parameter modified Morse potential.
In the case of Na, our results for the PDCs are in poor agreement with experiment. In the case of Cu, the agreement between theory and experiment is much better; in addition, the PDCs calculated from the GF method are in better agreement with experiment than those obtained from the QH theory.
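For context (the notation here is ours, not taken from the abstract): in the quasiharmonic theory the total atomic MSD of a monatomic cubic crystal takes the standard lattice-dynamics form

```latex
\langle u^{2} \rangle
  \;=\;
  \frac{\hbar}{2 m N}
  \sum_{\mathbf{q},\,j}
  \frac{1}{\omega_{j}(\mathbf{q})}
  \,\coth\!\left( \frac{\hbar\,\omega_{j}(\mathbf{q})}{2 k_{B} T} \right),
```

where m is the atomic mass, N the number of unit cells, and ω_j(q) the quasiharmonic phonon frequencies; the anharmonic (PT and GF) results correct these frequencies and hence the MSD.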
Abstract:
The Portuguese community is one of the largest diasporic groups in the Greater Toronto Area (GTA), and the choice to retain and transmit language and culture to Luso-Canadians is crucial to the development and sustainability of the community. The overall objective of this study is to learn about the factors that influence Luso-Canadian mothers' inclination to teach their children the Portuguese language and to foster cultural retention. To explore this topic I employed a qualitative research design that included in-depth interviews conducted in 2012 with six Luso-Canadian mothers. Three central arguments emerged from the findings. First, the Luso-Canadian mothers interviewed possess a pronounced desire for their children to succeed academically and to provide their children with opportunities that they themselves did not have. Second, five of the mothers attempt to achieve this mothering objective partly by disconnecting from their Portuguese roots and by disassociating their children from the Portuguese language and culture. Third, the disconnection they experience and enact is influenced by divisions evident in the Portuguese community in the GTA, which separate regions and hierarchically rank dialects and groups. I conclude that the children in these households inevitably bear the prospects of maintaining a vibrant Portuguese community in the GTA, and I propose that the community's ranking of dialects influences mothers' decisions about transmitting language and culture to their children.
Abstract:
Presentation at Brock Library Spring Symposium 2015: What's really going on?
Abstract:
Presently, conditions ensuring the validity of bootstrap methods for the sample mean of (possibly heterogeneous) near epoch dependent (NED) functions of mixing processes are unknown. Here we establish the validity of the bootstrap in this context, extending the applicability of bootstrap methods to a class of processes broadly relevant for applications in economics and finance. Our results apply to two block bootstrap methods: the moving blocks bootstrap of Künsch (1989) and Liu and Singh (1992), and the stationary bootstrap of Politis and Romano (1994). In particular, the consistency of the bootstrap variance estimator for the sample mean is shown to be robust against heteroskedasticity and dependence of unknown form. The first-order asymptotic validity of the bootstrap approximation to the actual distribution of the sample mean is also established in this heterogeneous NED context.
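The moving blocks bootstrap referred to above can be sketched as follows; function and parameter names are illustrative, not taken from the paper, and the AR(1) series is only a stand-in for a dependent process:

```python
import numpy as np

def moving_blocks_bootstrap_mean(x, block_len, n_boot=1000, rng=None):
    """Moving blocks bootstrap (Kunsch 1989; Liu and Singh 1992) for the
    sample mean: draw overlapping blocks of length block_len with
    replacement and concatenate them to rebuild a series of length n."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    n = len(x)
    n_blocks_avail = n - block_len + 1          # number of overlapping blocks
    k = int(np.ceil(n / block_len))             # blocks needed per resample
    means = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n_blocks_avail, size=k)
        resample = np.concatenate([x[s:s + block_len] for s in starts])[:n]
        means[b] = resample.mean()
    return means

# Bootstrap variance of the sample mean for a dependent (AR(1)) series
rng = np.random.default_rng(0)
e = rng.standard_normal(500)
x = np.empty(500)
x[0] = e[0]
for t in range(1, 500):
    x[t] = 0.5 * x[t - 1] + e[t]
boot_means = moving_blocks_bootstrap_mean(x, block_len=10, n_boot=2000, rng=1)
print(boot_means.std() ** 2)   # bootstrap estimate of Var(sample mean)
```

The block length trades bias against variance: blocks must grow with the sample size for the estimator to capture the dependence, which is the role of the blocking conditions in results of this kind.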
Abstract:
It is often thought that a tariff reduction, by opening up the domestic market to foreign firms, should lessen the need for a policy aimed at discouraging domestic mergers. This implicitly assumes that the tariff in question is sufficiently high to prevent foreign firms from selling in the domestic market. However, not all tariffs are prohibitive, so foreign firms may be present in the domestic market even before the tariff is abolished. Furthermore, even if the tariff is prohibitive, a merger of domestic firms may render it nonprohibitive, thus inviting foreign firms to penetrate the domestic market. In this paper, we show, using a simple example, that in the latter two cases abolishing the tariff may in fact make the domestic merger more profitable. Hence, trade liberalization will not necessarily reduce the profitability of domestic mergers.
Abstract:
By reporting his satisfaction with his job or any other experience, an individual does not communicate the number of utils that he feels. Instead, he expresses his posterior preference over available alternatives conditional on acquired knowledge of the past. This new interpretation of reported job satisfaction restores the power of microeconomic theory without denying the essential role of discrepancies between one's situation and available opportunities. Posterior human wealth discrepancies are found to be the best predictor of reported job satisfaction. Static models of relative utility and other subjective well-being assumptions are all unambiguously rejected by the data, as well as an "economic" model in which job satisfaction is a measure of posterior human wealth. The "posterior choice" model readily explains why so many people usually report themselves as happy or satisfied, why both younger and older age groups are insensitive to current earning discrepancies, and why the past weighs more heavily than the present and the future.
Abstract:
In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which include normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken's mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to shed more light on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is not rejected as frequently once the possibility of non-normal errors is allowed for.
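The Gaussian benchmark these tests reduce to, the Gibbons-Ross-Shanken (GRS) statistic, can be sketched as follows; the function and the simulated data are illustrative, not from the paper:

```python
import numpy as np

def grs_test(excess_returns, market_excess):
    """Gibbons-Ross-Shanken (1989) mean-variance efficiency test with one
    factor. excess_returns: (T, N) asset excess returns; market_excess: (T,)
    market excess return. Returns the GRS F-statistic, distributed
    F(N, T-N-1) under the null that all pricing errors (alphas) are zero."""
    T, N = excess_returns.shape
    X = np.column_stack([np.ones(T), market_excess])      # intercept + factor
    B, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)
    alpha = B[0]                                          # (N,) intercepts
    resid = excess_returns - X @ B
    Sigma = resid.T @ resid / T                           # MLE residual cov
    mu_m = market_excess.mean()
    sig2_m = market_excess.var()                          # MLE factor variance
    return ((T - N - 1) / N) * (alpha @ np.linalg.solve(Sigma, alpha)) \
           / (1 + mu_m ** 2 / sig2_m)

# Simulated example: returns generated under the null (all alphas = 0)
rng = np.random.default_rng(42)
T, N = 300, 5
mkt = 0.005 + 0.04 * rng.standard_normal(T)
betas = rng.uniform(0.5, 1.5, N)
rets = mkt[:, None] * betas + 0.02 * rng.standard_normal((T, N))
print(grs_test(rets, mkt))   # should be moderate (near 1) under the null
```

The exact tests in the paper generalize this construction to non-Gaussian error distributions, where the F critical values no longer apply and simulation-based (Monte Carlo) p-values are used instead.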
Abstract:
Research report
Abstract:
Static oligopoly analysis predicts that if a single firm in Cournot equilibrium were constrained to contract its production marginally, its profits would fall. On the other hand, if all the firms were simultaneously constrained to reduce their production, thus moving the industry towards the monopoly output, each firm's profit would rise. We show that these very intuitive results may not hold in a dynamic oligopoly.
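The static benchmark can be illustrated numerically; the linear demand parameters below are hypothetical and not taken from the paper:

```python
# Static Cournot benchmark with linear demand P = a - b*Q and zero cost.
a, b, n = 10.0, 1.0, 2            # demand intercept, slope, number of firms

q_star = a / (b * (n + 1))        # symmetric Cournot equilibrium output

def profit(q_own, q_others):
    """Profit of one firm given its own output and rivals' total output."""
    P = a - b * (q_own + q_others)
    return P * q_own

pi_eq = profit(q_star, (n - 1) * q_star)

# One firm alone contracts its output by 5%: its profit falls.
pi_one = profit(0.95 * q_star, (n - 1) * q_star)

# All firms contract by 5% (towards the monopoly output): each profit rises.
pi_all = profit(0.95 * q_star, (n - 1) * 0.95 * q_star)

print(pi_one < pi_eq < pi_all)    # True in the static model
```

The paper's point is precisely that once firms interact repeatedly over time, this ordering of profits need not survive.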
Abstract:
Étude de cas / Case study
Abstract:
Computational biology is the research area that contributes to the analysis of biological data through the development of algorithms which address significant research problems. The data from molecular biology include DNA, RNA, protein, and gene expression data. Gene expression data provide the expression level of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which in turn are later translated into proteins. The number of copies of mRNA produced is called the expression level of a gene. Gene expression data are organized in the form of a matrix: rows represent genes, columns represent experimental conditions, and entries are real values. Experimental conditions can be different tissue types or time points. Through the analysis of gene expression data it is possible to determine behavioural patterns of genes, such as the similarity of their behaviour, the nature of their interaction, and their respective contributions to the same pathways. Similar expression patterns are exhibited by genes participating in the same biological process. These patterns have immense relevance and application in bioinformatics and clinical research; in the medical domain they aid more accurate diagnosis, prognosis, treatment planning, drug discovery, and protein network analysis. To identify such patterns from gene expression data, data mining techniques are essential. Clustering is an important data mining technique for the analysis of gene expression data. To overcome the problems associated with clustering, biclustering is introduced. Biclustering refers to the simultaneous clustering of both rows and columns of a data matrix.
Clustering is a global model, whereas biclustering is a local one. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise. It is therefore necessary to move beyond the clustering paradigm towards approaches capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix; its rows and columns need not be contiguous as in the gene expression data matrix, and biclusters are not disjoint. Computation of biclusters is costly because one would have to consider all combinations of rows and columns in order to find all the biclusters. The search space for the biclustering problem is 2^(m+n), where m and n are the numbers of genes and conditions respectively; usually m+n is more than 3000, and the biclustering problem is NP-hard. Biclustering is a powerful analytical tool for the biologist. The research reported in this thesis addresses the problem of biclustering. Ten algorithms are developed for the identification of coherent biclusters from gene expression data. All these algorithms make use of a measure called the mean squared residue (MSR) to search for biclusters. The objective is to identify biclusters of maximum size with a mean squared residue lower than a given threshold.
All these algorithms begin the search from tightly co-regulated submatrices called seeds, which are generated by the K-Means clustering algorithm. The algorithms developed can be classified as constraint-based, greedy, and metaheuristic. Constraint-based algorithms use one or more constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. In the metaheuristic approaches, Particle Swarm Optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) are used for the identification of biclusters. These algorithms are applied to the Yeast and Lymphoma datasets. Biologically relevant and statistically significant biclusters, validated against the Gene Ontology database, are identified by all these algorithms, and all are compared with other biclustering algorithms. The algorithms developed in this work overcome some of the problems associated with existing algorithms. With the help of some of them, biclusters with very high row variance (higher than the row variance achieved by any other algorithm using the mean squared residue) are identified in both the Yeast and Lymphoma datasets. Such biclusters, which capture significant changes in expression level, are highly relevant biologically.
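The mean squared residue is not defined in the abstract itself; assuming the standard Cheng and Church formulation, it can be sketched as follows (NumPy, illustrative names):

```python
import numpy as np

def mean_squared_residue(submatrix):
    """Mean squared residue (MSR) of a bicluster, as in Cheng and Church:
    H(I,J) = mean over (i,j) of (a_ij - a_iJ - a_Ij + a_IJ)^2,
    where a_iJ, a_Ij, a_IJ are the row, column, and overall means."""
    A = np.asarray(submatrix, dtype=float)
    row_means = A.mean(axis=1, keepdims=True)   # a_iJ
    col_means = A.mean(axis=0, keepdims=True)   # a_Ij
    overall = A.mean()                          # a_IJ
    residue = A - row_means - col_means + overall
    return float((residue ** 2).mean())

# A perfectly coherent (additive) bicluster: each row is a shifted copy
# of the same profile, so its MSR is zero up to floating-point error.
coherent = np.array([[1.0, 2.0, 5.0],
                     [3.0, 4.0, 7.0],
                     [0.0, 1.0, 4.0]])
print(mean_squared_residue(coherent))   # ~0 (additive coherence)
```

A low MSR alone does not rule out trivial flat biclusters, which is why the thesis also tracks row variance: a bicluster with low MSR and high row variance exhibits coherent but strongly varying expression.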
Abstract:
ic first-order transition line ending in a critical point. This critical point is responsible for the existence of large premartensitic fluctuations which manifest as broad peaks in the specific heat, not always associated with a true phase transition. The main conclusion is that premartensitic effects result from the interplay between the softness of the anomalous phonon driving the modulation and the magnetoelastic coupling. In particular, the premartensitic transition occurs when such coupling is strong enough to freeze the phonon mode involved. The implications of the results in relation to the available experimental data are discussed.
Abstract:
We consider the effects of quantum fluctuations in mean-field quantum spin-glass models with pairwise interactions. We examine the nature of the quantum glass transition at zero temperature in a transverse field. In models (such as the random orthogonal model) where the classical phase transition is discontinuous an analysis using the static approximation reveals that the transition becomes continuous at zero temperature.