948 results for Tobey Mean
Abstract:
In low-cycling countries, cycling is not evenly distributed across genders and age groups. In the UK, men are twice as likely as women to cycle to work and cycling tends to be dominated by younger adults. By contrast, in higher cycling countries and cities, gender differences are low, absent, or in the opposite direction. Such places also lack the UK's steady decline in cycling among those aged over 35 years. Over the past fifteen years some UK local areas have seen increases in cycling. This paper analyses data from the English and Welsh Census 2001 and 2011 to examine whether such increases are associated with greater diversity among cyclists. We find that in areas where cycling has increased, there has been no increase in the representation of females, and a decrease in the representation of older adults. We discuss potential causes and policy implications. Importantly, simply increasing cycling modal share has not proved sufficient to create an inclusive cycling culture. The UK's culturally specific factors limiting female take-up of cycling seem to remain in place, even where cycling has gone up. Creating a mass cycling culture may require deliberately targeting infrastructure and policies towards currently under-represented groups.
Abstract:
This paper addresses the calculation of fractional order expressions through rational fractions. The article starts by analyzing the techniques adopted in the continuous-to-discrete time conversion. The problem is re-evaluated from an optimization perspective by taking advantage of the degree of freedom provided by the generalized mean formula. The results demonstrate the superior performance of the new algorithm.
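The generalized mean mentioned above is the standard power mean, whose exponent supplies the extra degree of freedom exploited in the optimization. A minimal sketch, assuming the common unweighted form (the function name and example values are illustrative only, not taken from the paper):

```python
import numpy as np

def generalized_mean(x, p):
    """Unweighted generalized (power) mean of the values in x.

    p = 1 gives the arithmetic mean, p = -1 the harmonic mean,
    and p -> 0 the geometric mean (handled as a special case).
    """
    x = np.asarray(x, dtype=float)
    if p == 0:
        return np.exp(np.mean(np.log(x)))   # geometric mean limit
    return (np.mean(x ** p)) ** (1.0 / p)

# The exponent p is the tunable degree of freedom that an optimizer
# could adjust when averaging candidate rational approximations.
values = [1.0, 2.0, 4.0]
print(generalized_mean(values, 1))    # arithmetic mean: 2.333...
print(generalized_mean(values, 0))    # geometric mean: 2.0
print(generalized_mean(values, -1))   # harmonic mean: ~1.714
```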
Abstract:
This work models the competitive behaviour of individuals who maximize their own utility by managing their network of connections with other individuals. Utility is taken as a synonym of reputation in this model. Each agent has to decide on two variables: the quality of connections and the number of connections. Hence, the reputation of an individual is a function of the number and the quality of connections within the network. On the other hand, individuals incur a cost when they improve their network of contacts. The quality and number of connections of each individual are initially distributed according to a given initial distribution. The competition occurs over continuous time and among a continuum of agents. A mean field game approach is adopted to solve the model, leading to an optimal trajectory for the number and quality of connections for each individual.
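For readers unfamiliar with the approach, a mean field game of this type is usually posed as a coupled backward-forward system. A schematic first-order sketch, assuming a state x = (n, q) for the number and quality of connections and leaving the Hamiltonian built from the reputation payoff and the adjustment cost unspecified:

```latex
% u(x,t): value function of a representative agent; m(x,t): distribution of
% agents over states x = (n, q); H: Hamiltonian from payoff and cost (assumed).
\begin{aligned}
  -\partial_t u + H\bigl(x, \nabla u, m\bigr) &= 0, & u(x,T) &= G\bigl(x, m(\cdot,T)\bigr), \\
  \partial_t m - \operatorname{div}\!\bigl(m\, D_p H(x, \nabla u, m)\bigr) &= 0, & m(x,0) &= m_0(x),
\end{aligned}
```

where m_0 is the given initial distribution of connection numbers and qualities; the optimal trajectory of each agent follows the feedback control recovered from the solved value function u.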
Abstract:
In contemporary society, religious signification and secular systems mix and influence each other. Holistic conceptions of a world in which man is integrated harmoniously with nature meet representations of a world run by an immanent God. In the marketplace of these various systems, the individual moves from one system to another, following his immediate needs and expectations without necessarily leaving any marks in a meaningful long-term system. This article presents the first results of ongoing research in Switzerland on contemporary religion, focusing on the (new) paths of socialization of modern individuals and the various (non-)belief systems that they simultaneously develop.
Abstract:
AIMS: To investigate the relationship of alcohol consumption with the metabolic syndrome and diabetes in a population-based study with high mean alcohol consumption. Few data exist on these conditions in high-risk drinkers. METHODS: In 6172 adults aged 35-75 years, alcohol consumption was categorized as 0, 1-6, 7-13, 14-20, 21-27, 28-34 and ≥ 35 drinks/week or as non-drinkers (0), low-risk (1-13), medium-to-high-risk (14-34) and very-high-risk (≥ 35) drinkers. Alcohol consumption was objectively confirmed by biochemical tests. In multivariate analysis, we assessed the relationship of alcohol consumption with adjusted prevalence of the metabolic syndrome, diabetes and insulin resistance, determined with the homeostasis model assessment of insulin resistance (HOMA-IR). RESULTS: Seventy-three per cent of participants consumed alcohol, 16% were medium-to-high-risk drinkers and 2% very-high-risk drinkers. In multivariate analysis, the prevalence of the metabolic syndrome, diabetes and mean HOMA-IR decreased with low-risk drinking and increased with high-risk drinking. Adjusted prevalence of the metabolic syndrome was 24% in non-drinkers, 19% in low-risk (P<0.001 vs. non-drinkers), 20% in medium-to-high-risk and 29% in very-high-risk drinkers (P=0.005 vs. low-risk). Adjusted prevalence of diabetes was 6.0% in non-drinkers, 3.6% in low-risk (P<0.001 vs. non-drinkers), 3.8% in medium-to-high-risk and 6.7% in very-high-risk drinkers (P=0.046 vs. low-risk). Adjusted HOMA-IR was 2.47 in non-drinkers, 2.14 in low-risk (P<0.001 vs. non-drinkers), 2.27 in medium-to-high-risk and 2.53 in very-high-risk drinkers (P=0.04 vs. low-risk). These relationships did not differ according to beverage types. CONCLUSIONS: Alcohol has a U-shaped relationship with the metabolic syndrome, diabetes and HOMA-IR, without differences between beverage types.
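For reference, the HOMA-IR index reported above is conventionally computed from fasting measurements as:

```latex
\text{HOMA-IR} \;=\; \frac{\text{fasting glucose (mmol/L)} \times \text{fasting insulin (}\mu\text{U/mL)}}{22.5}
```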
Abstract:
A general derivation of the anharmonic coefficients for a periodic lattice, invoking the special case of the central force interaction, is presented. All of the contributions to the mean square displacement (MSD) to order λ⁴ perturbation theory are enumerated. A direct correspondence is found between the high temperature limit MSD and the high temperature limit free energy contributions up to and including O(λ⁴). This correspondence follows from the detailed derivation of some of the contributions to the MSD. Numerical results are obtained for all the MSD contributions to O(λ⁴) using the Lennard-Jones potential, for the lattice constants and temperatures for which the Monte Carlo results were calculated by Heiser, Shukla and Cowley. The Peierls approximation is also employed in order to simplify the numerical evaluation of the MSD contributions. The numerical results indicate convergence of the perturbation expansion up to 75% of the melting temperature of the solid (TM) for the exact calculation; however, a better agreement with the Monte Carlo results is not obtained when the total of all λ⁴ contributions is added to the λ² perturbation theory results. Using the Peierls approximation the expansion converges up to 45% of TM. The MSD contributions arising in the Green's function method of Shukla and Hübschle are derived and enumerated up to and including O(λ⁸). The total MSD from these selected contributions is in excellent agreement with their results at all temperatures. Theoretical values of the recoilless fraction for krypton are calculated from the MSD contributions for both the Lennard-Jones and Aziz potentials. The agreement with experimental values is quite good.
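For context, the baseline to which the λ² and λ⁴ anharmonic corrections above are added is the harmonic mean square displacement. Its standard high-temperature limit for a monatomic crystal of N atoms of mass M, with the sum running over the 3N normal modes (q, j), is:

```latex
\left\langle u^{2} \right\rangle_{\mathrm{harm}} \;\longrightarrow\;
\frac{k_{B} T}{N M} \sum_{\mathbf{q},\, j} \frac{1}{\omega_{j}^{2}(\mathbf{q})}
\qquad (k_{B} T \gg \hbar \omega).
```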
Abstract:
Molecular dynamics calculations of the mean square displacement have been carried out for the alkali metals Na, K and Cs and for an fcc nearest neighbour Lennard-Jones model applicable to rare gas solids. The computations for the alkalis were done at several temperatures, both at a fixed volume and at the zero pressure volume corresponding to each temperature. In the fcc case, results were obtained for a wide range of both the temperature and density. Lattice dynamics calculations of the harmonic and the lowest order anharmonic (cubic and quartic) contributions to the mean square displacement were performed for the same potential models as in the molecular dynamics calculations. The Brillouin zone sums arising in the harmonic and the quartic terms were computed for very large numbers of points in q-space, and were extrapolated to obtain results fully converged with respect to the number of points in the Brillouin zone. Excellent agreement between the lattice dynamics and molecular dynamics results was observed for all the alkali metals, except for the zero pressure case of Cs, where the difference is about 15% near the melting temperature. It was concluded that for the alkalis, the lowest order perturbation theory works well even at temperatures close to the melting temperature. For the fcc nearest neighbour model it was found that the number of particles (256) used for the molecular dynamics calculations produces a result which is somewhere between 10 and 20% smaller than the value converged with respect to the number of particles. However, the general temperature dependence of the mean square displacement is the same in molecular dynamics and lattice dynamics for all temperatures at the highest densities examined, while at higher volumes and high temperatures the results diverge. This indicates the importance of the higher order (e.g. λ⁴) perturbation theory contributions in these cases.
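As an illustration of the quantity being compared (not the thesis's own code), a minimal sketch of how the mean square displacement about equilibrium lattice sites might be accumulated from molecular dynamics configurations, assuming hypothetical arrays of instantaneous positions and the corresponding lattice sites:

```python
import numpy as np

def mean_square_displacement(positions, lattice_sites):
    """Mean square displacement about equilibrium lattice sites.

    positions     : array (n_frames, n_atoms, 3) of instantaneous atomic
                    positions from an MD trajectory (hypothetical input).
    lattice_sites : array (n_atoms, 3) with the equilibrium sites.

    Returns <|u|^2> averaged over atoms and sampled configurations.
    Periodic-boundary wrapping is ignored in this sketch.
    """
    displacements = positions - lattice_sites[np.newaxis, :, :]
    return np.mean(np.sum(displacements ** 2, axis=-1))

# Example with synthetic data: 100 frames of 256 atoms, each displaced from
# its site by small Gaussian thermal fluctuations.
rng = np.random.default_rng(0)
sites = rng.uniform(0.0, 10.0, size=(256, 3))
frames = sites + 0.05 * rng.standard_normal(size=(100, 256, 3))
print(mean_square_displacement(frames, sites))  # ~3 * 0.05**2 = 0.0075
```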
Abstract:
We have presented a Green's function method for the calculation of the atomic mean square displacement (MSD) for an anharmonic Hamiltonian. This method effectively sums a whole class of anharmonic contributions to the MSD in the perturbation expansion in the high temperature limit. Using this formalism we have calculated the MSD for a nearest neighbour fcc Lennard-Jones solid. The results show an improvement over the lowest order perturbation theory results; the difference with Monte Carlo calculations at temperatures close to melting is reduced from 11% to 3%. We also calculated the MSD for the alkali metals Na, K and Cs, where a sixth neighbour interaction potential derived from pseudopotential theory was employed in the calculations. The MSD by this method increases by 2.5% to 3.5% over the respective perturbation theory results. The MSD was calculated for aluminum, where different pseudopotential functions and a phenomenological Morse potential were used. The results show that the pseudopotentials provide better agreement with experimental data than the Morse potential. An excellent agreement with experiment over the whole temperature range is achieved with the Harrison modified point-ion pseudopotential with the Hubbard-Sham screening function. We have calculated the thermodynamic properties of solid Kr by minimizing the total energy, consisting of static and vibrational components, employing different schemes: the quasiharmonic theory (QH), λ² and λ⁴ perturbation theory, all terms up to O(λ⁴) of the improved self consistent phonon theory (ISC), the ring diagrams up to O(λ⁴) (RING), the iteration scheme (ITER) derived from the Green's function method, and a scheme consisting of ITER plus the remaining contributions of O(λ⁴) which are not included in ITER, which we call E(FULL). We have calculated the lattice constant, the volume expansion, the isothermal and adiabatic bulk modulus, the specific heat at constant volume and at constant pressure, and the Grüneisen parameter from two different potential functions: Lennard-Jones and Aziz. The Aziz potential gives generally a better agreement with experimental data than the LJ potential for the QH, λ², λ⁴ and E(FULL) schemes. When only a partial sum of the λ⁴ diagrams is used in the calculations (e.g. RING and ISC) the LJ results are in better agreement with experiment. The iteration scheme brings a definitive improvement over the λ² PT for both potentials.
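The thermodynamic calculations described above follow the usual free-energy route: the total free energy is written as a static plus a vibrational part and minimized with respect to volume, after which the derived properties follow from standard thermodynamic relations (the specific λ² and λ⁴ anharmonic terms entering the vibrational part are not reproduced here):

```latex
\begin{gathered}
F(V,T) = U_{\mathrm{static}}(V) + F_{\mathrm{vib}}(V,T), \qquad
\left.\frac{\partial F}{\partial V}\right|_{T} = 0 \;\Rightarrow\; V_{0}(T), \\[4pt]
B_{T} = V\!\left(\frac{\partial^{2} F}{\partial V^{2}}\right)_{T}, \qquad
C_{V} = -T\!\left(\frac{\partial^{2} F}{\partial T^{2}}\right)_{V}, \qquad
\gamma = \frac{\alpha\, B_{T}\, V}{C_{V}},
\end{gathered}
```

where α is the volume thermal expansion coefficient and γ the Grüneisen parameter.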
Abstract:
The atomic mean square displacement (MSD) and the phonon dispersion curves (PDC's) of a number of face-centred cubic (fcc) and body-centred cubic (bcc) materials have been calculated from the quasiharmonic (QH) theory, the lowest order (λ²) perturbation theory (PT) and a recently proposed Green's function (GF) method by Shukla and Hübschle. The latter method includes certain anharmonic effects to all orders of anharmonicity. In order to determine the effect of the range of the interatomic interaction upon the anharmonic contributions to the MSD we have carried out our calculations for a Lennard-Jones (L-J) solid in the nearest-neighbour (NN) and next-nearest neighbour (NNN) approximations. These results can be presented in dimensionless units, but if the NN and NNN results are to be compared with each other they must be converted to those of a real solid. When this is done for Xe, the QH MSD for the NN and NNN approximations are found to differ from each other by about 2%. For the λ² and GF results this difference amounts to 8% and 7% respectively. For the NN case we have also compared our PT results, which have been calculated exactly, with PT results calculated using a frequency-shift approximation. We conclude that this frequency-shift approximation is a poor approximation. We have calculated the MSD of five alkali metals, five bcc transition metals and seven fcc transition metals. The model potentials we have used include the Morse, modified Morse, and Rydberg potentials. In general the results obtained from the Green's function method are in the best agreement with experiment. However, this improvement is mostly qualitative, and the values of the MSD calculated from the Green's function method are not in much better agreement with the experimental data than those calculated from the QH theory. We have calculated the phonon dispersion curves (PDC's) of Na and Cu, using the 4 parameter modified Morse potential. In the case of Na, our results for the PDC's are in poor agreement with experiment. In the case of Cu, the agreement between theory and experiment is much better and, in addition, the results for the PDC's calculated from the GF method are in better agreement with experiment than those obtained from the QH theory.
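For reference, the three-parameter Morse potential on which the modified forms mentioned above are built is the standard expression below; the 4-parameter modified Morse form actually used for Na and Cu is not reproduced here:

```latex
V(r) = D\left[ e^{-2a(r - r_{0})} - 2\, e^{-a(r - r_{0})} \right],
```

where D is the well depth, r_0 the equilibrium separation and a an inverse-range parameter.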
Abstract:
The Portuguese community is one of the largest diasporic groups in the Greater Toronto Area, and the choice to retain and transmit language and culture to Luso-Canadians is crucial to the development and sustainability of the community. The overall objective of this study is to learn about the factors that influence Luso-Canadian mothers' inclination to teach their children the Portuguese language and to foster cultural retention. To explore this topic I employed a qualitative research design that included in-depth interviews conducted in 2012 with six Luso-Canadian mothers. Three central arguments emerged from the findings. First, the Luso-Canadian mothers interviewed possess a pronounced desire for their children to succeed academically and to provide their children with opportunities that they themselves did not have. Second, five of the mothers attempt to achieve this mothering objective partly by disconnecting from their Portuguese roots, and by disassociating their children from the Portuguese language and culture. Third, the disconnection they experience and enact is influenced by the divisions evident in the Portuguese community in the GTA, which separate regions and hierarchically rank dialects and groups. I conclude that the children in these households inevitably bear the prospects of maintaining a vibrant Portuguese community in the GTA, and I propose that the community's ranking of dialects influences mothers' decisions about transmitting language and culture to their children.
Abstract:
Presentation at Brock Library Spring Symposium 2015: What's really going on?
Abstract:
Presently, conditions ensuring the validity of bootstrap methods for the sample mean of (possibly heterogeneous) near epoch dependent (NED) functions of mixing processes are unknown. Here we establish the validity of the bootstrap in this context, extending the applicability of bootstrap methods to a class of processes broadly relevant for applications in economics and finance. Our results apply to two block bootstrap methods: the moving blocks bootstrap of Künsch (1989) and Liu and Singh (1992), and the stationary bootstrap of Politis and Romano (1994). In particular, the consistency of the bootstrap variance estimator for the sample mean is shown to be robust against heteroskedasticity and dependence of unknown form. The first order asymptotic validity of the bootstrap approximation to the actual distribution of the sample mean is also established in this heterogeneous NED context.
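As an illustration of the block bootstrap for the sample mean (a generic sketch, not the paper's own implementation), the moving blocks bootstrap resamples overlapping blocks of length ℓ and can be used to estimate the variance of the sample mean under serial dependence:

```python
import numpy as np

def moving_blocks_bootstrap_means(x, block_length, n_boot=1000, seed=0):
    """Bootstrap replicates of the sample mean via the moving blocks bootstrap.

    Overlapping blocks of length `block_length` are drawn with replacement
    and concatenated until a series of (roughly) the original length is built.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    n_blocks_needed = int(np.ceil(n / block_length))
    starts = np.arange(n - block_length + 1)   # starting indices of all blocks
    reps = np.empty(n_boot)
    for b in range(n_boot):
        chosen = rng.choice(starts, size=n_blocks_needed, replace=True)
        series = np.concatenate([x[s:s + block_length] for s in chosen])[:n]
        reps[b] = series.mean()
    return reps

# Example: AR(1)-type dependent data; the bootstrap variance of the mean
# reflects the serial dependence that an i.i.d. bootstrap would ignore.
rng = np.random.default_rng(1)
e = rng.standard_normal(500)
x = np.empty(500)
x[0] = e[0]
for t in range(1, 500):
    x[t] = 0.5 * x[t - 1] + e[t]
reps = moving_blocks_bootstrap_means(x, block_length=20)
print(reps.var())   # bootstrap estimate of Var(sample mean)
```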
Abstract:
By reporting his satisfaction with his job or any other experience, an individual does not communicate the number of utils that he feels. Instead, he expresses his posterior preference over available alternatives conditional on acquired knowledge of the past. This new interpretation of reported job satisfaction restores the power of microeconomic theory without denying the essential role of discrepancies between one's situation and available opportunities. Posterior human wealth discrepancies are found to be the best predictor of reported job satisfaction. Static models of relative utility and other subjective well-being assumptions are all unambiguously rejected by the data, as well as an "economic" model in which job satisfaction is a measure of posterior human wealth. The "posterior choice" model readily explains why so many people usually report themselves as happy or satisfied, why both younger and older age groups are insensitive to current earning discrepancies, and why the past weighs more heavily than the present and the future.
Abstract:
In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which include normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken's mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to shed more light on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well known variance ratio tests) and goodness of fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is not rejected as frequently once the possibility of non-normal errors is allowed for.
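For context, the Gaussian benchmark referred to above is the Gibbons, Ross and Shanken (1989) statistic; in the single-factor CAPM case with N test assets, T observations, estimated intercepts α̂, (ML) residual covariance estimate Σ̂ and market Sharpe ratio μ̂_m/σ̂_m, it takes the familiar form (stated here from the standard literature, not from this paper):

```latex
W = \frac{T - N - 1}{N}\;
    \frac{\hat{\alpha}^{\prime}\,\hat{\Sigma}^{-1}\hat{\alpha}}
         {1 + \left(\hat{\mu}_{m}/\hat{\sigma}_{m}\right)^{2}}
    \;\sim\; F(N,\, T - N - 1)
```

under normality and the null hypothesis that all intercepts are zero.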
Abstract:
Computational biology is the research area that contributes to the analysis of biological data through the development of algorithms which address significant research problems. The data from molecular biology include DNA, RNA, protein and gene expression data. Gene expression data provide the expression level of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which in turn are later translated into proteins. The number of copies of mRNA produced is called the expression level of a gene. Gene expression data is organized in the form of a matrix. Rows in the matrix represent genes and columns represent experimental conditions. Experimental conditions can be different tissue types or time points. Entries in the gene expression matrix are real values. Through the analysis of gene expression data it is possible to determine the behavioural patterns of genes, such as the similarity of their behaviour, the nature of their interaction, their respective contribution to the same pathways, and so on. Similar expression patterns are exhibited by genes participating in the same biological process. These patterns have immense relevance and application in bioinformatics and clinical research. They are used in the medical domain to aid more accurate diagnosis, prognosis and treatment planning, drug discovery and protein network analysis. To identify such patterns in gene expression data, data mining techniques are essential. Clustering is an important data mining technique for the analysis of gene expression data. To overcome the problems associated with clustering, biclustering is introduced. Biclustering refers to the simultaneous clustering of both rows and columns of a data matrix. Clustering is a global model, whereas biclustering is a local one. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise. It is therefore necessary to move beyond the clustering paradigm towards developing approaches which are capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix. The rows and columns in the submatrix need not be contiguous as in the gene expression data matrix. Biclusters are not disjoint. Computation of biclusters is costly because one has to consider all combinations of columns and rows in order to find all the biclusters. The search space for the biclustering problem is 2^(m+n), where m and n are the number of genes and conditions respectively; usually m+n is more than 3000. The biclustering problem is NP-hard. Biclustering is a powerful analytical tool for the biologist. The research reported in this thesis addresses the problem of biclustering. Ten algorithms are developed for the identification of coherent biclusters from gene expression data. All these algorithms make use of a measure called the mean squared residue to search for biclusters. The objective here is to identify biclusters of maximum size with a mean squared residue lower than a given threshold.

All these algorithms begin the search from tightly coregulated submatrices called seeds. These seeds are generated by the K-Means clustering algorithm. The algorithms developed can be classified as constraint based, greedy and metaheuristic. Constraint based algorithms use one or more of the various constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. In the metaheuristic approaches, Particle Swarm Optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) are used for the identification of biclusters. These algorithms are implemented on the Yeast and Lymphoma datasets. Biologically relevant and statistically significant biclusters are identified by all these algorithms and are validated against the Gene Ontology database. All these algorithms are compared with other biclustering algorithms. The algorithms developed in this work overcome some of the problems associated with already existing algorithms. With the help of some of the algorithms developed in this work, biclusters with very high row variance, higher than the row variance obtained by any other algorithm using the mean squared residue, are identified from both the Yeast and Lymphoma data sets. Such biclusters, which reflect significant changes in expression level, are highly relevant biologically.
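The mean squared residue used throughout is the Cheng and Church score. A minimal sketch of how it might be computed for a candidate bicluster, assuming a NumPy expression matrix and index lists for the selected genes and conditions (names and data are illustrative only):

```python
import numpy as np

def mean_squared_residue(data, rows, cols):
    """Mean squared residue (Cheng & Church) of the bicluster (rows, cols).

    data : 2-D array, genes x conditions.
    rows : indices of the selected genes.
    cols : indices of the selected conditions.
    """
    sub = data[np.ix_(rows, cols)]
    row_means = sub.mean(axis=1, keepdims=True)   # a_iJ
    col_means = sub.mean(axis=0, keepdims=True)   # a_Ij
    overall = sub.mean()                          # a_IJ
    residues = sub - row_means - col_means + overall
    return float(np.mean(residues ** 2))

# Example: a perfectly additive (coherent) bicluster has MSR = 0.
data = np.array([[1.0, 2.0, 3.0],
                 [2.0, 3.0, 4.0],
                 [5.0, 6.0, 7.0]])
print(mean_squared_residue(data, rows=[0, 1, 2], cols=[0, 1, 2]))  # 0.0
```

A search algorithm of the kind described above would grow or shrink (rows, cols) while keeping this score below the chosen threshold.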