104 results for Finite Simple Groups
Abstract:
ADHD is a disorder that not only affects the child's learning, but also disrupts their social relationships and their environment. This gives the child with ADHD a feeling of loneliness and insecurity that directly and negatively affects their self-esteem, which aggravates the problem even further. It is therefore important that family and school join forces in the fight against this disorder, involving the child in that fight: the family-school-child trio must work symbiotically so that the child can overcome all the difficulties the disorder carries with it. Early diagnosis is essential, but intervention should begin as soon as the first warning signs appear, and these usually appear at school. This will soften, and in some cases avert, some of the blows the child with ADHD will have to overcome before proper treatment begins.
Abstract:
This paper presents the design of an experimental study conducted with large groups using educational innovation methodologies at the Polytechnic University of Madrid. Specifically, we chose the course "History and Politics of Sports", which belongs to the Physical Activity and Sport Science Degree. This course was selected because its syllabus is basically theoretical and it is taught to four large groups of first-year students who have no previous experience of a teaching-learning process based on educational innovation. It is hoped that the results of this research can be extrapolated to other courses with similar characteristics.
Abstract:
In the accounting literature, interaction or moderating effects are usually assessed by means of OLS regression, and summated rating scales are constructed to reduce measurement error bias. Structural equation models and two-stage least squares regression could be used to completely eliminate this bias, but large samples are needed. Partial Least Squares is appropriate for small samples but does not correct measurement error bias. In this article, disattenuated regression is discussed as a small-sample alternative and is illustrated on data from Bisbe and Otley (in press), who examine the interaction effect of innovation and style of use of budgets on performance. Sizeable differences emerge between OLS and disattenuated regression.
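As a minimal sketch of the disattenuation idea (assuming reliability estimates for the predictors are known; this is not the article's exact procedure), one can subtract the estimated measurement-error variance from the predictor covariance matrix before solving for the slopes:

```python
import numpy as np

def disattenuated_regression(X, y, reliabilities):
    """Sketch: correct the predictor covariance matrix for measurement
    error using known reliability estimates, then solve for the
    regression slopes from the corrected moments."""
    Sxx = np.atleast_2d(np.cov(X, rowvar=False))
    Sxy = np.array([np.cov(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    # Error variance of each predictor: (1 - reliability) * observed variance
    err_var = (1.0 - np.asarray(reliabilities)) * np.diag(Sxx)
    Sxx_true = Sxx - np.diag(err_var)  # disattenuated covariance matrix
    beta = np.linalg.solve(Sxx_true, Sxy)
    return beta
```

With a single noisy predictor of reliability 0.8, OLS attenuates a true slope of 2 toward 1.6, while the corrected estimate recovers roughly 2.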
Abstract:
This article analyzes Følner sequences of projections for bounded linear operators and their relationship to the class of finite operators introduced by Williams in the 1970s. We prove that every essentially hyponormal operator has a proper Følner sequence (i.e. a Følner sequence of projections strongly converging to 1). In particular, any quasinormal, subnormal, hyponormal or essentially normal operator has a proper Følner sequence. Moreover, we show that an operator is finite if and only if it has a proper Følner sequence or a non-trivial finite-dimensional reducing subspace. We also analyze the structure of operators that have no Følner sequence and give examples of them. For this analysis we introduce the notion of strongly non-Følner operators, which are far from finite block reducible operators in some uniform sense, and show that this class coincides with the class of non-finite operators.
Abstract:
At present there are no specific biological markers for prostate cancer, which often leads to unnecessary prostate biopsies or to overtreatment of indolent cancers. A growing number of publications examine how single nucleotide polymorphisms (SNPs) relate to susceptibility to prostate cancer or predict more precisely how aggressive the disease will become. We present a literature review of studies published in PubMed from 2000 to 2012 that demonstrate the relationship of SNPs with the risk of developing prostate cancer and with its pathological characteristics.
Abstract:
In the finite field (FF) treatment of vibrational polarizabilities and hyperpolarizabilities, the field-free Eckart conditions must be enforced in order to prevent molecular reorientation during geometry optimization. These conditions are implemented here for the first time. Our procedure facilitates identification of the field-induced internal coordinates that make the major contribution to the vibrational properties. Using only two of these coordinates, quantitative accuracy for nuclear relaxation polarizabilities and hyperpolarizabilities is achieved in π-conjugated systems. From these two coordinates a single most efficient natural conjugation coordinate (NCC) can be extracted. The limitations of this one-coordinate approach are discussed. It is shown that the Eckart conditions can lead to an isotope effect that is comparable to the isotope effect on zero-point vibrational averaging, but with a different mass dependence.
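The FF approach ultimately rests on numerical field derivatives of the energy. As a minimal illustration (not the authors' implementation), a static polarizability component can be estimated by central differences over a hypothetical user-supplied energy function E(F):

```python
def polarizability_finite_field(energy, f=1e-3):
    """Sketch of the finite field (FF) idea: a diagonal static
    polarizability component is the negative second derivative of the
    energy with respect to a uniform field, estimated here by central
    differences. `energy` is a hypothetical callable E(F)."""
    e_plus, e_zero, e_minus = energy(+f), energy(0.0), energy(-f)
    return -(e_plus - 2.0 * e_zero + e_minus) / (f * f)
```

For a quadratic model energy E(F) = -alpha*F^2/2 this recovers alpha, which is a quick sanity check on the field step size.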
Abstract:
This study assesses the decline in second birth rates for men and women across different skill levels in transitional Russia. Changes within educational groups and occupational classes are observed over three distinct time periods: the Soviet era, the economic crisis, and the economic recovery. The most remarkable finding is the similarity in the extent to which second birth rates declined within educational groups and occupational classes during the economic crisis. Although further decline occurred in the recovery period, more variation emerged across groups.
Abstract:
The space and time discretization inherent to all FDTD schemes introduce non-physical dispersion errors, i.e. deviations of the speed of sound from the theoretical value predicted by the governing Euler differential equations. A general methodology for computing this dispersion error via straightforward numerical simulations of the FDTD schemes is presented. The method is shown to provide remarkable accuracies of the order of 1/1000 in a wide variety of two-dimensional finite difference schemes.
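For the standard 1-D leapfrog scheme the dispersion error is also available in closed form, which gives a feel for the 1/1000-order figures quoted above. A small sketch (with assumed values for the sound speed, grid spacing and Courant number; the paper instead measures the error by direct simulation):

```python
import numpy as np

def fdtd_phase_velocity(k, c=343.0, dx=1e-2, courant=0.9):
    """Sketch: numerical phase velocity of the standard 1-D leapfrog
    FDTD scheme from its discrete dispersion relation
    sin(w*dt/2) = C * sin(k*dx/2), with C = c*dt/dx (Courant number)."""
    dt = courant * dx / c
    omega = (2.0 / dt) * np.arcsin(courant * np.sin(k * dx / 2.0))
    return omega / k

# Relative dispersion error for a wave resolved with ~20 points per wavelength
k = 2 * np.pi / (20 * 1e-2)
err = abs(fdtd_phase_velocity(k) - 343.0) / 343.0
```

At this resolution the deviation from the exact sound speed is indeed of order 10^-3.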
Abstract:
BACKGROUND: CODIS-STRs in Native Mexican groups have rarely been analysed for human identification and anthropological purposes. AIM: To analyse the genetic relationships and population structure among three Native Mexican groups from Mesoamerica. SUBJECTS AND METHODS: 531 unrelated Native individuals from Mexico were PCR-typed for 15 and 9 autosomal STRs (Identifiler™ and Profiler™ kits, respectively), including five population samples: Purépechas (Mountain, Valley and Lake), Triquis and Yucatec Mayas. Previously published STR data were included in the analyses. RESULTS: Allele frequencies and statistical parameters of forensic importance were estimated by population. The majority of Native groups were not differentiated pairwise, except the Triquis and Purépechas, which was attributable to their relative geographic and cultural isolation. Although Mayas, Triquis and Purépechas-Mountain presented the highest number of private alleles, suggesting recurrent gene flow, the elevated differentiation of the Triquis indicates a different origin of this gene flow. Interestingly, Huastecos and Mayas were not differentiated, which is in agreement with the archaeological hypothesis that the Huastecos represent an ancestral Maya group. Interpopulation variability was greater in Natives than in Mestizos, both significant. CONCLUSION: Although the results suggest that European admixture has increased the similarity between Native Mexican groups, the differentiation and inconsistent clustering by language or geography stress the importance of serial founder effects and/or genetic drift in shaping their present genetic relationships.
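As a minimal illustration of the first step behind the reported forensic parameters (not the study's actual pipeline), per-locus allele frequencies can be estimated by simple counting over diploid STR genotypes:

```python
from collections import Counter

def allele_frequencies(genotypes):
    """Sketch: estimate allele frequencies at one STR locus from a list
    of diploid genotypes given as pairs of repeat counts,
    e.g. [(12, 14), (12, 12), ...]."""
    counts = Counter(a for pair in genotypes for a in pair)
    total = sum(counts.values())
    return {allele: n / total for allele, n in counts.items()}
```

Quantities such as matching probability and power of discrimination are then derived from these frequencies.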
Abstract:
Dialogic learning and interactive groups have proved to be a useful methodological approach applied in educational situations for lifelong adult learners. The principles of this approach stress the importance of dialogue and equal participation, also when designing the training activities. This paper adopts these principles as the basis for a configurable template that can be integrated in runtime systems. The template is formulated as a meta-UoL which can be interpreted by IMS Learning Design players. This template serves as a guide to flexibly select and edit the activities at runtime (on the fly). The meta-UoL has been used successfully by a practitioner to create a real-life example, with positive and encouraging results.
Abstract:
From a managerial point of view, the more efficient, simple, and parameter-free (ESP) an algorithm is, the more likely it will be used in practice for solving real-life problems. Following this principle, an ESP algorithm for solving the Permutation Flowshop Sequencing Problem (PFSP) is proposed in this article. Using an Iterated Local Search (ILS) framework, the so-called ILS-ESP algorithm is able to compete in performance with other well-known ILS-based approaches, which are considered among the most efficient algorithms for the PFSP. However, while other similar approaches still employ several parameters that can affect their performance if not properly chosen, our algorithm does not require any particular fine-tuning process, since it uses basic "common sense" rules for the local search, perturbation, and acceptance criterion stages of the ILS metaheuristic. Our approach defines a new operator for the ILS perturbation process, a new acceptance criterion based on extremely simple and transparent rules, and a biased randomization process of the initial solution to randomly generate different alternative initial solutions of similar quality, which is attained by applying a biased randomization to a classical PFSP heuristic. This diversification of the initial solution aims at avoiding poorly designed starting points and thus allows the methodology to take advantage of current trends in parallel and distributed computing. A set of extensive tests, based on literature benchmarks, has been carried out in order to validate our algorithm and compare it against other approaches. These tests show that our parameter-free algorithm is able to compete with state-of-the-art metaheuristics for the PFSP. The experiments also show that, when using parallel computing, it is possible to improve on the top ILS-based metaheuristic simply by incorporating into it our biased randomization process with a high-quality pseudo-random number generator.
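The ILS framework referred to above can be sketched generically. This is a simplified skeleton with user-supplied helper functions, not the ILS-ESP algorithm itself:

```python
import random

def ils_sketch(evaluate, perturb, local_search, initial, iters=100, rng=None):
    """Minimal Iterated Local Search skeleton: alternate local search
    and perturbation, accepting a candidate only when it improves the
    best cost found so far (a simple, parameter-free acceptance rule)."""
    rng = rng or random.Random(0)
    best = local_search(initial)
    best_cost = evaluate(best)
    for _ in range(iters):
        candidate = local_search(perturb(best, rng))
        cost = evaluate(candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost
```

For the PFSP, `evaluate` would compute the makespan of a job permutation, `perturb` would apply the paper's perturbation operator, and `local_search` an insertion-based improvement step.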
Abstract:
In this paper, I consider a general and informationally efficient approach to determining the optimal access rule and show that there exists a simple rule that achieves the Ramsey outcome as the unique equilibrium when networks compete in linear prices without network-based price discrimination. My approach is informationally efficient in the sense that the regulator is required to know only the marginal cost structure, i.e. the marginal cost of making and terminating a call. The approach is general in that access prices can depend not only on the marginal costs but also on the retail prices, which can be observed by consumers and therefore by the regulator as well. In particular, I consider the set of linear access pricing rules, which includes any fixed access price, the Efficient Component Pricing Rule (ECPR) and the Modified ECPR as special cases. I show that in this set there is a unique access rule that achieves the Ramsey outcome as the unique equilibrium as long as there exists at least a mild degree of substitutability among the networks' services.
Abstract:
Which projects should be financed through separate non-recourse loans (or limited-liability companies), and which should be bundled into a single loan? In the presence of bankruptcy costs, this conglomeration decision trades off the benefit of co-insurance against the cost of risk contamination. This paper characterizes this tradeoff for projects with binary returns, depending on the mean, variability, and skewness of returns, the bankruptcy recovery rate, the correlation across projects, the number of projects, and their heterogeneous characteristics. In some cases, separate financing dominates joint financing, even though it increases the interest rate or the probability of bankruptcy.
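The co-insurance versus risk-contamination tradeoff can be illustrated with two i.i.d. binary projects. This is a stylized sketch with hypothetical numbers, not the paper's model:

```python
def bankruptcy_probs(p_success, revenue, debt):
    """Sketch: two i.i.d. binary projects, each returning `revenue`
    with probability `p_success` (else 0), financed by debt `debt`
    per project. Under separate financing each loan defaults when its
    own project fails. Under joint financing the bundle defaults only
    when combined revenue falls short of total debt: one success can
    cover both loans (co-insurance), but when revenue < 2*debt a single
    failure drags down the successful project too (risk contamination)."""
    p_fail = 1.0 - p_success
    separate = p_fail  # per-loan default probability
    if revenue >= 2 * debt:
        joint = p_fail ** 2          # co-insurance dominates
    else:
        joint = 1.0 - p_success ** 2  # risk contamination dominates
    return separate, joint
```

Whether bundling lowers or raises the default probability thus flips with the ratio of revenue to total debt, which is the binary-return version of the tradeoff described above.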
Abstract:
The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure A2(P) for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified in the algebraic structure of A2(P), together with their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way, very elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. For example, combinations of statistical information such as Bayesian updating, the combination of likelihoods, and robust M-estimation functions are simple additions/perturbations in A2(Pprior). Weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turn out to have a particularly easy interpretation in terms of A2(P). Regular exponential families form finite-dimensional linear subspaces of A2(P), and they correspond to finite-dimensional subspaces formed by their posteriors in the dual information space A2(Pprior). The Aitchison norm can be identified with mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A2(P) derivative of the Kullback-Leibler information, and the space A2(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A2(P)-valued random variables, such as estimation functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence and a scale-free understanding of unbiased reasoning.
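The compositional-data side of this correspondence is concrete: the centered log-ratio transform maps a composition into Euclidean coordinates, and the Aitchison distance is the ordinary distance between the images. A minimal sketch:

```python
import math

def clr(composition):
    """Centered log-ratio (clr) transform: log of each part divided by
    the geometric mean of all parts. Maps a composition (positive parts
    with a constant sum) into the Euclidean/Aitchison geometry."""
    g = math.exp(sum(math.log(x) for x in composition) / len(composition))
    return [math.log(x / g) for x in composition]

def aitchison_distance(x, y):
    """Aitchison distance = Euclidean distance between clr images."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(clr(x), clr(y))))
```

Note that clr coordinates always sum to zero, reflecting that the simplex has one dimension fewer than the ambient space.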
Abstract:
We construct an uncoupled randomized strategy of repeated play such that, if every player follows such a strategy, then the joint mixed strategy profiles converge, almost surely, to a Nash equilibrium of the one-shot game. The procedure requires very little in terms of the players' information about the game. In fact, players' actions are based only on their own past payoffs and, in a variant of the strategy, players need not even know that their payoffs are determined through other players' actions. The procedure works for general finite games and is based on appropriate modifications of a simple stochastic learning rule introduced by Foster and Young.
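A highly simplified sketch in the spirit of such payoff-based learning (not the paper's exact procedure): each player keeps a candidate action, observes only her own payoffs, occasionally experiments, and abandons the candidate when an experiment reveals it falls noticeably short.

```python
import random

def payoff_based_learning_sketch(payoffs, periods=5000, tol=0.05, rng=None):
    """Toy two-player illustration: players never see each other's
    payoffs or actions; each compares her own realized payoff against
    an occasional random experiment and switches only when the
    experiment does clearly better. `payoffs[i][a][b]` is player i's
    payoff at the action profile (a, b)."""
    rng = rng or random.Random(1)
    n_actions = len(payoffs[0])
    current = [rng.randrange(n_actions) for _ in range(2)]
    for _ in range(periods):
        for i in range(2):
            a, b = current
            base = payoffs[i][a][b]
            trial = rng.randrange(n_actions)  # experiment with a deviation
            trial_payoff = payoffs[0][trial][b] if i == 0 else payoffs[1][a][trial]
            if trial_payoff > base + tol:
                current[i] = trial  # candidate failed the test
    return tuple(current)
```

On a game with strictly dominant strategies (e.g. a prisoner's dilemma) this toy dynamic settles at the unique Nash equilibrium; the paper's actual rule handles general finite games and converges to mixed equilibria.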