944 results for Education, Mathematics | Education, Bilingual and Multicultural | Education, Sciences
Abstract:
Following and contributing to the ongoing shift from more structuralist, system-oriented to more pragmatic, socio-culturally oriented anglicism research, this paper examines to what extent the global spread of English affects naming patterns in Flanders. To this end, a diachronic database of first names is constructed, containing the top 75 most popular boys' and girls' names from 2005 to 2014. As a first step, the etymological background of these names is documented and the evolution in popularity of the English names in the database is tracked. Results reveal no notable surge in the preference for English names. This paper complements these database-driven results with an experimental study, aiming to show how associations through referents are in this case more telling than associations through phonological form (here based on etymology). Focusing on the socio-cultural background of first names in general and of Anglo-American pop culture in particular, the second part of the study reports on results from a survey in which participants were asked to name the first three celebrities that leap to mind when hearing a certain first name (e.g. Lana, triggering the response Del Rey). Very clear associations are found between certain first names and specific celebrities from Anglo-American pop culture. Linking back to marketing research and the social turn in onomastics, we discuss how these celebrities might function as referees, and how the social stereotypes surrounding these referees are metonymically attached to their first names. Similar to the country-of-origin effect in marketing, these metonymical links could very well be the reason why parents select specific "celebrity names". Although further attitudinal research is needed, this paper supports the importance of including socio-cultural parameters when conducting onomastic research.
Abstract:
Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models supporting the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat; it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to terrorist locations such as safe houses (rather than their targets or training sites) are rare and possibly nonexistent. At the time of this research, there were no publicly available models designed to predict locations where violent extremists are likely to reside. This research uses France as a case study to present a complex systems model that incorporates multiple quantitative, qualitative, and geospatial variables differing in scale, weight, and type. Though many of these variables are recognized by specialists in security studies, there remains controversy with respect to their relative importance, degree of interaction, and interdependence. Additionally, some of the variables proposed in this research are not generally recognized as drivers, yet they warrant examination based on their potential role within a complex system. This research tested multiple regression models and determined that geographically weighted regression analysis produced the most accurate result because it accommodates non-stationary coefficient behavior, demonstrating that geographic variables are critical to understanding and predicting the phenomenon of terrorism.
This dissertation presents a flexible prototypical model that can be refined and applied to other regions to inform stakeholders such as policy-makers and law enforcement in their efforts to improve national security and enhance quality of life.
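The geographically weighted regression credited above with the best accuracy fits a separate, distance-weighted regression at each location, so coefficients can vary over space. A minimal sketch of the idea, with a Gaussian kernel and a single covariate (the synthetic setup and all names are ours, not the dissertation's model):

```python
import math

def gwr_fit(points, xs, ys, target, bandwidth):
    """Fit y = a + b*x by weighted least squares, with Gaussian kernel
    weights based on each point's distance from `target`. A toy sketch
    of geographically weighted regression, not the paper's model."""
    w = [math.exp(-((px - target[0])**2 + (py - target[1])**2)
                  / (2 * bandwidth**2)) for px, py in points]
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, xs))
    swy = sum(wi * yi for wi, yi in zip(w, ys))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, xs))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, xs, ys))
    # Solve the 2x2 weighted normal equations for intercept a, slope b.
    b = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)
    a = (swy - b * swx) / sw
    return a, b
```

Fitting at two distant targets on data whose slope differs by region recovers two different local slopes; that spatial variation in coefficients is exactly the non-stationary behavior the abstract refers to.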
Abstract:
The main goal of this paper is to establish some equivalence results on stability, recurrence, and ergodicity between a piecewise deterministic Markov process (PDMP) {X(t)} and an embedded discrete-time Markov chain {Θ(n)} generated by a Markov kernel G that can be explicitly characterized in terms of the three local characteristics of the PDMP, leading to tractable criterion results. First we establish some important results characterizing {Θ(n)} as a sampling of the PDMP {X(t)} and deriving a connection between the probability of the first return time to a set for the discrete-time Markov chain generated by G and the resolvent kernel R of the PDMP. From these results we obtain equivalence results regarding irreducibility, existence of σ-finite invariant measures, and (positive) recurrence and (positive) Harris recurrence between {X(t)} and {Θ(n)}, generalizing the results of [F. Dufour and O. L. V. Costa, SIAM J. Control Optim., 37 (1999), pp. 1483-1502] in several directions. Sufficient conditions in terms of a modified Foster-Lyapunov criterion are also presented to ensure positive Harris recurrence and ergodicity of the PDMP. We illustrate the use of these conditions by showing the ergodicity of a capacity expansion model.
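The modified Foster-Lyapunov criterion is not spelled out in the abstract; for orientation, a generic drift criterion of this type, standard in the Markov process literature and not necessarily the paper's exact formulation, asks for a function V ≥ 1, constants β > 0 and b < ∞, and a petite set C such that

```latex
\mathcal{A}\,V(x) \;\le\; -\beta\, V(x) \;+\; b\,\mathbf{1}_{C}(x) \qquad \text{for all } x,
```

where 𝒜 denotes the extended generator of the process; drift conditions of this shape are the usual route to positive Harris recurrence and ergodicity.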
Abstract:
We apply techniques from the theory of zeta functions and regularized products to study the zeta determinant of a class of abstract operators with compact resolvent, and in particular its relation to other spectral functions.
Abstract:
This article presents maximum likelihood estimators (MLEs) and log-likelihood ratio (LLR) tests for the eigenvalues and eigenvectors of Gaussian random symmetric matrices of arbitrary dimension, where the observations are independent repeated samples from one or two populations. These inference problems are relevant in the analysis of diffusion tensor imaging data and polarized cosmic background radiation data, where the observations are, respectively, 3 × 3 and 2 × 2 symmetric positive definite matrices. The parameter sets involved in the inference problems for eigenvalues and eigenvectors are subsets of Euclidean space that are either affine subspaces, embedded submanifolds that are invariant under orthogonal transformations, or polyhedral convex cones. We show that for a class of sets that includes the ones considered in this paper, the MLEs of the mean parameter do not depend on the covariance parameters if and only if the covariance structure is orthogonally invariant. Closed-form expressions for the MLEs and the associated LLRs are derived for this covariance structure.
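By way of illustration in the 2 × 2 case: under the orthogonally invariant covariance structure singled out above, the MLE of the mean matrix reduces to the sample mean of the repeated observations, and its eigenvalues are available in closed form. A minimal numerical sketch (our simplification, not the paper's full estimators or LLR tests):

```python
import math

def mean_matrix(samples):
    """Entrywise average of 2x2 symmetric matrices, each given as
    (a, b, c) for [[a, b], [b, c]]. Under an orthogonally invariant
    covariance the MLE of the mean reduces to this sample mean
    (a sketch based on our reading of the abstract)."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(3))

def eig_sym2(a, b, c):
    """Closed-form eigenvalues of [[a, b], [b, c]], largest first."""
    tr = a + c
    disc = math.hypot(a - c, 2 * b)
    return (tr + disc) / 2, (tr - disc) / 2
```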
Abstract:
In this work we study some properties of the differential complex associated to a locally integrable (involutive) structure acting on forms with Gevrey coefficients. Among other results we prove that, for such complexes, Gevrey solvability follows from smooth solvability under the sole assumption of a regularity condition. As a consequence we obtain the proof of the Gevrey solvability for a first order linear PDE with real-analytic coefficients satisfying the Nirenberg-Treves condition (P).
Abstract:
We define a new type of self-similarity for one-parameter families of stochastic processes, which applies to certain important families of processes that are not self-similar in the conventional sense. This includes Hougaard Lévy processes such as Poisson processes, Brownian motions with drift, and inverse Gaussian processes, as well as some new fractional Hougaard motions defined as moving averages of a Hougaard Lévy process. Such families have many properties in common with ordinary self-similar processes, including the form of their covariance functions, and the fact that they appear as limits in a Lamperti-type limit theorem for families of stochastic processes.
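For contrast, conventional self-similarity, which the processes above fail to satisfy, requires for some exponent H > 0 equality in finite-dimensional distributions:

```latex
\{X(ct)\}_{t \ge 0} \;\overset{d}{=}\; \{c^{H} X(t)\}_{t \ge 0} \qquad \text{for every } c > 0.
```

A Poisson process, for instance, cannot satisfy this for any H, since scaling its values by c^H destroys the integer-valued state space.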
Abstract:
Genetic recombination can produce heterogeneous phylogenetic histories within a set of homologous genes. Delineating recombination events is important in the study of molecular evolution, as inference of such events provides a clearer picture of the phylogenetic relationships among different gene sequences or genomes. Nevertheless, detecting recombination events can be a daunting task, as the performance of different recombination-detecting approaches can vary, depending on evolutionary events that take place after recombination. We recently evaluated the effects of post-recombination events on the prediction accuracy of recombination-detecting approaches using simulated nucleotide sequence data. The main conclusion, supported by other studies, is that one should not depend on a single method when searching for recombination events. In this paper, we introduce a two-phase strategy, applying three statistical measures to detect the occurrence of recombination events, and a Bayesian phylogenetic approach in delineating breakpoints of such events in nucleotide sequences. We evaluate the performance of these approaches using simulated data, and demonstrate the applicability of this strategy to empirical data. The two-phase strategy proves to be time-efficient when applied to large datasets, and yields high-confidence results.
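The two-phase strategy itself is not reproduced here, but the signal it exploits can be illustrated with a toy sliding-window comparison: a recombinant sequence is closer to one putative parent before the breakpoint and to the other after it. A minimal sketch (ours, not the paper's statistical measures or Bayesian breakpoint estimation):

```python
def window_distances(query, ref_a, ref_b, window):
    """Per-window Hamming distances of `query` to two candidate
    parents; a flip in which reference is closer hints at a
    recombination breakpoint. A toy sketch, not the paper's
    two-phase strategy."""
    out = []
    for start in range(0, len(query) - window + 1, window):
        seg = slice(start, start + window)
        da = sum(q != r for q, r in zip(query[seg], ref_a[seg]))
        db = sum(q != r for q, r in zip(query[seg], ref_b[seg]))
        out.append((start, da, db))
    return out
```

On a sequence whose first half matches one reference and whose second half matches the other, the closer reference flips exactly at the breakpoint window.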
Abstract:
The theory of Owicki and Gries has been used as a platform for safety-based verification and derivation of concurrent programs. It has also been integrated with the progress logic of UNITY, which has allowed newer techniques of progress-based verification and derivation to be developed. However, a theoretical basis for the integrated theory has thus far been missing. In this paper, we provide a theoretical background for the logic of Owicki and Gries integrated with the logic of progress from UNITY. An operational semantics for the new framework is provided, which is used to prove the soundness of the progress logic.
Abstract:
The A_{n-1}^{(1)} trigonometric vertex model with generic non-diagonal boundaries is studied. The double-row transfer matrix of the model is diagonalized by the algebraic Bethe ansatz method in terms of the intertwiner and the corresponding face-vertex relation. The eigenvalues and the corresponding Bethe ansatz equations are obtained.
Abstract:
Extended gcd calculation has a long history and plays an important role in computational number theory and linear algebra. Recent results have shown that finding optimal multipliers in extended gcd calculations is difficult. We present an algorithm which uses lattice basis reduction to produce small integer multipliers x_1, ..., x_m for the equation s = gcd(s_1, ..., s_m) = x_1 s_1 + ... + x_m s_m, where s_1, ..., s_m are given integers. The method generalises to produce small unimodular transformation matrices for computing the Hermite normal form of an integer matrix.
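The lattice-basis-reduction algorithm is not reproduced here, but the equation the multipliers must satisfy can be illustrated by folding the classical two-argument extended Euclidean algorithm over the list; unlike the paper's method, this baseline gives no control over multiplier size. A sketch:

```python
def ext_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) = a*x + b*y."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def multi_ext_gcd(s):
    """Multipliers x_1..x_m with gcd(s_1,...,s_m) = sum x_i * s_i,
    by folding the two-argument extended Euclid over the list.
    Unlike the paper's lattice basis reduction, the multipliers
    produced this way need not be small."""
    g, xs = s[0], [1]
    for si in s[1:]:
        g, u, v = ext_gcd(g, si)
        xs = [u * x for x in xs] + [v]
    return g, xs
```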
Abstract:
We study some challenging presentations which arise as groups of deficiency zero. In four cases we settle finiteness: we show that two presentations are for finite groups while two are for infinite groups. Thus we answer three explicit questions in the literature and we provide the first published deficiency zero presentation for a group with derived length seven. The tools we use are coset enumeration and Knuth-Bendix rewriting, which are well-established as methods for proving finiteness or otherwise of a finitely presented group. We briefly comment on their capabilities and compare their performance.
Abstract:
The concept of local concurrence is used to quantify the entanglement between a single qubit and the remainder of a multiqubit system. For the ground state of the BCS model in the thermodynamic limit, the set of local concurrences completely describes the entanglement. As a measure of the entanglement of the full system we investigate the average local concurrence (ALC). We find that the ALC satisfies a simple relation with the order parameter. We then show that for finite systems with a fixed particle number, a relation between the ALC and the condensation energy exposes a threshold coupling. Below the threshold, entanglement measures besides the ALC are significant.
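For a pure state, the entanglement between one qubit and the rest is captured by the purity of the qubit's reduced density matrix via the standard pure-state formula C = sqrt(2(1 - Tr ρ²)). A minimal two-qubit sketch of this notion (an illustration only; the paper's BCS-state analysis is not reproduced):

```python
import math

def local_concurrence(amps):
    """Concurrence between qubit 0 and the rest for a pure two-qubit
    state [a00, a01, a10, a11], via C = sqrt(2 (1 - Tr rho^2)) with
    rho the reduced state of qubit 0. Standard pure-state formula;
    used here only to illustrate the notion of local concurrence."""
    a00, a01, a10, a11 = amps
    r00 = abs(a00)**2 + abs(a01)**2            # <0|rho|0>
    r11 = abs(a10)**2 + abs(a11)**2            # <1|rho|1>
    r01 = a00 * a10.conjugate() + a01 * a11.conjugate()  # <0|rho|1>
    purity = r00**2 + r11**2 + 2 * abs(r01)**2  # Tr rho^2
    return math.sqrt(max(0.0, 2 * (1 - purity)))
```

A Bell state gives the maximal value 1, while any product state gives 0, matching the intuition that local concurrence measures how entangled the single qubit is with everything else.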