845 results for 0802 Computation Theory and Mathematics


Relevance:

100.00%

Publisher:

Abstract:

Computer simulation of dynamical systems involves a phase space which is the finite set of machine arithmetic. Rounding state values of the continuous system to this grid yields a spatially discrete dynamical system, often with different dynamical behaviour. Discretization of an invertible smooth system gives a system with set-valued negative semitrajectories. As the grid is refined, asymptotic behaviour of the semitrajectories follows probabilistic laws which correspond to a set-valued Markov chain, whose transition probabilities can be explicitly calculated. The results are illustrated for two-dimensional dynamical systems obtained by discretization of fractional linear transformations of the unit disc in the complex plane.
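
As an illustration (the map and grid size here are mine, not from the paper), rounding even an invertible smooth map to an N-point grid produces empty and multi-valued preimages, which is exactly what makes the negative semitrajectories of the discretized system set-valued:

```python
# Toy sketch: discretize an invertible circle map onto an N-point grid by
# rounding, then inspect the preimage sets of the discretized map.
import math

N = 1000

def f(x):
    # An invertible smooth map of [0, 1) (its derivative stays positive).
    return (x + 0.3 + 0.1 * math.sin(2 * math.pi * x)) % 1.0

def f_N(k):
    # Grid point k/N is mapped to the nearest grid point of f(k/N).
    return round(f(k / N) * N) % N

# Collect preimage sets f_N^{-1}(j): some are empty, some multi-valued,
# so the discretized map is no longer invertible.
pre = {}
for k in range(N):
    pre.setdefault(f_N(k), []).append(k)
sizes = [len(pre.get(j, [])) for j in range(N)]
print("empty preimages:", sizes.count(0),
      "| multi-valued preimages:", sum(s > 1 for s in sizes))
```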

Relevance:

100.00%

Publisher:

Abstract:

Shadowing of a dynamical system is often used to justify the validity of computer simulations of the system, and in numerical calculations an inverse form of the shadowing concept is also of some interest. In this paper we characterize the notion of shadowing in terms of stability, and express the notion of hyperbolicity using the concept of inverse shadowing.

Relevance:

100.00%

Publisher:

Abstract:

In computer simulations of smooth dynamical systems, the original phase space is replaced by machine arithmetic, which is a finite set. The resulting spatially discretized dynamical systems do not inherit all functional properties of the original systems, such as surjectivity and existence of absolutely continuous invariant measures. This can lead to computational collapse to fixed points or short cycles. The paper studies loss of such properties in spatial discretizations of dynamical systems induced by unimodal mappings of the unit interval. The problem reduces to studying set-valued negative semitrajectories of the discretized system. As the grid is refined, the asymptotic behavior of the cardinality structure of the semitrajectories follows probabilistic laws corresponding to a branching process. The transition probabilities of this process are explicitly calculated. These results are illustrated by the example of the discretized logistic mapping.
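
As a quick sketch of this collapse (parameters chosen here for illustration, not taken from the paper), iterating the logistic map rounded to an N-point grid must eventually repeat a state, and it typically lands on a short cycle long before exhausting the grid:

```python
# Follow the grid-discretized logistic map x -> r*x*(1-x) until a state
# repeats; report the transient length and the cycle period.
def transient_and_period(N, x0=0.123, r=4.0):
    k = round(x0 * N)               # initial state on the grid {0/N, ..., N/N}
    seen = {}                       # state -> first time it was visited
    t = 0
    while k not in seen:
        seen[k] = t
        k = round(r * (k / N) * (1 - k / N) * N)   # logistic step, then round
        t += 1
    return seen[k], t - seen[k]     # (transient length, cycle period)

for N in (10**3, 10**4, 10**5):
    print("grid size", N, "->", transient_and_period(N))
```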

Relevance:

100.00%

Publisher:

Abstract:

This article advances the theoretical integration between securitization theory and the framing approach, resulting in a set of criteria here called security framing. It seeks to make a twofold contribution: to sharpen the study of the ideational elements that underlie the construction of threats, and to advance toward a fuller assessment of the audience's preferences. The case study under examination is the 2011 military intervention of the countries of the Gulf Cooperation Council in Bahrain. The security framing of this case helps illuminate the dynamics at play in one of the most important recent events in Gulf politics.

Relevance:

100.00%

Publisher:

Abstract:

The aim of this paper is to present an adaptation model for PCMAT, an Adaptive Educational Hypermedia System. The adaptation of the application is based on progressive self-assessment (exercises, tasks, and so on) and draws on constructivist learning theory and learning styles theory. Our objective is to create a better, more suitable adaptation model that takes into account the complexities of different users.

Relevance:

100.00%

Publisher:

Abstract:

Dissertation presented for the PhD degree in Education Science – Curricular Theory and Science Teaching, at Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.

Relevance:

100.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10].

Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].

Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown by Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance.

Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance.

Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel per endmember. This is a strong requirement that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, unmixing included, is very often preceded by a dimensionality reduction step that lowers computational complexity and improves the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on the performance of ICA and IFA. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data; the MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55].

We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one; nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end the chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations.

The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief review of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and gives some illustrative examples. Section 6.8 concludes with some remarks.
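
For orientation, here is a minimal sketch of the linear mixing model described above; all sizes, signatures, and noise levels are synthetic choices of mine, and the renormalized nonnegative least squares is only a crude stand-in for the fully constrained least-squares approach [20]:

```python
# Linear mixing model with known endmember signatures, unmixed per pixel by
# nonnegative least squares and renormalized to satisfy the sum-to-one
# (full additivity) constraint on the abundance fractions.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

L, p, N = 50, 3, 1000                       # bands, endmembers, pixels
M = rng.uniform(0.0, 1.0, (L, p))           # synthetic endmember signatures (columns)
S = rng.dirichlet(np.ones(p), N).T          # abundances: nonnegative, sum to one
X = M @ S + 0.01 * rng.standard_normal((L, N))   # observed pixels with system noise

S_hat = np.empty_like(S)
for i in range(N):
    s, _ = nnls(M, X[:, i])                 # enforce nonnegativity per pixel
    S_hat[:, i] = s / max(s.sum(), 1e-12)   # project back onto the simplex

print("mean abs abundance error:", np.abs(S_hat - S).mean())
```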

Relevance:

100.00%

Publisher:

Abstract:

The concepts and instruments required for the teaching and learning of geometric optics are introduced in the didactic process without a proper didactic transposition. This claim is supported by ample evidence of both wide- and deep-rooted alternative concepts on the topic. Didactic transposition is a theory that arose from reflection on the teaching and learning process in mathematics but has been used in other disciplinary fields. It is used in this work to clear up the main obstacles in the teaching-learning process of geometric optics. We proceed to argue that since Newton's approach to optics, in Book I of his Opticks, is independent of the corpuscular or undulatory nature of light, it is the most suitable for a constructivist learning environment. However, Newton's theory must be subject to a proper didactic transposition to help overcome the referred alternative concepts. We then describe our didactic transposition, which creates knowledge to be taught using a dialogical process between students' previous knowledge, the history of optics, and the desired outcomes in geometrical optics in an elementary pre-service teacher training course. Finally, we use the scheme-facet structure of knowledge both to analyse and discuss our results and to illuminate shortcomings that must be addressed in the next stage of our inquiry.

Relevance:

100.00%

Publisher:

Abstract:

Leaders must scan the internal and external environment, chart strategic and task objectives, and provide performance feedback. These instrumental leadership (IL) functions go beyond the motivational and quid pro quo leader behaviors that comprise the full-range (transformational, transactional, and laissez-faire) leadership model. In four studies we examined the construct validity of IL. We found evidence for a four-factor IL model that was highly prototypical of good leadership. IL predicted top-level leader emergence controlling for the full-range factors, initiating structure, and consideration. It also explained unique variance in outcomes beyond the full-range factors; the effects of transformational leadership were vastly overstated when IL was omitted from the model. We discuss the importance of a "fuller full-range" model of leadership for theory and practice. We also showcase our methodological contributions regarding corrections for common method variance (i.e., endogeneity) bias using two-stage least squares (2SLS) regression and Monte Carlo split-sample designs.
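
For readers unfamiliar with the technique, the following self-contained sketch (simulated data; in no way the authors' analysis) shows how 2SLS removes the bias that ordinary least squares incurs when a regressor is correlated with the error term:

```python
# Two-stage least squares: an instrument z, correlated with the endogenous
# regressor x but not with the structural error e, identifies the true slope.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
z = rng.standard_normal(n)                        # instrument
e = rng.standard_normal(n)                        # structural error
x = 0.8 * z + 0.5 * e + rng.standard_normal(n)    # endogenous: shares error with y
y = 1.0 + 2.0 * x + e                             # true slope is 2.0

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

# Stage 1: project X onto the instrument space; Stage 2: OLS on fitted values.
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0]
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print("OLS  slope (biased):   ", beta_ols[1])
print("2SLS slope (corrected):", beta_2sls[1])
```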

Relevance:

100.00%

Publisher:

Abstract:

A subclass of games with population monotonic allocation schemes is studied, namely games with regular population monotonic allocation schemes (rpmas). We focus on the properties of these games and we prove the coincidence between the core and both the Davis-Maschler bargaining set and the Mas-Colell bargaining set.

Relevance:

100.00%

Publisher:

Abstract:

We study the singular effects of vanishingly small surface tension on the dynamics of finger competition in the Saffman-Taylor problem, using the asymptotic techniques described by Tanveer [Philos. Trans. R. Soc. London, Ser. A 343, 155 (1993)] and Siegel and Tanveer [Phys. Rev. Lett. 76, 419 (1996)], as well as direct numerical computation, following the numerical scheme of Hou, Lowengrub, and Shelley [J. Comput. Phys. 114, 312 (1994)]. We demonstrate the dramatic effects of small surface tension on the late time evolution of two-finger configurations with respect to exact (nonsingular) zero-surface-tension solutions. The effect is present even when the relevant zero-surface-tension solution has asymptotic behavior consistent with selection theory. Such singular effects, therefore, cannot be traced back to steady state selection theory, and imply a drastic global change in the structure of phase-space flow. They can be interpreted in the framework of a recently introduced dynamical solvability scenario according to which surface tension unfolds the structurally unstable flow, restoring the hyperbolicity of multifinger fixed points.

Relevance:

100.00%

Publisher:

Abstract:

In a recent paper, Komaki studied the second-order asymptotic properties of predictive distributions, using the Kullback-Leibler divergence as a loss function. He showed that estimative distributions with asymptotically efficient estimators can be improved by predictive distributions that do not belong to the model. The model is assumed to be a multidimensional curved exponential family. In this paper we generalize the result, assuming any f-divergence as the loss function. A relationship arises between alpha-connections and optimal predictive distributions. In particular, using an alpha-divergence to measure the goodness of a predictive distribution, the optimal shift of the estimative distribution is related to alpha-covariant derivatives. The expression that we obtain for the asymptotic risk is also useful for studying the higher-order asymptotic properties of an estimator in the mentioned class of loss functions.
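
For orientation, these are the standard definitions (conventions differ in sign and scaling across the literature) of the f-divergence, for convex f with f(1) = 0, and of Amari's alpha-divergence, which recovers the Kullback-Leibler divergence in the limits alpha to plus or minus 1:

```latex
\[
  D_f(p \,\|\, q) = \int q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx,
  \qquad
  D_\alpha(p \,\|\, q) = \frac{4}{1-\alpha^2}
  \left(1 - \int p(x)^{\frac{1-\alpha}{2}}\, q(x)^{\frac{1+\alpha}{2}}\, dx\right),
  \quad \alpha \neq \pm 1 .
\]
```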

Relevance:

100.00%

Publisher:

Abstract:

Two graphs with adjacency matrices $\mathbf{A}$ and $\mathbf{B}$ are isomorphic if there exists a permutation matrix $\mathbf{P}$ for which the identity $\mathbf{P}^{\mathrm{T}} \mathbf{A} \mathbf{P} = \mathbf{B}$ holds. Multiplying through by $\mathbf{P}$ and relaxing the permutation matrix to a doubly stochastic matrix leads to the linear programming relaxation known as fractional isomorphism. We show that the levels of the Sherali--Adams (SA) hierarchy of linear programming relaxations applied to fractional isomorphism interleave in power with the levels of a well-known color-refinement heuristic for graph isomorphism called the Weisfeiler--Lehman algorithm, or, equivalently, with the levels of indistinguishability in a logic with counting quantifiers and a bounded number of variables. This tight connection has quite striking consequences. For example, it follows immediately from a deep result of Grohe in the context of logics with counting quantifiers that a fixed number of levels of SA suffice to determine isomorphism of planar and minor-free graphs. We also offer applications in both finite model theory and polyhedral combinatorics. First, we show that certain properties of graphs, such as that of having a flow circulation of a prescribed value, are definable in the infinitary logic with counting with a bounded number of variables. Second, we exploit a lower bound construction due to Cai, Fürer, and Immerman in the context of counting logics to give simple explicit instances that show that the SA relaxations of the vertex-cover and cut polytopes do not reach their integer hulls for up to $\Omega(n)$ levels, where $n$ is the number of vertices in the graph.
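
A minimal sketch of the color-refinement heuristic in question, one-dimensional Weisfeiler-Lehman, written here for graphs given as adjacency lists (the two example graphs are mine):

```python
# One-dimensional Weisfeiler-Lehman color refinement: repeatedly relabel each
# vertex by its current color together with the multiset of neighbor colors,
# until the coloring stabilizes.
def wl_colors(adj):
    n = len(adj)
    colors = [0] * n                      # start with the uniform coloring
    for _ in range(n):                    # n rounds always suffice to stabilize
        signatures = [(colors[v], tuple(sorted(colors[u] for u in adj[v])))
                      for v in range(n)]
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures)))}
        new_colors = [relabel[sig] for sig in signatures]
        if new_colors == colors:          # refinement is stable
            break
        colors = new_colors
    return colors

# Two non-isomorphic graphs that color refinement distinguishes:
path = [[1], [0, 2], [1, 3], [2]]         # P4
star = [[1, 2, 3], [0], [0], [0]]         # K_{1,3}
# Different stable color multisets => the graphs are distinguished.
print(sorted(wl_colors(path)) == sorted(wl_colors(star)))  # False
```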

Relevance:

100.00%

Publisher:

Abstract:

Programming and mathematics are core areas of computer science (CS) and consequently also important parts of CS education. Introductory instruction in these two topics is, however, not without problems. Studies show that CS students find programming difficult to learn and that teaching mathematical topics to CS novices is challenging. One reason for the latter is the disconnection between mathematics and programming found in many CS curricula, which results in students not seeing the relevance of the subject for their studies. In addition, reports indicate that students' mathematical capability and maturity levels are dropping. The challenges faced when teaching mathematics and programming at CS departments can also be traced back to gaps in students' prior education. In Finland the high school curriculum does not include CS as a subject; instead, the focus is on learning to use the computer and its applications as tools. Similarly, many of the mathematics courses emphasize the application of formulas, while logic, formalisms and proofs, which are important in CS, are avoided. Consequently, high school graduates are not well prepared for studies in CS.

Motivated by these challenges, the goal of the present work is to describe new approaches to teaching mathematics and programming aimed at addressing these issues.

Structured derivations is a logic-based approach to teaching mathematics, where formalisms and justifications are made explicit. The aim is to help students become better at communicating their reasoning using mathematical language and logical notation at the same time as they become more confident with formalisms.

The Python programming language was originally designed with education in mind, and has a simple syntax compared to many other popular languages. The aim of using it in instruction is to address algorithms and their implementation in a way that allows focus to be put on learning algorithmic thinking and programming instead of on learning a complex syntax.

Invariant based programming is a diagrammatic approach to developing programs that are correct by construction. The approach is based on elementary propositional and predicate logic, and makes explicit the underlying mathematical foundations of programming. The aim is also to show how mathematics in general, and logic in particular, can be used to create better programs.
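
As a small illustration of that last point (the example is mine, not taken from the thesis), even ordinary Python lets a loop invariant be made explicit and checked at run time:

```python
# Summing a list with the loop invariant stated and asserted each iteration,
# in the spirit of invariant based programming.
def total(xs):
    i, acc = 0, 0
    while i < len(xs):
        # Invariant: acc equals the sum of xs[0..i-1].
        assert acc == sum(xs[:i])
        acc += xs[i]
        i += 1
    # Invariant plus the exit condition gives the postcondition.
    assert acc == sum(xs) and i == len(xs)
    return acc

print(total([1, 2, 3, 4]))  # 10
```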