919 results for Higher Order Thinking


Relevance:

80.00%

Publisher:

Abstract:

Molecular dynamics calculations of the mean square displacement have been carried out for the alkali metals Na, K and Cs and for an fcc nearest-neighbour Lennard-Jones model applicable to rare gas solids. The computations for the alkalis were done at several temperatures, both at fixed volume and at the zero-pressure volume corresponding to each temperature. In the fcc case, results were obtained for a wide range of both the temperature and density. Lattice dynamics calculations of the harmonic and the lowest order anharmonic (cubic and quartic) contributions to the mean square displacement were performed for the same potential models as in the molecular dynamics calculations. The Brillouin zone sums arising in the harmonic and the quartic terms were computed for very large numbers of points in q-space, and were extrapolated to obtain results fully converged with respect to the number of points in the Brillouin zone. Excellent agreement between the lattice dynamics and molecular dynamics results was observed for all the alkali metals, except for the zero-pressure case of Cs, where the difference is about 15% near the melting temperature. It was concluded that for the alkalis, the lowest order perturbation theory works well even at temperatures close to the melting temperature. For the fcc nearest-neighbour model it was found that the number of particles (256) used for the molecular dynamics calculations produces a result which is between 10 and 20% smaller than the value converged with respect to the number of particles. However, the general temperature dependence of the mean square displacement is the same in molecular dynamics and lattice dynamics at all temperatures for the highest densities examined, while at higher volumes and high temperatures the results diverge. This indicates the importance of the higher order perturbation theory contributions in these cases.
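
As a rough illustration of how a mean square displacement is extracted from a molecular dynamics trajectory, here is a minimal sketch for a 1D nearest-neighbour Lennard-Jones chain (not the thesis's fcc/bcc setup; the particle count, time step and initial conditions are made up):

```python
# Minimal sketch: mean square displacement about lattice sites for a
# 1D nearest-neighbour Lennard-Jones chain integrated with velocity
# Verlet. Illustrative only; the thesis used 3D crystals with far
# longer runs and careful convergence with particle number.
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 64, 0.002, 20000
a = 2 ** (1 / 6)                  # LJ equilibrium spacing (reduced units)
L = N * a                         # periodic box length
sites = a * np.arange(N)          # ideal lattice positions
x = sites + 0.01 * rng.standard_normal(N)
v = 0.05 * rng.standard_normal(N)

def forces(x):
    d = np.roll(x, -1) - x        # signed distance to right neighbour
    d -= L * np.round(d / L)      # minimum-image convention
    r = np.abs(d)
    bond = 24 * (2 * r ** -13 - r ** -7) * np.sign(d)  # force on right member
    return np.roll(bond, 1) - bond

f = forces(x)
msd_samples = []
for step in range(steps):
    v += 0.5 * dt * f             # velocity Verlet half-kick
    x += dt * v
    f = forces(x)
    v += 0.5 * dt * f
    if step > steps // 2:         # discard equilibration
        u = x - sites             # displacement from lattice site
        u -= L * np.round(u / L)
        msd_samples.append(np.mean(u ** 2))

print("<u^2> =", np.mean(msd_samples))
```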

Relevance:

80.00%

Publisher:

Abstract:

One of the main objectives of the mid-Atlantic transect is to improve dating resolution of sequences and unconformity surfaces. Dinoflagellate cysts from two Ocean Drilling Program boreholes, the onshore Leg 174AX Ocean View Site and Leg 174A continental shelf Site 1071, are used to provide age estimates for sequences and unconformities formed on the New Jersey continental margin during the Miocene epoch. Despite the occasional lack of dinocysts in barren and oxidized sections, dinocyst biochronology still offers greater age control than that provided by other microfossils in marginal marine environments. An early Miocene to late Miocene chronology based on ages determined for the two study sites is presented. In addition, palynofacies are used to unravel the systems tract character of the Miocene sequences and provide insight into the effects of taphonomy and preservation of palynomorphs in marginal marine and shelf environments under different sea level conditions. More precise placement of maximum flooding surfaces is possible through the identification of condensed sections, and palynofacies shifts can also reveal subaerially exposed sections and surfaces not apparent in seismic or lithological analyses. The problems with the application of the pollen record in the interpretation of Miocene climate are also discussed. Palynomorphs provide evidence for a second-order lowering of sea level during the Miocene, onto which higher order sea level fluctuations are superimposed. Correlation of sequences and unconformities is attempted between onshore boreholes and from the onshore Ocean View borehole to offshore Site 1071.

Relevance:

80.00%

Publisher:

Abstract:

Q-methodology permitted 41 people to communicate their perspectives on grief. In an attempt to clarify the research to date and to allow those who have experienced this human journey to direct the scientists, 80 statements drawn from academic and counselling sources were presented to the participants. Five different perspectives emerged from the Q-sorts and factor analysis. Each perspective was valuable for the understanding of different groups of mourners. They were interpreted using questionnaire data and interview information. They are as follows: Factor 1, Growth Optimism; Factor 2, Schema Destruction and Negative Affect; Factor 3, Identification with the Deceased Person; Factor 4, Intact Worldview with High Clarity and High Social Support; Factor 5, Schema Destruction with High Preoccupation and Attention to Emotion. Some people grow in the face of grief, others hold on to essentially the same schemas, and others are devastated by their loss. The different perspectives reported herein supply clues to the sources of these differing outcomes. From examination of Factor 1, it appears that a healthy living relationship helps substantially in the event of loss. An orientation toward emotions that encourages clarity without hyper-vigilance to emotion, exemplified by Factor 4, may be helpful as well. Strategies for maintaining schematic representations of the world with little alteration include identification with the values of the deceased person, as in Factor 3, and reliance on social support and/or God, as demonstrated by Factor 4. When the relationship had painful periods, social support may be accessed to benefit some mourners. When the person's frame of reference or higher order schemas are assaulted by the events of loss, the people most at risk for traumatic grief seem to be those with difficult relationships, as indicated by Factor 5 individuals. When low social support, high attention to emotion with low clarity, and little belief that feelings can be altered for the better are also attributes of the mourner, devastating grief can result. In the end, there are groups of people who are forced to endure the entire process of schema destruction and devastation. Some appear to recover in part and others appear to stay in a form of purgatory for many years. The results of this study suggest that those who experience devastating grief may be in the minority. In the future, interventions could be more specifically addressed if these perspectives are replicated in a larger, more detailed study.
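
For readers unfamiliar with Q-methodology, the core computation is a by-person factor analysis: participants, not statements, are the variables, so factors group people who sorted the statements similarly. A minimal sketch with synthetic data (the study's actual extraction and rotation choices are not reproduced here):

```python
# Minimal sketch of Q-methodology's by-person factor analysis using
# unrotated principal components. Synthetic Q-sorts on a -4..+4 grid.
import numpy as np

rng = np.random.default_rng(1)
n_statements, n_people = 80, 41
sorts = rng.integers(-4, 5, size=(n_statements, n_people)).astype(float)

corr = np.corrcoef(sorts, rowvar=False)      # person-by-person correlations
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]            # sort components by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_factors = 5                                 # the study retained five
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
print("variance explained:", eigvals[:n_factors] / n_people)
print("participant loadings shape:", loadings.shape)   # (41, 5)
```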

Relevance:

80.00%

Publisher:

Abstract:

This thesis will introduce a new strongly typed programming language utilizing Self types, named Win--*Foy, along with a suitable user interface designed specifically to highlight language features. The need for such a programming language is based on deficiencies found in programming languages that support both Self types and subtyping. Subtyping is a concept that is taken for granted by most software engineers programming in object-oriented languages. Subtyping supports subsumption but it does not support the inheritance of binary methods. Binary methods contain an argument of type Self, the same type as the object itself, in a contravariant position, i.e. as a parameter. There are several arguments in favour of introducing Self types into a programming language [1]. This rationale led to the development of a relation that has become known as matching [4, 5]. The matching relation does not support subsumption; however, it does support the inheritance of binary methods. Two forms of matching have been proposed [1]. Specifically, these relations are known as higher-order matching and f-bound matching. Previous research on these relations indicates that the higher-order matching relation is both reflexive and transitive whereas f-bound matching is reflexive but not transitive [7]. The higher-order matching relation provides significant flexibility regarding inheritance of methods that utilize or return values of the same type. This flexibility, in certain situations, can restrict the programmer from defining specific classes and methods which are based on constant values [21]. For this reason, the type This is used as a second reference to the type of the object that cannot, contrary to Self, be specialized in subclasses. F-bound matching allows a programmer to define a function that will work for all types A', a subtype of an upper bound function of type A, with the result type being dependent on A'. The use of parametric polymorphism in f-bound matching provides a connection to subtyping in object-oriented languages. This thesis will contain two main sections. Firstly, significant details concerning deficiencies of the subtype relation and the need to introduce higher-order and f-bound matching relations into programming languages will be explored. Secondly, a new programming language named Win--*Foy Functional Object-Oriented Programming Language has been created, along with a suitable user interface, in order to facilitate experimentation by programmers regarding the matching relation. The construction of the programming language and the user interface will be explained in detail.
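
The binary-method problem and F-bounded quantification can be illustrated outside Win--*Foy; here is a hedged Python sketch (Python's TypeVar bound only approximates a true F-bound, and the class names are invented):

```python
# Illustration only -- not Win--*Foy syntax. `leq` is a binary method:
# its argument is meant to have the same type as the receiver (Self).
from __future__ import annotations
from typing import TypeVar

T = TypeVar("T", bound="Ordered")   # approximates the F-bound on T

class Ordered:
    def leq(self: T, other: T) -> bool:
        raise NotImplementedError

class Money(Ordered):
    def __init__(self, cents: int) -> None:
        self.cents = cents
    def leq(self, other: Money) -> bool:   # Self-typed argument specialized
        return self.cents <= other.cents

def minimum(x: T, y: T) -> T:
    # F-bounded function: accepts any T matching Ordered, and the result
    # type tracks T itself, as in f-bound matching.
    return x if x.leq(y) else y

print(minimum(Money(100), Money(250)).cents)   # -> 100
```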

Relevance:

80.00%

Publisher:

Abstract:

Formal verification of software can be an enormous task. This fact has led some software engineers to claim that formal verification is not feasible in practice. One possible method of supporting the verification process is a programming language that provides powerful abstraction mechanisms combined with intensive reuse of code. In this thesis we present a strongly typed functional object-oriented programming language. This language features type operators of arbitrary kind corresponding to so-called type protocols. Subclassing and inheritance are based on higher-order matching, i.e., they utilize type protocols as the basic tool for reuse of code. We define the operational and axiomatic semantics of this language formally. The latter is the basis of the interactive proof assistant VOOP (Verified Object-Oriented Programs) that allows the user to prove equational properties of programs interactively.
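
As a loose analogue of protocol-based reuse and of the equational properties VOOP proves, here is a hedged Python sketch (the Monoid protocol and class names are invented, and a runtime assertion is of course far weaker than an interactive proof):

```python
# Loose analogue only -- not VOOP. Any class structurally satisfying
# the protocol can be reused by `fold`, and an equational property
# (associativity of combine) is spot-checked rather than proved.
from typing import Protocol

class Monoid(Protocol):
    def combine(self, other: "Monoid") -> "Monoid": ...

class IntSum:
    def __init__(self, n: int) -> None:
        self.n = n
    def combine(self, other: "IntSum") -> "IntSum":
        return IntSum(self.n + other.n)

def fold(xs, unit):
    acc = unit
    for x in xs:
        acc = acc.combine(x)
    return acc

a, b, c = IntSum(1), IntSum(2), IntSum(3)
assert a.combine(b).combine(c).n == a.combine(b.combine(c)).n
print(fold([a, b, c], IntSum(0)).n)   # -> 6
```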

Relevance:

80.00%

Publisher:

Abstract:

Researchers have conceptualized repetitive behaviours in individuals with Autism Spectrum Disorder (ASD) on a continuum of lower-level, motoric, repetitive behaviours and higher-order repetitive behaviours that include symptoms of OCD (Hollander, Wang, Braun, & Marsh, 2009). Although obsessional, ritualistic, and stereotyped behaviours are a core feature of ASD, individuals with ASD frequently experience obsessions and compulsions that meet DSM-IV-TR (American Psychiatric Association, 2000) criteria for Obsessive-Compulsive Disorder (OCD). Given the acknowledged difficulty in differentiating between OCD and Autism-related obsessive-compulsive phenomena, the present study uses the term Obsessive Compulsive Behaviour (OCB) to represent both phenomena. This study used a multiple baseline design across behaviours and ABC designs (Cooper, Heron, & Heward, 2007) to investigate whether a 9-week Group Function-Based Cognitive Behavioural Therapy (CBT) decreased OCB in four children (ages 7-11 years) with High Functioning Autism (HFA). Key treatment components included traditional CBT components (awareness training, cognitive-behavioural skills training, exposure and response prevention) as well as function-based assessment and intervention. Time series data indicated significant decreases in OCBs. Standardized assessments showed decreases in symptom severity, and increases in quality of life for the participants and their families. Issues regarding symptom presentation, assessment, and treatment of a dually diagnosed child are discussed.

Relevance:

80.00%

Publisher:

Abstract:

Heyting categories, a variant of Dedekind categories, and Arrow categories provide a convenient framework for expressing and reasoning about fuzzy relations and programs based on those methods. In this thesis we present an implementation of Heyting and Arrow categories suitable for reasoning and program execution using Coq, an interactive theorem prover based on Higher-Order Logic (HOL) with dependent types. This implementation can be used to specify and develop correct software based on L-fuzzy relations such as fuzzy controllers. We give an overview of lattices, L-fuzzy relations, category theory and dependent type theory before describing our implementation. In addition, we provide examples of program executions based on our framework.
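
Outside the Coq development, the basic operation on L-fuzzy relations is easy to illustrate; a minimal numpy sketch for the lattice L = [0, 1], where relational composition is sup-min (the relation matrices are made up):

```python
# Minimal sketch of L-fuzzy relations over L = [0, 1]: composition is
# sup-min, a relational analogue of matrix multiplication. The thesis
# works abstractly in Coq; this only shows the underlying operation.
import numpy as np

def compose(R, S):
    # (R ; S)(x, z) = sup_y min(R(x, y), S(y, z))
    return np.max(np.minimum(R[:, :, None], S[None, :, :]), axis=1)

R = np.array([[0.2, 0.9],
              [1.0, 0.3]])
S = np.array([[0.5, 0.0],
              [0.8, 0.6]])
print(compose(R, S))   # e.g. entry (0,0) = max(min(0.2,0.5), min(0.9,0.8)) = 0.8
```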

Relevance:

80.00%

Publisher:

Abstract:

Several authors have recently discussed the limited dependent variable regression model with serial correlation between residuals. The pseudo-maximum likelihood estimators obtained by ignoring serial correlation altogether have been shown to be consistent. We present alternative pseudo-maximum likelihood estimators which are obtained by ignoring serial correlation only selectively. Monte Carlo experiments on a model with first order serial correlation suggest that our alternative estimators have substantially lower mean-squared errors in medium size and small samples, especially when the serial correlation coefficient is high. The same experiments also suggest that the true level of the confidence intervals established with our estimators by assuming asymptotic normality is somewhat lower than the intended level. Although the paper focuses on models with only first order serial correlation, the generalization of the proposed approach to serial correlation of higher order is also discussed briefly.
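
A hedged sketch of the kind of Monte Carlo setup the abstract describes: a Tobit-style limited dependent variable model with AR(1) disturbances, estimated by a pseudo-maximum likelihood that ignores the serial correlation altogether (parameter values and sample size are invented; the paper's selective estimators are not implemented here):

```python
# Tobit model with AR(1) latent errors, estimated by a pseudo-ML that
# treats the errors as i.i.d. -- the "ignore serial correlation
# altogether" benchmark. Illustrative parameters only.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
n, beta, sigma, rho = 400, 1.0, 1.0, 0.8

x = rng.standard_normal(n)
e = np.zeros(n)
for t in range(1, n):                        # AR(1), unit stationary variance
    e[t] = rho * e[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
y = np.maximum(beta * x + sigma * e, 0.0)    # censoring at zero

def neg_loglik(theta):
    b, s = theta[0], np.exp(theta[1])        # s > 0 via exp
    mu = b * x
    ll = np.where(y > 0,
                  norm.logpdf(y, mu, s),     # uncensored observations
                  norm.logcdf(-mu / s))      # censored observations
    return -ll.sum()

res = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
print("pseudo-ML beta-hat:", res.x[0])      # consistent despite ignored AR(1)
```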

Relevance:

80.00%

Publisher:

Abstract:

The GARCH and Stochastic Volatility paradigms are often brought into conflict as two competing views of the appropriate conditional variance concept: conditional variance given past values of the same series, or conditional variance given a larger past information set (including possibly unobservable state variables). The main thesis of this paper is that, since in general the econometrician has no idea about something like a structural level of disaggregation, a well-written volatility model should be specified in such a way that one is always allowed to reduce the information set without invalidating the model. In this respect, the debate between observable past information (in the GARCH spirit) and unobservable conditioning information (in the state-space spirit) is irrelevant. In this paper, we stress a square-root autoregressive stochastic volatility (SR-SARV) model which remains true to the GARCH paradigm of ARMA dynamics for squared innovations but weakens the GARCH structure in order to obtain required robustness properties with respect to various kinds of aggregation. It is shown that the lack of robustness of the usual GARCH setting is due to two very restrictive assumptions: perfect linear correlation between squared innovations and conditional variance on the one hand, and a linear relationship between the conditional variance of the future conditional variance and the squared conditional variance on the other hand. By relaxing these assumptions, thanks to a state-space setting, we obtain aggregation results without renouncing the conditional variance concept (and related leverage effects), as is the case for the recently suggested weak GARCH model, which obtains aggregation results by replacing conditional expectations with linear projections on symmetric past innovations. Moreover, unlike the weak GARCH literature, we are able to define multivariate models, including higher order dynamics and risk premiums (in the spirit of GARCH(p,p) and GARCH-in-mean), and to derive conditional moment restrictions well suited for statistical inference. Finally, we are able to characterize the exact relationships between our SR-SARV models (including higher order dynamics, leverage effect and in-mean effect), usual GARCH models and continuous time stochastic volatility models, so that previous results about aggregation of weak GARCH and continuous time GARCH modelling can be recovered in our framework.
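
The "ARMA dynamics for squared innovations" can be made explicit in the standard GARCH(1,1) case; a short textbook derivation (standard material, not taken from the paper itself):

```latex
% GARCH(1,1): \sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2.
% Define the martingale-difference innovation u_t = \varepsilon_t^2 - \sigma_t^2,
% substitute \sigma_t^2 = \varepsilon_t^2 - u_t on both sides, and rearrange:
\varepsilon_t^2 \;=\; \omega + (\alpha + \beta)\,\varepsilon_{t-1}^2 + u_t - \beta\,u_{t-1},
% so squared innovations follow an ARMA(1,1) with autoregressive
% coefficient \alpha + \beta -- the structure SR-SARV retains while
% relaxing the two restrictive assumptions named in the abstract.
```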

Relevance:

80.00%

Publisher:

Abstract:

Recent work shows that a low correlation between the instruments and the included variables leads to serious inference problems. We extend the local-to-zero analysis of models with weak instruments to models with estimated instruments and regressors and with higher-order dependence between instruments and disturbances. This makes this framework applicable to linear models with expectation variables that are estimated non-parametrically. Two examples of such models are the risk-return trade-off in finance and the impact of inflation uncertainty on real economic activity. Results show that inference based on Lagrange Multiplier (LM) tests is more robust to weak instruments than Wald-based inference. Using LM confidence intervals leads us to conclude that no statistically significant risk premium is present in returns on the S&P 500 index, excess holding yields between 6-month and 3-month Treasury bills, or in yen-dollar spot returns.
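
A hedged sketch of the weak-instrument phenomenon at stake, comparing a Wald t-statistic from IV estimation with an Anderson-Rubin/score-type statistic evaluated at the null, which, like the LM-based inference the abstract favours, remains valid however weak the instrument (all numbers are synthetic; the paper's estimated-instrument framework is not reproduced):

```python
# Single weak instrument: Wald t from simple IV vs. a robust score
# (Anderson-Rubin-type) statistic at the null beta = beta0.
import numpy as np

rng = np.random.default_rng(3)
n, pi, beta0 = 500, 0.05, 0.0                # small pi => weak instrument

z = rng.standard_normal(n)
u = rng.standard_normal(n)
v = 0.8 * u + 0.6 * rng.standard_normal(n)   # endogenous regressor
x = pi * z + v
y = beta0 * x + u

# IV estimate and its Wald t-statistic (unreliable under weak z)
beta_iv = (z @ y) / (z @ x)
resid = y - beta_iv * x
se = np.sqrt((resid @ resid) / n * (z @ z)) / abs(z @ x)
print("Wald t:", beta_iv / se)

# Score statistic at the null: regress y - beta0*x on z, test z'e0 = 0
e0 = y - beta0 * x
ar = (z @ e0) ** 2 / (z ** 2 @ e0 ** 2)      # ~ chi2(1) even when pi -> 0
print("AR/LM-type stat:", ar)
```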

Relevance:

80.00%

Publisher:

Abstract:

Visual motion perception is essential for safe locomotion and for efficient interaction with our environment. It is therefore necessary to understand the nature of the mechanisms responsible for analysing motion information, as well as the effect of aging on the response of these mechanisms. Two studies are presented. The first analysed the mechanisms responsible for the perception of fractal rotation, a new stimulus introduced by Benton, O'Brien & Curran (2007). This type of stimulus was created in order to isolate form-sensitive mechanisms. Several authors have suggested that mechanisms sensitive to second-order motion use position cues to extract motion information (Seiffert & Cavanagh, 1998). The present study therefore sought to determine whether fractal rotation is analysed by such mechanisms. The results suggest that the mechanisms sensitive to fractal rotation are orientation-based, whereas those sensitive to first-order rotation are energy-based. Moreover, some dissociation appears to exist between the mechanisms responsible for processing fractal and first-order rotation. The second study aimed to establish the effect of aging on the integration of first- and second-order motion. The results indicate that mechanisms sensitive to second-order motion are more strongly affected than those sensitive to first-order motion. Thus, visual functions requiring higher-order cortical integration appear to be more affected by aging.

Relevance:

80.00%

Publisher:

Abstract:

The use of formal methods is increasingly common in software development, and type systems are the most successful formal method. The advancement of formal methods presents new challenges as well as new opportunities. One challenge is to ensure that a compiler preserves the semantics of programs, so that the properties guaranteed about the source code also apply to the executable code. This thesis presents a compiler that translates a higher-order functional language with polymorphism into a typed assembly language, whose main property is that type preservation is verified automatically, using type annotations on the compiler's code. Our compiler implements the code transformations essential for a higher-order functional language, namely CPS conversion, closure conversion and code generation. We present the details of the strongly typed representations of the intermediate languages, and the constraints they impose on the implementation of the code transformations. Our goal is to guarantee type preservation with a minimum of annotations, and without compromising the overall modularity and readability of the compiler's code. This goal is largely achieved in the treatment of the core features of the language (the "simple types"), in contrast to the treatment of polymorphism, which still requires substantial work to satisfy type checking.
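
Of the three passes named above, CPS conversion is the easiest to sketch; here is a minimal untyped version for a toy lambda calculus encoded as tuples (a hypothetical encoding; the thesis's contribution is precisely to do this with machine-checked type preservation, which this sketch does not attempt):

```python
# Naive (Fischer/Plotkin-style) CPS conversion for a toy lambda
# calculus: expr ::= ("var", x) | ("lam", x, body) | ("app", f, a).
# `k` is the continuation, itself an expression of the same language.
import itertools

fresh = (f"k{i}" for i in itertools.count())   # fresh variable names

def cps(expr, k):
    tag = expr[0]
    if tag == "var":
        return ("app", k, expr)                # pass the value to k
    if tag == "lam":
        _, x, body = expr
        c = next(fresh)
        # a CPS'd function takes its argument and then a continuation
        return ("app", k, ("lam", x, ("lam", c, cps(body, ("var", c)))))
    if tag == "app":
        _, f, a = expr
        fv, av = next(fresh), next(fresh)
        return cps(f, ("lam", fv,
                   cps(a, ("lam", av,
                       ("app", ("app", ("var", fv), ("var", av)), k)))))
    raise ValueError(tag)

identity = ("lam", "x", ("var", "x"))
print(cps(("app", identity, ("var", "y")), ("var", "halt")))
```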

Relevance:

80.00%

Publisher:

Abstract:

Computer vision tasks such as object recognition remain unsolved to this day. Learning algorithms such as Artificial Neural Networks (ANNs) represent a promising approach for learning features useful for these tasks. This optimization process is nevertheless difficult. Deep networks based on Restricted Boltzmann Machines (RBMs) have recently been proposed to guide the extraction of intermediate representations, using an unsupervised learning algorithm. This thesis presents, through three articles, contributions to this field of research. The first article deals with the convolutional RBM. The use of local receptive fields, together with the grouping of hidden units into layers sharing the same parameters, considerably reduces the number of parameters to learn and yields local, translation-equivariant feature detectors. This leads to models with better likelihood than RBMs trained on image patches. The second article is motivated by recent findings in neuroscience. It analyses the impact of quadratic units on visual classification tasks, as well as that of a new activation function. We observe that ANNs with quadratic units using the softsign function give better generalization performance. The last article offers a critical view of popular RBM training algorithms. We show that the Contrastive Divergence (CD) algorithm and Persistent CD are not robust: both require a relatively flat energy surface for their negative chain to mix. Fast-weight PCD circumvents this problem by slightly perturbing the model, but this generates noisy samples. Using tempered chains in the negative phase is a robust way to address these problems and leads to better generative models.
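
For concreteness, the baseline algorithm criticized in the third article, Contrastive Divergence with one Gibbs step (CD-1), can be sketched in a few lines of numpy (binary RBM, made-up sizes, no convolution, persistence or tempering):

```python
# One CD-1 update for a binary RBM: positive phase on the data,
# one Gibbs step for the negative phase, update from the difference
# of visible-hidden correlations.
import numpy as np

rng = np.random.default_rng(4)
n_vis, n_hid, lr = 784, 128, 0.01
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    ph0 = sigmoid(v0 @ W + b_h)                     # positive phase
    h0 = (rng.random(n_hid) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b_v)                   # one Gibbs step down...
    v1 = (rng.random(n_vis) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b_h)                     # ...and back up
    return (np.outer(v0, ph0) - np.outer(v1, ph1),  # dW
            v0 - v1,                                # db_v
            ph0 - ph1)                              # db_h

v0 = (rng.random(n_vis) < 0.5).astype(float)        # stand-in for a data vector
dW, db_v, db_h = cd1_step(v0)
W += lr * dW; b_v += lr * db_v; b_h += lr * db_h
print("update norms:", np.linalg.norm(dW), np.linalg.norm(db_v))
```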

Relevance:

80.00%

Publisher:

Abstract:

The attached file was created with Scientific WorkPlace LaTeX.

Relevance:

80.00%

Publisher:

Abstract:

The grouping of neurons with similar properties gives rise to modules that optimize the analysis of information. The consequence is the presence of functional maps in the primary visual cortex of certain mammals for many parameters such as orientation, direction of motion, or stimulus position (visuotopy). The first part of this thesis characterizes the modular organization of the primary visual cortex for a fundamental parameter, centre/surround suppression, and, beyond the primary visual cortex (in area 21a), for orientation and direction. All studies were carried out using optical imaging of intrinsic signals in the visual cortex of the anaesthetized cat. Quantifying the modulation by stimulus size revealed the presence of modules of strong and weak surround suppression in the primary visual cortex (areas 17 and 18). This type of organization had previously been observed only in a higher-level area in the primate. A modular organization for orientation, similar to that observed in the primary visual cortex, was revealed in area 21a. In contrast, unlike area 18, area 21a did not appear to be organized into direction domains. Together, these results contribute to knowledge of the anatomo-functional organization of the cat visual cortex and to a better understanding of the factors that determine the presence of a modular organization. The second part of this thesis addresses improving the quantitative aspect of temporal analysis in optical imaging of intrinsic signals. This new approach, based on Fourier analysis, considerably increased the signal-to-noise ratio of the recordings. However, this analysis had so far been based on quantifying a single harmonic, which limited its use to mapping orientation and retinotopy only. By exploiting higher harmonics, a model was proposed to estimate receptive field size and direction selectivity. This model was subsequently validated by conventional approaches in the primary visual cortex.
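
The single-harmonic version of the Fourier analysis described in the second part can be sketched directly; a minimal synthetic example that recovers amplitude and phase maps at the stimulation frequency (acquisition parameters are invented, and the higher-harmonic model for receptive field size and direction selectivity is not reproduced):

```python
# Fourier analysis of periodic-stimulation intrinsic-signal imaging:
# take each pixel's time course, FFT it, and read off amplitude and
# phase at the stimulus frequency (the first harmonic).
import numpy as np

rng = np.random.default_rng(5)
n_frames, fps, stim_hz = 600, 10.0, 0.1       # 60 s run, 0.1 Hz stimulus
t = np.arange(n_frames) / fps
h, w = 32, 32

phase_true = np.linspace(0, 2 * np.pi, h * w).reshape(h, w)
signal = 0.01 * np.cos(2 * np.pi * stim_hz * t[:, None, None]
                       - phase_true[None, :, :])
movie = 1.0 + signal + 0.05 * rng.standard_normal((n_frames, h, w))

spectrum = np.fft.rfft(movie, axis=0)
freqs = np.fft.rfftfreq(n_frames, d=1 / fps)
k = np.argmin(np.abs(freqs - stim_hz))        # bin of the first harmonic

amplitude_map = np.abs(spectrum[k]) * 2 / n_frames
phase_map = np.angle(spectrum[k])
print("peak response amplitude:", amplitude_map.max())   # ~0.01
print("phase at corner pixels:", phase_map[0, 0], phase_map[-1, -1])
```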