905 results for Graphical representations
Abstract:
By 1900 the Jewish community of Tunisia witnessed the emergence of new competing identities: “assimilationist” of the Alliance Israelite Universelle, termed “Alliancist,” and Zionist. Strikingly, two members of the same family in Tunis, Raymond Valensi, President of the AIU Regional Committee, and Alfred Valensi, President of the Zionist Federation, led the struggle for their separate causes. In his discussion of identity in the modern world, Homi Bhabha asks, "How do strategies of representation or empowerment come to be formulated in the competing claims of communities…where, despite shared histories of …discrimination, the exchange of values, meanings and priorities…may be profoundly antagonistic…?" It is in this context that the claims of the Alliance and Zionism will be examined prior to World War I, based on the Archives of the AIU and on such secondary sources as the indispensable work of Paul Sebag. The tensions between the Alliancists and Zionists continued until the outbreak of World War II, as the French-speaking Jews of Tunisia sought to define their individual and collective identities.
Abstract:
Problem: Medical and veterinary students memorize facts but then have difficulty applying those facts in clinical problem solving. Cognitive engineering research suggests that the inability of medical and veterinary students to infer concepts from facts may be due in part to specific features of how information is represented and organized in educational materials. First, physical separation of pieces of information may increase the cognitive load on the student. Second, information that is necessary but not explicitly stated may also contribute to the student's cognitive load. Finally, the types of representations – textual or graphical – may also support or hinder the student's learning process. This may explain why students have difficulty applying biomedical facts in clinical problem solving. Purpose: To test the hypothesis that three specific aspects of expository text – the spatial distance between the facts needed to infer a rule, the explicitness of information, and the format of representation – affected the ability of students to solve clinical problems. Setting: The study was conducted in the parasitology laboratory of a college of veterinary medicine in Texas. Sample: The study subjects were a convenience sample consisting of 132 second-year veterinary students who matriculated in 2007. The age of this class upon admission ranged from 20 to 52, and the gender makeup of the class was approximately 75% female and 25% male. Results: No statistically significant difference in student ability to solve clinical problems was found when relevant facts were placed in proximity, nor when an explicit rule was stated. Further, no statistically significant difference in student ability to solve clinical problems was found when students were given different representations of material, including tables and concept maps. Findings: The findings from this study indicate that the three properties investigated – proximity, explicitness, and representation – had no statistically significant effect on student learning as it relates to clinical problem-solving ability. However, ad hoc observations as well as findings from other researchers suggest that the subjects were probably using rote learning techniques such as memorization, and therefore were not attempting to infer relationships from the factual material in the interventions unless they were specifically prompted to look for patterns. A serendipitous finding unrelated to the study hypothesis was that subjects who correctly answered questions regarding functional (non-morphologic) properties, such as mode of transmission and intermediate host, at the family taxonomic level were significantly more likely to correctly answer clinical case scenarios than were subjects who did not. These findings suggest a strong relationship (p < .001) between well-organized knowledge of taxonomic functional properties and clinical problem-solving ability. Recommendations: Further study should be undertaken investigating the relationship between knowledge of functional taxonomic properties and clinical problem-solving ability. In addition, the effect of prompting students to look for patterns in instructional material, followed by the effect of factors that affect cognitive load such as proximity, explicitness, and representation, should be explored.
Abstract:
Manuscript 1: "Conceptual Analysis: Externalizing Nursing Knowledge" We use concept analysis to establish that the report tool nurses prepare, carry, reference, amend, and use as a temporary data repository is an example of a cognitive artifact. This tool, integrally woven throughout the work and practice of nurses, is important to cognition and clinical decision-making. Establishing the tool as a cognitive artifact will support new dimensions of study. Such studies can characterize how this report tool supports cognition, the internal representation of knowledge and skills, and the external representation of the nurse's knowledge. Manuscript 2: "Research Methods: Exploring Cognitive Work" The purpose of this paper is to describe a complex, cross-sectional, multi-method approach to the study of personal cognitive artifacts in the clinical environment. The complex data arrays present in these cognitive artifacts warrant the use of multiple methods of data collection. Use of a less robust research design may result in an incomplete understanding of the meaning, value, and content of personal cognitive artifacts in the clinical environment, and of their relationship to the cognitive work of the user. Manuscript 3: "Making the Cognitive Work of Registered Nurses Visible" Purpose: Knowledge representations and structures are created and used by registered nurses to guide patient care. Understanding is limited regarding how these knowledge representations, or cognitive artifacts, contribute to working memory, prioritization, organization, cognition, and decision-making. The purpose of this study was to identify and characterize the role of a specific cognitive artifact, as a knowledge representation and structure, in the cognitive work of the registered nurse. Methods: Data collection was completed, using qualitative research methods, by shadowing and interviewing 25 registered nurses. Data analysis employed triangulation and iterative analytic processes. Results: Nurse cognitive artifacts support recall, data evaluation, decision-making, organization, and prioritization. These cognitive artifacts exhibited spatial, longitudinal, chronologic, visual, and personal cues that support the cognitive work of nurses. Conclusions: Nurse cognitive artifacts are an important adjunct to the cognitive work of nurses, and directly support patient care. Nurses need to be able to configure their cognitive artifacts in ways that are meaningful and that support their internal knowledge representations.
Abstract:
Primary motor cortex (M1) is involved in the production of voluntary movement and contains a complete functional representation, or map, of the skeletal musculature. This functional map can be altered by pathological experiences, such as peripheral nerve injury or stroke, by pharmacological manipulation, and by behavioral experience. The process by which experience-dependent alterations of cortical function occur is termed plasticity. In this thesis, plasticity of M1 functional organization as a consequence of behavioral experience was examined in adult primates (squirrel monkeys). Maps of movement representations were derived under anesthesia using intracortical microstimulation, whereby a microelectrode was inserted into the cortex to electrically stimulate corticospinal neurons at low current levels and evoke movements of the forelimb, principally of the hand. Movement representations were examined before and at several times after training on behavioral tasks that emphasized use of the fingers. Two behavioral tasks were utilized that dissociated the repetition of motor activity from the acquisition of motor skills. One task was easy to perform, and as such promoted repetitive motor activity without learning. The other task was more difficult, requiring the acquisition of motor skills for successful performance. Kinematic analysis indicated that monkeys used a consistent set of forelimb movements during pellet extractions. Functional mapping revealed that repetitive motor activity during the easier task did not produce plastic changes in movement representations. Instead, map plasticity, in the form of selective expansions of task-related movement representations, was only produced following skill acquisition on the difficult task. Additional studies revealed that, in general, map plasticity persisted without further training for up to three months, in parallel with the retention of task-related motor skills. Also, extensive additional training on the small-well task produced further improvements in performance, and further changes in movement maps. In sum, these experiments support the following three conclusions regarding the role of M1 in motor learning. First, behaviorally driven plasticity is learning-dependent, not activity-dependent. Second, plastic changes in M1 functional representations represent a neural correlate of acquired motor skills. Third, the persistence of map plasticity suggests that M1 is part of the neural substrate for the memory of motor skills.
Abstract:
In this paper we present research carried out between 2010 and 2012 under a research scholarship awarded by the State University of La Plata. It addresses the problem of the transition from university to professional work, and forms part of a line of studies on the importance of social representations as factors that affect the performance of specific activities. In this case, the aim is to identify the relations between the representations that graduates of the Psychology program hold about their professional role and their job placement and performance. The theoretical framework draws on Social Psychology and Guidance theories. Methodologically, this is an exploratory and descriptive study based on a 'triangulation' conception of the multiple type, which allows different strategies, theoretical perspectives, and sources to be combined in the same investigation; qualitative techniques were nevertheless prioritized in analyzing the data. Finally, some considerations are offered about the social representations concerning professional performance, mainly in the clinical field and in the field associated with education, and about the problems both these situations pose for other fields.
Abstract:
We present a novel graphical user interface program, GrafLab (GRAvity Field LABoratory), for spherical harmonic synthesis (SHS), created in MATLAB®. The program makes it possible to compute 38 different functionals of the geopotential up to ultra-high degrees and orders of the spherical harmonic expansion. For the most difficult part of the SHS, namely the evaluation of the fully normalized associated Legendre functions (fnALFs), we used three different approaches according to the required maximum degree: (i) the standard forward column method (up to maximum degree 1800, in some cases up to degree 2190); (ii) the modified forward column method combined with Horner's scheme (up to maximum degree 2700); (iii) extended-range arithmetic (up to an arbitrary maximum degree). For maximum degree 2190, the SHS with fnALFs evaluated using the extended-range arithmetic approach takes only approximately 2-3 times longer than its standard arithmetic counterpart, i.e. the standard forward column method. In GrafLab, the functionals of the geopotential can be evaluated on a regular grid or point-wise, and the input coordinates can either be read from a data file or entered manually. For computation on a regular grid we apply the lumped coefficients approach because of its significant time-efficiency. Furthermore, if a full variance-covariance matrix of the spherical harmonic coefficients is available, the commission errors of the functionals can be computed. When computing on a regular grid, the output functionals or their commission errors may be depicted on a map using an automatically selected cartographic projection.
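For illustration, here is a minimal sketch of the standard forward column recursion for the fnALFs, the first of the three approaches above. GrafLab itself is written in MATLAB; this Python/NumPy version, with a hypothetical function name, only shows the structure of the recursion under the usual geodetic (4-pi) normalization, not GrafLab's actual code.

```python
import numpy as np

def fnalf_forward_column(theta, nmax):
    """Fully normalized associated Legendre functions P[n, m] at colatitude
    theta (radians), via the standard forward column recursion. In double
    precision this stays stable up to degree ~1800 (sometimes 2190),
    matching the limits quoted in the abstract."""
    t, u = np.cos(theta), np.sin(theta)
    P = np.zeros((nmax + 1, nmax + 1))
    P[0, 0] = 1.0
    if nmax >= 1:
        P[1, 0] = np.sqrt(3.0) * t
        P[1, 1] = np.sqrt(3.0) * u
    for n in range(2, nmax + 1):
        # sectorial term, seeded diagonally from P[n-1, n-1]
        P[n, n] = u * np.sqrt((2.0 * n + 1.0) / (2.0 * n)) * P[n - 1, n - 1]
        for m in range(n):
            a = np.sqrt((2.0 * n - 1.0) * (2.0 * n + 1.0)
                        / ((n - m) * (n + m)))
            b = np.sqrt((2.0 * n + 1.0) * (n + m - 1.0) * (n - m - 1.0)
                        / ((n - m) * (n + m) * (2.0 * n - 3.0)))
            P[n, m] = a * t * P[n - 1, m] - b * P[n - 2, m]
    return P

# A point-wise synthesis would then sum, over n and m,
# P[n, m] * (C[n, m] * cos(m * lam) + S[n, m] * sin(m * lam)).
```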
Abstract:
The main characteristics of the Vernagtferner mass balance are summarized in the table below. The mass balance years from 1964/65 to 2003/2004 are listed. The table includes the total area of the glacier (the basis for the calculations), the equilibrium line altitude (ELA), the percentage of the accumulation area relative to the total area (AAR), and the specific net mass balance in mm w.e. (water equivalent) per year. It becomes clear that, after a rather minor growth period in the mid-1970s, the glacier has continually lost mass since the beginning of the 1980s. In addition, a clear increase in mass balance years with extreme mass losses was observed in the last decade. The "glacier-friendly" summer of 1999, with a well-balanced mass budget, could only interrupt the series of years with extreme mass losses; it does not mark a change in the trend. The minor mass loss in 1999 was caused by an above-average winter snow cover, which prevented the glacier from becoming snow free over large areas and thus resulted in lower ice melt. Although real summer conditions in 2000 were mainly restricted to August and produced a snow-free area only slightly larger than in 1999, there were further ice losses. This trend of negative mass balance also continued in 2001 and 2002. Nevertheless, the losses were moderate because only a smaller part of the glacier (approx. 50%) had become ice free by autumn. The summer of 2003 caused a loss of ice of a magnitude never seen since the beginning of the scientific investigations. This resulted from a combination of factors: after only a moderate winter snow cover, the glacier became snow free very early. For the first time the ablation area spanned the entire glacier (blue fields in the mass balance tables!). Only one short snowfall event interrupted the ablation period, which lasted twice as long as in the years of large losses in the 1990s. The extreme mass loss in 2003 will also influence the mass balance in the following year, 2004. The graphical representation of the elevation distribution of the specific mass balance, together with the absolute mass balance, can be found for each year by choosing one of the mass balance values from the table. These diagrams also include the area-height distribution of the glacier and of the ablation area. A tabular version of the numeric values as a function of elevation, provided separately for the accumulation area, the ablation area, and the total glacier, can be found in the columns "Persistent Identifier". The tables include the results for three different parts of the glacier and for the total glacier.
Abstract:
There are conventional methods for calculating the centroids of spatial units and the distances among them using Geographical Information Systems (GIS). This paper points out potential measurement errors in such calculations. Taking Indian district data as an example, we show the systematic errors concealed in these variables. Two comparisons are examined: first, we compare the centroid obtained from the spatial units (polygons) with the centre of the city where each district headquarters is located. Second, between the centres obtained above, we calculate the direct distances and the road distances for each pair of districts. From the comparison between the direct distances of the polygon centroids and the road distances between district headquarters, we show the distribution of errors and list some caveats for the use of conventional variables obtained from GIS.
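As an illustration of the two notions of "direct distance" being compared, the following Python sketch computes a planar polygon centroid (shoelace formula) and great-circle (haversine) distances between hypothetical centroid and headquarters coordinates. It is not the paper's actual procedure, and road distances, which require a routing network, are not shown.

```python
import math

def polygon_centroid(points):
    """Area-weighted centroid of a simple polygon given as (x, y) vertex
    pairs in projected coordinates; standard shoelace formula."""
    a = cx = cy = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6.0 * a), cy / (6.0 * a)

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle ('direct') distance in km between two lat/lon points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(h))

# Hypothetical square district (projected coords); centroid is its middle.
district = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
print(polygon_centroid(district))   # -> (1.0, 1.0)

# Hypothetical coordinates for one district pair: the measurement error
# studied is the gap between these two direct distances (and the road
# distance, which would come from a routing network).
d_centroid = haversine_km(26.92, 75.79, 28.61, 77.21)   # polygon centroids
d_hq = haversine_km(26.91, 75.80, 28.66, 77.23)         # headquarters points
print(f"centroid-based: {d_centroid:.1f} km, HQ-based: {d_hq:.1f} km")
```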
Abstract:
Set-Sharing analysis, the classic Jacobs and Langen domain, has been widely used to infer several interesting properties of programs at compile-time, such as occurs-check reduction, automatic parallelization, finite-tree analysis, etc. However, performing abstract unification over this domain implies the use of a closure operation which makes the number of sharing groups grow exponentially. Much attention has been given in the literature to mitigating this key inefficiency in this otherwise very useful domain. In this paper we present two novel alternative representations for the traditional set-sharing domain, tSH and tNSH, which compress the sharing information into fewer elements, enabling more efficient abstract operations, including abstract unification, without any loss of accuracy. Our experimental evaluation shows that both representations can dramatically reduce the number of sharing groups, suggesting that they are more practical solutions towards scalable set-sharing.
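To make the inefficiency concrete, the sketch below implements, in Python with hypothetical names, the classic closure-under-union ("star union") step of set-sharing abstract unification. It shows the exponential blow-up that tSH and tNSH are designed to mitigate; it is not the paper's compressed representations.

```python
def star_union(sharing_groups):
    """Closure under union ('star union') of a set of sharing groups,
    the step in classic set-sharing abstract unification whose result
    can grow exponentially in the number of input groups."""
    closure = set(sharing_groups)
    changed = True
    while changed:
        changed = False
        for g1 in list(closure):
            for g2 in list(closure):
                u = g1 | g2
                if u not in closure:
                    closure.add(u)
                    changed = True
    return closure

# Four pairwise-disjoint groups over x, y, z, w already yield 15 groups
# after closure (every non-empty union), illustrating the blow-up.
groups = {frozenset('x'), frozenset('y'), frozenset('z'), frozenset('w')}
print(len(star_union(groups)))   # -> 15
```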
Abstract:
Abstract is not available.
Abstract:
Pragmatism is the leading motivation of regularization. We can understand regularization as a modification of the maximum-likelihood estimator so that a reasonable answer can be given in an unstable or ill-posed situation. To mention some typical examples, this happens when fitting parametric or non-parametric models with more parameters than data, or when estimating large covariance matrices. Regularization is also commonly used to improve the bias-variance tradeoff of an estimation. The definition of regularization is therefore quite general, and, although the introduction of a penalty is probably the most popular type, it is just one out of multiple forms of regularization. In this dissertation, we focus on the applications of regularization for obtaining sparse or parsimonious representations, where only a subset of the inputs is used. A particular form of regularization, L1-regularization, plays a key role in reaching sparsity. Most of the contributions presented here revolve around L1-regularization, although other forms of regularization are explored (also pursuing sparsity in some sense). In addition to presenting a compact review of L1-regularization and its applications in statistics and machine learning, we devise methodology for regression, supervised classification, and structure induction of graphical models. Within the regression paradigm, we focus on kernel smoothing, proposing techniques for kernel design that are suitable for high-dimensional settings and sparse regression functions. We also present an application of regularized regression techniques for modeling the response of biological neurons. The supervised classification advances deal, on the one hand, with the application of regularization for obtaining a naïve Bayes classifier and, on the other hand, with a novel algorithm for brain-computer interface design that uses group regularization in an efficient manner. Finally, we present a heuristic for inducing the structure of Gaussian Bayesian networks using L1-regularization as a filter.
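As a brief illustration of how L1-regularization induces sparsity, the following minimal Python sketch solves the lasso by cyclic coordinate descent with soft-thresholding. It is a textbook example under simplifying assumptions (no intercept, centered columns, fixed iteration count), not the dissertation's methodology.

```python
import numpy as np

def soft_threshold(z, lam):
    """S(z, lam) = sign(z) * max(|z| - lam, 0): the proximal map of the L1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso by cyclic coordinate descent:
    minimize (1/2n) * ||y - X beta||^2 + lam * ||beta||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r_j / n
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
    return beta

# A larger lam zeroes out more coefficients, i.e. selects fewer inputs:
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(100)
print(np.round(lasso_cd(X, y, lam=0.1), 2))  # mostly zeros except beta_0, beta_1
```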
Abstract:
Thanks to their inherent properties, probabilistic graphical models are among the prime candidates for machine learning and decision-making tasks, especially in uncertain domains. Their capabilities, such as representation, inference, and learning, if used effectively, can greatly help to build intelligent systems that act appropriately in different problem domains. The field of evolutionary algorithms is one discipline that has employed probabilistic graphical models to improve the search for optimal solutions in complex problems. This paper shows how probabilistic graphical models have been used in evolutionary algorithms to improve their performance in solving complex problems. Specifically, we give a survey of probabilistic model building-based evolutionary algorithms, called estimation of distribution algorithms, and compare different methods for probabilistic modeling in these algorithms.
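As a minimal illustration of the model-building loop surveyed, the sketch below implements a univariate estimation of distribution algorithm (UMDA-style) on the toy OneMax problem in Python. Richer EDAs replace the independent Bernoulli model with a learned probabilistic graphical model (e.g., a Bayesian network in BOA). Names and parameters are illustrative.

```python
import numpy as np

def umda_onemax(n_bits=40, pop=100, top=50, gens=60, seed=0):
    """Univariate Marginal Distribution Algorithm on OneMax, one of the
    simplest estimation of distribution algorithms. Each generation it
    (1) samples a population from a product-of-Bernoullis model,
    (2) selects the best individuals by truncation, and
    (3) re-estimates the marginal probabilities from the selected set."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                            # initial model: uniform
    for _ in range(gens):
        X = (rng.random((pop, n_bits)) < p).astype(int) # sample the model
        fitness = X.sum(axis=1)                         # OneMax: count of ones
        elite = X[np.argsort(fitness)[-top:]]           # truncation selection
        p = elite.mean(axis=0)                          # re-fit marginals
        p = np.clip(p, 0.02, 0.98)                      # keep some diversity
    return p

print(np.round(umda_onemax(), 2))  # marginals near 1.0 -> all-ones optimum
```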