894 results for Moduli in modern mapping theory
Abstract:
The protection of the fundamental human values (life, bodily integrity, human dignity, privacy) becomes imperative with the rapid progress in modern biotechnology, which can result in major alterations in the genetic make-up of organisms. It has become possible to insert human genes into pigs so that their internal organs, coated in human proteins, are more suitable for transplantation into humans (xenotransplantation), and micro-organisms that can make insulin have been created, thus changing the genetic make-up of humans. At the end of the 1980s, the Central and Eastern European (CEE) countries either initiated new legislation or started to amend existing laws in this area (clinical testing of drugs, experiments on man, prenatal genetic diagnosis, legal protection of the embryo/foetus, etc.). The analysis here indicates that the CEE countries have not sufficiently adjusted their regulations to the findings of modern biotechnology, either because of the relatively short period they have had to do so, or because there are no definite answers to the questions which modern biotechnology has raised (ethical aspects of xenotransplantation, or of the use of live-aborted embryonic or foetal tissue in neuro-transplantation, etc.). In order to harmonise the existing regulations in CEE countries with respect to the EU and supranational contexts, two critical issues should be taken into consideration. The first is the necessity for CEE countries to recognise the place of humans within the achievements of modern biotechnology (a broader affirmation of the principle of autonomy, an explicit ban on the violation of the genetic identity of either born or unborn life, etc.). The second concerns the definition of the status of different biotechnological procedures and their permissibility (gene therapy, therapeutic genomes, xenotransplantation, etc.). The road towards such answers may be more easily identified once all CEE countries become members of the Council of Europe and express their wish to join the EU, which in turn presupposes taking over the entire body of EU legislation.
Abstract:
Mr. Pechersky set out to examine a specific feature of the employer-employee relationship in Russian business organisations. He wanted to study to what extent the so-called "moral hazard" problem is being solved (if it is being solved at all), whether there is a relationship between pay and performance, and whether there is a correlation between economic theory and Russian reality. Finally, he set out to construct a model of the Russian economy that better reflects the way it actually functions than do certain other well-known models (for example models of incentive compensation, the Shapiro-Stiglitz model etc.). His report was presented to the RSS in the form of a series of manuscripts in English and Russian, and on disc, with many tables and graphs. He begins by pointing out the different examples of randomness that exist in the relationship between employee and employer. Firstly, results are frequently affected by circumstances outside the employee's control that have nothing to do with how intelligently, honestly, and diligently the employee has worked. When rewards are based on results, uncontrollable randomness in the employee's output induces randomness in their incomes. A second source of randomness involves outside events beyond the control of the employee that may affect his or her ability to perform as contracted. A third source of randomness arises when the performance itself (rather than the result) is measured, and the performance evaluation procedures include random or subjective elements. Mr. Pechersky's study shows that in Russia the third source of randomness plays an important role. Moreover, he points out that employer-employee relationships in Russia are sometimes opposite to those in the West. Drawing on game theory, he characterises the Western system as follows. The two players are the principal and the agent, who are usually representative individuals. The principal hires an agent to perform a task, and the agent acquires an information advantage concerning his actions or the outside world at some point in the game, i.e. it is assumed that the employee is better informed. In Russia, on the other hand, incentive contracts are typically negotiated in situations in which the employer has the information advantage concerning the outcome. Mr. Pechersky schematises it thus. Compensation (the wage) is W and consists of a base amount, plus a portion that varies with the outcome, x. So W = a + bx, where b is used to measure the intensity of the incentives provided to the employee. This means that one contract will be said to provide stronger incentives than another if it specifies a higher value for b. This is the incentive contract as it operates in the West. The key feature distinguishing the Russian example is that x is observed by the employer but is not observed by the employee. So the employer promises to pay in accordance with an incentive scheme, but since the outcome is not observable by the employee the contract cannot be enforced, and the question arises: is there any incentive for the employer to fulfil his or her promises? Mr. Pechersky considers two simple models of employer-employee relationships displaying the above type of information asymmetry. In a static framework the result obtained is somewhat surprising: at the Nash equilibrium the employer pays nothing, even though his objective function contains a quadratic term reflecting negative consequences for the employer if the actual level of compensation deviates from the expectations of the employee.
This can lead, for example, to labour turnover or to the expenses resulting from a bad reputation. In a dynamic framework, the conclusion can be formulated as follows: the higher the discount factor, the higher the incentive for the employer to be honest in his/her relationships with the employee. If the discount factor is taken to be a parameter reflecting the degree of (un)certainty (the higher the degree of uncertainty, the lower the discount factor), we can conclude that the answer to the formulated question depends on the stability of the political, social and economic situation in a country. Mr. Pechersky believes that the strength of a market system with private property lies not just in its providing the information needed to compute an efficient allocation of resources in an efficient manner. At least equally important is the manner in which it accepts individually self-interested behaviour, but then channels this behaviour in desired directions. People do not have to be cajoled, artificially induced, or forced to do their parts in a well-functioning market system. Instead, they are simply left to pursue their own objectives as they see fit. Under the right circumstances, people are led by Adam Smith's "invisible hand" of impersonal market forces to take the actions needed to achieve an efficient, co-ordinated pattern of choices. The problem is that, as Mr. Pechersky sees it, there is no reason to believe that the circumstances in Russia are right, and the invisible hand is doing its work properly. Political instability, social tension and other circumstances prevent it from doing so. Mr. Pechersky believes that the discount factor plays a crucial role in employer-employee relationships. Such relationships can be considered satisfactory from a normative point of view only in those cases where the discount factor is sufficiently large. Unfortunately, in modern Russia the evidence points to the typical discount factor being relatively small. This fact can be explained as a manifestation of economic agents' risk aversion. Mr. Pechersky hopes that when political stabilisation occurs, the discount factors of economic agents will increase, and agents' behaviour will be explicable in terms of more traditional models.
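As a rough illustration of the dynamic argument (a minimal sketch with assumed payoff numbers, not Mr. Pechersky's actual model, whose payoffs are not reported here): in a repeated interaction the employer compares the discounted value of honouring the promised bonus forever against keeping one period's surplus and losing the employee's trust, so honesty becomes rational only when the discount factor is large enough.

# Illustrative sketch only -- a stylised repeated game with assumed numbers, showing
# why a higher discount factor gives the employer an incentive to honour the contract.

def employer_prefers_honesty(surplus_per_period, promised_bonus, discount):
    """Grim-trigger comparison: pay the bonus every period and keep the surplus
    stream, or renege once (keep the bonus) and lose the relationship afterwards."""
    honest_value = (surplus_per_period - promised_bonus) / (1.0 - discount)
    renege_value = surplus_per_period      # keep this period's surplus, pay nothing
    return honest_value >= renege_value

# With an assumed per-period surplus of 10 and a promised bonus of 6, honesty pays
# only when the employer values the future enough (here, discount >= 0.6).
for discount in (0.3, 0.5, 0.6, 0.9):
    print(discount, employer_prefers_honesty(10.0, 6.0, discount))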
Abstract:
Within the scope of a comprehensive assessment of the degree of soil erosion in Switzerland, common methods have been used in the past, including test plot measurements, artificial rainfall simulation, and erosion modelling. In addition, mapping guidelines for all visible erosion features have been developed since the 1970s and are being successfully applied in many research and soil conservation projects. Erosion damage has been continuously mapped over a period of 9 years in a test region in the central Bernese plateau. In 2005, two additional study areas were added. The present paper assesses the data gathered and provides a comparison of the three study areas within a period of one year (from October 2005 to October 2006), focusing on the on-site impacts of soil erosion. During this period, about 11 erosive rainfall events occurred. Average soil loss rates mapped at the three study sites amounted to 0.7 t ha⁻¹, 1.2 t ha⁻¹ and 2.3 t ha⁻¹, respectively. About one fourth of the total arable land showed visible erosion damage. Maximum soil losses of about 70 t ha⁻¹ occurred on individual farm plots. Average soil erosion rates are widely used to underline the severity of an erosion problem (e.g. impacts on water bodies). But since severe rainfall events, wheel tracks, headlands, and other “singularities” often cause high erosion rates, analysing extreme erosion patterns such as maximum values leads to a more differentiated understanding and to more appropriate conclusions for the planning and design of soil protection measures. The study contains an assessment of soil erosion in Switzerland, emphasizing questions about extent, frequency and severity. At the same time, the effects of different types of land management are investigated in the field, aiming at the development of meaningful impact indicators of (un-)sustainable agriculture/soil erosion risk as well as the validation of erosion models. The results illustrate that conservation agriculture, including no-till, strip tillage and in-mulch seeding, plays an essential role in reducing soil loss as compared to conventional tillage.
Abstract:
Can one observe an increasing level of individual lack of orientation because of rapid social change in modern societies? This question is examined using data from a representative longitudinal survey in Germany conducted in 2002–04. The study examines the role of education, age, sex, region (east/west), and political orientation for the explanation of anomia and its development. First we present the different sources of anomie in modern societies, based on the theoretical foundations of Durkheim and Merton, and introduce the different definitions of anomia, including our own cognitive version. Then we deduce several hypotheses from the theory, which we test by means of longitudinal data for the period 2002–04 in Germany using the latent growth curve model as our statistical method. The empirical findings show that all the sociodemographic variables, including political orientation, are strong predictors of the initial level of anomia. Regarding the development of anomia over time (2002–04), only the region (west) has a significant impact. In particular, the results of a multi-group analysis show that western German people with a right-wing political orientation become more anomic over this period. The article concludes with some theoretical implications.
Abstract:
One of the most influential statements in the anomie theory tradition has been Merton’s argument that the volume of instrumental property crime should be higher where there is a greater imbalance between the degree of commitment to monetary success goals and the degree of commitment to legitimate means of pursuing such goals. Contemporary anomie theories stimulated by Merton’s perspective, most notably Messner and Rosenfeld’s institutional anomie theory (IAT), have expanded the scope conditions by emphasizing lethal criminal violence as an outcome to which anomie theory is highly relevant, and virtually all contemporary empirical studies have focused on applying the perspective to explaining spatial variation in homicide rates. In the present paper, we argue that current explications of Merton’s theory and IAT have not adequately conveyed the relevance of the core features of the anomie perspective to lethal violence. We propose an expanded anomie model in which an unbalanced pecuniary value system – the core causal variable in Merton’s theory and IAT – translates into higher levels of homicide primarily in indirect ways: by increasing levels of firearm prevalence, drug market activity, and property crime, and by enhancing the degree to which these factors stimulate lethal outcomes. Using aggregate-level data collected during the mid-to-late 1970s for a sample of relatively large social aggregates within the U.S., we find a significant effect on homicide rates of an interaction term reflecting high levels of commitment to monetary success goals and low levels of commitment to legitimate means. Virtually all of this effect is accounted for by the higher levels of property crime and drug market activity that occur in areas with an unbalanced pecuniary value system. Our analysis also reveals that property crime is more apt to lead to homicide under conditions of high structural disadvantage. These and other findings underscore the potential value of elaborating the anomie perspective to explicitly account for lethal violence.
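A sketch of the kind of specification described above, with synthetic data and hypothetical variable names (the authors' actual data and measures are not given in the abstract): the goals-by-means interaction is entered in a regression on homicide rates, and the shrinkage of its coefficient once the hypothesised mediators are added is consistent with an indirect pathway.

# Hypothetical illustration only -- invented variable names and synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "goals_commitment": rng.normal(size=n),
    "means_commitment": rng.normal(size=n),
    "disadvantage": rng.normal(size=n),
    "drug_market_activity": rng.normal(size=n),
})
# Synthetic mediation: the goals-by-(low) means imbalance raises property crime,
# and property crime (plus drug activity) raises homicide.
df["property_crime_rate"] = (0.5 * df.goals_commitment * (1 - df.means_commitment)
                             + rng.normal(scale=0.5, size=n))
df["homicide_rate"] = (0.8 * df.property_crime_rate + 0.3 * df.drug_market_activity
                       + rng.normal(scale=0.5, size=n))

total = smf.ols("homicide_rate ~ goals_commitment * means_commitment + disadvantage",
                data=df).fit()
mediated = smf.ols("homicide_rate ~ goals_commitment * means_commitment + disadvantage"
                   " + property_crime_rate + drug_market_activity", data=df).fit()

# The interaction coefficient shrinks once the mediators are controlled for.
print(total.params["goals_commitment:means_commitment"],
      mediated.params["goals_commitment:means_commitment"])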
Abstract:
Barry Saltzman was a giant in the fields of meteorology and climate science. A leading figure in the study of weather and climate for over 40 yr, he has frequently been referred to as the "father of modern climate theory." Ahead of his time in many ways, Saltzman made significant contributions to our understanding of the general circulation and spectral energetics budget of the atmosphere, as well as climate change across a wide spectrum of time scales. In his endeavor to develop a unified theory of how the climate system works, he played a role in the development of energy balance models, statistical dynamical models, and paleoclimate dynamical models. He was a pioneer in developing meteorologically motivated dynamical systems, including the progenitor of Lorenz's famous chaos model. In applying his own dynamical-systems approach to long-term climate change, he recognized the potential for using atmospheric general circulation models in a complementary way. In 1998, he was awarded the Carl-Gustaf Rossby Medal, the highest honor of the American Meteorological Society, "for his life-long contributions to the study of the global circulation and the evolution of the earth's climate." In this paper, the authors summarize and place into perspective some of the most significant contributions that Barry Saltzman made during his long and distinguished career. This short review also serves as an introduction to the papers in this special issue of the Journal of Climate dedicated to Barry's memory.
Abstract:
We present an overview of a perturbative-kinetic approach to jet propagation, energy loss, and momentum broadening in a high temperature quark–gluon plasma. The leading-order kinetic equations describe the interactions between energetic jet particles and a non-abelian plasma, consisting of on-shell thermal excitations and soft gluonic fields. These interactions include 2 ↔ 2 scatterings, collinear bremsstrahlung, and drag and momentum diffusion. We show how the contribution from the soft gluonic fields can be factorized into a set of Wilson line correlators on the light-cone. We review recent field-theoretical developments, rooted in the causal properties of these correlators, which simplify the calculation of the appropriate Wilson lines in thermal field theory. With these simplifications, lattice measurements of transverse momentum broadening have become possible, and the kinetic equations describing parton transport have been extended to next-to-leading order in the coupling g.
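Schematically, the leading-order kinetic description referred to above takes the generic Boltzmann form used in the effective kinetic theory literature (written here for orientation; it is not an equation quoted from the paper):

\left(\partial_t + \mathbf{v}\cdot\nabla_{\mathbf{x}}\right) f(\mathbf{x},\mathbf{p},t) \;=\; \mathcal{C}^{2\leftrightarrow 2}[f] \;+\; \mathcal{C}^{1\leftrightarrow 2}[f],

where the first collision kernel collects the elastic scatterings and the second the collinear bremsstrahlung and merging processes.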
Abstract:
The present study investigates the influence of environmental (temperature, salinity) and biological (growth rate, inter-generic variations) parameters on calcium isotope fractionation (δ44/40Ca) in scleractinian coral skeletons in order to better constrain this record. Previous studies focused on the δ44/40Ca record in different marine organisms to reconstruct seawater composition or temperature, but only a few studies have investigated corals. This study presents measurements performed on modern corals from natural environments (from the Maldives for modern corals and from Tahiti for fossil corals) as well as from laboratory cultures (Centre Scientifique de Monaco). Measurements on Porites sp., Acropora sp., Montipora verrucosa and Stylophora pistillata allow us to constrain inter-generic variability. Our results show that the fractionation of δ44/40Ca ranges between 0.1 and 0.6 per mil, independent of the genus or the environmental conditions. No significant relationship between the rate of calcification and δ44/40Ca was found. The weak temperature dependence reported in earlier studies is most probably not the only factor responsible for the fractionation. Indeed, sub-seasonal temperature variations, reconstructed from δ18O and Sr/Ca ratios using a multi-proxy approach, are not mirrored in the coral's δ44/40Ca variations. The inter-generic and intra-generic variability among the studied samples is weak, except for S. pistillata, which shows calcium isotopic values increasing with salinity. The variability between samples cultured at a salinity of 40 is higher than between those cultured at a salinity of 36 for this species. The present study reveals a strong biological control of the skeletal calcium isotope composition by the polyp and a weak influence of environmental factors, specifically temperature and salinity (except for S. pistillata). Vital effects have to be investigated in situ to better constrain their influence on the calcium isotopic signal. If vital effects could be extracted from the isotopic signal, the calcium isotopic composition of coral skeletons could provide reliable information on the calcium composition and budget of the ocean.
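For reference, δ44/40Ca follows the usual delta notation (the standard definition, not a formula specific to this study):

\delta^{44/40}\mathrm{Ca} \;=\; \left(\frac{\left(^{44}\mathrm{Ca}/^{40}\mathrm{Ca}\right)_{\mathrm{sample}}}{\left(^{44}\mathrm{Ca}/^{40}\mathrm{Ca}\right)_{\mathrm{standard}}} - 1\right) \times 1000 \;\text{per mil}.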
Abstract:
Most human-designed environments present specific geometrical characteristics. In them, it is easy to find polygonal, rectangular and circular shapes, with a series of typical relations between different elements of the environment. Introducing this kind of knowledge into the mapping process of mobile robots can notably improve the quality and accuracy of the resulting maps. It can also make them more suitable for higher-level reasoning applications. When mapping is formulated in a Bayesian probabilistic framework, a complete specification of the problem requires considering a prior for the environment. The prior over the structure of the environment can be applied in several ways; this dissertation presents two different frameworks, one using a feature-based approach and another employing a dense representation close to the measurement space. A feature-based approach implicitly imposes a prior for the environment. In this sense, feature-based graph SLAM was a first step towards a new mapping solution for structured scenarios.
In the first framework, the prior is inferred by the system from a wide collection of feature based priors, following an Expectation-Maximization approach to obtain the most probable structure and the most probable map. The representation of the structure of the environment is based on a hierarchical model with different levels of abstraction for the geometrical elements describing it. Various experiments were conducted to show the versatility and the good performance of the proposed method. In the second framework, different priors can be defined by the user as sets of local constraints and energies for consecutive points in a range scan from a given environment. The set of constraints applied to each group of points depends on the topology, which is inferred by the system. This way, flexible and generic priors can be incorporated very easily. Several tests were carried out to demonstrate the flexibility and the good results of the proposed approach.
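The following minimal sketch (not code from the thesis; the data and weights are invented) illustrates the general idea of a structural prior expressed as extra energy terms in a least-squares map optimisation: noisy wall orientations are pulled towards a prior that favours right angles.

# Toy illustration of a "rectangular environment" prior as extra residual terms.
import numpy as np
from scipy.optimize import least_squares

measured = np.radians([1.5, 88.0, 179.0, 271.5])   # noisy wall orientations (assumed data)

def residuals(theta, w_meas=1.0, w_prior=3.0):
    r_meas = w_meas * (theta - measured)     # stay close to the raw measurements
    r_prior = w_prior * np.sin(2.0 * theta)  # zero when theta is a multiple of 90 degrees
    return np.concatenate([r_meas, r_prior])

sol = least_squares(residuals, measured.copy())
print(np.degrees(sol.x))   # orientations nudged towards 0/90/180/270 degrees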
Abstract:
There are about 10¹⁴ neuronal synapses in the human brain. This huge number of connections provides the substrate for neuronal ensembles to become transiently synchronized, producing the emergence of cognitive functions such as perception, learning or thinking. Understanding the complex brain network organization on the basis of neuroimaging data represents one of the most important and exciting challenges for systems neuroscience. Several measures have been recently proposed to evaluate, at various scales (single cells, cortical columns, or brain areas), how the different parts of the brain communicate. We can classify them, according to their symmetry, into two groups: on the one hand, symmetric measures, such as correlation, coherence or phase synchronization indexes, evaluate functional connectivity (FC); on the other hand, asymmetric measures, such as Granger causality or transfer entropy, are able to detect effective connectivity (EC), revealing the direction of the interaction. In modern neuroscience, interest in functional brain networks has increased strongly with the advent of new algorithms to study interdependence between time series, the development of modern complex network theory, and the introduction of powerful techniques to record neurophysiological data, such as magnetoencephalography (MEG). However, when analyzing neurophysiological data with this approach, several methodological questions remain open, some of which are tackled in this thesis. First of all, the increase in the number of time series analysis algorithms to study brain FC/EC, along with their mathematical complexity, creates the necessity of arranging them into a single, unified toolbox that allows neuroscientists, neurophysiologists and researchers from related fields to easily access and make use of them.
I developed such a toolbox for this aim. It is named HERMES (http://hermes.ctb.upm.es) and encompasses several of the most common indexes for the assessment of FC and EC, running in the Matlab environment. I believe that this toolbox will be very helpful to all the researchers working in the emerging field of brain connectivity analysis and will be of great value for the scientific community. The second important practical issue tackled in this thesis is the evaluation of the sensitivity to deep brain sources of two different MEG sensor types, planar gradiometers and magnetometers, in combination with a related methodological question: the use of two phase synchronization indexes, the phase locking value (PLV) and the phase lag index (PLI), the latter being less sensitive to the volume conduction effect. I compared their performance when studying brain networks, finding that magnetometers and PLV yielded more densely connected networks, containing more spurious values, than planar gradiometers and PLI, respectively. However, when it came to characterizing epileptic networks, PLV gave better results, as the PLI FC networks were very sparse. Complex network analysis has provided new concepts which improve the characterization of interacting dynamical systems. Within this framework, networks are considered to be composed of nodes, symbolizing systems, whose interactions with each other are represented by edges. A growing number of network measures is being applied in network analysis. However, there is theoretical and empirical evidence that many of these indexes are strongly correlated with each other. Therefore, in this thesis I reduced them to a small set which characterizes these networks more efficiently. Within this framework, selecting an appropriate threshold to decide whether a certain connectivity value of the FC matrix is significant, and should be included in the network analysis, becomes a crucial step. In this thesis, I used surrogate data tests to make an individual, data-driven evaluation of the significance of each edge, and obtained more accurate results than when fixing the density of connections a priori. All these methodologies were applied to the study of epilepsy, analysing resting-state MEG functional networks in two groups of epileptic patients (generalized and focal epilepsy) compared to matched control subjects. Epilepsy is one of the most common neurological disorders, with more than 55 million people affected worldwide, and is characterized by a predisposition to generate epileptic seizures of abnormal, excessive or synchronous neuronal activity; it is thus a scenario of great interest from both the clinical and the research perspective. Results revealed specific disruptions in connectivity and showed that network topology is changed in epileptic brains, supporting the shift from the ‘focus’ to the ‘network’, an approach which is gaining importance in modern epilepsy research.
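For reference, the two synchronization indexes compared above can be computed as in the following sketch (an illustrative implementation, not the HERMES code itself):

# Illustrative computation of PLV and PLI from two signals via the Hilbert transform.
import numpy as np
from scipy.signal import hilbert

def plv_and_pli(x, y):
    """Phase locking value and phase lag index for two 1-D signals."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))   # instantaneous phase difference
    plv = np.abs(np.mean(np.exp(1j * dphi)))             # PLV: consistency of the phase difference
    pli = np.abs(np.mean(np.sign(np.sin(dphi))))         # PLI: asymmetry around zero lag,
                                                         # less sensitive to volume conduction
    return plv, pli

# Toy example: two noisy sinusoids with a fixed phase lag.
t = np.linspace(0.0, 10.0, 2000)
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 5 * t - 0.8) + 0.3 * np.random.randn(t.size)
print(plv_and_pli(x, y))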
Abstract:
This article offers a reflection on the role of concept maps in the current educational landscape. In the present paper, we describe the application of concept mapping strategies to learning Physical Chemistry, in particular all aspects of Corrosion. This strategy is an alternative method to supplement examinations: it can show the teacher how much the students knew and how much they did not know, and the students can evaluate their own learning. Before presenting the material on Corrosion, the teachers evaluated the students' previous knowledge of the field and explained to the students how to create concept maps with CmapTools. When the subject was finished, the teachers assessed the concept maps developed by the students, and thereby also the level of the students' learning. The teachers verified that concept mapping is well suited to complicated theoretical topics such as Corrosion and that it is an appropriate tool for consolidating educational experiences and for improving affective lifelong learning. By using this method we showed that the set of concepts accumulated in the cognitive structure of every student is unique, and every student has therefore arranged the concepts from top to bottom in the mapping field in different ways, with different "linking phrases", although all were involved in the same learning task.
Abstract:
This thesis studies full reduction in lambda calculi. In a nutshell, full reduction consists in evaluating the bodies of functions in a functional programming language with binders. The classical (i.e., pure untyped) lambda calculus is taken as the formal system that models the functional paradigm. Full reduction is a prominent technique when programs are treated as data objects, for instance when performing optimisations by partial evaluation, or when some attribute of a program is represented by a program itself, like the type in modern proof assistants. A notable feature of many full-reducing operational semantics is their hybrid nature, which is introduced formally and which constitutes the guiding theme of the thesis. In the lambda calculus, the hybrid nature amounts to a ‘phase distinction’ in the treatment of abstractions when considered either from outside or from inside themselves.
This distinction entails a layered structure in which a hybrid semantics depends on one or more subsidiary semantics. From a programming languages standpoint, the thesis shows how to derive implementations of full-reducing operational semantics from their specifications, by using program transformation techniques. The program transformation techniques are syntactical transformations which preserve the semantic equivalence of programs. The existing program transformation techniques are adjusted to work with implementations of hybrid semantics. The thesis also shows how full reduction impacts the implementations that use the environment technique. The environment technique is a key ingredient of real-world implementations of abstract machines which helps to circumvent the issue with binders. From a formal systems standpoint, the thesis discloses a novel consistent theory for the call-by-value variant of the lambda calculus which accounts for full reduction. This novel theory entails a notion of observational equivalence which distinguishes more points than other existing theories for the call-by-value lambda calculus. This contribution helps to establish a ‘standard theory’ in that calculus which is the analogue of the ‘standard theory’ advocated by Barendregt for the classical lambda calculus. Some proof-theoretical results are presented, and insights into the model-theoretical study are given.
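As a toy illustration of what 'full reduction' means operationally (not the derivation machinery developed in the thesis), the following sketch normalizes pure lambda terms in normal order, reducing under binders until beta-normal form; terms use de Bruijn indices so substitution is capture-avoiding by construction.

# Terms: ('var', n), ('lam', body), ('app', fun, arg), with de Bruijn indices.

def shift(t, d, cutoff=0):
    """Add d to every free variable index (those >= cutoff)."""
    tag = t[0]
    if tag == 'var':
        return ('var', t[1] + d) if t[1] >= cutoff else t
    if tag == 'lam':
        return ('lam', shift(t[1], d, cutoff + 1))
    return ('app', shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Substitute s for variable j in t."""
    tag = t[0]
    if tag == 'var':
        return s if t[1] == j else t
    if tag == 'lam':
        return ('lam', subst(t[1], j + 1, shift(s, 1)))
    return ('app', subst(t[1], j, s), subst(t[2], j, s))

def normalize(t):
    """Leftmost-outermost (normal-order) reduction to full beta-normal form."""
    tag = t[0]
    if tag == 'var':
        return t
    if tag == 'lam':
        return ('lam', normalize(t[1]))        # full reduction: go under the binder
    fun, arg = t[1], t[2]
    if fun[0] == 'lam':                         # beta-redex: contract it first
        return normalize(shift(subst(fun[1], 0, shift(arg, 1)), -1))
    fun_n = normalize(fun)
    if fun_n[0] == 'lam':                       # head became an abstraction: keep reducing
        return normalize(('app', fun_n, arg))
    return ('app', fun_n, normalize(arg))

# \y. (\x. x) y  reduces under the binder to  \y. y
example = ('lam', ('app', ('lam', ('var', 0)), ('var', 0)))
print(normalize(example))   # ('lam', ('var', 0))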
Abstract:
Linkage disequilibrium analysis can provide high resolution in the mapping of disease genes because it incorporates information on recombinations that have occurred during the entire period from the mutational event to the present. A circumstance particularly favorable for high-resolution mapping is when a single founding mutation segregates in an isolated population. We review here the population structure of Finland in which a small founder population some 100 generations ago has expanded into 5.1 million people today. Among the 30-odd autosomal recessive disorders that are more prevalent in Finland than elsewhere, several appear to have segregated for this entire period in the “panmictic” southern Finnish population. Linkage disequilibrium analysis has allowed precise mapping and determination of genetic distances at the 0.1-cM level in several of these disorders. Estimates of genetic distance have proven accurate, but previous calculations of the confidence intervals were too small because sampling variation was ignored. In the north and east of Finland the population can be viewed as having been “founded” only after 1500. Disease mutations that have undergone such a founding bottleneck only 20 or so generations ago exhibit linkage disequilibrium and haplotype sharing over long genetic distances (5–15 cM). These features have been successfully exploited in the mapping and cloning of many genes. We review the statistical issues of fine mapping by linkage disequilibrium and suggest that improved methodologies may be necessary to map diseases of complex etiology that may have arisen from multiple founding mutations.
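A back-of-the-envelope sketch, with assumed numbers rather than figures from the review, of why founder age matters for resolution: the fraction of disease chromosomes still carrying the ancestral marker allele decays roughly as (1 − θ)^g over g generations, so the recombination fraction θ can be estimated from the observed sharing.

# Illustration with assumed numbers; not data from the review.
def estimate_recombination_fraction(ancestral_fraction, generations):
    """Invert (1 - theta)**generations = ancestral_fraction for theta."""
    return 1.0 - ancestral_fraction ** (1.0 / generations)

# "Panmictic" southern Finland: ~100 generations, assuming 95% of disease
# chromosomes carry the ancestral allele gives sub-0.1-cM resolution.
print(estimate_recombination_fraction(0.95, 100) * 100, "cM (approx.)")   # ~0.05 cM

# A young, post-1500 founding (~20 generations) gives much coarser resolution
# for the same observed sharing.
print(estimate_recombination_fraction(0.95, 20) * 100, "cM (approx.)")    # ~0.26 cM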
Abstract:
We present an approach to map large numbers of Tc1 transposon insertions in the genome of Caenorhabditis elegans. Strains have been described that contain up to 500 polymorphic Tc1 insertions. From these we have cloned and shotgun sequenced over 2000 Tc1 flanks, resulting in an estimated set of 400 or more distinct Tc1 insertion alleles. Alignment of these sequences revealed a weak Tc1 insertion site consensus sequence that was symmetric around the invariant TA target site and reads CAYATATRTG. The Tc1 flanking sequences were compared with 40 Mbp of a C. elegans genome sequence. We found 151 insertions within the sequenced area, a density of ≈1 Tc1 insertion in every 265 kb. As the rest of the C. elegans genome sequence is obtained, remaining Tc1 alleles will fall into place. These mapped Tc1 insertions can serve two functions: (i) insertions in or near genes can be used to isolate deletion derivatives that have that gene mutated; and (ii) they represent a dense collection of polymorphic sequence-tagged sites. We demonstrate a strategy to use these Tc1 sequence-tagged sites in fine-mapping mutations.
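A quick check (not part of the paper) that the reported consensus is indeed symmetric around the TA target site: under the IUPAC codes Y = C/T and R = A/G, CAYATATRTG equals its own reverse complement.

IUPAC_COMPLEMENT = {'A': 'T', 'C': 'G', 'G': 'C', 'T': 'A', 'R': 'Y', 'Y': 'R'}

def reverse_complement(seq):
    return ''.join(IUPAC_COMPLEMENT[base] for base in reversed(seq))

consensus = "CAYATATRTG"
print(reverse_complement(consensus))               # CAYATATRTG
print(reverse_complement(consensus) == consensus)  # True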
Abstract:
Most methods for assessment of chromatin structure involve chemical or nuclease damage to DNA followed by analysis of distribution and susceptibility of cutting sites. The agents used generally do not permeate cells, making nuclear isolation mandatory. In vivo mapping strategies might allow detection of labile constituents and/or structures that are lost when chromatin is swollen in isolated nuclei at low ionic strengths. DNase I has been the most widely used enzyme to detect chromatin sites where DNA is active in transcription, replication or recombination. We have introduced the bovine DNase I gene into yeast under control of a galactose-responsive promoter. Expression of the nuclease leads to DNA degradation and cell death. Shorter exposure to the active enzyme allows mapping of chromatin structure in whole cells without isolation of nuclei. The validity and efficacy of the strategy are demonstrated by footprinting a labile repressor bound to its operator. Investigation of the inter-nucleosome linker regions in several types of repressed domains has revealed different degrees of protection in cells, relative to isolated nuclei.