996 results for "Simple task"
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The integration of remote monitoring techniques at different scales is of crucial importance for the monitoring of volcanoes and the assessment of the associated hazards. From this perspective, technological advancement and collaboration between research groups also play a key role. Vhub is a community cyberinfrastructure platform designed for collaboration in volcanology research. Within the Vhub framework, this dissertation focuses on two research themes, both representing novel applications of remotely sensed data in volcanology: advances in the acquisition of topographic data via active techniques, and the application of passive multispectral satellite data to the monitoring of vegetated volcanoes. Measuring surface deformation is a critical issue in analogue modelling of Earth science phenomena. I present a novel application of the Microsoft Kinect sensor to the measurement of vertical and horizontal displacements in analogue models. Specifically, I quantified vertical displacement in a scaled analogue model of Nisyros volcano, Greece, simulating magmatic deflation and inflation and the related surface deformation, and included the horizontal component to reconstruct 3D models of pit crater formation. The detection of active faults around volcanoes is important for seismic and volcanic hazard assessment, but is not a simple task to achieve using analogue models. I present new evidence of neotectonic deformation along a north-south trending fault in the Mt Shasta debris avalanche deposit (DAD), northern California. The fault was identified in an airborne LiDAR survey covering part of the region affected by the DAD and was then confirmed in the field. High-resolution LiDAR can also be used for the geomorphological assessment of DADs, and I describe a size-distance analysis documenting geomorphological aspects of hummocks in the Shasta DAD. Relating remote observations of volcanic passive degassing to conditions and impacts on the ground provides an increased understanding of volcanic degassing and of how satellite-based monitoring can be used to inform hazard management strategies in near-real time. Combining a variety of satellite-based spectral time series, I aim to perform the first space-based assessment of the impacts of sulfur dioxide emissions from Turrialba volcano, Costa Rica, on vegetation in the surrounding environment, and to establish whether vegetation indices could be used more broadly to detect volcanic unrest.
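As a minimal illustration of how vertical displacement can be derived from repeated depth measurements of an analogue model, two co-registered Kinect depth frames can simply be differenced. This is only a sketch under assumed inputs (frame sizes, ranges and the synthetic data are hypothetical), not the dissertation's actual processing chain:

```python
import numpy as np

def vertical_displacement(depth_before, depth_after, max_valid_mm=1500.0):
    """Difference two co-registered Kinect depth frames (in mm) to estimate
    vertical surface displacement of an analogue model.

    Positive values indicate uplift (the surface moved closer to the
    downward-looking sensor); negative values indicate subsidence.
    """
    before = np.asarray(depth_before, dtype=float)
    after = np.asarray(depth_after, dtype=float)

    # Mask pixels with no return or implausible range in either frame.
    valid = (before > 0) & (after > 0) & (before < max_valid_mm) & (after < max_valid_mm)

    displacement = np.full(before.shape, np.nan)
    # With a sensor looking down at the model, a smaller depth value after
    # deformation means the surface rose towards the camera.
    displacement[valid] = before[valid] - after[valid]
    return displacement

# Example with synthetic frames (a 2 mm inflation bump in the centre):
before = np.full((480, 640), 800.0)     # flat model surface at 800 mm range
after = before.copy()
after[200:280, 280:360] -= 2.0          # surface moved 2 mm closer to the sensor
dz = vertical_displacement(before, after)
print(np.nanmax(dz))                    # -> 2.0 (mm of uplift)
```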
Abstract:
The master production schedule (MPS) plays an important role in an integrated production planning system. It converts the strategic planning defined in a production plan into tactical operational execution. The MPS is also a tool for top management to exercise control over manufacturing resources, and it becomes the input of downstream planning levels such as material requirements planning (MRP) and capacity requirements planning (CRP). Hence, inappropriate decisions in MPS development may lead to an infeasible execution, which ultimately causes poor delivery performance. One must ensure that the proposed MPS is valid and realistic before it is released to the real manufacturing system. In practice, where the production environment is stochastic in nature, developing the MPS is no longer a simple task. Varying processing times and random events such as machine failures are just some of the underlying causes of uncertainty that can hardly be addressed at the planning stage, so that a valid and realistic MPS is difficult to obtain. The MPS creation problem becomes even more sophisticated as decision makers try to consider multiple objectives: minimizing inventory, maximizing customer satisfaction, and maximizing resource utilization. This study proposes a methodology for MPS creation that is able to deal with these obstacles. The approach takes uncertainty into account while making trade-offs among the conflicting objectives, incorporating fuzzy multi-objective linear programming (FMOLP) and discrete event simulation (DES) for MPS development.
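To make the FMOLP ingredient concrete, the sketch below shows Zimmermann's classic max-min formulation for a toy two-product, single-period MPS. All data, aspiration levels and variable names are hypothetical and not taken from the study; it only illustrates the general technique of turning conflicting objectives into fuzzy membership degrees and maximizing the smallest one. It uses the open-source PuLP package (pip install pulp).

```python
from pulp import LpMaximize, LpProblem, LpVariable, LpStatus, value

# Hypothetical single-period data for two products.
demand = {"A": 100, "B": 80}            # units
hours = {"A": 0.8, "B": 1.0}            # machine-hours per unit
capacity = 170                          # available machine-hours

prob = LpProblem("fuzzy_mps", LpMaximize)
x = {p: LpVariable(f"make_{p}", lowBound=0) for p in demand}
lam = LpVariable("overall_satisfaction", lowBound=0, upBound=1)

# Fuzzy objective 1: delivered units (higher is better). Assumed aspiration
# levels: 100 units is "unacceptable", 180 units is "fully satisfying".
service = x["A"] + x["B"]
svc_worst, svc_best = 100, 180

# Fuzzy objective 2: machine-hours used (lower is better). Assumed aspiration
# levels: 170 h is "unacceptable", 130 h is "fully satisfying".
load = hours["A"] * x["A"] + hours["B"] * x["B"]
load_worst, load_best = 170, 130

# Zimmermann max-min: maximize the smallest membership degree lambda.
prob += lam  # objective
prob += lam * (svc_best - svc_worst) <= service - svc_worst
prob += lam * (load_worst - load_best) <= load_worst - load

# Crisp constraints: no overproduction, hard capacity limit.
for p in demand:
    prob += x[p] <= demand[p]
prob += load <= capacity

prob.solve()
print(LpStatus[prob.status],
      {p: value(x[p]) for p in x},
      "lambda =", value(lam))
```

In a stochastic setting, the plan produced by such a model would then be checked against a DES model of the shop floor, but that validation step is outside this sketch.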
Abstract:
Spatial scaling is an integral aspect of many spatial tasks that involve symbol-to-referent correspondences (e.g., map reading, drawing). In this study, we asked 3–6-year-olds and adults to locate objects in a two-dimensional spatial layout using information from a second spatial representation (map). We examined how scaling factor and reference features, such as the shape of the layout or the presence of landmarks, affect performance. Results showed that spatial scaling on this simple task undergoes considerable development, especially between 3 and 5 years of age. Furthermore, the youngest children showed large individual variability and profited from landmark information. Accuracy differed between scaled and un-scaled items, but not between items using different scaling factors (1:2 vs. 1:4), suggesting that participants encoded relative rather than absolute distances.
Abstract:
This doctoral thesis studies and analyzes ancient cobbled roads, from pre-Roman times onwards, from both a historical and a technical point of view. Quantifying the Roman character of a road is an important goal for most researchers of ancient roads, as well as for archaeologists, because of the information it provides about land use, the layout of routes in antiquity and the associated traffic. Quantifying the Roman character of a road is not a simple task, because it involves a multitude of influencing factors that are live and changing as a consequence of the dynamism inherent in the road itself. On the historical side, the thesis describes and analyzes the evolution of roads in the Iberian Peninsula from their origins to the middle of the twentieth century, which makes it possible to distinguish the elements of the road network according to their historical period. It also describes and analyzes: wheels and carts from their origins, especially in the Roman period, including measurements of different types of cart held by institutions and private collections; transport techniques in antiquity; and the characteristics of Roman road infrastructure, detailing general aspects of its engineering and construction techniques. From the technical point of view, the methodological approach has been to define an Index of the Roman Character of the Way (IRC) for dating cobbled Roman roads, based on a multi-criteria analysis of the different factors that characterize their Roman character. An exhaustive field study was carried out, with the corresponding data collection on the roads. A series of laboratory tests was performed with a purpose-built prototype that simulates the wear of the stone produced by carts jolting along a cobbled road, in order to provide a dating hypothesis for the road. The field data were treated statistically. The concept of rut elasticity (the elasticity of the wheel track) was also defined, using the notion of the elastic derivative. Regarding the results: the Index of the Roman Character of the Way (IRC) was calculated for a series of cobbled roads in order to quantify their Roman character, yielding results consistent with the previous dating hypotheses for those roads; and an exponential model was formulated for the number of loaded passages, relating it to the rut elasticity and to the rut slenderness, which was then used to relate rut elasticity to the geology of the rock. A line of research has been opened on the estimation of historical traffic on ancient roads, considering that the traffic volume over time on a stretch of road is related to the rut elasticity values of that stretch through the rock type. In short, this doctoral thesis provides a method to systematize the study of ancient roads, as well as to date them and to estimate the evolution of their traffic.
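The rut-elasticity concept builds on the standard notion of the elastic derivative; as a sketch of the general definition only (the thesis's specific formulation of the traffic model may differ), the elasticity of a quantity $f$ with respect to $x$ is

\[ E_f(x) \;=\; \frac{x}{f(x)}\,\frac{df}{dx} \;=\; \frac{d\,\ln f}{d\,\ln x}, \]

i.e. the percentage change in $f$ produced by a one-percent change in $x$.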
Abstract:
γ-ray astronomy studies the most energetic particles arriving at the Earth from outer space. These γ rays are not generated by thermal processes in ordinary stars, but by particle acceleration mechanisms in astronomical objects such as active galactic nuclei, pulsars and supernovae, or possibly as a result of dark matter annihilation processes. The γ rays coming from these objects, and their characteristics, provide valuable information with which scientists try to understand the underlying physics of these objects and to develop theoretical models able to describe them accurately. The problem when observing γ rays is that they are absorbed in the upper layers of the atmosphere and do not reach the Earth's surface (otherwise the planet would be uninhabitable).
Therefore, there are only two possible ways to observe γ rays: by using detectors on board satellites, or by observing their secondary effects in the atmosphere. When a γ ray reaches the atmosphere, it interacts with the particles in the air, generating a highly energetic electron-positron pair. These secondary particles in turn generate more particles, each time with less energy. While these particles are still energetic enough to travel faster than the speed of light in air, they produce a bluish radiation known as Cherenkov light during a few nanoseconds. From the Earth's surface, special telescopes known as Cherenkov telescopes or IACTs (Imaging Atmospheric Cherenkov Telescopes) are able to detect the Cherenkov light and even to take images of the Cherenkov showers. From these images it is possible to determine the main parameters of the original γ ray, and with enough γ rays it is possible to deduce important characteristics of the emitting object, hundreds of light-years away. However, detecting Cherenkov showers generated by γ rays is not a simple task. The showers generated by low-energy γ rays contain few photons and last only a few nanoseconds, while those corresponding to high-energy γ rays, although containing more photons and lasting longer, become increasingly unlikely the higher their energy. This results in two clearly differentiated development lines for IACTs: in order to detect low-energy showers, large reflectors are required to collect as many as possible of the few photons that these showers produce; on the contrary, small telescopes are able to detect high-energy showers, but a large area on the ground should be covered with them to increase the number of detected events. With the aim of improving the sensitivity of current Cherenkov telescopes in the high (> 10 TeV), medium (100 GeV - 10 TeV) and low (10 GeV - 100 GeV) energy ranges, the CTA (Cherenkov Telescope Array) project was created. This project, with more than 27 participating countries, intends to build an observatory in each hemisphere, each one equipped with 4 large-size telescopes (LSTs), around 30 medium-size telescopes (MSTs) and up to 70 small-size telescopes (SSTs). With such an array, two goals will be achieved. First, the drastic increase in collection area with respect to current IACTs will lead to the detection of more γ rays in all energy ranges. Secondly, when a Cherenkov shower is observed by several telescopes at the same time, it is possible to analyze it much more accurately thanks to stereoscopic techniques. The present thesis gathers several technical developments for the trigger system of the medium- and large-size telescopes of CTA. As Cherenkov showers are so short, the digitization and readout systems corresponding to each pixel must work at very high frequencies (≈ 1 GHz). This makes it unfeasible to read out data continuously, because the amount of data would be unmanageable. Instead, the analog signals are sampled, storing the analog samples in a circular ring buffer able to hold up to a few µs. While the signals remain in the buffer, the trigger system performs a fast analysis of the signals and decides whether the image in the buffer corresponds to a Cherenkov shower and deserves to be stored, or on the contrary can be ignored, allowing the buffer to be overwritten. The decision to save the image or not is based on the fact that Cherenkov showers produce photon detections in nearby pixels at nearly the same time, in contrast to the random arrival of NSB (night sky background) photons.
Checking whether more than a certain number of pixels in a trigger region have detected more than a certain number of photons within a time window of a few nanoseconds is enough to detect large showers. However, to optimize the sensitivity to low-energy showers it is more convenient to also take into account how many photons have been detected in each pixel (the sumtrigger technique). The trigger system developed and presented in this thesis aims to optimize the sensitivity to low-energy showers, so it performs the analog addition of the signals received by each pixel of a trigger region and compares the sum with a threshold that can be expressed directly as a number of detected photons (photoelectrons). The trigger system allows trigger regions of 14, 21 or 28 pixels (2, 3 or 4 clusters of 7 pixels each) to be selected, with extensive overlap between them. In this way, any light excess inside a compact region of 14, 21 or 28 pixels is detected and a trigger pulse is generated. In the most basic version of the trigger system, this pulse is simply distributed throughout the camera, by means of a complex distribution system, so that all the clusters are read out at the same time, independently of their position in the camera. Thus, the readout saves a complete camera image whenever the number of photoelectrons set as the threshold is exceeded in a trigger region. However, this way of operating has two important drawbacks. First, the shower usually covers only a small part of the camera, so many pixels without relevant information are stored. When there are many telescopes, as will be the case for CTA, the amount of useless stored information can be very high. On the other hand, each trigger stores only a few nanoseconds of information around the trigger time. In the case of large showers, the duration of the shower can be considerably longer, losing information due to the temporal truncation. With the aim of solving both limitations, a trigger and readout scheme based on two thresholds has been proposed. The high threshold decides whether there is a relevant event in the camera and, if so, only the trigger regions exceeding the low threshold are read out, for a longer time. In this way, information from empty pixels is not stored, and the fixed images of the showers become small "videos" containing the temporal development of the shower. This new scheme is named COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure), and it is described in depth in chapter 5. An important problem affecting sumtrigger schemes like the one presented in this thesis is that, in order to add the signals from each pixel properly, they must arrive at the same time. The photomultipliers used in each pixel introduce different delays, which must be compensated to perform the additions properly. The effect of these delays has been analyzed, and a delay compensation system has been developed. The next trigger level consists of looking for simultaneous (or very close in time) triggers in neighbouring telescopes. This function, together with others related to interfacing between different systems, has been implemented in a system named the Trigger Interface Board (TIB). This system consists of one module that will be placed inside the LST and MST cameras and connected to the neighbouring telescopes through optical fibers. When a telescope produces a local trigger, it is forwarded to all the connected neighbours and vice versa, so every telescope knows whether its neighbours have triggered.
Once the delay differences due to propagation in the optical fibers, and of the Cherenkov photons themselves in the air depending on the pointing direction, have been compensated, the TIB looks for coincidences and, if the trigger condition is fulfilled, the camera is read out a fixed time after the local trigger arrived. Although the whole trigger system is the result of the cooperation of several groups, chiefly IFAE, CIEMAT, ICC-UB and UCM in Spain, with help from French and Japanese groups, the Level 1 trigger and the Trigger Interface Board constitute the core of this thesis, since they are the two systems for which the author was the principal engineer. For this reason, a large amount of technical information about these systems has been included. There are important future lines of development regarding both the camera trigger (implementation in ASICs) and the stereo trigger (topological trigger), which will produce interesting improvements over the current designs in the coming years and will hopefully be useful to the whole scientific community participating in CTA.
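A minimal digital illustration of the sum-trigger decision described above is sketched here. The real system performs the sums in analog hardware over hexagonal pixel clusters; the flat pixel layout, region definitions and threshold below are hypothetical and only show the logic of summing overlapping regions and comparing against a photoelectron threshold:

```python
import numpy as np

def sum_trigger(pixel_signals, regions, threshold_pe):
    """Return the indices of trigger regions whose summed signal exceeds
    the threshold (in photoelectrons). A non-empty result would raise the
    camera trigger.

    pixel_signals : 1-D array of per-pixel amplitudes for one time slice,
                    calibrated in photoelectrons.
    regions       : list of overlapping trigger regions, each a list of
                    pixel indices (e.g. 2-4 clusters of 7 pixels).
    """
    signals = np.asarray(pixel_signals, dtype=float)
    return [i for i, region in enumerate(regions)
            if signals[region].sum() > threshold_pe]

# Toy example: a 28-pixel "camera", overlapping 14-pixel regions,
# and a faint shower spread over pixels 6-13.
rng = np.random.default_rng(0)
amplitudes = rng.poisson(0.2, size=28).astype(float)   # night-sky background
amplitudes[6:14] += 3.0                                 # Cherenkov signal
regions = [list(range(start, start + 14)) for start in (0, 7, 14)]
print(sum_trigger(amplitudes, regions, threshold_pe=20.0))  # e.g. [0, 1]
```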
Abstract:
Nowadays a large and increasing amount of information can be found on social networks. Most of this information is unstructured or not properly organized, which makes it difficult to reach consensus in argumentations and also prevents new participants from quickly joining them. Different solutions for reaching consensus have been studied for specific areas, mostly focused on academic contexts; however, few applications can be found that attempt to provide a solution in an open context such as social networks. The context of social networks is complex: there is no control over users, argumentation threads can degenerate, and it is hard to reach consensus when there is no well-defined expert figure, as there usually is in academic settings. This work seeks to create a web tool in the form of a social network, built on intelligent systems, which allows users to obtain enough information about a conversation while minimizing the effort needed to join and participate actively.
Abstract:
Grafts of favorable axonal growth substrates were combined with transient nerve growth factor (NGF) infusions to promote morphological and functional recovery in the adult rat brain after lesions of the septohippocampal projection. Long-term septal cholinergic neuronal rescue and partial hippocampal reinnervation were achieved, resulting in partial functional recovery on a simple task assessing habituation but not on a more complex task assessing spatial reference memory. Control animals that received transient NGF infusions without axonal-growth-promoting grafts lacked behavioral recovery but also showed long-term septal neuronal rescue. These findings indicate that (i) partial recovery from central nervous system injury can be induced by both preventing host neuronal loss and promoting host axonal regrowth and (ii) long-term neuronal loss can be prevented with transient NGF infusions.
Abstract:
It is widely accepted that developing the ability to solve problems is fundamental. Computational thinking is based on solving problems using fundamental concepts of computer science. There is nothing better for developing the ability to solve problems with computing concepts than an introductory programming course. This work presents our reflections on how to introduce a student to the field of computer programming. It does not detail the contents to be taught, but focuses on methodological aspects, including experiences and examples that are concrete yet general and extensible to any programming course. In general, although programming languages ever closer to human language are being developed, programming computers using formal languages is not an intuitive subject that students find easy to understand. To someone who already knows how to program it seems a simple task, but not to the novice. Moreover, mastering the art of programming is complex. For this reason it is essential to use every possible technique and tool that makes this work easier.
Abstract:
Immigration and freedom of movement of EU citizens are among the main issues debated throughout the European Parliament election campaign and have some potential to determine who tomorrow's EU leaders will be. This Policy Brief looks at how the two policies are debated at national level – in France, Germany and the UK – and at EU level between the 'top candidates' for the European Commission Presidency – Jean-Claude Juncker (EPP), Ska Keller (Greens), Martin Schulz (PES) and Guy Verhofstadt (ALDE) – who have participated in several public debates. Two different campaigns have been unfolding in front of EU citizens' eyes. The tense debate that can be identified at national level on these issues is not transferred to the EU level, where immigration and free movement are less controversial topics. Furthermore, although participating in European elections, national parties present agendas responding exclusively to the economic and social challenges of their Member State, while the candidates for the Commission Presidency put forward 'more European' programmes. Hence, several aspects need to be reflected upon: What will the consequences of this discontinuity be? How will this impact the future European agenda in terms of immigration and free movement? What institutional consequences will there be? Answering these questions is not a simple task; however, this paper aims to identify the parameters that need to be taken into account and the political landscape that will determine the future EU agenda on immigration and free movement.
Abstract:
Among the Solar System's bodies, the Moon, Mercury and Mars are at present, or have been in recent years, the object of space missions aimed, among other topics, at improving our knowledge of surface composition. Among the techniques for detecting a planet's mineralogical composition, both from remote and close-range platforms, visible and near-infrared reflectance (VNIR) spectroscopy is a powerful tool, because crystal-field absorption bands are related to particular transition metals in well-defined crystal structures, e.g., Fe2+ in the M1 and M2 sites of olivine or pyroxene (Burns, 1993). Thanks to the improvements in the spectrometers onboard recent missions, a more detailed interpretation of planetary surfaces can now be delineated. However, quantitative interpretation of planetary surface mineralogy is not always a simple task. In fact, several factors such as the mineral chemistry, the presence of different minerals that absorb in a narrow spectral range, regolith with a variable particle-size range, space weathering, atmospheric composition, etc., act in unpredictable ways on the reflectance spectra of a planetary surface (Serventi et al., 2014). One method for the interpretation of reflectance spectra of unknown materials involves the study of a number of spectra acquired in the laboratory under different conditions, such as different mineral abundances or different particle sizes, in order to derive empirical trends. This is the methodology followed in this PhD thesis: the individual factors listed above have been analyzed by creating, in the laboratory, a set of terrestrial analogues with well-defined composition and particle size. The aim of this work is to provide new tools and criteria to improve our knowledge of the composition of planetary surfaces. In particular, mixtures with different contents and chemistries of plagioclase and mafic minerals have been analyzed spectroscopically at different particle sizes and with different relative mineral percentages. The reflectance spectra of each mixture have been analyzed both qualitatively (using the software ORIGIN®) and quantitatively, applying the Modified Gaussian Model (MGM, Sunshine et al., 1990) algorithm. In particular, the variations of the spectral parameters of each absorption band have been evaluated versus the volumetric FeO% content in the plagioclase phase and versus the plagioclase modal abundance. This delineates calibration curves of composition versus spectral parameters and allows the implementation of spectral libraries. Furthermore, the trends derived from the terrestrial analogues analyzed here and from analogues in the literature have been applied to the interpretation of hyperspectral images of both plagioclase-rich (Moon) and plagioclase-poor (Mars) bodies.
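The sketch below illustrates, in a deliberately simplified form, the idea behind the Modified Gaussian Model: the natural log of reflectance is modelled as a continuum plus Gaussian absorption bands expressed in an energy-proportional (wavenumber) variable. The published MGM (Sunshine et al., 1990) treats multiple bands with a specific continuum and uncertainty formulation; the single band, synthetic spectrum and initial guesses here are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def mgm_one_band(wavelength_um, c0, c1, strength, centre, width):
    """Continuum (linear in wavenumber) plus one Gaussian band, in ln(R)."""
    wavenumber = 1.0 / wavelength_um            # energy-proportional variable
    continuum = c0 + c1 * wavenumber
    band = strength * np.exp(-0.5 * ((wavenumber - centre) / width) ** 2)
    return continuum + band

# Synthetic "pyroxene-like" spectrum with a 1-micron absorption plus noise.
wl = np.linspace(0.7, 1.6, 200)                 # wavelength in microns
true = mgm_one_band(wl, c0=-0.1, c1=-0.05, strength=-0.4, centre=1.0, width=0.15)
ln_reflectance = true + np.random.default_rng(1).normal(0, 0.005, wl.size)

p0 = (-0.1, 0.0, -0.3, 1.05, 0.2)               # initial parameter guesses
popt, _ = curve_fit(mgm_one_band, wl, ln_reflectance, p0=p0)
print("fitted band centre (1/um):", popt[3], "-> ~", 1 / popt[3], "um")
```

Calibration curves such as those described in the abstract would then relate the fitted band centres, widths and strengths to composition (e.g. FeO% or modal abundance).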
Abstract:
Laboratory classes provide a visual and practical way of supplementing traditional teaching through lectures and tutorial classes. A criticism of laboratories in our School is that they are largely based on demonstration, with insufficient participation by students. This provided the motivation to create a new laboratory experiment that would be interactive, encourage student enthusiasm for the subject and improve the quality of student learning.
The topic of the laboratory is buoyancy. While this is a key topic in the first-year fluids module, the laboratory has been designed in such a way that prior knowledge of the topic is unnecessary, and it would therefore be accessible to secondary school pupils. The laboratory culminates in a design challenge. However, it begins with a simple task in which students identify some theoretical background information using given websites. They then have to apply their knowledge by developing some equations. Next, given some materials (a sheet of tinfoil, card and blu-tack), they have to design a vessel to carry the greatest mass without sinking. Thus, they are given an open-ended problem and have to provide a mathematical justification for their design. Students are expected to declare the maximum mass for their boat in advance of it being tested, to create a sense of competition and fun. Overall, the laboratory involves tasks which begin at a low level and progressively get harder, incorporating understanding, applying, evaluating and designing (with reference to Bloom's taxonomy).
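The kind of justification the students are asked for follows from Archimedes' principle; as a short worked sketch (the box dimensions and foil mass below are purely illustrative), a vessel floats as long as the weight of displaced water is at least the total weight:

\[ \rho_{\text{water}}\, g\, V_{\text{displaced}} \;\ge\; \left(m_{\text{vessel}} + m_{\text{cargo}}\right) g, \]

so for an open box of plan area $A$ and depth $h$ the largest cargo that can be declared is

\[ m_{\text{cargo,max}} = \rho_{\text{water}}\, A\, h - m_{\text{vessel}}. \]

For instance, a $0.10\,\mathrm{m} \times 0.10\,\mathrm{m} \times 0.03\,\mathrm{m}$ foil box of mass $5\,\mathrm{g}$ could in principle hold about $1000 \times 0.01 \times 0.03 - 0.005 \approx 0.295\,\mathrm{kg}$ before the waterline reaches the rim.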
The experiment has been tested in a modern laboratory with wall-mounted screens and access to the internet. Students enjoyed the hands-on aspect and thought the format helped their learning.
The use of cheap materials which are readily available means that many students can be involved at one time. Support documentation has been produced, both for the student participants and the facilitator. The latter is given advice on how to guide the students (without simply giving them the answer) and given some warning about potential problems the students might have.
The authors believe that the laboratory can be adapted for use by secondary school pupils and hope that it will be used to promote engineering in an engaging and enthusing way to a wider audience. To this end, contact has already been made with the Widening Participation Unit at the University to gain advice on possible next steps.
Abstract:
Research into social facilitation effects reveals three factors affecting response performance: type of task, type of audience and type of actor. This study attempts to establish a minimal baseline for task and audience type in order to examine differences between personality types in the actors. Results indicate that performance in both extraverts and introverts increases under the minimal condition of the mere presence of another person while carrying out a simple mathematical task. These results are interpreted by combining Zajonc's (1965) drive theory with Eysenck's (1967) personality theory, indicating that, with further investigation, performance curves might be devised for introverts and extraverts performing under a variety of task and audience conditions.
Abstract:
A paradox of memory research is that repeated checking results in a decrease in memory certainty, memory vividness and confidence [van den Hout, M. A., & Kindt, M. (2003a). Phenomenological validity of an OCD-memory model and the remember/know distinction. Behaviour Research and Therapy, 41, 369–378; van den Hout, M. A., & Kindt, M. (2003b). Repeated checking causes memory distrust. Behaviour Research and Therapy, 41, 301–316]. Although these findings have been mainly attributed to changes in episodic long-term memory, it has been suggested [Shimamura, A. P. (2000). Toward a cognitive neuroscience of metacognition. Consciousness and Cognition, 9, 313–323] that representations in working memory could already suffer from detrimental checking. In two experiments we set out to test this hypothesis by employing a delayed-match-to-sample working memory task. Letters had to be remembered in their correct locations, a task that was designed to engage the episodic short-term buffer of working memory [Baddeley, A. D. (2000). The episodic buffer: a new component in working memory? Trends in Cognitive Sciences, 4, 417–423]. Of most importance, we introduced an intermediate distractor question that was prone to induce frustrating and unnecessary checking on trials where no correct answer was possible. Reaction times and confidence ratings on the actual memory test of these trials confirmed the success of this manipulation. Most importantly, high checkers [cf. VOCI; Thordarson, D. S., Radomsky, A. S., Rachman, S., Shafran, R, Sawchuk, C. N., & Hakstian, A. R. (2004). The Vancouver obsessional compulsive inventory (VOCI). Behaviour Research and Therapy, 42(11), 1289–1314] were less accurate than low checkers when frustrating checking was induced, especially if the experimental context actually emphasized the irrelevance of the misleading question. The clinical relevance of this result was substantiated by means of an extreme groups comparison across the two studies. The findings are discussed in the context of detrimental checking and lack of distractor inhibition as a way of weakening fragile bindings within the episodic short-term buffer of Baddeley's (2000) model. Clinical implications, limitations and future research are considered.