884 results for: distance education, video lessons, chemistry, learning difficulties
Abstract:
This report is compiled from data gathered by interviewing motorists to sample their opinion of Iowa's method of supplementing the yellow barrier line pavement marking of no passing zones on primary highways with yellow pennant shaped "No Passing Zone" signs mounted on the left shoulder of the highway. The effective designation of no passing zones is one form of control that can contribute to a reduction in the number of fatal high-speed head-on collisions resulting from passing in areas which do not afford sufficient sight distance of approaching traffic. It is the purpose of this report to present an evaluation of the Iowa "No Passing Zone" sign by individuals from all states who have traveled on Iowa's primary highways and who must obey the no passing zone restrictions and be warned by this sign of the presence of the zones. The "No Passing Zone" sign was formulated and approved by the Governor's Safety Committee a short time prior to the experimental erection of the signs. The Governor's Safety Committee adopted this sign as they felt that such a sign should be distinctive (not similar to any other type of sign) and easily visible to a driver attempting a passing maneuver.
Abstract:
The objective of this work was to evaluate the spatial distribution of thrips in different crops, and the correlation between meteorological parameters and the flight movements of this pest, using immunomarking. The experiment was conducted in cultivated areas, with tomato (Solanum lycopersicum), potato (Solanum tuberosum), and onion (Allium cepa); and non-cultivated areas, with weedy plants. The areas with tomato (100 days), potato (20 days), and weeds were sprayed with casein, albumin, and soy milk, respectively, to mark adult thrips, whereas the areas with onion (50 days) and tomato (10 days) were not sprayed. Thrips were captured with georeferenced blue sticky traps, transferred into tubes, and identified by treatment area with the ELISA test. The dependence between the samples and the capture distance was determined using geostatistics. Meteorological parameters were correlated with thrips density in each area. The three protein types used for immunomarking were detected in different proportions in the thrips. There was a correlation between casein-marked thrips and wind speed. The thrips flew a maximum distance of 3.5 km and dispersed from the older (tomato) to the younger crops (potato). The immunomarking method is efficient for marking large quantities of thrips.
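The reported link between wind speed and the density of casein-marked thrips comes down to a simple correlation test. A minimal Pearson-correlation sketch in Python; the daily values below are invented for illustration, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical daily wind speeds (m/s) and counts of casein-marked thrips
wind = [1.2, 2.5, 3.1, 4.0, 5.2, 6.3]
marked = [3, 5, 8, 9, 13, 15]
print(round(pearson_r(wind, marked), 3))
```

A value near +1 would indicate the kind of positive association with wind speed reported above.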
Abstract:
An efficient high-resolution three-dimensional (3-D) seismic reflection system for small-scale targets in lacustrine settings was developed. In Lake Geneva, near the city of Lausanne, Switzerland, past high-resolution two-dimensional (2-D) investigations revealed a complex fault zone (the Paudèze thrust zone), which was subsequently chosen for testing our system. Observed structures include a thin (<40 m) layer of subhorizontal Quaternary sediments that unconformably overlie southeast-dipping Tertiary Molasse beds and the Paudèze thrust zone, which separates Plateau and Subalpine Molasse units. Two complete 3-D surveys have been conducted over this same test site, covering an area of about 1 km². In 1999, a pilot survey (Survey I), comprising 80 profiles, was carried out in 8 days with a single-streamer configuration. In 2001, a second survey (Survey II) used a newly developed three-streamer system with optimized design parameters, which provided an exceptionally high-quality data set of 180 common midpoint (CMP) lines in 9 days. The main improvements include a navigation and shot-triggering system with in-house navigation software that automatically fires the gun, combined with real-time quality control of the boat's navigation using differential GPS (dGPS) onboard and a reference base near the lake shore. Shots were triggered at 5-m intervals with a maximum non-cumulative error of 25 cm.
Whereas the single 48-channel streamer system of Survey I required extrapolation of receiver positions from the boat position, for Survey II they could be accurately calculated (error <20 cm) with the aid of three additional dGPS antennas mounted on rafts attached to the end of each of the 24-channel streamers. Towed at a distance of 75 m behind the vessel, they allow the determination of feathering due to cross-line currents or small course variations. Furthermore, two retractable booms hold the three streamers at a distance of 7.5 m from each other, which is the same distance as the sail-line interval of Survey I. With a receiver spacing of 2.5 m, the bin dimension of the 3-D data of Survey II is 1.25 m in the in-line direction and 3.75 m in the cross-line direction. The greater cross-line versus in-line spacing is justified by the known structural trend of the fault zone, perpendicular to the in-line direction. The data from Survey I showed some reflection discontinuity as a result of insufficiently accurate navigation and positioning and the resulting binning errors. Aliasing observed in the 3-D migration was due to insufficient lateral sampling combined with the relatively high-frequency (<2000 Hz) content of the water gun source (operated at 140 bars and 0.3 m depth). These results motivated the use of a double-chamber bubble-canceling air gun for Survey II. A 15/15 Mini G.I air gun, operated at 80 bars and 1 m depth, proved to be better adapted for imaging the complexly faulted target area, which has reflectors dipping up to 30°. Although its frequencies do not exceed 650 Hz, this air gun combines penetration of non-aliased signal to depths of 300 m below the water bottom (versus 145 m for the water gun) with a maximum vertical resolution of 1.1 m.
While Survey I was shot in patches of alternating directions, the optimized surveying time of the new three-streamer system allowed acquisition in parallel geometry, which is preferable when using an asymmetric configuration (single source and receiver array); otherwise, the resulting stacks differ between opposite shooting directions. However, the shorter streamer configuration of Survey II reduced the nominal fold from 12 to 6. A conventional 3-D processing flow was adapted to the high sampling rates and complemented by two computer programs that convert the unconventional navigation data to industry standards. Processing included trace editing, geometry assignment, bin harmonization (to compensate for uneven fold due to boat/streamer drift), spherical divergence correction, bandpass filtering, velocity analysis, 3-D DMO correction, stacking, and 3-D time migration. A detailed semblance velocity analysis was performed on the 12-fold data set for every second in-line and every 50th CMP, i.e. on a total of 600 spectra. According to this velocity analysis, interval velocities range from 1450 to 1650 m/s for the unconsolidated sediments and from 1650 to 3000 m/s for the consolidated sediments. Delineation of several horizons and fault surfaces reveals the potential for small-scale geologic and tectonic interpretation in three dimensions. Five major seismic facies and their detailed 3-D geometries can be distinguished in vertical and horizontal sections: lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), Plateau Molasse, the Subalpine Molasse thrust fault zone, and the Subalpine Molasse south of it. Dips of beds within the Plateau and Subalpine Molasse are ~8° and ~20°, respectively. Within the fault zone, many highly deformed structures with dips around 30° are visible. Preliminary tests with 3-D preserved-amplitude prestack depth migration demonstrate that the excellent data quality of Survey II allows application of such sophisticated techniques even to high-resolution seismic surveys.
In general, the adaptation of the 3-D marine seismic reflection method, which to date has almost exclusively been used by the oil exploration industry, to a smaller geographical as well as financial scale has helped pave the way for applying this technique to environmental and engineering purposes.

Seismic reflection is a subsurface investigation method with very high resolving power. It consists of sending vibrations into the ground and recording the waves that reflect off geological discontinuities at different depths and then travel back to the surface, where they are recorded. The signals collected not only provide information on the nature and geometry of the layers present, but also allow a geological interpretation of the subsurface. In the case of sedimentary rocks, for example, seismic reflection profiles make it possible to determine their mode of deposition, any deformation or faulting, and hence their tectonic history. Seismic reflection is the principal method of petroleum exploration. For a long time, seismic reflection data were acquired along individual profiles, yielding two-dimensional images of the subsurface. Such images are only partially accurate, since they do not account for the three-dimensional nature of geological structures. In recent decades, three-dimensional (3-D) seismics has brought new impetus to the study of the subsurface. While it is now fully mastered for imaging large geological structures both onshore and offshore, its adaptation to lacustrine or fluvial scales has so far been the subject of only a few studies. This thesis consisted of developing a seismic acquisition system similar to the one used for offshore petroleum prospecting, but adapted to lakes. It is therefore smaller, lighter to deploy and, above all, yields final images of much higher resolution. Whereas the petroleum industry is often limited to a resolution on the order of ten metres, the instrument developed in this work can resolve details on the order of one metre. The new system is based on recording seismic reflections simultaneously on three seismic cables (streamers) of 24 channels each. To obtain 3-D data, it is essential to position the instruments on the water (the seismic source and the receivers) with great precision. Software was specially developed to control the navigation and trigger the shots of the seismic source, using differential GPS (dGPS) receivers on the boat and at the end of each streamer; this allows the instruments to be positioned with an accuracy of about 20 cm. To test our system, we chose an area of Lake Geneva, near the city of Lausanne, crossed by the Paudèze fault, which separates the Plateau Molasse and Subalpine Molasse units. Two 3-D seismic surveys were carried out there over an area of about 1 km². The seismic records were then processed to turn them into interpretable images. We applied a 3-D processing sequence specially adapted to our data, particularly regarding positioning. After processing, the data reveal several main seismic facies corresponding notably to the lacustrine sediments (Holocene), the glacio-lacustrine sediments (Pleistocene), the Plateau Molasse, the Subalpine Molasse of the fault zone, and the Subalpine Molasse south of that zone. The detailed 3-D geometry of the faults is visible in vertical and horizontal seismic sections. The excellent data quality and the interpretation of several horizons and fault surfaces demonstrate the potential of this technique for small-scale three-dimensional investigations, opening the way to applications in the environmental and civil engineering domains.
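The Survey II bin geometry quoted above (a 2.5 m receiver spacing giving a 1.25 m in-line bin, and a 7.5 m streamer separation giving a 3.75 m cross-line bin) follows the common convention that a CMP bin is half the relevant spacing. A minimal sketch, assuming that convention:

```python
def cmp_bin_size(receiver_spacing_m, line_spacing_m):
    """CMP bin dimensions: half the receiver spacing in-line,
    half the sail-line / streamer spacing cross-line."""
    return receiver_spacing_m / 2.0, line_spacing_m / 2.0

inline_bin, crossline_bin = cmp_bin_size(2.5, 7.5)
print(inline_bin, crossline_bin)  # → 1.25 3.75
```

This reproduces the bin dimensions reported for Survey II.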
Abstract:
The geometric characterisation of tree orchards is a high-precision activity comprising the accurate measurement and knowledge of the geometry and structure of the trees. Different types of sensors can be used to perform this characterisation. In this work a terrestrial LIDAR sensor (SICK LMS200) whose emission source was a 905-nm pulsed laser diode was used. Given the known dimensions of the laser beam cross-section (with diameters ranging from 12 mm at the point of emission to 47.2 mm at a distance of 8 m), and the known dimensions of the elements that make up the crops under study (flowers, leaves, fruits, branches, trunks), it was anticipated that, for much of the time, the laser beam would only partially hit a foreground target/object, with the consequent problem of mixed pixels or edge effects. Understanding what happens in such situations was the principal objective of this work. With this in mind, a series of tests were set up to determine the geometry of the emitted beam and to determine the response of the sensor to different beam blockage scenarios. The main conclusions that were drawn from the results obtained were: (i) in a partial beam blockage scenario, the distance value given by the sensor depends more on the blocked radiant power than on the blocked surface area; (ii) there is an area that influences the measurements obtained that is dependent on the percentage of blockage and which ranges from 1.5 to 2.5 m with respect to the foreground target/object. If the laser beam impacts on a second target/object located within this range, this will affect the measurement given by the sensor. To interpret the information obtained from the point clouds provided by the LIDAR sensors, such as the volume occupied and the enclosing area, it is necessary to know the resolution and the process for obtaining this mesh of points and also to be aware of the problem associated with mixed pixels.
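The mixed-pixel problem described above stems from the beam cross-section growing with range (12 mm at the point of emission, 47.2 mm at 8 m). Assuming the divergence is linear between those two quoted calibration points, the footprint at any range can be sketched as:

```python
def beam_diameter_mm(distance_m, d0_mm=12.0, d8_mm=47.2, ref_m=8.0):
    """Beam cross-section diameter at a given range, assuming linear
    divergence between the two calibration points quoted for the
    sensor: 12 mm at the exit aperture and 47.2 mm at 8 m."""
    return d0_mm + (d8_mm - d0_mm) / ref_m * distance_m

# Footprint at mid-range: large enough to straddle a leaf or thin branch
print(beam_diameter_mm(4.0))  # → 29.6
```

At 4 m the footprint is already ~30 mm across, wider than many leaves and branches, which is why partial blockage (edge effects) was expected to be the norm rather than the exception.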
Abstract:
Understanding how wikis are used to support collaborative learning is an important concern for researchers and teachers. Adopting a discourse-analytic approach, this paper attempts to understand the teaching processes when a wiki is embedded in a science project in primary education to foster collaborative learning. Through studying interaction between the teacher and students, our findings identify ways in which the teacher prompts collaborative learning, but also shed light on the difficulties the teacher faces in supporting students' collective collaboration. It is argued that the wiki's technological features supporting collaborative learning can only be realized if teacher talk and pedagogy are aligned with the characteristics of wiki collaborative work: the freedom of students to organize and participate by themselves, creating dialogic space and promoting student participation. We argue that a dialogic approach to examining interaction can help to design a more effective pedagogic approach to the use of wikis in education, to shift to the Web 2.0 learning paradigm, and to equip learners with the competences they need to participate in knowledge co-construction.
Abstract:
During the last few years, the discussion on the marginal social costs of transportation has been active. Applying the externalities as a tool to control transport would fulfil the polluter-pays principle and simultaneously create a fair control method across transport modes. This report presents the results of two calculation algorithms developed to estimate the marginal social costs based on the externalities of air pollution. The first algorithm calculates future scenarios of sea transport traffic externalities in the Gulf of Finland until 2015. The second algorithm calculates the externalities of Russian passenger car transit traffic via Finland, taking into account both sea and road transport. The first algorithm estimates the ship-originated emissions of carbon dioxide (CO2), nitrogen oxides (NOx), sulphur oxides (SOx) and particulates (PM), and the resulting externalities, for each year from 2007 to 2015. The total NOx emissions in the Gulf of Finland from the six ship types were almost 75.7 kilotons (Table 5.2) in 2007. The ship types are: passenger (including cruisers and ROPAX vessels), tanker, general cargo, Ro-Ro, container and bulk vessels. Due to the increase of traffic, the estimate for NOx emissions in 2015 is 112 kilotons. The NOx emission estimate for shipping in the whole Baltic Sea was 370 kilotons in 2006 (Stipa et al., 2007). The total marginal social costs due to ship-originated CO2, NOx, SOx and PM emissions in the Gulf of Finland were calculated at almost 175 million euros in 2007. The costs will increase to nearly 214 million euros by 2015 due to traffic growth. The major part of the externalities is due to CO2 emissions. If the CO2 externalities are excluded from the results, the total externalities are 57 million euros in 2007. Eight years later (2015), the externalities would be 28% lower, at 41 million euros (Table 8.1). This is a result of regulations reducing the sulphur content of marine fuels.
The majority of new car transit goes through Finland to Russia due to the lack of port capacity in Russia. The number of cars was 339,620 vehicles in 2005 (Statistics of Finnish Customs 2008). The externalities are calculated for the transportation of passenger vehicles as follows: by ship to a Finnish port and, after that, by truck to the Russian border checkpoint. The externalities are between 2 and 3 million euros (year 2000 cost level) for each route. The ports included in the calculations are Hamina, Hanko, Kotka and Turku. With Euro-3 standard trucks, the port of Hanko would be the best choice for transporting the vehicles, because of the lower emissions of newer trucks and the shorter sea leg. If the trucks are more polluting Euro-1 trucks, the port of Kotka would be the best choice. This indicates that truck emissions have a considerable effect on the externalities, and that transporting light cargo, such as passenger cars, by ship produces considerably high emission externalities. The emission externalities approach offers new insight for valuing multiple traffic modes. However, the calculation of marginal social costs based on air emission externalities should not be regarded as a ready-made calculation system. The system clearly needs further improvement, but it can already be considered a potential tool for political decision-making.
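Structurally, the externality calculation reduces to summing each pollutant's emitted mass times a unit cost. The sketch below shows only that structure; the unit costs and emission figures are invented placeholders, not the report's values:

```python
# Hypothetical unit costs (EUR per tonne of pollutant). The structure of
# the calculation follows the report's approach; the numbers do not.
UNIT_COST_EUR_PER_T = {"CO2": 30.0, "NOx": 1000.0, "SOx": 900.0, "PM": 2000.0}

def externality_eur(emissions_t):
    """Total emission externality: sum over pollutants of mass x unit cost."""
    return sum(UNIT_COST_EUR_PER_T[p] * mass for p, mass in emissions_t.items())

# Hypothetical annual emissions in tonnes for one traffic scenario
emissions = {"CO2": 3.0e6, "NOx": 7.57e4, "SOx": 2.0e4, "PM": 3.0e3}
print(f"{externality_eur(emissions) / 1e6:.1f} MEUR")
```

With placeholder unit costs like these, the CO2 term dominates the total, mirroring the report's observation that the major part of the externalities is due to CO2.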
Abstract:
In nature, variation for example in herbivory, wind exposure, moisture and pollution impact often creates variation in physiological stress and plant productivity. This variation is seldom clear-cut, but rather results in clines of decreasing growth and productivity towards the high-stress end. These clines of unidirectionally changing stress are generally known as ‘stress gradients’. Through its effect on plant performance, stress has the capacity to fundamentally alter the ecological relationships between individuals, and through variation in survival and reproduction it also causes evolutionary change, i.e. local adaptations to stress and eventually speciation. In certain conditions local adaptations to environmental stress have been documented in a matter of just a few generations. In plant-plant interactions, intensities of both negative interactions (competition) and positive ones (facilitation) are expected to vary along stress gradients. The stress-gradient hypothesis (SGH) suggests that net facilitation will be strongest in conditions of high biotic and abiotic stress, while a more recent ‘humpback’ model predicts strongest net facilitation at intermediate levels of stress. Plant interactions on stress gradients, however, are affected by a multitude of confounding factors, making studies of facilitation-related theories challenging. Among these factors are plant ontogeny, spatial scale, and local adaptation to stress. The last of these has very rarely been included in facilitation studies, despite the potential co-occurrence of local adaptations and changes in net facilitation in stress gradients. Current theory would predict both competitive effects and facilitative responses to be weakest in populations locally adapted to withstand high abiotic stress. This thesis is based on six experiments, conducted both in greenhouses and in the field in Russia, Norway and Finland, with mountain birch (Betula pubescens subsp. czerepanovii) as the model species. 
The aims were to study potential local adaptations in multiple stress gradients (both natural and anthropogenic), changes in plant-plant interactions under conditions of varying stress (as predicted by SGH), potential mechanisms behind intraspecific facilitation, and factors confounding plant-plant facilitation, such as spatiotemporal, ontogenetic, and genetic differences. I found rapid evolutionary adaptations (occurring within a time-span of 60 to 70 years) towards heavy-metal resistance around two copper-nickel smelters, a phenomenon that has resulted in a trade-off of decreased performance in pristine conditions. Heavy-metal-adapted individuals had lowered nickel uptake, indicating a possible mechanism behind the detected resistance. Seedlings adapted to heavy-metal toxicity were not co-resistant to other forms of abiotic stress, but showed co-resistance to biotic stress by being consumed to a lesser extent by insect herbivores. Conversely, populations from conditions of high natural stress (wind, drought etc.) showed no local adaptations, despite much longer evolutionary time scales. Due to decreasing emissions, I was unable to test SGH in the pollution gradients. In natural stress gradients, however, plant performance was in accordance with SGH, with the strongest host-seedling facilitation found at the high-stress sites in two different stress gradients. Factors confounding this pattern included (1) plant size/ontogenetic status, with seedling-seedling interactions being competition dominated and host-seedling interactions potentially switching towards competition with seedling growth, and (2) spatial distance, with competition dominating at very short planting distances, and facilitation being strongest at a distance of circa ¼ of benefactor height. I found no evidence for changes in facilitation with respect to the evolutionary histories of plant populations.
Despite the support for SGH, it may be that the 'humpback' model is more relevant when the main stressor is resource-related, while what I studied were the effects of 'non-resource' stressors (i.e. heavy-metal pollution and wind). The results have potential practical applications: the utilisation of locally adapted seedlings and plant facilitation may increase the success of future restoration efforts in industrial barrens as well as in other wind-exposed sites. The findings also have implications with regard to the effects of global change in subarctic environments: the documented potential of mountain birch for rapid evolutionary change, together with the general lack of evolutionary 'dead ends' due to not (over)specialising to current natural conditions, increases the chances of this crucial forest-forming tree persisting even under the anticipated climate change.
Abstract:
This article examines the participation of Spanish older people in formal, non-formal and informal learning activities and presents a profile of participants in each kind of learning activity. We used data from a nationally representative sample of Spanish people between 60 and 75 years old (n = 4,703). The data were extracted from the 2007 Encuesta sobre la Participación de la Población Adulta en Actividades de Aprendizaje (EADA, Survey on Adult Population Involvement in Learning Activities). Overall, only 22.8 % of the sample participated in a learning activity. However, there was wide variation in the participation rates for the different types of activity. Informal activities were far more common than formal ones. Multivariate logistic regression indicated that education level and involvement in social and cultural activities were associated with likelihood of participating, regardless of the type of learning activity. When these variables were taken into account, age did not predict decreasing participation, at least in non-formal and informal activities. Implications for further research, future trends and policies to promote older adult education are discussed.
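As a rough illustration of the kind of association the multivariate logistic regression tests, reduced here to a single two-by-two odds ratio with invented counts (not the EADA data):

```python
def odds_ratio(table):
    """Odds ratio from a 2x2 contingency table laid out as
    [[exposed_participating, exposed_not],
     [unexposed_participating, unexposed_not]]."""
    (a, b), (c, d) = table
    return (a * d) / (b * c)

# Hypothetical counts: higher-educated vs. other respondents, by
# participation in any learning activity (values invented for illustration)
table = [[300, 500], [772, 3131]]
print(round(odds_ratio(table), 2))  # → 2.43
```

An odds ratio above 1 for the higher-educated group is the pattern the regression analysis reports, with the regression additionally adjusting for covariates such as age and social/cultural activity.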
Abstract:
To test the potential effects of winds on the migratory detours of shearwaters, transequatorial migrations of three shearwater species (the Manx shearwater Puffinus puffinus, Cory's shearwater Calonectris diomedea, and the Cape Verde shearwater C. edwardsii) were tracked using geolocators. Concurrent data on the direction and strength of winds were obtained from the NASA SeaWinds scatterometer to calculate daily impedance models reflecting the resistance of sea surface winds to shearwater movements. From these models we estimated relative wind-mediated costs for the observed pathway obtained from tracked birds, for the shortest-distance pathway, and for other simulated alternative pathways for every day of the migration period. We also estimated daily trajectories of the minimum-cost pathway and compared distance and relative costs of all pathways. Shearwaters followed 26 to 52% longer pathways than the shortest-distance path. In general, estimated wind-mediated costs of both observed and simulated alternative pathways were strongly dependent on the date of departure. Costs of observed pathways were about 15% greater than the pathway with the minimum cost, but, in Cory's and the Cape Verde shearwaters, these pathways were on average 15 to 20% shorter in distance, suggesting the extra costs of the observed pathways are compensated by saving about 2 travelling days. In Manx shearwaters, however, the distance of the observed pathway was 25% longer than that of the lowest-cost pathway, probably because birds avoided shorter but potentially more turbulent pathways. Our results suggest that winds are a major determinant of the migratory routes of seabirds.
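The "shortest-distance pathway" implies great-circle distances between daily positions. A minimal haversine sketch; the start and end coordinates below are hypothetical, not tracked positions from the study:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, r_km=6371.0):
    """Great-circle distance in kilometres between two points given
    in decimal degrees, using the haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r_km * math.asin(math.sqrt(a))

# Hypothetical breeding and wintering positions (deg N, deg E)
leg = haversine_km(51.7, -5.3, -35.0, -55.0)
print(f"{leg:.0f} km")
```

Summing such legs over daily fixes gives an observed pathway length, which can then be compared against the single great-circle leg to quantify detours such as the 26 to 52% reported above.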
Abstract:
The case-study method of instruction is increasing in popularity, and instructors of various scientific disciplines are adopting this method for their courses. Its effectiveness suggests that there is a need for such resources in chemistry education. In this paper we describe this method in detail and present our use of cases in a scientific communication course offered to undergraduate chemistry students at the University of São Paulo. The description of the method and the example of its use may be helpful for faculty members who wish to explore new ways to engage students more deeply in their learning and to reinvigorate their own teaching practice.
Abstract:
Chemistry teachers increasingly use research articles in their undergraduate courses. This trend arises from the current pedagogical emphasis on active learning and the scientific process. In this paper, we describe some educational experiences with the use of research articles in chemistry higher education. Additionally, we present our own conclusions on the use of this methodology in a scientific communication course offered to undergraduate chemistry students at the University of São Paulo, Brazil.
Abstract:
The literature on the challenges of teacher education in undergraduate chemistry teaching is limited. In the present study, didactic proposals elaborated by two authors of this paper, graduate students and teaching assistants in the teaching improvement program at the University of São Paulo, were investigated in terms of their contribution to the teaching assistants' education and the undergraduate students' receptivity toward them. The proposals were based on the jigsaw cooperative learning strategy and were applied in two undergraduate courses. The results indicate good receptivity among students and suggest the proposals' importance to the teaching assistants' education.
Abstract:
The pedagogic training of university professors has been overlooked in institutions of higher education. In the second semester of 2009, students of the Graduate Chemistry Program at the Federal University of Minas Gerais participated in an investigation that aimed to evaluate their perceptions of their own preparation to become professors. It was found that a large number of students felt prepared to teach at institutions of higher education, despite their lack of experience at this level of education. A significant finding of this work was that most students do not acknowledge the need to associate teaching practice with teaching-learning theories, instead considering knowledge of the scientific content to be what matters.
Abstract:
This paper presents an overview of the development of chemical education as a research area and some of its contributions to society. Although science education is a relatively recent area of research, it has developed considerably in recent decades. In Brazil, as elsewhere, this development is attested by the significant number of scientific societies, specialized journals, and meetings with growing attendance in the areas of science education in general and chemical education in particular. The main contributions of research in science education to chemistry teaching include: the adoption of teaching-learning principles in chemistry education; the contextualization of chemical knowledge; interdisciplinary approaches to chemistry teaching; the use of the history of science in defining contents and designing curricula and teaching tools; the development of specific disciplines for the initial and in-service training of chemistry teachers; the publication of innovative chemistry textbooks by university-based research groups; the elaboration of official guidelines for the high-school level; and the evaluation of chemistry textbooks distributed to high-school students by the Brazilian government. Despite the positive impact of such initiatives, science education in Brazil still faces many problems, as indicated by poor results in international evaluations such as the Programme for International Student Assessment (PISA). Changes in this scenario, however, depend less on research in chemical education than on much-needed governmental initiatives to improve both the attractiveness of the teaching career and the structural conditions of public schools.
In conclusion, new government investments in education are necessary for the continued development of chemistry; moreover, scientific societies and decision makers in educational policy should take into consideration the contributions originating from the chemical education research area.
Abstract:
The aim of the study was to create and evaluate an intervention programme for Tanzanian children from a low-income area who are at risk of reading and writing difficulties. Learning difficulties, including reading and writing difficulties, are likely behind many of the common school problems in Tanzania, but they are not well understood, and research is needed. The design of the study included an identification and intervention phase with follow-up. A group-based dynamic assessment approach was used to identify children at risk of difficulties in reading and writing; the same approach was used in the intervention. The study was a randomized experiment with one experimental group and two control groups. For these groups, a total of 96 children (46 girls and 50 boys) from grade one were screened out of 301 children from two schools in a low-income urban area of Dar es Salaam. One third of the children, the experimental group, participated in an intensive literacy training programme for five weeks, six hours per week, aimed at promoting reading and writing ability, while the children in the control groups had a mathematics and art programme. Follow-up was performed five months after the intervention. The intervention programme and the tests were based on the Zambian BASAT (Basic Skill Assessment Tool; Ketonen & Mulenga, 2003), but the content was drawn from the Kiswahili school curriculum in Tanzania. The main components of the training and testing programme were the same, differing only in content. The training process differed from traditional training in Tanzanian schools in that principles of teaching and training in dynamic assessment were followed. Feedback was the cornerstone of the training, and the focus was on supporting the children in exploring knowledge and strategies for performing the tasks.
The experimental group improved significantly more (p < .001) than the control groups from pre-test to follow-up (repeated-measures ANOVA). No differences between the control groups were observed. The effect was significant on all measures: phonological awareness, reading skills, writing skills, and overall literacy skills. A transfer effect on school marks in Kiswahili and English was also found. Following a discussion of the results, suggestions for further research and for adaptation of the programme are presented.