60 results for Mapping the end times
Abstract:
The beds of active ice streams in Greenland and Antarctica are largely inaccessible, hindering a full understanding of the processes that initiate, sustain and inhibit fast ice flow in ice sheets. Detailed mapping of the glacial geomorphology of palaeo-ice stream tracks is, therefore, a valuable tool for exploring the basal processes that control their behaviour. In this paper we present a map that shows detailed glacial geomorphology from a part of the Dubawnt Lake Palaeo-Ice Stream bed on the north-western Canadian Shield (Northwest Territories), which operated at the end of the last glacial cycle. The map (centred on 63°55′42″N, 102°29′11″W; approximate scale 1:90,000) was compiled from digital Landsat Enhanced Thematic Mapper Plus satellite imagery and digital and hard-copy stereo-aerial photographs. The ice stream bed is dominated by parallel mega-scale glacial lineations (MSGL), whose lengths exceed several kilometres, but the map also reveals that, in places, they are overlain by transverse ridges known as ribbed moraines. The ribbed moraines lie on top of the MSGL and appear to have segmented the individual lineations. This indicates that the formation of the ribbed moraines post-dates that of the MSGL. The presence of ribbed moraine in the onset zone of another palaeo-ice stream has been linked to oscillations between cold- and warm-based ice and/or a patchwork of cold-based areas, which led to acceleration and deceleration of ice velocity. Our hypothesis is that the ribbed moraines on the Dubawnt Lake Ice Stream bed are a manifestation of the process that led to ice stream shut-down and may be associated with basal freeze-on. The precise mode of formation of ribbed moraines, however, remains open to debate, and field observation of their structure will provide valuable data for formal testing of models of their formation.
Abstract:
Projections of future global sea level depend on reliable estimates of changes in the size of polar ice sheets. Calculating this directly from global general circulation models (GCMs) is unreliable because the coarse resolution of 100 km or more is unable to capture narrow ablation zones, and ice dynamics are not usually taken into account in GCMs. To overcome these problems a high-resolution (20 km) dynamic ice sheet model has been coupled to the third Hadley Centre Coupled Ocean-Atmosphere GCM (HadCM3). A novel feature is the use of two-way coupling, so that climate changes in the GCM drive ice mass changes in the ice sheet model that, in turn, can alter the future climate through changes in orography, surface albedo, and freshwater input to the model ocean. At the start of the main experiment the atmospheric carbon dioxide concentration was increased to 4 times the preindustrial level and held constant for 3000 yr. By the end of this period the Greenland ice sheet is almost completely ablated and has made a direct contribution of approximately 7 m to global average sea level, causing a peak rate of sea level rise of 5 mm yr⁻¹ early in the simulation. The effect of ice sheet depletion on global and regional climate has been examined, and it was found that, apart from the sea level rise, the long-term effect on global climate is small. However, there are some significant regional climate changes that appear to have reduced the rate at which the ice sheet ablates.
Abstract:
We separate and quantify the sources of uncertainty in projections of regional (~2,500 km) precipitation changes for the twenty-first century using the CMIP3 multi-model ensemble, allowing a direct comparison with a similar analysis for regional temperature changes. For decadal means of seasonal mean precipitation, internal variability is the dominant uncertainty for predictions of the first decade everywhere, and for many regions until the third decade ahead. Model uncertainty is generally the dominant source of uncertainty for longer lead times. Scenario uncertainty is found to be small or negligible for all regions and lead times, apart from close to the poles at the end of the century. For the global mean, model uncertainty dominates at all lead times. The signal-to-noise ratio (S/N) of the precipitation projections is highest at the poles but less than 1 almost everywhere else, and is far lower than for temperature projections. In particular, the tropics have the highest S/N for temperature, but the lowest for precipitation. We also estimate a ‘potential S/N’ by assuming that model uncertainty could be reduced to zero, and show that, for regional precipitation, the gains in S/N are fairly modest, especially for predictions of the next few decades. This finding suggests that adaptation decisions will need to be made in the context of high uncertainty concerning regional changes in precipitation. The potential to narrow uncertainty in regional temperature projections is far greater. These conclusions on S/N are for the current generation of models; the real signal may be larger or smaller than the CMIP3 multi-model mean. Also note that the S/N for extreme precipitation, which is more relevant for many climate impacts, may be larger than for the seasonal mean precipitation considered here.
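The variance partitioning described in this abstract (internal variability, model uncertainty, scenario uncertainty, and the resulting S/N and "potential S/N") can be illustrated with a minimal sketch on synthetic data. All numbers below (ensemble size, trend magnitudes, noise level) are invented for illustration and are not taken from CMIP3; the smoothing step here is a simple linear fit per ensemble member.

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_scenarios, n_years = 10, 3, 90

# Hypothetical synthetic "projections": each model/scenario pair has a
# linear trend (the forced signal) plus year-to-year noise (internal
# variability). Units and magnitudes are arbitrary.
trends = 0.02 + 0.01 * rng.standard_normal((n_models, n_scenarios, 1))
years = np.arange(1, n_years + 1)
noise = 0.3 * rng.standard_normal((n_models, n_scenarios, n_years))
proj = trends * years + noise  # anomaly relative to a baseline

# Smooth each series (here: its linear fit) to separate signal from noise.
coefs = np.polynomial.polynomial.polyfit(years, proj.reshape(-1, n_years).T, 1)
smooth = (coefs[0][:, None] + coefs[1][:, None] * years).reshape(proj.shape)

internal_var = np.var(proj - smooth)                    # residual spread
model_var = np.mean(np.var(smooth, axis=0), axis=0)     # spread across models
scenario_var = np.var(np.mean(smooth, axis=0), axis=0)  # spread across scenarios

total_var = internal_var + model_var + scenario_var
signal = np.mean(smooth, axis=(0, 1))                   # multi-model mean change
sn = signal / np.sqrt(total_var)                        # signal-to-noise ratio

# "Potential S/N": what S/N would be if model uncertainty were zero.
potential_sn = signal / np.sqrt(internal_var + scenario_var)
```

With this decomposition, internal variability dominates the total at short lead times (the signal is still small), while model and scenario variance grow with lead time, mirroring the qualitative behaviour the abstract reports.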
Abstract:
Survival times for the Acacia mangium plantation in the Segaliud Lokan Project, Sabah, East Malaysia were analysed based on 20 permanent sample plots (PSPs) established in 1988 as a spacing experiment. The PSPs were established following a complete randomized block design with five levels of spacing randomly assigned to units within four blocks at different sites. The survival times of trees in years are of interest. Since the inventories were only conducted annually, the actual survival time for each tree was not observed. Hence, the data set comprises censored survival times. Initial analysis of the survival of the Acacia mangium plantation suggested there is block by spacing interaction; a Weibull model gives a reasonable fit to the replicate survival times within each PSP; but a standard Weibull regression model is inappropriate because the shape parameter differs between PSPs. In this paper we investigate the form of the non-constant Weibull shape parameter. Parsimonious models for the Weibull survival times have been derived using maximum likelihood methods. The factor selection for the parameters is based on a backward elimination procedure. The models are compared using likelihood ratio statistics. The results suggest that both Weibull parameters depend on spacing and block.
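Because the inventories were annual, each death time is only known to lie between two consecutive inventories, so the likelihood is built from interval probabilities rather than densities. A minimal sketch of fitting a single Weibull distribution to such interval-censored data by maximum likelihood is below; the data are simulated and every number (shape 1.8, scale 12 years, 200 trees, 10 annual inventories) is invented, and no spacing/block covariates are modelled, unlike the full analysis in the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Hypothetical interval-censored death times: a tree that died between
# inventories at years t and t+1 contributes P(t < T <= t+1); a tree
# still alive at the last inventory contributes P(T > t_last).
rng = np.random.default_rng(1)
true_shape, true_scale = 1.8, 12.0
t = weibull_min.rvs(true_shape, scale=true_scale, size=200, random_state=rng)
t_last = 10.0
died = t <= t_last
lower = np.floor(t)        # last inventory at which the tree was seen alive
upper = lower + 1.0

def neg_log_lik(log_params):
    shape, scale = np.exp(log_params)   # log-parameterised for positivity
    cdf = lambda x: weibull_min.cdf(x, shape, scale=scale)
    ll_died = np.log(cdf(upper[died]) - cdf(lower[died]) + 1e-300).sum()
    ll_cens = np.log(1.0 - cdf(t_last)) * (~died).sum()
    return -(ll_died + ll_cens)

res = minimize(neg_log_lik, x0=np.log([1.0, 8.0]), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(res.x)
```

The paper's model would extend this by letting the shape and scale depend on spacing and block (e.g. via a log-linear predictor) and comparing nested fits with likelihood ratio statistics; the sketch shows only the censored-likelihood core of that approach.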
Abstract:
Information technology in construction (ITC) has been gaining wide acceptance and is being implemented in the construction research domains as a tool to assist decision makers. Most of the research into visualization technologies (VT) has been on the wide range of 3D and simulation applications suitable for construction processes. Despite developments in interoperability and standardization of products, VT usage has remained very low when it comes to communicating with and addressing the needs of building end-users (BEU). This paper argues that building end-users are a source of experience and expertise that can be brought into the briefing stage for the evaluation of design proposals. It also suggests that the end user is a source of new ideas promoting innovation. In this research a positivistic methodology that includes the comparison of 3D models and the traditional 2D methods is proposed. It will help to identify how much, if anything, a non-spatial specialist can gain in terms of "understanding" of a particular design proposal presented using both methods.
Abstract:
This paper presents the design evolution process of a composite leaf spring for freight rail applications. Three designs of eye-end attachment for composite leaf springs are described. The material used is glass fibre reinforced polyester. Static testing and finite element analysis have been carried out to obtain the characteristics of the spring. Load-deflection curves and strain measurements as a function of load for the three designs tested have been plotted for comparison with FEA-predicted values. The main concern associated with the first design is delamination failure at the interface between the fibres that pass around the eye and the spring body, even though the design can withstand a 150 kN static proof load and one million cycles of fatigue load. FEA results confirmed that there is a high interlaminar shear stress concentration in that region. The second design adds a transverse bandage around the region prone to delamination. Delamination was contained but not completely prevented. The third design overcomes the problem by ending the fibres at the end of the eye section.
Abstract:
Pregnant rats were given control (46 mg iron/kg, 61 mg zinc/kg), low-Zn (6.9 mg Zn/kg) or low-Zn plus Fe (168 mg Fe/kg) diets from day 1 of pregnancy. The animals were allowed to give birth and parturition times recorded. Exactly 24 h after the end of parturition the pups were killed and analysed for water, fat, protein, Fe and Zn contents and the mothers' haemoglobin (Hb) and packed cell volume (PCV) were measured. There were no differences in weight gain or food intakes throughout pregnancy. Parturition times were similar (mean time 123 (SE 15) min) and there were no differences in the number of pups born. Protein, water and fat contents of the pups were similar but the low-Zn Fe-supplemented group had higher pup Fe than the low-Zn unsupplemented group, and the control group had higher pup Zn than both the low-Zn groups. The low-Zn groups had a greater incidence of haemorrhaged or deformed pups, or both, than the controls. Pregnant rats were given diets of adequate Zn level (40 mg/kg) but with varying Fe:Zn (0.8, 1.7, 2.9, 3.7). Zn retention from the diet was measured using 65Zn as an extrinsic label on days 3, 10 and 17 of pregnancy with a whole-body gamma-counter. A group of non-pregnant rats was also included as controls. The 65Zn content of mothers and pups was measured 24-48 h after birth and at 14, 21 and 24 d of age. In all groups Zn retention was highest from the first meal, fell in the second meal and then rose in the third meal of the pregnant but not the non-pregnant rats. There were no differences between the groups given diets of varying Fe:Zn level. Approximately 25% of the 65Zn was transferred from the mothers to the pups by the time they were 48 h old, and a further 17% during the first 14 d of lactation. The pup 65Zn content did not significantly increase after the first 20 d of lactation but the maternal 65Zn level continued to fall gradually.
Abstract:
Four protocols involving the application of low pressures, either toward the end of frying or after frying, were investigated with the aim of lowering the oil content of potato chips. Protocol 1, involving frying at atmospheric pressure followed by a 3 min draining time, constituted the control. Protocol 2 involved lowering of pressure to 13.33 kPa, 40 s before the end of frying, followed by draining for 3 min at the same pressure. Protocol 3 was the same as protocol 2, except that the pressure was lowered 3 s before the end of frying. Protocol 4 involved lowering the pressure to 13.33 kPa after the product was lifted from the oil and holding it at this value over the draining time of 3 min. Protocol 4 gave a product having the lowest oil content (37.12 g oil/100 g defatted dry matter), while protocol 2 gave the product with highest oil content (71.10 g oil/100 g defatted dry matter), followed by those obtained using protocols 1 and 3 (68.48 g oil/100 g defatted dry matter and 52.50 g oil/100 g defatted dry matter, respectively). Protocol 4 was further evaluated to study the effects of draining times and vacuum applied, and compared with the control. It was noted that over the modest range of pressures investigated, there was no significant effect of the vacuum applied on the oil content of the product. This study demonstrates that the oil content of potato chips can be lowered significantly by combining atmospheric frying with draining under vacuum.
Abstract:
The chapter starts from the premise that an historically- and institutionally-formed orientation to music education at primary level in European countries privileges a nineteenth-century Western European music aesthetic, with its focus on formal characteristics such as melody and rhythm. While there is a move towards a multi-faceted understanding of musical ability as a discrete intelligence, and a willingness to accept diverse musical styles ('open-earedness'), there remains a paucity of documented evidence of this in research at primary school level. To date there has been no study undertaken which has the potential to provide policy makers and practitioners with insights into the degree of homogeneity or universality in conceptions of musical ability within this educational sector. Against this background, a study was set up to explore the following research questions: 1. What conceptions of musical ability do primary teachers hold (a) of themselves and (b) of their pupils? 2. To what extent are these conceptions informed by Western classical practices? A mixed methods approach was used which included a survey questionnaire and semi-structured interviews. Questionnaires were sent to all classroom teachers in a random sample of primary schools in the South East of England. This was followed up with a series of semi-structured interviews with a sub-sample of respondents. The main ideas are concerned with the attitudes, beliefs and working theories held by teachers in contemporary primary school settings. By mapping the extent to which a knowledge base for teaching can be resistant to change in schools, we can problematise primary schools as sites for diversity and migration of cultural ideas.
Alongside this, we can use the findings from the study undertaken in an English context as a starting point for further investigation into conceptions of music, musical ability and assessment held by practitioners in a variety of primary school contexts elsewhere in Europe; our emphasis here will be on the development of shared understanding in terms of policies and practices in music education. Within this broader framework, our study can have a significant impact internationally, with potential to inform future policy making, curriculum planning and practice.
Abstract:
Television’s long-form storytelling has the potential to allow the rippling of music across episodes and seasons in interesting ways. In the integration of narrative, music and meaning found in The O.C. (FOX, 2003–7), popular song’s allusive and referential qualities are drawn upon to particularly televisual ends: at times embracing song’s ‘disruptive’ presence, at others suturing popular music into narrative, and at times doing both at once. With television studies largely lacking theories of music, this chapter draws on film music theory and close textual analysis to analyse some of the programme’s musical moments in detail. In particular it considers the series-spanning use of Jeff Buckley’s cover of ‘Hallelujah’ (and its subsequent oppressive presence across multiple televisual texts), the end-of-episode musical montage, and the use of recurring song fragments as themes within single episodes. In doing so it highlights music’s role in the fragmentation and flow of the television aesthetic and popular song’s structural presence in television narrative. Illustrating the multiplicity of popular song’s uses in television, these moments demonstrate song’s ability to provide narrative commentary, yet also make particular use of what Ian Garwood describes as the ability of ‘a non-diegetic song to exceed the emotional range displayed by diegetic characters’ (2003: 115), to ‘speak’ for characters or to their feelings, contributing to both teen TV’s melodramatic affect and its narrative expression.
Abstract:
Background—A major problem in procurement of donor hearts is the limited time a donor heart remains viable. After cardiectomy, ischemic hypoxia is the main cause of donor heart degradation. The global myocardial ischemia causes a cascade of oxygen radical formation that culminates in an elevation in hydrogen ions (decrease in pH), irreversible cellular injury, and potential microvascular changes in perfusion. Objective—To determine the effects of prolonged storage times on donor heart microvasculature and the effects of intermittent antegrade perfusion. Materials and Methods—Using porcine hearts flushed with a Ribosol-based cardioplegic solution, we examined how storage time affects microvascular myocardial perfusion by using contrast-enhanced magnetic resonance imaging at a mean (SD) of 6.1 (0.6) hours (n=13) or 15.6 (0.6) hours (n=11) after cardiectomy. Finally, to determine if administration of cardioplegic solution affects pH and microvascular perfusion, isolated hearts (group 1, n=9) given a single antegrade dose were compared with hearts (group 2, n=8) given intermittent antegrade cardioplegia (150 mL every 30 min, 150 mL/min) by a heart preservation device. Khuri pH probes in left and right ventricular tissue continuously measured hydrogen ion levels, and perfusion intensity on magnetic resonance images was plotted against time. Results—Myocardial perfusion measured via magnetic resonance imaging at 6.1 hours was significantly greater than at 15.6 hours (67% vs 30%, P = .00008). In group 1 hearts, the mean (SD) pH at the end of 6 hours decreased to 6.2 (0.2). In group 2 hearts, which received intermittent antegrade cardioplegia, pH at the end of 6 hours was higher, at 6.7 (0.3) (P = .0005). Magnetic resonance imaging showed no significant differences between the 2 groups in contrast enhancement (group 1, 62%; group 2, 40%) or in the wet/dry weight ratio.
Conclusion—Intermittent perfusion maintains a significantly higher myocardial pH than does a conventional single antegrade dose. This difference may translate into an improved quality of donor hearts procured for transplantation, allowing longer distance procurement, tissue matching, improved outcomes for transplant recipients, and ideally a decrease in transplant-related costs.
Abstract:
Buildings affect people in various ways. They can help us to work more effectively; they also present a wide range of stimuli for our senses to react to. Intelligent buildings are designed to be aesthetic in sensory terms: not just visually appealing, but buildings in which occupants experience delight, freshness, airiness, daylight, views out and social ambience. All these factors contribute to a general aesthetic which gives pleasure and affects one’s mood. If there is to be a common vision, it is essential for architects, engineers and clients to work closely together throughout the planning, design, construction and operational stages, which represent the conception, birth and life of the building. There has to be an understanding of how patterns of work are best suited to a particular building form served by appropriate environmental systems. A host of technologies are emerging that help these processes, but in the end it is how we think about achieving responsive buildings that matters. Intelligent buildings should cope with social and technological changes and also be adaptable to short-term and long-term human needs. We live through our senses. They rely on stimulation from the tasks we are focused on, from the people around us, and from the physical environment. We breathe air, and its quality affects the olfactory system; temperature is felt by thermoreceptors in the skin; sound enters our ears; the visual scene is beheld by our eyes. All these stimuli are transmitted along the sensory nervous system to the brain for processing, from which physiological and psychological reactions and judgments are formed, depending on perception, expectancies and past experiences. It is clear that the environmental setting plays a role in this sensory process. This is the essence of sensory design. Space plays its part as well. The flow of communication is partly electronic but still largely takes place through people meeting face to face. Our sense of space wants different things at different times.
Sometimes privacy, and at other times social needs, have to be satisfied, alongside the organizational requirement for effective human communication throughout the building. In general, if the senses are satisfied, people feel better and work better.
Abstract:
This special issue of Cold War History offers a retrospective on the end of the Cold War, 25 years after its peaceful conclusion. That peaceful ending is an achievement that cannot be celebrated enough, and we must continue to build international relations, in conflict and co-operation, on this awareness of our common humanity and our common human fallibility.
Abstract:
The incorporation of numerical weather predictions (NWP) into a flood warning system can increase forecast lead times from a few hours to a few days. A single NWP forecast from a single forecast centre, however, is insufficient as it involves considerable non-predictable uncertainties and can lead to a high number of false or missed warnings. Weather forecasts using multiple NWPs from various weather centres, implemented on catchment hydrology, can provide significantly improved early flood warning. The availability of global ensemble weather prediction systems through the ‘THORPEX Interactive Grand Global Ensemble’ (TIGGE) offers a new opportunity for the development of state-of-the-art early flood forecasting systems. This paper presents a case study using the TIGGE database for flood warning on a meso-scale catchment (4,062 km²) located in the Midlands region of England. For the first time, a research attempt is made to set up a coupled atmospheric-hydrologic-hydraulic cascade system driven by the TIGGE ensemble forecasts. A probabilistic discharge and flood inundation forecast is provided as the end product to study the potential benefits of using the TIGGE database. The study shows that precipitation input uncertainties dominate and propagate through the cascade chain. The current NWPs fall short of representing the spatial precipitation variability on such a comparatively small catchment, which indicates a need to improve NWP resolution and/or disaggregation techniques to narrow the spatial gap between meteorology and hydrology. The spread of discharge forecasts varies from centre to centre, but it is generally large and implies a significant level of uncertainty. Nevertheless, the results show the TIGGE database is a promising tool for forecasting flood inundation, comparable with forecasts driven by raingauge observations.
Abstract:
Several studies of different bilingual groups, including L2 learners, child bilinguals, heritage speakers and L1 attriters, reveal similar performance on syntax-discourse interface properties such as anaphora resolution (Sorace, 2011 and references therein). Specifically, bilinguals seem to allow more optionality in the interpretation of overt subject pronouns in null subject languages such as Greek, Italian and Spanish, while their interpretation of null subject pronouns is indistinguishable from that of monolingual natives. Nevertheless, there is some evidence pointing to bilingualism effects on the interpretation of null subject pronouns too in heritage speakers’ grammars (Montrul, 2004), due to some form of ‘arrested’ development in this group of bilinguals. The present study seeks to investigate similarities and differences between two Greek–Swedish bilingual groups, heritage speakers and L1 attriters, in anaphora resolution of null and overt subject pronouns in Greek, using a self-paced listening task with a sentence-picture matching decision at the end of each sentence. The two groups differ in crucial ways: the heritage speakers were simultaneous or early bilinguals, while the L1 attriters were adult learners of the second language, Swedish. Our findings reveal differences from monolingual preferences in the interpretation of the overt pronoun for both heritage and attrited speakers, while the differences attested between the two groups in the interpretation of null subject pronouns affect only response times, with heritage speakers being faster than attrited speakers. We argue that our results do not support an age-of-onset or differential input effect on bilingual performance in pronoun resolution.