935 results for Clean rooms.
Abstract:
This paper examines the ethics of the Clean Development Mechanism (CDM) in its architecture, processes and outcomes, and its potential to allocate resources to the poor as ‘ethical development’. Two specific examples of CDM projects help us to explore some of the quandaries that seem to be quickly defining operating procedure for the CDM in its efforts to bring entitlements to the poor. The paper concludes with reflections on the normative and social complications of the CDM and closes with three key areas of further investigation.
Abstract:
The rate and magnitude of predicted climate change require that we urgently mitigate emissions or sequester carbon on a substantial scale in order to avoid runaway climate change. Geo- and bioengineering solutions are increasingly proposed as viable and practical strategies for tackling global warming. Biotechnology companies are already developing transgenic “super carbon-absorbing” trees, which are sold as a cost-effective and relatively low-risk means of sequestering carbon. The question posed in this article is, Do super carbon trees provide real benefits or are they merely a fanciful illusion? It remains unclear whether growing these trees makes sense in terms of the carbon cost of production and the actual storage of carbon. In particular, it is widely acknowledged that “carbon-eating” trees fail to sequester as much carbon as they oxidize and return to the atmosphere; moreover, there are concerns about the biodiversity impacts of large-scale monoculture plantations. The potential social and ecological risks and opportunities presented by such controversial solutions warrant a societal dialogue.
Abstract:
Global agreements have proliferated in the past ten years. One of these is the Kyoto Protocol, which contains provisions for emissions reductions by trading carbon through the Clean Development Mechanism (CDM). The CDM is a market-based instrument that allows companies in Annex I countries to offset their greenhouse gas emissions through energy and tree offset projects in the global South. I set out to examine the governance challenges posed by the institutional design of carbon sequestration projects under the CDM. I examine three global narratives associated with the design of CDM forest projects, specifically North–South knowledge politics, green developmentalism, and community participation, and subsequently assess how these narratives match local practices in two projects in Latin America. Findings suggest that governance problems are operating at multiple levels and that the rhetoric of global carbon actors often casts these schemes in one light, while the rhetoric of those immediately involved at the local level may differ. I also highlight the alarmist discourse that blames local people for the problems of environmental change. The case studies illustrate the need for vertical communication and interaction and nested governance arrangements as well as horizontal arrangements. I conclude that the global framing of forests as offsets requires better integration of local relationships to forests and their management, and more effective institutions at multiple levels to link the very local to the very large scale when dealing with carbon sequestration in the CDM.
Abstract:
Three experiments measured constancy in speech perception, using natural-speech messages or noise-band vocoder versions of them. The eight vocoder bands had equally log-spaced center frequencies and the shapes of the corresponding “auditory” filters. Consequently, the bands had the temporal envelopes that arise in these auditory filters when the speech is played. The “sir” or “stir” test-words were distinguished by degrees of amplitude modulation and played in the context: “next you’ll get _ to click on.” Listeners identified test-words appropriately, even in the vocoder conditions where the speech had a “noise-like” quality. Constancy was assessed by comparing the identification of test-words with low or high levels of room reflections across conditions where the context had either a low or a high level of reflections. Constancy was obtained with both the natural and the vocoded speech, indicating that the effect arises through temporal-envelope processing. Two further experiments assessed the perceptual weighting of the different bands, both in the test-word and in the context. The resulting weighting functions both increase monotonically with frequency, following the spectral characteristics of the test-word’s [s]. It is suggested that these two weighting functions are similar because they both come about through the perceptual grouping of the test-word’s bands.
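A rough sketch of the noise-band vocoding described above may help make the manipulation concrete. The snippet below assumes eight Butterworth band-pass filters with equally log-spaced center frequencies and Hilbert-envelope extraction; the study itself used bands with the shapes of the corresponding auditory filters, so this is an illustrative simplification rather than the authors' processing chain.

```python
# Minimal noise-band vocoder sketch: split speech into log-spaced bands,
# extract each band's temporal envelope, and impose it on band-limited noise.
# Band edges, filter order and filter type are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=8, f_lo=100.0, f_hi=8000.0):
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)              # analysis band
        env = np.abs(hilbert(band))                  # temporal envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(speech)))
        carrier *= env                               # envelope-modulated noise
        rms_c = np.sqrt(np.mean(carrier ** 2))
        if rms_c > 0:                                # keep the original band level
            carrier *= np.sqrt(np.mean(band ** 2)) / rms_c
        out += carrier
    return out
```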
Abstract:
When speech is in competition with interfering sources in rooms, monaural indicators of intelligibility fail to take account of the listener’s abilities to separate target speech from interfering sounds using the binaural system. In order to incorporate these segregation abilities and their susceptibility to reverberation, Lavandier and Culling [J. Acoust. Soc. Am. 127, 387–399 (2010)] proposed a model which combines effects of better-ear listening and binaural unmasking. A computationally efficient version of this model is evaluated here under more realistic conditions that include head shadow, multiple stationary noise sources, and real-room acoustics. Three experiments are presented in which speech reception thresholds were measured in the presence of one to three interferers using real-room listening over headphones, simulated by convolving anechoic stimuli with binaural room impulse-responses measured with dummy-head transducers in five rooms. Without fitting any parameter of the model, there was close correspondence between measured and predicted differences in threshold across all tested conditions. The model’s components of better-ear listening and binaural unmasking were validated both in isolation and in combination. The computational efficiency of this prediction method allows the generation of complex “intelligibility maps” from room designs. © 2012 Acoustical Society of America
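As a loose illustration of the model's better-ear component only (binaural unmasking is omitted here), the sketch below convolves anechoic target and interferer signals with left/right binaural room impulse responses and keeps the higher band-wise SNR across the two ears. The band edges and equal band weighting are assumptions for illustration; this is not the published model of Lavandier and Culling.

```python
# Better-ear listening sketch: band-wise SNR at each ear after convolution
# with binaural room impulse responses (BRIRs), taking the better ear per band.
# Band edges and the omission of binaural unmasking are simplifying assumptions.
import numpy as np
from scipy.signal import fftconvolve, butter, sosfiltfilt

def band_rms(x, fs, lo, hi):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return np.sqrt(np.mean(sosfiltfilt(sos, x) ** 2))

def better_ear_snr(target, interferer, brir_target, brir_interf, fs,
                   edges=(200, 500, 1000, 2000, 4000)):
    """Mean better-ear SNR in dB; brir_* are arrays of shape (taps, 2)."""
    ears = []
    for ch in (0, 1):                                   # left, right ear
        t = fftconvolve(target, brir_target[:, ch])
        n = fftconvolve(interferer, brir_interf[:, ch])
        ears.append((t, n))
    snrs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        per_ear = [20 * np.log10(band_rms(t, fs, lo, hi) / band_rms(n, fs, lo, hi))
                   for t, n in ears]
        snrs.append(max(per_ear))                       # better ear in this band
    return float(np.mean(snrs))
```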
Abstract:
Three-dimensional computational simulations are performed to examine the indoor environment and the micro-environment around human bodies in an office, in terms of the thermal environment and air quality. In this study, personal displacement ventilation (PDV), covering two cases (all seats taken, and only the two middle seats taken), is compared with overall displacement ventilation (ODV) with all seats taken, under the condition that the supply temperature is 24℃ and the supply airflow is 60 l/s per workstation. When using PDV, temperature stratification, the characteristic feature of displacement ventilation, is clearly observed at the position of the occupant’s head, and is more pronounced in the case with all seats taken. Vertical temperature differences below head height are under 2℃ in the two cases with all seats taken, and the temperature with PDV is higher than that with ODV. The vertical temperature difference is under 3℃ in the case with only the two middle seats taken. CO2 concentration is lower than 2 g/m³ in the breathing zone. The results indicate that PDV can be used in rooms where the number of occupants changes substantially while still meeting thermal comfort and air quality requirements. When not all seats are taken, designers should increase the supply airflow or reduce the supply temperature to maintain thermal comfort.
Abstract:
This work investigated personal exposure to indoor particulate matter using the intake-fraction metric and provided a possible way to trace particles inhaled from an indoor particle source. A turbulence model, validated against particle measurements in a room with an underfloor air distribution (UFAD) system, was used to predict indoor particle concentrations. The inhalation intake fraction of indoor particles was defined and evaluated in two rooms equipped with UFAD, i.e., the experimental room and a small office. Based on the exposure characteristics and a typical respiratory rate, the intake fraction was determined in the two rooms for a continuous source and an episodic (human cough) source of particles, respectively. The findings showed that the well-mixed assumption for indoor air failed to give an accurate estimate of inhalation exposure, and that the average concentration at the return outlet or over the whole room could not relate the intake fraction well to the amount of particles emitted from an indoor source.
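The intake-fraction metric itself is compact: the ratio of pollutant mass inhaled to pollutant mass emitted by the source. The sketch below assumes this standard definition, a nominal breathing rate, and made-up example numbers; none of the values come from the study.

```python
# Intake fraction sketch: inhaled mass (breathing rate x breathing-zone
# concentration integrated over time) divided by the mass emitted.
# Breathing rate and example numbers are illustrative assumptions.
import numpy as np

def intake_fraction(conc, times, emitted_mass, breathing_rate=1.7e-4):
    """conc in kg/m^3 sampled at `times` (s); breathing_rate in m^3/s (~15 m^3/day)."""
    inhaled = breathing_rate * np.sum(0.5 * (conc[1:] + conc[:-1]) * np.diff(times))
    return inhaled / emitted_mass

# Example: 1 mg emitted, breathing-zone concentration decaying over an hour
t = np.linspace(0.0, 3600.0, 361)
c = 2e-8 * np.exp(-t / 1200.0)                    # kg/m^3, hypothetical decay curve
print(intake_fraction(c, t, emitted_mass=1e-6))   # on the order of 1e-3
```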
Abstract:
This article analyzes two series of photographs and essays on writers’ rooms published in England and Canada in 2007 and 2008. The Guardian’s Writers’ Rooms series, with photographs by Eamon McCabe, ran in 2007. In the summer of 2008, The Vancouver International Writers and Readers Festival began to post its own version of The Guardian column on its website, displaying each week leading up to the Festival in September a different writer’s “writing space” and an accompanying paragraph. I argue that these images of writers’ rooms, which suggest a cultural fascination with authors’ private compositional practices and materials, reveal a great deal about theoretical constructions of authorship implicit in contemporary literary culture. Far from possessing the museum quality of dead authors’ spaces, rooms that are still in use, that incorporate new forms of writing technology, and that have drafts of manuscripts scattered around them can offer insight into such well-worn and ineffable areas of speculation as inspiration, singular authorial genius, and literary productivity.
Abstract:
This study investigated the effects of transporting animals from the experimental room to the animal facility between experimental sessions, a procedure routinely employed in experimental research, on long-term social recognition memory. Using the intruder-resident paradigm, independent groups of Wistar rats exposed to a 2-h encounter with an adult intruder were transported from the experimental room to the animal facility either 0.5 or 6 h after the encounter. The following day, residents were exposed to a second encounter with either the same or a different (unfamiliar) intruder. The residents’ social and non-social behaviors were carefully scored and subjected to Principal Component Analysis, allowing us to parcel out variance and relatedness among these behaviors. Resident rats transported 6 h after the first encounter exhibited a reduced amount of social investigation towards familiar intruders, but an increase in social investigation when exposed to a different intruder as compared to the first encounter. These effects revealed a consistent, long-lasting (24 h) social recognition memory in rats. In contrast, resident rats transported 0.5 h after the first encounter did not exhibit social recognition memory. These results indicate that this common, little-noted laboratory procedure disturbs long-term social recognition memory. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
The cleaning procedure consists of two-step flashing: (i) cycles of low-power flashes (T ≈ 1200 K) at an oxygen partial pressure of P(O2) = 6 × 10⁻⁸ mbar, to remove the carbon from the surface, and (ii) a single high-power flash (T ≈ 2200 K), to remove the oxide layer. The removal of carbon from the surface through the chemical reaction with oxygen during the low-power flash cycles is monitored by thermal desorption spectroscopy. The exposure to O2 leads to the oxidation of the W surface. Using a high-power flash, the volatile W oxides and the atomic oxygen are desorbed, leaving a clean crystal surface at the end of the procedure. The method may also be used for cleaning other refractory metals such as Mo, Re and Ir. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
We show that a broad class of quantum critical points can be stable against locally correlated disorder even if they are unstable against uncorrelated disorder. Although this result seemingly contradicts the Harris criterion, it follows naturally from the absence of a random-mass term in the associated order parameter field theory. We illustrate the general concept with explicit calculations for quantum spin-chain models. Instead of the infinite-randomness physics induced by uncorrelated disorder, we find that weak locally correlated disorder is irrelevant. For larger disorder, we find a line of critical points with unusual properties such as an increase of the entanglement entropy with the disorder strength. We also propose experimental realizations in the context of quantum magnetism and cold-atom physics. Copyright (C) EPLA, 2011
Abstract:
In order to evaluate the interactions between Au/Cu atoms and a clean Si(111) surface, we used synchrotron radiation grazing-incidence X-ray fluorescence analysis and theoretical calculations. Optimized geometries and energies indicate that the binding energies at the different adsorption sites are high, suggesting a strong interaction between the metal atoms and the silicon surface. The Au atom showed a stronger interaction than the Cu atom. The theoretical and experimental data showed good agreement. Crown Copyright (C) 2009 Published by Elsevier B.V. All rights reserved.
Clean Code vs Dirty Code: A field experiment to explain how Clean Code affects code comprehension
Abstract:
Large and complex codebases with poor code comprehension are an increasingly common problem among companies today. Poor code comprehension results in more time spent on maintaining and modifying code, which for a company leads to increased costs. Clean Code is considered by some to be the solution to this problem. Clean Code is a collection of guidelines and principles for writing code that is easy to understand and maintain. A knowledge gap was identified regarding empirical data on how Clean Code affects code comprehension. The study's research question was: How is comprehension affected when modifying code that has been refactored according to the Clean Code principles for naming and for writing functions? To investigate how Clean Code affects code comprehension, a field experiment was conducted together with the company CGM Lab Scandinavia in Borlänge, where data on time spent and perceived comprehension among test participants was collected and analyzed. The results of the study show no clear improvement or deterioration in code comprehension, as only the perceived code comprehension appears to be affected. All test participants preferred Clean Code over Dirty Code even though the time spent was unaffected. This leads to the conclusion that the effects of Clean Code may not be immediate, as developers have not yet had time to adapt to Clean Code and therefore cannot take full advantage of it. The study gives an indication of Clean Code's potential to improve code comprehension.
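To make the naming and function principles tested in the study concrete, here is a hypothetical before/after pair of the kind such refactoring targets. It is illustrative only and is not the code used in the experiment at CGM Lab Scandinavia.

```python
# Illustrative "Dirty" vs "Clean" contrast for the two principles studied:
# intention-revealing names and small, single-purpose functions.
# The domain (voter eligibility) is invented for the example.

# Dirty: cryptic names, one function doing everything
def proc(l):
    r = []
    for x in l:
        if x[1] > 17 and x[2]:
            r.append(x[0].upper())
    return r

# Clean: descriptive names, one responsibility per function
def is_eligible_voter(person):
    _, age, is_registered = person
    return age > 17 and is_registered

def display_name(person):
    return person[0].upper()

def eligible_voter_names(people):
    return [display_name(p) for p in people if is_eligible_voter(p)]
```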