985 results for 1215-1563
Abstract:
In the event of a release of toxic gas in the center of London, the emergency services would need to quickly determine the extent of the contaminated area. The transport of pollutants by turbulent flow within the complex street and building architecture of cities is not straightforward, and we might wonder whether it is at all possible to make a scientifically reasoned decision. Here we describe recent progress from a major UK project, ‘Dispersion of Air Pollution and its Penetration into the Local Environment’ (DAPPLE, www.dapple.org.uk). In DAPPLE, we focus on the movement of airborne pollutants in cities by developing a greater understanding of atmospheric flow and dispersion within urban street networks. In particular, we carried out full-scale dispersion experiments in central London (UK) during 2003, 2004, 2007, and 2008 to address the extent of the dispersion of tracers following their release at street level. These measurements complemented previous studies because (i) our focus was on dispersion within the first kilometer from the source, where most of the material was expected to remain within the street network rather than being mixed into the boundary layer aloft, (ii) measurements were made under a wide variety of meteorological conditions, and (iii) central London represents a European, rather than North American, city geometry. Interpretation of the results from the full-scale experiments was supported by extensive numerical and wind tunnel modeling, which allowed more detailed analysis under idealized and controlled conditions. In this article, we review the full-scale DAPPLE methodologies and show early results from the analysis of the 2007 field campaign data.
Abstract:
M. R. Banaji and A. G. Greenwald (1995) demonstrated a gender bias in fame judgments—that is, an increase in judged fame due to prior processing that was larger for male than for female names. They suggested that participants shift criteria between judging men and women, using the more liberal criterion for judging men. This "criterion-shift" account appeared problematic for a number of reasons. In this article, 3 experiments are reported that were designed to evaluate the criterion-shift account of the gender bias in the false-fame effect against a distribution-shift account. The results were consistent with the criterion-shift account, and they helped to define more precisely the situations in which people may be ready to shift their response criterion on an item-by-item basis. In addition, the results were incompatible with an interpretation of the criterion shift as an artifact of the experimental situation in the experiments reported by M. R. Banaji and A. G. Greenwald. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
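To make the contrast between the two accounts concrete, the following Python sketch (purely illustrative, not from the article; all distribution parameters and values are hypothetical) casts fame judgments in signal-detection terms: under the criterion-shift account the familiarity distribution is fixed and the response criterion is more liberal for male names, whereas under the distribution-shift account the criterion is fixed and the familiarity distribution itself is shifted upward for male names.

```python
# Illustrative signal-detection sketch of the two accounts of the gender bias
# in false-fame judgments. All numeric parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def false_fame_rate(n, mean_familiarity, criterion):
    """Proportion of previously seen non-famous names misjudged as famous:
    familiarity is normally distributed; a name is called 'famous' if its
    familiarity exceeds the response criterion."""
    familiarity = rng.normal(loc=mean_familiarity, scale=1.0, size=n)
    return np.mean(familiarity > criterion)

n = 100_000

# Criterion-shift account: same familiarity distribution for both genders,
# but a lower (more liberal) criterion when judging male names.
print("criterion-shift account:    male",
      false_fame_rate(n, 0.0, 0.8), " female", false_fame_rate(n, 0.0, 1.2))

# Distribution-shift account: same criterion for both genders, but the
# familiarity distribution for male names is shifted upward.
print("distribution-shift account: male",
      false_fame_rate(n, 0.4, 1.0), " female", false_fame_rate(n, 0.0, 1.0))
```

Both accounts reproduce the larger false-fame effect for male names; the experiments in the article are designed to tease the two apart by other means.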
Abstract:
The main objective of this study is to revisit the fundamental postulations of autopoietic self-production wrapped within the autopoietic six-point key and to investigate whether or not firms as specific social systems can be treated as autopoietic unities. In order to do so, firms have to be defined as simple and composite unities whose boundaries can be clearly identified. The test of social autopoiesis reveals that firms can be viewed as autopoietic social systems that exist in the communicative space, with employees' firm-specific communicative sub-domains as their components. Furthermore, it is argued that the social reification of autopoiesis (autokoinopoiesis) in firms is quintessentially interconnected with the physical autopoiesis of their employees (autophysiopoiesis). The discontiguous focus on productivity as firms' obvious physical implication may thus be upgraded by the very social nature of ideactivity, firms' only real survival force.
Abstract:
An experiment on restricted suckling of crossbred dairy cows was conducted at the Livestock Research Centre, Tanga in north-east Tanzania. The objective of the experiment was to evaluate the comparative productivity of Bos taurus x Bos indicus cows of medium and high levels of Bos taurus inheritance, whose calves were either bucket-reared or suckled residual milk. Lactation milk yield, length and persistency were 1563 L, 289 days, and 1.0, respectively, for the bucket-reared group and 1592 L, 289 days and 1.4, respectively, for the suckling group. Days to observed oestrus, first insemination and conception for cows whose calves were bucket-reared were 47, 74 and 115 days, respectively, and 57, 81 and 126 days, respectively, for the suckling cows. The calf weights were similar at 1 year of age. The productivity of the cows, measured as the annual milk offtake, was not significantly higher for those that suckled their calves than for those whose calves were bucket-reared.
Abstract:
The mechanism of action and properties of a solid-phase ligand library made of hexapeptides (combinatorial peptide ligand libraries or CPLL), for capturing the "hidden proteome", i.e. the low- and very low-abundance proteins constituting the vast majority of species in any proteome, as applied to plant tissues, are reviewed here. Plant tissues are notoriously recalcitrant to protein extraction and to proteome analysis. Firstly, rigid plant cell walls need to be mechanically disrupted to release the cell content and, in addition to their poor protein yield, plant tissues are rich in proteases and oxidative enzymes, and contain phenolic compounds, starches, oils, pigments and secondary metabolites that massively contaminate protein extracts. In addition, complex matrices of polysaccharides, including large amounts of anionic pectins, are present. All these species compete with the binding of proteins to the CPLL beads, impeding proper capture and identification/detection of low-abundance species. When properly pre-treated, plant tissue extracts are amenable to capture by the CPLL beads, thus revealing many new species, among them low-abundance proteins. Examples are given on the treatment of leaf proteins, of corn seed extracts and of exudate proteins (latex from Hevea brasiliensis). In all cases, the detection of unique gene products via CPLL capture is at least twice that of the control, untreated sample.
Abstract:
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
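As a concrete illustration of the capture-recapture use described above, the following Python sketch (not taken from the article; the data and function names are hypothetical) fits a plain zero-truncated Poisson to observed capture counts and forms the Horvitz-Thompson population-size estimate N-hat = n / (1 - exp(-lambda-hat)). The article's mixture and NPMLE machinery generalises this single-component case to heterogeneous capture probabilities.

```python
# Minimal sketch: population-size estimation from zero-truncated Poisson counts
# via the Horvitz-Thompson estimator. Each observed individual i was captured
# x_i >= 1 times; individuals captured zero times are unobserved.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

def neg_loglik(lam, x):
    """Negative log-likelihood of a zero-truncated Poisson(lam) sample x (all x >= 1):
    P(X = x | X > 0) = exp(-lam) lam^x / (x! (1 - exp(-lam)))."""
    x = np.asarray(x, dtype=float)
    ll = np.sum(-lam + x * np.log(lam) - gammaln(x + 1) - np.log1p(-np.exp(-lam)))
    return -ll

def estimate_population_size(x):
    """MLE of lambda under zero truncation, then N-hat = n / (1 - p0-hat)."""
    res = minimize_scalar(neg_loglik, bounds=(1e-6, 50.0), args=(x,), method="bounded")
    lam_hat = res.x
    p0_hat = np.exp(-lam_hat)      # estimated probability of never being captured
    n_observed = len(x)
    return lam_hat, n_observed / (1.0 - p0_hat)

# Illustrative use with hypothetical capture counts:
counts = [1, 1, 2, 1, 3, 1, 2, 1, 1, 4]
lam_hat, N_hat = estimate_population_size(counts)
print(f"lambda-hat = {lam_hat:.3f}, estimated population size = {N_hat:.1f}")
```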
Abstract:
Four new antimony sulphides, [T(dien)2]Sb6S10·xH2O [T = Ni (1), Co (2); x ≈ 0.45], [Co(en)3]Sb8S13 (3) and [Ni(en)3]Sb12S19 (4), have been synthesised under solvothermal conditions. In compounds (1)-(3), Sb12S22^8- secondary building units are connected to form layered structures. In (1) and (2), Sb6S10^2- layers containing Sb16S16 heterorings are separated by [T(dien)2]^2+ cations, whilst in (3), Sb8S13^2- layers contain [Co(en)3]^2+ cations within large Sb22S22 pores. Compound (4) adopts a three-dimensional structure in which [Ni(en)3]^2+ cations lie within ca. 5 Å wide channels. (c) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Background: Consistency of performance across tasks that assess syntactic comprehension in aphasia has clinical and theoretical relevance. In this paper we add to the relatively sparse previous work on how sentence comprehension abilities are influenced by the nature of the assessment task. Aims: Our aims are: (1) to compare linguistic performance across sentence-picture matching, enactment, and truth-value judgement tasks; (2) to investigate the impact of pictorial stimuli on syntactic comprehension. Methods & Procedures: We tested a group of 10 aphasic speakers (3 with fluent and 7 with non-fluent aphasia) in three tasks (Experiment 1): (i) sentence-picture matching with four pictures, (ii) sentence-picture matching with two pictures, and (iii) enactment. A further task of truth-value judgement was given to a subgroup of those speakers (n=5, Experiment 2). Similar sentence types were used across all tasks and included canonical (actives, subject clefts) and non-canonical (passives, object clefts) sentences. We undertook two types of analyses: (a) we compared canonical and non-canonical sentences in each task; (b) we compared performance between (i) actives and passives, and (ii) subject and object clefts in each task. We examined the results of all participants as a group and as a case series. Outcomes & Results: Several task effects emerged. Overall, the two-picture sentence-picture matching and enactment tasks were more discriminating than the four-picture condition. Group performance in the truth-value judgement task was similar to two-picture sentence-picture matching and enactment. At the individual level, performance across tasks contrasted with some group results. Conclusions: Our findings revealed task effects across participants. We discuss reasons that could explain the diverse profiles of performance and the implications for clinical practice.
Abstract:
Reliably representing both horizontal cloud inhomogeneity and vertical cloud overlap is fundamentally important for the radiation budget of a general circulation model. Here, we build on the work of Part One of this two-part paper by applying a pair of parameterisations that account for horizontal inhomogeneity and vertical overlap to global re-analysis data. These are applied both together and separately in an attempt to quantify the effects of poor representation of the two components on the radiation budget. Horizontal inhomogeneity is accounted for using the “Tripleclouds” scheme, which uses two regions of cloud in each layer of a gridbox as opposed to one; vertical overlap is accounted for using “exponential-random” overlap, which aligns vertically continuous cloud according to a decorrelation height. These are applied to a sample of scenes from a year of ERA-40 data. The largest radiative effect of horizontal inhomogeneity is found to be in areas of marine stratocumulus; the effect of vertical overlap is found to be fairly uniform, but with larger individual short-wave and long-wave effects in areas of deep, tropical convection. The combined effect of the two parameterisations is found to reduce the magnitude of the net top-of-atmosphere cloud radiative forcing (CRF) by 2.25 W m−2, with shifts of up to 10 W m−2 in areas of marine stratocumulus. The effect of the uncertainty in our parameterisations on the radiation budget is also investigated. It is found that the uncertainty in the impact of horizontal inhomogeneity is of order ±60%, while the uncertainty in the impact of vertical overlap is much smaller. This suggests an insensitivity of the radiation budget to the exact nature of the global decorrelation height distribution derived in Part One.
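For readers unfamiliar with the overlap assumption, the following Python sketch (an illustrative simplification, not the paper's implementation; the decorrelation height and the profile values are assumed) shows the core of "exponential-random" overlap: the total cover of adjacent cloudy layers is a blend of the maximum-overlap and random-overlap limits, weighted by a factor that decays exponentially with layer separation. The paper's scheme additionally distinguishes vertically continuous from non-continuous cloud, which this sketch does not.

```python
# Minimal sketch of exponential-random overlap for a single column.
# The decorrelation height z0 and the example profile are illustrative values only.
import numpy as np

def combined_cloud_cover(cloud_fraction, layer_height, z0=2000.0):
    """Total cloud cover of a column under a simplified exponential-random overlap.
    cloud_fraction, layer_height: same-length arrays ordered along the column.
    z0: decorrelation height in metres (assumed value)."""
    c_total = cloud_fraction[0]
    for k in range(1, len(cloud_fraction)):
        c_k = cloud_fraction[k]
        dz = abs(layer_height[k] - layer_height[k - 1])
        alpha = np.exp(-dz / z0)                    # overlap parameter for this layer pair
        c_max = max(c_total, c_k)                   # maximum-overlap limit
        c_rand = c_total + c_k - c_total * c_k      # random-overlap limit
        c_total = alpha * c_max + (1.0 - alpha) * c_rand
    return c_total

# Illustrative profile: three cloudy layers at 5, 2 and 1 km with 30-50% cover each.
print(combined_cloud_cover(np.array([0.3, 0.5, 0.4]),
                           np.array([5000.0, 2000.0, 1000.0])))
```

In the limit z0 -> infinity the layers overlap maximally (smallest total cover); as z0 -> 0 the overlap becomes random (largest total cover), which is why the derived decorrelation height controls the radiative impact discussed above.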
Abstract:
Although the use of climate scenarios for impact assessment has grown steadily since the 1990s, uptake of such information for adaptation is lagging by nearly a decade in terms of scientific output. Nonetheless, integration of climate risk information in development planning is now a priority for donor agencies because of the need to prepare for climate change impacts across different sectors and countries. This urgency stems from concerns that progress made against Millennium Development Goals (MDGs) could be threatened by anthropogenic climate change beyond 2015. Up to this time the human signal, though detectable and growing, will be a relatively small component of climate variability and change. This implies the need for a twin-track approach: on the one hand, vulnerability assessments of social and economic strategies for coping with present climate extremes and variability, and, on the other hand, development of climate forecast tools and scenarios to evaluate sector-specific, incremental changes in risk over the next few decades. This review starts by describing the climate outlook for the next couple of decades and the implications for adaptation assessments. We then review ways in which climate risk information is already being used in adaptation assessments and evaluate the strengths and weaknesses of three groups of techniques. Next we identify knowledge gaps and opportunities for improving the production and uptake of climate risk information for the 2020s. We assert that climate change scenarios can meet some, but not all, of the needs of adaptation planning. Even then, the choice of scenario technique must be matched to the intended application, taking into account local constraints of time, resources, human capacity and supporting infrastructure. We also show that much greater attention should be given to improving and critiquing models used for climate impact assessment, as standard practice. Finally, we highlight the over-arching need for the scientific community to provide more information and guidance on adapting to the risks of climate variability and change over nearer time horizons (i.e. the 2020s). Although the focus of the review is on information provision and uptake in developing regions, it is clear that many developed countries are facing the same challenges. Copyright © 2009 Royal Meteorological Society