80 results for Box-Cox transformation and quintile-based capability indices
Abstract:
A fundamental principle in data modelling is to incorporate available a priori information regarding the underlying data generating mechanism into the modelling process. We adopt this principle and consider grey-box radial basis function (RBF) modelling capable of incorporating prior knowledge. Specifically, we show how to explicitly incorporate two types of prior knowledge: (i) the underlying data generating mechanism exhibits a known symmetry property, and (ii) the underlying process obeys a set of given boundary value constraints. The class of efficient orthogonal least squares regression algorithms can readily be applied without any modification to construct parsimonious grey-box RBF models with enhanced generalisation capability.
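A minimal sketch of one way such a symmetry constraint can be hard-wired into an RBF expansion, assuming a one-dimensional even target; the mirrored-centre construction and the use of ordinary (rather than orthogonal) least squares here are illustrative assumptions, not the authors' code:

```python
import numpy as np

def symmetric_rbf_design(x, centres, width):
    """Design matrix for an RBF model constrained to be even in x:
    each basis function pairs a centre c with its mirror -c, so any
    weighted sum satisfies f(x) == f(-x) by construction."""
    x = np.asarray(x, dtype=float)[:, None]
    phi = np.exp(-(x - centres) ** 2 / (2 * width ** 2))
    phi_mirror = np.exp(-(x + centres) ** 2 / (2 * width ** 2))
    return phi + phi_mirror

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = np.cos(x) + 0.05 * rng.standard_normal(200)     # even target function
P = symmetric_rbf_design(x, centres=np.linspace(0, 3, 8), width=0.8)
w, *_ = np.linalg.lstsq(P, y, rcond=None)           # OLS stands in for the
                                                    # paper's orthogonal LS
```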
Abstract:
Members of the genus Pseudomonas inhabit a wide variety of environments, which is reflected in their versatile metabolic capacity and broad potential for adaptation to fluctuating environmental conditions. Here, we examine and compare the genomes of a range of Pseudomonas spp. encompassing plant, insect and human pathogens, and environmental saprophytes. In addition to a large number of allelic differences of common genes that confer regulatory and metabolic flexibility, genome analysis suggests that many other factors contribute to the diversity and adaptability of Pseudomonas spp. Horizontal gene transfer has impacted the capability of pathogenic Pseudomonas spp. in terms of disease severity (Pseudomonas aeruginosa) and specificity (Pseudomonas syringae). Genome rearrangements likely contribute to adaptation, and a considerable complement of unique genes undoubtedly contributes to strain- and species-specific activities by as yet unknown mechanisms. Because of the lack of conserved phenotypic differences, the classification of the genus has long been contentious. DNA hybridization and genome-based analyses show that members of P. aeruginosa are closely related, whereas isolates within the Pseudomonas fluorescens and P. syringae species are less closely related and may constitute different species. Collectively, genome sequences of Pseudomonas spp. have provided insights into pathogenesis and the genetic basis for diversity and adaptation.
Abstract:
This paper presents a conceptual framework for the examination of land redevelopment based on a complex systems/networks approach. As Alvin Toffler insightfully noted, modern scientific enquiry has become exceptionally good at splitting problems into pieces but has forgotten how to put the pieces back together. Twenty-five years after his remarks, governments and corporations faced with the requirements of sustainability are struggling to promote an ‘integrated’ or ‘holistic’ approach to tackling problems. Despite the talk, both practice and research provide few platforms that allow for ‘joined up’ thinking and action. With socio-economic phenomena, such as land redevelopment, promising prospects open up when we assume that their constituents can make up complex systems whose emergent properties are more than the sum of the parts and whose behaviour is inherently difficult to predict. A review of previous research shows that it has mainly focused on idealised, ‘mechanical’ views of property development processes that fail to recognise in full the relationships between actors, the structures created and their emergent qualities. When reality failed to live up to the expectations of these theoretical constructs, somebody had to be blamed: planners, developers, politicians. However, from a ‘synthetic’ point of view the agents and networks involved in property development can be seen as constituents of structures that perform complex processes. These structures interact, forming new, more complex structures and networks. Redevelopment can then be conceptualised as a process of transformation: a complex system, a ‘dissipative’ structure involving developers, planners, landowners, state agencies etc., unlocks the potential of previously used sites, transforms space towards a higher order of complexity and ‘consumes’ but also ‘creates’ different forms of capital in the process. Analysis of network relations points toward the ‘dualism’ of structure and agency in these processes of system transformation and change. Insights from actor network theory can be conjoined with notions of complexity and chaos to build an understanding of the ways in which actors actively seek to shape these structures and systems, whilst at the same time being recursively shaped by them in their strategies and actions. This approach transcends the blame game and allows for inter-disciplinary inputs to be placed within a broader explanatory framework that does away with many past dichotomies. Better understanding of the interactions between actors and the emergent qualities of the networks they form can improve our comprehension of the complex socio-spatial phenomena that redevelopment comprises. The framework yields significant insights when applied to UK institutional investment in redevelopment.
Abstract:
This article discusses emotion as a strategy of political agency in post-Thatcherite documentary theatre. The 1990s saw a renaissance in theatre writing grounded in directness and immediacy, expressed in two quite different forms of drama: In-Yer-Face theatre and fact-based drama. There are clear distinctions between these forms: the new brutalist writing was aggressively provocative, whereas documentary theatre engaged the audience by revealing an urgent truth. Both claimed a kind of realism that confronted actuality, be that of situation or experience, through forms of theatre that cultivated emotional engagement. In-Yer-Face theatre used emotional shock to penetrate the numb cynicism that its creators perceived. Documentary theatre used observation and the cultivation of sympathy to enlist its audience in a shared understanding of what was hidden, not understood or not noticed. The article analyses the functioning of emotional enlistment to engage the audience politically in two examples of documentary theatre, Black Watch and Guantanamo.
Abstract:
A quantitative assessment of Cloudsat reflectivities and basic ice cloud properties (cloud base, top, and thickness) is conducted in the present study from both airborne and ground-based observations. Airborne observations allow direct comparisons on a limited number of ocean backscatter and cloud samples, whereas ground-based observations allow statistical comparisons over much longer time series but with some additional assumptions. Direct comparisons of the ocean backscatter and ice cloud reflectivities measured by an airborne cloud radar and Cloudsat during two field experiments indicate that, on average, Cloudsat measures ocean backscatter 0.4 dB higher and ice cloud reflectivities 1 dB higher than the airborne cloud radar. Five ground-based sites have also been used for a statistical evaluation of the Cloudsat reflectivities and basic cloud properties. From these comparisons, it is found that the weighted-mean difference Z_Cloudsat − Z_Ground ranges from −0.4 to +0.3 dB when a ±1-h time lag around the Cloudsat overpass is considered. Given that the airborne and ground-based radar calibration accuracy is about 1 dB, it is concluded that the reflectivities of the spaceborne, airborne, and ground-based radars agree within the expected calibration uncertainties of the airborne and ground-based radars. This result shows that the Cloudsat radar does achieve the claimed sensitivity of around −29 dBZ. Finally, an evaluation of the tropical “convective ice” profiles measured by Cloudsat has been carried out over the tropical site in Darwin, Australia. It is shown that these profiles can be used statistically down to approximately 9-km height (or 4 km above the melting layer) without attenuation and multiple-scattering corrections over Darwin. It is difficult to estimate whether this result is applicable to all types of deep convective storms in the tropics. However, this first study suggests that Cloudsat profiles in convective ice need to be corrected for attenuation by supercooled liquid water and ice aggregates/graupel particles, and for multiple scattering, prior to their quantitative use.
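A minimal sketch of the kind of weighted-mean reflectivity comparison quoted above; the function name and the use of per-overpass sample counts as weights are assumptions, not the paper's exact method:

```python
import numpy as np

def weighted_mean_db_difference(z_space_dbz, z_ground_dbz, weights):
    """Weighted mean of per-overpass reflectivity differences (dB).
    Because dBZ is logarithmic, a mean difference in dB expresses a
    multiplicative calibration bias: e.g. +0.3 dB corresponds to a
    ~7% higher linear reflectivity (10**(0.3/10) ~= 1.07)."""
    diff = np.asarray(z_space_dbz) - np.asarray(z_ground_dbz)
    w = np.asarray(weights, dtype=float)  # e.g. sample count per overpass
    return np.sum(w * diff) / np.sum(w)
```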
Abstract:
The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution are not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change are mainly through changes in the statistics of regional weather variations, the scientific and computational requirements to predict its behavior reliably are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenges of developing the capabilities to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities with computer capability at each facility of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have sufficient scientific workforce to develop and maintain the software and data analysis infrastructure. Such facilities will make it possible to determine what horizontal and vertical resolution in atmospheric and ocean models is necessary for more confident predictions at the regional and local level. Current limitations in computing power have placed severe constraints on such an investigation, which is now badly needed. These facilities will also provide the world's scientists with the computational laboratories for fundamental research on weather–climate interactions using 1-km resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure including hardware, software, and data analysis support, and scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it. It will ultimately enable the climate community to provide society with climate predictions that are based on our best knowledge of science and the most advanced technology.
Abstract:
In this paper, we present comprehensive ground-based and space-based in situ geosynchronous observations of a substorm expansion phase onset on 1 October 2005. The Double Star TC-2 and GOES-12 spacecraft were both located within the substorm current wedge during the substorm expansion phase onset, which occurred over the Canadian sector. By extending the AWESOME timing algorithm into space, we find that the onset of ULF waves in space occurred after the onset observed on the ground. Furthermore, a population of low-energy field-aligned electrons was detected by the TC-2 PEACE instrument contemporaneously with the ULF waves in space. These electrons appear to be associated with an enhancement of field-aligned Poynting flux into the ionosphere that is large enough to power visible auroral displays. The observations are most consistent with a near-Earth initiation of substorm expansion phase onset, such as the Near-Geosynchronous Onset (NGO) substorm scenario. A lack of data from further downtail, however, means other mechanisms cannot be ruled out.
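For reference (a standard definition, not taken from the paper), the Poynting flux mentioned here is the electromagnetic energy flux density; its component along the unit vector of the background magnetic field is what carries energy into the ionosphere:

\[
\mathbf{S} = \frac{1}{\mu_0}\,\mathbf{E} \times \mathbf{B}, \qquad S_{\parallel} = \mathbf{S} \cdot \hat{\mathbf{b}}
\]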
Abstract:
It is well known that there is a dynamic relationship between cerebral blood flow (CBF) and cerebral blood volume (CBV). With increasing applications of functional MRI, where blood oxygen-level-dependent signals are recorded, understanding and accurately modelling the hemodynamic relationship between CBF and CBV become increasingly important. This study presents an empirical, data-based modelling framework for model identification from CBF and CBV experimental data. It is shown that the relationship between the changes in CBF and CBV can be described using a parsimonious autoregressive with exogenous input (ARX) model structure. It is observed that neither the ordinary least-squares (LS) method nor the classical total least-squares (TLS) method can produce accurate estimates from the original noisy CBF and CBV data. A regularized total least-squares (RTLS) method is thus introduced and extended to solve this errors-in-variables problem. Quantitative results show that the RTLS method works very well on the noisy CBF and CBV data. Finally, a combination of RTLS with a filtering method leads to a parsimonious but very effective model that characterizes the relationship between the changes in CBF and CBV.
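A minimal sketch of ARX identification by classical total least squares, which, unlike OLS, treats both the regressors and the output as noisy. The paper's RTLS variant adds a regularization constraint that this sketch omits, and the function name and model orders are assumptions:

```python
import numpy as np

def arx_tls(y, u, na=1, nb=2):
    """Fit y(t) = sum_i a_i*y(t-i) + sum_j b_j*u(t-j) by total least
    squares. The TLS solution comes from the smallest right singular
    vector of the augmented matrix [A | b]; y and u must have equal
    length. Returns [a_1..a_na, b_0..b_{nb-1}]."""
    y = np.asarray(y, dtype=float)
    u = np.asarray(u, dtype=float)
    n = max(na, nb - 1)
    rows = []
    for t in range(n, len(y)):
        past_y = [y[t - i] for i in range(1, na + 1)]
        past_u = [u[t - j] for j in range(nb)]
        rows.append(past_y + past_u)
    A = np.array(rows)
    b = y[n:]
    _, _, vt = np.linalg.svd(np.column_stack([A, b]))
    v = vt[-1]                   # right singular vector of smallest
    return -v[:-1] / v[-1]       # singular value gives the TLS estimate
```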
The capability-affordance model: a method for analysis and modelling of capabilities and affordances
Abstract:
Existing capability models lack qualitative and quantitative means to compare business capabilities. This paper extends previous work and uses affordance theories to consistently model and analyse capabilities. We use the concept of objective and subjective affordances to model capability as a tuple of a set of resource affordance system mechanisms and action paths, dependent on one or more critical affordance factors. We identify an affordance chain of subjective affordances by which affordances work together to enable an action, and an affordance path that links action affordances to create a capability system. We define the mechanism and path underlying capability. We show how the affordance modelling notation (AMN) can represent the affordances comprising a capability. We propose a method to quantitatively and qualitatively compare capabilities using efficiency, effectiveness and quality metrics. The method is demonstrated by a medical example comparing the capability of syringe-based and needle-free anaesthetic systems.
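As a rough illustration of the capability-as-tuple idea, a minimal data-structure sketch; all class and field names here are invented for illustration and are not the AMN notation itself:

```python
from dataclasses import dataclass

@dataclass
class Affordance:
    """An action possibility offered by a resource (e.g. 'inject'),
    gated by one or more critical affordance factors."""
    resource: str
    action: str
    critical_factors: dict   # e.g. {"skin_contact": True}

@dataclass
class Capability:
    """Capability as a tuple of resource affordance system mechanisms
    and an ordered action path, with the paper's three comparison
    metrics attached."""
    mechanisms: list         # resource affordance system mechanisms
    action_path: list        # ordered Affordances forming the path
    efficiency: float = 0.0
    effectiveness: float = 0.0
    quality: float = 0.0

    def dominates(self, other: "Capability") -> bool:
        """True if this capability is at least as good on all metrics,
        e.g. comparing syringe vs needle-free anaesthetic delivery."""
        return (self.efficiency >= other.efficiency
                and self.effectiveness >= other.effectiveness
                and self.quality >= other.quality)
```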
Abstract:
NO2 measurements during 1990–2007, obtained from a zenith-sky spectrometer in the Antarctic, are analysed to determine the long-term changes in NO2. An atmospheric photochemical box model and a radiative transfer model are used to improve the accuracy of the vertical columns determined from the slant column measurements, and to deduce the amount of NOy from NO2. We find that the NO2 and NOy columns in midsummer have large inter-annual variability superimposed on a broad maximum in 2000, with little or no overall trend over the full time period. These changes are robust to a variety of alternative settings when determining vertical columns from slant columns or determining NOy from NO2. They may signify similar changes in the speed of the Brewer-Dobson circulation but with opposite sign, i.e. a broad minimum around 2000. Multiple regressions show significant correlation with solar and quasi-biennial-oscillation indices, and weak correlation with El Niño, but no significant overall trend, corresponding to an increase in the Brewer-Dobson circulation of 1.4±3.5% per decade. There remains an unexplained cycle with an amplitude of at least 15% and a period of at least 17 years, with minimum speed around 2000.
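A minimal sketch of the style of multiple regression described (a linear trend plus solar, QBO and ENSO indices), assuming the series are NumPy arrays of equal length; names and the regressor set are illustrative:

```python
import numpy as np

def regress_trend(column, years, solar, qbo, enso):
    """Ordinary least-squares multiple regression of a (NO2 or NOy)
    column series on a linear trend plus solar, QBO and ENSO indices.
    Returns [intercept, trend per year, solar, qbo, enso] coefficients."""
    years = np.asarray(years, dtype=float)
    t = years - years.mean()                 # centre the trend term
    X = np.column_stack([np.ones_like(t), t, solar, qbo, enso])
    coef, *_ = np.linalg.lstsq(X, column, rcond=None)
    return coef
```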
Abstract:
Although Mar del Plata is the most important city on the Atlantic coast of Argentina, the mosquitoes inhabiting this area are almost uncharacterized. To increase our knowledge of their distribution, we sampled specimens from natural populations. After morphological identification based on taxonomic keys, DNA sequences of the small ribosomal subunit (18S rDNA) and cytochrome c oxidase I (COI) genes were obtained from the native species, and a phylogenetic analysis of these sequences was performed. Fourteen species from the genera Uranotaenia, Culex, Ochlerotatus and Psorophora were found and identified. Our 18S rDNA- and COI-based analysis recovers relationships among groups at the supra-species level in concordance with mosquito taxonomy. The introduction and spread of vectors, and of the diseases they carry, are not documented for Mar del Plata, but some of the species found in this study have been reported as pathogen vectors.
Abstract:
Aerosols affect the Earth's energy budget directly by scattering and absorbing radiation and indirectly by acting as cloud condensation nuclei and, thereby, affecting cloud properties. However, large uncertainties exist in current estimates of aerosol forcing because of incomplete knowledge concerning the distribution and the physical and chemical properties of aerosols as well as aerosol-cloud interactions. In recent years, a great deal of effort has gone into improving measurements and datasets. It is thus feasible to shift the estimates of aerosol forcing from largely model-based to increasingly measurement-based. Our goal is to assess current observational capabilities and identify uncertainties in the aerosol direct forcing through comparisons of different methods with independent sources of uncertainties. Here we assess the aerosol optical depth (τ), the direct radiative effect (DRE) by natural and anthropogenic aerosols, and the direct climate forcing (DCF) by anthropogenic aerosols, focusing on satellite and ground-based measurements supplemented by global chemical transport model (CTM) simulations. The multi-spectral MODIS measures global distributions of τ on a daily scale, with a high accuracy of ±0.03 ± 0.05τ over ocean. The annual average τ is about 0.14 over the global ocean, of which about 21% ± 7% is contributed by human activities, as estimated by the MODIS fine-mode fraction. The multi-angle MISR derives an annual average τ of 0.23 over global land with an uncertainty of ~20% or ±0.05. These high-accuracy aerosol products and broadband flux measurements from CERES make it feasible to obtain observational constraints on the aerosol direct effect, especially over the global ocean. A number of measurement-based approaches estimate the clear-sky DRE (on solar radiation) at the top-of-atmosphere (TOA) to be about -5.5 ± 0.2 W m-2 (median ± standard error from various methods) over the global ocean. Accounting for thin-cirrus contamination of the satellite-derived aerosol field reduces the TOA DRE to -5.0 W m-2. Because of a lack of measurements of aerosol absorption and the difficulty of characterizing land surface reflection, estimates of the DRE over land and at the ocean surface are currently realized through a combination of satellite retrievals, surface measurements, and model simulations, and are less well constrained. Over the oceans the surface DRE is estimated to be -8.8 ± 0.7 W m-2. Over land, an integration of satellite retrievals and model simulations derives a DRE of -4.9 ± 0.7 W m-2 and -11.8 ± 1.9 W m-2 at the TOA and surface, respectively. CTM simulations produce a wide range of DRE estimates that on average are smaller than the measurement-based DRE by about 30-40%, even after accounting for thin-cirrus and cloud contamination. A number of issues remain. Current estimates of the aerosol direct effect over land are poorly constrained. Uncertainties of DRE estimates are also larger on regional scales than on a global scale, and large discrepancies exist between different approaches. The characterization of aerosol absorption and vertical distribution remains challenging. The aerosol direct effect in the thermal infrared range and in cloudy conditions remains relatively unexplored and quite uncertain, because of a lack of global systematic aerosol vertical profile measurements. A coordinated research strategy needs to be developed for the integration and assimilation of satellite measurements into models to constrain model simulations. Enhanced measurement capabilities in the next few years and high-level scientific cooperation will further advance our knowledge.
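As a worked reading of the quoted MODIS over-ocean accuracy (interpreting ±0.03 ± 0.05τ as ±(0.03 + 0.05τ)): a retrieval of τ = 0.2 would carry an expected uncertainty of ±(0.03 + 0.05 × 0.2) = ±0.04, i.e. about 20% of the retrieved value.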
Abstract:
Background: Stable-isotope ratios of carbon (13C/12C, expressed as δ13C) and nitrogen (15N/14N, or δ15N) have been proposed as potential nutritional biomarkers to distinguish between meat, fish, and plant-based foods. Objective: The objective was to investigate dietary correlates of δ13C and δ15N and examine the association of these biomarkers with incident type 2 diabetes in a prospective study. Design: Serum δ13C and δ15N (‰) were measured by using isotope ratio mass spectrometry in a case-cohort study (n = 476 diabetes cases; n = 718 subcohort) nested within the European Prospective Investigation into Cancer and Nutrition (EPIC)–Norfolk population-based cohort. We examined dietary (food-frequency questionnaire) correlates of δ13C and δ15N in the subcohort. HRs and 95% CIs were estimated by using Prentice-weighted Cox regression. Results: Mean (±SD) δ13C and δ15N were −22.8 ± 0.4‰ and 10.2 ± 0.4‰, respectively, and δ13C (r = 0.22) and δ15N (r = 0.20) were positively correlated (P < 0.001) with fish protein intake. Animal protein was not correlated with δ13C but was significantly correlated with δ15N (dairy protein: r = 0.11; meat protein: r = 0.09; terrestrial animal protein: r = 0.12; P ≤ 0.013). δ13C was inversely associated with diabetes in adjusted analyses (HR per tertile: 0.74; 95% CI: 0.65, 0.83; P-trend < 0.001), whereas δ15N was positively associated (HR: 1.23; 95% CI: 1.09, 1.38; P-trend = 0.001). Conclusions: The isotope ratios δ13C and δ15N may both serve as potential biomarkers of fish protein intake, whereas only δ15N may reflect broader animal-source protein intake in a European population. The inverse association of δ13C, but positive association of δ15N, with incident diabetes should be interpreted in the light of knowledge of dietary intake and may assist in identifying dietary components that are associated with health risks and benefits.
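For reference, the δ notation used here expresses a sample's isotope ratio relative to an international standard (VPDB for carbon; atmospheric N2 for nitrogen), in parts per thousand:

\[
\delta^{13}\mathrm{C} = \left( \frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{standard}}} - 1 \right) \times 1000\ \text{‰}
\]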
Abstract:
Earthworms are important members of soil communities and so are used as model organisms in environmental risk assessments of chemicals. However, current risk assessments of soil invertebrates are based on short-term laboratory studies of limited ecological relevance, supplemented if necessary by site-specific field trials, which can be challenging to apply across the whole agricultural landscape. Here, we investigate whether population responses to environmental stressors and pesticide exposure can be accurately predicted by combining energy budget and agent-based models (ABMs), based on knowledge of how individuals respond to their local circumstances. A simple energy budget model was implemented within each earthworm Eisenia fetida in the ABM, based on a priori parameter estimates. Following broadly accepted physiological principles, simple algorithms specify how energy acquisition and expenditure drive life cycle processes. Each individual allocates energy between maintenance, growth and/or reproduction under varying conditions of food density, soil temperature and soil moisture. When simulating published experiments, good model fits were obtained to experimental data on individual growth, reproduction and starvation. Using the energy budget model as a platform, we developed methods to identify which of the physiological parameters in the energy budget model (rates of ingestion, maintenance, growth or reproduction) are primarily affected by pesticide applications, producing four hypotheses about how toxicity acts. We tested these hypotheses by comparing model outputs with published toxicity data on the effects of copper oxychloride and chlorpyrifos on E. fetida. Both growth and reproduction were directly affected in experiments in which sufficient food was provided, whilst maintenance was targeted under food limitation. Although we only incorporate toxic effects at the individual level, we show how ABMs can readily extrapolate to larger scales by providing good model fits to field population data. The ability of the presented model to fit the available field and laboratory data for E. fetida demonstrates the promise of the agent-based approach in ecology, by showing how biological knowledge can be used to make ecological inferences. Further work is required to extend the approach to populations of more ecologically relevant species studied at the field scale. Such a model could help extrapolate from laboratory to field conditions, from one set of field conditions to another, or from species to species.
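A minimal sketch of an individual energy-budget rule of the kind described (maintenance paid first, any surplus split between growth and reproduction); every parameter value, and the allocation rule itself, is an illustrative assumption rather than the published model:

```python
class Earthworm:
    """Toy energy-budget agent in the spirit of an E. fetida ABM.
    Intake scales with food, temperature and body surface area;
    maintenance is paid first; juveniles grow, adults also reproduce."""

    ADULT_MASS = 0.4  # g; reproduction only above this size (assumed)

    def __init__(self, mass=0.01):
        self.mass = mass  # g

    def step(self, food_density, temp_factor):
        """One time step; returns the number of cocoons produced."""
        intake = 2.0 * food_density * temp_factor * self.mass ** (2 / 3)
        maintenance = 0.5 * self.mass
        surplus = intake - maintenance
        if surplus <= 0:
            # Food limitation: maintenance is paid from body reserves
            self.mass = max(0.005, self.mass + surplus / 100.0)
            return 0
        if self.mass < self.ADULT_MASS:
            self.mass += surplus / 50.0        # juveniles: all to growth
            return 0
        self.mass += 0.5 * surplus / 50.0      # adults: split the surplus
        return int(0.5 * surplus / 0.1)        # energy per cocoon (assumed)
```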