23 results for Environment with multiple obstacles
at CentAUR: Central Archive University of Reading - UK
Abstract:
The evaluation of EU policy in the area of rural land use management often encounters problems of multiple and poorly articulated objectives. Agri-environmental policy has a range of aims, including natural resource protection, biodiversity conservation and the protection and enhancement of landscape quality. Forestry policy, in addition to production and environmental objectives, increasingly has social aims, including enhancement of human health and wellbeing, lifelong learning, and the cultural and amenity value of the landscape. Many of these aims are intangible, making them hard to define and quantify. This article describes two approaches for dealing with such situations, both of which rely on substantial participation by stakeholders. The first is the Agri-Environment Footprint Index, a form of multi-criteria participatory approach. The other, applied here to forestry, has been the development of ‘multi-purpose’ approaches to evaluation, which respond to the diverse needs of stakeholders through the use of mixed methods and a broad suite of indicators, selected through a participatory process. Each makes use of case studies and involves stakeholders in the evaluation process, thereby enhancing their commitment to the programmes and increasing their sustainability. Both also demonstrate more ‘holistic’ approaches to evaluation than the formal methods prescribed in the EU Common Monitoring and Evaluation Framework.
Abstract:
The popularity of wireless local area networks (WLANs) has resulted in their dense deployment around the world. While this increases capacity and coverage, the resulting increase in interference can severely degrade WLAN performance. However, the impact of interference on throughput in dense WLANs with multiple access points (APs) has received very little prior research attention. This is believed to be due to 1) the inaccurate assumption that throughput is always a monotonically decreasing function of interference and 2) the prohibitively high complexity of an accurate analytical model. In this work, we firstly provide a useful classification of commonly found interference scenarios. Secondly, we investigate the impact of interference on throughput for each class, based on an approach that determines the possibility of parallel transmissions. Extensive packet-level simulations using OPNET have been performed to support the observations made. Interestingly, the results show that in some topologies increased interference can lead to higher throughput, and vice versa.
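As a rough illustration of how a parallel-transmission check of this kind might look (not the paper's actual classification or OPNET setup), the following Python sketch uses hypothetical carrier-sense and interference ranges to decide whether two AP-client links can be active at the same time:

```python
# Toy sketch (not the paper's model): classify whether two AP-to-client links can
# transmit in parallel, using simple distance thresholds for carrier sensing and
# interference. All coordinates and ranges below are hypothetical.
from math import dist

CS_RANGE = 550.0      # carrier-sense range in metres (assumed)
INTERF_RANGE = 300.0  # range within which an ongoing transmission corrupts reception (assumed)

def can_transmit_in_parallel(ap1, client1, ap2, client2):
    """Return True if both links can be active simultaneously.

    Parallel transmission requires that (a) the two senders do not sense each
    other, so both can win channel access, and (b) neither sender is close
    enough to the other link's receiver to corrupt its reception.
    """
    senders_hidden = dist(ap1, ap2) > CS_RANGE
    rx1_clear = dist(ap2, client1) > INTERF_RANGE
    rx2_clear = dist(ap1, client2) > INTERF_RANGE
    return senders_hidden and rx1_clear and rx2_clear

# Two well-separated cells can carry traffic at the same time, which is one way
# an added "interfering" AP need not reduce throughput.
print(can_transmit_in_parallel((0, 0), (50, 0), (800, 0), (750, 0)))  # True
print(can_transmit_in_parallel((0, 0), (50, 0), (400, 0), (350, 0)))  # False
```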
Abstract:
Recent work has shown that the evolution of Drosophila melanogaster resistance to attack by the parasitoid Asobara tabida is constrained by a trade-off with larval competitive ability. However, there are two very important questions that need to be answered. First, is this a general cost, or is it parasitoid specific? Second, does a selected increase in immune response against one parasitoid species result in a correlated change in resistance to other parasitoid species? The answers to both questions will influence the coevolutionary dynamics of these species, and also may have a previously unconsidered, yet important, influence on community structure.
Abstract:
When speech is in competition with interfering sources in rooms, monaural indicators of intelligibility fail to take account of the listener’s abilities to separate target speech from interfering sounds using the binaural system. In order to incorporate these segregation abilities and their susceptibility to reverberation, Lavandier and Culling [J. Acoust. Soc. Am. 127, 387–399 (2010)] proposed a model which combines effects of better-ear listening and binaural unmasking. A computationally efficient version of this model is evaluated here under more realistic conditions that include head shadow, multiple stationary noise sources, and real-room acoustics. Three experiments are presented in which speech reception thresholds were measured in the presence of one to three interferers using real-room listening over headphones, simulated by convolving anechoic stimuli with binaural room impulse-responses measured with dummy-head transducers in five rooms. Without fitting any parameter of the model, there was close correspondence between measured and predicted differences in threshold across all tested conditions. The model’s components of better-ear listening and binaural unmasking were validated both in isolation and in combination. The computational efficiency of this prediction method allows the generation of complex “intelligibility maps” from room designs. © 2012 Acoustical Society of America
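The following Python sketch illustrates only the general idea of combining better-ear listening with binaural unmasking on a per-band basis; the band SNRs, unmasking advantages and weights are placeholder values, not outputs of the published model:

```python
# Minimal sketch of the general idea: per frequency band, take the more
# favourable ear's signal-to-noise ratio (better-ear listening) and add a
# binaural-unmasking advantage, then form a weighted average across bands.
# All numbers below are illustrative placeholders.
def effective_snr(snr_left_db, snr_right_db, bmld_db, band_weights):
    total, weight_sum = 0.0, 0.0
    for left, right, bmld, w in zip(snr_left_db, snr_right_db, bmld_db, band_weights):
        better_ear = max(left, right)      # better-ear listening
        total += w * (better_ear + bmld)   # plus binaural unmasking advantage
        weight_sum += w
    return total / weight_sum

# Hypothetical three-band example (dB values are made up):
print(effective_snr([-6.0, -3.0, -1.0], [-4.0, -5.0, -2.0], [3.0, 1.5, 0.5], [1.0, 1.0, 1.0]))
```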
Abstract:
Enterobacter sakazakii is an uncommon bacterium that is known to cause severe neonatal infection and is rare among adults. We present an unusual case of E. sakazakii bacteraemia with multiple splenic abscesses in a 75-year-old institutionalised woman, who was successfully treated with 6 weeks of imipenem and percutaneous drainage of the abscesses.
Abstract:
This paper explores the development of multi-feature classification techniques used to identify tremor-related characteristics in the Parkinsonian patient. Local field potentials were recorded from the subthalamic nucleus and the globus pallidus internus of eight Parkinsonian patients through the implanted electrodes of a deep brain stimulation (DBS) device prior to device internalization. A range of signal processing techniques were evaluated with respect to their tremor detection capability and used as inputs to a multi-feature neural network classifier to identify the activity of Parkinsonian tremor. The results of this study show that a trained multi-feature neural network is able, under certain conditions, to achieve excellent detection accuracy on patients unseen during training. Overall, the tremor detection accuracy was mixed, although an accuracy of over 86% was achieved in four of the eight patients.
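As a hedged illustration of a multi-feature neural-network tremor classifier (not the authors' pipeline), the sketch below extracts two assumed spectral features from synthetic LFP-like segments and trains a small network, testing it on data held out of training:

```python
# Illustrative sketch only: spectral features feeding a small neural-network
# classifier. The feature choice (tremor-band and beta-band power) and the
# synthetic signals are assumptions, not the paper's actual method or data.
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

FS = 1000  # sampling rate in Hz (assumed)

def band_power(x, lo, hi):
    f, p = welch(x, fs=FS, nperseg=512)
    return p[(f >= lo) & (f <= hi)].sum()

def features(segment):
    # Two simple spectral features per segment: tremor-band (3-7 Hz) and beta-band (13-30 Hz) power.
    return [band_power(segment, 3, 7), band_power(segment, 13, 30)]

rng = np.random.default_rng(0)
def synthetic_segment(tremor):
    t = np.arange(FS) / FS
    tone = np.sin(2 * np.pi * 5 * t) if tremor else 0.0
    return tone + rng.normal(scale=1.0, size=FS)

# Synthetic train/test split standing in for training on some patients and
# evaluating on recordings unseen during training.
X_train = [features(synthetic_segment(k % 2)) for k in range(140)]
y_train = [k % 2 for k in range(140)]
X_test = [features(synthetic_segment(k % 2)) for k in range(20)]
y_test = [k % 2 for k in range(20)]

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```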
Abstract:
Real-time estimates of output gaps and inflation gaps differ from the values that are obtained using data available long after the event. Part of the problem is that the data on which the real-time estimates are based is subsequently revised. We show that vector-autoregressive models of data vintages provide forecasts of post-revision values of future observations and of already-released observations capable of improving estimates of output and inflation gaps in real time. Our findings indicate that annual revisions to output and inflation data are in part predictable based on their past vintages.
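A minimal sketch of the underlying idea, using simulated data and an assumed two-vintage setup rather than the authors' specification: fit a vector autoregression to successive vintages of the same series so that past revision patterns inform real-time estimates of the post-revision value.

```python
# Hedged sketch: treat the first-release and later-revised vintages of a series
# as a bivariate system and fit a VAR, so forecasts of the revised vintage can
# be made in real time. The data here are simulated for illustration.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
T = 200
truth = 0.5 + 0.7 * np.sin(np.arange(T) / 8) + rng.normal(0, 0.2, T)
first_release = truth + rng.normal(0, 0.3, T)  # noisy initial estimate
revised = truth + rng.normal(0, 0.1, T)        # later, less noisy vintage

data = pd.DataFrame({"first_release": first_release, "revised": revised})
model = VAR(data).fit(4)  # VAR(4) on the vintage pair (lag order assumed)

# One-step-ahead forecast of both vintages, i.e. a real-time prediction of the
# figure that will only be observed after revision.
print(model.forecast(data.values[-model.k_ar:], steps=1))
```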
Abstract:
Measurements of weighed dietary intakes and plasma determinations of albumin, iron, zinc, ascorbic acid and total iron-binding capacity (TIBC) were carried out on twenty female multiple sclerosis patients in a long-stay hospital for disabled people. The group included ten patients with a recent history of pressure sores, closely matched with ten patients without pressure sores. Mean daily intake of carbohydrate was found to be higher in the non-pressure sore group, whilst intake of zinc was lower in this group. Intakes of all other nutrients were comparable between the two groups. For both groups, intakes of energy, folate, vitamin D, iron and zinc were less than recommended values. Mean plasma levels of albumin and iron were towards the lower limit of the normal range, whilst that for zinc was considerably below the normal range. Plasma TIBC was slightly above the normal range. Levels of plasma iron and zinc were significantly lower in the pressure sore group. The data indicate that severely disabled hospitalized patients with multiple sclerosis may be at risk of poor nutritional status. The results suggest that in the presence of pressure sores there are increased requirements for specific nutrients, notably zinc and iron. Consideration is given to the possible value of supplementation for these individuals.
Abstract:
Innovation in the built environment involves multiple actors with diverse motivations. Policy-makers find it difficult to promote changes that require cooperation from these numerous and dispersed actors and to align their sometimes divergent interests. Established research traditions on the economics and management of innovation pay only limited attention to stakeholder choices, engagement and motivation. This paper reviews the insights that emerge as research in these traditions comes into contact with work on innovation from sociological and political perspectives. It contributes by highlighting growing areas of research on user involvement in complex innovation, collective action, distributed innovation and transition management. To differing extents, these provide approaches to incorporate the motivations of different actors into theoretical understanding. These indicate new directions for research that promise to enrich understanding of innovation.
Abstract:
Soluble reactive phosphorus (SRP) plays a key role in eutrophication, a global problem decreasing habitat quality and in-stream biodiversity. Mitigation strategies are required to prevent SRP fluxes from exceeding critical levels, and must be robust in the face of potential changes in climate, land use and a myriad of other influences. To establish the longevity of these strategies it is therefore crucial to consider the sensitivity of catchments to multiple future stressors. This study evaluates how the water quality and hydrology of a major river system in the UK (the River Thames) respond to alterations in climate, land use and water resource allocations, and investigates how these changes impact the relative performance of management strategies over an 80-year period. In the River Thames, the relative contributions of SRP from diffuse and point sources vary seasonally. Diffuse sources of SRP from agriculture dominate during periods of high runoff, and point sources during low flow periods. SRP concentrations rose under any future scenario which either increased a) surface runoff or b) the area of cultivated land. Under these conditions, SRP was sourced from agriculture, and the most effective single mitigation measures were those which addressed diffuse SRP sources. Conversely, where future scenarios reduced flow e.g. during winters of reservoir construction, the significance of point source inputs increased, and mitigation measures addressing these issues became more effective. In catchments with multiple point and diffuse sources of SRP, an all-encompassing effective mitigation approach is difficult to achieve with a single strategy. In order to attain maximum efficiency, multiple strategies might therefore be employed at different times and locations, to target the variable nature of dominant SRP sources and pathways.
Abstract:
Unorganized traffic is a generalized form of travel wherein vehicles do not adhere to any predefined lanes and can travel in between lanes. Such travel is visible in a number of countries, e.g. India, where it enables a higher traffic bandwidth, more overtaking and more efficient travel. These advantages are visible when the vehicles vary considerably in size and speed, in the absence of which predefined lanes are near-optimal. Motion planning for multiple autonomous vehicles in unorganized traffic deals with deciding the manner in which every vehicle travels, ensuring that no vehicle collides with another vehicle or with static obstacles. In this paper the notion of predefined lanes is generalized to model unorganized travel for the purpose of planning the vehicles' travel. A uniform cost search is used for finding the optimal motion strategy of a vehicle amidst the known travel plans of the other vehicles. The aim is to maximize the separation between the vehicles and static obstacles. The search is responsible for defining an optimal lane distribution among vehicles in the planning scenario. Clothoid curves are used for maintaining a lane or changing lanes. Experiments are performed by simulation over a set of challenging scenarios with a complex grid of obstacles. Additionally, behaviours of overtaking, waiting for a vehicle to cross, and following another vehicle are exhibited.
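The sketch below illustrates a uniform cost search of this general kind on a hypothetical grid, with step costs that penalise proximity to obstacles so the cheapest path keeps its distance from them; lane modelling and clothoid curves are not reproduced here.

```python
# Illustrative sketch (not the paper's planner): uniform cost search over a grid
# in which each step's cost grows as the cell gets closer to the nearest obstacle,
# so the lowest-cost path maximises separation from obstacles. Grid and costs are hypothetical.
import heapq

def plan(grid, start, goal):
    """grid[r][c] == 1 marks an obstacle; returns the lowest-cost path as a list of cells."""
    rows, cols = len(grid), len(grid[0])
    obstacles = [(r, c) for r in range(rows) for c in range(cols) if grid[r][c]]

    def step_cost(cell):
        # Base cost 1 plus a penalty inversely related to distance from the nearest obstacle.
        if not obstacles:
            return 1.0
        d = min(abs(cell[0] - r) + abs(cell[1] - c) for r, c in obstacles)
        return 1.0 + 2.0 / (d + 1)

    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and not grid[nxt[0]][nxt[1]]:
                new_cost = cost + step_cost(nxt)
                if new_cost < best.get(nxt, float("inf")):
                    best[nxt] = new_cost
                    heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(plan(grid, (0, 0), (2, 3)))
```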
Abstract:
A full assessment of para-virtualization is important, because without knowledge of the various overheads, users cannot judge whether using virtualization is a good idea or not. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as under para-virtualization. The idea is to see what the overheads of para-virtualization are, as well as looking at the overheads of turning on monitoring and logging. The knowledge gained from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). In order to assess these virtualization systems, we run the benchmarks on bare metal, then under para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) used by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different platforms: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all of which are servers available at the University of Reading. A functional virtualization system is multi-layered and is driven by the privileged components. Virtualization systems can host multiple guest operating systems, each of which runs in its own domain, and the system schedules virtual CPUs and memory within each virtual machine (VM) to make the best use of the available resources; the guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed in the guest OS or application; the guest OS or application is not aware of the virtualized environment and runs normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines; these guest operating systems are aware that they are running on a virtual machine, and they provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is an OS-assisted virtualization, in which some modifications are made to the guest operating system to enable better performance. In this kind of virtualization, the guest operating system is aware that it is running on virtualized hardware and not on the bare hardware. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system, reducing the performance overheads. The use of para-virtualization is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.
It has been shown that para-virtualization does not impose a significant performance overhead in high-performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The “apparent” improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. In order to support this hypothesis, it is first necessary to define exactly what is meant by a “class” of application, and secondly it will be necessary to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is associated with the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
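A minimal sketch of the kind of overhead measurement described above, with a placeholder benchmark command and a placeholder bare-metal timing rather than the Netlib runs used in the study:

```python
# Hedged sketch: time the same benchmark on bare metal and inside a
# para-virtualized guest, then report the relative overhead. The command
# "./linpack_benchmark" and the bare-metal figure are placeholders.
import subprocess
import time

def time_benchmark(cmd, runs=3):
    """Return the mean wall-clock time in seconds over several runs of the benchmark."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, shell=True, check=True, capture_output=True)
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

# Run this script once on bare metal and once inside the guest, then compare:
bare_metal_s = 12.4                                    # recorded earlier on the host (placeholder)
virtualized_s = time_benchmark("./linpack_benchmark")  # placeholder benchmark command
overhead_pct = 100.0 * (virtualized_s - bare_metal_s) / bare_metal_s
print(f"para-virtualization overhead: {overhead_pct:.1f}%")
```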
Abstract:
The reaction between [Mo(eta(3)-C3H5)(CO)(2)(NCMe)(2)Br] (1) and the ferrocenylamidobenzimidazole ligands FcCO(NH(2)benzim) (L1) and (FcCO)(2)(NHbenzim) (L2) led to a binuclear (2) and a trinuclear (3) Mo-Fe complex, respectively. The single-crystal X-ray structure of [Mo(eta(3)-C3H5)(CO)(2)(L2)Br] [L2 = {[(eta(5)-C5H5)Fe(eta(5)-C5H4CO)](2)(2-NH-benzimidazol-yl)}] shows that L2 is coordinated to the endo Mo(eta(3)-C3H5)(CO)(2) group in a kappa(2)-N,O-bidentate chelating fashion whereas the Mo-II centre displays a pseudooctahedral environment with Br occupying an equatorial position. Complex 2 was formulated as [Mo(eta(3)-C3H5)(CO)(2)(L1)Br] on the basis of a combination of spectroscopic data, elemental analysis, conductivity and DFT calculations. L1 acts as a kappa(2)-N,N-bidentate ligand. In both L1 and L2, the HOMOs are mainly localised on iron while the C=O bond(s) contribute to the LUMO(s) and the next highest energy orbitals are Fe-allyl antibonding orbitals. When the ligands bind to Mo(eta(3)-C3H5)(CO)(2)Br, the greatest difference is that Mo becomes the strongest contributor to the HOMO. Electrochemical studies show that, in complex 2, no electronic interaction exists between the two ferrocenyl ligands and that the first electron has been removed from the Mo-II-centred HOMO. (c) Wiley-VCH Verlag GmbH & Co. KGaA.