11 results for Scotchbond Multi-Purpose Plus
in CentAUR: Central Archive University of Reading - UK
Abstract:
The evaluation of EU policy in the area of rural land use management often encounters problems of multiple and poorly articulated objectives. Agri-environmental policy has a range of aims, including natural resource protection, biodiversity conservation and the protection and enhancement of landscape quality. Forestry policy, in addition to production and environmental objectives, increasingly has social aims, including enhancement of human health and wellbeing, lifelong learning, and the cultural and amenity value of the landscape. Many of these aims are intangible, making them hard to define and quantify. This article describes two approaches for dealing with such situations, both of which rely on substantial participation by stakeholders. The first is the Agri-Environment Footprint Index, a form of multi-criteria participatory approach. The other, applied here to forestry, has been the development of ‘multi-purpose’ approaches to evaluation, which respond to the diverse needs of stakeholders through the use of mixed methods and a broad suite of indicators, selected through a participatory process. Each makes use of case studies and involves stakeholders in the evaluation process, thereby enhancing their commitment to the programmes and increasing their sustainability. Both also demonstrate more ‘holistic’ approaches to evaluation than the formal methods prescribed in the EU Common Monitoring and Evaluation Framework.
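The Agri-Environment Footprint Index described above aggregates stakeholder-weighted indicator scores into a single figure. As a minimal sketch of that kind of weighted multi-criteria index (the indicator names, score scale and weights below are hypothetical, not the AEFI's actual indicator set):

```python
def footprint_score(indicator_scores, stakeholder_weights):
    """Aggregate normalised indicator scores (here 0-10) using
    stakeholder-derived weights that sum to 1 -- a generic
    weighted multi-criteria index in the spirit of the AEFI."""
    assert abs(sum(stakeholder_weights.values()) - 1.0) < 1e-9
    return sum(indicator_scores[k] * stakeholder_weights[k]
               for k in stakeholder_weights)

# Hypothetical indicators and participatory weights
scores = {"water_quality": 7.0, "biodiversity": 5.0, "landscape": 8.0}
weights = {"water_quality": 0.5, "biodiversity": 0.3, "landscape": 0.2}
# 7*0.5 + 5*0.3 + 8*0.2 = 6.6
```

The participatory element lies in how the weights are negotiated with stakeholders; the arithmetic itself is a plain weighted sum.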
Abstract:
Objective: To investigate the sociodemographic determinants of diet quality of the elderly in four EU countries. Design: Cross-sectional study. For each country, a regression was performed of a multidimensional index of dietary quality v. sociodemographic variables. Setting: In Finland, Finnish Household Budget Survey (1998 and 2006); in Sweden, SNAC-K (2001–2004); in the UK, Expenditure & Food Survey (2006–07); in Italy, Multi-purpose Survey of Daily Life (2009). Subjects: One- and two-person households of over-50s (Finland, n 2994; UK, n 4749); over-50s living alone or in two-person households (Italy, n 7564); over-60s (Sweden, n 2023). Results: Diet quality among the EU elderly is both low on average and heterogeneous across individuals. The regression models explained a small but significant part of the observed heterogeneity in diet quality. Resource availability was associated with diet quality either negatively (Finland and UK) or in a non-linear or non-statistically significant manner (Italy and Sweden), as was the preference for food parameter. Education, not living alone and female gender were characteristics positively associated with diet quality with consistency across the four countries, unlike socio-professional status, age and seasonality. Regional differences within countries persisted even after controlling for the other sociodemographic variables. Conclusions: Poor dietary choices among the EU elderly were not caused by insufficient resources, and informational measures could be successful in promoting healthy eating for healthy ageing. On the other hand, food habits appeared largely set in the latter part of life, with age and retirement having little influence on the healthiness of dietary choices.
Abstract:
Observations of atmospheric conditions and processes in cities are fundamental to understanding the interactions between the urban surface and weather/climate, improving the performance of urban weather, air quality and climate models, and providing key information for city end-users (e.g. decision-makers, stakeholders, public). In this paper, Shanghai's urban integrated meteorological observation network (SUIMON) and some examples of intended applications are introduced. Its characteristics include being: multi-purpose (e.g. forecast, research, service), multi-function (high impact weather, city climate, special end-users), multi-scale (e.g. macro/meso-, urban-, neighborhood, street canyon), multi-variable (e.g. thermal, dynamic, chemical, bio-meteorological, ecological), and multi-platform (e.g. radar, wind profiler, ground-based, satellite-based, in-situ observation/sampling). Underlying SUIMON is a data management system to facilitate exchange of data and information. The overall aim of the network is to improve coordination strategies and instruments; to identify data gaps based on science and user driven requirements; and to intelligently combine observations from a variety of platforms by using a data assimilation system that is tuned to produce the best estimate of the current state of the urban atmosphere.
Abstract:
This paper reviews the literature concerning the practice of using Online Analytical Processing (OLAP) systems to recall information stored by Online Transactional Processing (OLTP) systems. The review provides a basis for discussing the need for information recalled through OLAP systems to preserve the transactional context of the data captured by the respective OLTP system. The paper observes an industry trend in which OLTP systems process information into data that are then stored in databases without the business rules that were used to produce them. This necessitates a practice whereby sets of business rules are used to extract, cleanse, transform and load data from disparate OLTP systems into OLAP databases to support the requirements for complex reporting and analytics. These sets of business rules are usually not the same as the business rules used to capture the data in particular OLTP systems. The paper argues that differences between the business rules used to interpret the same data sets risk gaps in semantics between the information captured by OLTP systems and the information recalled through OLAP systems. Literature concerning the modelling of business transaction information as facts with context, as part of the modelling of information systems, was reviewed to identify design trends that contribute to the design quality of OLTP and OLAP systems. The paper then argues that the quality of OLTP and OLAP system design depends critically on the capture of facts with associated context, the encoding of facts with context into data governed by business rules, the storage and sourcing of data together with those business rules, the decoding of data back into facts with context using the business rules, and the recall of facts with associated context.
The paper proposes UBIRQ, a design model to aid the co-design of data and business-rule storage for OLTP and OLAP purposes. The proposed design model opens the way for multi-purpose databases and business-rule stores shared by OLTP and OLAP systems. Such implementations would enable OLTP systems to record and store data together with the executions of business rules, allowing both OLTP and OLAP systems to query data alongside the business rules used to capture it, thereby ensuring that information recalled via OLAP systems preserves the transactional context of the data captured by the respective OLTP system.
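The central idea above, that data should be stored with a reference to the business rule that governed its capture so that OLAP recall can recover the transactional context, can be sketched as follows. This is an illustrative toy, not the UBIRQ model itself; the class and rule names are invented:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class BusinessRule:
    rule_id: str
    description: str


@dataclass(frozen=True)
class Fact:
    fact_id: int
    payload: dict
    rule_id: str  # link back to the rule in force at capture time


class RuleStore:
    """Shared rule store consulted by both OLTP capture and OLAP recall,
    so the two sides interpret a stored fact under the same semantics."""

    def __init__(self):
        self._rules = {}

    def register(self, rule):
        self._rules[rule.rule_id] = rule

    def context_of(self, fact):
        # OLAP-side recall: decode the fact's capture context
        return self._rules[fact.rule_id].description


rules = RuleStore()
rules.register(BusinessRule("VAT-UK-2006", "price captured inclusive of 17.5% VAT"))
sale = Fact(1, {"price": 117.50}, "VAT-UK-2006")
```

Without the `rule_id` link, an analytics query would see only `117.50` and could not tell whether the figure is tax-inclusive, which is exactly the semantic gap the paper warns about.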
Abstract:
We present a general Multi-Agent System framework for distributed data mining based on a Peer-to-Peer model. Agent protocols are implemented through message-based asynchronous communication. The framework adopts a dynamic load balancing policy that is particularly suitable for irregular search algorithms. A modular design allows a separation of the general-purpose system protocols and software components from the specific data mining algorithm. The experimental evaluation was carried out on a parallel frequent subgraph mining algorithm, which showed good scalability.
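The dynamic load balancing mentioned above, in which idle peers obtain work from busy ones so that irregular search does not strand tasks on a few agents, can be sketched minimally. This is a synchronous toy (the real framework uses asynchronous messaging), and the class and method names are assumptions, not the paper's API:

```python
from collections import deque


class PeerAgent:
    """Toy peer that shares work with idle neighbours -- a sketch of
    dynamic load balancing for irregular search workloads."""

    def __init__(self, name, tasks=()):
        self.name = name
        self.queue = deque(tasks)

    def idle(self):
        return not self.queue

    def donate(self):
        # Give away half of the local queue to a requesting peer;
        # donating from the tail keeps the donor's oldest work local.
        n = len(self.queue) // 2
        return [self.queue.pop() for _ in range(n)]

    def receive(self, tasks):
        self.queue.extend(tasks)


busy = PeerAgent("p0", range(8))
idle_peer = PeerAgent("p1")
if idle_peer.idle():
    idle_peer.receive(busy.donate())
```

After the exchange both peers hold four tasks; in a message-based system the `donate`/`receive` pair would instead be a request message and its reply.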
Abstract:
If the fundamental precepts of Farming Systems Research were taken literally, it would imply that 'unique' solutions should be sought for each farm. This is an unrealistic expectation, but it has led to the idea of a recommendation domain, which implies creating a taxonomy of farms in order to increase the general applicability of recommendations. Mathematical programming models are an established means of generating recommended solutions, but for such models to be effective they have to be constructed for 'truly' typical or representative situations. Multi-variate statistical techniques provide a means of creating the required typologies, particularly when an exhaustive database is available. This paper illustrates the application of this methodology in two different studies that shared the common purpose of identifying types of farming systems in their respective study areas. The issues associated with the use of factor and cluster analyses for farm typification prior to building representative mathematical programming models for Chile and Pakistan are highlighted. (C) 2003 Elsevier Science Ltd. All rights reserved.
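The clustering step behind a farm typology of the kind described above can be sketched with a minimal k-means routine. The feature tuples below are hypothetical stand-ins for standardised farm attributes; the paper's actual analysis combines factor analysis with clustering and is not reproduced here:

```python
import math
import random


def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means for farm typification; each point is a tuple of
    standardised features (e.g. area, livestock units)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        # Recompute each centroid as its cluster mean; keep the old
        # centroid if a cluster happens to empty out.
        centroids = [
            tuple(sum(d) / len(c) for d in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters


# Two hypothetical farm types: smallholdings and large mixed farms
farms = [(1, 2), (2, 1), (1.5, 1.5), (10, 12), (11, 10), (12, 11)]
cents, groups = kmeans(farms, 2)
```

Each resulting cluster centroid is then the 'representative farm' for which a mathematical programming model would be built.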
Abstract:
The purpose of this paper is to present two multi-criteria decision-making models, an Analytic Hierarchy Process (AHP) model and an Analytic Network Process (ANP) model, for the assessment of deconstruction plans, and to compare the two models with an experimental case study. Deconstruction planning is under pressure to reduce operation costs, adverse environmental impacts and duration, while improving productivity and safety in accordance with structure characteristics, site conditions and past experience. To achieve these targets in deconstruction projects, there is an impending need to develop a formal procedure for contractors to select the most appropriate deconstruction plan. Because a number of factors influence the selection of deconstruction techniques, engineers need effective tools to conduct the selection process. In this regard, multi-criteria decision-making methods such as AHP have been adopted to support deconstruction technique selection in previous research, in which it has been shown that the AHP method can help decision-makers to make informed decisions on deconstruction technique selection based on a sound technical framework. In this paper, the authors present the application and comparison of two decision-making models, the AHP model and the ANP model, for deconstruction plan assessment. The paper concludes that both AHP and ANP are viable and capable tools for deconstruction plan assessment under the same set of evaluation criteria. However, although the ANP can measure relationships among selection criteria and their sub-criteria, which are normally ignored in the AHP, the authors also indicate that whether the ANP model can provide a more accurate result should be examined in further research.
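The AHP step at the heart of the comparison above derives priority weights from a pairwise comparison matrix and checks their consistency. A minimal sketch using the row geometric-mean approximation (the criteria and comparison values below are hypothetical, and this is the textbook method, not the paper's specific model):

```python
import math


def ahp_weights(pairwise):
    """Approximate AHP priority weights via the row geometric-mean method."""
    n = len(pairwise)
    gms = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]


def consistency_ratio(pairwise, weights):
    """Saaty consistency ratio; CR < 0.1 is conventionally acceptable."""
    n = len(pairwise)
    # lambda_max estimated by averaging (A w)_i / w_i over criteria
    lam = sum(
        sum(pairwise[i][j] * weights[j] for j in range(n)) / weights[i]
        for i in range(n)
    ) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random-index values
    return ci / ri


# Hypothetical pairwise comparisons of three deconstruction criteria:
# cost, environmental impact, safety (safety judged most important)
A = [
    [1.0, 3.0, 0.5],
    [1 / 3.0, 1.0, 0.25],
    [2.0, 4.0, 1.0],
]
w = ahp_weights(A)
```

The ANP generalises this by allowing dependence and feedback among criteria, which is why the paper asks whether its extra complexity actually buys accuracy.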
The TAMORA algorithm: satellite rainfall estimates over West Africa using multi-spectral SEVIRI data
Abstract:
A multi-spectral rainfall estimation algorithm has been developed for the Sahel region of West Africa with the purpose of producing accumulated rainfall estimates for drought monitoring and food security. Radar data were used to calibrate multi-channel SEVIRI data from MSG, and a probability of rainfall at several different rain-rates was established for each combination of SEVIRI radiances. Radar calibrations from both Europe (the SatPrecip algorithm) and Niger (the TAMORA algorithm) were used. Ten-day estimates were accumulated from SatPrecip and TAMORA and compared with kriged gauge data and TAMSAT satellite rainfall estimates over West Africa. SatPrecip was found to produce large overestimates for the region, probably because of its non-local calibration. TAMORA was negatively biased for areas of West Africa with relatively high rainfall, but its skill was comparable to TAMSAT for the low-rainfall region climatologically similar to its calibration area around Niamey. These results confirm the importance of local calibration for satellite-derived rainfall estimates. As TAMORA shows no improvement in skill over TAMSAT for dekadal estimates, the extra cloud-microphysical information provided by multi-spectral data may not be useful in determining rainfall accumulations at a ten-day timescale. Work is ongoing to determine whether it shows improved accuracy at shorter timescales.
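The calibration step described above, establishing a probability of rain for each combination of SEVIRI radiances using radar as ground truth, amounts to building an empirical lookup table over quantised radiance bins. A minimal sketch with invented two-channel values (the real algorithm uses more channels, several rain-rate classes, and far larger training samples):

```python
from collections import defaultdict


def build_calibration(radiances, radar_rain, bin_width=10.0):
    """Map quantised multi-channel radiance combinations to the empirical
    probability of rain, using coincident radar observations as truth."""
    counts = defaultdict(lambda: [0, 0])  # key -> [rainy, total]
    for chans, rainy in zip(radiances, radar_rain):
        key = tuple(int(c // bin_width) for c in chans)
        counts[key][1] += 1
        counts[key][0] += int(rainy)
    return {k: r / t for k, (r, t) in counts.items()}


# Hypothetical two-channel brightness temperatures with radar rain flags:
# cold cloud tops (~200 K) raining, warm scenes (~240 K) dry
obs = [(200.0, 215.0), (201.0, 214.0), (240.0, 250.0), (242.0, 251.0)]
rain = [True, True, False, False]
table = build_calibration(obs, rain)
```

The sensitivity to local calibration reported in the abstract falls out naturally here: a table built from European radar encodes European cloud-to-rain statistics, which need not transfer to the Sahel.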
Abstract:
PURPOSE: Multi-species probiotic preparations have been suggested as having a wide spectrum of application, although few studies have compared their efficacy with that of individual component strains at equal concentrations. We therefore tested the ability of 4 single probiotics and 4 probiotic mixtures to inhibit the urinary tract pathogens Escherichia coli NCTC 9001 and Enterococcus faecalis NCTC 00775. METHODS: We used an agar spot test to test the ability of viable cells to inhibit pathogens, while a broth inhibition assay was used to assess inhibition by cell-free probiotic supernatants in both pH-neutralised and non-neutralised forms. RESULTS: In the agar spot test, all probiotic treatments showed inhibition; L. acidophilus was the most inhibitory single strain against E. faecalis, while L. fermentum was the most inhibitory against E. coli. A commercially available mixture of 14 strains (Bio-Kult(®)) was the most effective mixture against E. faecalis, and the 3-lactobacillus mixture was the most inhibitory against E. coli. Mixtures were not significantly more inhibitory than single strains. In the broth inhibition assays, all probiotic supernatants inhibited both pathogens when pH was not controlled, with only 2 treatments causing inhibition at a neutral pH. CONCLUSIONS: Both viable cells of probiotics and supernatants of probiotic cultures were able to inhibit growth of two urinary tract pathogens. Probiotic mixtures prevented the growth of urinary tract pathogens but were not significantly more inhibitory than single strains. Probiotics appear to produce metabolites that are inhibitory towards urinary tract pathogens. Probiotics display potential to reduce the incidence of urinary tract infections via inhibition of colonisation.
Abstract:
Purpose – Multinationals have always needed an operating model that works – an effective plan for executing their most important activities at the right levels of their organization, whether globally, regionally or locally. The choices involved in these decisions have never been obvious, since international firms have consistently faced trade‐offs between tailoring approaches for diverse local markets and leveraging their global scale. This paper seeks a more in‐depth understanding of how successful firms manage the global‐local trade‐off in a multipolar world. Design/methodology/approach – This paper utilizes a case study approach based on in‐depth senior executive interviews at several telecommunications companies including Tata Communications. The interviews probed the operating models of the companies we studied, focusing on their approaches to organization structure, management processes, management technologies (including information technology (IT)) and people/talent. Findings – Successful companies balance global‐local trade‐offs by taking a flexible and tailored approach toward their operating‐model decisions. The paper finds that successful companies, including Tata Communications, which is profiled in‐depth, are breaking up the global‐local conundrum into a set of more manageable strategic problems – what the authors call “pressure points” – which they identify by assessing their most important activities and capabilities and determining the global and local challenges associated with them. They then design a different operating model solution for each pressure point, and repeat this process as new strategic developments emerge. By doing so they not only enhance their agility, but they also continually calibrate that crucial balance between global efficiency and local responsiveness.
Originality/value – This paper takes a unique approach to operating model design, finding that an operating model is better viewed as several distinct solutions to specific “pressure points” rather than a single and inflexible model that addresses all challenges equally. Now more than ever, developing the right operating model is at the top of multinational executives' priorities, and an area of increasing concern; the international business arena has changed drastically, requiring thoughtfulness and flexibility instead of standard formulas for operating internationally. Old adages like “think global and act local” no longer provide the universal guidance they once seemed to.
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating-point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and the core's location within the system. Heterogeneity further increases with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend for shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and non-standard task-to-core mappings can dramatically alter performance. To find this out, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work, loop-based array updates and nearest-neighbour halo-exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, with interpolation between results as necessary.
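The final step described above, feeding measured benchmark points into a model that interpolates between them for unmeasured problem sizes, can be sketched minimally. The timings below are invented, and this linear interpolation over a single size axis is only a toy stand-in for the paper's per-component models:

```python
from bisect import bisect_left


def predict_runtime(benchmarks, problem_size):
    """Predict runtime for a deployment scenario by linear interpolation
    between measured benchmark points (problem size -> seconds),
    clamping outside the measured range."""
    sizes = sorted(benchmarks)
    if problem_size <= sizes[0]:
        return benchmarks[sizes[0]]
    if problem_size >= sizes[-1]:
        return benchmarks[sizes[-1]]
    hi = bisect_left(sizes, problem_size)
    lo = hi - 1
    s0, s1 = sizes[lo], sizes[hi]
    t0, t1 = benchmarks[s0], benchmarks[s1]
    frac = (problem_size - s0) / (s1 - s0)
    return t0 + frac * (t1 - t0)


# Hypothetical timings for one execution scenario (grid size -> seconds)
measured = {128: 0.5, 256: 1.1, 512: 2.4}
```

In the full model, separate tables of this kind for compute and halo-exchange costs, keyed additionally by decomposition and core mapping, let deployment choices be compared without exhaustive trial-and-error runs.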