947 results for Indicator Component Framework (ICF)
Abstract:
This paper presents a framework to integrate requirements management and design knowledge reuse. The research approach begins with a literature review in design reuse and requirements management to identify appropriate methods within each domain. A framework is proposed based on the identified requirements. The framework is then demonstrated using a case study example: vacuum pump design. Requirements are presented as a component of the integrated design knowledge framework. The proposed framework enables the application of requirements management as a dynamic process, including capture, analysis and recording of requirements. It takes account of the evolving requirements and the dynamic nature of the interaction between requirements and product structure through the various stages of product development.
Abstract:
Sustainable development depends on maintaining ecosystem services which are concentrated in coastal marine and estuarine ecosystems. Analyses of the science needed to manage human uses of ecosystem services have concentrated on terrestrial ecosystems. Our focus is on the provision of multidisciplinary data needed to inform adaptive, ecosystem-based approaches (EBAs) for maintaining coastal ecosystem services based on comparative ecosystem analyses. Key indicators of pressures on coastal ecosystems, ecosystem states and the impacts of changes in states on services are identified for monitoring and analysis at a global coastal network of sentinel sites nested in the ocean-climate observing system. Biodiversity is targeted as the “master” indicator because of its importance to a broad spectrum of services. Ultimately, successful implementation of EBAs will depend on establishing integrated, holistic approaches to ocean governance that oversee the development of integrated, operational ocean observing systems based on the data and information requirements specified by a broad spectrum of stakeholders for sustainable development. Sustained engagement of such a spectrum of stakeholders on a global scale is not feasible. The global coastal network will need to be customized locally and regionally based on priorities established by stakeholders in their respective regions. The E.U. Marine Strategy Framework Directive and the U.S. Recommendations of the Interagency Ocean Policy Task Force are important examples of emerging regional scale approaches. The effectiveness of these policies will depend on the co-evolution of ocean policy and the observing system under the auspices of integrated ocean governance.
Abstract:
Phytoplankton size structure is an important indicator of the state of the pelagic ecosystem. Stimulated by the paucity of in situ observations on size structure, and by the sampling advantages of autonomous remote platforms, new efforts are being made to infer the size-structure of the phytoplankton from oceanographic variables that may be measured at high temporal and spatial resolution, such as total chlorophyll concentration. Large-scale analysis of in situ data has revealed coherent relationships between size-fractionated chlorophyll and total chlorophyll that can be quantified using the three-component model of Brewin et al. (2010). However, there are variations surrounding these general relationships. In this paper, we first revise the three-component model using a global dataset of surface phytoplankton pigment measurements. Then, using estimates of the average irradiance in the mixed-layer, we investigate the influence of ambient light on the parameters of the three-component model. We observe significant relationships between model parameters and the average irradiance in the mixed-layer, consistent with ecological knowledge. These relationships are incorporated explicitly into the three-component model to illustrate variations in the relationship between size-structure and total chlorophyll, ensuing from variations in light availability. The new model may be used as a tool to investigate modifications in size-structure in the context of a changing climate.
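The partitioning described above can be written as C_i = C_i^m[1 − exp(−S_i C)] for the combined picoplankton+nanoplankton and picoplankton fractions, with the nano- and microplankton fractions obtained by difference. A minimal Python sketch follows; the parameter values are illustrative assumptions, not the fitted coefficients of Brewin et al. (2010) or of the revised model:

```python
import numpy as np

def three_component(total_chl, c_pn_max=1.057, s_pn=0.851,
                    c_p_max=0.107, s_p=6.801):
    """Partition total chlorophyll (mg m^-3) into pico-, nano- and
    micro-phytoplankton fractions following the functional form of the
    Brewin et al. (2010) three-component model.  The default parameter
    values are illustrative assumptions only."""
    c = np.asarray(total_chl, dtype=float)
    c_pn = c_pn_max * (1.0 - np.exp(-s_pn * c))   # pico + nano fraction
    c_p = c_p_max * (1.0 - np.exp(-s_p * c))      # pico fraction
    c_n = c_pn - c_p                              # nano, by difference
    c_m = c - c_pn                                # micro, the residual
    return c_p, c_n, c_m
```

The light dependence discussed in the abstract would enter by making the saturation parameters functions of mixed-layer irradiance rather than constants.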
Abstract:
The purpose of this study is to produce a series of Conceptual Ecological Models (CEMs) that represent sublittoral rock habitats in the UK. CEMs are diagrammatic representations of the influences and processes that occur within an ecosystem. They can be used to identify critical aspects of an ecosystem that may be studied further, or serve as the basis for the selection of indicators for environmental monitoring purposes. The models produced by this project are control diagrams, representing the unimpacted state of the environment free from anthropogenic pressures. It is intended that the models produced by this project will be used to guide indicator selection for the monitoring of this habitat in UK waters. CEMs may eventually be produced for a range of habitat types defined under the UK Marine Biodiversity Monitoring R&D Programme (UKMBMP), which, along with stressor models, are designed to show the interactions within impacted habitats, would form the basis of a robust method for indicator selection. This project builds on the work to develop CEMs for shallow sublittoral coarse sediment habitats (Alexander et al 2014). The project scope included those habitats defined as ‘sublittoral rock’. This definition includes those habitats that fall into the EUNIS Level 3 classifications A3.1 Atlantic and Mediterranean high energy infralittoral rock, A3.2 Atlantic and Mediterranean moderate energy infralittoral rock, A3.3 Atlantic and Mediterranean low energy infralittoral rock, A4.1 Atlantic and Mediterranean high energy circalittoral rock, A4.2 Atlantic and Mediterranean moderate energy circalittoral rock, and A4.3 Atlantic and Mediterranean low energy circalittoral rock as well as the constituent Level 4 and 5 biotopes that are relevant to UK waters. A species list of characterising fauna to be included within the scope of the models was identified using an iterative process to refine the full list of species found within the relevant Level 5 biotopes. 
A literature review was conducted using a pragmatic and iterative approach to gather evidence regarding species traits and information that would be used to inform the models and characterise the interactions that occur within the sublittoral rock habitat. All information gathered during the literature review was entered into a data logging pro-forma spreadsheet that accompanies this report. Wherever possible, attempts were made to collect information from UK-specific peer-reviewed studies, although other sources were used where necessary. All data gathered was subject to a detailed confidence assessment. Expert judgement by the project team was utilised to provide information for aspects of the models for which references could not be sourced within the project timeframe. A multivariate analysis approach was adopted to assess ecologically similar groups (based on ecological and life history traits) of fauna from the identified species to form the basis of the models. A model hierarchy was developed based on these ecological groups. One general control model was produced that indicated the high-level drivers, inputs, biological assemblages, ecosystem processes and outputs that occur in sublittoral rock habitats. In addition to this, seven detailed sub-models were produced, which each focussed on a particular ecological group of fauna within the habitat: ‘macroalgae’, ‘temporarily or permanently attached active filter feeders’, ‘temporarily or permanently attached passive filter feeders’, ‘bivalves, brachiopods and other encrusting filter feeders’, ‘tube building fauna’, ‘scavengers and predatory fauna’, and ‘non-predatory mobile fauna’. Each sub-model is accompanied by an associated confidence model that presents confidence in the links between each model component. The models are split into seven levels and take spatial and temporal scale into account through their design, as well as magnitude and direction of influence. 
The seven levels include regional to global drivers, water column processes, local inputs/processes at the seabed, habitat and biological assemblage, output processes, local ecosystem functions, and regional to global ecosystem functions. The models indicate that whilst the high level drivers that affect each ecological group are largely similar, the output processes performed by the biota and the resulting ecosystem functions vary both in number and importance between groups. Confidence within the models as a whole is generally high, reflecting the level of information gathered during the literature review. Physical drivers which influence the ecosystem were found to be of high importance for the sublittoral rock habitat, with factors such as wave exposure, water depth and water currents noted to be crucial in defining the biological assemblages. Other important factors such as recruitment/propagule supply, and those which affect primary production, such as suspended sediments, light attenuation and water chemistry and temperature, were also noted to be key and act to influence the food sources consumed by the biological assemblages of the habitat, and the biological assemblages themselves. Output processes performed by the biological assemblages are variable between ecological groups depending on the specific flora and fauna present and the role they perform within the ecosystem. Of particular importance are the outputs performed by the macroalgae group, which are diverse in nature and exert influence over other ecological groups in the habitat. Important output processes from the habitat as a whole include primary and secondary production, bioengineering, biodeposition (in mixed sediment habitats) and the supply of propagules; these in turn influence ecosystem functions at the local scale such as nutrient and biogeochemical cycling, supply of food resources, sediment stability (in mixed sediment habitats), habitat provision and population and algae control. 
The export of biodiversity and organic matter, biodiversity enhancement and biotope stability are the resulting ecosystem functions that occur at the regional to global scale. Features within the models that are most useful for monitoring habitat status and change due to natural variation have been identified, as have those that may be useful for monitoring to identify anthropogenic causes of change within the ecosystem. Biological, physical and chemical features of the ecosystem have been identified as potential indicators to monitor natural variation, whereas biological factors and those physical /chemical factors most likely to affect primary production have predominantly been identified as most likely to indicate change due to anthropogenic pressures.
Abstract:
The Healthy and Biologically Diverse Seas Evidence Group (HBDSEG) has been tasked with providing the technical advice for the implementation of the Marine Strategy Framework Directive (MSFD) with respect to descriptors linked to biodiversity. A workshop was held in London to address one of the Research and Development (R&D) proposals entitled: ‘Mapping the extent and distribution of habitats using acoustic and remote techniques, relevant to indicators for area/extent/habitat loss.’ The aim of the workshop was to identify, define and assess the feasibility of potential indicators of benthic habitat distribution and extent, and identify the R&D work which could be required to fully develop these indicators. The main points that came out of the workshop were: (i) There are many technical aspects of marine habitat mapping that still need to be resolved if cost-effective spatial indicators are to be developed. Many of the technical aspects that need addressing surround issues of consistency, confidence and repeatability. These areas should be tackled by the JNCC Habitat Mapping and Classification Working Group and the HBDSEG Seabed Mapping Working Group. (ii) There is a need for benthic ecologists (through the HBDSEG Benthic Habitats Subgroup and the JNCC Marine Indicators Group) to finalise the list of habitats for which extent and/or distribution indicators should be considered for development, building upon the recommendations from this report. When reviewing the list of indicators, benthic habitats could also be distinguished into those habitats that are defined/determined primarily by physical parameters (although including biological assemblages) (e.g. subtidal shallow sand) and those defined primarily by their biological assemblage (e.g. seagrass beds). This distinction is important as some anthropogenic pressures may influence the biological component of the ecosystem despite not having a quantifiable effect on the physical habitat distribution/extent. 
(iii) The scale and variety of UK benthic habitats makes any attempt to undertake comprehensive direct mapping exercises prohibitively expensive (especially where there is a need for repeat surveys for assessment). There is a clear need therefore to develop a risk-based approach that uses indirect indicators (e.g. modelling), such as habitats at risk from pressures caused by current human activities, to develop priorities for information gathering. The next steps that came out of the workshop were: (i) A combined approach should be developed by the JNCC Marine Indicators Group together with the HBDSEG Benthic Habitats Subgroup, which will compile and ultimately synthesise all the criteria used by the three different groups from the workshop. The agreed combined approach will be used to undertake a final review of the habitats considered during the workshop, and to evaluate any remaining habitats in order to produce a list of habitats for indicator development for which extent and/or distribution indicators could be appropriate. (ii) The points of advice raised at this workshop, alongside the combined approach aforementioned, and the final list of habitats for extent and/or distribution indicator development will be used to develop a prioritised list of actions to inform the next round of R&D proposals for benthic habitat indicator development in 2014. This will be done through technical discussions within JNCC and the relevant HBDSEG Subgroups. The preparation of recommendations by these groups should take into account existing work programmes, and consider the limited resources available to undertake any further R&D work.
Abstract:
A recognised aim of science education is to promote critical engagement with science in the media. Evidence suggests that this is challenging for both teachers and pupils and that science education does not yet adequately prepare young people for this task. Furthermore, in the absence of clear guidance as to what this means and how it may be achieved, it is difficult for teachers to develop approaches and resources that address the matter and that systematically promote such critical engagement within their teaching programmes. Twenty-six individuals with recognised expertise or interest in science in the media, drawn from a range of disciplines and areas of practice, constituted a specialist panel in this study. The question this research sought to answer was: ‘What are the elements of knowledge, skill and attitude which underpin critical reading of science-based news reports?’ During in-depth individual interviews the panel were asked to explore what they considered to be essential elements of knowledge, skills and attitude which people need to enable them to respond critically to news reports with a science component. Analysis of the data revealed fourteen fundamental elements which together contribute to an individual’s capacity to engage critically with science-based news. These are classified in five categories: ‘knowledge of science’, ‘knowledge of writing and language’, ‘knowledge about news, newspapers and journalism’, ‘skills’ and ‘attitudes’. Illustrative profiles of each category along with indicators of critical engagement are presented. The implications for curriculum planning and pedagogy are considered.
Abstract:
At the outset of a discussion of evaluating digital musical instruments, that is to say instruments whose sound generators are digital and separable though not necessarily separate from their control interfaces (Malloch, 2006), it is reasonable to ask what the term evaluation in this context really means. After all, there may be many perspectives from which to view the effectiveness or otherwise of the instruments we build. For most performers, performance on an instrument becomes a means of evaluating how well it functions in the context of live music making, and their measure of success is the response of the audience to their performance. Audiences evaluate performances on the basis of how engaged they feel they have been by what they have seen and heard. When questioned, they are likely to describe good performances as “exciting,” “skillful,” “musical.” Bad performances are “boring,” and those which are marred by technical malfunction are often dismissed out of hand. If performance is considered to be a valid means of evaluating a musical instrument, then it follows that, for the field of DMI design, a much broader definition of the term “evaluation” than that typically used in human-computer interaction (HCI) is required to reflect the fact that there are a number of stakeholders involved in the design and evaluation of DMIs. In addition to players and audiences, there are also composers, instrument builders, component manufacturers, and perhaps even customers, each of whom will have a different concept of what is meant by “evaluation.”
Abstract:
Aim: Two studies, each comparing a Type I diabetes group with a control group, were conducted to assess the reproducibility of flow-mediated dilatation (FMD) and to analyse blood flow data normally discarded during FMD measurement.
Design: The studies were sequential and differed only with regard to operator and ultrasound machine. Seventy-two subjects with diabetes and 71 controls were studied in total.
Methods: Subjects had FMD measured conventionally. Blood velocity waveforms were averaged over 10 pulses post forearm ischaemia and their component frequencies analysed using the wavelet transform, a mathematical tool for waveform analysis. The component frequencies were grouped into 11 bands to facilitate analysis.
Results: Subjects were well-matched between studies. In Study 1, FMD was significantly impaired in subjects with Type I diabetes vs. controls (median 4.35%, interquartile range 3.10-4.80 vs. 6.50, 4.79-9.42, P < 0.001). No differences were detected between groups in Study 2. However, analysis of blood velocity waveforms yielded significant differences between groups in two frequency bands in each study.
Conclusions: This report highlights concerns over the reproducibility of FMD measures. Further work is required to fully elucidate the role of analysing velocity waveforms after forearm ischaemia.
Abstract:
The finite element method plays an extremely important role in forging process design as it provides a valid means to quantify forging errors and thereby govern die shape modification to improve the dimensional accuracy of the component. However, this dependency on process simulation could raise significant problems and present a major drawback if the finite element simulation results were inaccurate. This paper presents a novel approach to assess the dimensional accuracy and shape quality of aeroengine blades formed from finite element hot-forging simulation. The proposed virtual inspection system uses conventional algorithms adopted by modern coordinate measurement processes as well as the latest free-form surface evaluation techniques to provide a robust framework for virtual forging error assessment. Established techniques for the physical registration of real components have been adapted to localise virtual models in relation to a nominal Design Coordinate System. Blades are then automatically analysed using a series of intelligent routines to generate measurement data and compute dimensional errors. The results of a comparison study indicate that the virtual inspection results and actual coordinate measurement data are highly comparable, validating the approach as an effective and accurate means to quantify forging error in a virtual environment. Consequently, this provides adequate justification for the implementation of the virtual inspection system in the virtual process design, modelling and validation of forged aeroengine blades in industry.
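Localising a virtual model relative to a nominal Design Coordinate System is, at its core, a rigid registration problem. A minimal sketch is given below, assuming known point correspondences (registration in coordinate-measurement practice is more involved), using the standard SVD-based Kabsch best-fit; it is an illustration of the general technique, not the paper's specific routines:

```python
import numpy as np

def rigid_register(measured, nominal):
    """Best-fit rigid transform (rotation R, translation t) mapping
    measured points onto nominal Design Coordinate System points via
    the SVD-based Kabsch method.  Assumes one-to-one correspondences;
    a simplified stand-in for full CMM-style registration."""
    P = np.asarray(measured, float)
    Q = np.asarray(nominal, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

def residuals(measured, nominal, R, t):
    """Point-wise deviations after alignment; such residuals are what
    drive dimensional-error metrics in a virtual inspection."""
    P = np.asarray(measured, float)
    return np.linalg.norm((R @ P.T).T + t - np.asarray(nominal, float), axis=1)
```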
Abstract:
We detail the calculations of North Sea Large Fish Indicator values for 2009-2011, demonstrating an apparent stall in recovery. Therefore, recovery to the Marine Strategy Framework Directive's good environmental status of 0.3 by the 2020 deadline now looks less certain and may take longer than was expected using data from 2006 to 2008.
Abstract:
The maintenance of biodiversity is a fundamental theme of the Marine Strategy Framework Directive. Appropriate indicators to monitor change in biodiversity, along with associated targets representing "good environmental status" (GES), are required to be in place by July 2012. A method for selecting species-specific metrics to fulfil various specified indicator roles is proposed for demersal fish communities. Available data frequently do not extend far enough back in time to allow GES to be defined empirically. In such situations, trends-based targets offer a pragmatic solution. A method is proposed for setting indicator-level targets for the number of species-specific metrics required to meet their trends-based metric-level targets. This is based on demonstrating significant departures from the binomial distribution. The procedure is trialled using North Sea demersal fish survey data. Although fisheries management in the North Sea has improved in recent decades, management goals to stop further decline in biodiversity, and to initiate recovery, are yet to be met.
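The indicator-level test described above asks whether more species-specific metrics meet their trends-based targets than the binomial distribution would predict by chance. A minimal sketch using only the standard library follows; the null probability p = 0.5 and significance level are illustrative assumptions, not the paper's calibrated values:

```python
from math import comb

def binomial_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance that at least k of
    n species-specific metrics meet their trends-based metric-level
    targets under the null of no real trend."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def metrics_needed(n, alpha=0.05, p=0.5):
    """Smallest k such that observing k or more metric-level passes is a
    significant departure from the binomial at level alpha -- a sketch
    of how an indicator-level target could be set."""
    for k in range(n + 1):
        if binomial_tail(n, k, p) <= alpha:
            return k
    return None
```

For example, with 10 metrics and p = 0.5, at least 9 would need to meet their trends-based targets before the indicator-level target is met at the 5% level.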
Abstract:
The operation of supply chains (SCs) has for many years been focused on efficiency, leanness and responsiveness. This has resulted in reduced slack in operations, compressed cycle times, increased productivity and minimised inventory levels along the SC. Combined with tight tolerance settings for the realisation of logistics and production processes, this has led to SC performances that are frequently not robust. SCs are becoming increasingly vulnerable to disturbances, which can decrease the competitive power of the entire chain in the market. Moreover, in the case of food SCs non-robust performances may ultimately result in empty shelves in grocery stores and supermarkets.
The overall objective of this research is to contribute to Supply Chain Management (SCM) theory by developing a structured approach to assess SC vulnerability, so that robust performances of food SCs can be assured. We also aim to help companies in the food industry to evaluate their current state of vulnerability, and to improve their performance robustness through a better understanding of vulnerability issues. The following research questions (RQs) stem from these objectives:
RQ1: What are the main research challenges related to (food) SC robustness?
RQ2: What are the main elements that have to be considered in the design of robust SCs and what are the relationships between these elements?
RQ3: What is the relationship between the contextual factors of food SCs and the use of disturbance management principles?
RQ4: How can the impact of disturbances in (food) SC processes on the robustness of (food) SC performances be systematically assessed?
To answer these RQs we used different methodologies, both qualitative and quantitative. For each question, we conducted a literature survey to identify gaps in existing research and define the state of the art of knowledge on the related topics. For the second and third RQ, we conducted both exploration and testing on selected case studies. Finally, to obtain more detailed answers to the fourth question, we used simulation modelling and scenario analysis for vulnerability assessment.
Main findings are summarised as follows.
Based on an extensive literature review, we answered RQ1. The main research challenges were related to the need to define SC robustness more precisely, to identify and classify disturbances and their causes in the context of the specific characteristics of SCs and to make a systematic overview of (re)design strategies that may improve SC robustness. Also, we found that it is useful to be able to discriminate between varying degrees of SC vulnerability and to find a measure that quantifies the extent to which a company or SC shows robust performances when exposed to disturbances.
To address RQ2, we define SC robustness as the degree to which a SC shows an acceptable performance in (each of) its Key Performance Indicators (KPIs) during and after an unexpected event that caused a disturbance in one or more logistics processes. Based on the SCM literature we identified the main elements needed to achieve robust performances and structured them together to form a conceptual framework for the design of robust SCs. We then explained the logic of the framework and elaborated on each of its main elements: the SC scenario, SC disturbances, SC performance, sources of food SC vulnerability, and redesign principles and strategies.
Based on three case studies, we answered RQ3. Our major findings show that the contextual factors have a consistent relationship to Disturbance Management Principles (DMPs). The product and SC environment characteristics are contextual factors that are hard to change and these characteristics initiate the use of specific DMPs as well as constrain the use of potential response actions. The process and the SC network characteristics are contextual factors that are easier to change, and they are affected by the use of the DMPs. We also found a notable relationship between the type of DMP likely to be used and the particular combination of contextual factors present in the observed SC.
To address RQ4, we presented a new method for vulnerability assessments, the VULA method. The VULA method helps to identify how much a company is underperforming on a specific Key Performance Indicator (KPI) in the case of a disturbance, how often this would happen and how long it would last. It ultimately informs the decision maker about whether process redesign is needed and what kind of redesign strategies should be used in order to increase the SC’s robustness. The VULA method is demonstrated in the context of a meat SC using discrete-event simulation. The case findings show that performance robustness can be assessed for any KPI using the VULA method.
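The three questions a VULA-style assessment answers for a KPI (how much it underperforms, how often, and for how long) can be sketched over a simulated KPI time series. The function and metric names below are illustrative assumptions, not the thesis's own definitions:

```python
import numpy as np

def vulnerability_profile(kpi, target):
    """Sketch of a VULA-style assessment: given a KPI time series and a
    target level, report how much the KPI underperforms (mean depth of
    shortfall), how often (number of below-target episodes) and how
    long (mean episode duration, in time steps)."""
    kpi = np.asarray(kpi, float)
    below = kpi < target
    depth = float((target - kpi)[below].mean()) if below.any() else 0.0
    # locate contiguous below-target episodes via edges in the mask
    edges = np.diff(np.concatenate(([0], below.astype(int), [0])))
    starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
    durations = ends - starts
    return {
        "mean_depth": depth,
        "episodes": int(len(durations)),
        "mean_duration": float(durations.mean()) if len(durations) else 0.0,
    }
```

In a discrete-event simulation setting, the `kpi` series would come from the simulated SC scenario under a given disturbance, and the profile would inform whether a redesign is warranted.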
To sum up the project, all findings were incorporated within an integrated framework for designing robust SCs. The integrated framework consists of the following steps: 1) Description of the SC scenario and identification of its specific contextual factors; 2) Identification of disturbances that may affect KPIs; 3) Definition of the relevant KPIs and identification of the main disturbances through assessment of the SC performance robustness (i.e. application of the VULA method); 4) Identification of the sources of vulnerability that may (strongly) affect the robustness of performances and eventually increase the vulnerability of the SC; 5) Identification of appropriate preventive or disturbance-impact-reducing redesign strategies; 6) Alteration of SC scenario elements as required by the selected redesign strategies, followed by repetition of the VULA method for the KPIs defined in Step 3.
Contributions of this research are listed as follows. First, we have identified emerging research areas - SC robustness, and its counterpart, vulnerability. Second, we have developed a definition of SC robustness, operationalized it, and identified and structured the relevant elements for the design of robust SCs in the form of a research framework. With this research framework, we contribute to a better understanding of the concepts of vulnerability and robustness and related issues in food SCs. Third, we identified the relationship between contextual factors of food SCs and specific DMPs used to maintain robust SC performances: characteristics of the product and the SC environment influence the selection and use of DMPs; processes and SC networks are influenced by DMPs. Fourth, we developed specific metrics for vulnerability assessments, which serve as a basis of a VULA method. The VULA method investigates different measures of the variability of both the duration of impacts from disturbances and the fluctuations in their magnitude.
With this project, we also hope to have delivered practical insights into food SC vulnerability. First, the integrated framework for the design of robust SCs can be used to guide food companies in successful disturbance management. Second, empirical findings from case studies lead to the identification of changeable characteristics of SCs that can serve as a basis for assessing where to focus efforts to manage disturbances. Third, the VULA method can help top management to get more reliable information about the “health” of the company.
The two most important research opportunities are: First, there is a need to extend and validate our findings related to the research framework and contextual factors through further case studies related to other types of (food) products and other types of SCs. Second, there is a need to further develop and test the VULA method, e.g.: to use other indicators and statistical measures for disturbance detection and SC improvement; to define the most appropriate KPI to represent the robustness of a complete SC. We hope this thesis invites other researchers to pick up these challenges and help us further improve the robustness of (food) SCs.
Abstract:
This paper shows that current multivariate statistical monitoring technology may fail to detect incipient changes in the variable covariance structure or changes in the geometry of the underlying variable decomposition. To overcome these deficiencies, the local approach is incorporated into the multivariate statistical monitoring framework to define two new univariate statistics for fault detection. Fault isolation is achieved by constructing a fault diagnosis chart which reveals changes in the covariance structure resulting from the presence of a fault. A theoretical analysis is presented and the proposed monitoring approach is exemplified using application studies involving recorded data from two complex industrial processes.
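For context, the conventional baseline that such work builds on is PCA-based monitoring with Hotelling's T² and SPE (Q) statistics. The sketch below implements that baseline only, not the paper's local-approach statistics, and is a minimal illustration rather than an industrial implementation:

```python
import numpy as np

def fit_pca_monitor(X, n_comp):
    """Fit a conventional PCA monitoring model: mean, retained loadings
    P and retained component variances, as used for Hotelling T^2 and
    SPE control charts."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]            # descending variance
    P = evecs[:, order[:n_comp]]
    lam = evals[order[:n_comp]]
    return mu, P, lam

def t2_spe(x, mu, P, lam):
    """Hotelling T^2 (variation within the model subspace) and SPE/Q
    (residual variation off the subspace) for one observation x."""
    xc = np.asarray(x, float) - mu
    scores = P.T @ xc
    t2 = float(scores @ (scores / lam))
    resid = xc - P @ scores
    spe = float(resid @ resid)
    return t2, spe
```

An observation lying in the model subspace yields a small SPE; one violating the learned covariance structure yields a large SPE, which is the behaviour these charts exploit.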
Abstract:
We report the results of stable carbon and nitrogen isotope analysis of 354 human and faunal samples from five archaeological cultures of the Minusinsk Basin, Southern Siberia – Afanasyevo, Okunevo, Andronovo, Karasuk and Tagar (ca. 2700–1 BC) – a key location in Eurasia due to its position on a northern corridor linking China and central Eurasia. The results indicate that the diet of Eneolithic to Middle Bronze Age (Afanasyevo to Andronovo) populations was primarily C3-based, with C4 plants only becoming an important component of the diet in the Late Bronze Age Karasuk and Early Iron Age Tagar cultures. Freshwater fish seems to have been an important constituent of the diets in all groups. The findings constitute the earliest concrete evidence for the substantial use of millet in the eastern Eurasian steppe. We propose that it was probably introduced from Northwestern China during the Karasuk culture at the start of the Late Bronze Age, ca. 1500 BC. We conclude with a discussion of the implications for the nature of pastoralist economies on the steppes.
Abstract:
Three photocatalyst inks based on the redox dyes Resazurin (Rz), Basic Blue 66 (BB66) and Acid Violet 7 (AV7) are used to assess the photocatalytic activities of a variety of different materials, such as commercial paint, tiles and glass and laboratory-made samples of sol–gel coated glass and paint, which collectively exhibit a wide range of activities that cannot currently be probed by any one of the existing ISO tests. Unlike the ISO tests, the ink tests are fast (typically <10 min), simple to employ and inexpensive. Previous work indicates that the Rz ink test at least correlates linearly with other photocatalytic tests such as the photomineralisation of stearic acid. The average time to bleach 90% of the key RGB colour component of the ink (red for the Rz and BB66 inks, green for the AV7 ink), ttb(90), is determined for eight samples of each of the different materials tested. Five laboratories conducted the tests and the results revealed an average repeatability and reproducibility of ca. 11% and ca. 21%, respectively, which compare well with those reported for the current ISO tests. Additional work on commercial self-cleaning glass using an Rz ink showed that the change in the red component of the RGB image of the ink correlated linearly with the change of absorbance at 608 nm, as measured using UV/vis spectroscopy, and with the change in the a* component of the Lab colour analysis of the ink, as measured using diffuse reflectance spectroscopy. As a consequence, all three methods generate the same ttb(90). The advantages of the RGB digital image analysis method are discussed briefly.
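A ttb(90) value can be computed directly from a time series of the ink's key RGB component. The sketch below is a minimal illustration, assuming frames sampled at known times and using linear interpolation between the frames that straddle the 90% point; it is not the laboratories' analysis code:

```python
import numpy as np

def ttb90(times, channel):
    """Time to bleach 90% of the ink's key RGB colour component (e.g.
    the red channel for an Rz ink): the first time at which the channel
    change reaches 90% of its total change over the run, linearly
    interpolated between image frames."""
    t = np.asarray(times, float)
    c = np.asarray(channel, float)
    change = np.abs(c - c[0])              # colour change from frame 0
    target = 0.9 * change[-1]              # 90% of the overall change
    idx = int(np.argmax(change >= target)) # first frame at/past target
    if idx == 0:
        return float(t[0])
    frac = (target - change[idx - 1]) / (change[idx] - change[idx - 1])
    return float(t[idx - 1] + frac * (t[idx] - t[idx - 1]))
```

Using the absolute change makes the same routine work whether the key channel increases or decreases as the dye bleaches.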