28 results for Building Life Cycle, Data Mining, Management
Abstract:
This paper proposes a more profound discussion of the philosophical underpinnings of sustainability than currently exists in the MOT literature and considers their influence on the construction of theories of green operations and technology management. Ultimately, it also debates the link between theory and practice in this subject area. The paper is derived from insights gained in three research projects completed during the past twelve years, primarily involving the first author. From 2000 to 2002, an investigation using scenario analysis, aimed at reducing atmospheric pollution in urban centres by substituting natural gas for petrol and diesel, provided the first set of insights about public policy, environmental impacts, investment analysis, and technological feasibility. The second research project, from 2003 to 2005, using a survey questionnaire, was aimed at improving environmental performance in livestock farming and explored the issues of green supply chain scope, environmental strategy and priorities. Finally, the third project, from 2006 to 2011, investigated environmental decisions in manufacturing organisations through case study research and examined the underlying sustainability drivers and decision-making processes. By integrating the findings and conclusions from these projects, the link between philosophy, theory, and practice of green operations and technology management is debated. The findings from all these studies show that the philosophical debate seems to have had little influence on theory building so far. For instance, although ‘sustainable development’ emphasises ‘meeting the needs of current and future generations’, no theory links essentiality and environmental impacts. Likewise, there is a weak link between theory and the practical issues of green operations and technology management. For example, the well-known ‘life-cycle analysis’ has little application in many cases because the life cycle of products these days is dispersed within global production and consumption systems, with different stakeholders at each life-cycle stage. The results of this paper are relevant to public policy making and to corporate environmental strategy and decision making. Most past and current studies on green operations and sustainability management deal with only a single sustainability dimension at any one time; the value and originality of this paper therefore lie in its integration of philosophy, theory, and practice of green technology and operations management.
Abstract:
Hierarchical visualization systems are desirable because a single two-dimensional visualization plot may not be sufficient to capture all of the interesting aspects of complex high-dimensional data sets. We extend an existing locally linear hierarchical visualization system, PhiVis [1], in several directions: (1) we allow for non-linear projection manifolds (the basic building block is the Generative Topographic Mapping, GTM); (2) we introduce a general formulation of hierarchical probabilistic models consisting of local probabilistic models organized in a hierarchical tree; (3) we describe folding patterns of the low-dimensional projection manifold in the high-dimensional data space by computing and visualizing the manifold's local directional curvatures. Quantities such as magnification factors [3] and directional curvatures are helpful for understanding the layout of the non-linear projection manifold in the data space and for further refinement of the hierarchical visualization plot. Like PhiVis, our system is statistically principled and is built interactively in a top-down fashion using the EM algorithm. We demonstrate the principle of the approach on a complex 12-dimensional data set and mention possible applications in the pharmaceutical industry.
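As a toy illustration of the top-down hierarchical principle, the Python sketch below fits a root probabilistic model and then child models on the points each root component takes responsibility for. The GTM building block is swapped for scikit-learn's GaussianMixture as a simpler probabilistic stand-in, and the 12-dimensional data are synthetic; this is a sketch of the principle, not the authors' system.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))  # synthetic stand-in for a 12-dimensional data set

# Root level: one probabilistic model over all of the data.
root = GaussianMixture(n_components=2, random_state=0).fit(X)

# EM-style responsibilities of each root component for each point; a child
# model is then fitted on the points a component is chiefly responsible
# for, refining one subtree of the visualization hierarchy at a time.
resp = root.predict_proba(X)
children = [
    GaussianMixture(n_components=2, random_state=0).fit(X[resp[:, k] > 0.5])
    for k in range(resp.shape[1])
]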
Abstract:
Time, cost and quality achievements on large-scale construction projects are uncertain because of technological constraints, the involvement of many stakeholders, long durations, large capital requirements and improper scope definitions. Projects that are exposed to such an uncertain environment can effectively be managed with the application of risk management throughout the project life cycle. Risk is by nature subjective; however, managing risk subjectively poses the danger of non-achievement of project goals. Moreover, risk analysis of the overall project as a whole poses the danger of developing inappropriate responses. This article demonstrates a quantitative approach to construction risk management through the analytic hierarchy process (AHP) and decision tree analysis (DTA). The entire project is broken down into a few work packages. With the involvement of project stakeholders, risky work packages are identified. Once all the risk factors are identified, their effects are quantified by determining probability (using AHP) and severity (guess estimate). Various alternative responses are generated, listing the cost implications of mitigating the quantified risks. The expected monetary values are derived for each alternative in a decision tree framework, and subsequent probability analysis helps to make the right decision in managing risks. In this article, the entire methodology is explained using a case application of a cross-country petroleum pipeline project in India. The case study demonstrates the effectiveness of AHP and DTA for project risk management.
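As an illustration of the two quantitative steps described above, the following Python sketch derives risk probabilities from a hypothetical AHP pairwise-comparison matrix (principal-eigenvector method) and then picks a response by expected monetary value in a small decision tree. The matrix, probabilities and cost figures are invented placeholders, not data from the pipeline project.

import numpy as np

# Hypothetical pairwise comparisons of three risk factors (Saaty's 1-9 scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Principal-eigenvector method: the normalized dominant eigenvector of the
# comparison matrix gives the relative probability (priority) of each risk.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
priorities = w / w.sum()

# Decision tree: for each response alternative, EMV = sum of probability x
# payoff over its branches; severities are guess-estimated costs.
alternatives = {
    "mitigate": [(0.2, -50.0), (0.8, -10.0)],  # (probability, payoff) pairs
    "accept":   [(0.6, -80.0), (0.4, 0.0)],
}
emv = {name: sum(p * v for p, v in branches)
       for name, branches in alternatives.items()}
best = max(emv, key=emv.get)  # highest (least negative) expected monetary value
print(priorities, emv, best)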
Abstract:
Conventional project management techniques are not always sufficient for ensuring time, cost and quality achievement on large-scale construction projects, due to complexity in planning and implementation processes. The main reasons for project non-achievement are changes in scope and design, changes in government policies and regulations, unforeseen inflation, under-estimation and improper estimation. Projects that are exposed to such an uncertain environment can be effectively managed with the application of risk management throughout the project life cycle. However, the effectiveness of risk management depends on the technique with which the effects of risk factors are analysed and/or quantified. This study proposes the Analytic Hierarchy Process (AHP), a multiple-attribute decision-making technique, as a tool for risk analysis, because it can handle subjective as well as objective factors in a decision model that are conflicting in nature. This provides a decision support system (DSS) to project management for making the right decision at the right time, ensuring project success in line with organisation policy, project objectives and a competitive business environment. The whole methodology is explained through a case study of a cross-country petroleum pipeline project in India, and its effectiveness in project management is demonstrated.
Abstract:
This paper reports preliminary results of a project investigating, in a group setting, how staff in UK organisations perceive knowledge management in their organisation. The group setting appears to be effective in surfacing opinions and enabling progress in both understanding and action. Among the findings thus far are the importance of the knowledge champion role and of the state of the “knowledge management life cycle” in each organisation, and continuing confusion between knowledge, information and mechanisms.
Abstract:
When applying multivariate analysis techniques in information systems and social science disciplines, such as management information systems (MIS) and marketing, the assumption that the empirical data originate from a single homogeneous population is often unrealistic. When applying a causal modeling approach, such as partial least squares (PLS) path modeling, segmentation is a key issue in coping with the problem of heterogeneity in estimated cause-and-effect relationships. This chapter presents a new PLS path modeling approach which classifies units on the basis of the heterogeneity of the estimates in the inner model. If unobserved heterogeneity significantly affects the estimated path model relationships at the aggregate data level, the methodology allows homogeneous groups of observations to be created that exhibit distinctive path model estimates. The approach thus provides differentiated analytical outcomes that permit more precise interpretations of each segment formed. An application to a large data set from the American Customer Satisfaction Index (ACSI) substantiates the methodology's effectiveness in evaluating PLS path modeling results.
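A minimal sketch of the segmentation idea, under heavy simplification: the PLS inner model is swapped for a single ordinary-least-squares relation and the mixture machinery for KMeans on an interaction signal, purely to illustrate how a pooled estimate can hide two segments with distinct path coefficients. All data below are synthetic.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
x = rng.normal(size=(400, 1))
seg = rng.integers(0, 2, size=400)              # two hidden segments
y = np.where(seg == 0, 0.9, -0.4) * x[:, 0] + 0.1 * rng.normal(size=400)

pooled = LinearRegression().fit(x, y)           # misleading aggregate estimate

# Cluster on an interaction signal (residual times regressor) that separates
# units whose inner-model coefficients differ, then re-estimate per segment.
signal = ((y - pooled.predict(x)) * x[:, 0]).reshape(-1, 1)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(signal)
for k in (0, 1):
    m = LinearRegression().fit(x[labels == k], y[labels == k])
    print(k, round(float(m.coef_[0]), 2))       # distinctive segment estimates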
Abstract:
This paper reports results from an ongoing project examining what managers think about knowledge management in the context of their organisation. This was done in a facilitated, computer-assisted group workshop environment. Here we compare the outcomes of workshops held for two relatively large UK organisations, one public sector and the other private. Our conclusions are that there are relatively few differences between the perceptions of these two groups of managers, and that these differences stem more from the stage of the knowledge management life cycle that the two organisations have reached than from the difference in context between public and private sector. © iKMS & World Scientific Publishing Co.
Abstract:
Retrospective clinical data presents many challenges for data mining and machine learning. The transcription of patient records from paper charts and the subsequent manipulation of the data often result in high volumes of noise as well as a loss of other important information. In addition, such datasets often fail to represent expert medical knowledge and reasoning in any explicit manner. In this research we describe applying data mining methods to retrospective clinical data to build a prediction model for asthma exacerbation severity for pediatric patients in the emergency department. Difficulties in building such a model forced us to investigate alternative strategies for analyzing and processing retrospective data. This paper describes this process together with an approach to mining retrospective clinical data by incorporating formalized external expert knowledge (secondary knowledge sources) into the classification task. This knowledge is used to partition the data into a number of coherent sets, where each set is explicitly described in terms of the secondary knowledge source. Instances from each set are then classified in a manner appropriate to the characteristics of the particular set. We present our methodology and outline a set of experimental results that demonstrate some advantages and some limitations of our approach. © 2008 Springer-Verlag Berlin Heidelberg.
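The partition-then-classify strategy can be pictured with a toy Python sketch: a hypothetical guideline-style rule (standing in for the secondary knowledge source) splits records into coherent sets, and each set gets its own classifier. The feature names, the threshold and the data are all invented for illustration and do not come from the study.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n = 600
X = np.column_stack([rng.uniform(60, 100, n),   # hypothetical oxygen saturation
                     rng.uniform(10, 60, n)])   # hypothetical respiratory rate
y = (X[:, 0] < 92).astype(int) ^ (rng.random(n) < 0.1).astype(int)  # noisy labels

# Secondary knowledge source: a guideline-style threshold partitions the
# records into two coherent sets, each explicitly described by the rule.
flagged = X[:, 0] < 92

models = {}
for flag in (True, False):
    mask = flagged == flag
    models[flag] = DecisionTreeClassifier(max_depth=3).fit(X[mask], y[mask])

# At prediction time, a record is routed to its partition's classifier.
record = np.array([[88.0, 35.0]])
print(models[bool(record[0, 0] < 92)].predict(record))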
Abstract:
Korea has increasingly adopted design-build for public construction projects in the last few years. There is now much greater awareness of the need to move to a system based on ‘value for money’, which is high on the government's agenda. A whole-life performance bid evaluation model is proposed to aid decision makers in the selection of a design-builder. The model integrates a framework using the analytic hierarchy process, reflecting the change in the bid-awarding system from one based on lowest price to one based on best value over the life cycle. Key criteria such as whole-life cost, service life planning and design quality are weighed throughout the key stages of the evaluation process. The model uses a systematic and holistic approach which enables a public-sector client to make better decisions in design-builder selection, decisions that will deliver whole-life benefits based on long-term cost-effectiveness and whole-life value.
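A best-value score of this kind can be sketched in a few lines of Python: criterion weights (assumed here to come from an AHP exercise) are combined with each bidder's ratings in a weighted sum, and the highest score, rather than the lowest price, wins. The weights, bidder names and ratings below are illustrative assumptions, not values from the proposed model.

# Assumed AHP-derived criterion weights (they must sum to 1).
weights = {"whole_life_cost": 0.5,
           "service_life_planning": 0.3,
           "design_quality": 0.2}

# Illustrative normalized ratings for two hypothetical bidders.
bids = {
    "bidder_A": {"whole_life_cost": 0.7, "service_life_planning": 0.9,
                 "design_quality": 0.6},
    "bidder_B": {"whole_life_cost": 0.9, "service_life_planning": 0.5,
                 "design_quality": 0.8},
}

# Best value = weighted sum of criterion ratings, not lowest price.
scores = {bidder: sum(weights[c] * r for c, r in ratings.items())
          for bidder, ratings in bids.items()}
print(scores, max(scores, key=scores.get))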
Abstract:
Conventional project management techniques are not always sufficient to ensure that schedule, cost and quality goals are met on large-scale construction projects. These jobs require complex planning, design and implementation processes. The main reasons for a project's non-achievement in India's hydrocarbon processing industry are changes in scope and design, altered government policies and regulations, unforeseen inflation, and under-estimation and/or improper estimation. Projects that are exposed to such an uncertain environment can be effectively managed by applying risk management throughout the project life cycle.
Abstract:
This paper presents research from part of a larger project focusing on the potential development of commercial opportunities for the reuse of batteries on the electricity grid system, subsequent to their primary use in low and ultra-low carbon vehicles, and investigating the life cycle issues surrounding the batteries. The work has three main areas. The first is a detailed examination of electric vehicle fleet data to investigate usage in first life. Second, batteries that have passed through a battery recycler at the end of their first life have been tested in the laboratory, to confirm that a remaining capacity of 80% after use in transportation is a reasonable assumption as a basis for second-life applications. The third is an investigation of the equivalent usage for three different second-life applications based on connection to the electricity grid. Additionally, the paper estimates the time to cell failure of the batteries within their second-life application, so as to estimate lifespan for use within commercial investigations. © 2014 IEEE.
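The lifespan estimate can be pictured with a simple back-of-the-envelope calculation in Python: extrapolate capacity fade under an assumed second-life duty cycle until the cell crosses an end-of-life threshold. Every figure below (fade rate, thresholds, cycling rate) is an illustrative assumption, not data from the paper.

# Assumed starting point: 80% of rated capacity remains after first life.
start_capacity = 0.80
failure_threshold = 0.60   # assumed capacity at end of second life
fade_per_cycle = 2e-5      # assumed fractional capacity loss per cycle
cycles_per_day = 1.5       # assumed grid-connected duty cycle

# Linear-fade extrapolation to the time of cell failure.
cycles_to_failure = (start_capacity - failure_threshold) / fade_per_cycle
years = cycles_to_failure / (cycles_per_day * 365)
print(round(cycles_to_failure), round(years, 1))  # ~10000 cycles, ~18 years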
Abstract:
GraphChi is the first reported disk-based graph engine that can handle billion-scale graphs on a single PC efficiently. GraphChi is able to execute several advanced data mining, graph mining and machine learning algorithms on very large graphs. With the novel technique of parallel sliding windows (PSW) for loading subgraphs from disk into memory for vertex and edge updates, it can achieve data processing performance close to, and even better than, that of mainstream distributed graph engines. The GraphChi authors note, however, that its memory is not effectively utilized with large datasets, which leads to suboptimal computational performance. In this paper, motivated by the concepts of 'pin' from TurboGraph and 'ghost' from GraphLab, we propose a new memory utilization mode for GraphChi, called Part-in-memory mode, to improve the performance of GraphChi algorithms. The main idea is to pin a fixed part of the data in memory during the whole computing process. Part-in-memory mode is implemented with only about 40 additional lines of code on top of the original GraphChi engine. Extensive experiments were performed with large real datasets (including a Twitter graph with 1.4 billion edges). The preliminary results show that the Part-in-memory memory management approach reduces GraphChi running time by up to 60% for the PageRank algorithm. Interestingly, it is found that pinning a larger portion of data in memory does not always lead to better performance when the whole dataset cannot fit in memory: there exists an optimal portion of data to keep in memory for the best computational performance.
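The pinning idea can be sketched independently of GraphChi's C++ engine: keep a fixed fraction of vertex values permanently resident and route all other lookups through a (here simulated) disk tier. The Python class below is a conceptual toy under those assumptions, not GraphChi's API.

# A minimal sketch of the 'Part-in-memory' concept: a fixed fraction of the
# vertex data stays pinned in memory for the whole computation; the rest
# would be re-read from disk on each access.
class PartInMemoryStore:
    def __init__(self, values, pin_fraction=0.5):
        cut = int(len(values) * pin_fraction)
        items = list(values.items())
        self.pinned = dict(items[:cut])    # resident for the whole run
        self.on_disk = dict(items[cut:])   # simulated disk tier
        self.disk_reads = 0

    def get(self, vertex):
        if vertex in self.pinned:
            return self.pinned[vertex]
        self.disk_reads += 1               # would be an actual disk access
        return self.on_disk[vertex]

values = {v: 1.0 for v in range(10)}
store = PartInMemoryStore(values, pin_fraction=0.4)
total = sum(store.get(v) for v in range(10))
print(total, store.disk_reads)             # 6 of 10 lookups hit the "disk" tier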
Abstract:
The Semantic Web has come a long way since its inception in 2001, especially in terms of technical development and research progress. However, adoption by non-technical practitioners is still an ongoing process, and in some areas this process is just now starting. Emergency response is an area where the reliability and timeliness of information and technologies are of the essence. It is therefore quite natural that more widespread adoption in this area has not been seen until now, when Semantic Web technologies are mature enough to support the high requirements of the application area. Nevertheless, to leverage the full potential of Semantic Web research results for this application area, there is a need for an arena where practitioners and researchers can meet and exchange ideas and results. Our intention is for this workshop, and hopefully coming workshops in the same series, to be such an arena for discussion.

The Extended Semantic Web Conference (ESWC, formerly the European Semantic Web Conference) is one of the major research conferences in the Semantic Web field, and is thus a suitable venue for discussing the application of Semantic Web technology to our specific area of application. Hence, we chose to arrange our first SMILE workshop at ESWC 2013. However, this workshop does not focus solely on semantic technologies for emergency response, but rather on Semantic Web technologies in combination with technologies and principles for what is sometimes called the "social web". Social media has already been used successfully in many cases as a tool for supporting emergency response. The aim of this workshop is therefore to take this to the next level and answer questions like: "how can we make sense of, and furthermore make use of, all the data that is produced by different kinds of social media platforms in an emergency situation?"

For the first edition of this workshop the chairs collected the following main topics of interest:
• Semantic Annotation for understanding the content and context of social media streams.
• Integration of Social Media with Linked Data.
• Interactive Interfaces and visual analytics methodologies for managing multiple large-scale, dynamic, evolving datasets.
• Stream reasoning and event detection.
• Social Data Mining.
• Collaborative tools and services for Citizens, Organisations, Communities.
• Privacy, ethics, trustworthiness and legal issues in the Social Semantic Web.
• Use case analysis, with specific interest in use cases that involve the application of Social Media and Linked Data methodologies in real-life scenarios.

All of these, applied in the context of:
• Crisis and Disaster Management
• Emergency Response
• Security and Citizen Journalism

The workshop received six high-quality paper submissions and, following a thorough review process (thanks to our program committee), four of these papers were accepted for the workshop (a 67% acceptance rate). These four papers can be found later in this proceedings volume. Three of the four papers discuss the integration and analysis of social media data using Semantic Web technologies, e.g. for detecting complex events in social media streams, for visualizing and analysing sentiments with respect to certain topics in social media, or for detecting small-scale incidents entirely through the use of social media information. Finally, the fourth paper presents an architecture for using Semantic Web technologies in resource management during a disaster.
Additionally, the workshop featured an invited keynote speech by Dr. Tomi Kauppinen from Aalto University. Dr. Kauppinen shared experiences from his work on applying Semantic Web technologies to application fields such as geoinformatics and scientific research, i.e. so-called Linked Science, as well as recent ideas and applications in the emergency response field. His input was also highly valuable for the roadmapping discussion held at the end of the workshop. A separate summary of the roadmapping session can be found at the end of these proceedings. Finally, we would like to thank our invited speaker Dr. Tomi Kauppinen, all our program committee members, and the workshop chair of ESWC 2013, Johanna Völker (University of Mannheim), for helping us to make this first SMILE workshop a highly interesting and successful event!