950 results for Software repository mining. Process mining. Software developer contribution


Relevance: 50.00%

Abstract:

The aim of this work is to study the supply chain in business organizations from the perspective of System Dynamics and to examine how it can contribute to the performance and control of supply chains. The work addresses three perspectives on the Supply Chain and their relationship with system dynamics. It also identifies the types of integration in supply chain management activities and their planning horizons. Finally, it analyzes applications of Supply Chain Management that have been based on the system dynamics methodology. To this end, the research begins by defining the problem of bringing these two areas together and lays out the theoretical framework underpinning both disciplines. It then covers the methodology used by System Dynamics and the different aspects of the supply chain, and delves into how the two disciplines converge, with SD (System Dynamics) supporting SCM (Supply Chain Management). At this point, the work carried out under the different approaches based on system dynamics is also described. Finally, we present conclusions and comments on this field of research and its relevance to the Supply Chain. This research spans two major currents of thought: one systemic, through the system dynamics methodology, and the other logical-analytical, as used in Supply Chain. A literature review was carried out on applications of system dynamics (SD) in the Supply Chain area and their common ground, and important uses of this methodology in supply chain management were documented.

Relevance: 50.00%

Abstract:

Geographic Information Systems (GIS) are a valid tool for the study of ancient landscapes. GIS can be configured as a set of analytical means useful for understanding the spatial dimension of social formations and their historical dynamics. In other words, GIS enables a valid approach to the rationality of a community's spatial behaviour and to the global patterns of a society that are embodied in the morphology of a landscape. Given the abundant and growing supply of software for processing and analysing spatial information, we focus on the advantages of adopting free and open-source solutions for the archaeological investigation of landscapes. As an example, we present cost-distance modelling applied to an archaeological locational problem: evaluating the location of settlements with respect to the resources available in their surroundings. The experimental approach has been applied to the hillfort settlement of the La Cabrera region (León). A detailed description is given of how to create isochrone bands based on the calculation of the anisotropic costs inherent to pedestrian locomotion, as well as the advantages of adopting GRASS GIS for implementing the analysis.
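
The isochrone construction described above can be sketched as a least-cost accumulation over a cost raster followed by binning of travel times. The sketch below is a minimal Python illustration over a hypothetical toy cost surface with 15-minute bands; the study's actual GRASS GIS workflow, anisotropic slope handling, and real terrain data are not reproduced here:

```python
import heapq

def cost_distance(grid, start):
    """Cumulative least-cost travel time (minutes) from `start` over a
    4-connected raster; each cell value is the cost of entering that cell."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return dist

def isochrone_band(minutes, width=15):
    """Bin a travel time into 15-minute isochrone bands (0-15 min -> band 0)."""
    return int(minutes // width)

# Hypothetical cost surface: flat terrain costs 5 min/cell, steep slope 20.
surface = [[5, 5, 20],
           [5, 5, 20],
           [5, 5, 5]]
times = cost_distance(surface, (0, 0))
bands = {cell: isochrone_band(t) for cell, t in times.items()}
```

In GRASS GIS the same accumulate-then-bin structure is computed over real elevation and friction maps, with direction-dependent (anisotropic) walking costs rather than a symmetric per-cell cost.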

Relevance: 50.00%

Abstract:

Trace elements may present an environmental hazard in the vicinity of mining and smelting activities. However, the factors controlling trace element distribution in soils around ancient and modern mining and smelting areas are not always clear. Tharsis, Riotinto and Huelva are located in the Iberian Pyrite Belt in SW Spain. The Tharsis and Riotinto mines have been exploited since 2500 B.C., with intensive smelting taking place. Huelva, established in 1970 and using the Flash Furnace Outokumpu process, is currently one of the largest smelters in the world. Pyrite and chalcopyrite ore have been intensively smelted for Cu. However, unusually for smelters and mines of a similar size, elevated trace element concentrations in soils were found to be restricted to the immediate vicinity of the mines and smelters, extending to a maximum of 2 km from the mines and smelters at Tharsis, Riotinto and Huelva. Trace element partitioning (over 2/3 of trace elements found in the residual immobile fraction of soils at Tharsis) and examination of soil particles by SEM-EDX showed that trace elements were not adsorbed onto soil particles, but were included within the matrix of large trace element-rich Fe silicate slag particles (i.e. 1 mm Ø, containing at least 1 wt.% As, Cu and Zn, and 2 wt.% Pb). The large size of the slag particles (1 mm Ø) was found to control the geographically restricted trace element distribution in soils at Tharsis, Riotinto and Huelva, since large heavy particles could not have been transported long distances. Distribution and partitioning indicated that impacts to the environment as a result of mining and smelting should remain minimal in the region.

Relevance: 50.00%

Abstract:

In molecular biology, it is often desirable to find common properties in large numbers of drug candidates. One family of methods stems from the data mining community, where algorithms to find frequent graphs have received increasing attention in recent years. However, the computational complexity of the underlying problem and the large amount of data to be explored essentially render sequential algorithms useless. In this paper, we present a distributed approach to the frequent subgraph mining problem to discover interesting patterns in molecular compounds. This problem is characterized by a highly irregular search tree, for which no reliable workload prediction is available. We describe the three main aspects of the proposed distributed algorithm, namely, a dynamic partitioning of the search space, a distribution process based on a peer-to-peer communication framework, and a novel receiver-initiated load balancing algorithm. The effectiveness of the distributed method has been evaluated on the well-known National Cancer Institute's HIV-screening data set, where we were able to show close-to-linear speedup in a network of workstations. The proposed approach also allows for dynamic resource aggregation in a non-dedicated computational environment. These features make it suitable for large-scale, multi-domain, heterogeneous environments, such as computational grids.
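
The receiver-initiated load balancing mentioned above can be illustrated with a toy work-stealing model: an *idle* worker asks a busy donor for part of its unexplored search-tree nodes. The class names, the donate-half policy, and the task labels below are illustrative assumptions, not the paper's actual protocol:

```python
from collections import deque

class Worker:
    """Toy model of a mining worker holding unexplored search-tree nodes."""
    def __init__(self, name, tasks=()):
        self.name = name
        self.queue = deque(tasks)

    def request_work(self, donor):
        """Receiver-initiated: the idle worker initiates the transfer and the
        donor gives away half of its pending queue (an assumed split policy)."""
        share = len(donor.queue) // 2
        for _ in range(share):
            self.queue.append(donor.queue.pop())  # steal from the donor's back
        return share

# One busy worker holds eight pending subtrees; an idle peer steals half.
busy = Worker("w0", tasks=[f"subtree-{i}" for i in range(8)])
idle = Worker("w1")
moved = idle.request_work(busy)
```

The key property sketched here is that transfers are triggered by idle receivers, so no load information needs to be broadcast while all workers are busy.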

Relevance: 50.00%

Abstract:

Structured data represented in the form of graphs arises in several fields of science, and the growing amount of available data makes distributed graph mining techniques particularly relevant. In this paper, we present a distributed approach to the frequent subgraph mining problem to discover interesting patterns in molecular compounds. The problem is characterized by a highly irregular search tree, for which no reliable workload prediction is available. We describe the three main aspects of the proposed distributed algorithm, namely a dynamic partitioning of the search space, a distribution process based on a peer-to-peer communication framework, and a novel receiver-initiated load balancing algorithm. The effectiveness of the distributed method has been evaluated on the well-known National Cancer Institute's HIV-screening dataset, where the approach attains close-to-linear speedup in a network of workstations.

Relevance: 50.00%

Abstract:

This paper critiques the approach taken by the Ghanaian Government to address mercury pollution in the artisanal and small-scale gold mining sector. Unmonitored releases of mercury, which is used in the gold-amalgamation process, have caused numerous environmental complications throughout rural Ghana. Certain policy, technological and educational initiatives taken to address the mounting problem, however, have proved marginally effective at best, having been designed and implemented without careful analysis of mine community dynamics, the organization of activities, operators' needs and local geological conditions. Marked improvements can only be achieved in this area through increased government-initiated dialogue with the now-ostracized illegal galamsey mining community; the introduction of simple, cost-effective techniques for reducing mercury emissions; and government-sponsored participatory training exercises as a means of communicating information about appropriate technologies and the environment.

Relevance: 50.00%

Abstract:

This paper examines the dynamics of the ongoing conflict in Prestea, Ghana, where indigenous galamsey mining groups are operating illegally on a concession awarded to Bogoso Gold Limited (BGL), property of the Canadian-listed multinational Gold Star Resources. Despite being issued firm orders by the authorities to abandon their activities, galamsey leaders maintain that they are working areas of the concession that are of little interest to the company; they further counter that there are few alternative sources of local employment, which is why they are mining in the first place. Whilst the Ghanaian Government is in the process of setting aside plots to relocate illegal mining parties and is developing alternative livelihood projects, efforts are far from encouraging: in addition to a series of overlooked logistical problems, the areas earmarked for relocation have not yet been prospected to ascertain gold content, and the alternative income-earning activities identified are inappropriate. As has been the case throughout mineral-rich sub-Saharan Africa, the conflict in Prestea has come about largely because the national mining sector reform program, which prioritizes the expansion of predominantly foreign-controlled large-scale projects, has neglected the concerns of indigenous subsistence groups.

Relevance: 50.00%

Abstract:

A wireless sensor network (WSN) is a group of sensors linked by a wireless medium to perform distributed sensing tasks. WSNs have attracted wide interest from academia and industry alike due to their diversity of applications, including home automation, smart environments, and emergency services in various buildings. The primary goal of a WSN is to collect the data sensed by its sensors. These data are characteristically noisy and exhibit temporal and spatial correlation. In order to extract useful information from such data, as this paper demonstrates, various analysis techniques must be applied. Data mining is a process in which a wide spectrum of data analysis methods is used. It is applied in this paper to analyse data collected from WSNs monitoring an indoor environment in a building. A case study demonstrates how data mining can be used to optimise the use of office space in a building.
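
As a rough illustration of the preprocessing such noisy WSN data needs before mining, the sketch below smooths sensor samples with a moving average and derives a simple office-usage statistic. The readings, window size, and threshold are invented for illustration and are not the paper's case study data:

```python
def moving_average(samples, window=3):
    """Smooth noisy sensor readings with a simple sliding mean."""
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

def occupied_fraction(smoothed, threshold):
    """Fraction of time slots whose smoothed reading exceeds a threshold,
    a crude proxy for how often the monitored office is in use."""
    hits = sum(1 for v in smoothed if v > threshold)
    return hits / len(smoothed)

# Hypothetical activity counts for one office, one value per 10-minute slot.
raw = [0, 1, 0, 5, 6, 5, 7, 0, 0, 0]
smooth = moving_average(raw)
usage = occupied_fraction(smooth, threshold=2.0)
```

Smoothing first exploits the temporal correlation the abstract mentions: isolated noisy spikes are damped before the occupancy decision is made.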

Relevance: 50.00%

Abstract:

This paper addresses the need for accurate predictions on the fault inflow, i.e. the number of faults found in the consecutive project weeks, in highly iterative processes. In such processes, in contrast to waterfall-like processes, fault repair and development of new features run almost in parallel. Given accurate predictions on fault inflow, managers could dynamically re-allocate resources between these different tasks in a more adequate way. Furthermore, managers could react with process improvements when the expected fault inflow is higher than desired. This study suggests software reliability growth models (SRGMs) for predicting fault inflow. Originally developed for traditional processes, the performance of these models in highly iterative processes is investigated. Additionally, a simple linear model is developed and compared to the SRGMs. The paper provides results from applying these models on fault data from three different industrial projects. One of the key findings of this study is that some SRGMs are applicable for predicting fault inflow in highly iterative processes. Moreover, the results show that the simple linear model represents a valid alternative to the SRGMs, as it provides reasonably accurate predictions and performs better in many cases.
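
The two model families compared above can be sketched side by side: the mean value function of a classic SRGM (Goel-Okumoto is used here as a representative example) and an ordinary least-squares fit of the simple linear model. The weekly fault counts and all parameter values below are made up for illustration:

```python
import math

def go_mean(t, a, b):
    """Goel-Okumoto SRGM mean value function: expected cumulative faults
    by time t, with a = total expected faults, b = detection rate."""
    return a * (1.0 - math.exp(-b * t))

def fit_linear(ts, ys):
    """Ordinary least squares for the simple linear model y = c0 + c1*t."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    c1 = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
          / sum((t - mt) ** 2 for t in ts))
    return my - c1 * mt, c1

# Hypothetical cumulative fault counts over project weeks 1..6.
weeks = [1, 2, 3, 4, 5, 6]
faults = [4, 9, 13, 18, 22, 27]
c0, c1 = fit_linear(weeks, faults)
next_week_linear = c0 + c1 * 7          # linear forecast for week 7
week7_go = go_mean(7, a=40.0, b=0.25)   # SRGM forecast, assumed parameters
```

Comparing such forecasts against the actual week-7 inflow is, in spirit, how the study evaluates SRGMs against the simple linear alternative; fitting the SRGM parameters themselves requires nonlinear estimation not shown here.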

Relevance: 50.00%

Abstract:

Knowledge-elicitation is a common technique used to produce rules about the operation of a plant from the knowledge available from human expertise. Similarly, data-mining is becoming a popular technique for extracting rules from the data available from the operation of a plant. In the work reported here, knowledge was required to enable the supervisory control of an aluminium hot strip mill through the determination of mill set-points. A method was developed to fuse knowledge-elicitation and data-mining so as to incorporate the best aspects of each technique whilst avoiding known problems. The knowledge was utilised through an expert system, which determined schedules of set-points and provided information to human operators. The results show that the method proposed in this paper was effective in producing rules for the on-line control of a complex industrial process.

Relevance: 50.00%

Abstract:

Aircraft Maintenance, Repair and Overhaul (MRO) feedback commonly includes an engineer's complex text-based inspection report. Capturing and normalizing the content of these textual descriptions is vital to cost and quality benchmarking, and provides information to facilitate continuous improvement of MRO processes and analytics. As data analysis and mining tools require highly normalized data, raw textual data is inadequate. This paper offers a text-mining solution to efficiently analyse bulk textual feedback data. Despite replacement of the same parts and/or sub-parts, the actual service cost for the same repair is often distinctly different from similar previous jobs. Regular expression algorithms were combined with an aircraft MRO glossary dictionary to help provide additional information concerning the reasons for cost variation. Professional terms and conventions were included in the dictionary to avoid ambiguity and improve the results. Testing shows that most descriptive inspection reports can be appropriately interpreted, allowing extraction of highly normalized data. This additional normalized data strongly supports data analysis and data mining, whilst also increasing the accuracy of future quotation costing. The solution has been used effectively by a large aircraft MRO agency with positive results.
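
The regular-expression-plus-glossary idea can be sketched as below. The glossary entries, the part-number pattern, and the sample remark are all invented for illustration and are not the agency's actual dictionary or data:

```python
import re

# Hypothetical glossary: map free-text shop conventions to normalized terms.
GLOSSARY = {
    r"\bcorr(?:osion)?\b": "CORROSION",
    r"\bcrk|crack(?:ed|ing)?\b": "CRACK",
    r"\breplc?d?|replaced\b": "REPLACED",
}

# Assumed part-number convention: "PN" plus an optional dash/space.
PART_NO = re.compile(r"\bPN[- ]?(\w+)\b", re.IGNORECASE)

def normalize_report(text):
    """Extract a part number and normalized defect/action terms from one
    free-text inspection remark."""
    terms = set()
    for pattern, term in GLOSSARY.items():
        if re.search(pattern, text, re.IGNORECASE):
            terms.add(term)
    m = PART_NO.search(text)
    return {"part": m.group(1) if m else None, "terms": sorted(terms)}

record = normalize_report("PN-4711 inlet duct: corr found, crkd flange replcd")
```

Each glossary pattern deliberately covers both the abbreviation and the full term, which is how a curated dictionary of professional conventions resolves the ambiguity of raw shop-floor text.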

Relevance: 50.00%

Abstract:

A new electronic software distribution (ESD) life cycle analysis (LCA) methodology and model structure were constructed to calculate energy consumption and greenhouse gas (GHG) emissions. To avoid high-level, top-down modeling and to increase the accuracy of results, the focus was placed on device details and data routes. In order to compare ESD with a relevant physical distribution alternative, physical model boundaries and variables were described. The methodology was compiled from the analysis and operational data of a major online store which provides both ESD and physical distribution options. The ESD method included the calculation of the power consumption of data center server and networking devices. An in-depth method to calculate server efficiency and utilization was also included, to account for virtualization and server efficiency features. Internet transfer power consumption was analyzed taking into account the number of data hops and the networking devices used. The power consumed by online browsing and downloading was also factored into the model. The embedded CO2e of server and networking devices was apportioned to each ESD process. Three U.K.-based ESD scenarios were analyzed using the model, which revealed potential CO2e savings of 83% when ESD was used instead of physical distribution. The results also highlighted the importance of server efficiency and utilization methods.

Relevance: 50.00%

Abstract:

OBJECTIVES: The prediction of protein structure and the precise understanding of protein folding and unfolding processes remain among the greatest challenges in structural biology and bioinformatics. Computer simulations based on molecular dynamics (MD) are at the forefront of the effort to gain a deeper understanding of these complex processes. Currently, these MD simulations are usually on the order of tens of nanoseconds, generate a large amount of conformational data and are computationally expensive. More and more groups run such simulations and generate a myriad of data, which raises new challenges in managing and analyzing these data. Because of the vast range of proteins researchers want to study and simulate, the computational effort needed to generate data, the large data volumes involved, and the different types of analyses scientists need to perform, it is desirable to provide a public repository allowing researchers to pool and share protein unfolding data. METHODS: To adequately organize, manage, and analyze the data generated by unfolding simulation studies, we designed a data warehouse system that is embedded in a grid environment to facilitate the seamless sharing of available computer resources and thus enable many groups to share complex molecular dynamics simulations on a more regular basis. RESULTS: To gain insight into the conformational fluctuations and stability of the monomeric forms of the amyloidogenic protein transthyretin (TTR), molecular dynamics unfolding simulations of the monomer of human TTR were conducted. Trajectory data and meta-data of the wild-type (WT) protein and the highly amyloidogenic variant L55P-TTR represent the test case for the data warehouse. CONCLUSIONS: Web and grid services, especially pre-defined data mining services that can run on or 'near' the data repository of the data warehouse, are likely to play a pivotal role in the analysis of molecular dynamics unfolding data.

Relevance: 50.00%

Abstract:

Pocket Data Mining (PDM) describes the full process of analysing data streams in mobile ad hoc distributed environments. Advances in mobile devices such as smart phones and tablet computers have made it possible for a wide range of applications to run in such environments. In this paper, we propose the adoption of data stream classification techniques for PDM. A thorough experimental study shows that running heterogeneous (different) or homogeneous (similar) data stream classification techniques over vertically partitioned data (data partitioned according to the feature space) yields performance comparable to batch and centralised learning techniques.
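
Classification over vertically partitioned data can be illustrated with two tiny learners, each trained on a disjoint subset of the features, whose predictions are combined by majority vote. The nearest-centroid learner and the toy data below are illustrative assumptions; the paper's agents run genuine data stream classifiers, which are not reproduced here:

```python
from collections import Counter

def train_nearest_centroid(rows, labels):
    """Tiny per-partition learner: per-class centroids over this feature subset."""
    sums, counts = {}, Counter()
    for row, y in zip(rows, labels):
        acc = sums.setdefault(y, [0.0] * len(row))
        for i, v in enumerate(row):
            acc[i] += v
        counts[y] += 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(model, row):
    """Assign the class whose centroid is nearest (squared Euclidean)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda y: dist(model[y], row))

def ensemble_predict(models, partitions):
    """Majority vote across classifiers trained on disjoint feature subsets."""
    votes = Counter(predict(m, p) for m, p in zip(models, partitions))
    return votes.most_common(1)[0][0]

# Toy instances split vertically: features 0-1 on one device, 2-3 on another.
X = [[0.1, 0.2, 0.9, 0.8], [0.0, 0.1, 1.0, 0.9], [0.9, 0.8, 0.1, 0.2]]
y = ["a", "a", "b"]
m1 = train_nearest_centroid([r[:2] for r in X], y)
m2 = train_nearest_centroid([r[2:] for r in X], y)
label = ensemble_predict([m1, m2], [[0.05, 0.15], [0.95, 0.85]])
```

The vertical split mirrors the PDM setting: each mobile device sees only some features of the stream, yet the vote over per-device models can still recover the overall classification.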