950 results for data-flow management
Abstract:
The level of insolvencies in the construction industry is high compared to other industry sectors. Given the management expertise and experience available to the construction industry, it seems strange that, according to the literature, the major causes of failure are lack of financial control and poor management. This indicates that with good cash flow management, companies could be kept operating and financially healthy; failure is preventable. Although there are financial models that can be used to predict failure, they are based on company accounts, which have been shown to be an unreliable source of data. Models are available for cash flow management and forecasting, and these could be used as a starting point for managers in rethinking their cash flow management practices. The research reported here has reached the stage of formulating researchable questions for an in-depth study, including how contractors manage their cash flow, how payment practices can be managed without damaging others in the supply chain, and the relationships between companies’ financial structures and the payment regimes to which they are subjected.
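The abstract refers to cash flow forecasting models without describing one. As a rough illustration of what such a model computes, the Python sketch below projects a contractor's cumulative cash balance from an invented schedule of work done, an assumed cost ratio and a one-month payment lag; every figure is an assumption for illustration, not data from the study.

    # Illustrative contractor cash-flow projection (all figures invented).
    # Work certified each month is paid by the client one month later, while
    # subcontractor and supplier costs are paid in the month they arise.
    work_done   = [100, 150, 200, 150, 100]   # value of work certified per month
    cost_ratio  = 0.85                        # assumed cost share of work done
    payment_lag = 1                           # months between certification and receipt

    balance, balances = 0.0, []
    for month, value in enumerate(work_done):
        receipt = work_done[month - payment_lag] if month >= payment_lag else 0.0
        payment = cost_ratio * value
        balance += receipt - payment
        balances.append(round(balance, 1))

    print(balances)   # the early negative balances are the funding gap a contractor must manage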
Abstract:
In recent years both operational research and quantitative finance have paid much attention to cash management issues. In this paper we present a cash management study which is based on real-world data from a bank and uses a mixed integer linear programming (MILP) model as the main tool. In the paper we compare deterministic and stochastic approaches. The classical cash management problem is extended in two ways: we consider the possibility of bank offices keeping more than one currency, and we also investigate the opportunity of cash transports between bank offices. The MILP problems were solved with glpk (GNU Linear Programming Kit), a free software package, so the paper also gives a picture of the usability and limitations of this solver.
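To make the modelling approach more concrete, the following is a minimal sketch of a single-office, single-currency cash management MILP written in Python with the PuLP library, which can call the GLPK solver mentioned in the abstract. The horizon, demand figures and cost coefficients are invented for illustration, and the formulation is a simplification, not the model used in the paper.

    # Minimal single-office cash management MILP (illustrative only).
    # Assumed data: a short planning horizon, invented daily net cash demand,
    # a fixed cost per cash delivery and a holding cost on the end-of-day balance.
    import pulp

    days = range(5)
    demand = [120, 80, 150, 60, 100]    # invented net cash outflow per day
    fixed_delivery_cost = 50.0          # cost charged whenever a transport arrives
    holding_cost = 0.02                 # cost per unit of cash held overnight
    max_delivery = 1000                 # capacity of a single transport

    prob = pulp.LpProblem("cash_management", pulp.LpMinimize)

    order = pulp.LpVariable.dicts("order", days, lowBound=0)        # cash delivered
    deliver = pulp.LpVariable.dicts("deliver", days, cat="Binary")  # transport used?
    balance = pulp.LpVariable.dicts("balance", days, lowBound=0)    # end-of-day cash

    prob += pulp.lpSum(fixed_delivery_cost * deliver[d] + holding_cost * balance[d]
                       for d in days)

    for d in days:
        prev = balance[d - 1] if d > 0 else 200      # invented opening balance
        prob += balance[d] == prev + order[d] - demand[d]   # cash balance equation
        prob += order[d] <= max_delivery * deliver[d]       # link order to its fixed cost

    prob.solve(pulp.GLPK_CMD(msg=False))   # requires the glpsol binary to be installed
    print([(pulp.value(order[d]), pulp.value(deliver[d])) for d in days])

The same structure extends to several currencies and several offices by indexing the variables accordingly, which is the direction the paper takes.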
Abstract:
Accepted at the 13th IEEE Symposium on Embedded Systems for Real-Time Multimedia (ESTIMedia 2015), Amsterdam, Netherlands.
Abstract:
In a networked business environment, the visibility requirements towards supply operations and the customer interface have become tighter. The master data of the case company is seen as an enabler for meeting those requirements; however, the current state of the master data and its quality are not considered good enough to do so. The target of the research in this thesis was to develop a process for managing master data quality as a continuous activity and to find solutions for cleansing the current customer and supplier data so that it meets the quality requirements defined in that process. Based on the theory of Master Data Management and data cleansing, a small amount of master data was analyzed and cleansed using one commercial data cleansing solution available on the market. This was conducted in cooperation with the vendor as a proof of concept, which demonstrated the cleansing solution’s applicability to improving the quality of the current master data. Based on those findings and the theory of data management, recommendations and proposals for improving data quality were given. The results also showed that the biggest reasons for poor data quality are the lack of data governance in the company and the restrictions of the current master data solutions.
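The thesis relies on a commercial cleansing solution, but as a rough illustration of the kind of rule-based cleansing it describes, the Python sketch below normalizes and deduplicates a few invented supplier records. The field names and the matching rule (normalized name plus VAT number) are assumptions for illustration, not the vendor's logic.

    # Illustrative rule-based cleansing of supplier master data (invented records).
    # Normalizes names and country codes, then collapses records that agree on the
    # normalized name + VAT number. This is a sketch, not the commercial tool's logic.
    import re

    records = [
        {"name": "ACME  Oy ", "country": "Finland", "vat": "FI12345678"},
        {"name": "Acme Oy",   "country": "FI",      "vat": "fi12345678"},
        {"name": "Beta Ltd",  "country": "GB",      "vat": "GB999999973"},
    ]

    COUNTRY_CODES = {"finland": "FI", "fi": "FI", "gb": "GB", "united kingdom": "GB"}

    def normalize(rec):
        name = re.sub(r"\s+", " ", rec["name"]).strip().upper()
        country = COUNTRY_CODES.get(rec["country"].strip().lower(), rec["country"].upper())
        vat = rec["vat"].replace(" ", "").upper()
        return {"name": name, "country": country, "vat": vat}

    def deduplicate(recs):
        seen = {}
        for rec in map(normalize, recs):
            key = (rec["name"], rec["vat"])   # simple match rule: name + VAT number
            seen.setdefault(key, rec)         # keep the first occurrence as the survivor
        return list(seen.values())

    print(deduplicate(records))   # the two ACME rows collapse into one cleansed record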
Abstract:
A short set of slides explaining the workflow from a university website to equipment.data.ac.uk.
Abstract:
The main purposes of this thesis are the proper conceptualization and presentation of material flow management, a new trend that has generated expectations abroad, mainly in European and North American industries. The present work also seeks to analyze the implications and opportunities of applying material flow management in a specific company of our country, “PROSISA” Productos Sintéticos S.A., also considering the effects this could have on the plastics industry in general. Through this analysis, the aim is also to identify the prospects the synthetics company may have when applying material flow management within its production process, mainly with respect to waste and possible solutions for optimizing the resources employed.
Abstract:
With the objective of minimizing costs and improving their image in the consumer and export markets, car battery industries began to seek environmental alternatives geared towards sustainable development. The reverse logistics flow represents an unprecedented tool for the economic and operational development of company activities, as well as a differentiator in the search for competitive advantages through environmentally correct practices. The aim of this paper is to describe the reverse logistics chain adopted by automotive battery industries in the midwest of the state of São Paulo, proposing a reverse logistics framework for small manufacturers that promotes actions aimed at avoiding harm to the environment. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
Includes bibliography
Abstract:
Data Distribution Management (DDM) is a core part of the High Level Architecture (HLA) standard; its goal is to optimize the resources that simulation environments use to exchange data. It has to filter and match the information generated during a simulation so that each federate (a simulation entity) only receives the information it needs. This matching must be done quickly and accurately in order to achieve good performance and avoid the transmission of irrelevant data, otherwise network resources may saturate quickly. The main topic of this thesis is the implementation of an impartial (“super partes”) DDM testbed that evaluates the quality of DDM approaches of all kinds: it supports both region-based and grid-based approaches, and it can also accommodate other, as yet unknown, methods. It ranks them on three factors: execution time, memory usage and distance from the optimal solution. A predefined set of instances is already available, and instances can also be created with user-provided parameters. The thesis is structured as follows. We start by introducing what DDM and HLA are and what they do in detail. The first chapter then describes the state of the art, providing an overview of the best-known resolution approaches and the pseudocode of the most interesting ones. The third chapter describes how the testbed we implemented is structured. In the fourth chapter we present and compare the results obtained from the execution of the four approaches we implemented. The result of the work described in this thesis can be downloaded from SourceForge using the following link: https://sourceforge.net/projects/ddmtestbed/. It is licensed under the GNU General Public License version 3.0 (GPLv3).
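The abstract does not spell out what the matching step computes, so here is a minimal brute-force sketch in Python of the overlap test a DDM testbed has to benchmark: every update extent is compared against every subscription extent, and a pair matches when their intervals overlap on every dimension of the routing space. The extents below are invented and the code is illustrative, not part of the testbed.

    # Brute-force DDM matching sketch (illustrative, not the testbed's code).
    # An extent is a list of (low, high) intervals, one per routing-space dimension.
    # Two extents match when their intervals overlap on every dimension.

    def overlaps(update, subscription):
        return all(u_lo <= s_hi and s_lo <= u_hi
                   for (u_lo, u_hi), (s_lo, s_hi) in zip(update, subscription))

    def brute_force_matching(updates, subscriptions):
        # O(n * m) comparisons: every update against every subscription.
        return [(i, j)
                for i, u in enumerate(updates)
                for j, s in enumerate(subscriptions)
                if overlaps(u, s)]

    # Invented two-dimensional extents
    updates = [[(0, 5), (0, 5)], [(10, 12), (3, 4)]]
    subscriptions = [[(4, 8), (1, 2)], [(6, 9), (0, 1)]]
    print(brute_force_matching(updates, subscriptions))   # -> [(0, 0)]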
Abstract:
Data Distribution Management (DDM) is a component of the High Level Architecture standard. Its task is to detect overlaps between update and subscription extents efficiently. This thesis discusses the need for a framework and the reasons it was implemented. Testing algorithms under fair comparison conditions, libraries that ease the implementation of algorithms, and automation of the build phase were fundamental motivations for starting work on the framework. The driving motivation was that, while surveying scientific papers on DDM and on the various algorithms, we noticed that each paper created its own ad hoc data for testing. One objective of this framework is therefore to make it possible to compare algorithms on a consistent set of data. We decided to test the framework on the Cloud to obtain a more reliable comparison between runs by different users. Two of the most widely used services were considered: Amazon AWS EC2 and Google App Engine. The advantages and disadvantages of each are presented, along with the reason Google App Engine was chosen. Four algorithms were developed: Brute Force, Binary Partition, Improved Sort and Interval Tree Matching. Tests were carried out on execution time and peak memory usage. The results show that Interval Tree Matching and Improved Sort are the most efficient. All tests were run on the sequential versions of the algorithms, so there may be a further reduction in execution time for the Interval Tree Matching algorithm.
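As a companion to the brute-force sketch above, the following Python fragment illustrates, for one dimension only, the sort-based idea behind the faster algorithms named in the abstract (Improved Sort, Interval Tree Matching): endpoints are sorted once and swept from left to right, so each extent is matched only against the extents that are currently open rather than against all of them. It is an illustrative sketch under these assumptions, not the thesis implementation.

    # One-dimensional sort-based matching sketch (illustrative, not the thesis code).
    # Intervals are (low, high) pairs. Endpoints from both sets are sorted once and
    # swept left to right; an update matches every subscription that is active
    # (already opened, not yet closed) when it opens, and vice versa.

    def sweep_matching(updates, subscriptions):
        events = []
        for i, (lo, hi) in enumerate(updates):
            events.append((lo, 0, "open", "u", i))
            events.append((hi, 1, "close", "u", i))
        for j, (lo, hi) in enumerate(subscriptions):
            events.append((lo, 0, "open", "s", j))
            events.append((hi, 1, "close", "s", j))
        events.sort()                     # opens sort before closes at equal coordinates

        active_u, active_s, matches = set(), set(), set()
        for _, _, kind, side, idx in events:
            if kind == "open":
                if side == "u":
                    active_u.add(idx)
                    matches.update((idx, j) for j in active_s)
                else:
                    active_s.add(idx)
                    matches.update((i, idx) for i in active_u)
            else:
                (active_u if side == "u" else active_s).discard(idx)
        return sorted(matches)

    updates = [(0, 5), (10, 12)]          # invented intervals
    subscriptions = [(4, 8), (6, 9)]
    print(sweep_matching(updates, subscriptions))   # -> [(0, 0)]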