4 results for Memory Management (Computer science)
at Universidade do Minho
Abstract:
Risk management is an important component of project management, and such a process begins with risk assessment and evaluation. In this research project, a detailed analysis was made of the methodologies used by Banco da Amazonia S.A. to treat risks in investment projects. Investment projects submitted to the FNO (Constitutional Fund for Financing the North) during 2011 and 2012 were considered for that purpose. It was found that the evaluators of this credit institution use multiple indicators for risk assessment, which assume a central role in decision-making and contribute to the approval or rejection of the submitted projects; namely, the proven ability to pay, the financial records of project promoters, several financial restrictions, the level of equity, the level of financial indebtedness, evidence of the existence of a consumer market, the proven experience of the partners/owners in the business, environmental aspects, etc. Furthermore, the bank has technological systems to support the risk assessment process, an internal communication system, and a unique system for the management of operational risk.
Abstract:
Large-scale distributed data stores rely on optimistic replication to scale and remain highly available in the face of network partitions. Managing data without coordination results in eventually consistent data stores that allow for concurrent data updates. These systems often use anti-entropy mechanisms (like Merkle Trees) to detect and repair divergent data versions across nodes. However, in practice hash-based data structures are too expensive for large amounts of data and create too many false conflicts. Another aspect of eventual consistency is detecting write conflicts. Logical clocks are often used to track data causality, necessary to detect causally concurrent writes on the same key. However, there is a non-negligible metadata overhead per key, which also keeps growing with time, proportionally with the node churn rate. Another challenge is deleting keys while respecting causality: while the values can be deleted, per-key metadata cannot be permanently removed without coordination. We introduce a new causality management framework for eventually consistent data stores that leverages node logical clocks (Bitmapped Version Vectors) and a new key logical clock (Dotted Causal Container) to provide advantages on multiple fronts: 1) a new, efficient and lightweight anti-entropy mechanism; 2) greatly reduced per-key causality metadata size; 3) accurate key deletes without permanent metadata.
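As a rough illustration of the causality checks this abstract refers to, the following Python sketch classifies two versions of a key as causally ordered or concurrent using plain version vectors. The paper's Bitmapped Version Vectors and Dotted Causal Containers are more compact encodings; the dict representation and function names here are illustrative assumptions, not the paper's code.

    # Minimal version-vector causality sketch (illustrative, not the paper's code).

    def dominates(a, b):
        """True if version vector `a` causally dominates (or equals) `b`."""
        return all(a.get(node, 0) >= count for node, count in b.items())

    def compare(a, b):
        """Classify the causal relation between two version vectors."""
        if dominates(a, b) and dominates(b, a):
            return "equal"
        if dominates(a, b):
            return "a-after-b"   # b happened before a; b's version can be discarded
        if dominates(b, a):
            return "b-after-a"
        return "concurrent"      # a write conflict: both versions must be kept

    # Two replicas updated the same key without coordination:
    v1 = {"nodeA": 2, "nodeB": 1}
    v2 = {"nodeA": 1, "nodeB": 2}
    print(compare(v1, v2))  # -> "concurrent": siblings are kept, not overwritten

Detecting "concurrent" rather than silently overwriting is what makes the store eventually consistent without losing writes; the per-key metadata growth the abstract mentions is exactly the cost of keeping such clocks for every key.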
Abstract:
This article presents work performed in the maintenance department of a furniture company in Portugal in order to develop and implement autonomous maintenance. The main objective of the project was to increase and make effective the autonomous maintenance tasks performed by production operators, thereby avoiding unplanned downtime due to equipment failures. Although some autonomous maintenance tasks were already carried out within the company, a preliminary study revealed weaknesses in the application of this tool. In the initial phase of this pilot project, the main problems encountered at the level of autonomous maintenance were related to the lack of time to carry out these tasks, showing that the stipulated procedures were far from the real needs of the company. To solve these problems, a pilot project was conducted that made several changes to the performance of autonomous maintenance tasks, making them standard and adapted to the reality of each production line. There was a general improvement in the factory indicators and, essentially, a behavioral change, since the operators felt that their opinions were taken into account and began to understand the importance of the small tasks performed by them.
Abstract:
Traffic Engineering (TE) approaches are increasingly important in network management to allow an optimized configuration and resource allocation. In link-state routing, the task of setting appropriate weights to the links is both an important and a challenging optimization task. A number of different approaches have been put forward towards this aim, including the successful use of Evolutionary Algorithms (EAs). In this context, this work addresses the evaluation of three distinct EAs, one single-objective and two multi-objective EAs, in two tasks related to weight setting optimization towards optimal intra-domain routing, knowing the network topology and aggregated traffic demands and seeking to minimize network congestion. In both tasks, the optimization considers scenarios where there is a dynamic alteration in the state of the system, the first considering changes in the traffic demand matrices and the latter considering the possibility of link failures. The methods thus need to simultaneously optimize for both conditions, the normal and the altered one, following a preventive TE approach towards robust configurations. Since this can be formulated as a bi-objective function, the use of multi-objective EAs, such as SPEA2 and NSGA-II, came naturally, and these were compared to a single-objective EA. The results show a remarkable behavior of NSGA-II in all proposed tasks, scaling well for harder instances and thus presenting itself as the most promising option for TE in these scenarios.
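To make the bi-objective formulation concrete, here is a minimal Python sketch of how a candidate link-weight setting could be scored once for the normal state and once for the altered state, with Pareto dominance deciding which candidate a multi-objective EA such as NSGA-II would prefer. The function names and the additive congestion measure are placeholder assumptions; the paper's evaluator routes the demand matrix over the real topology.

    # Bi-objective weight-setting sketch (placeholder model, not the paper's evaluator).
    import random

    def congestion(weights, loads):
        """Placeholder congestion cost: a real evaluator would run shortest-path
        routing with these weights and measure link utilization under `loads`."""
        return sum(w * load for w, load in zip(weights, loads))

    def evaluate(weights, normal_loads, failure_loads):
        """Bi-objective fitness: congestion in the normal and the altered state."""
        return (congestion(weights, normal_loads),
                congestion(weights, failure_loads))

    def pareto_dominates(f1, f2):
        """f1 dominates f2 if it is no worse in both objectives and better in one."""
        return (all(a <= b for a, b in zip(f1, f2))
                and any(a < b for a, b in zip(f1, f2)))

    # Compare two random weight settings on a toy 5-link network:
    random.seed(0)
    normal  = [1.0, 0.5, 2.0, 1.5, 0.8]
    failure = [1.2, 0.0, 2.5, 1.8, 1.0]   # link 2 failed, its traffic rerouted
    w1 = [random.randint(1, 20) for _ in range(5)]
    w2 = [random.randint(1, 20) for _ in range(5)]
    f1, f2 = evaluate(w1, normal, failure), evaluate(w2, normal, failure)
    print(f1, f2, pareto_dominates(f1, f2))

Scoring each candidate under both the normal and the altered state is what makes the approach preventive: the non-dominated front returned by NSGA-II or SPEA2 contains weight settings that remain robust when the failure or demand change actually occurs.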