943 results for Shopping for Computer Restaurant Management
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Management from the NOVA – School of Business and Economics
Abstract:
Risk management is an important component of project management, and the process begins with risk assessment and evaluation. In this research project, a detailed analysis was made of the methodologies used to treat risk in investment projects adopted by Banco da Amazonia S.A. Investment projects submitted to the FNO (Constitutional Fund for Financing the North) during 2011 and 2012 were considered for that purpose. It was found that the evaluators of this credit institution use multiple indicators for risk assessment, which assume a central role in decision-making and contribute to the approval or rejection of the submitted projects: namely, proven ability to pay, the financial records of project promoters, several financial restrictions, level of equity, level of financial indebtedness, evidence of the existence of a consumer market, the proven experience of the partners/owners in the business, environmental aspects, etc. Furthermore, the bank has technological systems to support the risk assessment process, an internal communication system and a single system for the management of operational risk.
Abstract:
Large-scale distributed data stores rely on optimistic replication to scale and remain highly available in the face of network partitions. Managing data without coordination results in eventually consistent data stores that allow for concurrent data updates. These systems often use anti-entropy mechanisms (like Merkle Trees) to detect and repair divergent data versions across nodes. However, in practice hash-based data structures are too expensive for large amounts of data and create too many false conflicts. Another aspect of eventual consistency is detecting write conflicts. Logical clocks are often used to track data causality, which is necessary to detect causally concurrent writes on the same key. However, there is a non-negligible metadata overhead per key, which also keeps growing with time, in proportion to the node churn rate. Another challenge is deleting keys while respecting causality: while the values can be deleted, per-key metadata cannot be permanently removed without coordination. We introduce a new causality management framework for eventually consistent data stores that leverages node logical clocks (Bitmapped Version Vectors) and a new key logical clock (Dotted Causal Container) to provide advantages on multiple fronts: 1) a new efficient and lightweight anti-entropy mechanism; 2) greatly reduced per-key causality metadata size; 3) accurate key deletes without permanent metadata.
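To make concrete the kind of per-key causality tracking this abstract refers to, here is a minimal sketch of conflict detection with plain version vectors; it illustrates how causally concurrent writes on the same key are recognised, and is not the paper's Bitmapped Version Vector or Dotted Causal Container mechanism. Function and replica names are illustrative.

# Minimal sketch of per-key causality tracking with plain version vectors.
# It shows how causally concurrent writes are detected; it is NOT the Bitmapped
# Version Vector / Dotted Causal Container machinery described above.

def dominates(a: dict, b: dict) -> bool:
    """True if version vector `a` has seen every event recorded in `b`."""
    return all(a.get(node, 0) >= counter for node, counter in b.items())

def compare(a: dict, b: dict) -> str:
    """Classify the causal relation between two version vectors."""
    if dominates(a, b) and dominates(b, a):
        return "equal"
    if dominates(a, b):
        return "a-after-b"      # a causally follows b
    if dominates(b, a):
        return "b-after-a"      # b causally follows a
    return "concurrent"         # neither saw the other: a write conflict

# Two replicas update the same key without coordination.
vv_replica1 = {"n1": 3, "n2": 1}
vv_replica2 = {"n1": 2, "n2": 2}
print(compare(vv_replica1, vv_replica2))  # -> "concurrent"

Note that the per-key size of such a vector grows with the number of nodes that ever wrote the key, which is exactly the metadata overhead the framework above aims to reduce.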
Abstract:
The research described in this thesis was developed as part of the Reliability and Field Data Management for Multi-component Products (REFIDAM) Project. This project was funded under the Applied Research Grants Scheme administered by Enterprise Ireland, and was a partnership between Galway-Mayo Institute of Technology and Thermo King Europe. The project aimed to develop a system to manage the information required for reliability assessment and improvement of multi-component products, by establishing information flows within the company and information exchange with fleet users.
Abstract:
Quality Management, Integrated Technical Management Systems, ITMS, Technical Elements, Environment, Occupational Health and Safety, OH&S, Standards, ISO, General Regulations, Integration, Management Functions, Computer Centre, Success Concepts, Documentation, PCT, QMS, EMS, OH&S-MS, Portioning, Evaluation, Technical Cycle, Technical Compliance, Framework
Abstract:
The project Gestió de comandes d'un restaurant amb .NET (order management for a restaurant with .NET) consists of the design and implementation of a comprehensive solution for managing the entire business of a restaurant.
Abstract:
In this paper a novel methodology is introduced, aimed at minimizing the probability of network failure and the failure impact (in terms of QoS degradation) while optimizing resource consumption. A detailed study of MPLS recovery techniques and their GMPLS extensions is also presented. In this scenario, some features for reducing the failure impact while offering minimum failure probabilities are also analyzed. Novel two-step routing algorithms using this methodology are proposed. Results show that these methods offer high protection levels with optimal resource consumption.
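As a rough, hypothetical illustration of a two-step routing decision of this kind (not the algorithms proposed in the paper), the sketch below first discards candidate paths whose end-to-end failure probability exceeds a target, and then picks the cheapest remaining path in terms of resource cost; the topology, probabilities and threshold are invented for the example.

# Hypothetical two-step route selection: step 1 keeps only the candidate paths
# whose end-to-end failure probability stays below a target; step 2 picks the
# cheapest of those in terms of resource consumption.
from math import prod

# links: (u, v) -> {"p_fail": probability the link fails, "cost": resource cost}
LINKS = {
    ("A", "B"): {"p_fail": 0.01, "cost": 1},
    ("B", "D"): {"p_fail": 0.05, "cost": 1},
    ("A", "C"): {"p_fail": 0.02, "cost": 1},
    ("C", "D"): {"p_fail": 0.02, "cost": 2},
}

def neighbours(node):
    return [v for (u, v) in LINKS if u == node]

def simple_paths(src, dst, path=None):
    path = [src] if path is None else path
    if src == dst:
        yield path
        return
    for nxt in neighbours(src):
        if nxt not in path:
            yield from simple_paths(nxt, dst, path + [nxt])

def failure_probability(path):
    return 1 - prod(1 - LINKS[(u, v)]["p_fail"] for u, v in zip(path, path[1:]))

def route(src, dst, max_p_fail=0.05):
    # Step 1: keep only paths that meet the failure-probability target.
    candidates = [p for p in simple_paths(src, dst)
                  if failure_probability(p) <= max_p_fail]
    # Step 2: among those, choose the path with the lowest resource cost.
    return min(candidates,
               key=lambda p: sum(LINKS[(u, v)]["cost"] for u, v in zip(p, p[1:])),
               default=None)

print(route("A", "D"))  # -> ['A', 'C', 'D'] under these invented numbers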
Abstract:
Due to the high cost of a large ATM network working at full strength on which to apply our ideas about network management, i.e., dynamic virtual path (VP) management and fault restoration, we developed a distributed simulation platform for performing our experiments. This platform also had to be capable of supporting other sorts of tests, such as connection admission control (CAC) algorithms, routing algorithms, and accounting and charging methods. The platform was conceived as a very simple, event-oriented and scalable simulation. The main goal was the simulation of a working ATM backbone network with a potentially large number of nodes (hundreds). As research into control algorithms and low-level, or rather cell-level, methods was beyond the scope of this study, the simulation took place at the connection level, i.e., there was no real traffic of cells. The simulated network behaved like a real network, accepting and rejecting connections, and could be managed either with standard tools, such as SNMP-based ones, or with experimental tools using the node API.
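The following sketch shows what connection-level (rather than cell-level) simulation means in practice: connection set-up and release events are processed in time order, and a simple CAC rule accepts or rejects each request against a virtual path's capacity, with no individual cells ever being generated. The class and function names are illustrative and are not the platform's actual API.

# Connection-level simulation sketch: events are connection set-ups and releases,
# and a simple CAC rule admits or rejects them against a VP's capacity; no
# cell-level traffic is modelled.
import heapq

class VirtualPath:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.reserved = 0.0

    def cac_admit(self, bw):
        """Connection admission control: admit only if capacity is not exceeded."""
        if self.reserved + bw <= self.capacity:
            self.reserved += bw
            return True
        return False

def simulate(events, vp):
    """events: heap of (time, kind, bandwidth), where kind is 'setup' or 'release'."""
    accepted = rejected = 0
    while events:
        _, kind, bw = heapq.heappop(events)
        if kind == "setup":
            if vp.cac_admit(bw):
                accepted += 1
            else:
                rejected += 1
        else:  # release frees the reserved bandwidth
            vp.reserved -= bw
    return accepted, rejected

vp = VirtualPath(capacity_mbps=10)
events = [(0.0, "setup", 4), (0.5, "setup", 4), (1.0, "setup", 4), (1.5, "release", 4)]
heapq.heapify(events)
print(simulate(events, vp))  # -> (2, 1): the third set-up exceeds the VP capacity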
Abstract:
We present a system for dynamic network resource configuration in environments with bandwidth reservation and path restoration mechanisms. Our focus is on the dynamic bandwidth management results, although the main goal of the system is the integration of the different mechanisms that manage the reserved paths (bandwidth, restoration, and spare capacity planning). The objective is to avoid conflicts between these mechanisms. The system is able to dynamically manage a logical network, such as a virtual path network in ATM or a label switched path network in MPLS. It has been designed to be modular, in the sense that it can be activated or deactivated, and it can be applied to only a sub-network. The system design and implementation are based on a multi-agent system (MAS). We also include details of its architecture and implementation.
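As an illustration of the conflict-avoidance idea (a hypothetical sketch, not the MAS architecture of the paper), the snippet below has per-path agents propose bandwidth changes while a coordinator applies a proposal only if the spare capacity reserved for restoration is preserved; all names and numbers are invented.

# Hypothetical sketch: per-path agents propose new reservations, and a coordinator
# accepts a proposal only if the spare capacity kept for restoration is preserved,
# so bandwidth management does not conflict with spare capacity planning.

class PathAgent:
    """Watches one logical path (VP/LSP) and proposes a new reservation."""
    def __init__(self, path_id, utilisation):
        self.path_id = path_id
        self.utilisation = utilisation

    def propose(self):
        # Ask for roughly 20% headroom above the currently observed load.
        return self.path_id, round(self.utilisation * 1.2, 1)

class Coordinator:
    """Arbitrates proposals against the link capacity and the restoration spare."""
    def __init__(self, link_capacity, spare_for_restoration):
        self.capacity = link_capacity
        self.spare = spare_for_restoration
        self.allocations = {}

    def handle(self, path_id, requested):
        others = sum(bw for p, bw in self.allocations.items() if p != path_id)
        if others + requested + self.spare <= self.capacity:
            self.allocations[path_id] = requested
            return "accepted"
        return "rejected"  # would eat into the spare capacity planned for restoration

coord = Coordinator(link_capacity=100, spare_for_restoration=30)
for agent in (PathAgent("vp1", utilisation=35), PathAgent("vp2", utilisation=30)):
    print(agent.path_id, coord.handle(*agent.propose()))
# -> vp1 accepted (42 + 30 <= 100); vp2 rejected (42 + 36 + 30 > 100)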
Abstract:
A study based on the economic analysis of an existing tourism company, applying techniques and methods based on cost accounting to support decision-making. The objective is to determine the cost derived from the production activity of the company analysed, and thereby to propose a series of strategic actions that will keep it competitive with the other restaurant businesses in its surroundings. The company analysed is the Empòrium restaurant.
Abstract:
Technological limitations and power constraints are resulting in high-performance parallel computing architectures that are based on large numbers of high-core-count processors. Commercially available processors are now at 8 and 16 cores, and experimental platforms, such as the many-core Intel Single-chip Cloud Computer (SCC) platform, provide much higher core counts. These trends are presenting new sets of challenges to HPC applications, including programming complexity and the need for extreme energy efficiency. In this work, we first investigate the power behavior of scientific PGAS application kernels on the SCC platform, and explore opportunities and challenges for power management within the PGAS framework. Results obtained via empirical evaluation of Unified Parallel C (UPC) applications on the SCC platform under different constraints show that, for specific operations, the potential for energy savings in PGAS is large, and that power/performance trade-offs can be effectively managed using a cross-layer approach. We investigate cross-layer power management using PGAS language extensions and runtime mechanisms that manipulate power/performance trade-offs. Specifically, we present the design, implementation and evaluation of such a middleware for application-aware cross-layer power management of UPC applications on the SCC platform. Finally, based on our observations, we provide a set of recommendations and insights that can be used to support similar power management for PGAS applications on other many-core platforms.
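To give a concrete, if simplified, picture of cross-layer power management (an illustrative sketch, not the UPC language extensions or SCC runtime described above), the application layer below passes phase hints to a runtime layer that picks a frequency level: communication-bound phases run at a lower setting and compute-bound phases at a higher one. The hint names and frequency steps are assumptions.

# Illustrative cross-layer power management: the application passes phase hints to
# a runtime that chooses a frequency level. Hint names, levels and numbers are
# assumptions, not the UPC/SCC interfaces of the work described above.

POWER_LEVELS = {"high": 800, "medium": 533, "low": 400}  # hypothetical MHz steps

class PowerRuntime:
    def __init__(self):
        self.current = "high"

    def set_level(self, level):
        if level != self.current:
            print(f"scaling cores: {POWER_LEVELS[self.current]} -> {POWER_LEVELS[level]} MHz")
            self.current = level

    def phase(self, hint):
        # Communication-bound phases tolerate a lower frequency with little
        # performance loss, which is where most of the energy saving comes from.
        self.set_level("low" if hint == "communication" else "high")

runtime = PowerRuntime()
runtime.phase("computation")    # stays at the high setting
runtime.phase("communication")  # scales down for a communication-bound phase
runtime.phase("computation")    # scales back up before the next compute phase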
Abstract:
BACKGROUND: To compare outcomes for patients with recurrent or persistent papillary thyroid cancer (PTC) who had metastatic tumors that were fluorodeoxyglucose-positron emission tomography (FDG-PET) positive or negative, and to determine whether the FDG-PET scan findings changed the outcome of medical and surgical management. METHODS: From a prospective thyroid cancer database, we retrospectively identified patients with recurrent or persistent PTC and reviewed data on demographics, initial stage, location and extent of persistent or recurrent disease, clinical management, disease-free survival and outcome. We further identified subsets of patients who had an FDG-PET scan or an FDG-PET/CT scan and whole-body radioactive iodine scans, and categorized them by whether they had one or more FDG-PET-avid (PET-positive) lesions or PET-negative lesions. The medical and surgical treatments and outcomes of these patients were compared. RESULTS: Between 1984 and 2008, 41 of 141 patients who had recurrent or persistent PTC underwent FDG-PET (n = 11) or FDG-PET/CT scans (n = 30); 22 patients (54%) had one or more PET-positive lesion(s), 17 (41%) had PET-negative lesions, and two had indeterminate lesions. Most PET-positive lesions were located in the neck (55%). Patients who had a PET-positive lesion had a significantly higher TNM stage (P = 0.01), older age (P = 0.03), and higher thyroglobulin levels (P = 0.024). Only patients who had PET-positive lesions died (5/22 vs. 0/17 for PET-negative lesions; P = 0.04). In two of the seven patients who underwent surgical resection of their PET-positive lesions, loco-regional control was obtained without evidence of residual disease. CONCLUSION: Patients with recurrent or persistent PTC and FDG-PET-positive lesions have a worse prognosis. In some patients, loco-regional control can be obtained by reoperation, without evidence of residual disease, if the lesion is localized in the neck or mediastinum.
Abstract:
This paper describes a Computer-Supported Collaborative Learning (CSCL) case study in engineering education carried out within the context of a network management course. The case study shows that the use of two computing tools developed by the authors and based on Free- and Open-Source Software (FOSS) provides significant educational benefits over traditional engineering pedagogical approaches, in terms of both concept and engineering-competency acquisition. First, the Collage authoring tool guides and supports the course teacher in the process of authoring computer-interpretable representations (using the IMS Learning Design standard notation) of effective collaborative pedagogical designs. In addition, the Gridcole system supports the enactment of that design by guiding the students through the prescribed sequence of learning activities. The paper introduces the goals and context of the case study, elaborates on how Collage and Gridcole were employed, describes the applied evaluation methodology, and discusses the most significant findings derived from the case study.
Abstract:
We present the case of a young man with compression of both renal arteries by the crura of the diaphragm. Correct diagnosis of renal artery entrapment is difficult but crucial. The investigations rely on a high index of suspicion and include Doppler ultrasound and spiral computed tomography angiography, which permits visualization of the diaphragm and its relationship with the aorta. This pathology, unlike common renal artery stenoses, requires surgical decompression and sometimes an aortorenal bypass graft.