865 results for IT Service Management
Abstract:
Current studies indicate a need to integrate environmental management with manufacturing strategy, covering topics such as cross-functional integration, environmental impact, and waste reduction. Nevertheless, such studies are relatively rare, and there is still a need for research in specific regional contexts. At the same time, the results found are not unanimous. Given these gaps, the objective of this article is to analyze whether environmental management can be considered a new competitive priority for manufacturing enterprises located in Brazil. A cross-sectional survey was conducted with Brazilian companies certified to ISO 14001. Sixty-five valid questionnaires were analyzed through Structural Equation Modelling (SEM). The first conclusion is that environmental management takes a preventive approach in the sample analyzed, focused on eco-efficiency, which potentially does not create a competitive advantage. This preventive approach inhibits environmental management from being regarded as a new competitive manufacturing priority in the full sense defined by the literature. Another important result is that environmental management, although following a preventive focus, may positively influence the four manufacturing priorities: cost, quality, flexibility and delivery. (C) 2011 Elsevier Ltd. All rights reserved.
Assessment of referrals to an OT consultation-liaison service: a retrospective and comparative study
Abstract:
The objective was to conduct a retrospective and comparative study of the requests for consultation-liaison (RCLs) sent, over a period of six years, to the Occupational Therapy (OT) team that acts as the Consultation-Liaison Service in Mental Health. During the studied period, 709 RCLs were made and 633 patients received OT consultations. The comparison group comprised 1,129 consecutive referrals to the psychiatric CL service within the same period, which were also retrospectively reviewed. Regarding the RCLs to the OT team, most of the subjects were women with incomplete elementary schooling, with a mean age of 39.2 years, who were self-employed or retired. Internal Medicine was responsible for most of the RCLs. The mean length of hospitalization was 51 days and the mean rate of referral was 0.5%; the most frequent reason for a request was related to emotional aspects, and the most frequent psychiatric diagnosis was mood disorder. It is concluded that there is a clear demand for the development of consultation-liaison in OT, particularly with regard to the promotion of mental health in general hospitals.
Abstract:
The aim of this paper is to analyze the determining factors in the pricing of handsets sold with service plans, using the hedonic price method. This was undertaken by building a database comprising 48 handset models, under nine different service plans, over a period of 53 weeks in 2008, resulting in 27 different attributes and a total of nearly 300,000 data registers. The results suggest that the value of monthly subscriptions and calling minutes is important in explaining handset prices. Furthermore, both the physical volume and the number of camera megapixels had an effect on prices: the bigger the handset, the cheaper it becomes, and the more megapixels a camera phone has, the more expensive it becomes. Additionally, it was found that in 2008 Brazilian phone companies were subsidizing handsets with enabled data connections.
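The hedonic price method described above explains a product's price through the implicit prices of its attributes, typically via regression. A minimal sketch of the idea (the synthetic data, attribute names, and coefficients below are illustrative assumptions, not the paper's actual dataset or estimates):

```python
import numpy as np

# Synthetic example: price explained by camera megapixels, handset volume,
# and monthly subscription value. The signs mirror the abstract's findings:
# more megapixels -> pricier, larger volume -> cheaper.
rng = np.random.default_rng(0)
n = 500
megapixels = rng.uniform(1, 8, n)
volume_cm3 = rng.uniform(50, 120, n)
subscription = rng.uniform(20, 100, n)  # hypothetical monthly plan value
price = 100 + 30 * megapixels - 1.5 * volume_cm3 + 2.0 * subscription

# Hedonic regression: ordinary least squares on the attribute matrix.
# Each fitted coefficient is the implicit ("shadow") price of an attribute.
X = np.column_stack([np.ones(n), megapixels, volume_cm3, subscription])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
print(coef)
```

With noiseless synthetic data, the regression recovers the planted attribute prices exactly; on real data the coefficients would be estimated with error.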
Abstract:
A supply chain starts when a demand arises and ends with the transport and delivery of material to its final destination. With this in mind, most manufacturers, processors, and distributors of consumer goods, spare parts and components for production, and finished goods, in national or international markets, may lack information about and control over their supply chain performance. This article presents the evolution of logistics concepts and models, purchase order and international supplier management, and the control tower with its logistics information systems. It also presents a real process implementation at a global high-tech manufacturing company.
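A control tower consolidates purchase-order and shipment events into a single view of supply-chain performance. A minimal sketch of that idea (the milestone names and statuses are assumptions for illustration, not the article's actual system):

```python
from dataclasses import dataclass, field

# Hypothetical milestones a purchase order passes through.
MILESTONES = ["order_confirmed", "shipped", "customs_cleared", "delivered"]

@dataclass
class PurchaseOrder:
    po_id: str
    supplier: str
    milestones: dict = field(default_factory=dict)  # milestone -> done?

class ControlTower:
    """Toy logistics control tower: tracks each PO's milestones and
    reports which orders have not yet reached delivery."""
    def __init__(self):
        self.orders = {}

    def register(self, po):
        po.milestones = {m: False for m in MILESTONES}
        self.orders[po.po_id] = po

    def record(self, po_id, milestone):
        self.orders[po_id].milestones[milestone] = True

    def pending(self):
        return [po_id for po_id, po in self.orders.items()
                if not po.milestones["delivered"]]

tower = ControlTower()
tower.register(PurchaseOrder("PO-1", "SupplierA"))
tower.register(PurchaseOrder("PO-2", "SupplierB"))
tower.record("PO-1", "order_confirmed")
for m in MILESTONES:
    tower.record("PO-2", m)
print(tower.pending())  # ['PO-1']
```

A real control tower would ingest these events from carriers' and suppliers' information systems rather than manual calls.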
Abstract:
Asset Management (AM) is a set of procedures, operable at the strategic, tactical, and operational levels, for managing a physical asset's performance, associated risks, and costs over its whole life cycle. AM combines the engineering, managerial, and informatics points of view. In addition to internal drivers, AM is driven by the demands of customers (social pull) and regulators (environmental mandates and economic considerations). AM can follow either a top-down or a bottom-up approach. Considering rehabilitation planning at the bottom-up level, the main issue is to rehabilitate the right pipe at the right time with the right technique. Finding the right pipe may be possible and practicable, but determining the timeliness of the rehabilitation and choosing the techniques to rehabilitate with is rather abstruse. It is a truism that rehabilitating an asset too early is unwise, just as doing it late may entail extra expenses en route, in addition to the cost of the rehabilitation exercise per se. One is confronted with a typical 'Hamlet-esque' dilemma: 'to repair or not to repair'; or, put another way, 'to replace or not to replace'. The decision in this case is governed by three factors, not necessarily interrelated: quality of customer service, costs, and budget over the life cycle of the asset in question. The goal of replacement planning is to find the juncture in the asset's life cycle where the cost of replacement is balanced by the rising maintenance costs and the declining level of service. System maintenance aims at improving performance and keeping the asset in good working condition for as long as possible. Effective planning is used to target maintenance activities to meet these goals and minimize costly exigencies. The main objective of this dissertation is to develop a process model for asset replacement planning.
The aim of the model is to determine the optimal pipe replacement year by comparing, over time, the annual operating and maintenance costs of the existing asset with the annuity of the investment in a new equivalent pipe at the best market price. It is proposed that risk cost provides an appropriate framework for deciding the balance between investing to replace an asset and spending on operations to maintain it. The model describes a practical approach to estimating when an asset should be replaced. A comprehensive list of criteria to be considered is outlined, the main criterion being a vis-à-vis comparison between maintenance and replacement expenditures. The costs of maintaining the assets should be described by a cost function related to the asset type, the risks to the safety of people and property owing to the declining condition of the asset, and the predicted frequency of failures. The cost functions reflect the condition of the existing asset at the time the decision to maintain or replace is taken: age, level of deterioration, and risk of failure. The process model is applied to the wastewater network of Oslo, the capital city of Norway, and uses available real-world information to forecast the life-cycle costs of maintenance and rehabilitation strategies and to support infrastructure management decisions. The case study provides insight into the various definitions of 'asset lifetime': service life, economic life, and physical life. The results recommend that one common lifetime value should not be applied to all the pipelines in the stock for long-term investment planning; rather, it would be wiser to define different values for different cohorts of pipelines, to reduce the uncertainties associated with generalisations made for simplification. It is envisaged that the more criteria the municipality is able to include when estimating maintenance costs for the existing assets, the more precise the estimation of the expected service life will be.
The ability to include social costs makes it possible to compute the asset's life based not only on its physical characteristics but also on the sensitivity of network areas to the social impact of failures. This type of economic analysis is very sensitive to model parameters that are difficult to determine accurately. The main value of this approach is its effort to demonstrate that it is possible to include, in decision-making, factors such as the cost of the risk associated with a decline in the level of performance, the level of this deterioration, and the asset's depreciation rate, without taking age as the sole criterion for replacement decisions.
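The replacement-timing logic described above, comparing the existing pipe's rising annual operating and maintenance cost with the annuity of a new equivalent pipe and replacing at the crossover, can be sketched as follows. The cost numbers, growth rate, and interest rate are invented for illustration; the dissertation's actual cost functions are richer and also include risk:

```python
def annuity(investment, rate, lifetime_years):
    """Equivalent annual cost of a new asset (capital recovery factor)."""
    r = rate
    return investment * r * (1 + r) ** lifetime_years / ((1 + r) ** lifetime_years - 1)

def optimal_replacement_year(maintenance_cost, new_annuity, horizon=100):
    """First year in which maintaining the old pipe costs more per year
    than the annuity of replacing it with a new equivalent pipe."""
    for year in range(1, horizon + 1):
        if maintenance_cost(year) > new_annuity:
            return year
    return None

# Illustrative numbers: maintenance grows 8%/year from a 500/year base;
# a new pipe costs 20,000, financed at 4% over an 80-year lifetime.
ann = annuity(20_000, 0.04, 80)
year = optimal_replacement_year(lambda t: 500 * 1.08 ** t, ann)
print(round(ann, 2), year)
```

In this toy setting the annuity is roughly 836 per year, and maintenance first exceeds it in year 7, which would be the recommended replacement year under these (purely illustrative) parameters.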
Abstract:
Large-scale wireless ad hoc networks of computers, sensors, PDAs, etc. (i.e., nodes) are revolutionizing connectivity and leading to a paradigm shift from centralized systems to highly distributed and dynamic environments. An example of ad hoc networks are sensor networks, which are usually composed of small units able to sense and transmit elementary data to a sink, where they are subsequently processed by an external machine. Recent improvements in the memory and computational power of sensors, together with the reduction in energy consumption, are rapidly changing the potential of such systems, moving attention towards data-centric sensor networks. A plethora of routing and data management algorithms have been proposed for network path discovery, ranging from flooding/broadcasting-based approaches to those using global positioning systems (GPS). We studied WGrid, a novel decentralized infrastructure that organizes wireless devices in an ad hoc manner, where each node has one or more virtual coordinates through which both message routing and data management occur without reliance on either flooding/broadcasting operations or GPS. The resulting ad hoc network does not suffer from the dead-end problem, which occurs in geography-based routing when a node is unable to locate a neighbor closer to the destination than itself. WGrid allows multidimensional data management, since nodes' virtual coordinates can act as a distributed database without requiring any special implementation or reorganization. Any kind of data (both single- and multidimensional) can be distributed, stored, and managed. We show how a location service can be easily implemented, so that any search is reduced to a simple query, as for any other data type. WGrid has then been extended with a replication methodology; we call the resulting algorithm WRGrid.
Just like WGrid, WRGrid acts as a distributed database without requiring any special implementation or reorganization, and any kind of data can be distributed, stored, and managed. We have evaluated the benefits of replication on data management, finding, from experimental results, that it can halve the average number of hops in the network. The direct consequences are a significant improvement in energy consumption and workload balancing among sensors (the number of messages routed by each node). Finally, thanks to the replicas, whose number can be chosen arbitrarily, the resulting sensor network can withstand sensor disconnections and connections, due to sensor failures, without data loss. Another extension of WGrid is W*Grid, which strongly improves network recovery from link and/or device failures that may happen due to crashes, battery exhaustion, or temporary obstacles. W*Grid guarantees, by construction, at least two disjoint paths between each pair of nodes. This implies that recovery in W*Grid occurs without broadcast transmissions, guaranteeing robustness while drastically reducing energy consumption. An extensive set of simulations shows the efficiency, robustness, and traffic load of the resulting networks under several scenarios of device density and number of coordinates. Performance has been compared with existing algorithms in order to validate the results.
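A greatly simplified sketch of routing over tree-like virtual coordinates in the spirit of WGrid (the coordinate scheme, distance metric, and neighbor rule here are assumptions for illustration, not the actual WGrid algorithm): each node holds a binary-string coordinate, and a message greedily hops to the neighbor closest to the destination coordinate, so no flooding and no GPS are involved.

```python
def cpl(a, b):
    """Length of the common prefix of two coordinate strings."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def tree_dist(a, b):
    """Distance between two virtual coordinates viewed as nodes of a
    binary tree (path up to the common ancestor, then down)."""
    return len(a) + len(b) - 2 * cpl(a, b)

def route(coords, neighbors, src, dst):
    """Greedy routing: hop to the neighbor strictly closer to the
    destination coordinate; return None on a dead end (which WGrid
    avoids by construction)."""
    path, current = [src], src
    while current != dst:
        best = min(neighbors[current],
                   key=lambda n: tree_dist(coords[n], coords[dst]))
        if tree_dist(coords[best], coords[dst]) >= tree_dist(coords[current], coords[dst]):
            return None  # dead end
        path.append(best)
        current = best
    return path

# Tiny example: coordinates assigned along a binary tree.
coords = {"A": "0", "B": "00", "C": "01", "D": "010"}
neighbors = {"A": ["B", "C"], "B": ["A"], "C": ["A", "D"], "D": ["C"]}
print(route(coords, neighbors, "B", "D"))  # ['B', 'A', 'C', 'D']
```

Because the coordinates double as keys, the same structure can hold data: storing a record at the node whose coordinate matches the record's key turns any lookup into the same greedy route, which is the sense in which the network acts as a distributed database.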
Abstract:
The European External Action Service (EEAS, or Service) is one of the most significant and most debated innovations introduced by the Lisbon Treaty. This analysis intends to explain the anomalous design of the EEAS in light of its function, which consists in promoting the coherence of external action. Coherence is a principle of the EU legal system which requires synergy in the external actions of the Union and its Members. It can be enforced only through the coordination of European policy-makers' initiatives, by bridging the gap between the 'Communitarian' and intergovernmental approaches. This is the 'Union method' envisaged by A. Merkel: "coordinated action in a spirit of solidarity - each of us in the area for which we are responsible but all working towards the same goal". The EEAS embodies the 'Union method', since it is institutionally linked to both Union organs and Member States. It is also capable of enhancing synergy in policy management and promoting unity in international representation, since its field of action is delimited not by an abstract concern for institutional balance but by a pragmatic assessment of the need for coordination in each sector. The challenge is now to make sure that this pragmatic approach is applied to all the activities of the Service, in order to reinforce its effectiveness. The coordination brought by the EEAS is in fact the only means through which a European foreign policy can come into being: the choice is not between the Community method and the intergovernmental method, but between a coordinated position and nothing at all.
Abstract:
The wide diffusion of cheap, small, and portable sensors integrated into an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if analyzed properly and in a timely manner, can be exploited to build new intelligent and pervasive services with the potential to improve people's quality of life in a variety of cross-cutting domains such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to provisioning differentiated quality of service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and present Quasit, its prototype implementation, offering a scalable and extensible platform that researchers can use to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, through a large experimental study on the prototype of our novel LAAR dynamic replication technique.
Our modeling, prototyping, and experimental work demonstrates that, by providing the data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
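The core idea above, weaker guarantees for streams that can tolerate them, can be sketched in a toy form (the class names and replication policy are illustrative assumptions, not the Quasit or LAAR design): each stream declares a quality class, and the middleware spends replication effort only on streams that require strict guarantees.

```python
from dataclasses import dataclass

@dataclass
class Stream:
    name: str
    qos_class: str  # "strict" or "relaxed" (hypothetical classes)

def replication_factor(stream, base=1, strict_extra=2):
    """Partial fault tolerance: replicate processing operators only for
    streams declaring strict guarantees, trading cost for quality."""
    return base + (strict_extra if stream.qos_class == "strict" else 0)

# A health-care alert stream needs strict guarantees; an ambient
# temperature feed can tolerate occasional loss.
streams = [Stream("health-alerts", "strict"), Stream("ambient-temp", "relaxed")]
plan = {s.name: replication_factor(s) for s in streams}
print(plan)  # {'health-alerts': 3, 'ambient-temp': 1}
```

The cost saving comes from the relaxed streams: every replica avoided is an operator instance the data center does not have to run.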
Abstract:
Our research asked the following main questions: how do the characteristics of professional service firms allow them to innovate successfully, exploiting through exploring, by combining internal and external factors of innovation, and how do these ambidextrous organisations perceive these factors; and how do successful innovators in professional service firms use corporate entrepreneurship models in their new service development processes? With the goal of shedding light on innovation in professional knowledge-intensive business service firms (PKIBS), we conducted a qualitative analysis of ten globally acting law firms providing business legal services. We analyse the internal and external factors of innovation that are critical for PKIBS' innovation, and we suggest how these firms become ambidextrous in a changing environment. Our findings show that such firms have a particular type of ambidexterity due to their specific characteristics. As PKIBS are highly dependent on their human capital, their governance structure, and the high expectations of their clients, their ambidexterity is structural but also contextual at the same time. In addition, we suggest three types of corporate entrepreneurship models that international PKIBS use to enhance innovation in turbulent environments. We looked at how law firms facing turbulent environments were using corporate entrepreneurship activities as part of their strategies to be more innovative. Using a visual mapping methodology, we identified three types of innovation patterns in the law firms. We suggest that corporate entrepreneurship models depend on the successful application of mainly three elements: who participates in corporate entrepreneurship initiatives; what formal processes enhance these initiatives; and what policies are applied to this type of behaviour.
Abstract:
Background: Turner syndrome (TS) is a chromosomal abnormality (total or partial absence of one of the sex chromosomes in some or all cells of the body) which affects approximately 1 in 2,000 females. Its principal characteristics are short stature and gonadal dysgenesis. Clinical management consists of Growth Hormone (GH) treatment and oestrogen replacement therapy (HRT), to induce the development of secondary characteristics and to avoid the sequelae of oestrogen deficiency. Aim of the study: To assess the clinical management, quality of life (QoL), and general psychosocial adjustment of women with TS. Population: 70 adult Caucasian females with TS (mean age: 27.8 ± 7.6; range 18-48 y.). Setting: Specialist service for rare disease care, University Hospital. Methods: Subjects were asked to fill in questionnaires comprising the ASR, the WHOQOL, and 8 open questions. Data were compared with those of the Italian population or with those collected in a comparison group (70 healthy females, mean age: 27.9 ± 7.3, range 21-48 y.). Results: Women with TS are as well educated as the Italian population, but they have a less successful professional life. They show good QoL in general, but appear less satisfied in the social area. They had statistically higher scores than the comparison group for depression, anxiety, and withdrawal, and they are less often involved in a love relationship. Communication of the diagnosis was mostly performed by doctors or parents; satisfaction was higher when the information was given by parents. The main preoccupations about TS are infertility, the feeling of being different, and future health problems. Conclusions: Italian women with TS were generally well adapted and had a good QoL, but more often lived with their parents and showed impaired sentimental and sexual lives. They have a higher degree of psychological distress than a comparison group. Psychological intervention should firstly address parents, in order to encourage open communication on diagnosis issues and on sexual education.
Abstract:
Resource management is of paramount importance in network scenarios, and it is a long-standing and still open issue. Unfortunately, while technology and innovation continue to evolve, our network infrastructure has been maintained in almost the same shape for decades, a phenomenon known as "Internet ossification". Software-Defined Networking (SDN) is an emerging paradigm in computer networking that allows a logically centralized software program to control the behavior of an entire network. This is done by decoupling the network control logic from the underlying physical routers and switches that forward traffic to the selected destination. One mechanism that allows the control plane to communicate with the data plane is OpenFlow. Network operators can write high-level control programs that specify the behavior of an entire network. Moreover, centralized control makes it possible to define more specific and complex tasks that may involve many network functionalities, e.g., security, resource management and control, within a single framework. Nowadays, the explosive growth of real-time applications that require stringent Quality of Service (QoS) guarantees drives network programmers to design network protocols that deliver certain performance guarantees. This thesis exploits the use of SDN in conjunction with OpenFlow to manage differentiated network services with high QoS. Initially, we define a QoS Management and Orchestration architecture that allows us to manage the network in a modular way. Then, we provide a seamless integration between the architecture and the standard SDN paradigm, following the separation between the control and data planes. This work is a first step towards the deployment of our proposal in the University of California, Los Angeles (UCLA) campus network, with differentiated services and stringent QoS requirements.
We also plan to exploit our solution to manage handoff between different network technologies, e.g., Wi-Fi and WiMAX. Indeed, the model can be run with different parameters, depending on the communication protocol, and can provide optimal results to be implemented on the campus network.
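The control/data-plane split described above can be sketched in plain Python (no real OpenFlow library; the rule format, QoS classes, and queue mapping are illustrative assumptions): a logically centralized controller classifies each flow by its required QoS and installs per-switch forwarding rules that steer the flow to an appropriate priority queue.

```python
# Hypothetical QoS classes mapped to switch priority queues (0 = highest).
QOS_QUEUES = {"voice": 0, "video": 1, "best-effort": 2}

class Controller:
    """Toy logically centralized control plane: computes forwarding rules
    and pushes them to data-plane switches, represented here by plain
    dicts standing in for OpenFlow-style flow tables."""
    def __init__(self):
        self.flow_tables = {}  # switch id -> list of installed rules

    def install_flow(self, switch, match, out_port, qos_class):
        rule = {"match": match, "out_port": out_port,
                "queue": QOS_QUEUES[qos_class]}
        self.flow_tables.setdefault(switch, []).append(rule)
        return rule

ctrl = Controller()
rule = ctrl.install_flow("s1", {"dst_ip": "10.0.0.2"},
                         out_port=3, qos_class="voice")
print(rule)  # voice traffic lands on the highest-priority queue (0)
```

In a real deployment the controller would emit OpenFlow flow-mod messages with a set-queue action instead of writing into local dicts, but the decision logic, mapping QoS classes to queues centrally, is the same.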
Abstract:
In the last 10 years the number of mobile devices has grown rapidly. Each person usually carries at least two personal devices, and researchers say that in the near future this number could rise to up to ten devices per person. Moreover, all these devices are becoming more integrated into our lives than in the past, so the amount of data exchanged increases along with the improvement of people's lifestyles. This is what researchers call the Internet of Things. Thus, in the future there will be more than 60 billion nodes, and the current infrastructure is not ready to keep track of all the data exchanges between them. Infrastructure improvements have therefore been proposed in recent years, such as Mobile IP and HIP, to facilitate the exchange of packets in mobility, but none of them has been optimized for this purpose. In recent years, researchers at Mid Sweden University created the MediaSense Framework. Initially, this framework was based on the Chord protocol to route packets in a large network, but the most important change has been the introduction of P-Grid to create the overlay and provide persistence. Thanks to this technology, a lookup in the trie takes up to 0.5*log(N) steps, where N is the total number of nodes in the network. This result could be improved by further optimizations in the management of the nodes, for example the dynamic creation of groups of nodes. Moreover, since the nodes move, underlying support for connectivity management is needed. SCTP has been selected as one of the most promising upcoming standards for managing multiple simultaneous connections.
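The 0.5*log(N) lookup figure above comes from prefix routing in a distributed trie. A toy, in-memory illustration (not the MediaSense or P-Grid implementation) of how a binary-prefix trie resolves a key in a number of hops bounded by the key length, i.e., O(log N) for N balanced keys:

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.value = None

class PrefixTrie:
    """Toy binary trie: keys are bit strings, and a lookup walks one
    level per bit, so lookup cost equals the trie depth."""
    def __init__(self):
        self.root = TrieNode()

    def insert(self, key, value):
        node = self.root
        for bit in key:
            node = node.children.setdefault(bit, TrieNode())
        node.value = value

    def lookup(self, key):
        node, hops = self.root, 0
        for bit in key:
            if bit not in node.children:
                return None, hops
            node = node.children[bit]
            hops += 1
        return node.value, hops

trie = PrefixTrie()
trie.insert("010", "node-A")
trie.insert("011", "node-B")
value, hops = trie.lookup("011")
print(value, hops)  # node-B 3
```

In the distributed setting each trie level corresponds to a network hop towards the node responsible for the key's prefix, which is why shortening the walk (e.g., by grouping nodes) directly reduces routing cost.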
Abstract:
SMARTDIAB is a platform designed to support the monitoring, management, and treatment of patients with type 1 diabetes mellitus (T1DM), by combining state-of-the-art approaches in the fields of database (DB) technologies, communications, simulation algorithms, and data mining. SMARTDIAB consists mainly of two units: 1) the patient unit (PU); and 2) the patient management unit (PMU), which communicate with each other for data exchange. The PMU can be accessed by the PU through the internet using devices such as PCs/laptops with direct internet access or mobile phones via a Wi-Fi/General Packet Radio Service access network. The PU consists of an insulin pump for subcutaneous insulin infusion to the patient and a continuous glucose measurement system. These devices, running a user-friendly application, gather patient-related information and transmit it to the PMU. The PMU consists of a diabetes data management system (DDMS), a decision support system (DSS) that provides risk assessment for long-term diabetes complications, and an insulin infusion advisory system (IIAS), which reside on a Web server. The DDMS can be accessed by both medical personnel and patients, with appropriate security access rights and front-end interfaces. The DDMS, apart from being used for data storage/retrieval, also provides advanced tools for the intelligent processing of the patient's data, supporting the physician in decision making regarding the patient's treatment. The IIAS is used to close the loop between the insulin pump and the continuous glucose monitoring system, by providing the pump with the appropriate insulin infusion rate in order to keep the patient's glucose levels within predefined limits. The pilot version of SMARTDIAB has already been implemented, while the platform's evaluation in a clinical environment is in progress.
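The closed loop the IIAS implements, glucose readings in, infusion rate out, can be illustrated with a deliberately simplified proportional controller. All numbers and the linear glucose response below are illustrative assumptions only, not the SMARTDIAB algorithm and not a clinical dosing rule:

```python
def insulin_rate(glucose_mg_dl, target=110.0, gain=0.01, basal=0.5, max_rate=5.0):
    """Toy proportional controller: infusion rises above the basal rate
    when glucose exceeds the target, clamped to the pump's maximum."""
    rate = basal + gain * max(0.0, glucose_mg_dl - target)
    return min(rate, max_rate)

# Simulate a few control steps with a crude linear glucose response:
# each unit of infusion above basal lowers glucose by a fixed amount.
glucose = 180.0
readings = []
for _ in range(10):
    rate = insulin_rate(glucose)
    glucose -= 8.0 * (rate - 0.5)  # toy body model, not physiology
    readings.append(round(glucose, 1))
print(readings)
```

The real IIAS closes this loop with far richer patient models and safety constraints; the sketch only shows the feedback structure, measure, compute a rate, infuse, measure again.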