Abstract:
This synopsis summarizes the key chemical and bacteriological characteristics of β-lactams: penicillins, cephalosporins, carbapenems, monobactams and others. Particular attention is given to first- through fifth-generation cephalosporins. The review also summarizes the main mechanisms of resistance to antibiotics, focusing particular attention on those conferring resistance to broad-spectrum cephalosporins by means of production of emerging cephalosporinases (extended-spectrum β-lactamases and AmpC β-lactamases), target alteration (penicillin-binding proteins from methicillin-resistant Staphylococcus aureus) and membrane transporters that pump β-lactams out of the bacterial cell.
Abstract:
Many-core platforms based on Networks-on-Chip (NoC [Benini and De Micheli 2002]) represent an emerging technology in the real-time embedded domain. Although grouping applications previously executed on separate single-core devices and accommodating them on a single many-core chip offers various options for power savings and cost reductions, and contributes to overall system flexibility, its implementation is a non-trivial task. In this paper we address the issue of application mapping onto a NoC-based many-core platform, considering the fundamentals and trends of current many-core operating systems; specifically, we elaborate on a limited migrative application model that uses message passing as its communication primitive. As the main contribution, we formulate the problem of real-time application mapping and propose a three-stage process to solve it efficiently. Analysis assures that the derived solutions guarantee fulfilment of the posed timing constraints on worst-case communication latencies, while providing an environment for load balancing, e.g. for thermal, energy, fault-tolerance or performance reasons. We also propose several constraints on the topological structure of the application mapping, as well as on the inter- and intra-application communication patterns, which efficiently resolve the issues of pessimism and/or intractability in the analysis.
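As a concrete illustration of the kind of worst-case communication latency constraint mentioned above, the following sketch checks hop-based latencies for messages on a 2D-mesh NoC with XY routing; the per-hop cost, message sizes, mapping and deadlines are hypothetical, and the paper's three-stage analysis is substantially richer than this simple bound.

```python
# Illustrative sketch only: a worst-case hop-latency check for messages on a
# 2D-mesh NoC with XY routing. All values here are hypothetical.
PER_HOP_CYCLES = 4  # assumed router traversal cost per hop

def hops(src, dst):
    """XY routing on a 2D mesh: Manhattan distance between tile coordinates."""
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

def worst_case_latency(src, dst, flits):
    """Hop traversal plus serialization of the message's flits (no contention)."""
    return hops(src, dst) * PER_HOP_CYCLES + flits

# Hypothetical mapping: (source tile, destination tile, message size, deadline)
messages = [((0, 0), (2, 1), 16, 40), ((1, 1), (3, 3), 8, 30)]
for src, dst, flits, deadline in messages:
    wc = worst_case_latency(src, dst, flits)
    status = "OK" if wc <= deadline else "VIOLATED"
    print(f"{src}->{dst}: {wc} cycles (deadline {deadline}): {status}")
```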
Abstract:
The principal topic of this work is the application of data mining techniques, in particular machine learning, to the discovery of knowledge in a protein database. The first chapter presents general background: section 1.1 overviews the methodology of a data mining project and its main algorithms; section 1.2 introduces proteins and their supporting file formats; and section 1.3 defines the main problem we address in this work: determining, in a discrete (i.e. not continuous) way, whether an amino acid is exposed or buried in a protein, for five exposure levels: 2%, 10%, 20%, 25% and 30%. The second chapter, following the CRISP-DM methodology closely, presents the whole process of constructing the database that supported this work: it describes the loading of data from the Protein Data Bank, DSSP and SCOP; an initial data exploration is then performed and a simple prediction model (baseline) of the relative solvent accessibility of an amino acid is introduced. It also introduces the Data Mining Table Creator, a program developed to produce the data mining tables required for this problem. In the third chapter the results obtained are analyzed with statistical significance tests. The classifiers used (Neural Networks, C5.0, CART and CHAID) are first compared, and it is concluded that C5.0 is the most suitable for the problem at hand. The influence of parameters such as the amino acid information level, the amino acid window size and the SCOP class type on the accuracy of the predictive models is also compared. The fourth chapter starts with a brief review of the literature on amino acid relative solvent accessibility; we then overview the main results achieved and finally discuss possible future work. The fifth and last chapter consists of appendices: Appendix A holds the schema of the database that supported this thesis; Appendix B holds a set of tables with additional information; Appendix C describes the software provided on the DVD accompanying this thesis, which allows the present work to be reconstructed.
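For readers unfamiliar with this kind of classification setup, the sketch below illustrates window-based buried/exposed prediction; scikit-learn's DecisionTreeClassifier stands in for C5.0 (which has no standard Python implementation), and the sequence and labels are placeholders, not data from the thesis.

```python
# Illustrative sketch: predicting buried/exposed residues from a sliding
# window of amino acid identities. DecisionTreeClassifier is a stand-in
# for C5.0; the sequence and labels are placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def window_features(sequence, window=5):
    """One-hot encode a sliding window centred on each residue."""
    half = window // 2
    padded = "X" * half + sequence + "X" * half
    rows = []
    for i in range(len(sequence)):
        feats = np.zeros(window * len(AMINO_ACIDS))
        for j, aa in enumerate(padded[i:i + window]):
            if aa in AA_INDEX:
                feats[j * len(AMINO_ACIDS) + AA_INDEX[aa]] = 1.0
        rows.append(feats)
    return np.array(rows)

X = window_features("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
y = np.random.randint(0, 2, size=len(X))  # placeholder labels: 1 = exposed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=8).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```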
Abstract:
Master's degree in Accounting and Financial Analysis
Abstract:
Dissertation submitted for the degree of Master in Civil Engineering, specialization area of Buildings
Abstract:
This paper studies fractional variable structure controllers. Two cases are considered, namely the sliding reference model and the control action, both generalized from integer to fractional orders. The test bed consists of a mechanical manipulator, and the effect of the fractional approach on system performance is evaluated. The results show that fractional dynamics, both in the switching surface and in the control law, are important design elements in variable structure controllers.
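For context, a common numerical route to fractional-order terms such as those studied here is the Grünwald-Letnikov approximation; the sketch below is a generic illustration of that approximation, not the controller implementation evaluated in the paper.

```python
# Minimal Grünwald-Letnikov approximation of a fractional derivative:
# D^alpha x(t) ~ h^(-alpha) * sum_k (-1)^k * C(alpha, k) * x(t - k*h).
# Illustrative only.
import numpy as np
from scipy.special import binom

def gl_fractional_derivative(x, alpha, h):
    """Approximate D^alpha of a sampled signal x with time step h."""
    n = len(x)
    coeffs = np.array([(-1) ** k * binom(alpha, k) for k in range(n)])
    d = np.zeros(n)
    for i in range(n):
        # Convolve past samples x[i], x[i-1], ..., x[0] with GL coefficients.
        d[i] = np.dot(coeffs[: i + 1], x[i::-1]) / h ** alpha
    return d

t = np.linspace(0, 2 * np.pi, 200)
x = np.sin(t)
half_derivative = gl_fractional_derivative(x, alpha=0.5, h=t[1] - t[0])
print(half_derivative[:5])
```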
Abstract:
In this cross-sectional study we analyzed whether team climate for innovation mediates the relationship between team task structure and innovative behavior, job satisfaction, affective organizational commitment, and work stress. 310 employees in 20 work teams of an automotive company participated in the study. Ten teams had been changed from a restrictive to a more self-regulating team model, providing task variety, autonomy, team-specific goals, and feedback in order to increase team effectiveness. The data support the hypothesized causal chain, although only for team innovative behavior were all required effects statistically significant. Longitudinal designs and larger samples are needed to establish the assumed causal relationships, but the results indicate that implementing self-regulating teams might be an effective strategy for improving innovative behavior and thus team and company effectiveness.
Abstract:
This paper explores the management structure of the team-based organization. First, it provides a theoretical model of the structures and processes of work teams. The structure determines the team's responsibilities in terms of authority over, and expertise about, specific regulation tasks. The teams' responsiveness to these responsibilities constitutes the processes of teamwork, described along three dimensions that indicate to what extent teams actually use the space provided to them. The research question this paper addresses is to what extent the positioning of responsibilities in the team-based organization affects team responsiveness. This is examined through two hypotheses. First, the effect of the so-called proximity of regulation tasks is tested: responsibility for tasks positioned higher in the organization (i.e. further from the team) is expected to have a negative effect on team responsiveness, whereas tasks positioned lower in the organization (i.e. closer to the team) are expected to have a positive effect on the way teams respond. Second, the relationship between the number of tasks for which the team is responsible and team responsiveness is tested; theory suggests that teams responsible for a larger number of tasks perform better, i.e. show higher responsiveness. These hypotheses are tested in a study of 109 production teams in the automotive industry. The results show that, as the theory predicts, an increasing number of responsibilities has a positive effect on team responsiveness. However, the delegation of expertise to teams appears to be the most important predictor of responsiveness. Moreover, not all regulation tasks affect team responsiveness; most show no significant effect at all. A number of tasks affect team responsiveness positively when their responsibility is positioned lower in the organization, but a number of tasks also affect team responsiveness positively when located higher in the organization, i.e. further from the teams in production. The results indicate that more attention should be paid to the distribution of responsibilities, in particular expertise, to teams. Delegating more expertise indeed improves team responsiveness; however, some tasks might be better located at higher organizational levels, indicating that there are limits to the responsibilities teams can handle.
Abstract:
This paper presents a novel approach to WLAN propagation models for use in indoor localization. The major goal of this work is to eliminate the need for in situ data collection to generate the fingerprinting map; instead, the map is generated using analytical propagation models such as COST Multi-Wall, COST 231 average wall and Motley-Keenan. The kNN (K-Nearest Neighbour) and WkNN (Weighted K-Nearest Neighbour) location estimation algorithms were used to determine the accuracy of the proposed technique. This work relies on analytical and measurement tools to determine which path loss propagation models are better suited to location estimation applications based on the Received Signal Strength Indicator (RSSI). The study presents different proposals for choosing the most appropriate values for the model parameters, such as obstacle attenuations and coefficients. Some adjustments to these models, particularly to Motley-Keenan, taking the thickness of walls into account, are proposed. The best solution found is based on the adjusted Motley-Keenan and COST models, which allows the propagation loss estimation to be obtained for several environments. Results from two testing scenarios showed the reliability of the adjustments, yielding smaller errors between measured and predicted values.
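The following sketch illustrates the general idea of generating a fingerprint map from a propagation model and then locating with kNN; the simplified multi-wall model and all parameter values are assumptions for illustration, not the calibrated models from this work.

```python
# Minimal sketch of model-generated fingerprinting with kNN, assuming a
# simplified multi-wall path loss model with illustrative parameters.
import numpy as np

def path_loss_db(d, n_walls, pl0=40.0, gamma=2.8, wall_att=3.5):
    """Simplified multi-wall model: log-distance decay plus per-wall losses."""
    return pl0 + 10 * gamma * np.log10(max(d, 0.1)) + n_walls * wall_att

# Build a synthetic fingerprint map: RSSI from 3 hypothetical APs on a grid.
aps = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
grid = [(x, y) for x in range(11) for y in range(9)]
tx_power = 20.0  # dBm, assumed
fingerprints = np.array([
    [tx_power - path_loss_db(np.hypot(px - ax, py - ay), n_walls=1)
     for (ax, ay) in aps]
    for (px, py) in grid
])

def knn_locate(rssi, k=3):
    """Estimate position as the mean of the k nearest fingerprints."""
    dists = np.linalg.norm(fingerprints - rssi, axis=1)
    nearest = np.argsort(dists)[:k]
    return np.mean([grid[i] for i in nearest], axis=0)

measured = fingerprints[37] + np.random.normal(0, 2, size=3)  # noisy reading
print("estimated position:", knn_locate(measured))
```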
Abstract:
Master's degree in Civil Engineering - Branch of Construction Technology and Management
Abstract:
The problem of selecting suppliers/partners is a crucial and important part of the decision-making process for companies that intend to compete in their area of activity. Supplier/partner selection is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. Nevertheless, it is a critical process that significantly affects the operational performance of each company. In this work, five broad selection criteria were identified: Quality, Financial, Synergies, Cost, and Production System. Within these criteria, five sub-criteria were also included. After the criteria were identified, a survey was prepared and companies were contacted in order to understand which factors carry more weight in their decisions when choosing partners. Once the results were interpreted and the data processed, a linear weighting model was adopted to reflect the importance of each factor. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) method or Value Analysis. The goal of the paper is to supply a selection reference model that can serve as an orientation/pattern for decision making in the supplier/partner selection process.
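A minimal sketch of the linear weighting idea follows; the criterion names match the paper, but the weights and supplier scores are invented for illustration.

```python
# Illustrative linear weighting sketch for supplier selection; the weights
# and candidate scores below are made-up examples, not survey results.
criteria_weights = {
    "Quality": 0.30, "Financial": 0.20, "Synergies": 0.15,
    "Cost": 0.25, "Production System": 0.10,
}
# Hypothetical normalized scores (0-1) for each candidate supplier.
suppliers = {
    "Supplier A": {"Quality": 0.8, "Financial": 0.6, "Synergies": 0.7,
                   "Cost": 0.5, "Production System": 0.9},
    "Supplier B": {"Quality": 0.6, "Financial": 0.9, "Synergies": 0.5,
                   "Cost": 0.8, "Production System": 0.7},
}

def weighted_score(scores):
    """Linear weighted sum of criterion scores."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

for name, scores in suppliers.items():
    print(f"{name}: {weighted_score(scores):.3f}")
best = max(suppliers, key=lambda name: weighted_score(suppliers[name]))
print("best candidate:", best)
```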
Abstract:
OBJECTIVE To analyze the cost-effectiveness of treatment regimens with cyclosporine or tacrolimus five years after renal transplantation. METHODS This cost-effectiveness analysis was based on historical cohort data obtained between 2000 and 2004 and involved 2,022 patients treated with cyclosporine or tacrolimus, matched 1:1 for gender, age, and type and year of transplantation. Graft survival and the direct costs of medical care obtained from the National Health System (SUS) databases were used as outcome measures. RESULTS Most of the patients were women, with a mean age of 36.6 years. The most frequent diagnosis of chronic renal failure was glomerulonephritis/nephritis (27.7%). Over five years, the tacrolimus group had an average life expectancy gain of 3.96 years at an annual cost of R$78,360.57, compared with the cyclosporine group's gain of 4.05 years at an annual cost of R$61,350.44. CONCLUSIONS After matching, the study indicated better survival of patients treated with regimens using tacrolimus. However, regimens containing cyclosporine were more cost-effective.
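For clarity, the cost-effectiveness arithmetic implied by the quoted figures can be reproduced as below, under the assumption that the stated annual costs accrue over the full five-year horizon.

```python
# Sketch of the cost-effectiveness arithmetic implied by the figures above,
# assuming the quoted annual costs accrue over the full five-year horizon.
regimens = {
    "tacrolimus": {"life_years_gained": 3.96, "annual_cost_brl": 78360.57},
    "cyclosporine": {"life_years_gained": 4.05, "annual_cost_brl": 61350.44},
}
horizon_years = 5
for name, r in regimens.items():
    total_cost = r["annual_cost_brl"] * horizon_years
    ratio = total_cost / r["life_years_gained"]  # BRL per life-year gained
    print(f"{name}: R${ratio:,.2f} per life-year gained")
# Cyclosporine's lower cost per life-year matches the stated conclusion.
```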
Abstract:
In practice, robotic manipulators present some degree of unwanted vibration. The advent of lightweight arm manipulators, mainly in the aerospace industry, where weight is an important issue, leads to the problem of intense vibrations. On the other hand, robots interacting with the environment often generate impacts that propagate through the mechanical structure and also produce vibrations. In order to analyze these phenomena, a robot signal acquisition system was developed. The manipulator motion produces vibrations, either from the structural modes or from end-effector impacts. The instrumentation system acquires signals from several sensors that capture joint positions, mass accelerations, forces and moments, and the electrical currents in the motors. Afterwards, an analysis package, running off-line, reads the data recorded by the acquisition system and extracts the signal characteristics.

Due to the multiplicity of sensors, the data obtained can be redundant, because the same type of information may be seen by two or more sensors. Given the price of the sensors, this aspect can be exploited to reduce the cost of the system. On the other hand, the placement of the sensors is an important issue in obtaining suitable signals of the vibration phenomenon. Moreover, the study of these issues can help in the design optimization of the acquisition system. In this line of thought, a sensor classification scheme is presented.

Several authors have addressed the subject of sensor classification schemes. White (White, 1987) presents a flexible and comprehensive categorizing scheme that is useful for describing and comparing sensors; the author organizes sensors according to several aspects: measurands, technological aspects, detection means, conversion phenomena, sensor materials and fields of application. Michahelles and Schiele (Michahelles & Schiele, 2003) systematize the use of sensor technology: they identify several dimensions of sensing that represent the sensing goals for physical interaction, and introduce a conceptual framework that allows existing sensors to be categorized and their utility in various applications to be evaluated. This framework not only guides application designers in choosing meaningful sensor subsets, but can also inspire new systems and lead to the evaluation of existing applications.

Today's technology offers a wide variety of sensors. In order to use all the data from this diversity of sensors, a framework for integration is needed. Sensor fusion, fuzzy logic, and neural networks are often mentioned when dealing with the problem of combining information from several sensors to obtain a more general picture of a given situation. The study of data fusion has been receiving considerable attention (Esteban et al., 2005; Luo & Kay, 1990); a survey of the state of the art in sensor fusion for robotics can be found in (Hackett & Shah, 1990). Henderson and Shilcrat (Henderson & Shilcrat, 1984) introduced the concept of the logic sensor, which defines an abstract specification of the sensors to be integrated in a multisensor system. The recent development of micro electro-mechanical sensors (MEMS) with wireless communication capabilities enables sensor networks with interesting capabilities. This technology has been applied in several areas (Arampatzis & Manesis, 2005), including robotics. Cheekiralla and Engels (Cheekiralla & Engels, 2005) propose a classification of wireless sensor networks according to their functionalities and properties.

This paper presents the development of a sensor classification scheme based on the frequency spectrum of the signals and on statistical metrics. Bearing these ideas in mind, the paper is organized as follows. Section 2 briefly describes the robotic system enhanced with the instrumentation setup. Section 3 presents the experimental results. Finally, section 4 draws the main conclusions and points out future work.
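As an illustration of characterizing sensor signals by their frequency spectrum and simple statistical metrics, a minimal sketch follows; the mock accelerometer signal and the chosen metrics are assumptions, not the paper's actual feature set.

```python
# Minimal sketch of extracting frequency-domain and statistical features
# from a sensor signal; the signal below is a mock accelerometer trace.
import numpy as np

def signal_features(x, fs):
    """Extract dominant frequency and basic statistics from a signal."""
    spectrum = np.abs(np.fft.rfft(x - np.mean(x)))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return {
        "dominant_hz": freqs[np.argmax(spectrum)],
        "rms": np.sqrt(np.mean(x ** 2)),
        "std": np.std(x),
    }

fs = 1000.0  # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
accel = np.sin(2 * np.pi * 35 * t) + 0.3 * np.random.randn(len(t))
print(signal_features(accel, fs))
```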
Abstract:
The widespread employment of carbon-epoxy laminates in high-responsibility and severely loaded applications raises the issue of their handling after damage. Repair of these structures should be evaluated, instead of their disposal, for cost-saving and ecological purposes. From this perspective, the availability of efficient repair methods is essential to restore the strength of the structure. The development and validation of accurate predictive tools for repair behaviour are also extremely important, allowing the reduction of the costs and time associated with extensive test programmes. Compared with strap repairs, scarf repairs have the advantages of higher efficiency and the absence of aerodynamic disturbance. This work reports a numerical study of the tensile behaviour of three-dimensional scarf repairs in carbon-epoxy structures, using a ductile adhesive (Araldite® 2015). The finite element analysis was performed in ABAQUS® and Cohesive Zone Modelling was used to simulate damage onset and growth in the adhesive layer. Trapezoidal cohesive laws in each pure mode were used to account for the ductility of the specific adhesive mentioned. A parametric study was performed on the repair width and scarf angle. The use of over-laminating plies covering the repaired region on the outer or both repair surfaces was also tested as an attempt to increase repair efficiency. The results obtained allowed design principles for the repair of composite structures to be proposed.
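A generic trapezoidal traction-separation law of the kind used for the adhesive layer can be sketched as follows; the breakpoint values are illustrative, not the calibrated Araldite® 2015 parameters.

```python
# Sketch of a generic trapezoidal traction-separation (cohesive) law for a
# single pure mode; breakpoint values are illustrative placeholders.
def trapezoidal_traction(delta, t0=20.0, d1=0.01, d2=0.05, df=0.20):
    """Traction (MPa) vs separation (mm): linear rise, plateau, linear decay."""
    if delta <= 0:
        return 0.0
    if delta < d1:                # elastic rise to peak traction t0
        return t0 * delta / d1
    if delta < d2:                # plateau capturing adhesive ductility
        return t0
    if delta < df:                # linear softening to failure
        return t0 * (df - delta) / (df - d2)
    return 0.0                    # fully debonded

for d in (0.005, 0.03, 0.1, 0.25):
    print(f"separation {d} mm -> traction {trapezoidal_traction(d):.2f} MPa")
```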
Abstract:
Laminated composite multi-cell structures have to support both axial and shear stresses when sustaining variable twist. Thus, the properties and design of the laminate may not be the most adequate at every cross-section to support the torsion imposed on the cells. In this work, the effect of some material and geometric parameters on the optimal mechanical behaviour of a multi-cell composite laminate structure under torsion is studied. A particle swarm optimization technique is used to maximize the multi-cell structure's torsion constant, which can then be used to obtain the angle of twist of the composite laminate profile.
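A minimal particle swarm optimization sketch follows; the objective function is a placeholder standing in for the torsion-constant evaluation, which in the paper depends on the material and geometric design variables.

```python
# Minimal particle swarm optimization (maximization) sketch; the objective
# is a placeholder, not the actual torsion-constant computation.
import numpy as np

def objective(x):
    """Placeholder to maximize; replace with a torsion-constant evaluation."""
    return -np.sum((x - 0.7) ** 2)

n_particles, n_dims, iters = 30, 4, 100
w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and attraction weights
pos = np.random.rand(n_particles, n_dims)       # design variables in [0, 1]
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[np.argmax(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = np.random.rand(2)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    vals = np.array([objective(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)].copy()

print("best design variables:", gbest)
```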