838 results for Resource-based and complementarity theory
Abstract:
Motivated by a novel stylized fact, that countries with isolated capital cities display worse quality of governance, we provide a framework of endogenous institutional choice based on the idea that elites are constrained by the threat of rebellion, and that this threat is rendered less effective by distance from the seat of political power. In established democracies, the threat of insurgencies is not a binding constraint, and the model predicts no correlation between isolated capitals and misgovernance. In contrast, a correlation emerges in equilibrium in the case of autocracies. Causality runs both ways: broader power sharing (associated with better governance) means that any rents have to be shared more broadly, hence the elite has less of an incentive to protect its position by isolating the capital city; conversely, a more isolated capital city allows the elite to appropriate a larger share of output, so the costs of better governance for the elite, in terms of rents that would have to be shared, are larger. We show evidence that this pattern holds true robustly in the data. We also show that isolated capitals are associated with less power sharing, a larger income premium enjoyed by capital city inhabitants, and lower levels of military spending by ruling elites, as predicted by the theory.
Abstract:
The pressure for a new pattern of sustainable development has begun to require modern organizations to reconcile competitiveness with environmental protection. In this sense, a tool that supports the implementation of structured strategies is the Environmental Management System (EMS), which focuses on improving environmental performance. This improvement, in turn, can generate many benefits for organizations, among them competitive advantages, which can be measured from different perspectives. One of these is the application of the VRIO model, grounded in the Resource-Based View (RBV), which holds that differences between companies arise from differences in their internal resources and capabilities. However, although some studies in the literature evaluate the competitive potential of certain organizations, such assessments are not performed on specific objects, such as EMSs. Thus, the aim of this study was to evaluate the resources and capabilities (environmental strategies) adopted by the EMS of the Verdegreen Hotel, identifying which of these have the potential to generate competitive advantage. To this end, this exploratory-descriptive study, designed as field research and a case study, used the following data collection tools: a literature survey, semi-structured interviews, document research and participant observation. The interpretation of results and consolidation of information were conducted from a qualitative approach, using two data analysis techniques: content analysis and analysis through the VRIO model. The results show that the hotel is quite structured in relation to its EMS and obtains benefits related to improved management of environmental factors, a strengthened image and gains in competitiveness. On the other hand, the main difficulties in implementing the system relate to employees and suppliers.
With regard to the environmental strategies adopted, 10 of the 25 strategies identified showed the potential to generate competitive advantage.
Abstract:
This work proposes optimizing the dynamic response of two single-phase inverters operating in parallel with no control communication. The optimization tunes the slopes of the P-ω and Q-V curves so that the system is stable, well damped and has minimum settling time. The slopes are tuned using an algorithm based on evolutionary theory. Simulation and experimental results are presented to prove the feasibility of the proposed approach. © 2010 IEEE.
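The abstract does not detail the tuning algorithm, so the sketch below is only a minimal evolutionary search over the two droop slopes. A hypothetical quadratic settling-time surrogate stands in for the actual inverter dynamics, and the slope ranges and the "optimal" pair (0.002, 0.05) are invented for illustration.

```python
import random

def settling_time(m, n):
    """Toy surrogate for closed-loop settling time as a function of the
    P-omega slope m and the Q-V slope n. Hypothetical quadratic bowl with
    an assumed optimum at (0.002, 0.05); the paper evaluates the real
    inverter dynamics instead."""
    return (m - 0.002) ** 2 * 1e6 + (n - 0.05) ** 2 * 1e3 + 0.1

def evolve(pop_size=20, generations=40, seed=1):
    """Minimal (mu + lambda)-style evolutionary search over the two slopes."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 0.01), rng.uniform(0, 0.1)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: settling_time(*s))
        parents = pop[: pop_size // 2]          # elitist selection
        children = [(max(0.0, m + rng.gauss(0, 1e-3)),   # mutate slopes
                     max(0.0, n + rng.gauss(0, 1e-2)))
                    for (m, n) in parents]
        pop = parents + children
    return min(pop, key=lambda s: settling_time(*s))

best = evolve()
```

Keeping the parents each generation guarantees the best fitness never worsens, a common safeguard in this kind of search.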
Abstract:
Includes bibliography
Abstract:
Includes bibliography
Abstract:
The main objective of this study is to verify the influence of Environmental Management (EM) on Operational Performance (OP) in Brazilian automotive companies, analyzing whether Lean Manufacturing (LM) and Human Resources (HR) interfere in the greening of these companies. Therefore, a conceptual framework relating these concepts was proposed, and three research hypotheses were presented. A questionnaire was elaborated based on this theoretical background and sent to respondents occupying the highest positions in the production/operations areas of Brazilian automotive companies. The data, collected from 75 companies, were analyzed using structural equation modeling. The main results are as follows: (a) the model tested revealed an adequate goodness of fit, showing that overall, the relations proposed between EM and OP and between HR, LM and EM tend to be statistically valid; (b) EM tends to influence OP in a positive and statistically weak manner; (c) LM has a greater influence on EM when compared to the influence HR has over EM; (d) HR has a positive relationship with EM, but the statistical significance of this relationship is lower than that of the other evaluated relationships. The originality of this paper lies in its gathering the concepts of EM, LM, HR and OP in a single study, as they generally tend not to be treated jointly. This paper also provided valid empirical evidence for a little-studied context: the Brazilian automotive sector. © 2012 Elsevier Ltd. All rights reserved.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
This work is supported by Brazilian agencies Fapesp, CAPES and CNPq
Abstract:
This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist of assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part explained by the complexity of integrated allocation and scheduling, which sets tough challenges for ``pure'' combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods, while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic guided search.
Next, we face Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to deal effectively with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times, via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
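The problem shape described above (finite-capacity resources assigned over time to precedence-connected activities) can be illustrated with a plain greedy list scheduler. This is not the thesis's hybrid CP/OR machinery, only a minimal sketch of the inputs and the kind of schedule they induce; the task names, durations, demands and the single-resource capacity are made up.

```python
def schedule(tasks, capacity):
    """Greedy serial schedule for a tiny RCPSP-like instance.
    tasks: {name: (duration, demand, [predecessors])}, one renewable
    resource of the given capacity. Returns {name: start_time}."""
    start, finish = {}, {}
    remaining = dict(tasks)
    usage = {}  # time slot -> resource units in use
    while remaining:
        # pick a task whose predecessors are all already scheduled
        name = next(n for n, (_, _, preds) in remaining.items()
                    if all(p in finish for p in preds))
        dur, dem, preds = remaining.pop(name)
        t = max([finish[p] for p in preds], default=0)
        # shift right until the resource fits for the whole duration
        while any(usage.get(t + k, 0) + dem > capacity for k in range(dur)):
            t += 1
        for k in range(dur):
            usage[t + k] = usage.get(t + k, 0) + dem
        start[name], finish[name] = t, t + dur
    return start

tasks = {"a": (2, 1, []), "b": (3, 2, []),
         "c": (2, 2, ["a"]), "d": (1, 1, ["b", "c"])}
sched = schedule(tasks, capacity=2)
```

An exact approach would instead search over all feasible orderings (or a CP model of them) to minimize the makespan; the greedy rule here can be arbitrarily far from optimal.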
Abstract:
This thesis presents different techniques designed to drive a swarm of robots in an a priori unknown environment, moving the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both theories study interactions between different entities (also called agents or units) in Multi-Agent Systems (MAS); the first belongs to the Artificial Intelligence context and the second to the Distributed Systems context. These theories, each from its own point of view, exploit the emergent behaviour that comes from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm are exploited to overcome and minimize difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps to keep the environmental information detected by each single agent updated across the swarm. Swarm Intelligence is applied through the Particle Swarm Optimization algorithm (PSO), taking advantage of its features as a navigation system. Graph Theory is applied by exploiting Consensus and the agreement protocol, with the aim of maintaining the units in a desired and controlled formation. This approach conserves the power of PSO while controlling part of its random behaviour with a distributed control algorithm like Consensus.
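As a minimal illustration of the PSO update the thesis builds on, the sketch below runs a textbook particle swarm on a toy objective. The inertia and acceleration coefficients are common textbook values, not the thesis's settings, and the navigation environment is replaced by a simple function to minimize; the shared global best plays the role of the information exchanged across the swarm.

```python
import random

def pso(f, dim=2, n=15, iters=60, seed=0):
    """Textbook PSO: each particle is pulled toward its own best position
    and the swarm's best known position. Minimizes f over R^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]          # per-particle best
    gbest = min(pbest, key=f)[:]         # swarm best (shared information)
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

goal = pso(lambda p: sum(x * x for x in p))  # toy target at the origin
```

In the swarm-robotics setting, `f` would encode distance to the goal area plus obstacle penalties, and a consensus layer would keep the particles (robots) in formation.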
Abstract:
OBJECTIVE: To determine the impact of a community based Helicobacter pylori screening and eradication programme on the incidence of dyspepsia, resource use, and quality of life, including a cost consequences analysis. DESIGN: H pylori screening programme followed by randomised placebo controlled trial of eradication. SETTING: Seven general practices in southwest England. PARTICIPANTS: 10,537 unselected people aged 20-59 years were screened for H pylori infection (13C urea breath test); 1558 of the 1636 participants who tested positive were randomised to H pylori eradication treatment or placebo, and 1539 (99%) were followed up for two years. INTERVENTION: Ranitidine bismuth citrate 400 mg and clarithromycin 500 mg twice daily for two weeks or placebo. MAIN OUTCOME MEASURES: Primary care consultation rates for dyspepsia (defined as epigastric pain) two years after randomisation, with secondary outcomes of dyspepsia symptoms, resource use, NHS costs, and quality of life. RESULTS: In the eradication group, 35% fewer participants consulted for dyspepsia over two years compared with the placebo group (55/787 v 78/771; odds ratio 0.65, 95% confidence interval 0.46 to 0.94; P = 0.021; number needed to treat 30) and 29% fewer participants had regular symptoms (odds ratio 0.71, 0.56 to 0.90; P = 0.05). NHS costs were 84.70 pounds sterling (74.90 pounds sterling to 93.91 pounds sterling) greater per participant in the eradication group over two years, of which 83.40 pounds sterling (146 dollars; 121 euro) was the cost of eradication treatment. No difference in quality of life existed between the two groups. CONCLUSIONS: Community screening and eradication of H pylori is feasible in the general population and led to significant reductions in the number of people who consulted for dyspepsia and had symptoms two years after treatment. 
These benefits have to be balanced against the costs of eradication treatment, so a targeted eradication strategy in dyspeptic patients may be preferable.
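The headline figures can be reproduced from the raw counts in the abstract. The crude odds ratio below comes out near the reported (adjusted) 0.65, and the crude number needed to treat near the reported 30; small differences reflect the trial's adjustment and rounding.

```python
def odds_ratio(a, n1, b, n2):
    """Odds ratio for an event occurring in a of n1 (group 1)
    versus b of n2 (group 2)."""
    return (a / (n1 - a)) / (b / (n2 - b))

def nnt(a, n1, b, n2):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1 / (b / n2 - a / n1)

# Consultations for dyspepsia: eradication 55/787 vs placebo 78/771
or_ = odds_ratio(55, 787, 78, 771)
nnt_ = nnt(55, 787, 78, 771)
```

Note the distinction: the odds ratio compares odds (a/(n1-a)), while the NNT is built from the difference in plain risks (a/n1), which is why the two summaries answer different clinical questions.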
Abstract:
INTRODUCTION: The paucity of data on resource use in critically ill patients with hematological malignancy, and these patients' perceived poor outcome, can lead to uncertainty over the extent to which intensive care treatment is appropriate. The aim of the present study was to assess the amount of intensive care resources needed for, and the effect of treatment of, hemato-oncological patients in the intensive care unit (ICU) in comparison with a nononcological patient population with a similar degree of organ dysfunction. METHODS: A retrospective cohort study of 101 ICU admissions of 84 consecutive hemato-oncological patients and 3,808 ICU admissions of 3,478 nononcological patients over a period of 4 years was performed. RESULTS: As assessed by Therapeutic Intervention Scoring System points, resource use was higher in hemato-oncological patients than in nononcological patients (median (interquartile range), 214 (102 to 642) versus 95 (54 to 224), P < 0.0001). Severity of disease at ICU admission was a less important predictor of ICU resource use than the necessity for specific treatment modalities. Hemato-oncological patients and nononcological patients with similar admission Simplified Acute Physiology Score scores had the same ICU mortality. In hemato-oncological patients, improvement of organ function within the first 48 hours of the ICU stay was the best predictor of 28-day survival. CONCLUSION: The presence of a hemato-oncological disease per se is associated with higher ICU resource use, but not with increased mortality. If withdrawal of treatment is considered, the decision should be based not on admission parameters but rather on the evolution of organ dysfunction.
Abstract:
Modern cloud-based applications and infrastructures may include resources and services (components) from multiple cloud providers; they are heterogeneous by nature and require adjustment, composition and integration. Specific application requirements are difficult to meet with the current static, predefined cloud integration architectures and models. In this paper, we propose the Intercloud Operations and Management Framework (ICOMF) as part of the more general Intercloud Architecture Framework (ICAF), which provides a basis for building and operating a dynamically manageable multi-provider cloud ecosystem. The proposed ICOMF enables dynamic resource composition and decomposition, with a main focus on translating business models and objectives into cloud service ensembles. Our model is user-centric and focuses on the specific application execution requirements, leveraging incubating virtualization techniques. From a cloud provider perspective, the ecosystem provides more insight into how best to customize the offerings of virtualized resources.
Abstract:
Traditional methods do not measure people's risk attitude naturally and precisely. Therefore, a fuzzy risk attitude classification method is developed. Since prospect theory is usually considered an effective model of decision making, the personalized parameters in prospect theory are first fuzzified to distinguish people with different risk attitudes; a fuzzy classification database schema is then applied to calculate the exact value of risk value attitude and risk behavior attitude. Finally, by applying a two-hierarchical classification model, the precise value of the synthetical risk attitude can be acquired.
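For readers unfamiliar with the parameters being fuzzified, the sketch below shows the standard prospect-theory value function. The crisp parameter values are Tversky and Kahneman's commonly cited estimates, whereas the paper replaces such per-person parameters with fuzzy memberships; the "cautious"/"bold" comparison is only an illustration of how the loss-aversion parameter separates risk attitudes.

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: concave power curve for gains,
    steeper convex curve for losses. alpha/beta bend gains/losses;
    lam > 1 encodes loss aversion (losses loom larger than gains)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

# A larger lam means the same loss feels worse, i.e. a more
# loss-averse risk attitude:
cautious = value(-100, lam=3.0)
bold = value(-100, lam=1.5)
```

Fuzzifying these parameters amounts to replacing each crisp `alpha`, `beta`, `lam` with a membership function over plausible values, so a person belongs to risk-attitude classes to graded degrees rather than crisply.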
Abstract:
Numerical calculations describing weathering of the Poços de Caldas alkaline complex (Minas Gerais, Brazil) by infiltrating groundwater are carried out for time spans up to two million years in the absence of pyrite, and up to 500,000 years with pyrite present. Deposition of uranium resulting from infiltration of oxygenated, uranium-bearing groundwater through the hydrothermally altered phonolitic host rock at the Osamu Utsumi uranium mine is also included in the latter calculation. The calculations are based on the quasi-stationary state approximation to mass conservation equations for pure advective transport. This approximation enables the prediction of solute concentrations, mineral abundances and porosity as functions of time and distance over geologic time spans. Mineral reactions are described by kinetic rate laws for both precipitation and dissolution. Homogeneous equilibrium is assumed to be maintained within the aqueous phase. No constraints are imposed on the calculations other than the initial composition of the unaltered host rock and the composition of the inlet fluid, taken as rainwater modified by percolation through a soil zone. The results are in qualitative agreement with field observations at the Osamu Utsumi uranium mine. They predict a lateritic cover followed by a highly porous saprolitic zone, a zone of oxidized rock with pyrite replaced by iron-hydroxide, a sharp redox front at which uranium is deposited, and the reduced unweathered host rock. Uranium is deposited in a narrow zone located on the reduced side of the redox front in association with pyrite, in agreement with field observations. The calculations predict the formation of a broad dissolution front of primary kaolinite that penetrates deep into the host rock accompanied by the precipitation of secondary illite. Secondary kaolinite occurs in a saprolitic zone near the surface and in the vicinity of the redox front.
Gibbsite forms a bi-modal distribution consisting of a maximum near the surface followed by a thin tongue extending downward into the weathered profile in agreement with field observations. The results are found to be insensitive to the kinetic rate constants used to describe mineral reactions.
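The abstract does not state the exact kinetic rate expressions, but reactive-transport models of this kind commonly use a transition-state-theory form, rate = k·A·(1 − Q/K), whose sign switches automatically between dissolution and precipitation. The sketch below is that generic form, not the paper's calibrated laws; the numerical values are placeholders.

```python
def tst_rate(k, area, Q, K):
    """Transition-state-theory style mineral rate law:
    rate = k * A * (1 - Q/K), with k a rate constant, A reactive
    surface area, Q the ion activity product and K the equilibrium
    constant. Positive rate -> dissolution (undersaturated, Q < K);
    negative rate -> precipitation (supersaturated, Q > K);
    zero at equilibrium (Q = K)."""
    return k * area * (1.0 - Q / K)

dissolving = tst_rate(1e-10, 100.0, Q=0.5, K=1.0)      # undersaturated
precipitating = tst_rate(1e-10, 100.0, Q=2.0, K=1.0)   # supersaturated
```

The reported insensitivity of the results to the rate constants is consistent with this form: once fronts are transport-limited, scaling `k` shifts reaction-zone widths but not the overall sequence of zones.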