888 results for Search of Optimal Paths
Abstract:
OBJECTIVE: In search of an optimal compression therapy for venous leg ulcers, a systematic review and meta-analysis was performed of randomized controlled trials (RCT) comparing compression systems based on stockings (MCS) with diverse bandages. METHODS: RCT were retrieved from six sources and reviewed independently. The primary endpoint, completion of healing within a defined time frame, and the secondary endpoints, time to healing and pain, were entered into a meta-analysis using the tools of the Cochrane Collaboration. Additional subjective endpoints were summarized. RESULTS: Eight RCT (published 1985-2008) fulfilled the predefined criteria. Data presentation was adequate and showed moderate heterogeneity. The studies included 692 patients (21-178/study, mean age 61 years, 56% women). A total of 688 ulcerated legs were analyzed, present for 1 week to 9 years and sizing 1 to 210 cm². The observation period ranged from 12 to 78 weeks. Patient and ulcer characteristics were evenly distributed in three studies, favored the stocking groups in four, and the bandage group in one. Data on the pressure exerted by stockings and bandages were reported in seven and two studies, amounting to 31-56 and 27-49 mm Hg, respectively. The proportion of ulcers healed was greater with stockings than with bandages (62.7% vs 46.6%; P < .00001). The average time to healing (seven studies, 535 patients) was 3 weeks shorter with stockings (P = .0002). In no study did bandages perform better than MCS. Pain was assessed in three studies (219 patients), revealing an important advantage of stockings (P < .0001). Other subjective parameters and issues of nursing revealed an advantage of MCS as well. CONCLUSIONS: Leg compression with stockings is clearly better than compression with bandages, has a positive impact on pain, and is easier to use.
Abstract:
A system for predicting the development of unstable processes is presented, based on a decision-tree method. A technique for processing the expert information needed to construct and evaluate the decision tree is proposed; in particular, the data are given in fuzzy form. Original algorithms for searching optimal paths of development of the forecast process are described; they are oriented to processing trees of large dimension with vector estimations of arcs.
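The abstract does not give the algorithm itself; as a minimal sketch of searching an optimal path in a tree whose arcs carry vector estimations, one can scalarize each arc vector with expert weights and pick the best root-to-leaf path. The node names, weights and the weighted-sum aggregation below are illustrative assumptions, not details from the paper:

```python
# Hypothetical sketch: each arc carries a vector estimation, e.g.
# (likelihood, severity); assumed expert weights scalarize it.
WEIGHTS = (0.7, 0.3)  # made-up expert weights

def score(vec, weights=WEIGHTS):
    """Weighted-sum scalarization of a vector arc estimation."""
    return sum(w * v for w, v in zip(weights, vec))

def best_path(tree, node):
    """Return (total score, path) of the best root-to-leaf path.

    `tree` maps a node to a list of (child, vector_estimation) arcs.
    """
    arcs = tree.get(node, [])
    if not arcs:                      # leaf: the path ends here
        return 0.0, [node]
    candidates = []
    for child, vec in arcs:
        sub_score, sub_path = best_path(tree, child)
        candidates.append((score(vec) + sub_score, [node] + sub_path))
    return max(candidates)            # pick the maximal accumulated score

# Toy forecast tree: scenario "start" branches into two developments.
tree = {
    "start": [("crisis", (0.9, 0.8)), ("recovery", (0.4, 0.1))],
    "crisis": [("collapse", (0.5, 0.9)), ("stabilize", (0.6, 0.2))],
}
total, path = best_path(tree, "start")
```

Large trees would call for a non-recursive traversal and, for genuinely multi-criteria arcs, a Pareto-front search rather than a fixed scalarization; the sketch only shows the path-selection idea.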
Abstract:
In this paper we propose a range of dynamic data envelopment analysis (DEA) models which allow information on costs of adjustment to be incorporated into the DEA framework. We first specify a basic dynamic DEA model predicated on a number of simplifying assumptions. We then outline a number of extensions to this model to accommodate asymmetric adjustment costs, non-static output quantities, non-static input prices, non-static costs of adjustment, technological change, quasi-fixed inputs and investment budget constraints. The new dynamic DEA models provide valuable extra information relative to the standard static DEA models: they identify an optimal path of adjustment for the input quantities, and provide a measure of the potential cost savings that result from recognising the costs of adjusting input quantities towards the optimal point. The new models are illustrated using data relating to a chain of 35 retail department stores in Chile. The empirical results illustrate the wealth of information that can be derived from these models, and clearly show that static models overstate potential cost savings when adjustment costs are non-zero.
Abstract:
A growing literature integrates theories of debt management into models of optimal fiscal policy. One promising theory argues that the composition of government debt should be chosen so that fluctuations in the market value of debt offset changes in expected future deficits. This complete market approach to debt management is valid even when the government only issues non-contingent bonds. A number of authors conclude from this approach that governments should issue long term debt and invest in short term assets. We argue that the conclusions of this approach are too fragile to serve as a basis for policy recommendations. This is because bonds at different maturities have highly correlated returns, causing the determination of the optimal portfolio to be ill-conditioned. To make this point concrete we examine the implications of this approach to debt management in various models, both analytically and using numerical methods calibrated to the US economy. We find the complete market approach recommends asset positions which are huge multiples of GDP. Introducing persistent shocks or capital accumulation only worsens this problem. Increasing the volatility of interest rates through habits partly reduces the size of these positions. Across these simulations we find no presumption that governments should issue long term debt: policy recommendations can be easily reversed through small perturbations in the specification of shocks or small variations in the maturity of bonds issued. We further extend the literature by removing the assumption that governments costlessly repurchase all outstanding debt every period. This exacerbates the size of the required positions, worsens their volatility and in some cases produces instability in debt holdings. We conclude that it is very difficult to insulate fiscal policy from shocks by using the complete markets approach to debt management.
Given the limited variability of the yield curve, using maturities is a poor way to substitute for state-contingent debt. As a result, the positions recommended by this approach conflict with a number of features that we believe are important in making bond markets incomplete, e.g. allowing for transaction costs, liquidity effects, etc. Until these features are all fully incorporated we remain in search of a theory of debt management capable of providing robust policy insights.
Abstract:
Despite the rapid change in today's business environment there are relatively few studies about corporate renewal. This study aims for its part at filling that research gap by studying the concepts of strategy, corporate renewal, innovation and corporate venturing. Its purpose is to enhance our understanding of how established companies operating in a dynamic and global environment can benefit from their corporate venturing activities. The theoretical part approaches the research problem at the corporate and venture levels. Firstly, it focuses on mapping the determinants of strategy and suggests using industry, location, resources, knowledge, structure and culture, market, technology and business model to assess the environment and using these determinants to optimize the speed and magnitude of change. Secondly, it concludes that the choice of innovation strategy is dependent on the type and dimensions of innovation and suggests assessing market, technology, business model as well as the novelty and complexity related to each of them for choosing an optimal context for developing innovations further. Thirdly, it directs attention to the processes through which corporate renewal takes place. On the corporate level these processes are identified as strategy formulation, strategy formation and strategy implementation. On the venture level the renewal processes are identified as learning, leveraging and nesting. The theoretical contribution of this study, the framework of strategic corporate venturing, joins corporate and venture level management issues together and concludes that strategy processes and linking processes are the mechanism through which continuous corporate renewal takes place. The framework of strategic corporate venturing proposed by this study is a new way to illustrate the role of corporate venturing as a purposefully built, different view of a company's business environment.
The empirical part extended the framework by enhancing our understanding of the link between corporate renewal and corporate venturing in its real-life environment in three Finnish companies: Metso, Nokia and TeliaSonera. Characterizing the companies' environment with the determinants of strategy identified in this study provided a structured way to analyze their competitive position and the renewal challenges that they are facing. More importantly, the case studies confirmed that a link between corporate renewal and corporate venturing exists and found that the link is not as straightforward as indicated by the theory. Furthermore, the case studies enhanced the framework by indicating a sequence according to which the processes work. Firstly, the induced strategy processes, strategy formulation and strategy implementation, set the scene for the corporate venturing context and management processes and leave strategy formation to the venture. Only after that can strategies formed by ventures come back to the corporate level and, if found viable at the corporate level, be formalized through formulation and implementation. With the help of the framework of strategic corporate venturing the link between corporate renewal and corporate venturing can be found and managed. The suggested response to the continuous need for change is continuous renewal, i.e. institutionalizing corporate renewal in the strategy processes of the company. As far as benefiting from venturing is concerned, the answer lies in deliberately managing venturing in a context different to the mainstream businesses and establishing efficient linking processes to exploit the renewal potential of individual ventures.
Abstract:
Wind power is a rapidly developing, low-emission form of energy production. In Finland, the official objective is to increase wind power capacity from the current 1 005 MW up to 3 500–4 000 MW by 2025. By the end of April 2015, the total capacity of all wind power projects being planned in Finland had surpassed 11 000 MW. As the number of projects in Finland is record high, an increasing amount of infrastructure is also being planned and constructed. Traditionally, these planning operations are conducted using manual and labor-intensive work methods that are prone to subjectivity. This study introduces a GIS-based methodology for determining optimal paths to support the planning of onshore wind park infrastructure alignment in the Nordanå-Lövböle wind park located on the island of Kemiönsaari in Southwest Finland. The presented methodology utilizes a least-cost path (LCP) algorithm to search for optimal paths within a high-resolution real-world terrain dataset derived from airborne lidar scannings. In addition, planning data is used to provide a realistic planning framework for the analysis. In order to produce realistic results, the physiographic and planning datasets are standardized and weighted according to qualitative suitability assessments by utilizing methods and practices offered by multi-criteria evaluation (MCE). The results are presented as scenarios to correspond to various planning objectives. Finally, the methodology is documented by using tools of Business Process Management (BPM). The results show that the presented methodology can be effectively used to search for and identify extensive, 20 to 35 kilometers long networks of paths that correspond to certain optimization objectives in the study area. The utilization of high-resolution terrain data produces a more objective and more detailed path alignment plan.
This study demonstrates that the presented methodology can be practically applied to support a wind power infrastructure alignment planning process. The six-phase structure of the methodology allows straightforward incorporation of different optimization objectives. The methodology responds well to combining quantitative and qualitative data. Additionally, the careful documentation presents an example of how the methodology can be evaluated and developed as a business process. This thesis also shows that more emphasis on the research of algorithm-based, more objective methods for the planning of infrastructure alignment is desirable, as technological development has only recently started to realize the potential of these computational methods.
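The core of an LCP analysis of this kind is a shortest-path search over a weighted cost raster. The thesis uses GIS tooling; as a minimal stand-alone sketch of the same idea, Dijkstra's algorithm over a tiny made-up raster (the cell values stand in for MCE-weighted suitability costs and are not from the study):

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra least-cost path on a 2D cost raster (4-neighbour moves).

    `cost[r][c]` is the cost of entering cell (r, c); returns the cell
    sequence from start to goal, or None if the goal is unreachable.
    """
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    frontier = [(0.0, start)]
    while frontier:
        d, cell = heapq.heappop(frontier)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue                     # stale queue entry, skip
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(frontier, (nd, (nr, nc)))
    if goal not in dist:
        return None
    path, cell = [goal], goal            # walk predecessors back to start
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

# Toy 3x3 raster: high values stand in for steep or unsuitable cells
# after MCE-style weighting (values are made up).
raster = [[1, 9, 1],
          [1, 9, 1],
          [1, 1, 1]]
route = least_cost_path(raster, (0, 0), (0, 2))
```

A production LCP run would additionally allow diagonal moves with distance-corrected costs and operate on lidar-derived rasters with millions of cells, but the optimization step is the same.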
Abstract:
We consider entry-level medical markets for physicians in the United Kingdom. These markets experienced failures which led to the adoption of centralized market mechanisms in the 1960s. However, different regions introduced different centralized mechanisms. We advise physicians who do not have detailed information about the rank-order lists submitted by the other participants. We demonstrate that in each of these markets, in a low-information environment, it is not beneficial to reverse the true ranking of any two acceptable hospital positions. We further show that (i) in the Edinburgh 1967 market, ranking unacceptable matches as acceptable is not profitable for any participant and (ii) in any other British entry-level medical market, it is possible that only strategies which rank unacceptable positions as acceptable are optimal for a physician.
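The regional mechanisms studied differ from one another, and the abstract does not specify their rules; as a generic illustration of a centralized entry-level matching mechanism (not the Edinburgh 1967 or any other specific British procedure), a minimal physician-proposing deferred-acceptance sketch with made-up preference lists:

```python
def deferred_acceptance(doc_prefs, hosp_prefs):
    """Doctor-proposing deferred acceptance, one position per hospital.

    `doc_prefs[d]` lists hospitals in d's preference order (acceptable
    positions only); `hosp_prefs[h]` likewise ranks acceptable doctors.
    Returns {hospital: doctor} for the doctor-optimal stable matching.
    """
    next_choice = {d: 0 for d in doc_prefs}      # next list index to try
    held = {}                                    # hospital -> doctor held
    free = list(doc_prefs)
    while free:
        d = free.pop()
        if next_choice[d] >= len(doc_prefs[d]):
            continue                             # d exhausted the list, stays unmatched
        h = doc_prefs[d][next_choice[d]]
        next_choice[d] += 1
        rank = hosp_prefs[h]
        if d not in rank:
            free.append(d)                       # d is unacceptable to h
        elif h not in held:
            held[h] = d                          # h tentatively holds d
        elif rank.index(d) < rank.index(held[h]):
            free.append(held[h])                 # h prefers d; bump holder
            held[h] = d
        else:
            free.append(d)                       # h rejects d
    return held

# Made-up example: two physicians, two hospital positions.
doc_prefs = {"ann": ["edinburgh", "cardiff"], "bob": ["edinburgh", "cardiff"]}
hosp_prefs = {"edinburgh": ["bob", "ann"], "cardiff": ["ann", "bob"]}
match = deferred_acceptance(doc_prefs, hosp_prefs)
```

Under this proposing side's mechanism, truthfully ranking acceptable positions is a dominant strategy, which connects to the paper's finding that reversing true rankings is never beneficial in a low-information environment.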
Abstract:
The purpose of this work is to provide a brief overview of the literature on the optimal design of unemployment insurance systems by analyzing some of the most influential articles published over the last three decades on the subject and to extend the main results to a multiple-aggregate-shocks environment. The properties of optimal contracts are discussed in light of the key assumptions commonly made in theoretical publications in the area. Moreover, the implications of relaxing each of these hypotheses are considered as well. The analysis of models of only one unemployment spell starts from the seminal work of Shavell and Weiss (1979). In a simple and common setting, unemployment benefit policies, wage taxes and search effort assignments are covered. Further, the idea that the UI distortion of the relative price of leisure and consumption is the only explanation for the marginal incentives to search for a job is discussed, putting into question the reduction in labor supply caused by social insurance, usually interpreted solely as evidence of a dynamic moral hazard caused by a substitution effect. In addition, the paper presents one characterization of optimal unemployment insurance contracts in environments in which workers experience multiple unemployment spells. Finally, an extension to a multiple-aggregate-shocks environment is considered. The paper ends with a numerical analysis of the implications of i.i.d. shocks for the optimal unemployment insurance mechanism.
Abstract:
Like other regions of the world, the EU is developing biofuels in the transport sector to reduce oil consumption and mitigate climate change. To promote them, it has adopted favourable legislation since the 2000s. In 2009 it even decided to oblige each Member State to ensure that by 2020 the share of energy coming from renewable sources reached at least 10% of their final consumption of energy in the transport sector. Biofuels are considered the main instrument to reach that percentage since the development of other alternatives (such as hydrogen and electricity) will take much longer than expected. Meanwhile, these various legislative initiatives have driven the production and consumption of biofuels in the EU. Biofuels accounted for 4.7% of EU transport fuel consumption in 2011. They have also led to trade and investment in biofuels on a global scale. This large-scale expansion of biofuels has, however, revealed numerous negative impacts. These stem from the fact that first-generation biofuels (i.e., those produced from food crops), of which the most important types are biodiesel and bioethanol, are used almost exclusively to meet the EU's renewable 10% target in transport. Their negative impacts are: socioeconomic (food price rises), legal (land-grabbing), environmental (for instance, water stress and water pollution; soil erosion; reduction of biodiversity), climatic (direct and indirect land-use effects resulting in more greenhouse gas emissions) and public finance issues (subsidies and tax relief). The extent of such negative impacts depends on how biofuel feedstocks are produced and processed, the scale of production, and in particular, how they influence direct land use change (DLUC), indirect land use change (ILUC) and international trade. These negative impacts have thus provoked mounting debates in recent years, with a particular focus on ILUC.
They have forced the EU to re-examine how it deals with biofuels and submit amendments to update its legislation. So far, the EU legislation foresees that only sustainable biofuels (produced in the EU or imported) can be used to meet the 10% target and receive public support; to that end, mandatory sustainability criteria have been defined. Yet they have a huge flaw. Their measurement of greenhouse gas savings from biofuels does not take into account greenhouse gas emissions resulting from ILUC, which represent a major problem. The Energy Council of June 2014 agreed to set a limit on the extent to which first-generation biofuels can count towards the 10% target. But this limit appears to be less stringent than the ones proposed previously by the European Commission and the European Parliament. It also agreed to introduce incentives for the use of advanced (second- and third-generation) biofuels, which would be allowed to count double towards the 10% target. But this again appears extremely modest by comparison with what was previously proposed. Finally, the approach chosen to take into account the greenhouse gas emissions due to ILUC appears more than cautious. The Energy Council agreed that the European Commission will carry out reporting of ILUC emissions by using provisional estimated factors. A review clause will permit the later adjustment of these ILUC factors. With such legislative orientations made by the Energy Council, one cannot yet consider that there has been a major shift in EU biofuels policy. Bolder changes would probably have meant risking the collapse of the high-emission conventional biodiesel industry which currently makes up the majority of Europe's biofuel production. The interests of EU farmers would also have been affected. There is nevertheless a tension between these legislative orientations and the new Commission's proposals beyond 2020. In any case, many uncertainties remain on this issue.
As long as solutions have not been found to minimize the significant collateral damage caused by first-generation biofuels, more scientific studies and caution are needed. Meanwhile, it would be wise to improve alternative paths towards a sustainable transport sector, i.e., stringent emission and energy standards for all vehicles, better public transport systems, automobiles that run on renewable energy other than biofuels, or other alternatives beyond the present imagination.
Abstract:
The purpose of the optimal valid partitioning (OVP) methods discussed here is to uncover the effect of ordinal or continuous explanatory variables on outcome variables of different types. The OVP approach is based on searching for partitions of the explanatory variables space that best separate observations with different levels of outcomes. Partitions of single-variable ranges, or two-dimensional admissible areas for pairs of variables, are searched for inside corresponding families. The statistical validity associated with revealed regularities is estimated with the help of a permutation test that repeats the search for an optimal partition on each permuted dataset. A method for selecting output regularities is discussed that is based on evaluating validity with the help of two types of permutation tests.
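The key point of the OVP validity estimate is that the optimal-partition search is repeated on every permuted dataset, so the p-value accounts for the selection bias of the search itself. A minimal sketch of that idea for a single-variable threshold partition (the split criterion, a mean gap, and the data are illustrative assumptions, not the paper's specific families of partitions):

```python
import random

def best_split_gap(x, y):
    """Best single-threshold partition of x: maximal |mean gap| in y."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    best = 0.0
    for k in range(1, len(x)):               # try every split point
        left = [y[order[i]] for i in range(k)]
        right = [y[order[i]] for i in range(k, len(x))]
        gap = abs(sum(left) / len(left) - sum(right) / len(right))
        best = max(best, gap)
    return best

def permutation_p_value(x, y, n_perm=999, seed=0):
    """Repeat the optimal-partition search on permuted outcomes.

    The p-value is the share of permutations whose best gap is at
    least as large as the observed one, so the optimization step is
    rerun inside every permutation, as in the OVP validity estimate.
    """
    rng = random.Random(seed)
    observed = best_split_gap(x, y)
    hits = 0
    for _ in range(n_perm):
        y_perm = y[:]
        rng.shuffle(y_perm)                  # break the x-y association
        if best_split_gap(x, y_perm) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)         # add-one correction

# Made-up data with a genuine level shift at x > 4.
x = list(range(10))
y = [0.1, 0.2, 0.0, 0.3, 0.1, 2.1, 2.0, 2.3, 1.9, 2.2]
p = permutation_p_value(x, y)
```

Naively comparing the observed optimal gap against a null distribution that did not re-optimize would understate the p-value; rerunning the search per permutation is what makes the validity estimate honest.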
Abstract:
Background: In areas with limited structure in place for microscopy diagnosis, rapid diagnostic tests (RDT) have been demonstrated to be effective. Method: The cost-effectiveness of the OptiMAL® test and thick smear microscopy was estimated and compared. Data were collected in remote areas of 12 municipalities in the Brazilian Amazon. Data sources included the National Malaria Control Programme of the Ministry of Health, the National Healthcare System reimbursement table, hospitalization records, primary data collected from the municipalities, and the scientific literature. The perspective was that of the Brazilian public health system, the analytical horizon was from the start of fever until the diagnostic results were provided to the patient, and the temporal reference was the year 2006. The results were expressed as costs per adequately diagnosed case in 2006 U.S. dollars. Sensitivity analysis was performed considering key model parameters. Results: In the base case scenario, considering 92% and 95% sensitivity for thick smear microscopy for Plasmodium falciparum and Plasmodium vivax, respectively, and 100% specificity for both species, thick smear microscopy is more costly and more effective, with an incremental cost estimated at US$ 549.9 per adequately diagnosed case. In the sensitivity analysis, when the sensitivity and specificity of microscopy for P. vivax were 0.90 and 0.98, respectively, and when its sensitivity for P. falciparum was 0.83, the RDT was more cost-effective than microscopy. Conclusion: Microscopy is more cost-effective than OptiMAL® in these remote areas if high accuracy of microscopy is maintained in the field. Decisions regarding the use of rapid tests for diagnosis of malaria in these areas depend on current microscopy accuracy in the field.
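The incremental cost reported above is an incremental cost-effectiveness ratio: extra cost divided by extra effect of the more costly, more effective strategy. A minimal sketch of the arithmetic, with purely illustrative inputs chosen to land near the study's reported order of magnitude (the study's underlying cost and effect data are not given in the abstract):

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra
    unit of effect (here, per additional adequately diagnosed case)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Illustrative numbers only, per 1,000 febrile patients (not the
# study's data): microscopy is assumed more costly and more effective.
microscopy = {"cost": 60_000.0, "diagnosed": 940}
rdt = {"cost": 49_000.0, "diagnosed": 920}
extra_cost_per_case = icer(microscopy["cost"], microscopy["diagnosed"],
                           rdt["cost"], rdt["diagnosed"])
```

With these made-up inputs the ratio is US$ 550 per additional adequately diagnosed case, the same order as the study's US$ 549.9 estimate; the sensitivity analysis in the abstract amounts to recomputing this ratio while varying the diagnostic accuracy parameters.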
Abstract:
This paper proposes an approach of optimal sensitivity applied to the tertiary loop of automatic generation control. The approach is based on the theorem of non-linear perturbation. From an optimal operation point obtained by an optimal power flow, a new optimal operation point is determined directly after a perturbation, i.e., without the need for an iterative process. This new optimal operation point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the automatic voltage regulators (AVR) of the generators are determined by the technique of optimal sensitivity, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of automatic generation control, called the power sensitivity mode. Test results are presented to show the good performance of this approach.
Abstract:
Religious belief and practice play an important role in the lives of millions of people worldwide, and yet little is known of the spiritual lives of people with a disability. This review explores the realm of disability, religion and health, and draws together literature from a variety of sources to illustrate the diversity of the sparse research in the field. A historical, cross-cultural and religious textual overview of attitudes toward disability throughout the centuries is presented. Studies in religious orientation, health and well-being are reviewed, highlighting the potential of religion to affect the lives of people with a disability, their families and caregivers. Finally, the spiritual dimensions of disability are explored to gain some understanding of the spiritual lives and existential challenges of people with a disability, and a discussion ensues on the importance of further research into this new field of endeavour.
Abstract:
The aim of this study was to investigate the association between false belief comprehension, the exhibition of pretend play and the use of mental state terms in pre-school children. Forty children, aged between 36 and 54 months, were videotaped engaging in free play with each parent. The exhibition of six distinct acts of pretend play and the expression of 16 mental state terms were coded during play. Each child was also administered a pantomime task and three standard false belief tasks. Reliable associations were found between false belief performance and the pretence categories of object substitution and role assignment, and the exhibition of imaginary object pantomimes. Moreover, the use of mental state terms was positively correlated with false belief and the pretence categories of object substitution, imaginary play and role assignment, and negatively correlated with the exhibition of body part object pantomimes. These findings indicate that the development of a mental state lexicon and some, but not all, components of pretend play are dependent on the capacity for metarepresentational cognition.