958 results for OPTIMAL ESTIMATES OF STABILITY REGION



Piotr Omenzetter and Simon Hoell's work within the Lloyd's Register Foundation Centre for Safety and Reliability Engineering at the University of Aberdeen is supported by Lloyd’s Register Foundation. The Foundation helps to protect life and property by supporting engineering-related education, public engagement and the application of research.



Acknowledgements We are grateful to Stefan Seibert for advice on reconciling the Monfreda datasets of yield and area and the Portmann dataset for irrigated area of rice. We thank Deepak Ray and Jonathan Foley for helpful comments. Research support to J.G. K.C., N.M, and P.W. was primarily provided by the Gordon and Betty Moore Foundation and the Institute on Environment, with additional support from NSF Hydrologic Sciences grant 1521210 for N.M., and additional support to J.G. and P.W. whose efforts contribute to Belmont Forum/FACCE-JPI funded DEVIL project (NE/M021327/1). M.H. was supported by CSIRO's OCE Science Leaders Programme and the Agriculture Flagship. Funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.



We recently published an article (García-Pérez & Alcalá-Quintana, 2010) reanalyzing data presented by Lapid, Ulrich, and Rammsayer (2008) and discussing a theoretical argument developed by Ulrich and Vorberg (2009). The purpose of this note is to correct an error in our study that has some theoretical importance, although it does not affect the conclusions that were drawn. The error is that asymptote parameters reflecting lapses or finger errors should not enter the constraint relating the psychometric functions that describe performance when the comparison stimulus in a two-alternative forced-choice (2AFC) discrimination task is presented in the first or second interval.
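The role of the asymptote parameters can be sketched with a standard 2AFC psychometric function of the form ψ(x) = γ + (1 − γ − λ)F(x), where γ is the guess rate and λ the lapse rate. This is a minimal illustration only; the logistic core and parameter names below are generic textbook choices, not taken from the articles under discussion.

```python
import math

def psychometric(x, alpha, beta, gamma=0.5, lam=0.02):
    """2AFC psychometric function with guess rate gamma and lapse rate lam.

    gamma is the lower asymptote (0.5 for guessing in 2AFC) and lam the
    lapse rate; F is a logistic core with location alpha and slope beta.
    Parameter names are illustrative, not from the original articles.
    """
    F = 1.0 / (1.0 + math.exp(-beta * (x - alpha)))
    return gamma + (1.0 - gamma - lam) * F

# Without lapses the function spans [0.5, 1.0]; lapses compress the
# upper asymptote to 1 - lam, independently of the core F. That is why,
# per the correction, these asymptote parameters belong outside any
# constraint that links the interval-specific core functions.
print(psychometric(10.0, 0.0, 1.0, lam=0.0))   # near the upper asymptote 1.0
print(psychometric(10.0, 0.0, 1.0, lam=0.05))  # upper asymptote now 0.95
```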


Supply chain operations directly affect service levels. Decisions on adding or removing facilities are generally based on overall cost, leaving out the efficiency of each unit. By decomposing the supply chain superstructure, efficiency analysis of the facilities (warehouses or distribution centers) that serve customers can be easily implemented. With the proposed algorithm, the selection of a facility is based on service-level maximization and not just cost minimization, as the analysis filters all feasible solutions using the Data Envelopment Analysis (DEA) technique. Through multiple iterations, solutions are filtered via DEA and only the efficient ones are selected, leading to cost minimization. In this work, the problem of optimal supply chain network design is addressed with a DEA-based algorithm. A Branch and Efficiency (B&E) algorithm is deployed for the solution of this problem. In this DEA approach, each solution (potentially installed warehouse, plant, etc.) is treated as a Decision Making Unit and is thus characterized by inputs and outputs. Through additional constraints named "efficiency cuts", the algorithm selects only efficient solutions, providing better objective function values. The applicability of the proposed algorithm is demonstrated through illustrative examples.
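The DEA filtering step can be miniaturized for intuition. With a single input and a single output, the CCR efficiency score reduces to each unit's output/input ratio divided by the best observed ratio; the warehouse data and the efficiency threshold below are hypothetical, and the full B&E algorithm of the paper works on a multi-input, multi-output linear program rather than this ratio shortcut.

```python
def dea_efficiency(inputs, outputs):
    """CCR efficiency scores for single-input, single-output DMUs.

    In this special case the CCR score is each unit's output/input
    ratio divided by the best ratio observed across all units.
    """
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical warehouses: operating cost (input) vs. demand served (output).
cost   = [100.0, 80.0, 120.0]
served = [200.0, 200.0, 180.0]
scores = dea_efficiency(cost, served)

# An "efficiency cut" in the B&E spirit would then discard facilities
# whose score falls below a threshold before re-solving the design MILP.
efficient = [i for i, s in enumerate(scores) if s >= 0.999]
print(scores, efficient)
```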


The research and development costs of 106 randomly selected new drugs were obtained from a survey of 10 pharmaceutical firms. These data were used to estimate the average pre-tax cost of new drug and biologics development. The costs of compounds abandoned during testing were linked to the costs of compounds that obtained marketing approval. The estimated average out-of-pocket cost per approved new compound is $1395 million (2013 dollars). Capitalizing out-of-pocket costs to the point of marketing approval at a real discount rate of 10.5% yields a total pre-approval cost estimate of $2558 million (2013 dollars). When compared to the results of the previous study in this series, total capitalized costs were shown to have increased at an annual rate of 8.5% above general price inflation. Adding an estimate of post-approval R&D costs increases the cost estimate to $2870 million (2013 dollars).
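The step from out-of-pocket to capitalized cost is a compounding calculation: each year's spending is carried forward to the approval date at the real discount rate. The even ten-year spending profile below is purely illustrative (the study uses phase-specific timings and abandonment-linked costs), but it shows why capitalizing at 10.5% pushes the total well above the $1395 million out-of-pocket figure.

```python
def capitalized_cost(annual_spend, rate):
    """Compound each year's out-of-pocket spend forward to approval.

    annual_spend[0] is the earliest year's spending, which compounds
    for len(annual_spend) - 1 years at the real discount rate.
    """
    T = len(annual_spend)
    return sum(s * (1 + rate) ** (T - 1 - t) for t, s in enumerate(annual_spend))

# Hypothetical even spread of $1395M over 10 years, in $M (2013 dollars).
out_of_pocket = [139.5] * 10
total = capitalized_cost(out_of_pocket, 0.105)
print(round(total))  # well above the 1395 nominal sum
```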


The growth of waste is a problem that affects the environment as a whole, and we cannot ignore it. Good waste management is the key to improving future prospects, and waste collection is a key activity within that management. Finding a better way to collect waste leads to a reduction of social, economic, and environmental costs. Using Geographic Information Systems, we have developed a methodology to identify the most suitable locations for the collection containers of the different sorts of solid urban waste. Given that different types of waste exist, not all of them should be managed in the same way. We therefore differentiate between models where we apply efficiency and models where we apply equity for waste collection, bearing in mind the necessities of each waste type.


Peer effects in adolescent cannabis use are difficult to estimate, due in part to the lack of appropriate data on behaviour and social ties. This paper exploits survey data that have many desirable properties and have not previously been used for this purpose. The data set, collected from teenagers in three annual waves from 2002 to 2004, contains longitudinal information about friendship networks within schools (N = 5,020). We exploit these data on network structure to estimate peer effects on adolescents from their nominated friends within school, using two alternative approaches to identification. First, we present a cross-sectional instrumental variable (IV) estimate of peer effects that exploits network structure at the second degree, i.e. using information on friends of friends who are not themselves ego's friends to instrument for the cannabis use of friends. Second, we present an individual fixed-effects estimate of peer effects using the full longitudinal structure of the data. Both innovations allow a greater degree of control for correlated effects than is common in the substance-use peer-effects literature, improving our chances of obtaining estimates of peer effects that can be plausibly interpreted as causal. Both estimates suggest positive peer effects of non-trivial magnitude, although the IV estimate is imprecise. Furthermore, when we specify identical models with the behaviour and characteristics of randomly selected school peers in place of friends', we find effectively zero effect from these 'placebo' peers, lending credence to our main estimates. We conclude that cross-sectional data can be used to estimate plausible positive peer effects on cannabis use where network structure information is available and appropriately exploited.
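The friends-of-friends identification strategy can be sketched on simulated data: an unobserved shared environment confounds own and friends' use, but friends-of-friends' behaviour shifts friends' use without entering one's own equation, so the IV estimator recovers the peer effect. Everything below is simulated for illustration; the paper's actual estimation uses real network data and full 2SLS with covariates.

```python
import random

def iv_estimate(y, x, z):
    """One-instrument IV (Wald) estimator: cov(z, y) / cov(z, x).

    With a single instrument and a single endogenous regressor this
    coincides with two-stage least squares.
    """
    n = len(y)
    my, mx, mz = sum(y) / n, sum(x) / n, sum(z) / n
    cov_zy = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y)) / n
    cov_zx = sum((zi - mz) * (xi - mx) for zi, xi in zip(z, x)) / n
    return cov_zy / cov_zx

random.seed(0)
n = 5000
z = [random.gauss(0, 1) for _ in range(n)]   # friends-of-friends' use
u = [random.gauss(0, 1) for _ in range(n)]   # unobserved shared environment
x = [zi + ui for zi, ui in zip(z, u)]        # friends' use (endogenous)
y = [0.5 * xi + ui + random.gauss(0, 1)      # own use; true peer effect 0.5
     for xi, ui in zip(x, u)]

beta = iv_estimate(y, x, z)  # consistent despite the confounder u
print(beta)
```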


This paper presents an integer programming model for developing optimal shift schedules while allowing extensive flexibility in terms of alternate shift starting times, shift lengths, and break placement. The model combines the work of Moondra (1976) and Bechtold and Jacobs (1990) by implicitly matching meal breaks to implicitly represented shifts. Moreover, the new model extends the work of these authors to enable the scheduling of overtime and the scheduling of rest breaks. We compare the new model to Bechtold and Jacobs' model over a diverse set of 588 test problems. The new model generates optimal solutions more rapidly, solves problems with more shift alternatives, and does not generate schedules violating the operative restrictions on break timing.
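The core scheduling problem can be miniaturized: choose how many employees work each candidate shift so that every planning period's staffing requirement is covered at minimum total headcount. The toy instance and exhaustive search below are illustrative only; the paper's model is an integer program with implicitly represented shifts, break matching, and overtime, none of which this sketch attempts.

```python
from itertools import product

# Hypothetical toy instance: 4 planning periods, a staffing requirement
# per period, and three candidate shifts as period-coverage vectors.
demand = [2, 3, 3, 1]
shifts = [
    (1, 1, 0, 0),  # early shift
    (0, 1, 1, 0),  # mid shift
    (0, 0, 1, 1),  # late shift
]

best = None
# Exhaustive search stands in for the integer program: try every count
# of employees per shift and keep the cheapest feasible assignment.
for counts in product(range(5), repeat=len(shifts)):
    cover = [sum(c * s[p] for c, s in zip(counts, shifts))
             for p in range(len(demand))]
    if all(cv >= d for cv, d in zip(cover, demand)):
        cost = sum(counts)
        if best is None or cost < best[0]:
            best = (cost, counts)

print(best)  # minimal total employees and per-shift counts
```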


An extensive literature exists on the problems of daily (shift) and weekly (tour) labor scheduling. In representing requirements for employees in these problems, researchers have used formulations based either on the model of Dantzig (1954) or on the model of Keith (1979). We show that both formulations have weaknesses in environments where management knows, or can attempt to identify, how different levels of customer service affect profits. These weaknesses result in lower-than-necessary profits. This paper presents a New Formulation of the daily and weekly Labor Scheduling Problems (NFLSP) designed to overcome the limitations of earlier models. NFLSP incorporates information on how changing the number of employees working in each planning period affects profits. NFLSP uses this information during the development of the schedule to identify the number of employees who, ideally, should be working in each period. In an extensive simulation of 1,152 service environments, NFLSP outperformed the formulations of Dantzig (1954) and Keith (1979) at a significance level of 0.001. Assuming year-round operations and an hourly wage, including benefits, of $6.00, NFLSP's schedules were $96,046 (2.2%) and $24,648 (0.6%) more profitable, on average, than schedules developed using the formulations of Dantzig (1954) and Keith (1979), respectively. Although the average percentage gain over Keith's model was fairly small, it could be much larger in some real cases with different parameters. In 73 and 100 percent of the cases we simulated, NFLSP yielded a higher profit than the models of Keith (1979) and Dantzig (1954), respectively.
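The profit-driven idea, staffing each period at its profit-maximizing level rather than at a fixed requirement, can be sketched in isolation. The profit table below is hypothetical, and this sketch deliberately ignores the shift constraints that link periods in the actual formulation; it only illustrates why a profit signal per period can beat a hard coverage requirement.

```python
def best_staffing(profit_by_level):
    """Pick, for each period, the staffing level with the highest profit.

    profit_by_level[p][n] is the profit of staffing period p with n
    employees (service revenue captured minus wages). Shift structure,
    which couples the periods in the real model, is ignored here.
    """
    return [max(range(len(levels)), key=levels.__getitem__)
            for levels in profit_by_level]

# Hypothetical profits: diminishing service revenue minus $6/h wages.
profits = [
    [0.0, 14.0, 22.0, 24.0, 23.0],   # period 1: 3 staff is ideal
    [0.0, 10.0, 14.0, 13.0, 10.0],   # period 2: 2 staff is ideal
]
print(best_staffing(profits))  # [3, 2]
```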


Background: Timely assessment of the burden of HIV/AIDS is essential for policy setting and programme evaluation. In this report from the Global Burden of Disease Study 2015 (GBD 2015), we provide national estimates of levels and trends of HIV/AIDS incidence, prevalence, coverage of antiretroviral therapy (ART), and mortality for 195 countries and territories from 1980 to 2015.

Methods: For countries without high-quality vital registration data, we estimated prevalence and incidence with data from antenatal care clinics and population-based seroprevalence surveys, and with assumptions by age and sex on initial CD4 distribution at infection, CD4 progression rates (probability of progression from higher to lower CD4 cell-count category), on- and off-ART mortality, and mortality from all other causes. Our estimation strategy links the GBD 2015 assessment of all-cause mortality and estimation of incidence and prevalence so that for each draw from the uncertainty distribution all assumptions used in each step are internally consistent. We estimated incidence, prevalence, and death with GBD versions of the Estimation and Projection Package (EPP) and Spectrum software originally developed by the Joint United Nations Programme on HIV/AIDS (UNAIDS). We used an open-source version of EPP and recoded Spectrum for speed, and used updated assumptions from systematic reviews of the literature and GBD demographic data. For countries with high-quality vital registration data, we developed the cohort incidence bias adjustment model to estimate HIV incidence and prevalence largely from the number of deaths caused by HIV recorded in cause-of-death statistics. We corrected these statistics for garbage coding and HIV misclassification.

Findings: Global HIV incidence reached its peak in 1997, at 3·3 million new infections (95% uncertainty interval [UI] 3·1–3·4 million). Annual incidence has stayed relatively constant at about 2·6 million per year (range 2·5–2·8 million) since 2005, after a period of fast decline between 1997 and 2005. The number of people living with HIV/AIDS has been steadily increasing and reached 38·8 million (95% UI 37·6–40·4 million) in 2015. At the same time, HIV/AIDS mortality has been declining at a steady pace, from a peak of 1·8 million deaths (95% UI 1·7–1·9 million) in 2005, to 1·2 million deaths (1·1–1·3 million) in 2015. We recorded substantial heterogeneity in the levels and trends of HIV/AIDS across countries. Although many countries have experienced decreases in HIV/AIDS mortality and in annual new infections, other countries have had slowdowns or increases in rates of change in annual new infections.

Interpretation: Scale-up of ART and prevention of mother-to-child transmission has been one of the great successes of global health in the past two decades. However, in the past decade, progress in reducing new infections has been slow, development assistance for health devoted to HIV has stagnated, and resources for health in low-income countries have grown slowly. Achievement of the new ambitious goals for HIV enshrined in Sustainable Development Goal 3 and the 90-90-90 UNAIDS targets will be challenging, and will need continued efforts from governments and international agencies in the next 15 years to end AIDS by 2030.
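The CD4 progression assumption in the Methods can be pictured as a cohort moving through ordered CD4 categories with category-specific mortality. The sketch below uses made-up rates and three categories purely to show the mechanism; it is not the GBD 2015 model, which layers ART status, age, sex, and uncertainty draws on top of this idea.

```python
def step(state, progression, mortality):
    """Advance a cohort one period through ordered CD4 categories.

    state[i] is the share of the cohort in CD4 category i (0 = highest
    cell count); progression[i] is the probability of moving to category
    i + 1, and mortality[i] the per-period HIV death probability.
    All rates here are illustrative, not GBD 2015 inputs.
    """
    n = len(state)
    new = [0.0] * n
    deaths = 0.0
    for i, share in enumerate(state):
        d = share * mortality[i]
        deaths += d
        stay = share - d
        if i + 1 < n:
            moved = stay * progression[i]
            new[i + 1] += moved
            stay -= moved
        new[i] += stay
    return new, deaths

state = [1.0, 0.0, 0.0]            # everyone starts in the top category
progression = [0.3, 0.2, 0.0]
mortality = [0.01, 0.03, 0.10]
total_deaths = 0.0
for _ in range(5):
    state, died = step(state, progression, mortality)
    total_deaths += died
print(state, total_deaths)
```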


Thesis (Master's)--University of Washington, 2016-08


This study measured fuel consumption in transporting grain from Iowa origins to Japan and Amsterdam by alternative routes and modes of transport and applied these data to construct equations for fuel consumption from Iowa origins to alternative final destinations. Some of the results are as follows:

(1) The metered tractor-trailer truck averaged 186.6 gross ton-miles per gallon and 90.5 net ton-miles per gallon when loaded 50% of total miles.

(2) The 1983 fuel consumption of seven trucks taken from company records was 82.4 net ton-miles per gallon at 67.5% loaded miles and 68.6 net ton-miles per gallon at 50% loaded miles.

(3) Unit grain trains from Iowa to West Coast ports averaged 437.0 net ton-miles per gallon, whereas unit grain trains from Iowa to New Orleans averaged 640.1 net ton-miles per gallon, a 46% advantage for the New Orleans trips.

(4) Average barge fuel consumption on the Mississippi River from Iowa to New Orleans export grain elevators was 544.5 net ton-miles per gallon, with a 35% backhaul rate.

(5) Ocean vessel net ton-miles per gallon varies widely by size of ship and backhaul percentage. With no backhaul, the averages were as follows: 30,000 dwt ship, 574.8 net ton-miles per gallon; 50,000 dwt ship, 701.9; 70,000 dwt ship, 835.1; 100,000 dwt ship, 1,043.4.

(6) The most fuel-efficient route and modal combination to transport grain from Iowa to Japan depends on the size of ocean vessel, the percentage of backhaul, and the origin of the grain. Alternative routes and modal combinations in shipping grain to Japan are ranked in descending order of fuel efficiency.
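The net ton-miles per gallon measure used throughout can be reproduced from trip data: payload ton-miles accumulate only on loaded miles, while fuel burns on loaded and empty (backhaul) miles alike. The truck numbers below are hypothetical, chosen only to show why a 50% loaded-miles operation scores far below its loaded-leg figure; gross ton-miles, which also count vehicle weight, are not computed here.

```python
def net_ton_miles_per_gallon(tons, loaded_miles, empty_miles, mpg):
    """Net ton-miles per gallon: payload ton-miles over total fuel.

    Fuel is burned over both loaded and empty miles at mpg, but
    cargo moves only on the loaded miles. All inputs are hypothetical.
    """
    gallons = (loaded_miles + empty_miles) / mpg
    return tons * loaded_miles / gallons

# Hypothetical trip: 25 tons hauled 500 loaded miles with a 500-mile
# empty return, at 6 miles per gallon.
print(net_ton_miles_per_gallon(25, 500, 500, 6.0))  # → 75.0
```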