985 results for Optimal unit commitment
Abstract:
The Vapnik-Chervonenkis (VC) dimension is a combinatorial measure of a certain class of machine learning problems, which may be used to obtain upper and lower bounds on the number of training examples needed to learn to prescribed levels of accuracy. Most of the known bounds apply to the Probably Approximately Correct (PAC) framework, which is the framework within which we work in this paper. For a learning problem with known VC dimension, much is known about the order of growth of the problem's sample-size requirement as a function of the PAC parameters. The exact value of the sample-size requirement is, however, less well known, and depends heavily on the particular learning algorithm being used. This is a major obstacle to the practical application of the VC dimension. Hence it is important to know exactly how the sample-size requirement depends on the VC dimension, and with that in mind, we describe a general algorithm for learning problems having VC dimension 1. Its sample-size requirement is minimal (as a function of the PAC parameters), and turns out to be the same for all non-trivial learning problems having VC dimension 1. While the method used cannot be naively generalised to higher VC dimension, it suggests that optimal algorithm-dependent bounds may improve substantially on current upper bounds.
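The order-of-growth behaviour discussed above can be made concrete with one commonly cited sufficient-sample-size bound of the standard (Blumer-Ehrenfeucht-Haussler-Warmuth style) form. The constants below are illustrative only, and are exactly the kind of slack that tighter algorithm-dependent bounds could remove:

```python
import math

def pac_sample_size(d: int, epsilon: float, delta: float) -> int:
    # One commonly cited *sufficient* sample size for PAC-learning a class
    # of VC dimension d to accuracy epsilon with confidence 1 - delta.
    # The constants (4, 2, 8, 13) show the standard bound's shape; they are
    # not the tight algorithm-dependent values the abstract targets.
    term_conf = (4.0 / epsilon) * math.log2(2.0 / delta)
    term_dim = (8.0 * d / epsilon) * math.log2(13.0 / epsilon)
    return math.ceil(max(term_conf, term_dim))
```

The dimension term grows roughly as (d/ε)·log(1/ε) and the confidence term as (1/ε)·log(1/δ), matching the orders of growth the abstract refers to.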
Abstract:
While conventional Data Envelopment Analysis (DEA) models set targets for each operational unit, this paper considers the problem of input/output reduction in a centralized decision-making environment. The purpose of this paper is to develop an approach to the input/output reduction problem that typically occurs in organizations with a centralized decision-making environment. This paper shows that DEA can make an important contribution to this problem and discusses how a DEA-based model can be used to determine an optimal input/output reduction plan. An application in the banking sector, with limitations on IT investment, shows the usefulness of the proposed method.
Abstract:
Front-line employees are critical to service brand success, as their performance brings brand promises to life. Banking employees, like others, must remain committed to their employers, to live the brand, particularly during periods of economic uncertainty and customer frustration. Employees' commitment influences their brand adoption and brand-supporting behavior during service encounters. Effective leadership fosters employee commitment and brand-supporting behaviors. This study examines the nature of employee commitment in banking, distinguishing between affective, continuance and normative commitment. The study explores bank leaders, examining whether initiating-structure leader behavior or considerate leader behavior is more effective in encouraging employee commitment. Data from a sample of 438 employees in a leading Irish bank reveal the optimal leadership style for employee commitment. © 2012 Elsevier Inc.
Abstract:
Implementation of a Monte Carlo simulation for the solution of population balance equations (PBEs) requires choosing the initial sample number (N0), the number of replicates (M), and the number of bins for probability distribution reconstruction (n). It is found that the squared Hellinger distance, H^2, is a useful measure of the accuracy of a Monte Carlo (MC) simulation, and can be related directly to N0, M, and n. Asymptotic approximations of H^2 are deduced and tested for both one-dimensional (1-D) and 2-D PBEs with coalescence. The central processing unit (CPU) cost, C, is found to follow a power-law relationship, C = a·M·N0^b, with the CPU cost index, b, indicating the weighting of N0 in the total CPU cost. n must be chosen to balance accuracy and resolution. For fixed n, M × N0 determines the accuracy of the MC prediction; if b > 1, the optimal solution strategy uses multiple replicates and a small sample size. Conversely, if 0 < b < 1, one replicate and a large initial sample size are preferred. © 2015 American Institute of Chemical Engineers AIChE J, 61: 2394–2402, 2015
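The replication trade-off described above can be sketched directly from the power-law cost model, assuming (as the abstract states for fixed n) that accuracy is fixed by the product M × N0; function names and numbers are illustrative:

```python
def cpu_cost(a: float, M: int, N0: int, b: float) -> float:
    # Power-law CPU cost C = a * M * N0**b from the abstract.
    return a * M * N0 ** b

def best_split(a: float, b: float, total: int, Ms: list) -> int:
    # With accuracy fixed by M * N0 = total, return the replicate
    # count M that minimises the CPU cost.
    return min(Ms, key=lambda M: cpu_cost(a, M, total // M, b))

# b > 1: many small replicates are cheaper; 0 < b < 1: one large run wins.
M_many = best_split(a=1.0, b=2.0, total=1024, Ms=[1, 4, 16, 64])  # -> 64
M_one = best_split(a=1.0, b=0.5, total=1024, Ms=[1, 4, 16, 64])   # -> 1
```

With M·N0 held fixed, C = a·(M·N0)·N0^(b-1), so the cost is monotone in N0^(b-1): decreasing N0 helps exactly when b > 1.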
Abstract:
Supply chain operations directly affect service levels. Decisions on amending facilities are generally based on overall cost, leaving out the efficiency of each unit. By decomposing the supply chain superstructure, efficiency analysis of the facilities (warehouses or distribution centers) that serve customers can easily be implemented. With the proposed algorithm, the selection of a facility is based on service-level maximization and not just cost minimization, as the analysis filters all feasible solutions using the Data Envelopment Analysis (DEA) technique. Through multiple iterations, solutions are filtered via DEA and only the efficient ones are selected, leading to cost minimization. In this work, the problem of optimal supply chain network design is addressed with a DEA-based algorithm: a Branch and Efficiency (B&E) algorithm is deployed for its solution. In this DEA approach, each solution (a potentially installed warehouse, plant, etc.) is treated as a Decision Making Unit and is thus characterized by inputs and outputs. Through additional constraints named "efficiency cuts", the algorithm selects only efficient solutions, providing better objective function values. The applicability of the proposed algorithm is demonstrated through illustrative examples.
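The efficiency screen at the heart of this idea can be illustrated in the degenerate single-input/single-output case, where CCR efficiency reduces to a normalised output/input ratio and no linear programming is needed (the general multi-input/multi-output DEA model does require an LP solver; data and names below are hypothetical):

```python
def efficiency_scores(dmus: dict) -> dict:
    # dmus maps a candidate facility (a Decision Making Unit) to an
    # (input, output) pair. With one input and one output, CCR efficiency
    # reduces to the output/input ratio normalised by the best ratio.
    ratios = {name: out / inp for name, (inp, out) in dmus.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

def efficiency_cut(dmus: dict) -> list:
    # Keep only efficient candidates, mimicking the "efficiency cuts"
    # that prune inefficient warehouse/plant configurations.
    return [n for n, s in efficiency_scores(dmus).items() if s >= 1.0]

# Hypothetical warehouses: (operating cost, served demand).
warehouses = {"W1": (10.0, 20.0), "W2": (10.0, 10.0), "W3": (5.0, 10.0)}
survivors = efficiency_cut(warehouses)  # W1 and W3 are efficient
```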
Abstract:
In this thesis, the optimal operation of a neighborhood of smart households, in terms of minimizing the total energy cost, is analyzed. Each household may comprise several assets such as electric vehicles, controllable appliances, energy storage and distributed generation. Bi-directional power flow is considered for each household. Apart from the distributed generation unit, technological options such as vehicle-to-home and vehicle-to-grid are available to provide energy to cover self-consumption needs and to export excess energy to other households, respectively.
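A minimal sketch of the kind of cost-minimising scheduling involved, assuming a shiftable load (e.g. an EV charge) and known day-ahead prices; names and numbers are hypothetical, and the thesis's full model would add storage, generation and power-flow constraints:

```python
def cheapest_slots(prices: list, hours_needed: int) -> list:
    # Greedily place a shiftable load into the cheapest hours of a
    # day-ahead price vector; returns the chosen hour indices.
    ranked = sorted(range(len(prices)), key=lambda h: prices[h])
    return sorted(ranked[:hours_needed])

prices = [0.30, 0.12, 0.10, 0.11, 0.25, 0.40]   # EUR/kWh, hypothetical
slots = cheapest_slots(prices, hours_needed=3)  # -> [1, 2, 3]
```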
Abstract:
In recent years, autonomous aerial vehicles have gained great popularity in a variety of automation applications. To accomplish varied and challenging tasks, the capability of generating trajectories has assumed a key role. As higher performance is sought, traditional flatness-based trajectory generation schemes show their limitations: in these approaches the highly nonlinear dynamics of the quadrotor is neglected. Strategies based on optimal control principles therefore turn out to be beneficial, since in the trajectory generation process they allow the control unit to best exploit the actual dynamics, and enable the drone to perform quite aggressive maneuvers. This dissertation is concerned with the development of an optimal control technique to generate trajectories for autonomous drones. The algorithm adopted to this end is a second-order iterative method working directly in continuous time which, under proper initialization, guarantees quadratic convergence to a locally optimal trajectory. At each iteration, a quadratic approximation of the cost functional is minimized and a descent direction is obtained as a linear-affine control law after solving a differential Riccati equation. The algorithm has been implemented and its effectiveness has been tested on the vectored-thrust dynamical model of a quadrotor in a realistic simulation setup.
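The Riccati step at the core of such iterations can be sketched in its simplest form: the textbook backward Riccati recursion for a scalar, discrete-time LQR problem. This is only an analogue of the continuous-time differential Riccati equation used by the actual algorithm, with illustrative values:

```python
def lqr_gains(A: float, B: float, Q: float, R: float,
              P_T: float, horizon: int):
    # Backward Riccati recursion for the scalar discrete-time system
    # x[k+1] = A x[k] + B u[k] with stage cost Q x^2 + R u^2 and
    # terminal cost P_T x^2. Returns the time-varying feedback gains
    # (u[k] = -K[k] x[k]) and the initial cost-to-go coefficient.
    P = P_T
    gains = []
    for _ in range(horizon):
        K = (B * P * A) / (R + B * P * B)
        P = Q + A * P * A - A * P * B * K
        gains.append(K)
    gains.reverse()
    return gains, P
```

For A = B = Q = R = 1, P_T = 0 and a two-step horizon, the recursion gives gains [0.5, 0.0] and cost-to-go coefficient 1.5.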
Abstract:
In Brazil, the consumption of extra-virgin olive oil (EVOO) is increasing annually, but there are no experimental studies concerning the phenolic compound contents of commercial EVOO. The aim of this work was to optimise the separation of 17 phenolic compounds already detected in EVOO. A Doehlert matrix experimental design was used, evaluating the effects of pH and electrolyte concentration. Resolution, runtime and migration-time relative standard deviation values were evaluated. Derringer's desirability function was used to simultaneously optimise all 37 responses. The 17 peaks were separated in 19 min using a fused-silica capillary (50 μm internal diameter, 72 cm effective length) with an extended light path and 101.3 mmol L⁻¹ boric acid electrolyte (pH 9.15, 30 kV). The method was validated and applied to 15 EVOO samples found in Brazilian supermarkets.
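Derringer's desirability approach mentioned above maps each response onto a [0, 1] desirability and combines them geometrically; a minimal sketch for larger-is-better responses (limits and exponents are illustrative, not the study's settings):

```python
import math

def desirability_larger_is_better(y: float, low: float,
                                  target: float, s: float = 1.0) -> float:
    # Derringer-type one-sided desirability: 0 below `low`, 1 above
    # `target`, and a power ramp ((y - low)/(target - low))**s between.
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** s

def overall_desirability(ds: list) -> float:
    # The overall desirability D is the geometric mean of the individual
    # d_i, so any d_i = 0 rejects the whole candidate setting.
    return math.prod(ds) ** (1.0 / len(ds))
```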
Abstract:
The aim of the present study was to evaluate the influence of different photopolymerization systems (halogen, halogen soft-start and LED) on the shear bond strength (SBS) and marginal microleakage of composite resin restorations. Forty Class V cavities (enamel and dentin margins) were prepared for microleakage assessment, and 160 enamel and dentin fragments were prepared for the SBS test; both sets were divided into 4 groups. Kruskal-Wallis and Wilcoxon tests showed a statistically significant difference in microleakage between the margins (p < 0.01), with incisal margins presenting the lowest values. Among the groups, only at the cervical margin did halogen soft-start photopolymerization present statistically significantly higher microleakage values. For the SBS test, ANOVA showed no statistically significant difference (p > 0.05), either between substrates or among groups. It was concluded that the soft-start technique with a high-intensity end light negatively influenced cervical marginal sealing, but the light-curing systems did not influence adhesion.
Abstract:
We report the case of a 67-year-old male patient admitted to the intensive care unit after coronary bypass surgery who presented cardiogenic shock, acute renal failure and three episodes of sepsis, the last with pulmonary distress on the 30th post-operative day. The patient died within five days in spite of treatment with vancomycin, imipenem, colistimethate and amphotericin B. At autopsy, severe adenovirus pneumonia was found. Viral pulmonary infections following cardiovascular surgery are uncommon. We highlight the importance of etiological diagnosis for a correct treatment approach.
Abstract:
OBJECTIVE: To investigate the relationship between the adequacy of energy delivery and intensive care unit (ICU) mortality in patients receiving exclusive enteral nutrition therapy. METHODS: Prospective observational study conducted in an ICU in 2008 and 2009. Patients >18 years of age who received enteral nutrition for >72 h were included. The adequacy of energy delivery was estimated as the administered/prescribed ratio. An unconditional logistic regression model was used to investigate the relationship between the predictor variables (adequacy of energy delivery, APACHE II score, sex, age and ICU length of stay) and the outcome of ICU mortality. RESULTS: Sixty-three patients were included (mean age 58 years, mortality 27%), 47.6% of whom received more than 90% of the prescribed energy (mean adequacy 88.2%). The mean energy balance was -190 kcal/day. A significant association was observed between death and the variables age and ICU length of stay, after removal of the variables adequacy of energy delivery, APACHE II score and sex during the modelling process. CONCLUSION: The adequacy of energy delivery did not influence ICU mortality. Enteral nutrition infusion protocols followed rigorously, with an administered/prescribed adequacy above 70%, seem sufficient not to affect mortality. The requirement of reaching values close to 100% can therefore be questioned, considering the high frequency of interruptions in enteral feeding due to gastrointestinal intolerance and fasting for tests and procedures. Future research may identify the ideal target of energy-delivery adequacy that results in significant reductions in complications, mortality and costs.
Abstract:
Background: In areas with a limited structure in place for microscopy diagnosis, rapid diagnostic tests (RDT) have been demonstrated to be effective. Method: The cost-effectiveness of the OptiMal® test and thick smear microscopy was estimated and compared. Data were collected in remote areas of 12 municipalities in the Brazilian Amazon. Data sources included the National Malaria Control Programme of the Ministry of Health, the National Healthcare System reimbursement table, hospitalization records, primary data collected from the municipalities, and the scientific literature. The perspective was that of the Brazilian public health system, the analytical horizon ran from the start of fever until the diagnostic result was provided to the patient, and the temporal reference was the year 2006. The results were expressed as costs per adequately diagnosed case in 2006 U.S. dollars. Sensitivity analysis was performed on key model parameters. Results: In the base-case scenario, considering 92% and 95% sensitivity of thick smear microscopy for Plasmodium falciparum and Plasmodium vivax, respectively, and 100% specificity for both species, thick smear microscopy is more costly and more effective, with an incremental cost estimated at US$ 549.9 per adequately diagnosed case. In the sensitivity analysis, when the sensitivity and specificity of microscopy for P. vivax were 0.90 and 0.98, respectively, and when its sensitivity for P. falciparum was 0.83, the RDT was more cost-effective than microscopy. Conclusion: Microscopy is more cost-effective than OptiMal® in these remote areas if the high accuracy of microscopy is maintained in the field. Decisions regarding the use of rapid tests for the diagnosis of malaria in these areas depend on current microscopy accuracy in the field.
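The incremental cost comparison underlying such analyses reduces to a simple ratio; a sketch with illustrative numbers, not the study's data:

```python
def icer(cost_new: float, effect_new: float,
         cost_old: float, effect_old: float) -> float:
    # Incremental cost-effectiveness ratio: extra cost per additional
    # unit of effect (here, per adequately diagnosed case).
    return (cost_new - cost_old) / (effect_new - effect_old)

# Illustrative only: one extra case adequately diagnosed at US$ 550 more.
ratio = icer(cost_new=1100.0, effect_new=96.0,
             cost_old=550.0, effect_old=95.0)  # -> 550.0
```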
Abstract:
This work clarifies the relation between network circuit (topology) and behaviour (information transmission and synchronization) in active networks, e.g. neural networks. As an application, we show how one can find network topologies that are able to transmit a large amount of information, possess a large number of communication channels, and are robust under large variations of the network coupling configuration. This theoretical approach is general and does not depend on the particular dynamics of the elements forming the network, since the network topology can be determined by finding a Laplacian matrix (the matrix that describes the connections and the coupling strengths among the elements) whose eigenvalues satisfy certain special conditions. To illustrate our ideas and theoretical approaches, we use neural networks of electrically connected chaotic Hindmarsh-Rose neurons.
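A minimal sketch of the Laplacian matrix referred to above, built as L = D − W from an edge list; checking the eigenvalue conditions themselves would require a numerical eigensolver, which is omitted here:

```python
def laplacian(n: int, edges: list, weights: list = None) -> list:
    # Build the (weighted) graph Laplacian L = D - W for n nodes, where
    # W holds the coupling strengths and D the diagonal of row sums.
    W = [[0.0] * n for _ in range(n)]
    for k, (i, j) in enumerate(edges):
        w = 1.0 if weights is None else weights[k]
        W[i][j] = W[j][i] = w
    L = [[-W[i][j] for j in range(n)] for i in range(n)]
    for i in range(n):
        L[i][i] = sum(W[i])
    return L

# Three neurons coupled in a path: 0 -- 1 -- 2, unit coupling strengths.
L = laplacian(3, [(0, 1), (1, 2)])
# Every row of a Laplacian sums to zero: the constant vector is always an
# eigenvector with eigenvalue 0.
```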
Abstract:
The optimal discrimination of nonorthogonal quantum states with minimum error probability is a fundamental task in quantum measurement theory as well as an important primitive in optical communication. In this work, we propose and experimentally realize a new and simple quantum measurement strategy capable of discriminating two coherent states with smaller error probabilities than can be obtained using the standard measurement devices: the Kennedy receiver and the homodyne receiver.
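For the binary case of two coherent states |α⟩ and |−α⟩, the benchmarks involved have standard closed forms; a sketch in a common convention (real α, mean photon number |α|²):

```python
import math

def helstrom_error(alpha: float) -> float:
    # Minimum (Helstrom) error probability for discriminating |alpha>
    # and |-alpha>: P = (1 - sqrt(1 - |<-a|a>|^2)) / 2, with the squared
    # overlap |<-alpha|alpha>|^2 = exp(-4 alpha^2).
    return 0.5 * (1.0 - math.sqrt(1.0 - math.exp(-4.0 * alpha ** 2)))

def homodyne_error(alpha: float) -> float:
    # Error probability of the ideal homodyne receiver for the same
    # states: P = erfc(sqrt(2) * alpha) / 2.
    return 0.5 * math.erfc(math.sqrt(2.0) * alpha)

# For any alpha > 0 the Helstrom bound lies strictly below the homodyne
# limit; receivers such as the one in the abstract aim at that gap.
```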