816 results for penalty-based aggregation functions


Relevance:

100.00%

Publisher:

Abstract:

The optimal reactive dispatch problem is a nonlinear programming problem containing continuous and discrete control variables. Owing to the difficulty posed by the discrete variables, the problem is usually solved by treating all variables as continuous and then rounding the originally discrete variables off to the closest discrete value. This approach may yield solutions far from optimal, or even infeasible. This paper presents an efficient handling of discrete variables via a penalty function, so that the problem becomes continuous and differentiable. Simulations with the IEEE test systems were performed, showing the efficiency of the proposed approach. © 1969-2012 IEEE.
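The paper's specific penalty is not reproduced in the abstract; a common differentiable choice, shown here as a sketch under that assumption, is a sinusoidal term that vanishes exactly at the admissible discrete values (e.g. transformer tap positions on a fixed step), so a continuous solver is pushed toward the nearest discrete setting:

```python
import math

def discrete_penalty(x, step=0.01, base=0.0, weight=100.0):
    """Differentiable penalty that is zero exactly on the discrete grid
    base + k*step and positive elsewhere. A common generic choice; the
    paper's exact penalty function may differ."""
    return weight * math.sin(math.pi * (x - base) / step) ** 2

# A tap setting of 1.02 lies on the 0.01-step grid: (near-)zero penalty.
on_grid = discrete_penalty(1.02)
# A value between grid points is penalized, steering the continuous
# optimizer toward the nearest admissible discrete value.
off_grid = discrete_penalty(1.025)
```

Adding such a term to the objective keeps the relaxed problem continuous and differentiable while discouraging off-grid values of the originally discrete variables.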

Relevance:

100.00%

Publisher:

Abstract:

Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge their temporal relation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for the SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or jointly for all three tasks (for the common case in which two or even all three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for the estimated parameters. A further routine obtains performance measures from the fitted functions. An R package for Windows and the source code of the MATLAB and R routines are available as Supplementary Files.
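The independent-channels model described above can be illustrated with a small Monte Carlo sketch: each stimulus triggers a channel whose arrival latency is exponentially distributed, and the ternary (SJ3-style) response depends on whether the arrival-time difference falls within a resolution window. All parameter values below are illustrative assumptions, not the paper's fitted values:

```python
import random

def simulate_sj3(delay, rate1=1/40, rate2=1/40, resolution=50.0,
                 trials=20000, seed=1):
    """Monte Carlo sketch of the independent-channels model with
    exponential arrival latencies and a trichotomous decision space.
    Times are in arbitrary units (think milliseconds); all parameter
    values are illustrative."""
    rng = random.Random(seed)
    counts = {"first": 0, "simultaneous": 0, "second": 0}
    for _ in range(trials):
        a1 = 0.0 + rng.expovariate(rate1)    # channel 1: stimulus at t = 0
        a2 = delay + rng.expovariate(rate2)  # channel 2: stimulus at t = delay
        d = a2 - a1
        if abs(d) <= resolution:             # arrivals unresolved in time
            counts["simultaneous"] += 1
        elif d > 0:
            counts["first"] += 1             # stimulus 1 perceived first
        else:
            counts["second"] += 1
    return {k: v / trials for k, v in counts.items()}

p_sync = simulate_sj3(0.0)     # physically simultaneous presentation
p_late = simulate_sj3(200.0)   # stimulus 2 lags far behind stimulus 1
```

Sweeping `delay` and recording these response proportions traces out the model-based psychometric functions that the MATLAB/R routines fit analytically.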

Relevance:

100.00%

Publisher:

Abstract:

Optimization methods have been used to solve optimization problems in many areas of knowledge, such as Engineering, Statistics, and Chemistry. In many cases derivative-based methods cannot be used, owing to the characteristics of the problem to be solved and/or its constraints, for example when the involved functions are non-smooth and/or their derivatives are not known. To solve this type of problem, a Java-based API has been implemented that includes only derivative-free optimization methods and can be used to solve both constrained and unconstrained problems. For constrained problems, the classic Penalty and Barrier functions were included in the API. In this paper a new approach to Penalty and Barrier functions, based on Fuzzy Logic, is proposed. Two penalty functions that impose a progressive penalization on solutions violating the constraints are discussed: the implemented functions impose a low penalty when the violation of the constraints is small and a heavy penalty when it is large. Numerical results obtained on twenty-eight test problems, comparing the proposed Fuzzy Logic based functions to six of the classic Penalty and Barrier functions, are presented. The results show that the proposed penalty functions are not only very robust but also perform very well.
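The key idea, a penalty that is mild for small constraint violations and grows heavily for large ones, can be sketched as follows. The exact fuzzy membership functions of the paper are not reproduced; the `tolerance` and `mu` parameters and the two-regime shape below are assumptions chosen only to illustrate progressive penalization:

```python
def progressive_penalty(violation, tolerance=0.1, mu=10.0):
    """Sketch of a progressive penalty in the spirit of the paper's
    fuzzy-logic functions: violations within a tolerance are penalized
    mildly, larger violations increasingly heavily. The exact functions
    used in the paper may differ."""
    v = max(0.0, violation)        # feasible points incur no penalty
    if v <= tolerance:
        return mu * v ** 2         # low penalty near the feasible boundary
    # heavy, rapidly growing penalty once the violation is substantial
    # (continuous with the first branch at v == tolerance)
    return mu * tolerance ** 2 + mu * (v - tolerance) * (1.0 + v / tolerance)

def penalized_objective(f_value, violations, tolerance=0.1, mu=10.0):
    """Penalized objective: f(x) plus the summed progressive penalties
    over all constraint violations, as a derivative-free solver would see."""
    return f_value + sum(progressive_penalty(v, tolerance, mu)
                         for v in violations)
```

A derivative-free method can then minimize `penalized_objective` directly, since only function values are required.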

Relevance:

100.00%

Publisher:

Abstract:

Constrained nonlinear optimization problems can be solved using penalty or barrier functions. This strategy, based on solving unconstrained problems derived from the original one, has proved effective, particularly when combined with direct search methods. An alternative is the filter method, introduced by Fletcher and Leyffer in 2002, which has been widely used to solve problems of the type mentioned above. Filter methods use a strategy different from that of barrier or penalty functions: whereas the latter define a new function combining the objective function and the constraints, filter methods treat the optimization problem as a bi-objective problem that minimizes the objective function and a function aggregating the constraints. Motivated by the work of Audet and Dennis in 2004, which used the filter method with derivative-free algorithms, the authors developed works in which other direct search methods were combined with the filter method. More recently, a new variant of these methods was presented, in which alternative constraint-aggregation functions for the construction of filters were proposed. This paper presents a variant of the filter method, more robust than the previous ones, implemented with a safeguard procedure in which the values of the objective function and of the constraints are interlinked rather than treated completely independently.
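The bi-objective mechanism underlying filter methods can be sketched concretely: the filter is a list of non-dominated pairs (f, h), where f is the objective value and h aggregates constraint violation, and a trial point is accepted only if no filter entry dominates it. The illustrative values below are made up; this is a generic filter-acceptance sketch, not the paper's safeguarded variant:

```python
def dominates(a, b):
    """Entry a = (f_a, h_a) dominates b = (f_b, h_b) when it is no worse
    in both the objective f and the constraint-aggregation measure h."""
    return a[0] <= b[0] and a[1] <= b[1]

def filter_accepts(flt, point):
    """A trial point (f, h) is acceptable iff no filter entry dominates it."""
    return not any(dominates(entry, point) for entry in flt)

def filter_add(flt, point):
    """Insert an accepted point and discard entries it now dominates,
    keeping the filter a set of mutually non-dominated pairs."""
    return [e for e in flt if not dominates(point, e)] + [point]

# Two non-dominated (f, h) pairs already in the filter (values made up):
flt = [(5.0, 0.2), (3.0, 1.0)]
accepted = filter_accepts(flt, (4.0, 0.5))   # dominated by neither entry
rejected = filter_accepts(flt, (6.0, 1.5))   # dominated by (5.0, 0.2)
```

The paper's contribution, a safeguard interlinking f and h, modifies this plain acceptance test; the sketch shows only the baseline it builds on.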

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this paper is the design of an optoelectronic circuit based on a-SiC technology, able to act simultaneously as a 4-bit binary encoder or as a binary decoder in a 4-to-16-line configuration, and to exhibit multiplexer-based logical functions. The device consists of a p-i'(a-SiC:H)-n/p-i(a-Si:H)-n multilayered structure produced by PECVD. To analyze it under information-modulated waves (color channels) and uniform irradiation (background), four monochromatic pulsed lights (input channels), red, green, blue, and violet, shine on the device. Steady-state optical bias was superimposed separately from the front and the back sides, and the generated photocurrent was measured. Results show that, under appropriate optical bias, the devices act as reconfigurable active filters that allow optical switching and the development of optoelectronic logic functions, providing the possibility of selectively removing useless wavelengths. The logic functions needed to construct any other complex logic function are NOT together with either AND or OR (or both). Any complex logic function so obtained can in turn be used as a building block to achieve the functions needed for the retrieval of channels within the WDM communication link. (C) 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
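The functional-completeness claim, that NOT plus AND or OR suffices to build any complex logic function, is plain Boolean algebra (not the device's optical implementation) and can be checked with a one-line composition, here XOR built only from those primitives:

```python
def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b

def XOR(a, b):
    """XOR composed purely from NOT, AND and OR, illustrating that these
    primitives are building blocks for any complex logic function."""
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

# Exhaustive check over all input combinations.
truth_table = [(a, b, XOR(a, b)) for a in (0, 1) for b in (0, 1)]
```

In the device, the same compositions would be realized optically by cascading the reconfigurable filter responses rather than by software.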

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a methodology based on geostatistical theory for quantifying the risks associated with heavy-metal contamination in the harbor area of Santana, Amapá State, Northern Brazil, an area that hosted activities related to the commercialization of manganese ore from Serra do Navio. Manganese and arsenic concentrations at unsampled sites were estimated by postprocessing results from stochastic annealing simulations; the simulations were used to test different optimization criteria, including the average, the median, and quantiles. For classifying areas as contaminated or uncontaminated, quantile estimates based on asymmetric loss functions gave better results than estimates based on symmetric loss, such as the average or the median. The use of specific loss functions in the decision-making process can reduce the costs of remediation and health maintenance. The highest global health costs were observed for manganese. (c) 2008 Elsevier B.V. All rights reserved
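Why an asymmetric loss yields a quantile rather than a mean or median can be shown with a small sketch using the generic asymmetric linear ("pinball") loss; this is an illustration of the principle, not the study's site-specific cost functions, and the simulated concentrations are made up:

```python
def asymmetric_loss(estimate, values, alpha):
    """Average asymmetric linear (pinball) loss: under- and over-estimates
    are weighted differently, so the minimizer is the alpha-quantile of the
    values instead of the mean or median."""
    total = 0.0
    for v in values:
        d = v - estimate
        total += alpha * d if d >= 0 else (alpha - 1.0) * d
    return total / len(values)

# Simulated concentrations at one location (arbitrary units). As alpha
# grows, under-estimation (declaring a contaminated site clean) becomes
# more expensive, and the loss-minimizing estimate shifts upward.
values = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
grid = [v / 10.0 for v in range(10, 101)]
best_median = min(grid, key=lambda e: asymmetric_loss(e, values, 0.5))
best_q90 = min(grid, key=lambda e: asymmetric_loss(e, values, 0.9))
```

This is the mechanism by which the paper's asymmetric-loss estimates lean conservative for classification, reducing the cost of missing contaminated areas.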

Relevance:

100.00%

Publisher:

Abstract:

Distributed data aggregation is an important task, allowing the decentralized determination of meaningful global properties, which can then be used to direct the execution of other applications. These values result from the distributed computation of functions such as count, sum, and average. Application examples include determining the network size, total storage capacity, average load, majorities, and many others. In the last decade, many different approaches have been proposed, with different trade-offs in terms of accuracy, reliability, and message and time complexity. Given the considerable amount and variety of aggregation algorithms, it can be difficult and time-consuming to determine which techniques are most appropriate for a specific setting, justifying a survey to aid in this task. This work reviews the state of the art on distributed data aggregation algorithms, providing three main contributions. First, it formally defines the concept of aggregation, characterizing the different types of aggregation functions. Second, it succinctly describes the main aggregation techniques, organizing them in a taxonomy. Finally, it provides guidelines toward the selection and use of the most relevant techniques, summarizing their principal characteristics.
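One classic idea the survey's examples rest on, deriving COUNT (network size) from a distributed AVERAGE, can be sketched with a toy synchronous averaging loop. This is a simplified stand-in for the decentralized algorithms the survey covers (no real message passing, synchronous rounds assumed):

```python
def gossip_average(values, edges, rounds=200):
    """Toy averaging sketch: each round, every edge's two endpoints
    replace their values with the pair's mean. On a connected graph all
    values converge to the global average; the sum is conserved."""
    vals = list(values)
    for _ in range(rounds):
        for i, j in edges:
            m = (vals[i] + vals[j]) / 2.0
            vals[i] = vals[j] = m
    return vals

# COUNT from AVERAGE: one node starts at 1, the rest at 0. The average
# converges to 1/n, so each node estimates the network size as 1/avg.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]   # a 5-node line graph
vals = gossip_average([1.0, 0.0, 0.0, 0.0, 0.0], edges)
size_estimate = 1.0 / vals[0]
```

Sum and majority follow similarly once an average primitive is available, which is why averaging is the workhorse in much of the surveyed literature.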

Relevance:

100.00%

Publisher:

Abstract:

In this work, image-based estimation methods, also known as direct methods, are studied; they avoid feature extraction and matching completely. The cost functions use raw pixels as measurements, and the goal is to produce precise 3D pose and structure estimates. The cost functions presented minimize the sensor error, because the measurements are not transformed or modified. In photometric camera pose estimation, 3D rotation and translation parameters are estimated by minimizing a sequence of image-based cost functions, which are non-linear due to perspective projection and lens distortion. In image-based structure refinement, on the other hand, the 3D structure is refined using a number of additional views and an image-based cost metric. Image-based estimation methods are particularly useful when the Lambertian assumption holds, that is, when the 3D points have constant color regardless of viewing angle. The goal is to improve image-based estimation methods and to produce computationally efficient methods that can be accommodated in real-time applications. The developed image-based 3D pose and structure estimation methods are finally demonstrated in practice in an indoor 3D reconstruction setting and in a live augmented-reality application.
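The core of a direct method, a cost over raw pixels rather than matched features, can be reduced to a toy: estimate a 1-D integer shift between two intensity signals by minimizing the sum of squared pixel differences. This is far simpler than the thesis's 6-DoF pose estimation with projection and lens distortion, but it shows the same "sensor error" principle; the signals below are made up:

```python
def photometric_cost(ref, cur, shift):
    """Sum of squared raw-pixel differences between a reference signal and
    a shifted current signal: the photometric (sensor) error that direct
    methods minimize instead of matching extracted features."""
    n = len(ref)
    cost = 0.0
    for x in range(n):
        xs = x + shift
        if 0 <= xs < n:                 # ignore pixels shifted out of view
            cost += (ref[x] - cur[xs]) ** 2
    return cost

ref = [0, 0, 1, 4, 9, 4, 1, 0, 0, 0]
cur = [0, 0, 0, 0, 1, 4, 9, 4, 1, 0]   # same intensity profile moved by +2
best_shift = min(range(-3, 4), key=lambda s: photometric_cost(ref, cur, s))
```

In the full problem, the search over a scalar shift becomes non-linear least squares over rotation and translation, typically solved with Gauss-Newton-style iterations on the same photometric residual.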

Relevance:

100.00%

Publisher:

Abstract:

Gossip (or epidemic) protocols have emerged as a communication and computation paradigm for large-scale networked systems. These protocols are based on randomised communication, which provides probabilistic guarantees on convergence speed and accuracy. They also provide robustness, scalability, computational and communication efficiency, and high stability under disruption. This work presents a novel gossip protocol, named the Symmetric Push-Sum Protocol, for the computation of global aggregates (e.g., the average) in decentralised and asynchronous systems. The proposed approach combines the simplicity of the push-based approach with the efficiency of push-pull schemes. Push-pull schemes cannot be directly employed in asynchronous systems, as they require synchronous paired communication operations to guarantee their accuracy. Push schemes guarantee accuracy even with asynchronous communication, but suffer from slower and less stable convergence. The Symmetric Push-Sum Protocol does not require synchronous communication, yet achieves a convergence speed similar to that of push-pull schemes while keeping the accuracy stability of the push scheme. In the experimental analysis, we focus on computing the global average as an important class of node aggregation problems. The results confirm that the proposed method inherits the advantages of both other schemes and outperforms well-known state-of-the-art protocols for decentralized gossip-based aggregation.
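The baseline the protocol builds on, classic push-sum averaging, can be sketched as follows: each node keeps a (sum, weight) pair, repeatedly pushes half of it to a random peer, and estimates the average as sum/weight. The symmetric and asynchronous refinements that are the paper's contribution are omitted here; this is a synchronous-round simulation sketch only:

```python
import random

def push_sum(values, rounds=100, seed=7):
    """Simulation sketch of classic push-sum averaging: every node holds
    (s, w), initially (value, 1); each round it keeps half of its pair and
    pushes the other half to a uniformly random peer. Each node's estimate
    s/w converges to the global average; total mass is conserved."""
    rng = random.Random(seed)
    n = len(values)
    s = list(values)
    w = [1.0] * n
    for _ in range(rounds):
        new_s = [0.0] * n
        new_w = [0.0] * n
        for i in range(n):
            target = rng.randrange(n)                 # random peer (may be self)
            new_s[i] += s[i] / 2.0
            new_w[i] += w[i] / 2.0
            new_s[target] += s[i] / 2.0
            new_w[target] += w[i] / 2.0
        s, w = new_s, new_w
    return [s[i] / w[i] for i in range(n)]

estimates = push_sum([10.0, 20.0, 30.0, 40.0])        # global average is 25
```

Pure push needs no paired (synchronous) exchange, which is why it tolerates asynchrony; the paper's symmetric variant then recovers push-pull-like convergence speed on top of this scheme.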

Relevance:

100.00%

Publisher:

Abstract:

This thesis seeks a deeper understanding of the importance of operational functions in French startups. Startups are characterized by great flexibility in the tasks to be covered and by horizontal management. It is therefore uncommon for recently created companies such as startups to have a clear human-resources policy. Indeed, each participant in the startup may be led to think differently about sales, business development, commercialization, marketing, technology, or product development. This thesis does not explore each of these tasks; rather, it seeks to identify the perception of the optimal allocation of resources to each key function of the new company. Whatever the market sector under consideration or the startup's stage of maturity, the key functions perceived as the basis of successful startups are research & development and commercialization. Leadership functions are not as important: only technology-focused startups assign the "chief executive function" greater importance than the average startup does. Moreover, serial entrepreneurs, successful or not, focus predominantly on aspects related to marketing and fundraising to the detriment of aspects related to business management. Finally, entrepreneurs often hold a bias tied to their academic background, because they overestimate the functions they think they can perform compared with the functions they are actually able to perform. This thesis also seeks to demonstrate the relation between the functions held by a partner and the shares that partner owns in the startup. This relation depends on the number of partners (shareholders), the type of partner (principal or secondary shareholder), and the impact of the equity distribution on corporate governance.

Relevance:

100.00%

Publisher:

Abstract:

The work presented in this thesis is based on computing dynamical models for dwarf spheroidal galaxies, studying the problem through distribution functions. We use a class of distribution functions, action-based distribution functions, which depend only on the action variables. Fornax was described with an appropriate distribution function, and the construction of dynamical models was addressed assuming both a dark-matter halo with a constant-density core in the inner regions and a cuspy halo. For simplicity, spherical symmetry was assumed and the gravitational potential of the stellar component was not computed explicitly (the stars are tracers in a fixed gravitational potential). Through direct comparison with observables, such as the projected stellar density profile and the line-of-sight velocity-dispersion profile, several models representative of the dynamics of Fornax were found. Models computed with action-based distribution functions allow anisotropy profiles to be determined self-consistently; all the computed models are characterized by strongly tangentially anisotropic profiles. The dark-matter estimates of these models were then compared with the mass estimators most commonly used in the literature. The ratio between the total mass of the system (stellar component and dark matter) and the stellar component of Fornax was also estimated, within 1600 pc and within 3 kpc. As a preliminary exploration, this work also presents some examples of spherical two-component models in which the gravitational field is determined by the self-gravity of the stars and by an external potential representing the dark-matter halo.

Relevance:

100.00%

Publisher:

Abstract:

Despite their sensitivity to climate variability, few of the abundant sinkhole lakes of Florida have been the subject of paleolimnological studies to discern patterns of change in aquatic communities and link them to climate drivers. However, deep sinkhole lakes can contain highly resolved paleolimnological records that can be used to track long-term climate variability and its interaction with the effects of land-use change. In order to understand how limnological changes were regulated by regional climate variability and further modified by local land-use change in south Florida, we explored diatom assemblage variability over centennial and semi-decadal time scales in an ~11,000-yr and a ~150-yr sediment core extracted from a 21-m deep sinkhole lake, Lake Annie, on the protected property of Archbold Biological Station. We linked variance in diatom assemblage structure to changes in water total phosphorus, color, and pH using diatom-based transfer functions. Reconstructions suggest the sinkhole depression contained a small, acidic, oligotrophic pond ~11,000–7000 cal yr BP that gradually deepened to form a humic lake by ~4000 cal yr BP, coinciding with the onset of modern precipitation regimes and the stabilization of sea level indicated by corresponding palynological records. The lake then contained stable, acidophilous planktonic and benthic algal communities for several thousand years. In the early AD 1900s, that community shifted to one diagnostic of an even lower pH (~5.6), likely resulting from acid precipitation. Further transitions over the past 25 yr reflect recovery from acidification and intensified sensitivity to climate variability caused by enhanced watershed runoff from small drainage ditches dug during the mid-twentieth century on the surrounding property.