963 results for random search algorithms


Abstract:

The basic motivation of this work was the integration of biophysical models within the interval constraints framework for decision support. Comparing the major features of biophysical models with the expressive power of the existing interval constraints framework, it was clear that the most important inadequacy was related to the representation of differential equations. System dynamics is often modelled through differential equations, but there was no way of expressing a differential equation as a constraint and integrating it within the constraints framework. Consequently, the goal of this work is focused on the integration of ordinary differential equations within the interval constraints framework, which for this purpose is extended with the new formalism of Constraint Satisfaction Differential Problems. Such a framework allows the specification of ordinary differential equations, together with related information, by means of constraints, and provides efficient propagation techniques for pruning the domains of their variables. This enables the integration of all such information in a single constraint whose variables may subsequently be used in other constraints of the model. The specific method used for pruning its variable domains can then be combined with the pruning methods associated with the other constraints in an overall propagation algorithm for reducing the bounds of all model variables. The application of the constraint propagation algorithm for pruning the variable domains, that is, the enforcement of local consistency, turned out to be insufficient to support decision in practical problems that include differential equations. The domain pruning achieved is not, in general, sufficient to allow safe decisions, mainly because of the non-linearity of the differential equations. Consequently, a complementary goal of this work proposes a new strong consistency criterion, Global Hull-consistency, particularly suited to decision support with differential models, as it offers an adequate trade-off between domain pruning and computational effort. Several alternative algorithms are proposed for enforcing Global Hull-consistency and, due to their complexity, an effort was made to provide implementations able to supply anytime pruning results. Since the consistency criterion depends on the existence of canonical solutions, a local search approach is proposed that can be integrated with constraint propagation in continuous domains and, in particular, with the enforcing algorithms, to anticipate the finding of canonical solutions. The last goal of this work is the validation of the approach as an important contribution to the integration of biophysical models within decision support. Consequently, a prototype application that integrates all the proposed extensions to the interval constraints framework is developed and used for solving problems in different biophysical domains.
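
A minimal sketch of the kind of interval domain pruning on which such a framework builds, shown for a single constraint x + y = z; the interval representation and fixed-point loop are illustrative assumptions, not the thesis's CSDP formalism.

```python
# Minimal sketch of interval narrowing for a single constraint x + y = z,
# illustrating the style of domain pruning (hull consistency) used by interval
# constraint propagation. It is not the Constraint Satisfaction Differential
# Problems framework itself.

def intersect(a, b):
    """Intersection of two closed intervals; raises if the result is empty."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo > hi:
        raise ValueError("inconsistent constraint: empty domain")
    return (lo, hi)

def narrow_sum(x, y, z):
    """One application of the narrowing operator for x + y = z."""
    z = intersect(z, (x[0] + y[0], x[1] + y[1]))  # z must lie in x + y
    x = intersect(x, (z[0] - y[1], z[1] - y[0]))  # x must lie in z - y
    y = intersect(y, (z[0] - x[1], z[1] - x[0]))  # y must lie in z - x
    return x, y, z

def propagate(x, y, z):
    """Apply the narrowing operator until a fixed point is reached."""
    while True:
        nx, ny, nz = narrow_sum(x, y, z)
        if (nx, ny, nz) == (x, y, z):
            return x, y, z
        x, y, z = nx, ny, nz

print(propagate((0.0, 10.0), (2.0, 5.0), (0.0, 4.0)))
# ((0.0, 2.0), (2.0, 4.0), (2.0, 4.0))
```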

Abstract:

Consider the problem of assigning real-time tasks on a heterogeneous multiprocessor platform comprising two different types of processors — such a platform is referred to as a two-type platform. We present two linearithmic time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a given task set, if there exists a feasible task-to-processor-type assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type, then (i) using SA, it is guaranteed to find such a feasible task-to-processor-type assignment, where the same restriction on task migration applies, but given a platform in which processors are 1+α/2 times faster, and (ii) SA-P succeeds in finding a feasible task-to-processor assignment, where tasks are not allowed to migrate between processors, but given a platform in which processors are 1+α times faster, where 0 < α ≤ 1. The parameter α is a property of the task set — it is the maximum utilization of any task, which is less than or equal to 1.
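
A small, hedged illustration of the parameter α defined above; the task representation and the use of a single reference processor speed are assumptions made for illustration only.

```python
# Sketch: the task-set parameter alpha described above, i.e. the maximum task
# utilization, which by assumption does not exceed 1. The Task representation
# and the single reference processor speed are illustrative only.

from collections import namedtuple

Task = namedtuple("Task", ["wcet", "period"])  # worst-case execution time, period

def alpha(tasks):
    """Maximum utilization over all tasks; the abstract states this is <= 1."""
    return max(t.wcet / t.period for t in tasks)

tasks = [Task(2, 10), Task(3, 5), Task(8, 8)]
a = alpha(tasks)
print(a, 1 + a / 2, 1 + a)  # alpha, SA speedup factor, SA-P speedup factor
```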

Abstract:

One of the most well-known bio-inspired algorithms used in optimization problems is particle swarm optimization (PSO), which basically consists of a machine-learning technique loosely inspired by birds flocking in search of food. More specifically, it consists of a number of particles that collectively move in the search space in search of the global optimum. The Darwinian particle swarm optimization (DPSO) is an evolutionary algorithm that extends the PSO using natural selection, or survival of the fittest, to enhance the ability to escape from local optima. This paper first presents a survey of PSO algorithms, mainly focusing on the DPSO. Afterwards, a method for controlling the convergence rate of the DPSO using fractional calculus (FC) concepts is proposed. The fractional-order optimization algorithm, denoted as FO-DPSO, is tested using several well-known functions, and the relationship between the fractional-order velocity and the convergence of the algorithm is observed. Moreover, experimental results show that the FO-DPSO significantly outperforms the previously presented FO-PSO.
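
A minimal sketch of a fractional-order velocity update of the kind the abstract describes: past velocities are retained with weights derived from the fractional order. The four-term truncation, coefficient values and parameter names are illustrative assumptions, not the paper's exact formulation.

```python
import random

# Sketch of a fractional-order PSO velocity update: the new velocity keeps a
# memory of past velocities weighted by coefficients derived from the
# fractional order 'a'. The 4-term truncation and the constants below are
# illustrative assumptions, not the paper's exact formulation.

def fo_velocity(v_hist, x, pbest, gbest, a=0.6, c1=1.5, c2=1.5):
    """v_hist: recent velocities, v_hist[0] is v(t), v_hist[3] is v(t-3)."""
    memory = (a * v_hist[0]
              + 0.5 * a * (1 - a) * v_hist[1]
              + (1 / 6) * a * (1 - a) * (2 - a) * v_hist[2]
              + (1 / 24) * a * (1 - a) * (2 - a) * (3 - a) * v_hist[3])
    cognitive = c1 * random.random() * (pbest - x)
    social = c2 * random.random() * (gbest - x)
    return memory + cognitive + social

# Usage on a single 1-D particle
v_hist = [0.2, 0.1, 0.05, 0.0]
print(fo_velocity(v_hist, x=1.0, pbest=0.5, gbest=0.0))
```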

Abstract:

It is widely assumed that scheduling real-time tasks becomes more difficult as their deadlines get shorter. With shorter deadlines, however, tasks potentially compete less with each other for processors, and this can produce more contention-free slots, at which the number of competing tasks is smaller than or equal to the number of available processors. This paper presents a policy (called the CF policy) that utilizes such contention-free slots effectively. This policy can be employed by any work-conserving, preemptive scheduling algorithm, and we show that any algorithm extended with this policy dominates the original algorithm in terms of schedulability. We also present improved schedulability tests for algorithms that employ this policy, based on the observation that interference from tasks is reduced when their executions are postponed to contention-free slots. Finally, using the properties of the CF policy, we derive a counter-intuitive claim that shortening task deadlines can help improve the schedulability of task systems. We present heuristics that effectively reduce task deadlines for better schedulability without performing any exhaustive search.
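
A minimal sketch of what a contention-free slot is, as used above: a slot in which the number of competing jobs does not exceed the number of processors. The discrete timeline and job representation are illustrative assumptions, not the paper's model.

```python
# Sketch: identifying contention-free slots in a discrete timeline, i.e. slots
# where the number of jobs competing for execution is at most the number of
# processors. The (release, deadline) job representation is illustrative only.

def contention_free_slots(jobs, num_processors, horizon):
    """jobs: list of (release, deadline) pairs; slots are unit-length [t, t+1)."""
    free = []
    for t in range(horizon):
        # A job competes in slot t if the slot lies inside its scheduling window.
        competing = sum(1 for (r, d) in jobs if r <= t < d)
        if competing <= num_processors:
            free.append(t)
    return free

jobs = [(0, 4), (0, 3), (2, 6)]
print(contention_free_slots(jobs, num_processors=2, horizon=6))  # [0, 1, 3, 4, 5]
```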

Abstract:

In this paper we discuss the challenges and design principles of an implementation of slot-based task-splitting algorithms in the Linux 2.6.34 kernel. We show that this kernel version provides the features required for implementing such scheduling algorithms. We show that the real behavior of the scheduling algorithm is very close to the theoretical one. We run and discuss experiments on 4-core and 24-core machines.

Abstract:

There is no single definition of a long-memory process. Such a process is generally defined as a series whose correlogram decays slowly or whose spectrum is infinite at frequency zero. It is also said that a series with this property is characterized by long-range dependence and non-periodic long cycles, or that this feature describes the correlation structure of a series at long lags, or that it is conventionally expressed in terms of a power-law decay of the autocovariance function. The growing interest of international research in this topic is justified by the search for a better understanding of the dynamic nature of the time series of financial asset prices. First, the lack of consistency among results calls for new studies and the use of several complementary methodologies. Second, the confirmation of long-memory processes has relevant implications for (1) theoretical and econometric modelling (i.e., martingale price models and technical trading rules), (2) statistical tests of equilibrium and valuation models, (3) optimal consumption/saving and portfolio decisions, and (4) the measurement of efficiency and rationality. Third, empirical scientific questions remain about the identification of the general theoretical market model most suitable for modelling the diffusion of the series. Fourth, regulators and risk managers need to know whether there are persistent, and therefore inefficient, markets that may thus produce abnormal returns. The goal of the dissertation's research is twofold. On the one hand, it intends to provide additional knowledge for the long-memory debate by examining the behaviour of the daily return series of the main EURONEXT stock indices. On the other hand, it intends to contribute to the improvement of the capital asset pricing model (CAPM), by considering an alternative risk measure capable of overcoming the constraints of the efficient market hypothesis (EMH) in the presence of financial series whose processes do not have independent and identically distributed (i.i.d.) increments. The empirical study indicates the possibility of using treasury bonds (OTs) with long maturities as an alternative in the computation of market returns, given that their behaviour in sovereign debt markets reflects investors' confidence in the financial conditions of the states and measures how investors assess the respective economies based on the performance of their assets in general. Although the price diffusion model defined by geometric Brownian motion (gBm) is claimed to provide a good fit to financial time series, its assumptions of normality, stationarity and independence of the residual innovations are contradicted by the empirical data analysed. Therefore, in the search for evidence of the long-memory property in the markets, rescaled-range analysis (R/S) and detrended fluctuation analysis (DFA) are used, under the fractional Brownian motion (fBm) approach, to estimate the Hurst exponent H for the full data series and to compute the "local" Hurst exponent H_t over moving windows. In addition, statistical hypothesis tests are carried out using the rescaled-range test (R/S), the modified rescaled-range test (M-R/S) and the fractional differencing test (GPH).
In terms of a single conclusion from all the methods about the nature of dependence in the stock market in general, the empirical results are inconclusive. This means that the degree of long memory and, thus, any classification, depends on each particular market. Nevertheless, the mostly positive overall results support the presence of long memory, in the form of persistence, in the stock returns of Belgium, the Netherlands and Portugal. This suggests that these markets are more subject to greater predictability (the "Joseph effect"), but also to trends that can be unexpectedly interrupted by discontinuities (the "Noah effect"), and therefore tend to be riskier to trade. Although the evidence of fractal dynamics has weak statistical support, in line with most international studies, it refutes the random walk hypothesis with i.i.d. increments, which underlies the weak form of the EMH. In view of this, contributions are proposed for improving the CAPM, through a new fractal capital market line (FCML) and a new fractal security market line (FSML). The proposal suggests that the risk element (for the market and for an asset) be given by the Hurst exponent H for long lags of stock returns. The exponent H measures the degree of long memory in stock indices, both when the return series follow an uncorrelated i.i.d. process, described by gBm (where H = 0.5, confirming the EMH and making the CAPM adequate), and when they follow a process with statistical dependence, described by fBm (where H differs from 0.5, rejecting the EMH and making the CAPM inadequate). The advantage of the FCML and the FSML is that the long-memory measure, defined by H, is the adequate reference to express risk in models that can be applied to data series following i.i.d. processes as well as processes with non-linear dependence. These formulations therefore include the EMH as a possible particular case.
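
A minimal sketch of rescaled-range (R/S) estimation of the Hurst exponent H, the type of analysis used in the study; the window sizes, averaging and fitting scheme are illustrative choices, not the dissertation's exact procedure.

```python
import math
import random

# Sketch: estimating the Hurst exponent H by rescaled-range (R/S) analysis,
# the kind of analysis the abstract refers to. Window sizes and the fitting
# scheme are illustrative choices only.

def rescaled_range(series):
    """R/S statistic of one window: range of cumulative deviations / std dev."""
    n = len(series)
    mean = sum(series) / n
    deviations = [x - mean for x in series]
    cumulative, total = [], 0.0
    for d in deviations:
        total += d
        cumulative.append(total)
    r = max(cumulative) - min(cumulative)
    s = math.sqrt(sum(d * d for d in deviations) / n)
    return r / s if s > 0 else 0.0

def hurst(series, window_sizes=(8, 16, 32, 64, 128)):
    """Slope of log(mean R/S) against log(n) over several window sizes."""
    xs, ys = [], []
    for n in window_sizes:
        rs_values = [rescaled_range(series[i:i + n])
                     for i in range(0, len(series) - n + 1, n)]
        xs.append(math.log(n))
        ys.append(math.log(sum(rs_values) / len(rs_values)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)   # ordinary least-squares slope
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

random.seed(0)
returns = [random.gauss(0, 1) for _ in range(1024)]
print(hurst(returns))  # roughly 0.5 (up to small-sample bias) for an i.i.d. series
```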

Abstract:

Multiprocessors, particularly in the form of multicores, are becoming standard building blocks for executing reliable software. But their use for applications with hard real-time requirements is non-trivial. Well-known real-time scheduling algorithms in the uniprocessor context (Rate-Monotonic [1] or Earliest-Deadline-First [1]) do not perform well on multiprocessors. For this reason the scientific community in the area of real-time systems has produced new algorithms specifically for multiprocessors. In the meanwhile, a proposal [2] exists for extending the Ada language with new basic constructs which can be used for implementing new algorithms for real-time scheduling; the family of task-splitting algorithms is one of those emphasized in the proposal [2]. Consequently, assessing whether existing task-splitting multiprocessor scheduling algorithms can be implemented with these constructs is paramount. In this paper we present a list of state-of-the-art task-splitting multiprocessor scheduling algorithms and, for each of them, we present detailed Ada code that uses the new constructs.

Abstract:

The advent of Wireless Sensor Network (WSN) technologies is paving the way for a panoply of new ubiquitous computing applications, some of them with critical requirements. In the ART-WiSe framework, we are designing a two-tiered communication architecture for supporting real-time and reliable communications in WSNs. Within this context, we have been developing a test-bed application for testing, validating and demonstrating our theoretical findings: a search&rescue/pursuit-evasion application. Basically, a WSN deployment is used to detect, localize and track a target robot, and a station controls a rescuer/pursuer robot until it gets close enough to the target robot. This paper describes how this application was engineered, particularly focusing on the implementation of the localization mechanism.

Abstract:

Due to the growing complexity and dynamism of many embedded application domains (including consumer electronics, robotics, automotive and telecommunications), it is increasingly difficult to react to load variations and adapt the system's performance in a controlled fashion within a useful and bounded time. This is particularly noticeable when trying to benefit from the full potential of an open distributed cooperating environment, where service characteristics are not known beforehand and tasks may exhibit unrestricted QoS inter-dependencies. This paper proposes a novel anytime adaptive QoS control policy in which the online search for the best set of QoS levels is combined with each user's personal preferences on their services' adaptation behaviour. Extensive simulations demonstrate that the proposed anytime algorithms are able to quickly find a good initial solution and effectively optimise the rate at which the quality of the current solution improves as the algorithms are given more time to run, with a minimum overhead when compared against their traditional versions.
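
A minimal sketch of an anytime search over discrete QoS levels, of the kind the abstract describes: the best configuration found so far is kept and improved while a time budget remains. The greedy strategy, service model and utility function are illustrative assumptions, not the paper's algorithm.

```python
import time

# Sketch: an anytime search over discrete QoS levels that keeps the best
# configuration found so far and improves it while a time budget remains.
# The greedy hill-climbing strategy and the toy utility are illustrative.

def anytime_qos_search(num_services, max_level, utility, budget_s=0.01):
    """Return the best per-service QoS levels found within the time budget."""
    config = [0] * num_services           # quick feasible starting point
    best, best_value = list(config), utility(config)
    deadline = time.monotonic() + budget_s
    improved = True
    while improved and time.monotonic() < deadline:
        improved = False
        for s in range(num_services):     # try raising one service level at a time
            if config[s] < max_level:
                candidate = list(config)
                candidate[s] += 1
                value = utility(candidate)
                if value > best_value:
                    config, best, best_value = candidate, list(candidate), value
                    improved = True
    return best, best_value

# Toy utility: diminishing returns per level, penalising total resource use.
def utility(levels):
    return sum(l ** 0.5 for l in levels) - 0.05 * sum(levels) ** 2

print(anytime_qos_search(num_services=4, max_level=5, utility=utility))
```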

Abstract:

OBJECTIVE To analyze the prevalence of depression in older adults and associated factors. METHODS Cross-sectional study using a stratified random sample of 621 individuals aged ≥ 60 from 27 family health teams in Porto Alegre, RS, Southern Brazil, between 2010 and 2012. Community health agents measured depression using the 15-item Geriatric Depression Scale. Scores ≥ 6 were considered depression, and scores between 11 and 15 severe depression. Poisson regression was used to search for independent associations of sociodemographic and self-perceived health variables with both depression and its severity. RESULTS The prevalence of depression was 30.6% and was significantly higher in women (35.9% in women versus 20.9% in men, p < 0.001). The variables independently associated with depression were: female gender (PR = 1.4, 95%CI 1.1;1.8); low education, especially illiteracy (PR = 1.8, 95%CI 1.2;2.6); regular self-rated health (OR = 2.2, 95%CI 1.6;3.0); and poor/very poor self-rated health (PR = 4.0, 95%CI 2.9;5.5). Except for education, the strength of association of these factors increases significantly in severe depression. CONCLUSIONS A high prevalence of depression was observed in the evaluations conducted by community health agents, professionals who are not highly specialized. The findings obtained using the 15-item Geriatric Depression Scale in this way are similar to those in the literature, with depression more strongly associated with low education, female gender and worse self-rated health. From a primary health care strategic point of view, the findings become even more relevant, indicating that community health agents could play an important role in identifying depression in older adults.

Abstract:

A MATLAB/SIMULINK-based simulator was employed for studies concerning the control of baker's yeast fed-batch fermentation. Four control algorithms were implemented and compared: the classical PID control, two discrete versions (the modified velocity and position algorithms), and a fuzzy law. The simulation package proved to be an efficient tool for simulating and testing control strategies for the nonlinear process.
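
A minimal sketch of the velocity (incremental) form of a discrete PID algorithm, one of the discrete versions mentioned above; the gains, sampling period and toy plant are illustrative values, not those of the fermentation study.

```python
# Sketch: velocity (incremental) form of a discrete PID controller, one of the
# discrete PID versions mentioned above. Gains, sampling period and set-point
# are illustrative values only.

class VelocityPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.e1 = 0.0  # error at step k-1
        self.e2 = 0.0  # error at step k-2
        self.u = 0.0   # last control action

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        # Incremental form: only the change of the control action is computed.
        du = (self.kp * (e - self.e1)
              + self.ki * self.dt * e
              + self.kd * (e - 2 * self.e1 + self.e2) / self.dt)
        self.u += du
        self.e2, self.e1 = self.e1, e
        return self.u

# Toy usage: drive a first-order process towards a set-point of 1.0.
pid = VelocityPID(kp=2.0, ki=1.0, kd=0.1, dt=0.1)
y = 0.0
for _ in range(50):
    u = pid.update(1.0, y)
    y += 0.1 * (u - y)   # crude first-order plant model
print(round(y, 3))       # approaches the set-point of 1.0
```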

Abstract:

Optimization problems arise in science, engineering, economics, etc., and we need to find the best solutions for each reality. The methods used to solve these problems depend on several factors, including the amount and type of accessible information, the available algorithms for solving them and, obviously, the intrinsic characteristics of the problem. There are many kinds of optimization problems and, consequently, many kinds of methods to solve them. When the involved functions are nonlinear and their derivatives are not known or are very difficult to calculate, such methods are scarcer. These kinds of functions are frequently called black-box functions. To solve such problems without constraints (unconstrained optimization), we can use direct search methods. These methods do not require any derivatives or approximations of them. But when the problem has constraints (nonlinear programming problems) and, additionally, the constraint functions are black-box functions, it is much more difficult to find the most appropriate method. Penalty methods can then be used. They transform the original problem into a sequence of other problems, derived from the initial one, all without constraints. This sequence of problems (without constraints) can then be solved using the methods available for unconstrained optimization. In this chapter, we present a classification of some of the existing penalty methods and describe some of their assumptions and limitations. These methods allow solving optimization problems with continuous, discrete and mixed constraints, without requiring continuity, differentiability or convexity. Thus, penalty methods can be used as the first step in the resolution of constrained problems, by means of methods typically used for unconstrained problems. We also discuss a new class of penalty methods for nonlinear optimization, which adjust the penalty parameter dynamically.
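
A minimal sketch of a penalty approach of the kind described above, pairing a quadratic penalty with a simple derivative-free compass (direct) search; the test problem, penalty schedule and search parameters are illustrative assumptions.

```python
# Sketch: a quadratic penalty method wrapped around a simple derivative-free
# compass (direct) search, in the spirit of the approach described above.

def compass_search(f, x, step=0.5, tol=1e-6):
    """Minimize f without derivatives by probing along coordinate directions."""
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step /= 2.0
    return x

def penalty_method(objective, constraints, x0, mu=1.0, iterations=8):
    """Solve a sequence of unconstrained problems with a growing penalty mu."""
    x = list(x0)
    for _ in range(iterations):
        def penalized(z, mu=mu):
            violation = sum(max(0.0, g(z)) ** 2 for g in constraints)
            return objective(z) + mu * violation
        x = compass_search(penalized, x)
        mu *= 10.0   # dynamic increase of the penalty parameter
    return x

# Toy problem: minimize (x-2)^2 + (y-2)^2 subject to x + y <= 3.
objective = lambda z: (z[0] - 2) ** 2 + (z[1] - 2) ** 2
constraints = [lambda z: z[0] + z[1] - 3.0]   # written as g(z) <= 0
print(penalty_method(objective, constraints, x0=[0.0, 0.0]))  # near [1.5, 1.5]
```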

Abstract:

Fingerprinting is an indoor location technique, based on wireless networks, where data stored during the offline phase is compared with data collected by the mobile device during the online phase. In most real-life scenarios, the mobile node used throughout the offline phase is different from the mobile nodes that will be used during the online phase. This means that there might be very significant differences between the Received Signal Strength values acquired by the mobile node and the ones stored in the Fingerprinting Map. As a consequence, this difference between RSS values might contribute to an increased location estimation error. One possible solution to minimize these differences is to adapt the RSS values, acquired during the online phase, before sending them to the Location Estimation Algorithm. Also, the internal parameters of the Location Estimation Algorithms, for example the weights of the Weighted k-Nearest Neighbour, might need to be tuned for every type of terminal. This paper addresses both approaches, using Direct Search optimization methods to adapt the Received Signal Strength values and to tune the Location Estimation Algorithm parameters. As a result, it was possible to decrease the location estimation error relative to that originally obtained without any calibration procedure.
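
A minimal sketch of weighted k-nearest-neighbour position estimation over an RSS fingerprint map, the kind of Location Estimation Algorithm discussed above; the fingerprint map, distance metric and value of k are illustrative assumptions.

```python
import math

# Sketch: weighted k-nearest-neighbour position estimation over an RSS
# fingerprint map. The map contents, distance metric and k are illustrative.

def wknn_locate(fingerprint_map, observed_rss, k=3):
    """fingerprint_map: list of ((x, y), [rss per access point]) entries."""
    scored = []
    for position, rss in fingerprint_map:
        # Euclidean distance in RSS space between stored and observed samples.
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(rss, observed_rss)))
        scored.append((d, position))
    scored.sort(key=lambda item: item[0])
    nearest = scored[:k]
    weights = [1.0 / (d + 1e-6) for d, _ in nearest]   # closer entries weigh more
    total = sum(weights)
    x = sum(w * p[0] for w, (_, p) in zip(weights, nearest)) / total
    y = sum(w * p[1] for w, (_, p) in zip(weights, nearest)) / total
    return x, y

fingerprint_map = [
    ((0, 0), [-40, -70, -80]),
    ((5, 0), [-55, -60, -75]),
    ((0, 5), [-60, -72, -58]),
    ((5, 5), [-70, -58, -55]),
]
print(wknn_locate(fingerprint_map, observed_rss=[-50, -65, -77]))
```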

Abstract:

Constrained nonlinear optimization problems can be solved using penalty or barrier functions. This strategy, based on solving unconstrained problems obtained from the original one, has been shown to be effective, particularly when used with direct search methods. An alternative for solving such problems is the filters method. The filters method, introduced by Fletcher and Leyffer in 2002, has been widely used to solve problems of the type mentioned above. These methods use a strategy different from barrier or penalty functions: the latter define a new function that combines the objective function and the constraints, while the filters method treats an optimization problem as a bi-objective problem that minimizes the objective function and a function that aggregates the constraints. Motivated by the work of Audet and Dennis in 2004, which used the filters method with derivative-free algorithms, the authors have developed works in which other direct search methods were used, combining their potential with the filters method. More recently, a new variant of these methods was presented, in which some alternative constraint aggregations for the construction of filters were proposed. This paper presents a variant of the filters method, more robust than the previous ones, implemented with a safeguard procedure in which the values of the objective function and of the constraints are interlinked and not treated completely independently.
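
A minimal sketch of the filter mechanism described above: trial points carry a pair (constraint violation, objective value) and are accepted only if no filter entry dominates them. The aggregation of constraint violations and the acceptance margins are illustrative choices, not the paper's exact variant.

```python
# Sketch: the core acceptance test used by filter methods. A filter is a set of
# (h, f) pairs (aggregated constraint violation, objective value); a trial
# point is accepted only if no stored pair dominates it.

def violation(constraints, x):
    """Aggregate constraint violation h(x) for constraints written as g(x) <= 0."""
    return sum(max(0.0, g(x)) ** 2 for g in constraints)

def acceptable(filter_entries, h, f, gamma=1e-5):
    """Accept (h, f) if it is not dominated by any filter entry."""
    return all(h < hf * (1 - gamma) or f < ff - gamma * hf
               for hf, ff in filter_entries)

def add_to_filter(filter_entries, h, f):
    """Insert (h, f) and drop the entries it dominates."""
    kept = [(hf, ff) for hf, ff in filter_entries if hf < h or ff < f]
    kept.append((h, f))
    return kept

# Toy usage: constraint x0 + x1 <= 1, objective x0^2 + x1^2.
constraints = [lambda x: x[0] + x[1] - 1.0]
objective = lambda x: x[0] ** 2 + x[1] ** 2
filt = [(0.5, 4.0)]                       # one previously stored filter entry
x = [0.6, 0.6]
h, f = violation(constraints, x), objective(x)
if acceptable(filt, h, f):
    filt = add_to_filter(filt, h, f)
print(filt)                               # [(0.04..., 0.72)]
```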

Abstract:

Constrained nonlinear optimization problems are usually solved using penalty or barrier methods combined with unconstrained optimization methods. Another alternative used to solve constrained nonlinear optimization problems is the filters method. The filters method, introduced by Fletcher and Leyffer in 2002, has been widely used in several areas of constrained nonlinear optimization. These methods treat the optimization problem as a bi-objective one that attempts to minimize the objective function and a continuous function that aggregates the constraint violation functions. Audet and Dennis presented the first filters method for derivative-free nonlinear programming, based on pattern search methods. Motivated by this work, we have developed a new direct search method, based on simplex methods, for general constrained optimization, which combines the features of the simplex method and the filters method. This work presents a new variant of these methods, which combines the filters method with other direct search methods, and proposes some alternatives to aggregate the constraint violation functions.