993 results for Direct search


Relevance:

60.00%

Publisher:

Abstract:

The filter method is a technique for solving nonlinear programming problems. The filter algorithm has two phases in each iteration: the first reduces a measure of infeasibility, while the second reduces the objective function value. In real optimization problems, the objective function is often not differentiable or its derivatives are unknown. In these cases it becomes essential to use optimization methods that require neither the calculation of derivatives nor the verification of their existence: direct search methods and derivative-free methods are examples of such techniques. In this work we present a new direct search method, based on simplex methods, for general constrained optimization that combines the features of simplex and filter methods. This method neither computes nor approximates derivatives, penalty constants, or Lagrange multipliers.
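
A minimal sketch of the idea, assuming a Nelder-Mead-style reflection step combined with filter acceptance; the objective, the constraint, and all parameter values are illustrative, not the authors' implementation. A trial point is kept only if no stored (infeasibility, objective) pair dominates it; otherwise the simplex shrinks toward its best vertex:

```python
import numpy as np

def violation(x):
    # Infeasibility measure: sum of violations of g(x) <= 0 (example constraint).
    g = np.array([x[0]**2 + x[1]**2 - 4.0])
    return float(np.sum(np.maximum(g, 0.0)))

def objective(x):
    return float((x[0] - 3.0)**2 + (x[1] - 1.0)**2)  # example objective

def acceptable(h, f, flt, eps=1e-6):
    # Accept a point if no filter entry dominates it, i.e. if it improves
    # either the infeasibility h or the objective f against every entry.
    return all(h < hk - eps or f < fk - eps for hk, fk in flt)

def simplex_filter(x0, iters=200, step=0.5):
    n = len(x0)
    simplex = [np.array(x0, float)] + \
              [np.array(x0, float) + step * np.eye(n)[i] for i in range(n)]
    flt = []
    for _ in range(iters):
        simplex.sort(key=lambda x: (violation(x), objective(x)))
        worst, centroid = simplex[-1], np.mean(simplex[:-1], axis=0)
        trial = centroid + (centroid - worst)        # reflection step
        h, f = violation(trial), objective(trial)
        if acceptable(h, f, flt):
            flt.append((h, f))
            simplex[-1] = trial                      # accept reflected point
        else:                                        # shrink toward best vertex
            simplex = [simplex[0] + 0.5 * (x - simplex[0]) for x in simplex]
    simplex.sort(key=lambda x: (violation(x), objective(x)))
    return simplex[0]

print(simplex_filter([0.0, 0.0]))
```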

Relevance:

60.00%

Publisher:

Abstract:

Several platforms allow users to label resources with tags and share information with other users. Accordingly, various ways of visualizing the tags associated with resources have been developed, with the aim of making it easier for users to search for those resources and to visualize the tag space. Among the concepts developed, the tag cloud stands out as the most common form of visualization. This document presents a study of its limitations and proposes an alternative form of visualization. A new interpretation of how to search for and visualize information associated with tags is also suggested, differing from the direct search for the term in the database that is currently the predominant method. As a result of this implementation, a viable and innovative solution, the Molecule system, was obtained for several of the problems associated with the traditional tag cloud.

Relevance:

60.00%

Publisher:

Abstract:

Optimization problems arise in science, engineering, economics, etc., and we need to find the best solution for each situation. The methods used to solve these problems depend on several factors, including the amount and type of accessible information, the algorithms available for solving them, and, obviously, the intrinsic characteristics of the problem. There are many kinds of optimization problems and, consequently, many kinds of methods to solve them. When the involved functions are nonlinear and their derivatives are unknown or very difficult to calculate, suitable methods are scarcer. Such functions are frequently called black-box functions. To solve such problems without constraints (unconstrained optimization), we can use direct search methods, which require neither derivatives nor approximations of them. But when the problem has constraints (nonlinear programming problems) and, additionally, the constraint functions are black-box functions, it is much more difficult to find the most appropriate method. Penalty methods can then be used: they transform the original problem into a sequence of unconstrained problems derived from the initial one, and this sequence can be solved using the methods available for unconstrained optimization. In this chapter, we present a classification of some of the existing penalty methods and describe some of their assumptions and limitations. These methods allow the solution of optimization problems with continuous, discrete, and mixed constraints, without requiring continuity, differentiability, or convexity. Thus, penalty methods can be used as the first step in the resolution of constrained problems, by means of methods typically used for unconstrained problems. We also discuss a new class of penalty methods for nonlinear optimization, which adjust the penalty parameter dynamically.
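
A minimal sketch of the classic exterior penalty strategy described above, using a quadratic penalty and scipy's Nelder-Mead as the derivative-free solver for each unconstrained subproblem; the objective, constraint, and penalty schedule are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    return (x[0] - 2.0)**2 + (x[1] + 1.0)**2   # example objective

def g(x):
    return np.array([x[0] + x[1] - 1.0])       # example constraint g(x) <= 0

def quadratic_penalty(x, mu):
    # Exterior quadratic penalty: feasible points incur no penalty.
    return f(x) + mu * np.sum(np.maximum(g(x), 0.0)**2)

x, mu = np.zeros(2), 1.0
for _ in range(8):
    # Each subproblem is unconstrained, so a direct search method
    # (here Nelder-Mead) can be applied without any derivatives.
    x = minimize(lambda z: quadratic_penalty(z, mu), x,
                 method="Nelder-Mead").x
    mu *= 10.0                                  # tighten the penalty
print(x, g(x))
```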

Relevance:

60.00%

Publisher:

Abstract:

Fingerprinting is an indoor location technique, based on wireless networks, where data stored during the offline phase is compared with data collected by the mobile device during the online phase. In most real-life scenarios, the mobile node used throughout the offline phase differs from the mobile nodes that will be used during the online phase. This means that there may be very significant differences between the Received Signal Strength (RSS) values acquired by the mobile node and those stored in the Fingerprinting Map. This difference between RSS values can increase the location estimation error. One possible solution to minimize these differences is to adapt the RSS values acquired during the online phase before sending them to the Location Estimation Algorithm. The internal parameters of the Location Estimation Algorithms, for example the weights of the Weighted k-Nearest Neighbour, may also need to be tuned for every type of terminal. This paper addresses both approaches, using Direct Search optimization methods to adapt the Received Signal Strength values and to tune the Location Estimation Algorithm parameters. As a result, it was possible to decrease the location estimation error relative to that originally obtained without any calibration procedure.
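
A minimal sketch of a Weighted k-Nearest Neighbour estimator with a single scalar RSS calibration offset of the kind the paper adapts; the fingerprint map and all values are hypothetical, and in the paper the offset and estimator parameters would themselves be tuned by a Direct Search method on calibration data:

```python
import numpy as np

# Hypothetical fingerprinting map: RSS vectors (dBm) and their positions (m).
FM_RSS = np.array([[-40., -60., -75.], [-55., -50., -70.], [-70., -65., -45.]])
FM_POS = np.array([[0., 0.], [5., 0.], [5., 5.]])

def wknn(rss, k=2, offset=0.0):
    # Apply the calibration offset to the online RSS sample, then estimate
    # the position as the distance-weighted mean of the k nearest fingerprints.
    d = np.linalg.norm(FM_RSS - (rss + offset), axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)                  # closer fingerprints weigh more
    return (w[:, None] * FM_POS[idx]).sum(axis=0) / w.sum()

print(wknn(np.array([-52., -48., -68.]), k=2, offset=-3.0))
```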

Relevance:

60.00%

Publisher:

Abstract:

Constrained nonlinear optimization problems can be solved using penalty or barrier functions. This strategy, based on solving unconstrained problems derived from the original one, has proven effective, particularly when used with direct search methods. An alternative for solving such problems is the filter method. The filter method, introduced by Fletcher and Leyffer in 2002, has been widely used to solve problems of the type mentioned above. It uses a strategy different from that of barrier or penalty functions: whereas those functions define a new function that combines the objective function and the constraints, the filter method treats the optimization problem as a bi-objective problem that minimizes the objective function and a function that aggregates the constraints. Motivated by the work of Audet and Dennis in 2004, which used the filter method with derivative-free algorithms, the authors developed works in which other direct search methods were used, combining their potential with the filter method. More recently, a new variant of these methods was presented, in which some alternative ways of aggregating the constraints for the construction of the filter were proposed. This paper presents a variant of the filter method, more robust than the previous ones, implemented with a safeguard procedure in which the values of the objective function and of the constraints are interlinked and not treated completely independently.
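
A minimal sketch of the filter data structure underlying these methods, maintained as a list of mutually non-dominated (infeasibility, objective) pairs; the tolerance and the example pairs are illustrative:

```python
def dominates(a, b, eps=1e-6):
    # Entry a = (h_a, f_a) dominates b = (h_b, f_b) if it is no worse in
    # both the infeasibility measure h and the objective f.
    return a[0] <= b[0] + eps and a[1] <= b[1] + eps

class Filter:
    def __init__(self):
        self.entries = []

    def acceptable(self, h, f):
        # A trial pair is acceptable if no stored entry dominates it.
        return not any(dominates(e, (h, f)) for e in self.entries)

    def add(self, h, f):
        if self.acceptable(h, f):
            # Prune entries that the new pair dominates, then insert it.
            self.entries = [e for e in self.entries
                            if not dominates((h, f), e)]
            self.entries.append((h, f))
            return True
        return False

flt = Filter()
for pair in [(2.0, 5.0), (1.0, 6.0), (0.5, 5.5), (1.5, 7.0)]:
    print(pair, flt.add(*pair))   # the last pair is rejected as dominated
```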

Relevance:

60.00%

Publisher:

Abstract:

The bending of simply supported composite plates is analyzed using a direct collocation meshless numerical method. To optimize the node distribution, the Direct MultiSearch (DMS) multi-objective optimization method is applied. In addition, the method optimizes the shape parameter of the radial basis functions. The optimization algorithm was able to find good solutions for a large variety of node distributions.
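
A minimal sketch of the trade-off being optimized, assuming multiquadric radial basis functions in one dimension: the shape parameter c simultaneously affects the interpolation error and the conditioning of the collocation system, which is the kind of bi-objective landscape DMS searches; the test function and parameter values are illustrative:

```python
import numpy as np

def mq(r, c):
    # Multiquadric radial basis function with shape parameter c.
    return np.sqrt(r**2 + c**2)

def rbf_fit_error(x, y, c):
    # Solve the RBF collocation system, then report the maximum
    # interpolation error on a finer grid and the system's conditioning.
    A = mq(np.abs(x[:, None] - x[None, :]), c)
    w = np.linalg.solve(A, y)
    xt = np.linspace(x.min(), x.max(), 200)
    yt = mq(np.abs(xt[:, None] - x[None, :]), c) @ w
    return np.max(np.abs(yt - np.sin(np.pi * xt))), np.linalg.cond(A)

x = np.linspace(-1.0, 1.0, 11)
y = np.sin(np.pi * x)
# Evaluate the two competing objectives (error, conditioning) over
# candidate shape parameters; DMS would search this trade-off directly.
for c in (0.2, 0.5, 1.0, 2.0):
    err, cond = rbf_fit_error(x, y, c)
    print(f"c={c:.1f}  max error={err:.2e}  cond={cond:.2e}")
```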

Relevance:

60.00%

Publisher:

Abstract:

Optimization methods have been used to solve optimization problems in many areas of knowledge, such as Engineering, Statistics, and Chemistry. In many cases it is not possible to use derivative-based methods, due to the characteristics of the problem to be solved and/or its constraints, for example when the involved functions are non-smooth and/or their derivatives are not known. To solve this type of problem, a Java-based API has been implemented, which includes only derivative-free optimization methods and can be used to solve both constrained and unconstrained problems. For solving constrained problems, the classic Penalty and Barrier functions were included in the API. In this paper a new approach to Penalty and Barrier functions, based on Fuzzy Logic, is proposed. Two penalty functions that impose a progressive penalization on solutions violating the constraints are discussed: the implemented functions impose a low penalization when the violation of the constraints is low and a heavy penalty when the violation is high. Numerical results obtained on twenty-eight test problems, comparing the proposed Fuzzy Logic based functions to six of the classic Penalty and Barrier functions, are presented. Considering the achieved results, it can be concluded that the proposed penalty functions are not only very robust but also perform very well.
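
A minimal sketch of a progressive, fuzzy-membership-style penalty of the kind described; the membership shape, thresholds, and weights are invented for illustration and are not the paper's functions:

```python
import numpy as np

def fuzzy_penalty(violation, soft=0.1, hard=1.0, heavy=1e4):
    # Progressive penalization: violations below `soft` receive only a mild
    # penalty; as the violation approaches `hard`, the membership degree
    # rises smoothly to 1 and the penalty saturates at a heavy value.
    v = np.asarray(violation, float)
    mu = np.clip((v - soft) / (hard - soft), 0.0, 1.0)  # membership degree
    return np.where(v <= 0.0, 0.0, heavy * mu**2 + v)

# Low violations are penalized lightly, high violations heavily.
for v in (0.0, 0.05, 0.5, 2.0):
    print(v, float(fuzzy_penalty(v)))
```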

Relevance:

60.00%

Publisher:

Abstract:

INTRODUCTION: The Montenegro skin test (MST) has good clinical applicability and low cost for the diagnosis of American tegumentary leishmaniasis (ATL). However, no studies have validated the reference value (5 mm) typically used to discriminate positive and negative results. We investigated MST results and evaluated its performance using different cut-off points. METHODS: The results of laboratory tests for 4,256 patients with suspected ATL were analyzed, and 1,182 individuals were found to fulfill the established criteria. Two groups were formed. The positive cutaneous leishmaniasis (PCL) group included patients with skin lesions and positive direct search for parasites (DS) results. The negative cutaneous leishmaniasis (NCL) group included patients with skin lesions with evolution up to 2 months, negative DS results, and negative indirect immunofluorescence assay results who were residents of urban areas that were reported to be probable sites of infection at domiciles and peridomiciles. RESULTS: The PCL and NCL groups included 769 and 413 individuals, respectively. The mean ± standard deviation MST in the PCL group was 12.62 ± 5.91 mm [95% confidence interval (CI): 12.20-13.04], and that in the NCL group was 1.43 ± 2.17 mm (95% CI: 1.23-1.63). Receiver operating characteristic curve analysis indicated 97.4% sensitivity and 93.9% specificity for a cut-off of 5 mm, and 95.8% sensitivity and 97.1% specificity for a cut-off of 6 mm. CONCLUSIONS: Either 5 mm or 6 mm could be used as the cut-off value for diagnosing ATL, as both values had high sensitivity and specificity.
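
A minimal sketch of how sensitivity and specificity at a given cut-off are computed, using synthetic induration values drawn from the reported group means and standard deviations; the simulated data are illustrative only and will not reproduce the paper's exact percentages:

```python
import numpy as np

def sens_spec(values_pos, values_neg, cutoff):
    # Sensitivity: fraction of true positives at or above the cutoff.
    # Specificity: fraction of true negatives below the cutoff.
    sens = np.mean(values_pos >= cutoff)
    spec = np.mean(values_neg < cutoff)
    return sens, spec

rng = np.random.default_rng(0)
# Synthetic induration diameters (mm) mimicking the reported group
# statistics (PCL: 12.62 +/- 5.91, n=769; NCL: 1.43 +/- 2.17, n=413).
pcl = np.clip(rng.normal(12.62, 5.91, 769), 0, None)
ncl = np.clip(rng.normal(1.43, 2.17, 413), 0, None)
for cutoff in (5.0, 6.0):
    s, p = sens_spec(pcl, ncl, cutoff)
    print(f"cutoff={cutoff} mm  sensitivity={s:.3f}  specificity={p:.3f}")
```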

Relevance:

60.00%

Publisher:

Abstract:

The objective of this work was to compare the polymerase chain reaction (PCR) using lesion scraping with other conventional techniques for the diagnosis of American tegumentary leishmaniasis (ATL). To this end, patients with cutaneous lesions suspected of being ATL were studied. The DNA was amplified with the MP1L/MP3H primers. Of the 156 studied patients, 79 (50.6%) had a positive direct search for parasites (PD), 81 (51.9%) had a positive Montenegro skin test (MST), and 90 (57.7%) were PD and/or MST positive. The PCR was positive in all of the PD-positive patients (100% sensitivity), in 91.1% of the PD- and/or MST-positive patients, and in 27.3% of the patients with negative PD and positive MST. PCR positivity was similar to that of PD (P = 0.2482) and inferior to that of MST (P = 0.0455) and of the PD/MST association (P = 0.0133). The high PCR sensitivity, and its positivity in cases where PD was negative, highlights the importance of this technique as an auxiliary tool for the diagnosis of ATL.

Relevance:

60.00%

Publisher:

Abstract:

Recently, a focus of canine visceral leishmaniasis (CVL) was described in the northwestern region of the State of São Paulo, Brazil. In 2000, the Veterinary Hospital of UNESP - Araçatuba performed 60 cytopathological tests on suspected leishmaniasis cases using fine-needle aspiration (FNA). Lymph node smears were stained by the Romanowsky method (Diff-Quik®) and examined under light microscopy. Positive cases showed typical Leishmania amastigote forms, free or within macrophage vacuoles. Cytopathological signs of reactivity of the lympho-histiocytic system with absence of parasites were also observed. In order to improve the diagnosis of CVL by detecting parasites and antigenic material in the smears, a direct immunofluorescence (DIF) reaction was applied using a polyclonal anti-Leishmania antibody produced in mice. We compared the DIF method with the direct search for the parasite in smears stained by the Romanowsky method. Of the 60 dogs with clinical signs of the disease, direct examination was positive in 50% (n=30), inconclusive in 36.7% (n=22), and negative with lymph node reactivity in 13.3% (n=8). When the lymph nodes were submitted to the DIF reaction, we observed a positive reaction in 93.3% (n=56) and a negative reaction in 6.7% (n=4). Our results showed that the DIF reaction presented high sensitivity compared with the direct search for the parasite by Romanowsky staining. The DIF reaction can be a useful method to confirm inconclusive cases of the disease, in which amastigote forms are not easily identified.

Relevance:

60.00%

Publisher:

Abstract:

A formulation for determining time-optimal geomagnetic attitude maneuvers subject to dynamic and geometric constraints is proposed in this paper. It is based on a direct search procedure using a control function parametrization method, with linear programming employed to obtain numerical suboptimal solutions by linear perturbation. Due to its characteristics, it can be used on small computers and to generate computer programs of general application. The dynamic model, the magnetic torque model, and the suboptimal control procedure are presented. Simulation runs have verified the feasibility of the derived formulation and have shown a notable improvement in performance.
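
A minimal sketch of one linear programming step of such a control-parametrization scheme, assuming a piecewise-constant control and a hypothetical linearized cost and terminal-constraint model; all matrices and numbers are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# Control function parametrization: the control is piecewise constant on
# N segments; u holds the current guess of the segment amplitudes.
N = 6
u = np.zeros(N)

# Hypothetical linearized model: cost change ~ c @ du, and the terminal
# constraint residual e is linearized as S @ du = -e.
c = np.linspace(1.0, 2.0, N)          # cost sensitivities (illustrative)
S = np.vstack([np.ones(N), np.arange(N, dtype=float)])
e = np.array([0.05, -0.2])            # current constraint residuals

# Linear programming step of the suboptimal procedure: find a small,
# bounded perturbation du that reduces the cost and corrects the residuals.
res = linprog(c, A_eq=S, b_eq=-e, bounds=[(-0.1, 0.1)] * N, method="highs")
u += res.x                            # linear perturbation of the control
print(res.status, u)
```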

Relevance:

60.00%

Publisher:

Abstract:

This work aims at modeling the crust, through the inversion of deep seismic refraction data, as laterally homogeneous horizontal plane layers over a half-space. The forward model is given by the analytic expression of the travel-time curve as a function of the source-station distance and of the parameter vector of layer velocities and thicknesses, computed along seismic ray paths governed by Snell's law. Computing arrival times by this procedure requires a model whose velocities increase with depth, so the occurrence of low-velocity zones (LVZ) is handled by reparametrizing the model, taking into account the fact that the top of an LVZ acts only as a reflector of the seismic ray, not as a refractor. The inversion methodology aims not only to determine the possible solutions but also to analyze the causes responsible for the ambiguity of the problem. The search region for candidate solutions is constrained by upper and lower bounds on each sought parameter and by upper bounds on the critical distances computed from the parameter vector. The inversion is carried out with a curve-fitting optimization technique based on direct search in parameter space, called COMPLEX. This technique has the advantage of working with any objective function and of being quite practical for obtaining multiple solutions to the problem. Since the travel-time curve is a multi-function, the algorithm was adapted to minimize several objective functions simultaneously, with constraints on the parameters. The inversion is performed so as to obtain a set of solutions representative of the existing universe. The ambiguity analysis, in turn, is carried out by Q-mode factor analysis, through which it is possible to characterize the properties shared by the set of analyzed solutions. Tests with synthetic and real data used, as the initial approximation for the inversion, the velocity and thickness values computed directly from visual interpretation of the seismogram. The synthetic tests used seismograms computed by the reflectivity method for different models, while the tests with real data used data extracted from one of the seismograms collected by the Lithospheric Seismic Profile in Britain (LISPB) project in northern Great Britain. In all tests it was verified that the geometry of the model carries greater weight in the ambiguity of the problem, while the physical parameters show only slight variations across the set of solutions obtained.
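
A minimal sketch of the forward model described above, assuming the standard analytic first-arrival times (direct wave plus head waves) for horizontal layers over a half-space with velocities increasing with depth; the velocities, thicknesses, and offsets are illustrative. An inversion such as COMPLEX would search the (v, h) parameter space directly to fit observed times:

```python
import numpy as np

def first_arrivals(x, v, h):
    # Analytic travel-time curves for a stack of horizontal layers over a
    # half-space: the direct wave plus one head wave per refractor, with
    # intercept delays that follow from Snell's law. Requires increasing v.
    x = np.asarray(x, float)
    times = [x / v[0]]                              # direct wave
    for n in range(1, len(v)):
        delay = sum(2.0 * h[i] * np.sqrt(1.0 / v[i]**2 - 1.0 / v[n]**2)
                    for i in range(n))
        times.append(x / v[n] + delay)              # head wave on refractor n
    return np.min(times, axis=0)                    # first arrivals

x = np.linspace(1.0, 200.0, 5)                      # offsets (km)
v = [5.8, 6.5, 8.0]                                 # layer velocities (km/s)
h = [12.0, 18.0]                                    # layer thicknesses (km)
print(first_arrivals(x, v, h))
```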

Relevance:

60.00%

Publisher:

Abstract:

Adaptive plasticity has been postulated as an important factor in explaining the distribution and abundance of species in habitats with different levels of environmental variation, and environmental heterogeneity has been held responsible for maintaining, increasing, or decreasing diversity. In this work, we determined the effect of habitat periodicity and structure on species richness and composition in three different habitats: a stream (P1), temporary ponds (P2), and a permanent reservoir (P3) in an agroecosystem in the Cerrado of central Brazil. Nine field expeditions were carried out between October 2005 and April 2007. Visual encounter and auditory search sampling methods were used to record species. Nineteen anuran species belonging to four families were recorded: Bufonidae (one species), Hylidae (nine species), Leptodactylidae (five species), and Leiuperidae (four species). The highest richness and abundance were recorded in the temporary ponds (P2), which differed significantly from the stream (P1) and the permanent reservoir (P3). Dendropsophus nanus, Hypsiboas raniceps, and Leptodactylus chaquensis showed a strong association with site P2. Sites P2 and P3 differed more from each other in species composition than either did from site P1. Although the water bodies studied are located in an area of intensive agriculture and suffer a high degree of anthropogenic disturbance, these environments show high species richness and constitute important refuges for the anuran fauna of the region. However, the recorded species are associated with anthropized areas or open phytophysiognomies, being favored by the creation of artificial environments such as those observed in the present study.

Relevance:

60.00%

Publisher:

Abstract:

OBJECTIVES The aim of this study was to optimise dexmedetomidine and alfaxalone dosing, for intramuscular administration with butorphanol, to perform minor surgeries in cats. METHODS Initially, cats were assigned to one of five groups, each composed of six animals and receiving, in addition to 0.3 mg/kg butorphanol intramuscularly, one of the following: (A) 0.005 mg/kg dexmedetomidine, 2 mg/kg alfaxalone; (B) 0.008 mg/kg dexmedetomidine, 1.5 mg/kg alfaxalone; (C) 0.012 mg/kg dexmedetomidine, 1 mg/kg alfaxalone; (D) 0.005 mg/kg dexmedetomidine, 1 mg/kg alfaxalone; and (E) 0.012 mg/kg dexmedetomidine, 2 mg/kg alfaxalone. Thereafter, a modified 'direct search' method, conducted in a stepwise manner, was used to optimise drug dosing. The quality of anaesthesia was evaluated on the basis of composite scores (one for anaesthesia and one for recovery), visual analogue scales and the propofol requirement to suppress spontaneous movements. The medians or means of these variables were used to rank the treatments; 'unsatisfactory' and 'promising' combinations were identified to calculate, through the equation first described by Berenbaum in 1990, new dexmedetomidine and alfaxalone doses to be tested in the next step. At each step, five combinations (one new plus the best previous four) were tested. RESULTS None of the tested combinations resulted in adverse effects. Four steps and 120 animals were necessary to identify the optimal drug combination (0.014 mg/kg dexmedetomidine, 2.5 mg/kg alfaxalone and 0.3 mg/kg butorphanol). CONCLUSIONS AND RELEVANCE The investigated drug mixture, at the doses found with the optimisation method, is suitable for cats undergoing minor clinical procedures.
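
A schematic sketch of the stepwise dose-update idea, not Berenbaum's published equation: new candidate doses are proposed by moving from the centroid of the 'unsatisfactory' combinations through the centroid of the 'promising' ones; the split of the initial combinations into the two sets is invented for illustration:

```python
import numpy as np

def propose(promising, unsatisfactory, step=1.0):
    # Schematic update: reflect the centroid of the unsatisfactory dose
    # combinations through the centroid of the promising ones, clipping
    # the proposed doses at zero.
    good = np.mean(promising, axis=0)
    bad = np.mean(unsatisfactory, axis=0)
    return np.clip(good + step * (good - bad), 0.0, None)

# Dose pairs (dexmedetomidine mg/kg, alfaxalone mg/kg) from the first step;
# the ranking into the two sets shown here is illustrative.
promising = np.array([[0.012, 1.0], [0.012, 2.0], [0.008, 1.5]])
unsatisfactory = np.array([[0.005, 2.0], [0.005, 1.0]])
print(propose(promising, unsatisfactory))
```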

Relevance:

60.00%

Publisher:

Abstract:

The normal boiling point is a fundamental thermo-physical property, important in describing the transition between the vapor and liquid phases. A reliable method for predicting it is of great importance, especially for compounds with no available experimental data. In this work, an improved second-order group contribution method for determining the normal boiling point of organic compounds was developed using experimental data for 632 organic compounds; it is based on the Joback first-order functional groups, with some changes and some additional functional groups. It can distinguish most structural isomers and stereoisomers, including the structural, cis-, and trans-isomers of organic compounds. First- and second-order contributions are given for hydrocarbons and hydrocarbon derivatives containing carbon, hydrogen, oxygen, nitrogen, sulfur, fluorine, chlorine, and bromine atoms. The fminsearch routine from MATLAB is used in this study to select an optimal collection of functional groups (65 functional groups) and subsequently to develop the model; it is a direct search method that uses the simplex search method of Lagarias et al. The results of the new method are compared to several currently used methods and are shown to be far more accurate and reliable. The average absolute deviation of the normal boiling point predictions for the 632 organic compounds is 4.4350 K, and the average absolute relative deviation is 1.1047%, which is adequate accuracy for many practical applications.
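
A minimal sketch of the fitting step, using scipy's Nelder-Mead (the same Lagarias et al. simplex direct search that underlies MATLAB's fminsearch) to fit first-order group contributions by minimizing the average absolute deviation; the group counts and boiling points below are a toy illustration with two groups, not the paper's 632-compound data set:

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: counts of two groups (CH3, CH2) per molecule and measured
# normal boiling points (K) for the n-alkanes ethane through pentane.
counts = np.array([[2, 0], [2, 1], [2, 2], [2, 3]])
tb_exp = np.array([184.6, 231.1, 272.7, 309.2])

def tb_model(p, counts):
    # Joback-style first-order estimate: Tb = a + sum of group contributions.
    a, contrib = p[0], np.asarray(p[1:])
    return a + counts @ contrib

def aad(p):
    # Objective: average absolute deviation between model and experiment.
    return np.mean(np.abs(tb_model(p, counts) - tb_exp))

# Nelder-Mead simplex direct search over the base constant and the two
# group contributions; no derivatives of the objective are required.
res = minimize(aad, x0=[100.0, 20.0, 30.0], method="Nelder-Mead")
print(res.x, res.fun)
```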