966 results for Maximum entropy methods
Abstract:
In this paper, we focus on large-scale and dense Cyber-Physical Systems, and discuss methods that tightly integrate communication and computing with the underlying physical environment. We present the Physical Dynamic Priority Dominance ((PD)2) protocol, which exemplifies a key mechanism for devising low time-complexity communication protocols for large-scale networked sensor systems. We show that using this mechanism, one can compute aggregate quantities such as the maximum or minimum of sensor readings with a time complexity equivalent to essentially a single message exchange. We also illustrate the use of this mechanism in the more complex task of computing the interpolation of smooth as well as non-smooth sensor data in very low time complexity.
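The dominance mechanism behind such single-exchange aggregation is reminiscent of bitwise bus arbitration (as in CAN networks): every node asserts its reading as a priority code on a shared dominant/recessive medium, nodes that observe a dominant bit they did not send withdraw, and after one code-length exchange only the maximum survives. The following is a minimal simulation sketch of that idea; the wired-OR model and all names are our own illustrative assumptions, not the paper's implementation.

    // Illustrative simulation of dominance-based MAX aggregation.
    // Each node asserts its reading bit by bit on a shared wired-OR medium;
    // a node that sent a recessive 0 but observes a dominant 1 withdraws.
    public class DominanceMax {
        static int dominanceMax(int[] readings, int bits) {
            boolean[] active = new boolean[readings.length];
            java.util.Arrays.fill(active, true);
            int winner = 0;
            for (int b = bits - 1; b >= 0; b--) {          // most significant bit first
                int medium = 0;                             // wired-OR: any 1 dominates
                for (int i = 0; i < readings.length; i++)
                    if (active[i]) medium |= (readings[i] >> b) & 1;
                for (int i = 0; i < readings.length; i++)   // senders of recessive bits withdraw
                    if (active[i] && medium == 1 && ((readings[i] >> b) & 1) == 0)
                        active[i] = false;
                winner = (winner << 1) | medium;            // medium carries the max, bit by bit
            }
            return winner;
        }

        public static void main(String[] args) {
            int[] sensorReadings = {23, 117, 42, 99};
            System.out.println(dominanceMax(sensorReadings, 8)); // prints 117
        }
    }

Note that the number of medium accesses depends only on the code length, not on the number of nodes, which is what makes the time complexity essentially that of one message exchange.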
Abstract:
Optimization problems arise in science, engineering, economics, and many other fields, and we need to find the best solution for each setting. The methods used to solve these problems depend on several factors, including the amount and type of accessible information, the algorithms available for solving them, and, obviously, the intrinsic characteristics of the problem. There are many kinds of optimization problems and, consequently, many kinds of methods to solve them. When the functions involved are nonlinear and their derivatives are unknown or very difficult to calculate, suitable methods are scarcer. Functions of this kind are frequently called black-box functions. To solve such problems without constraints (unconstrained optimization), we can use direct search methods, which require neither derivatives nor approximations of them. But when the problem has constraints (nonlinear programming problems) and, additionally, the constraint functions are black-box functions, finding the most appropriate method is much more difficult. Penalty methods can then be used: they transform the original problem into a sequence of other problems, derived from the initial one, all without constraints. This sequence of unconstrained problems can then be solved using the methods available for unconstrained optimization. In this chapter, we present a classification of some of the existing penalty methods and describe some of their assumptions and limitations. These methods allow solving optimization problems with continuous, discrete, and mixed constraints, without requiring continuity, differentiability, or convexity. Thus, penalty methods can be used as the first step in the resolution of constrained problems, by means of methods typically used for unconstrained problems. We also discuss a new class of penalty methods for nonlinear optimization, which adjusts the penalty parameter dynamically.
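To make the transformation concrete, here is a minimal sketch of a quadratic penalty method with a dynamically increased penalty parameter, using only function values (no derivatives). The test problem, the inner compass search, and the update rule are illustrative assumptions, not the chapter's specific methods.

    // Minimal quadratic-penalty sketch: minimize f(x) = (x-3)^2
    // subject to g(x) = x - 1 <= 0, so the constrained optimum is x = 1.
    // The inner solver is a derivative-free compass search on function values only.
    public class PenaltyDemo {
        interface Fn { double at(double x); }

        // One-dimensional compass search: shrink the step until it is tiny.
        static double compassSearch(Fn f, double x, double step) {
            while (step > 1e-8) {
                if (f.at(x + step) < f.at(x)) x += step;
                else if (f.at(x - step) < f.at(x)) x -= step;
                else step *= 0.5;
            }
            return x;
        }

        public static void main(String[] args) {
            double x = 5.0;                      // infeasible start
            double mu = 1.0;                     // penalty parameter
            for (int k = 0; k < 12; k++) {
                final double m = mu;
                // Penalized objective: f(x) + mu * max(0, g(x))^2
                Fn penalized = t -> {
                    double g = Math.max(0.0, t - 1.0);
                    return (t - 3.0) * (t - 3.0) + m * g * g;
                };
                x = compassSearch(penalized, x, 1.0);
                mu *= 10.0;                      // dynamic increase of the penalty
            }
            System.out.printf("x ~ %.4f (exact: 1.0)%n", x);
        }
    }

Each unconstrained subproblem has its minimizer at (3 + mu)/(1 + mu), which tends to the constrained optimum 1 as mu grows, illustrating why the sequence of penalized problems converges.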
Abstract:
The characteristics of carbon fibre reinforced laminates have widened their use, from aerospace to domestic appliances. A common requirement is drilling for assembly purposes. It is known that a drilling process that reduces the drill thrust force can decrease the risk of delamination. In this work, delamination assessment methods based on radiographic data are compared and correlated with mechanical test results (bearing test).
Abstract:
Constrained and unconstrained nonlinear optimization problems often appear in many engineering areas. In some of these cases it is not possible to use derivative-based optimization methods, because the objective function is unknown, too complex, or non-smooth; direct search methods may then be the most suitable optimization methods. An Application Programming Interface (API) including some of these methods was implemented using Java technology. This API can be accessed either by applications running on the same computer where it is installed or remotely through a LAN or the Internet, using web services. From the engineering point of view, the information needed from the API is the solution to the provided problem. From the point of view of researchers in optimization methods, however, the solution alone is not enough: additional information about the iterative process is also useful, such as the number of iterations, the value of the solution at each iteration, and the stopping criteria. This paper presents the features added to the API that allow users to access the iterative process data.
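One plausible shape for such iterative-process reporting is a listener that the solver notifies at every iteration, so callers can collect as much or as little history as they need. The sketch below is a hypothetical illustration: all type and method names, and the toy coordinate-search solver, are our assumptions rather than the actual API of the paper.

    import java.util.ArrayList;
    import java.util.List;

    public class IterationReportDemo {
        record IterationRecord(int iteration, double[] point, double value) {}

        interface IterationListener { void onIteration(IterationRecord r); }

        // A solver that notifies the listener at every iteration.
        static double[] solve(java.util.function.ToDoubleFunction<double[]> f,
                              double[] x0, IterationListener listener) {
            double[] x = x0.clone();
            double step = 1.0;
            int iter = 0;
            while (step > 1e-6) {
                boolean improved = false;
                for (int i = 0; i < x.length; i++) {
                    for (double s : new double[]{step, -step}) {
                        double[] trial = x.clone();
                        trial[i] += s;
                        if (f.applyAsDouble(trial) < f.applyAsDouble(x)) { x = trial; improved = true; }
                    }
                }
                if (!improved) step *= 0.5;          // no progress: refine the mesh
                listener.onIteration(new IterationRecord(++iter, x.clone(), f.applyAsDouble(x)));
            }
            return x;
        }

        public static void main(String[] args) {
            List<IterationRecord> history = new ArrayList<>();
            double[] sol = solve(p -> (p[0] - 2) * (p[0] - 2) + p[1] * p[1],
                                 new double[]{0, 0}, history::add);
            System.out.printf("solution: (%.4f, %.4f) after %d iterations%n",
                              sol[0], sol[1], history.size());
        }
    }

An engineering client can pass a no-op listener and read only the final solution, while a researcher can accumulate the full history, which matches the two audiences the abstract distinguishes.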
Abstract:
Finding the optimal value for a problem is common in many areas of knowledge, where it is often necessary to solve nonlinear optimization problems. For some of these problems it is not possible to determine an expression for the objective function and/or the constraints: they may be the result of experimental procedures, may be non-smooth, among other reasons. To solve such problems, an API was implemented containing methods for both constrained and unconstrained problems. This API was developed to be used either locally, on the computer where the application is executed, or remotely, on a server. To obtain maximum flexibility from both the programmers' and the users' points of view, problems can be defined as a Java class (because this API was developed in Java) or as simple text input sent to the API. To make the latter possible, an expression evaluator was also implemented in the API. One drawback of this expression evaluator is that it is slower than native Java code. This paper presents a solution that combines both options: the problem can be expressed at run-time as a string that is converted to Java code, compiled, and loaded dynamically. To widen the target audience of the API, this new expression evaluator is also compatible with the AMPL format.
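The run-time string-to-compiled-code step can be realized with the JDK's standard javax.tools compiler API. A minimal sketch follows; the class name Objective, its eval signature, and the sample objective are illustrative assumptions, not the paper's actual code.

    import javax.tools.JavaCompiler;
    import javax.tools.ToolProvider;
    import java.net.URL;
    import java.net.URLClassLoader;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class DynamicCompileDemo {
        public static void main(String[] args) throws Exception {
            // Problem definition received at run-time as plain text.
            String source =
                "public class Objective {" +
                "  public static double eval(double[] x) {" +
                "    return (x[0] - 1) * (x[0] - 1) + (x[1] + 2) * (x[1] + 2);" +
                "  }" +
                "}";
            Path dir = Files.createTempDirectory("dynsrc");
            Files.writeString(dir.resolve("Objective.java"), source);

            // Compile with the JDK's built-in compiler (requires a JDK, not a bare JRE).
            JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
            if (compiler == null || compiler.run(null, null, null,
                    dir.resolve("Objective.java").toString()) != 0)
                throw new IllegalStateException("compilation failed");

            // Load the freshly compiled class and call it reflectively.
            try (URLClassLoader loader =
                     new URLClassLoader(new URL[]{dir.toUri().toURL()})) {
                Class<?> cls = loader.loadClass("Objective");
                double v = (double) cls.getMethod("eval", double[].class)
                                       .invoke(null, (Object) new double[]{0.0, 0.0});
                System.out.println("f(0, 0) = " + v);   // (0-1)^2 + (0+2)^2 = 5.0
            }
        }
    }

After the one-time compilation cost, every subsequent evaluation runs as native JVM code, which is the speed advantage over interpreting the expression string at each function call.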
Abstract:
In nonlinear optimization, penalty and barrier methods are normally used to solve constrained problems. There are several penalty/barrier methods, used in areas ranging from engineering to economics, including biology, chemistry, and physics, among others. In these areas, optimization problems often arise in which the functions involved (objective and constraints) are non-smooth and/or their derivatives are not known. In this work, some penalty/barrier functions are tested and compared, using derivative-free methods, namely direct search methods, in the inner iterative process. This work is part of a larger project involving the development of an Application Programming Interface that implements several optimization methods, to be used by applications that need to solve constrained and/or unconstrained nonlinear optimization problems. Besides its use in applied mathematics research, it is also intended for use in engineering software packages.
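For intuition on the distinction being compared: an exterior penalty term is zero inside the feasible region and grows outside it, while an interior barrier term stays finite inside and blows up at the boundary. A brief sketch on an assumed one-dimensional constraint g(x) <= 0 (functions and parameter values are our own illustrations, not the functions tested in the paper):

    // Quadratic penalty vs. logarithmic barrier for a constraint g(x) <= 0.
    public class PenaltyVsBarrier {
        static double g(double x) { return x - 1.0; }   // feasible region: x <= 1

        // Exterior penalty: zero inside, grows quadratically outside.
        static double penalty(double x, double mu) {
            double v = Math.max(0.0, g(x));
            return mu * v * v;
        }

        // Interior barrier: finite inside, +infinity at the boundary and outside.
        static double barrier(double x, double mu) {
            return g(x) < 0 ? -mu * Math.log(-g(x)) : Double.POSITIVE_INFINITY;
        }

        public static void main(String[] args) {
            for (double x : new double[]{0.0, 0.9, 0.999, 1.001, 2.0}) {
                System.out.printf("x=%6.3f  penalty=%10.4f  barrier=%10.4f%n",
                                  x, penalty(x, 10.0), barrier(x, 0.1));
            }
        }
    }

Penalty methods can therefore start from infeasible points, whereas barrier methods need a strictly feasible start, a practical difference that matters when the constraints are black-box functions.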
Abstract:
Epidemiological studies have shown the effect of diet on the incidence of chronic diseases; however, proper planning, design, and statistical modeling are necessary to obtain precise and accurate food consumption data. Evaluation methods used for short-term assessment of the food consumption of a population, such as 24-hour recalls of food intake or food diaries, can be affected by random errors or by biases inherent to the method. Statistical modeling is used to handle random errors, whereas proper design and sampling are essential for controlling biases. The present study aimed to analyze potential biases and random errors and determine how they affect the results. We also aimed to identify ways to prevent them and/or to handle them with statistical approaches in epidemiological studies involving dietary assessments.
Abstract:
OBJECTIVE To analyze the implementation of the drug price regulation policy by the Drug Market Regulation Chamber. METHODS This is an interview-based study, undertaken in 2012, using semi-structured questionnaires with social actors from the pharmaceutical market, the pharmaceuticals industry, consumers, and the regulatory agency. In addition, drug prices were compiled based on surveys conducted in the state of Sao Paulo, at the point of sale, between February 2009 and May 2012. RESULTS The mean drug prices charged at the point of sale (pharmacies) were well below the maximum price to the consumer for many drugs sold in Brazil. Between 2009 and 2012, 44 of the 129 prices, corresponding to 99 drugs listed in the database of compiled prices, showed a variation of more than 20.0% between the mean prices at the point of sale and the maximum price to the consumer. In addition, many laboratories have refused to apply the price adequacy coefficient in their sales to government agencies. CONCLUSIONS The regulation implemented by the pharmaceutical market regulator was unable to significantly control the prices of marketed drugs, failing to push them below the levels determined by the pharmaceutical industry and, therefore, failing in its objective of promoting pharmaceutical support for the public. It is necessary to rebuild the regulatory law to allow market prices to be reduced by the regulator, as well as to institutionally strengthen this government body.
Abstract:
On-chip debug (OCD) features are frequently available in modern microprocessors. Their contribution to shortening time-to-market justifies industry investment in this area, where a number of competing or complementary proposals are available or under development, e.g. NEXUS, CJTAG, IJTAG. The controllability and observability features of OCD infrastructures offer a valuable toolbox that can be used well beyond the debugging arena, improving the return on investment by diluting its cost across a wider spectrum of application areas. This paper discusses the use of OCD features for validating fault-tolerant architectures, and in particular the efficiency of various fault injection methods provided by enhanced OCD infrastructures. The reference data for our comparative study was captured on a workbench comprising the 32-bit Freescale MPC-565 microprocessor, an iSYSTEM IC3000 debugger (iTracePro version), and the Winidea 2005 debugging package. All enhanced OCD infrastructures were implemented in VHDL, and the results were obtained by simulation within the same fault injection environment. The focus of this paper is on the comparative analysis of the experimental results obtained for various OCD configurations and debugging scenarios.
Abstract:
This paper investigates the adoption of entropy for analyzing the dynamics of a system of multiple independent particles. Several entropy definitions and types of particle dynamics with integer and fractional behavior are studied. The results reveal the adequacy of the entropy concept in the analysis of complex dynamical systems.
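For reference, entropy analyses of this kind typically start from the classical Shannon definition and its generalized forms; the abstract does not state which definitions the paper compares, so the following standard trio is an assumption:

    S = -\sum_i p_i \ln p_i \quad \text{(Shannon)}, \qquad
    S_q^{R} = \frac{1}{1-q} \ln \sum_i p_i^{\,q} \quad \text{(R\'enyi)}, \qquad
    S_q^{T} = \frac{1}{q-1} \Bigl( 1 - \sum_i p_i^{\,q} \Bigr) \quad \text{(Tsallis)},

where the Rényi and Tsallis forms recover the Shannon entropy in the limit q → 1, and q away from 1 weights rare or frequent events differently, which is why such generalizations appear in studies of fractional-order dynamics.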
Abstract:
OBJECTIVE To describe methods and challenges faced in the health impact assessment of vaccination programs, focusing on the pneumococcal conjugate and rotavirus vaccines in Latin America and the Caribbean. METHODS For this narrative review, we searched for the terms "rotavirus", "pneumococcal", "conjugate vaccine", "vaccination", "program", and "impact" in the databases Medline and LILACS. The search was extended to the grey literature in Google Scholar. No limits were defined for publication year. Original articles on the health impact assessment of pneumococcal and rotavirus vaccination programs in Latin America and the Caribbean in English, Spanish, or Portuguese were included. RESULTS We identified 207 articles. After removing duplicates and assessing eligibility, we reviewed 33 studies, 25 focusing on rotavirus and eight on pneumococcal vaccination programs. The most frequent studies were ecological, with time series analyses or comparisons of pre- and post-vaccination periods. The main data sources were: health information systems; population-, sentinel-, or laboratory-based surveillance systems; statistics reports; and medical records from one or a few health care services. Few studies used primary data. Hospitalization and death were the main outcomes assessed. CONCLUSIONS Over the last years, a significant number of health impact assessments of pneumococcal and rotavirus vaccination programs have been conducted in Latin America and the Caribbean. These studies were carried out a few years after the programs were implemented, meet the basic methodological requirements, and suggest a positive health impact. Future assessments should consider the methodological issues and challenges that arose in these first studies conducted in the region.
Abstract:
OBJECTIVE To estimate the required number of public beds for adults in intensive care units in the state of Rio de Janeiro to meet the existing demand, and to compare the results with the recommendations of the Brazilian Ministry of Health. METHODS The study uses a hybrid model combining time series and queueing theory to predict the demand and estimate the number of required beds. Four patient flow scenarios were considered, according to bed requests, percentage of abandonments, and average length of stay in intensive care unit beds. The results were compared with Ministry of Health parameters. Data were obtained from the State Regulation Center for 2010 to 2011. RESULTS There were 33,101 medical requests for 268 regulated intensive care unit beds in Rio de Janeiro. With an average length of stay in regulated ICUs of 11.3 days, 595 active beds would be needed to ensure system stability and 628 beds to ensure a maximum waiting time of six hours. Deducting current abandonment rates due to clinical improvement (25.8%), these figures fall to 441 and 417. With an average length of stay of 6.5 days, the number of required beds would be 342 and 366, respectively; deducting abandonment rates, 254 and 275. The Brazilian Ministry of Health establishes a parameter of 118 to 353 beds. Although the number of regulated beds is within the recommended range, an increase of 122.0% in beds is required to guarantee system stability and of 134.0% for a maximum waiting time of six hours. CONCLUSIONS Adequate bed estimation must consider the reasons for limited timely access and patient flow management, in a scenario that combines prioritization of requests with the lowest average length of stay.
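The queueing side of such an estimate can be illustrated with a plain M/M/c (Erlang-C) model: given the arrival rate and the average length of stay, one searches for the smallest bed count that keeps the system stable and the expected wait acceptable. The sketch below uses the abstract's figures as inputs, but the M/M/c assumptions and the mean-wait criterion are our simplification of the paper's hybrid model, so the resulting count will not reproduce the paper's exact figures.

    // Erlang-C sketch: smallest bed count c such that the mean wait stays
    // below a target, for Poisson arrivals and exponential length of stay.
    public class IcuBeds {
        // Erlang-B via the standard stable recursion, then convert to Erlang-C.
        static double erlangC(int c, double a) {
            double b = 1.0;
            for (int k = 1; k <= c; k++) b = a * b / (k + a * b);
            double rho = a / c;
            return b / (1.0 - rho * (1.0 - b));   // probability an arrival waits
        }

        public static void main(String[] args) {
            double lambda = 33101.0 / 730.0;  // requests per day (2010-2011)
            double los = 11.3;                // average length of stay, days
            double a = lambda * los;          // offered load in erlangs
            double targetWaitDays = 6.0 / 24.0;
            for (int c = (int) Math.ceil(a) + 1; ; c++) {
                double meanWait = erlangC(c, a) * los / (c - a); // M/M/c mean wait in queue
                if (meanWait <= targetWaitDays) {
                    System.out.printf("load = %.1f erlangs -> %d beds%n", a, c);
                    break;
                }
            }
        }
    }

The offered load of roughly 512 erlangs already shows why the 268 regulated beds cannot stabilize the system: stability alone requires more beds than the load, before any waiting-time target is imposed.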
Abstract:
When considering time series data of variables describing agent interactions in social neurobiological systems, measures of regularity can provide a global understanding of such system behaviors. Approximate entropy (ApEn) was introduced as a nonlinear measure to assess the complexity of a system's behavior by quantifying the regularity of the generated time series. However, ApEn is not reliable when assessing and comparing the regularity of data series with short or inconsistent lengths, which often occur in studies of social neurobiological systems, particularly in dyadic human movement systems. Here, the authors present two normalized, nonmodified measures of regularity derived from the original ApEn, which are less dependent on time series length. The validity of the suggested measures was tested on well-established series (random and sine) prior to their empirical application describing the dyadic behavior of athletes in team games. The authors use one of the normalized ApEn measures to generate 95th percentile envelopes that can be used to test whether a particular social neurobiological system is highly complex (i.e., generates highly unpredictable time series). Results demonstrated that the suggested measures may be considered valid instruments for measuring and comparing complexity in systems that produce time series with inconsistent lengths.
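The underlying ApEn computation follows Pincus's original recipe: for each template of length m, count the fraction of templates staying within tolerance r, average the logarithms, and compare against length m + 1. A compact sketch of the original (non-normalized) measure follows; the parameter choices m = 2 and r = 0.2 times the standard deviation are the usual convention, assumed here rather than taken from the paper.

    // Approximate entropy ApEn(m, r), following Pincus's original definition.
    public class ApEnDemo {
        static double sd(double[] u) {
            double mean = 0, sq = 0;
            for (double v : u) mean += v;
            mean /= u.length;
            for (double v : u) sq += (v - mean) * (v - mean);
            return Math.sqrt(sq / u.length);
        }

        // phi(m) = average over i of ln(fraction of templates within tolerance r).
        static double phi(double[] u, int m, double r) {
            int n = u.length - m + 1;
            double sum = 0;
            for (int i = 0; i < n; i++) {
                int count = 0;
                for (int j = 0; j < n; j++) {
                    double d = 0;                        // Chebyshev distance of templates
                    for (int k = 0; k < m; k++)
                        d = Math.max(d, Math.abs(u[i + k] - u[j + k]));
                    if (d <= r) count++;                 // self-match included, as in the original
                }
                sum += Math.log((double) count / n);
            }
            return sum / n;
        }

        static double apEn(double[] u, int m, double r) {
            return phi(u, m, r) - phi(u, m + 1, r);
        }

        public static void main(String[] args) {
            double[] sine = new double[300], noise = new double[300];
            java.util.Random rng = new java.util.Random(42);
            for (int i = 0; i < 300; i++) {
                sine[i] = Math.sin(2 * Math.PI * i / 25.0);
                noise[i] = rng.nextGaussian();
            }
            // Regular series score low; unpredictable series score high.
            System.out.printf("ApEn(sine)  = %.3f%n", apEn(sine, 2, 0.2 * sd(sine)));
            System.out.printf("ApEn(noise) = %.3f%n", apEn(noise, 2, 0.2 * sd(noise)));
        }
    }

The dependence on the series length N enters through the template counts n = N - m + 1, which is exactly the dependence the paper's normalized variants are designed to attenuate.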
Abstract:
In the field of appearance-based robot localization, the mainstream approach uses a quantized representation of local image features. An alternative strategy is the exploitation of raw feature descriptors, thus avoiding approximations due to quantization. In this work, the quantized and non-quantized representations are compared with respect to their discriminativity, in the context of the robot global localization problem. Having demonstrated the advantages of the non-quantized representation, the paper proposes mechanisms to reduce the computational burden this approach would carry when applied in its simplest form. This reduction is achieved through a hierarchical strategy that gradually discards candidate locations, and by exploring two simplifying assumptions about the training data. The potential of the non-quantized representation is exploited by resorting to the entropy-discriminativity relation. The idea behind this approach is that the non-quantized representation facilitates the assessment of the distinctiveness of features through the entropy measure. Building on this finding, the robustness of the localization system is enhanced by modulating the importance of features according to the entropy measure. Experimental results support the effectiveness of this approach, as well as the validity of the proposed computation reduction methods.
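To illustrate the entropy-discriminativity idea in its simplest form: a feature whose matching scores spread evenly over all candidate locations is ambiguous (high entropy) and should count for little, while a feature whose scores peak at one location is distinctive and should count for a lot. The weighting form below is our own assumption for illustration, not the paper's exact scheme.

    // Illustrative entropy-based feature weighting over candidate locations.
    public class EntropyWeighting {
        static double weight(double[] matchScores) {
            double sum = 0;
            for (double s : matchScores) sum += s;
            double h = 0;
            for (double s : matchScores) {
                double p = s / sum;                      // normalize scores to a distribution
                if (p > 0) h -= p * Math.log(p);         // Shannon entropy
            }
            double hMax = Math.log(matchScores.length);  // entropy of the uniform distribution
            return 1.0 - h / hMax;                       // 1 = distinctive, 0 = ambiguous
        }

        public static void main(String[] args) {
            System.out.printf("peaked:  %.3f%n", weight(new double[]{0.9, 0.05, 0.05}));
            System.out.printf("uniform: %.3f%n", weight(new double[]{1, 1, 1}));
        }
    }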
Abstract:
Introduction: The application of the Contract-Relax with Antagonist Contraction (CRCA) technique and the Muscle Energy Technique (TME) promotes an increase in muscle flexibility, yet few studies compare the efficacy of the two. They share common aspects, such as a prior contraction of the muscle to be stretched, which is maximal in CRCA and a percentage of the maximum in TME. However, some evidence suggests that the force produced does not correspond to the force intended, so this aspect of TME requires explanation. Objectives: To confirm whether the CRCA technique and TME are effective for short-term hamstring stretching and, if so, to determine which is more effective. We also intended to assess whether the perception of effort during TME application corresponds to the force actually produced. Methods: An experimental study was conducted with 45 volunteers randomly assigned to the CRCA, TME, and Control groups. Passive knee-extension range of motion was assessed before and after applying the techniques, using a goniometer. In the participants subjected to TME, perception of effort was assessed by requesting a submaximal isometric contraction of 40%, measured with a hand-held dynamometer. Results: There was an effect of the techniques between assessments (repeated-measures ANOVA, time factor: p<0.001) and between groups (time*group: p<0.001). Comparing the groups pairwise, differences were found between the CRCA and Control groups (Games-Howell post hoc test: p=0.001) and between the TME and Control groups (p=0.009), with no differences between the CRCA and TME groups (p=0.376). The CRCA and TME groups obtained gains of 10.7° and 11.4°, respectively, with no significant differences between the gains (independent-samples t-test: p=0.599). There were significant differences between the 40% maximal voluntary isometric contraction produced and intended (Wilcoxon test: p=0.018). Conclusion: Both techniques were effective in increasing short-term hamstring flexibility. The effects were comparable, but given its lower complexity and lower demand, TME was considered more efficient. The perception of effort during TME application did not correspond to the intended effort, with a tendency to produce higher contraction intensities.