959 results for Multi-attribute reverse auctions
Abstract:
Background: Multi-attribute utility instruments (MAUIs) are preference-based measures that comprise a health state classification system (HSCS) and a scoring algorithm that assigns a utility value to each health state in the HSCS. When developing a MAUI from a health-related quality of life (HRQOL) questionnaire, a HSCS must first be derived. This typically involves selecting a subset of domains and items, because HRQOL questionnaires usually have too many items to be amenable to the valuation task required to develop the scoring algorithm for a MAUI. Currently, exploratory factor analysis (EFA) followed by Rasch analysis is recommended for deriving a MAUI from a HRQOL measure. Aim: To determine whether confirmatory factor analysis (CFA) is more appropriate and efficient than EFA for deriving a HSCS from the European Organisation for Research and Treatment of Cancer's core HRQOL questionnaire, the Quality of Life Questionnaire (QLQ-C30), given its well-established domain structure. Methods: QLQ-C30 (Version 3) data were collected from 356 patients receiving palliative radiotherapy for recurrent/metastatic cancer (various primary sites). The dimensional structure of the QLQ-C30 was tested with EFA and CFA, the latter informed by the established QLQ-C30 structure and the views of both patients and clinicians on which items are most relevant. Dimensions determined by EFA or CFA were then subjected to Rasch analysis. Results: CFA results generally supported the proposed QLQ-C30 structure (comparative fit index = 0.99, Tucker–Lewis index = 0.99, root mean square error of approximation = 0.04). EFA revealed fewer factors, and some items cross-loaded on multiple factors. Further assessment of dimensionality with Rasch analysis allowed better alignment of the EFA dimensions with those detected by CFA. Conclusion: CFA was more appropriate and efficient than EFA in producing clinically interpretable results for the HSCS of a proposed new cancer-specific MAUI. Our findings suggest that CFA should generally be recommended when deriving a preference-based measure from a HRQOL measure with an established domain structure.
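As a rough, hypothetical illustration of the EFA step described above (none of the numbers come from the study), the following Python sketch fits an exploratory factor model to simulated questionnaire responses with scikit-learn; the item count, domain count and loading pattern are all assumptions:

```python
# Hypothetical EFA sketch: simulated Likert-style item responses, 3 assumed domains.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_patients, n_items, n_domains = 356, 15, 3        # sizes loosely echo the abstract
latent = rng.normal(size=(n_patients, n_domains))  # unobserved HRQOL domains
loadings = rng.uniform(0.4, 0.9, (n_domains, n_items)) * (rng.random((n_domains, n_items)) > 0.6)
X = latent @ loadings + rng.normal(scale=0.5, size=(n_patients, n_items))

efa = FactorAnalysis(n_components=n_domains, rotation="varimax").fit(X)
print(np.round(efa.components_.T, 2))  # item-by-factor loadings; rows loading on
                                       # several factors are the cross-loadings the
                                       # abstract notes as an EFA drawback
```

A CFA, by contrast, would fix in advance which items load on which factors (e.g., with a structural equation modelling package) and assess fit via CFI, TLI and RMSEA.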
Abstract:
In the global construction context, the Best Value or Most Economically Advantageous Tender is becoming a widespread approach for contractor selection, as an alternative to other traditional awarding criteria such as the Lowest Price. In these multi-attribute tenders, the owner or auctioneer solicits proposals containing both a price bid and additional technical features. Once the proposals are received, each bidder's price bid is given an economic score according to a scoring rule, generally called an Economic Scoring Formula (ESF), and a technical score according to pre-specified criteria. Eventually, the contract is awarded to the bidder with the highest weighted overall score (economic + technical). However, ESF selection by auctioneers is invariably and paradoxically a highly intuitive process in practice, involving few theoretical or empirical considerations, despite having traditionally and mistakenly been considered objective due to its mathematical nature. This paper provides a taxonomic classification of a wide variety of ESFs and Abnormally Low Bid Criteria (ALBC) gathered in several countries with different tendering approaches. Practical implications concern the optimal design of price scoring rules in construction contract tenders, as well as future analyses of the effects of ESF and ALBC on competitive bidding behaviour.
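To make the awarding mechanics concrete, here is a minimal, hypothetical sketch of a weighted-score tender; the proportional low-bid ESF and the 85%-of-mean ALBC rule are illustrative assumptions, not rules from the paper:

```python
# Hypothetical multi-attribute tender: weighted economic + technical scoring.
def economic_score(bid, lowest_bid):
    """One common ESF family (assumed form): proportional low-bid scoring."""
    return 100 * lowest_bid / bid

def abnormally_low(bid, bids, threshold=0.85):
    """Illustrative ALBC: flag bids below 85% of the mean bid (assumed rule)."""
    return bid < threshold * sum(bids) / len(bids)

proposals = {"A": (900_000, 70), "B": (800_000, 60), "C": (600_000, 90)}  # (price, tech score)
prices = [p for p, _ in proposals.values()]
lowest, w_econ, w_tech = min(prices), 0.6, 0.4

for bidder, (price, tech) in proposals.items():
    total = w_econ * economic_score(price, lowest) + w_tech * tech
    flag = "  <- abnormally low?" if abnormally_low(price, prices) else ""
    print(f"{bidder}: overall score {total:.1f}{flag}")
```

Different ESF shapes (linear, parabolic, mean-referenced) can change which bidder wins, which is precisely why the paper argues the choice deserves more than intuition.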
Abstract:
The overlapping sound pressure waves that enter our brain via the ears and auditory nerves must be organized into a coherent percept. Modelling the regularities of the auditory environment and detecting unexpected changes in these regularities, even in the absence of attention, is a necessary prerequisite for orienting towards significant information, as well as for speech perception and communication. The processing of auditory information, in particular the detection of changes in the regularities of the auditory input, gives rise to neural activity in the brain that is seen as a mismatch negativity (MMN) response of the event-related potential (ERP) recorded by electroencephalography (EEG).

As recording the MMN requires neither a behavioural response nor attention towards the sounds, it can be done even with subjects who have problems communicating or difficulty performing a discrimination task, for example aphasic and comatose patients, newborns, and even fetuses. With MMN one can thus follow the evolution of central auditory processing from the very early, often critical stages of development, and also in subjects who cannot be examined with the more traditional behavioural measures of auditory discrimination. Indeed, recent studies show that central auditory processing, as indicated by MMN, is affected in different clinical populations, such as patients with schizophrenia, as well as during normal aging and abnormal childhood development. Moreover, the processing of auditory information can be selectively impaired for certain auditory attributes (e.g., sound duration, frequency) and can also depend on the context of the sound changes (e.g., speech or non-speech). Although its advantages over behavioural measures are undeniable, a major obstacle to larger-scale routine use of the MMN method, especially in clinical settings, is the relatively long duration of its measurement: approximately 15 minutes of recording time is typically needed to measure the MMN for a single auditory attribute, so recording a complete central auditory processing profile covering several attributes would require one to several hours. In this research, I have contributed to the development of new fast multi-attribute MMN recording paradigms in which several types and magnitudes of sound changes are presented in both speech and non-speech contexts in order to obtain a comprehensive profile of auditory sensory memory and discrimination accuracy in a short measurement time (altogether approximately 15 minutes for 5 auditory attributes). The speed of the paradigms makes them highly attractive for clinical research, their reliability brings fidelity to longitudinal studies, and the language context is especially suitable for studies of language impairments such as dyslexia and aphasia. In addition, I present an even more ecological paradigm and, importantly for the theory of MMN, show that MMN responses can be recorded entirely without a repetitive standard tone. All in all, these paradigms contribute to the development of the theory of auditory perception and increase the feasibility of MMN recordings in both basic and clinical research; they have already proven useful in studying, for instance, dyslexia, Asperger syndrome and schizophrenia.
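As a purely illustrative sketch of the multi-feature idea (the actual paradigms are specified in the underlying publications), the following Python snippet generates a stimulus sequence in which standards alternate with deviants drawn from five attribute types, never repeating a deviant type back to back; the attribute names are assumptions:

```python
# Hypothetical multi-feature sequence generator: standard/deviant alternation.
import random

DEVIANT_TYPES = ["frequency", "duration", "intensity", "location", "gap"]

def multi_feature_sequence(n_pairs, seed=0):
    """Alternate standards and deviants; avoid immediate deviant-type repeats."""
    rng = random.Random(seed)
    sequence, last = [], None
    for _ in range(n_pairs):
        deviant = rng.choice([d for d in DEVIANT_TYPES if d != last])
        sequence += ["standard", deviant]
        last = deviant
    return sequence

print(multi_feature_sequence(5))
```

Presenting every attribute change within one short block is what compresses five single-attribute recordings (roughly an hour or more) into about 15 minutes.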
Abstract:
Conventional seismic attribute analysis is not only time-consuming but can also yield several possible results; seismic attribute optimization and multi-attribute analysis are therefore needed. In this paper, the Fuyu oil layer in the Daqing oil field is our main object of study, where there are large differences between seismic attributes and well logs. Under these conditions, Independent Component Analysis (ICA) and the Kohonen neural network are introduced for seismic attribute optimization and multi-attribute analysis. The main contents are as follows: (1) The prevailing method of seismic attribute compression is principal component analysis (PCA). In this article, independent component analysis (ICA), which is superficially related to PCA but much more powerful, is applied to seismic reservoir characterization. The fundamentals, algorithms and applications of ICA are surveyed, and a comparison of ICA with PCA is presented. Based on the negentropy measure of independence, the FastICA algorithm is implemented. (2) Two ICA applications are included in this article. First, ICA is used directly to identify sedimentary characteristics; combined with geological and well data, ICA results can be used to predict sedimentary characteristics. Second, ICA treats the attributes as multi-dimensional random vectors: through the ICA transform, a few good new attributes can be obtained from a large set of seismic attributes, and the attributes obtained from ICA optimization are independent. (3) The Kohonen self-organizing neural network is studied. First, the characteristics of the network's structure and algorithm are analyzed in detail, and the traditional algorithm already used in seismic work is implemented; experimental results show that the Kohonen self-organizing network converges fast and classifies accurately. Second, because the resulting classification is not very exact, the boundaries are not quite clear and the speed is not high enough, the self-organizing feature map algorithm needs improvement. Here the frequency-sensitive principle is introduced and combined with the self-organizing feature map algorithm to obtain a frequency-sensitive self-organizing feature map algorithm; experimental results show that it performs better. (4) The Kohonen self-organizing neural network is used to classify seismic attributes; because the algorithm integrates many kinds of seismic features, confusing conclusions can be avoided. The result can be used in the division of a sand group's seismic facies, among other applications. When attributes are extracted from seismic data, some useful information is lost through differencing and differentiation, but multi-attribute analysis can compensate for this lost information to a certain degree.
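A hedged sketch of the ICA-based attribute optimization step, using scikit-learn's FastICA (whose default contrast function approximates negentropy) on synthetic data; the attribute counts and mixing model are assumptions, not values from the study:

```python
# Hypothetical sketch: compressing many correlated seismic attributes with FastICA.
import numpy as np
from sklearn.decomposition import FastICA, PCA

rng = np.random.default_rng(1)
sources = rng.laplace(size=(1000, 3))    # 3 independent underlying "geologic" factors
mixing = rng.normal(size=(3, 8))
attributes = sources @ mixing + 0.05 * rng.normal(size=(1000, 8))  # 8 seismic attributes

ica = FastICA(n_components=3, random_state=0)       # negentropy-based FastICA
independent_attrs = ica.fit_transform(attributes)   # few, statistically independent
pca_attrs = PCA(n_components=3).fit_transform(attributes)  # PCA: merely decorrelated
```

The independent components could then be fed to a Kohonen self-organizing map for unsupervised classification, as in points (3) and (4).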
Abstract:
In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions, and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios. In the first, the current state of the world, under which the decisions are to be made, is known in advance. In the second, the current state of the world is unknown at the time of making decisions. For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing the values of the objectives). Since the size of the guiding upper-bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper-bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and we use such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches. The first is based on linear programming; the second uses a distance-based algorithm (employing a measure of the distance between a point and a convex cone); the third makes use of matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics grounded in the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before. For decision-making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves efficiency.
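A minimal sketch of the Pareto machinery underlying the thesis (maximising convention; the numbers are arbitrary):

```python
# Pareto dominance and the set of maximal multi-objective utility vectors.
def dominates(u, v):
    """u dominates v: at least as good everywhere, strictly better somewhere."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_maximal(vectors):
    """Keep only undominated vectors (the 'maximal values' the thesis computes)."""
    return [u for u in vectors if not any(dominates(v, u) for v in vectors if v != u)]

utilities = [(3, 5), (4, 4), (2, 6), (3, 3), (1, 1)]
print(pareto_maximal(utilities))   # -> [(3, 5), (4, 4), (2, 6)]
```

Imprecise trade-offs shrink this set further: a stated preference such as "one unit of objective 1 is worth at least two units of objective 2" induces extra dominance relations, so fewer vectors survive, which is exactly the pruning effect the thesis exploits.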
Abstract:
For optimal solutions in health care, decision makers inevitably must evaluate trade-offs, which calls for multi-attribute valuation methods. Researchers have proposed using best-worst scaling (BWS) methods, which extract information from respondents by asking them to identify the best and worst items in each choice set. While a companion paper describes the different types of BWS, their applications and their advantages and downsides, this contribution expounds their relationship with microeconomic theory, which also has implications for statistical inference. The article is devoted to the microeconomic foundations of preference measurement, also addressing issues such as scale invariance and scale heterogeneity. Furthermore, the paper discusses the basics of preference measurement using rating, ranking and stated-choice data in the light of the findings of the preceding section. Moreover, the paper gives an introduction to the use of stated-choice data and juxtaposes BWS with the microeconomic foundations.
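The simplest BWS analysis, best-minus-worst counting, can be illustrated with a hypothetical sketch (item names and responses are invented):

```python
# Hypothetical best-worst scaling tallies: best-minus-worst counts per item.
from collections import Counter

# Each tuple: (items shown in the choice set, item chosen best, item chosen worst)
responses = [
    (("pain", "mobility", "mood"), "mobility", "pain"),
    (("pain", "fatigue", "mood"), "mood", "pain"),
    (("mobility", "fatigue", "mood"), "mobility", "fatigue"),
]

best, worst, shown = Counter(), Counter(), Counter()
for items, b, w in responses:
    best[b] += 1
    worst[w] += 1
    shown.update(items)

for item in sorted(shown):
    score = (best[item] - worst[item]) / shown[item]   # standardised B-W score
    print(f"{item}: {score:+.2f}")
```

The paper's microeconomic point is that such counts are only a convenient summary; linking them to utility theory requires an explicit random-utility model of how respondents pick the extremes.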
Abstract:
Many practical problems arising in the field of logistics can be modelled as vehicle routing problems. In general, this family of problems involves designing routes, starting and ending at a depot, that are used to distribute goods to a number of geographically dispersed customers while minimizing routing costs. Depending on the type of problem, one or several depots may be present. Vehicle routing problems are among the hardest combinatorial problems to solve. In this thesis, we study a combinatorial optimization problem, belonging to the class of vehicle routing problems, that arises in the context of transportation networks. We introduce a new problem, mainly inspired by the collection of milk from production farms and the redistribution of the collected product to processing plants in the province of Quebec. Two variants of this problem are considered. The first aims to design a tactical routing plan for the milk collection-redistribution problem over a given horizon, assuming that the production level over the horizon is fixed. The second variant aims to provide a more precise plan by taking into account potential variations in the production level that may occur over the horizon. In the first part of this thesis, we describe an exact algorithm for the first variant, which is characterized by time windows, multiple depots and a heterogeneous vehicle fleet, and whose objective is to minimize the routing cost. To this end, the problem is modelled as a multi-attribute vehicle routing problem. The exact algorithm is based on column generation involving an elementary shortest path algorithm with resource constraints. In the second part, we design an exact algorithm for the second variant. Here the problem is modelled as a multi-period vehicle routing problem that explicitly accounts for potential variations in the production level over a given horizon. New strategies are proposed for solving the elementary shortest path problem with resource constraints, which in this case has a particular structure owing to the multi-period character of the general problem. To solve instances of realistic size within reasonable computation times, a heuristic approach is required. The third part proposes an adaptive large neighbourhood search algorithm in which many new exploration and exploitation strategies are introduced to improve the algorithm's performance in terms of solution quality and computation time.
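For flavour only, here is a drastically simplified adaptive large neighbourhood search skeleton on a toy routing instance; it is not the thesis algorithm (no time windows, depots or vehicle fleet), and every parameter is an assumption:

```python
# Toy ALNS skeleton: destroy/repair with adaptive operator weights (assumed setup).
import math, random

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(20)]

def cost(tour):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def destroy_random(tour, k=4):               # remove k random customers
    removed = random.sample(tour, k)
    return [c for c in tour if c not in removed], removed

def destroy_segment(tour, k=4):              # remove a contiguous segment
    i = random.randrange(len(tour) - k)
    return tour[:i] + tour[i + k:], tour[i:i + k]

def repair_greedy(partial, removed):         # cheapest-insertion repair
    for c in removed:
        best = min(range(len(partial) + 1),
                   key=lambda i: cost(partial[:i] + [c] + partial[i:]))
        partial = partial[:best] + [c] + partial[best:]
    return partial

operators, weights = [destroy_random, destroy_segment], [1.0, 1.0]
tour = list(range(len(pts)))
for _ in range(300):
    i = random.choices(range(len(operators)), weights)[0]  # roulette-wheel pick
    candidate = repair_greedy(*operators[i](tour))
    if cost(candidate) < cost(tour):                       # accept improvements only
        tour = candidate
        weights[i] += 0.2                                  # adaptive reward
print(round(cost(tour), 3))
```

Real ALNS implementations also accept some worsening moves (e.g., simulated-annealing style) and periodically decay the operator weights; the exploration and exploitation strategies in the thesis are refinements along those lines.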
Abstract:
The thesis analyses changes in the procurement and contracting policy of the Brazilian federal public administration, systematically describing the six cases in which the rules and procedures underwent substantial change in the form of general laws or statutes: the centralization of procurement in the Vargas era, at two decisive moments (1931 and 1940); the revision of the bidding rules by Decreto-lei n. 200, in the context of the administrative reform of the Castello Branco government; the issuing of a procurement statute (Decreto-lei n. 2.300) under the Sarney government; the passage in the legislature of a procurement law aimed at combating corruption and the steering of public contracts (Lei 8.666); the failed attempt at a new law aligned with the managerial reform of the first Fernando Henrique Cardoso government; and the creation of the pregão (reverse auction) as a new bidding modality in 2000. The research focuses on the political process of problem formulation, solution specification and decision making, based on John Kingdon's model, unfolding the analysis into the streams of politics, emerging problems and solutions in each specific historical context. The six cases are described through structured narratives and compared using the categories of the theoretical model to elucidate how the process of change unfolded and which relevant actors, ideas, models and political events explain its circumstances and outcomes.
Abstract:
Transparency in the Brazilian public administration is increasingly in focus, and the ComprasNET procurement portal is part of the Federal Government's transparency portals. In 2011, of the nearly 60 billion reais spent on investments and sundry expenses, 22 billion were executed through ComprasNET under the electronic reverse auction (pregão eletrônico) modality. This dissertation analyses price variability for a specific material item, A4 75 g/m² paper, statistically verifying whether prices vary across the many auctions held by agencies of the Federal Public Administration. If so, the study seeks to identify the administrative procedures that may have caused such divergence. On that basis, suggestions were presented for changing these administrative procedures in order to reduce the amounts paid in electronic reverse auctions. The recommendations were based on current legislation and on decisions and rulings of the TCU, AGU and other bodies of the Federal Public Administration. In this way, the work responds to society's desire for better application of the public resources collected through the taxes and fees paid by the population.
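A hedged sketch of the kind of variability check described, on invented winning prices for the same item across auctions:

```python
# Hypothetical dispersion check for winning unit prices of A4 75 g/m2 paper (BRL).
import statistics

prices = [10.90, 12.40, 9.80, 14.10, 11.30, 13.75, 10.20]   # invented auction results

mean = statistics.mean(prices)
cv = statistics.stdev(prices) / mean              # coefficient of variation
print(f"mean R$ {mean:.2f}, CV {cv:.1%}")         # a high CV signals price dispersion
```

A fuller analysis would go further, e.g., grouping by agency or region and testing whether between-group variance exceeds within-group variance.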
Abstract:
This study investigates the impact of lowest-price bidding, conducted through electronic reverse auctions (pregão eletrônico), on the performance of continuous service contracts executed by the AGU Administrative Superintendence in Pernambuco (SAD/PE) between 2006 and 2010. Its premise was that lowest-price contracting can, owing to its very characteristics, stimulate an excessive reduction in the prices offered by bidders and give rise to contracts with very low values that interfere negatively with service performance, generating contractual infractions and shortening the useful life of continuous contracts. The apparent savings produced by the fierce competition in the reverse auctions that characterize the pregão modality can, in the medium and long term, be called into question. The results confirmed the premise, showing that 55% of the contracts originating from electronic reverse auctions had infractions and 31% were unilaterally terminated for breach of contractual clauses. A moderate, inversely proportional relationship was identified between the initial savings generated in the bidding and contract execution time, suggesting that the greater the difference between the reference value and the contracted value, the shorter the contract's execution time: many contracts priced far below market value had short useful lives and many infractions. The data analyses point to the need to qualify the adoption of the pregão bidding modality, rethinking its suitability for continuous services.
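The reported inverse relationship can be illustrated with a correlation on synthetic data (the numbers below are invented; the study reports only a moderate association):

```python
# Hypothetical check: auction savings vs. contract lifetime (synthetic data).
import numpy as np

savings_pct = np.array([5, 12, 18, 25, 33, 40, 48])       # discount vs reference price
months_survived = np.array([30, 28, 24, 20, 15, 12, 8])   # contract execution time

r = np.corrcoef(savings_pct, months_survived)[0, 1]
print(f"Pearson r = {r:.2f}")   # strongly negative here; the study found a moderate one
```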
Abstract:
The process of choosing the best components to build systems has become increasingly complex. It becomes more critical when many combinations of components must be considered in the context of an architectural configuration. These circumstances occur mainly when dealing with systems involving critical requirements, such as timing constraints in distributed multimedia systems, network bandwidth in mobile applications or reliability in real-time systems. This work proposes a process for the dynamic selection of architectural configurations based on the system's non-functional requirements, which can be used during dynamic adaptation. The proposal uses Multi-Attribute Utility Theory (MAUT) for decision making over a finite set of possibilities involving multiple criteria. Additionally, a metamodel is proposed that can describe the application's requirements in terms of non-functional requirement criteria and their expected values, expressing them so that the desired configuration can be selected. As a proof of concept, a module that performs the dynamic choice of configurations, MoSAC, was implemented using a component-based development (CBD) approach, carrying out the selection of architectural configurations through the proposed multi-criteria selection process. This work also presents a case study in which an application was developed in the digital TV context to evaluate the time the module takes to return a valid configuration for use in a middleware with self-adaptive features, the AdaptTV middleware.
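A minimal MAUT-style selection sketch in the spirit of the proposal (attribute names, weights and values are all assumptions; MoSAC's actual model is richer):

```python
# Hypothetical additive-utility selection among architectural configurations.
configs = {
    "config_A": {"latency_ms": 120, "bandwidth_mbps": 4.0, "reliability": 0.95},
    "config_B": {"latency_ms": 60,  "bandwidth_mbps": 2.5, "reliability": 0.90},
    "config_C": {"latency_ms": 90,  "bandwidth_mbps": 3.2, "reliability": 0.99},
}
weights = {"latency_ms": 0.5, "bandwidth_mbps": 0.2, "reliability": 0.3}
higher_is_better = {"latency_ms": False, "bandwidth_mbps": True, "reliability": True}

def normalised(attr, value):
    vals = [c[attr] for c in configs.values()]
    u = (value - min(vals)) / (max(vals) - min(vals))
    return u if higher_is_better[attr] else 1 - u

def utility(cfg):   # additive multi-attribute utility
    return sum(weights[a] * normalised(a, v) for a, v in cfg.items())

print(max(configs, key=lambda name: utility(configs[name])))   # -> config_C here
```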
Abstract:
In multi-attribute utility theory, it is often not easy to elicit precise values for the scaling weights representing the relative importance of criteria. A very widespread approach is to gather incomplete information. A recent approach for dealing with such situations is to use information about each alternative's intensity of dominance, known as dominance measuring methods. Different dominance measuring methods have been proposed, and simulation studies have been carried out to compare these methods with each other and with other approaches, but only when ordinal information about weights is available. In this paper, we use Monte Carlo simulation techniques to analyse the performance of and adapt such methods to deal with weight intervals, weights fitting independent normal probability distributions or weights represented by fuzzy numbers. Moreover, dominance measuring method performance is also compared with a widely used methodology dealing with incomplete information on weights, the stochastic multicriteria acceptability analysis (SMAA). SMAA is based on exploring the weight space to describe the evaluations that would make each alternative the preferred one.
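The SMAA baseline the paper compares against can be sketched in a few lines: sample weight vectors uniformly from the simplex and count how often each alternative comes out on top (the values below are synthetic):

```python
# SMAA-style Monte Carlo: rank-1 acceptability indices under unknown weights.
import numpy as np

rng = np.random.default_rng(42)
# Rows: alternatives; columns: normalised criterion values (synthetic)
values = np.array([[0.9, 0.2, 0.5],
                   [0.4, 0.8, 0.6],
                   [0.6, 0.6, 0.4]])

wins = np.zeros(len(values))
for _ in range(10_000):
    w = rng.dirichlet(np.ones(values.shape[1]))   # uniform over the weight simplex
    wins[np.argmax(values @ w)] += 1

print(wins / wins.sum())   # share of weight space where each alternative ranks first
```

Restricting the sampling to weight intervals, normal distributions or fuzzy numbers, as the paper does for dominance measuring methods, amounts to replacing the Dirichlet draw with the corresponding constrained sampler.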
Abstract:
Knowledge resource reuse has become a popular approach within the ontology engineering field, mainly because it can speed up the ontology development process, saving time and money and promoting the application of good practices. The NeOn Methodology provides guidelines for reuse. These guidelines include the selection of the most appropriate knowledge resources for reuse in ontology development. This is a complex decision-making problem where different conflicting objectives, like the reuse cost, understandability, integration workload and reliability, have to be taken into account simultaneously. GMAA is a PC-based decision support system based on an additive multi-attribute utility model that is intended to allay the operational difficulties involved in the Decision Analysis methodology. The paper illustrates how it can be applied to select multimedia ontologies for reuse to develop a new ontology in the multimedia domain. It also demonstrates that the sensitivity analyses provided by GMAA are useful tools for making a final recommendation.
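A sensitivity analysis in the GMAA spirit can be sketched by sweeping one weight and rescaling the rest (all utilities and weights below are invented):

```python
# Hypothetical weight sensitivity: when does the recommended ontology change?
import numpy as np

# Utilities of three candidate ontologies on (cost, understandability, reliability)
U = np.array([[0.8, 0.4, 0.6],
              [0.5, 0.9, 0.5],
              [0.6, 0.6, 0.8]])
base = np.array([0.4, 0.3, 0.3])

for w_cost in np.linspace(0.1, 0.7, 7):
    rest = (1 - w_cost) * base[1:] / base[1:].sum()   # keep other weights in proportion
    w = np.array([w_cost, *rest])
    print(f"w_cost={w_cost:.1f} -> best ontology index {np.argmax(U @ w)}")
```

If the winner is stable across the sweep, the recommendation is robust; a flip pinpoints the weight values on which a final recommendation hinges.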
Abstract:
In the medium to long term after a nuclear accident, the contamination of drinking water sources, fish and other aquatic foodstuffs, irrigation supplies and people's exposure during recreational activities may create considerable public concern, even though dose assessment may in certain situations indicate lesser importance than for other sources, as clearly experienced in the aftermath of past accidents. In such circumstances there are a number of available countermeasure options, ranging from specific chemical treatment of lakes to bans on fish ingestion or on the use of water for crop irrigation. The potential actions can be broadly grouped into four main categories: chemical, biological, physical and social. In some cases a combination of actions may be the optimal strategy, and a decision support system (DSS) like MOIRA-PLUS can be of great help in optimising a decision. A further option is, of course, not to take any remedial action, although this may also have significant socio-economic repercussions that should be adequately evaluated. MOIRA-PLUS is designed to allow a reliable assessment of the long-term evolution of the radiological situation and of feasible alternative rehabilitation strategies, including an objective evaluation of their social, economic and ecological impacts in a rational and comprehensive manner. MOIRA-PLUS also features a decision analysis methodology, making use of multi-attribute analysis, which can take into account the preferences and needs of different types of stakeholders. The main functions and elements of the system are briefly described, and conclusions from end-users' experiences with the system are discussed, including exercises involving the organizations responsible for emergency management and the affected services, as well as different local and regional stakeholders. MOIRA-PLUS has proven to be a mature, user-friendly system that is relatively easy to set up. It can support better decision making by enabling a realistic evaluation of the complete impacts of possible recovery strategies. The interaction with stakeholders has also identified improvements to the system that have recently been implemented.