919 results for Superiority and Inferiority Multi-criteria Ranking (SIR) Method


Relevance: 100.00%

Abstract:

We present a generic method/model for multi-objective design optimization of laminated composite components, based on the vector evaluated particle swarm optimization (VEPSO) algorithm. VEPSO is a novel, co-evolutionary multi-objective variant of the popular particle swarm optimization (PSO) algorithm. In the current work, a modified version of the VEPSO algorithm for discrete variables has been developed and implemented successfully for the multi-objective design optimization of composites. The problem is formulated with the multiple objectives of minimizing the weight and total cost of the composite component while achieving a specified strength. The primary optimization variables are the number of layers, their stacking sequence (the orientation of the layers) and the thickness of each layer. Classical lamination theory is used to determine the stresses in the component, and the design is evaluated against three failure criteria: the failure-mechanism-based criterion, the maximum stress criterion and the Tsai-Wu criterion. The optimization method is validated for a number of different loading configurations: uniaxial, biaxial and bending loads. The design optimization has been carried out both for variable stacking sequences and for fixed standard stacking schemes, and a comparative study of the different design configurations evolved is presented. (C) 2007 Elsevier Ltd. All rights reserved.
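
As a loose illustration of the co-evolutionary exchange that VEPSO relies on, the sketch below runs one swarm per objective and steers each swarm's particles toward the best particle of the neighbouring swarm. The two toy objectives, bounds and PSO constants are assumptions for illustration; the paper's discrete laminate variables and failure-criteria evaluation are not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy objectives standing in for laminate weight and cost.
def f1(x): return np.sum(x**2, axis=1)
def f2(x): return np.sum((x - 2.0)**2, axis=1)

def vepso(objectives, dim=4, swarm_size=20, iters=100, w=0.7, c=1.5):
    """One swarm per objective; each swarm's social term is pulled toward
    the best particle of the neighbouring swarm (the VEPSO exchange)."""
    m = len(objectives)
    pos = [rng.uniform(-5, 5, (swarm_size, dim)) for _ in range(m)]
    vel = [np.zeros((swarm_size, dim)) for _ in range(m)]
    pbest = [p.copy() for p in pos]
    pbest_val = [objectives[k](pos[k]) for k in range(m)]
    gbest = [pbest[k][np.argmin(pbest_val[k])].copy() for k in range(m)]
    for _ in range(iters):
        for k in range(m):
            other = gbest[(k + 1) % m]        # neighbouring swarm's leader
            r1, r2 = rng.random((2, swarm_size, dim))
            vel[k] = w*vel[k] + c*r1*(pbest[k] - pos[k]) + c*r2*(other - pos[k])
            pos[k] = pos[k] + vel[k]
            val = objectives[k](pos[k])
            better = val < pbest_val[k]
            pbest[k][better] = pos[k][better]
            pbest_val[k][better] = val[better]
            gbest[k] = pbest[k][np.argmin(pbest_val[k])].copy()
    return gbest

print(vepso([f1, f2]))   # each leader compromises toward the other objective
```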

Relevance: 100.00%

Abstract:

In this paper, we present a generic method/model for multi-objective design optimization of laminated composite components, based on the Vector Evaluated Artificial Bee Colony (VEABC) algorithm. VEABC is a parallel, vector evaluated, swarm-intelligence multi-objective variant of the Artificial Bee Colony (ABC) algorithm. In the current work, a modified version of the VEABC algorithm for discrete variables has been developed and implemented successfully for the multi-objective design optimization of composites. The problem is formulated with the multiple objectives of minimizing the weight and total cost of the composite component while achieving a specified strength. The primary optimization variables are the number of layers, their stacking sequence (the orientation of the layers) and the thickness of each layer. Classical lamination theory is used to determine the stresses in the component, and the design is evaluated against three failure criteria: the failure-mechanism-based criterion, the maximum stress criterion and the Tsai-Wu criterion. The optimization method is validated for a number of different loading configurations: uniaxial, biaxial and bending loads. The design optimization has been carried out both for variable stacking sequences and for fixed standard stacking schemes, and a comparative study of the different design configurations evolved is presented. Finally, the performance is evaluated against other nature-inspired techniques, including Particle Swarm Optimization (PSO), Artificial Immune System (AIS) and the Genetic Algorithm (GA); the performance of ABC is on par with that of PSO, AIS and GA for all loading configurations. (C) 2009 Elsevier B.V. All rights reserved.
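
The Tsai-Wu criterion named above has a standard plane-stress form that is easy to evaluate directly. The sketch below computes its failure index from ply stresses and strengths; the strength values in the example are hypothetical placeholders, not data from the paper.

```python
import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Plane-stress Tsai-Wu failure index; failure is predicted when the
    index reaches 1. Xc and Yc are compressive strength magnitudes (>0)."""
    F1  = 1/Xt - 1/Xc
    F2  = 1/Yt - 1/Yc
    F11 = 1/(Xt*Xc)
    F22 = 1/(Yt*Yc)
    F66 = 1/S**2
    F12 = -0.5*math.sqrt(F11*F22)   # common default interaction term
    return (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2
            + F66*t12**2 + 2*F12*s1*s2)

# Hypothetical carbon/epoxy ply strengths (MPa) and a trial stress state.
print(tsai_wu_index(s1=600, s2=20, t12=40,
                    Xt=1500, Xc=1200, Yt=50, Yc=250, S=70))
```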

Relevance: 100.00%

Abstract:

Learning to rank from relevance judgments is an active research area. Itemwise score regression, pairwise preference satisfaction, and listwise structured learning are the major techniques in use. Listwise structured learning has recently been applied to optimize important non-decomposable ranking criteria such as AUC (area under the ROC curve) and MAP (mean average precision). We propose new, almost-linear-time algorithms to optimize two other criteria widely used to evaluate search systems, MRR (mean reciprocal rank) and NDCG (normalized discounted cumulative gain), in the max-margin structured learning framework. We also demonstrate that different ranking criteria may call for different feature maps. Search applications should not be optimized in favor of a single criterion, because they need to cater to a variety of queries; e.g., MRR is best for navigational queries, while NDCG is best for informational queries. A key contribution of this paper is to fold multiple ranking loss functions into a multi-criteria max-margin optimization. The result is a single, robust ranking model that comes close to the best accuracy of learners trained on individual criteria. In fact, experiments on the popular LETOR and TREC data sets show that, contrary to conventional wisdom, a test criterion is often not best served by training with the same criterion.
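
For reference, the two criteria the paper optimizes are straightforward to compute at evaluation time. Below is a minimal sketch of MRR and NDCG (using the common 2^rel - 1 gain and log2 discount); the paper's max-margin training is not shown.

```python
import math

def mrr(ranked_relevance_lists):
    """Mean reciprocal rank: average of 1/(rank of first relevant hit)."""
    total = 0.0
    for rels in ranked_relevance_lists:
        total += next((1.0/(i+1) for i, r in enumerate(rels) if r > 0), 0.0)
    return total / len(ranked_relevance_lists)

def ndcg(rels, k=None):
    """NDCG at k: discounted cumulative gain over the ideal ordering."""
    k = k or len(rels)
    dcg   = sum((2**r - 1)/math.log2(i+2) for i, r in enumerate(rels[:k]))
    ideal = sorted(rels, reverse=True)
    idcg  = sum((2**r - 1)/math.log2(i+2) for i, r in enumerate(ideal[:k]))
    return dcg/idcg if idcg > 0 else 0.0

# A navigational-style query (one relevant hit) vs a graded informational one.
print(mrr([[0, 1, 0], [1, 0, 0]]))   # (1/2 + 1) / 2 = 0.75
print(ndcg([3, 2, 0, 1]))
```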

Relevance: 100.00%

Abstract:

This study presents an overview of seismic microzonation and existing methodologies, together with a newly proposed methodology that covers all aspects. Earlier seismic microzonation methods focused on parameters that affect structures or foundations, but seismic microzonation is now generally recognized as an important component of urban planning and disaster management, so it should evaluate all possible earthquake-induced hazards and represent them through their spatial distribution. This paper presents a new methodology for seismic microzonation based on the location of the study area and the hazards possibly associated with it. The method consists of seven interlinked steps, each with a defined output; addressing a single step and its result, as is widely practiced, does not constitute seismic microzonation. The paper also discusses the importance of geotechnical aspects in seismic microzonation and how they affect the final map. For the case study, seismic hazard values at rock level are estimated from the seismotectonic parameters of the region using deterministic and probabilistic seismic hazard analysis. Surface-level hazard values are estimated through site-specific study and local site effects based on site classification/characterization. The liquefaction hazard is estimated using standard penetration test data. These hazard parameters are integrated in a Geographical Information System (GIS) using the Analytic Hierarchy Process (AHP), a multi-criteria evaluation technique, to estimate a hazard index: each theme and its features are assigned weights and ranks according to a consensus opinion about their relative significance to the seismic hazard. The hazard values are then integrated through spatial union to obtain the deterministic microzonation map and the probabilistic microzonation map for a specific return period. Seismological parameters are more widely used for microzonation than geotechnical parameters, but this study shows that the hazard index values depend on site-specific geotechnical parameters.
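
A minimal sketch of the AHP step described above: derive theme weights from a pairwise-comparison matrix via its principal eigenvector, then combine normalized theme ranks into a hazard index for one grid cell. The comparison matrix and rank values here are hypothetical.

```python
import numpy as np

def ahp_weights(pairwise):
    """Weights from an AHP pairwise-comparison matrix via its principal
    eigenvector, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

# Hypothetical 3-theme comparison: shaking vs site class vs liquefaction.
A = [[1,   2,   4],
     [1/2, 1,   2],
     [1/4, 1/2, 1]]
w = ahp_weights(A)

# Hazard index for one grid cell = weighted sum of normalized theme ranks.
ranks = np.array([0.8, 0.6, 0.3])   # hypothetical normalized ranks
print(w, float(w @ ranks))
```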

Relevance: 100.00%

Abstract:

The Grating Compression Transform (GCT) is a two-dimensional analysis of the speech signal that has been shown to be effective for multi-pitch tracking in speech mixtures. Existing multi-pitch tracking methods based on the GCT obtain pitch tracks with a Kalman filter framework, which requires training the filter parameters on true pitch tracks. We propose an unsupervised method for obtaining multiple pitch tracks. In the proposed method, multiple pitch tracks are modeled using the time-varying means of a Gaussian mixture model (GMM), referred to as a TVGMM. The TVGMM parameters are estimated from the multiple pitch values obtained at each frame of a given utterance from different patches of the spectrogram using the GCT. We evaluate the proposed method on all-voiced speech mixtures as well as random speech mixtures with well-separated and close pitch tracks. The TVGMM achieves multi-pitch tracking with 51% and 53% of multi-pitch estimates having error <= 20% for random mixtures and all-voiced mixtures, respectively, and yields lower root mean squared error in pitch track estimation than Kalman filtering.
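
The sketch below is only a greedy stand-in for the TVGMM idea, not the paper's GMM estimation: each track keeps a running ("time-varying") mean, frame-wise pitch candidates are matched to the nearest track, and the matched mean is smoothed toward the candidate. The synthetic two-track mixture is an assumption for illustration.

```python
def track_pitches(frame_candidates, init_means, alpha=0.3):
    """Greedy multi-pitch bookkeeping: match each frame's candidates to the
    nearest track mean, then update that mean by exponential smoothing.
    The paper estimates the time-varying means with a GMM instead."""
    means = list(init_means)
    tracks = [[] for _ in means]
    for cands in frame_candidates:
        taken = set()
        for c in sorted(cands):
            k = min((k for k in range(len(means)) if k not in taken),
                    key=lambda k: abs(means[k] - c))
            taken.add(k)
            means[k] = (1 - alpha)*means[k] + alpha*c
            tracks[k].append(means[k])
    return tracks

# Two synthetic converging pitch tracks around 120 Hz and 210 Hz.
frames = [(120 + 2*t, 210 - 3*t) for t in range(10)]
t1, t2 = track_pitches(frames, init_means=[120.0, 210.0])
print([round(x, 1) for x in t1])
print([round(x, 1) for x in t2])
```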

Relevance: 100.00%

Abstract:

We are at the cusp of a historic transformation of both the communication system and the electricity system. This creates challenges as well as opportunities for the study of networked systems. Problems in these systems typically involve a huge number of endpoints that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.

This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve the reliability as well as the efficiency of communication devices. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of the system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation, and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties for the behavior of existing algorithms and motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel and use our prototype to compare it with existing MP-TCP algorithms.
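
To make the coupling idea concrete, here is a toy discrete-time window simulation in the spirit of linked-increase MP-TCP designs. It is not Balia's dynamics (the thesis derives those from the fluid model), and the loss probabilities are hypothetical.

```python
import random

random.seed(1)

def coupled_aimd(loss_probs, steps=5000):
    """Toy coupled multipath window update: each round, path r grows by
    w_r / (total window), so the aggregate increase of all paths matches a
    single TCP flow (TCP-friendliness); a loss halves that path's window,
    the usual AIMD decrease."""
    w = [1.0] * len(loss_probs)
    for _ in range(steps):
        total = sum(w)
        for r in range(len(w)):
            if random.random() < loss_probs[r]:
                w[r] = max(w[r] / 2, 1.0)
            else:
                w[r] += w[r] / total   # coupled increase per round
    return w

# Two paths, the second ten times lossier: the coupled rule shifts
# most of the window onto the cleaner path.
print(coupled_aimd([0.001, 0.01]))
```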

Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost, such as power loss. It is a mixed-integer nonlinear program and hence hard to solve. We propose a heuristic algorithm based on the recently developed convex relaxation of the optimal power flow problem; the algorithm is efficient and successfully computes an optimal configuration on all networks that we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose an even faster algorithm that incurs a loss in optimality of less than 3% on the test networks.
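
For scale, the underlying search problem can be stated as a brute-force baseline: enumerate closed-switch subsets, keep the radial (spanning-tree) ones, and pick the cheapest. The per-line loss weights below are hypothetical, and this exponential enumeration is exactly what the thesis's relaxation-guided heuristic avoids.

```python
from itertools import combinations

def is_radial(n_nodes, edges):
    """Union-find acyclicity check: n_nodes - 1 acyclic edges on n_nodes
    nodes form a spanning tree, i.e. a radial configuration."""
    parent = list(range(n_nodes))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v, _ in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def reconfigure(n_nodes, switchable):
    """Brute-force feeder reconfiguration baseline: try every subset of
    closed switches of spanning-tree size and keep the radial one with
    minimum total 'loss' (a hypothetical per-line weight here)."""
    best = None
    for subset in combinations(switchable, n_nodes - 1):
        if is_radial(n_nodes, subset):
            loss = sum(w for _, _, w in subset)
            if best is None or loss < best[0]:
                best = (loss, subset)
    return best

# 4-bus toy loop: lines are (from, to, hypothetical loss weight).
lines = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.5), (3, 0, 2.5), (1, 3, 0.5)]
print(reconfigure(4, lines))
```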

Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective, such as generation cost or power loss. Traditionally, OPF is solved in a centralized manner. With the increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because the power flow equations are nonlinear and Kirchhoff's laws are global. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results showing that a globally optimal solution of OPF over a radial network can be obtained through a second-order cone program (SOCP) or semidefinite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms, which solve their optimization subproblems with iterative methods, the proposed solutions exploit problem structure to greatly reduce computation time. Specifically, for balanced networks our decomposition yields closed-form solutions for these subproblems, speeding up convergence by 1000x in simulations; for unbalanced networks the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, reducing computation time by 100x compared with iterative methods.
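
Below is a generic ADMM iteration with closed-form subproblem updates, shown on a toy lasso split rather than the thesis's OPF decomposition. The point it illustrates is that when both updates are closed-form, no inner iterative solver is needed.

```python
import numpy as np

def soft(v, k):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """ADMM for min ||Ax - b||^2 / 2 + lam*||z||_1 subject to x = z.
    Both subproblems have closed forms: a cached linear solve for x and
    soft-thresholding for z."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    M = np.linalg.inv(A.T @ A + rho*np.eye(n))   # cached x-update factor
    Atb = A.T @ b
    for _ in range(iters):
        x = M @ (Atb + rho*(z - u))   # closed-form x-update
        z = soft(x + u, lam/rho)      # closed-form z-update
        u = u + x - z                 # dual variable ascent
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10); x_true[:3] = [2.0, -1.0, 0.5]
print(np.round(admm_lasso(A, A @ x_true), 2))   # recovers the sparse signal
```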

Relevance: 100.00%

Abstract:

The lengths of otoliths and other skeletal structures recovered from the scats of pinnipeds, such as Steller sea lions (Eumetopias jubatus), correlate with body size and can be used to estimate the length of prey consumed. Unfortunately, otoliths are often found in too few scats, or are too digested, to usefully estimate prey size. Alternative diagnostic bones are frequently recovered, but few bone-size to prey-size correlations exist, and these bones are also reduced in size to varying degrees by digestion. To avoid underestimating the size of prey consumed, techniques are required to account for the degree of digestion of alternative bones before estimating prey size. We developed a method (using defined criteria and photo-reference material) to assign the degree of digestion for key cranial structures of two prey species: walleye pollock (Theragra chalcogramma) and Atka mackerel (Pleurogrammus monopterygius). The method grades each structure into one of three condition categories: good, fair or poor. We also conducted feeding trials with captive Steller sea lions, feeding them both fish species to determine the extent of erosion of each structure and to derive condition-specific digestion correction factors for reconstructing the original sizes of the structures consumed. In general, larger structures were relatively more digested than smaller ones. Mean size reduction varied between types of structures (3.3-26.3%) but was not influenced by the size of the prey consumed. Results from the observations and experiments were combined to reconstruct the size of prey consumed by sea lions and other pinnipeds. The proposed method has four steps: 1) measure the recovered structures and grade the extent of digestion using the defined criteria and photo-reference collection; 2) exclude structures graded in poor condition; 3) multiply the measurements of structures in good and fair condition by the appropriate digestion correction factors to recover their original size; and 4) calculate prey size from allometric regressions relating the corrected structure measurements to body length. This technique can be readily applied to dietary studies of piscivores that rely on hard remains of fish.
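
The four-step reconstruction reads directly as code. In the sketch below the correction factors and allometric coefficients are hypothetical placeholders; the real species-specific values come from the feeding trials described above.

```python
def reconstruct_prey_length(length_mm, condition, correction, a, b):
    """Four-step prey-size reconstruction: grade the structure (done by
    the caller), drop 'poor' ones, inflate the measured length by a
    condition-specific digestion correction factor, then apply an
    allometric regression (prey length = a * structure_length**b)."""
    if condition == "poor":
        return None                                  # step 2: exclude
    corrected = length_mm * correction[condition]    # step 3: correct
    return a * corrected**b                          # step 4: regress

# Hypothetical correction factors and regression coefficients.
correction = {"good": 1.05, "fair": 1.15}
print(reconstruct_prey_length(4.2, "fair", correction, a=25.0, b=1.1))
```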

Relevance: 100.00%

Abstract:

A variety of guidelines and methods are available to measure and assess survey quality, but most are based on qualitative descriptions, are not easy to implement in practice, and make comparisons between surveys very difficult. Hence there is both a theoretical and a pragmatic demand for a mainly quantitative survey assessment tool. This research aimed to meet this need and to contribute to the evaluation and improvement of survey quality. Acknowledging the critical importance of measurement issues in survey research, the thesis starts with a comprehensive introduction to measurement theory and identifies, through three experiments, the types of measurement errors associated with measurement procedures. It then describes the concepts, guidelines and methods available for measuring and assessing survey quality. Combining these with measurement principles leads to the development of a quantitative, statistically based, holistic tool to measure and assess survey quality. The criteria, weights and sub-weights for the assessment tool are determined using Multi-Criteria Decision-Making (MCDM) and a survey questionnaire based on the Delphi method. Finally, the model is applied to a database of surveys constructed to develop methods of classification, assessment and improvement of survey quality. The model enables survey researchers and commissioners to make a holistic assessment of the value of particular surveys. It is an Excel-based audit that follows all stages of the survey from inception through design, construction, execution, analysis and dissemination. At each stage, a set of criteria is applied to assess quality; the scores attained are weighted by the importance of each criterion and summed to give an overall assessment of the stage. The total score for a survey is obtained by combining the stage scores, weighted again by the importance of each stage. The advantage of this construction is a means of survey assessment that can be used diagnostically to assess and improve survey quality.
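
The two-level scoring described above amounts to a weighted sum of weighted sums. A minimal sketch with hypothetical weights and scores (the thesis derives the actual weights via MCDM and a Delphi questionnaire):

```python
def survey_score(stages):
    """Two-level weighted score: criterion scores are weighted and summed
    within each stage, then stage scores are weighted and summed into the
    overall survey score."""
    total = 0.0
    for stage_weight, criteria in stages:
        stage_score = sum(w*s for w, s in criteria)
        total += stage_weight * stage_score
    return total

stages = [
    (0.3, [(0.6, 0.8), (0.4, 0.5)]),   # design: (weight, score) pairs
    (0.7, [(0.5, 0.9), (0.5, 0.7)]),   # execution
]
print(survey_score(stages))   # 0.3*0.68 + 0.7*0.80 = 0.764
```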

Relevance: 100.00%

Abstract:

A ranking method assigns to every weighted directed graph a (weak) ordering of the nodes. In this paper we axiomatize, using four independent axioms, the ranking method that ranks the nodes according to their outflow. Besides the well-known axioms of anonymity and positive responsiveness, we introduce outflow monotonicity, meaning that in a pairwise comparison between two nodes, a node does not do worse when its own outflow does not decrease and the other node's outflow does not increase, and order preservation, meaning that when two weighted digraphs in which the pairwise ranking between two nodes is the same are added, that pairwise ranking also holds in the 'sum' weighted digraph. The outflow ranking method generalizes ranking by outdegree for directed graphs, and therefore also generalizes ranking by Copeland score for tournaments.
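
The outflow ranking method itself is simple to state in code: sum each node's outgoing arc weights and sort. A minimal sketch on a small digraph:

```python
def outflow_ranking(weights):
    """Rank nodes of a weighted digraph by outflow, the sum of outgoing
    arc weights; weights[u][v] is the weight of arc u -> v. Ties yield
    the weak ordering the paper allows."""
    outflow = {u: sum(arcs.values()) for u, arcs in weights.items()}
    order = sorted(outflow, key=outflow.get, reverse=True)
    return order, outflow

# A 3-node tournament-like digraph; node 'a' beats both others.
g = {"a": {"b": 1.0, "c": 1.0}, "b": {"c": 1.0}, "c": {}}
order, flows = outflow_ranking(g)
print(order, flows)   # ['a', 'b', 'c'], matching the Copeland intuition
```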

Relevance: 100.00%

Abstract:

This paper investigates center selection for multi-output radial basis function (RBF) networks and proposes a multi-output fast recursive algorithm (MFRA). The method not only reveals the significance of each candidate center, based on the reduction in the trace of the error covariance matrix, but also estimates the network weights simultaneously using a back-substitution approach. The main contribution is that the center selection procedure and the weight estimation are performed within a well-defined regression context, leading to a significantly reduced computational complexity. The efficiency of the algorithm is confirmed by a computational complexity analysis, and simulation results demonstrate its effectiveness. (C) 2010 Elsevier B.V. All rights reserved.
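
The sketch below is a naive greedy stand-in for forward center selection, not the MFRA: it refits the weights by least squares for every candidate and keeps the center that most reduces the residual sum of squares (single-output for brevity). The paper's contribution is obtaining an equivalent significance ranking recursively, without this refit-everything cost.

```python
import numpy as np

def greedy_rbf_centers(X, y, n_centers, width=1.0):
    """Greedy forward selection of RBF centers from the training points,
    refitting the output weights by least squares at each step."""
    def design(centers):
        # Gaussian RBF design matrix for the chosen centers.
        d = np.linalg.norm(X[:, None, :] - X[centers][None, :, :], axis=2)
        return np.exp(-(d/width)**2)
    chosen = []
    for _ in range(n_centers):
        best = None
        for c in range(len(X)):
            if c in chosen:
                continue
            P = design(chosen + [c])
            wts, *_ = np.linalg.lstsq(P, y, rcond=None)
            rss = np.sum((y - P @ wts)**2)
            if best is None or rss < best[0]:
                best = (rss, c, wts)
        chosen.append(best[1])
    return chosen, best[2]

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (40, 1))
y = np.sin(X[:, 0])
centers, wts = greedy_rbf_centers(X, y, n_centers=5)
print(centers)
```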

Relevance: 100.00%

Abstract:

The practice of mixed-methods research has increased considerably over the last 10 years. While these studies have been criticized for violating quantitative and qualitative paradigmatic assumptions, the methodological quality of mixed-methods studies has not been addressed. The purpose of this paper is to identify criteria for critically appraising the quality of mixed-methods studies in the health literature. Criteria for critically appraising quantitative and qualitative studies were generated from a review of the literature and organized according to a cross-paradigm framework. We recommend that these criteria be applied to a sample of mixed-methods studies judged to be exemplary. Further efforts, in consultation with critical appraisal experts and experienced qualitative, quantitative, and mixed-methods researchers, are required to revise and prioritize the criteria according to importance.

Relevance: 100.00%

Abstract:

A multiplex surface plasmon resonance (SPR) biosensor method was developed for the detection of paralytic shellfish poisoning (PSP) toxins, okadaic acid (and its analogues) and domoic acid, and compared to enzyme-linked immunosorbent assay (ELISA) methods. Seawater samples (n = 256) from around Europe were collected by the consortium of the EU project MIcroarrays for the Detection of Toxic Algae (MIDTAL) and evaluated using each method. A simple sample preparation procedure was developed in which the toxins are released from the algal cells by lysing with glass beads, followed by centrifugation and filtering of the extract before testing for marine biotoxins by both multiplex SPR and ELISA. Method detection limits based on IC20 values for PSP toxins, okadaic acid and domoic acid were 0.82, 0.36 and 1.66 ng/ml, respectively, for the prototype multiplex SPR biosensor. SPR evaluation of the seawater samples showed that 47, 59 and 61% of all samples tested positive (result greater than the IC20) for PSP toxins, okadaic acid (and analogues) and domoic acid, respectively; toxic samples came mainly from Spain and Ireland. This work demonstrates the potential of multiplex analysis of marine biotoxins in algal and seawater samples, with results for 24 samples available within a 7 h period for three groups of key marine biotoxins. Multiplex immunological methods could therefore be used as early-warning monitoring tools for a variety of marine biotoxins in seawater samples.

Relevance: 100.00%

Abstract:

Three issues are usually associated with threat-prevention intelligent surveillance systems: first, the fusion and interpretation of large-scale incomplete heterogeneous information; second, the need to effectively predict suspects' intentions and rank the potential threat posed by each suspect; and third, strategies for allocating limited security resources (e.g., the dispatch of security teams) to prevent a suspect's further actions against critical assets. In the literature, however, these three issues are seldom considered together in a sensor-network-based intelligent surveillance framework. To address this problem, we propose a multi-level decision support framework for in-time reaction in intelligent surveillance. More specifically, based on a multi-criteria event modeling framework, we design a method to predict the most plausible intention of a suspect. A decision support model is then proposed to rank each suspect by threat severity and to determine resource allocation strategies. Finally, formal properties are discussed to justify our framework.
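
A toy version of the ranking-and-dispatch step, with hypothetical criteria and weights standing in for the paper's multi-criteria event model:

```python
def rank_and_dispatch(suspects, weights, n_teams):
    """Score each suspect by a weighted sum over threat criteria, rank by
    severity, and greedily dispatch the limited security teams to the
    top-ranked suspects."""
    scored = sorted(
        ((sum(w*c for w, c in zip(weights, crits)), name)
         for name, crits in suspects.items()),
        reverse=True)
    dispatch = [name for _, name in scored[:n_teams]]
    return scored, dispatch

# Criteria: (proximity to asset, intent plausibility, capability) in [0, 1].
suspects = {"s1": (0.9, 0.7, 0.4), "s2": (0.2, 0.9, 0.8), "s3": (0.5, 0.3, 0.2)}
print(rank_and_dispatch(suspects, weights=(0.5, 0.3, 0.2), n_teams=1))
```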

Relevance: 100.00%

Abstract:

Over the past few decades, cyanobacterial harmful algal blooms (HABs) have increased in frequency and duration in freshwater systems globally. These blooms can produce secondary metabolites called cyanotoxins, many of which are hepatotoxins, raising concerns about repeated exposure through ingestion of contaminated drinking water or food, or through recreational activities such as bathing and swimming. An ultra-performance liquid chromatography tandem mass spectrometry (UPLC-MS/MS) multi-toxin method has been developed and validated for the freshwater cyanotoxins microcystins-LR, -YR, -RR, -LA, -LY and -LF, nodularin, cylindrospermopsin and anatoxin-a, and for the marine diatom toxin domoic acid. Separation was achieved in around 9 min, and dual SPE was incorporated, providing detection limits of between 0.3 and 5.6 ng/L of the original sample. Intra- and inter-day precision analyses showed relative standard deviations (RSD) of 1.2-9.6% and 1.3-12.0%, respectively. The method was applied to the analysis of aquatic samples (n = 206) from six European countries. The main class detected was the hepatotoxins: microcystin-YR (n = 22), cylindrospermopsin (n = 25), microcystin-RR (n = 17), microcystin-LR (n = 12), microcystin-LY (n = 1), microcystin-LF (n = 1) and nodularin (n = 5). For microcystins, the levels detected ranged from 0.001 to 1.51 μg/L, with two samples showing combined levels above the WHO guideline of 1 μg/L for microcystin-LR. Several samples presented multiple toxins, indicating the potential for synergistic effects and possibly enhanced toxicity. This is the first published pan-European survey of freshwater bodies for multiple biotoxins, including two detected for the first time (cylindrospermopsin in Ireland and nodularin in Germany), presenting further incentive for improved monitoring and the development of strategies to mitigate human exposure.