120 results for Statistical mixture-design optimization
Abstract:
An exciting application of crowdsourcing is the use of social networks for complex task execution. In this paper, we address the problem of a planner who needs to incentivize agents within a network both to help execute an atomic task and to recruit other agents to execute it. We study this mechanism design problem under two natural resource optimization settings: (1) cost critical tasks, where the planner's goal is to minimize the total cost, and (2) time critical tasks, where the goal is to minimize the total time elapsed before the task is executed. We identify a set of desirable properties that should ideally be satisfied by a crowdsourcing mechanism. In particular, sybil-proofness and collapse-proofness are two complementary properties in our desiderata. We prove that no mechanism can satisfy all the desirable properties simultaneously, which leads us naturally to explore approximate versions of the critical properties. We focus our attention on approximate sybil-proofness, and our exploration leads to a parametrized family of payment mechanisms that satisfy collapse-proofness. We characterize the approximate versions of the desirable properties in the cost critical and time critical domains.
Abstract:
Data clustering is a common technique for statistical data analysis that is used in many fields, including machine learning and data mining. Clustering is the grouping of a data set, or more precisely the partitioning of a data set into subsets (clusters), so that the data in each subset (ideally) share some common trait according to a defined distance measure. In this paper we present a genetically improved version of the particle swarm optimization (PSO) algorithm, a population-based heuristic search technique derived from the analysis of particle swarm intelligence and the concepts of genetic algorithms (GA). The algorithm combines PSO concepts, such as the velocity and position update rules, with GA concepts, such as selection, crossover, and mutation. The performance of the proposed algorithm is evaluated on benchmark datasets from the Machine Learning Repository; it outperforms both k-means and the standard PSO algorithm.
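To make the hybrid concrete, the following minimal sketch shows one iteration of a GA-PSO clustering step: each particle encodes k candidate centroids, positions and velocities follow the standard PSO update, and a GA-style generation (rank selection, arithmetic crossover, Gaussian mutation) is applied afterwards. The operator choices, coefficients, and the omitted personal/global-best bookkeeping are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def fitness(particle, data, k):
    """Sum of squared distances from each point to its nearest centroid."""
    centroids = particle.reshape(k, -1)
    d2 = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum()

def ga_pso_step(X, V, pbest, gbest, data, k, w=0.7, c1=1.5, c2=1.5, pm=0.1):
    """One hybrid iteration: PSO update followed by GA operators.

    X, V: (n_particles, k * n_features) positions and velocities.
    pbest, gbest: personal/global best positions (their updates not shown).
    """
    n, dim = X.shape
    # PSO velocity and position update rules
    r1, r2 = np.random.rand(n, dim), np.random.rand(n, dim)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    X = X + V
    # GA operators: rank selection, arithmetic crossover, Gaussian mutation
    f = np.array([fitness(x, data, k) for x in X])
    order = np.argsort(f)
    elite = X[order[: n // 2]]                      # fitter half survives
    mates = elite[np.random.permutation(len(elite))]
    alpha = np.random.rand(len(elite), 1)
    children = alpha * elite + (1 - alpha) * mates   # arithmetic crossover
    mask = np.random.rand(*children.shape) < pm
    children = children + mask * np.random.normal(0.0, 0.1, children.shape)
    return np.vstack([elite, children]), V, f
```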
Abstract:
In this article, we study the thermal performance of phase-change material (PCM)-based heat sinks under cyclic heat load and subjected to melt convection. Plate-fin heat sinks made of aluminum and filled with PCM, heated from the bottom, are considered in this study. For a prescribed value of heat flux, the design of such a heat sink can be optimized with respect to its geometry, with the objective of minimizing the temperature rise during heating while ensuring complete solidification of the PCM at the end of the cooling period for a given cycle. For a given length and base plate thickness of the heat sink, a genetic algorithm (GA)-based optimization is carried out with respect to geometrical variables such as fin thickness, fin height, and number of fins. The thermal performance of the heat sink for a given set of parameters is evaluated using an enthalpy-based heat transfer model, which provides the necessary data for the optimization algorithm. The effect of melt convection is studied by considering two cases, one without melt convection (conduction regime) and the other with convection. The results show that melt convection alters the results of the geometrical optimization.
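As a rough illustration of how such a GA outer loop can drive the geometry search, the sketch below evolves (fin thickness, fin height, number of fins) against a stand-in objective. The surrogate peak_temperature_rise, the bounds, and the GA settings are all hypothetical; in the paper, each evaluation would invoke the enthalpy-based phase-change simulation for a full heating/cooling cycle.

```python
import numpy as np

rng = np.random.default_rng(0)

def peak_temperature_rise(fin_thickness, fin_height, n_fins):
    """Stand-in for the enthalpy-based heat transfer model: a made-up
    smooth surrogate so the sketch runs end to end."""
    fin_area = 2.0 * fin_height * n_fins
    return 50.0 / (1.0 + 0.02 * fin_area) + 5.0 * fin_thickness

# bounds: fin thickness (mm), fin height (mm), number of fins
lo = np.array([0.5, 10.0, 2.0])
hi = np.array([3.0, 60.0, 20.0])

pop = rng.uniform(lo, hi, size=(20, 3))
for _ in range(50):
    fit = np.array([peak_temperature_rise(t, h, round(n)) for t, h, n in pop])
    order = np.argsort(fit)                  # lower temperature rise is better
    parents = pop[order[:10]]
    alpha = rng.random((10, 1))
    children = alpha * parents + (1 - alpha) * parents[rng.permutation(10)]
    children += rng.normal(0.0, 0.05, children.shape) * (hi - lo)  # mutation
    pop = np.clip(np.vstack([parents, children]), lo, hi)

best = pop[np.argmin([peak_temperature_rise(t, h, round(n)) for t, h, n in pop])]
print("best (thickness, height, n_fins):", best.round(2))
```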
Abstract:
Compliant mechanisms are elastic continua used to transmit or transform force and motion mechanically. The topology optimization methods developed for compliant mechanisms also give the shape for a chosen parameterization of the design domain with a fixed mesh. However, in these methods, the shapes of the flexible segments in the resulting optimal solutions are restricted either by the type or by the resolution of the design parameterization. This limitation is overcome in this paper by focusing on optimizing the skeletal shape of the compliant segments in a given topology. This is accomplished by identifying such segments in the topology and representing them using Bezier curves. The vertices of the Bezier control polygon are used to parameterize the shape-design space. Uniform parameter steps of the Bezier curves naturally enable adaptive finite element discretization of the segments as their shapes change. Practical constraints, such as avoiding intersections with other segments, avoiding self-intersections, and restrictions on the available space and material, are incorporated into the formulation. A multi-criteria function from our prior work is used as the objective. Analytical sensitivity analysis for the objective and constraints is presented and used in the numerical optimization. Examples are included to illustrate the shape optimization method.
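The shape parameterization described here rests on evaluating Bezier curves from their control polygons at uniform parameter steps. A minimal sketch of that building block, using the standard de Casteljau recursion (function names are ours):

```python
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t via de Casteljau recursion."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]   # repeated linear interpolation
    return pts[0]

def discretize_segment(control_points, n_nodes=20):
    """Uniform parameter steps -> finite element nodes along the segment."""
    return np.array([de_casteljau(control_points, t)
                     for t in np.linspace(0.0, 1.0, n_nodes)])

# cubic segment: the four control-polygon vertices are the design variables
ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
nodes = discretize_segment(ctrl, n_nodes=10)
```

Because the nodes track the uniform parameter steps, the finite element discretization adapts automatically whenever the optimizer moves the control-polygon vertices, as the abstract notes.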
Abstract:
This work presents the results of an experimental investigation of semi-solid rheocasting of A356 Al alloy using a cooling slope. The experiments were carried out following the Taguchi method of parameter design (an L9 orthogonal array). Four key process variables (slope angle, pouring temperature, wall temperature, and length of travel of the melt) at three different levels were considered. Regression analysis and analysis of variance (ANOVA) were performed, respectively, to develop a mathematical model for the degree of sphericity of the primary alpha-Al phase and to find the significance and percentage contribution of each process variable towards the final degree of sphericity. Based on the mean response and signal-to-noise ratio (SNR), the best processing condition for optimum degree of sphericity (0.83) was identified as A3B3C2D1, i.e., a slope angle of 60 degrees, pouring temperature of 650 degrees C, wall temperature of 60 degrees C, and 500 mm length of travel of the melt. The ANOVA results show that the length of travel has the maximum impact on the evolution of the degree of sphericity. The sphericity predicted by the developed regression model and the experimentally obtained values are in good agreement. The sphericity values obtained from a confirmation experiment, performed at the 95% confidence level, confirm that the optimum result is correct and fall within permissible limits.
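For readers unfamiliar with the Taguchi analysis used here, the sketch below computes the larger-is-better signal-to-noise ratio over a standard L9(3^4) array and picks the best level of each factor by mean SNR. The sphericity replicates are made-up placeholders, since the abstract does not give the raw data.

```python
import numpy as np

# Standard L9 orthogonal array: 9 runs x 4 factors (A, B, C, D) at 3 levels
L9 = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
               [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
               [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]])

# Hypothetical sphericity replicates per run (placeholder data)
y = np.array([[0.61, 0.63], [0.70, 0.68], [0.74, 0.75],
              [0.66, 0.65], [0.72, 0.74], [0.69, 0.70],
              [0.71, 0.73], [0.76, 0.78], [0.80, 0.82]])

# Larger-is-better SNR for each run: -10 log10(mean(1/y^2))
snr = -10.0 * np.log10(np.mean(1.0 / y**2, axis=1))

# Mean SNR per factor level; the optimum level maximizes the mean SNR
for j, name in enumerate("ABCD"):
    means = [snr[L9[:, j] == lvl].mean() for lvl in (1, 2, 3)]
    print(name, "best level:", np.argmax(means) + 1, np.round(means, 2))
```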
Abstract:
Smoothed functional (SF) schemes for gradient estimation are known to be efficient in stochastic optimization algorithms, especially when the objective is to improve the performance of a stochastic system. However, the performance of these methods depends on several parameters, such as the choice of a suitable smoothing kernel. Different kernels have been studied in the literature, including Gaussian, Cauchy, and uniform distributions, among others. This article studies a new class of kernels based on the q-Gaussian distribution, which has gained popularity in statistical physics over the last decade. Though the importance of this family of distributions is attributed to its ability to generalize the Gaussian distribution, we observe that this class encompasses almost all existing smoothing kernels. This motivates us to study SF schemes for gradient estimation using the q-Gaussian distribution. Using the derived gradient estimates, we propose two-timescale algorithms for optimization of a stochastic objective function in a constrained setting with a projected gradient search approach. We prove the convergence of our algorithms to the set of stationary points of an associated ODE. We also demonstrate their performance numerically through simulations on a queuing model.
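A minimal sketch of a smoothed functional gradient estimate follows, using the Gaussian kernel, which is the q -> 1 member of the q-Gaussian family studied here; the general q-Gaussian sampler and the two-timescale machinery are omitted.

```python
import numpy as np

def sf_gradient(J, theta, beta=0.1, n_samples=200, rng=None):
    """Two-sided smoothed-functional gradient estimate.

    Perturbs theta with Gaussian noise eta and averages
    eta * (J(theta + beta*eta) - J(theta - beta*eta)) / (2*beta).
    """
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(theta, dtype=float)
    for _ in range(n_samples):
        eta = rng.standard_normal(theta.shape)
        g += eta * (J(theta + beta * eta) - J(theta - beta * eta))
    return g / (2.0 * beta * n_samples)

# sanity check: for J(theta) = sum(theta^2), the gradient is ~ 2*theta
g = sf_gradient(lambda th: np.sum(th**2), np.array([1.0, -2.0]))
print(g)  # roughly [2, -4]
```

In the constrained setting of the paper, each iterate would then be projected back onto the feasible set after a gradient step, in the spirit of projected gradient search.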
Abstract:
Thermally induced demixing in an LCST mixture, polystyrene (PS)/poly(vinyl methyl ether) (PVME), was used as a template to design materials with high electrical conductivity. This was facilitated by gelation of multiwall carbon nanotubes (MWNTs) in a given phase of the blends. The MWNTs were mixed into the miscible blends, and the thermodynamically driven demixing resulted in selective localization in the PVME phase of the blends, which was confirmed by atomic force microscopy (AFM). The time-dependent gelation of MWNTs at shallow quench depth, evaluated using an isochronal temperature sweep by rheology, was studied by monitoring the melt electrical conductivity of the samples in situ with an LCR meter coupled to a rheometer. By varying the composition of the mixture, several intricate shapes, such as gaskets, and coatings capable of attenuating EM radiation at microwave frequencies can be derived. For instance, the PVME-rich mixtures can be molded in the form of a gasket, O-ring, or other intricate shapes, while the PS-rich mixtures can be coated onto an insulating polymer to enhance the shielding effectiveness (SE) against EM radiation. The SE of the various materials was analyzed using a vector network analyzer in both the X-band (8.2 to 12 GHz) and Ku-band (12 to 18 GHz) frequencies. The improved SE upon gelation of MWNTs in the demixed blends is well evident when comparing the SE before and after demixing. A reflection loss of -35 dB was observed in the blends with 2 wt% MWNTs. Further, by coating a layer of ca. 0.15 mm of PS/PVME/MWNT, an SE of -15 dB at 18 GHz could be obtained.
Abstract:
Friction stir processing (FSP) is emerging as one of the most capable severe plastic deformation (SPD) methods for producing bulk ultra-fine grained materials with improved properties. Optimizing the process parameters for a defect-free process is one of the challenging aspects of FSP on the way to commercial use. For a commercial aluminium alloy 2024-T3 plate of 6 mm thickness, a bottom-up approach has been attempted to optimize major independent process parameters such as plunge depth, tool rotation speed, and traverse speed. Tensile properties of the optimally friction stir processed sample were correlated with microstructural characterization using scanning electron microscopy (SEM) and electron back-scattered diffraction (EBSD). Optimum parameters from the bottom-up approach led to a defect-free FSP with a maximum strength of 93% of the base material strength. Micro-tensile testing of samples taken from the center of the processed zone showed an increased strength of 1.3 times the base material. The measured maximum longitudinal residual stress on the processed surface was only 30 MPa, which is attributed to the solid-state nature of FSP. Microstructural observation reveals significant grain refinement with little variation in grain size across the thickness and a large amount of grain boundary precipitation compared to the base metal. The proposed experimental bottom-up approach can be applied as an effective method for optimizing parameters during FSP of aluminium alloys, which is otherwise difficult through analytical methods due to the complex interactions between work-piece, tool, and process parameters. Precipitation mechanisms during FSP were responsible for the fine-grained microstructure in the nugget zone, which provided better mechanical properties than the base metal.
Abstract:
The objective of this study is to determine an optimal trailing-edge flap configuration and flap location that simultaneously minimize hub vibration levels and flap actuation power. An aeroelastic analysis of a soft in-plane four-bladed rotor is performed in conjunction with optimal control. A second-order polynomial response surface based on an orthogonal array (OA) with a 3-level design describes both objectives adequately. Two new orthogonal arrays, called MGB2P-OA and MGB4P-OA, are proposed to generate nonlinear response surfaces with all interaction terms for two and four parameters, respectively. A multi-objective bat algorithm (MOBA) approach is used to obtain the optimal design point for the mutually conflicting objectives. MOBA is a recently developed nature-inspired metaheuristic optimization algorithm based on the echolocation behaviour of bats. It is found that the MOBA-derived Pareto-optimal trailing-edge flap design reduces vibration levels by 73% and flap actuation power by 27% in comparison with the baseline design.
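For context, the core single-objective bat algorithm update that MOBA generalizes looks roughly as follows; the frequency, velocity, and position rules are the standard ones, while the loudness/pulse-rate schedules and the multi-objective ranking of MOBA are omitted here.

```python
import numpy as np

def bat_step(X, V, x_best, f_min=0.0, f_max=2.0,
             loudness=0.5, pulse_rate=0.5, rng=None):
    """One iteration of the basic bat algorithm update.

    X, V: (n_bats, n_dims) positions and velocities; x_best: current
    best solution. Returns proposed positions and updated velocities.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    beta = rng.random((n, 1))
    freq = f_min + (f_max - f_min) * beta        # frequency tuning
    V = V + (X - x_best) * freq                  # velocity update
    X_new = X + V                                # position update
    # local random walk around the best solution, gated by the pulse rate
    walk = rng.random(n) > pulse_rate
    X_new[walk] = x_best + loudness * rng.standard_normal((walk.sum(), d))
    return X_new, V
```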
Abstract:
Selection of relevant features is an open problem in brain-computer interface (BCI) research. Features extracted from brain signals are often high-dimensional, which in turn affects the accuracy of the classifier. Selecting the most relevant features improves classifier performance and reduces the computational cost of the system. In this study, we use a combination of Bacterial Foraging Optimization and Learning Automata to determine the best subset of features from a given motor imagery electroencephalography (EEG) based BCI dataset. We employ the Discrete Wavelet Transform to obtain a high-dimensional feature set and classify it with the Distance Likelihood Ratio Test. Our proposed feature selector produced an accuracy of 80.291% in 216 seconds.
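The DWT feature extraction step can be sketched as follows with PyWavelets; the wavelet, decomposition level, and sub-band statistics are assumptions, since the abstract does not specify them. The BFO/Learning Automata selector would then search over subsets of the resulting vector.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_features(trial, wavelet="db4", level=4):
    """Energy and standard deviation of each DWT sub-band per channel.

    trial: array of shape (channels, samples). A generic sketch of
    DWT-based feature extraction, not the paper's exact definitions.
    """
    feats = []
    for channel in trial:
        coeffs = pywt.wavedec(channel, wavelet, level=level)
        for c in coeffs:                     # approximation + detail bands
            feats += [np.sum(c**2), np.std(c)]
    return np.array(feats)

# e.g., a 22-channel, 1000-sample trial yields a high-dimensional vector
x = dwt_features(np.random.randn(22, 1000))
print(x.shape)
```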
Abstract:
In this paper, we consider the problem of power allocation in the MIMO wiretap channel for secrecy in the presence of multiple eavesdroppers. Perfect knowledge of the destination channel state information (CSI) and only statistical knowledge of the eavesdroppers' CSI are assumed. We first consider the MIMO wiretap channel with Gaussian input. Using Jensen's inequality, we transform the secrecy rate max-min optimization problem into a single maximization problem. We use the generalized singular value decomposition and transform the problem into a concave maximization problem that maximizes the sum secrecy rate of scalar wiretap channels subject to linear constraints on the transmit covariance matrix. We then consider the MIMO wiretap channel with finite-alphabet input. We show that the transmit covariance matrix obtained for the case of Gaussian input, when used in the MIMO wiretap channel with finite-alphabet input, can lead to zero secrecy rate at high transmit powers. We then propose a power allocation scheme with an additional power constraint that alleviates this secrecy rate loss and gives non-zero secrecy rates at high transmit powers.
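After the GSVD reduction, the Gaussian-input problem becomes a power allocation over parallel scalar wiretap channels. The sketch below maximizes the sum secrecy rate numerically under a simple total power constraint; the per-subchannel gains are hypothetical, and the paper's actual formulation carries linear constraints on the transmit covariance rather than this simplified bound.

```python
import numpy as np
from scipy.optimize import minimize

def sum_secrecy_rate(p, g_main, g_eve):
    """Sum secrecy rate of parallel scalar wiretap channels (bits/use)."""
    r = np.log2(1 + g_main * p) - np.log2(1 + g_eve * p)
    return np.sum(np.maximum(r, 0.0))

# hypothetical per-subchannel gains after a GSVD-style decomposition
g_main = np.array([2.0, 1.2, 0.8])
g_eve = np.array([0.3, 0.5, 1.1])
P_total = 5.0

res = minimize(lambda p: -sum_secrecy_rate(p, g_main, g_eve),
               x0=np.full(3, P_total / 3),
               bounds=[(0.0, P_total)] * 3,
               constraints=[{"type": "ineq",
                             "fun": lambda p: P_total - p.sum()}])
print("power allocation:", res.x.round(3))
```

Note that the third subchannel, where the eavesdropper's gain exceeds the main channel's, ends up with essentially zero power, mirroring the intuition that secrecy rate is earned only where the legitimate link is stronger.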
Abstract:
In this paper, we consider applying a derived knowledge base regarding the sensitivity and specificity of the damage(s) to be detected by a structural health monitoring (SHM) system being designed and qualified. These efforts are necessary for developing the capability of an SHM system to reliably classify various probable damages through a sequence of monitoring steps, i.e., damage precursor identification, detection of damage, and monitoring of its progression. We consider the particular problem of visual and ultrasonic NDE based SHM system design requirements, where the damage detection sensitivity and specificity data definitions for a class of structural components are established. Methodologies for creating SHM system specifications are discussed in detail. Examples illustrate how the physics of a damage detection scheme limits the achievable detection sensitivity and specificity, and how this information can be used in algorithms that combine different NDE schemes in an SHM system to enhance efficiency and effectiveness. Statistical and data-driven models to determine the sensitivity and probability of damage detection (POD) are demonstrated for a plate with a varying one-sided line crack using optical and ultrasonic inspection techniques.
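A common statistical model in this setting is a POD curve fitted to hit/miss inspection outcomes versus flaw size. The sketch below fits a logistic POD model on hypothetical data; it illustrates the general approach, not the paper's specific model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# hypothetical hit/miss inspection data: crack length (mm) vs detected (0/1)
a = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 1.8, 2.0, 2.5, 3.0, 4.0])
hit = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

# logistic POD model in log crack size: POD(a) = sigmoid(b0 + b1 * ln a)
model = LogisticRegression().fit(np.log(a).reshape(-1, 1), hit)
a_grid = np.linspace(0.5, 4.0, 50)
pod = model.predict_proba(np.log(a_grid).reshape(-1, 1))[:, 1]

# smallest crack size with POD >= 90%, if reached on the grid
idx = np.searchsorted(pod, 0.90)
a90 = a_grid[idx] if idx < len(a_grid) else None
print("a90:", a90)
```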
Abstract:
In this work, spectrum sensing for cognitive radios is considered in the presence of multiple Primary Users (PUs) using frequency-hopping communication over a set of frequency bands. The detection performance of the Fast Fourier Transform (FFT) Average Ratio (FAR) algorithm is obtained in closed form for a given FFT size and number of PUs. The effective throughput of the Secondary Users (SUs) is formulated as an optimization problem with a constraint on the maximum allowable interference to the primary network. Given the hopping period of the PUs, the sensing duration that maximizes the SU throughput is derived. The results are validated using Monte Carlo simulations. Further, an implementation of the FAR algorithm on the Lyrtech (now Nutaq) small form factor software defined radio development platform is presented; the performance recorded through the hardware corroborates well with that obtained through simulations, allowing for implementation losses.
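The abstract does not spell out the FAR statistic, but an FFT-bin power-ratio detector of the following generic form conveys the idea; the exact FAR definition in the paper may differ.

```python
import numpy as np

def far_statistic(samples, nfft, band):
    """Ratio of average FFT-bin power in a sub-band to the average over
    all bins. A generic sketch of an FFT-ratio detector, not necessarily
    the paper's exact FAR statistic."""
    spec = np.abs(np.fft.fft(samples[:nfft], nfft)) ** 2
    return spec[band].mean() / spec.mean()

# example: noise-only complex baseband samples, inspect bins 100-163
rng = np.random.default_rng(1)
iq = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)
stat = far_statistic(iq, nfft=1024, band=slice(100, 164))
# declare the band occupied when stat exceeds a threshold tuned for a
# target false-alarm probability (e.g., via Monte Carlo simulation)
print(stat)
```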
Abstract:
This paper presents the design and implementation of PolyMage, a domain-specific language and compiler for image processing pipelines. An image processing pipeline can be viewed as a graph of interconnected stages which process images successively. Each stage typically performs one of point-wise, stencil, reduction or data-dependent operations on image pixels. Individual stages in a pipeline typically exhibit abundant data parallelism that can be exploited with relative ease. However, the stages also require high memory bandwidth preventing effective utilization of parallelism available on modern architectures. For applications that demand high performance, the traditional options are to use optimized libraries like OpenCV or to optimize manually. While using libraries precludes optimization across library routines, manual optimization accounting for both parallelism and locality is very tedious. The focus of our system, PolyMage, is on automatically generating high-performance implementations of image processing pipelines expressed in a high-level declarative language. Our optimization approach primarily relies on the transformation and code generation capabilities of the polyhedral compiler framework. To the best of our knowledge, this is the first model-driven compiler for image processing pipelines that performs complex fusion, tiling, and storage optimization automatically. Experimental results on a modern multicore system show that the performance achieved by our automatic approach is up to 1.81x better than that achieved through manual tuning in Halide, a state-of-the-art language and compiler for image processing pipelines. For a camera raw image processing pipeline, our performance is comparable to that of a hand-tuned implementation.
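To fix intuition about the stage types involved, here is a plain NumPy rendering of a tiny two-stage pipeline (a point-wise stage feeding a 3x3 stencil stage). This is deliberately not PolyMage syntax; it only shows the producer-consumer structure that the compiler fuses and tiles automatically.

```python
import numpy as np

def pipeline(img):
    """Two-stage image pipeline: point-wise stage -> 3x3 stencil stage.

    img: 2-D float array with values in [0, 1].
    """
    # stage 1: point-wise brightening of every pixel
    bright = np.clip(img * 1.2, 0.0, 1.0)
    # stage 2: 3x3 box-blur stencil over the stage-1 output
    out = np.zeros_like(bright)
    acc = np.zeros_like(bright[1:-1, 1:-1])
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += bright[1 + dy: bright.shape[0] - 1 + dy,
                          1 + dx: bright.shape[1] - 1 + dx]
    out[1:-1, 1:-1] = acc / 9.0
    return out

result = pipeline(np.random.rand(256, 256))
```

Fusing the two stages into one tiled loop nest would avoid materializing the intermediate bright image in memory, which is precisely the bandwidth saving the paper's polyhedral fusion and storage optimization target.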
Abstract:
Campaigners are increasingly using online social networking platforms to promote products, ideas, and information. A popular method of promoting a product or an idea is to incentivize individuals to evangelize it vigorously by providing them with referral rewards in the form of discounts, cash backs, or social recognition. Due to budget constraints on scarce resources such as money and manpower, it may not be possible to provide incentives for the entire population, so incentives must be allocated judiciously to appropriate individuals to ensure the highest possible outreach size. We do so by formulating and solving an optimization problem using percolation theory. In particular, we compute the set of individuals to be provided incentives so as to minimize the expected cost while ensuring a given outreach size. We also solve the problem of computing the set of individuals to be incentivized so as to maximize the outreach size for a given cost budget. The optimization problem turns out to be nontrivial; it involves quantities that must be computed by numerically solving a fixed point equation. Our primary contribution is to show that, for a fairly general cost structure, these optimization problems can be solved by solving a simple linear program. We believe that our approach of using percolation theory to formulate an optimization problem is the first of its kind.
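Once the percolation quantities are in hand, the allocation reduces to a linear program. The sketch below shows the minimum-cost variant with SciPy's linprog on hypothetical per-degree-class coefficients; in the paper, these coefficients come from numerically solving the fixed point equation.

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical per-degree-class data (4 classes); in the paper these
# coefficients come from solving a percolation fixed-point equation
cost = np.array([1.0, 1.5, 2.5, 4.0])    # cost of incentivizing one node
reach = np.array([0.5, 1.1, 2.0, 3.2])   # marginal expected outreach per node
pop = np.array([400, 300, 200, 100])     # number of nodes in each class
target = 600.0                           # required expected outreach size

# minimize total cost subject to: total outreach >= target, 0 <= x_k <= pop_k
res = linprog(c=cost,
              A_ub=[-reach],             # -reach @ x <= -target
              b_ub=[-target],
              bounds=list(zip(np.zeros(4), pop)))
print("nodes to incentivize per class:", res.x.round(1))
```

As expected, the solver fills up the classes with the best outreach-per-cost ratio first, which is the qualitative behavior the LP formulation makes cheap to compute.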