837 results for Pricing.
Abstract:
This paper presents new results on the welfare effects of third-degree price discrimination under constant elasticity demand. We show that when both the share of the strong market under uniform pricing and the elasticity difference between markets are high enough, price discrimination can increase not only social welfare but also consumer surplus.
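The mechanism can be illustrated with a small numeric sketch. The parameters below are invented for illustration and are not taken from the paper: under constant-elasticity demand q_i(p) = A_i p^(-e_i), the discriminating monopolist charges each market the Lerner markup p_i = e_i*c/(e_i - 1), and discrimination always weakly raises profit relative to the best uniform price.

```python
# Hypothetical two-market example of third-degree price discrimination
# under constant-elasticity demand; all parameters are illustrative.

def demand(A, e, p):
    return A * p ** (-e)

def profit(p_weak, p_strong, c=1.0):
    A1, e1 = 1.0, 5.0   # weak market: high elasticity
    A2, e2 = 1.0, 2.0   # strong market: low elasticity
    return ((p_weak - c) * demand(A1, e1, p_weak)
            + (p_strong - c) * demand(A2, e2, p_strong))

c = 1.0
# Under discrimination, each market gets the standard Lerner markup
# p_i = e_i * c / (e_i - 1).
p1 = 5.0 * c / (5.0 - 1.0)   # 1.25 in the weak market
p2 = 2.0 * c / (2.0 - 1.0)   # 2.00 in the strong market
pi_disc = profit(p1, p2, c)

# Under uniform pricing, search one common price over a fine grid.
grid = [1.0 + 0.001 * k for k in range(1, 3000)]
pi_unif = max(profit(p, p, c) for p in grid)

# The discriminating monopolist can always replicate any uniform price,
# so discrimination weakly raises profit.
assert pi_disc >= pi_unif
```

The welfare and consumer-surplus comparisons in the paper depend on the market shares and elasticity gap; the sketch only shows the profit side, which holds for any parameters.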
Abstract:
Fish products from the Lake Chad Basin play an important role in meeting the fish protein needs of Nigeria: they contribute not less than 25% of the total domestic fish supply and are significant in determining the availability of processed products and in reducing post-harvest losses. Processors, marketers and consumers are the major actors in appraising a marketing system. The results show that most sellers (47.5%) are within the age range of 30-39 years. The desire for higher earnings led the marketers to diversify their business activities into foodstuff trading (37.5%), dried meat/livestock sales (37.5%), farming (12.5%), and transportation (12.5%). 65% of traders dispose of their products mostly in the mornings and evenings, 70% of the products are sold smoked, and 50% of products are sold to individual consumers. Lake Chad fish products have a long distribution chain. There is also a high degree of buyer and seller concentration in the primary fish markets and secondary (urban) markets. The products have a vertical regional movement, with southern traders (82.5%) dominating the business, thus making the products popular all over Nigeria. Product differentiation with an imperfect pricing policy is a common occurrence. The Lake Chad fish marketing system has distortions that impede its efficiency; recommendations are made on how to improve the efficiency of the system.
Abstract:
This thesis examines foundational questions in behavioral economics—also called psychology and economics—and the neural foundations of varied sources of utility. We have three primary aims: First, to provide the field of behavioral economics with psychological theories of behavior that are derived from neuroscience and to use those theories to identify novel evidence for behavioral biases. Second, we provide neural and micro foundations of behavioral preferences that give rise to well-documented empirical phenomena in behavioral economics. Finally, we show how a deep understanding of the neural foundations of these behavioral preferences can feed back into our theories of social preferences and reference-dependent utility.
The first chapter focuses on classical conditioning and its application in identifying the psychological underpinnings of a pricing phenomenon. We return to classical conditioning again in the third chapter where we use fMRI to identify varied sources of utility—here, reference dependent versus direct utility—and cross-validate our interpretation with a conditioning experiment. The second chapter engages social preferences and, more broadly, causative utility (wherein the decision-maker derives utility from making or avoiding particular choices).
Abstract:
The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems, motivated by power systems, are also explored. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.
Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about the brain connectivity. Due to the electrical properties of the brain, this problem will be investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work will be applied to the resting-state fMRI data of a number of healthy subjects.
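The core idea — that zeros in the precision (inverse covariance) matrix mark conditionally independent, i.e. non-adjacent, node pairs — can be sketched with plain numpy on a made-up chain graph (this is not the dissertation's code, and a real application would use graphical lasso rather than direct inversion):

```python
# Sketch: the inverse covariance matrix of node signals reveals graph topology.
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth precision matrix of a 5-node chain 0-1-2-3-4:
# off-diagonal entries are nonzero only on the chain edges.
Theta = np.diag([2.0] * 5)
for i in range(4):
    Theta[i, i + 1] = Theta[i + 1, i] = -0.8

Sigma = np.linalg.inv(Theta)                    # exact covariance
X = rng.multivariate_normal(np.zeros(5), Sigma, size=50000)

# Empirical precision matrix estimated from samples.
Theta_hat = np.linalg.inv(np.cov(X, rowvar=False))

# Threshold small entries and read off the recovered edge set.
edges = {(i, j) for i in range(5) for j in range(i + 1, 5)
         if abs(Theta_hat[i, j]) > 0.1}
print(edges)   # recovers the chain edges (0,1), (1,2), (2,3), (3,4)
```

With few samples or an ill-conditioned covariance this naive inversion breaks down, which is exactly the regime where the sparse graphical lasso estimator discussed above is needed.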
Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users of the Internet so that network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
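For orientation, the classical fluid model that this work refines can be sketched in a few lines: in Kelly's primal framework, each source adjusts its rate toward the point where rate times path price equals its willingness to pay. The sketch below (a single shared link, invented parameters, no buffering dynamics) illustrates that baseline, not the buffering-aware model derived in the dissertation:

```python
# Primal congestion-control fluid dynamics on one shared link (Kelly-style).
capacity = 10.0
w = 1.0                         # each source's willingness to pay
n = 4                           # number of competing sources

def link_price(y):
    # A smooth congestion price that grows as total rate nears capacity.
    return (y / capacity) ** 3

rates = [1.0] * n
for _ in range(5000):
    y = sum(rates)
    p = link_price(y)
    # Each source moves its rate toward the point where x * p = w.
    rates = [x + 0.01 * (w - x * p) for x in rates]

# At equilibrium each source satisfies x * p(y) = w.
y = sum(rates)
print(round(rates[0], 3), round(rates[0] * link_price(y), 3))
```

The dissertation's point is that once queueing distorts the rate each link observes, this kind of update need not converge under common pricing schemes.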
Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.
Abstract:
This thesis belongs to the growing field of economic networks. In particular, we develop three essays in which we study the problem of bargaining, discrete choice representation, and pricing in the context of networked markets. Despite analyzing very different problems, the three essays share the common feature of making use of a network representation to describe the market of interest.
In Chapter 1 we present an analysis of bargaining in networked markets. We make two contributions. First, we characterize market equilibria in a bargaining model and find that players' equilibrium payoffs coincide with their degree of centrality in the network, as measured by Bonacich's centrality measure. This characterization allows us to map, in a simple way, network structures into market equilibrium outcomes, so that payoff dispersion in networked markets is driven by players' network positions. Second, we show that the market equilibrium for our model converges to the so-called eigenvector centrality measure. We show that the economic condition for reaching convergence is that the players' discount factor goes to one. In particular, we show how the discount factor, the matching technology, and the network structure interact in a very particular way so that the eigenvector centrality emerges as the limiting case of our market equilibrium.
We point out that the eigenvector approach is a way of finding the most central or relevant players in terms of the “global” structure of the network, while paying less attention to patterns that are more “local”. Mathematically, the eigenvector centrality captures the relevance of players in the bargaining process using the eigenvector associated with the largest eigenvalue of the adjacency matrix of a given network. Thus our result may be viewed as an economic justification of the eigenvector approach in the context of bargaining in networked markets.
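The limiting relationship described above can be sketched numerically (this is not the thesis model; the network below is invented): Bonacich centrality b = (I - aA)^(-1) A·1 approaches the direction of the leading eigenvector of A as the attenuation factor a approaches 1/lambda_max, the analogue of the discount factor going to one.

```python
# Bonacich centrality converging to eigenvector centrality as a -> 1/lam.
import numpy as np

# Adjacency matrix of a small undirected network (made up for illustration).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)

lam = max(np.linalg.eigvalsh(A))          # largest eigenvalue
v = np.linalg.eigh(A)[1][:, -1]           # leading (Perron) eigenvector
v = np.abs(v) / np.linalg.norm(v)

def bonacich(A, a):
    # Solve (I - a*A) b = A*1 and normalize the result.
    n = len(A)
    b = np.linalg.solve(np.eye(n) - a * A, A @ np.ones(n))
    return b / np.linalg.norm(b)

# As the attenuation factor approaches 1/lam, Bonacich -> eigenvector centrality.
for a in [0.1, 0.5 / lam, 0.99 / lam]:
    print(np.round(bonacich(A, a), 3))
print(np.round(v, 3))
```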
As an application, we analyze the special case of seller-buyer networks, showing how our framework may be useful for analyzing price dispersion as a function of sellers and buyers' network positions.
Finally, in Chapter 3 we study the problem of price competition and free entry in networked markets subject to congestion effects. In many environments, such as communication networks in which network flows are allocated, or transportation networks in which traffic is directed through the underlying road architecture, congestion plays an important role. In particular, we consider a network with multiple origins and a common destination node, where each link is owned by a firm that sets prices in order to maximize profits, whereas users want to minimize the total cost they face, which is given by the congestion cost plus the prices set by firms. In this environment, we introduce the notion of Markovian traffic equilibrium to establish the existence and uniqueness of a pure strategy price equilibrium, without assuming that the demand functions are concave or imposing particular functional forms on the latency functions. We derive explicit conditions that guarantee existence and uniqueness of equilibria. Given this existence and uniqueness result, we apply our framework to study entry decisions and welfare, and establish that in congested markets with free entry, the number of firms exceeds the social optimum.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in different contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
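The Bayesian machinery underlying such adaptive designs can be sketched with a toy posterior update (this is not BROAD; theory names, predictions, and the error rate are invented): after each binary choice, each theory's posterior weight is multiplied by the likelihood of the observed choice, allowing for a fixed probability of subject error.

```python
# Toy Bayesian updating over two competing theories with noisy responses.
eps = 0.1                      # probability the subject errs on any choice
prior = {"theory_A": 0.5, "theory_B": 0.5}

# Hypothetical predictions of each theory on three choice tests
# (True = pick the left lottery); a real design would pick tests adaptively.
predictions = {"theory_A": [True, True, False],
               "theory_B": [True, False, False]}
observed = [True, True, False]  # the subject's actual choices

posterior = dict(prior)
for i, choice in enumerate(observed):
    for h in posterior:
        # Likelihood of the choice under theory h, allowing for errors.
        posterior[h] *= (1 - eps) if predictions[h][i] == choice else eps
    z = sum(posterior.values())
    posterior = {h: p / z for h, p in posterior.items()}

print(posterior)   # mass concentrates on theory_A
```

The design problem BROAD solves is choosing, at each stage, which test makes such an update most informative; the EC2 criterion formalizes "most informative" in an adaptively submodular way.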
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments' models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
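The time-inconsistency at issue can be shown with a minimal numeric sketch (the rewards, horizon, and parameters are illustrative): hyperbolic discounting D(t) = 1/(1 + kt) produces a preference reversal between a smaller-sooner and a larger-later reward, while exponential discounting D(t) = d^t does not.

```python
# Preference reversal under hyperbolic but not exponential discounting.

def hyperbolic(t, k=1.0):
    return 1.0 / (1.0 + k * t)

def exponential(t, d=0.9):
    return d ** t

# Option A: 100 at time t; Option B: 110 at time t + 1.
def prefers_A(discount, t):
    return 100 * discount(t) > 110 * discount(t + 1)

# Viewed from far away (t = 10) the hyperbolic agent prefers the larger-later
# reward; right before the payoff (t = 0) the preference flips -- a reversal.
print(prefers_A(hyperbolic, 10))   # False: prefers B
print(prefers_A(hyperbolic, 0))    # True: prefers A
# The exponential agent chooses consistently at both horizons.
print(prefers_A(exponential, 10), prefers_A(exponential, 0))
```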
We also test the predictions of behavioural theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion, and strategies for competitive pricing.
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preferences and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
Energy and sustainability are among the most critical issues facing our generation. While the abundant potential of renewable energy sources such as solar and wind provides a real opportunity for sustainability, their intermittency and uncertainty present a daunting operating challenge. This thesis aims to develop analytical models, deployable algorithms, and real systems to enable the efficient integration of renewable energy into complex distributed systems with limited information.
The first thrust of the thesis is to make IT systems more sustainable by facilitating the integration of renewable energy into these systems. IT is among the fastest-growing sectors in energy usage and greenhouse gas pollution. Over the last decade there have been dramatic improvements in the energy efficiency of IT systems, but these efficiency improvements do not necessarily lead to a reduction in energy consumption because more servers are demanded. Further, little effort has been put into making IT more sustainable, and most of the improvements come from improved "engineering" rather than improved "algorithms". In contrast, my work focuses on developing algorithms with rigorous theoretical analysis that improve the sustainability of IT. In particular, this thesis seeks to exploit the flexibility of cloud workloads both (i) in time, by scheduling delay-tolerant workloads, and (ii) in space, by routing requests to geographically diverse data centers. These opportunities allow data centers to respond adaptively to renewable availability, varying cooling efficiency, and fluctuating energy prices, while still meeting performance requirements. The design of the enabling algorithms is, however, very challenging because of limited information, non-smooth objective functions and the need for distributed control. Novel distributed algorithms are developed with theoretically provable guarantees to enable "follow the renewables" routing. Moving from theory to practice, I helped HP design and implement the industry's first Net-zero Energy Data Center.
The second thrust of this thesis is to use IT systems to improve the sustainability and efficiency of our energy infrastructure through data center demand response. The main challenges as we integrate more renewable sources to the existing power grid come from the fluctuation and unpredictability of renewable generation. Although energy storage and reserves can potentially solve the issues, they are very costly. One promising alternative is to make the cloud data centers demand responsive. The potential of such an approach is huge.
To realize this potential, we need adaptive and distributed control of cloud data centers and new electricity market designs for distributed electricity resources. My work is progressing in both directions. In particular, I have designed online algorithms with theoretically guaranteed performance for data center operators to deal with uncertainties under popular demand response programs. Based on local control rules of customers, I have further designed new pricing schemes for demand response to align the interests of customers, utility companies, and the society to improve social welfare.
Abstract:
The focus of this project is, as the title indicates, the comparison of marketing policies applied by the same company in different countries and an analysis of the reasons for the differences. To that end, I have selected the company Nestlé and analyze the marketing decisions it makes across national boundaries to market the Kit Kat brand and keep it a leading snack worldwide. After analyzing the brand on all continents, I can say that the execution of the strategy used by Nestlé with Kit Kat really matches the planning of the strategy, which is to think globally and act locally. Nestlé uses a global brand identity but, internally, it uses local ingredients and gives autonomy to its local branches in different countries to make pricing and distribution decisions and thereby satisfy different consumers' needs and preferences in local environments where changes happen very rapidly. The "glocal" approach to marketing is an effective way for Nestlé Kit Kat to stay focused on the consumer in worldwide markets.
Abstract:
[EN] The aim of this paper is to study systematic liquidity on the Euronext Lisbon Stock Exchange. The motivation for this research is provided by the growing interest in the financial literature in stock liquidity and the implications of commonality in liquidity for asset pricing, since it could represent a source of non-diversifiable risk. Specifically, it is analysed whether there exist common factors that drive the variation in individual stock liquidity, and what causes the inter-temporal variation of aggregate liquidity. Monthly data for the period between January 1988 and December 2011 are used to compute some of the most widely used proxies for liquidity: bid-ask spreads, the turnover rate, trading volume, the proportion of zero returns and the illiquidity ratio. Following the methodology of Chordia et al. (2000), some evidence of commonality in liquidity is found in the Portuguese stock market when the proportion of zero returns is used as the measure of liquidity. As for the factors that drive the inter-temporal variation of Portuguese stock market liquidity, the results obtained within a VAR framework suggest that changes in real economic activity, monetary policy (proxied by changes in the monetary aggregate M1) and stock market returns play an important role as determinants of commonality in liquidity.
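Two of the liquidity proxies mentioned are simple to compute; the sketch below shows the proportion of zero returns and an Amihud-style illiquidity ratio, mean(|r_t| / volume_t), on made-up daily data (the series and scale are illustrative, not the paper's data):

```python
# Two common liquidity proxies on a toy daily return/volume series.
import numpy as np

returns = np.array([0.01, 0.0, -0.02, 0.0, 0.005, 0.0, 0.013, -0.007])
volume  = np.array([1e6, 2e5, 8e5, 1e5, 5e5, 3e5, 9e5, 7e5])  # traded value

# Proportion of zero-return days: more zeros suggest lower liquidity.
zero_prop = np.mean(returns == 0.0)

# Amihud illiquidity ratio: average price impact per unit of traded value.
nonzero = volume > 0
illiq = np.mean(np.abs(returns[nonzero]) / volume[nonzero])

print(zero_prop)   # 0.375
print(illiq)
```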
Abstract:
Access to medicines has been a subject of debate in recent decades in both the countries of the South and the countries of the North. The prices imposed by the pharmaceutical industry for diseases as serious as HIV/AIDS have been excessive and consequently unattainable for the poorest countries. However, the richest countries of the North are now suffering these same consequences because of the new treatment for hepatitis C, whose astronomical prices have excluded many patients from access to it. The human right to health is being violated, and the main parties responsible are the pharmaceutical companies, which have corrupted health systems. Monopoly pricing, following the strengthening of pharmaceutical patents with the signing of the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), has constituted an obstacle to the realization of that right, compounded by the TRIPS-plus provisions included in free trade agreements. Nevertheless, the consolidation of the generic drug industry has managed to compete against these companies, supplying affordable medicines and promoting the interests of those most in need.
Abstract:
This paper studies the relevance of using sex as a rating factor when pricing and calculating benefits in health insurance, in light of the enactment of Directive 2004/113/EC, which prohibits its use on the grounds that it is discriminatory. We analyze the value of the sex variable as an indicator of consumers' risk profile in health insurance, with emphasis on obstetric expenditure, on the premiums of women of childbearing age, and on the use of reasonable alternative factors for determining risk. We also study the effects and consequences of Directive 2004/113/EC and the Test-Achats ruling on the insurance industry, as well as their specific impact on the pricing and underwriting of health insurance in Spain, analyzing the evolution of the sector during the period in which these instruments were implemented.
Abstract:
This study assesses the impact of trade liberalization between Brazil and China on trade, production, prices, investment, savings, and employment. The aim of the analysis is to identify a trade opportunity for Brazil that would enable higher growth, increase Brazilian exports, and reduce unemployment. The main hypothesis is the existence of welfare gains from trade with China. The model used is the GLOBAL TRADE ANALYSIS PROJECT (GTAP) model with 10 regions, 10 products, and 5 factors, with constant returns to scale and perfect competition in production activities. Agricultural products are highlighted in the analysis. Three macroeconomic closures are used to evaluate selected aggregates separately: the standard configuration of CGE models (endogenous price of savings and full employment); an exogenous price of savings; and unemployment. The conclusion is that both countries may benefit from the agreement.
Abstract:
This dissertation applies maximum-entropy regularization to the inverse problem of option pricing, as suggested by the work of Neri and Schneider in 2012. They observed that the probability density solving this problem, for data coming from call options and digital options, can be described by exponentials on the different intervals of the positive half-line. These intervals are bounded by the strike prices. The maximum-entropy criterion is a powerful tool for regularizing this ill-posed problem. The exponential family of the solution set is computed using the Newton-Raphson algorithm, with specific bounds for the digital options. These bounds follow from the no-arbitrage principle. The methodology was applied to data on the São Paulo Stock Exchange stock index and its call option prices at different strikes. A parametric analysis of the entropy as a function of the prices of synthetic digital options (constructed from bounds respecting no-arbitrage) revealed values at which the digitals maximized the entropy. The example of IBOVESPA data extracted on January 24, 2013 showed a deviation from the no-arbitrage principle for in-the-money call options. This principle is a necessary condition for applying maximum-entropy regularization in order to obtain the density and the prices. Our results showed that, once the convexity condition implied by no-arbitrage is satisfied, the model can produce a smile-shaped volatility curve, with prices computed from the model's exponential density. This makes the model consistent with market data. From a computational point of view, this dissertation made it possible to implement a pricing model based on the maximum-entropy principle.
Three classical algorithms were used: first, standard bisection, and then a combination of bisection with Newton-Raphson to find the implied volatility from market data; next, the one-dimensional Newton-Raphson method to compute the coefficients of the exponential densities, which is the object of the study; and finally, Simpson's rule to compute the integrals of the cumulative distributions as well as the model prices obtained via the mathematical expectation.
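The first of those steps, recovering implied volatility by bisection, can be sketched in a self-contained way (this is not the dissertation's code; the option parameters below are invented, and only the standard library is used):

```python
# Black-Scholes implied volatility via plain bisection.
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, T, sigma):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, r, T, lo=1e-6, hi=5.0, tol=1e-8):
    # The call price is strictly increasing in sigma, so bisection converges.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, r, T, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check: price a call at sigma = 0.2, then recover it.
price = bs_call(100.0, 100.0, 0.05, 1.0, 0.2)
print(round(implied_vol(price, 100.0, 100.0, 0.05, 1.0), 6))   # 0.2
```

In practice bisection is often combined with Newton-Raphson, as described above: bisection brackets the root robustly and Newton steps refine it quickly once the iterate is close.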
Abstract:
The study was conducted on the present status of the HACCP-based quality management system of golda (Macrobrachium rosenbergii) farms in the Fulpur region of Mymensingh. Information was collected on the general condition of farms, culture systems and post-harvest quality management. Almost all farms have no or inadequate infrastructure facilities, such as road access, electricity supply, telecommunications, ice, feed storage, vehicles for golda transportation, and washing and toilet facilities. The problems associated with sanitation and hygiene were the widespread use of cow dung and poultry manure and the construction of open toilets in the vicinity of prawn culture ponds. Different grades of commercially available and locally prepared feeds were used for golda culture in the ponds. Golda post-larvae (PL) of 40-50 days of age were stocked with carp species. The price of golda PL ranged from Tk. 1.00 to Tk. 1.25/piece. The pond size varied from 50 decimal (0.2 ha) to 2.5 acres (1.0 ha), with an average depth of 2-2.5 m. The culture period of golda varied from April-May to November-December, and the survival rate ranged between 75 and 80%. Production of golda varied from 250-500 kg/acre (625-1,250 kg/ha). Harvested golda were transported to the city market within 4 h. Two size grades were generally used for pricing, e.g. Tk. 500 to 550/kg for >100 g and Tk. 300/kg for <100 g. The cost-benefit ratio was found to remain around 1:1.25, depending on the availability of PL. Water quality parameters such as water temperature, pH, dissolved oxygen, total alkalinity and chlorophyll a were monitored in five golda farms in the Fulpur region. Water temperature ranged from 29°C to 33°C, dissolved oxygen from 2.28 to 4.13 mg/l, pH between 6.65 and 7.94, alkalinity from 44 to 70 mg/l and chlorophyll a concentration from 61.88 to 102.34 µg/l in the five investigated ponds.
The Aerobic Plate Count (APC) of the water samples was within the range of 2.0x10^6 - 2.96x10^7 CFU/ml and of the soil samples within the range of 6.9x10^6 - 7.73x10^6 CFU/g. Streptococcus sp., Bacillus sp., Escherichia coli, Staphylococcus sp., Pseudomonas sp. and Salmonella sp. were isolated from pond water and sediment. The different feed samples used for golda were analyzed for proximate composition. Moisture content ranged from 14.14-21.22%, crude protein from 20.55-44.1%, lipid from 4.67-12.54% and ash from 9.7-27.69%. The TVB-N and peroxide values of the feeds used as starter, grower and fish meal were within acceptable ranges, and the samples were free from pathogenic organisms. A training session on HACCP, water quality and post-harvest quality management of prawn was organized for the golda farmers.