869 results for 150507 Pricing (incl. Consumer Value Estimation)
Abstract:
Storage of water within a river basin is often estimated by analyzing recession flow curves, as it cannot be 'instantly' estimated with available technologies. In this study we explicitly address the estimation of 'drainable' storage, which equals the area under the 'complete' recession flow curve (i.e. a discharge vs. time curve in which discharge decreases continuously until it approaches zero). A major challenge in this regard is that recession curves are rarely 'complete' due to short inter-storm intervals; it is therefore essential to analyze and model recession flows meaningfully. We adopt the well-known Brutsaert and Nieber analytical method, which expresses the time derivative of discharge (dQ/dt) as a power-law function of Q: -dQ/dt = kQ^α. However, dQ/dt-Q analysis is not suitable for late recession flows. Traditional studies often compute α from early recession flows and assume that its value is constant for the whole recession event, but this approach gives unrealistic results when α ≥ 2, a common case. We address this issue here by using the recently proposed geomorphological recession flow model (GRFM), which exploits the dynamics of active drainage networks. According to the model, α is close to 2 for early recession flows and 0 for late recession flows. We then derive a simple expression for drainable storage in terms of the power-law coefficient k, obtained from early recession flows only, and basin area. Using 121 complete recession curves from 27 USGS basins, we show that predicted drainable storage matches observed drainable storage well, indicating that the model can also reliably estimate drainable storage for 'incomplete' recession events to address many challenges related to water resources. (C) 2014 Elsevier Ltd. All rights reserved.
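As an illustration of the Brutsaert-Nieber dQ/dt-Q analysis described in this abstract, the sketch below fits the power-law exponent and coefficient to a synthetic complete recession curve and integrates the hydrograph for drainable storage. The curve, k and Q0 are invented for the example, not values from the study.

```python
import numpy as np

# Synthetic 'complete' recession whose early limb obeys -dQ/dt = k*Q**2,
# so Q(t) = Q0 / (1 + k*Q0*t).  k and Q0 are illustrative, not basin data.
k, Q0 = 0.05, 20.0                         # Q0 in m^3/s
t = np.arange(0.0, 60.0, 0.25)             # days
Q = Q0 / (1.0 + k * Q0 * t)

# Brutsaert-Nieber analysis: regress log(-dQ/dt) on log(Q)
dQdt = np.gradient(Q, t)
alpha, logk = np.polyfit(np.log(Q), np.log(-dQdt), 1)
k_est = np.exp(logk)

# Drainable storage of the observed (truncated) curve: area under Q(t)
storage = float(((Q[:-1] + Q[1:]) / 2.0 * np.diff(t)).sum())
print(alpha, k_est, storage)
```

The fitted slope recovers α ≈ 2, consistent with the GRFM's early-recession regime.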
Abstract:
Probable maximum precipitation (PMP) is a theoretical concept that is widely used by hydrologists to arrive at estimates of the probable maximum flood (PMF), which are used in the planning, design and risk assessment of high-hazard hydrological structures such as flood control dams upstream of populated areas. The PMP represents the greatest depth of precipitation for a given duration that is meteorologically possible for a watershed or an area at a particular time of year, with no allowance made for long-term climatic trends. Various methods are in use for estimating PMP over a target location for different durations. The moisture maximization method and the Hershfield method are two widely used ones. The former maximizes observed storms by assuming that atmospheric moisture would rise to a very high value estimated from the maximum daily dew-point temperature. The latter is a statistical method based on the general frequency equation given by Chow. The present study provides one-day PMP estimates and PMP maps for the Mahanadi river basin based on the aforementioned methods. There is a need for such estimates and maps, as the river basin is prone to frequent floods. The utility of the constructed PMP maps in computing PMP for various catchments in the river basin is demonstrated. The PMP estimates can eventually be used to arrive at PMF estimates for those catchments. (C) 2015 The Authors. Published by Elsevier B.V.
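A minimal sketch of the Hershfield statistical estimate described above, using Chow's general frequency equation X = mean + K·S with Hershfield's frequency factor K_m. The rainfall series and the K_m value of 15 are illustrative placeholders, not Mahanadi data (in practice K_m is read from Hershfield's nomographs for the site and duration).

```python
import statistics

# Illustrative annual-maximum one-day rainfall series (mm); values are made up.
annual_max = [112, 98, 134, 87, 156, 121, 109, 143, 95, 128]

# Chow's general frequency equation, X_T = X_mean + K * S, with the
# frequency factor replaced by Hershfield's K_m (assumed 15 here).
K_m = 15.0
x_mean = statistics.mean(annual_max)
s = statistics.stdev(annual_max)
pmp_one_day = x_mean + K_m * s

print(round(pmp_one_day, 1))
```

By construction the estimate sits far above every observed annual maximum, reflecting the "no allowance for exceedance" intent of PMP.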
Abstract:
The main objective of the paper is to develop a new method to estimate the maximum magnitude (M_max) considering the regional rupture character. The proposed method is explained in detail and examined for both intraplate and active regions. Seismotectonic data were collected for both regions, and seismic study area (SSA) maps were generated for radii of 150, 300, and 500 km. The regional rupture character was established by considering the percentage fault rupture (PFR), which is the ratio of subsurface rupture length (RLD) to total fault length (TFL). PFR is used to arrive at RLD, which is in turn used to estimate the maximum magnitude for each seismic source. Maximum magnitude for both regions was estimated and compared with the existing methods for determining M_max values. The proposed method gives similar M_max values irrespective of SSA radius and seismicity. Further, seismicity parameters such as the magnitude of completeness (M_c), the 'a' and 'b' parameters and the maximum observed magnitude (M_max^obs) were determined for each SSA and used to estimate M_max with all the existing methods. It is observed from the study that existing deterministic and probabilistic M_max estimation methods are sensitive to SSA radius, M_c, the a and b parameters and M_max^obs values. However, M_max determined from the proposed method is a function of the rupture character rather than the seismicity parameters. It was also observed that the intraplate region has a lower PFR than the active seismic region.
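The PFR-based chain described above (PFR → RLD → M_max) can be sketched as follows. The TFL and PFR values are illustrative, and the magnitude-vs-rupture-length regression used is the Wells and Coppersmith (1994) all-slip-type subsurface relation, shown here only as a stand-in for whatever relation the paper adopts.

```python
import math

# Sketch: estimate M_max from regional rupture character.
# PFR = RLD / TFL, so RLD = PFR * TFL; magnitude then follows from a
# rupture-length regression (Wells & Coppersmith 1994, all slip types,
# subsurface rupture length: M = 4.38 + 1.49*log10(RLD)).
def m_max_from_pfr(total_fault_length_km, pfr):
    rld = pfr * total_fault_length_km      # subsurface rupture length (km)
    return 4.38 + 1.49 * math.log10(rld)

# An intraplate source (low PFR) vs an active-region source (higher PFR)
m_intraplate = m_max_from_pfr(100.0, 0.05)   # ~5 km rupture
m_active = m_max_from_pfr(100.0, 0.30)       # ~30 km rupture
print(round(m_intraplate, 2), round(m_active, 2))
```

Note how the estimate depends only on the rupture character (PFR, TFL), not on the catalogue-derived seismicity parameters, which is the point the abstract makes about SSA-radius insensitivity.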
Abstract:
A new approach is proposed to estimate the thermal diffusivity of optically transparent solids at ambient temperature, based on the velocity of an effective temperature point (ETP); the concept is corroborated using a two-beam interferometer. 1D unsteady heat flow via step-temperature excitation is interpreted as a 'micro-scale rectilinear translatory motion' of an ETP. The velocity-dependent function is extracted by revisiting the Fourier heat diffusion equation. The relationship between the velocity of the ETP and thermal diffusivity is modeled using a standard solution. Under optimized thermal excitation, the product of the velocity of the ETP and the distance is a new constitutive equation for the thermal diffusivity of the solid. The experimental approach involves establishing a 1D unsteady heat flow inside the sample through step-temperature excitation. Within the moving isothermal surfaces, the ETP is identified using a two-beam interferometer. The arrival time of the ETP at a fixed distance from the heat source is measured, and its velocity is calculated. The velocity of the ETP and a given distance are sufficient to estimate the thermal diffusivity of a solid. The proposed method is experimentally verified for BK7 glass samples, and the measured results closely match the reported value.
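A minimal sketch of the ETP-velocity estimate described above, assuming the quoted constitutive relation (thermal diffusivity = ETP velocity × distance) holds under the optimized step-temperature excitation. The distance and arrival time are illustrative numbers, not the paper's measurements.

```python
# Measure the arrival time of the ETP at a fixed distance from the heat
# source, compute its velocity, then apply alpha = v * d.
d = 2.0e-3           # fixed distance from the heat source (m), illustrative
arrival_time = 8.0   # measured ETP arrival time at d (s), illustrative

v = d / arrival_time       # velocity of the effective temperature point (m/s)
alpha = v * d              # thermal diffusivity (m^2/s)
print(alpha)
```

With these placeholder numbers the result is on the order of 1e-7 m^2/s, the typical magnitude for optical glasses.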
Abstract:
LY12-cz aluminium alloy sheet specimens with a central hole were tested under constant amplitude loading, Rayleigh narrow-band random loading and a typical fighter broad-band random loading. Fatigue life was estimated by means of the nominal stress approach and Miner's rule. Stress cycles were counted by the rainflow, range and peak value counting methods, respectively. The estimated results were compared with the test results. The effects of random loading sequence and small load cycles on fatigue life were also studied.
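The life-estimation step described above can be sketched with Miner's linear damage rule applied to counted stress cycles. The S-N curve constants and the cycle counts below are illustrative placeholders, not the LY12-cz test data.

```python
# Miner's-rule life estimate from counted stress cycles.
# Assumed Basquin-form S-N curve: N = C * S**(-m); constants are illustrative.
C, m = 1.0e12, 3.0

def cycles_to_failure(stress_amplitude):
    return C * stress_amplitude ** (-m)

# (stress amplitude in MPa, counted occurrences per loading block),
# e.g. as produced by rainflow counting
counted = [(120.0, 500), (90.0, 2000), (60.0, 10000)]

damage_per_block = sum(n / cycles_to_failure(s) for s, n in counted)
blocks_to_failure = 1.0 / damage_per_block   # Miner: failure when damage = 1
print(blocks_to_failure)
```

Different counting methods (rainflow, range, peak value) yield different `counted` tables from the same load history, which is why the abstract compares them.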
Abstract:
Age composition of catch, and growth rate, of yellowfin tuna have been estimated by Hennemuth (1961a) and Davidoff (1963). The relative abundance and instantaneous total mortality rate of yellowfin tuna during 1954-1959 have been estimated by Hennemuth (1961b). It is now possible to extend this work because more data are available; these include data for 1951-1954, which were previously not available, and data for 1960-1962, which were collected subsequent to Hennemuth's (1961b) publication. In that publication, Hennemuth estimated the total instantaneous mortality rate (Z) during the entire time period a year class is present in the fishery following full recruitment. However, this method may lead to biased estimates of abundance, and hence of mortality rates, because of both seasonal migrations into or out of specific fishing areas and possible seasonal differences in availability or vulnerability of the fish to the fishing gear. Schaefer, Chatwin and Broadhead (1961) and Joseph et al. (1964) have indicated that seasonal migrations of yellowfin occur. A method of estimating mortality rates which is not biased by seasonal movements would be of value in computations of population dynamics. The method of analysis outlined and used in the present paper may obviate this bias by comparing the abundance of an individual yellowfin year class, following its period of maximum abundance, in an individual area during a specific quarter of the year with its abundance in the same area one year later. The method was suggested by Gulland (1955) and used by Chapman, Holt and Allen (1963) in assessing Antarctic whale stocks. This method, and the results of its use with data for yellowfin caught in the eastern tropical Pacific from 1951-1962, are described in this paper.
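The year-apart comparison described in this abstract can be sketched as follows: the total instantaneous mortality rate Z follows from the ratio of an abundance index for the same year class, in the same area and quarter, one year apart. The CPUE values are illustrative, not the 1951-1962 yellowfin data.

```python
import math

# Gulland-style estimate: compare the abundance index (e.g. catch per
# unit effort) of one year class in a given area-quarter with the same
# year class, same area-quarter, one year later.  Values are illustrative.
cpue_this_year = 5.2    # index in year t, after peak abundance
cpue_next_year = 2.0    # same year class, same area-quarter, year t+1

Z = math.log(cpue_this_year / cpue_next_year)   # total instantaneous mortality
survival = math.exp(-Z)                          # annual survival fraction
print(round(Z, 3), round(survival, 3))
```

Because both indices come from the same area and season, seasonal migration in or out of the area affects both observations alike and cancels from the ratio, which is the bias-avoidance argument made above.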
Abstract:
In three essays we examine user-generated product ratings with aggregation. While recommendation systems have been studied extensively, this simple type of recommendation system has been neglected, despite its prevalence in the field. We develop a novel theoretical model of user-generated ratings that improves upon previous work in three ways: it considers rational agents and allows them to abstain from rating when rating is costly; it incorporates rating aggregation (such as averaging ratings); and it considers the effect of multiple simultaneous raters on rating strategies. In the first essay we provide a partial characterization of equilibrium behavior. In the second essay we test the theoretical model in the laboratory, and in the third we apply established behavioral models to the data generated in the lab. This study provides clues to the prevalence of extreme-valued ratings in field implementations. We show theoretically that in equilibrium, rating distributions do not represent the value distributions of sincere ratings. Indeed, we show that if rating strategies follow a set of regularity conditions, then in equilibrium the rate at which players participate is increasing in the extremity of agents' valuations of the product. This theoretical prediction is realized in the lab. We also find that human subjects show a disproportionate predilection for sincere rating, and that when they do send insincere ratings, they are almost always in the direction of exaggeration. Both sincere and exaggerated ratings occur with great frequency despite the fact that such rating strategies are not in subjects' best interest. We therefore apply the behavioral concepts of quantal response equilibrium (QRE) and cursed equilibrium (CE) to the experimental data. Together, these theories explain the data significantly better than does a theory of rational, Bayesian behavior, accurately predicting key comparative statics. However, the theories fail to predict the high rates of sincerity, and it is clear that a better theory is needed.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. Now there are a plethora of models, based on different assumptions, applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments in which subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices; theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests, which imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
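The sequential select-test-update loop described above can be sketched in miniature. For simplicity this toy greedily minimizes expected posterior entropy (i.e. maximizes information gain) rather than implementing the EC2 criterion itself, and the hypotheses, tests and noise rates are invented for the example.

```python
import math

# Toy sequential Bayesian test selection over three candidate theories.
hypotheses = ["H1", "H2", "H3"]
prior = {h: 1.0 / 3.0 for h in hypotheses}

# p_choose_A[test][h]: probability that hypothesis h predicts choice "A"
# on that test, with a built-in error rate (noisy responses).
p_choose_A = {
    "test1": {"H1": 0.9, "H2": 0.9, "H3": 0.1},
    "test2": {"H1": 0.9, "H2": 0.1, "H3": 0.1},
}

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def posterior(belief, test, chose_A):
    post = {h: belief[h] * (p_choose_A[test][h] if chose_A
                            else 1.0 - p_choose_A[test][h])
            for h in belief}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

def expected_entropy(belief, test):
    pA = sum(belief[h] * p_choose_A[test][h] for h in belief)
    return (pA * entropy(posterior(belief, test, True))
            + (1.0 - pA) * entropy(posterior(belief, test, False)))

# Greedy step: run whichever test most reduces expected uncertainty.
best = min(p_choose_A, key=lambda t: expected_entropy(prior, t))
print(best)
```

After observing the subject's response, `posterior` becomes the new belief and the loop repeats; BROAD's contribution is replacing this information-gain step with the adaptively submodular EC2 objective, which is robust to the noise that degrades information gain.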
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
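The discount functions compared in the second experiment can be written down in a minimal form. Parameter values below are illustrative, not estimates from the 40 subjects.

```python
# Discount factors for a reward delayed by t periods (illustrative parameters).
def exponential(t, delta=0.9):
    return delta ** t

def hyperbolic(t, k=0.5):
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.9):
    # "present bias": immediate rewards are undiscounted, delayed ones
    # take an extra one-time penalty beta
    return 1.0 if t == 0 else beta * delta ** t

def generalized_hyperbolic(t, a=1.0, b=2.0):
    return (1.0 + a * t) ** (-b / a)

# Present-biased agents discount the near future steeply: the one-period
# drop is larger under quasi-hyperbolic than under exponential discounting.
print(exponential(1), quasi_hyperbolic(1))
```

The "smaller-sooner vs larger-later" choices in the experiment discriminate between these functions precisely because they disagree most about short delays.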
We also test the predictions of behavioral theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in a distinctly different way from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity explains. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion, and strategies for competitive pricing.
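The reference-dependent utility inside a logit discrete-choice model can be sketched as follows. The piecewise-linear utility form, the loss-aversion coefficient and the prices are all illustrative, not the retailer's data or the thesis's estimated specification.

```python
import math

# Reference-dependent (loss-averse) utility: paying less than the
# reference price is a gain, paying more is a loss weighted by lam > 1.
def loss_averse_utility(price, ref_price, a=0.1, lam=2.25):
    gain = ref_price - price
    if gain >= 0:
        return a * gain
    return lam * a * gain          # losses loom larger than gains

def logit_choice_prob(utilities):
    expu = [math.exp(u) for u in utilities]
    total = sum(expu)
    return [e / total for e in expu]

# An item discounted below its reference price vs a substitute priced
# at its reference: the discount inflates demand beyond price elasticity.
probs = logit_choice_prob([loss_averse_utility(80.0, 100.0),
                           loss_averse_utility(100.0, 100.0)])
print(probs)
```

When the discount ends and the reference has adapted downward, the same asymmetry makes the formerly discounted item feel like a loss, pushing choice probability toward the substitute, which is the excess-substitution effect described above.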
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
21 p.
Abstract:
Growth is one of the most important characteristics of cultured species. The objective of this study was to determine the fit of linear, log-linear, polynomial, exponential and logistic functions to the growth curves of Macrobrachium rosenbergii, using weekly records of live weight, total length, head length, claw length, and last segment length from 20 to 192 days of age. The models were evaluated according to the coefficient of determination (R2) and the error sum of squares (ESS); such models also help in selecting breeders in selective breeding programs. Twenty full-sib families of 400 post-larvae (PLs) each were stocked in 20 separate hapas and reared for 8 weeks, after which a total of 1200 animals were transferred to earthen ponds and reared up to 192 days. The R2 values of the models ranged from 56 to 96 for overall body weight, with the logistic model being the highest. The R2 values for total length ranged from 62 to 90, with the logistic model being the highest. For head length, the R2 values ranged between 55 and 95, with the logistic model being the highest. The R2 values for claw length ranged from 44 to 94, with the logistic model being the highest. For last segment length, R2 values ranged from 55 to 80, with the polynomial model being the highest. However, the log-linear model registered the lowest ESS value, followed by the linear model, for overall body weight, while the exponential model showed the lowest ESS value, followed by the log-linear model, for head length. For total length, the lowest ESS value was given by the log-linear model followed by the logistic model, and for claw length the exponential model showed the lowest ESS value followed by the log-linear model. For last segment length, the linear model showed the lowest ESS value followed by the log-linear model. The model that shows the highest R2 value together with a low ESS value is generally considered the best-fit model. Among the five models tested, the logistic, log-linear and linear models were found to be the best models for overall body weight, total length and head length, respectively. For claw length and last segment length, the log-linear model was found to be the best model. These models can be used to predict growth rates in M. rosenbergii. However, further studies need to be conducted with more growth traits taken into consideration.
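The model-scoring step described above can be sketched for the logistic curve, the best fit reported for body weight. The growth data and logistic parameters below are illustrative, not the M. rosenbergii records; a real analysis would estimate the parameters (e.g. with scipy.optimize.curve_fit) before computing R2 and ESS.

```python
import math

# Logistic growth curve: asymptotic weight A, shape b, growth rate k.
def logistic(t, A=60.0, b=20.0, k=0.05):
    return A / (1.0 + b * math.exp(-k * t))

ages = [20, 48, 76, 104, 132, 160, 192]                    # days
observed = [7.5, 20.1, 42.8, 53.2, 57.9, 60.3, 59.5]       # g (illustrative)

predicted = [logistic(t) for t in ages]
ess = sum((o - p) ** 2 for o, p in zip(observed, predicted))   # error sum of squares
mean_obs = sum(observed) / len(observed)
tss = sum((o - mean_obs) ** 2 for o in observed)
r2 = 1.0 - ess / tss                                           # coefficient of determination
print(round(r2, 3), round(ess, 2))
```

Repeating this for each candidate function and each trait, then ranking by high R2 and low ESS, reproduces the comparison procedure of the study.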
Abstract:
The Aquaculture for Income and Nutrition (AIN) project, implemented by WorldFish and funded by USAID, aims at increasing aquaculture production in 20 districts of Southern Bangladesh (Greater Khulna, Greater Barisal, Greater Jessore and Greater Faridpur) to reduce poverty and enhance nutritional status. As part of its initial scoping activities, WorldFish commissioned this value chain assessment of the market chains of carp fish seed (spawn, fry and fingerlings) in the southern region of Bangladesh. The purpose of this study is to obtain a clearer understanding of the volumes of fish produced and consumed in Southern Bangladesh and their origin (by system and location) and destination (by type of market, type of consumer and location), and to gain a clearer understanding of potential market-based solutions to farmers' problems, which the project could implement.
Abstract:
In this paper, a decimative spectral estimation method based on eigenanalysis and SVD (singular value decomposition) is presented and applied to speech signals in order to estimate formant/bandwidth values. The underlying model decomposes a signal into complex damped sinusoids. The algorithm is applied not only to speech samples but also to a small number of autocorrelation coefficients of a speech frame, for finer estimation. Correct estimation of formant/bandwidth values depends on the model order and thus the requested number of poles. Overall, experimental results indicate that the proposed methodology successfully estimates formant trajectories and their respective bandwidths.
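The damped-sinusoid pole estimation underlying this family of methods can be sketched with SVD-truncated linear prediction (Kumaresan-Tufts style), a close relative of the eigenanalysis approach described; this is not the paper's exact algorithm, and the signal is a synthetic one-formant toy rather than speech.

```python
import numpy as np

# Synthetic damped sinusoid standing in for one formant.
fs = 8000.0                      # sample rate (Hz)
f0, bw = 700.0, 50.0             # "formant" frequency and bandwidth (Hz)
r = np.exp(-np.pi * bw / fs)     # pole radius implied by the bandwidth
n = np.arange(200)
x = (r ** n) * np.cos(2 * np.pi * f0 / fs * n)

# Forward linear prediction: x[j+p] ~ sum_i a_i x[j+i], via a data matrix.
p = 8                            # prediction order (>= number of true poles)
H = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
b = x[p:]

# SVD-truncated least squares keeps only the rank-2 signal subspace.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
rank = 2
a = Vt[:rank].T @ ((U[:, :rank].T @ b) / s[:rank])

# Roots of the prediction polynomial give the poles; the largest-radius
# root is the signal pole, the rest are extraneous (inside the unit circle).
poles = np.roots(np.r_[1.0, -a[::-1]])
sig = poles[np.argmax(np.abs(poles))]
f_est = abs(np.angle(sig)) * fs / (2 * np.pi)        # formant frequency (Hz)
bw_est = -np.log(np.abs(sig)) * fs / np.pi           # bandwidth (Hz)
print(round(f_est, 1), round(bw_est, 1))
```

The pole angle recovers the formant frequency and the pole radius its bandwidth, which is exactly the formant/bandwidth readout the abstract refers to.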
Abstract:
Although partially observable Markov decision processes (POMDPs) have shown great promise as a framework for dialog management in spoken dialog systems, important scalability issues remain. This paper tackles the problem of scaling slot-filling POMDP-based dialog managers to many slots with a novel technique called composite summary point-based value iteration (CSPBVI). CSPBVI creates a "local" POMDP policy for each slot; at runtime, each slot nominates an action and a heuristic chooses which action to take. Experiments in dialog simulation show that CSPBVI successfully scales POMDP-based dialog managers while preserving performance gains over baseline techniques and robustness to errors in user model estimation. Copyright © 2006, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.
Abstract:
Cost-profit analysis and market testing of some value-added products from silver carp, such as fish mince block, fish sausage, fish ball, fish stick and fish burger, were carried out during April 2001 to March 2002. The study also explored the possibility of involving rural low-income people in the production and marketing of such products. The production of silver carp was higher in greater Jessore and Mymensingh districts, but the price remained low during the peak harvesting season in October to November. The price varied with the size of the fish, season, market characteristics and effective demand of the buyers. The price of fish of about 500 g was found to be Tk. 20-25/kg in the rural markets. The average size of fish in the rural markets was 350-550 g, while in the urban markets it was 700-1,200 g. The cost of production of the value-added products and the profit margin were assessed on the basis of the market price of the raw material as well as that of the finished products, transportation, storage and marketing costs. Profit margins of 34%, 39%, 81% and 31% of sales price were obtained for fish sausage, fish ball, fish stick and fish burger, respectively. Actual production cost could be minimized if the fish is purchased directly from the farmers. Consumer acceptance and marketability tests showed that both rural and urban people preferred fish ball to fish sausage. However, responses towards the taste, flavor and color of fish ball and fish sausage were found to vary with the occupation and age of the consumers. A correlation was observed between age group and acceptance of new products. Fish ball, fish stick and fish burger were found to be the most preferable items to the farmers because of the easy formulation process with common utensils. Good marketing linkage and the requirement of capital were identified as prerequisites for operating a small-scale business on value-added fish products.
Abstract:
Silver carp, Hypophthalmichthys molitrix, contributes significantly to total aquaculture fish production in Bangladesh. However, its low market price has become a serious concern to fish farmers. The suitability of silver carp mince for the production of various value-added products (VAPs) - surimi, fish sausage, fish burger and fish stick - was studied during April-September 2000 to ensure more appropriate and profitable utilization of silver carp. Surimi/frozen mince block was produced by washing the silver carp mince with 0.1% NaCl for 7-8 min (4-5 min agitation and 3-4 min settling). A two-step heating schedule of incubation at 50°C for 2 h and cooking at 95°C for 30 min gave a well-textured, good-quality consumer product. With the addition of cryoprotectants, surimi could be kept frozen for 5 months without losing much of its textural and sensory quality. A mince-mix and a batter with different ingredients and spices were formulated to produce fish burger, using potato mash as the binding agent. A fish flake-mix and a batter with different ingredients and spices were formulated to prepare fish stick, using both potato starch and potato mash as filler ingredients. Unwashed and washed frozen mince block or fresh flesh of silver carp was used to prepare fish sausage by heating at 100°C for 1 h after incubating at 50°C for 2 h. A spice-mix formulated with various local spices at the rate of 1.0-1.2% gave good texture and flavor to the sausage. An attractive sausage-pink color was developed by combining three food-grade astaxanthin colors. Products prepared with potato starch, potato mash and rice mash had an acceptable bacterial load under refrigeration (5°C) for up to 8 days and at room temperature (28°C) for up to 3 days. No coliform bacteria were found in the products prepared.