869 results for "price drop"
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The theoretical framework that underpins this research is Kahneman and Tversky's Prospect Theory and Thaler's Mental Accounting Theory. The research aims to evaluate consumers' behavior when discounts are offered in different patterns (in percentage and in absolute value, and for larger and smaller discounts). Two experiments were conducted to explore these patterns of behavior, and the results supported the view that the framing effect was a common occurrence. The patterns of choice of individuals in a sample changed with the way discounts were offered: the manner in which a discount is presented influences purchase intention, recommendation, and quality perception.
Abstract:
This work presents numerical simulations of two fluid flow problems involving moving free surfaces: drop impact and fluid jet buckling. The viscoelastic model used in these simulations is the eXtended Pom-Pom (XPP) model. To validate the code, numerical predictions for the drop impact problem with Newtonian and Oldroyd-B fluids are presented and compared with other methods. In particular, a benchmark of numerical simulations of an XPP drop impacting on a rigid plate is performed over a wide range of the relevant parameters. Finally, as an additional application of free surface flows of XPP fluids, the viscous jet buckling problem is simulated and discussed.
Abstract:
This paper presents an experimental study of two-phase flow patterns and pressure drop of R134a inside a 15.9 mm ID tube containing twisted-tape inserts. Experimental results were obtained in a horizontal test section for twist ratios of 3, 4, 9 and 14, mass velocities ranging from 75 to 250 kg/(m² s), and saturation temperatures of 5 and 15 °C. An unprecedented discussion of two-phase flow patterns inside tubes containing twisted-tape inserts is presented, and the effects of the flow patterns on the frictional pressure drop are carefully discussed. Additionally, a new method to predict the frictional pressure drop during two-phase flow inside tubes containing twisted-tape inserts is proposed.
Abstract:
Experimental two-phase frictional pressure drop and flow boiling heat transfer results are presented for a horizontal 2.32 mm ID stainless-steel tube using R245fa as working fluid. The frictional pressure drop data were obtained under adiabatic and diabatic conditions. Experiments were performed for mass velocities ranging from 100 to 700 kg m⁻² s⁻¹, heat fluxes from 0 to 55 kW m⁻², exit saturation temperatures of 31 and 41 °C, and vapor qualities from 0.10 to 0.99. Pressure drop gradients from 1 to 70 kPa m⁻¹ and heat transfer coefficients from 1 to 7 kW m⁻² K⁻¹ were measured. The heat transfer coefficient was found to be a strong function of the heat flux, mass velocity, and vapor quality. Five frictional pressure drop predictive methods were compared against the experimental database; the Cioncolini et al. (2009) method performed best. Six flow boiling heat transfer predictive methods were also compared against the present database. Liu and Winterton (1991), Zhang et al. (2004), and Saitoh et al. (2007) were ranked as the best methods; they predicted the experimental flow boiling heat transfer data with an average error of around 19%.
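The ranking of predictive methods by average error, as described above, can be sketched as a mean absolute percentage error (MAPE) computation. The data and method names below are illustrative placeholders, not the study's measurements:

```python
def mape(measured, predicted):
    """Mean absolute percentage error between measured and predicted values."""
    return 100.0 * sum(abs(p - m) / m for m, p in zip(measured, predicted)) / len(measured)

# Illustrative heat transfer coefficients in kW m^-2 K^-1 (not the study's data)
h_measured = [1.2, 2.5, 4.0, 6.8]
h_method_a = [1.0, 2.9, 4.6, 7.5]   # hypothetical predictions of method A
h_method_b = [0.6, 1.5, 2.2, 4.0]   # hypothetical predictions of method B

# The method with the lower MAPE is ranked best
ranking = sorted([("A", mape(h_measured, h_method_a)),
                  ("B", mape(h_measured, h_method_b))], key=lambda t: t[1])
print("best method:", ranking[0][0])
```

A statement such as "predicted with an average error around 19%" corresponds to a MAPE of roughly 19 over the whole database.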
Abstract:
Study I: Real Wage Determination in the Swedish Engineering Industry
This study uses the monopoly union model to examine the determination of real wages, and in particular the effects of active labour market programmes (ALMPs) on real wages, in the engineering industry. Quarterly data for the period 1970:1 to 1996:4 are used in a cointegration framework, utilising Johansen's maximum likelihood procedure. On the basis of the Johansen (trace) test results, vector error correction (VEC) models are created to model the determination of real wages in the engineering industry. The estimation results support the presence of a long-run wage-raising effect of rises in labour productivity, in the tax wedge, in the alternative real consumer wage and in real UI benefits. The estimation results also support the presence of a long-run wage-raising effect of positive changes in the participation rates in ALMPs, relief jobs and labour market training. This could be interpreted as meaning that the possibility of participating in an ALMP increases the utility for workers of not being employed in the industry, which in turn could increase real wages in the industry in the long run. Finally, the estimation results show evidence of a long-run wage-reducing effect of positive changes in the unemployment rate.
Study II: Intersectoral Wage Linkages in Sweden
The purpose of this study is to investigate whether the wage-setting in certain sectors of the Swedish economy affects the wage-setting in other sectors. The theoretical background is the Scandinavian model of inflation, which states that the wage-setting in the sectors exposed to international competition affects the wage-setting in the sheltered sectors of the economy. The Johansen maximum likelihood cointegration approach is applied to quarterly data on Swedish sector wages for the period 1980:1–2002:2.
Different vector error correction (VEC) models are created, based on assumptions as to which sectors are exposed to international competition and which are not. The adaptability of wages between sectors is then tested by imposing restrictions on the estimated VEC models. Finally, Granger causality tests are performed in the different restricted/unrestricted VEC models to test for sector wage leadership. The empirical results indicate considerable adaptability of wages between manufacturing, construction, the wholesale and retail trade, the central government sector and the municipalities and county councils sector. This is consistent with the assumptions of the Scandinavian model. Further, the empirical results indicate a low level of adaptability of wages between the financial sector and manufacturing, and between the financial sector and the two public sectors. The Granger causality tests provide strong evidence for the presence of intersectoral wage causality, but no evidence that any sector plays the wage-leading role assumed by the Scandinavian model.
Study III: Wage and Price Determination in the Private Sector in Sweden
The purpose of this study is to analyse wage and price determination in the private sector in Sweden during the period 1980–2003. The theoretical background is a variant of the "imperfect competition model of inflation", which assumes imperfect competition in the labour and product markets. According to the model, wages and prices are determined as a result of a "battle of mark-ups" between trade unions and firms. The Johansen maximum likelihood cointegration approach is applied to quarterly Swedish data on consumer prices, import prices, private-sector nominal wages, private-sector labour productivity and the total unemployment rate for the period 1980:1–2003:3. The chosen cointegration rank of the estimated vector error correction (VEC) model is two.
Thus, two cointegration relations are assumed: one for private-sector nominal wage determination and one for consumer price determination. The estimation results indicate that an increase in consumer prices of one per cent lifts private-sector nominal wages by 0.8 per cent. Furthermore, an increase in private-sector nominal wages of one per cent increases consumer prices by one per cent. An increase of one percentage point in the total unemployment rate reduces private-sector nominal wages by about 4.5 per cent. The long-run effects of private-sector labour productivity and import prices on consumer prices are about –1.2 and 0.3 per cent, respectively. The Rehnberg agreement of 1991–92 and the monetary policy shift in 1993 affected the determination of private-sector nominal wages, private-sector labour productivity, import prices and the total unemployment rate. The "offensive" devaluation of the Swedish krona by 16 per cent in 1982:4, together with the later move to a floating krona and the substantial depreciation that followed, affected the determination of import prices.
Abstract:
Background: This study aims to design an empirical test of the sensitivity of prescribing doctors to the price afforded by the patient, and to apply it to population data on primary care dispensations for cardiovascular disease and mental illness in the Spanish National Health System (NHS). Implications for drug policies are discussed. Methods: We used population data on 17 therapeutic groups of cardiovascular and mental illness drugs aggregated by health areas to obtain 1424 observations ((8 cardiovascular groups × 70 areas) + (9 psychotropic groups × 96 areas)). All drugs are free for pensioners. For non-pensioner patients, 10 of the 17 therapeutic groups have a reduced copayment (RC) status of only 10% of the price, with a ceiling of €2.64 per pack, while the remaining 7 groups have a full copayment (FC) rate of 40%. Differences in the average price of dispensations to pensioners and non-pensioners were modelled with multilevel regression models to test the following hypotheses: 1) for FC drugs there is a significant positive difference between the average prices of drugs prescribed to pensioners and non-pensioners; 2) for RC drugs there is no significant price differential between pensioner and non-pensioner patients; 3) the price differential of FC drugs prescribed to pensioners and non-pensioners is greater the higher the price of the drugs. Results: The average monthly price of dispensations to pensioners and non-pensioners does not differ for RC drugs, but for FC drugs pensioners receive more expensive dispensations than non-pensioners (an estimated difference of €9.74 per DDD and month). There is a positive and significant effect of the drug price on the price differential between pensioners and non-pensioners. For FC drugs, each additional euro of the drug price increases the differential by nearly half a euro (0.492). We did not find any significant differences in the intensity of the price effect among FC therapeutic groups.
Conclusions: Doctors working in the Spanish NHS seem to be sensitive to the price their patients can afford when they write prescriptions, although alternative hypotheses could also explain these results.
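The copayment rules described in the abstract can be made concrete in a short sketch. The function below encodes the reduced-copayment rule (RC: 10% of the pack price, capped at €2.64 per pack), the full-copayment rule (FC: a flat 40%), and the zero copayment for pensioners; the function name and the example prices are my own, for illustration only:

```python
def copay(pack_price, scheme, pensioner=False):
    """Out-of-pocket cost per pack under the Spanish NHS rules described above."""
    if pensioner:          # all drugs are free for pensioners
        return 0.0
    if scheme == "RC":     # reduced copayment: 10% capped at EUR 2.64 per pack
        return min(0.10 * pack_price, 2.64)
    if scheme == "FC":     # full copayment: flat 40% of the pack price
        return 0.40 * pack_price
    raise ValueError("scheme must be 'RC' or 'FC'")

# For an RC drug the patient's cost stops rising once the ceiling binds,
# while for an FC drug it grows linearly with the price -- the asymmetry
# behind hypotheses 1) and 2) above.
print(copay(60.0, "RC"), copay(60.0, "FC"))
```

This asymmetry is why a price-sensitive prescriber would be expected to choose cheaper packs for non-pensioners only in the FC groups.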
Abstract:
This paper presents a location–price equilibrium problem on a tree. A sufficient condition is given for the existence of a Nash equilibrium in a spatial competition model that incorporates price, transport, and externality costs. This condition implies that both competitors locate at the same point, a vertex that is the unique median of the tree. However, the condition is not necessary for equilibrium, and examples show that not all medians are equilibria. Finally, an application to the Tenerife tram is presented.
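The median vertex that appears in the equilibrium characterization above is the vertex minimizing the total demand-weighted distance to all vertices. A brute-force computation on a small weighted tree illustrates the concept; the tree, demands, and function name below are made up for illustration and are not the paper's model:

```python
from collections import deque

def tree_median(adj, demand):
    """Return the vertex minimizing total demand-weighted distance on a tree.

    adj:    {vertex: {neighbor: edge_length}} for an undirected tree
    demand: {vertex: customer demand (weight)}
    """
    def dists(src):
        # On a tree, each vertex is reached along its unique path,
        # so a simple traversal accumulates exact distances.
        d, q = {src: 0.0}, deque([src])
        while q:
            u = q.popleft()
            for v, w in adj[u].items():
                if v not in d:
                    d[v] = d[u] + w
                    q.append(v)
        return d

    def cost(v):
        d = dists(v)
        return sum(demand[u] * d[u] for u in adj)

    return min(adj, key=cost)

# A small path-shaped tree a - b - c - d with unit edge lengths;
# most demand sits at vertex c, which pulls the median there.
adj = {"a": {"b": 1.0}, "b": {"a": 1.0, "c": 1.0},
       "c": {"b": 1.0, "d": 1.0}, "d": {"c": 1.0}}
demand = {"a": 1.0, "b": 1.0, "c": 5.0, "d": 1.0}
print(tree_median(adj, demand))
```

In the paper's setting, the sufficient condition places both competitors at this kind of vertex when the median is unique.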
Abstract:
The present study has been carried out with the following objectives: i) to investigate the attributes of source parameters of local and regional earthquakes; ii) to estimate, as accurately as possible, M0, fc, Δσ and their standard errors, to infer their relationship with source size; iii) to quantify high-frequency earthquake ground motion and to study the source scaling. This work is based on observational data of micro, small and moderate earthquakes from three selected seismic sequences, namely Parkfield (CA, USA), Maule (Chile) and Ferrara (Italy). For the Parkfield seismic sequence, a data set of 757 repeating micro-earthquakes (0 ≤ MW ≤ 2) in 42 clusters, collected with the borehole High Resolution Seismic Network (HRSN), has been analyzed and interpreted. We used the coda methodology to compute spectral ratios and obtain accurate values of fc, Δσ and M0 for three target clusters (San Francisco, Los Angeles, and Hawaii) of our data. We also performed a general regression on peak ground velocities to obtain reliable seismic spectra of all earthquakes. For the Maule seismic sequence, a data set of 172 aftershocks of the 2010 MW 8.8 earthquake (3.7 ≤ MW ≤ 6.2), recorded by more than 100 temporary broadband stations, has been analyzed and interpreted to quantify high-frequency earthquake ground motion in this subduction zone. We fully calibrated the excitation and attenuation of ground motion in Central Chile. For the Ferrara sequence, we calculated moment tensor solutions for 20 events, from MW 5.63 (the largest main event, which occurred on May 20, 2012) down to MW 3.2, using a 1-D velocity model for the crust beneath the Pianura Padana built from all the geophysical and geological information available for the area. The PADANIA model allowed a numerical study of the characteristics of ground motion in the thick sediments of the flood plain.
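The relationship between M0, fc, and Δσ targeted in objective ii) can be sketched with the standard Brune (1970) circular-crack relations; the thesis estimates these quantities via coda spectral ratios, so this is only the textbook link between the parameters, and the numerical values below are illustrative, not from the thesis:

```python
import math

def brune_stress_drop(M0, fc, beta):
    """Static stress drop (Pa) from seismic moment M0 (N m), corner
    frequency fc (Hz) and shear-wave speed beta (m/s), using the Brune
    (1970) relations: r = 2.34*beta / (2*pi*fc), dsigma = 7*M0 / (16*r^3)."""
    r = 2.34 * beta / (2.0 * math.pi * fc)   # source radius (m)
    return 7.0 * M0 / (16.0 * r ** 3)        # stress drop (Pa)

# Illustrative values for a small earthquake (roughly Mw 4)
M0 = 1.0e15    # seismic moment, N m
fc = 5.0       # corner frequency, Hz
beta = 3500.0  # shear-wave speed, m/s
print(brune_stress_drop(M0, fc, beta) / 1e6, "MPa")
```

Because Δσ scales with fc³ at fixed M0, small errors in corner frequency translate into large errors in stress drop, which is why accurate fc estimates (objective ii) matter for the source-scaling question (objective iii).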
Abstract:
This paper presents the first full-fledged branch-and-price (bap) algorithm for the capacitated arc-routing problem (CARP). Prior exact solution techniques rely either on cutting planes or on the transformation of the CARP into a node-routing problem. The drawbacks are models with inherent symmetry, dense underlying networks, or a formulation in which the edge flows of a potential solution do not allow the reconstruction of unique CARP tours. The proposed algorithm circumvents all these drawbacks by taking the beneficial ingredients of existing CARP methods and combining them in a new way. The first step is the solution of the one-index formulation of the CARP in order to produce strong cuts and an excellent lower bound. This bound is known to be typically stronger than relaxations of a pure set-partitioning CARP model. Such a set-partitioning master program results from a Dantzig-Wolfe decomposition. In the second phase, the master program is initialized with the strong cuts, CARP tours are iteratively generated by a pricing procedure, and branching is required to produce integer solutions. This is a cut-first bap-second algorithm, and its main function is, in fact, the splitting of edge flows into unique CARP tours.
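The set-partitioning master program at the heart of the bap algorithm can be illustrated on a toy instance: each column is a feasible CARP tour with a cost and the set of required edges it serves, and the master selects a minimum-cost set of tours serving every required edge exactly once. A real implementation generates columns on the fly by pricing and solves LP relaxations; the brute-force enumeration and the toy columns below are purely illustrative:

```python
from itertools import combinations

# Toy columns: (tour name, cost, frozenset of required edges served)
columns = [
    ("t1", 5.0, frozenset({"e1", "e2"})),
    ("t2", 3.0, frozenset({"e3"})),
    ("t3", 3.0, frozenset({"e1"})),
    ("t4", 4.0, frozenset({"e2", "e3"})),
    ("t5", 9.0, frozenset({"e1", "e2", "e3"})),
]
required = frozenset({"e1", "e2", "e3"})

def best_partition(columns, required):
    """Brute-force the set-partitioning master: every required edge is
    served by exactly one selected tour, at minimum total cost."""
    best = (float("inf"), None)
    for k in range(1, len(columns) + 1):
        for subset in combinations(columns, k):
            served = [e for _, _, edges in subset for e in edges]
            # exactly-once coverage: no duplicates, all required edges hit
            if len(served) == len(set(served)) and set(served) == set(required):
                cost = sum(c for _, c, _ in subset)
                if cost < best[0]:
                    best = (cost, tuple(name for name, _, _ in subset))
    return best

print(best_partition(columns, required))
```

In the paper's cut-first bap-second scheme, this master is seeded with the cuts from the one-index formulation before pricing begins, so the enumeration above stands in for what pricing and branching accomplish implicitly.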
Abstract:
Pneumatic nebulization is the most common method of introducing liquid samples in plasma spectrometry. Despite the known limitations of these systems, such as high sample losses, these nebulizers are widely used because of their robustness. Flow-rate-dependent aerosol characteristics and pump-induced signal fluctuations have so far limited further development, and these problems become more severe the further the necessary miniaturization of such systems progresses. The novel approach of this work is based on the use of modified inkjet printer cartridges for the dosing of pL droplets. A custom-built microcontroller enables the operation of matrix-encoded HP45 cartridges with full access to all essential operating parameters. A newly designed aerosol transport chamber allowed the droplet generation system to be coupled efficiently to an ICP-MS. Compared with conventional and miniaturized nebulizers, the resulting drop-on-demand (DOD) system shows markedly increased sensitivity (8-18x, depending on the element) at slightly higher but essentially comparable signal noise. Moreover, the system's many degrees of freedom make it exceptionally flexible: the flow rate can be varied over a wide range (5 nL - 12.5 µL min⁻¹) without affecting the primary aerosol characteristics, which are set by the user through the choice of electrical parameters. Compared with the pneumatic reference system, the developed sample introduction system is less susceptible to matrix effects when real samples with high contents of dissolved solids are used. Five metals at trace concentrations (Li, Sr, Mo, Sb and Cs) were thus correctly quantified in only 12 µL of a urine reference material by external calibration without matrix matching, whereas the pneumatic reference system required the more laborious standard addition method and more than 250 µL of sample volume for an accurate determination of the analytes. In addition, a novel calibration strategy based on the dosing frequency of a dual DOD system is presented: only one standard solution and one blank are needed, instead of a series of standards of different concentrations, to generate a linear calibration function. Furthermore, extensive noise spectra were recorded by means of custom time-resolved ICP-MS. These revealed the cause of the DOD's increased signal noise, which arises mainly from the temporally non-equidistant arrival of the droplets at the detector. This measurement technique also allows individually dosed droplets to be detected, enabling a comparison of the volume distribution of the droplets detected by ICP-MS with that of the generated droplets characterized optically. This tool is extremely helpful for diagnostic investigations: besides elucidating aerosol transport processes, these studies yielded the transport efficiency of the DOD, which reaches up to 94 vol%.
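The dosing-frequency calibration strategy can be sketched under a simple assumption: with two cartridges of equal drop volume, one dosing a standard of known concentration and the other a blank, the effective delivered concentration is the frequency-weighted mixture of the two. Sweeping the frequency ratio then yields a linear calibration from a single standard and a blank. The mixture formula and all names below are my reading of the abstract, not the published implementation:

```python
def effective_concentration(f_std, f_blank, c_std):
    """Frequency-weighted effective concentration delivered by a dual
    drop-on-demand system dosing a standard and a blank, assuming
    equal drop volumes for both cartridges."""
    return c_std * f_std / (f_std + f_blank)

c_std = 10.0   # concentration of the single standard (illustrative units)
total = 100.0  # keep the total dosing frequency (Hz) constant
# Sweep the standard's share of the total frequency to generate the
# calibration levels -- no series of prepared standards is needed.
levels = [effective_concentration(f, total - f, c_std)
          for f in (0.0, 25.0, 50.0, 75.0, 100.0)]
print(levels)  # [0.0, 2.5, 5.0, 7.5, 10.0]
```

The levels are linear in the frequency share, which is what makes a linear calibration function possible from only two solutions.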