966 results for product model
Abstract:
Purpose – This article aims to investigate whether intermediaries reduce loss aversion in the context of a high-involvement non-frequently purchased hedonic product (tourism packages). Design/methodology/approach – The study incorporates the reference-dependent model into a multinomial logit model with random parameters, which controls for heterogeneity and allows representation of different correlation patterns between non-independent alternatives. Findings – Differentiated loss aversion is found: consumers buying high-involvement non-frequently purchased hedonic products are less loss averse when using an intermediary than when dealing with each provider separately and booking their services independently. This result can be taken as evidence of the consumer-based added value provided by intermediaries. Practical implications – Knowing the effect of an increase in their prices is crucial for tourism collective brands (e.g. “sun and sea”, “inland”, “green destinations”, “World Heritage destinations”). This is especially relevant today because many destinations have lowered prices to attract tourists (although, in the future, they will have to put prices back up to their normal levels). The negative effect of raising prices can be absorbed more easily via indirect channels than by individual providers, as the influence of loss aversion is lower for the former than for the latter. The key implication is that intermediaries can – and should – add value in competition with direct e-tailing. Originality/value – Research on loss aversion in retailing has been prolific, but it has focused exclusively on low-involvement, frequently purchased products without distinguishing between direct and indirect distribution channels. Much less is known about other types of products, such as high-involvement non-frequently purchased hedonic products. This article focuses on the latter and analyzes the different patterns of loss aversion in direct and indirect channels.
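To make the reference-dependent mechanism concrete, here is a minimal sketch in Python of how asymmetric gain/loss coefficients around a reference price produce channel-specific loss aversion in a multinomial logit; the coefficient values and function names are illustrative assumptions, not the authors' estimated model.

import numpy as np

# Reference-dependent utility with loss aversion: beta_loss > beta_gain means
# a price rise above the reference hurts more than an equal cut helps.
def reference_dependent_utility(price, reference_price, beta_gain, beta_loss, base_utility=0.0):
    gain = max(reference_price - price, 0.0)   # price below the reference point
    loss = max(price - reference_price, 0.0)   # price above the reference point
    return base_utility + beta_gain * gain - beta_loss * loss

def logit_probabilities(utilities):
    # Multinomial logit choice probabilities over the alternatives.
    expu = np.exp(np.asarray(utilities, dtype=float))
    return expu / expu.sum()

# Example: the same 10-unit price rise is penalised more heavily in the channel
# with the larger loss-aversion coefficient (here, the direct channel).
u_direct = reference_dependent_utility(110.0, 100.0, beta_gain=0.05, beta_loss=0.15)
u_intermediary = reference_dependent_utility(110.0, 100.0, beta_gain=0.05, beta_loss=0.08)
print(logit_probabilities([u_direct, u_intermediary]))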
Abstract:
This paper presents a new technique for partial product reduction based on the use of look-up tables for efficient processing. We describe how to construct counter devices from pre-calculated data and how to integrate them into the whole operation. The development of reduction-tree organizations for this kind of device exploits the inherent integration benefits of computer memories and offers an alternative implementation to classic operation methods. In our experiments we therefore compare our implementation model with a CMOS technology model in homogeneous terms.
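As an illustration of the idea, the following Python sketch builds a pre-calculated (7,3) counter table and uses it to reduce partial-product columns until only a final two-row addition remains; it is a behavioural model under assumed conventions (the paper targets hardware memories), and the names are hypothetical.

# Pre-calculated counter: maps any 7 column bits, packed into an index, to
# their 3-bit sum, so a memory look-up replaces adder logic.
COUNTER_7_3 = [bin(i).count("1") for i in range(128)]

def reduce_partial_products(columns):
    # columns[i] holds the partial-product bits of weight 2**i. Columns are
    # reduced with the LUT counter until each holds at most two bits,
    # ready for a final carry-propagate addition.
    while any(len(col) > 2 for col in columns):
        new_columns = [[] for _ in range(len(columns) + 3)]
        for i, col in enumerate(columns):
            while len(col) >= 2:
                group, col = col[:7], col[7:]
                packed = sum(bit << k for k, bit in enumerate(group))
                total = COUNTER_7_3[packed]          # table look-up
                for k in range(3):                   # redistribute the 3-bit sum
                    if (total >> k) & 1:
                        new_columns[i + k].append(1)
            new_columns[i].extend(col)               # at most one leftover bit
        columns = new_columns
    return columns

# Example: reduce a small partial-product matrix (weights 1, 2 and 4).
print(reduce_partial_products([[1, 1, 1], [1, 1, 1, 1, 1], [1, 1, 1]]))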
Abstract:
Context. The first soft gamma-ray repeater was discovered over three decades ago, and was subsequently identified as a magnetar, a class of highly magnetised neutron star. It has been hypothesised that these stars power some of the brightest supernovae known, and that they may form the central engines of some long-duration gamma-ray bursts. However, there is currently no consensus on the formation channel(s) of these objects. Aims. The presence of a magnetar in the starburst cluster Westerlund 1 implies a progenitor with a mass ≥40 M⊙, which favours its formation in a binary that was disrupted at supernova. To test this hypothesis we conducted a search for the putative pre-SN companion. Methods. This was accomplished via a radial velocity survey to identify high-velocity runaways, with subsequent non-LTE model atmosphere analysis of the resultant candidate, Wd1-5. Results. Wd1-5 closely resembles the primaries in the short-period binaries Wd1-13 and 44, suggesting a similar evolutionary history, although it currently appears single. It is overluminous for its spectroscopic mass and we find evidence of He- and N-enrichment, O-depletion and, critically, C-enrichment, a combination of properties that is difficult to explain under single-star evolutionary paradigms. We infer a pre-SN history for Wd1-5 which supposes an initial close binary comprising two stars of comparable (~41 M⊙ + 35 M⊙) masses. Efficient mass transfer from the initially more massive component leads to the mass-gainer evolving more rapidly, initiating luminous blue variable/common envelope evolution. Reverse, wind-driven mass transfer during its subsequent WC Wolf-Rayet phase leads to the carbon pollution of Wd1-5, before a type Ibc supernova disrupts the binary system. Under the assumption of a physical association between Wd1-5 and J1647-45, the secondary is identified as the magnetar progenitor; its common envelope evolutionary phase prevents spin-down of its core prior to SN, and the seed magnetic field for the magnetar forms either in this phase or during the earlier episode of mass transfer in which it was spun up. Conclusions. Our results suggest that binarity is a key ingredient in the formation of at least a subset of magnetars, by preventing spin-down via core-coupling and potentially generating a seed magnetic field. The apparent formation of a magnetar in a type Ibc supernova is consistent with recent suggestions that superluminous type Ibc supernovae are powered by the rapid spin-down of these objects.
Abstract:
This raster layer represents surface elevation and bathymetry data for the Boston Region, Massachusetts. It was created by merging portions of MassGIS Digital Elevation Model 1:5,000 (2005) data with NOAA Estuarine Bathymetric Digital Elevation Models (30 m.) (1998). The DEM data were derived from the digital terrain models produced as part of the MassGIS 1:5,000 Black and White Digital Orthophoto imagery project. Cell size is 5 meters by 5 meters. Each cell has a floating-point value, in meters, representing its elevation above or below sea level.
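A minimal sketch of the kind of raster merge described above, assuming the two source grids are available locally as GeoTIFFs in a shared horizontal coordinate system and vertical datum (the file names are hypothetical):

import rasterio
from rasterio.merge import merge

with rasterio.open("massgis_dem_5m.tif") as dem, \
        rasterio.open("noaa_estuarine_bathymetry_30m.tif") as bathy:
    mosaic, transform = merge([dem, bathy], res=5)   # mosaic onto a 5 m grid
    meta = dem.meta.copy()
    meta.update(driver="GTiff", count=mosaic.shape[0], height=mosaic.shape[1],
                width=mosaic.shape[2], transform=transform, dtype="float32")

with rasterio.open("boston_elevation_bathymetry_5m.tif", "w", **meta) as dst:
    dst.write(mosaic.astype("float32"))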
Abstract:
Hydrographers have traditionally referred to the nearshore area as the "white ribbon" area due to the challenges associated with collecting elevation data in this highly dynamic transitional zone between terrestrial and marine environments. Accordingly, available information in this zone is typically characterised by a range of datasets from disparate sources. In this paper we propose a framework to 'fill' the white ribbon area of a coral reef system by integrating multiple elevation and bathymetric datasets, acquired by a suite of remote-sensing technologies, into a seamless digital elevation model (DEM). A range of datasets are integrated, including field-collected GPS elevation points, terrestrial and bathymetric LiDAR, single and multibeam bathymetry, nautical chart depths and empirically derived bathymetry estimates from optical remote sensing imagery. The proposed framework ranks data reliability internally, thereby avoiding the requirement to quantify absolute error, and results in a high-resolution, seamless product. Nested within this approach is an effective, spatially explicit technique for improving the accuracy of bathymetry estimates derived empirically from optical satellite imagery by modelling the spatial structure of the residuals. The approach was applied to data collected on and around Lizard Island in northern Australia. The framework holds promise for filling the white ribbon zone in coastal areas characterised by similar data availability scenarios. The seamless DEM is referenced to the MGA Zone 55 (GDA 1994) horizontal coordinate system and the mean sea level (MSL) vertical datum, and has a spatial resolution of 20 m.
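The residual-modelling step lends itself to a short sketch. The Python fragment below corrects an empirically derived bathymetry surface using residuals computed at reliable soundings; the names are assumptions, and plain interpolation stands in for whatever spatial model of the residuals is preferred (e.g. kriging).

import numpy as np
from scipy.interpolate import griddata

def correct_satellite_bathymetry(sat_xy, sat_depth, control_xy, control_depth):
    # sat_xy (n, 2) and sat_depth (n,) are the satellite-derived grid;
    # control_xy and control_depth are reliable soundings (multibeam, LiDAR).
    sat_at_control = griddata(sat_xy, sat_depth, control_xy, method="linear")
    residuals = control_depth - sat_at_control        # reliable minus satellite
    # Model the spatial structure of the residuals and apply the correction.
    residual_surface = griddata(control_xy, residuals, sat_xy,
                                method="linear", fill_value=0.0)
    return sat_depth + residual_surface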
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-03
Abstract:
Warranty is an important element of marketing new products. The servicing of warranty results in additional costs to the manufacturer. Warranty logistics deals with various issues relating to the servicing of warranty. Proper management of warranty logistics is needed not only to reduce the warranty servicing cost but also to ensure customer satisfaction as customer dissatisfaction has a negative impact on sales and revenue. Unfortunately, warranty logistics has received very little attention. The paper links the literature on warranty and on logistics and then discusses the different issues in warranty logistics. It highlights the challenges and identifies some research topics of potential interest to operational researchers. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
Mineral processing plants use two main processes: comminution and separation. The objective of the comminution process is to break complex particles consisting of numerous minerals into smaller, simpler particles in which each particle consists primarily of only one mineral. The process in which the mineral composition distribution in particles changes due to breakage is called 'liberation'. The purpose of separation is to separate particles consisting of valuable mineral from those containing non-valuable mineral. The energy required to break particles to fine sizes is expensive, and therefore the mineral processing engineer must design the circuit so that the breakage of liberated particles is reduced in favour of breaking composite particles. In order to effectively optimize a circuit through simulation it is necessary to predict how the mineral composition distributions change due to comminution. Such a model is called a 'liberation model for comminution'. It was generally considered that such a model should incorporate information about the ore, such as its texture. However, the relationship between the feed and product particles can be estimated using a probability method, where the probability is defined as the probability that a feed particle of a particular composition and size will form a product particle of a particular size and composition. The model is based on maximizing the entropy of this probability subject to mass and composition constraints. This methodology allows a liberation model to be developed not only for binary particles but also for particles consisting of many minerals. Results from applying the model to a real plant ore are presented. A laboratory ball mill was used to break the particles, and the results from this experiment were used to estimate the kernel that represents the relationship between parent and progeny particles. A second feed, consisting primarily of heavy particles subsampled from the main ore, was then ground through the same mill. The results from the first experiment were used to predict the product of the second experiment, and the agreement between the predicted and actual results is very good. More extensive validation is nevertheless recommended to fully evaluate the method. (C) 2003 Elsevier Ltd. All rights reserved.
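The constrained maximum-entropy step at the core of the model can be sketched as follows in Python; the class structure is heavily simplified (a single feed particle, product classes described only by grade) and the names are hypothetical, so this illustrates the principle rather than the full liberation model.

import numpy as np
from scipy.optimize import minimize

def max_entropy_kernel(product_grades, feed_grade):
    # p[j] = mass fraction of a feed particle reporting to product class j.
    # Entropy is maximised subject to the mass constraint (fractions sum to one)
    # and the composition constraint (mineral mass is conserved).
    grades = np.asarray(product_grades, dtype=float)
    n = len(grades)

    def neg_entropy(p):
        p = np.clip(p, 1e-12, 1.0)
        return float(np.sum(p * np.log(p)))

    constraints = [
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},            # mass constraint
        {"type": "eq", "fun": lambda p: p @ grades - feed_grade},  # composition constraint
    ]
    result = minimize(neg_entropy, np.full(n, 1.0 / n),
                      bounds=[(0.0, 1.0)] * n, constraints=constraints)
    return result.x

# Example: a 60% grade feed particle split over product classes of grade 0%, 50% and 100%.
print(max_entropy_kernel([0.0, 0.5, 1.0], 0.6))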
Abstract:
This paper summarises test results that were used to validate a model and scale-up procedure for the high pressure grinding roll (HPGR) developed at the JKMRC by Morrell et al. [Morrell, Lim, Tondo, David, 1996. Modelling the high pressure grinding rolls. In: Mining Technology Conference, pp. 169-176.]. Verification of the model is based on results from four data sets that describe the performance of three industrial-scale units fitted with both studded and smooth roll surfaces. The industrial units are currently in operation within the diamond mining industry and are represented by De Beers, BHP Billiton and Rio Tinto. Ore samples from the De Beers and BHP Billiton operations were sent to the JKMRC for ore characterisation and HPGR laboratory-scale tests; Rio Tinto contributed an historical data set of tests completed during a previous research project. The results show that the modelling of the HPGR process has matured to a point where the model may be used to evaluate new comminution circuits and to optimise existing ones. The model prediction of product size distribution is good and has been found to be strongly dependent on the characteristics of the material being tested. The prediction of throughput and corresponding power draw (based on throughput) is sensitive to inconsistent gap/diameter ratios observed between laboratory-scale tests and full-scale operations. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
The international circulation of commercial theatre in the early twentieth century was driven not only from the centres of Great Britain and the USA, but by the specific enterprise and habitus of managers in ‘complementary’ production sites such as Australia, South Africa, and New Zealand. The activity of this period suggests a de-centred competitive trade in theatrical commodities – whether performers, scripts, or productions – wherein the perceived entertainment preferences and geographies of non-metropolitan centres were formative of international enterprise. The major producers were linked in complex bonds of partnerships, family, or common experience which crossed the globe. The fractures and commonalities displayed in the partnerships of James Cassius Williamson and George Musgrove, which came to dominate and shape the fortunes of the Australian industry for much of the century, indicate the contradictory commercial and artistic pressures bearing upon entrepreneurs seeking to provide high-quality entertainment and form advantageous combinations in competition with other local and international managements. Clarke, Meynell and Gunn mounted just such spirited competition from 1906 to 1911, and their story demonstrates both the opportunities and the centralizing logic bearing upon local managements shopping and dealing in a global market. The author, Veronica Kelly, works at the University of Queensland. She is presently undertaking a study of commercial stars and managements in late nineteenth- and early twentieth-century Australia, with a focus on the star performer as model of history, gender, and nation.
Abstract:
The central composite rotatable design (CCRD) was used to design an experimental program to model the effects of inlet pressure, feed density, and the length and diameter of the inner vortex finder on the operational performance of a 150-mm three-product cyclone. The ranges of the variables used in the design were: inlet pressure: 80-130 kPa; feed density: 30-60%; length of the inner vortex finder (IVF) below the outer vortex finder (OVF): 50-585 mm; diameter of the IVF: 35-50 mm. A total of 30 tests were conducted, which is 51 fewer than that required for a three-level full factorial design. Because the model allows confident performance prediction by interpolation over the range of data in the database, it was used to construct response surface graphs to describe the effects of the variables on the performance of the three-product cyclone. To obtain a simple yet realistic model, it was refitted using only the variable terms that are significant at a confidence level of 90% or greater. Considering the selected operating variables, the resultant model is significant and predicts the experimental data well. (c) 2005 Elsevier B.V. All rights reserved.
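The refitting step can be sketched as follows with Python and statsmodels; the response name and factor columns are hypothetical stand-ins for the measured performance variable and the four CCRD factors, and the analysis details of the actual study may differ.

import statsmodels.formula.api as smf

def refit_significant_terms(df, alpha=0.10):
    # Full second-order response-surface model: main effects, two-way
    # interactions and quadratic terms for the four design variables.
    full = ("response ~ (pressure + density + ivf_length + ivf_diameter)**2"
            " + I(pressure**2) + I(density**2) + I(ivf_length**2) + I(ivf_diameter**2)")
    full_fit = smf.ols(full, data=df).fit()
    # Keep only terms significant at the 90% confidence level or better (p <= 0.10).
    keep = [term for term, p in full_fit.pvalues.items()
            if term != "Intercept" and p <= alpha]
    reduced = "response ~ " + (" + ".join(keep) if keep else "1")
    return smf.ols(reduced, data=df).fit()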
Abstract:
Simplicity in design and minimal floor space requirements render the hydrocyclone the preferred classifier in mineral processing plants. Empirical models have been developed for design and process optimisation, but due to the complexity of the flow behaviour in the hydrocyclone these do not provide information on the internal separation mechanisms. To study the interaction of design variables, the flow behaviour needs to be considered, especially when modelling the new three-product cyclone. Computational fluid dynamics (CFD) was used to model the three-product cyclone, in particular the influence of the dual vortex finder arrangement on flow behaviour. From experimental work performed on the UG2 platinum ore, significant differences in the classification performance of the three-product cyclone were noticed with variations in the inner vortex finder length. Because of this, simulations were performed for a range of inner vortex finder lengths. Simulations were also conducted on a conventional hydrocyclone of the same size to enable a direct comparison of the flow behaviour between the two cyclone designs. Significantly, high velocities were observed for the three-product cyclone with an inner vortex finder extended deep into the conical section of the cyclone. CFD studies revealed that in the three-product cyclone a cylindrical air-core is observed, similar to conventional hydrocyclones. A constant-diameter air-core was observed throughout the inner vortex finder length, while no air-core was present in the annulus. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
The Gauss-Marquardt-Levenberg (GML) method of computer-based parameter estimation, in common with other gradient-based approaches, suffers from the drawback that it may become trapped in local objective function minima, and thus report optimized parameter values that are not, in fact, optimized at all. This can seriously degrade its utility in the calibration of watershed models where local optima abound. Nevertheless, the method also has advantages, chief among these being its model-run efficiency, and its ability to report useful information on parameter sensitivities and covariances as a by-product of its use. It is also easily adapted to maintain this efficiency in the face of potential numerical problems (that adversely affect all parameter estimation methodologies) caused by parameter insensitivity and/or parameter correlation. The present paper presents two algorithmic enhancements to the GML method that retain its strengths, but which overcome its weaknesses in the face of local optima. Using the first of these methods an intelligent search for better parameter sets is conducted in parameter subspaces of decreasing dimensionality when progress of the parameter estimation process is slowed either by numerical instability incurred through problem ill-posedness, or when a local objective function minimum is encountered. The second methodology minimizes the chance of successive GML parameter estimation runs finding the same objective function minimum by starting successive runs at points that are maximally removed from previous parameter trajectories. As well as enhancing the ability of a GML-based method to find the global objective function minimum, the latter technique can also be used to find the locations of many non-global optima (should they exist) in parameter space. This can provide a useful means of inquiring into the well-posedness of a parameter estimation problem, and for detecting the presence of bimodal parameter and predictive probability distributions. The new methodologies are demonstrated by calibrating a Hydrological Simulation Program-FORTRAN (HSPF) model against a time series of daily flows. Comparison with the SCE-UA method in this calibration context demonstrates a high level of comparative model run efficiency for the new method. (c) 2006 Elsevier B.V. All rights reserved.
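The second enhancement, starting successive runs at points maximally removed from previous parameter trajectories, can be illustrated with a short Python sketch; this candidate-sampling approach is a simple stand-in for the authors' procedure, and all names are assumptions.

import numpy as np

def maximally_removed_start(previous_trajectories, lower, upper,
                            n_candidates=2000, seed=0):
    # previous_trajectories: list of (n_i, n_par) arrays of parameter sets
    # visited on earlier estimation runs. Returns a new start point, within
    # the bounds, whose nearest previously visited point is farthest away.
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    visited = np.vstack(previous_trajectories)
    # Work in the unit hypercube so each parameter contributes comparably.
    visited_u = (visited - lower) / (upper - lower)
    candidates = rng.uniform(size=(n_candidates, lower.size))
    nearest = np.linalg.norm(candidates[:, None, :] - visited_u[None, :, :],
                             axis=2).min(axis=1)
    best = candidates[np.argmax(nearest)]
    return lower + best * (upper - lower)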