869 results for 150507 Pricing (incl. Consumer Value Estimation)


Relevance: 30.00%

Abstract:

A modern system-theory-based nonlinear control design is discussed in this paper for successful operation of an air-breathing engine operating at supersonic speed. The primary objective of the control design of such an air-breathing engine is to ensure that the engine dynamically produces a thrust that tracks a commanded thrust value as closely as possible, by regulating the fuel flow to the combustion system. However, since the engine operates in the supersonic range, an important secondary objective is to manage the shock-wave configuration in the intake section of the engine, which is manipulated by varying the throat area of the nozzle. A nonlinear sliding mode control technique has been successfully used to achieve both objectives. Since the process is faster than the actuators, independent control designs are also carried out for the actuators to assure satisfactory performance of the system. Moreover, to filter out sensor and process noise and to estimate the states so that the control design can operate on output feedback, an Extended Kalman Filter based state estimation design is also carried out. Promising simulation results suggest that the proposed control design approach is quite successful in obtaining robust performance of the air-breathing engine.
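The output-feedback loop described above hinges on a predict-correct state estimator. Below is a minimal scalar Extended Kalman Filter sketch, not the paper's engine model: the models `f`/`h`, their derivatives, and the noise covariances `Q`/`R` are placeholders for illustration.

```python
def ekf_step(x_est, P, z, f, fprime, h, hprime, Q, R):
    """One predict-update cycle of a scalar Extended Kalman Filter.
    f, h: process and measurement models; fprime, hprime: their derivatives
    (the EKF linearizes the nonlinear models about the current estimate)."""
    # Predict: propagate the estimate and its variance through the model
    x_pred = f(x_est)
    F = fprime(x_est)
    P_pred = F * P * F + Q
    # Update: correct the prediction with the noisy measurement z
    H = hprime(x_pred)
    S = H * P_pred * H + R          # innovation variance
    K = P_pred * H / S              # Kalman gain
    x_new = x_pred + K * (z - h(x_pred))
    P_new = (1 - K * H) * P_pred
    return x_new, P_new
```

Repeated over time, the gain `K` balances trust in the model against trust in the (noise-filtered) measurements.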

Relevance: 30.00%

Abstract:

A modified linear prediction (MLP) method is proposed in which the reference sensor is optimally located on the extended line of the array. The criterion of optimality is the minimization of the prediction error power, where the prediction error is defined as the difference between the reference sensor output and the weighted array outputs. It is shown that the L2-norm of the least-squares array weights attains a minimum value at the optimum spacing of the reference sensor, subject to a soft constraint on the signal-to-noise ratio (SNR). It is then described how this minimum-norm property can be used to find the optimum spacing of the reference sensor. The performance of the MLP method is studied and compared with that of the linear prediction (LP) method using resolution, detection bias, and variance as performance measures. The study reveals that the MLP method performs much better than the LP technique.
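The minimum-norm selection rule can be illustrated with a small sketch: compute least-squares prediction weights for each candidate reference spacing and keep the spacing whose weight vector has the smallest L2 norm. The `make_reference` callback and the data used below are hypothetical, not the paper's array model.

```python
import numpy as np

def ls_weights(X, d):
    """Least-squares weights predicting the reference sensor output d
    from the array snapshot matrix X (snapshots x sensors)."""
    w, *_ = np.linalg.lstsq(X, d, rcond=None)
    return w

def best_reference_spacing(make_reference, X, spacings):
    """Pick the spacing whose LS weight vector has minimum L2 norm,
    the minimum-norm property described in the abstract.
    make_reference(s) returns the reference sensor output at spacing s."""
    norms = [np.linalg.norm(ls_weights(X, make_reference(s))) for s in spacings]
    return spacings[int(np.argmin(norms))]
```

The SNR soft constraint mentioned in the abstract is omitted here; this only shows the norm-versus-spacing search.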

Relevance: 30.00%

Abstract:

Activity systems are the cognitively linked groups of activities that consumers carry out as part of their daily lives. The aim of this paper is to investigate how consumers experience value through their activities, and how services fit into the context of activity systems. A new technique for illustrating consumers' activity systems is introduced. The technique consists of identifying a consumer's activities through an interview, then quantitatively measuring how the consumer evaluates the identified activities on three dimensions: experienced benefits, sacrifices, and frequency. This information is used to create a graphical representation of the consumer's activity system, an "activityscape map". Activity systems work as infrastructure for the individual consumer's value experience. The paper contributes to the value and service literature, which currently lacks clearly described, standardized techniques for visually mapping individual consumer activity. Existing approaches are service- or relationship-focused, and are mostly used to identify activities, not to understand them. The activityscape representation provides an overview of consumers' perceptions of their activity patterns and the position of one or several services in these patterns. Comparing different consumers' activityscapes shows the differences between consumers' activity structures and provides insight into how services are used to create value within them. The paper is conceptual; an empirical illustration is used to indicate the potential of further empirical studies. The technique can be used by businesses to understand contexts of service use, which may uncover potential for business reconfiguration and customer segmentation.
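A minimal data-structure sketch of the interview output: each activity scored on the three dimensions named above. The aggregation into a single ranking score is a hypothetical illustration; the paper plots the dimensions graphically rather than collapsing them.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    benefits: float    # experienced benefits, e.g. rated 1-7
    sacrifices: float  # experienced sacrifices, e.g. rated 1-7
    frequency: float   # how often the activity is carried out

def activityscape(activities):
    """Order activities by a frequency-weighted net-value score
    (benefits minus sacrifices, times frequency) -- an assumed
    aggregation used only to give the list an overview ordering."""
    return sorted(activities,
                  key=lambda a: (a.benefits - a.sacrifices) * a.frequency,
                  reverse=True)
```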

Relevance: 30.00%

Abstract:

This study contributes to the neglect-effect literature by looking at relative trading volume in terms of value. The results for the Swedish market show a significant positive relationship between the accuracy of estimation and the relative trading volume. Market capitalisation and analyst coverage have been used as proxies for neglect in prior studies; these measures, however, do not take into account the effort analysts put in when estimating corporate pre-tax profits. I also find evidence that the industry of the firm influences the accuracy of estimation. In addition, supporting earlier findings, loss-making firms are associated with larger forecasting errors. Further, I find that the average forecast error in Sweden increased in the year 2000.
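"Accuracy of estimation" in studies of this kind is commonly operationalized as a scaled absolute forecast error; the paper's exact definition may differ, so treat this as an assumed measure for illustration.

```python
def forecast_error(actual, forecast):
    """Absolute analyst forecast error scaled by the absolute actual
    value (a common accuracy measure; smaller means more accurate)."""
    return abs(actual - forecast) / abs(actual)
```

Relating this error to relative trading volume across firms would then be a cross-sectional regression, which is outside this sketch.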

Relevance: 30.00%

Abstract:

Friction force generated in lubricated cutting of steel is experimentally estimated by recording the tangential force experienced by the spherical face of a pin rubbing against a freshly cut surface. The pin and the cutting tool are both submerged in the lubricant, and the pin is situated on the cut track to record the force. The recorded force rises instantaneously to a peak and then declines over time to a steady-state value. The peak, and not the steady-state friction, was found to be sensitive to the structure of the hydrocarbon and to the addition of additive to the oil. The configuration was designed and tested to demonstrate the influence on cutting tool friction of a reaction film which develops during cutting. Given the strong correlation between the peak friction and the existence of a tribofilm in the cutting zone, the configuration is used to determine the lower limit of a cutting speed regime, which marks the initiation of lubricant starvation, in cutting of steel using an emulsion as a cutting fluid. (C) 2010 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

Purpose – This research paper studies how the strategy of repositioning enables marketers to communicate CSR as their brand's differentiating factor. It aims at understanding how consumer perceptions can be managed to generate brand value through corporate brand repositioning when CSR is the differentiating factor. The purpose of this paper is to answer the following research question: how can consumer perceptions be managed to generate brand value through corporate brand repositioning when CSR is the differentiating factor? The two research objectives were: 1. to build a model describing the different components of consumer perceptions involved in the generation of brand value through repositioning when CSR is the differentiating factor; 2. to identify the most critical of these components for the generation of brand value during corporate brand repositioning, in the context of the case company, IKEA.

Design/methodology/approach – This paper is based on a literature review covering the logic of brand value generation, repositioning strategy, and consumer perceptions connected to CSR activities. A key concept of positioning theory, the brand's differentiating factor, was explored. Previous studies have concluded that the desirability of the differentiating factor largely determines the level of brand value creation for the target customers. The criterion of desirability rests on three dimensions: relevance, distinctiveness, and believability. A model was built in terms of these desirability dimensions. The paper takes a case study approach in which the predefined theoretical framework is tested using IKEA as the case company. When developing insights into the multifaceted nature of brand perceptions, personal interviews and individual probing are vital, as they enable interviewees to reflect on their feelings and perceptions in their own words. Data collection was therefore based on means-end questioning; qualitative interviews were conducted with 12 consumers.

Findings – The paper highlights five critical components that may determine whether IKEA will fail in its repositioning efforts. The majority of the critical components involved believability perceptions; hence, according to the findings, establishing credibility and trustworthiness for the brand in the context of CSR seems primary. The most critical believability components identified were: providing proof of responsible codes of conduct via specific and concrete CSR actions, connecting the company's products to the social cause, and building a linkage between the initial and new positioning while also weakening the old positioning.

Originality/value – Marketers' obligation is to prepare the company for future demands. Companies all over the globe have recognized the durable trend of responsibility and sustainability, and consumers' worry about the environmental and social impact of modern lifestyles is growing. This is why corporate social responsibility (CSR) provides brands an important source of differentiation and strength for the future. The strategy of repositioning enables marketers to communicate CSR as their brand's differentiating factor; this study aimed at understanding how consumer perceptions can be managed to generate brand value through such repositioning.

Relevance: 30.00%

Abstract:

The problem of estimating the time-dependent statistical characteristics of a random dynamical system is studied under two different settings. In the first, the system dynamics is governed by a differential equation parameterized by a random parameter, while in the second, this is governed by a differential equation with an underlying parameter sequence characterized by a continuous time Markov chain. We propose, for the first time in the literature, stochastic approximation algorithms for estimating various time-dependent process characteristics of the system. In particular, we provide efficient estimators for quantities such as the mean, variance and distribution of the process at any given time as well as the joint distribution and the autocorrelation coefficient at different times. A novel aspect of our approach is that we assume that information on the parameter model (i.e., its distribution in the first case and transition probabilities of the Markov chain in the second) is not available in either case. This is unlike most other work in the literature that assumes availability of such information. Also, most of the prior work in the literature is geared towards analyzing the steady-state system behavior of the random dynamical system while our focus is on analyzing the time-dependent statistical characteristics which are in general difficult to obtain. We prove the almost sure convergence of our stochastic approximation scheme in each case to the true value of the quantity being estimated. We provide a general class of strongly consistent estimators for the aforementioned statistical quantities with regular sample average estimators being a specific instance of these. We also present an application of the proposed scheme on a widely used model in population biology. Numerical experiments in this framework show that the time-dependent process characteristics as obtained using our algorithm in each case exhibit excellent agreement with exact results. 
(C) 2010 Elsevier Inc. All rights reserved.
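The remark that regular sample averages are a specific instance of the proposed estimators can be seen in a one-line stochastic-approximation recursion: with step size a_n = 1/(n+1), the update m_{n+1} = m_n + a_n(x_n - m_n) reproduces the running mean exactly. This is a generic sketch, not the paper's algorithm.

```python
def sa_mean(samples, step=lambda n: 1.0 / (n + 1)):
    """Stochastic-approximation estimate of a process mean:
    m <- m + a_n * (x_n - m). With a_n = 1/(n+1) this equals the
    sample average; other diminishing step sizes give the same limit
    under standard conditions."""
    m = 0.0
    for n, x in enumerate(samples):
        m += step(n) * (x - m)
    return m
```

Analogous recursions estimate variances, distribution values, and autocorrelations by replacing `x` with the relevant indicator or product term.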

Relevance: 30.00%

Abstract:

An important tool in signal processing is the use of eigenvalue and singular value decompositions for extracting information from time-series/sensor array data. These tools are used in the so-called subspace methods that underlie solutions to the harmonic retrieval problem in time series and the directions-of-arrival (DOA) estimation problem in array processing. The subspace methods require the knowledge of eigenvectors of the underlying covariance matrix to estimate the parameters of interest. Eigenstructure estimation in signal processing has two important classes: (i) estimating the eigenstructure of the given covariance matrix and (ii) updating the eigenstructure estimates given the current estimate and new data. In this paper, we survey some algorithms for both these classes useful for harmonic retrieval and DOA estimation problems. We begin by surveying key results in the literature and then describe, in some detail, energy function minimization approaches that underlie a class of feedback neural networks. Our approaches estimate some or all of the eigenvectors corresponding to the repeated minimum eigenvalue and also multiple orthogonal eigenvectors corresponding to the ordered eigenvalues of the covariance matrix. Our presentation includes some supporting analysis and simulation results. We may point out here that eigensubspace estimation is a vast area and all aspects of this cannot be fully covered in a single paper. (C) 1995 Academic Press, Inc.
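The basic step the survey builds on, splitting the covariance eigenvectors into signal and noise subspaces, can be sketched as follows. This is a generic eigendecomposition-based split with hypothetical test data, not any particular algorithm from the paper.

```python
import numpy as np

def signal_noise_subspaces(X, num_sources):
    """Split the sample-covariance eigenvectors into signal and noise
    subspaces, the common starting point of subspace methods for DOA
    estimation and harmonic retrieval. X: snapshots (rows) x sensors."""
    # Sample covariance E[x x^H] with snapshots stored as rows
    R = (X.T @ X.conj()) / X.shape[0]
    vals, vecs = np.linalg.eigh(R)          # eigenvalues in ascending order
    noise = vecs[:, :-num_sources]          # smallest eigenvalues
    signal = vecs[:, -num_sources:]         # largest eigenvalues
    return signal, noise
```

The noise subspace is (near-)orthogonal to the source steering vectors, which is what MUSIC-style estimators exploit.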

Relevance: 30.00%

Abstract:

A study of environmental chloride and groundwater balance has been carried out in order to assess their relative value for measuring average groundwater recharge in a humid climatic environment with a relatively shallow water table. The hybrid water fluctuation method allowed the hydrologic year to be split into a recharge (wet) season and a no-recharge (dry) season, first to appraise specific yield during the dry season and, second, to estimate recharge from the water table rise during the wet season. This well-elaborated and suitable method was then used as a standard to assess the effectiveness of the chloride method in a forested humid climatic environment. An effective specific yield of 0.08 was obtained for the study area; it reflects an effective basin-wide process and is insensitive to local heterogeneities in the aquifer system. The hybrid water fluctuation method gives an average recharge of 87.14 mm/year at the basin scale, which represents 5.7% of the annual rainfall. Recharge estimated by the chloride method varies between 16.24 and 236.95 mm/year, with an average of 108.45 mm/year, representing 7% of the mean annual precipitation. The discrepancy between the recharge values estimated by the hybrid water fluctuation and chloride mass balance methods is considerable, which could imply the ineffectiveness of the chloride mass balance method in this humid environment.
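Both estimators compared above reduce to simple balances: R = Sy * dH for the water-table-fluctuation step and R = P * Cl_rain / Cl_gw for the chloride mass balance. A sketch with placeholder inputs follows; the symbols follow common hydrogeology usage, and the paper's hybrid method adds the wet/dry seasonal split on top of the first formula.

```python
def recharge_wtf(specific_yield, water_table_rise_mm):
    """Water-table-fluctuation estimate: R = Sy * dH,
    with dH the wet-season water table rise."""
    return specific_yield * water_table_rise_mm

def recharge_cmb(precip_mm, cl_rain, cl_groundwater):
    """Chloride-mass-balance estimate: R = P * Cl_rain / Cl_gw,
    assuming chloride enters only with rainfall and is concentrated
    by evapotranspiration before reaching the water table."""
    return precip_mm * cl_rain / cl_groundwater
```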

Relevance: 30.00%

Abstract:

A reliable method for service life estimation of a structural element is a prerequisite for service life design. A new methodology for durability-based service life estimation of reinforced concrete flexural elements with respect to chloride-induced corrosion of reinforcement is proposed. The methodology takes into consideration the fuzzy and random uncertainties associated with the variables involved in service life estimation by using a hybrid method combining the vertex method of fuzzy set theory with the Monte Carlo simulation technique. It is also shown how to determine the bounds for the characteristic value of failure probability from the resulting fuzzy set for failure probability with minimal computational effort. Using the methodology, the bounds for the characteristic value of failure probability for a reinforced concrete T-beam bridge girder have been determined. The service life of the structural element is determined by comparing the upper bound of the characteristic value of failure probability with the target failure probability. The methodology will be useful for durability-based service life design and also for making decisions regarding in-service inspections.
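The Monte Carlo half of the hybrid method estimates a failure probability as the fraction of simulated realizations that violate a limit state, i.e. P(g(X) < 0). A bare-bones sketch is below; the limit-state function and sampling distribution are placeholders, and the paper additionally wraps such a simulation in the vertex method over the fuzzy variables.

```python
import random

def failure_probability(limit_state, sample_inputs, n=20_000, seed=0):
    """Crude Monte Carlo estimate of P(g(X) < 0): the fraction of
    sampled inputs for which the limit-state function g is violated.
    sample_inputs(rng) draws one realization of the random variables."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if limit_state(sample_inputs(rng)) < 0)
    return failures / n
```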

Relevance: 30.00%

Abstract:

A Monte Carlo model of ultrasound modulation of multiply scattered coherent light in highly scattering media has been developed for estimating the phase shift experienced by a photon beam on its transit through the ultrasound (US)-insonified region. The phase shift is related to the tissue stiffness, thereby opening an avenue for possible breast tumor detection. When the scattering centers in the tissue medium are exposed to deterministic forcing by a focused US beam, the US-induced oscillation is almost entirely along one direction, the direction defined by the transducer axis; the scattering events increase, thereby increasing the phase shift experienced by light that traverses the medium. The phase shift is found to increase with increasing anisotropy g of the medium. However, as the size of the focused region of interest (ROI) increases, a large number of scattering events take place within the ROI and the ensemble average of the phase shift (Delta phi) becomes very close to zero: the phase of an individual photon is randomly distributed over 2 pi when the scattered photon path crosses a large number of ultrasound wavelengths in the focused region. This is the case at high ultrasound frequency (1 MHz), when the mean free path length of the photon l(s) is comparable to the wavelength of the US beam. At much lower US frequencies (100 Hz), however, the wavelength of sound is orders of magnitude larger than l(s), and with a high value of g (g = 0.9) there is a distinct, measurable phase difference for photons that traverse the insonified region. Experiments are carried out for validation of the simulation results.

Relevance: 30.00%

Abstract:

Estimating program worst-case execution time (WCET) accurately and efficiently is a challenging task. Several programs exhibit phase behavior, wherein cycles per instruction (CPI) varies in phases during execution. Recent work has suggested the use of phases in such programs to estimate WCET with minimal instrumentation. However, the suggested model uses a function of mean CPI that has no probabilistic guarantees. We propose to use Chebyshev's inequality, which can be applied to any arbitrary distribution of CPI samples, to probabilistically bound the CPI of a phase. Applying Chebyshev's inequality to phases that exhibit high CPI variation leads to pessimistic upper bounds. We propose a mechanism that refines such phases into sub-phases based on program counter (PC) signatures collected using profiling, and also allows the user to control the variance of CPI within a sub-phase. We describe a WCET analyzer built on these lines and evaluate it with standard WCET and embedded benchmark suites on two different architectures for three chosen probabilities, p = {0.9, 0.95, 0.99}. For p = 0.99, refinement based on PC signatures alone reduces the average pessimism of the WCET estimate by 36% (77%) on Arch1 (Arch2). Compared to Chronos, an open source static WCET analyzer, the average improvement in estimates obtained by refinement is 5% (125%) on Arch1 (Arch2). On limiting the variance of CPI within a sub-phase to {50%, 10%, 5%, 1%} of its original value, the average accuracy of the WCET estimate improves further to {9%, 11%, 12%, 13%}, respectively, on Arch1. On Arch2, the average accuracy of WCET improves to 159% when CPI variance is limited to 50% of its original value, and the improvement is marginal beyond that point.
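The core bound is elementary: Chebyshev's inequality P(|X - mu| >= k*sigma) <= 1/k^2 holds for any distribution, so choosing k = 1/sqrt(1 - p) gives a CPI upper bound that holds with probability at least p. A sketch follows; the sample data in the usage is hypothetical.

```python
import math
import statistics

def chebyshev_cpi_bound(cpi_samples, p):
    """Upper bound b such that P(CPI <= b) >= p, via Chebyshev's
    inequality: P(|X - mu| >= k*sigma) <= 1/k**2, so k = 1/sqrt(1-p).
    Valid for any sample distribution; pessimistic when sigma is large,
    which is what motivates the sub-phase refinement in the abstract."""
    mu = statistics.mean(cpi_samples)
    sigma = statistics.pstdev(cpi_samples)
    k = 1.0 / math.sqrt(1.0 - p)
    return mu + k * sigma
```

A phase's WCET contribution is then bounded by this CPI bound times its instruction count.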

Relevance: 30.00%

Abstract:

Recently, the authors published a method to indirectly measure the series capacitance (C-s) of a single, isolated, uniformly wound transformer winding from its measured frequency response. The next step was to implement it on an actual three-phase transformer. This task is not as straightforward as it might appear at first glance, since the measured frequency response of a three-phase transformer is influenced by the non-tested windings and their terminal connections, the core, the tank, etc. To extract the correct value of C-s from this composite frequency response, the formulation has to be reworked to first identify all significant influences and then include their effects. Initially, the modified method and experimental results on a three-phase transformer (4 MVA, 33 kV/433 V) are presented, along with results on the winding considered in isolation (for cross-validation). Later, the method is directly implemented on another three-phase unit (3.5 MVA, 13.8 kV/765 V) to show repeatability.

Relevance: 30.00%

Abstract:

This article presents the details of estimation of fracture parameters for high-strength concrete (HSC, HSC1) and ultra-high-strength concrete (UHSC). Brief details about characterization of the ingredients of HSC, HSC1, and UHSC are provided. Experiments have been carried out on beams made of HSC, HSC1, and UHSC, considering various sizes and notch depths. Fracture characteristics such as size-independent fracture energy (G(f)), size of the fracture process zone (C-f), fracture toughness (K-IC), and crack tip opening displacement (CTODc) have been estimated from the experimental observations. From the studies, it is observed that (i) UHSC has high fracture energy and ductility in spite of having a very low value of C-f; (ii) UHSC is relatively much more homogeneous than the other concretes because of the absence of coarse aggregates and its well-graded smaller-size particles; (iii) the critical SIF (K-IC) values increase with increasing beam depth and decrease with increasing notch depth, and in general there is a significant increase in fracture toughness and CTODc, which are about 7 times higher in HSC1 and about 10 times higher in UHSC compared to HSC; and (iv) for a notch-to-depth ratio of 0.1, Bazant's size effect model slightly overestimates the maximum failure loads compared to the experimental observations, while Karihaloo's model slightly underestimates them. For notch-to-depth ratios from 0.2 to 0.4 in the case of UHSC, both size effect models predict more or less similar maximum failure loads compared to the corresponding experimental values.
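Bazant's size effect law referred to above has the closed form sigma_N = B*f_t / sqrt(1 + D/D0), where B*f_t and D0 are empirical parameters fitted from tests on geometrically similar beams of different depths D. A sketch with those parameters treated as given follows; the values used are placeholders, not fits from the paper.

```python
import math

def bazant_nominal_strength(B_ft, D, D0):
    """Bazant's size effect law: sigma_N = B*f_t / sqrt(1 + D/D0).
    Small beams (D << D0) approach the strength limit B*f_t;
    large beams trend toward the LEFM ~1/sqrt(D) scaling."""
    return B_ft / math.sqrt(1.0 + D / D0)
```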

Relevance: 30.00%

Abstract:

Bending at the valence angle N-C-alpha-C' (tau) is a known control feature for attenuating the stability of the rare intramolecularly hydrogen-bonded pseudo five-membered ring C-5 structures, the so-called 2.0(5) helices, at Aib. The competitive 3(10)-helical structures still predominate over the C5 structures at Aib for most values of tau. However, at Aib*, a mimic of Aib where the carbonyl O of Aib is replaced with an imidate N (in 5,6-dihydro-4H-1,3-oxazine = Oxa), in the peptidomimetic Piv-Pro-Aib*-Oxa (1), the C(5)i structure is persistent both in crystals and in solution. Here we show that the i -> i hydrogen bond energy is a more determinant control for the relative stability of the C5 structure, and we estimate its value to be 18.5 +/- 0.7 kJ/mol at Aib* in 1 through the computational isodesmic reaction approach, using two independent sets of theoretical isodesmic reactions. (C) 2014 Elsevier Ltd. All rights reserved.
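The isodesmic-reaction estimate rests on simple energy bookkeeping: the reaction energy is dE = sum(E_products) - sum(E_reactants), and because bond types are conserved on both sides, systematic errors in the computed energies largely cancel, isolating the quantity of interest (here the C5 hydrogen-bond energy). A sketch with placeholder energies, not values from the paper:

```python
def isodesmic_delta_e(reactant_energies, product_energies):
    """Reaction energy of an isodesmic reaction:
    dE = sum(E_products) - sum(E_reactants).
    With bond types conserved across the reaction, method-dependent
    errors in the individual energies largely cancel in dE."""
    return sum(product_energies) - sum(reactant_energies)
```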