929 results for acceptance
Abstract:
Customer value has been identified as the reason for customers to patronize a firm, and as one of the fundamental blocks that market exchanges build upon. Despite the importance of customer value, it is often poorly defined, or seems to refer to different phenomena. This dissertation contributes to current marketing literature by subjecting the value concept to a critical investigation, and by clarifying its conceptual foundation. Based on the literature review, it is proposed that customer value can be divided into two separate but interrelated aspects: value creation processes, and value outcome determination. This means that, on the one hand, it is possible to examine those activities through which value is created, and on the other hand, to investigate how customers determine the value outcomes they receive. The results further show that customers may determine value in four different ways: value as a benefit/sacrifice ratio, as experience outcomes, as means-end chains, and value as phenomenological. In value as a benefit/sacrifice ratio, customers are expected to calculate the ratio between service benefits (e.g. ease of use) and sacrifices (e.g. price). In value as experience outcomes, customers are suggested to experience multiple value components, such as functional, emotional, or social value. Customer value as means-end chains in turn models value in terms of the relationships between service characteristics, use value, and desirable ends (e.g. social acceptance). Finally, value as phenomenological proposes that value emerges from lived, holistic experiences. The empirical papers investigate customer value in e-services, including online health care and mobile services, and show how value in e-service stems from the process and content quality, use context, and the service combination that a customer uses. In conclusion, marketers should understand that different value definitions generate different types of understanding of customer value.
In addition, it is clear that studying value from several perspectives is useful, as it enables a richer understanding of value for the different actors. Finally, the interconnectedness between value creation and determination is surprisingly little researched, and this dissertation proposes initial steps towards understanding the relationship between the two.
Abstract:
Materials with high thermal conductivity and a thermal expansion coefficient matching that of Si or GaAs are used for packaging high-density microcircuits due to their ability to dissipate heat faster. Al/SiC is gaining wide acceptance as an electronic packaging material because its thermal expansion coefficient can be tailored to match that of Si or GaAs by varying the Al:SiC ratio while keeping the thermal conductivity more or less the same. In the present work, Al/SiC microwave integrated circuit (MIC) carriers have been fabricated by pressureless infiltration of Al-alloy into porous SiC preforms in air. This new technique provides a cheaper alternative to pressure infiltration or pressureless infiltration in nitrogen for producing Al/SiC composites for electronic packaging applications. The Al-alloy/65 vol% SiC composite exhibited a coefficient of thermal expansion of 7 × 10⁻⁶ K⁻¹ (25-100 °C) and a thermal conductivity of 147 W m⁻¹ K⁻¹ at 30 °C. The hysteresis observed in the thermal expansion coefficient of the composite in the temperature range 100-400 °C has been attributed to the presence of thermal residual stresses in the composite. Thermal diffusivity of the composite, measured over the temperature range 30-400 °C, decreased by 55% with temperature. Such a large decrease in thermal diffusivity with temperature could be due to the presence of micropores, microcracks, and decohesion of the Al/SiC interfaces in the microstructure (all formed during cooling from the processing temperature). The carrier showed satisfactory performance after integration into a MIC.
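The tailoring of the composite CTE by the Al:SiC ratio can be illustrated with standard micromechanics estimates. The sketch below is a hedged illustration, not the authors' procedure: it compares the simple linear rule of mixtures with the Turner model, using nominal literature values (assumed, not from the abstract) for the CTEs and bulk moduli of Al and SiC at the 65 vol% SiC loading the abstract reports.

```python
def rom_cte(alpha_m, alpha_r, v_r):
    """Linear rule of mixtures: volume-weighted average of the phase CTEs."""
    return alpha_m * (1.0 - v_r) + alpha_r * v_r

def turner_cte(alpha_m, alpha_r, k_m, k_r, v_r):
    """Turner model: phase CTEs weighted by volume fraction times bulk modulus,
    accounting for the elastic constraint between the phases."""
    num = alpha_m * k_m * (1.0 - v_r) + alpha_r * k_r * v_r
    den = k_m * (1.0 - v_r) + k_r * v_r
    return num / den

# Nominal values (assumed): Al CTE ~23e-6/K, SiC CTE ~4e-6/K;
# bulk moduli ~76 GPa (Al) and ~220 GPa (SiC); 65 vol% SiC as in the abstract.
rom = rom_cte(23e-6, 4e-6, 0.65)                  # ~10.7e-6/K
turner = turner_cte(23e-6, 4e-6, 76.0, 220.0, 0.65)  # ~7.0e-6/K
```

The rule of mixtures ignores the stiff SiC network constraining the Al matrix and so overestimates the CTE; the Turner weighting lands close to the measured 7 × 10⁻⁶ K⁻¹, consistent with the tailoring the abstract describes.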
Abstract:
Hypertexts are digital texts characterized by interactive hyperlinking and a fragmented textual organization. Increasingly prominent since the early 1990s, hypertexts have become a common text type both on the Internet and in a variety of other digital contexts. Although hypertext has been studied widely in disciplines like hypertext theory and media studies, formal linguistic approaches to it remain relatively rare. This study examines coherence negotiation in hypertext with particular reference to hypertext fiction. Coherence, or the quality of making sense, is a fundamental property of textness. Proceeding from the premise that coherence is a subjectively evaluated property rather than an objective quality arising directly from textual cues, the study focuses on the processes through which readers interact with hyperlinks and negotiate continuity between hypertextual fragments. The study begins with a typological discussion of textuality and an overview of the historical and technological precedents of modern hypertexts. Then, making use of text linguistic, discourse analytical, pragmatic, and narratological approaches to textual coherence, the study takes established models developed for analyzing and describing conventional texts, and examines their applicability to hypertext. Primary data derived from a collection of hyperfictions is used throughout to illustrate the mechanisms in practice. Hypertextual coherence negotiation is shown to require the ability to cognitively operate between local and global coherence by means of processing lexical cohesion, discourse topical continuities, inferences and implications, and shifting cognitive frames. The main conclusion of the study is that the style of reading required by hypertextuality fosters a new paradigm of coherence.
Defined as fuzzy coherence, this new approach to textual sensemaking is predicated on an acceptance of the coherence challenges readers experience when the act of reading comes to involve repeated encounters with referentially imprecise hyperlinks and discourse topical shifts. A practical application of fuzzy coherence is shown to be in effect in the way coherence is actively manipulated in hypertext narratives.
Abstract:
We propose to compress weighted graphs (networks), motivated by the observation that large networks of social, biological, or other relations can be complex to handle and visualize. In the process also known as graph simplification, nodes and (unweighted) edges are grouped to supernodes and superedges, respectively, to obtain a smaller graph. We propose models and algorithms for weighted graphs. The interpretation (i.e. decompression) of a compressed, weighted graph is that a pair of original nodes is connected by an edge if their supernodes are connected by one, and that the weight of an edge is approximated to be the weight of the superedge. The compression problem now consists of choosing supernodes, superedges, and superedge weights so that the approximation error is minimized while the amount of compression is maximized. In this paper, we formulate this task as the 'simple weighted graph compression problem'. We then propose a much wider class of tasks under the name of 'generalized weighted graph compression problem'. The generalized task extends the optimization to preserve longer-range connectivities between nodes, not just individual edge weights. We study the properties of these problems and propose a range of algorithms to solve them, with different balances between complexity and quality of the result. We evaluate the problems and algorithms experimentally on real networks. The results indicate that weighted graphs can be compressed efficiently with relatively little compression error.
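The decompression rule described above — every original pair whose supernodes are joined inherits the superedge weight — can be sketched in a few lines. This is only an illustration of the weight-approximation step for a *given* grouping; the paper's actual problem is to choose the grouping and superedges so that the error is minimized, which this sketch does not attempt. The names and the mean-weight rule are illustrative assumptions.

```python
from collections import defaultdict

def compress(edges, grouping):
    """Given a node->supernode grouping, build superedge weights and measure
    the approximation error of simple weighted graph compression.

    edges    : {(u, v): weight} for the original graph
    grouping : {node: supernode}
    Returns (superedges, error): each superedge weight is the mean of the
    original weights it replaces; error is the summed absolute difference
    between original and decompressed edge weights.
    """
    buckets = defaultdict(list)
    for (u, v), w in edges.items():
        # Within-group edges collapse onto a single-supernode "self" superedge.
        buckets[frozenset({grouping[u], grouping[v]})].append(w)
    superedges = {k: sum(ws) / len(ws) for k, ws in buckets.items()}
    error = sum(
        abs(w - superedges[frozenset({grouping[u], grouping[v]})])
        for (u, v), w in edges.items()
    )
    return superedges, error

# Four nodes grouped into two supernodes A and B.
edges = {("a", "b"): 1.0, ("a", "c"): 1.2, ("b", "d"): 2.0, ("c", "d"): 2.2}
grouping = {"a": "A", "b": "A", "c": "B", "d": "B"}
superedges, error = compress(edges, grouping)
# The A-B superedge averages the two cross-group weights (1.2 and 2.0).
```

A solver for the compression problem proper would search over groupings to trade this error against the size of the compressed graph.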
Abstract:
We consider discrete-time versions of two classical problems in the optimal control of admission to a queueing system: (i) optimal routing of arrivals to two parallel queues and (ii) optimal acceptance/rejection of arrivals to a single queue. We extend the formulation of these problems to permit a k-step delay in the observation of the queue lengths by the controller. For geometric inter-arrival times and geometric service times the problems are formulated as controlled Markov chains with expected total discounted cost as the minimization objective. For problem (i) we show that when k = 1, the optimal policy is to allocate an arrival to the queue with the smaller expected queue length (JSEQ: Join the Shortest Expected Queue). We also show that for this problem, for k ≥ 2, JSEQ is not optimal. For problem (ii) we show that when k = 1, the optimal policy is a threshold policy. There are, however, two thresholds m(0) ≥ m(1) > 0, such that m(0) is used when the previous action was to reject, and m(1) is used when the previous action was to accept.
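For k = 1, the JSEQ rule can be made concrete: the controller sees the queue lengths from one step ago together with its own previous routing action, forms the expected current lengths, and routes to the smaller. The sketch below is a hedged illustration of that decision rule under a Bernoulli service model (at most one potential departure per slot per queue); the function names and the service model are assumptions, not the paper's exact formulation.

```python
def expected_length(observed_len, got_arrival, service_prob):
    """Expected queue length one step after the last observation.

    observed_len : queue length seen one step ago
    got_arrival  : True if the previous arrival was routed to this queue
    service_prob : probability a busy server completes service in one slot
    """
    length = observed_len + (1 if got_arrival else 0)
    # A departure can only occur if there was a customer to serve.
    if length > 0:
        length -= service_prob
    return length

def jseq_route(obs, prev_action, mu=(0.5, 0.5)):
    """JSEQ with one-step delayed observations: route the new arrival to the
    queue whose expected current length is smaller.

    obs         : (len_q0, len_q1) observed one step ago
    prev_action : index of the queue that received the previous arrival
    mu          : per-queue service completion probabilities
    """
    e0 = expected_length(obs[0], prev_action == 0, mu[0])
    e1 = expected_length(obs[1], prev_action == 1, mu[1])
    return 0 if e0 <= e1 else 1
```

For k ≥ 2 the abstract notes this simple rule is no longer optimal, since the controller must reason over two or more steps of unobserved arrivals and departures.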
Abstract:
Total tRNAs isolated from chloroplasts and etioplasts of cucumber cotyledons were compared with respect to amino acid acceptance, isoacceptor distribution and extent of modification. Aminoacylation of the tRNAs with the nine different amino acids studied indicated that the relative acceptor activities of chloroplast total tRNAs for four amino acids are significantly higher than those of etioplast total tRNAs. Two-dimensional polyacrylamide gel electrophoresis (2D-PAGE) of chloroplast total tRNAs separated at least 32 spots, while approximately 41 spots were resolved from etioplast total tRNAs. Comparison of the reversed-phase chromatography (RPC-5) profiles of chloroplast and etioplast leucyl-, lysyl-, phenylalanyl-, and valyl-tRNA species showed no qualitative differences in the elution profiles. However, leucyl-, lysyl- and valyl-tRNA species showed quantitative differences in the relative amounts of the isoaccepting species present in chloroplasts and etioplasts. The analysis of modified nucleotides of total tRNAs from the two plastid types indicated that total tRNA from etioplasts was undermodified with respect to ribothymidine, isopentenyladenosine/hydroxy-isopentenyladenosine, 1-methylguanosine and 2'-O-methylguanosine. This indicates that illumination may cause de novo synthesis of chloroplast tRNA-modifying enzymes encoded by nuclear genes, leading to the formation of highly modified tRNAs in chloroplasts. Based on these results, we speculate that the observed decrease in levels of aminoacylation, variations in the relative amounts of certain isoacceptors, and differences in the electrophoretic mobilities of some extra tRNA spots in the etioplast total tRNAs as compared to chloroplast total tRNAs could be due to some partially undermodified etioplast tRNAs.
Taken together, the data suggested that the light-induced transformation of etioplasts into chloroplasts is accompanied by increases in the relative levels of some functional chloroplast tRNAs through post-transcriptional nucleotide modifications.
Abstract:
Potassium titanyl phosphate (KTP) and its isomorphs have received enormous attention in the last two decades. In particular, KTP assumes importance due to its large nonlinear optic and electrooptic coefficients together with the broad thermal and angular acceptance for second harmonic generation. This article provides an overview of the material aspects, structural, physical, and chemical properties and device feasibility of the KTP family of crystals. Some of the current areas of research and development, along with their significance in understanding the physical properties as well as device applications, are addressed. Optical waveguide fabrication processes and characteristics, with their relevance to present-day technology, are highlighted. Studies performed so far have enabled us to understand the fundamental aspects of these materials; what remains to be pursued vigorously is the full exploitation of their device applications.
Abstract:
The production of rainfed crops in semi-arid tropics exhibits large variation in response to the variation in seasonal rainfall. There are several farm-level decisions, such as the choice of cropping pattern, whether to invest in fertilizers and pesticides, the choice of the planting period, and plant population density, for which the appropriate choice (associated with maximum production or minimum risk) depends upon the nature of the rainfall variability or the prediction for a specific year. In this paper, we have addressed the problem of identifying the appropriate strategies for cultivation of rainfed groundnut in the Anantapur region in a semi-arid part of the Indian peninsula. The approach developed involves participatory research with active collaboration with farmers, so that problems with a perceived need are addressed with the modern tools and data sets available. Given the large spatial variation of climate and soil, the appropriate strategies are necessarily location-specific. With the approach adopted, it is possible to tap the detailed location-specific knowledge of the complex rainfed ecosystem and gain an insight into the variety of options for land use and management practices available to each category of stakeholders. We believe such a participatory approach is essential for identifying strategies that have a favourable cost-benefit ratio over the region considered and hence are associated with a high chance of acceptance by the stakeholders.
Abstract:
Fully structured and matured open source spatial and temporal analysis technology seems to be the official carrier of the future for planning of natural resources, especially in developing nations. This technology has gained enormous momentum because of technical superiority, affordability and the ability to join expertise from all sections of society. Sustainable development of a region depends on the integrated planning approaches adopted in decision making, which require timely and accurate spatial data. With increased developmental programmes, the need for an appropriate decision support system has grown in order to analyse and visualise decisions associated with spatial and temporal aspects of natural resources. In this regard, Geographic Information System (GIS) technology along with remote sensing data supports applications that involve spatial and temporal analysis on digital thematic maps and remotely sensed images. Open source GIS would help in wide-scale applications involving decisions at various hierarchical levels (for example, from village panchayat to planning commission) on economic viability and social acceptance, apart from technical feasibility. GRASS (Geographic Resources Analysis Support System, http://wgbis.ces.iisc.ernet.in/grass) is an open source GIS that works on the Linux platform (freeware), but most of its applications are driven by command-line arguments, necessitating a user-friendly and cost-effective graphical user interface (GUI). Keeping these aspects in mind, the Geographic Resources Decision Support System (GRDSS) has been developed with functionality such as raster, topological vector, image processing, statistical analysis, geographical analysis and graphics production. It operates through a GUI developed in Tcl/Tk (Tool Command Language / Toolkit) under Linux, as well as through a shell in X-Windows.
GRDSS includes options such as Import/Export of different data formats, Display, Digital Image Processing, Map Editing, Raster Analysis, Vector Analysis, Point Analysis and Spatial Query, which are required for regional planning tasks such as watershed analysis, landscape analysis, etc. It is customised to the Indian context with an option to extract individual bands from IRS (Indian Remote Sensing Satellites) data, which is in BIL (Band Interleaved by Lines) format. The integration of PostgreSQL (freeware) into GRDSS provides an efficient database management system.
Abstract:
Structural Health Monitoring has gained wide acceptance in the recent past as a means to monitor a structure and provide an early warning of an unsafe condition using real-time data. Utilization of structurally integrated, distributed sensors to monitor the health of a structure through accurate interpretation of sensor signals and real-time data processing can greatly reduce the inspection burden. The rapid improvement of Fiber Optic Sensor technology for strain, vibration, ultrasonic and acoustic emission measurements in recent times makes it a feasible alternative to the traditional strain gauges, PVDF and conventional Piezoelectric sensors used for Non Destructive Evaluation (NDE) and Structural Health Monitoring (SHM). Optical fiber-based sensors offer advantages over conventional strain gauges and PZT devices in terms of size, ease of embedment, immunity from electromagnetic interference (EMI) and potential for multiplexing a number of sensors. The objective of this paper is to demonstrate acoustic wave sensing using an Extrinsic Fabry-Perot Interferometric (EFPI) sensor on GFRP composite laminates. For this purpose, experiments were initially carried out for strain measurement with Fiber Optic Sensors on GFRP laminates with intentionally introduced holes of different sizes as defects. The results obtained from these experiments are presented in this paper. Numerical modeling has been carried out to obtain the relationship between defect size and strain.
Abstract:
The term Structural Health Monitoring has gained wide acceptance in the recent past as a means to monitor a structure and provide an early warning of an unsafe condition using real-time data. Utilization of structurally integrated, distributed sensors to monitor the health of a structure through accurate interpretation of sensor signals and real-time data processing can greatly reduce the inspection burden. The rapid improvement of Fiber Bragg Grating sensor technology for strain, vibration and acoustic emission measurements in recent times makes it a feasible alternative to the traditional strain gauge transducers and conventional Piezoelectric sensors used for Non Destructive Evaluation (NDE) and Structural Health Monitoring (SHM). Optical fiber-based sensors offer advantages over conventional strain gauges, PVDF film and PZT devices in terms of size, ease of embedment, immunity from electromagnetic interference (EMI) and potential for multiplexing a number of sensors. The objective of this paper is to demonstrate the feasibility of the Fiber Bragg Grating sensor and compare its utility with conventional strain gauges and PVDF film sensors. For this purpose, experiments are being carried out in the laboratory on a composite wing of a mini air vehicle (MAV). In this paper, the results obtained from these preliminary experiments are discussed.
Abstract:
Third World hinterlands provide most of the settings in which the quality of human life has improved the least over the decade since Our Common Future was published. This low quality of life promotes a desire for large numbers of offspring, fuelling population growth and an exodus to the urban centres of the Third World. Enhancing the quality of life of these people in ways compatible with the health of their environments is therefore the most significant of the challenges from the perspective of sustainable development. Human quality of life may be viewed in terms of access to goods, services and a satisfying social role. The ongoing processes of globalization are enhancing flows of goods worldwide, but these hardly reach the poor of Third World countrysides. But processes of globalization have also vastly improved everybody's access to information, and there are excellent opportunities for putting this to good use to enhance the quality of life of the people of Third World countrysides through better access to education and health. More importantly, better access to information could promote a more satisfying social role by strengthening grass-roots involvement in development planning and management of natural resources. I illustrate these possibilities with the help of a series of concrete experiences from the south Indian state of Kerala. Such an effort does not call for large-scale material inputs; rather, it calls for a culture of inform-and-share in place of the prevalent culture of control-and-command. It calls for openness and transparency in transactions involving government agencies, NGOs, and national and transnational business enterprises. It calls for acceptance of accountability by such agencies.
Abstract:
Energy and energy services are the backbone of growth and development in India, which is increasingly dependent upon the use of fossil-based fuels that lead to greenhouse gas (GHG) emissions and related concerns. Algal biofuels are evolving as carbon (C)-neutral alternative biofuels. Algae are photosynthetic microorganisms that convert sunlight, water and carbon dioxide (CO2) to various sugars and lipids, notably triacylglycerols (TAG), and show promise as an alternative, renewable and green fuel source for India. Compared to land-based oilseed crops, algae have potentially higher yields (5-12 g/m²/d) and can use locations and water resources not suited for agriculture. Within India, there is little additional land area for algal cultivation, which therefore needs to be carried out in places that are already used for agriculture, e.g. flooded paddy lands (20 Mha) with village-level technologies, and on saline wastelands (3 Mha). Cultivating algae under such conditions requires novel multi-tier, multi-cyclic approaches of sharing land area without causing threats to food and water security, or demand for additional fertilizer resources, by adopting multi-tier cropping (algae-paddy) in decentralized open pond systems. A large part of the algal biofuel production is possible in flooded paddy crop land before the crop reaches dense canopies, in wastewaters (40 billion litres per day), in salt-affected lands and in nutrient/diversity-impoverished shallow coastline fishery. Mitigation will be achieved through avoidance of GHG emissions, C-capture options and substitution of fossil fuels. Estimates made in this paper suggest that nearly half of the current transportation petro-fuels could be produced at such locations without disruption of food security, water security or overall sustainability. This shift can also provide significant mitigation avenues.
The major adaptation needs are related to socio-technical acceptance for reuse of various wastelands, wastewaters and waste-derived energy and by-products through policy and attitude change efforts.
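The areal-yield figures quoted above translate into annual biomass by a simple unit conversion. The sketch below is purely illustrative arithmetic: the year-round cultivation assumption and the lipid fraction are hypothetical values chosen for illustration, and none of the resulting numbers come from the paper.

```python
def annual_biomass_t_per_ha(yield_g_m2_d, days=365):
    """Convert an areal productivity in g/m^2/day to t/ha/year of dry biomass.
    1 ha = 1e4 m^2; 1 t = 1e6 g.
    """
    return yield_g_m2_d * 1e4 * days / 1e6

# Mid-range of the quoted 5-12 g/m^2/d, assuming (hypothetically) year-round
# cultivation of the full area.
biomass = annual_biomass_t_per_ha(10)   # 36.5 t/ha/yr dry biomass
LIPID_FRACTION = 0.3                    # assumed value, not from the paper
oil = biomass * LIPID_FRACTION          # ~11 t/ha/yr of TAG-rich oil
```

In practice the multi-tier, multi-cyclic schemes described above make only part of the year and part of the area available for algae, so realistic per-hectare figures would be lower.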
Abstract:
Determining the spin and the parity quantum numbers of the recently discovered Higgs-like boson at the LHC is a matter of great importance. In this Letter, we consider the possibility of using the kinematics of the tagging jets in Higgs production via the vector boson fusion (VBF) process to test the tensor structure of the Higgs-vector boson (HVV) interaction and to determine the spin and CP properties of the observed resonance. We show that an anomalous HVV vertex, in particular its explicit momentum dependence, drastically affects the rapidity separation between the two scattered quarks and their transverse momenta and, hence, the acceptance of the kinematical cuts that allow one to select the VBF topology. The sensitivity of these observables to different spin-parity assignments, including the dependence on the LHC center-of-mass energy, is evaluated. In addition, we show that in associated Higgs production with a vector boson some kinematical variables, such as the invariant mass of the system and the transverse momenta of the two bosons and their separation in rapidity, are also sensitive to the spin-parity assignments of the Higgs-like boson.
Abstract:
Northeast India is one of the most highly seismically active regions in the world, with more than seven earthquakes of magnitude 5.0 and above per year on average. Reliable seismic hazard assessment could provide the necessary design inputs for earthquake-resistant design of structures in this region. In this study, deterministic as well as probabilistic methods have been attempted for seismic hazard assessment of Tripura and Mizoram states at the bedrock level condition. An updated earthquake catalogue was collected from various national and international seismological agencies for the period from 1731 to 2011. Homogenization, declustering and data completeness analysis of the events were carried out before hazard evaluation. Seismicity parameters were estimated using the Gutenberg-Richter (G-R) relationship for each source zone. Based on the seismicity, tectonic features and fault rupture mechanism, this region was divided into six major subzones. Region-specific correlations were used for magnitude conversion to homogenize earthquake size. Ground motion equations (Atkinson and Boore 2003; Gupta 2010) were validated with observed PGA (peak ground acceleration) values before use in the hazard evaluation. In this study, the hazard is estimated using linear sources identified in and around the study area. Results are presented in the form of PGA using both DSHA (deterministic seismic hazard analysis) and PSHA (probabilistic seismic hazard analysis) with 2 and 10% probability of exceedance in 50 years, and spectral acceleration (T = 0.2 s, 1.0 s) for both states (2% probability of exceedance in 50 years). The results provide inputs for planning risk reduction strategies, developing risk acceptance criteria and financial analysis of possible damages in the study area, with a comprehensive analysis and higher-resolution hazard mapping.
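The Gutenberg-Richter relationship used for the seismicity parameters, log10 N(≥M) = a - b·M, can be sketched directly. The maximum-likelihood b-value estimator shown (Aki, 1965) is a standard choice for this fit, though the study's own fitting procedure may differ; the a- and b-values in the example are illustrative, not the study's estimates.

```python
import math

def gr_rate(a, b, m):
    """Gutenberg-Richter: expected annual number of events of magnitude >= m."""
    return 10.0 ** (a - b * m)

def gr_b_value(mags, m_c):
    """Maximum-likelihood b-value (Aki, 1965) from magnitudes at or above
    the completeness magnitude m_c."""
    complete = [m for m in mags if m >= m_c]
    mean_m = sum(complete) / len(complete)
    return math.log10(math.e) / (mean_m - m_c)

# Illustrative values: with b = 1.0, an a-value near 5.85 reproduces roughly
# seven M >= 5.0 events per year, the regional rate quoted above.
rate = gr_rate(5.85, 1.0, 5.0)   # ~7 events/yr
```

In a hazard study, b-values like this are estimated per source zone from the declustered, completeness-checked catalogue and then feed the recurrence model of the PSHA.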