991 results for reading value
Abstract:
The underpinning logic of value co-creation in service logic is analysed. It is observed that three of the ten foundational premises of the so-called service-dominant logic are problematic and do not support an understanding of value co-creation and value creation that is meaningful for theoretical development and decision making in business and marketing practice. Without a thorough understanding of the interaction concept, the locus and nature of value co-creation cannot be identified. Based on the analysis in the present article, it is observed that the unique contribution of a service perspective on business (service logic) is not that customers are always co-creators of value, but that under certain circumstances the service provider has opportunities to co-create value together with its customers. Finally, the three problematic premises are reformulated accordingly.
Abstract:
Electric activity of the heart consists of repeated cardiomyocyte depolarizations and repolarizations. Abnormalities in repolarization predispose to ventricular arrhythmias. In the body surface electrocardiogram, ventricular repolarization generates the T wave. Several electrocardiographic measures have been developed, for both clinical and research purposes, to detect repolarization abnormalities. The aim of this study was to investigate modifiers of ventricular repolarization, focusing on the relationship of left ventricular mass, antihypertensive drugs, and common gene variants to electrocardiographic repolarization parameters. The prognostic value of repolarization parameters was also assessed. The study subjects came from a population of more than 200 middle-aged hypertensive men attending the GENRES hypertension study, and from an epidemiological survey, the Health 2000 Study, comprising more than 6000 participants. Ventricular repolarization was analysed from digital standard 12-lead resting electrocardiograms with two QT-interval-based repolarization parameters (QT interval, T-wave peak to T-wave end interval) and with a set of four T-wave morphology parameters. The results showed that in hypertensive men a linear change in repolarization parameters is present even within the normal range of left ventricular mass, and that even mild left ventricular hypertrophy is associated with potentially adverse electrocardiographic repolarization changes. In addition, treatment with losartan, bisoprolol, amlodipine, or hydrochlorothiazide has divergent short-term effects on repolarization parameters in hypertensive men. Analyses of the general population sample showed that single nucleotide polymorphisms in the KCNH2, KCNE1, and NOS1AP genes are associated with changes in QT-interval-based repolarization parameters, but not consistently with T-wave morphology parameters. T-wave morphology parameters, but not the QT interval or the T-wave peak to T-wave end interval, provided independent prognostic information on mortality; this prognostic value was specifically related to cardiovascular mortality. The results indicate that, in hypertension, altered ventricular repolarization is present even with a mild increase in left ventricular mass, and that commonly used antihypertensive drugs may modify electrocardiographic repolarization parameters relatively rapidly and in a treatment-specific manner. Common variants in cardiac ion channel genes and the NOS1AP gene may also modify repolarization-related arrhythmia vulnerability. In the general population, T-wave morphology parameters may be useful in the assessment of cardiovascular mortality risk.
Abstract:
An exact solution is derived for a boundary-value problem for Laplace's equation which is a generalization of the one occurring in the course of solution of the problem of diffraction of surface water waves by a nearly vertical submerged barrier. The method of solution involves the use of complex function theory, the Schwarz reflection principle, and reduction to a system of two uncoupled Riemann-Hilbert problems. Known results, representing the reflection and transmission coefficients of the water wave problem involving a nearly vertical barrier, are derived in terms of the shape function.
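For orientation, the classical limiting case (an exactly vertical barrier) can be sketched as follows; the notation below is illustrative and is not taken from the article. One seeks a velocity potential \phi satisfying

    \nabla^{2}\phi = 0 \quad \text{in the fluid region},
    \qquad K\phi + \frac{\partial \phi}{\partial y} = 0 \;\; \text{on } y = 0, \quad K = \frac{\omega^{2}}{g},
    \qquad \frac{\partial \phi}{\partial n} = 0 \;\; \text{on the barrier},

with y measured vertically downwards, together with the far-field behaviour \phi \sim (e^{iKx} + R\,e^{-iKx})\,e^{-Ky} as x \to -\infty and \phi \sim T\,e^{iKx}\,e^{-Ky} as x \to +\infty. Here R and T are the reflection and transmission coefficients referred to above, which for a nearly vertical barrier acquire corrections expressed through the shape function.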
Abstract:
This study is a pragmatic description of the evolution of the genre of English witchcraft pamphlets from the mid-sixteenth century to the end of the seventeenth century. Witchcraft pamphlets were produced for a new kind of readership: the semi-literate, uneducated masses. The central hypothesis of this study is that publishing for the masses entailed rethinking the ways of writing and printing texts. Analysis of the use of typographical variation and illustrations indicates how printers and publishers catered to the tastes and expectations of this new audience. Analysis of the language of witchcraft pamphlets shows how pamphlet writers took the new readership into account by transforming formal written source materials (trial proceedings) into more immediate ways of writing. The material for this study comes from the Corpus of Early Modern English Witchcraft Pamphlets, which has been compiled by the author. The multidisciplinary analysis incorporates both visual and linguistic aspects of the texts, with methodologies and theoretical insights adopted eclectically from historical pragmatics, genre studies, book history, corpus linguistics, systemic functional linguistics and cognitive psychology. The findings are anchored in the socio-historical context of early modern publishing, reading, literacy and witchcraft beliefs. The study shows not only how consideration of a new audience by both authors and printers influenced the development of a genre, but also the value of combining visual and linguistic features in pragmatic analyses of texts.
Abstract:
The object of this work is Hegel's Logic, which comprises the first third of his philosophical System, the other parts being the Philosophy of Nature and the Philosophy of Spirit. The work is divided into two parts: the first investigates Hegel's Logic in itself, without explicit reference to the rest of Hegel's System. It is argued in the first part that Hegel's Logic contains a methodology for constructing examples of basic ontological categories. The starting point on which this construction is based is a structure Hegel calls Nothing, which I argue to be identical with an empty situation, that is, a situation with no objects in it. Examples of further categories are constructed, firstly, by making previous structures objects of new situations. This rule makes it possible for Hegel to introduce examples of ontological structures that contain objects as constituents. Secondly, Hegel also takes the very constructions he uses as constituents of further structures: thus, he is able to exemplify ontological categories involving causal relations. The final result of Hegel's Logic should then be a model of Hegel's Logic itself, or at least of its basic methods. The second part of the work focuses on the relation of Hegel's Logic to the other parts of Hegel's System. My interpretation tries to avoid, firstly, the extreme of taking Hegel's System as a grand metaphysical attempt to deduce what exists through abstract thinking, and secondly, the extreme of seeing Hegel's System as mere diluted Kantianism or a second-order investigation of theories concerning objects instead of actual objects. I suggest a third manner of reading Hegel's System, based on extending the constructivism of Hegel's Logic to the whole of his philosophical System. According to this interpretation, transitions between parts of Hegel's System should not be understood as proofs of any sort, but as constructions of one structure or its model from another structure. Hence, these transitions involve at least, and especially within the Philosophy of Nature, modelling of one type of object or phenomenon through characteristics of an object or phenomenon of another type, and in the best case, and especially within the Philosophy of Spirit, transformations of an object or phenomenon of one type into an object or phenomenon of another type. Thus, the transitions and descriptions within Hegel's System concern actual objects and not mere theories, but they still involve no fallacious deductions.
Abstract:
Research on reading has been successful in revealing how attention guides eye movements when people read single sentences or text paragraphs in simplified and strictly controlled experimental conditions. However, less is known about reading processes in more naturalistic and applied settings, such as reading Web pages. This thesis investigates online reading processes by recording participants' eye movements. The thesis consists of four experimental studies that examine how the location of stimuli presented outside the currently fixated region (Studies I and III), text format (Study II), animation and abrupt onset of online advertisements (Study III), and the phase of an online information search task (Study IV) affect written language processing. Furthermore, the studies investigate how the goal of the reading task affects attention allocation during reading, by comparing reading for comprehension with free browsing and by varying the difficulty of an information search task. The results show that text format affects the reading process: vertical text (one word per line) is read at a slower rate than standard horizontal text, and mean fixation durations are longer for vertical text than for horizontal text. Furthermore, animated online ads and abrupt ad onsets capture online readers' attention, direct their gaze toward the ads, and disrupt the reading process. Compared to a reading-for-comprehension task, online ads are attended to more in a free browsing task. Moreover, in both tasks abrupt ad onsets result in rather immediate fixations toward the ads; this effect is enhanced when the ad is presented in the proximity of the text being read. In addition, reading processes vary as Web users proceed through online information search tasks, for example when they are searching for a specific keyword, looking for an answer to a question, or trying to find the subjectively most interesting topic. A scanning type of behavior is typical at the beginning of the tasks, after which participants tend to switch to a more careful reading state before finishing the tasks in states referred to as decision states. Finally, the results provide evidence that left-to-right readers extract more parafoveal information to the right of the fixated word than to the left, suggesting that learning biases attentional orienting towards the reading direction.
Abstract:
The K-means clustering algorithm is highly dependent on its initial seed values. We use a genetic algorithm to find a near-optimal partitioning of a given data set by selecting proper initial seed values for the K-means algorithm. The results obtained are very encouraging: in most cases, on data sets having well-separated clusters, the proposed scheme reached the global minimum.
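As a rough illustration of the scheme described above (a sketch based only on the abstract, not the authors' implementation; the encoding, genetic operators, parameter values, and the use of numpy/scikit-learn are all assumptions made here), a GA can evolve seed sets encoded as row indices, with fitness taken as the negative K-means inertia:

    # Hypothetical GA-based seed selection for K-means (illustrative only).
    import random
    import numpy as np
    from sklearn.cluster import KMeans

    def fitness(data, seed_idx):
        # Negative within-cluster sum of squares after K-means converges
        # from these seeds: higher fitness = better partitioning.
        km = KMeans(n_clusters=len(seed_idx), init=data[seed_idx], n_init=1)
        km.fit(data)
        return -km.inertia_

    def ga_seed_selection(data, k, pop_size=20, generations=30, p_mut=0.1):
        # Each individual is a set of k distinct row indices into `data`.
        n = len(data)
        pop = [random.sample(range(n), k) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda ind: fitness(data, ind), reverse=True)
            survivors = pop[: pop_size // 2]                # elitist selection
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)          # two parents
                cut = random.randrange(1, k)
                child = list(dict.fromkeys(a[:cut] + b[cut:]))  # one-point crossover
                while len(child) < k:                       # repair duplicate genes
                    g = random.randrange(n)
                    if g not in child:
                        child.append(g)
                if random.random() < p_mut:                 # point mutation
                    g = random.randrange(n)
                    if g not in child:
                        child[random.randrange(k)] = g
                children.append(child)
            pop = survivors + children
        best = max(pop, key=lambda ind: fitness(data, ind))
        return data[best]

The returned array would then be passed as the init= argument of a final K-means run, e.g. KMeans(n_clusters=k, init=ga_seed_selection(X, k), n_init=1).fit(X).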
Abstract:
In order to further develop the logic of service, value creation, value co-creation and value have to be formally and rigorously defined, so that the nature, content and locus of value, and the roles of service providers and customers in value creation, can be unambiguously assessed. In the present article, following the underpinning logic of value-in-use, it is demonstrated that in order to achieve this, value creation is best defined as the customer's creation of value-in-use. The analysis shows that the firm's and the customer's processes and activities can be divided into a provider sphere, closed to the customer, and a customer sphere, closed to the firm. Value creation occurs in the customer sphere, whereas firms in the provider sphere facilitate value creation by producing resources and processes which represent potential value, or expected value-in-use, for their customers. By gaining access to the closed customer sphere, firms can create a joint value sphere and engage in customers' value creation as co-creators of value with them. This approach establishes a theoretically sound foundation for understanding value creation in service logic, and enables meaningful managerial implications, for example as to what is required for co-creation of value, as well as further theoretical elaboration.
Abstract:
Our study concerns an important current problem: the diffusion of information in social networks. This problem has received significant attention from the Internet research community in recent times, driven by many potential applications such as viral marketing and sales promotions. In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network to perform a certain task of information diffusion. The target set selection problem manifests in two forms: 1) the top-k nodes problem and 2) the lambda-coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that would maximize the number of nodes being influenced in the network. The lambda-coverage problem is concerned with finding a set of key nodes of minimal size that can influence a given percentage lambda of the nodes in the entire network. We propose a new way of solving these problems using the concept of the Shapley value, a well-known solution concept in cooperative game theory. Our approach leads to algorithms which we call the ShaPley value-based Influential Nodes (SPIN) algorithms for solving the top-k nodes problem and the lambda-coverage problem. We compare the performance of the proposed SPIN algorithms with well-known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, the NIPS coauthorship data set, the Netscience data set, the High-Energy Physics data set, and the Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient. Note to Practitioners: In recent times, social networks have received a high level of attention due to their proven ability to improve the performance of web search, recommendations in collaborative filtering systems, the spreading of a technology in the market using viral marketing techniques, etc. It is well known that the interpersonal relationships (or ties or links) between individuals cause change or improvement in the social system, because the decisions made by individuals are influenced heavily by the behavior of their neighbors. An interesting and key problem in social networks is to discover the most influential nodes, those that can influence other nodes in the network in a strong and deep way. This problem is called the target set selection problem and has two variants: 1) the top-k nodes problem, where we are required to identify a set of k influential nodes that maximize the number of nodes being influenced in the network, and 2) the lambda-coverage problem, which involves finding a set of influential nodes of minimum size that can influence a given percentage lambda of the nodes in the entire network. There are many existing algorithms in the literature for solving these problems. In this paper, we propose a new algorithm which is based on a novel interpretation of information diffusion in a social network as a cooperative game. Using this analogy, we develop an algorithm based on the Shapley value of the underlying cooperative game. The proposed algorithm outperforms the existing algorithms in generality, computational complexity, or both. Our results are validated through extensive experimentation on both synthetically generated and real-world data sets.
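As a rough sketch of the Shapley value idea underlying SPIN (this is not the paper's algorithm: the one-hop coverage game used as the characteristic function below, and the use of networkx, are simplifying assumptions made here for illustration), each node's Shapley value can be estimated by Monte Carlo sampling of random permutations:

    import random
    import networkx as nx  # assumed here; any adjacency structure would do

    def shapley_estimates(graph, samples=500):
        # Value of a coalition = number of nodes it covers (its members plus
        # their direct neighbours). A node's Shapley value is its average
        # marginal coverage gain over random permutations of all nodes.
        phi = {v: 0.0 for v in graph.nodes}
        order = list(graph.nodes)
        for _ in range(samples):
            random.shuffle(order)
            covered = set()
            for v in order:
                gain = ({v} | set(graph.neighbors(v))) - covered
                phi[v] += len(gain)        # marginal contribution of v
                covered |= gain
        return {v: phi[v] / samples for v in phi}

    def top_k_nodes(graph, k, samples=500):
        # Top-k problem: return the k nodes with the largest estimates.
        phi = shapley_estimates(graph, samples)
        return sorted(phi, key=phi.get, reverse=True)[:k]

    # Example on a small built-in graph:
    print(top_k_nodes(nx.karate_club_graph(), 5))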
Abstract:
Relentless CMOS scaling coupled with lower design tolerances is making ICs increasingly susceptible to wear-out-related permanent faults and to transient faults, necessitating on-chip fault tolerance in future chip multiprocessors (CMPs). In this paper we introduce a new energy-efficient fault-tolerant CMP architecture known as Redundant Execution using Critical Value Forwarding (RECVF). RECVF is based on two observations: (i) forwarding critical instruction results from the leading core to the trailing core enables the latter to execute faster, and (ii) this speedup can be exploited to reduce energy consumption by operating the trailing core at a lower voltage-frequency level. Our evaluation shows that RECVF consumes 37% less energy than conventional dual modular redundant (DMR) execution of a program. It consumes only 1.26 times the energy of a non-fault-tolerant baseline and has a performance overhead of just 1.2%.
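A back-of-envelope sketch of observation (ii) (a toy model with made-up numbers, not the paper's evaluation methodology): to first order, CMOS dynamic energy for a fixed amount of work scales with V^2, and achievable frequency scales roughly with V, so scaling the trailing core's voltage-frequency point by a factor s cuts its dynamic energy to about s^2 of nominal:

    def dmr_energy(e_core=1.0):
        # Conventional DMR: both cores at the nominal voltage-frequency point.
        return 2 * e_core

    def recvf_energy(e_core=1.0, s=0.8):
        # Leading core at nominal V-f; trailing core scaled by s. Assumes the
        # forwarded critical values let the trailing core keep pace despite
        # its lower clock (an idealization of the paper's mechanism).
        return e_core + s**2 * e_core

    print(dmr_energy(), recvf_energy())  # 2.0 vs 1.64: ~18% less in this toy model

The paper's reported 37% saving over DMR reflects its actual mechanism and workloads; the toy model only illustrates the quadratic voltage dependence being exploited.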
Abstract:
A discussion of a technical note with the aforementioned title by Day and Marsh, published in this journal (Volume 121, Number 7, July 1995), is presented. Discussers Robinson and Allam assert that the authors' method of applying the pore-pressure parameter A to predict and quantify swell or collapse of compacted soils is difficult to use, because the authors treat the collapse-swell phenomenon as occurring in compacted soils broadly classified only as sands and clays. The literature demonstrates that mineralogy plays an important role in the volume-change behavior of fine-grained soils. Robinson and Allam state that A-value measurements may not completely predict the type of volume change anticipated in compacted soils on soaking unless details of the clay mineralogy are known. The discussion is followed by a closure from the authors.
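For context, the parameter A discussed here is Skempton's pore-pressure parameter, conventionally defined (standard soil mechanics, not restated in the discussion itself) by

    \Delta u = B\left[\Delta\sigma_{3} + A\,(\Delta\sigma_{1} - \Delta\sigma_{3})\right],

where \Delta u is the change in pore pressure produced by changes \Delta\sigma_{1} and \Delta\sigma_{3} in the major and minor principal stresses, and B is close to unity for saturated soils.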
Abstract:
Eight new dimeric lipids, in which the two Me2N+ ion headgroups are separated by a variable number of polymethylene units [-(CH2)(m)-], have been synthesized. Transmission electron microscopy (TEM) and dynamic light scattering (DLS) of their aqueous dispersions confirmed the formation of vesicular-type aggregates. The vesicle sizes and morphologies were found to depend strongly on the m value, the method, and the thermal history of the vesicle preparation. Information on the thermotropic properties of the resulting vesicles was obtained from microcalorimetry and temperature-dependent fluorescence anisotropy measurements. Interestingly, the T-m values for these vesicles revealed a nonlinear dependence on spacer chain length (m value). These vesicles were able to entrap riboflavin. The rates of permeation of the OH- ion under an imposed transmembrane pH gradient were also found to depend significantly on the m value. X-ray diffraction of cast films of the lipid dispersions elucidated the nature and thickness of these membrane organizations, and revealed that these lipids organize in three different ways depending on the m value. The EPR spin-probe method, with the doxylstearic acids 5NS, 12NS, and 16NS, spin-labeled at various positions of stearic acid, was used to establish the chain-flexibility gradient and homogeneity of these bilayer assemblies. The apparent fusogenic propensities of these bipolar tetraether lipids were investigated in the presence of Na2SO4 with a fluorescence resonance energy transfer fusion assay. Small unilamellar vesicles formed from 1 and from three representative biscationic lipids were also studied with fluorescence anisotropy and H-1 NMR spectroscopic techniques in the absence and presence of varying amounts of cholesterol.