33 results for Means-end approach


Relevance:

100.00%

Publisher:

Abstract:

The purpose of the paper is to identify and describe differences in cognitive structures between consumer segments with differing levels of acceptance of genetically modified (GM) food. Among a sample of 60 mothers, three segments are distinguished with respect to purchase intentions for GM yogurt: non-buyers, maybe-buyers and likely-buyers. A homogeneity test for the elicited laddering data suggests merging maybe- and likely-buyers, yielding two segments termed accepters and rejecters. Still, the overlap between the segments' cognitive structures is considerable, in particular with respect to a health focus in the evaluation of perceived consequences and ambivalence in technology assessment. Distinct differences are found in the assessment of the benefits offered by GM food and in the importance of the values driving product evaluation and thus purchase decisions.

Relevance:

80.00%

Publisher:

Abstract:

Charities need to understand why volunteers choose one brand rather than another in order to attract more volunteers to their organisation. There has been considerable academic interest in understanding why people volunteer in general. However, this research explores the more specific question of why a volunteer chooses one charity brand rather than another. It builds on previous conceptualisations of volunteering as a consumption decision. Seen through the lens of the individual volunteer, it considers the under-researched area of the decision-making process. The research adopts an interpretivist epistemology and subjectivist ontology. Qualitative data were collected through in-depth interviews and analysed using both Means-End Chain (MEC) and Framework Analysis methodology. The primary contribution of the research is to theory: understanding the role of brand in the volunteer decision-making process. It identifies two roles for brand. The first is as a specific reason for choice, an 'attribute' of the decision. Through MEC, volunteering for a well-known brand connects directly through to a sense of self, encompassing both self-respect and social recognition by others. All four components of the symbolic consumption construct are found in the data: volunteers choose a well-known brand to say something about themselves. The brand brings credibility and reassurance; it reduces risk and enables volunteers to meet their need to make a difference and achieve a sense of accomplishment. The second, closely related role for brand is within the process of making the volunteering decision. Volunteers build up knowledge about charity brands from a variety of brand touchpoints over time. At the point of decision-making, that brand knowledge and engagement become relevant, enabling some to make an automatic choice despite the significant level of commitment being made. The research identifies four types of decision-making behaviour. The research also makes secondary contributions to MEC methodology and to the non-profit context. It concludes with practical implications for management practice and a rich agenda for future research.

Relevance:

30.00%

Publisher:

Abstract:

This study investigated the development of three aspects of linguistic prosody in a group of children with Williams syndrome (WS) compared to typically developing children. The prosodic abilities investigated were: (1) the ability to understand and use prosody to make specific words or syllables stand out in an utterance (focus); (2) the ability to understand and use prosody to disambiguate complex noun phrases (chunking); and (3) the ability to understand and use prosody to regulate conversational behaviour (turn-end). The data were analysed using a cross-sectional developmental trajectory approach. The results showed that, relative to chronological age, there was a delayed onset in the development of the ability of children with WS to use prosody to signal the most important word in an utterance (the focus function). A delayed rate of development was found for all the other aspects of expressive and receptive prosody under investigation. However, when non-verbal mental age was taken into consideration, there were no differences between the children with WS and the controls in either the onset or the rate of development for any of the prosodic skills under investigation, apart from the ability to use prosody to regulate conversational behaviour. We conclude that prosody is not a 'preserved' cognitive skill in WS. Genetic factors, development in other cognitive domains and environmental influences affect developmental pathways and, as a result, development proceeds along an atypical trajectory.

Relevance:

30.00%

Publisher:

Abstract:

Clustering is defined as the grouping of similar items in a set, and is an important process within the field of data mining. As the amount of data for various applications continues to increase, in terms of both its size and its dimensionality, it is necessary to have efficient clustering methods. A popular clustering algorithm is K-Means, which adopts a greedy approach to produce a set of K clusters with associated centres of mass, and uses a squared error distortion measure to determine convergence. Methods for improving the efficiency of K-Means have largely been explored in two main directions. The amount of computation can be significantly reduced by adopting a more efficient data structure, notably a multi-dimensional binary search tree (KD-Tree), to store either centroids or data points. A second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance. This issue has so far limited the adoption of these efficient K-Means techniques in parallel computational environments. In this work, we provide a parallel formulation for the KD-Tree based K-Means algorithm and address its load balancing issues.
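
A minimal sketch of the efficient sequential idea referred to above: the K-Means assignment step answered by nearest-neighbour queries against a KD-Tree built over the centroids. This is an illustration under assumed names and parameters, not the paper's parallel formulation.

```python
# Minimal sketch: K-Means with the assignment step accelerated by a KD-Tree
# built over the centroids (one of the two storage choices mentioned above).
# Illustrative only; not the paper's parallel formulation.
import numpy as np
from scipy.spatial import cKDTree

def kmeans_kdtree(points, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment: nearest-centroid queries through the tree replace the
        # brute-force point-to-centroid distance matrix.
        _, labels = cKDTree(centroids).query(points)
        # Update: recompute each cluster's centre of mass.
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels

data = np.random.default_rng(1).normal(size=(1000, 3))
centroids, labels = kmeans_kdtree(data, k=5)
```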

Relevance:

30.00%

Publisher:

Abstract:

Europe's widely distributed climate modelling expertise, now organized in the European Network for Earth System Modelling (ENES), is both a strength and a challenge. Recognizing this, the European Union's Program for Integrated Earth System Modelling (PRISM) infrastructure project aims at designing a flexible and user-friendly environment to assemble, run and post-process Earth System models. PRISM was started in December 2001 with a duration of three years. This paper presents the major stages of PRISM, including: (1) the definition and promotion of scientific and technical standards to increase component modularity; (2) the development of an end-to-end software environment (graphical user interface, coupling and I/O system, diagnostics, visualization) to launch, monitor and analyse complex Earth system models built around state-of-the-art community component models (atmosphere, ocean, atmospheric chemistry, ocean bio-chemistry, sea-ice, land-surface); and (3) testing and quality standards to ensure high performance on a variety of computing platforms. PRISM is emerging as a core strategic software infrastructure for building the European research area in Earth system sciences.

Relevance:

30.00%

Publisher:

Abstract:

The article considers screening human populations with two screening tests. If either of the two tests is positive, then a full evaluation of the disease status is undertaken; however, if both diagnostic tests are negative, then the disease status remains unknown. This procedure leads to a data constellation in which, for each disease status, the 2 × 2 table associated with the two diagnostic tests used in screening has exactly one empty, unknown cell. To estimate the unobserved cell counts, previous approaches assume independence of the two diagnostic tests and use specific models, including the special mixture model of Walter or unconstrained capture–recapture estimates. Often, as is also demonstrated in this article by means of a simple test, the independence of the two screening tests is not supported by the data. Two new estimators are suggested that allow for association between the screening tests, although the form of association must be assumed to be homogeneous over disease status. These estimators are modifications of the simple capture–recapture estimator and are easy to construct. The estimators are investigated for several screening studies with fully evaluated disease status, in which the superior behavior of the new estimators compared to the previous conventional ones can be shown. Finally, the performance of the new estimators is compared with maximum likelihood estimators, which are more difficult to obtain in these models. The results indicate that the loss of efficiency is minor.
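
For readers unfamiliar with the baseline that the article modifies, here is a small worked example of the simple capture–recapture estimate of the empty cell under the independence assumption the article challenges; the counts are invented for illustration.

```python
# Worked example of the simple capture-recapture estimator that the new
# estimators modify. With two screening tests, the both-negative cell of the
# 2 x 2 table is unobserved; under independence of the tests the odds ratio
# is 1, so the missing count follows from the three observed cells.
# Counts are invented for illustration.
n11 = 50   # positive on both tests
n10 = 30   # positive on test 1 only
n01 = 20   # positive on test 2 only

n00_hat = n10 * n01 / n11            # independence: n00 * n11 = n10 * n01
total_hat = n11 + n10 + n01 + n00_hat
print(f"estimated both-negative cell: {n00_hat:.1f}")
print(f"estimated total population:   {total_hat:.1f}")
```

The article's new estimators relax the implicit odds ratio of 1, allowing an association term that is assumed homogeneous over disease status.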

Relevance:

30.00%

Publisher:

Abstract:

The measurement of the impact of technical change has received significant attention within the economics literature. One popular method of quantifying the impact of technical change is the use of growth accounting index numbers. However, in a recent article Nelson and Pack (1999) criticise the use of such index numbers in situations where technical change is likely to be biased in favour of one or other input. In particular, they criticise the common approach of applying observed cost shares, as proxies for partial output elasticities, to weight the change in quantities, which they claim is only valid under Hicks neutrality. Recent advances in the measurement of product and factor biases of technical change developed by Balcombe et al. (2000) provide a relatively straightforward means of correcting product and factor shares in the face of biased technical progress. This paper demonstrates the correction of both revenue and cost shares used in the construction of a TFP index for UK agriculture over the period 1953 to 2000, using both revenue and cost function share equations appended with stochastic latent variables to capture the bias effect. Technical progress is shown to be biased between both individual input and output groups. Output and input quantity aggregates are then constructed using both observed and corrected share weights, and the resulting TFPs are compared. There does appear to be some significant bias in TFP if the effect of biased technical progress is not taken into account when constructing the weights.
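
As a hedged illustration of the index-number machinery involved, the sketch below computes a Törnqvist-style TFP growth rate from share-weighted quantity changes; the Törnqvist form is assumed here as the standard share-weighted construction, the numbers are invented, and the paper's contribution (correcting the shares for biased technical progress) is not reproduced.

```python
# Hedged sketch: a Tornqvist-style TFP index, the standard share-weighted
# growth accounting construction. The paper corrects the share weights for
# biased technical progress; that correction is not reproduced here.
# All numbers are invented for illustration.
import numpy as np

def tornqvist_log_change(q0, q1, s0, s1):
    """Log change of a share-weighted quantity aggregate between two periods."""
    w = 0.5 * (np.asarray(s0, float) + np.asarray(s1, float))
    return float(np.sum(w * np.log(np.asarray(q1, float) / np.asarray(q0, float))))

# Two outputs weighted by revenue shares, two inputs weighted by cost shares.
d_output = tornqvist_log_change([100, 80], [105, 82], [0.60, 0.40], [0.62, 0.38])
d_input = tornqvist_log_change([50, 200], [51, 198], [0.30, 0.70], [0.30, 0.70])

d_tfp = d_output - d_input  # log TFP change
print(f"TFP growth: {100 * d_tfp:.2f}%")
```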

Relevance:

30.00%

Publisher:

Abstract:

This study presents a new, simple approach for combining empirical with raw (i.e., not bias-corrected) coupled model ensemble forecasts in order to make more skillful interval forecasts of ENSO. A Bayesian normal model has been used to combine empirical and raw coupled model December SST Niño-3.4 index forecasts started at the end of the preceding July (5-month lead time). The empirical forecasts were obtained by linear regression between December and the preceding July Niño-3.4 index values over the period 1950–2001. Coupled model ensemble forecasts for the period 1987–99 were provided by ECMWF as part of the Development of a European Multimodel Ensemble System for Seasonal to Interannual Prediction (DEMETER) project. Empirical and raw coupled model ensemble forecasts alone have similar mean absolute error skill scores, relative to climatological forecasts, of around 50% over the period 1987–99. The combined forecast gives an increased skill score of 74% and provides a well-calibrated and reliable estimate of forecast uncertainty.
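
A minimal sketch of the kind of Bayesian normal combination described: the empirical regression forecast acts as the prior, the ensemble as the data, and the combined forecast is precision-weighted. The values are invented and the study's actual calibration details are not reproduced.

```python
# Minimal sketch of combining an empirical forecast with a raw ensemble
# under a Bayesian normal model: the combined mean is precision-weighted.
# Values are invented, not DEMETER output; the paper's calibration details
# are not reproduced here.
import numpy as np

mu_emp, var_emp = 26.5, 0.50                          # empirical regression forecast
ensemble = np.array([27.2, 27.6, 26.9, 27.4, 27.1])   # coupled model members (deg C)

mu_mod = ensemble.mean()
var_mod = ensemble.var(ddof=1) / len(ensemble)        # variance of the ensemble mean

precision = 1.0 / var_emp + 1.0 / var_mod
mu_comb = (mu_emp / var_emp + mu_mod / var_mod) / precision
sd_comb = precision ** -0.5
print(f"combined December Nino-3.4 forecast: {mu_comb:.2f} +/- {sd_comb:.2f}")
```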

Relevance:

30.00%

Publisher:

Abstract:

Objectives: To assess the potential source of variation that surgeons may add to patient outcome in a clinical trial of surgical procedures. Methods: Two large (n = 1380) parallel multicentre randomized surgical trials were undertaken to compare laparoscopically assisted hysterectomy with conventional methods of abdominal and vaginal hysterectomy, involving 43 surgeons. The primary end point of the trial was the occurrence of at least one major complication. Patients were nested within surgeons, giving the data set a hierarchical structure. A total of 10% of patients had at least one major complication, yielding a sparse binary outcome variable. A linear mixed logistic regression model (with logit link function) was used to model the probability of a major complication, with surgeon fitted as a random effect. Models were fitted using the method of maximum likelihood in SAS. Results: There were many convergence problems. These were resolved using a variety of approaches, including: treating all effects as fixed for the initial model building; modelling the variance of a parameter on a logarithmic scale; and centring of continuous covariates. The initial model-building process indicated no significant 'type of operation' by surgeon interaction effect in either trial; the 'type of operation' term was highly significant in the abdominal trial, and the 'surgeon' term was not significant in either trial. Conclusions: The analysis did not find a surgeon effect, but it is difficult to conclude that there was no difference between surgeons. The statistical test may have lacked sufficient power; the variance estimates were small with large standard errors, indicating that the precision of the variance estimates may be questionable.
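
For clarity, the model described in the abstract can be written out as follows (patient i nested within surgeon j; the symbols are chosen here, not taken from the paper):

```latex
% Random-intercept logistic regression for the probability of at least one
% major complication, with patient i nested within surgeon j:
\operatorname{logit}\Pr(y_{ij} = 1) = \beta_0 + \beta_1\,\mathrm{type}_{ij} + u_j,
\qquad u_j \sim N(0, \sigma_u^2)
```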

Relevance:

30.00%

Publisher:

Abstract:

At the end of its tether! The fusion of a six-membered ring onto the four-carbon-atom tether of substrate 1 provides an efficient approach toward the polycyclic ring systems of the natural products aphidicolin and stemodinone. The reaction represents a unique example of a preference for product formation from an endo exciplex in an intramolecular system (exo:endo 2:3 = 1.0:1.2).

Relevance:

30.00%

Publisher:

Abstract:

The present paper details the synthesis, characterization, and preliminary physical analyses of a series of polyisobutylene derivatives featuring urethane and urea end-groups that enable supramolecular network formation to occur via hydrogen bonding. These polymers are readily accessible from relatively inexpensive and commercially available starting materials using a simple two-step synthetic approach. In the bulk, these supramolecular networks were found to possess thermoreversible and elastomeric characteristics as determined by temperature-dependent rheological analysis. These thermoreversible and elastomeric properties make these supramolecular materials potentially very useful in applications such as adhesives and healable surface coatings.

Relevance:

30.00%

Publisher:

Abstract:

The availability of a network strongly depends on the frequency of service outages and the recovery time for each outage. The loss of network resources includes complete or partial failure of hardware and software components, power outages, scheduled maintenance of software and hardware, operational errors such as configuration errors, and acts of nature such as floods, tornadoes and earthquakes. This paper proposes a practical approach to the enhancement of QoS routing by means of providing alternative or repair paths in the event of a breakage of a working path. The proposed scheme guarantees that every Protected Node (PN) is connected to a multi-repair path such that no further failure or breakage of single or double repair paths can cause any simultaneous loss of connectivity between an ingress node and an egress node. Links to be protected in an MPLS network are predefined, and an LSP request involves the establishment of a working path. The use of multi-protection paths permits the formation of numerous protection paths, allowing greater flexibility. Our analysis examines several methods, including single, double and multi-repair routes and the prioritization of signals along the protected paths, to improve Quality of Service (QoS) and throughput and to reduce the cost of protection path placement, delay, congestion and collisions.
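
A small sketch of the repair-path idea using NetworkX to enumerate edge-disjoint paths between an ingress and an egress node; the topology is invented, and this is an illustration of the concept rather than the paper's MPLS signalling scheme.

```python
# Sketch: precompute mutually edge-disjoint paths between an ingress and an
# egress node, so that a breakage on the working path leaves a repair path
# intact. The topology is invented; this is not the paper's MPLS scheme.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("ingress", "a"), ("a", "b"), ("b", "egress"),  # working path
    ("ingress", "c"), ("c", "egress"),              # candidate repair path
    ("ingress", "d"), ("d", "b"),                   # partial alternative via b
])

paths = list(nx.edge_disjoint_paths(G, "ingress", "egress"))
print(f"{len(paths)} mutually edge-disjoint ingress-egress paths:")
for p in paths:
    print("  " + " -> ".join(p))
```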

Relevance:

30.00%

Publisher:

Abstract:

K-Means is a popular clustering algorithm which adopts an iterative refinement procedure to determine data partitions and to compute their associated centres of mass, called centroids. The straightforward implementation of the algorithm is often referred to as 'brute force', since it computes a proximity measure from each data point to each centroid at every iteration of the K-Means process. Efficient implementations of the K-Means algorithm have been predominantly based on multi-dimensional binary search trees (KD-Trees). A combination of an efficient data structure and geometrical constraints makes it possible to reduce the number of distance computations required at each iteration. In this work we present a general space partitioning approach for improving the efficiency and the scalability of the K-Means algorithm. We propose to adopt approximate hierarchical clustering methods to generate binary space partitioning trees, in contrast to KD-Trees. In the experimental analysis, we have tested the performance of the proposed Binary Space Partitioning K-Means (BSP-KM) when a divisive clustering algorithm is used. We have carried out extensive experimental tests to compare the proposed approach to the one based on KD-Trees (KD-KM) over a wide range of the parameter space. BSP-KM is more scalable than KD-KM, while keeping the deterministic nature of the 'brute force' algorithm. In particular, the proposed space partitioning approach has been shown to overcome the well-known limitation of KD-Trees in high-dimensional spaces and can also be adopted to improve the efficiency of other algorithms in which KD-Trees have been used.
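
To make the construction concrete, here is a minimal sketch of generating a binary space partitioning tree with a divisive (recursive 2-means) procedure; the structure, parameters and stopping rule are assumptions for illustration, not the BSP-KM implementation.

```python
# Minimal sketch: build a binary space partitioning tree by recursively
# splitting the data with 2-means (a divisive clustering method). This is
# an illustration of the construction, not the paper's BSP-KM code.
import numpy as np
from sklearn.cluster import KMeans

def build_bsp_tree(points, leaf_size=50):
    node = {"centre": points.mean(axis=0), "size": len(points)}
    if len(points) <= leaf_size:
        return node  # leaf: small enough to stop splitting
    split = KMeans(n_clusters=2, n_init=1, random_state=0).fit(points)
    node["children"] = [
        build_bsp_tree(points[split.labels_ == side], leaf_size)
        for side in (0, 1)
    ]
    return node

data = np.random.default_rng(2).normal(size=(500, 10))  # higher-dimensional data
tree = build_bsp_tree(data)
```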