186 results for Processes of maxima


Relevance:

90.00%

Publisher:

Abstract:

Recent claims of equivalence of animal and human reasoning are evaluated and a study of avian cognition serves as an exemplar of weaknesses in these arguments. It is argued that current research into neurobiological cognition lacks theoretical breadth to substantiate comparative analyses of cognitive function. Evaluation of a greater range of theoretical explanations is needed to verify claims of equivalence in animal and human cognition. We conclude by exemplifying how the notion of affordances in multi-scale dynamics can capture behavior attributed to processes of analogical and inferential reasoning in animals and humans.

Relevance:

90.00%

Publisher:

Abstract:

This research investigated students' construction of knowledge about the topics of magnetism and electricity emergent from a visit to an interactive science centre and subsequent classroom-based activities linked to the science centre exhibits. The significance of this study is that it analyses critically an aspect of school visits to informal learning centres that has been neglected by researchers in the past, namely the influence of post-visit activities in the classroom on subsequent learning and knowledge construction. Employing an interpretive methodology, the study focused on three areas of endeavour. Firstly, to establish a set of principles, from a constructivist framework, for the development of post-visit activities to facilitate students' learning of science. Secondly, to describe and interpret students' scientific understandings: prior to a visit to a science museum; following a visit to a science museum; and following post-visit activities that were related to their museum experiences. Finally, to describe and interpret the ways in which students constructed their understandings: prior to a visit to a science museum; following a visit to a science museum; and following post-visit activities directly related to their museum experiences. The study was designed and implemented in three stages: 1) identification and establishment of the principles for design and evaluation of post-visit activities; 2) a pilot study of specific post-visit activities and data gathering strategies related to student construction of knowledge; and 3) interpretation of students' construction of knowledge from a visit to a science museum and subsequent completion of post-visit activities, which constituted the main study. Twelve students were selected from a Year 7 class to participate in the study.
This study provides evidence that the series of post-visit activities, related to the museum experiences, resulted in students constructing and reconstructing their personal knowledge of science concepts and principles represented in the science museum exhibits, sometimes towards the accepted scientific understanding and sometimes in different and surprising ways. Findings demonstrate the interrelationships between learning that occurs at school, at home and in informal learning settings. The study also underscores for teachers and staff of science museums and similar centres the importance of planning pre- and post-visit activities, not only to support the development of scientific conceptions, but also to detect and respond to alternative conceptions that may be produced or strengthened during a visit to an informal learning centre. Consistent with contemporary views of constructivism, the study strongly supports the views that: 1) knowledge is uniquely structured by the individual; 2) the processes of knowledge construction are gradual, incremental, and assimilative in nature; 3) changes in conceptual understanding can be interpreted in the light of prior knowledge and understanding; and 4) knowledge and understanding develop idiosyncratically, progressing and sometimes appearing to regress when compared with contemporary science. This study has implications for teachers, students, museum educators, and the science education community given the lack of research into the processes of knowledge construction in informal contexts and the roles that post-visit activities play in the overall process of learning.

Relevance:

90.00%

Publisher:

Abstract:

This study explores through a lifestream narrative how the life experiences of a female primary school principal are organised as practical knowledge, and are used to inform action that is directed towards creating a sustainable school culture. An alternative model of school leadership is presented which describes the thinking and activity of a leader as a process. The process demonstrates how a leader's practical knowledge is dynamic, broadly based in experiential life, and open to change. As such, it is described as a model of sustainable leadership-in-process. The research questions at the heart of this study are: How does a leader construct and organize knowledge in the enactment of the principalship to deal with the dilemmas and opportunities that arise every day in school life? And: What does this particular way of organising knowledge look like in the effort to build a sustainable school community? The sustainable leadership-in-process thesis encapsulates new ways of leading primary schools through the principalship. These new ways are described as developing and maintaining the following dimensions of leadership: quality relationships, a collective (shared) vision, collaboration and partnerships, and high achieving learning environments. Such dimensions are enacted by the principal through the activities of conversations, performance development, research and data-driven action, promoting innovation, and anticipating and predicting the future. Sustainable leadership-in-process is shared, dynamic, visible and transparent and is conducted through the processes of positioning, defining, organising, experimenting and evaluating in a continuous and iterative way. A rich understanding of the specificity of the life of a female primary school principal was achieved using story telling, story listening and story creation in a collaborative relationship between the researcher and the researched participant, as a means of educational theorising.
Analysis and interpretation were undertaken as a recursive process in which the immediate interpretations were shared with the researched participant. The view of theorising adopted in this research is that of theory as hermeneutic; that is, theory is generated out of the stories of experiential life, rather than discovered in the stories.

Relevance:

90.00%

Publisher:

Abstract:

This research has established, through ultrasound, near infrared spectroscopy and biomechanics experiments, parameters and parametric relationships that can form the framework for quantifying the integrity of the articular cartilage-on-bone laminate, and objectively distinguish between normal/healthy and abnormal/degenerated joint tissue, with a focus on articular cartilage. This has been achieved by: 1. using traditional experimental methods to produce new parameters for cartilage assessment; 2. using novel methodologies to develop new parameters; and 3. investigating the interrelationships between mechanical, structural and molecular properties to identify and select those parameters and methodologies that can be used in a future arthroscopic probe based on points 1 and 2. By combining the molecular, micro- and macro-structural characteristics of the tissue with its mechanical properties, we arrive at a set of critical benchmarking parameters for viable and early-stage non-viable cartilage. The interrelationships between these characteristics, examined using a multivariate analysis based on principal components analysis, multiple linear regression and general linear modeling, could then be used to determine those parameters and relationships which have the potential to be developed into a future clinical device. Specifically, this research has found that the ultrasound and near infrared techniques can subsume the mechanical parameters and combine to characterise the tissue at the molecular, structural and mechanical levels over the full depth of the cartilage matrix. It is the opinion of this thesis that by enabling the determination of the precise area of influence of a focal defect or disease in the joint, and by demarcating the boundaries of articular cartilage with different levels of degeneration around a focal defect, better surgical decisions will be made that will advance the processes of joint management and treatment.
Providing the basis for a surgical tool, this research will contribute to the enhancement and quantification of arthroscopic procedures, extending to post-treatment monitoring and, as a research tool, will enable a robust method for evaluating developing (particularly focalised) treatments.
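The multivariate analysis described above begins, in the principal components step, by extracting the direction of greatest variance from the measured tissue parameters. A generic pure-Python sketch of that step via power iteration (illustrative only; the function name and data are hypothetical, not the analysis code used in the thesis):

```python
import math

def first_principal_component(rows, iters=200):
    """Estimate the first principal component (the direction of greatest
    variance) of a small data matrix by power iteration on the sample
    covariance matrix. rows: list of equal-length numeric lists."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    centred = [[r[j] - means[j] for j in range(d)] for r in rows]
    # sample covariance matrix of the centred data
    cov = [[sum(row[a] * row[b] for row in centred) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d  # arbitrary starting direction
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]  # renormalise each iteration
    return v

# Toy data lying exactly on the line y = 2x: the first component
# should point along (1, 2) up to normalisation.
pc = first_principal_component([[0, 0], [1, 2], [2, 4], [3, 6]])
```

In the full pipeline sketched by the thesis, the component scores would then feed multiple linear regression and general linear models; only the variance-extraction step is shown here.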

Relevance:

90.00%

Publisher:

Abstract:

The human-technology nexus is a strong focus of Information Systems (IS) research; however, very few studies have explored this phenomenon in anaesthesia. Anaesthesia has a long history of adoption of technological artifacts, ranging from early apparatus to present-day information systems such as electronic monitoring and pulse oximetry. This prevalence of technology in modern anaesthesia and the rich human-technology relationship provides a fertile empirical setting for IS research. This study employed a grounded theory approach that began with a broad initial guiding question and, through simultaneous data collection and analysis, uncovered a core category of technology appropriation. This emergent basic social process captures a central activity of anaesthetists and is supported by three major concepts: knowledge-directed medicine, complementary artifacts and culture of anaesthesia. The outcomes of this study are: (1) a substantive theory that integrates the aforementioned concepts and pertains to the research setting of anaesthesia and (2) a formal theory, which further develops the core category of appropriation from anaesthesia-specific to a broader, more general perspective. These outcomes fulfill the objective of a grounded theory study, being the formation of theory that describes and explains observed patterns in the empirical field. In generalizing the notion of appropriation, the formal theory is developed using the theories of Karl Marx. This Marxian model of technology appropriation is a three-tiered theoretical lens that examines appropriation behaviours at a highly abstract level, connecting the stages of natural, species and social being to the transition of a technology-as-artifact to a technology-in-use via the processes of perception, orientation and realization.
The contributions of this research are two-fold: (1) the substantive model contributes to practice by providing a model that describes and explains the human-technology nexus in anaesthesia, and thereby offers potential predictive capabilities for designers and administrators to optimize future appropriations of new anaesthetic technological artifacts; and (2) the formal model contributes to research by drawing attention to the philosophical foundations of appropriation in the work of Marx, and subsequently expanding the current understanding of contemporary IS theories of adoption and appropriation.

Relevance:

90.00%

Publisher:

Abstract:

With the growth of high-technology industries and knowledge intensive services, the pursuit of industrial competitiveness has progressed from a broad concern with the processes of industrialisation to a more focused analysis of the factors explaining cross-national variation in the level of participation in knowledge industries. From an examination of cross-national data, the paper develops the proposition that particular elements of the domestic science, technology and industry infrastructure—such as the stock of knowledge and competence in the economy, the capacity for learning and generation of new ideas and the capacity to commercialise new ideas—vary cross-nationally and are related to the level of participation of a nation in knowledge intensive activities. Existing understandings of the role of the state in promoting industrial competitiveness might be expanded to incorporate an analysis of the contribution of the state through the building of competencies in science, technology and industry.

Keywords: Knowledge; economy; comparative public policy; innovation; science and technology policy

Relevance:

90.00%

Publisher:

Abstract:

Introduction
The Australian Nurse Practitioner Project (AUSPRAC) was initiated to examine the introduction of nurse practitioners into the Australian health service environment. The nurse practitioner concept was introduced to Australia over two decades ago and has been evolving since. Today, however, the scope of practice, role and educational preparation of nurse practitioners are well defined (Gardner et al, 2006). Amendments to specific pre-existing legislation at a State level have permitted nurse practitioners to perform additional activities including some once in the domain of the medical profession. In the Australian Capital Territory, for example, 13 diverse Acts and Regulations required amendments and three new Acts were established (ACT Health, 2006). Nurse practitioners are now legally authorized to diagnose, treat, refer and prescribe medications in all Australian states and territories. These extended practices differentiate nurse practitioners from other advanced practice roles in nursing (Gardner, Chang & Duffield, 2007). There are, however, obstacles for nurse practitioners wishing to use these extended practices. Restrictive access to Medicare funding via the Medicare Benefit Scheme (MBS) and the Pharmaceutical Benefit Scheme (PBS) limits the scope of nurse practitioner service in the private health sector and community settings. A recent survey of Australian nurse practitioners (n=202) found that two-thirds of respondents (66%) stated that lack of legislative support limited their practice. Specifically, 78% stated that lack of a Medicare provider number was ‘extremely limiting’ to their practice and 71% stated that no access to the PBS was ‘extremely limiting’ to their practice (Gardner et al, in press).
Changes to Commonwealth legislation are needed to enable nurse practitioners to prescribe medication so that patients have access to PBS subsidies where they exist; currently, patients with scripts which originated from nurse practitioners must pay in full for these prescriptions filled outside public hospitals. This report presents findings from a sub-study of Phase Two of AUSPRAC. Phase Two was designed to enable investigation of the process and activities of nurse practitioner service. Process measurements of nurse practitioner services are valuable to healthcare organisations and service providers (Middleton, 2007). Processes of practice can be evaluated through clinical audit; however, as Middleton cautions, no direct relationship between these processes and patient outcomes can be assumed.

Relevance:

90.00%

Publisher:

Abstract:

Visualisation provides a method to efficiently convey and understand the complex nature and processes of groundwater systems. This technique has been applied to the Lockyer Valley to aid in comprehending the current condition of the system. The Lockyer Valley in southeast Queensland hosts intensive irrigated agriculture sourcing groundwater from alluvial aquifers. The valley is around 3000 km2 in area and the alluvial deposits are typically 1-3 km wide and 20-35 m deep in the main channels, reducing in size in subcatchments. The configuration of the alluvium is that of a series of elongate “fingers”. In this roughly circular valley, recharge to the alluvial aquifers is largely from seasonal storm events on the surrounding ranges. The ranges are overlain by basaltic aquifers of Tertiary age, which overall are quite transmissive. Both runoff from these ranges and infiltration into the basalts provide ephemeral flow to the streams of the valley. Throughout the valley there are over 5,000 bores extracting alluvial groundwater, plus lesser numbers extracting from underlying sandstone bedrock. Although there are approximately 2,500 monitoring bores, the only regularly monitored area is the formally declared management zone in the lower one-third. This zone has a calibrated Modflow model (Durick and Bleakly, 2000); a broader valley Modflow model was developed in 2002 (KBR), but did not have extensive extraction data for detailed calibration. Another Modflow model focused on a central river confluence (Wilson, 2005) with some local production data and pumping test results. A recent subcatchment simulation model incorporates a network of bores with short-period automated hydrographic measurements (Dvoracek and Cox, 2008). The above simulation models were all based on conceptual hydrogeological models of differing scale and detail.

Relevance:

90.00%

Publisher:

Abstract:

There is a strong quest in several countries, including Australia, for greater national consistency in education and intensifying interest in standards for reporting. Given this, it is important to make explicit the intended and unintended consequences of assessment reform strategies and the pressures to pervert and conform. In a policy context that values standardisation, the great danger is that technical, rationalist approaches, which generalise assessment practices and render them superficial, will emerge. In this article, the authors contend that the centrality and complexity of teacher judgement practice in such a policy context need to be understood. To this end, we discuss and analyse recorded talk in teacher moderation meetings, showing the processes that teachers use as they work with stated standards to award grades (A to E). We show how they move to and fro between (1) supplied textual artefacts, including stated standards and samples of student responses, (2) tacit knowledge of different types drawn into the moderation, and (3) social processes of dialogue and negotiation. While the stated standards play a part in judgement processes, in and of themselves they are shown to be insufficient to account for how the teachers ascribe value and award a grade to student work in moderation. At issue is the nature of judgement as cognitive and social practice in moderation and the legitimacy (or otherwise) of the mix of factors that shape how judgement occurs.

Relevance:

90.00%

Publisher:

Abstract:

The term Design is used to describe a wide range of activities. Like the term innovation, it is often used to describe both an activity and an outcome. Many products and services are described as being designed, as they result from a conscious process of linking form and function. Alternatively, the many and varied processes of design are often used to describe a cost centre of an organisation to demonstrate a particular competency. However, design is often not used to describe the ‘value’ it provides to an organisation and, more importantly, the ‘value’ it provides to both existing and future customers. Design Led Innovation bridges this gap. Design Led Innovation is a process of creating a sustainable competitive advantage by radically changing the customer value proposition. A conceptual model has been developed to assist organisations to apply and embed design in a company’s vision, strategy, culture, leadership and development processes.

Relevance:

90.00%

Publisher:

Abstract:

Malaysian urban river corridors are facing major physical transformations in the 21st century. The effects of rapid development, exacerbated by competition between two key industry sectors, commercial development and tourism, in conjunction with urbanisation and industrialisation, have posed a high demand for the use of these spaces. The political scenario and a lack of consideration of ecological principles in design solutions have placed significant environmental and cultural constraints on the landscape character of these corridors as well as their ecological systems. Therefore, a holistic approach to improving the landscape design process is extremely necessary to protect the values of these places. Limited research has been carried out, creating an urgent need to explore better ways to improve the landscape design processes of Malaysian urban river corridor developments that encompass the needs and aspirations of a multi-ethnic society without drastically changing the landscape character of the rivers. This paper provides a brief introduction to address this significant gap and hence contributes to the literature.

Relevance:

90.00%

Publisher:

Abstract:

The draft Year 1 Literacy and Numeracy Checkpoints Assessments were in open and supported trial during Semester 2, 2010. The purpose of these trials was to evaluate the Year 1 Literacy and Numeracy Checkpoints Assessments (hereafter the Year 1 Checkpoints) that were designed in 2009 as a way to incorporate the use of the Year 1 Literacy and Numeracy Indicators as formative assessment in Year 1 in Queensland schools. In these trials there were no mandated reporting requirements. The processes of assessment were related to future teaching decisions. As such, the trials were trials of materials and of the processes of using those materials to assess students, plan and teach in Year 1 classrooms. In their current form the Year 1 Checkpoints provide assessment resources for teachers to use in February, June and October. They aim to support teachers in monitoring children's progress and making judgments about their achievement of the targeted P-3 Literacy and Numeracy Indicators by the end of Year 1 (Queensland Studies Authority, 2010, p. 1). The Year 1 Checkpoints include support materials for teachers and administrators, an introductory statement on assessment, work samples, and a Data Analysis Assessment Record (DAAR) to record student performance. The Supported Trial participants were also supported with face-to-face and online training sessions, involvement in a moderation process after the October assessments, opportunities to participate in discussion forums, as well as additional readings and materials. The assessment resources aim to use effective early years assessment practices in that the evidence is gathered from hands-on teaching and learning experiences, rather than more formal assessment methods. They are based in a model of assessment for learning, and aim to support teachers in the “on-going process of determining future learning directions” (Queensland Studies Authority, 2010, p. 1) for all students.
Their aim is to focus teachers on interpreting and analysing evidence to make informed judgments about the achievement of all students, as a way to support subsequent planning for learning and teaching. The Evaluation of the Year 1 Literacy and Numeracy Checkpoints Assessments Supported Trial (hereafter the Evaluation) aimed to gather information about the appropriateness, effectiveness and utility of the Year 1 Checkpoints Assessments from early years teachers and leaders in up to one hundred Education Queensland schools who had volunteered to be part of the Supported Trial. These sample schools represent schools across a variety of Education Queensland regions and include schools with:
- A high Indigenous student population;
- Urban, rural and remote school locations;
- Single and multi-age early phase classes;
- A high proportion of students from low SES backgrounds.
The purpose of the Evaluation was to evaluate the materials and report on the views of school-based staff involved in the trial on the process, materials, and assessment practices utilised. The Evaluation has reviewed the materials, and used surveys, interviews, and observations of processes and procedures to collect relevant data to help present an informed opinion on the Year 1 Checkpoints as assessment for the early years of schooling. Student work samples and teacher planning and assessment documents were also collected. The evaluation has not evaluated the Year 1 Checkpoints in any other capacity than as a resource for Year 1 teachers and relevant support staff.

Relevance:

90.00%

Publisher:

Abstract:

Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance. Capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models lack an understanding of the nature of processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes whilst providing for the assessment of performance, through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration. The models needed to be able to be calibrated using data acquired at these locations. The output of the models needed to be able to be validated with data acquired at these sites. Therefore, the outputs should be truly descriptive of the performance of the facility. A theoretical basis needed to underlie the form of these models, rather than empiricism, which is the case for the macroscopic models currently used. And the models needed to be adaptable to variable operating conditions, so that they may be applied, where possible, to other similar systems and facilities. It was not possible to produce a stand-alone model applicable to all facilities and locations in this single study; however, the scene has been set for the application of the models to a much broader range of operating conditions.
Opportunities for further development of the models were identified, and procedures provided for the calibration and validation of the models to a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models. Different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all manoeuvres evident were modelled. Some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations, merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this activity. Kerb lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate, which excludes lane changers. Cowan's M3 model was calibrated for both streams. On-ramp and total upstream flow are required as input. Relationships between the proportion of headways greater than 1 s and flow differed for on-ramps where traffic leaves signalised intersections and unsignalised intersections. Constant departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time, and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995).
The minimum average minor stream delay and corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows. Pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delays, which reach infinity at capacity. Minor stream delays were shown to be less when unsignalised intersections are located upstream of on-ramps than signalised intersections, and less still when ramp metering is installed. Smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model. Merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration. From these, practical capacities can be estimated. Further calibration is required of traffic inputs, critical gap and minimum follow-on time, for both merging and lane changing. A general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models to assess performance, and provide further insight into the nature of operations.
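The calibration described above rests on Cowan's M3 headway distribution, whose tail (the proportion of headways longer than a given duration) has a standard closed form. A minimal sketch under that model; the flow values and function name here are illustrative, not taken from the thesis:

```python
import math

def m3_headway_tail(t, q, alpha, delta=1.0):
    """Cowan's M3 model: proportion of major-stream headways longer than
    t seconds, for flow q (veh/s), proportion alpha of free (unbunched)
    vehicles, and minimum headway delta (s)."""
    if t < delta:
        return 1.0  # no headway can be shorter than the minimum headway
    lam = alpha * q / (1.0 - delta * q)  # decay rate of the free headways
    return alpha * math.exp(-lam * (t - delta))

# Example: kerb-lane flow of 900 veh/h (0.25 veh/s), 70% free vehicles,
# 1 s minimum headway; share of gaps usable by a merging driver whose
# critical gap is 4 s:
usable = m3_headway_tail(4.0, 0.25, 0.7)
```

The abstract's observation that the tail at 1 s varies with flow corresponds to alpha (and hence lam) being calibrated as a function of q for each on-ramp type.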

Relevance:

90.00%

Publisher:

Abstract:

Longitudinal panel studies of large, random samples of business start-ups captured at the pre-operational stage allow researchers to address core issues for entrepreneurship research, namely, the processes of creation of new business ventures as well as their antecedents and outcomes. Here, we perform a methods-orientated review of all 83 journal articles that have used this type of data set, our purpose being to assist users of current data sets as well as designers of new projects in making the best use of this innovative research approach. Our review reveals a number of methods issues that are largely particular to this type of research. We conclude that amidst exemplary contributions, much of the reviewed research has not adequately managed these methods challenges, nor has it made use of the full potential of this new research approach. Specifically, we identify and suggest remedies for context-specific and interrelated methods challenges relating to sample definition, choice of level of analysis, operationalization and conceptualization, use of longitudinal data and dealing with various types of problematic heterogeneity. In addition, we note that future research can make further strides towards full utilization of the advantages of the research approach through better matching (from either direction) between theories and the phenomena captured in the data, and by addressing some under-explored research questions for which the approach may be particularly fruitful.

Relevance:

90.00%

Publisher:

Abstract:

A basic element in advertising strategy is the choice of an appeal. In business-to-business (B2B) marketing communication, a long-standing approach relies on literal and factual, benefit-laden messages. Given the highly complex, costly and involved processes of business purchases, such approaches are certainly understandable. This project challenges the traditional B2B approach and asks if an alternative approach—using symbolic messages that operate at a more intrinsic or emotional level—is effective in the B2B arena. As an alternative to literal (factual) messages, there is an emerging body of literature that asserts stronger, more enduring results can be achieved through symbolic messages (imagery or text) in an advertisement. The present study contributes to this stream of research. From a theoretical standpoint, the study explores differences in literal-symbolic message content in B2B advertisements. There has been much discussion—mainly in the consumer literature—on the ability of symbolic messages to motivate a prospect to process advertising information by necessitating more elaborate processing and comprehension. Business buyers are regarded as less receptive to indirect or implicit appeals because their purchase decisions are based on direct evidence of product superiority. It is argued here that these same buyers may be equally influenced by advertising that stimulates internally-directed motivation, feelings and cognitions about the brand. Thus far, studies on the effect of literalism and symbolism are fragmented, and few focus on the B2B market. While there have been many studies about the effects of symbolism, no adequate scale exists to measure the continuum of literalism-symbolism. Therefore, a first task for this study was to develop such a scale. Following scale development, content analysis of 748 B2B print advertisements was undertaken to investigate whether differences in literalism-symbolism led to higher advertising performance.
Variations of time and industry were also measured. From a practical perspective, the results challenge the prevailing B2B practice of relying on literal messages. While definitive support was not established for the use of symbolic message content, literal messages also failed to predict advertising performance. If the ‘fact, benefit-laden’ assumption within B2B advertising cannot be supported, then other approaches used in the business-to-consumer (B2C) sector, such as symbolic messages, may also be appropriate in business markets. Further research will need to test the potential effects of such messages, thereby building a revised foundation that can help drive advances in B2B advertising. Finally, the study offers a contribution to the growing body of knowledge on symbolism in advertising. While the specific focus of the study relates to B2B advertising, the Literalism-Symbolism scale developed here provides a reliable measure to evaluate literal and symbolic message content in all print advertisements. The value of this scale to advance our understanding about message strategy may be significant in future consumer and business advertising research.