972 results for Single Responsibility Principle
Abstract:
Corporate social responsibility (CSR) is increasingly seen as an imperative for sustainable business and there is a growing literature on the effect of CSR on corporate reputation. Despite this, a pall of ambiguity and uncertainty remains around what CSR means and how it should be practiced. This paper offers a unique addition to the body of literature to date by revealing that CSR is an emerging industry in Australia, which is in the process of developing its own reputation as a set of business practices. The paper is based on exploratory qualitative research using a case study methodology. Interviews were conducted with key actors within the industry to investigate shared understandings of what CSR means, perceptions of CSR practice and of the industry as a whole, and who is involved in shaping these perceptions. The research revealed that the CSR industry in Australia is in its early stages of development and is therefore in need of increased internal cooperation if it is to develop a strong reputation.
Abstract:
Since the 1980s, industries and researchers have sought to better understand the quality of services as their importance has risen (Brogowicz, Delene and Lyth 1990). More recent developments with online services, coupled with growing recognition of service quality (SQ) as a key contributor to national economies and as an increasingly important competitive differentiator, amplify the need to revisit our understanding of SQ and its measurement. Although ‘SQ’ can be broadly defined as “a global overarching judgment or attitude relating to the overall excellence or superiority of a service” (Parasuraman, Berry and Zeithaml 1988), the term has many interpretations. There has been considerable progress on how to measure SQ perceptions, but little consensus on what should be measured. There is agreement that SQ is multi-dimensional, but little agreement as to the nature or content of these dimensions (Brady and Cronin 2001). For example, within the banking sector there exist multiple SQ models, each consisting of varying dimensions. The existence of multiple conceptions and the lack of a unifying theory bring the credibility of existing conceptions into question, and raise the question of whether it is possible, at some higher level, to define SQ broadly such that it spans all service types and industries. This research aims to explore the viability of a universal conception of SQ, primarily through a careful re-visitation of the services and SQ literature. The study analyses the strengths and weaknesses of the highly regarded and widely used global SQ model (SERVQUAL), which reflects a single-level approach to SQ measurement. The SERVQUAL model states that customers evaluate the SQ of each service encounter based on five dimensions, namely reliability, assurance, tangibles, empathy and responsiveness. SERVQUAL, however, fails to address what needs to be reliable, assured, tangible, empathetic and responsive.
This research also addresses a more recent global SQ model from Brady and Cronin (2001), the B&C (2001) model, which has the potential to succeed SERVQUAL in that it encompasses other global SQ models and addresses the ‘what’ questions that SERVQUAL did not. The B&C (2001) model conceives SQ as multidimensional and multi-level; this hierarchical approach to SQ measurement better reflects how people form perceptions. In line with the initial intention of SERVQUAL, which was developed to be generalizable across industries and service types, this research aims to develop a conceptual understanding of SQ, via literature and reflection, that encompasses the content/nature of factors related to SQ and addresses the benefits and weaknesses of various SQ measurement approaches (i.e. disconfirmation versus perceptions-only). Such an understanding of SQ seeks to transcend industries and service types, with the intention of extending our knowledge of SQ and assisting practitioners in understanding and evaluating SQ. The candidate’s research has been conducted within, and seeks to contribute to, the ‘IS-Impact’ research track of the IT Professional Services (ITPS) Research Program at QUT. The vision of the track is “to develop the most widely employed model for benchmarking Information Systems in organizations for the joint benefit of research and practice.” The ‘IS-Impact’ research track has developed an Information Systems (IS) success measurement model, the IS-Impact Model (Gable, Sedera and Chan 2008), which seeks to fulfill the track’s vision. Results of this study will help future researchers in the ‘IS-Impact’ research track address questions such as:
• Is SQ an antecedent or consequence of the IS-Impact model, or both?
• Has SQ already been addressed by existing measures of the IS-Impact model?
• Is SQ a separate, new dimension of the IS-Impact model?
• Is SQ an alternative conception of the IS?
Results from the candidate’s research suggest that SQ dimensions can be classified at a higher level, encompassed by the B&C (2001) model’s three primary dimensions (interaction, physical environment and outcome). The candidate also notes that it might be viable to re-word the ‘physical environment quality’ primary dimension as ‘environment quality’ so as to better encompass both physical and virtual scenarios (e.g. websites). The candidate does not rule out the global feasibility of the B&C (2001) model’s nine sub-dimensions, but acknowledges that more work must be done to better define them. The candidate observes that the ‘expertise’, ‘design’ and ‘valence’ sub-dimensions are supportive representations of the ‘interaction’, ‘physical environment’ and ‘outcome’ primary dimensions respectively. That is, customers evaluate each primary dimension (each higher level of SQ classification), namely ‘interaction’, ‘physical environment’ and ‘outcome’, based on the ‘expertise’, ‘design’ and ‘valence’ sub-dimensions respectively. The ability to classify SQ dimensions at a higher level, coupled with support for the measures that make up this higher level, leads the candidate to propose the B&C (2001) model as a unifying theory that acts as a starting point for measuring SQ and the SQ of IS. The candidate also notes, in parallel with the continuing validation and generalization of the IS-Impact model, that there is value in alternatively conceptualizing the IS as a ‘service’ and ultimately triangulating measures of IS SQ with the IS-Impact model. These further efforts are beyond the scope of the candidate’s study. Results from the candidate’s research also suggest that both the disconfirmation and perceptions-only approaches have their merits, and that the choice of approach depends on the objective(s) of the study.
Should the objective be an overall evaluation of SQ, the perceptions-only approach is more appropriate, as it is more straightforward and reduces administrative overhead. However, should the objective be to identify SQ gaps (shortfalls), the (measured) disconfirmation approach is more appropriate, as it can identify the areas that need improvement.
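The contrast between the two measurement approaches can be sketched in a few lines of Python. This is a minimal illustration only: the 1-7 rating scale and the sample values are assumptions for the sake of the example, not data from the study, and the dimension labels follow the standard SERVQUAL set.

```python
# Illustrative sketch: perceptions-only scoring vs. (measured) disconfirmation.
# Ratings are assumed to be on a 1-7 Likert scale; values are invented examples.

SERVQUAL_DIMENSIONS = ["reliability", "assurance", "tangibles", "empathy", "responsiveness"]

def perceptions_only_score(perceptions):
    """Overall SQ as the mean of perception ratings (perceptions-only approach)."""
    return sum(perceptions.values()) / len(perceptions)

def disconfirmation_gaps(perceptions, expectations):
    """Per-dimension gap (perception - expectation); negative values are shortfalls."""
    return {d: perceptions[d] - expectations[d] for d in perceptions}

perceptions  = {"reliability": 5, "assurance": 6, "tangibles": 4, "empathy": 5, "responsiveness": 3}
expectations = {"reliability": 6, "assurance": 6, "tangibles": 4, "empathy": 6, "responsiveness": 6}

overall = perceptions_only_score(perceptions)        # one overall judgment
gaps = disconfirmation_gaps(perceptions, expectations)
shortfalls = {d: g for d, g in gaps.items() if g < 0}  # dimensions needing improvement
```

The perceptions-only score yields a single overall evaluation with half the survey burden, while the gap scores point directly at the dimensions falling short of expectations, mirroring the trade-off described above.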
Abstract:
One of the oldest problems in philosophy concerns the relationship between free will and moral responsibility. If we adopt the position that we lack free will, in the absolute sense—as have most philosophers who have addressed this issue—how can we truly be held accountable for what we do? This paper will contend that the most significant and interesting challenge to the long-standing status quo on the matter comes not from philosophy, jurisprudence, or even physics, but rather from psychology. By examining this debate through the lens of contemporary behaviour disorders, such as ADHD, it will be argued that notions of free will, along with its correlate, moral responsibility, are being eroded through the logic of psychology, which is steadily reconfiguring large swathes of familiar human conduct as pathology. The intention of the paper is not only to raise some concerns over the exponential growth of behaviour disorders, but also, and more significantly, to flag the ongoing relevance of philosophy for prying open contemporary educational problems in new and interesting ways.
Abstract:
The purpose of this article is to raise some concerns over ongoing changes to the nature and scope of the teaching profession. Teaching is a responsible profession, and teachers have always been charged with the job of turning out the next generation of citizens—educated, healthy in mind, and healthy in body. The question is: how far should this responsibility extend? Just what should schools be responsible for? This article proposes some limits to teacher responsibility.
Abstract:
Teachers have always been charged with the task of turning out the next generation of citizens—educated, healthy in mind, and healthy in body. The central question here is: how far should this responsibility extend? Just how much can we reasonably expect teachers to be responsible for? And just what should schools be responsible for? Obviously, teachers ought to be responsible for the fundamentals of learning, but should they be held accountable for what the children eat, or how they choose to behave, or for every single risk, direct or indirect, that could conceivably occur within the school grounds? These are precisely the kinds of expectation that have become part of educational life. Bit by bit, new forms of responsibility are being added to the site of the school and, more specifically, to the professional life of the teacher. The intention here is not necessarily to challenge any of these diverse changes, but rather to express concern about their cumulative effect on the ability of the school to do its primary job effectively.
Abstract:
This combined PET and ERP study was designed to identify the brain regions activated when switching and dividing attention between different features of a single object, using matched sensory stimuli and motor responses. The ERP data have previously been reported in this journal [64]. We now present the corresponding PET data. We identified partially overlapping neural networks with paradigms requiring the switching or dividing of attention between the elements of complex visual stimuli. Regions of activation were found in the prefrontal and temporal cortices and the cerebellum. Each task activated different prefrontal cortical regions, supporting the view that the functional subspecialisation of the prefrontal and temporal cortices is based on the cognitive operations required rather than on the stimuli themselves.
Abstract:
Any cycle of production and exchange – be it economic, cultural or aesthetic – involves an element of risk. It involves uncertainty, unpredictability, and a potential for new insight and innovation (the boom) as well as blockages, crises and breakdown (the bust). In performance, the risks are plentiful – economic, political, social, physical and psychological. The risks people are willing to take depend on their position in the exchange (performer, producer, venue manager or spectator), and their aesthetic preferences. This paper considers the often uncertain, confronting or ‘risky’ moment of exchange between performer, spectator and culture in Live Art practices. Encompassing body art, autobiographical art, site-specific art and other sorts of performative intervention in the public sphere, Live Art eschews the artifice of theatre, breaking down barriers between art and life, artist and spectator, to speak back to the public sphere, and challenge assumptions about bodies, identities, memories, relationships and histories. In the process, Live Art frequently privileges an uncertain, confrontational or ‘risky’ mode of exchange between performer, spectator and culture, as a way of challenging power structures. This paper examines the moment of exchange in terms of risk, vulnerability, responsibility and ethics. Why the romance with ‘risky’ behaviours and exchanges? Who is really taking a risk? What risk? With whose permission (or lack thereof)? What potential does a ‘risky’ exchange hold to destabilise aesthetic, social or political norms? Where lies the fine line between subversive intervention in the public sphere and sheer self-indulgence? What are the social and ethical implications of a moment of exchange that puts bodies, beliefs or social boundaries at ‘risk’? 
In this paper, these questions are addressed with reference to historical and contemporary practices under the broadly defined banner of Live Art, from the early work of Abramović and Burden through to contemporary Australian practitioners such as Fiona McGregor.
Abstract:
Using the generative processes developed over two stages of creative development and the performance of The Physics Project at the Loft at the Creative Industries Precinct at the Queensland University of Technology (QUT) from 5th – 8th April 2006 as a case study, this exegesis considers how the principles of contemporary physics can be reframed as aesthetic principles in the creation of contemporary performance. The Physics Project is an original performance work that melds live performance, video and web-casting and overlaps an exploration of personal identity with the physics of space, time, light and complementarity. It considers the acts of translation between the language of physics and the language of contemporary performance that occur via process and form. This exegesis also examines the devices in contemporary performance making and contemporary performance that extend the reach of the performance, including the integration of the live and the mediated and the use of metanarratives.
Abstract:
Relationships between self-reported retrospective falls and cognitive measures (executive function, reaction time, processing speed, working memory, visual attention) were examined in a population-based sample of older adults (n = 658). Two of the choice reaction time tests involved inhibiting responses to targets of a specific colour or location, with hand and foot responses. Potentially confounding demographic variables, medical conditions and postural sway were controlled for in logistic regression models, excluding participants with possible cognitive impairment. A factor analysis of the cognitive measures extracted factors measuring reaction time, accuracy and inhibition, and visual search. Single fallers did not differ from non-fallers in terms of health, sway or cognitive function, except that they performed worse on accuracy and inhibition. In contrast, recurrent fallers performed worse than non-fallers on all measures. Results suggest that occasional falls in late life may be associated with subtle age-related changes in the prefrontal cortex leading to failures of executive control, whereas recurrent falling may result from more advanced brain ageing that is associated with generalized cognitive decline.
Abstract:
A wide range of screening strategies have been employed to isolate antibodies and other proteins with specific attributes, including binding affinity, specificity, stability and improved expression. However, there remains no high-throughput system to screen for target-binding proteins in a mammalian, intracellular environment. Such a system would allow binding reagents to be isolated against intracellular clinical targets such as cell signalling proteins associated with tumour formation (p53, ras, cyclin E), proteins associated with neurodegenerative disorders (huntingtin, beta-amyloid precursor protein), and various proteins crucial to viral replication (e.g. HIV-1 proteins such as Tat, Rev and Vif-1), which are difficult to screen by phage, ribosome or cell-surface display. This study used the β-lactamase protein complementation assay (PCA) as the display and selection component of a system for screening a protein library in the cytoplasm of HEK 293T cells. The colicin E7 (ColE7) and Immunity protein 7 (Imm7) Escherichia coli proteins were used as model interaction partners for developing the system. These proteins drove effective β-lactamase complementation, resulting in a signal-to-noise ratio (9:1 – 13:1) comparable to that of other β-lactamase PCAs described in the literature. The model Imm7-ColE7 interaction was then used to validate protocols for library screening. Single positive cells that harboured the Imm7 and ColE7 binding partners were identified and isolated using flow cytometric cell sorting in combination with the fluorescent β-lactamase substrate, CCF2/AM. A single-cell PCR was then used to amplify the Imm7 coding sequence directly from each sorted cell. With the screening system validated, it was then used to screen a protein library based on the Imm7 scaffold against a proof-of-principle target.
The wild-type Imm7 sequence, as well as mutants with wild-type residues in the ColE7-binding loop, were enriched from the library after a single round of selection, which is consistent with other eukaryotic screening systems such as yeast and mammalian cell-surface display. In summary, this thesis describes a new technology for screening protein libraries in a mammalian, intracellular environment. This system has the potential to complement existing screening technologies by allowing access to intracellular proteins and expanding the range of targets available to the pharmaceutical industry.