764 results for policing of knowledge
Abstract:
An often neglected but well recognised aspect of successful engineering asset management is the achievement of co-operation and collaboration between the various occupational, functional and hierarchical levels present within complex technical environments. Engineering and technical contexts are well documented as sites of highly cohesive groups based around functional or role orientations. However, while highly cohesive groups are potentially advantageous, they are also often correlated with the emergence of knowledge and information silos built around those same functional or occupational clusters. Improved collaboration and co-operation between groups have been demonstrated to result in a number of positive outcomes at the individual, group and organisational level. Example outcomes include an increased capacity for problem solving, improved responsiveness and adaptation to organisational crises, higher morale and an increased ability to leverage workforce capability. However, an essential challenge for organisations wishing to overcome informational silos is to implement mechanisms that facilitate, encourage and sustain interactions between otherwise disconnected groups. This paper reviews the ability of Web 2.0 technologies and mobile computing devices to facilitate and encourage knowledge sharing between siloed groups. Commonly available tools such as Facebook, Twitter, blogs, wikis and others are reviewed in relation to their applicability, functionality and ease of use by engineering and technical personnel. The paper also documents three case examples of engineering organisations that have successfully employed Web 2.0 to achieve superior knowledge management. With a number of clear recommendations, the paper is an essential starting point for any organisation looking at the use of new generation technologies for achieving the significant outcomes associated with knowledge transfer.
Abstract:
Early this year the Australian Department of Environment and Heritage commissioned a desktop literature review with a focus on ultrafine particles, including analysis of the health impacts of the particles as well as the impact of the sulphur content of diesel fuel on ultrafine particle emission. This paper summarizes the findings of the report on the link between the sulphur content of diesel fuels and the number of ultrafine particles in diesel emissions. The literature search on this topic resulted in over 150 publications. The majority of these publications, although investigating different aspects of the influence of fuel sulphur level on diesel vehicle emissions, were not directly concerned with ultrafine particle emissions. A specific focus of the paper is on: (i) a summary of the state of knowledge established by the review; and (ii) a summary of recommendations on the research priorities for Australia to address the information gaps for this issue, and on the appropriate management responses.
Abstract:
Complex surveillance problems are common in biosecurity: prioritizing detection among multiple invasive species, specifying risk over a heterogeneous landscape, combining multiple sources of surveillance data, designing for a specified power of detection, managing resources, and accounting for collateral effects on the environment. Moreover, when designing for multiple target species, inherent biological differences among species result in different ecological models underpinning the individual surveillance systems for each. Species are likely to have different habitat requirements, different introduction mechanisms and locations, require different methods of detection, have different levels of detectability, and vary in rates of movement and spread. Often there is a further challenge of a lack of knowledge, literature, or data for any number of the above problems. Even so, governments and industry need to proceed with surveillance programs that aim to detect incursions in order to meet environmental, social and political requirements. We present an approach taken to meet these challenges in one comprehensive and statistically powerful surveillance design for non-indigenous terrestrial vertebrates on Barrow Island, a high-conservation-value nature reserve off the Western Australian coast. Here, the possibility of incursions is increased due to construction and expanding industry on the island. The design, which includes mammals, amphibians and reptiles, provides a complete surveillance program for most potential terrestrial vertebrate invaders. Individual surveillance systems were developed for the various potential invaders and then integrated into an overall surveillance system that meets the above challenges using a statistical model and expert elicitation. We discuss the ecological basis for the design, the flexibility of the surveillance scheme, how it meets the above challenges, design limitations, and how the design can be updated as data are collected as a basis for adaptive management.
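As a toy illustration of the "specified power to detect" idea mentioned above, and not the Barrow Island design itself (which integrates species-specific systems, a statistical model and expert elicitation), the following Python sketch shows two simple calculations that often sit behind such designs under an independence assumption: the combined sensitivity of several detection methods, and the number of identical survey units needed to reach a target detection power. All numerical values are hypothetical.

```python
import math

def system_sensitivity(component_sensitivities):
    """Probability that at least one independent component detects an incursion,
    assuming the target is present and the components act independently."""
    miss = 1.0
    for p in component_sensitivities:
        miss *= (1.0 - p)
    return 1.0 - miss

def units_for_power(per_unit_sensitivity, target_power):
    """Smallest number of identical, independent survey units needed so that
    overall detection probability reaches the target power.
    Solves 1 - (1 - p)^n >= power for n."""
    return math.ceil(math.log(1.0 - target_power) / math.log(1.0 - per_unit_sensitivity))

# Hypothetical sensitivities for three detection methods (e.g. traps,
# spotlight searches, incidental reports); values are illustrative only.
components = [0.30, 0.20, 0.15]
print(f"Combined system sensitivity: {system_sensitivity(components):.2f}")

# How many survey units are needed for 95% power if a single unit detects
# a resident invader with probability 0.05? (Illustrative numbers.)
print(f"Units needed for 95% power: {units_for_power(0.05, 0.95)}")
```

In practice, detection probabilities differ among species and methods and are rarely independent, which is why the design described above relies on species-specific ecological models and expert elicitation rather than a closed-form calculation like this one.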
Abstract:
In partnering with students and industry, it is important for universities to recognize and value that the nature of knowledge and learning that emanates from work integrated learning experiences is different to formal university-based learning. Learning is not a by-product of work; rather, learning is fundamental to engaging in work practice. Work integrated learning experiences provide unique opportunities for students to integrate theory and practice through the solving of real-world problems. This paper reports findings to date of a project that sought to identify key issues and practices faced by academics, industry partners and students engaged in the provision and experience of work integrated learning within an undergraduate creative industries program at a major metropolitan university. In this paper, those findings are focused on some of the particular qualities and issues related to the assessment of learning at and through the work integrated experience. The findings suggest that assessment strategies need to better value the knowledges and practices of the Creative Industries. The paper also makes recommendations about how industry partners might best contribute to the assessment of students' developing capabilities and to continuous reflection on courses and the assurance of learning agenda.
Abstract:
Despite its proscription in legal jurisdictions around the world, workplace sexual harassment (SH) continues to be experienced by many women and some men in a variety of organizational settings. The aims of this review article are threefold: first, with a focus on workplace SH as it pertains to management and organizations, to synthesize the accumulated state of knowledge in the field; second, to evaluate this evidence, highlighting competing perspectives; and third, to canvass areas in need of further investigation. Whether explained through individual (psychological or legal consciousness) frameworks, sociocultural explanations or organizational perspectives, research consistently demonstrates that, as with other forms of sexual violence, individuals who experience workplace SH suffer significant psychological, health- and job-related consequences. Yet they often do not make formal complaints through internal organizational procedures or to outside bodies. Laws, structural reforms and policy initiatives have had some success in raising awareness of the problem and have shaped rules and norms in the employment context. However, there is an imperative to target further workplace actions to effectively prevent and respond to SH.
Abstract:
A basic element in advertising strategy is the choice of an appeal. In business-to-business (B2B) marketing communication, a long-standing approach relies on literal and factual, benefit-laden messages. Given the highly complex, costly and involved processes of business purchases, such approaches are certainly understandable. This project challenges the traditional B2B approach and asks whether an alternative approach, using symbolic messages that operate at a more intrinsic or emotional level, is effective in the B2B arena. As an alternative to literal (factual) messages, there is an emerging body of literature that asserts stronger, more enduring results can be achieved through symbolic messages (imagery or text) in an advertisement. The present study contributes to this stream of research. From a theoretical standpoint, the study explores differences in literal-symbolic message content in B2B advertisements. There has been much discussion, mainly in the consumer literature, on the ability of symbolic messages to motivate a prospect to process advertising information by necessitating more elaborate processing and comprehension. Business buyers are regarded as less receptive to indirect or implicit appeals because their purchase decisions are based on direct evidence of product superiority. It is argued here that these same buyers may be equally influenced by advertising that stimulates internally directed motivation, feelings and cognitions about the brand. Thus far, studies on the effects of literalism and symbolism are fragmented, and few focus on the B2B market. While there have been many studies about the effects of symbolism, no adequate scale exists to measure the continuum of literalism-symbolism. Therefore, a first task for this study was to develop such a scale. Following scale development, a content analysis of 748 B2B print advertisements was undertaken to investigate whether differences in literalism-symbolism led to higher advertising performance. Variations across time and industry were also measured. From a practical perspective, the results challenge the prevailing B2B practice of relying on literal messages. While definitive support was not established for the use of symbolic message content, literal messages also failed to predict advertising performance. If the 'fact, benefit laden' assumption within B2B advertising cannot be supported, then other approaches used in the business-to-consumer (B2C) sector, such as symbolic messages, may also be appropriate in business markets. Further research will need to test the potential effects of such messages, thereby building a revised foundation that can help drive advances in B2B advertising. Finally, the study offers a contribution to the growing body of knowledge on symbolism in advertising. While the specific focus of the study relates to B2B advertising, the Literalism-Symbolism scale developed here provides a reliable measure to evaluate literal and symbolic message content in all print advertisements. The value of this scale in advancing our understanding of message strategy may be significant in future consumer and business advertising research.
Abstract:
In today's electronic world, vast amounts of knowledge are stored within many datasets and databases. Often the default format of this data means that the knowledge within is not immediately accessible, but rather has to be mined and extracted. This requires automated tools, and they need to be effective and efficient. Association rule mining is one approach to obtaining knowledge stored within datasets and databases; it discovers frequent patterns and association rules between the items/attributes of a dataset with varying levels of strength. However, this is also association rule mining's downside: the number of rules that can be found is usually very large. In order to effectively use the association rules (and the knowledge within), the number of rules needs to be kept manageable, so it is necessary to have a method to reduce the number of association rules. However, we do not want to lose knowledge through this process. Thus the idea of non-redundant association rule mining was born. A second issue with association rule mining is determining which rules are interesting. The standard approach has been to use support and confidence, but these measures have their limitations. Approaches which use information about the dataset's structure to measure association rules are limited, but could yield useful association rules if tapped. Finally, while it is important to be able to extract interesting association rules from a dataset in a manageable number, it is equally important to be able to apply them in a practical way, where the knowledge they contain can be taken advantage of. Association rules show items/attributes that appear together frequently. Recommendation systems also look at patterns and items/attributes that occur together frequently in order to make a recommendation to a person. It should therefore be possible to bring the two together. In this thesis we look at these three issues and propose approaches to address them. For discovering non-redundant rules, we propose enhanced approaches to rule mining in multi-level datasets that allow hierarchically redundant association rules to be identified and removed without information loss. For discovering interesting association rules based on the dataset's structure, we propose three measures for use in multi-level datasets. Lastly, we propose and demonstrate an approach that allows association rules to be practically and effectively used in a recommender system, while at the same time improving the recommender system's performance. This becomes especially evident when looking at the user cold-start problem for a recommender system; in fact, our proposal helps to address this serious problem facing recommender systems.
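As a concrete illustration of the support and confidence measures discussed above, and of why the raw rule count quickly becomes unmanageable, the following minimal Python sketch (a toy example with made-up transactions, not code or data from the thesis) enumerates candidate association rules and prunes them with support and confidence thresholds.

```python
from itertools import combinations

# Toy transaction dataset (hypothetical example, not from the thesis).
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def mine_rules(transactions, min_support=0.4, min_confidence=0.6):
    """Enumerate rules X -> y (single consequent) that meet both thresholds."""
    items = set().union(*transactions)
    rules = []
    # Antecedents of size 1 and 2 only, to keep the sketch small.
    for size in (1, 2):
        for antecedent in combinations(items, size):
            s_ante = support(antecedent, transactions)
            if s_ante == 0:
                continue
            for consequent in items - set(antecedent):
                s_rule = support(set(antecedent) | {consequent}, transactions)
                confidence = s_rule / s_ante
                if s_rule >= min_support and confidence >= min_confidence:
                    rules.append((antecedent, consequent, s_rule, confidence))
    return rules

for ante, cons, s, c in mine_rules(transactions):
    print(f"{set(ante)} -> {cons}  support={s:.2f} confidence={c:.2f}")
```

Even this five-transaction example yields several rules at moderate thresholds; relaxing the thresholds, or moving to multi-level item hierarchies, multiplies the rule count, which is the manageability and redundancy problem the thesis addresses.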
Abstract:
The paper analyses knowledge integration processes at Fujitsu from a multi-level and systemic perspective. The focus is on team-building capability, capturing and utilising individual tacit knowledge, and communication networks for integrating dispersed specialist knowledge required in the development of new products and services. The analysis shows how knowledge integration is performed by Fujitsu at different layers of the company.
Abstract:
This paper examines the interactions between knowledge and power in the adoption of technologies central to municipal water supply plans, specifically investigating decisions in Progressive Era Chicago regarding water meters. The invention and introduction into use of the reliable water meter early in the Progressive Era allowed planners and engineers to gauge water use, and enabled communities willing to invest in the new infrastructure to allocate costs for provision of supply to consumers relative to use. In an era where efficiency was so prized and the role of technocratic expertise was increasing, Chicago’s continued failure to adopt metering (despite levels of per capita consumption nearly twice that of comparable cities and acknowledged levels of waste nearing half of system production) may indicate that the underlying characteristics of the city’s political system and its elite stymied the implementation of metering technologies as in Smith’s (1977) comparative study of nineteenth century armories. Perhaps, as with Flyvbjerg’s (1998) study of the city of Aalborg, the powerful know what they want and data will not interfere with their conclusions: if the data point to a solution other than what is desired, then it must be that the data are wrong. Alternatively, perhaps the technocrats failed adequately to communicate their findings in a language which the political elite could understand, with the failure lying in assumptions of scientific or technical literacy rather than with dissatisfaction in outcomes (Benveniste 1972). When examined through a historical institutionalist perspective, the case study of metering adoption lends itself to exploration of larger issues of knowledge and power in the planning process: what governs decisions regarding knowledge acquisition, how knowledge and power interact, whether the potential to improve knowledge leads to changes in action, and, whether the decision to overlook available knowledge has an impact on future decisions.
Abstract:
Many initiatives to improve business processes are emerging. The essential roles and contributions of Business Analyst (BA) and Business Process Management (BPM) professionals to such initiatives have been recognized in literature and practice. The roles and responsibilities of a BA or BPM practitioner typically require different skill-sets; however, these differences are often vague. This vagueness creates much confusion in practice and academia. While both the BA and BPM communities have made attempts to describe their domains through capability-defining empirical research and the development of bodies of knowledge, there has not yet been any attempt to identify the commonality of skills required and the points of uniqueness between the two professions. This study aims to address this gap and presents the findings of a detailed content mapping exercise (using NVivo as a qualitative data analysis tool) of the International Institute of Business Analysis (IIBA®) Guide to the Business Analysis Body of Knowledge (BABOK® Guide) against core BPM competency and capability frameworks.
Abstract:
In 2008, a three-year pilot 'pay for performance' (P4P) program, known as the 'Clinical Practice Improvement Payment' (CPIP), was introduced into Queensland Health (QHealth). QHealth is a large public health sector provider of acute, community, and public health services in Queensland, Australia. The organisation has recently embarked on a significant reform agenda, including a review of existing funding arrangements (Duckett et al., 2008). Partly in response to this reform agenda, a casemix funding model has been implemented to reconnect health care funding with outcomes. CPIP was conceptualised as a performance-based scheme that rewarded quality with financial incentives. This is the first time such a scheme has been implemented in the public health sector in Australia with a focus on rewarding quality, and it is unique in that it has a large state-wide focus and includes 15 Districts. CPIP initially targeted five acute and community clinical areas: Mental Health, Discharge Medication, Emergency Department, Chronic Obstructive Pulmonary Disease, and Stroke. The CPIP scheme was designed around key concepts including the identification of clinical indicators that met the set criteria of: high disease burden, a well-defined single diagnostic group or intervention, significant variations in clinical outcomes and/or practices, good evidence, and clinician control and support (Ward, Daniels, Walker & Duckett, 2007). This evaluative research targeted Phase One of implementation of the CPIP scheme, from January 2008 to March 2009. A formative evaluation utilising a mixed methodology and complementarity analysis was undertaken. The research involved three research questions and aimed to determine the knowledge, understanding, and attitudes of clinicians; identify improvements to the design, administration, and monitoring of CPIP; and determine the financial and economic costs of the scheme. Three key studies were undertaken to address these research questions. Firstly, a survey of clinicians was undertaken to examine their levels of knowledge and understanding and their attitudes to the scheme. Secondly, the study sought to apply Statistical Process Control (SPC) to the process indicators to assess whether this enhanced the scheme, and a third study examined a simple economic cost analysis. The CPIP survey of clinicians elicited 192 respondents. Over 70% of these respondents were supportive of the continuation of the CPIP scheme. This finding was also supported by the results of a quantitative attitude survey that identified positive attitudes in 6 of the 7 domains, including impact, awareness and understanding, and clinical relevance, across the combined respondent group. SPC as a trending tool may play an important role in the early identification of indicator weakness for the CPIP scheme. This evaluative research study supports a previously identified need in the literature for a phased introduction of pay for performance (P4P) type programs. It further highlights the value of undertaking a formal risk assessment of clinician, management, and systemic levels of literacy and competency with measurement and monitoring of quality prior to a phased implementation. This phasing can then be guided by a P4P Design Variable Matrix, which provides a selection of program design options such as indicator targets and payment mechanisms.
It became evident that a clear process is required to standardise how clinical indicators evolve over time and to direct movement towards more rigorous 'pay for performance' targets and the development of an optimal funding model. Use of this matrix will enable the scheme to mature and build the literacy and competency of clinicians and the organisation as implementation progresses. Furthermore, the research identified that CPIP created a spotlight on clinical indicators, and that incentive payments of over five million, from a potential ten million, were secured across the five clinical areas in the first 15 months of the scheme. This indicates that quality was rewarded in the new QHealth funding model and that, despite issues being identified with the payment mechanism, funding was distributed. The economic model used identified a relatively low cost of reporting (under $8,000) as opposed to funds secured of over $300,000 for mental health, as an example. Movement to a full cost-effectiveness study of CPIP is supported. Overall, the introduction of the CPIP scheme into QHealth has been a positive and effective strategy for engaging clinicians in quality and has been the catalyst for the identification and monitoring of valuable clinical process indicators. This research has highlighted that clinicians are supportive of the scheme in general; however, there are some significant risks, including the functioning of the CPIP payment mechanism. Given clinician support for the use of a pay-for-performance methodology in QHealth, the CPIP scheme has the potential to be a powerful addition to a multi-faceted suite of quality improvement initiatives within QHealth.
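As an illustration of how SPC can act as a trending tool for a process indicator of the kind used in CPIP (a generic sketch with invented data, not the evaluation's actual indicators or analysis), the following Python fragment computes 3-sigma control limits for a p-chart of monthly compliance proportions and flags months showing a special-cause signal.

```python
import math

def p_chart(counts, totals):
    """Return the centre line, per-period control limits, and flagged periods
    for a p-chart of proportions (e.g. monthly compliance with an indicator)."""
    p_bar = sum(counts) / sum(totals)           # overall proportion (centre line)
    limits = []
    flags = []
    for i, (x, n) in enumerate(zip(counts, totals)):
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        lcl = max(0.0, p_bar - 3 * sigma)       # lower 3-sigma control limit
        ucl = min(1.0, p_bar + 3 * sigma)       # upper 3-sigma control limit
        limits.append((lcl, ucl))
        if not lcl <= x / n <= ucl:
            flags.append(i)                     # special-cause signal this period
    return p_bar, limits, flags

# Illustrative monthly data: numerator = compliant cases, denominator = eligible cases.
compliant = [42, 45, 40, 44, 30, 43]
eligible  = [50, 52, 48, 51, 50, 49]
centre, limits, out_of_control = p_chart(compliant, eligible)
print(f"Centre line: {centre:.2f}")
print(f"Out-of-control months (0-indexed): {out_of_control}")
```

A point falling outside the limits would prompt investigation of that indicator's process before drawing conclusions about performance; the thresholds and data here are purely illustrative.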
Abstract:
In an Australian context, the term hooning refers to risky driving behaviours such as illegal street racing and speed trials, as well as behaviours that involve unnecessary noise and smoke, which include burn outs, donuts, fish tails, drifting and other skids. Hooning receives considerable negative media attention in Australia, and since the 1990s all Australian jurisdictions have implemented vehicle impoundment programs to deal with the problem. However, there is limited objective evidence of the road safety risk associated with hooning behaviours. Attempts to estimate the risk associated with hooning are limited by official data collection and storage practices, and the willingness of drivers to admit to their illegal behaviour in the event of a crash. International evidence suggests that illegal street racing is associated with only a small proportion of fatal crashes; however, hooning in an Australian context encompasses a broader group of driving behaviours than illegal street racing alone, and it is possible that the road safety risks will differ with these behaviours. There is evidence from North American jurisdictions that vehicle impoundment programs are effective for managing drink driving offenders, and drivers who continue to drive while disqualified or suspended both during and post-impoundment. However, these programs used impoundment periods of 30 – 180 days (depending on the number of previous offences). In Queensland the penalty for a first hooning offence is 48 hours, while the vehicle can be impounded for up to 3 months for a second offence, or permanently for a third or subsequent offence within three years. Thus, it remains unclear whether similar effects will be seen for hooning offenders in Australia, as no evaluations of vehicle impoundment programs for hooning have been published. To address these research needs, this program of research consisted of three complementary studies designed to: (1) investigate the road safety implications of hooning behaviours in terms of the risks associated with the specific behaviours, and the drivers who engage in these behaviours; and (2) assess the effectiveness of current approaches to dealing with the problem; in order to (3) inform policy and practice in the area of hooning behaviour. Study 1 involved qualitative (N = 22) and quantitative (N = 290) research with drivers who admitted engaging in hooning behaviours on Queensland roads. Study 2 involved a systematic profile of a large sample of drivers (N = 834) detected and punished for a hooning offence in Queensland, and a comparison of their driving and crash histories with a randomly sampled group of Queensland drivers with the same gender and age distribution. Study 3 examined the post-impoundment driving behaviour of hooning offenders (N = 610) to examine the effects of vehicle impoundment on driving behaviour. The theoretical framework used to guide the research incorporated expanded deterrence theory, social learning theory, and driver thrill-seeking perspectives. This framework was used to explore factors contributing to hooning behaviours, and interpret the results of the aspects of the research designed to explore the effectiveness of vehicle impoundment as a countermeasure for hooning. Variables from each of the perspectives were related to hooning measures, highlighting the complexity of the behaviour. This research found that the road safety risk of hooning behaviours appears low, as only a small proportion of the hooning offences in Study 2 resulted in a crash. 
However, Study 1 found that hooning-related crashes are less likely to be reported than general crashes, particularly when they do not involve an injury, and that higher frequencies of hooning behaviours are associated with hooning-related crash involvement. Further, approximately one fifth of drivers in Study 1 reported being involved in a hooning-related crash in the previous three years, which is comparable to general crash involvement among the general population of drivers in Queensland. Given that hooning-related crashes represented only a sub-set of crash involvement for this sample, this suggests that there are risks associated with hooning behaviour that are not apparent in official data sources. Further, the main evidence of risk associated with the behaviour appears to relate to the hooning driver, as Study 2 found that these drivers are likely to engage in other risky driving behaviours (particularly speeding and driving vehicles with defects or illegal modifications), and have significantly more traffic infringements, licence sanctions and crashes than drivers of a similar (i.e., young) age. Self-report data from the Study 1 samples indicated that Queensland's vehicle impoundment and forfeiture laws are perceived as severe, and that many drivers have reduced their hooning behaviour to avoid detection. However, it appears to be more common for drivers to have simply changed the location of their hooning behaviour to avoid detection. When the post-impoundment driving behaviour of the sample of hooning offenders was compared to their pre-impoundment behaviour in Study 3 to examine the effectiveness of vehicle impoundment, a small but significant reduction was found in hooning offences, and also in other traffic infringements generally. As Study 3 was observational, it was not possible to control for extraneous variables, and it is therefore possible that some of this reduction was due to other factors, such as a reduction in driving exposure, the effects of changes to Queensland's Graduated Driver Licensing scheme that were implemented during the study period and affected many drivers in the offender sample due to their age, or the extension of vehicle impoundment to other types of offences in Queensland during the post-impoundment period. However, a protective effect was observed, in that hooning offenders did not show the increase in traffic infringements in the post period that occurred within the comparison sample. This suggests that there may be some effect of vehicle impoundment on the driving behaviour of hooning offenders, and that this effect is not limited to their hooning behaviour. To be more confident in these results, it is necessary to measure driving exposure during the post periods to control for issues such as offenders being denied access to vehicles. While it was not the primary aim of this program of research to compare the utility of different theoretical perspectives, the findings have a number of theoretical implications. For example, it was found that only some of the deterrence variables were related to hooning behaviours, and sometimes in the opposite direction to predictions. Further, social learning theory variables had stronger associations with hooning. These results suggest that a purely legal approach to understanding hooning behaviours, and to designing and implementing countermeasures intended to reduce these behaviours, is unlikely to be successful.
This research also had implications for policy and practice, and a number of recommendations were made throughout the thesis to improve the quality of relevant data collection practices. Some of these changes have already occurred since the expansion of the application of vehicle impoundment programs to other offences in Queensland. It was also recommended that the operational and resource costs of these laws should be compared to the road safety benefits in ongoing evaluations of effectiveness to ensure that finite traffic policing resources are allocated in a way that produces maximum road safety benefits. However, as the evidence of risk associated with the hooning driver is more compelling than that associated with hooning behaviour, it was argued that the hooning driver may represent the better target for intervention. Suggestions for future research include ongoing evaluations of the effectiveness of vehicle impoundment programs for hooning and other high-risk driving behaviours, and the exploration of additional potential targets for intervention to reduce hooning behaviour. As the body of knowledge regarding the factors contributing to hooning increases, along with the identification of potential barriers to the effectiveness of current countermeasures, recommendations for changes in policy and practice for hooning behaviours can be made.
Abstract:
This special issue presents an excellent opportunity to study applied epistemology in public policy. This is an important task because the arena of public policy is the social domain in which macro conditions for 'knowledge work' and 'knowledge industries' are defined and created. We argue that knowledge-related public policy has become overly concerned with creating the politico-economic parameters for the commodification of knowledge. Our policy scope is broader than that of Fuller (1988), who emphasizes the need for a social epistemology of science policy. We extend our focus to a range of policy documents that include communications, science, education and innovation policy (collectively called knowledge-related public policy in acknowledgement of the fact that there is no defined policy silo called 'knowledge policy'), all of which are central to policy concerned with the 'knowledge economy' (Rooney and Mandeville, 1998). However, what we will show here is that, as Fuller (1995) argues, 'knowledge societies' are not industrial societies permeated by knowledge; rather, knowledge societies are permeated by industrial values. Our analysis is informed by an autopoietic perspective. Methodologically, we approach it from a sociolinguistic position that acknowledges the centrality of language to human societies (Graham, 2000). Here, what we call 'knowledge' is posited as a social and cognitive relationship between persons operating on and within multiple social and non-social (or, crudely, 'physical') environments. Moreover, knowing, we argue, is a sociolinguistically constituted process. Further, we emphasize that the evaluative dimension of language is most salient for analysing contemporary policy discourses about the commercialization of epistemology (Graham, in press). Finally, we provide a discourse analysis of a sample of exemplary texts drawn from a 1.3 million-word corpus of knowledge-related public policy documents that we compiled from local, state, national and supranational legislatures throughout the industrialized world. Our analysis exemplifies a propensity in policy for resorting to technocratic, instrumentalist and anti-intellectual views of knowledge. We argue that what underpins these patterns is a commodity-based conceptualization of knowledge, resting on an axiology of narrowly economic imperatives at odds with the very nature of knowledge. The commodity view of knowledge, therefore, is flawed in its ignorance of the social systemic properties of knowing.
Abstract:
The Dark Ages are generally held to be a time of technological and intellectual stagnation in western development. But that is not necessarily the case. Indeed, from a certain perspective, nothing could be further from the truth. In this paper we draw historical comparisons, focusing especially on the thirteenth and fourteenth centuries, between the technological and intellectual ruptures in Europe during the Dark Ages and those of our current period. Our analysis is framed in part by Harold Innis's notion of "knowledge monopolies". We give an overview of how these were affected by new media, new power struggles, and new intellectual debates that emerged in thirteenth and fourteenth century Europe. The historical salience of our focus may seem elusive. Our world has changed so much, and history seems to be an increasingly far-from-favoured method for understanding our own period and its future potentials. Yet our seemingly distant historical focus provides some surprising insights into the social dynamics that are at work today: the fracturing of established knowledge and power bases; the democratisation of certain "sacred" forms of communication and knowledge and, conversely, the "sacrosanct" appropriation of certain vernacular forms; challenges and innovations in social and scientific method and thought; the emergence of social world-shattering media practices; struggles over control of vast networks of media and knowledge monopolies; and the enclosure of public discursive and social spaces for singular, manipulative purposes. The period between the eleventh and fourteenth centuries in Europe prefigured what we now call the Enlightenment, perhaps more so than any other period before or after; it shaped what the Enlightenment was to become. We claim no knowledge of the future here. But in the "post-everything" society, where history is as much up for sale as it is for argument, we argue that our historical perspective provides a useful analogy for grasping the wider trends in the political economy of media, and for recognising clear and actual threats to the future of the public sphere in supposedly democratic societies.