875 results for State symbols and flags
Abstract:
The current paper examines the dissimilarities in news framing by state-sponsored news outlets across their different language versions. A comparative framing analysis is conducted on the news coverage of the Russian intervention in Syria (2016) by RT and Radio Liberty in their Russian- and English-language versions. Discrepancies in the framing of this event are found in both news outlets. The strongest distinction between the Russian and English versions occurred in the framing of responsibility and of the humanitarian crisis in Syria. The study attempts to explain the identified differences within the framework of public diplomacy and propaganda studies. Existing theories hold that political ideology and foreign policy orientation influence the principles of state propaganda and state-sponsored international broadcasting. However, the current findings suggest that other factors may also be at work, such as local news discourse and journalistic principles. This conclusion is preliminary, as few studies with a comparable research design exist to support the discussion. Studies of the localized strategies of international media (whether private networks or state-funded channels) can refine these conclusions and bring a new perspective to global media studies.
Abstract:
This thesis is concerned with certain aspects of the Public Inquiry into the accident at Houghton Main Colliery in June 1975. It examines whether, prior to the accident, there existed at the Colliery a situation in which too much reliance was being placed upon state regulation and too little upon personal responsibility. I study the phenomenon of state regulation. This is done (a) by analysis of selected writings on state regulation/intervention/interference/bureaucracy (the words are used synonymously) over the last two hundred years, specifically those of Marx on the 1866 Committee on Mines, and (b) by studying Chadwick and Tremenheere, leading and contrasting "bureaucrats" of the mid-nineteenth century. The bureaucratisation of the mining industry over the period 1835-1954 is described, and it is demonstrated that the industry obtained and now possesses those characteristics outlined by Max Weber in his model of bureaucracy. I analyse criticisms of the model and find them to be relevant, in that they facilitate understanding both of the circumstances of the accident and of the Inquiry. Further understanding of the circumstances and causes of the accident was gained by attendance at the Inquiry and by interviewing many of those involved in it. I analyse many aspects of the Inquiry - its objectives, structure, procedure and conflicting interests - and find that, although the Inquiry had many of the symbols of bureaucracy, it suffered not from "too much" outside interference but rather from the coal mining industry's shared belief in its ability to solve its own problems. I found nothing to suggest that, prior to the accident, colliery personnel relied, or were encouraged to rely, "too much" upon state regulation.
Abstract:
The rise of celebrity culture is a theme that has attracted a significant amount of attention within both mainstream sociology and cultural studies in more recent times. Ensuing debate has identified contemporary sports figures as an important facet of the celebrity‐media nexus and as possible signifiers of cultural change. In this paper we take one particular sports celebrity, South African soccer star Mark Fish, and evaluate his image in relation to debates surrounding sport, politics and the post‐apartheid state. We argue that because Fish appears to enjoy all the benefits of celebrity status (within his home country at least), an analysis of his career and identity provides a useful means by which to think about the changing political and nationalistic values within South African society.
Abstract:
This dissertation examined how United States illicit drug control policy, commonly referred to as the "war on drugs," contributes to the reproduction of gendered and racialized social relations. Specifically, it analyzed the identity-producing practices of United States illicit drug control policy as it relates to the construction of U.S. identities. Drawing on the theoretical contributions of feminist postpositivists, three cases of illicit drug policy practice were discussed. In the first case, discourse analysis was employed to examine recent debates (1986-2005) in U.S. Congressional Hearings about the proper understanding of the illicit drug "threat." The analysis showed how competing policy positions are tied to differing understandings of proper masculinity and the role of policymakers as protectors of the national interest. Utilizing critical visual methodologies, the second case examined a public service media campaign circulated by the Office of National Drug Control Policy that tied the "war on drugs" to another security concern in the U.S., the "war on terror." This case demonstrated how the media campaign uses messages about race, masculinity, and femininity to produce privileged notions of state identity and proper citizenship. The third case examined the gendered politics of drug interdiction at the U.S. border. Using qualitative research methodologies, including semi-structured interviews and participant observation, it examined how gender is produced through drug interdiction at border sites like Miami International Airport. By paying attention to the discourse that circulates about women drug couriers, it showed how gender is normalized in a national security setting. The dissertation found that illicit drug control policy takes the form it does because of the politics of gender and racial identity and that, as a result, illicit drug policy is implicated in the reproduction of gender and racial inequities. It concluded that a more socially conscious and successful illicit drug policy requires an awareness of the gendered and racialized assumptions that inform and shape policy practices.
Abstract:
An Aerosol Time-Of-Flight Mass Spectrometer (ATOFMS) was deployed to investigate the size-resolved chemical composition of single particles at an urban background site in Paris, France, as part of the MEGAPOLI winter campaign in January/February 2010. ATOFMS particle counts were scaled to match coincident Twin Differential Mobility Particle Sizer (TDMPS) data in order to generate hourly size-resolved mass concentrations for the single particle classes observed. The total scaled ATOFMS particle mass concentration in the size range 150–1067 nm was found to agree very well with the sum of concurrent High-Resolution Time-of-Flight Aerosol Mass Spectrometer (HR-ToF-AMS) and Multi-Angle Absorption Photometer (MAAP) mass concentration measurements of organic carbon (OC), inorganic ions and black carbon (BC) (R2 = 0.91). Clustering analysis of the ATOFMS single particle mass spectra allowed the separation of elemental carbon (EC) particles into four classes: (i) EC attributed to biomass burning (ECbiomass), (ii) EC attributed to traffic (ECtraffic), (iii) EC internally mixed with OC and ammonium sulfate (ECOCSOx), and (iv) EC internally mixed with OC and ammonium nitrate (ECOCNOx). Average hourly mass concentrations for EC-containing particles detected by the ATOFMS were found to agree reasonably well with semi-continuous quantitative thermal/optical EC and optical BC measurements (r2 = 0.61 and 0.65–0.68, respectively, n = 552). The EC particle mass assigned to fossil fuel and biomass burning sources also agreed reasonably well with BC mass fractions assigned to the same sources using seven-wavelength aethalometer data (r2 = 0.60 and 0.48, respectively, n = 568). Agreement between the ATOFMS and other instrumentation improved noticeably when a period influenced by significantly aged, internally mixed EC particles was removed from the intercomparison. 88% and 12% of the EC particle mass were apportioned to fossil fuel and biomass burning, respectively, using the ATOFMS data, compared with 85% and 15%, respectively, for BC estimated from the aethalometer model. On average, the mass size distribution for EC particles is bimodal; the smaller mode is attributed to locally emitted, mostly externally mixed EC particles, while the larger mode is dominated by aged, internally mixed ECOCNOx particles associated with continental transport events. Periods of continental influence were identified using the Lagrangian Particle Dispersion Model (LPDM) "FLEXPART". A consistent minimum between the two EC mass size modes was observed at approximately 400 nm for the measurement period. EC particles below this size are attributed to local emissions using chemical mixing state information and contribute 79% of the scaled ATOFMS EC particle mass, while particles above this size are attributed to continental transport events and contribute 21% of the EC particle mass. These results clearly demonstrate the potential benefit of monitoring size-resolved mass concentrations for the separation of local and continental EC emissions. Knowledge of the relative input of these emissions is essential for assessing the effectiveness of local abatement strategies.
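For readers who want to see the scaling step in concrete terms, the following is a minimal Python sketch of how single-particle counts can be scaled against co-located mobility-sizer number concentrations and converted to a size-resolved mass concentration for one particle class. The bin values, the assumed effective density and the class fractions are illustrative assumptions, not the campaign's actual calibration or processing chain.

```python
# Minimal sketch: scale single-particle (ATOFMS-type) hit counts with co-located
# mobility-sizer (TDMPS-type) number concentrations, then convert to mass.
# All numbers below are illustrative placeholders, not MEGAPOLI values.
import numpy as np

d_cm = np.array([200e-7, 400e-7, 800e-7])       # size-bin midpoint diameters (cm)
atofms_hits = np.array([120.0, 300.0, 90.0])    # ATOFMS particles detected per bin (1 h)
tdmps_number = np.array([1.5e3, 3.0e2, 3.0e1])  # ambient number concentration (cm^-3)
ec_fraction = np.array([0.25, 0.40, 0.65])      # fraction of ATOFMS hits classed as EC

scale = tdmps_number / atofms_hits              # per-bin detection-efficiency correction
volume_cm3 = (np.pi / 6.0) * d_cm**3            # volume of a spherical particle (cm^3)
density = 1.6                                   # assumed effective density (g cm^-3)

# Scaled EC number -> mass concentration, converted from g cm^-3 to ug m^-3 (factor 1e12)
ec_mass_ug_m3 = ec_fraction * atofms_hits * scale * volume_cm3 * density * 1e12
print(ec_mass_ug_m3, ec_mass_ug_m3.sum())       # per-bin and total EC mass (ug m^-3)
```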
Abstract:
Recent research into resting-state functional magnetic resonance imaging (fMRI) has shown that the brain is very active during rest. This thesis work utilizes blood oxygenation level dependent (BOLD) signals to investigate the spatial and temporal functional network information found within resting-state data, and aims to assess the feasibility of extracting functional connectivity networks using different methods, as well as the dynamic variability within some of those methods. Furthermore, this work looks into producing valid networks using a sparsely sampled subset of the original data.
In this work we utilize four main methods: independent component analysis (ICA), principal component analysis (PCA), correlation, and a point-processing technique. Each method comes with unique assumptions, as well as strengths and limitations for exploring how the resting-state components interact in space and time.
Correlation is perhaps the simplest technique. Using it, resting-state patterns are identified based on how similar each voxel's time profile is to a seed region's time profile. However, this method requires a seed region and can only identify one resting-state network at a time. This simple correlation technique is able to reproduce the resting-state network using data from a single subject's scan session as well as from 16 subjects.
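As an illustration of the seed-based approach described above (not the thesis's actual code), a minimal Python sketch of a voxel-wise seed correlation map is given below; the array shapes, seed definition and synthetic data are assumptions.

```python
# Minimal sketch of seed-based correlation mapping on synthetic resting-state data.
import numpy as np

def seed_correlation_map(bold, seed_mask):
    """bold: (timepoints, voxels) BOLD array; seed_mask: boolean (voxels,).

    Returns the Pearson correlation of every voxel with the mean seed time course.
    """
    seed_ts = bold[:, seed_mask].mean(axis=1)                 # seed region time profile
    seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
    bold_z = (bold - bold.mean(axis=0)) / bold.std(axis=0)
    return bold_z.T @ seed_z / len(seed_ts)                   # correlation per voxel

# Synthetic example: 100 time points, 500 voxels, first 10 voxels form the seed region
rng = np.random.default_rng(0)
bold = rng.standard_normal((100, 500))
seed_mask = np.zeros(500, dtype=bool)
seed_mask[:10] = True
corr_map = seed_correlation_map(bold, seed_mask)              # one value per voxel
```

Voxels whose correlation with the seed exceeds a chosen threshold would then constitute the single resting-state network recoverable from that seed.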
Independent component analysis, the second technique, has established software implementations that can be used to apply it. ICA can extract multiple components from a data set in a single analysis. The disadvantage is that the resting-state networks it produces are all independent of each other, under the assumption that the spatial pattern of functional connectivity is the same across all time points. ICA successfully reproduces resting-state connectivity patterns for both a single subject and a 16-subject concatenated data set.
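For example, one widely available implementation is scikit-learn's FastICA; the minimal sketch below runs a spatial ICA in which voxels are treated as samples, so the extracted components are spatial maps and the mixing matrix holds their time courses. The component count and data shapes are illustrative assumptions, not the settings used in the thesis.

```python
# Minimal sketch of spatial ICA on a (timepoints, voxels) array using scikit-learn.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
bold = rng.standard_normal((200, 5000))      # e.g. a concatenated multi-subject data set

ica = FastICA(n_components=20, random_state=0, max_iter=500)
spatial_maps = ica.fit_transform(bold.T)     # (voxels, components): independent spatial maps
time_courses = ica.mixing_                   # (timepoints, components): associated time courses
```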
Using principal component analysis, the dimensionality of the data is compressed to find the directions in which the variance of the data is greatest. This method utilizes the same basic matrix math as ICA, with a few important differences that are outlined later in the text. Using this method, different functional connectivity patterns are sometimes identifiable, but with a large amount of noise and variability.
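A corresponding PCA sketch on the same kind of array is shown below; the components here are orthogonal directions of maximal variance rather than statistically independent maps, which is consistent with the noisier, more variable patterns reported above. Shapes and component count are again illustrative assumptions.

```python
# Minimal sketch of PCA on a (timepoints, voxels) array, treating voxels as samples
# so that the components are spatial maps ordered by explained variance.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
bold = rng.standard_normal((200, 5000))          # (timepoints, voxels)

pca = PCA(n_components=20)
spatial_components = pca.fit_transform(bold.T)   # (voxels, components)
print(pca.explained_variance_ratio_[:5])         # variance captured by leading components
```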
To begin to investigate the dynamics of the functional connectivity, the correlation technique is used to compare the first and second halves of a scan session. Minor differences are discernible between the correlation results for the two halves. Further, a sliding-window technique is implemented to study how the correlation coefficients change over time for different window sizes. From this technique it is apparent that the level of correlation with the seed region is not static over the length of the scan.
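A minimal sketch of the sliding-window idea is given below, assuming two already-extracted time courses and an arbitrarily chosen window length and step size.

```python
# Minimal sketch of sliding-window correlation between a seed time course and a
# voxel (or region) time course; window length and step are illustrative choices.
import numpy as np

def sliding_window_corr(seed_ts, voxel_ts, window=30, step=1):
    """Pearson correlation computed inside each window position."""
    corrs = []
    for start in range(0, len(seed_ts) - window + 1, step):
        s = seed_ts[start:start + window]
        v = voxel_ts[start:start + window]
        corrs.append(np.corrcoef(s, v)[0, 1])
    return np.array(corrs)

# Synthetic example: the resulting series shows how correlation drifts over the scan
rng = np.random.default_rng(0)
seed_ts = rng.standard_normal(300)
voxel_ts = 0.5 * seed_ts + rng.standard_normal(300)
dynamic_corr = sliding_window_corr(seed_ts, voxel_ts, window=40)
```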
The last method introduced, a point-process method, is one of the more novel techniques because it does not require analysis of the continuous time series. Here, network information is extracted based on brief occurrences of high- or low-amplitude signals within a seed region. Because point processing utilizes fewer time points from the data, the statistical power of the results is lower, and there are larger variations in default mode network (DMN) patterns between subjects. In addition to improved computational efficiency, a benefit of using a point-process method is that the patterns produced for different seed regions do not have to be independent of one another.
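The point-process idea can be sketched as follows: only the time points at which the seed signal makes a brief high-amplitude excursion are retained, and the whole-brain maps at those points are averaged. The one-standard-deviation threshold and array shapes below are illustrative assumptions, not the thesis's parameters.

```python
# Minimal sketch of a point-process map: average the brain volumes only at the time
# points where the seed signal exceeds a threshold (here, +1 standard deviation).
import numpy as np

def point_process_map(bold, seed_mask, threshold_sd=1.0):
    """bold: (timepoints, voxels); seed_mask: boolean (voxels,)."""
    seed_ts = bold[:, seed_mask].mean(axis=1)
    seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
    events = seed_z > threshold_sd                        # brief high-amplitude occurrences
    bold_z = (bold - bold.mean(axis=0)) / bold.std(axis=0)
    return bold_z[events].mean(axis=0)                    # mean map over selected points only

# Synthetic example: far fewer time points contribute than in the correlation approach
rng = np.random.default_rng(0)
bold = rng.standard_normal((300, 500))
seed_mask = np.zeros(500, dtype=bool)
seed_mask[:10] = True
dmn_like_map = point_process_map(bold, seed_mask)
```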
This work compares four unique methods of identifying functional connectivity patterns. ICA is a technique that is currently used by many scientists studying functional connectivity patterns. The PCA technique is not optimal for the level of noise and the distribution of these data sets. The correlation technique is simple and obtains good results; however, a seed region is needed and the method assumes that the DMN regions are correlated throughout the entire scan. Looking at the more dynamic aspects of correlation, changing patterns of correlation were evident. The point-process method, finally, produces promising results, identifying functional connectivity networks using only the low- and high-amplitude BOLD signals.
Abstract:
Report on Iowa State University of Science and Technology, Ames, Iowa for the year ended June 30, 2015
Abstract:
Report on a special investigation of the Center for Agricultural Law and Taxation at Iowa State University, for the period April 1, 2009 through December 15, 2015
Abstract:
Successful implementation of fault-tolerant quantum computation on a system of qubits places severe demands on the hardware used to control the many-qubit state. It is known that an accuracy threshold P_a exists for any quantum gate that is to be used in such a computation if the computation is to continue for an unlimited number of steps. Specifically, the error probability P_e for such a gate must fall below the accuracy threshold: P_e < P_a. Estimates of P_a vary widely, though P_a ∼ 10⁻⁴ has emerged as a challenging target for hardware designers. I present a theoretical framework based on neighboring optimal control that takes as input a good quantum gate and returns a new gate with better performance. I illustrate this approach by applying it to a universal set of quantum gates produced using non-adiabatic rapid passage. Performance improvements are substantial compared to the original (unimproved) gates, for both ideal and non-ideal controls. Under suitable conditions detailed below, all gate error probabilities fall 1 to 4 orders of magnitude below the target threshold of 10⁻⁴. After applying the neighboring optimal control theory to improve the performance of quantum gates in a universal set, I further apply the general control theory in a two-step procedure for fault-tolerant logical state preparation, and I illustrate this procedure by preparing a logical Bell state fault-tolerantly. The two-step preparation procedure is as follows: Step 1 provides a one-shot procedure using neighboring optimal control theory to prepare a physical two-qubit state which is a high-fidelity approximation to the Bell state |β_01⟩ = (1/√2)(|01⟩ + |10⟩). I show that for ideal (non-ideal) control, an approximate |β_01⟩ state can be prepared with error probability ϵ ∼ 10⁻⁶ (10⁻⁵) using one-shot local operations. Step 2 then takes a block of p pairs of physical qubits, each pair prepared in the |β_01⟩ state using Step 1, and fault-tolerantly prepares the logical Bell state for the C4 quantum error detection code.
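As a toy illustration of the error-probability figure quoted above (and not of the neighboring-optimal-control construction itself), the short Python snippet below builds the ideal |β_01⟩ state, perturbs it with a small hypothetical leakage amplitude onto |00⟩ and |11⟩, and evaluates the error probability as 1 − fidelity; the perturbation size is an assumption chosen only to land near the quoted 10⁻⁶ scale.

```python
# Toy check: error probability 1 - |<beta_01|psi>|^2 for a slightly imperfect Bell state.
# The leakage amplitude eps is a hypothetical illustration, not a result from the thesis.
import numpy as np

# Ideal Bell state |beta_01> = (|01> + |10>)/sqrt(2) in the basis {|00>, |01>, |10>, |11>}
beta_01 = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)

eps = 1e-3                                    # hypothetical leakage onto |00> and |11>
psi = np.array([eps, 1, 1, eps], dtype=complex)
psi /= np.linalg.norm(psi)                    # normalize the prepared state

fidelity = np.abs(np.vdot(beta_01, psi)) ** 2
error_probability = 1 - fidelity              # ~ eps**2 = 1e-6 for eps = 1e-3
print(f"error probability ~ {error_probability:.1e}")
```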
Abstract:
The chapter sets out to explain the KBUD and urban policy-making processes in Queensland, Australia, and to provide a clear understanding of the policy frameworks and relevant ICT applications of the Queensland ‘Smart State’ experience. The chapter consists of six sections. The first section, following the introduction, provides background information. The second section focuses on the KBUD processes in Queensland. The third section offers a comprehensive analysis of the ‘Queensland Smart State’ initiative and identifies the actors and goals of the Smart State agenda. The fourth section reviews the knowledge-based development and ICT applications and policies of the Queensland Smart State and Brisbane Smart City experiences, and their impacts on Brisbane’s successful KBUD. The fifth section discusses knowledge hubs and ICT developments within the Brisbane metropolitan area. The chapter then concludes with sections on future trends and conclusions.
Abstract:
In this article I outline and demonstrate a synthesis of the methods developed by Lemke (1998) and Martin (2000) for analyzing evaluations in English. I demonstrate the synthesis using examples from a 1.3-million-word technology policy corpus drawn from institutions at the local, state, national, and supranational levels. Lemke's (1998) critical model is organized around the broad 'evaluative dimensions' that are deployed to evaluate propositions and proposals in English. Martin's (2000) model is organized with a more overtly systemic-functional orientation around the concept of 'encoded feeling'. In applying both these models at different times, whilst recognizing their individual usefulness and complementarity, I found specific limitations that led me to work towards a synthesis of the two approaches. I also argue for the need to consider genre, media, and institutional aspects more explicitly when claiming intertextual and heteroglossic relations as the basis for inferred evaluations. A basic assertion made in this article is that the perceived Desirability of a process, person, circumstance, or thing is identical to its 'value'. But the Desirability of anything is a socially and thus historically conditioned attribution that requires significant institutional inculcation of other 'types' of value: appropriateness, importance, beauty, power, and so on. I therefore propose a method informed by critical discourse analysis (CDA) that sees evaluation as happening on at least four interdependent levels of abstraction.
Abstract:
More than a century ago in their definitive work “The Right to Privacy” Samuel D. Warren and Louis D. Brandeis highlighted the challenges posed to individual privacy by advancing technology. Today’s workplace is characterised by its reliance on computer technology, particularly the use of email and the Internet to perform critical business functions. Increasingly these and other workplace activities are the focus of monitoring by employers. There is little formal regulation of electronic monitoring in Australian or United States workplaces. Without reasonable limits or controls, this has the potential to adversely affect employees’ privacy rights. Australia has a history of legislating to protect privacy rights, whereas the United States has relied on a combination of constitutional guarantees, federal and state statutes, and the common law. This thesis examines a number of existing and proposed statutory and other workplace privacy laws in Australia and the United States. The analysis demonstrates that existing measures fail to adequately regulate monitoring or provide employees with suitable remedies where unjustifiable intrusions occur. The thesis ultimately supports the view that enacting uniform legislation at the national level provides a more effective and comprehensive solution for both employers and employees. Chapter One provides a general introduction and briefly discusses issues relevant to electronic monitoring in the workplace. Chapter Two contains an overview of privacy law as it relates to electronic monitoring in Australian and United States workplaces. In Chapter Three there is an examination of the complaint process and remedies available to a hypothetical employee (Mary) who is concerned about protecting her privacy rights at work. Chapter Four provides an analysis of the major themes emerging from the research, and also discusses the draft national uniform legislation. Chapter Five details the proposed legislation in the form of the Workplace Surveillance and Monitoring Act, and Chapter Six contains the conclusion.
Abstract:
We all know that the future of news is digital. But mainstream news providers are still grappling with how to entice more customers to digital news. This paper provides context for a survey currently underway on user intentions towards digital news and entertainment, by exploring: 1. Consumer behaviours and intentions towards digital news and information use; 2. Current trends in the Australian online news and information sector; 3. Issues and emerging opportunities in the Australian (and global) environment. Key influences on digital use of news and information are pricing and access. The paper highlights emerging technical opportunities and flags service gaps as at December 2008. These gaps include multiple disconnects between: 1. Changing user intentions towards online and location-based news (news based on a specific locality as chosen by the user) and information; 2. The ability of consumers to act on these intentions, given the availability and cost of technologies; 3. Younger users' preference for entertainment over news; 4. Current digital offerings of traditional news providers and opportunities. These disconnects present an opportunity for online news suppliers to appraise and resolve. Doing so may enhance their online news and information offering, attract consumers and improve loyalty. Outcomes from this paper will be used to identify knowledge gaps and contribute to the development of further analysis of Australian consumers and their behaviours and intentions towards online news and information. This will be undertaken via focus groups as part of a broader study by researchers at the Creative Industries Faculty at the Queensland University of Technology, supported by the Smart Services Cooperative Research Centre.
Abstract:
Abstract: Purpose – The purpose of this paper is to provide a parallel review of the role and processes of monitoring and regulation of corporate identities, examining both the communication and the performance measurement literature. Design/methodology/approach – Two questions are posed: Is it possible to effectively monitor and regulate corporate identities as a management control process? and, What is the relationship between corporate identity and performance measurement? Findings – Corporate identity management is positioned as a strategically complex task embracing the shaping of a range of dimensions of organisational life. The performance measurement literature likewise now emphasises organisational ability to incorporate both financial and “soft” non-financial performance measures. Consequently, the balanced scorecard has the potential to play multiple roles in monitoring and regulating the key dimensions of corporate identities. These shifts in direction in both fields suggest that performance measurement systems, as self-producing and self-referencing systems, have the potential to become both organic and powerful as organisational symbols and communication tools. Through this process of understanding and mobilising the interaction of both approaches to management, it may be possible to create a less obtrusive and more subtle way to control the nature of the organisation. Originality/value – This paper attempts the theoretical and practical fusion of disciplinary knowledge around corporate identities and performance measurement systems, potentially making a significant contribution to understanding, shaping and managing organisational identities.
Abstract:
Research on analogies in science education has focussed on student interpretation of teacher and textbook analogies, psychological aspects of learning with analogies and structured approaches for teaching with analogies. Few studies have investigated how analogies might be pivotal in students’ growing participation in chemical discourse. To study analogies in this way requires a sociocultural perspective on learning that focuses on ways in which language, signs, symbols and practices mediate participation in chemical discourse. This study reports research findings from a teacher-research study of two analogy-writing activities in a chemistry class. The study began with a theoretical model, Third Space, which informed analyses and interpretation of data. Third Space was operationalized into two sub-constructs called Dialogical Interactions and Hybrid Discourses. The aims of this study were to investigate sociocultural aspects of learning chemistry with analogies in order to identify classroom activities where students generate Dialogical Interactions and Hybrid Discourses, and to refine the operationalization of Third Space. These aims were addressed through three research questions. The research questions were studied through an instrumental case study design. The study was conducted in my Year 11 chemistry class at City State High School for the duration of one Semester. Data were generated through a range of data collection methods and analysed through discourse analysis using the Dialogical Interactions and Hybrid Discourse sub-constructs as coding categories. Results indicated that student interactions differed between analogical activities and mathematical problem-solving activities. Specifically, students drew on discourses other than school chemical discourse to construct analogies and their growing participation in chemical discourse was tracked using the Third Space model as an interpretive lens. Results of this study led to modification of the theoretical model adopted at the beginning of the study to a new model called Merged Discourse. Merged Discourse represents the mutual relationship that formed during analogical activities between the Analog Discourse and the Target Discourse. This model can be used for interpreting and analysing classroom discourse centred on analogical activities from sociocultural perspectives. That is, it can be used to code classroom discourse to reveal students’ growing participation with chemical (or scientific) discourse consistent with sociocultural perspectives on learning.