Abstract:
Principal Topic: Project structures are often created by entrepreneurs and large corporate organizations to develop new products. Since new product development projects (NPDP) are most often situated within a larger organization, intrapreneurship or corporate entrepreneurship plays an important role in bringing these projects to fruition. Since NPDP often involves developing a new product using immature technology, we describe the development of one such immature technology. The Joint Strike Fighter (JSF) F-35 aircraft is being developed by the U.S. Department of Defense and eight allied nations. In 2001 Lockheed Martin won a $19 billion contract to develop an affordable, stealthy and supersonic all-weather strike fighter designed to replace a wide range of aging fighter aircraft. In this research we define a complex project as one that demonstrates a number of sources of uncertainty to a degree, or level of severity, that makes it extremely difficult to predict project outcomes or to control or manage the project (Remington & Zolin, Forthcoming). Project complexity has been conceptualized by Remington and Pollock (2007) in terms of four major sources: temporal, directional, structural and technological complexity (see Figure 1). Temporal complexity exists when projects experience significant environmental change outside the direct influence or control of the project. The Global Economic Crisis of 2008-2009 is a good example of the type of environmental change that can make a project complex, as in the JSF project, where project managers attempt to respond to changes in interest rates, international currency exchange rates, commodity prices and so on. Directional complexity exists in a project where stakeholders' goals are unclear or undefined, where progress is hindered by unknown political agendas, or where stakeholders disagree about or misunderstand project goals. 
In the JSF project all the services and all partner countries have to agree on the specifications of the three variants of the aircraft: Conventional Take Off and Landing (CTOL), Short Take Off/Vertical Landing (STOVL) and the Carrier Variant (CV). Because the Navy requires a plane that can take off and land on an aircraft carrier, a special variant of the aircraft design was required, adding complexity to the project. Technical complexity occurs in a project using technology that is immature or where design characteristics are unknown or untried. Developing a plane that can take off on a very short runway and land vertically created many highly interdependent technological challenges: correctly locating, directing and balancing the lift fans, modulating the airflow, providing an equivalent amount of thrust from the downward-vectored rear exhaust to lift the aircraft, and at the same time controlling engine temperatures. These technological challenges make costing and scheduling equally challenging. Structural complexity in a project comes from the sheer number of elements, such as the number of people, teams or organizations involved, ambiguity regarding the elements, and the massive degree of interconnectedness between them. While Lockheed Martin is the prime contractor, they are assisted in major aspects of the JSF development by Northrop Grumman, BAE Systems, Pratt & Whitney, the GE/Rolls-Royce Fighter Engine Team and innumerable subcontractors. In addition to identifying opportunities to achieve project goals, complex projects also need to identify and exploit opportunities to increase agility in response to changing stakeholder demands or to reduce project risks. Complexity Leadership Theory contends that in complex environments adaptive and enabling leadership are needed (Uhl-Bien, Marion and McKelvey, 2007). 
Adaptive leadership facilitates creativity, learning and adaptability, while enabling leadership handles the conflicts that inevitably arise between adaptive leadership and traditional administrative leadership (Uhl-Bien and Marion, 2007). Hence, adaptive leadership involves the recognition of opportunities to adapt, while enabling leadership involves the exploitation of these opportunities. Our research questions revolve around the type or source of complexity and its relationship to opportunity recognition and exploitation. For example, is it only external environmental complexity that creates the need for entrepreneurial behaviours, such as opportunity recognition and opportunity exploitation? Do the internal dimensions of project complexity, such as technological and structural complexity, also create the need for opportunity recognition and opportunity exploitation? The Kropp, Zolin and Lindsay model (2009) describes a relationship between entrepreneurial orientation (EO), opportunity recognition (OR) and opportunity exploitation (OX) in complex projects, with environmental and organizational contextual variables as moderators. We extend their model by defining the effects of external complexity and internal complexity on OR and OX. ---------- Methodology/Key Propositions: When the environment is complex, EO is more likely to result in OR because project members will be actively looking for solutions to problems created by environmental change. But in projects that are technologically or structurally complex, project leaders and members may try to make the minimum changes possible to reduce the risk of creating new problems due to delays or schedule changes. In projects with environmental or technological complexity, project leaders who encourage the innovativeness dimension of EO will increase OR. 
But in projects with technical or structural complexity, innovativeness will not necessarily result in the recognition and exploitation of opportunities, due to the over-riding importance of maintaining stability in the highly intricate and interconnected project structure. We propose that in projects where environmental complexity creates the need for change and innovation, project leaders who are willing to accept and manage risk are more likely to identify opportunities to increase project effectiveness and efficiency. In contrast, in projects with internal complexity a much higher willingness to accept risk will be necessary to trigger opportunity recognition. In structurally complex projects we predict it will be less likely to find a relationship between risk taking and OR. When the environment is complex and a project has autonomy, project leaders will be motivated to exploit opportunities to improve the project's performance. In contrast, when the project has high internal complexity, they will be more cautious in execution. Similarly, when a project displays high competitive aggressiveness and its environment is complex, project leaders will be motivated to exploit opportunities to improve the project's performance; when the project has high internal complexity, they will be more cautious in execution. This paper reports the first stage of a three-year study into the behaviours of managers, leaders and team members of complex projects. We conduct a qualitative study involving a group discussion with experienced project leaders. The objective is to determine how leaders of large and potentially complex projects perceive that external and internal complexity will influence the effects of EO on OR. ---------- Results and Implications: These results will help identify and distinguish the impact of external and internal complexity on entrepreneurial behaviours in NPDP. 
Project managers will be better able to quickly decide how and when to respond to changes in the environment and internal project events.
Abstract:
Campylobacter jejuni, followed by Campylobacter coli, contributes substantially to the economic and public health burden attributed to food-borne infections in Australia. Genotypic characterisation of isolates has provided new insights into the epidemiology and pathogenesis of C. jejuni and C. coli. However, currently available methods are not conducive to the large scale epidemiological investigations that are necessary to elucidate the global epidemiology of these common food-borne pathogens. This research aims to develop high resolution C. jejuni and C. coli genotyping schemes that are convenient for high throughput applications. Real-time PCR and High Resolution Melt (HRM) analysis are fundamental to the genotyping schemes developed in this study and enable rapid, cost-effective interrogation of a range of different polymorphic sites within the Campylobacter genome. While the sources and routes of transmission of campylobacters are unclear, handling and consumption of poultry meat is frequently associated with human campylobacteriosis in Australia. Therefore, chicken-derived C. jejuni and C. coli isolates were used to develop and verify the methods described in this study. The first aim of this study describes the application of MLST-SNP (Multi Locus Sequence Typing Single Nucleotide Polymorphism) + binary typing to 87 chicken C. jejuni isolates using real-time PCR analysis. These typing schemes were developed previously by our research group using isolates from campylobacteriosis patients. The present study showed that SNP + binary typing, alone or in combination, is effective at detecting epidemiological linkage between chicken-derived Campylobacter isolates and enables data comparisons with other MLST-based investigations. SNP + binary types obtained from chicken isolates in this study were compared with a previously SNP + binary and MLST typed set of human isolates. 
Common genotypes between the two collections of isolates were identified and ST-524 represented a clone that could be worth monitoring in the chicken meat industry. In contrast, ST-48, mainly associated with bovine hosts, was abundant in the human isolates. This genotype was, however, absent in the chicken isolates, indicating the role of non-poultry sources in causing human Campylobacter infections. This demonstrates the potential application of SNP + binary typing for epidemiological investigations and source tracing. While MLST SNPs and binary genes comprise the more stable backbone of the Campylobacter genome and are indicative of long term epidemiological linkage of the isolates, the development of a High Resolution Melt (HRM) based curve analysis method to interrogate the hypervariable Campylobacter flagellin encoding gene (flaA) is described in Aim 2 of this study. The flaA gene product appears to be an important pathogenicity determinant of campylobacters and is therefore a popular target for genotyping, especially for short term epidemiological studies such as outbreak investigations. HRM curve analysis based flaA interrogation is a single-step closed-tube method that provides portable data that can be easily shared and accessed. Critical to the development of flaA HRM was the use of flaA specific primers that did not amplify the flaB gene. HRM curve analysis flaA interrogation was successful at discriminating the 47 sequence variants identified within the 87 C. jejuni and 15 C. coli isolates and correlated with the epidemiological background of the isolates. In the combinatorial format, the resolving power of flaA was additive to that of SNP + binary typing and CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) HRM, and fits the PHRANA (Progressive hierarchical resolving assays using nucleic acids) approach for genotyping. The use of statistical methods to analyse the HRM data enhanced the sophistication of the method. 
Therefore, flaA HRM is a rapid and cost effective alternative to gel- or sequence-based flaA typing schemes. Aim 3 of this study describes the development of a novel bioinformatics-driven method to interrogate Campylobacter MLST gene fragments using HRM, called ‘SNP Nucleated Minim MLST’ or ‘Minim typing’. The method involves HRM interrogation of MLST fragments that encompass highly informative ‘Nucleating SNPs’ to ensure high resolution. Selection of fragments potentially suited to HRM analysis was conducted in silico using i) the ‘Minimum SNPs’ and ii) the new ‘HRMtype’ software packages. Species-specific sets of six ‘Nucleating SNPs’ and six HRM fragments were identified for both C. jejuni and C. coli to ensure high typeability and resolution relevant to the MLST database. ‘Minim typing’ was tested empirically by typing 15 C. jejuni and five C. coli isolates. The association of clonal complexes (CC) with each isolate by ‘Minim typing’ and SNP + binary typing was used to compare the two MLST interrogation schemes. The CCs linked with each C. jejuni isolate were consistent for both methods. Thus, ‘Minim typing’ is an efficient and cost effective method to interrogate MLST genes. However, it is not expected to be independent of, or to match the resolution of, sequence-based MLST gene interrogation. ‘Minim typing’ in combination with flaA HRM is envisaged to comprise a highly resolving combinatorial typing scheme developed around the HRM platform, amenable to automation and multiplexing. The genotyping techniques described in this thesis involve the combinatorial interrogation of differentially evolving genetic markers on the unified real-time PCR and HRM platform. They provide high resolution and are simple, cost effective and ideally suited to rapid and high throughput genotyping of these common food-borne pathogens.
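The combinatorial logic of such hierarchical typing schemes, in which independently assayed markers are concatenated into one composite genotype, can be sketched in toy form (the marker names and values below are hypothetical illustrations, not data from the study):

```python
def composite_type(snp_profile, binary_genes, fla_variant):
    """Combine independently assayed markers into one composite genotype.

    snp_profile: iterable of nucleotide calls at the interrogated SNPs
    binary_genes: iterable of booleans (gene present / absent)
    fla_variant: label assigned from the flaA HRM curve
    """
    snp_part = "".join(snp_profile)
    bin_part = "".join("1" if present else "0" for present in binary_genes)
    return f"{snp_part}-{bin_part}-{fla_variant}"

# two isolates identical at the stable markers, separated only by flaA HRM
a = composite_type("ACGTAG", [True, False, True], "flaA-07")
b = composite_type("ACGTAG", [True, False, True], "flaA-12")
```

Isolates that are identical across the stable SNP and binary markers can still be distinguished by the faster-evolving flaA component, which is the additive resolving power described above.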
Abstract:
Research on efficient pairing implementation has focussed on reducing the loop length and on using high-degree twists. Existence of twists of degree larger than 2 is a very restrictive criterion, but luckily constructions for pairing-friendly elliptic curves with such twists exist. In fact, Freeman, Scott and Teske showed in their overview paper that often the best known methods of constructing pairing-friendly elliptic curves over fields of large prime characteristic produce curves that admit twists of degree 3, 4 or 6. A few papers have presented explicit formulas for the doubling and the addition step in Miller’s algorithm, but the optimizations were all done for the Tate pairing with degree-2 twists, so the main usage of the high-degree twists remained incompatible with more efficient formulas. In this paper we present efficient formulas for curves with twists of degree 2, 3, 4 or 6. These formulas are significantly faster than their predecessors. We show how these faster formulas can be applied to Tate and ate pairing variants, thereby speeding up all practical suggestions for efficient pairing implementations over fields of large characteristic.
Abstract:
This paper examines the algebraic cryptanalysis of small-scale variants of the LEX-BES. LEX-BES is a stream cipher based on the Advanced Encryption Standard (AES) block cipher. LEX is a generic method proposed for constructing a stream cipher from a block cipher, initially introduced by Biryukov at eSTREAM, the ECRYPT Stream Cipher project, in 2005. The Big Encryption System (BES) is a block cipher introduced at CRYPTO 2002 which facilitates the algebraic analysis of the AES block cipher. In this paper, experiments were conducted to find solutions of the equation systems describing small-scale LEX-BES using Gröbner Basis computations. This follows a similar approach to the work by Cid, Murphy and Robshaw at FSE 2005 that investigated algebraic cryptanalysis of small-scale variants of the BES. The difference between LEX-BES and BES is that, due to the way the keystream is extracted, the number of unknowns in the LEX-BES equations is fewer than the number in BES. As far as the author knows, this attempt is the first at creating solvable equation systems for stream ciphers based on the LEX method using Gröbner Basis computations.
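The general workflow, writing a cipher as a system of polynomial equations and reducing it with a Gröbner basis until the key variables are isolated, can be illustrated with a deliberately tiny GF(2) system. The equations below are invented stand-ins (the real LEX-BES systems are over GF(2^8) and vastly larger); a minimal sketch using SymPy:

```python
from sympy import symbols, groebner

x, y, z = symbols("x y z")

# Toy "cipher" system over GF(2); the x**2 + x style field equations
# restrict every variable to {0, 1}, as bit-level models do.
system = [
    x*y + z,        # hypothetical S-box relation
    x + y + z + 1,  # hypothetical key-schedule relation
    y + z,          # hypothetical keystream-extraction relation
    y*z,
    x**2 + x, y**2 + y, z**2 + z,
]

# Lexicographic Groebner basis over GF(2) "solves" the system:
# the reduced basis exposes the unique solution x = 1, y = z = 0.
G = groebner(system, x, y, z, order="lex", modulus=2)
```

In an algebraic attack the same idea is applied to equations relating keystream bytes to key bytes, so a solved basis reveals key material directly.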
Abstract:
This thesis is devoted to the study of linear relationships in symmetric block ciphers. A block cipher is designed so that the ciphertext is produced as a nonlinear function of the plaintext and secret master key. However, linear relationships within the cipher can still exist if the texts and components of the cipher are manipulated in a number of ways, as shown in this thesis. There are four main contributions of this thesis. The first contribution is the extension of the applicability of integral attacks from word-based to bit-based block ciphers. Integral attacks exploit the linear relationship between texts at intermediate stages of encryption. This relationship can be used to recover subkey bits in a key recovery attack. In principle, integral attacks can be applied to bit-based block ciphers. However, specific tools to define the attack on these ciphers are not available. This problem is addressed in this thesis by introducing a refined set of notations to describe the attack. The bit-pattern-based integral attack is successfully demonstrated on reduced-round variants of the block ciphers Noekeon, Present and Serpent. The second contribution is the discovery of a very small system of equations that describes the LEX-AES stream cipher. LEX-AES is based heavily on the 128-bit-key (16-byte) Advanced Encryption Standard (AES) block cipher. In one instance, the system contains 21 equations and 17 unknown bytes. This is very close to the upper limit for an exhaustive key search, which is 16 bytes. One only needs to acquire 36 bytes of keystream to generate the equations. Therefore, the security of this cipher depends on the difficulty of solving this small system of equations. The third contribution is the proposal of an alternative method to measure diffusion in the linear transformation of Substitution-Permutation-Network (SPN) block ciphers. Currently, the branch number is widely used for this purpose. 
It is useful for estimating the possible success of differential and linear attacks on a particular SPN cipher. However, the measure does not give information on the number of input bits that are left unchanged by the transformation when producing the output bits. The new measure introduced in this thesis is intended to complement the current branch number technique. The measure is based on fixed points and simple linear relationships between the input and output words of the linear transformation. The measure represents the average fraction of input words to a linear diffusion transformation that are not effectively changed by the transformation. This measure is applied to the block ciphers AES, ARIA, Serpent and Present. It is shown that except for Serpent, the linear transformations used in the block ciphers examined do not behave as expected for a random linear transformation. The fourth contribution is the identification of linear paths in the nonlinear round function of the SMS4 block cipher. The SMS4 block cipher is used as a standard in the Chinese Wireless LAN Authentication and Privacy Infrastructure (WAPI) and hence, the round function should exhibit a high level of nonlinearity. However, the findings in this thesis on the existence of linear relationships show that this is not the case. It is shown that in some exceptional cases, the first four rounds of SMS4 are effectively linear. In these cases, the effective number of rounds for SMS4 is reduced by four, from 32 to 28. The findings raise questions about the security provided by SMS4, and might provide clues on the existence of a flaw in the design of the cipher.
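As a loose, hypothetical illustration of a fixed-point style diffusion measure (the exact definition used in the thesis may differ), one can estimate the average fraction of input words that a transformation leaves unchanged by sampling random inputs:

```python
import random

def avg_unchanged_fraction(transform, n_words, trials=2000, seed=1):
    """Estimate the average fraction of input words that a diffusion
    layer leaves unchanged. A layer with good diffusion should pass
    almost no word through untouched."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = [rng.randrange(256) for _ in range(n_words)]  # random byte-words
        y = transform(x)
        total += sum(a == b for a, b in zip(x, y)) / n_words
    return total / trials

identity = lambda words: list(words)           # no diffusion at all
rotate = lambda words: words[1:] + words[:1]   # simple word rotation
```

The identity map scores 1.0 (every word is a fixed point), while a word rotation scores near 1/256, since a word survives in place only when two random bytes coincide.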
Abstract:
This thesis addresses the problem of detecting and describing the same scene points in different wide-angle images taken by the same camera at different viewpoints. This is a core competency of many vision-based localisation tasks including visual odometry and visual place recognition. Wide-angle cameras have a large field of view that can exceed a full hemisphere, and the images they produce contain severe radial distortion. When compared to traditional narrow field of view perspective cameras, more accurate estimates of camera egomotion can be found using the images obtained with wide-angle cameras. The ability to accurately estimate camera egomotion is a fundamental primitive of visual odometry, and this is one of the reasons for the increased popularity in the use of wide-angle cameras for this task. Their large field of view also enables them to capture images of the same regions in a scene taken at very different viewpoints, and this makes them well suited to visual place recognition. However, the ability to estimate the camera egomotion and recognise the same scene in two different images is dependent on the ability to reliably detect and describe the same scene points, or ‘keypoints’, in the images. Most algorithms used for this purpose are designed almost exclusively for perspective images. Applying algorithms designed for perspective images directly to wide-angle images is problematic as no account is made for the image distortion. The primary contribution of this thesis is the development of two novel keypoint detectors, and a method of keypoint description, designed for wide-angle images. Both reformulate the Scale-Invariant Feature Transform (SIFT) as an image processing operation on the sphere. As the image captured by any central projection wide-angle camera can be mapped to the sphere, applying these variants to an image on the sphere enables keypoints to be detected in a manner that is invariant to image distortion. 
Each of the variants is required to find the scale-space representation of an image on the sphere, and they differ in the approaches they use to do this. Extensive experiments using real and synthetically generated wide-angle images are used to validate the two new keypoint detectors and the method of keypoint description. The better of the two new keypoint detectors is applied to vision-based localisation tasks including visual odometry and visual place recognition using outdoor wide-angle image sequences. As part of this work, the effect of keypoint coordinate selection on the accuracy of egomotion estimates using the Direct Linear Transform (DLT) is investigated, and a simple weighting scheme is proposed which attempts to account for the uncertainty of keypoint positions during detection. A word reliability metric is also developed for use within a visual ‘bag of words’ approach to place recognition.
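The weighting idea, scaling each correspondence's contribution to the DLT system by its reliability, can be sketched for a planar homography. This is a simplified stand-in (the thesis works with spherical image points and egomotion estimates, and all names below are illustrative, not the thesis' implementation):

```python
import numpy as np

def weighted_dlt_homography(src, dst, weights):
    """Estimate a 3x3 homography from weighted point correspondences.
    Each pair of DLT rows is scaled by sqrt(weight), so uncertain
    keypoints contribute less to the least-squares solution."""
    rows = []
    for (x, y), (u, v), w in zip(src, dst, weights):
        s = np.sqrt(w)
        rows.append(s * np.array([-x, -y, -1, 0, 0, 0, u * x, u * y, u]))
        rows.append(s * np.array([0, 0, 0, -x, -y, -1, v * x, v * y, v]))
    # the solution is the right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(np.array(rows))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

# sanity check: recover a known homography from exact correspondences
H_true = np.array([[1.0, 0.1, 2.0],
                   [0.0, 1.2, -1.0],
                   [0.001, 0.0, 1.0]])
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.3)]
dst = []
for x0, y0 in src:
    u, v, w = H_true @ np.array([x0, y0, 1.0])
    dst.append((u / w, v / w))
est = weighted_dlt_homography(src, dst, np.ones(len(src)))
```

With all weights equal this reduces to the standard DLT; downweighting a poorly localised keypoint shrinks its rows and hence its influence on the singular-vector solution.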
Abstract:
This paper addresses the problem of constructing consolidated business process models out of collections of process models that share common fragments. The paper considers the construction of unions of multiple models (called merged models) as well as intersections (called digests). Merged models are intended for analysts who wish to create a model that subsumes a collection of process models - typically representing variants of the same underlying process - with the aim of replacing the variants with the merged model. Digests, on the other hand, are intended for analysts who wish to identify the most recurring fragments across a collection of process models, so that they can focus their efforts on optimizing these fragments. The paper presents an algorithm for computing merged models and an algorithm for extracting digests from a merged model. The merging and digest extraction algorithms have been implemented and tested against collections of process models taken from multiple application domains. The tests show that the merging algorithm produces compact models and scales up to process models containing hundreds of nodes. Furthermore, a case study conducted in a large insurance company has demonstrated the usefulness of the merging and digest extraction operators in a practical setting.
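The union/intersection intuition can be sketched on process models reduced to bare edge sets. This is a toy abstraction (the paper's algorithms operate on full process graphs, not plain edge sets):

```python
from collections import Counter

def merge_models(models):
    """Union of all variants' edges, each annotated with how many
    variants contain it -- a toy stand-in for the merged model."""
    merged = Counter()
    for edges in models:
        merged.update(set(edges))
    return merged

def digest(merged, min_variants):
    """Fragments shared by at least `min_variants` variants -- a toy
    stand-in for digest extraction from the merged model."""
    return {edge for edge, n in merged.items() if n >= min_variants}

# two hypothetical variants of an insurance-claim process
v1 = [("check claim", "assess"), ("assess", "pay")]
v2 = [("check claim", "assess"), ("assess", "reject")]
merged = merge_models([v1, v2])
```

The frequency annotations kept on the merged model are what make digest extraction a simple filter rather than a second pass over all variants.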
Abstract:
The concept of sustainable urban development has been pushed to the forefront of policy-making and politics as the world wakes up to the impacts of climate change and the effects of modern urban lifestyles. Today, sustainable development has become a very prominent element in the day-to-day debate on urban policy and the expression of that policy in urban planning and development decisions. As a result of this, during the last few years, sustainable development automation applications such as sustainable urban development decision support systems have become popular tools as they offer new opportunities for local governments to realise their sustainable development agendas. This chapter explores a range of issues associated with the application of information and communication technologies and decision support systems in the process of underpinning sustainable urban development. The chapter considers how information and communication technologies can be applied to enhance urban planning, raise environmental awareness, share decisions and improve public participation. It introduces and explores three web-based geographical information systems projects as best practice. These systems are developed as support tools to include public opinion in the urban planning and development processes, and to provide planners with comprehensive tools for the analysis of sustainable urban development variants in order to prepare the best plans for constructing sustainable urban communities and futures.
Abstract:
This work examines the algebraic cryptanalysis of small scale variants of the LEX-BES. LEX-BES is a stream cipher based on the Advanced Encryption Standard (AES) block cipher. LEX is a generic method proposed for constructing a stream cipher from a block cipher, initially introduced by Biryukov at eSTREAM, the ECRYPT Stream Cipher project in 2005. The Big Encryption System (BES) is a block cipher introduced at CRYPTO 2002 which facilitates the algebraic analysis of the AES block cipher. In this article, experiments were conducted to find solutions of equation systems describing small scale LEX-BES using Gröbner Basis computations. This follows a similar approach to the work by Cid, Murphy and Robshaw at FSE 2005 that investigated algebraic cryptanalysis on small scale variants of the BES. The difference between LEX-BES and BES is that due to the way the keystream is extracted, the number of unknowns in LEX-BES equations is fewer than the number in BES. As far as the authors know, this attempt is the first at creating solvable equation systems for stream ciphers based on the LEX method using Gröbner Basis computations.
Abstract:
Governments around the world are facing the challenge of responding to increased expectations by their customers with regard to public service delivery. Citizens, for example, expect governments to provide better and more efficient electronic services on the Web in an integrated way. Online portals have become the approach of choice in online service delivery to meet these requirements and become more customer-focussed. This study describes and analyses existing variants of online service delivery models based upon an empirical study and provides valuable insights for researchers and practitioners in government. For this study, we have conducted interviews with senior management representatives from five international governments. Based on our findings, we distinguish three different classes of service delivery models. We describe and characterise each of these models in detail and provide an in-depth discussion of the strengths and weaknesses of these approaches.
Abstract:
There is increasing epidemiological and molecular evidence that cutaneous melanomas arise through multiple causal pathways. The purpose of this study was to explore the relationship between germline and somatic mutations in a population-based series of melanoma patients to reshape and refine the divergent pathway model for melanoma. Melanomas collected from 123 Australian patients were analyzed for melanocortin-1 receptor (MC1R) variants and mutations in the BRAF and NRAS genes. Detailed phenotypic and sun exposure data were systematically collected from all patients. We found that BRAF-mutant melanomas were significantly more likely from younger patients and those with high nevus counts, and were more likely in melanomas with adjacent nevus remnants. Conversely, BRAF-mutant melanomas were significantly less likely in people with high levels of lifetime sun exposure. We observed no association between germline MC1R status and somatic BRAF mutations in melanomas from this population. BRAF-mutant melanomas have different origins from other cutaneous melanomas. These data support the divergent pathways hypothesis for melanoma, which may require a reappraisal of targeted cancer prevention activities.
Abstract:
Petri nets are often used to model and analyze workflows. Many workflow languages have been mapped onto Petri nets in order to provide formal semantics or to verify correctness properties. Typically, the so-called Workflow nets are used to model and analyze workflows, and variants of the classical soundness property are used as a correctness notion. Since many workflow languages have cancellation features, a mapping to workflow nets is not always possible. Therefore, it is interesting to consider workflow nets with reset arcs. Unfortunately, soundness is undecidable for workflow nets with reset arcs. In this paper, we provide a proof and insights into the theoretical limits of workflow verification.
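The semantics of a reset arc, emptying a place entirely when its transition fires, is exactly what makes cancellation expressible and what complicates soundness analysis. It can be sketched with a minimal toy interpreter (not tied to any particular workflow language):

```python
def fire(marking, transition):
    """Fire a transition in a toy Petri net with reset arcs.

    marking: dict mapping place -> token count
    transition: (consume, produce, resets), where consume/produce map
    places to token counts and resets is a set of places emptied on firing.
    Returns the new marking, or None if the transition is not enabled.
    """
    consume, produce, resets = transition
    if any(marking.get(p, 0) < n for p, n in consume.items()):
        return None  # not enabled: some input place lacks tokens
    m = dict(marking)
    for p, n in consume.items():
        m[p] -= n
    for p in resets:
        m[p] = 0     # reset arc: remove ALL tokens, however many there are
    for p, n in produce.items():
        m[p] = m.get(p, 0) + n
    return m

# a 'cancel' transition: consume the cancel signal, clear every token
# from the in-progress place, and move to the end place
cancel = ({"cancel_requested": 1}, {"end": 1}, {"in_progress"})
m2 = fire({"cancel_requested": 1, "in_progress": 3}, cancel)
```

Because a reset arc removes an unbounded number of tokens in one step, the usual reachability-based arguments behind soundness checking break down, which is the intuition behind the undecidability result.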
Abstract:
Ureaplasma species are the bacteria most frequently isolated from human amniotic fluid in asymptomatic pregnancies and placental infections. Ureaplasma parvum serovars 3 and 6 are the most prevalent serovars isolated from men and women. We hypothesized that the effects on the fetus and chorioamnion of chronic ureaplasma infection in amniotic fluid are dependent on the serovar, dose, and variation of the ureaplasma multiple banded antigen (MBA) and mba gene. We injected high- or low-dose U. parvum serovar 3, serovar 6, or vehicle intra-amniotically into pregnant ewes at 55 days of gestation (term = 150 days) and examined the chorioamnion, amniotic fluid, and fetal lung tissue of animals delivered by cesarean section at 125 days of gestation. Variation of the multiple banded antigen/mba generated by serovar 3 and serovar 6 ureaplasmas in vivo was compared by PCR assay and Western blot. Ureaplasma inocula demonstrated only one (serovar 3) or two (serovar 6) MBA variants in vitro, but numerous antigenic variants were generated in vivo: serovar 6 passage 1 amniotic fluid cultures contained more MBA size variants than serovar 3 (P = 0.005), and ureaplasma titers were inversely related to the number of variants (P = 0.025). The severity of chorioamnionitis varied between animals. Low numbers of mba size variants (five or fewer) within amniotic fluid were associated with severe inflammation, whereas the chorioamnion from animals with nine or more mba variants showed little or no inflammation. These differences in chorioamnion inflammation may explain why not all women with in utero Ureaplasma spp. experience adverse pregnancy outcomes.
Abstract:
Most salad vegetables are eaten fresh by consumers. However, raw vegetables may pose a risk of transmitting opportunistic bacteria to immunocompromised people, including cystic fibrosis (CF) patients. In particular, CF patients are vulnerable to chronic Pseudomonas aeruginosa lung infections, and this organism is the primary cause of morbidity and mortality in this group. Clonal variants of P. aeruginosa have been identified as emerging threats to people afflicted with CF; however, it has not yet been proven where these clones originate or how they are transmitted. Due to the organism's aquatic environmental niche, it was hypothesised that vegetables may be a source of these clones. To test this hypothesis, lettuce, tomatoes, mushrooms and bean sprout packages (n = 150) were analysed from a green grocer, supermarket and farmers' market within the Brisbane region, availability permitting. The internal and external surfaces of the vegetables were separately analysed for the presence of clonal strains of P. aeruginosa using washing and homogenisation techniques, respectively. This separation was an attempt to establish which surface was contaminated, so that recommendations could be made to decrease or eliminate P. aeruginosa from these foods prior to consumption. Soil and water samples (n = 17) from local farms were also analysed for the presence of P. aeruginosa. Presumptive identification of isolates recovered from these environmental samples was made based on growth on Cetrimide agar at 42°C, presence of the cytochrome-oxidase enzyme and inability to ferment lactose. A P. aeruginosa duplex real-time polymerase chain reaction assay (PAduplex) was performed on all bacterial isolates presumptively identified as P. aeruginosa. Enterobacterial repetitive intergenic consensus strain typing PCR (ERIC-PCR) was subsequently performed on confirmed bacterial isolates. Although 72 P. aeruginosa isolates were obtained, none of these proved to be clonal strains. 
The significance of these findings is that vegetables may pose a risk of transmitting sporadic strains of P. aeruginosa to people afflicted with CF and possibly other immunocompromised people.
Resumo:
Issues of equity and inequity have always been part of employment relations and are a fundamental part of the industrial landscape. For example, in most countries in the nineteenth century and a large part of the twentieth century, women and members of ethnic groups (often a minority in the workforce) were barred from certain occupations, industries or work locations, and received less pay than the dominant male ethnic group for the same work. In recent decades attention has been focused on issues of equity between groups, predominantly women and different ethnic groups in the workforce. This has been embodied in industrial legislation, for example in equal pay for women and men, and frequently in specific equity legislation. In this way a whole new area of law and associated workplace practice has developed in many countries. Historically, employment relations and industrial relations research has not examined employment issues disaggregated by gender or ethnic group. Born out of concern with conflict and regulation at the workplace, studies tended to concentrate on white, male, unionized workers in manufacturing and heavy industry (Ackers, 2002, p. 4). The influential systems model crafted by Dunlop (1958) gave rise to ‘the discipline’s preoccupation with the “problem of order” [which] ensures the invisibility of women, not only because women have generally been less successful in mobilizing around their own needs and discontents, but more profoundly because this approach identifies the employment relationship as the ultimate source of power and conflict at work’ (Forrest, 1993, p. 410). While ‘the system approach does not deliberately exclude gender . . . by reproducing a very narrow research approach and understanding of issues of relevance for the research, gender is in general excluded or looked on as something of peripheral interest’ (Hansen, 2002, p. 198). 
However, long-lived patterns of gender segregation in occupations and industries, together with discriminatory access to work and social views about women and ethnic groups in the paid workforce, mean that the employment experience of women and ethnic groups is frequently quite different from that of men in the dominant ethnic group. Since the 1980s, research into women and employment has figured in the employment relations literature, but it is often relegated to a separate category in specific articles or book chapters, with women implicitly or explicitly seen as the atypical or exceptional worker (Hansen, 2002; Wajcman, 2000). The same conclusion can be reached for other groups with different labour force patterns and employment outcomes. This chapter proposes that awareness of equity issues is central to employment relations. Like industrial relations legislation and approaches, each country will have a unique set of equity policies and legislation, reflecting its history and culture. Yet while most books on employment and industrial relations deal with issues of equity in a separate chapter (most commonly on equity for women or, more recently, on ‘diversity’), the reality in the workplace is that all types of legislation and policies which impact wages and working conditions interact, and their effects cannot be disentangled one from another. When discussing equity in workplaces in the twenty-first century we are now faced with a plethora of different terms in English. Terms used include discrimination, equity, equal opportunity, affirmative action and diversity with all its variants (workplace diversity, managing diversity, and so on). There is a lack of agreed definitions, particularly when the terms are used outside of a legislative context. This ‘shifting linguistic terrain’ (Kennedy-Dubourdieu, 2006b, p. 3) varies from country to country and changes over time even within the one country. 
There is frequently a division made between equity and its related concepts and the range of expressions using the term ‘diversity’ (Wilson and Iles, 1999; Thomas and Ely, 1996). These present dilemmas for practitioners and researchers due to the amount and range of ideas prevalent, and the breadth of issues that are covered when we say ‘equity and diversity in employment’. To add to these dilemmas, the literature on equity and diversity has become bifurcated: the literature on workplace diversity/managing diversity appears largely in the business literature, while that on equity in employment appears frequently in legal and industrial relations journals. Workplaces of the twenty-first century differ from those of the nineteenth and twentieth centuries not only in the way they deal with individual and group differences but also in the way they interpret what are fair and equitable outcomes for different individuals and groups. These variations are the result of a range of social conditions, legislation and workplace constraints that have influenced the development of employment equity and the management of diversity. Attempts to achieve employment equity have primarily been dealt with through legislative means, and in the last fifty years this legislation has included elements of anti-discrimination, affirmative action, and equal employment opportunity in virtually all OECD countries (Mor Barak, 2005, pp. 17–52). Established on human rights and social justice principles, this legislation is based on the premise that systemic discrimination has existed and/or continues to exist in the labour force, and that particular groups of citizens have less advantageous employment outcomes. It is based on group identity, and employment equity programmes in general apply across all workplaces and are mandatory. The more recent notions of diversity in the workplace are based on ideas coming principally from the USA in the 1980s which have spread widely in the Western world since the 1990s. 
Broadly speaking, diversity ideas focus on individual differences either on their own or in concert with the idea of group differences. The diversity literature is based on a business case: that is, diversity is profitable in a variety of ways for business, and generally lacks a social justice or human rights justification (Burgess et al., 2009, pp. 81–2). Managing diversity is represented at the organizational level as a voluntary and local programme. This chapter discusses some major models and theories of equity and diversity. It begins by charting the history of ideas about equity in employment and then briefly discusses what is meant by equality and equity. The chapter then analyses the major debates about the ways in which equity can be achieved. The more recent ideas about diversity are then discussed, including the history of these ideas and the principles which guide this concept. The following section discusses both major frameworks of equity and diversity. The chapter then raises some ways in which insights from the equity and diversity literature can inform employment relations. Finally, the future of equity and diversity ideas is discussed.