961 results for Gravitational segregation


Relevance: 20.00%

Abstract:

The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time-delay interferometry (TDI) observables, which must be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that cancel the laser frequency noises. Cancellation is possible because the same noises occur multiple times in the different raw data streams. Originally, these observables were constructed by hand, first for LISA as a simple stationary array and then adjusted to incorporate the antenna's motion. However, none of the observables survived the flexing of the arms: they no longer produced cancellation with the same structure. The principal component approach, presented by Romano and Woan, is another way of handling these noises; it simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it exploits the correlations they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that an eigendecomposition of this matrix produces two distinct sets of eigenvalues, distinguished by the absence of laser frequency noise from one set. Transforming the raw data using the corresponding eigenvectors also produces data free from the laser frequency noises. This result led to the idea that the principal components may actually be time-delay interferometry observables, since they produce the same outcome: data that are free from laser frequency noise.
The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10 × 10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed unequal arm lengths and stationary noises with equal variances for each noise type. The results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, so analysis using principal components should give the same results as analysis using the traditional observables. This was confirmed by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables. The method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, arm lengths and noise variances.
Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which appear in the covariance matrix; in our toy-model investigations this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix is destroyed, which affects any computation methods that exploit this structure. Separating the two sets of data for the analysis was not necessary, because the laser frequency noises are very large compared with the photodetector noises, so the data containing them are strongly suppressed after the matrix inversion. In the frequency domain the power spectral density matrices are block diagonal, which simplifies the computation of the eigenvalues by allowing it to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and non-stationarity do not show up, because of the summation in the Fourier transform.
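The eigendecomposition argument can be illustrated with a toy covariance matrix. In this sketch the mixing pattern, dimensions and noise levels are invented for illustration and are far simpler than the LISA measurement model; it only shows how a dominant shared noise splits the spectrum into laser-dominated and laser-free sets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy measurement model: 6 raw readings, each containing two of three laser
# noises (the "multiple occurrences" that create the correlations).
M = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]], dtype=float)

sigma_laser = 1e3   # laser frequency noise dominates
sigma_photo = 1.0   # secondary (photodetector-like) noise

# Noise covariance of the raw data: C = s_l^2 M M^T + s_p^2 I
C = sigma_laser**2 * (M @ M.T) + sigma_photo**2 * np.eye(6)

evals, evecs = np.linalg.eigh(C)          # ascending eigenvalues
laser_free = evals < 10 * sigma_photo**2  # the laser-free set of eigenvalues
print("eigenvalues:", evals)

# Transforming raw data with the laser-free eigenvectors cancels the laser
# noise exactly (these eigenvectors span the null space of M^T).
laser = sigma_laser * rng.standard_normal(3)
data = M @ laser + sigma_photo * rng.standard_normal(6)
projected = evecs[:, laser_free].T @ data
print("laser-free combinations:", projected)
```

The projected combinations sit at the photodetector-noise level even though the raw data are dominated by the laser terms, which is the property the principal components share with the TDI observables.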

Relevance: 20.00%

Abstract:

According to ecological theory, the coexistence of competitors in patchy environments may be facilitated by hierarchical spatial segregation along axes of environmental variation, but empirical evidence is limited. Cabrera and water voles show a metapopulation-like structure in Mediterranean farmland, where they are known to segregate along space, habitat, and time axes within habitat patches. Here, we assess whether segregation also occurs among and within landscapes, and how this is influenced by patch-network and matrix composition. We surveyed 75 landscapes, each covering 78 ha, where we mapped all habitat patches potentially suitable for Cabrera and water voles, and the area effectively occupied by each species (extent of occupancy). The relatively large water vole tended to be the sole occupant of landscapes with a high habitat amount but relatively low patch density (i.e., a few large patches) and a predominantly agricultural matrix, whereas landscapes with high patch density (i.e., many small patches) and low agricultural cover tended to be occupied exclusively by the small Cabrera vole. The two species tended to co-occur in landscapes with intermediate patch-network and matrix characteristics, though their extents of occurrence were negatively correlated after controlling for environmental effects. In combination with our previous studies on the Cabrera-water vole system, these findings illustrate empirically the occurrence of hierarchical spatial segregation, ranging from within patches to among landscapes. Overall, our study suggests that recognizing the hierarchical nature of spatial segregation patterns and their major environmental drivers should enhance our understanding of species coexistence in patchy environments.

Relevance: 10.00%

Abstract:

Troubled dynamics between residents of an Aboriginal town in Queensland and the local health system were established during colonisation and consolidated during those periods of Australian history where the policies of 'protection' (segregation), integration and then assimilation held sway. The status of Aboriginal health is, in part, related to interactions between the residents' current and historical experiences of the health and criminal justice systems as together these agencies used medical and moral policing to legitimate dispossession, marginalisation, institutionalisation and control of the residents. The punitive regulations and ethnocentric strategies used by these institutions are within the living memory of many of the residents or in the published accounts of preceding generations. This paper explores current residents' memories and experiences.

Relevance: 10.00%

Abstract:

Background: Aerosol production during normal breathing is often attributed to turbulence in the respiratory tract. That mechanism is not consistent with a high degree of asymmetry between aerosol production during inhalation and exhalation. The objective was to investigate production symmetry during breathing. Methods: The aerosol size distribution in exhaled breath was examined for different breathing patterns, including normal breathing, varied breath-holding periods and contrasting inhalation and exhalation rates. The droplet size distribution in the exhaled breath was measured in real time using an aerodynamic particle sizer. Results and Conclusions: The dependence of the particle concentration decay rate on diameter during breath holding was consistent with gravitational settling in the alveolar spaces. Deep exhalation resulted in a 4- to 6-fold increase in concentration, and rapid inhalation produced a further 2- to 3-fold increase. In contrast, rapid exhalation had little effect on the measured concentration. A positive correlation of the breath aerosol concentration with subject age was observed. The results were consistent with the breath aerosol being produced through fluid film rupture in the respiratory bronchioles in the early stages of inhalation, with the resulting aerosol drawn into the alveoli and held before exhalation. The observed asymmetry of production in the breathing cycle, with very little aerosol produced during exhalation, is inconsistent with the widely assumed turbulence-induced aerosolization mechanism.
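The diameter dependence of the decay rate is what Stokes settling predicts: terminal velocity, and hence the first-order loss rate in a space of height h, scales as the square of the droplet diameter. A small sketch (the physical constants are standard; the alveolar length scale is an assumption for illustration, not a value from the study):

```python
# Stokes terminal velocity for a small droplet: v = rho * g * d^2 / (18 * mu).
# During breath holding, first-order loss by settling over a height h gives a
# concentration decay rate v / h, hence the observed d^2 dependence.
rho = 1000.0   # droplet density (water), kg/m^3
g = 9.81       # gravitational acceleration, m/s^2
mu = 1.8e-5    # dynamic viscosity of air, Pa.s

def settling_velocity(d):
    """Stokes terminal velocity (m/s) for droplet diameter d (m)."""
    return rho * g * d**2 / (18 * mu)

h = 200e-6     # assumed alveolar length scale, m (illustrative only)
for d_um in (1.0, 2.0, 5.0):
    v = settling_velocity(d_um * 1e-6)
    print(f"d = {d_um} um: v = {v:.2e} m/s, decay rate = {v / h:.2f} 1/s")
```

Doubling the diameter quadruples the decay rate, which is the kind of size-resolved signature an aerodynamic particle sizer can test during breath holds.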

Relevance: 10.00%

Abstract:

Dispersion characteristics of respiratory droplets in indoor environments are of special interest in controlling transmission of airborne diseases. This study adopts an Eulerian method to investigate the spatial concentration distribution and temporal evolution of exhaled and sneezed/coughed droplets in the 1.0–10.0 μm range in an office room with three air distribution methods: mixing ventilation (MV), displacement ventilation (DV), and under-floor air distribution (UFAD). The diffusion, gravitational settling and deposition of particulate matter are accounted for in the one-way-coupled Eulerian approach. The simulations show that exhaled droplets with diameters up to 10.0 μm from normal respiration are distributed uniformly in MV, while they are trapped at breathing height by thermal stratification in DV and UFAD, resulting in a high droplet concentration and a high exposure risk to other occupants. Sneezed/coughed droplets are diluted much more slowly in DV/UFAD than in MV. Low air speed in the breathing zone in DV/UFAD can lead to prolonged residence of droplets there.
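The one-way-coupled Eulerian treatment of settling and diffusion can be sketched in one dimension. The room height, effective diffusivity and settling velocity below are illustrative assumptions, not values from the study; the point is only how a concentration field is advanced with diffusion, gravitational settling and floor deposition.

```python
import numpy as np

# 1D sketch of the one-way-coupled Eulerian approach: droplet concentration
# C(z, t) in a room-height column, moved by turbulent diffusion and
# gravitational settling, with deposition at the floor (z = 0).
H, nz, dt = 3.0, 60, 0.2        # room height (m), grid cells, time step (s)
dz = H / nz
D = 5e-3                        # effective diffusivity, m^2/s (illustrative)
v_s = 3e-4                      # settling velocity of a ~3 um droplet, m/s

C = np.zeros(nz)
C[nz // 2] = 1.0                # unit puff released at mid-height
deposited = 0.0

for _ in range(1500):           # simulate 300 s
    lap = np.empty(nz)
    lap[1:-1] = (C[2:] - 2 * C[1:-1] + C[:-2]) / dz**2
    lap[0] = (C[1] - C[0]) / dz**2        # no diffusive flux through floor
    lap[-1] = (C[-2] - C[-1]) / dz**2     # no diffusive flux through ceiling
    adv = np.empty(nz)
    adv[:-1] = v_s * (C[1:] - C[:-1]) / dz  # upwind settling toward floor
    adv[-1] = -v_s * C[-1] / dz             # nothing enters from above
    deposited += dt * v_s * C[0]            # settling flux onto the floor
    C = C + dt * (D * lap + adv)

airborne = C.sum() * dz
print(f"airborne fraction after 5 min: {airborne / 0.05:.3f}")
```

The scheme conserves mass between the airborne column and the deposited amount, which is a useful sanity check before adding ventilation flow fields such as MV, DV or UFAD.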

Relevance: 10.00%

Abstract:

Heart disease is the leading cause of death worldwide. Although this could be alleviated by heart transplantation, there is a chronic shortage of donor hearts, so mechanical solutions are being considered. Currently, many Ventricular Assist Devices (VADs) are being developed worldwide in an effort to increase life expectancy and quality of life for end-stage heart failure patients. Current pre-clinical testing methods for VADs involve laboratory testing using Mock Circulation Loops (MCLs) and in vivo testing in animal models. The research and development of highly accurate MCLs is vital to the continuous improvement of VAD performance. The first objective of this study was to develop and validate a mathematical model of an MCL. This model could then be used in the design and construction of a variable compliance chamber to improve the performance of an existing MCL, as well as form the basis for a new miniaturised MCL. An extensive review of the literature on MCLs and mathematical modelling of their function was carried out. A mathematical model of an MCL was then created in the MATLAB/SIMULINK environment. The model included variable features such as resistance, fluid inertia and volumes (resulting from the pipe lengths and diameters); compliance of the Windkessel chambers, atria and ventricles; density of both the fluid and the compressed air applied to the system; gravitational effects on vertical columns of fluid; and accurately modelled actuators controlling ventricular contraction. The model was validated using the physical properties and the pressure and flow traces produced by a previously developed MCL. A variable compliance chamber was then designed to reproduce parameters determined by the mathematical model. The variability was achieved by controlling the transmural pressure across a diaphragm to alter the compliance of the system.
An initial prototype was tested in a previously developed MCL, and a variable level of arterial compliance was successfully produced; however, the complete range of compliance values required for accurate physiological representation could not be achieved with this initial design. The mathematical model was then used to design a smaller physical mock circulation loop, with the tubing sizes adjusted to produce accurate pressure and flow traces while retaining an appropriate frequency response. The mathematical model greatly assisted the general design of an in vitro cardiovascular device test rig, while the variable compliance chamber allowed simple, real-time manipulation of MCL compliance for accurate transitions between a variety of physiological conditions. The newly developed MCL provides an accurate mechanical representation of the human circulatory system for in vitro cardiovascular device testing and education. The continued improvement of VAD test rigs is essential if VAD design is to improve, and hence improve quality of life and life expectancy for heart failure patients.
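The arterial side of such a loop is commonly represented by a Windkessel model. As a minimal sketch (in Python rather than MATLAB/SIMULINK, with illustrative parameter values not taken from this work), a three-element Windkessel driven by a pulsatile pump flow reproduces an aortic-like pressure trace:

```python
import numpy as np

# Three-element Windkessel sketch of the arterial load in a mock circulation
# loop: characteristic resistance Rc, peripheral resistance Rp, compliance Cw.
Rc, Rp, Cw = 0.05, 1.0, 1.5     # mmHg.s/mL, mmHg.s/mL, mL/mmHg

def pump_flow(t, hr=75):
    """Half-sine systolic ejection, roughly a 70 mL stroke volume."""
    T = 60.0 / hr                # beat period, s
    tau = t % T
    Ts = 0.3 * T                 # systolic ejection time
    return 450.0 * np.sin(np.pi * tau / Ts) if tau < Ts else 0.0

dt = 1e-3
Pc = 80.0                        # compliance-chamber pressure, mmHg
P_trace = []
for i in range(int(8.0 / dt)):   # 8 s, enough to reach a periodic state
    Q = pump_flow(i * dt)
    Pc += dt * (Q - Pc / Rp) / Cw       # chamber fills from pump, leaks via Rp
    P_trace.append(Pc + Rc * Q)         # proximal pressure adds the Rc drop

P = np.array(P_trace[-800:])     # last beat (0.8 s at 1 ms steps)
print(f"systolic/diastolic ~ {P.max():.0f}/{P.min():.0f} mmHg")
```

Making Cw a controllable state, as the variable compliance chamber does physically via diaphragm transmural pressure, is what allows real-time transitions between physiological conditions in such a model.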

Relevance: 10.00%

Abstract:

Despite all attempts to prevent fraud, it continues to be a major threat to industry and government. Traditionally, organizations have focused on fraud prevention rather than detection. In this paper we present a role-mining-inspired approach to represent user behaviour in Enterprise Resource Planning (ERP) systems, aimed primarily at detecting opportunities to commit fraud or potentially suspicious activities. We adapt an approach which uses set theory to create transaction profiles based on analysis of user activity records. Based on these transaction profiles, we propose a set of (1) anomaly types to detect potentially suspicious user behaviour and (2) scenarios to identify inadequate segregation of duties in an ERP environment. In addition, we present two algorithms to construct a directed acyclic graph representing the relationships between transaction profiles. Experiments were conducted using a real dataset obtained from a teaching environment and a demonstration dataset, both from SAP R/3, presently the predominant ERP system. The results of this empirical research demonstrate the effectiveness of the proposed approach.
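The set-theoretic profile relationships can be sketched as follows. The transaction codes, profiles and the conflicting-duty pair below are invented for illustration; the paper's actual algorithms operate on SAP R/3 user activity records, not these toy sets.

```python
from itertools import combinations

# Transaction profiles as sets of transaction codes (invented examples).
profiles = {
    "clerk":     {"create_po"},
    "purchaser": {"create_po", "approve_po"},
    "payer":     {"create_po", "approve_po", "pay_invoice"},
    "auditor":   {"view_ledger"},
}

# Duties that should not be held together (segregation of duties).
sod_conflicts = [({"approve_po"}, {"pay_invoice"})]

def build_dag(profiles):
    """Edges a -> b where a's profile is a proper subset of b's,
    keeping only direct edges (transitive reduction)."""
    edges = set()
    for a, b in combinations(profiles, 2):
        if profiles[a] < profiles[b]:
            edges.add((a, b))
        elif profiles[b] < profiles[a]:
            edges.add((b, a))
    # drop (a, c) whenever some intermediate b gives (a, b) and (b, c)
    return {(a, c) for (a, c) in edges
            if not any((a, b) in edges and (b, c) in edges for b in profiles)}

def sod_violations(profiles):
    """Profiles containing both sides of any conflicting-duty pair."""
    return [name for name, txns in profiles.items()
            if any(l <= txns and r <= txns for l, r in sod_conflicts)]

print(sorted(build_dag(profiles)))   # direct subset relationships
print(sod_violations(profiles))      # profiles with inadequate segregation
```

The resulting DAG makes it easy to see which profiles strictly extend others, and the SoD check flags any profile that accumulates conflicting duties.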


Relevance: 10.00%

Abstract:

Magnetic Resonance Imaging (MRI) offers a valuable research tool for the assessment of 3D spinal deformity in adolescent idiopathic scoliosis (AIS); however, the horizontal patient position imposed by conventional scanners removes the axial compressive loading on the spine, which is an important determinant of deformity shape and magnitude in standing scoliosis patients. The objective of this study was to design, construct and test an MRI-compatible compression device for research into the effect of axial loading on spinal deformity using supine MRI scans. The device consists of a vest worn by the patient, attached via straps to a pneumatically actuated footplate. An applied load of 0.5 × bodyweight was remotely controlled by a unit at the scanner operator's console. The entire device was constructed from non-metallic components for MRI compatibility, and was evaluated by performing unloaded and loaded supine MRI scans on a series of 10 AIS patients. The study concluded that an MRI-compatible compression device had been successfully designed and constructed, providing a research tool for studies of the effect of axial loading on 3D spinal deformity in scoliosis. The axially loaded MR imaging capability developed in this study will allow future investigations of the effect of axial loading on spinal rotation, and imaging of the response of scoliotic spinal tissues to axial loading.

Relevance: 10.00%

Abstract:

Modelling droplet movement on leaf surfaces is an important component in understanding how water, pesticides or nutrients are absorbed through the leaf surface. A simple mathematical model is proposed in this paper for generating a realistic, natural-looking trajectory of a water droplet traversing a virtual leaf surface. The virtual surface is a triangular mesh over which a hybrid Clough-Tocher seamed element interpolant is constructed from real-life scattered data captured by a laser scanner. The motion of the droplet is assumed to be governed by gravitational, frictional and surface resistance forces, and the innovation of our approach is the use of thin-film theory to develop a stopping criterion for the droplet as it moves on the surface. The droplet model is verified and calibrated against experimental measurements; the results are promising and appear to capture reality quite well.
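The force balance and stopping criterion can be sketched for a single mesh facet. The retention-force constant and droplet size below are illustrative assumptions, and the paper's criterion is derived from thin-film theory rather than the fixed threshold used here; the sketch only shows the geometric part of the test.

```python
import numpy as np

# Force balance on one mesh triangle: gravity projected onto the facet plane,
# opposed by a surface-retention force (the constant is an assumption).
g = np.array([0.0, 0.0, -9.81])
rho, vol = 1000.0, 5e-9          # water, 5 uL droplet
m = rho * vol                    # droplet mass, kg

def tangential_gravity(normal):
    """Component of the gravitational force lying in the facet plane."""
    n = normal / np.linalg.norm(normal)
    f = m * g
    return f - np.dot(f, n) * n

def droplet_moves(normal, f_retention=2e-5):
    """Stopping criterion sketch: the droplet advances only while the
    in-plane gravitational force exceeds the retention force."""
    return np.linalg.norm(tangential_gravity(normal)) > f_retention

# A steep facet (60 deg from horizontal) vs a nearly flat facet (5 deg).
steep = np.array([0.0, -np.sin(np.radians(60)), np.cos(np.radians(60))])
flat = np.array([0.0, -np.sin(np.radians(5)), np.cos(np.radians(5))])
print(droplet_moves(steep), droplet_moves(flat))
```

Evaluating this test facet by facet as the droplet crosses the mesh yields a trajectory that halts naturally on shallow regions of the leaf.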

Relevance: 10.00%

Abstract:

Forces of demand and supply are changing the dynamics of the higher education market. The transformation of institutions of higher learning into competitive enterprises is underway. Higher education institutions are under intense pressure to create value and to focus their efforts and scarce funds on activities that drive up value for their respective customers and other stakeholders. Porter's generic 'value chain' model for creating value requires that the activities of an organization be segregated into discrete components for value chain analysis to be performed. Recent trends in higher education make such segregation possible. It is therefore proposed that the academic process can be unbundled into discrete components with well-developed measures. A reconfigured value chain for higher education, with its own value drivers and critical internal linkages, is also proposed in this paper.

Relevance: 10.00%

Abstract:

Current food practices around the world raise concerns about food insecurity in the future. Urban, suburban and peri-urban environments are particularly problematic in their segregation from the rural areas where natural food sources are grown and harvested. Soaring urban population growth only worsens the lack of understanding of, and access to, fresh produce for the people who live, work and play in the city. This paper explores the role of Human-Computer Interaction (HCI) design in encouraging individual users to participate in creating sustainable food cultures in urban environments. The paper takes the disciplinary perspective of urban informatics and presents five core constituents of an HCI design framework for encouraging sustainable food cultures in the city via ubiquitous technologies: the perspective of transdisciplinarity; the domains of interest of people, place, and technology; and the perspective of design.

Relevance: 10.00%

Abstract:

A protein-truncating variant of CHEK2, 1100delC, is associated with a moderate increase in breast cancer risk. We have determined the prevalence of this allele in index cases from 300 Australian multiple-case breast cancer families, 95% of which had been found to be negative for mutations in BRCA1 and BRCA2. Only two (0.6%) index cases heterozygous for the CHEK2 mutation were identified. All available relatives in these two families were genotyped, but there was no evidence of co-segregation between the CHEK2 variant and breast cancer. Lymphoblastoid cell lines established from a heterozygous carrier contained approximately 20% of the CHEK2 1100delC mRNA relative to wild-type CHEK2 transcript. However, no truncated CHK2 protein was detectable. Analyses of expression and phosphorylation of wild-type CHK2 suggest that the variant is likely to act by haploinsufficiency. Analysis of CDC25A degradation, a downstream target of CHK2, suggests that some compensation occurs to allow normal degradation of CDC25A. Such compensation of the 1100delC defect in CHEK2 might explain the rather low breast cancer risk associated with the CHEK2 variant, compared to that associated with truncating mutations in BRCA1 or BRCA2.

Relevance: 10.00%

Abstract:

Issues of equity and inequity have always been part of employment relations and are a fundamental part of the industrial landscape. For example, in most countries in the nineteenth century and a large part of the twentieth century, women and members of ethnic groups (often a minority in the workforce) were barred from certain occupations, industries or work locations, and received less pay than the dominant male ethnic group for the same work. In recent decades attention has been focused on issues of equity between groups, predominantly women and different ethnic groups in the workforce. This has been embodied in industrial legislation, for example in equal pay for women and men, and frequently in specific equity legislation. In this way a whole new area of law and associated workplace practice has developed in many countries. Historically, employment relations and industrial relations research has not examined employment issues disaggregated by gender or ethnic group. Born out of concern with conflict and regulation at the workplace, studies tended to concentrate on white, male, unionized workers in manufacturing and heavy industry (Ackers, 2002, p. 4). The influential systems model crafted by Dunlop (1958) gave rise to "the discipline's preoccupation with the 'problem of order' [which] ensures the invisibility of women, not only because women have generally been less successful in mobilizing around their own needs and discontents, but more profoundly because this approach identifies the employment relationship as the ultimate source of power and conflict at work" (Forrest, 1993, p. 410). Similarly, 'the system approach does not deliberately exclude gender . . . by reproducing a very narrow research approach and understanding of issues of relevance for the research, gender is in general excluded or looked on as something of peripheral interest' (Hansen, 2002, p. 198).
However, long-lived patterns of gender segregation in occupations and industries, together with discriminatory access to work and social views about women and ethnic groups in the paid workforce, mean that the employment experience of women and ethnic groups is frequently quite different to that of men in the dominant ethnic group. Since the 1980s, research into women and employment has figured in the employment relations literature, but it is often relegated to a separate category in specific articles or book chapters, with women implicitly or explicitly seen as the atypical or exceptional worker (Hansen, 2002; Wajcman, 2000). The same conclusion can be reached for other groups with different labour force patterns and employment outcomes. This chapter proposes that awareness of equity issues is central to employment relations. Like industrial relations legislation and approaches, each country will have a unique set of equity policies and legislation, reflecting their history and culture. Yet while most books on employment and industrial relations deal with issues of equity in a separate chapter (most commonly on equity for women or more recently on ‘diversity’), the reality in the workplace is that all types of legislation and policies which impact on the wages and working conditions interact, and their impact cannot be disentangled one from another. When discussing equity in workplaces in the twenty-first century we are now faced with a plethora of different terms in English. Terms used include discrimination, equity, equal opportunity, affirmative action and diversity with all its variants (workplace diversity, managing diversity, and so on). There is a lack of agreed definitions, particularly when the terms are used outside of a legislative context. This ‘shifting linguistic terrain’ (Kennedy-Dubourdieu, 2006b, p. 3) varies from country to country and changes over time even within the one country. 
There is frequently a division made between equity and its related concepts and the range of expressions using the term ‘diversity’ (Wilson and Iles, 1999; Thomas and Ely, 1996). These present dilemmas for practitioners and researchers due to the amount and range of ideas prevalent – and the breadth of issues that are covered when we say ‘equity and diversity in employment’. To add to these dilemmas, the literature on equity and diversity has become bifurcated: the literature on workplace diversity/management diversity appears largely in the business literature while that on equity in employment appears frequently in legal and industrial relations journals. Workplaces of the twenty-first century differ from those of the nineteenth and twentieth century not only in the way they deal with individual and group differences but also in the way they interpret what are fair and equitable outcomes for different individuals and groups. These variations are the result of a range of social conditions, legislation and workplace constraints that have influenced the development of employment equity and the management of diversity. Attempts to achieve employment equity have primarily been dealt with through legislative means, and in the last fifty years this legislation has included elements of anti-discrimination, affirmative action, and equal employment opportunity in virtually all OECD countries (Mor Barak, 2005, pp. 17–52). Established on human rights and social justice principles, this legislation is based on the premise that systemic discrimination has and/or continues to exist in the labour force and particular groups of citizens have less advantageous employment outcomes. It is based on group identity, and employment equity programmes in general apply across all workplaces and are mandatory. The more recent notions of diversity in the workplace are based on ideas coming principally from the USA in the 1980s which have spread widely in the Western world since the 1990s. 
Broadly speaking, diversity ideas focus on individual differences either on their own or in concert with the idea of group differences. The diversity literature is based on a business case: that is diversity is profitable in a variety of ways for business, and generally lacks a social justice or human rights justification (Burgess et al., 2009, pp. 81–2). Managing diversity is represented at the organizational level as a voluntary and local programme. This chapter discusses some major models and theories for equity and diversity. It begins by charting the history of ideas about equity in employment and then briefly discusses what is meant by equality and equity. The chapter then analyses the major debates about the ways in which equity can be achieved. The more recent ideas about diversity are then discussed, including the history of these ideas and the principles which guide this concept. The following section discusses both major frameworks of equity and diversity. The chapter then raises some ways in which insights from the equity and diversity literature can inform employment relations. Finally, the future of equity and diversity ideas is discussed.

Relevance: 10.00%

Abstract:

Ethernet is a key component of the standards used for digital process buses in transmission substations, namely IEC 61850 and IEEE Std 1588-2008 (PTPv2). These standards use multicast Ethernet frames that can be processed by more than one device. This presents significant engineering challenges when implementing a sampled value process bus, due to the large amount of network traffic. A system of network traffic segregation using a combination of Virtual LAN (VLAN) and multicast address filtering with managed Ethernet switches is presented. This includes VLAN prioritisation of traffic classes such as the IEC 61850 protocols GOOSE, MMS and sampled values (SV), and other protocols like PTPv2. Multicast address filtering is used to limit SV/GOOSE traffic to defined subsets of subscribers. A method to map substation plant reference designations to multicast address ranges is proposed that enables engineers to determine the type of traffic and the location of the source by inspecting the destination address. This method and the proposed filtering strategy simplify future changes to the prioritisation of network traffic, and are applicable to both process bus and station bus applications.
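A designation-to-address mapping of this kind can be sketched as follows. IEC 61850 guidance reserves multicast MAC blocks beginning 01-0C-CD (e.g. 01-0C-CD-01 for GOOSE, 01-0C-CD-04 for SV); the particular bay/device encoding below is an invented illustration, not the scheme proposed in the paper.

```python
# Encode the traffic class in octet 4 (the IEC 61850 multicast blocks) and a
# plant location (bay, device) in octets 5-6, so the destination MAC address
# alone identifies the traffic type and source. The location scheme is an
# illustrative assumption; managed switches can then filter on these octets.
TRAFFIC_BLOCK = {"GOOSE": 0x01, "SV": 0x04}

def multicast_mac(traffic, bay, device):
    """Build a multicast destination MAC for a given traffic class and source."""
    if not (0 <= bay <= 0xFF and 0 <= device <= 0xFF):
        raise ValueError("bay and device must each fit in one octet")
    octets = [0x01, 0x0C, 0xCD, TRAFFIC_BLOCK[traffic], bay, device]
    return "-".join(f"{o:02X}" for o in octets)

def decode(mac):
    """Recover (traffic class, bay, device) from a destination MAC."""
    o = [int(x, 16) for x in mac.split("-")]
    traffic = {v: k for k, v in TRAFFIC_BLOCK.items()}[o[3]]
    return traffic, o[4], o[5]

addr = multicast_mac("SV", bay=3, device=1)
print(addr)           # 01-0C-CD-04-03-01
print(decode(addr))   # ('SV', 3, 1)
```

An engineer inspecting a capture can then read the traffic class and source location straight from the destination address, and switch filter rules can be written as masks over the same octets.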