14 results for non-standard analysis hyperreals infinitesimals
at University of Queensland eSpace - Australia
Abstract:
Across the last four decades, the structure of the Australian labour market has changed profoundly as non-standard forms of employment have become more prevalent. According to many researchers, the growth of non-standard work has been driven by employee preferences, particularly among married women, for greater flexibility to balance paid work with domestic responsibilities and other non-work-related pursuits. In contrast, other researchers argue that the increasing prevalence of non-standard employment reflects employer demands for greater staffing flexibility. From this perspective, non-standard forms of employment are considered to have a negative effect on work-family balance. This paper explores whether non-standard employment is associated with reduced or heightened work-to-family conflict and tests whether experiences vary by gender. It concentrates on three common forms of non-standard employment: part-time employment, casual and fixed-term work contracts, and flexible scheduling practices (such as evening work, weekend work and irregular rostering). Analysis is based on 2299 employed parents from the first wave of the Household, Income and Labour Dynamics in Australia (HILDA) project. Results show that few scheduling measures are significant determinants of work-family balance. However, part-time employment is associated with reduced work-to-family strain for both men and women, even after controlling for various other employment- and household-related characteristics. Casual employment, in contrast, incurs the cost of poorer work-family balance for men. Surprisingly, HILDA data show that, overall, men experience greater work-to-family strain than women.
Abstract:
Motivated by applications of current superalgebras in the study of disordered systems such as the random XY and Dirac models, we investigate the gl(2|2) current superalgebra at general level k. We construct its free field representation and the corresponding Sugawara energy-momentum tensor in the non-standard basis. Three screening currents of the first kind are also presented. (C) 2003 Elsevier B.V. All rights reserved.
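For orientation, the Sugawara construction referred to above can be stated generically (this is the textbook form, not reproduced from the paper):

```latex
% Sugawara energy-momentum tensor for an affine (super)algebra at level k,
% built from normal-ordered current bilinears with metric
% \kappa_{ab} = \mathrm{str}(J_a J_b) and dual Coxeter number h^{\vee}:
T(z) \;=\; \frac{1}{2\,(k + h^{\vee})}\; \kappa^{ab}\, {:}J_a J_b{:}(z)
% For gl(2|2) the dual Coxeter number vanishes, h^{\vee} = 0, so the
% prefactor reduces to 1/(2k) and the construction is available at
% general (nonzero) level k.
```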
Abstract:
The research reported here draws on a study of five teenagers from a Dinka-speaking community of Sudanese settling in Australia. A range of factors including language proficiency, social network structure and language attitudes are examined as possible causes for the variability of language use. The results and discussion illustrate how the use of a triangular research approach captured the complexity of the participants' language situation and was critical to developing a full understanding of the interplay of factors influencing the teens' language maintenance and shift in a way that no single method could. Further, it shows that employment of different methodologies allowed for flexibility in data collection to ensure the fullest response from participants. Overall, this research suggests that for studies of non-standard communities, variability in research methods may prove more of a strength than the use of standardised instruments and approaches.
Abstract:
Generally, employment has been studied in terms of changes in the types of goods and services that the economy is purchasing. Far less attention has been given to the occupational aggregates that go into producing these goods and services. The few studies that did investigate this area found that the mix of labour inputs appears to have been changing over time in a systematic pattern. The increasing prevalence of white-collar, information workers gave rise to the assertion that many societies had entered a post-industrial information age. Deals first of all with some issues of measurement in the context of the Australian labour force, then looks at trends in various occupational groups using a non-standard four-sector classification of the labour force. Finally suggests an application in relation to the link between education and training and its ability to reduce structural unemployment.
Abstract:
In microarray studies, the application of clustering techniques is often used to derive meaningful insights into the data. In the past, hierarchical methods have been the primary clustering tool employed to perform this task. The hierarchical algorithms have been mainly applied heuristically to these cluster analysis problems. Further, a major limitation of these methods is their inability to determine the number of clusters. Thus there is a need for a model-based approach to these clustering problems. To this end, McLachlan et al. [7] developed a mixture model-based algorithm (EMMIX-GENE) for the clustering of tissue samples. To further investigate the EMMIX-GENE procedure as a model-based approach, we present a case study involving the application of EMMIX-GENE to the breast cancer data studied recently in van 't Veer et al. [10]. Our analysis considers the problem of clustering the tissue samples on the basis of the genes, which is a non-standard problem because the number of genes greatly exceeds the number of tissue samples. We demonstrate how EMMIX-GENE can be useful in reducing the initial set of genes down to a more computationally manageable size. The results from this analysis also emphasise the difficulty associated with the task of separating two tissue groups on the basis of a particular subset of genes. These results also shed light on why supervised methods have such a high misallocation error rate for the breast cancer data.
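EMMIX-GENE itself fits mixtures of normal and factor-analytic components to high-dimensional expression profiles; as a toy illustration of the model-based idea it embodies (fit a mixture by EM, then pick the number of components with an information criterion), here is a minimal univariate sketch. It is not the EMMIX-GENE algorithm, and all names are mine:

```python
import math

def normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def em_gmm_1d(data, k=2, iters=100):
    """Fit a k-component univariate Gaussian mixture by EM.

    Returns (means, standard deviations, mixing weights, BIC); a smaller
    BIC indicates a better trade-off between fit and model complexity.
    """
    sorted_d = sorted(data)
    # deterministic quantile initialisation of the component means
    mus = [sorted_d[(2 * j + 1) * len(data) // (2 * k)] for j in range(k)]
    mean = sum(data) / len(data)
    sd = max(1e-3, (sum((x - mean) ** 2 for x in data) / len(data)) ** 0.5)
    sigmas = [sd] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            probs = [w * normal_pdf(x, m, s) for w, m, s in zip(weights, mus, sigmas)]
            tot = sum(probs) or 1e-300
            resp.append([p / tot for p in probs])
        # M-step: re-estimate mixing weights, means and standard deviations
        for j in range(k):
            nj = sum(r[j] for r in resp) or 1e-300
            weights[j] = nj / len(data)
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            sigmas[j] = max(1e-3, (sum(r[j] * (x - mus[j]) ** 2
                                       for r, x in zip(resp, data)) / nj) ** 0.5)
    loglik = sum(math.log(sum(w * normal_pdf(x, m, s)
                              for w, m, s in zip(weights, mus, sigmas)) or 1e-300)
                 for x in data)
    bic = -2.0 * loglik + (3 * k - 1) * math.log(len(data))  # 3k-1 free parameters
    return mus, sigmas, weights, bic
```

On two well-separated groups, the BIC of the two-component fit beats the one-component fit, which is the mechanism a model-based approach uses to decide the number of clusters that hierarchical methods cannot determine.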
Abstract:
The mammalian transcriptome harbours shadowy entities that resist classification and analysis. In analogy with pseudogenes, we define pseudo-messenger RNA to be RNA molecules that resemble protein-coding mRNA, but cannot encode full-length proteins owing to disruptions of the reading frame. Using a rigorous computational pipeline, which rules out sequencing errors, we identify 10,679 pseudo-messenger RNAs (approximately half of which are transposon-associated) among the 102,801 FANTOM3 mouse cDNAs: just over 10% of the FANTOM3 transcriptome. These comprise not only transcribed pseudogenes, but also disrupted splice variants of otherwise protein-coding genes. Some may encode truncated proteins, only a minority of which appear subject to nonsense-mediated decay. The presence of an excess of transcripts whose only disruptions are opal stop codons suggests that there are more selenoproteins than currently estimated. We also describe compensatory frameshifts, where a segment of the gene has changed frame but remains translatable. In summary, we survey a large class of non-standard but potentially functional transcripts that are likely to encode genetic information and effect biological processes in novel ways. Many of these transcripts do not correspond cleanly to any identifiable object in the genome, implying fundamental limits to the goal of annotating all functional elements at the genome sequence level.
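One elementary check a pipeline of this kind must make is whether an in-frame stop codon truncates the reading frame. The sketch below is a toy version of that single step (names mine; the paper's pipeline additionally rules out sequencing errors, which this does not):

```python
# Standard stop codons; an in-frame TGA ("opal") is ambiguous because it can
# instead encode selenocysteine in a selenoprotein, the point the abstract makes.
STOP_CODONS = {"TAA": "ochre", "TAG": "amber", "TGA": "opal"}

def first_premature_stop(cds):
    """Scan an in-frame coding sequence (excluding its terminal stop) for a
    premature stop codon.

    Returns (codon_index, codon) for the first stop found, or None if the
    frame is uninterrupted.
    """
    for i in range(0, len(cds) - len(cds) % 3, 3):
        codon = cds[i:i + 3].upper()
        if codon in STOP_CODONS:
            return i // 3, codon
    return None
```

For example, `first_premature_stop("ATGAAATGACCC")` reports the opal codon at codon index 2, while a clean frame returns None.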
Abstract:
The Direct Simulation Monte Carlo (DSMC) method is used to simulate the flow of rarefied gases. In the Macroscopic Chemistry Method (MCM) for DSMC, chemical reaction rates calculated from local macroscopic flow properties are enforced in each cell. Unlike the standard total collision energy (TCE) chemistry model for DSMC, the new method is not restricted to an Arrhenius form of the reaction rate coefficient, nor is it restricted to a collision cross-section which yields a simple power-law viscosity. For reaction rates of interest in aerospace applications, chemically reacting collisions are generally infrequent events and, as such, local equilibrium conditions are established before a significant number of chemical reactions occur. Hence, the reaction rates which have been used in MCM have been calculated from the reaction rate data which are expected to be correct only for conditions of thermal equilibrium. Here we consider artificially high reaction rates so that the fraction of reacting collisions is not small and propose a simple method of estimating the rates of chemical reactions which can be used in the Macroscopic Chemistry Method in both equilibrium and non-equilibrium conditions. Two tests are presented: (1) The dissociation rates under conditions of thermal non-equilibrium are determined from a zero-dimensional Monte-Carlo sampling procedure which simulates ‘intra-modal’ non-equilibrium; that is, equilibrium distributions in each of the translational, rotational and vibrational modes but with different temperatures for each mode; (2) The 2-D hypersonic flow of molecular oxygen over a vertical plate at Mach 30 is calculated. In both cases the new method produces results in close agreement with those given by the standard TCE model in the same highly nonequilibrium conditions. 
We conclude that the general method of estimating the non-equilibrium reaction rate is a simple means by which information contained within non-equilibrium distribution functions predicted by the DSMC method can be included in the Macroscopic Chemistry Method.
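The Arrhenius restriction of the TCE model that MCM removes is easy to state in code. The sketch below evaluates an Arrhenius-form rate coefficient and, purely as a familiar point of contrast (Park's two-temperature model, which is not the MCM estimator described above), a geometric-mean-temperature evaluation for intra-modal non-equilibrium. All constants are illustrative placeholders, not data for a specific reaction:

```python
import math

THETA_D = 59500.0  # illustrative characteristic dissociation temperature [K]

def arrhenius_rate(T, A=2.0e-15, eta=-1.5):
    """Arrhenius-form rate coefficient k(T) = A * T**eta * exp(-THETA_D / T).

    The TCE chemistry model is tied to this functional form; the Macroscopic
    Chemistry Method is not. A and eta here are placeholders.
    """
    return A * T ** eta * math.exp(-THETA_D / T)

def park_two_temperature_rate(T_trans, T_vib, **kw):
    """Park-style two-temperature heuristic: evaluate the equilibrium rate at
    the geometric mean sqrt(T_trans * T_vib) of the modal temperatures, a
    crude stand-in for a genuine non-equilibrium rate estimate."""
    return arrhenius_rate(math.sqrt(T_trans * T_vib), **kw)
```

With a vibrational temperature lagging the translational one, the two-temperature evaluation returns the equilibrium rate at the (lower) geometric-mean temperature, illustrating why modal non-equilibrium suppresses dissociation relative to equilibrium at the translational temperature.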
Abstract:
The objective of this study is to compare the accuracy of sonographic estimation of fetal weight of macrosomic babies in diabetic vs non-diabetic pregnancies. All babies weighing 4000 g or more at birth, and who had ultrasound scans performed within one week of delivery, were included in this retrospective study. Pregnancies with diabetes mellitus were compared to those without diabetes mellitus. The mean simple error (actual birthweight - estimated fetal weight), mean standardised absolute error (absolute value of simple error (g)/actual birthweight (kg)), and the percentage of estimated birthweights falling within 15% of the actual birthweight were compared between the two groups. There were 9516 deliveries during the study period. Of this total, 1211 (12.7%) babies weighed 4000 g or more. A total of 56 non-diabetic pregnancies and 19 diabetic pregnancies were compared. The average sonographic estimation of fetal weight in diabetic pregnancies was 8% less than the actual birthweight, compared to 0.2% in the non-diabetic group (p < 0.01). The estimated fetal weight was within 15% of the birthweight in 74% of the diabetic pregnancies, compared to 93% of the non-diabetic pregnancies (p < 0.05). In the diabetic group, 26.3% of the birthweights were underestimated by more than 15%, compared to 5.4% in the non-diabetic group (p < 0.05). In conclusion, the prediction accuracy of fetal weight estimation using standard formulae in macrosomic fetuses is significantly worse in diabetic pregnancies compared to non-diabetic pregnancies. When sonographic fetal weight estimation is used to influence the mode of delivery for diabetic women, a more conservative cut-off needs to be considered.
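The three accuracy measures defined in the abstract are straightforward to compute; a sketch (function and argument names mine, units as in the abstract):

```python
def weight_error_metrics(actual_g, estimated_g):
    """Accuracy measures as defined in the abstract.

    actual_g, estimated_g: parallel lists of actual birthweights and
    sonographic estimates, both in grams.
    Returns (mean simple error in g,
             mean standardised absolute error in g per kg,
             percentage of estimates within 15% of the actual birthweight).
    """
    n = len(actual_g)
    simple = [a - e for a, e in zip(actual_g, estimated_g)]  # actual - estimated
    mean_simple = sum(simple) / n
    # |simple error in g| divided by the actual birthweight in kg
    mean_std_abs = sum(abs(s) / (a / 1000.0) for s, a in zip(simple, actual_g)) / n
    pct_within_15 = 100.0 * sum(abs(s) <= 0.15 * a
                                for s, a in zip(simple, actual_g)) / n
    return mean_simple, mean_std_abs, pct_within_15
```

For instance, for two babies of 4000 g and 4500 g with estimates of 4000 g and 3600 g, the function returns a mean simple error of 450 g, a mean standardised absolute error of 100 g/kg, and 50% of estimates within 15%.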
Abstract:
Field quantization in unstable optical systems is treated by expanding the vector potential in terms of non-Hermitean (Fox-Li) modes. We define non-Hermitean modes and their adjoints in both the cavity and external regions and make use of the important bi-orthogonality relationships that exist within each mode set. We employ a standard canonical quantization procedure involving the introduction of generalized coordinates and momenta for the electromagnetic (EM) field. Three-dimensional systems are treated, making use of the paraxial and monochromaticity approximations for the cavity non-Hermitean modes. We show that the quantum EM field is equivalent to a set of quantum harmonic oscillators (QHOs), associated with either the cavity or the external region non-Hermitean modes, and thus confirming the validity of the photon model in unstable optical systems. Unlike in the conventional (Hermitean mode) case, the annihilation and creation operators we define for each QHO are not Hermitean adjoints. It is shown that the quantum Hamiltonian for the EM field is the sum of non-commuting cavity and external region contributions, each of which can be expressed as a sum of independent QHO Hamiltonians for each non-Hermitean mode, except that the external field Hamiltonian also includes a coupling term responsible for external non-Hermitean mode photon exchange processes. The non-commutativity of certain cavity and external region annihilation and creation operators is associated with cavity energy gain and loss processes, and may be described in terms of surface integrals involving cavity and external region non-Hermitean mode functions on the cavity-external region boundary. Using the essential states approach and the rotating wave approximation, our results are applied to the spontaneous decay of a two-level atom inside an unstable cavity. 
We find that atomic transitions leading to cavity non-Hermitean mode photon absorption are associated with a different coupling constant to that for transitions leading to photon emission, a feature consequent on the use of non-Hermitean mode functions. We show that under certain conditions the spontaneous decay rate is enhanced by the Petermann factor.
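Schematically, the bi-orthogonality within a mode set that the analysis above relies on can be written as follows (notation mine; a generic statement rather than the paper's equations):

```latex
% Bi-orthogonality of non-Hermitean (Fox-Li) modes u_n and their adjoints v_m:
\int v_m^{*}(\mathbf{r})\, u_n(\mathbf{r})\, \mathrm{d}^{3}r \;=\; \delta_{mn}
% Each mode then carries oscillator operators satisfying
% [\hat{a}_n, \hat{a}_m^{\ddagger}] = \delta_{nm},
% where \hat{a}_n^{\ddagger} is built from the adjoint modes and, as the
% abstract notes, is generally NOT the Hermitean conjugate \hat{a}_n^{\dagger}.
```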
Abstract:
To investigate the effects of different management strategies for non-localized prostate cancer on men's quality of life and cognitive functioning. Men with prostate cancer were randomly assigned to one of four treatment arms: leuprorelin, goserelin, cyproterone acetate (CPA), or close clinical monitoring. In a repeated-measures design, men were assessed before treatment (baseline) and after 6 and 12 months of treatment. A community comparison group of men of the same age with no prostate cancer participated for the same length of time. The men were recruited from public and private urology departments from university teaching hospitals. All those with prostate cancer who were eligible for hormonal therapy had no symptoms requiring immediate therapy. In all, 82 patients were randomized and 62 completed the 1-year study, and of the 20 community participants, 15 completed the study. The main outcome measures were obtained from questionnaires on emotional distress, existential satisfaction, physical function and symptoms, social and role function, subjective cognitive function, and sexual function, combined with standard neuropsychological tests of memory, attention, and executive functions. Sexual dysfunction increased for patients on androgen-suppressing therapies, and emotional distress increased in those assigned to CPA or close clinical monitoring. Compared with before treatment there was evidence of an adverse effect of leuprorelin, goserelin, and CPA on cognitive function. In deciding the timing of androgen suppression therapy for prostate cancer, consideration should be given to potential adverse effects on quality of life and cognitive function.
Abstract:
Vector error-correction models (VECMs) have become increasingly important in their application to financial markets. Standard full-order VECM models assume non-zero entries in all their coefficient matrices. However, applications of VECM models to financial market data have revealed that zero entries are often a necessary part of efficient modelling. In such cases, the use of full-order VECM models may lead to incorrect inferences. Specifically, if indirect causality or Granger non-causality exists among the variables, the use of over-parameterised full-order VECM models may weaken the power of statistical inference. In this paper, it is argued that the zero–non-zero (ZNZ) patterned VECM is a more straightforward and effective means of testing for both indirect causality and Granger non-causality. For a ZNZ patterned VECM framework for time series of integrated order two, we provide a new algorithm to select cointegrating and loading vectors that can contain zero entries. Two case studies are used to demonstrate the usefulness of the algorithm in tests of purchasing power parity and a three-variable system involving the stock market.
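As a toy illustration of why structural zeros matter (names and the one-step condition are mine, a deliberate simplification of the indirect-causality analysis in the paper): in a VECM with long-run matrix Pi = alpha * beta' and short-run matrices Gamma_k, variable j cannot one-step Granger-cause variable i when every channel from j into equation i is a zero entry.

```python
def pi_entry(alpha, beta, i, j):
    """Entry (i, j) of the long-run matrix Pi = alpha @ beta.T, with alpha
    and beta given as plain nested lists (rows = variables, cols = ranks)."""
    return sum(alpha[i][r] * beta[j][r] for r in range(len(alpha[0])))

def one_step_noncausal(alpha, beta, gammas, j, i):
    """Sufficient zero-pattern condition for one-step Granger non-causality
    from variable j to variable i: y_j is entirely absent from the equation
    for y_i, i.e. Pi[i][j] == 0 and Gamma_k[i][j] == 0 at every lag k."""
    return pi_entry(alpha, beta, i, j) == 0 and all(g[i][j] == 0 for g in gammas)
```

In a bivariate system where only the first equation error-corrects, the zero pattern alone shows that variable 0 is one-step non-causal for variable 1, exactly the kind of inference an over-parameterised full-order VECM would obscure.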
Abstract:
The non-semisimple gl(2|2) current superalgebra at level k in the standard basis and the corresponding non-unitary conformal field theory are investigated. Infinite families of primary fields corresponding to all finite-dimensional irreducible typical and atypical representations of gl(2|2) and three (two even and one odd) screening currents of the first kind are constructed explicitly in terms of ten free fields. (C) 2004 Elsevier B.V. All rights reserved.
Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel®
Abstract:
This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment-model-dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel® spreadsheet was implemented with the use of Solver® and Visual Basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained to those from a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated by the BA method were similar to those of the NONMEM estimation.
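The abstract's BA method drives Solver over spreadsheet formulas; the underlying one-compartment relationships it inverts are standard pharmacokinetics and can be sketched directly (function and argument names mine, not the paper's spreadsheet):

```python
import math

def one_compartment_from_nca(dose, auc, lambda_z):
    """Convert non-compartmental results for an IV bolus dose into
    one-compartment model parameters.

    dose     : administered dose
    auc      : area under the concentration-time curve to infinity
    lambda_z : terminal elimination slope (equals k in one compartment)

    Uses the standard identities CL = dose/AUC, V = CL/lambda_z and
    t_half = ln(2)/lambda_z; the two-compartment case handled by the BA
    method needs more inputs and is not sketched here.
    """
    cl = dose / auc                    # clearance
    v = cl / lambda_z                  # volume of distribution
    t_half = math.log(2.0) / lambda_z  # elimination half-life
    return {"CL": cl, "V": v, "k": lambda_z, "t_half": t_half}
```

For example, dose 100, AUC 50 and lambda_z 0.1 (consistent units assumed) give CL = 2, V = 20 and a half-life of about 6.93.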