Abstract:
Black et al. (2004) identified a systematic difference between LA–ICP–MS and TIMS measurements of 206Pb/238U in zircons, which they correlated with the incompatible trace element content of the zircon. We show that the offset between the LA–ICP–MS and TIMS measured 206Pb/238U correlates more strongly with the total radiogenic Pb than with any incompatible trace element. This suggests that the cause of the 206Pb/238U offset is related to differences in the radiation damage (alpha dose) between the reference and unknown zircons. We test this hypothesis in two ways. First, we show that there is a strong correlation between the difference in the LA–ICP–MS and TIMS measured 206Pb/238U and the difference in the alpha dose received by unknown and reference zircons. The LA–ICP–MS ages for the zircons we have dated range from 5.1% younger than their TIMS age to 2.1% older, depending on whether the unknown or reference received the higher alpha dose. Second, we show that by annealing both reference and unknown zircons at 850 °C for 48 h in air we can eliminate the alpha-dose-induced differences in measured 206Pb/238U. This was achieved by analyzing six reference zircons a minimum of 16 times in two round robin experiments: the first consisting of unannealed zircons and the second of annealed grains. The maximum offset between the LA–ICP–MS and TIMS measured 206Pb/238U for the unannealed zircons was 2.3%, which reduced to 0.5% for the annealed grains, as predicted by within-session precision based on counting statistics. Annealing unknown zircons and references to the same state prior to analysis holds the promise of reducing the 3% external error for the measurement of 206Pb/238U of zircon by LA–ICP–MS, indicated by Klötzli et al. (2009), to better than 1%, but more analyses of annealed zircons by other laboratories are required to evaluate the true potential of the annealing method.
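For reference, the alpha dose invoked in this abstract is conventionally estimated from the measured U and Th contents and the age of the grain; the standard formulation (symbols defined below, not reproduced from the abstract itself) is:

```latex
D_\alpha = 8\,N_{238}\left(e^{\lambda_{238}t}-1\right)
         + 7\,N_{235}\left(e^{\lambda_{235}t}-1\right)
         + 6\,N_{232}\left(e^{\lambda_{232}t}-1\right)
```

where \(N_{238}\), \(N_{235}\) and \(N_{232}\) are the present-day numbers of atoms of 238U, 235U and 232Th per gram, \(\lambda\) the corresponding decay constants, \(t\) the time over which damage has accumulated, and the prefactors count the alpha particles emitted in each decay chain.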
Abstract:
Design as seen from the designer's perspective is a series of amazing imaginative jumps or creative leaps. But design as seen by the design historian is a smooth progression or evolution of ideas, such that they seem self-evident and inevitable after the event. But the next step is anything but obvious for the artist/creator/inventor/designer stuck at that point just before the creative leap. They know where they have come from and have a general sense of where they are going, but often do not have a precise target or goal. This is why it is misleading to talk of design as a problem-solving activity - it is better defined as a problem-finding activity. This has been very frustrating for those trying to assist the design process with computer-based, problem-solving techniques. By the time the problem has been defined, it has been solved. Indeed the solution is often the very definition of the problem. Design must be creative - or it is mere imitation. But since this crucial creative leap seems inevitable after the event, the question must arise, can we find some way of searching the space ahead? Of course there are serious problems of knowing what we are looking for and the vastness of the search space. It may be better to discard altogether the term "searching" in the context of the design process: Conceptual analogies such as search, search spaces and fitness landscapes aim to elucidate the design process. However, the vastness of the multidimensional spaces involved makes these analogies misguided, and they thereby further confound the issue. The term search becomes a misnomer since it has connotations that imply that it is possible to find what you are looking for. In such vast spaces the term search must be discarded. Thus, any attempt at searching for the highest peak in the fitness landscape as an optimal solution is also meaningless. Furthermore, even the very existence of a fitness landscape is fallacious.
Although alternatives in the same region of the vast space can be compared to one another, distant alternatives will stem from radically different roots and will therefore not be comparable in any straightforward manner (Janssen 2000). Nevertheless we still have this tantalizing possibility: if a creative idea seems inevitable after the event, then might the process somehow be reversed? This may be as improbable as attempting to reverse time. A more helpful analogy is from nature, where it is generally assumed that the process of evolution is not long-term goal directed or teleological. Dennett points out a common misunderstanding of Darwinism: the idea that evolution by natural selection is a procedure for producing human beings. Evolution can have produced humankind by an algorithmic process, without its being true that evolution is an algorithm for producing us. If we were to wind the tape of life back and run this algorithm again, the likelihood of "us" being created again is infinitesimally small (Gould 1989; Dennett 1995). But nevertheless Mother Nature has proved a remarkably successful, resourceful, and imaginative inventor generating a constant flow of incredible new design ideas to fire our imagination. Hence the current interest in the potential of the evolutionary paradigm in design. These evolutionary methods are frequently based on techniques such as the application of evolutionary algorithms that are usually thought of as search algorithms. It is necessary to abandon such connections with searching and see the evolutionary algorithm as a direct analogy with the evolutionary processes of nature. The process of natural selection can generate a wealth of alternative experiments, and the better ones survive. There is no one solution, there is no optimal solution, but there is continuous experiment. Nature is profligate with her prototyping and ruthless in her elimination of less successful experiments.
Most importantly, nature has all the time in the world. As designers we cannot afford such profligate prototyping and ruthless experiment, nor can we operate on the time scale of the natural design process. Instead we can use the computer to compress space and time and to perform virtual prototyping and evaluation before committing ourselves to actual prototypes. This is the hypothesis underlying the evolutionary paradigm in design (1992, 1995).
Abstract:
Investment in residential property in Australia is not dominated by the major investment institutions to the same degree as the commercial, industrial and retail property markets. As at December 2001, the Property Council of Australia Investment Performance Index contained residential property with a total value of $235 million, which represents only 0.3% of the total PCA Performance Index value. The majority of investment in the Australian residential property market is by small investment companies and individual investors. The limited exposure of residential property in institutional investment portfolios has also limited the research that has been undertaken in relation to residential property performance. However, individual investment in residential property is continuing to gain importance, as individuals are now taking control of their own superannuation portfolios and the various State Governments of Australia are decreasing their involvement in the construction of public housing by subsidizing low-income families into the private residential property market. This paper will: • Provide a comparison of the cost to initially purchase residential property in the various capital city residential property markets in Australia, and • Analyse the true cost and investment performance of residential property in the main residential property markets in Australia based on a standard investment portfolio in each of the State capital cities and relate these results to real estate marketing and agency practice.
Abstract:
Homophobic hatred: these words summarise online commentary made by people in support of a school that banned gay students from taking their same sex partners to a school formal. With the growing popularity of online news sites, it seems appropriate to critically examine how these sites are becoming a new arena in which people can express personal opinions about controversial topics. While commentators equally expressed two dominant viewpoints about the school ban (homophobic hatred and human rights), this paper focuses on homophobic hatred as a discursive position and how the comments work to confirm the legitimacy of the school’s decision. Drawing on the work of Foucault and others, the paper examines how the comments constitute certain types of subjectivity drawing on dominant ideas about what it means to be homophobic. The analysis demonstrates the complex and competing skein of strategies that constitute queering school social spaces as a social problem.
Abstract:
Programs written in languages of the Oberon family usually contain runtime tests on the dynamic type of variables. In some cases it may be desirable to reduce the number of such tests. Typeflow analysis is a static method of determining bounds on the types that objects may possess at runtime. We show that this analysis is able to reduce the number of tests in certain plausible circumstances. Furthermore, the same analysis is able to detect certain program errors at compile time, which would normally only be detected at program execution. This paper introduces the concepts of typeflow analysis and details its use in the reduction of runtime overhead in Oberon-2.
Abstract:
In the extant literature, adult-onset offending has usually been identified using official sources. It is possible, however, that many of the individuals identified would have had unofficial histories of prior offending. To investigate this issue, the men from the Cambridge Study in Delinquent Development (CSDD) were examined. The CSDD is a prospective longitudinal study of men from inner-city London, followed from age 8 to age 48. Onset of offending was identified using official records and then the self-reported offending of the adult-onset offender group (with a first conviction at age 21 or later) was compared to others. All the adult-onset offenders self-reported some previous offending in childhood and adolescence but most of this offending was not sufficiently frequent or serious to lead to a conviction in practice. About one-third of adult-onset offenders were considered to be self-reported delinquents who were realistically in danger of being convicted because of the frequency of their offending. For some, the adjudication by the criminal justice system was simply the first time that their ongoing pattern of offending had been detected. Their lack of detection was because the types of offences they were committing had lower detection rates.
Abstract:
From September 2000 to June 2003, a community-based program for dengue control using local predacious copepods of the genus Mesocyclops was conducted in three rural communes in the central Vietnam provinces of Quang Nam, Quang Ngai, and Khanh Hoa. Post-project, three subsequent entomologic surveys were conducted until March 2004. The number of households and residents in the communes were 5,913 and 27,167, respectively, and dengue notification rates for these communes from 1996 were as high as 2,418.5 per 100,000 persons. Following knowledge, attitude, and practice evaluations, surveys of water storage containers indicated that Mesocyclops spp. already occurred in 3-17% and that large tanks up to 2,000 liters, 130-300-liter jars, wells, and some 220-liter metal drums were the most productive habitats for Aedes aegypti. With technical support, the programs were driven by communal management committees, health collaborators, schoolteachers, and pupils. From quantitative estimates of the standing crop of third and fourth instars from 100 households, Ae. aegypti were reduced by approximately 90% by year 1, 92.3-98.6% by year 2, and Ae. aegypti immature forms had been eliminated from two of three communes by June 2003. Similarly, from resting adult collections from 100 households, densities were reduced to 0-1 per commune. By March 2004, the two communes that had eliminated larvae had small numbers while the third was negative; one adult was collected in each of two communes while one became negative. Absolute estimates of third and fourth instars at the three intervention communes and one left untreated had significant correlations (P = 0.009 to < 0.001) with numbers of adults aspirated from inside houses on each of 15 survey periods.
By year 1, the incidence of dengue disease in the treated communes was reduced by 76.7% compared with non-intervention communes within the same districts, and no dengue was evident in 2002 and 2003, compared with 112.8 and 14.4 cases per 100,000 at district level. Since we had similar success in northern Vietnam from 1998 to 2000, this study demonstrates that this control model is broadly acceptable and achievable at community level but vigilance is required post-project to prevent reinfestation.
Abstract:
This paper addresses a previously unconsidered history — that of Aboriginal characters in Australian soap operas. Rejecting critical approaches which have obtained even into the 1990s, it refuses to judge these characters as 'good' or 'bad' manifestations of indigeneity. Rather, using the idea that genre is a way of closing down interpretive possibilities, the paper looks at the manner in which generic expectations around soap operas produce particular valences for these representations of Aboriginality. It points to the many ways in which these indigenous characters are insistently constructed as liminal in soap operas' structural communities - simultaneously inside and outside of the group. This is seen to accord with the suggestions of Jakubowicz et al. about the ways in which Aboriginal people are positioned by wider social discourses.
Abstract:
The CDKN2A gene encodes p16 (CDKN2A), a cell-cycle inhibitor protein which prevents inappropriate cell cycling and, hence, proliferation. Germ-line mutations in CDKN2A predispose to the familial atypical multiple-mole melanoma (FAMMM) syndrome but also have been seen in rare families in which only 1 or 2 individuals are affected by cutaneous malignant melanoma (CMM). We therefore sequenced exons 1alpha and 2 of CDKN2A using lymphocyte DNA isolated from index cases from 67 families with cancers at multiple sites, where the patterns of cancer did not resemble those attributable to known genes such as hMLH1, hMSH2, BRCA1, BRCA2, TP53 or other cancer susceptibility genes. We found one mutation, a missense mutation resulting in a methionine to isoleucine change at codon 53 (M53I) of exon 2. The individual tested had developed 2 CMMs but had no dysplastic nevi and lacked a family history of dysplastic nevi or CMM. Other family members had been diagnosed with oral cancer (2 persons), bladder cancer (1 person) and possibly gall-bladder cancer. While this mutation has been reported in Australian and North American melanoma kindreds, we did not observe it in 618 chromosomes from Scottish and Canadian controls. Functional studies revealed that the CDKN2A variant carrying the M53I change was unable to bind effectively to CDK4, showing that this mutation is of pathological significance. Our results have confirmed that CDKN2A mutations are not limited to FAMMM kindreds but also demonstrate that multi-site cancer families without melanoma are very unlikely to contain CDKN2A mutations.
Abstract:
This study of photocatalytic oxidation of phenol over titanium dioxide films presents a method for the evaluation of true reaction kinetics. A flat plate reactor was designed for the specific purpose of investigating the influence of various reaction parameters, specifically photocatalytic film thickness, solution flow rate (1–8 l min−1), phenol concentration (20, 40 and 80 ppm), and irradiation intensity (70.6, 57.9, 37.1 and 20.4 W m−2), in order to further understand their impact on the reaction kinetics. Special attention was given to the mass transfer phenomena and the influence of film thickness. The kinetics of phenol degradation were investigated with different irradiation levels and initial pollutant concentration. Photocatalytic degradation experiments were performed to evaluate the influence of mass transfer on the reaction and, in addition, the benzoic acid method was applied for the evaluation of mass transfer coefficient. For this study the reactor was modelled as a batch-recycle reactor. A system of equations that accounts for irradiation, mass transfer and reaction rate was developed to describe the photocatalytic process, to fit the experimental data and to obtain kinetic parameters. The rate of phenol photocatalytic oxidation was described by a Langmuir–Hinshelwood type law that included competitive adsorption and degradation of phenol and its by-products. The by-products were modelled through their additive effect on the solution total organic carbon.
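A Langmuir–Hinshelwood rate law with competitive adsorption of phenol and its by-products, as invoked above, is typically written in the following generic form (this is the textbook expression, not necessarily the paper's exact parameterization):

```latex
-\frac{dC_{\mathrm{ph}}}{dt} =
  \frac{k\,K_{\mathrm{ph}}\,C_{\mathrm{ph}}}
       {1 + K_{\mathrm{ph}}C_{\mathrm{ph}} + \sum_i K_i C_i}
```

where \(C_{\mathrm{ph}}\) is the phenol concentration, \(k\) the rate constant (dependent on irradiation intensity), \(K_{\mathrm{ph}}\) the adsorption equilibrium constant of phenol, and \(K_i, C_i\) those of the competing by-products in the denominator.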
Abstract:
This paper examines parents' responses to key factors associated with mode choices for school trips. The research was conducted with parents of elementary school students in Denver, Colorado as part of a larger investigation of school travel. School-based active travel programs aim to encourage students to walk or bike to school more frequently. To that end, planning research has identified an array of factors associated with parents' decisions to drive children to school. Many findings are interpreted as ‘barriers’ to active travel, implying that parents have similar objectives with respect to travel mode choices and that parents respond similarly and consistently to external conditions. While the conclusions are appropriate in forecasting demand and mode share with large populations, they are generally too coarse for programs that aim to influence travel behavior with individuals and small groups. This research uses content analysis of interview transcripts to examine the contexts of factors associated with parents' mode choices for trips to and from elementary school. Short, semi-structured interviews were conducted with 65 parents from 12 Denver Public Elementary Schools that had been selected to receive 2007–08 Safe Routes to School non-infrastructure grants. Transcripts were analyzed using NVivo 8.0 to find out how parents respond to selected factors that are often described in planning literature as ‘barriers’ to active travel. Two contrasting themes emerged from the analysis: barrier elimination and barrier negotiation. Regular active travel appears to diminish parents' perceptions of barriers so that negotiation becomes second nature. Findings from this study suggest that intervention should build capacity and inclination in order to increase rates of active travel.
Abstract:
The Web Service Business Process Execution Language (BPEL) lacks any standard graphical notation. Various efforts have been undertaken to visualize BPEL using the Business Process Modelling Notation (BPMN). Although this is straightforward for the majority of concepts, it is tricky for the full BPEL standard, partly due to the insufficiently specified BPMN execution semantics. The upcoming BPMN 2.0 revision will provide this clear semantics. In this paper, we show how the dead path elimination (DPE) capabilities of BPEL can be expressed with this new semantics and discuss the limitations. We provide a generic formal definition of DPE and discuss resulting control flow requirements independent of specific process description languages.
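The mechanism behind dead path elimination can be sketched compactly. The following is an illustrative model (simplified data structures and names of my own, not the paper's formal definition): when an activity's join condition evaluates to false, the activity is skipped and all of its outgoing links are forced to a negative status, so that "deadness" propagates downstream instead of leaving joins waiting forever.

```python
# Sketch of dead-path elimination (DPE) over an acyclic link graph.

def run_dpe(activities, links, join_conditions=None):
    """activities: activity names in topological order.
    links: list of (source, target, transition_condition); when the
    source activity is skipped, DPE forces the link's status to False
    regardless of its transition condition.
    join_conditions: name -> fn(list_of_bool) deciding whether the
    activity runs; BPEL's default join condition is OR (`any`)."""
    join_conditions = join_conditions or {}
    status = {}          # (source, target) -> True/False
    performed = []
    for act in activities:
        incoming = [status[(s, t)] for s, t, _ in links if t == act]
        join = join_conditions.get(act, any)
        live = join(incoming) if incoming else True
        if live:
            performed.append(act)
        for s, t, cond in links:
            if s == act:
                # DPE: a skipped activity emits False on every
                # outgoing link, propagating the dead path.
                status[(s, t)] = live and cond
    return performed

# Diamond: B's inbound transition condition is False, so B is skipped;
# DPE still sets B->D to False, letting D's OR-join fire from A->D.
links = [("Start", "A", True), ("Start", "B", False),
         ("A", "D", True), ("B", "D", True)]
print(run_dpe(["Start", "A", "B", "D"], links))  # → ['Start', 'A', 'D']
```

Without the DPE step, the link B→D would never receive a status and D's join could never be evaluated, which is precisely the deadlock DPE exists to avoid.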
Abstract:
The crosstalk between fibroblasts and keratinocytes is a vital component of the wound healing process, and involves the activity of a number of growth factors and cytokines. In this work, we develop a mathematical model of this crosstalk in order to elucidate the effects of these interactions on the regeneration of collagen in a wound that heals by second intention. We consider the role of four components that strongly affect this process: transforming growth factor-beta, platelet-derived growth factor, interleukin-1 and keratinocyte growth factor. The impact of this network of interactions on the degradation of an initial fibrin clot, as well as its subsequent replacement by a matrix that is mainly comprised of collagen, is described through an eight-component system of nonlinear partial differential equations. Numerical results, obtained in a two-dimensional domain, highlight key aspects of this multifarious process such as reepithelialisation. The model is shown to reproduce many of the important features of normal wound healing. In addition, we use the model to simulate the treatment of two pathological cases: chronic hypoxia, which can lead to chronic wounds; and prolonged inflammation, which has been shown to lead to hypertrophic scarring. We find that our model predictions are qualitatively in agreement with previously reported observations, and provide an alternative pathway for gaining insight into this complex biological process.
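The eight-component system described above is of standard reaction–diffusion type; schematically (a generic form for orientation, not the paper's exact equations):

```latex
\frac{\partial u_i}{\partial t} = \nabla\cdot\left(D_i \nabla u_i\right) + f_i(u_1,\dots,u_8),
\qquad i = 1,\dots,8
```

where the \(u_i\) are the densities and concentrations of the modelled species (cells such as fibroblasts and keratinocytes, the four growth factors and cytokines, fibrin and collagen), the \(D_i\) are diffusivities, and the nonlinear terms \(f_i\) encode production, degradation and the crosstalk interactions.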
Abstract:
Purpose - Managers generally have discretion in determining how components of earnings are presented in financial statements in distinguishing between ‘normal’ earnings and items classified as unusual, special, significant, exceptional or abnormal. Prior research has found that such intra-period classificatory choice is used as a form of earnings management. Prior to 2001, Australian accounting standards mandated that unusually large items of revenue and expense be classified as ‘abnormal items’ for financial reporting, but this classification was removed from accounting standards from 2001. This move by the regulators was partly in response to concerns that the abnormal classification was being used opportunistically to manage reported pre-abnormal earnings. This study extends the earnings management literature by examining the reporting of abnormal items for evidence of intra-period classificatory earnings management in the unique Australian setting.
Design/methodology/approach - This study investigates associations between reporting of abnormal items and incentives in the form of analyst following and the earnings benchmarks of analysts’ forecasts, earnings levels, and earnings changes, for a sample of Australian top-500 firms for the seven-year period from 1994 to 2000.
Findings - The findings suggest there are systematic differences between firms reporting abnormal items and those with no abnormal items. Results show evidence that, on average, firms shifted expense items from pre-abnormal earnings to bottom line net income through reclassification as abnormal losses.
Originality/value - These findings suggest that the standard setters were justified in removing the ‘abnormal’ classification from the accounting standard. However, it cannot be assumed that all firms acted opportunistically in the classification of items as abnormal. With the removal of the standardised classification of items outside normal operations as ‘abnormal’, firms lost the opportunity to use such disclosures as a signalling device, with the consequential effect of limiting the scope of effectively communicating information about the nature of items presented in financial reports.