90 results for Armington Assumption
Abstract:
This paper presents a simple Bayesian approach to sample size determination in clinical trials. It is required that the trial should be large enough to ensure that the data collected will provide convincing evidence either that an experimental treatment is better than a control or that it fails to improve upon control by some clinically relevant difference. The method resembles standard frequentist formulations of the problem, and indeed in certain circumstances involving 'non-informative' prior information it leads to identical answers. In particular, unlike many Bayesian approaches to sample size determination, use is made of an alternative hypothesis that an experimental treatment is better than a control treatment by some specified magnitude. The approach is introduced in the context of testing whether a single stream of binary observations is consistent with a given success rate p0. Next, the case of comparing two independent streams of normally distributed responses is considered, first under the assumption that their common variance is known and then for unknown variance. Finally, the more general situation in which a large sample is to be collected and analysed according to the asymptotic properties of the score statistic is explored.
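The abstract does not reproduce the Bayesian derivation itself, but it notes that with 'non-informative' priors the answers can coincide with standard frequentist ones. As a point of reference only, the sketch below computes the conventional frequentist per-group sample size for comparing two normal streams with known common variance; the function name, significance level, and power are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch: standard frequentist per-group sample size for detecting a
# clinically relevant difference delta between two normal streams with known
# common variance sigma^2, at one-sided significance alpha and power 1 - beta.
# This is the benchmark the abstract says the Bayesian answer can coincide with
# under 'non-informative' priors; names and numbers here are illustrative.
import math
from scipy.stats import norm

def per_group_n(delta, sigma, alpha=0.025, power=0.9):
    z_alpha = norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)
    return math.ceil(2 * (sigma * (z_alpha + z_beta) / delta) ** 2)

print(per_group_n(delta=0.5, sigma=1.0))   # ~85 per arm with these illustrative numbers
```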
Abstract:
The use of nucleotide and amino acid sequences allows improved understanding of the timing of evolutionary events of life on earth. Molecular estimates of divergence times are, however, controversial and are generally much more ancient than suggested by the fossil record. The limited number of genes and species explored and pervasive variations in evolutionary rates are the most likely sources of such discrepancies. Here we compared concatenated amino acid sequences of 129 proteins from 36 eukaryotes to determine the divergence times of several major clades, including animals, fungi, plants, and various protists. Due to significant variations in their evolutionary rates, and to handle the uncertainty of the fossil record, we used a Bayesian relaxed molecular clock simultaneously calibrated by six paleontological constraints. We show that, according to 95% credibility intervals, the eukaryotic kingdoms diversified 950-1,259 million years ago (Mya), animals diverged from choanoflagellates 761-957 Mya, and the debated age of the split between protostomes and deuterostomes occurred 642-761 Mya. The divergence times appeared to be robust with respect to prior assumptions and paleontological calibrations. Interestingly, these relaxed clock time estimates are much more recent than those obtained under the assumption of a global molecular clock, yet bilaterian diversification appears to be approximately 100 million years more ancient than the Cambrian boundary.
Abstract:
Columnar mesophases based on alternating triphenylene and hexaphenyltriphenylene moieties are exceptionally stable and able to accommodate bulky side-chain substituents within the alkyl chain continuum between the columns. This paper presents a system in which the triphenylene bears a fullerene on its side-chain and the hexaphenyltriphenylene equivalent is the aza-derivative hexakis(4-nonylphenyl)dipyrazino[2,3-f:2',3'-h]quinoxaline, PDQ9. The mesophase formed was identified as hexagonal columnar (Col_h) by X-ray diffraction (a = 25.2 Å and c = 3.5 Å) but, in addition to the expected peaks, there is indication of a two-dimensional hexagonal superlattice with d-spacing 59 Å. This superlattice is believed to arise from ordering of the fullerenes within the liquid crystal matrix. It can be explained on the assumption that, to maximise fullerene-fullerene contact, the fullerenes form chains which wrap around the central column in every group of seven columns of the triphenylene:PDQ9 Col_h array.
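A rough consistency check of the seven-column grouping is possible with simple hexagonal-lattice arithmetic; interpreting the grouping as a √7 × √7 superstructure is an assumption made here for illustration, not a statement from the paper.

```python
# Hedged check: if grouping columns in sevens corresponds to a sqrt(7) x sqrt(7)
# hexagonal superlattice, its (10) d-spacing should sit near the observed 59 A.
import math

a = 25.2                                  # Col_h lattice parameter, angstrom
d10 = a * math.sqrt(3) / 2                # (10) d-spacing of the basic lattice, ~21.8 A
a_super = math.sqrt(7) * a                # assumed superlattice parameter, ~66.7 A
d10_super = a_super * math.sqrt(3) / 2    # ~57.7 A, of the same order as the reported 59 A
print(round(d10, 1), round(a_super, 1), round(d10_super, 1))
```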
Abstract:
More than half the world's rainforest has been lost to agriculture since the Industrial Revolution. Among the most widespread tropical crops is oil palm (Elaeis guineensis): global production now exceeds 35 million tonnes per year. In Malaysia, for example, 13% of land area is now oil palm plantation, compared with 1% in 1974. There are enormous pressures to increase palm oil production for food, domestic products, and, especially, biofuels. Greater use of palm oil for biofuel production is predicated on the assumption that palm oil is an "environmentally friendly" fuel feedstock. Here we show, using measurements and models, that oil palm plantations in Malaysia directly emit more oxides of nitrogen and volatile organic compounds than rainforest. These compounds lead to the production of ground-level ozone (O3), an air pollutant that damages human health, plants, and materials, reduces crop productivity, and has effects on the Earth's climate. Our measurements show that, at present, O3 concentrations do not differ significantly over rainforest and adjacent oil palm plantation landscapes. However, our model calculations predict that if concentrations of oxides of nitrogen in Borneo are allowed to reach those currently seen over rural North America and Europe, ground-level O3 concentrations will reach 100 parts per billion (10^9) by volume (ppbv) and exceed levels known to be harmful to human health. Our study provides an early warning of the urgent need to develop policies that manage nitrogen emissions if the detrimental effects of palm oil production on air quality and climate are to be avoided.
Abstract:
The assumption that negligible work is involved in the formation of new surfaces in the machining of ductile metals is re-examined in the light of both current Finite Element Method (FEM) simulations of cutting and modern ductile fracture mechanics. The work associated with separation criteria in FEM models is shown to be in the kJ/m² range rather than the few J/m² of the surface energy (surface tension) employed by Shaw in his pioneering study of 1954, following which consideration of surface work has been omitted from analyses of metal cutting. The much greater values of surface specific work are not surprising in terms of ductile fracture mechanics, where kJ/m² values of fracture toughness are typical of the ductile metals involved in machining studies. This paper shows that when even the simple Ernst–Merchant analysis is generalised to include significant surface work, many of the experimental observations for which traditional 'plasticity and friction only' analyses seem to have no quantitative explanation are now given meaning. In particular, the primary shear plane angle φ becomes material-dependent. The experimental increase of φ up to a saturated level, as the uncut chip thickness is increased, is predicted. The positive intercepts found in plots of cutting force vs. depth of cut, and in plots of force resolved along the primary shear plane vs. area of shear plane, are shown to be measures of the specific surface work. It is demonstrated that neglect of these intercepts in cutting analyses is the reason why anomalously high values of shear yield stress are derived at those very small uncut chip thicknesses at which the so-called size effect becomes evident. The material toughness/strength ratio, combined with the depth of cut to form a non-dimensional parameter, is shown to control ductile cutting mechanics. The toughness/strength ratio of a given material will change with rate, temperature, and thermomechanical treatment, and the influence of such changes, together with changes in depth of cut, on the character of machining is discussed. Strength or hardness alone is insufficient to describe machining. The failure of the Ernst–Merchant theory seems less to do with problems of uniqueness and the validity of minimum work, and more to do with the problem not being properly posed. The new analysis compares favourably and consistently with the wide body of experimental results available in the literature. Why considerable progress in the understanding of metal cutting has been achieved without reference to significant surface work is also discussed.
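As a purely illustrative sketch of the scaling argument (not the paper's generalised Ernst–Merchant analysis), suppose the cutting force per unit width contains a term proportional to the uncut chip thickness plus a thickness-independent surface-work term; all numerical values below are invented for illustration.

```python
# Illustrative sketch: if the specific cutting energy contributes a term
# proportional to the uncut chip thickness t and the surface (toughness) work R
# contributes a thickness-independent term, then F_c / w = k_s * t + R, and the
# intercept of a force-vs-depth-of-cut plot measures R. Ignoring that intercept
# inflates the apparent specific energy at small t -- the 'size effect'.
import numpy as np

k_s = 800e6      # effective specific shear work, Pa (illustrative)
R = 20e3         # specific surface work / toughness, J/m^2 (illustrative, kJ/m^2 range)
t = np.linspace(10e-6, 200e-6, 5)             # uncut chip thickness, m

Fc_per_width = k_s * t + R                    # cutting force per metre of cut width
apparent_specific_energy = Fc_per_width / t   # rises sharply as t shrinks
for ti, ui in zip(t, apparent_specific_energy):
    print(f"t = {ti*1e6:6.1f} um   apparent specific energy = {ui/1e6:7.1f} MPa")
```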
Abstract:
This commentary seeks to complement the contribution of the Building Research & Information special issue on 'Developing Theories for the Built Environment' (2008) by highlighting the important role of middle-range theories within the context of professional practice. Middle-range theories provide a form of theorizing that lies between abstract grand theorizing and atheoretical local descriptions. They are also characterized by the way in which they directly engage with the concerns of practitioners. In the context of professional practice, any commitment to theorizing should habitually be combined with an equivalent commitment to empirical research; rarely is it appropriate to neglect one in favour of the other. Any understanding of the role that theory plays in professional practice must further be informed by Schön's seminal ideas on reflective practice. Practitioners are seen to utilize theories as inputs to a process of continuous reflection, thereby guarding against complacency and routinization. The authors would challenge any assumption that academics alone are responsible for generating theories, thereby limiting the role of practitioners to their application. Such a dichotomized view is contrary to established ideas on Mode 2 knowledge production and current trends towards co-production research in the context of the built environment.
Abstract:
A fully automated procedure to extract and to image local fibre orientation in biological tissues from scanning X-ray diffraction is presented. The preferred chitin fibre orientation in the flow sensing system of crickets is determined with high spatial resolution by applying synchrotron radiation based X-ray microbeam diffraction in conjunction with advanced sample sectioning using a UV micro-laser. The data analysis is based on an automated detection of azimuthal diffraction maxima after 2D convolution filtering (smoothing) of the 2D diffraction patterns. Under the assumption of crystallographic fibre symmetry around the morphological fibre axis, the evaluation method allows mapping the three-dimensional orientation of the fibre axes in space. The resulting two-dimensional maps of the local fibre orientations - together with the complex shape of the flow sensing system - may be useful for a better understanding of the mechanical optimization of such tissues.
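A minimal sketch of the kind of processing described, smoothing a diffraction image by 2D convolution and locating azimuthal intensity maxima about the beam centre, is given below; the synthetic pattern, kernel size, and binning are assumptions for illustration and not the authors' actual pipeline.

```python
# Sketch: smooth a 2D diffraction pattern by 2D convolution, bin intensity by
# azimuthal angle around the beam centre, and locate the azimuthal maxima that
# encode the local fibre orientation. The synthetic image is a stand-in only.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
shape = (256, 256)
cy, cx = shape[0] / 2.0, shape[1] / 2.0
y, x = np.indices(shape)
phi = np.degrees(np.arctan2(y - cy, x - cx)) % 360         # azimuth of each pixel

# Synthetic stand-in for a fibre diffraction image: arcs peaking near 30/210 degrees
lam = 5.0 + 8.0 * np.cos(np.radians(phi - 30.0)) ** 2
pattern = rng.poisson(lam).astype(float)

smoothed = uniform_filter(pattern, size=5)                  # 2D convolution (boxcar) smoothing

bins = np.arange(0, 361, 2)                                 # 2-degree azimuthal bins
idx = np.digitize(phi.ravel(), bins) - 1
intensity = np.bincount(idx, weights=smoothed.ravel(), minlength=len(bins) - 1)
counts = np.bincount(idx, minlength=len(bins) - 1)
profile = intensity / np.maximum(counts, 1)                 # mean intensity per azimuth bin

peak = int(np.argmax(profile))
print(f"strongest azimuthal intensity near {bins[peak] + 1} degrees")   # ~30 or ~210
```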
Abstract:
This paper draws on ethnographic case-study research conducted amongst a group of first- and second-generation immigrant children in six inner-city schools in London. It focuses on language attitudes and language choice in relation to cultural maintenance, on the one hand, and career aspirations on the other. It seeks to provide insight into some of the experiences and dilemmatic choices encountered and negotiations engaged in by transmigratory groups, how they define cultural capital, and the processes through which new meanings are shaped as part of the process of defining a space within the host society. Underlying this discussion is the assumption that alternative cultural spaces in which multiple identities and possibilities can be articulated already exist in the rich texture of everyday life amongst transmigratory groups. A recurring theme is the argument that, whilst the acquisition of 'world languages' is a key variable in accumulating cultural capital, the maintenance of linguistic diversity retains potent symbolic power in sustaining cohesive identities.
Abstract:
The mere exposure effect is defined as enhanced attitude toward a stimulus that has been repeatedly exposed. Repetition priming is defined as facilitated processing of a previously exposed stimulus. We conducted a direct comparison between the two phenomena to test the assumption that the mere exposure effect represents an example of repetition priming. In two experiments, having studied a set of words or nonwords, participants were given a repetition priming task (perceptual identification) or one of two mere exposure (affective liking or preference judgment) tasks. Repetition priming was obtained for both words and nonwords, but only nonwords produced a mere exposure effect. This demonstrates a key boundary for observing the mere exposure effect, one not readily accommodated by a perceptual representation systems (Tulving & Schacter, 1990) account, which assumes that both phenomena should show some sensitivity to nonwords and words.
Abstract:
Over the last two decades interest in implicit memory, most notably repetition priming, has grown considerably. During the same period, research has also focused on the mere exposure effect. Although the two areas have developed relatively independently, a number of studies have described the mere exposure effect as an example of implicit memory. Tacit in their comparisons is the assumption that the effect is more specifically a demonstration of repetition priming. Having noted that this assumption has attracted relatively little attention, this paper reviews current evidence and shows that it is by no means conclusive. Although some evidence is suggestive of a common underlying mechanism, even a modified repetition priming (perceptual fluency/attribution) framework cannot accommodate all of the differences between the two phenomena. Notwithstanding this, it seems likely that a version of this theoretical framework still offers the best hope of a comprehensive explanation for the mere exposure effect and its relationship to repetition priming. As such, the paper finishes by offering some initial guidance as to ways in which the perceptual fluency/attribution framework might be extended, as well as outlining important areas for future research.
Abstract:
The assumption that ignoring irrelevant sound in a serial recall situation is identical to ignoring a non-target channel in dichotic listening is challenged. Dichotic listening is open to moderating effects of working memory capacity (Conway et al., 2001) whereas irrelevant sound effects (ISE) are not (Beaman, 2004). A right ear processing bias is apparent in dichotic listening, whereas the bias is to the left ear in the ISE (Hadlington et al., 2004). Positron emission tomography (PET) imaging data (Scott et al., 2004, submitted) show bilateral activation of the superior temporal gyrus (STG) in the presence of intelligible, but ignored, background speech and right hemisphere activation of the STG in the presence of unintelligible background speech. It is suggested that the right STG may be involved in the ISE and a particularly strong left ear effect might occur because of the contralateral connections in audition. It is further suggested that left STG activity is associated with dichotic listening effects and may be influenced by working memory span capacity. The relationship of this functional and neuroanatomical model to known neural correlates of working memory is considered.
Abstract:
Time/frequency and temporal analyses have been widely used in biomedical signal processing. These methods represent important characteristics of a signal in both the time and frequency domains. In this way, essential features of the signal can be viewed and analysed in order to understand or model the physiological system. Historically, Fourier spectral analyses have provided a general method for examining the global energy/frequency distributions. However, an assumption inherent in these methods is the stationarity of the signal. As a result, Fourier methods are not generally an appropriate approach in the investigation of signals with transient components. This work presents the application of a new signal processing technique, empirical mode decomposition and the Hilbert spectrum, in the analysis of electromyographic signals. The results show that this method may provide not only an increase in the spectral resolution but also an insight into the underlying process of the muscle contraction.
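A minimal sketch of the Hilbert-spectrum step that follows empirical mode decomposition is shown below, applied to a synthetic intrinsic mode function; the signal, sampling rate, and the use of a library (such as PyEMD) for the EMD sifting itself are assumptions for illustration.

```python
# Minimal sketch: instantaneous amplitude and frequency of one oscillatory
# component via the Hilbert transform -- the step that follows EMD. The sifting
# itself is assumed to come from a library such as PyEMD; the IMF here is synthetic.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                           # sampling rate, Hz (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)
imf = np.sin(2 * np.pi * 50 * t) * np.exp(-2 * t)     # stand-in for one IMF

analytic = hilbert(imf)                               # analytic signal x(t) + i*H[x](t)
amplitude = np.abs(analytic)                          # instantaneous amplitude envelope
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)         # instantaneous frequency, Hz

print(f"mean instantaneous frequency ~ {inst_freq.mean():.1f} Hz, "
      f"mean envelope ~ {amplitude.mean():.2f}")
```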
Abstract:
There are established methods for calculating optical constants from measurements using a broadband terahertz (THz) source. Applications to ultrafast THz spectroscopy have adopted the key assumption that the THz beam can be treated as a normal-incidence plane wave. We show that this assumption results in a frequency-dependent systematic error, which is compounded by distortion of the beam on introduction of the sample.
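The conventional extraction that rests on this plane-wave assumption can be illustrated in a few lines: under normal incidence, the unwrapped phase difference between sample and reference pulses gives n(ω) = 1 + cΔφ(ω)/(ωd). The sketch below uses synthetic, dispersionless pulses and an assumed thickness purely for illustration; it is not the correction proposed in the paper.

```python
# Sketch of the conventional extraction the abstract critiques: under the
# normal-incidence plane-wave assumption, n(w) = 1 + c * dphi(w) / (w * d),
# where dphi is the unwrapped sample-reference phase difference. Synthetic data.
import numpy as np

c = 3.0e8
d = 1.0e-3                                     # sample thickness, m (illustrative)
fs = 10.0e12                                   # 10 THz sampling rate
t = np.arange(0, 40e-12, 1 / fs)
n_true = 2.0                                   # pretend dispersionless sample index

def pulse(t0):
    return np.exp(-((t - t0) / 0.5e-12) ** 2)  # Gaussian THz pulse centred at t0

reference = pulse(10e-12)
sample = pulse(10e-12 + (n_true - 1) * d / c)  # plane-wave delay through the sample

freq = np.fft.rfftfreq(len(t), 1 / fs)
dphi = (np.unwrap(np.angle(np.fft.rfft(reference)))
        - np.unwrap(np.angle(np.fft.rfft(sample))))
omega = 2 * np.pi * freq
band = (freq > 0.2e12) & (freq < 2.0e12)       # usable band, away from DC
n_est = 1 + c * dphi[band] / (omega[band] * d)
print(f"recovered index ~ {n_est.mean():.2f} (true {n_true})")
```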
Abstract:
Most research on distributed space-time block coding (D-STBC) has assumed that cooperative relay nodes are perfectly synchronised. Since such an assumption is difficult to achieve in many practical systems, this paper proposes a simple yet optimum detector for the case of two relay nodes that proves to be much more robust against timing misalignment than the conventional STBC detector.
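The abstract does not give the proposed detector; for context, the sketch below shows the conventional two-branch (Alamouti-style) STBC combining it is benchmarked against, assuming perfect synchronisation, flat fading, and illustrative channel and symbol values.

```python
# Sketch of the conventional (perfectly synchronised) two-relay STBC detector
# used as the baseline; the proposed optimum detector for misaligned relays is
# not reproduced here. Channel gains, noise level, and symbols are illustrative.
import numpy as np

rng = np.random.default_rng(1)
s1, s2 = 1 + 1j, -1 + 1j                      # two QPSK-like symbols
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)   # flat-fading gains
noise = 0.01 * (rng.normal(size=2) + 1j * rng.normal(size=2))

# Alamouti-style transmission over two symbol periods
# (relay 1 sends s1 then -s2*, relay 2 sends s2 then s1*)
r1 = h1 * s1 + h2 * s2 + noise[0]
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + noise[1]

# Linear combining: with perfect timing this decouples the two symbols
s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
gain = np.abs(h1) ** 2 + np.abs(h2) ** 2
print(np.round(s1_hat / gain, 2), np.round(s2_hat / gain, 2))   # ~ s1, s2
```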
Abstract:
The popularity of wireless local area networks (WLANs) has resulted in their dense deployments around the world. While this increases capacity and coverage, the problem of increased interference can severely degrade the performance of WLANs. However, the impact of interference on throughput in dense WLANs with multiple access points (APs) has received very little prior research attention. This is believed to be due to 1) the inaccurate assumption that throughput is always a monotonically decreasing function of interference and 2) the prohibitively high complexity of an accurate analytical model. In this work, firstly we provide a useful classification of commonly found interference scenarios. Secondly, we investigate the impact of interference on throughput for each class based on an approach that determines the possibility of parallel transmissions. Extensive packet-level simulations using OPNET have been performed to support the observations made. Interestingly, results have shown that in some topologies, increased interference can lead to higher throughput and vice versa.
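One way to picture the 'possibility of parallel transmissions' is a simple distance-based carrier-sense model; the ranges, coordinates, and criteria below are assumptions for illustration and do not reproduce the paper's classification.

```python
# Hedged sketch: under a simplified distance-based model (not the paper's exact
# criteria), two AP-client links can transmit in parallel only if the senders do
# not carrier-sense each other and neither receiver falls inside the other
# sender's interference range. All ranges and positions are illustrative.
import math

CS_RANGE = 550.0          # carrier-sense range, m (illustrative)
INTF_RANGE = 400.0        # interference range at the receiver, m (illustrative)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def can_transmit_in_parallel(tx1, rx1, tx2, rx2):
    senders_independent = dist(tx1, tx2) > CS_RANGE
    rx1_clear = dist(rx1, tx2) > INTF_RANGE
    rx2_clear = dist(rx2, tx1) > INTF_RANGE
    return senders_independent and rx1_clear and rx2_clear

# Two APs far apart serving nearby clients: parallel transmissions possible
print(can_transmit_in_parallel((0, 0), (50, 0), (800, 0), (750, 0)))   # True
# Two APs close together: they defer to each other, so no parallel gain
print(can_transmit_in_parallel((0, 0), (50, 0), (300, 0), (350, 0)))   # False
```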