74 results for Presence-absence Data
Abstract:
The differential phase (ΦDP) measured by polarimetric radars is recognized to be a very good indicator of the path-integrated attenuation by rain. Moreover, if a linear relationship is assumed between the specific differential phase (KDP) and the specific attenuation (AH) and specific differential attenuation (ADP), then attenuation can easily be corrected. The coefficients of proportionality, γH and γDP, are, however, known to depend in rain upon drop temperature, drop shapes, drop size distribution, and the presence of large drops causing Mie scattering. In this paper, the authors extensively apply a physically based method, often referred to as the "Smyth and Illingworth constraint," which retrieves the γDP coefficient from the constraint that the value of the differential reflectivity ZDR on the far side of the storm should be low. More than 30 convective episodes observed by the French operational C-band polarimetric Trappes radar during two summers (2005 and 2006) are used to document the variability of γDP with respect to the intrinsic three-dimensional characteristics of the attenuating cells. The Smyth and Illingworth constraint could be applied to only 20% of all attenuated rays of the 2-yr dataset, so it cannot be considered a universal solution for attenuation correction in an operational setting, but it is useful for characterizing the properties of the strongly attenuating cells. The range of variation of γDP is shown to be extremely large, with minimum, maximum, and mean values of 0.01, 0.11, and 0.025 dB (°)−1, respectively. Coefficient γDP appears to be almost linearly correlated with the horizontal reflectivity (ZH), differential reflectivity (ZDR), specific differential phase (KDP), and correlation coefficient (ρHV) of the attenuating cells. The temperature effect is negligible with respect to that of the microphysical properties of the attenuating cells. Unusually large values of γDP, above 0.06 dB (°)−1, often referred to as "hot spots," are reported for 15% (a nonnegligible figure) of the rays presenting a significant total differential phase shift (ΔΦDP > 30°). The corresponding strongly attenuating cells are shown to have extremely high ZDR (above 4 dB) and ZH (above 55 dBZ), very low ρHV (below 0.94), and high KDP (above 4° km−1). Analysis of 4 yr of observed raindrop spectra does not reproduce such low values of ρHV, suggesting that (wet) ice is likely to be present in the precipitation medium and responsible for the attenuation and high phase shifts. Furthermore, if melting ice is responsible for the high phase shifts, then KDP may not be uniquely related to rainfall rate but can also result from the presence of wet ice. This hypothesis is supported by the analysis of the vertical profiles of horizontal reflectivity and by the values of conventional probability-of-hail indexes.
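For readers who want to see the mechanics, here is a minimal sketch of the linear ΦDP-based correction the abstract builds on (the function, array inputs and default coefficients are illustrative assumptions, not the paper's implementation; the paper's point is precisely that γDP varies from 0.01 to 0.11 dB per degree):

```python
import numpy as np

def correct_attenuation(z_h, z_dr, phi_dp, gamma_h=0.08, gamma_dp=0.025):
    """Linear Phi_DP-based attenuation correction along a single ray.

    Since A_H = gamma_H * K_DP and Phi_DP accumulates 2 * K_DP along the path,
    the two-way path-integrated attenuation is simply gamma_H * delta_Phi_DP.
    The coefficient values here are illustrative defaults, not the paper's.
    """
    d_phi = phi_dp - phi_dp[0]           # accumulated differential phase (deg)
    z_h_corr = z_h + gamma_h * d_phi     # corrected reflectivity (dBZ)
    z_dr_corr = z_dr + gamma_dp * d_phi  # corrected differential reflectivity (dB)
    return z_h_corr, z_dr_corr
```

Under the Smyth and Illingworth constraint, gamma_dp would instead be tuned per ray until the corrected ZDR on the far side of the attenuating cell reaches its expected low value.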
Abstract:
Uranium-series dating has been carried out on secondary uranyl silicate minerals formed during sub-glacial and post-glacial weathering of Proterozoic uraninite ores in southwest Finland. The samples were obtained from two sites adjacent to the Salpauselkä III ice marginal formation and cover a range of depths, from the surface to more than 60 m. Measured ages fall into three distinct groups: 70–100 ka, 28–36 ka, and <2500 yr. The youngest set is associated with surface exposures, and the crystals display clear evidence of re-working. The most likely trigger for uranium release at depths below the surface weathering zone is the intrusion of oxidising glacial melt water. The latter is often characterised by very high discharge rates along channels, which close once the overpressure generated at the ice margin is released. There is excellent correspondence between the two Finnish sites and published data for similar deposits over a large area of southern and central Sweden. None of the seventy samples analysed gave a U–Th age between 40 and 70 ka; a second hiatus is apparent at 20 ka, coinciding with the Last Glacial Maximum. Thus, the process responsible for uranyl silicate formation was halted for significant periods, owing to a change in geochemical conditions or in the hydrogeological regime. These data support the presence of interstadial conditions during the Early and Middle Weichselian, since in the absence of major climatic perturbations the uranium phases at depth are stable. When viewed in conjunction with proxy data from mammoth remains, it would appear that the region was ice-free prior to the Last Glacial Maximum.
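For context, the standard closed-system 230Th/234U/238U age equation underlying this kind of U-series dating can be solved numerically as sketched below (the equation is textbook material, not quoted from the paper; decay constants are rounded, and the no-initial-230Th assumption is mine):

```python
import numpy as np
from scipy.optimize import brentq

# Decay constants (per year); half-lives ~75.6 kyr (230Th) and ~245.6 kyr (234U)
L230 = np.log(2) / 75_584
L234 = np.log(2) / 245_620

def th230_u238(t, r234_238):
    """Closed-system (230Th/238U) activity ratio at age t (years), given the
    measured (234U/238U) activity ratio and assuming no initial 230Th."""
    return (1.0 - np.exp(-L230 * t)
            + (r234_238 - 1.0) * L230 / (L230 - L234)
            * (1.0 - np.exp(-(L230 - L234) * t)))

def u_th_age(r230_238, r234_238):
    """Solve for the age that reproduces the measured activity ratios."""
    return brentq(lambda t: th230_u238(t, r234_238) - r230_238, 1.0, 1e6)

print(f"{u_th_age(0.55, 1.5) / 1000:.1f} ka")  # ~49 ka for these example ratios
```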
Abstract:
Lateral epicondylitis (LE) is hypothesized to occur as a result of repetitive, strenuous and abnormal postural activities of the elbow and wrist. There is still a lack of understanding of how wrist and forearm positions contribute to this condition during common manual tasks. In this study, the wrist kinematics and the musculotendon patterns of the wrist extensors were investigated during a manual task believed to elicit LE symptoms in susceptible subjects. A 42-year-old right-handed male, with no history of LE, performed a repetitive movement involving pushing and turning a spring-loaded mechanism. Motion capture data were acquired for the upper limb, and an inverse kinematic and dynamic analysis was subsequently carried out. Results illustrated the presence of eccentric contractions sustained by the extensor carpi radialis longus (ECRL), together with an almost constant level of tendon strain in both the extensor carpi radialis brevis (ECRB) and the lateral branch of the extensor digitorum communis (EDCL). It is believed that these factors may partly contribute to the onset of LE, as both are responsible for the creation of microtears at the tendons' origins. The methodology of this study can be used to explore muscle actions during movements that might cause or exacerbate LE.
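As an illustration of the kind of quantity reported: an eccentric contraction can be flagged as a frame in which the musculotendon unit lengthens while still producing force. A minimal sketch with hypothetical inputs (the study's actual inverse kinematic and dynamic pipeline is far richer):

```python
import numpy as np

def eccentric_frames(mt_length, muscle_force, dt):
    """Flag motion-capture frames where a muscle acts eccentrically,
    i.e. the musculotendon unit lengthens while still producing force."""
    lengthening_rate = np.gradient(mt_length, dt)   # d(length)/dt per frame
    return (lengthening_rate > 0) & (muscle_force > 0)
```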
Abstract:
Optimal state estimation from given observations of a dynamical system by data assimilation is generally an ill-posed inverse problem. In order to solve the problem, a standard Tikhonov, or L2, regularization is used, based on certain statistical assumptions on the errors in the data. The regularization term constrains the estimate of the state to remain close to a prior estimate. In the presence of model error, this approach does not capture the initial state of the system accurately, as the initial state estimate is derived by minimizing the average error between the model predictions and the observations over a time window. Here we examine an alternative L1 regularization technique that has proved valuable in image processing. We show that for examples of flow with sharp fronts and shocks, the L1 regularization technique performs more accurately than standard L2 regularization.
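A minimal sketch of the two penalties being contrasted, for a generic linear observation operator H (the solver choice, ISTA with soft thresholding, and all names are assumptions of this sketch, not the paper's implementation):

```python
import numpy as np

def l2_estimate(H, y, xb, lam):
    """Tikhonov/L2: minimise 0.5||Hx - y||^2 + 0.5*lam*||x - xb||^2 (closed form)."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y + lam * xb)

def l1_estimate(H, y, xb, lam, n_iter=500):
    """L1: minimise 0.5||Hx - y||^2 + lam*||x - xb||_1 by ISTA on d = x - xb.
    The L1 penalty tolerates sharp fronts in x - xb instead of smearing them."""
    step = 1.0 / np.linalg.norm(H, 2) ** 2        # 1/L for the smooth part
    r0 = y - H @ xb
    d = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ d - r0)                 # gradient of 0.5||Hd - r0||^2
        v = d - step * grad
        d = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # soft threshold
    return xb + d
```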
Abstract:
Background: Microarray-based comparative genomic hybridisation (CGH) experiments have been used to study numerous biological problems, including understanding genome plasticity in pathogenic bacteria. Typically such experiments produce large data sets that are difficult for biologists to handle. Although there are some programs available for interpreting bacterial transcriptomics data and CGH microarray data for looking at genetic stability in oncogenes, there are none specifically designed to understand the mosaic nature of bacterial genomes. Consequently, a bottleneck persists in the accurate processing and mathematical analysis of these data. To address this shortfall we have produced a simple and robust CGH microarray data analysis process, which may be automated in the future, to understand bacterial genomic diversity.
Results: The process involves five steps: cleaning, normalisation, estimating gene presence and absence or divergence, validation, and analysis of data from test strains against three reference strains simultaneously. Each stage of the process is described. We have compared a number of methods available for characterising bacterial genomic diversity and for calculating the cut-off between gene presence and absence or divergence, and we show that a simple dynamic approach using a kernel density estimator performed better than both established methods and a more sophisticated mixture-modelling technique. We have also shown that current methods commonly used for CGH microarray analysis in tumour and cancer cell lines are not appropriate for analysing our data.
Conclusion: After carrying out the analysis and validation for three sequenced Escherichia coli strains, CGH microarray data from 19 E. coli O157 pathogenic test strains were used to demonstrate the benefits of applying this simple and robust process to CGH microarray studies using bacterial genomes.
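The abstract does not spell out the kernel-density approach; one plausible reading, sketched below with hypothetical names, places the presence/absence cut-off at the deepest density minimum between the two modes of the log-ratio distribution:

```python
import numpy as np
from scipy.stats import gaussian_kde

def presence_absence_cutoff(log_ratios):
    """Cut-off between 'present' and 'absent/divergent' genes: the deepest
    density minimum between the modes of the log-ratio distribution."""
    kde = gaussian_kde(log_ratios)
    grid = np.linspace(log_ratios.min(), log_ratios.max(), 512)
    dens = kde(grid)
    # indices of local minima of the estimated density
    interior = np.where((dens[1:-1] < dens[:-2]) & (dens[1:-1] < dens[2:]))[0] + 1
    if interior.size == 0:
        raise ValueError("log-ratio distribution is not bimodal")
    return grid[interior[np.argmin(dens[interior])]]

# Genes whose log-ratio falls below the cut-off are called absent/divergent.
```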
Abstract:
There has been considerable critical interest in the representation of death in children's literature, with an increasingly prevalent move to read it as granting the child the status of object. Thus, for example, Judith Plotz takes it to 'increase [the] presence' of the child. Through a detailed reading of one late nineteenth-century school story, I suggest that such readings proceed through a resistance to textuality. This essay offers a reading of death as bound up with the play of the text: deferred, shifting and retrospectively constructed rather than a state of simple, recoverable objecthood.
Abstract:
Variational data assimilation in continuous time is revisited. The central techniques applied in this paper are in part adopted from the theory of optimal nonlinear control. Alternatively, the investigated approach can be considered a continuous-time generalization of what is known as weakly constrained four-dimensional variational assimilation (4D-Var) in the geosciences. The technique allows trajectories to be assimilated in the case of partial observations and in the presence of model error. Several mathematical aspects of the approach are studied. Computationally, it amounts to solving a two-point boundary value problem. For imperfect models, the trade-off between small dynamical error (i.e. the trajectory obeys the model dynamics) and small observational error (i.e. the trajectory closely follows the observations) is investigated. This trade-off turns out to be trivial if the model is perfect. However, even in this situation, allowing for minute deviations from the perfect model is shown to have a positive effect, namely to regularize the problem. The presented formalism is dynamical in character. No statistical assumptions on dynamical or observational noise are imposed.
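In discrete time, the weakly constrained cost function described can be written as sketched below (all names are assumptions of this sketch; the paper works in continuous time, where the minimisation leads to the two-point boundary value problem mentioned):

```python
import numpy as np

def weak_4dvar_cost(x_traj, obs, model_step, H, q_inv, r_inv):
    """Weakly constrained 4D-Var cost over a whole trajectory x_0..x_T.

    x_traj     : (T+1, n) candidate state trajectory (the control variable)
    obs        : dict mapping time index -> observation vector (partial obs allowed)
    model_step : function advancing the state one step
    H          : observation operator (matrix)
    q_inv, r_inv : inverse weights on dynamical and observational error
    """
    j_dyn = sum(
        (x_traj[k + 1] - model_step(x_traj[k]))
        @ q_inv
        @ (x_traj[k + 1] - model_step(x_traj[k]))
        for k in range(len(x_traj) - 1)
    )
    j_obs = sum((H @ x_traj[k] - y) @ r_inv @ (H @ x_traj[k] - y)
                for k, y in obs.items())
    # Large q_inv enforces near-perfect dynamics (strong constraint); keeping it
    # finite admits the minute model deviations that regularize the problem.
    return 0.5 * (j_dyn + j_obs)
```

Minimising this over the whole trajectory (for instance with a generic optimiser on the flattened array) is the discrete analogue of the continuous-time problem.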
Abstract:
It was recently proposed that feelings of contamination can arise in the absence of physical contact with a contaminant. Currently, there are limited data regarding this construct of 'mental contamination', although it is hypothesised to be relevant to obsessive-compulsive disorder (OCD), where compulsive washing in response to contamination fear is a common presentation (Rachman, 2006). This research examined the presence of mental contamination in OCD. Participants (N = 177) with obsessive-compulsive symptoms completed questionnaires to assess mental contamination, OCD symptoms and thought-action fusion (TAF). Findings indicated that 46% of participants experienced mental contamination, and its severity was associated with the severity of OCD symptoms and TAF. Mental contamination in the absence of contact contamination was reported by 10.2% of participants. Similar findings were obtained in a sub-sample of participants who had received a formal diagnosis of OCD (N = 54). These findings suggest that mental contamination is a distinct construct that overlaps with, but is separate from, contact contamination, and they provide preliminary empirical support for the construct.
Abstract:
The interannual variability of the stratospheric polar vortex during winter in both hemispheres is observed to correlate strongly with the phase of the quasi-biennial oscillation (QBO) in tropical stratospheric winds. It follows that the lack of a spontaneously generated QBO in most atmospheric general circulation models (AGCMs) adversely affects the nature of polar variability in such models. This study examines QBO–vortex coupling in an AGCM in which a QBO is spontaneously induced by resolved and parameterized waves. The QBO–vortex coupling in the AGCM compares favorably to that seen in reanalysis data [from the 40-yr ECMWF Re-Analysis (ERA-40)], provided that careful attention is given to the definition of QBO phase. A phase angle representation of the QBO is employed that is based on the two leading empirical orthogonal functions of equatorial zonal wind vertical profiles. This yields a QBO phase that serves as a proxy for the vertical structure of equatorial winds over the whole depth of the stratosphere and thus provides a means of subsampling the data to select QBO phases with similar vertical profiles of equatorial zonal wind. Using this subsampling, it is found that the QBO phase that induces the strongest polar vortex response in early winter differs from that which induces the strongest late-winter vortex response. This is true in both hemispheres and for both the AGCM and ERA-40. It follows that the strength and timing of QBO influence on the vortex may be affected by the partial seasonal synchronization of QBO phase transitions that occurs both in observations and in the model. This provides a mechanism by which changes in the strength of QBO–vortex correlations may exhibit variability on decadal time scales. In the model, such behavior occurs in the absence of external forcings or interannual variations in sea surface temperatures.
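A minimal sketch of the phase-angle construction described (hypothetical input layout; the paper's EOF conventions and normalisation may differ):

```python
import numpy as np

def qbo_phase(u):
    """Phase-angle representation of the QBO from the two leading EOFs.

    u : (n_months, n_levels) equatorial zonal-mean zonal wind profiles.
    Returns a phase angle (radians) for each month.
    """
    anom = u - u.mean(axis=0)                  # anomalies about the time mean
    # EOFs are the right singular vectors; PCs are the projections onto them
    _, _, vt = np.linalg.svd(anom, full_matrices=False)
    pc1, pc2 = anom @ vt[0], anom @ vt[1]
    # normalise so the (PC1, PC2) orbit is roughly circular before taking angles
    return np.arctan2(pc2 / pc2.std(), pc1 / pc1.std())

# Subsampling winters by this angle selects QBO phases with similar
# vertical profiles of equatorial zonal wind, as described in the abstract.
```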
Abstract:
Key point summary
• Cerebellar ataxias are progressive debilitating diseases with no known treatment; they are associated with defective motor function and, in particular, abnormalities of Purkinje cells.
• Mutant mice with deficits in Ca2+ channel auxiliary α2δ-2 subunits are used as models of cerebellar ataxia.
• Our data in the du2J mouse model show an association between the ataxic phenotype exhibited by homozygous du2J/du2J mice and increased irregularity of Purkinje cell firing.
• We show that both heterozygous +/du2J and homozygous du2J/du2J mice completely lack the strong presynaptic modulation of neuronal firing by cannabinoid CB1 receptors that is exhibited by litter-matched control mice.
• These results show that the du2J ataxia model is associated with deficits in CB1 receptor signalling in the cerebellar cortex, putatively linked with compromised Ca2+ channel activity due to reduced α2δ-2 subunit expression. Knowledge of such deficits may help in designing therapeutic agents to combat ataxias.
Abstract: Cerebellar ataxias are a group of progressive, debilitating diseases often associated with abnormal Purkinje cell (PC) firing and/or degeneration. Many animal models of cerebellar ataxia display abnormalities in Ca2+ channel function. The 'ducky' du2J mouse model of ataxia and absence epilepsy represents a clean knock-out of the auxiliary Ca2+ channel subunit α2δ-2 and has been associated with deficient Ca2+ channel function in the cerebellar cortex. Here, we investigate the effects of the du2J mutation on PC layer (PCL) and granule cell (GC) layer (GCL) neuronal spiking activity and on inhibitory neurotransmission at interneurone–Purkinje cell (IN-PC) synapses. Increased neuronal firing irregularity was seen in the PCL and, to a less marked extent, in the GCL in du2J/du2J, but not +/du2J, mice; these data suggest that the ataxic phenotype is associated with a lack of precision of PC firing that may also impinge on GC activity, and that it requires expression of two du2J alleles to manifest fully. The du2J mutation had no clear effect on spontaneous inhibitory postsynaptic current (sIPSC) frequency at IN-PC synapses but was associated with increased sIPSC amplitudes. The du2J mutation ablated cannabinoid CB1 receptor (CB1R)-mediated modulation of spontaneous neuronal spike firing and CB1R-mediated presynaptic inhibition of synaptic transmission at IN-PC synapses in both +/du2J and du2J/du2J mutants; these effects occurred in the absence of changes in CB1R expression. These results demonstrate that the du2J ataxia model is associated with deficient CB1R signalling in the cerebellar cortex, putatively linked with compromised Ca2+ channel activity and the ataxic phenotype.
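The abstract does not name the irregularity metric; a standard choice for local spike-train irregularity is CV2 (Holt et al., 1996), sketched here on hypothetical spike times as a way to make "increased irregularity" concrete:

```python
import numpy as np

def cv2(spike_times):
    """Local irregularity of a spike train: mean CV2 over adjacent
    interspike intervals. 0 for perfectly regular firing; larger is
    more irregular."""
    isi = np.diff(np.sort(spike_times))
    return np.mean(2.0 * np.abs(isi[1:] - isi[:-1]) / (isi[1:] + isi[:-1]))
```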
Abstract:
Forgetting immediate physical reality and having awareness of one's location in the simulated world is critical to enjoyment and performance in virtual environments, be it an interactive 3D game such as Quake or an online virtual 3D community space such as Second Life. Answers to the question "Where am I?" at two levels, namely whether the locus is in the immediate real world as opposed to the virtual world, and whether one is aware of the spatial coordinates of that locus, hold the key to any virtual 3D experience. While 3D environments, especially virtual environments, and their impact on spatial comprehension have been studied in disciplines such as architecture, it is difficult to determine the relative contributions of specific attributes such as screen size or stereoscopy towards spatial comprehension, since most studies treat the technology as a monolith (box-centered). Using as its theoretical basis the variable-centered approach put forth by Nass and Mason (1990), which breaks down the technology into its component variables and their corresponding values, this paper looks at the contributions of five variables common to most virtual environments (stereoscopy, screen size, field of view, level of realism and level of detail) to spatial comprehension and presence. The variable-centered approach can be daunting, as an increase in the number of variables can exponentially increase the number of conditions and resources required. We overcome this drawback by using a fractional factorial design for the experiment. The study has completed its first wave of data collection; the next phase starts in January 2007 and is expected to be complete by February 2007. Theoretical and practical implications of the study are discussed.
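To make the fractional factorial idea concrete: with five two-level factors, a 2^(5-2) fraction needs 8 runs instead of the 32 of a full factorial. A minimal sketch (the generators and the factor-to-letter assignment are illustrative assumptions, not necessarily the study's design):

```python
from itertools import product

# Base factors coded -1/+1: stereoscopy (A), screen size (B), field of view (C).
# Generators D = A*B (level of realism) and E = A*C (level of detail) define
# a 2^(5-2) fractional factorial: 8 runs instead of 32.
runs = [
    {"stereoscopy": a, "screen_size": b, "fov": c, "realism": a * b, "detail": a * c}
    for a, b, c in product((-1, 1), repeat=3)
]

for run in runs:
    print(run)
```

The price of the fraction is aliasing: with these generators, some main effects are confounded with two-factor interactions, which is the usual trade-off against the exponential growth in conditions noted in the abstract.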
Abstract:
Aim: Species distribution models (SDMs) based on current species ranges underestimate the potential distribution when projected in time and/or space. A multi-temporal model calibration approach has been suggested as an alternative, and we evaluate this using 13,000 years of data.
Location: Europe.
Methods: We used fossil-based records of presence for Picea abies, Abies alba and Fagus sylvatica, together with six climatic variables, for the period 13,000 to 1000 yr BP. To measure the contribution of each 1000-year time step to the total niche of each species (the niche measured by pooling all the data), we employed a principal components analysis (PCA) calibrated with data over the entire range of possible climates. We then projected both the total niche and the partial niches from single time frames into the PCA space, and tested whether the partial niches were more similar to the total niche than expected at random. Using an ensemble forecasting approach, we calibrated SDMs for each time frame and for the pooled database. We projected each model to current climate and evaluated the results against current pollen data. We also projected all models into the future.
Results: Niche similarity between the partial niches and the total niche was almost always statistically significant and increased through time. SDMs calibrated from single time frames gave different results when projected to current climate, providing evidence of a change in the species' realized niches through time. Moreover, they predicted limited climate suitability when compared with the total SDMs. The same results were obtained when projecting to future climates.
Main conclusions: The realized climatic niche of species differed for current and future climates when SDMs were calibrated considering different past climates. Building the niche as an ensemble through time represents a way forward to a better understanding of a species' range and its ecology in a changing climate.
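A minimal sketch of the niche-projection step as described (function and variable names are mine; the paper's actual similarity test and background definition may differ in detail):

```python
import numpy as np

def niche_similarity(climate_all, occ_partial, occ_total, bins=50):
    """Project occurrence climates onto the two leading PCA axes of the full
    climate space and compare niche densities with Schoener's D (in [0, 1])."""
    mu, sd = climate_all.mean(0), climate_all.std(0)
    z = (climate_all - mu) / sd                     # standardised background
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    axes = vt[:2].T                                 # two leading PCA axes

    def density(occ):
        scores = ((occ - mu) / sd) @ axes
        h, _, _ = np.histogram2d(scores[:, 0], scores[:, 1],
                                 bins=bins, range=[[-6, 6], [-6, 6]])
        return h / h.sum()

    # Schoener's D: 1 minus half the total absolute density difference
    return 1.0 - 0.5 * np.abs(density(occ_partial) - density(occ_total)).sum()
```

The "more similar than random" test would then compare this overlap against a null distribution obtained by permuting occurrences over the background climate.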
Abstract:
We discuss public policy towards vertical relations, comparing different types of contracts between a manufacturer and at most two retailers. Together with (potential) price competition between the retailers, we study the role of a (sunk) differentiation cost that they pay in order to relax competition in the retail market and broaden the market potential of the distributed product. This non-price element of competition in the downstream market is responsible for our conclusion that, contrary to standard policy guidelines and previous theoretical analysis, restrictions on intra-brand competition may deserve permissive treatment even in the absence of inter-brand competition, provided retailer differentiation is costly.
Abstract:
Using panel data for 111 countries over the period 1982–2002, we employ two indexes that cover a wide range of human rights to analyze empirically whether, and to what extent, terrorism affects human rights. According to our results, terrorism significantly, but not dramatically, diminishes governments' respect for basic human rights such as the absence of extrajudicial killings, political imprisonment, and torture. The result is robust to how we measure terrorist attacks, to the method of estimation, and to the choice of countries in our sample. However, we find no effect of terrorism on empowerment rights.
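The abstract does not state the estimator; purely to give the flavour of such a panel specification, here is a minimal two-way fixed-effects sketch with hypothetical column names (the paper may well use an ordered model instead, given that human rights indexes are ordinal):

```python
import numpy as np
import pandas as pd

# df: one row per country-year with columns 'country', 'year',
# 'rights_index' and 'terror_attacks' (hypothetical names).
def two_way_fe_slope(df, y="rights_index", x="terror_attacks"):
    """Two-way fixed-effects slope via within-demeaning and OLS
    (exact for a balanced panel)."""
    d = df.copy()
    for col in (y, x):
        d[col] = (d[col]
                  - d.groupby("country")[col].transform("mean")
                  - d.groupby("year")[col].transform("mean")
                  + d[col].mean())
    return np.polyfit(d[x], d[y], 1)[0]   # slope of demeaned y on demeaned x
```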
Abstract:
A number of methods of evaluating the validity of interval forecasts of financial data are analysed, and illustrated using intraday FTSE100 index futures returns. Some existing interval forecast evaluation techniques, such as the Markov chain approach of Christoffersen (1998), are shown to be inappropriate in the presence of periodic heteroscedasticity. Instead, we consider a regression-based test, and a modified version of Christoffersen's Markov chain test for independence, and analyse their properties when the financial time series exhibit periodic volatility. These approaches lead to different conclusions when interval forecasts of FTSE100 index futures returns generated by various GARCH(1,1) and periodic GARCH(1,1) models are evaluated.
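For reference, a minimal sketch of the unmodified Christoffersen (1998) likelihood-ratio test of independence that the paper starts from (names are mine; the paper's modified version, designed for periodic heteroscedasticity, differs):

```python
import numpy as np
from scipy.stats import chi2

def christoffersen_independence(hits):
    """Christoffersen (1998) LR test of independence for an interval-forecast
    hit sequence (1 = outcome fell inside the interval). Assumes all four
    first-order transition counts are positive."""
    h = np.asarray(hits, dtype=int)
    n = np.zeros((2, 2))
    for a, b in zip(h[:-1], h[1:]):
        n[a, b] += 1                       # first-order transition counts
    p01 = n[0, 1] / (n[0, 0] + n[0, 1])    # P(hit | previous miss)
    p11 = n[1, 1] / (n[1, 0] + n[1, 1])    # P(hit | previous hit)
    p = (n[0, 1] + n[1, 1]) / n.sum()      # unconditional hit rate

    def loglik(q0, q1):
        return (n[0, 0] * np.log(1 - q0) + n[0, 1] * np.log(q0)
                + n[1, 0] * np.log(1 - q1) + n[1, 1] * np.log(q1))

    lr = 2.0 * (loglik(p01, p11) - loglik(p, p))
    return lr, chi2.sf(lr, df=1)           # asymptotically chi-square, 1 dof
```

Under periodic volatility the hit sequence can fail this test even for correctly specified forecasts, which is the motivation the abstract gives for the regression-based and modified Markov chain alternatives.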