960 results for Kernel density estimates
Abstract:
Analytical expressions are derived for the mean and variance of estimates of the bispectrum of a real time series, assuming a cosinusoidal model. The effects of spectral leakage, inherent in the discrete Fourier transform operation when the modes present in the signal have a nonintegral number of wavelengths in the record, are included in the analysis. A single phase-coupled triad of modes can cause the bispectrum to have a nonzero mean value over the entire region of computation owing to leakage. The variance of bispectral estimates in the presence of leakage has contributions from individual modes and from triads of phase-coupled modes. Time-domain windowing reduces the leakage. The theoretical expressions for the mean and variance of bispectral estimates are derived in terms of a function dependent on an arbitrary symmetric time-domain window applied to the record, the number of data points, and the statistics of the phase coupling among triads of modes. The theoretical results are verified by numerical simulations for simple test cases and applied to laboratory data to examine phase coupling in a hypothesis-testing framework.
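A direct (FFT-based) estimator makes the leakage mechanism concrete. The sketch below is only illustrative and is not the authors' estimator: the segment length, Hann window, and normalisation are assumptions, and the test signal deliberately places its modes at non-integral bin frequencies so that leakage is visible.

```python
import numpy as np

def bispectrum_estimate(x, nfft=128, window=None):
    """Direct bispectrum estimate with segment averaging.

    Splits x into non-overlapping segments of length nfft, applies a
    symmetric time-domain window to each segment (reducing leakage),
    and averages the triple product X(f1) X(f2) X*(f1 + f2).
    """
    if window is None:
        window = np.hanning(nfft)  # symmetric window; reduces leakage
    nseg = len(x) // nfft
    nb = nfft // 2
    B = np.zeros((nb, nb), dtype=complex)
    for s in range(nseg):
        seg = x[s * nfft:(s + 1) * nfft]
        seg = (seg - seg.mean()) * window  # demean and window the record
        X = np.fft.fft(seg)
        for i in range(nb):
            for j in range(nb):
                B[i, j] += X[i] * X[j] * np.conj(X[i + j])
    return B / nseg

# Test signal: a phase-coupled triad (f1, f2, f1 + f2) sharing phases,
# with frequencies chosen so each mode spans a non-integral number of
# wavelengths per segment (the leakage case analysed above).
rng = np.random.default_rng(0)
n, nfft = 128 * 64, 128
t = np.arange(n)
phi1, phi2 = rng.uniform(0, 2 * np.pi, size=2)
x = (np.cos(2 * np.pi * 0.103 * t + phi1)
     + np.cos(2 * np.pi * 0.152 * t + phi2)
     + np.cos(2 * np.pi * 0.255 * t + phi1 + phi2)  # coupled mode
     + 0.5 * rng.standard_normal(n))
B = bispectrum_estimate(x, nfft=nfft)  # |B| peaks near the triad's bins
```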
Abstract:
In the multi-view approach to semi-supervised learning, we choose one predictor from each of multiple hypothesis classes, and we co-regularize our choices by penalizing disagreement among the predictors on the unlabeled data. We examine the co-regularization method used in the co-regularized least squares (CoRLS) algorithm, in which the views are reproducing kernel Hilbert spaces (RKHSs) and the disagreement penalty is the average squared difference in predictions. The final predictor is the pointwise average of the predictors from each view. We call the set of predictors that can result from this procedure the co-regularized hypothesis class. Our main result is a tight bound on the Rademacher complexity of the co-regularized hypothesis class in terms of the kernel matrices of each RKHS. We find that co-regularization reduces the Rademacher complexity by an amount that depends on the distance between the two views, as measured by a data-dependent metric. We then use standard techniques to bound the gap between training error and test error for the CoRLS algorithm. Experimentally, we find that the amount of reduction in complexity introduced by co-regularization correlates with the amount of improvement that co-regularization gives in the CoRLS algorithm.
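A minimal computational sketch may help make the CoRLS setup concrete. It writes the first-order conditions of a co-regularized least-squares objective (squared loss on labeled points for the averaged predictor, per-view RKHS norms, and a squared-disagreement penalty on unlabeled points) as one linear system; the regularization weights g1, g2, gc and the exact scaling conventions are assumptions, not the paper's formulation.

```python
import numpy as np

def corls_fit(K1, K2, y, labeled, g1=0.1, g2=0.1, gc=1.0):
    """Solve a co-regularized least-squares problem over two views.

    K1, K2  : (n, n) kernel (Gram) matrices over labeled + unlabeled points.
    y       : (n,) targets; entries outside `labeled` are ignored.
    labeled : (n,) boolean mask of labeled points.
    Returns expansion coefficients (a, b); the combined predictor at the
    n points is (K1 @ a + K2 @ b) / 2, the pointwise average of the views.
    """
    n = len(y)
    L = np.diag(labeled.astype(float))     # selects labeled rows
    U = np.diag((~labeled).astype(float))  # selects unlabeled rows
    I = np.eye(n)
    # Stationarity conditions in the coefficient vectors a, b, stacked
    # into one (2n, 2n) system: the loss couples the views with a + sign,
    # the disagreement penalty couples them with a - sign.
    A = np.block([
        [L @ K1 / 2 + 2 * g1 * I + 2 * gc * U @ K1,
         L @ K2 / 2 - 2 * gc * U @ K2],
        [L @ K1 / 2 - 2 * gc * U @ K1,
         L @ K2 / 2 + 2 * g2 * I + 2 * gc * U @ K2],
    ])
    rhs = np.concatenate([L @ y, L @ y])
    ab = np.linalg.solve(A, rhs)
    return ab[:n], ab[n:]
```

With, say, two RBF kernels of different bandwidths as the views, increasing gc shrinks the disagreement of the two predictors on the unlabeled points, which is the mechanism the Rademacher complexity bound quantifies.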
Abstract:
Resolving a noted open problem, we show that the Undirected Feedback Vertex Set problem, parameterized by the size of the solution set of vertices, is in the parameterized complexity class Poly(k); that is, polynomial-time pre-processing is sufficient to reduce an initial problem instance (G, k) to a decision-equivalent simplified instance (G', k') where k' ≤ k and the number of vertices of G' is bounded by a polynomial function of k. Our main result shows an O(k^11) kernelization bound.
Abstract:
Increasing the population density of urban areas is a key policy strategy for sustainably managing growth, but many residents view higher-density living as an undesirable long-term housing option. This research therefore explores the predictors of residential satisfaction in inner urban higher-density (IUHD) environments, surveying 636 IUHD residents in Brisbane, Australia, about the importance of dwelling, neighbours and neighbourhood. Relationships with immediate neighbours did not predict residential satisfaction, but features of the neighbourhood and dwelling were critical, specifically satisfaction with dwelling position, design and facilities, and social contacts (family and friends) in the neighbourhood. Identifying the factors that influence residential satisfaction in IUHD will assist with both planning and design, helping ensure a lower resident turnover rate and greater uptake of higher-density living.
Abstract:
Introduction: An observer looking sideways from a moving vehicle while wearing a neutral density filter over one eye can have a distorted perception of speed, known as the Enright phenomenon. The purpose of this study was to determine how the Enright phenomenon influences driving behaviour. Methods: A geometric model of the Enright phenomenon was developed. Ten young, visually normal participants (mean age = 25.4 years) were tested on a straight section of a closed driving circuit and instructed to look out of the right side of the vehicle and drive at either 40 km/h or 60 km/h under the following binocular viewing conditions: a 0.9 ND filter over the left eye (leading eye); a 0.9 ND filter over the right eye (trailing eye); 0.9 ND filters over both eyes; and no filters over either eye. The order of filter conditions was randomised and the speed driven was recorded for each condition. Results: Speed judgments did not differ significantly between the two baseline conditions (no filters and both eyes filtered) for either speed tested. For the baseline conditions, when subjects were asked to drive at 60 km/h they matched this speed well (61 ± 10.2 km/h) but drove significantly faster than requested (51.6 ± 9.4 km/h) when asked to drive at 40 km/h. Subjects significantly exceeded baseline speeds by 8.7 ± 5.0 km/h when the trailing eye was filtered and travelled slower than baseline speeds by 3.7 ± 4.6 km/h when the leading eye was filtered. Conclusions: This is the first quantitative study demonstrating how the Enright effect can influence perceptions of driving speed, and it demonstrates that monocular filtering of an eye can significantly affect driving speeds, albeit to a lesser extent than predicted by geometric models of the phenomenon.
Impact of the Charge Density of Phospholipid Bilayers on Lubrication of Articular Cartilage Surfaces
Abstract:
The Case Study 3 team viewed the mitigation of noise and air pollution generated in the transport corridor that borders the study site as a paramount driver of the urban design solution. The following key urban planning strategies were adopted:
* Spatial separation from the transport corridor pollution source. A linear green zone and environmental buffer was proposed adjacent to the transport corridor to mitigate the environmental noise and air quality impacts of the corridor, and to offer residents opportunities for recreation.
* Open space forming the key structural principle for neighbourhood design. A significant open space system underpins the planning and manages surface water flows.
* Urban blocks running on an east-west axis. The open space rationale emphasises an east-west pattern for local streets. Street alignment allows for predominantly north-south facing terrace-type buildings which both face the street and overlook the green courtyard formed by the perimeter buildings.
The results of the ESD assessment of the typologies indicate that the design will achieve good outcomes through:
* Lower than average construction costs compared with other similar projects.
* Thermal comfort: a good balance between daylight access and solar gains is achieved.
* An energy rating of 8.5 stars for the units.
Abstract:
Homo- and heteronuclear meso,meso-(E)-ethene-1,2-diyl-linked diporphyrins have been prepared by the Suzuki coupling of porphyrinylboronates and iodovinylporphyrins. Combinations comprising 5,10,15-triphenylporphyrin (TriPP) on both ends of the ethene-1,2-diyl bridge, M₂10 (M₂ = H₂/Ni, Ni₂, Ni/Zn, H₄, H₂Zn, Zn₂), and 5,15-bis(3,5-di-tert-butylphenyl)porphyrinato-nickel(II) on one end and H₂, Ni, and ZnTriPP on the other (M₂11), enable the first studies of this class of compounds possessing intrinsic polarity. The compounds were characterized by electronic absorption and steady-state emission spectra, ¹H NMR spectra, and, for the Ni₂ bis(TriPP) complex Ni₂10, single-crystal X-ray structure determination. The crystal structure shows ruffled distortions of the porphyrin rings, typical of Ni(II) porphyrins, and the (E)-C₂H₂ bridge makes a dihedral angle of 50° with the mean planes of the macrocycles. The result is a stepped parallel arrangement of the porphyrin rings. The dihedral angles in the solid state reflect the interplay of steric and electronic effects of the bridge on interporphyrin communication. The emission spectra, in particular, suggest that energy transfer across the bridge is fast in conformations in which the bridge is nearly coplanar with the rings. Comparisons of the fluorescence behaviour of H₄10 and H₂Ni10 show strong quenching of the free-base fluorescence when the complex is excited at the lower-energy component of the Soret band, a feature associated in the literature with more planar conformations. TDDFT calculations on the gas-phase optimized geometry of Ni₂10 reproduce the features of the experimental electronic absorption spectrum within 0.1 eV.
Time dependency of molecular rate estimates and systematic overestimation of recent divergence times
Abstract:
Studies of molecular evolutionary rates have yielded a wide range of rate estimates for various genes and taxa. Recent studies based on population-level and pedigree data have produced remarkably high estimates of mutation rate, which strongly contrast with substitution rates inferred in phylogenetic (species-level) studies. Using Bayesian analysis with a relaxed-clock model, we estimated rates for three groups of mitochondrial data: avian protein-coding genes, primate protein-coding genes, and primate D-loop sequences. In all three cases, we found a measurable transition between the high, short-term (<1–2 Myr) mutation rate and the low, long-term substitution rate. The relationship between the age of the calibration and the rate of change can be described by a vertically translated exponential decay curve, which may be used for correcting molecular date estimates. The phylogenetic substitution rates in mitochondria are approximately 0.5% per million years for avian protein-coding sequences and 1.5% per million years for primate protein-coding and D-loop sequences. Further analyses showed that purifying selection offers the most convincing explanation for the observed relationship between the estimated rate and the depth of the calibration. We rule out the possibility that it is a spurious result arising from sequence errors, and find it unlikely that the apparent decline in rates over time is caused by mutational saturation. Using a rate curve estimated from the D-loop data, several dates for last common ancestors were calculated: modern humans and Neandertals (354 ka; 222–705 ka), Neandertals (108 ka; 70–156 ka), and modern humans (76 ka; 47–110 ka). If the rate curve for a particular taxonomic group can be accurately estimated, it can be a useful tool for correcting divergence date estimates by taking the rate decay into account. Our results show that it is invalid to extrapolate molecular rates of change across different evolutionary timescales, which has important consequences for studies of populations, domestication, conservation genetics, and human evolution.
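A small numerical sketch shows how such a rate curve could be used to correct a date. All parameter values are illustrative placeholders, not the values fitted in the study; the correction solves distance = rate(t) × t for the age t.

```python
import numpy as np
from scipy.optimize import brentq

def apparent_rate(t, long_term=0.015, amplitude=0.25, decay=3.0):
    """Vertically translated exponential decay: apparent rate
    (substitutions/site/Myr) as a function of calibration age t (Myr).
    Parameter values are placeholders for illustration only."""
    return long_term + amplitude * np.exp(-decay * t)

def corrected_age(distance):
    """Age at which the accumulated distance matches the curve,
    i.e. the root of apparent_rate(t) * t - distance."""
    return brentq(lambda t: apparent_rate(t) * t - distance, 1e-9, 100.0)

d = 0.005                       # substitutions/site between two lineages
naive = d / 0.015               # date from the long-term (phylogenetic) rate
print(naive, corrected_age(d))  # the naive date is several-fold too old
```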
Abstract:
Long-term changes in the genetic composition of a population occur by the fixation of new mutations, a process known as substitution. The rate at which mutations arise in a population and the rate at which they are fixed are expected to be equal under neutral conditions (Kimura, 1968). Between the appearance of a new mutation and its eventual fixation or loss, there will be a period in which it exists as a transient polymorphism in the population (Kimura and Ohta, 1971). If the majority of mutations are deleterious (and nonlethal), the fixation probabilities of these transient polymorphisms are reduced and the mutation rate will exceed the substitution rate (Kimura, 1983). Consequently, different apparent rates may be observed on different time scales of the molecular evolutionary process (Penny, 2005; Penny and Holmes, 2001). The substitution rate of the mitochondrial protein-coding genes of birds and mammals has been traditionally recognized to be about 0.01 substitutions/site/million years (Myr) (Brown et al., 1979; Ho, 2007; Irwin et al., 1991; Shields and Wilson, 1987), with the noncoding D-loop evolving several times more quickly (e.g., Pesole et al., 1992; Quinn, 1992). Over the past decade, there has been mounting evidence that instantaneous mutation rates substantially exceed substitution rates in a range of organisms (e.g., Denver et al., 2000; Howell et al., 2003; Lambert et al., 2002; Mao et al., 2006; Mumm et al., 1997; Parsons et al., 1997; Santos et al., 2005). The immediate reaction to the first of these findings was that the polymorphisms generated by the elevated mutation rate are short-lived, perhaps extending back only a few hundred years (Gibbons, 1998; Macaulay et al., 1997). That is, purifying selection was thought to remove these polymorphisms very rapidly.
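The equality of the two rates under neutrality follows from a short standard argument: in a diploid population of size N with neutral mutation rate μ per site per generation, each generation introduces 2Nμ new mutations, and each fixes with probability 1/(2N), so the substitution rate k is

```latex
k \;=\; \underbrace{2N\mu}_{\text{new mutations per generation}}
  \times \underbrace{\frac{1}{2N}}_{\text{fixation probability}}
  \;=\; \mu .
```

The population size cancels, so the neutral substitution rate equals the mutation rate; purifying selection lowers the fixation probability and breaks this equality, which is why the observed mutation rate can exceed the substitution rate.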
Abstract:
Sequence data often have competing signals that are detected by network programs or Lento plots. Such data can be formed by generating sequences on more than one tree and combining the results, yielding a mixture model. We report that with such mixture models, the estimates of edge (branch) lengths from maximum likelihood (ML) methods that assume a single tree are biased. Based on the observed number of competing signals in real data, such a bias of ML is expected to occur frequently. Because network methods can recover competing signals more accurately, there is a need for ML methods allowing a network. A fundamental problem is that mixture models can have more parameters than can be recovered from the data, so that some mixtures are not, in principle, identifiable. We recommend that network programs be incorporated into best-practice analysis, along with ML and Bayesian trees.