82 results for 2D correlation plot
Abstract:
We use sunspot group observations from the Royal Greenwich Observatory (RGO) to investigate the effects of intercalibrating data from observers with different visual acuities. The tests are made by counting the number of groups R_B above a variable cut-off threshold of observed total whole-spot area (uncorrected for foreshortening) to simulate what a lower-acuity observer would have seen. The synthesised annual means of R_B are then re-scaled to the full observed RGO group number R_A using a variety of regression techniques. It is found that a very high correlation between R_A and R_B (r_AB > 0.98) does not prevent large errors in the intercalibration (for example, sunspot maximum values can be over 30% too large even for such levels of r_AB). In generating the backbone sunspot number (R_BB), Svalgaard and Schatten (2015, this issue) force regression fits to pass through the scatter plot origin, which generates unreliable fits (the residuals do not form a normal distribution) and causes sunspot cycle amplitudes to be exaggerated in the intercalibrated data. It is demonstrated that the use of Quantile-Quantile ("Q-Q") plots to test for a normal distribution is a useful indicator of erroneous and misleading regression fits. Ordinary least-squares linear fits, not forced to pass through the origin, are sometimes reliable (although the optimum method is shown to differ when matching peak and average sunspot group numbers). However, other fits are only reliable if non-linear regression is used. From these results, it is entirely possible that the inflation of solar cycle amplitudes in the backbone group sunspot number as one goes back in time, relative to related solar-terrestrial parameters, is caused entirely by the use of inappropriate and non-robust regression techniques to calibrate the sunspot data.
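As an illustration of the residual check described above (a sketch on synthetic data, not the authors' pipeline; all variable names and numbers are invented), the snippet below fits both an ordinary least-squares line and a fit forced through the origin, then inspects the residuals of each with a Q-Q plot:

```python
# Illustrative sketch only: synthetic data standing in for annual means of the
# full RGO group count (R_A) and a reduced-acuity count (R_B); not the paper's data.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
r_b = rng.uniform(0.5, 10, 120)                  # synthetic "low-acuity" group counts
r_a = 1.2 * r_b + 1.0 + rng.normal(0, 0.6, 120)  # synthetic "full" counts with an offset

# Ordinary least squares, intercept free to vary.
slope_ols, intercept_ols = np.polyfit(r_b, r_a, 1)
resid_ols = r_a - (slope_ols * r_b + intercept_ols)

# Fit forced through the origin (purely proportional model).
slope_origin = np.sum(r_a * r_b) / np.sum(r_b * r_b)
resid_origin = r_a - slope_origin * r_b

# Q-Q plots: departures from the straight line flag non-normal residuals,
# i.e. a fit that should not be trusted for intercalibration.
fig, axes = plt.subplots(1, 2, figsize=(9, 4))
stats.probplot(resid_ols, dist="norm", plot=axes[0])
axes[0].set_title("OLS with intercept")
stats.probplot(resid_origin, dist="norm", plot=axes[1])
axes[1].set_title("Fit forced through origin")
plt.tight_layout()
plt.show()
```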
Abstract:
Ten females presenting with muscle weakness and a raised serum creatine kinase showed abnormal dystrophin expression in their muscle biopsies and were diagnosed as manifesting carriers of Xp21 Duchenne/Becker muscular dystrophy. Seven cases, aged 3-22 yr at the time of biopsy, had a variable proportion of dystrophin-deficient fibres and abnormal expression on immunoblot. These were confidently diagnosed as manifesting carriers. Results in the remaining three cases, aged 8-10 yr, were less clear-cut. Dystrophin expression on immunoblots was slightly reduced and some unevenness and reduction of immunolabelling was seen on sections, but dystrophin-deficient fibres were not a feature of these cases. The weakness in the ten carriers ranged from minimal to severe, and there was no correlation between the degree of weakness and the number of dystrophin-deficient fibres. Two minimally weak girls had a high proportion of dystrophin-deficient fibres. Our results show that analysis of dystrophin expression is useful for the differential diagnosis of carriers of Xp21 dystrophy and autosomal muscular dystrophy, but that dystrophin expression does not correlate directly with the degree of clinical weakness.
Abstract:
Human observers exhibit large systematic distance-dependent biases when estimating the three-dimensional (3D) shape of objects defined by binocular image disparities. This has led some to question the utility of disparity as a cue to 3D shape and whether accurate estimation of 3D shape is at all possible. Others have argued that accurate perception is possible, but only with large continuous perspective transformations of an object. Using a stimulus that is known to elicit large distance-dependent perceptual bias (random dot stereograms of elliptical cylinders), we show that, contrary to these findings, the simple adoption of a more naturalistic viewing angle completely eliminates this bias. Using behavioural psychophysics, coupled with a novel surface-based reverse-correlation methodology, we show that it is binocular edge and contour information that allows for accurate and precise perception, and that observers actively exploit and sample this information when it is available.
Abstract:
Extreme weather events such as heat waves are becoming more frequent and intense. Populations can cope with elevated heat stress by evolving higher basal heat tolerance (evolutionary response) and/or stronger induced heat tolerance (plastic response). However, there is ongoing debate about whether basal and induced heat tolerance are negatively correlated and whether adaptive potential in heat tolerance is sufficient under ongoing climate warming. To evaluate the evolutionary potential of basal and induced heat tolerance, we performed experimental evolution on a temperate source population of the dung fly Sepsis punctum. Offspring of flies adapted to three thermal selection regimes (Hot, Cold and Reference) were subjected to acute heat stress after having been exposed to either a hot-acclimation or non-acclimation pretreatment. As different traits may respond differently to temperature stress, several physiological and life-history traits were assessed. Condition dependence of the response was evaluated by exposing juveniles to different levels of developmental (food restriction/rearing density) stress. Heat knockdown times were highest, whereas acclimation effects were lowest, in the Hot selection regime, indicating a negative association between basal and induced heat tolerance. However, survival, adult longevity, fecundity and fertility did not show such a pattern. Acclimation had positive effects in heat-shocked flies, but in the absence of heat stress hot-acclimated flies had reduced life spans relative to non-acclimated ones, thereby revealing a potential cost of acclimation. Moreover, body size positively affected heat tolerance, and unstressed individuals were less prone to heat stress than stressed flies, offering support for energetic costs associated with heat tolerance. Overall, our results indicate that heat tolerance of temperate insects can evolve under rising temperatures, but this response could be limited by a negative relationship between basal and induced thermotolerance, and may involve some fitness-related traits but not others.
Abstract:
In order to accelerate computing the convex hull of a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, that still contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found in experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n in the dataset, the greater the speedup factor achieved.
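The abstract does not spell out the preconditioning step, so the sketch below is only one plausible illustration of the general idea (my construction, with invented names), not the paper's algorithm: bucket integer points by x column, keep the per-column extreme y values, and emit them in column order so no explicit sort is needed. Any convex-hull vertex must be a column extreme, so the reduced set still contains the hull.

```python
# Hedged sketch: for integer points with 0 <= x < p, keep only the minimum and
# maximum y in each occupied x column. Scanning the column array in index order
# replaces an explicit sort, giving O(n + p) time, i.e. O(n) when min(p, q) <= n.
def precondition(points, p):
    """Reduce integer points (x, y), 0 <= x < p, to per-column extremes ordered by x."""
    ymin = [None] * p
    ymax = [None] * p
    for x, y in points:
        if ymin[x] is None or y < ymin[x]:
            ymin[x] = y
        if ymax[x] is None or y > ymax[x]:
            ymax[x] = y
    # Lower extremes left-to-right, then upper extremes right-to-left: two
    # x-monotone chains that can be pipelined into a hull routine.
    lower = [(x, ymin[x]) for x in range(p) if ymin[x] is not None]
    upper = [(x, ymax[x]) for x in reversed(range(p)) if ymax[x] is not None]
    return lower + upper

# Tiny usage example on invented data bounded by a 4 x 4 box.
pts = [(0, 0), (3, 1), (1, 2), (2, 2), (3, 3), (0, 3), (2, 1)]
print(precondition(pts, p=4))
```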
Abstract:
The co-polar correlation coefficient (ρ_hv) has many applications, including hydrometeor classification, ground clutter and melting-layer identification, interpretation of ice microphysics and the retrieval of rain drop size distributions (DSDs). However, we currently lack the quantitative error estimates that are necessary if these applications are to be fully exploited. Previous error estimates of ρ_hv rely on knowledge of the unknown "true" ρ_hv and implicitly assume a Gaussian probability distribution function of ρ_hv samples. We show that frequency distributions of ρ_hv estimates are in fact highly negatively skewed. A new variable, L = -log10(1 - ρ_hv), is defined, which does have Gaussian error statistics and a standard deviation depending only on the number of independent radar pulses. This is verified using observations of spherical drizzle drops, allowing, for the first time, the construction of rigorous confidence intervals in estimates of ρ_hv. In addition, we demonstrate how the imperfect co-location of the horizontal- and vertical-polarisation sample volumes may be accounted for. The possibility of using L to estimate the dispersion parameter (µ) in the gamma drop size distribution is investigated. We find that including drop oscillations is essential for this application; otherwise there could be biases in retrieved µ of up to ~8. Preliminary results in rainfall are presented. In a convective rain case study, our estimates show µ to be substantially larger than 0 (an exponential DSD). In this particular rain event, rain rate would be overestimated by up to 50% if a simple exponential DSD were assumed.
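As a hedged illustration of working in the transformed variable (the sample values and the value of sigma_L below are placeholders, not figures from the paper, and sigma_L would in practice be set from the number of independent pulses), one might map ρ_hv estimates to L, form a symmetric confidence interval there, and map it back:

```python
# Minimal sketch of the L = -log10(1 - rho_hv) transformation; invented numbers.
import numpy as np

def rho_to_L(rho):
    return -np.log10(1.0 - rho)

def L_to_rho(L):
    return 1.0 - 10.0 ** (-L)

rho_samples = np.array([0.995, 0.997, 0.993, 0.996, 0.994])  # invented gate estimates
L_samples = rho_to_L(rho_samples)

L_mean = L_samples.mean()
sigma_L = 0.05            # placeholder standard deviation in L-space
ci_L = (L_mean - 1.96 * sigma_L, L_mean + 1.96 * sigma_L)

# The interval is symmetric in L but asymmetric once mapped back to rho_hv,
# reflecting the skewed distribution of raw rho_hv estimates.
print("mean rho_hv:", L_to_rho(L_mean))
print("95% CI on rho_hv:", L_to_rho(ci_L[0]), "to", L_to_rho(ci_L[1]))
```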
Abstract:
Background: Accurate dietary assessment is key to understanding nutrition-related outcomes and is essential for estimating dietary change in nutrition-based interventions. Objective: The objective of this study was to assess the pan-European reproducibility of the Food4Me food-frequency questionnaire (FFQ) in assessing the habitual diet of adults. Methods: Participants from the Food4Me study, a 6-mo, Internet-based, randomized controlled trial of personalized nutrition conducted in the United Kingdom, Ireland, Spain, Netherlands, Germany, Greece, and Poland, were included. Screening and baseline data (both collected before commencement of the intervention) were used in the present analyses, and participants were included only if they completed FFQs at screening and at baseline within a 1-mo time frame before the commencement of the intervention. Sociodemographic (e.g., sex and country) and lifestyle [e.g., body mass index (BMI, in kg/m²) and physical activity] characteristics were collected. Linear regression, correlation coefficients, concordance (percentage) in quartile classification, and Bland-Altman plots for daily intakes were used to assess reproducibility. Results: In total, 567 participants (59% female), with a mean ± SD age of 38.7 ± 13.4 y and BMI of 25.4 ± 4.8, completed both FFQs within 1 mo (mean ± SD: 19.2 ± 6.2 d). Exact plus adjacent classification of total energy intake in participants was highest in Ireland (94%) and lowest in Poland (81%). Spearman correlation coefficients (r) in total energy intake between FFQs ranged from 0.50 for obese participants to 0.68 and 0.60 in normal-weight and overweight participants, respectively. Bland-Altman plots showed a mean difference between FFQs of −10 kcal/d, with the agreement deteriorating as energy intakes increased. There was little variation in the reproducibility of total energy intakes between sex and age groups. Conclusions: The online Food4Me FFQ was shown to be reproducible across 7 European countries when administered within a 1-mo period to a large number of participants. The results support the utility of the online Food4Me FFQ as a reproducible tool across multiple European populations. This trial was registered at clinicaltrials.gov as NCT01530139.
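For readers unfamiliar with the reproducibility statistics mentioned above, here is a generic sketch (invented numbers, not Food4Me data or analysis code) of a Spearman correlation and Bland-Altman agreement calculation for two repeated FFQ estimates of energy intake:

```python
# Generic reproducibility sketch: two repeated FFQ estimates of daily energy
# intake per participant; compute Spearman correlation and Bland-Altman bias
# with 95% limits of agreement. The arrays below are invented for illustration.
import numpy as np
from scipy import stats

ffq1 = np.array([2100, 1850, 2600, 1500, 2300, 1950], dtype=float)  # kcal/d, administration 1
ffq2 = np.array([2050, 1900, 2450, 1600, 2400, 1900], dtype=float)  # kcal/d, administration 2

rho, _ = stats.spearmanr(ffq1, ffq2)

diff = ffq2 - ffq1
mean_pair = (ffq1 + ffq2) / 2.0
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)        # half-width of the 95% limits of agreement

print(f"Spearman r = {rho:.2f}")
print(f"Bland-Altman bias = {bias:.1f} kcal/d, limits {bias - loa:.1f} to {bias + loa:.1f}")
# Plotting diff against mean_pair (a Bland-Altman plot) would show whether
# agreement deteriorates at higher intakes, as reported in the abstract.
```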