42 results for Classical orthogonal polynomials of a discrete variable
Abstract:
The paper concerns the design and analysis of serial dilution assays to estimate the infectivity of a sample of tissue when it is assumed that the sample contains a finite number of indivisible infectious units such that a subsample will be infectious if it contains one or more of these units. The aim of the study is to estimate the number of infectious units in the original sample. The standard approach to the analysis of data from such a study is based on the assumption of independence of aliquots both at the same dilution level and at different dilution levels, so that the numbers of infectious units in the aliquots follow independent Poisson distributions. An alternative approach is based on calculation of the expected value of the total number of samples tested that are not infectious. We derive the likelihood for the data on the basis of the discrete number of infectious units, enabling calculation of the maximum likelihood estimate and likelihood-based confidence intervals. We use the exact probabilities that are obtained to compare the maximum likelihood estimate with those given by the other methods in terms of bias and standard error and to compare the coverage of the confidence intervals. We show that the methods have very similar properties and conclude that for practical use the method that is based on the Poisson assumption is to be recommended, since it can be implemented by using standard statistical software. Finally we consider the design of serial dilution assays, concluding that it is important that neither the dilution factor nor the number of samples that remain untested should be too large.
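The abstract does not spell out the Poisson-based likelihood, so the sketch below illustrates the general idea under the stated independence assumption: an aliquot receiving a fraction d of the original sample contains a Poisson(lambda*d) number of infectious units, hence is non-infectious with probability exp(-lambda*d). The dilution fractions, aliquot counts, outcomes and function names are invented for illustration only, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative data: dilution fraction of the original sample at each level,
# number of aliquots tested, and number found infectious.
dilution = np.array([1e-2, 1e-3, 1e-4, 1e-5])
n_tested = np.array([6, 6, 6, 6])
n_infected = np.array([6, 5, 2, 0])

def neg_log_lik(log_lam):
    """Negative log-likelihood under the Poisson/independence assumption:
    an aliquot at dilution d is non-infectious with probability exp(-lam*d)."""
    lam = np.exp(log_lam)
    p_inf = 1.0 - np.exp(-lam * dilution)        # P(aliquot infectious)
    p_inf = np.clip(p_inf, 1e-12, 1 - 1e-12)     # numerical safety
    return -np.sum(n_infected * np.log(p_inf)
                   + (n_tested - n_infected) * np.log(1.0 - p_inf))

res = minimize_scalar(neg_log_lik, bounds=(-5, 15), method="bounded")
lam_hat = np.exp(res.x)   # estimated infectious units in the undiluted sample
print(f"ML estimate of infectious units: {lam_hat:.1f}")
```

A likelihood-based confidence interval can be read off the same function by profiling neg_log_lik around its minimum, which is one reason the paper notes the Poisson approach is easy to implement in standard statistical software.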
Abstract:
The problem of estimating the individual probabilities of a discrete distribution is considered. The true distribution of the independent observations is a mixture of a family of power series distributions. First, we ensure identifiability of the mixing distribution assuming mild conditions. Next, the mixing distribution is estimated by non-parametric maximum likelihood and an estimator for individual probabilities is obtained from the corresponding marginal mixture density. We establish asymptotic normality for the estimator of individual probabilities by showing that, under certain conditions, the difference between this estimator and the empirical proportions is asymptotically negligible. Our framework includes Poisson, negative binomial and logarithmic series as well as binomial mixture models. Simulations highlight the benefit in achieving normality when using the proposed marginal mixture density approach instead of the empirical one, especially for small sample sizes and/or when interest is in the tail areas. A real data example is given to illustrate the use of the methodology.
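As a rough sketch of the marginal-mixture idea for the Poisson case only (not the authors' estimator): approximate the mixing distribution on a fixed grid of Poisson means, fit the grid weights by an EM-type nonparametric maximum likelihood step, and read the individual probabilities off the fitted marginal mixture density. The grid, the iteration count and the data are assumptions of this sketch.

```python
import numpy as np
from scipy.stats import poisson

# Illustrative counts (not the paper's real-data example).
x = np.array([0, 0, 1, 1, 1, 2, 2, 3, 4, 7])

# Discrete grid approximation of the mixing distribution on the Poisson mean.
grid = np.linspace(0.01, 15.0, 200)
w = np.full(grid.size, 1.0 / grid.size)          # initial mixing weights

# Component probabilities P(X = x_i | theta_j), shape (n_obs, n_grid).
dens = poisson.pmf(x[:, None], grid[None, :])

for _ in range(500):                              # EM iterations for the NPMLE
    resp = dens * w                               # unnormalized posteriors
    resp /= resp.sum(axis=1, keepdims=True)
    w = resp.mean(axis=0)                         # update mixing weights

# Estimated individual probabilities from the fitted marginal mixture density.
support = np.arange(0, x.max() + 1)
p_hat = poisson.pmf(support[:, None], grid[None, :]) @ w
print(dict(zip(support.tolist(), np.round(p_hat, 3))))
```

Compared with the raw empirical proportions, the fitted marginal smooths the estimates, which is where the benefit reported for small samples and tail probabilities comes from.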
Abstract:
We have developed a new method for the analysis of voids in proteins (defined as empty cavities not accessible to solvent). This method combines analysis of individual discrete voids with analysis of packing quality. While these are different aspects of the same effect, they have traditionally been analysed using different approaches. The method has been applied to the calculation of total void volume and maximum void size in a non-redundant set of protein domains and has been used to examine correlations between thermal stability and void size. The tumour-suppressor protein p53 has then been compared with the non-redundant data set to determine whether its low thermal stability results from poor packing. We found that p53 has average packing, but the detrimental effects of some previously unexplained mutations to p53 observed in cancer can be explained by the creation of unusually large voids. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
We have studied growth and estimated recruitment of massive coral colonies at three sites, Kaledupa, Hoga and Sampela, separated by about 1.5 km in the Wakatobi Marine National Park, S.E. Sulawesi, Indonesia. There was significantly higher species richness (P<0.05), coral cover (P<0.05) and rugosity (P<0.01) at Kaledupa than at Sampela. A model for coral reef growth has been developed based on a rational polynomial function, where dx/dt is an index of coral growth with time; W is the variable (for example, coral weight, coral length or coral area), raised to powers up to n in the numerator and m in the denominator; and a1…an and b1…bm are constants. The values of n and m represent the degree of the polynomial and can relate to the morphology of the coral. The model was used to simulate typical coral growth curves, and was tested using published data obtained by weighing coral colonies underwater on reefs on the south-west coast of Curaçao [Neth. J. Sea Res. 10 (1976) 285]. The model proved an accurate fit to the data, and parameters were obtained for a number of coral species. Surface area data were obtained for over 1200 massive corals at the three sites in the Wakatobi Marine National Park, S.E. Sulawesi, Indonesia. The year of an individual's recruitment was calculated from knowledge of the growth rate, modified by application of the rational polynomial model. The estimated pattern of recruitment was variable, with few massive corals settling and growing before 1950 at the heavily used site, Sampela, relative to the reef site with little or no human use, Kaledupa, and the intermediate site, Hoga. There was a significantly greater sedimentation rate at Sampela than at either Kaledupa (P<0.0001) or Hoga (P<0.0005). The relative mean abundance of fish families present at the reef crests at the three sites, determined using digital video photography, did not correlate with sedimentation rates, underwater visibility or the lack of large non-branching coral colonies. Radial growth rates of three genera of non-branching corals were significantly lower at Sampela than at Kaledupa or Hoga, and there was a high correlation (r=0.89) between radial growth rates and underwater visibility. Porites spp. were the most abundant corals over all the sites and at all depths, followed by Favites (P<0.04) and Favia spp. (P<0.03). Colony ages of Porites corals were significantly lower at the 5 m reef flat on the Sampela reef than at the same depth on both other reefs (P<0.005). At Sampela, only 2.8% of corals on the 5 m reef crest are of a size to have survived from before 1950. The scleractinian coral community of Sampela is severely impacted by deposited sediments, which can lead to the suffocation of corals, whilst also decreasing light penetration, resulting in decreased growth and calcification rates. The net loss of material from Sampela, if not checked, could result in the loss of this protective barrier, to the detriment of the sublittoral sand flats and hence the Sampela village.
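The growth equation itself does not survive in the abstract text. One plausible reconstruction, consistent only with the stated description (numerator polynomial of degree n with constants a1…an, denominator of degree m with constants b1…bm), is:

\[
\frac{dx}{dt} \;=\; \frac{a_1 W + a_2 W^2 + \cdots + a_n W^n}{1 + b_1 W + b_2 W^2 + \cdots + b_m W^m}
\]

The unit constant in the denominator is an assumption of this sketch (it keeps the growth rate finite as W approaches 0); the original paper should be consulted for the exact form.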
Abstract:
Three new linear trinuclear nickel(II) complexes, [Ni3(salpen)2(OAc)2(H2O)2]·4H2O (1) (OAc = acetate, CH3COO-), [Ni3(salpen)2(OBz)2] (2) (OBz = benzoate, PhCOO-) and [Ni3(salpen)2(OCn)2(CH3CN)2] (4) (OCn = cinnamate, PhCH=CHCOO-), where H2salpen is the tetradentate ligand N,N'-bis(salicylidene)-1,3-pentanediamine, have been synthesized and characterized structurally and magnetically. The choice of solvent for growing single crystals was made by inspecting the morphology of the initially obtained solids with the help of SEM studies. The magnetic properties of a closely related complex, [Ni3(salpen)2(OPh)2(EtOH)] (3) (OPh = phenylacetate, PhCH2COO-), whose structure and solution properties have been reported recently, have also been studied here. The structural analyses reveal that both phenoxo and carboxylate bridging are present in all the complexes and that the three Ni(II) atoms remain in a linear disposition. Although the Schiff base ligand and the syn-syn bridging bidentate mode of the carboxylate group remain the same in complexes 1-4, the change of the alkyl/aryl group of the carboxylates brings about systematic variations between six- and five-coordination in the geometry of the terminal Ni(II) centres of the trinuclear units. The steric demand as well as the hydrophobic nature of the alkyl/aryl group of the carboxylate is found to play a crucial role in the tuning of the geometry. Variable-temperature (2-300 K) magnetic susceptibility measurements show that complexes 1-4 are antiferromagnetically coupled (J = -3.2(1), -4.6(1), -3.2(1) and -2.8(1) cm⁻¹ in 1-4, respectively). Calculations of the zero-field splitting parameter indicate that the values of D for complexes 1-4 are in the high range (D = +9.1(2), +14.2(2), +9.8(2) and +8.6(1) cm⁻¹ for 1-4, respectively). The highest D values, +14.2(2) and +9.8(2) cm⁻¹ for complexes 2 and 3, respectively, are consistent with the pentacoordinated geometry of the two terminal nickel(II) ions in 2 and of one terminal nickel(II) ion in 3. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
We previously reported sequence determination of neutral oligosaccharides by negative ion electrospray tandem mass spectrometry on a quadrupole-orthogonal time-of-flight instrument with high sensitivity and without the need for derivatization. In the present report, we extend our strategies to sialylated oligosaccharides for analysis of chain and blood group types together with branching patterns. A main feature of the negative ion mass spectrometry approach is the unique double glycosidic cleavage induced by 3-glycosidic substitution, producing characteristic D-type fragments which can be used to distinguish the type 1 and type 2 chains, the blood group related Lewis determinants and 3,6-disubstituted core branching patterns, and to assign the structural details of each of the branches. Twenty mono- and disialylated linear and branched oligosaccharides were used for the investigation, and the sensitivity achieved is in the femtomole range. To demonstrate the efficacy of the strategy, we have determined a novel complex disialylated and monofucosylated tridecasaccharide that is based on the lacto-N-decaose core. The structure and sequence assignment was corroborated by methylation analysis and 1H NMR spectroscopy.
Abstract:
Purpose. Hyperopic retinal defocus (blur) is thought to be a cause of myopia. If the retinal image of an object is not clearly focused, the resulting blur is thought to cause the continuing lengthening of the eyeball during development causing a permanent refractive error. Both lag of accommodation, especially for near targets, and greater variability in the accommodative response, have been suggested as causes of increased hyperopic retinal blur. Previous studies of lag of accommodation show variable findings. In comparison, greater variability in the accommodative response has been demonstrated in adults with late onset myopia but has not been tested in children. This study looked at the lag and variability of accommodation in children with early onset myopia. Methods. Twenty-one myopic and 18 emmetropic children were tested. Dynamic measures of accommodation and pupil size were made using eccentric photorefraction (Power Refractor) while children viewed targets set at three different accommodative demands (0.25, 2, and 4 D). Results. We found no difference in accommodative lag between groups. However, the accommodative response was more variable in the myopes than emmetropes when viewing both the near (4 D) and far (0.25 D) targets. Since pupil size and variability also varied, we analyzed the data to determine whether this could account for the inter-group differences in accommodation variability. Variation in these factors was not found to be sufficient to explain these differences. Changes in the accommodative response variability with target distance were similar to patterns reported previously in adult emmetropes and late onset myopes. Conclusions. Children with early onset myopia demonstrate greater accommodative variability than emmetropic children, and have similar patterns of response to adult late onset myopes. This increased variability could result in an increase in retinal blur for both near and far targets. The role of accommodative variability in the etiology of myopia is discussed.
Abstract:
Atmosphere–ocean general circulation models (AOGCMs) predict a weakening of the Atlantic meridional overturning circulation (AMOC) in response to anthropogenic forcing of climate, but there is a large model uncertainty in the magnitude of the predicted change. The weakening of the AMOC is generally understood to be the result of increased buoyancy input to the north Atlantic in a warmer climate, leading to reduced convection and deep water formation. Consistent with this idea, model analyses have shown empirical relationships between the AMOC and the meridional density gradient, but this link is not direct because the large-scale ocean circulation is essentially geostrophic, making currents and pressure gradients orthogonal. Analysis of the budget of kinetic energy (KE) instead of momentum has the advantage of excluding the dominant geostrophic balance. Diagnosis of the KE balance of the HadCM3 AOGCM and its low-resolution version FAMOUS shows that KE is supplied to the ocean by the wind and dissipated by viscous forces in the global mean of the steady-state control climate, and the circulation does work against the pressure-gradient force, mainly in the Southern Ocean. In the Atlantic Ocean, however, the pressure-gradient force does work on the circulation, especially in the high-latitude regions of deep water formation. During CO2-forced climate change, we demonstrate a very good temporal correlation between the AMOC strength and the rate of KE generation by the pressure-gradient force in 50–70°N of the Atlantic Ocean in each of nine contemporary AOGCMs, supporting a buoyancy-driven interpretation of AMOC changes. To account for this, we describe a conceptual model, which offers an explanation of why AOGCMs with stronger overturning in the control climate tend to have a larger weakening under CO2 increase.
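For reference, the diagnostic quantity at the centre of this analysis, the rate of kinetic-energy generation by the pressure-gradient force over a region V (here 50–70°N of the Atlantic), is conventionally written as the volume integral below; positive values mean the pressure-gradient force does work on the circulation. The notation is generic, not copied from the paper.

\[
P_{\nabla p} \;=\; -\int_{V} \mathbf{u}\cdot\nabla p \,\, \mathrm{d}V
\]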
Abstract:
This paper considers the use of a discrete-time deadbeat control action on systems affected by noise. Variations on the standard controller form are discussed, and comparisons are made with controllers in which noise rejection is a higher-priority objective. Both load and random disturbances are considered in the system description, although the aim of the deadbeat design remains the tailoring of the response to reference-input variations. Finally, the use of such a deadbeat action within a self-tuning control framework is shown to satisfy the self-tuning property under certain conditions, though generally only when an extended form of least-squares estimation is incorporated.
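As a minimal, generic illustration of discrete-time deadbeat action (a textbook state-space sketch, not the controller structure analysed in the paper): a state-feedback gain that places all closed-loop poles at the origin drives the regulation error to zero in at most n steps for an nth-order plant. Ackermann's formula with desired polynomial z^n gives K = [0 … 0 1] Ctrb(A, B)^{-1} A^n. The plant matrices below are assumed values for illustration.

```python
import numpy as np

def deadbeat_gain(A, B):
    """State-feedback gain placing every closed-loop pole of (A - B K)
    at z = 0 (Ackermann's formula with desired polynomial z^n)."""
    n = A.shape[0]
    ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    e_last = np.zeros((1, n)); e_last[0, -1] = 1.0
    return e_last @ np.linalg.inv(ctrb) @ np.linalg.matrix_power(A, n)

# Assumed second-order discrete-time plant (illustrative values only).
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
K = deadbeat_gain(A, B)

# In the noise-free case the state vanishes in at most n = 2 steps.
x = np.array([[1.0], [-0.5]])
for k in range(4):
    x = (A - B @ K) @ x
    print(k + 1, x.ravel())
```

Because every pole sits at the origin, the resulting gains are typically aggressive, which is why noise rejection becomes the competing objective discussed in the abstract.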
Abstract:
A novel approach is presented for the evaluation of circulation type classifications (CTCs) in terms of their capability to predict surface climate variations. The approach is analogous to that for probabilistic meteorological forecasts and is based on the Brier skill score. This score is shown to take a particularly simple form in the context of CTCs and to quantify the resolution of a climate variable by the classifications. The sampling uncertainty of the skill can be estimated by means of nonparametric bootstrap resampling. The evaluation approach is applied for a systematic intercomparison of 71 CTCs (objective and manual, from COST Action 733) with respect to their ability to resolve daily precipitation in the Alpine region. For essentially all CTCs, the Brier skill score is found to be higher for weak and moderate compared to intense precipitation, for winter compared to summer, and over the north and west of the Alps compared to the south and east. Moreover, CTCs with a higher number of types exhibit better skill than CTCs with few types. Among CTCs with comparable type number, the best automatic classifications are found to outperform the best manual classifications. It is not possible to single out one ‘best’ classification for Alpine precipitation, but there is a small group showing particularly high skill.
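A hedged sketch of the evaluation idea (my reading of the approach, with invented data): each day is assigned a circulation type, the "forecast" probability of a precipitation event is the within-type event frequency, and the Brier skill score compares this against the unconditional climatological frequency, so positive skill means the classification resolves precipitation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_types = 3000, 9

# Illustrative data: a daily circulation type and a binary precipitation event
# (e.g. daily precipitation above some threshold at a grid point).
ctype = rng.integers(0, n_types, n_days)
event = rng.random(n_days) < (0.2 + 0.05 * ctype)   # types differ in wetness

# "Forecast" probability for each day = event frequency within its type.
p_type = np.array([event[ctype == t].mean() for t in range(n_types)])
forecast = p_type[ctype]

# Brier score of the in-type probabilities vs. the climatological reference.
bs = np.mean((forecast - event) ** 2)
bs_ref = np.mean((event.mean() - event) ** 2)
bss = 1.0 - bs / bs_ref                      # Brier skill score
print(f"Brier skill score: {bss:.3f}")
```

In this construction the within-type frequencies are perfectly reliable by design, so the skill score reduces to a measure of resolution, consistent with how the score is interpreted in the abstract. Sampling uncertainty could be attached by bootstrap resampling of days.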
Abstract:
Pardo, Patie and Savov derived, under mild conditions, a Wiener-Hopf type factorization for the exponential functional of proper Lévy processes. In this paper, we extend this factorization by relaxing a finite moment assumption as well as by considering the exponential functional for killed Lévy processes. As a by-product, we derive some interesting fine distributional properties enjoyed by a large class of this random variable, such as the absolute continuity of its distribution and the smoothness, boundedness or complete monotonicity of its density. This type of result is then used to derive similar properties for the law of the maxima and first passage times of some stable Lévy processes. Thus, for example, we show that for any stable process with $\rho\in(0,\frac{1}{\alpha}-1]$, where $\rho\in[0,1]$ is the positivity parameter and $\alpha$ is the stable index, the first passage time has a bounded and non-increasing density on $\mathbb{R}_+$. We also generate many instances of integral or power series representations for the law of the exponential functional of Lévy processes with one- or two-sided jumps. The proof of our main results requires devices different from those developed by Pardo, Patie and Savov. It relies in particular on a generalization of a transform recently introduced by Chazal et al., together with some extensions of Wiener-Hopf techniques to killed Lévy processes. The factorizations developed here also allow for further applications, which we only indicate here.
Abstract:
Wernicke’s aphasia (WA) is the classical neurological model of comprehension impairment and, as a result, the posterior temporal lobe is assumed to be critical to semantic cognition. This conclusion is potentially confused by (a) the existence of patient groups with semantic impairment following damage to other brain regions (semantic dementia and semantic aphasia) and (b) an ongoing debate about the underlying causes of comprehension impairment in WA. By directly comparing these three patient groups for the first time, we demonstrate that the comprehension impairment in Wernicke’s aphasia is best accounted for by dual deficits in acoustic-phonological analysis (associated with pSTG) and semantic cognition (associated with pMTG and angular gyrus). The WA group were impaired on both nonverbal and verbal comprehension assessments consistent with a generalised semantic impairment. This semantic deficit was most similar in nature to that of the semantic aphasia group suggestive of a disruption to semantic control processes. In addition, only the WA group showed a strong effect of input modality on comprehension, with accuracy decreasing considerably as acoustic-phonological requirements increased. These results deviate from traditional accounts which emphasise a single impairment and, instead, implicate two deficits underlying the comprehension disorder in WA.
Abstract:
The interannual variability of the stratospheric polar vortex during winter in both hemispheres is observed to correlate strongly with the phase of the quasi-biennial oscillation (QBO) in tropical stratospheric winds. It follows that the lack of a spontaneously generated QBO in most atmospheric general circulation models (AGCMs) adversely affects the nature of polar variability in such models. This study examines QBO–vortex coupling in an AGCM in which a QBO is spontaneously induced by resolved and parameterized waves. The QBO–vortex coupling in the AGCM compares favorably to that seen in reanalysis data [from the 40-yr ECMWF Re-Analysis (ERA-40)], provided that careful attention is given to the definition of QBO phase. A phase angle representation of the QBO is employed that is based on the two leading empirical orthogonal functions of equatorial zonal wind vertical profiles. This yields a QBO phase that serves as a proxy for the vertical structure of equatorial winds over the whole depth of the stratosphere and thus provides a means of subsampling the data to select QBO phases with similar vertical profiles of equatorial zonal wind. Using this subsampling, it is found that the QBO phase that induces the strongest polar vortex response in early winter differs from that which induces the strongest late-winter vortex response. This is true in both hemispheres and for both the AGCM and ERA-40. It follows that the strength and timing of QBO influence on the vortex may be affected by the partial seasonal synchronization of QBO phase transitions that occurs both in observations and in the model. This provides a mechanism by which changes in the strength of QBO–vortex correlations may exhibit variability on decadal time scales. In the model, such behavior occurs in the absence of external forcings or interannual variations in sea surface temperatures.
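The phase-angle construction described here can be sketched roughly as follows (a generic EOF/PC recipe with synthetic winds standing in for reanalysis or model output, not the study's code): compute the two leading EOFs of monthly equatorial zonal-wind profiles, project the data onto them, and take the angle of the resulting two-dimensional principal-component vector as the QBO phase.

```python
import numpy as np

rng = np.random.default_rng(0)
n_months, n_levels = 480, 25

# Synthetic equatorial zonal-wind profiles (time x pressure level):
# a downward-propagating oscillation plus noise.
t = np.arange(n_months)[:, None]
z = np.linspace(0, 1, n_levels)[None, :]
u = 20 * np.cos(2 * np.pi * (t / 28.0 - 2.0 * z)) + rng.normal(0, 2, (n_months, n_levels))

# Two leading EOFs of the anomaly field via SVD.
anom = u - u.mean(axis=0)
_, _, vt = np.linalg.svd(anom, full_matrices=False)
pcs = anom @ vt[:2].T                       # principal components 1 and 2

# QBO phase angle: position of each month in the (PC1, PC2) plane.
phase = np.arctan2(pcs[:, 1], pcs[:, 0])    # radians in (-pi, pi]
print(phase[:12])
```

Months with similar phase angles then have similar vertical wind profiles over the depth of the stratosphere, which is what allows the subsampling by QBO phase described in the abstract.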
Abstract:
A fingerprint method for detecting anthropogenic climate change is applied to new simulations with a coupled ocean-atmosphere general circulation model (CGCM) forced by increasing concentrations of greenhouse gases and aerosols covering the years 1880 to 2050. In addition to the anthropogenic climate change signal, the space-time structure of the natural climate variability for near-surface temperatures is estimated from instrumental data over the last 134 years and two 1000 year simulations with CGCMs. The estimates are compared with paleoclimate data over 570 years. The space-time information on both the signal and the noise is used to maximize the signal-to-noise ratio of a detection variable obtained by applying an optimal filter (fingerprint) to the observed data. The inclusion of aerosols slows the predicted future warming. The probability that the observed increase in near-surface temperatures in recent decades is of natural origin is estimated to be less than 5%. However, this number is dependent on the estimated natural variability level, which is still subject to some uncertainty.
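A minimal linear-algebra sketch of an optimal-fingerprint detection variable (illustrative arrays only, not the study's model data): the signal-pattern guess is rotated by the inverse of the natural-variability covariance, observations are projected onto the resulting fingerprint, and the detection variable is compared with its distribution under natural variability.

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid = 50

# Assumed ingredients: a signal pattern g (e.g. from a forced CGCM run) and a
# natural-variability covariance C estimated from control-run realizations.
g = rng.normal(0, 1, n_grid)
noise_samples = rng.normal(0, 1, (500, n_grid))          # "control" realizations
C = np.cov(noise_samples, rowvar=False) + 1e-3 * np.eye(n_grid)

# Optimal fingerprint: f = C^{-1} g maximizes the signal-to-noise ratio.
f = np.linalg.solve(C, g)

# Detection variable for an "observed" field (signal plus natural noise).
obs = 0.5 * g + rng.normal(0, 1, n_grid)
d_obs = f @ obs

# Fraction of control realizations whose detection variable is at least as
# large; in practice independent control segments would be used here.
d_noise = noise_samples @ f
p_natural = np.mean(d_noise >= d_obs)
print(f"detection variable {d_obs:.2f}, exceedance under natural variability ~ {p_natural:.3f}")
```

The exceedance fraction plays the role of the estimated probability that the observed warming is of natural origin, and, as the abstract notes, it depends directly on how well the natural-variability covariance is estimated.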