982 results for ISODOSE CURVES


Relevance: 20.00%

Publisher:

Abstract:

We explicitly construct simple, piecewise minimizing geodesic, arbitrarily fine interpolations of simple and Jordan curves on a Riemannian manifold. In particular, a finite sequence of partition points can be specified in advance to be included in our construction. We then present two applications of our main results: the generalized Green's theorem and the uniqueness of signature for planar Jordan curves with finite p-variation for 1 ≤ p < 2.


Let L be a number field and let E/L be an elliptic curve with complex multiplication by the ring of integers O_K of an imaginary quadratic field K. We use class field theory and results of Skorobogatov and Zarhin to compute the transcendental part of the Brauer group of the abelian surface E × E. The results for the odd-order torsion also apply to the Brauer group of the K3 surface Kum(E × E). We describe explicitly the elliptic curves E/Q with complex multiplication by O_K such that the Brauer group of E × E contains a transcendental element of odd order. We show that such an element gives rise to a Brauer-Manin obstruction to weak approximation on Kum(E × E), while there is no obstruction coming from the algebraic part of the Brauer group.


Let C be a smooth, absolutely irreducible genus 3 curve over a number field M. Suppose that the Jacobian of C has complex multiplication by a sextic CM-field K. Suppose further that K contains no imaginary quadratic subfield. We give a bound on the primes p of M such that the stable reduction of C at p contains three irreducible components of genus 1.


Let E/Q be an elliptic curve and p a rational prime of good ordinary reduction. For every imaginary quadratic field K/Q satisfying the Heegner hypothesis for E we have a corresponding line in E(K) ⊗ Q_p, known as a shadow line. When E/Q has analytic rank 2 and E/K has analytic rank 3, shadow lines are expected to lie in E(Q) ⊗ Q_p. If, in addition, p splits in K/Q, then shadow lines can be determined using the anticyclotomic p-adic height pairing. We develop an algorithm to compute anticyclotomic p-adic heights, which we then use to provide an algorithm to compute shadow lines. We conclude by illustrating these algorithms in a collection of examples.


Surveys for exoplanetary transits are usually limited not by photon noise but rather by the amount of red noise in their data. In particular, although the CoRoT space-based survey data are being carefully scrutinized, significant new sources of systematic noise are still being discovered. Recently, a magnitude-dependent systematic effect was discovered in the CoRoT data by Mazeh et al. and a phenomenological correction was proposed. Here we tie the observed effect to a particular type of systematic effect, and in the process generalize the popular Sysrem algorithm to include external parameters in a simultaneous solution with the unknown effects. We show that a post-processing scheme based on this algorithm performs well and indeed allows for the detection of new transit-like signals that were not previously detected.
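The paper's generalization (external parameters solved simultaneously with the unknown effects) is not reproduced in the abstract; as a rough sketch of the core Sysrem idea it builds on — iteratively fitting and removing a rank-one systematic effect c_i·a_j from a stars-by-epochs residual matrix — one might write the following (the data, noise level, and iteration count are invented for illustration):

```python
import numpy as np

def sysrem_effect(resid, err, n_iter=20):
    """Fit and remove one rank-one systematic effect c_i * a_j from a
    (stars x epochs) residual matrix, minimizing
    sum_ij ((r_ij - c_i a_j) / err_ij)^2 by alternating linear solves."""
    w = 1.0 / err**2
    a = np.ones(resid.shape[1])                  # initial epoch terms
    for _ in range(n_iter):                      # alternate c- and a-solves
        c = (w * resid * a).sum(axis=1) / (w * a**2).sum(axis=1)
        a = (w * resid * c[:, None]).sum(axis=0) / (w * c[:, None] ** 2).sum(axis=0)
    return resid - np.outer(c, a), c, a

# Toy data: 50 stars x 40 epochs sharing a single systematic effect.
rng = np.random.default_rng(0)
c_true, a_true = rng.normal(size=50), rng.normal(size=40)
data = np.outer(c_true, a_true) + 0.01 * rng.normal(size=(50, 40))
cleaned, c, a = sysrem_effect(data, np.full((50, 40), 0.01))
# After removal, the residuals sit at roughly the injected noise level.
```

Repeating the step on `cleaned` would strip further effects, which is how Sysrem is normally applied in practice.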


We construct indecomposable and noncrossed product division algebras over function fields of connected smooth curves X over Z_p. This is done by defining an index-preserving morphism s: Br(K(X)^)' → Br(K(X))' which splits res: Br(K(X)) → Br(K(X)^), where K(X)^ denotes the completion of K(X) at the special fiber, and using it to lift indecomposable and noncrossed product division algebras over K(X)^. (C) 2010 Elsevier Inc. All rights reserved.


This paper presents a functional form, linear in the parameters, to deal with income distribution and Lorenz curves. The function is fitted to the Brazilian income distribution. Data standard deviations were estimated from year-to-year variation of the income share. (C) 2010 Elsevier B.V. All rights reserved.
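The paper's specific functional form is not reproduced in the abstract. As a hedged illustration of what "linear in the parameters" buys, the sketch below fits a hypothetical Lorenz form L(p) = p − p(1−p)(b0 + b1·p) by ordinary least squares; the form, the coefficients, and the data are all invented for illustration:

```python
import numpy as np

# Hypothetical Lorenz form, linear in the parameters b0, b1:
#   L(p) = p - p (1 - p) (b0 + b1 p),
# which satisfies L(0) = 0 and L(1) = 1 as any Lorenz curve must.
def design_matrix(p):
    return np.column_stack([p * (1 - p), p**2 * (1 - p)])

# Invented "observed" cumulative income shares at population shares p.
p = np.linspace(0.1, 0.9, 9)
L_obs = p - p * (1 - p) * (0.8 + 0.3 * p)

# Linearity in b0, b1 turns the fit into ordinary least squares on p - L(p).
b, *_ = np.linalg.lstsq(design_matrix(p), p - L_obs, rcond=None)

# Gini index: G = 2 * int_0^1 (p - L(p)) dp = b0/3 + b1/6 for this form.
gini = b[0] / 3 + b[1] / 6
```

The design choice is the whole point of the abstract: because the unknowns enter linearly, no iterative nonlinear optimizer is needed and standard errors follow from ordinary regression theory.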


We investigate the transport properties (I×V curves and zero-bias transmittance) of pristine graphene nanoribbons (GNRs), as well as GNRs doped with boron and nitrogen, using an approach that combines nonequilibrium Green's functions and density functional theory (NEGF-DFT). Even for a pristine nanoribbon we verify a spin-filter effect under finite bias voltage when the leads have an antiparallel magnetization. The presence of impurities at the edges of monohydrogenated zigzag GNRs dramatically changes the charge transport properties, inducing a spin-polarized conductance. The I×V curves for these systems show that, depending on the bias voltage, the spin polarization can be inverted. (C) 2010 Wiley Periodicals, Inc. Int J Quantum Chem 111: 1379-1386, 2011
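A full NEGF-DFT calculation is far beyond a snippet, but the final step — turning a transmission function T(E) into an I×V curve via the Landauer formula — can be sketched as follows; the Gaussian transmission, the symmetric bias drop, and the units (e = h = 1) are assumptions for illustration only:

```python
import numpy as np

def fermi(E, mu, kT=0.025):
    """Fermi-Dirac occupation at chemical potential mu (energies in eV)."""
    return 1.0 / (1.0 + np.exp((E - mu) / kT))

def current(bias, T, E):
    """Landauer current I(V) = (2e/h) * int T(E) [f_L(E) - f_R(E)] dE,
    in units where e = h = 1, with the bias dropped symmetrically."""
    fL, fR = fermi(E, +bias / 2), fermi(E, -bias / 2)
    return 2.0 * np.sum(T * (fL - fR)) * (E[1] - E[0])

E = np.linspace(-2.0, 2.0, 2001)
T = np.exp(-E**2)                                # made-up transmission function
volts = np.linspace(0.0, 1.0, 11)
I = np.array([current(V, T, E) for V in volts])  # a smooth, rising I x V curve
```

In a spin-polarized calculation like the one the abstract describes, one would evaluate this integral separately for each spin channel's transmission.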


We have considered a Bayesian approach for the nonlinear regression model by replacing the normal distribution on the error term by some skewed distributions, which account for both skewness and heavy tails or skewness alone. The type of data considered in this paper concerns repeated measurements taken in time on a set of individuals. Such multiple observations on the same individual generally produce serially correlated outcomes; thus, additionally, our model allows for a correlation between observations made on the same individual. We illustrate the procedure using a data set on the growth curves of a clinical measurement in a group of pregnant women from an obstetrics clinic in Santiago, Chile. Parameter estimation and prediction were carried out using appropriate posterior simulation schemes based on Markov chain Monte Carlo methods. Besides the deviance information criterion (DIC) and the conditional predictive ordinate (CPO), we suggest the use of proper scoring rules based on the posterior predictive distribution for comparing models. For our data set, all these criteria chose the skew-t model as the best model for the errors. The DIC and CPO criteria are also validated, for the model proposed here, through a simulation study. As a conclusion of this study, the DIC criterion is not trustworthy for this kind of complex model.
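As a minimal illustration of the DIC criterion discussed above, the sketch below computes DIC = D̄ + p_D from posterior draws. A plain normal likelihood stands in for the paper's skew-t density, and the "posterior draws" are synthetic rather than genuine MCMC output, so this is only the bookkeeping, not the model:

```python
import numpy as np

def dic(loglik_draws, loglik_at_mean):
    """DIC = Dbar + pD, with deviance D = -2 log L, Dbar the posterior
    mean deviance, and pD = Dbar - D(posterior-mean parameters)."""
    dbar = np.mean(-2.0 * loglik_draws)
    pd = dbar - (-2.0 * loglik_at_mean)
    return dbar + pd, pd

# Synthetic stand-in: a normal likelihood and fake "posterior draws".
rng = np.random.default_rng(1)
y = rng.normal(1.0, 2.0, size=100)

def loglik(mu, sigma):
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (y - mu) ** 2 / (2 * sigma**2))

mus = rng.normal(y.mean(), 0.2, size=500)
sigmas = np.abs(rng.normal(y.std(), 0.1, size=500))
ll = np.array([loglik(m, s) for m, s in zip(mus, sigmas)])
DIC, pD = dic(ll, loglik(mus.mean(), sigmas.mean()))
# pD plays the role of an effective number of parameters.
```

The abstract's warning is worth repeating here: for complex hierarchical models with correlated errors, a low DIC alone is not a trustworthy basis for model choice.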


One of the fundamental machine learning tasks is that of predictive classification. Given that organisations collect an ever increasing amount of data, predictive classification methods must be able to effectively and efficiently handle large amounts of data. However, it is understood that present requirements push existing algorithms to, and sometimes beyond, their limits, since many classification prediction algorithms were designed when currently common data set sizes were beyond imagination. This has led to a significant amount of research into ways of making classification learning algorithms more effective and efficient. Although substantial progress has been made, a number of key questions have not been answered. This dissertation investigates two of these key questions. The first is whether different types of algorithms from those currently employed are required when using large data sets. This is answered by analysis of the way in which the bias plus variance decomposition of predictive classification error changes as training set size is increased. Experiments find that larger training sets require different types of algorithms from those currently used. Some insight into the characteristics of suitable algorithms is provided, and this may provide some direction for the development of future classification prediction algorithms which are specifically designed for use with large data sets. The second question investigated is that of the role of sampling in machine learning with large data sets. Sampling has long been used as a means of avoiding the need to scale up algorithms to suit the size of the data set by scaling down the size of the data sets to suit the algorithm. However, the costs of performing sampling have not been widely explored. Two popular sampling methods are compared with learning from all available data in terms of predictive accuracy, model complexity, and execution time.
The comparison shows that sub-sampling generally produces models with accuracy close to, and sometimes greater than, that obtainable from learning with all available data. This result suggests that it may be possible to develop algorithms that take advantage of the sub-sampling methodology to reduce the time required to infer a model while sacrificing little if any accuracy. Methods of improving effective and efficient learning via sampling are also investigated, and new sampling methodologies are proposed. These methodologies include using a varying proportion of instances to determine the next inference step and using a statistical calculation at each inference step to determine sufficient sample size. Experiments show that using a statistical calculation of sample size can not only substantially reduce execution time but can do so with only a small loss, and occasional gain, in accuracy. One of the common uses of sampling is in the construction of learning curves. Learning curves are often used to attempt to determine the optimal training size which will maximally reduce execution time while not being detrimental to accuracy. An analysis of the performance of methods for detection of convergence of learning curves is performed, with the focus of the analysis on methods that calculate the gradient of the tangent to the curve. Given that such methods can be susceptible to local accuracy plateaus, an investigation into the frequency of local plateaus is also performed. It is shown that local accuracy plateaus are a common occurrence, and that ensuring a small loss of accuracy often results in greater computational cost than learning from all available data. These results cast doubt over the applicability of gradient-of-tangent methods for detecting convergence, and over the viability of learning curves for reducing execution time in general.
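A gradient-of-tangent convergence test of the kind analysed above can be sketched as follows; the tolerance, window size, and learning-curve values are invented for illustration, and the comment notes the failure mode the dissertation identifies:

```python
import numpy as np

def converged(sizes, accuracies, tol=1e-6, window=3):
    """Declare convergence when the gradient of the tangent to the learning
    curve stays below tol (accuracy per training instance) over the last
    `window` segments.  A local accuracy plateau can pass this test
    prematurely, which is exactly the failure mode discussed above."""
    slopes = np.diff(accuracies) / np.diff(sizes)
    return len(slopes) >= window and bool(np.all(np.abs(slopes[-window:]) < tol))

sizes = np.array([1000, 2000, 4000, 8000, 16000, 32000])
acc = np.array([0.71, 0.78, 0.80, 0.801, 0.8012, 0.8013])  # flattening curve
print(converged(sizes, acc))  # the last three slopes are all below 1e-6
```

A curve that is still gaining accuracy at every doubling of the training set would fail the same test, which is the behaviour the convergence detector is meant to distinguish.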


Corrosion testing (half-cell and LPR) was carried out on a number of reinforced concrete panels which had been taken from the fascia of a twenty-five-year-old high-rise building in Melbourne, Australia. Corrosion, predominantly a result of carbonation of the concrete, was associated with a limited amount of cracking. A monitoring technique was established in which probe electrodes (reference and counter) were retro-fitted into the concrete. The probe electrode setup was identical for all panels tested. It was found that the corrosion behaviour of all panels tested closely fitted a family of results when the corrosion potential is plotted against the polarisation resistance (Rp). This enabled the development of a so-called 'control curve' relating the corrosion potential to the Rp for all of the panels under investigation. This relationship was also confirmed on laboratory samples, indicating that for a fixed geometry and experimental conditions a relationship between the potential and polarisation resistance of steel can be established for the steel-concrete system. Experimental results will be presented which indicate that, for a given monitoring cell geometry, it may be possible to propose criteria for the point at which remediation measures should be considered. The establishment of such a control curve has enabled the development of a powerful monitoring tool for the assessment of a number of proposed corrosion remediation techniques. The actual effect of any corrosion remediation technique becomes clearly apparent via the type and magnitude of deviation of post-remediation data from the original (pre-remediation) control curve.
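The control-curve idea above — fit corrosion potential against polarisation resistance, then judge a remediation by how far post-treatment readings deviate from that fit — can be sketched numerically. All readings below are hypothetical, and the linear fit against log10(Rp) is an assumed functional form, not the paper's:

```python
import numpy as np

# Hypothetical pooled (Rp, potential) readings from the monitored panels.
rp = np.array([5e3, 1e4, 3e4, 8e4, 2e5, 6e5])                 # ohm.cm^2 (assumed)
potential = np.array([-420.0, -370, -300, -250, -190, -130])  # mV (assumed)

# Control curve: corrosion potential fitted linearly against log10(Rp).
slope, intercept = np.polyfit(np.log10(rp), potential, 1)

def deviation(rp_new, potential_new):
    """Distance (mV) of a new reading from the control curve; a large value
    after treatment signals a real effect of the remediation technique."""
    return potential_new - (slope * np.log10(rp_new) + intercept)

# A reading on the curve deviates little; a post-remediation reading with a
# much higher Rp at a similar potential deviates strongly.
```

Both the type (sign) and magnitude of the deviation carry information, matching the assessment role the abstract describes for the control curve.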