933 results for Curve fitting
Abstract:
2000 Mathematics Subject Classification: Primary 14H55; Secondary 14H30, 14H40, 20M14.
Abstract:
This paper presents a novel approach to the computation of primitive geometrical structures when no prior knowledge about the visual scene is available and a high level of noise is expected. We based our work on the grouping principles of proximity and similarity, applied to points and to preliminary models. The former was realized using Minimum Spanning Trees (MST), to which we apply stable alignment and goodness-of-fit criteria. For the latter, we used spectral clustering of preliminary models. The algorithm generalizes to various model-fitting settings without tuning of run parameters. Experiments demonstrate a significant improvement in the localization accuracy of models in plane-fitting, homography, and motion-segmentation examples. Unlike most methods in the field, the efficiency of the algorithm does not depend on fine-tuning of run parameters.
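As a rough illustration of the proximity-grouping step only (not the authors' implementation; the SciPy-based construction and the median-based edge threshold below are assumptions made for the sketch), points can be grouped by building an MST over their pairwise distances and cutting edges that are unusually long:

# Minimal sketch: proximity grouping of 2-D points via a Minimum Spanning Tree.
# Edges much longer than the typical MST edge are cut; the remaining connected
# components are taken as proximity groups. The threshold is illustrative only.
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def mst_proximity_groups(points, factor=3.0):
    d = distance_matrix(points, points)          # dense pairwise distances
    mst = minimum_spanning_tree(d).toarray()     # MST as a weighted adjacency matrix
    edges = mst[mst > 0]
    cutoff = factor * np.median(edges)           # "unusually long" = > factor * median edge
    mst[mst > cutoff] = 0                        # cut long edges
    n_groups, labels = connected_components(mst, directed=False)
    return n_groups, labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cluster_a = rng.normal(0.0, 0.1, size=(50, 2))
    cluster_b = rng.normal(5.0, 0.1, size=(50, 2))
    pts = np.vstack([cluster_a, cluster_b])
    n, lab = mst_proximity_groups(pts)
    print(n, "groups found")                     # expect 2 for this toy data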
Abstract:
In the proof of Lemma 3.1 in [1] we need to show that we may take two points p and q with p ≠ q such that p + q + (b−2)g^1_2(C′) ∼ 2(q_1 + … + q_{b−1}), where q_1, …, q_{b−1} are points of C′; however, in [1] we did not show that p ≠ q. Moreover, we have not been able to prove this using the method of [1]. We must therefore add a further assumption to Lemma 3.1 and rewrite the statements of the paper that follow Lemma 3.1. The following is the corrected version of Lemma 3.1 in [1], together with its proof.
Abstract:
2000 Mathematics Subject Classification: Primary 14H55; Secondary 14H30, 14J26.
Abstract:
Keratoconus is a bilateral degenerative disease characterized by a non-inflammatory, progressive central corneal ectasia (typically asymmetric) and decreased vision. In its early stages it may be managed with spectacles and soft contact lenses, but more commonly it is managed with rigid contact lenses. In advanced stages, when contact lenses can no longer be fitted, have become intolerable, or corneal damage is severe, a penetrating keratoplasty is commonly performed. Alternative surgical techniques, such as intra-stromal corneal ring segments (INTACS), have been developed to try to improve the fit of rigid contact lenses in keratoconic patients and avoid penetrating keratoplasties. This case report follows the fitting of rigid contact lenses in an advanced keratoconic cornea after an INTACS procedure and discusses clinical findings, treatment options, and the use of mini-scleral and scleral lens designs as they relate to the challenges encountered in managing such a patient. Mini-scleral and scleral lenses are relatively easy to fit and can benefit many patients, including advanced keratoconic patients, post-INTACS patients, and post-penetrating keratoplasty patients. © 2011 British Contact Lens Association.
Abstract:
Prices of U.S. Treasury securities vary over time and across maturities. When the market in Treasurys is sufficiently complete and frictionless, these prices may be modeled by a function of time and maturity. A cross-section of this function for fixed time is called the yield curve; the aggregate of these cross-sections is the evolution of the yield curve. This dissertation studies aspects of this evolution.

Two complementary approaches to the study of yield curve evolution are used here: principal components analysis and wavelet analysis. In both approaches the time and maturity variables are discretized. In principal components analysis the vectors of yield curve shifts are viewed as observations from a multivariate normal distribution. The resulting covariance matrix is diagonalized, and the eigenvalues and eigenvectors (the principal components) are used to draw inferences about the yield curve evolution.

In wavelet analysis, the vectors of shifts are resolved into hierarchies of localized fundamental shifts (wavelets) that leave specified global properties invariant (average change and duration change). The hierarchies relate to the degree of localization, with movements restricted to a single maturity at the base and general movements at the apex. Second-generation wavelet techniques allow better adaptation of the model to economic observables. Statistically, the wavelet approach is inherently nonparametric, while the wavelets themselves are better adapted to describing a complete market.

Principal components analysis provides information on the dimension of the yield curve process. While there is no clear demarcation between operative factors and noise, the top six principal components pick up 99% of total interest rate variation 95% of the time. An economically justified basis for this process is hard to find; for example, a simple linear model does not suffice for the first principal component, and the shape of this component is nonstationary.

Wavelet analysis works more directly with yield curve observations than principal components analysis. The complete process from bond data to multiresolution is presented, including the dedicated Perl programs, the details of the portfolio metrics, and the specially adapted wavelet construction. The result is more robust statistics, which balance the more fragile principal components analysis.
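As a minimal illustration of the principal-components step described above (not the dissertation's code; the maturity grid and the simulated yields are placeholders standing in for real Treasury data), yield-curve shifts can be decomposed as follows:

# Minimal sketch: principal components of yield-curve shifts.
# `yields` is assumed to be a (days x maturities) array of zero-coupon yields;
# deriving it from bond prices is outside the scope of this sketch.
import numpy as np

rng = np.random.default_rng(1)
n_days, maturities = 500, np.array([0.25, 1, 2, 5, 10, 30])
yields = np.cumsum(rng.normal(0, 0.02, size=(n_days, maturities.size)), axis=0) + 3.0

shifts = np.diff(yields, axis=0)                 # day-over-day yield-curve shifts
cov = np.cov(shifts, rowvar=False)               # covariance across maturities
eigvals, eigvecs = np.linalg.eigh(cov)           # symmetric eigendecomposition
order = np.argsort(eigvals)[::-1]                # sort by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
print("variance explained by each component:", np.round(explained, 3))
# On real Treasury data, the leading eigenvectors play the role of the familiar
# level/slope/curvature factors.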
Abstract:
The aim of this work is to study plane geometric loci, starting from the simplest examples that students encounter along their school path and then moving on to some famous curves that are defined as geometric loci. The curves in this work are drawn with the aid of GeoGebra, with which animations to be shown to students have been prepared. For some loci, the parametric equations are given first, and the Cartesian equation is then derived by means of the elimination theorem and the software Singular.
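As a small illustration of the elimination idea mentioned above (a toy sketch using SymPy rather than Singular; the parametric curve x = t^2, y = t^3 is chosen purely as an example), the Cartesian equation can be recovered from a parametrization with a lexicographic Groebner basis:

# Minimal sketch: eliminating the parameter t from a parametric plane curve.
from sympy import symbols, groebner

t, x, y = symbols('t x y')
gb = groebner([x - t**2, y - t**3], t, x, y, order='lex')   # t listed first => eliminate t
implicit = [g for g in gb.exprs if t not in g.free_symbols]  # generators of the elimination ideal
print(implicit)   # expect the Cartesian equation x**3 - y**2 (up to sign)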
Abstract:
Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or also jointly for the three tasks (for common cases in which two or even the three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine is included that obtains performance measures from the fitted functions. An R package for Windows and source code of the MATLAB and R routines are available as Supplementary Files.
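For intuition about the underlying model before turning to the authors' MATLAB/R routines, the following Monte Carlo sketch (not part of the published package; the parameter values and the simple resolution rule are assumptions) shows how exponentially distributed arrival latencies and a trichotomous decision space generate response probabilities as a function of stimulus onset asynchrony:

# Minimal sketch of an independent-channels model: each stimulus triggers an
# exponentially distributed arrival latency; the arrival-time difference is
# classified as "A first", "simultaneous", or "B first" using a resolution delta.
import numpy as np

rng = np.random.default_rng(42)

def response_probs(soa, rate_a=1 / 40.0, rate_b=1 / 40.0, delta=50.0, n=20000):
    """Probabilities of judging 'A first', 'simultaneous', 'B first' at a given SOA (ms).
    Positive soa means stimulus B is presented soa ms after stimulus A."""
    t_a = rng.exponential(1 / rate_a, n)          # arrival latency of channel A
    t_b = soa + rng.exponential(1 / rate_b, n)    # channel B, shifted by the SOA
    d = t_b - t_a                                 # positive: A arrived first
    p_a_first = np.mean(d > delta)
    p_simultaneous = np.mean(np.abs(d) <= delta)
    p_b_first = np.mean(d < -delta)
    return p_a_first, p_simultaneous, p_b_first

for soa in (-200, -100, 0, 100, 200):
    print(soa, np.round(response_probs(soa), 2))
# Fitting such a model (as the routines described above do) means choosing the
# latency rates, the resolution delta, and any response-error parameters that
# maximize the likelihood of the observed SJ2, SJ3, or TOJ responses.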
Abstract:
The reduction of administered doses, or even the complete cessation of chemotherapy, is often the consequence of a decrease in the number of neutrophils, the most abundant white blood cells in the blood. This reduction in the absolute neutrophil count, also known as myelosuppression, is precipitated by the nonspecific lethal effects of anti-cancer drugs, which, alongside their therapeutic effect, are also toxic to healthy cells. To mitigate this myelosuppressive impact, patients are given recombinant human granulocyte colony-stimulating factor (rhG-CSF), an exogenous form of G-CSF, the hormone responsible for stimulating neutrophil production and release into the bloodstream. Although the benefits of prophylactic G-CSF treatment during chemotherapy are well established, administration protocols remain poorly defined and are frequently determined ad libitum by clinicians. With the aim of improving therapeutic dosing and rationalizing the use of rhG-CSF during chemotherapy, we developed a physiological model of granulopoiesis that incorporates current state-of-the-art knowledge of neutrophil production from hematopoietic stem cells in the bone marrow. To this physiological model we coupled pharmacokinetic/pharmacodynamic (PK/PD) models of two drugs: PM00104 (Zalypsis®), an anti-cancer drug, and rhG-CSF (filgrastim). Relying on first principles underlying the physiology, we estimated the parameters exhaustively without resorting to data fitting, which allowed us to predict clinical data from 172 patients undergoing the CHOP14 protocol (6 chemotherapy cycles with a 14-day period, with rhG-CSF administered from day 4 to day 13 post-chemotherapy). Using this physio-PK/PD model, we showed that the number of rhG-CSF administrations could be reduced from ten (current practice) to four or even three, provided that the start of prophylactic rhG-CSF treatment is delayed. With a view to the clinical applicability of our modeling approach, we investigated the impact of the PK variability present in a patient population on the model's predictions by integrating population PK (Pop-PK) models of both drugs. Considering in silico cohorts of 500 patients for each of five plausible variability scenarios, and using three clinical markers (time to neutrophil nadir, nadir value, and area under the concentration-effect curve), we established that there was no significant difference between the model's predictions for the typical patient and for the population. This demonstrates the robustness of the approach we developed, which is akin to a quantitative systems pharmacology (QSP) approach. Motivated by the use of rhG-CSF in the treatment of other diseases, such as periodic pathologies like cyclic neutropenia, we then studied the model in the context of dynamical diseases.
Having shown that the cytokine-feedback paradigm is not valid for the exogenous administration of G-CSF mimetics, we developed a novel physiological PK/PD model comprising the free and bound concentrations of G-CSF. This new PK model also required changes to the PD model, since it allowed us to track the concentrations of G-CSF bound to neutrophils. We showed that the underlying assumption of equilibrium between the free and bound concentrations, according to the law of mass action, no longer holds for G-CSF at endogenous concentrations and would in fact lead to an overestimation of the drug's renal clearance. In doing so, we were able to reproduce clinical data obtained under various conditions (exogenous G-CSF administration, PM00104 administration, CHOP14). We also provided a coherent explanation of the mechanisms responsible for the physiological response to the two drugs. Finally, to highlight the integrative approach to pharmacology adopted in this thesis, we demonstrated its value for elucidating and reconstructing complex living systems, drawing a parallel with other scientific disciplines, such as paleontology and forensics, where a similar approach has amply proven itself. We also discussed the potential of quantitative systems pharmacology applied to drug development and translational medicine, using the physio-PK/PD model we developed.
Abstract:
Understanding the overall catalytic activity trend for rational catalyst design is one of the core goals in heterogeneous catalysis. In the past two decades, the development of density functional theory (DFT) and surface kinetics has made it feasible to theoretically evaluate and predict the variation in catalytic activity across catalysts within a descriptor-based framework. Within this framework, the concept of the volcano curve, which reveals the general activity trend, usually constitutes the basic foundation of catalyst screening. However, although it is a widely accepted concept in heterogeneous catalysis, its origin lacks a clear physical picture and a definite interpretation. Herein, starting with a brief review of the development of the catalyst screening framework, we use a two-step kinetic model to refine and clarify the origin of the volcano curve through a fully analytical treatment that integrates surface kinetics and the results of first-principles calculations. It is mathematically demonstrated that the volcano curve is an essential property in catalysis, resulting from the self-poisoning effect that accompanies the catalytic adsorption process. Specifically, when adsorption is strong, it is the rapid decrease of free surface sites rather than the increase of energy barriers that inhibits the overall reaction rate and produces the volcano curve. Some interesting points and implications for catalyst screening are also discussed on the basis of the kinetic derivation. Moreover, recent applications of the volcano curve to catalyst design in two important photoelectrocatalytic processes (the hydrogen evolution reaction and dye-sensitized solar cells) are briefly discussed.
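To make the self-poisoning argument concrete, here is a schematic two-step Langmuir-type derivation (a textbook-style sketch under a steady-state assumption, not the paper's actual analytical treatment). Consider the cycle A(g) + * -> A* with rate constant k_1 and A* -> B(g) + * with rate constant k_2, with site balance \theta_* + \theta_A = 1. Equating the two step rates at steady state gives

\[
  k_1 p_A \theta_* = k_2 \theta_A
  \quad\Longrightarrow\quad
  \theta_* = \frac{k_2}{k_1 p_A + k_2},
  \qquad
  r = k_1 p_A \theta_* = \frac{k_1 p_A\, k_2}{k_1 p_A + k_2}.
\]

On weak-binding surfaces k_1 p_A \ll k_2, so r \approx k_1 p_A (adsorption-limited branch); on strong-binding surfaces k_1 p_A \gg k_2, so \theta_* \to 0 and r \approx k_2: the rate is capped by the vanishing fraction of free sites (the self-poisoning effect), which gives the descending branch and hence the volcano shape once both rate constants are expressed through a binding-energy descriptor.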
Abstract:
This thesis aims to define the intersection multiplicity of two plane algebraic curves. The treatment is developed in algebraic terms, through the study of local rings. Some properties are then discussed and a few worked examples are given. In the third chapter, attention turns to the intersection of a variety and a hypersurface in n-dimensional projective space. A further notion of intersection multiplicity is defined, which generalizes the one introduced in the first two chapters. Starting from this definition, an extended version of Bézout's theorem can be stated. The last chapter focuses again on plane curves, with the aim of studying their topology in a neighbourhood of a singular point. In particular, the important notion of the link of a singular point is introduced.
Abstract:
This work studies elliptic curves viewed as plane algebraic curves, more precisely as smooth cubics in the complex projective plane. After introducing, in the first part, the notions of compact orientable surfaces and algebraic curves, a preliminary classification based on the genus of the surface and of the curve, respectively, is given by means of the classification theorem for compact surfaces. From this follow the definition of elliptic curves and a more detailed study of their main properties, such as the possibility of defining them through an affine equation known as the Weierstrass equation and their intrinsic structure as an abelian group. A further classification of smooth cubics is then given, completely different from the previous one, based instead on the modulus of the cubic, which is invariant under projective transformations. Finally, a computational aspect of elliptic curves is considered, namely their application to cryptography. Thanks to the structure they carry over finite fields, and under suitable hypotheses, public-key cryptosystems based on the discrete logarithm problem on elliptic curves allow, for the same level of security as classical cryptosystems, the use of shorter and therefore computationally cheaper keys. The definitions of the classical and elliptic-curve discrete logarithm problems are given, together with some examples of cryptographic algorithms defined on the latter.
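As a small concrete illustration of the group law mentioned above (a toy sketch, not taken from the thesis; the curve y^2 = x^3 + 2x + 2 over F_17 and the base point (5, 1) are a standard textbook example), point addition on a short Weierstrass curve can be written as:

# Toy sketch: the group law on y^2 = x^3 + a*x + b over a prime field F_p.
# Points are (x, y) tuples; None stands for the point at infinity (the identity).
P_MOD, A, B = 17, 2, 2   # small textbook curve y^2 = x^3 + 2x + 2 over F_17

def ec_add(P, Q, p=P_MOD, a=A):
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                          # P + (-P) = identity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p     # tangent slope (doubling)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p            # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

G = (5, 1)                          # a point on the curve
print(ec_add(G, G))                 # doubling: expected (6, 3)
print(ec_add(ec_add(G, G), G))      # 3*G
# Repeated addition (scalar multiplication) is the one-way operation behind the
# elliptic-curve discrete logarithm problem mentioned in the abstract.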