920 results for Interval analysis (Mathematics)
Abstract:
PURPOSE: To quantitatively analyze and compare the fundoscopic features of the fellow eyes of retinal angiomatous proliferation and typical exudative age-related macular degeneration, and to identify possible predictors of neovascularization. METHODS: Retrospective case-control study. Seventy-nine fellow eyes of unilateral retinal angiomatous proliferation (n = 40) and typical exudative age-related macular degeneration (n = 39) were included. Fundoscopic features of the fellow eyes were assessed using digital color fundus photographs taken at the time of diagnosis of neovascularization in the first affected eye. Grading was performed by two independent graders using RetmarkerAMD, a computer-assisted grading software based on the International Classification and Grading System for age-related macular degeneration. RESULTS: Baseline total number and area (square micrometers) of drusen in the central 1,000, 3,000, and 6,000 μm were significantly lower in the fellow eyes of retinal angiomatous proliferation, with statistically significant differences (P < 0.05) observed at virtually every location (1,000, 3,000, and 6,000 μm). A soft drusen (≥125 μm) area >510,196 μm2 in the central 6,000 μm was associated with an increased risk of neovascularization (hazard ratio, 4.35; 95% confidence interval [1.56-12.15]; P = 0.005). CONCLUSION: Baseline fundoscopic features of the fellow eye differ significantly between retinal angiomatous proliferation and typical exudative age-related macular degeneration. A large area (>510,196 μm2) of soft drusen in the central 6,000 μm confers a significantly higher risk of neovascularization and should be considered a phenotypic risk factor.
Abstract:
The graph Laplacian operator is widely studied in spectral graph theory, largely due to its importance in modern data analysis. Recently, the Fourier transform and other time-frequency operators have been defined on graphs using Laplacian eigenvalues and eigenvectors. We extend these results and prove that the translation operator to the i-th node is invertible if and only if all eigenvectors are nonzero on the i-th node. Because of this dependency on the support of eigenvectors, we study the characteristic set of Laplacian eigenvectors. We prove that the Fiedler vector of a planar graph cannot vanish on large neighborhoods and then explicitly construct a family of non-planar graphs that do exhibit this property. We then prove original results in modern analysis on graphs. We extend results on spectral graph wavelets to create vertex-dynamic spectral graph wavelets whose support depends on both scale and translation parameters. We prove that Spielman's Twice-Ramanujan graph sparsifying algorithm cannot outperform his conjectured optimal sparsification constant. Finally, we present numerical results on graph conditioning, in which edges of a graph are rescaled to best approximate the complete graph and reduce average commute time.
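The invertibility criterion above has a compact matrix form. A minimal sketch in Python (illustrative only; the toy graph, tolerance, and helper name are assumptions, not the dissertation's code), assuming the standard definition of generalized translation via the Laplacian eigenbasis:

```python
# Sketch: generalized translation on a graph via the Laplacian eigenbasis,
# illustrating the invertibility criterion described above.
import numpy as np
import networkx as nx

G = nx.path_graph(6)                                  # small example graph (assumption)
L = nx.laplacian_matrix(G).toarray().astype(float)
eigvals, U = np.linalg.eigh(L)                        # columns of U are Laplacian eigenvectors
N = L.shape[0]

def translation_operator(i):
    """Matrix of the generalized translation to node i:
    T_i = sqrt(N) * U diag(u_l(i)) U^T."""
    return np.sqrt(N) * U @ np.diag(U[i, :]) @ U.T

i = 2
T_i = translation_operator(i)
# T_i is invertible exactly when no eigenvector vanishes at node i.
supported = np.all(np.abs(U[i, :]) > 1e-12)
print("all eigenvectors nonzero at node", i, ":", supported)
print("rank of T_i:", np.linalg.matrix_rank(T_i), "of", N)
```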
Abstract:
The dissertation is devoted to the study of problems in the calculus of variations, free boundary problems, and gradient flows with respect to the Wasserstein metric. More concretely, we consider the problem of characterizing the regularity of minimizers of a certain interaction energy. Minimizers of the interaction energy have a somewhat surprising relationship with solutions to obstacle problems. Here we prove and exploit this relationship to obtain novel regularity results. Another problem we tackle is describing the asymptotic behavior of the Cahn-Hilliard equation with degenerate mobility. By framing the Cahn-Hilliard equation with degenerate mobility as a gradient flow in the Wasserstein metric, in one space dimension, we prove its convergence to a degenerate parabolic equation under the framework recently developed by Sandier and Serfaty.
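For orientation, a commonly used form of the one-dimensional Cahn-Hilliard equation with degenerate mobility (a standard normalization assumed here, not necessarily the one used in the dissertation) is:

```latex
% Degenerate-mobility Cahn-Hilliard equation (standard form, assumed normalization)
\partial_t u \;=\; \partial_x\!\Big( m(u)\,\partial_x\big( -\partial_{xx} u + W'(u) \big) \Big),
\qquad m(u) = u(1-u),
```

which can be viewed formally as a gradient flow of the free energy E(u) = ∫ (½|∂_x u|² + W(u)) dx with respect to a mobility-weighted Wasserstein-type metric, the viewpoint the abstract refers to.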
Abstract:
The Reverend Joseph McKeen (1757-1807) was the first president of Bowdoin College, Brunswick, Maine, USA (founded 1794). McKeen is famous for his inaugural address, in which he calls students to serve the common good. His view of the common good is a deeply theological one, coloured by the theological era in which he lived and worked. This study examines the idea of the common good in the light of McKeen's college sermons, taking note of the following subjects: Scottish Common Sense Realism; The Nature of True Virtue; The Controversy with Unitarianism; and Science and Mathematics. McKeen's view of the common good is not simply a political view. He is not merely a republican, expressing his views on the future of the republic in a classical political way. He is also, indeed primarily, a pastor and theologian.
Abstract:
This dissertation investigates the connection between spectral analysis and frame theory. When considering the spectral properties of a frame, we present a few novel results relating to the spectral decomposition. We first show that scalable frames have the property that the inner product of the scaling coefficients and the eigenvectors must equal the inverse eigenvalues. From this, we prove a similar result when an approximate scaling is obtained. We then focus on the optimization problems inherent to scalable frames by first showing that there is an equivalence between scaling a frame and optimization problems with a non-restrictive objective function. Various objective functions are considered, and an analysis of the solution type is presented. For linear objectives, we can encourage sparse scalings, and with barrier objective functions, we force dense solutions. We further consider frames in high dimensions, and derive various solution techniques. From here, we restrict ourselves to various frame classes to add more specificity to the results. Using frames generated from distributions allows for the placement of probabilistic bounds on scalability. For discrete distributions (Bernoulli and Rademacher), we bound the probability of encountering an ONB, and for continuous symmetric distributions (uniform and Gaussian), we show that symmetry is retained in the transformed domain. We also prove several hyperplane-separation results. With the theory developed, we discuss graph applications of the scalability framework. We make a connection with graph conditioning, and show the infeasibility of the problem in the general case. After a modification, we show that any complete graph can be conditioned. We then present a modification of standard PCA (robust PCA) developed by Candès, and give some background on Electron Energy-Loss Spectroscopy (EELS). We design a novel scheme for the processing of EELS data through robust PCA and least-squares regression, and test this scheme on biological samples. Finally, we take the idea of robust PCA and apply the technique of kernel PCA to perform robust manifold learning. We derive the problem and present an algorithm for its solution. There is also a discussion of the differences from RPCA that make theoretical guarantees difficult.
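As a concrete illustration of frame scalability, the central notion above, here is a minimal sketch assuming the standard formulation that a frame is scalable when nonnegative weights c_i = w_i² exist with Σ c_i f_i f_iᵀ = I; the example frame and helper name are assumptions, not taken from the dissertation:

```python
# Sketch: testing whether a finite frame is scalable, i.e. whether weights
# c_i = w_i^2 >= 0 exist with sum_i c_i f_i f_i^T = I.
import numpy as np
from scipy.optimize import nnls

def scalability_weights(F):
    """F has the frame vectors as columns (d x n). Returns (c, residual)."""
    d, n = F.shape
    # Each column of A is vec(f_i f_i^T); we solve A c ~= vec(I) with c >= 0.
    A = np.column_stack([np.outer(F[:, i], F[:, i]).ravel() for i in range(n)])
    c, residual = nnls(A, np.eye(d).ravel())
    return c, residual

# Mercedes-Benz frame in R^2: three unit vectors at 120 degrees, known to be tight.
angles = np.array([np.pi / 2, np.pi / 2 + 2 * np.pi / 3, np.pi / 2 + 4 * np.pi / 3])
F = np.vstack([np.cos(angles), np.sin(angles)])
c, res = scalability_weights(F)
print("scaling weights c_i:", c, "residual:", res)   # residual ~ 0 -> scalable
```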
Abstract:
Verbal fluency is the ability to produce a satisfactory sequence of spoken words during a given time interval. The core of verbal fluency lies in the capacity to manage the executive aspects of language. The standard scores of the semantic verbal fluency test are broadly used in the neuropsychological assessment of the elderly, and different analytical methods are likely to extract even more information from the data generated in this test. Graph theory, a mathematical approach to analyzing relations between items, represents a promising tool to understand a variety of neuropsychological states. This study reports a graph analysis of data generated by the semantic verbal fluency test in cognitively healthy elderly (NC), patients with Mild Cognitive Impairment, amnestic (aMCI) and amnestic multiple domain (a+mdMCI) subtypes, and patients with Alzheimer's disease (AD). Sequences of words were represented as a speech graph in which every word corresponded to a node and temporal links between words were represented by directed edges. To characterize the structure of the data we calculated 13 speech graph attributes (SGAs). The individuals were compared when divided into three (NC, MCI, AD) and four (NC, aMCI, a+mdMCI, AD) groups. When the three groups were compared, significant differences were found in the standard measure of correct words produced and in three SGAs: diameter, average shortest path, and network density. SGAs sorted the elderly groups with good specificity and sensitivity. When the four groups were compared, the groups differed significantly in network density, except between the two MCI subtypes and between NC and aMCI. The diameter of the network and the average shortest path differed significantly between NC and AD, and between aMCI and AD. SGAs sorted the elderly into their groups with good specificity and sensitivity, performing better than the standard score of the task. These findings provide support for a new methodological frame to assess the strength of semantic memory through the verbal fluency task, with the potential to amplify the predictive power of this test. Graph analysis is likely to become clinically relevant in neurology and psychiatry, and may be particularly useful for the differential diagnosis of the elderly.
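A minimal sketch of the speech-graph construction described above, using networkx; the word list and the restriction to the largest connected component are illustrative assumptions, not the study's protocol:

```python
# Sketch: build a speech graph from a verbal-fluency transcript and compute
# three of the attributes mentioned above (density, diameter, average shortest path).
import networkx as nx

words = ["dog", "cat", "lion", "tiger", "cat", "horse", "cow", "dog"]  # toy fluency output

G = nx.DiGraph()
G.add_edges_from(zip(words, words[1:]))       # node per word, edge per temporal transition

density = nx.density(G)
# Diameter and average shortest path are computed on the largest undirected
# connected component so they are well defined.
U = G.to_undirected()
H = U.subgraph(max(nx.connected_components(U), key=len))
diameter = nx.diameter(H)
avg_shortest_path = nx.average_shortest_path_length(H)

print(f"density={density:.3f}, diameter={diameter}, avg shortest path={avg_shortest_path:.3f}")
```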
Abstract:
We consider the a posteriori error analysis and hp-adaptation strategies for hp-version interior penalty discontinuous Galerkin methods for second-order partial differential equations with nonnegative characteristic form on anisotropically refined computational meshes with anisotropically enriched elemental polynomial degrees. In particular, we exploit duality-based hp-error estimates for linear target functionals of the solution, and design and implement the corresponding adaptive algorithms to ensure reliable and efficient control of the error in the prescribed functional to within a given tolerance. This involves exploiting both local isotropic and anisotropic mesh refinement and isotropic and anisotropic polynomial degree enrichment. The superiority of the proposed algorithm over standard hp-isotropic mesh refinement algorithms and an h-anisotropic/p-isotropic adaptive procedure is illustrated by a series of numerical experiments.
Abstract:
This paper deals with the development and analysis of asymptotically stable and consistent schemes in the joint quasi-neutral and fluid limits for the collisional Vlasov-Poisson system. In these limits, classical explicit schemes suffer from time step restrictions due to the small plasma period and Knudsen number. To solve this problem, we propose a new scheme that is stable for time steps independent of the small-scale dynamics and whose computational cost is comparable to that of standard explicit schemes. In addition, this scheme automatically reduces to consistent discretizations of the underlying asymptotic systems. In this first work on the subject, we propose a first-order-in-time scheme and perform a relative linear stability analysis to deal with such problems. The framework we propose permits the extension of this approach to high-order schemes in the near future. We finally show the capability of the method to deal with small scales through numerical experiments.
Abstract:
In this work the split-field finite-difference time-domain method (SF-FDTD) has been extended for the analysis of two-dimensionally periodic structures with third-order nonlinear media. The accuracy of the method is verified by comparisons with the nonlinear Fourier Modal Method (FMM). Once the formalism has been validated, examples of one- and two-dimensional nonlinear gratings are analysed. For the 2D case, the shifting in resonant waveguides is corroborated. Here, not only the scalar Kerr effect is considered; the tensorial nature of the third-order nonlinear susceptibility is also included. The use of nonlinear materials in this kind of device makes it possible to design tunable devices such as variable band filters. However, the third-order nonlinear susceptibility is usually small, and high intensities are needed in order to trigger the nonlinear effect. Here, a one-dimensional CBG is analysed in both the linear and nonlinear regimes, and the shifting of the resonance peaks for both TE and TM polarizations is obtained numerically. The application of a numerical method based on the finite-difference time-domain method allows this issue to be analysed in the time domain, so bistability curves are also computed by means of the numerical method. These curves show how the nonlinear effect modifies the properties of the structure as a function of a variable input pump field. When the nonlinear behaviour is taken into account, the estimation of the electric field components becomes more challenging. In this paper, we present a set of acceleration strategies based on parallel software and hardware solutions.
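For reference, the scalar Kerr effect invoked above can be summarized by the standard textbook relations (assumed here, not quoted from the paper):

```latex
% Scalar Kerr effect: third-order polarization and intensity-dependent index
P_{\mathrm{NL}} = \varepsilon_0 \, \chi^{(3)} |E|^2 E,
\qquad
n = n_0 + n_2 I, \quad n_2 \propto \chi^{(3)},
```

so the resonance wavelength of the grating shifts with the pump intensity, which is what produces the tunability and the bistability curves mentioned above.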
Abstract:
Common bottlenose dolphins (Tursiops truncatus) produce a wide variety of vocal emissions for communication and echolocation, of which the pulsed repertoire has been the most difficult to categorize. Packets of high-repetition, broadband pulses are still largely reported under the general designation of burst-pulses, and traditional attempts to classify these emissions rely mainly on their aural characteristics and on graphical aspects of spectrograms. Here, we present a quantitative analysis of pulsed signals emitted by wild bottlenose dolphins in the Sado estuary, Portugal (2011-2014), and test the reliability of a traditional classification approach. Acoustic parameters (minimum frequency, maximum frequency, peak frequency, duration, repetition rate and inter-click interval) were extracted from 930 pulsed signals previously categorized using a traditional approach. Discriminant function analysis revealed a high reliability of the traditional classification approach (93.5% of pulsed signals were consistently assigned to their aurally based categories). According to the discriminant function analysis (Wilks' Λ = 0.11, F(3, 2.41) = 282.75, P < 0.001), repetition rate is the feature that best enables the discrimination of different pulsed signals (structure coefficient = 0.98). Classification using hierarchical cluster analysis led to a similar categorization pattern: two main signal types with distinct magnitudes of repetition rate were clustered into five groups. The pulsed signals described here present significant differences in their time-frequency features, especially repetition rate (P < 0.001), inter-click interval (P < 0.001) and duration (P < 0.001). We document the occurrence of a distinct signal type, short burst-pulses, and highlight the existence of a diverse repertoire of pulsed vocalizations emitted in graded sequences. The use of quantitative analysis of pulsed signals is essential to improve classifications and to better assess the contexts of emission, geographic variation and the functional significance of pulsed signals.
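A minimal sketch of the kind of discriminant function analysis described above, using scikit-learn on synthetic data; the feature values, class labels, and cross-validation setup are assumptions for illustration, not the study's data or exact procedure:

```python
# Sketch: discriminant analysis of pulsed-signal acoustic parameters.
# Feature names follow the abstract; the numbers are made up.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Columns: min freq, max freq, peak freq (kHz), duration (s), repetition rate (pulses/s), ICI (ms)
n_per_class = 100
signal_type_a = rng.normal([5, 120, 60, 0.4, 300, 3],  [1, 10, 8, 0.1, 50, 0.5], (n_per_class, 6))
signal_type_b = rng.normal([5, 120, 60, 0.8,  30, 30], [1, 10, 8, 0.2,  8, 5.0], (n_per_class, 6))

X = np.vstack([signal_type_a, signal_type_b])
y = np.array([0] * n_per_class + [1] * n_per_class)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print("cross-validated agreement with the assigned categories:", scores.mean())
```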
Abstract:
Objectives: Our aim was to study the effect of combination therapy with aspirin and dipyridamole (A+D) over aspirin alone (ASA) in secondary prevention after transient ischemic attack or minor stroke of presumed arterial origin, and to perform subgroup analyses to identify patients who might benefit most from secondary prevention with A+D. Data sources: The previously published meta-analysis of individual patient data was updated with data from ESPRIT (N=2,739); trials without data on the comparison of A+D versus ASA were excluded. Review methods: A meta-analysis was performed using Cox regression, including several subgroup analyses and following baseline risk stratification. Results: A total of 7,612 patients (5 trials) were included in the analyses, 3,800 allocated to A+D and 3,812 to ASA alone. The trial-adjusted hazard ratio for the composite event of vascular death, non-fatal myocardial infarction and non-fatal stroke was 0.82 (95% confidence interval 0.72-0.92). Hazard ratios did not differ in subgroup analyses based on age, sex, qualifying event, hypertension, diabetes, previous stroke, ischemic heart disease, aspirin dose, type of vessel disease and dipyridamole formulation, nor across baseline risk strata as assessed with two different risk scores. A+D was also more effective than ASA alone in preventing recurrent stroke, HR 0.78 (95% CI 0.68-0.90). Conclusion: The combination of aspirin and dipyridamole is more effective than aspirin alone in patients with TIA or ischemic stroke of presumed arterial origin in the secondary prevention of stroke and other vascular events. This superiority was found in all subgroups and was independent of baseline risk.
Abstract:
Background: The -819C/T polymorphism in the interleukin 10 (IL-10) gene has been reported to be associated with inflammatory bowel disease (IBD), but the previous results are conflicting. Materials and Methods: The present study aimed at investigating the association between this polymorphism and risk of IBD using a meta-analysis. PubMed, Web of Science, EMBASE, Google Scholar and China National Knowledge Infrastructure (CNKI) databases were systematically searched to identify relevant publications from their inception to April 2016. Pooled odds ratios (OR) with 95% confidence intervals (CI) were calculated using fixed- or random-effects models. Results: A total of 7 case-control studies containing 1890 patients and 2929 controls were enrolled into this meta-analysis, and our results showed no association between the IL-10 gene -819C/T polymorphism and IBD risk (TT vs. CC: OR=0.81, 95% CI 0.64-1.04; CT vs. CC: OR=0.92, 95% CI 0.81-1.05; dominant model: OR=0.90, 95% CI 0.80-1.02; recessive model: OR=0.84, 95% CI 0.66-1.06). In a subgroup analysis by nationality, the -819C/T polymorphism was not associated with IBD in either Asians or Caucasians. In the subgroup analysis stratified by IBD type, a significant association was found in Crohn's disease (CD) (CT vs. CC: OR=0.68, 95% CI 0.48-0.97). Conclusion: In summary, the present meta-analysis suggests that the IL-10 gene -819C/T polymorphism may be associated with CD risk.
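For readers unfamiliar with the pooling step, a minimal sketch of inverse-variance fixed-effect pooling of odds ratios (toy 2x2 tables, not the studies included in this meta-analysis):

```python
# Sketch: inverse-variance fixed-effect pooling of odds ratios with a 95% CI,
# the basic calculation behind pooled ORs like those reported above.
import numpy as np

# Each row: (cases with variant, cases without, controls with variant, controls without)
tables = np.array([
    [30, 70, 45, 105],
    [22, 58, 40,  90],
    [15, 35, 28,  62],
], dtype=float)

a, b, c, d = tables.T
log_or = np.log((a * d) / (b * c))
var = 1 / a + 1 / b + 1 / c + 1 / d            # variance of each log-OR
w = 1 / var                                     # inverse-variance weights
pooled = np.sum(w * log_or) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
print(f"pooled OR = {np.exp(pooled):.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```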
Abstract:
In quantitative risk analysis, the problem of estimating small threshold exceedance probabilities and extreme quantiles arises ubiquitously in bio-surveillance, economics, actuarial analysis of natural disaster insurance, quality control schemes, etc. A useful way to assess extreme events is to estimate the probabilities of exceeding large threshold values and extreme quantiles judged to be of interest by the relevant authorities. Such information regarding extremes serves as essential guidance to interested authorities in decision-making processes. However, in such a context, data are usually skewed in nature, and the rarity of exceedances of large thresholds implies large fluctuations in the distribution's upper tail, precisely where accuracy is most desired. Extreme Value Theory (EVT) is a branch of statistics that characterizes the behavior of the upper or lower tails of probability distributions. However, existing methods in EVT for the estimation of small threshold exceedance probabilities and extreme quantiles often lead to poor predictive performance in cases where the underlying sample is not large enough or does not contain values in the distribution's tail. In this dissertation, we are concerned with an out-of-sample semiparametric (SP) method for the estimation of small threshold probabilities and extreme quantiles. The proposed SP method for interval estimation calls for the fusion or integration of a given data sample with external computer-generated independent samples. Since more data are used, real as well as artificial, under certain conditions the method produces relatively short yet reliable confidence intervals for small exceedance probabilities and extreme quantiles.
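For contrast with the proposed SP method, a minimal sketch of the classical peaks-over-threshold approach to the same quantities, using a Generalized Pareto fit in SciPy; the simulated sample and threshold choice are assumptions:

```python
# Sketch: estimate a small exceedance probability and an extreme quantile by
# fitting a Generalized Pareto Distribution to threshold exceedances.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
x = rng.pareto(3.0, size=5000) + 1            # skewed, heavy-tailed sample (toy data)

u = np.quantile(x, 0.95)                      # threshold
exceedances = x[x > u] - u
xi, loc, sigma = genpareto.fit(exceedances, floc=0)   # fix location at 0
p_u = np.mean(x > u)                          # empirical P(X > u)

def exceedance_prob(z):
    """Estimated P(X > z) for a large threshold z > u."""
    return p_u * genpareto.sf(z - u, xi, loc=0, scale=sigma)

def extreme_quantile(p):
    """Estimated quantile x_p with P(X > x_p) = p, for small p."""
    return u + genpareto.ppf(1 - p / p_u, xi, loc=0, scale=sigma)

print("P(X > 20) ~", exceedance_prob(20.0))
print("99.99% quantile ~", extreme_quantile(1e-4))
```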
Abstract:
We define epistemic order as the way in which the exchange and development of knowledge takes place in the classroom, breaking this down into a system of three components: epistemic initiative relating to who sets the agenda in classroom dialogue, and how; epistemic appraisal relating to who judges contributions to classroom dialogue, and how; and epistemic framing relating to the terms in which development and exchange of knowledge are represented, particularly in reflexive talk. These components are operationalised in terms of various types of structural and semantic analysis of dialogue. It is shown that a lesson segment displays a multi-layered epistemic order differing from that of conventional classroom recitation.