994 results for Recurrence Analysis
Abstract:
We present a novel general resource analysis for logic programs based on sized types. Sized types are representations that incorporate structural (shape) information and allow expressing both lower and upper bounds on the size of a set of terms and their subterms at any position and depth. They also allow relating the sizes of terms and subterms occurring at different argument positions in logic predicates. Using these sized types, the resource analysis can infer both lower and upper bounds on the resources used by all the procedures in a program as functions of input term (and subterm) sizes, overcoming limitations of existing resource analyses and enhancing their precision. Our new resource analysis has been developed within the abstract interpretation framework, as an extension of the sized types abstract domain, and has been integrated into the Ciao preprocessor, CiaoPP. The abstract domain operations are integrated with the setting up and solving of recurrence equations for inferring both size and resource usage functions. We show that the analysis is an improvement over the previous resource analysis present in CiaoPP and compares well in power to state-of-the-art systems.
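As a purely illustrative example of the kind of size and cost relations such an analysis sets up (our own minimal sketch, not taken from the paper), consider a predicate that appends two lists: the size of the output and the number of resolution steps can be written as recurrences on the input sizes and then solved in closed form.

```latex
% Illustrative size and cost recurrences for an append/3-style predicate,
% where n and m are the lengths of the two input lists (hypothetical example):
\begin{align*}
  \mathrm{size}_{\mathit{out}}(n, m) &= n + m \\
  C(0, m) = 1, \qquad C(n, m) &= C(n-1, m) + 1
     \;\;\Longrightarrow\;\; C(n, m) = n + 1
\end{align*}
```

Here the lower and upper bounds coincide; for predicates with data-dependent branching the analysis would instead infer distinct lower- and upper-bound functions.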
Abstract:
Nonlinear analysis tools for studying and characterizing the dynamics of physiological signals have gained popularity, mainly because tracking sudden alterations of the inherent complexity of biological processes might be an indicator of altered physiological states. Typically, in order to perform an analysis with such tools, the physiological variables that describe the biological process under study are used to reconstruct the underlying dynamics of that process. For that goal, a procedure called time-delay or uniform embedding is usually employed. Nonetheless, there is evidence of its inability to deal with non-stationary signals, such as those recorded from many physiological processes. To address this drawback, this paper evaluates the utility of non-conventional time series reconstruction procedures based on non-uniform embedding, applying them to automatic pattern recognition tasks. The paper compares a state-of-the-art non-uniform approach with a novel scheme that fuses embedding and feature selection at once, searching for better reconstructions of the dynamics of the system. Moreover, results are also compared with two classic uniform embedding techniques. Thus, the goal is to compare uniform and non-uniform reconstruction techniques, including the one proposed in this work, for pattern recognition in biomedical signal processing tasks. Once the state space is reconstructed, it is characterized with three classic nonlinear dynamic features (Largest Lyapunov Exponent, Correlation Dimension, and Recurrence Period Density Entropy), while classification is carried out by means of a simple k-nn classifier. In order to test its generalization capabilities, the approach was tested with three different physiological databases (Speech Pathologies, Epilepsy, and Heart Murmurs). In terms of the accuracy obtained in automatically detecting the presence of pathologies, and for the three types of biosignals analyzed, the non-uniform techniques used in this work slightly outperformed the uniform methods, suggesting their usefulness for characterizing non-stationary biomedical signals in pattern recognition applications. Moreover, in view of the results obtained and its low computational load, the proposed technique appears applicable to the problems under study.
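To make the reconstruction step concrete, the following minimal Python sketch (our own illustration; the parameter values and variable names are hypothetical and not taken from the paper) shows a classic uniform time-delay embedding, whose output state-space points would then be summarized with nonlinear features and fed to a k-nn classifier.

```python
import numpy as np

def uniform_embedding(x, dim=3, tau=10):
    """Classic time-delay (uniform) embedding of a scalar series x:
    each row is the delay vector [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Toy usage: embed a noisy sine wave; in a real pipeline the reconstructed
# points would be characterized by features such as the largest Lyapunov
# exponent, correlation dimension, or RPDE before k-nn classification.
x = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.05 * np.random.randn(2000)
X = uniform_embedding(x, dim=3, tau=10)
print(X.shape)  # (1980, 3) reconstructed state-space points
```

A non-uniform embedding would instead allow each coordinate its own, data-driven delay rather than integer multiples of a single tau.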
Abstract:
Resource analysis aims at inferring the cost of executing programs for any possible input, in terms of a given resource, such as the traditional execution steps, time or memory, and, more recently, energy consumption or user-defined resources (e.g., number of bits sent over a socket, number of database accesses, number of calls to particular procedures, etc.). This is performed statically, i.e., without actually running the programs. Resource usage information is useful for a variety of optimization and verification applications, as well as for guiding software design. For example, programmers can use such information to choose different algorithmic solutions to a problem; program transformation systems can use cost information to choose between alternative transformations; parallelizing compilers can use cost estimates for granularity control, which tries to balance the overheads of task creation and manipulation against the benefits of parallelization. In this thesis we have significantly improved an existing prototype implementation for resource usage analysis based on abstract interpretation, addressing a number of relevant challenges and overcoming many limitations it presented. The goal of that prototype was to show the viability of casting the resource analysis as an abstract domain, and how it could overcome important limitations of the state-of-the-art resource usage analysis tools. For this purpose, it was implemented as an abstract domain in the abstract interpretation framework of the CiaoPP system, PLAI. We have improved both the design and implementation of the prototype, eventually allowing the tool to evolve to the industrial application level. The abstract operations of such a tool depend heavily on setting up, and finding closed-form solutions of, recurrence relations representing the resource usage behavior of program components and of the whole program. While there exist many tools, such as Computer Algebra Systems (CAS) and libraries, able to find closed-form solutions for some types of recurrences, none of them alone is able to handle all the types of recurrences arising during program analysis. In addition, there are some types of recurrences that cannot be solved by any existing tool. This clearly constitutes a bottleneck for this kind of resource usage analysis. Thus, one of the major challenges we have addressed in this thesis is the design and development of a novel modular framework for solving recurrence relations, able to combine and take advantage of the results of existing solvers. Additionally, we have developed and integrated into our novel solver a technique for finding upper-bound closed-form solutions of a special class of recurrence relations that arise during the analysis of programs with accumulating parameters.
Finally, we have integrated the improved resource analysis into the CiaoPP general framework for resource usage verification, and specialized the framework for verifying energy consumption specifications of embedded imperative programs in a real application, showing the usefulness and practicality of the resulting tool.
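As a rough illustration of the recurrence-solving step (a minimal sketch under our own assumptions, not the solver architecture described in the thesis), one backend of a modular solver can simply delegate to a computer algebra system and signal failure so that other strategies can be tried.

```python
import sympy as sp

def try_cas_backend(eq, f, n, initial):
    """One backend of a hypothetical modular solver: delegate to SymPy's
    rsolve and return a closed form, or None if this backend cannot solve it."""
    try:
        return sp.rsolve(eq, f(n), initial)
    except (ValueError, NotImplementedError):
        return None

n = sp.symbols('n', integer=True)
f = sp.Function('f')
# Example cost recurrence f(n) = f(n-1) + n with f(0) = 0
eq = sp.Eq(f(n), f(n - 1) + n)
print(try_cas_backend(eq, f, n, {f(0): 0}))  # n*(n + 1)/2, or an equivalent form
```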
Abstract:
Final dissertation of the Integrated Master's Degree in Medicine (Mestrado Integrado em Medicina), Faculdade de Medicina, Universidade de Lisboa, 2014
Abstract:
Previous research into formulaic language has focussed on specialised groups of people (e.g. L1 acquisition by infants and adult L2 acquisition), with ordinary adult native speakers of English receiving less attention. Additionally, whilst some features of formulaic language have been used as evidence of authorship (e.g. the Unabomber’s use of “you can’t eat your cake and have it too”), there has been no systematic investigation into this as a potential marker of authorship. This thesis reports the first full-scale study into the use of formulaic sequences by individual authors. The theory of formulaic language hypothesises that formulaic sequences contained in the mental lexicon are shaped by experience combined with what each individual has found to be communicatively effective. Each author’s repertoire of formulaic sequences should therefore differ. To test this assertion, three automated approaches to the identification of formulaic sequences are tested on a specially constructed corpus containing 100 short narratives. The first approach explores a limited subset of formulaic sequences using recurrence across a series of texts as the criterion for identification. The second approach focuses on a word which frequently occurs as part of formulaic sequences, and also investigates alternative non-formulaic realisations of the same semantic content. Finally, a reference list approach is used. Whilst claiming authority for any reference list can be difficult, the proposed method utilises internet examples derived from lists prepared by others, a procedure which, it is argued, is akin to asking large groups of judges to reach consensus about what is formulaic. The empirical evidence supports the notion that formulaic sequences have potential as a marker of authorship, since in some cases a Questioned Document was correctly attributed. Although this marker of authorship is not universally applicable, it does promise to become a viable new tool in the forensic linguist’s tool-kit.
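As an illustration of the first, recurrence-based criterion (our own minimal sketch; the identification procedures in the thesis are more elaborate), one can count in how many different texts each word n-gram occurs and keep those that recur across a minimum number of texts as candidate formulaic sequences.

```python
from collections import Counter

def recurrent_ngrams(texts, n=3, min_texts=3):
    """Return word n-grams occurring in at least `min_texts` different texts,
    a crude proxy for candidate formulaic sequences."""
    doc_freq = Counter()
    for text in texts:
        tokens = text.lower().split()
        grams = {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
        doc_freq.update(grams)  # each text contributes each n-gram at most once
    return {g: k for g, k in doc_freq.items() if k >= min_texts}
```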
Abstract:
Aims: To review the role of cardiovascular disease and therapy in the onset and recurrence of preretinal/vitreous haemorrhage in diabetic patients. Methods: Retrospective case note analysis of diabetic patients with vitreous haemorrhage from the Diabetic Eye Clinic at Birmingham Heartlands Hospital. Results: In total, 54 patients (mean age 57.1, 37 males, 20 type I vs 34 type II diabetic patients) were included. The mean (SD) duration of diagnosed diabetes at first vitreous haemorrhage was significantly longer for type I than for type II diabetic patients: 21.9 (7.6) vs 14.8 (9.3) years (P<0.01, unpaired t-test, two-tailed). Aspirin administration was not associated with a significantly later onset of vitreous haemorrhage. Four episodes were associated with ACE-inhibitor cough. There was a trend towards HMG-CoA reductase inhibitor (statin) use being associated with a delayed onset of vitreous haemorrhage: 21.4 years until vitreous haemorrhage (treatment group) vs 16.2 years (nontreatment group) (P=0.09, two-tailed, unpaired t-test, not statistically significant). During follow-up, 56 recurrences occurred, making a total of 110 episodes of vitreous haemorrhage in 79 eyes of 54 patients. The mean (range) follow-up post haemorrhage was 1067 (77–3842) days, with an average of 1.02 recurrences. Age, gender, diabetes type (I or II) or diabetic control, presence of hypertension or hypercholesterolaemia, and macrovascular complications were not associated with a significant effect on the 1-year recurrence rate. Aspirin (and other antiplatelet or anticoagulant agents) and ACE-inhibitors appeared to neither increase nor decrease the 1-year recurrence rate. However, statin use was significantly associated with a reduction in recurrence (Fisher exact P<0.05, two-tailed), with an odds ratio (95% CI) of 0.25 (0.1–0.95). Conclusion: In this retrospective analysis, the onset of preretinal/vitreous haemorrhage was not found to be accelerated by gender, hypertension, hypercholesterolaemia, evidence of macrovascular disease, or HbA1c. Neither aspirin nor ACE-inhibitor administration accelerated the onset or recurrence of first vitreous haemorrhage. Statins may have a protective role, both delaying the onset and reducing the recurrence of haemorrhage.
Abstract:
PURPOSE: Conventional staging methods are inadequate to identify patients with stage II colon cancer (CC) who are at high risk of recurrence after surgery with curative intent. ColDx is a gene expression, microarray-based assay shown to be independently prognostic for recurrence-free interval (RFI) and overall survival in CC. The objective of this study was to further validate ColDx using formalin-fixed, paraffin-embedded specimens collected as part of the Alliance phase III trial, C9581.
PATIENTS AND METHODS: C9581 evaluated edrecolomab versus observation in patients with stage II CC and reported no survival benefit. Under an initial case-cohort sampling design, a randomly selected subcohort (RS) comprised 514 patients from 901 eligible patients with available tissue. Forty-nine additional patients with recurrence events were included in the analysis. Final analysis comprised 393 patients: 360 RS (58 events) and 33 non-RS events. Risk status was determined for each patient by ColDx. The Self-Prentice method was used to test the association between the resulting ColDx risk score and RFI adjusting for standard prognostic variables.
RESULTS: Fifty-five percent of patients (216 of 393) were classified as high risk. After adjustment for prognostic variables that included mismatch repair (MMR) deficiency, ColDx high-risk patients exhibited significantly worse RFI (multivariable hazard ratio, 2.13; 95% CI, 1.3 to 3.5; P < .01). Age and MMR status were marginally significant. RFI at 5 years for patients classified as high risk was 82% (95% CI, 79% to 85%), compared with 91% (95% CI, 89% to 93%) for patients classified as low risk.
CONCLUSION: ColDx is associated with RFI in the C9581 subsample in the presence of other prognostic factors, including MMR deficiency. ColDx could be incorporated with the traditional clinical markers of risk to refine patient prognosis.
Abstract:
PURPOSE: This study sought to establish whether functional analysis of the ATM-p53-p21 pathway adds to the information provided by currently available prognostic factors in patients with chronic lymphocytic leukemia (CLL) requiring frontline chemotherapy. EXPERIMENTAL DESIGN: Cryopreserved blood mononuclear cells from 278 patients entering the LRF CLL4 trial comparing chlorambucil, fludarabine, and fludarabine plus cyclophosphamide were analyzed for ATM-p53-p21 pathway defects using an ex vivo functional assay that uses ionizing radiation to activate ATM and flow cytometry to measure upregulation of p53 and p21 proteins. Clinical endpoints were compared between groups of patients defined by their pathway status. RESULTS: ATM-p53-p21 pathway defects of four different types (A, B, C, and D) were identified in 194 of 278 (70%) samples. The type A defect (high constitutive p53 expression combined with impaired p21 upregulation) and the type C defect (impaired p21 upregulation despite an intact p53 response) were each associated with short progression-free survival. The type A defect was associated with chemoresistance, whereas the type C defect was associated with early relapse. As expected, the type A defect was strongly associated with TP53 deletion/mutation. In contrast, the type C defect was not associated with any of the other prognostic factors examined, including TP53/ATM deletion, TP53 mutation, and IGHV mutational status. Detection of the type C defect added to the prognostic information provided by TP53/ATM deletion, TP53 mutation, and IGHV status. CONCLUSION: Our findings implicate blockade of the ATM-p53-p21 pathway at the level of p21 as a hitherto unrecognized determinant of early disease recurrence following successful cytoreduction.
Abstract:
The present study was aimed at assessing the experience of a single referral center with recurrent varicose veins of the legs (RVL) over the period 1993-2008. Among a total of 846 procedures for Leg Varices (LV), 74 procedures were for RVL (8.7%). The causes of recurrence were classified as “classic”: insufficient crossectomy (13); incompetent perforating veins (13); reticular phlebectasia (22); small saphenous vein insufficiency (9); accessory saphenous veins (4); or “particular”: post-hemodynamic treatment (5); incomplete stripping (1); Sapheno-Femoral Junction (SFJ) neo-vascularization (5); post-thermal ablation (2). For the “classic” RVL, treatment consisted essentially of completing the previous treatment, whether the problem was linked to an insufficient earlier treatment or was due to a later onset. The most common cause in our series was reticular phlebectasia; when simple sclerosing injections were not sufficient, it was treated by phlebectomy according to Mueller. The “particular” cases classified as 1, 2 and 4 were also treated by completing the traditional stripping procedure (plus crossectomy if this had not been done previously), considered to be the gold standard. In the presence of SFJ neo-vascularization, with or without cavernoma, approximately 5 cm of the femoral vein were explored, the afferent vessels ligated and, if a cavernoma was present, it was removed. Although inguinal neo-angiogenesis is a possible mechanism, some doubt can be raised as to its importance as a primary factor in causing recurrent varicose veins, rather than their being due to a preexisting vein left in situ because it was ignored, regarded as insignificant, or poorly evident. In conclusion, we stress that LV is a progressive disease, so treatment is unlikely to be confined to a single procedure. It is important to plan adequate monitoring during follow-up, and to be ready to reoperate when new problems present that, if left untreated, could lead the patient to doubt the validity and efficacy of the original treatment.
Abstract:
Objective: To compare the efficacy and safety of primaquine regimens currently used to prevent relapses of Plasmodium vivax malaria. Methods: A systematic review was carried out to identify clinical trials evaluating the efficacy and safety, in preventing malaria recurrences by P. vivax, of the primaquine regimen of 0.5 mg/kg/day for 7 or 14 days compared to the standard regimen of 0.25 mg/kg/day for 14 days. Efficacy of primaquine was determined according to the cumulative incidence of recurrences after 28 days. The overall relative risk was estimated with a fixed-effects meta-analysis. Results: Seven studies of the 0.5 mg/kg/day for 7 days regimen were identified, showing an incidence of recurrence between 0% and 20% over 60-210 days of follow-up; only 4 compared it with the standard regimen of 0.25 mg/kg/day for 14 days, and no difference in recurrences between the two regimens was found (RR = 0.977, 95% CI = 0.670 to 1.423). Three clinical trials using the 0.5 mg/kg/day for 14 days regimen were identified, with an incidence of recurrences between 1.8% and 18.0% over 330-365 days; only one compared it with the standard regimen (RR = 0.846, 95% CI = 0.484 to 1.477). A high risk of bias and differences in the handling of the included studies were found. Conclusion: The available evidence is insufficient to determine whether the primaquine regimens currently used as alternatives to the standard treatment have better efficacy and safety in preventing relapse of P. vivax. Clinical trials are required to guide changes in the treatment regimen of vivax malaria.
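For readers unfamiliar with the pooling step, the fixed-effects (inverse-variance) estimate of the overall relative risk can be sketched as below; the per-study counts are hypothetical and are not the data analyzed in this review.

```python
import numpy as np

# Hypothetical per-study 2x2 counts: (recurrences_alt, n_alt, recurrences_std, n_std)
studies = [(3, 50, 4, 48), (5, 60, 5, 62), (2, 40, 3, 41), (6, 55, 5, 57)]

log_rr, weights = [], []
for a, n1, c, n2 in studies:
    rr = (a / n1) / (c / n2)               # study-level relative risk
    se = np.sqrt(1/a - 1/n1 + 1/c - 1/n2)  # standard error of log(RR)
    log_rr.append(np.log(rr))
    weights.append(1 / se**2)              # inverse-variance weight

pooled = np.average(log_rr, weights=weights)  # fixed-effects pooled log(RR)
se_pooled = np.sqrt(1 / np.sum(weights))
lo, hi = np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled)
print(f"Pooled RR = {np.exp(pooled):.3f} (95% CI {lo:.3f} to {hi:.3f})")
```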
Abstract:
The Cancer Genome Atlas (TCGA) collaborative project identified four distinct prognostic groups of endometrial carcinoma (EC) based on molecular alterations: (i) the ultramutated subtype, which encompasses POLE-mutated (POLE) cases; (ii) the hypermutated subtype, characterized by mismatch repair deficiency (MMRd); (iii) the copy-number high subtype, with p53 abnormal/mutated features (p53abn); (iv) the copy-number low subtype, known as No Specific Molecular Profile (NSMP). Despite the prognostic value of the TCGA molecular classification, NSMP tumors present a wide variability in molecular alterations and biological aggressiveness. This study aims to investigate the impact of ARID1A and CTNNB1/β-catenin alterations, assessed by targeted next-generation sequencing (NGS) and immunohistochemistry (IHC), in a consecutive series of 125 molecularly classified ECs. NGS and IHC were used to assign surrogate TCGA groups and to identify molecular alterations of multiple target genes, including POLE, PTEN, ARID1A, CTNNB1, and TP53. Analysis of associations with clinicopathologic parameters, molecular subtypes, and outcomes identified the NSMP category as the most heterogeneous group in terms of clinicopathologic features and outcome. Integration of the surrogate TCGA molecular classification with ARID1A and β-catenin analysis showed that NSMP cases with ARID1A mutation were characterized by the worst outcome, with early recurrence, while NSMP tumors with wild-type ARID1A and β-catenin alteration had indolent clinicopathologic features and no recurrence. This study indicates that the identification of ARID1A and β-catenin alterations in EC represents a simple and effective way to characterize NSMP tumor aggressiveness and metastatic potential.
Abstract:
The Fourier transform-infrared (FT-IR) signature of dry samples of DNA and DNA-polypeptide complexes, as studied by IR microspectroscopy using a diamond attenuated total reflection (ATR) objective, has revealed important discriminatory characteristics relative to the PO2(-) stretching vibrations. However, DNA IR marks that provide information on the sample's richness in hydrogen bonds have not been resolved in the spectral profiles obtained with this objective. Here we investigated the performance of an all-reflecting objective (ARO) for analysis of the FT-IR signal of hydrogen bonds in DNA samples differing in base richness type (salmon testis vs calf thymus). The results obtained using the ARO indicate prominent band peaks in the spectral region representative of the vibration of nitrogenous base hydrogen bonds and of NH and NH2 groups. When using the ARO, the band areas in this spectral region differ in agreement with the DNA base richness type. A peak assigned to adenine was more evident in the AT-rich salmon DNA using either the ARO or the ATR objective. It is concluded that, for the discrimination of DNA IR hydrogen bond vibrations associated with varying base type proportions, the use of an ARO is recommended.
Abstract:
Although various abutment connections and materials have recently been introduced, insufficient data exist regarding the effect of stress distribution on their mechanical performance. The purpose of this study was to investigate the effect of different abutment materials and platform connections on stress distribution in single anterior implant-supported restorations with the finite element method. Nine experimental groups were modeled from the combination of 3 platform connections (external hexagon, internal hexagon, and Morse tapered) and 3 abutment materials (titanium, zirconia, and hybrid) as follows: external hexagon-titanium, external hexagon-zirconia, external hexagon-hybrid, internal hexagon-titanium, internal hexagon-zirconia, internal hexagon-hybrid, Morse tapered-titanium, Morse tapered-zirconia, and Morse tapered-hybrid. The finite element models consisted of a 4×13-mm implant, an anatomic abutment, and a lithium disilicate central incisor crown cemented over the abutment. A 49 N occlusal load was applied in 6 steps to simulate the incisal guidance. The equivalent von Mises stress (σvM) was used for both the qualitative and quantitative evaluation of the implant and abutment in all the groups, and the maximum (σmax) and minimum (σmin) principal stresses were used for the numerical comparison of the zirconia parts. The highest abutment σvM occurred in the Morse-tapered groups and the lowest in the external hexagon-hybrid, internal hexagon-titanium, and internal hexagon-hybrid groups. The σmax and σmin values were lower in the hybrid groups than in the zirconia groups. The stress distribution was concentrated at the abutment-implant interface in all the groups, regardless of the platform connection or abutment material. The platform connection influenced the stress on the abutments more than the abutment material did. The stress values for the implants were similar among the different platform connections, but greater stress concentrations were observed in the internal connections.
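For reference, the equivalent von Mises stress compared across groups follows the standard definition in terms of the principal stresses (the specific post-processing settings of the study are not detailed in the abstract):

```latex
\sigma_{vM} \;=\; \sqrt{\tfrac{1}{2}\left[(\sigma_1-\sigma_2)^2 + (\sigma_2-\sigma_3)^2 + (\sigma_3-\sigma_1)^2\right]}
```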
Abstract:
Current guidelines have advised against the performance of (131)I-iodide diagnostic whole body scintigraphy (dxWBS) in order to minimize the occurrence of stunning and to guarantee the efficiency of radioiodine therapy (RIT). The aim of the study was to evaluate the impact of stunning on the efficacy of RIT and on disease outcome. This retrospective analysis included 208 patients with differentiated thyroid cancer managed according to the same protocol and followed up for 12-159 months (mean 30 ± 69 months). Patients received RIT in doses ranging from 3,700 to 11,100 MBq (100 mCi to 300 mCi). Post-RIT whole body scintigraphy images were acquired 10 days after RIT in all patients. In addition, images were also acquired 24-48 hours after therapy in 22 patients. Outcome was classified as no evidence of disease (NED), stable disease (SD), or progressive disease (PD). Thyroid stunning occurred in 40 patients (19.2%), including 26 patients with NED and 14 patients with SD. A multivariate analysis showed no association between disease outcome and the occurrence of stunning (p = 0.3476). The efficacy of RIT and disease outcome do not seem to be related to thyroid stunning.