894 results for Quantitative verification
Abstract:
Concurrent programming remains a difficult task even for very experienced programmers. Concurrency research has provided a rich set of tools and mechanisms for dealing with data races and deadlocks, problems that arise from the incorrect use of synchronization. Verifying interesting properties of concurrent programs is harder than for sequential programs because of the intrinsically non-deterministic nature of concurrent execution, which causes an explosion in the number of possible program states and makes manual treatment nearly impossible, and a serious challenge even with the help of computers. Some approaches create programming languages with higher-level constructs for expressing concurrency and synchronization. Other approaches develop reasoning techniques and methods for proving properties, using general theorem provers, model checking, or algorithms specific to a given type system. Lightweight static-analysis approaches apply techniques such as abstract interpretation to find certain kinds of bugs in a conservative way; these techniques generally scale well enough to be applied to large software projects, but the kinds of bugs they can detect are limited. Some properties of interest concern data races and deadlocks, while others concern system security, such as confidentiality and integrity of data. The main goals of this proposal are to identify interesting properties to verify in concurrent systems and to develop techniques and tools for fully automatic verification.
To achieve these goals, emphasis will be placed on the study and development of type systems such as dependent types, type-and-effect systems, and flow-sensitive effect types. These type systems will be applied to models of concurrent programming such as Simple Concurrent Object-Oriented Programming (SCOOP) and Java. Security properties will also be addressed using dedicated type systems.
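As a hypothetical illustration of the kind of bug these verification techniques target, the sketch below shows the standard lock-ordering discipline that prevents deadlock: every thread acquires the two locks in the same global order, so the circular wait that arises from opposite acquisition orders cannot occur. All names are illustrative and not taken from the proposal.

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
counter = 0

def worker():
    # Deadlock-free: every thread acquires lock_a before lock_b.
    # If one thread took lock_b first while another held lock_a,
    # the two could wait on each other forever.
    global counter
    for _ in range(1000):
        with lock_a:
            with lock_b:
                counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: the locks also make the increment race-free
```

A type-and-effect system of the kind the proposal studies would aim to enforce such a discipline statically rather than by programmer convention.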
Abstract:
This project concerns the use of formal methods, more precisely type theory, to guarantee the absence of errors in programs. It follows three directions. 1) Designing new type-checking algorithms based on normalization by evaluation that extend to other type systems. In the near future we will extend results we obtained recently [16,17] to obtain: a simplification of the existing work for systems without the eta rule (two systems will be studied: à la Martin-Löf and à la PTS); the formulation of these checkers for systems with variables (in addition to the systems in [16,17], which are à la de Bruijn); a generalization of the notion of category with families used to give semantics to type theory; a categorical formulation of the notion of normalization by evaluation; and, finally, the application of these algorithms to systems with rewrite rules. For the first expected results, our method is to adapt the proofs of [16,17] to the new systems. The importance of this work is that it will make proof assistants based on type theory more automatable, and therefore easier to use. 2) Using type theory to certify compilers, pursuing the never-explored proposal of [22] of an abstract approach based on functorial categories for Algol-like languages. The method is to certify the language Peal [29] in type theory and then gradually add functionality until a correct compiler for Forsythe [23] is obtained; in this period we expect to add several extensions. This matters because only a certified compiler guarantees that a correct source program is compiled to a correct object program, which is crucial for any verification process based on verifying source code. 3) Formalizing systems of session types. Several proposed formulations have been shown to be faulty [30], which makes their formalization seem worthwhile. Over the course of the project we expect to obtain a formalization that yields a type-checking algorithm and proofs of the usual properties of these systems. The contribution is to shed some light on formulations whose errors reveal that the topic has not yet reached sufficient maturity or understanding in the community.
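As a rough sketch of the normalization-by-evaluation idea underlying direction 1, the following is a minimal untyped lambda-calculus normalizer in Python (it assumes nothing from [16,17]; term and value encodings are invented for the example): terms are evaluated into semantic values, namely host-language closures plus neutral terms, and then reified back into beta-normal forms with freshly named binders.

```python
# Terms: ("var", name) | ("lam", name, body) | ("app", fun, arg)
# Values: Python closures for lambdas; ("nvar", n) / ("napp", f, a) for neutrals.

def evaluate(term, env):
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        _, name, body = term
        # A lambda evaluates to a host-language closure.
        return lambda v: evaluate(body, {**env, name: v})
    _, fun, arg = term  # "app"
    fv, av = evaluate(fun, env), evaluate(arg, env)
    # Beta-reduce if the function is a closure; otherwise build a neutral.
    return fv(av) if callable(fv) else ("napp", fv, av)

def reify(value, fresh=0):
    # Read a semantic value back into a normal-form term.
    if callable(value):
        x = f"x{fresh}"
        return ("lam", x, reify(value(("nvar", x)), fresh + 1))
    if value[0] == "nvar":
        return ("var", value[1])
    _, f, a = value  # "napp"
    return ("app", reify(f, fresh), reify(a, fresh))

def normalize(term):
    return reify(evaluate(term, {}))

# (\x. x) (\y. y) normalizes to the identity, with a fresh binder name.
print(normalize(("app", ("lam", "x", ("var", "x")),
                        ("lam", "y", ("var", "y")))))
# -> ("lam", "x0", ("var", "x0"))
```

The project's actual algorithms target dependently typed systems and come with correctness proofs; this sketch only shows the evaluate/reify round trip that gives the technique its name.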
Abstract:
Magdeburg, Univ., Faculty of Computer Science, Diss., 2011
Abstract:
Robust control, vibration control, smart structures, Quantitative Feedback Theory, optimization
Abstract:
Background: The diagnostic accuracy of 64-slice MDCT in comparison with IVUS has been poorly described and is mainly restricted to reports analyzing segments with documented atherosclerotic plaques. Objectives: We compared 64-slice multidetector computed tomography (MDCT) with gray scale intravascular ultrasound (IVUS) for the evaluation of coronary lumen dimensions in the context of a comprehensive analysis, including segments with absent or mild disease. Methods: The 64-slice MDCT was performed within 72 h before the IVUS imaging, which was obtained for at least one coronary, regardless of the presence of luminal stenosis at angiography. A total of 21 patients were included, with 70 imaged vessels (total length 114.6 ± 38.3 mm per patient). A coronary plaque was diagnosed in segments with plaque burden > 40%. Results: At patient, vessel, and segment levels, average lumen area, minimal lumen area, and minimal lumen diameter were highly correlated between IVUS and 64-slice MDCT (p < 0.01). However, 64-slice MDCT tended to underestimate the lumen size with a relatively wide dispersion of the differences. The comparison between 64-slice MDCT and IVUS lumen measurements was not substantially affected by the presence or absence of an underlying plaque. In addition, 64-slice MDCT showed good global accuracy for the detection of IVUS parameters associated with flow-limiting lesions. Conclusions: In a comprehensive, multi-territory, and whole-artery analysis, the assessment of coronary lumen by 64-slice MDCT compared with coronary IVUS showed a good overall diagnostic ability, regardless of the presence or absence of underlying atherosclerotic plaques.
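The "dispersion of the differences" between paired MDCT and IVUS lumen measurements is conventionally summarized with Bland-Altman statistics (mean bias and 95% limits of agreement). A minimal sketch, using invented numbers rather than the study's data:

```python
import statistics

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired minimal lumen areas (mm^2): MDCT vs. IVUS
mdct = [8.1, 6.4, 9.0, 5.2, 7.7]
ivus = [8.5, 6.9, 9.2, 5.8, 8.1]
bias, (low, high) = bland_altman(mdct, ivus)
print(round(bias, 2))  # -0.42: MDCT reads lower than IVUS on average
```

A negative bias with wide limits of agreement corresponds to the abstract's finding that MDCT tends to underestimate lumen size with relatively wide dispersion.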
Abstract:
[s.c.]
Abstract:
Magdeburg, Univ., Faculty of Computer Science, Diss., 2015
Abstract:
Magdeburg, Univ., Faculty of Electrical Engineering and Information Technology, Diss., 2015
Abstract:
Magdeburg, Univ., Medical Faculty, Diss., 2015
Abstract:
This study is one of the first contributions to knowledge of the quantitative fidelity of recent freshwater molluscan assemblages in subtropical rivers. Thanatocoenoses and biocoenoses were studied in straight and meandering-to-braided sectors of the middle course of the Touro Passo River, a fourth-order tributary of the Uruguay River located in the westernmost part of the State of Rio Grande do Sul. Sampling was carried out with 5 m² quadrats, five in each sector, for a total sampled area of 50 m². Samples were also taken in a lentic environment (an abandoned meander) with intermittent communication with the Touro Passo River, in order to record out-of-habitat shell transport from the lentic communities to the main river channel. The results show that, despite frequent oscillation of the water level, the biocoenosis of the Touro Passo River shows high ecological fidelity and is little influenced by the neighboring lentic environments. The taxonomic composition and some features of community structure, especially the dominant species, also reflect ecological differences between the two main sectors sampled, such as the greater habitat complexity of the meandering sector. Regarding quantitative fidelity, at the river scale 60% of the species found alive were also found dead, and 47.3% of the species found dead were also found alive. Moreover, 72% of the dead individuals belong to species also found alive, a value that may be related to the good rank-order correlation obtained for the live/dead assemblages. Consequently, the dominant species of the thanatocoenoses could be used to infer ecological attributes of the biocoenoses. The values of all the indexes analyzed were highly variable in small-scale (quadrat) samples, but were closer to those registered in previous studies when analyzed at station and river scales.
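The live/dead fidelity percentages reported above reduce to simple set arithmetic over the species lists from each assemblage. A hypothetical sketch (the species lists are invented, not the study's data):

```python
def fidelity(live, dead):
    """Percent of live species also found dead, and of dead species also found alive."""
    live, dead = set(live), set(dead)
    shared = live & dead
    live_also_dead = 100 * len(shared) / len(live)
    dead_also_alive = 100 * len(shared) / len(dead)
    return live_also_dead, dead_also_alive

# Hypothetical species lists for one river sector
live_species = ["A", "B", "C", "D", "E"]
dead_species = ["A", "B", "C", "F"]
print(fidelity(live_species, dead_species))  # (60.0, 75.0)
```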
Abstract:
A quantitative method for determining viral pollution in large volumes of water using ferric hydroxide gel impregnated on the surface of a glassfibre cartridge filter. The use of ferric hydroxide gel impregnated on the surface of a glassfibre cartridge filter enabled us to recover 62.5% of virus (Poliomyelitis type I, LSc strain) exogenously added to 400 liters of tap water. The virus concentrator system consists of four cartridge filters; the first three are clarifiers, which physically remove contaminants without significant virus loss at this stage. The last cartridge filter is impregnated with the ferric hydroxide gel, on which the virus is adsorbed. After the required volume of water has been processed, the last filter is removed from the system and the viruses are recovered from the gel with 1 liter of glycine/NaOH buffer at pH 11. The eluate is immediately clarified through a series of cellulose acetate membranes mounted in a 142 mm Millipore filter holder. For the second concentration step, 1 N HCl is added slowly to the eluate to reach pH 3.5-4. MgCl2 is added to a final concentration of 0.05 M, and the viruses are readsorbed on a 0.45 µm porosity (HA) cellulose acetate membrane mounted in a 90 mm Millipore filter holder. The viruses are recovered using the same eluent plus 10% fetal calf serum, to a final volume of 3 ml. In this way it was possible to concentrate virus from 400 liters of tap water into 1 liter in the first concentration step, and into a final volume of just 3 ml in the second. The efficiency, simplicity, and low operational cost of the method make it feasible for studying viral pollution of recreational and tap-water sources.
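The two-step volume reduction described above amounts to a simple concentration-factor calculation, using the volumes given in the abstract (400 liters → 1 liter eluate → 3 ml final):

```python
def concentration_factor(initial_ml, final_ml):
    """Fold-concentration achieved by reducing a sample volume."""
    return initial_ml / final_ml

step1 = concentration_factor(400_000, 1_000)  # 400 L of tap water -> 1 L eluate
step2 = concentration_factor(1_000, 3)        # 1 L eluate -> 3 ml final volume
overall = concentration_factor(400_000, 3)
print(round(step1), round(step2), round(overall))  # 400 333 133333
```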
Abstract:
Molecular monitoring of BCR/ABL transcripts by real-time quantitative reverse transcription PCR (qRT-PCR) is an essential technique for the clinical management of patients with BCR/ABL-positive CML and ALL. Though quantitative BCR/ABL assays are performed in hundreds of laboratories worldwide, results among these laboratories cannot be reliably compared due to heterogeneity in test methods, data analysis, and reporting, and to the lack of quantitative standards. Recent efforts towards standardization have been limited in scope. Aliquots of RNA were sent to clinical test centers worldwide in order to evaluate methods and reporting for e1a2, b2a2, and b3a2 transcript levels using their own qRT-PCR assays. Total RNA was isolated from tissue culture cells that expressed each of the different BCR/ABL transcripts. Serial log dilutions were prepared, ranging from 10^0 to 10^-5, in RNA isolated from HL60 cells. Laboratories performed 5 independent qRT-PCR reactions for each sample type at each dilution. In addition, 15 qRT-PCR reactions of the 10^-3 b3a2 RNA dilution were run to assess reproducibility within and between laboratories. Participants were asked to run the samples following their standard protocols and to report cycle threshold (Ct) values, quantitative values for BCR/ABL and housekeeping genes, and ratios of BCR/ABL to housekeeping genes for each sample RNA. Thirty-seven (n=37) participants have submitted qRT-PCR results for analysis (36, 37, and 34 labs generated data for b2a2, b3a2, and e1a2, respectively). The limit of detection for this study was defined as the lowest dilution at which a Ct value could be detected for all 5 replicates. For b2a2, 15, 16, 4, and 1 lab(s) showed a limit of detection at the 10^-5, 10^-4, 10^-3, and 10^-2 dilutions, respectively. For b3a2, 20, 13, and 4 labs showed a limit of detection at the 10^-5, 10^-4, and 10^-3 dilutions, respectively. For e1a2, 10, 21, 2, and 1 lab(s) showed a limit of detection at the 10^-5, 10^-4, 10^-3, and 10^-2 dilutions, respectively.
Log %BCR/ABL ratio values provided a method for comparing results between the different laboratories for each BCR/ABL dilution series. Linear regression analysis revealed concordance among the majority of participant data over the 10^-1 to 10^-4 dilutions. The overall slope values showed comparable results among the majority of b2a2 (mean = 0.939; median = 0.9627; range 0.399-1.1872), b3a2 (mean = 0.925; median = 0.922; range 0.625-1.140), and e1a2 (mean = 0.897; median = 0.909; range 0.5174-1.138) laboratory results (Fig. 1-3). Thirty-four (n=34) of the 37 laboratories reported Ct values for all 15 replicates, and only those with a complete data set were included in the inter-lab calculations. Eleven laboratories either did not report their copy-number data or used other reporting units such as nanograms or cell numbers; therefore, only 26 laboratories were included in the overall analysis of copy numbers. The median copy number was 348.4, with a range from 15.6 to 547,000 copies (approximately a 4.5-log difference); the median intra-lab %CV was 19.2%, with a range from 4.2% to 82.6%. While our international performance evaluation using serially diluted RNA samples has reinforced the fact that heterogeneity exists among clinical laboratories, it has also demonstrated that performance within a laboratory is overall very consistent. Accordingly, the availability of defined BCR/ABL RNAs may facilitate the validation of all phases of quantitative BCR/ABL analysis and may be extremely useful as a tool for monitoring assay performance. Ongoing analyses of these materials, along with the development of additional control materials, may solidify consensus around their application in routine laboratory testing and possible integration into worldwide efforts to standardize quantitative BCR/ABL testing.
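The intra-lab %CV quoted above is the coefficient of variation of replicate copy-number measurements: the sample standard deviation expressed as a percentage of the mean. A minimal sketch with hypothetical replicate values (not the study's data):

```python
import statistics

def percent_cv(values):
    """Coefficient of variation: sample standard deviation as a percent of the mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate BCR/ABL copy numbers reported by one laboratory
replicates = [90.0, 100.0, 110.0]
print(percent_cv(replicates))  # 10.0
```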