35 results for Quantitative verification
Abstract:
Building a computational model of a complex biological system is an iterative process. It starts from an abstraction of the process and then incorporates more detail about the specific biochemical reactions, which changes the model fit. Meanwhile, the model's numerical properties, such as its fit and validation, should be preserved. However, refitting the model after each refinement iteration is computationally expensive. An alternative approach, known as quantitative model refinement, guarantees that the model fit is preserved without refitting the model after each refinement iteration. The aim of this thesis is to develop and implement a tool called ModelRef that performs quantitative model refinement automatically. It is implemented both as a stand-alone Java application and as a component of the Anduril framework. ModelRef performs data refinement of a model and generates the results in two well-known formats (SBML and CPS). The tool reduces the time and resources needed, as well as the errors introduced, compared with the traditional approach of refitting the whole model after each refinement.
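As an illustration of the underlying idea only (not ModelRef's actual implementation or API), the following minimal Java sketch shows one data-refinement step: a species is replaced by a set of subspecies whose initial amounts sum to the original value, so the refined model reproduces the original numerical behaviour without refitting. All class and method names here are hypothetical.

    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Minimal sketch of quantitative (data) model refinement: replace one
    // species with several subspecies whose initial amounts sum to the
    // original amount, so the refined model preserves the original fit.
    // Names (RefinementSketch, refineSpecies) are illustrative only.
    public class RefinementSketch {

        // model: species name -> initial amount
        static Map<String, Double> refineSpecies(Map<String, Double> model,
                                                 String species,
                                                 List<String> subspecies) {
            Map<String, Double> refined = new LinkedHashMap<>(model);
            double total = refined.remove(species);
            // Distribute the original amount uniformly; any split that sums
            // to the original value preserves the numerical behaviour.
            double share = total / subspecies.size();
            for (String sub : subspecies) {
                refined.put(sub, share);
            }
            return refined;
        }

        public static void main(String[] args) {
            Map<String, Double> model = new LinkedHashMap<>();
            model.put("S", 10.0);
            Map<String, Double> refined =
                refineSpecies(model, "S", List.of("S_a", "S_b"));
            System.out.println(refined); // {S_a=5.0, S_b=5.0}
        }
    }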
Abstract:
Atherosclerosis is a chronic and progressive disease of the vasculature. Increasing coronary atherosclerosis can lead to obstructive coronary artery disease (CAD) or myocardial infarction. Computed tomography angiography (CTA) allows noninvasive assessment of coronary anatomy and quantitation of atherosclerotic burden. Myocardial blood flow (MBF) can be accurately measured in absolute terms (mL/g/min) by positron emission tomography (PET) with [15O]H2O as a radiotracer. We studied coronary microvascular dysfunction as a risk factor for future coronary calcification in healthy young men by measuring the coronary flow reserve (CFR), the ratio of hyperemic to resting MBF. Impaired vasodilator function was not linked with accelerated atherosclerosis 11 years later. Currently, there is global interest in quantitative PET perfusion imaging. We established optimal thresholds of [15O]H2O PET perfusion for the diagnosis of CAD (hyperemic MBF of 2.3 mL/g/min and CFR of 2.5) in the first multicenter study of this type (Turku, Amsterdam and Uppsala). In myocardial bridging, a segment of the coronary artery travels inside the myocardium and can be seen as an intramural course (CTA) or systolic compression (invasive coronary angiography). Myocardial bridging is frequently linked with proximal atherosclerotic plaques. We used quantitative [15O]H2O PET perfusion to evaluate the hemodynamic effects of myocardial bridging. Myocardial bridging was not associated with decreased absolute MBF or increased atherosclerotic burden. Speckle tracking allows quantitative echocardiographic imaging of myocardial deformation. Speckle tracking during dobutamine stress echocardiography was feasible and comparable to subjective wall motion analysis in the diagnosis of CAD. In addition, it correctly risk-stratified patients with multivessel disease and extensive ischemia.
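For readers unfamiliar with the quantities involved, a small hedged Java sketch of how CFR follows from the two PET flow values, with the cut-offs taken from the abstract. The method names are hypothetical, and the rule combining the two thresholds is only illustrative; the abstract does not state how they were combined in the study.

    // Sketch of the quantities discussed above: CFR is hyperemic MBF divided by
    // resting MBF; the abstract reports cut-offs of 2.3 mL/g/min (hyperemic MBF)
    // and 2.5 (CFR). Names and the decision rule are illustrative only.
    public class PerfusionSketch {

        static double coronaryFlowReserve(double restingMbf, double hyperemicMbf) {
            return hyperemicMbf / restingMbf; // both in mL/g/min
        }

        static boolean belowReportedCutoffs(double restingMbf, double hyperemicMbf) {
            double cfr = coronaryFlowReserve(restingMbf, hyperemicMbf);
            // Flags a value below either reported cut-off as abnormal perfusion.
            return hyperemicMbf < 2.3 || cfr < 2.5;
        }

        public static void main(String[] args) {
            System.out.println(belowReportedCutoffs(0.9, 2.0)); // true: CFR ~2.2 and MBF 2.0
            System.out.println(belowReportedCutoffs(0.8, 3.2)); // false: CFR 4.0 and MBF 3.2
        }
    }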
Abstract:
The vast majority of our contemporary society owns a mobile phone, which has led to a dramatic rise in the number of networked computers in recent years. Security issues in these computers have followed the same trend, and nearly everyone is now affected by them. How could the situation be improved? For software engineers, an obvious answer is to build software with security in mind. The problem is then how to define secure software, and how to measure security. This thesis divides the problem into three research questions. First, how can we measure the security of software? Second, what kinds of tools are available for measuring security? And finally, what do these tools reveal about the security of software? Measuring tools of this kind are commonly called metrics. This thesis focuses on the perspective of software engineers in the software design phase. The focus on the design phase means that code-level semantics and programming-language specifics are not discussed in this work. Organizational policy, management issues and the software development process are also out of scope. The first two research questions were studied through a literature review, while the third was studied through a case study. The target of the case study was Apache James, a Java-based email server whose changelog, known security issues and source code were all available. The research revealed that there is consensus in the terminology of software security. Security verification activities are commonly divided into evaluation and assurance. The focus of this work was on assurance, which means verifying one's own work. There are 34 metrics available for security measurement, of which five are evaluation metrics and 29 are assurance metrics. We found, however, that the general quality of these metrics was not good. Only three metrics in the design category passed the inspection criteria and could be used in the case study. The metrics claim to give quantitative information about the security of software, but in practice they were limited to comparing different versions of the same software. Apart from being merely relative, the metrics were unable to detect security issues or point out problems in the design. Furthermore, interpreting the metrics' results was difficult. In conclusion, the general state of software security metrics leaves a lot to be desired. The metrics studied had both theoretical and practical issues and are not suitable for daily engineering workflows. They nevertheless provide a basis for further research, since they point out the areas in which security metrics must improve if verification of security from the design is to be possible.
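To make the "relative" nature of such design-level metrics concrete, here is a hedged Java sketch of a hypothetical count-based metric (it is not one of the three metrics evaluated in the thesis, and the counts are invented): its absolute value carries little meaning, but comparing it across versions of the same design gives a trend, which is exactly the limitation the thesis describes.

    // Hedged illustration of a relative, design-level metric: a crude ratio of
    // security-critical classes to all classes in a design. The metric and the
    // counts are hypothetical; only the change between versions of the same
    // software is interpretable, not the absolute value.
    public class DesignMetricSketch {

        static double criticalClassRatio(int criticalClasses, int totalClasses) {
            return (double) criticalClasses / totalClasses;
        }

        public static void main(String[] args) {
            double versionA = criticalClassRatio(12, 80);  // hypothetical counts
            double versionB = criticalClassRatio(15, 120); // hypothetical counts
            // Only the direction of change is meaningful, not the numbers themselves.
            System.out.printf("A=%.3f B=%.3f trend=%s%n",
                    versionA, versionB, versionB < versionA ? "improving" : "worsening");
        }
    }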