901 results for Formal Methods. Component-Based Development. Competition. Model Checking
Abstract:
Background: Squamous cell carcinoma (SCC) is one of the most common human cancers worldwide. In SCC, tumour development is accompanied by an immune response that leads to massive tumour infiltration by inflammatory cells and, consequently, to local and systemic production of cytokines, chemokines and other mediators. Studies in both humans and animal models indicate that imbalances in these inflammatory mediators are associated with cancer development. Methods: We used a multistage model of SCC to examine the involvement of elastase (ELA), myeloperoxidase (MPO), nitric oxide (NO), cytokines (IL-6, IL-10, IL-13, IL-17, TGF-β and TNF-α), and neutrophils and macrophages in tumour development. Results: ELA and MPO activity and NO, IL-10, IL-17, TNF-α and TGF-β levels were increased in the precancerous microenvironment. Significantly higher levels of IL-6 and lower levels of IL-10 were detected at 4 weeks following 7,12-Dimethylbenz(a)anthracene (DMBA) treatment. Similar levels of IL-13 were detected in the precancerous microenvironment compared with control tissue. We identified significant increases in the number of GR-1+ neutrophils and F4/80+/GR-1- infiltrating cells in tissues at 4 and 8 weeks following treatment, and a higher percentage of tumour-associated macrophages (TAM) expressing both GR-1 and F4/80, an activated phenotype, at 16 weeks. We found a significant correlation between the lesions and the levels of IL-10, IL-17 and ELA and the number of activated TAMs. Additionally, neutrophil infiltrate was positively correlated with MPO and NO levels in the lesions. Conclusion: Our results indicate an imbalance of inflammatory mediators in precancerous SCC, caused by neutrophils and macrophages and culminating in pro-tumour local tissue alterations.
Abstract:
Protocol-based medicine is an interdisciplinary focal point of computer science. As a particular segment of the medical subdisciplines, it permits the relatively formal specification of processes in the three areas of prevention, diagnosis and therapy. The latter has always received particular attention and, within the framework of clinical trials, has long served as a projection surface for information technology concepts. The euphoria of the early years, however, is tempered by every stocktaking. Only very few of the countless projects have found their way into routine everyday practice. Most undertakings have failed on the illusion that medical workflows are fully computable. The traditional view of clinical practice rests on a block-oriented conception of the therapy execution process. It arises from decomposing the process into individual therapy branches, which are assembled from predefined blocks. These blocks can be executed sequentially or in parallel and are themselves composed of sets of elements that represent the lowest-level activities. The block-oriented structural model is complemented by a rule-oriented execution model. A complex set of rules defines conditions for the temporal and logical dependencies of the blocks, whose arrangement is formed by the execution process. Modelling therapy execution first faces the fundamental question of how far the traditional view is suitable as an internal representation. The overarching goal is the integration of the different levels of the therapy specification. This includes not only the structural component but above all the execution component. A suitable rule model is required that meets the specific needs of therapy monitoring. The central task is to bring these different levels together. A sensible alternative to the traditional view is provided by the state-oriented model of the therapy execution process. The state-oriented model rests on the view that the entire therapy execution process ultimately describes a linear sequence of states, where every state transition is initiated by an event, is tied to certain conditions, and may trigger certain actions. The parallelism of the block-oriented model recedes into the background, because the measures to be carried out are merely properties of the states and not structural elements of the execution specification. At any point in time exactly one state is active, and it represents one of finitely many clinical situations, with all its specific activities and execution rules. The advantages of the state-oriented model lie in integration. The basic structure combines the static representation of the possible phase arrangements with the dynamic execution of active rules. The original contents of the block-oriented model are reproduced as ordinary properties of the states and thus constitute only a special case of the state-based view. Further ways of enriching the states with additional detail are both conceivable and sensible. The basic structure, however, remains the same under every extension. The result is a reusable skeleton, a common denominator for fulfilling the monitoring task.
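The following Python sketch is an editorial illustration of this state-oriented view, not code from the thesis; the states, events, conditions and actions are hypothetical placeholders. It only shows the essentials described above: exactly one state is active at a time, transitions are triggered by events, tied to conditions, and may fire actions, and the planned measures are plain properties of each state.

```python
# Minimal sketch of a state-oriented therapy-execution model.
# State names, events and activities are hypothetical placeholders.

class State:
    def __init__(self, name, activities):
        self.name = name                  # one of finitely many clinical situations
        self.activities = activities      # measures are properties of the state
        self.transitions = []             # (event, condition, action, target)

    def add_transition(self, event, condition, action, target):
        self.transitions.append((event, condition, action, target))


class TherapyExecution:
    """Exactly one state is active at any point in time."""

    def __init__(self, initial_state):
        self.current = initial_state

    def handle(self, event, context):
        for ev, condition, action, target in self.current.transitions:
            if ev == event and condition(context):   # transition is tied to conditions
                action(context)                       # ... and may trigger actions
                self.current = target                 # linear sequence of states
                return True
        return False


# Hypothetical usage: two phases of a treatment protocol.
induction = State("induction", activities=["administer agent A", "monitor blood count"])
consolidation = State("consolidation", activities=["administer agent B"])
induction.add_transition(
    event="cycle_completed",
    condition=lambda ctx: ctx["blood_count_ok"],
    action=lambda ctx: print("schedule consolidation"),
    target=consolidation,
)

run = TherapyExecution(induction)
run.handle("cycle_completed", {"blood_count_ok": True})
print(run.current.name)   # -> consolidation
```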
Abstract:
The task underlying this dissertation can be described briefly as the investigation of component-based concepts for use in software development by end users. Over the last 20 to 30 years, the technical environment in which a large share of employees perform their daily tasks has changed fundamentally. The computer, formerly in the shape of a mainframe and exclusively the domain of specialists, is now a natural part of everyday work. Working with application programs that allow the user, within certain limits, to define new functionality of their own has become so natural in many areas that many of these activities are not consciously perceived as programming. Since these users are not necessarily trained in software development, they need appropriate support for these activities. This makes clear the practical relevance of research in this area. To build a programming system for end users, a flexible application framework is first developed that is suitable as a basis for constructing such systems. In software projects, changing requirements and the needs that result from them are an important concern. The framework design addresses this through concepts for providing reusable functionality through the framework and through means for adapting and extending the existing functionality. Notable here are, on the one hand, the use of a service-oriented architecture within the application and, on the other, a component-oriented variant of the command pattern. In addition, a concept for encapsulating end-user programming models in components is worked out. This approach makes it possible to use different models as the basis of the designed development environment. In the further course of the thesis, a programming model is designed and implemented using the aforementioned framework. For this model to be suitable for end users, the level of abstraction used to describe a software system has to be raised. This is achieved through the use of components and a message-based composition mechanism. The implementation carried out is not yet tied to concrete application families; these adaptations are made in a further step for two different application domains.
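As a rough illustration of the kind of component-oriented command pattern mentioned above, the following Python sketch shows commands that encapsulate an operation together with its undo, so that user-level actions on components can be executed and reverted uniformly. It is not taken from the dissertation; all class and attribute names are hypothetical.

```python
# Sketch of a component-oriented variant of the command pattern; all names are
# hypothetical and only illustrate the general idea mentioned in the abstract.

from abc import ABC, abstractmethod


class Command(ABC):
    @abstractmethod
    def execute(self): ...

    @abstractmethod
    def undo(self): ...


class SetPropertyCommand(Command):
    """Example command: change a property of a component and remember the old value."""

    def __init__(self, component, name, value):
        self.component, self.name, self.value = component, name, value
        self.previous = None

    def execute(self):
        self.previous = getattr(self.component, self.name, None)
        setattr(self.component, self.name, self.value)

    def undo(self):
        setattr(self.component, self.name, self.previous)


class CommandStack:
    """Central service a framework could expose for executing and undoing commands."""

    def __init__(self):
        self._done = []

    def run(self, command):
        command.execute()
        self._done.append(command)

    def undo_last(self):
        if self._done:
            self._done.pop().undo()


# Hypothetical usage with a simple component object.
class Label: pass

stack = CommandStack()
label = Label()
stack.run(SetPropertyCommand(label, "text", "Hello"))
stack.undo_last()   # label.text is back to its previous value (None)
```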
Abstract:
A control-oriented model of a Dual Clutch Transmission (DCT) was developed for real-time Hardware In the Loop (HIL) applications, to support model-based development of the DCT controller. The model is an innovative attempt to reproduce the fast dynamics of the actuation system while maintaining a step size large enough for real-time applications. The model comprises a detailed physical description of the hydraulic circuit, clutches, synchronizers and gears, together with simplified vehicle and internal combustion engine sub-models. Because the oil circulating in the system has a large bulk modulus, the pressure dynamics are very fast, which can cause instability in a real-time simulation; the same challenge applies to the servo valve dynamics, owing to the very small masses of the moving elements. The hydraulic circuit model has therefore been modified and simplified without losing physical validity, in order to adapt it to the requirements of real-time simulation. The results of offline simulations were compared with on-board measurements to verify the validity of the developed model, which was then implemented in a HIL system and connected to the TCU (Transmission Control Unit). Several tests have been performed: electrical failure tests on sensors and actuators, hydraulic and mechanical failure tests on hydraulic valves, clutches and synchronizers, and application tests covering all the main features of the control performed by the TCU. Being based on physical laws, the model simulates a plausible reaction of the system in every condition. The first intensive use of the HIL application led to the validation of the new safety strategies implemented in the TCU software. A test automation procedure was developed to permit the execution of a pattern of tests without user interaction; fully repeatable tests can be performed for non-regression verification, allowing new software releases to be tested in fully automatic mode.
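The stiffness issue mentioned above can be made explicit with the textbook lumped-parameter pressure build-up equation (a generic form, not necessarily the exact formulation used in this model):

\[
\dot{p} \;=\; \frac{\beta}{V}\,\bigl(Q_{\mathrm{in}} - Q_{\mathrm{out}}\bigr)
\]

With a large bulk modulus \(\beta\) and a small chamber volume \(V\), the time constant of this equation becomes far shorter than a fixed real-time step, so an explicit fixed-step solver either goes unstable or forces the kind of model simplification described in the abstract.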
Abstract:
BACKGROUND: Many HIV-infected patients on highly active antiretroviral therapy (HAART) experience metabolic complications including dyslipidaemia and insulin resistance, which may increase their coronary heart disease (CHD) risk. We developed a prognostic model for CHD tailored to the changes in risk factors observed in patients starting HAART. METHODS: Data from five cohort studies (British Regional Heart Study, Caerphilly and Speedwell Studies, Framingham Offspring Study, Whitehall II) on 13,100 men aged 40-70, with 114,443 years of follow-up, were used. CHD was defined as myocardial infarction or death from CHD. Model fit was assessed using the Akaike Information Criterion; generalizability across cohorts was examined using internal-external cross-validation. RESULTS: A parametric model based on the Gompertz distribution generalized best. Variables included in the model were systolic blood pressure, total cholesterol, high-density lipoprotein cholesterol, triglyceride, glucose, diabetes mellitus, body mass index and smoking status. Compared with patients not on HAART, the estimated CHD hazard ratio (HR) for patients on HAART was 1.46 (95% CI 1.15-1.86) for moderate and 2.48 (95% CI 1.76-3.51) for severe metabolic complications. CONCLUSIONS: The change in the risk of CHD in HIV-infected men starting HAART can be estimated based on typical changes in risk factors, assuming that HRs estimated using data from non-infected men are applicable to HIV-infected men. Based on this model, the risk of CHD is likely to increase, but the increases may often be modest and could be offset by lifestyle changes.
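For readers unfamiliar with the Gompertz model, its proportional-hazards form can be written as below; this is a generic textbook parameterization, and the exact parameterization fitted in the study may differ.

\[
h(t \mid \mathbf{x}) \;=\; \lambda\, e^{\gamma t}\, \exp\!\bigl(\boldsymbol{\beta}^{\top}\mathbf{x}\bigr), \qquad \lambda > 0,
\]

where \(\mathbf{x}\) collects the listed risk factors (systolic blood pressure, cholesterol fractions, glucose, and so on), so a hazard ratio such as 1.46 simply multiplies the baseline Gompertz hazard \(\lambda e^{\gamma t}\).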
Abstract:
Obesity is becoming an epidemic phenomenon in most developed countries. The fundamental cause of obesity and overweight is an energy imbalance between calories consumed and calories expended. It is essential to monitor everyday food intake for obesity prevention and management. Existing dietary assessment methods usually require manual recording and recall of food types and portions. The accuracy of the results relies largely on many uncertain factors such as the user's memory, food knowledge and portion estimation, and is therefore often compromised. Accurate and convenient dietary assessment methods are still lacking and are needed by both the general population and the research community. In this thesis, an automatic food intake assessment method using the cameras and inertial measurement units (IMUs) of smart phones was developed to help people foster a healthy lifestyle. With this method, users use their smart phones before and after a meal to capture images or videos around the meal. The smart phone recognizes the food items, calculates the volume of the food consumed and provides the results to the user. The technical objective is to explore the feasibility of image-based food recognition and image-based volume estimation. This thesis comprises five publications that address four specific goals of this work: (1) to develop a prototype system with existing methods in order to review the methods in the literature, find their drawbacks and explore the feasibility of developing novel methods; (2) based on the prototype system, to investigate new food classification methods that raise the recognition accuracy to a field-application level; (3) to design indexing methods for large-scale image databases to facilitate the development of new food image recognition and retrieval algorithms; (4) to develop novel, convenient and accurate food volume estimation methods using only smart phones with cameras and IMUs. A prototype system was implemented to review existing methods. An image feature detector and descriptor were developed, and a nearest neighbor classifier was implemented to classify food items. A credit card marker method was introduced for metric-scale 3D reconstruction and volume calculation. To increase recognition accuracy, novel multi-view food recognition algorithms were developed to recognize regular-shaped food items. To further increase the accuracy and make the algorithm applicable to arbitrary food items, new food features and new classifiers were designed. The efficiency of the algorithm was increased by developing a novel image indexing method for large-scale image databases. Finally, the volume calculation was improved by reducing the dependence on the marker and introducing IMUs. Sensor fusion techniques combining measurements from cameras and IMUs were explored to infer the metric scale of the 3D model as well as to reduce noise from these sensors.
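The marker-based metric scaling works, in essence, because an object of known physical size fixes the scale of an otherwise scale-free 3D reconstruction. The following minimal Python sketch is an editorial illustration of that principle only, assuming the marker is a standard-size card (ISO/IEC 7810 ID-1, 85.6 mm wide); the numbers and function names are hypothetical and not taken from the thesis.

```python
# Sketch of fiducial-based metric scaling: a known-size reference object
# (e.g. a credit-card-sized marker) fixes the scale of the reconstruction.

CARD_WIDTH_MM = 85.6   # ISO/IEC 7810 ID-1 card width


def metric_scale(card_width_in_reconstruction_units: float) -> float:
    """Scale factor that converts reconstruction units to millimetres."""
    return CARD_WIDTH_MM / card_width_in_reconstruction_units


def volume_mm3(volume_in_reconstruction_units: float, scale: float) -> float:
    # Volumes scale with the cube of the linear scale factor.
    return volume_in_reconstruction_units * scale ** 3


s = metric_scale(2.14)        # hypothetical measured card width in model units
print(volume_mm3(1.0, s))     # hypothetical food volume of 1.0 model unit^3
```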
Abstract:
OBJECTIVES: This paper is concerned with checking goodness-of-fit of binary logistic regression models. For the practitioners of data analysis, the broad classes of procedures for checking goodness-of-fit available in the literature are described. The challenges of model checking in the context of binary logistic regression are reviewed. As a viable solution, a simple graphical procedure for checking goodness-of-fit is proposed. METHODS: The graphical procedure proposed relies on pieces of information available from any logistic analysis; the focus is on combining and presenting these in an informative way. RESULTS: The information gained using this approach is presented with three examples. In the discussion, the proposed method is put into context and compared with other graphical procedures for checking goodness-of-fit of binary logistic models available in the literature. CONCLUSION: A simple graphical method can significantly improve the understanding of any logistic regression analysis and help to prevent faulty conclusions.
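The abstract does not reproduce the plot itself; one common graphical check along these lines groups observations by fitted probability and compares observed with predicted event proportions. The sketch below shows that generic grouped observed-versus-expected variant and is not necessarily the exact procedure proposed in the paper.

```python
# Generic grouped observed-vs-expected check for a binary logistic model.
# One possible graphical diagnostic, not necessarily the paper's procedure.
import numpy as np
import matplotlib.pyplot as plt


def calibration_plot(y, p_hat, n_groups=10):
    """Observed event proportion vs. mean fitted probability per risk group.

    y      : numpy array of 0/1 outcomes
    p_hat  : numpy array of fitted probabilities from the logistic model
    """
    order = np.argsort(p_hat)
    groups = np.array_split(order, n_groups)        # groups of increasing risk
    observed = [y[g].mean() for g in groups]
    expected = [p_hat[g].mean() for g in groups]
    plt.plot(expected, observed, "o")
    plt.plot([0, 1], [0, 1], "--")                  # perfect-calibration line
    plt.xlabel("mean fitted probability")
    plt.ylabel("observed proportion")
    plt.show()
```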
Abstract:
The general model
The aim of this chapter is to introduce a structured overview of the different possibilities available to display and analyze brain electric scalp potentials. First, a general formal model of time-varying distributed EEG potentials is introduced. Based on this model, the most common analysis strategies used in EEG research are introduced and discussed as specific cases of this general model. Both the general model and the particular methods are also expressed in mathematical terms; it is, however, not necessary to understand these terms to understand the chapter. The general model that we propose here is based on the statement made in Chapter 3 that the electric field produced by active neurons in the brain propagates through brain tissue without delay in time. Contrary to other imaging methods that are based on hemodynamic or metabolic processes, the EEG scalp potentials are thus “real-time” measurements, neither delayed nor a priori frequency-filtered. If only a single dipolar source in the brain were active, the temporal dynamics of the activity of that source would be exactly reproduced by the temporal dynamics observed in the scalp potentials produced by that source. This is illustrated in Figure 5.1, where the expected EEG signal of a single source with spindle-like dynamics in time has been computed. The dynamics of the scalp potentials exactly reproduce the dynamics of the source. The amplitude of the measured potentials depends on the relation between the location and orientation of the active source, its strength, and the electrode position.
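In the notation of a standard instantaneous forward model (symbols chosen here for illustration; the chapter's own notation may differ), the potential at electrode \(i\) produced by a single dipolar source with time course \(s(t)\) is

\[
\phi_i(t) \;=\; l_i(\mathbf{r}_s, \mathbf{q})\, s(t),
\]

so the scalp signal is a scaled copy of \(s(t)\): the lead-field coefficient \(l_i\), which depends on the source location \(\mathbf{r}_s\), its orientation and strength \(\mathbf{q}\), and the electrode position, affects only the amplitude, never the temporal dynamics.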
Abstract:
Mixed Reality (MR) aims to link virtual entities with the real world and has many applications, for example in the military and medical domains [JBL+00, NFB07]. In many MR systems, and more precisely in augmented scenes, the application needs to render the virtual part accurately at the right time. To achieve this, such systems acquire data about the real world from a set of sensors before rendering the virtual entities. A suitable system architecture should minimize the delays so as to keep the overall system delay (also called end-to-end latency) within the requirements for real-time performance. In this context, we propose a compositional modeling framework for MR software architectures in order to formally specify, simulate and validate the time constraints of such systems. Our approach is based, first, on a functional decomposition of such systems into generic components. The resulting elements, as well as their typical interactions, give rise to generic representations in terms of timed automata. A whole system is then obtained as a composition of such components. To write specifications, a textual language named MIRELA (MIxed REality LAnguage) is proposed, along with the corresponding compilation tools. The generated output contains timed automata in UPPAAL format for the simulation and verification of the time constraints. These automata may also be used to generate source-code skeletons for an implementation on an MR platform. The approach is illustrated first on a small example. A realistic case study is also developed; it is modeled by several timed automata synchronizing through channels and including a large number of time constraints. Both systems have been simulated in UPPAAL and checked against the required behavioral properties.
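At its simplest, the end-to-end latency requirement mentioned above is a composition constraint over the delays of the generic components (sensing, processing, rendering); the timed-automata analysis additionally covers interleavings and synchronization, but the basic requirement has the shape sketched below. The figures and component names are hypothetical and this is not the MIRELA language or an UPPAAL model.

```python
# Hypothetical worst-case delay budget for an MR pipeline, in milliseconds.
# Illustrates only the kind of end-to-end constraint the timed-automata
# models are used to verify; not the MIRELA notation itself.

WORST_CASE_DELAY_MS = {
    "sensor_acquisition": 12.0,
    "tracking_processing": 18.0,
    "scene_composition": 6.0,
    "rendering": 12.0,
}

END_TO_END_BOUND_MS = 50.0   # hypothetical real-time requirement

total = sum(WORST_CASE_DELAY_MS.values())
assert total <= END_TO_END_BOUND_MS, f"latency budget exceeded: {total} ms"
print(f"end-to-end worst case: {total} ms (bound {END_TO_END_BOUND_MS} ms)")
```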
Abstract:
Localized short-echo-time ¹H-MR spectra of the human brain contain contributions from many low-molecular-weight metabolites as well as baseline contributions from macromolecules. Two approaches to modeling such spectra are compared, and the data acquisition sequence, optimized for reproducibility, is presented. Modeling relies on prior-knowledge constraints and linear combination of metabolite spectra. What can be gained by basis parameterization, i.e., description of the basis spectra as sums of parametric lineshapes, was investigated. The effects of basis composition and of adding experimentally measured macromolecular baselines were also investigated. Both fitting methods yielded quantitatively similar values, model deviations, error estimates and reproducibility in the evaluation of 64 spectra of human gray and white matter from 40 subjects. Major advantages of parameterized basis functions are the possibilities to evaluate fitting parameters separately, to treat subgroup spectra as independent moieties, and to incorporate deviations from straightforward metabolite models. It was found that most of the 22 basis metabolites used may provide meaningful data when comparing patient cohorts. In individual spectra, sums of closely related metabolites are often more meaningful. Inclusion of a macromolecular basis component leads to relatively small, but significantly different, tissue content estimates for most metabolites. It provides a means to quantitate baseline contributions that may contain crucial clinical information.
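In generic form (symbols chosen here for illustration, not taken from the paper), the linear-combination model fits the measured spectrum as

\[
S(\nu) \;\approx\; \sum_{m=1}^{22} c_m\, B_m(\nu) \;+\; B_{\mathrm{MM}}(\nu),
\]

where the \(B_m\) are the metabolite basis spectra (either measured/simulated or, in the parameterized variant, written as sums of parametric lineshapes), the \(c_m\) are the concentration-related amplitudes, and \(B_{\mathrm{MM}}\) is the macromolecular baseline component discussed above.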
Abstract:
BACKGROUND The cost-effectiveness of routine viral load (VL) monitoring of HIV-infected patients on antiretroviral therapy (ART) depends on various factors that differ between settings and across time. Low-cost point-of-care (POC) tests for VL are in development and may make routine VL monitoring affordable in resource-limited settings. We developed a software tool to study the cost-effectiveness of switching to second-line ART with different monitoring strategies, and focused on POC-VL monitoring. METHODS We used a mathematical model to simulate cohorts of patients from the start of ART until death. We modeled 13 strategies (no 2nd-line, clinical, CD4 (with or without targeted VL), POC-VL, and laboratory-based VL monitoring, with different frequencies). We included a scenario with identical failure rates across strategies, and one in which routine VL monitoring reduces the risk of failure. We compared lifetime costs and averted disability-adjusted life-years (DALYs), and calculated incremental cost-effectiveness ratios (ICERs). We developed an Excel tool to update the results of the model for varying unit costs and cohort characteristics, and conducted several sensitivity analyses varying the input costs. RESULTS Introducing 2nd-line ART had an ICER of US$1651-1766/DALY averted. Compared with clinical monitoring, the ICER of CD4 monitoring was US$1896-US$5488/DALY averted and that of VL monitoring US$951-US$5813/DALY averted. We found no difference between POC- and laboratory-based VL monitoring, except for the highest measurement frequency (every 6 months), at which laboratory-based testing was more effective. Targeted VL monitoring was on the cost-effectiveness frontier only if the difference between 1st- and 2nd-line costs remained large, and if we assumed that routine VL monitoring does not prevent failure. CONCLUSION Compared with the less expensive strategies, the cost-effectiveness of routine VL monitoring depends essentially on the cost of 2nd-line ART. Our Excel tool is useful for determining optimal monitoring strategies for specific settings, with specific sex and age distributions and unit costs.
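For readers unfamiliar with the abbreviation, the ICERs quoted above follow the standard definition (the general formula, not study-specific values):

\[
\mathrm{ICER} \;=\; \frac{C_{\text{strategy}} - C_{\text{comparator}}}{E_{\text{strategy}} - E_{\text{comparator}}},
\]

with costs \(C\) in US$ and effects \(E\) in DALYs averted, so "US$1651/DALY averted" is the additional lifetime cost per extra DALY averted relative to the comparator strategy.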
Abstract:
Development of homology modeling methods will remain an area of active research. These methods aim to develop and model increasingly accurate three-dimensional structures of as yet uncrystallized, therapeutically relevant proteins, e.g. Class A G-Protein Coupled Receptors. Incorporating protein flexibility is one way to achieve this goal. Here, I discuss the enhancement and validation of the ligand-steered modeling method, originally developed by Dr. Claudio Cavasotto, via cross-modeling of the newly crystallized GPCR structures. This method uses known ligands and known experimental information to optimize the relevant protein binding sites by incorporating protein flexibility. The ligand-steered models were able to reasonably reproduce the binding sites and the co-crystallized native ligand poses of the β2 adrenergic and Adenosine 2A receptors using a single template structure. They also performed better than the template structures and crude models in small-scale high-throughput docking experiments and compound selectivity studies. Next, the application of this method to develop high-quality homology models of Cannabinoid Receptor 2, an emerging non-psychotic pain management target, is discussed. These models were validated by their ability to rationalize structure-activity relationship data for two series of compounds, an inverse agonist series and an agonist series. The method was also applied to improve the virtual screening performance of the β2 adrenergic crystal structure by optimizing the binding site using β2-specific compounds. These results show the feasibility of optimizing only the pharmacologically relevant protein binding sites, and the applicability of the approach to structure-based drug design projects.
Abstract:
The copepod Calanus finmarchicus is the dominant species of the mesozooplankton in the Norwegian Sea and constitutes an important link between the phytoplankton and the higher trophic levels in the Norwegian Sea food chain. An individual-based model for C. finmarchicus, based on super-individuals and evolving traits for behaviour, stages, etc., is two-way coupled to the NORWegian ECOlogical Model system (NORWECOM). One year of modelled C. finmarchicus spatial distribution, production and biomass is found to represent observations reasonably well. High C. finmarchicus abundance is found along the Norwegian shelf break in early summer, while the overwintering population is found along the slope and in the deeper Norwegian Sea basins. The timing of the spring bloom is generally later than in the observations. Annual Norwegian Sea production is found to be 29 million tonnes of carbon, and a production-to-biomass (P/B) ratio of 4.3 emerges. Sensitivity tests show that the modelling system is robust to the initial values of the behavioural traits and with regard to the number of super-individuals simulated, provided that this number is above about 50,000 individuals. Experiments with the model system indicate that it provides a valuable tool for studies of ecosystem responses to causative forces such as prey density or overwintering population size. For example, introducing C. finmarchicus food limitation reduces the stock dramatically, but on the other hand a reduced stock may rebuild within one year under normal conditions. The NetCDF file contains the model grid coordinates and bottom topography.
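As a rough consistency check, and assuming the conventional definition of the P/B ratio as annual production divided by mean annual biomass, the quoted figures would imply a mean standing stock of roughly

\[
B \;\approx\; \frac{P}{P/B} \;=\; \frac{29\ \text{Mt C}}{4.3} \;\approx\; 6.7\ \text{Mt C}.
\]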
Abstract:
China’s huge domestic market is constantly expanding; it is oriented towards low-end demand and highly dispersed. The domestic market-based development of China’s industrial clusters, however, is not only a quantitative expansion but has also been accompanied by remarkable qualitative upgrading. Specialized markets are a microcosm that clearly illustrates this paradoxical phenomenon. By analyzing three typical cases of industrial clusters that have specialized markets, this paper makes the case that, under modern China’s market conditions, the local public sector is the crucial driving force for upgrading industrial clusters, which organize complicated transactions, promote quality control, and stimulate the division of labor.
Abstract:
We analyze competition in emerging markets between firms from developing and developed countries from the viewpoint of the boundaries of the firm. Although indigenous firms generally face a disadvantage in technology compared with foreign firms, they have an advantage in marketing as local firms. Moreover, they have opportunities to leave weaker fields to independent specialized firms and to take advantage of lower wages. On the other hand, foreign firms also have their own advantages and disadvantages for growth. Therefore, entry conditions for indigenous firms can vary greatly depending on the situation. We classify these conditions into eight cases by developing a model and showing the boundary choice of indigenous firms in each case.