813 results for Constraint based modelling
Abstract:
The modelling of the local structure of sol-gel derived Eu3+-based organic/inorganic hybrids is reported, based on Small-Angle X-ray Scattering (SAXS), photoluminescence and mid-infrared spectroscopy. The hybrid matrix of these organically modified silicates, classed as di-ureasils and termed U(2000) and U(600), is formed by poly(oxyethylene) (POE) chains of variable length grafted to siloxane domains by means of urea cross-linkages. Europium triflate, Eu(CF3SO3)3, was incorporated in the two di-ureasil matrices with compositions 400 ≥ n ≥ 10, where n is the molar ratio of ether oxygens per Eu3+. The SAXS data for undoped hybrids (n = ∞) show the presence of a well-defined peak attributed to a liquid-like spatial correlation of siloxane-rich domains embedded in the polymer matrix and located at the ends of the organic segments. The siloxane particle gyration radius Rg1 obtained is around 5 Å (error within 10%), whereas the interparticle distance d is 25 ± 2 Å and 40 ± 2 Å for U(600) and U(2000), respectively. For the Eu3+-based nanocomposites the formation of a two-level hierarchical local structure is discerned. The primary level is constituted by strongly spatially correlated siloxane particles of gyration radius Rg1 (4-6 and 3-8 Å, errors within 5%, for U(600)nEu(CF3SO3)3, 200 ≥ n ≥ 40, and U(2000)nEu(CF3SO3)3, 400 ≥ n ≥ 40, respectively) forming large clusters of gyration radius Rg2 (≈ 75 ± 10 Å). The local coordination of Eu3+ in both di-ureasil series is described by combining the SAXS, photoluminescence and mid-infrared results. In the di-ureasils containing long polymer chains, U(2000)nEu(CF3SO3)3, the cations interact exclusively with the carbonyl oxygen atoms of the urea bridges at the siloxane-POE interface. In the hybrids containing shorter chains, U(600)nEu(CF3SO3)3 with n ranging from 200 to 60, the Eu3+ ions interact solely with the ether-type oxygens of the polymer chains. Nevertheless, in this latter family of hybrids a distinct Eu3+ local site environment involving the urea cross-linkages is detected when the europium content is increased up to n = 40.
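To illustrate how a gyration radius of this magnitude is typically obtained from scattering data, the sketch below performs a simple Guinier fit, I(q) ≈ I0·exp(−q²Rg²/3), on a synthetic low-q curve; the data and the single-level fit are illustrative assumptions, not the paper's measurements or its two-level analysis.

```python
# Minimal Guinier-fit sketch for estimating a gyration radius from SAXS data.
# Synthetic data only; the paper's analysis uses a two-level hierarchical fit.
import numpy as np

np.random.seed(42)
q = np.linspace(0.02, 0.5, 120)                  # scattering vector [1/Angstrom]
rg_true = 5.0                                    # assumed "true" Rg [Angstrom]
intensity = 100.0 * np.exp(-(q * rg_true) ** 2 / 3)
intensity *= np.exp(np.random.normal(0.0, 0.02, q.size))   # multiplicative noise

# Guinier plot: ln I vs q^2 is linear with slope -Rg^2/3 (valid for q*Rg < ~1.3).
mask = q * rg_true < 1.3
slope, intercept = np.polyfit(q[mask] ** 2, np.log(intensity[mask]), 1)
rg_fit = np.sqrt(-3.0 * slope)
print(f"fitted Rg = {rg_fit:.2f} Angstrom (true value {rg_true})")
```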
Abstract:
The crystallographically determined structure of biologically active 4,4-dichloro-1,3-diphenyl-4-telluraoct-2-en-1-one, 3, shows the coordination geometry for Te to be distorted ψ-pentagonal bipyramidal based on a C2OCl3(lone pair) donor set. Notable is the presence of an intramolecular axial Te···O(carbonyl) interaction, a design element included to reduce hydrolysis. Raman and molecular modelling studies indicate the persistence of the Te···O(carbonyl) interaction in the solution (CHCl3) and gas phases, respectively. Docking studies of 3' (i.e. original 3 less one chloride) with Cathepsin B reveal a change in the configuration about the vinyl C=C bond, i.e. to E from Z (crystal structure). This isomerism allows the optimisation of interactions in the complex, which features a covalent Te-SG(Cys29) bond. Crucially, the E configuration observed for 3' allows for the formation of a hypervalent Te···O interaction as well as an O···H-O hydrogen bond with the Gly27 and Glu122 residues, respectively. Additional stabilisation is afforded by a combination of interactions spanning the S1, S2, S1' and S2' sub-sites of Cathepsin B. The greater experimental inhibitory activity of 3 compared with analogues is rationalised by the additional interactions formed between 3' and the His110 and His111 residues in the occluding loop, which serve to hinder the entrance to the active site.
Abstract:
The diagnosis, grading and classification of tumours has benefited considerably from the development of DCE-MRI, which is now essential to the adequate clinical management of many tumour types owing to its capability of detecting active angiogenesis. Several strategies have been proposed for DCE-MRI evaluation. Visual inspection of contrast agent concentration curves vs time is a very simple yet operator-dependent procedure, so more objective approaches have been developed in order to facilitate comparison between studies. In so-called model-free approaches, descriptive or heuristic information extracted from the raw time-series data is used for tissue classification. The main issue with these schemes is that they lack a direct interpretation in terms of the physiological properties of the tissue. Model-based investigations, on the other hand, typically involve compartmental tracer kinetic modelling and pixel-by-pixel estimation of kinetic parameters via non-linear regression applied to regions of interest selected by the physician. This approach has the advantage of providing parameters directly related to the pathophysiological properties of the tissue, such as vessel permeability, local regional blood flow, extraction fraction, and the concentration gradient between plasma and the extravascular-extracellular space. However, non-linear modelling is computationally demanding, and the accuracy of the estimates can be affected by the signal-to-noise ratio and by the initial solutions. The principal aim of this thesis is to investigate the use of semi-quantitative and quantitative parameters for segmentation and classification of breast lesions. The objectives can be subdivided as follows: to describe the principal techniques for evaluating time-intensity curves in DCE-MRI, with a focus on the kinetic models proposed in the literature; to evaluate the influence of the parametrization choice for a classic bi-compartmental kinetic model; to evaluate the performance of a method for simultaneous tracer kinetic modelling and pixel classification; and to evaluate the performance of machine learning techniques trained for segmentation and classification of breast lesions.
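As a sketch of the pixel-wise kinetic fitting described above, the following fits a classic bi-compartmental (Tofts-type) model by non-linear regression; the arterial input function, parameter values and noise level are illustrative assumptions, not the thesis's actual implementation.

```python
# Minimal sketch: fit a Tofts-type model to one pixel's concentration curve.
import numpy as np
from scipy.optimize import curve_fit

np.random.seed(0)
t = np.linspace(0, 300, 100)                     # acquisition times [s]

def aif(t):
    """Toy bi-exponential arterial input function (assumed form)."""
    return 5.0 * (np.exp(-0.01 * t) - np.exp(-0.1 * t))

def tofts(t, ktrans, ve):
    """C_t(t) = Ktrans * int_0^t Cp(tau) * exp(-(Ktrans/ve)(t-tau)) dtau."""
    dt = t[1] - t[0]
    kernel = np.exp(-(ktrans / ve) * t)
    return ktrans * np.convolve(aif(t), kernel)[: len(t)] * dt

# Synthetic noisy "measurement" standing in for one pixel's time series.
c_meas = tofts(t, 0.05, 0.3) + np.random.normal(0, 0.01, t.size)
(ktrans_hat, ve_hat), _ = curve_fit(tofts, t, c_meas, p0=(0.01, 0.1),
                                    bounds=(1e-6, [1.0, 1.0]))
print(f"Ktrans = {ktrans_hat:.3f} 1/s, ve = {ve_hat:.3f}")
```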
Abstract:
This thesis is focused on Smart Grid applications in medium voltage distribution networks. For the development of new applications, it is useful to have simulation tools able to model the dynamic behavior of both the power system and the communication network. Such a co-simulation environment allows the assessment of the feasibility of using a given network technology to support communication-based Smart Grid control schemes on an existing segment of the electrical grid, and the determination of the range of control schemes that different communication technologies can support. For this reason, a co-simulation platform is presented that has been built by linking the Electromagnetic Transients Program Simulator (EMTP v3.0) with a Telecommunication Network Simulator (OPNET-Riverbed v18.0). The simulator is used to design and analyze a coordinated use of Distributed Energy Resources (DERs) for voltage/var control (VVC) in distribution networks. The thesis focuses on a control structure based on the use of phasor measurement units (PMUs). In order to limit the required reinforcements of the communication infrastructures currently adopted by Distribution Network Operators (DNOs), the study concentrates on leader-less multi-agent system (MAS) schemes that do not assign special coordinating rules to specific agents. Leader-less MAS are expected to produce more uniform communication traffic than centralized approaches that include a moderator agent, and to be less affected by the limitations and constraints of individual communication links. The developed co-simulator has allowed the definition of specific countermeasures against the limitations of the communication network, with particular reference to latency and loss of information, for both wired and wireless communication networks. Moreover, the co-simulation platform has also been coupled with a mobility simulator in order to study specific countermeasures against the negative effects on the medium voltage distribution network caused by the concurrent connection of electric vehicles.
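A minimal sketch of the leader-less coordination idea, assuming a simple average-consensus update over a ring of DER agents; this is not the thesis's actual VVC algorithm, only an illustration of how agents can agree on a quantity without any moderator agent.

```python
# Leader-less coordination sketch: average consensus on a ring topology.
# Every agent applies the same local rule; no agent has a special role.
import numpy as np

np.random.seed(1)
n_agents = 6
neighbours = {i: [(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)}

# Each DER agent's local estimate (e.g. of a reactive-power set-point).
x = np.random.uniform(0.0, 1.0, n_agents)

eps = 0.3                       # consensus gain; eps < 1/max_degree for stability
for step in range(50):
    x_new = x.copy()
    for i in range(n_agents):
        x_new[i] += eps * sum(x[j] - x[i] for j in neighbours[i])
    x = x_new

print("converged estimates:", np.round(x, 4))   # all close to the initial mean
```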
Abstract:
Systems Biology is an innovative way of doing biology that has recently emerged in bioinformatics contexts, characterised by the study of biological systems as complex systems, with a strong focus on the system level and on the dimension of interaction. In other words, the objective is to understand biological systems as a whole, putting in the foreground not only the study of the individual parts as standalone parts, but also their interactions and the global properties that emerge at the system level by means of the interaction among the parts. This thesis focuses on the adoption of multi-agent systems (MAS) as a suitable paradigm for Systems Biology, for developing models and simulations of complex biological systems. Multi-agent systems have recently been introduced in informatics contexts as a suitable paradigm for modelling and engineering complex systems. Roughly speaking, a MAS can be conceived as a set of autonomous and interacting entities, called agents, situated in some kind of environment, where they fruitfully interact and coordinate so as to obtain a coherent global system behaviour. The claim of this work is that the general properties of MAS make them an effective approach for modelling and building simulations of complex biological systems, following the methodological principles identified by Systems Biology. In particular, the thesis focuses on cell populations as biological systems. In order to support the claim, the thesis introduces and describes (i) a MAS-based model conceived for modelling the dynamics of systems of cells interacting inside cell environments called niches, and (ii) a computational tool developed for implementing the models and executing the simulations. The tool is meant to work as a kind of virtual laboratory, on top of which various kinds of virtual experiments can be performed, characterised by the definition and execution of specific models implemented as MAS, so as to support the validation, falsification and improvement of the models through the observation and analysis of the simulations. A hematopoietic stem cell system is taken as the reference case study for formulating a specific model and executing virtual experiments.
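The following is a minimal sketch of the MAS style of simulation described above: autonomous cell agents stepped inside a shared niche. The division and death probabilities and the niche capacity are illustrative assumptions, not the thesis's actual model.

```python
# Toy MAS sketch: cell agents deciding autonomously inside a niche.
import random

random.seed(7)

class CellAgent:
    def step(self, niche):
        # Each agent acts on its local view of the environment.
        if len(niche) < niche.capacity and random.random() < 0.10:
            niche.newborns.append(CellAgent())   # divide
        elif random.random() < 0.05:
            niche.dead.append(self)              # die

class Niche(list):
    def __init__(self, cells, capacity):
        super().__init__(cells)
        self.capacity = capacity

    def round(self):
        self.newborns, self.dead = [], []
        for cell in list(self):                  # snapshot: agents act in turn
            cell.step(self)
        for cell in self.dead:
            self.remove(cell)
        self.extend(self.newborns)

niche = Niche([CellAgent() for _ in range(50)], capacity=200)
for t in range(100):                             # run one virtual experiment
    niche.round()
print("population after 100 rounds:", len(niche))
```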
Abstract:
Globalization has increased the pressure on organizations and companies to operate in the most efficient and economic way. This tendency drives companies to concentrate more and more on their core businesses and to outsource less profitable departments and services in order to reduce costs. In contrast to earlier times, companies are highly specialized and have a low real net output ratio. To be able to provide consumers with the right products, these companies have to collaborate with other suppliers and form large supply chains. A side effect of large supply chains is the burden of high stocks and stockholding costs. This has led to the rapid spread of Just-in-Time logistics concepts aimed at minimizing stock while maintaining high availability of products. These competing goals, minimizing stock while keeping product availability high, call for high availability of the production systems, in the sense that an incoming order can be processed immediately. Besides design aspects and the quality of the production system, maintenance has a strong impact on production system availability. In recent decades there have been many attempts to create maintenance models for availability optimization. Most of them concentrated on the availability aspect only, without incorporating further aspects such as logistics and the profitability of the overall system. However, the production system operator's main intention is to optimize the profitability of the production system, not its availability. Thus, classic models, limited to representing and optimizing maintenance strategies in the light of availability, fall short. A novel approach, incorporating all financially relevant processes of and around a production system, is needed. The proposed model is subdivided into three parts: a maintenance module, a production module and a connection module. This subdivision provides easy maintainability and simple extensibility. Within these modules, all aspects of the production process are modeled. The main part of the work lies in the extended maintenance and failure module, which offers a representation of different maintenance strategies and also incorporates the effects of over-maintaining and failed maintenance (maintenance-induced failures). Order release and seizing of the production system are modeled in the production part. Due to computational power limitations it was not possible to run the simulation and the optimization with the fully developed production model. Thus, the production model was reduced to a black box with a lower degree of detail.
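As a sketch of what profit-oriented (rather than purely availability-oriented) maintenance evaluation can look like, the following Monte Carlo simulation compares preventive maintenance intervals by resulting profit; all rates, costs and the Weibull wear-out assumption are illustrative, not parameters of the proposed model.

```python
# Toy profit-oriented maintenance comparison: revenue while producing,
# minus preventive-maintenance and repair costs. Illustrative values only.
import random

def simulate_profit(pm_interval_h, horizon_h=100_000, seed=1):
    rng = random.Random(seed)
    revenue_per_h, pm_cost, repair_cost = 100.0, 500.0, 5_000.0
    pm_downtime_h, repair_downtime_h = 4.0, 48.0
    t = profit = 0.0
    while t < horizon_h:
        # Wear-out failures (Weibull shape > 1); maintenance restores
        # the system as-good-as-new, so each cycle draws a fresh lifetime.
        ttf = rng.weibullvariate(400.0, 2.0)
        if ttf < pm_interval_h:                 # failure before the planned stop
            profit += ttf * revenue_per_h - repair_cost
            t += ttf + repair_downtime_h
        else:                                   # preventive maintenance first
            profit += pm_interval_h * revenue_per_h - pm_cost
            t += pm_interval_h + pm_downtime_h
    return profit

for interval_h in (100, 200, 400, 800):
    print(f"PM every {interval_h:3d} h -> profit {simulate_profit(interval_h):12.0f}")
```

Even this toy version exhibits the trade-off the abstract alludes to: maintaining too often wastes revenue and maintenance cost, while maintaining too rarely incurs expensive failures.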
Abstract:
Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one and not as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense; this in turn depends on the trajectory of the same parameters over previous time instances. In this work, dynamic constraint models have been proposed to translate commanded to actually achieved air-handling parameters. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control. Parameters for this second-order system have been extracted from real transient data. The model has been shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on air-handling parameters predicted by the dynamic constraint model do not differ significantly from corresponding emissions based on measured air-handling parameters.
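A minimal sketch of such a dynamic constraint model, assuming a generic second-order tracking response y'' + 2ζωn·y' + ωn²·y = ωn²·u from commanded (u) to achieved (y) parameter; the natural frequency and damping below are illustrative stand-ins for values extracted from real transient data.

```python
# Dynamic-constraint sketch: achieved air-handling parameter as the
# response of a linear second-order system to the commanded value.
import numpy as np

def achieved(commanded, dt=0.01, wn=2.0, zeta=0.7):
    """Integrate y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u with forward Euler."""
    y = np.zeros_like(commanded)
    v = 0.0                                    # y' (rate of change)
    for k in range(1, len(commanded)):
        a = wn**2 * (commanded[k - 1] - y[k - 1]) - 2 * zeta * wn * v
        v += a * dt
        y[k] = y[k - 1] + v * dt
    return y

t = np.arange(0, 10, 0.01)
cmd = np.where(t >= 1.0, 1.0, 0.0)             # step command, e.g. a boost target
ach = achieved(cmd)
# The achieved value lags the command: 0.5 s after the step it is still rising.
print("achieved 0.5 s after the step:", round(ach[150], 3))
```

The point of such a model in the optimization loop is exactly this lag: a candidate set-point trajectory is only admissible if the second-order response can actually follow it.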
Abstract:
OBJECTIVES: Treatment as prevention depends on retaining HIV-infected patients in care. We investigated the effect on HIV transmission of bringing patients lost to follow up (LTFU) back into care. DESIGN: Mathematical model. METHODS: Stochastic mathematical model of cohorts of 1000 HIV-infected patients on antiretroviral therapy (ART), based on data from two clinics in Lilongwe, Malawi. We calculated cohort viral load (CVL; sum of individual mean viral loads each year) and used a mathematical relationship between viral load and transmission probability to estimate the number of new HIV infections. We simulated four scenarios: 'no LTFU' (all patients stay in care); 'no tracing' (patients LTFU are not traced); 'immediate tracing' (after a missed clinic appointment); and 'delayed tracing' (after six months). RESULTS: About 440 of 1000 patients were LTFU over five years. CVL (million copies/ml per 1000 patients) was 3.7 (95% prediction interval [PrI] 2.9-4.9) for no LTFU, 8.6 (95% PrI 7.3-10.0) for no tracing, 7.7 (95% PrI 6.2-9.1) for immediate tracing, and 8.0 (95% PrI 6.7-9.5) for delayed tracing. Comparing no LTFU with no tracing, the number of new infections increased from 33 (95% PrI 29-38) to 54 (95% PrI 47-60) per 1000 patients. Immediate tracing prevented 3.6 (95% PrI -3.3 to 12.8) and delayed tracing 2.5 (95% PrI -5.8 to 11.1) new infections per 1000 patients. Immediate tracing was more efficient than delayed tracing: 116 and 142 tracing efforts, respectively, were needed to prevent one new infection. CONCLUSION: Tracing of patients LTFU enhances the preventive effect of ART, but the number of transmissions prevented is small.
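A minimal sketch of the cohort-viral-load idea, assuming a log-linear relation between viral load and transmission probability; the viral-load distributions and risk coefficients below are illustrative assumptions, not the study's fitted model.

```python
# Toy CVL calculation: sum individual mean viral loads, then map each
# to a transmission probability to count expected new infections.
import numpy as np

rng = np.random.default_rng(0)

# Individual mean viral loads (copies/ml) for a 1000-patient cohort:
# patients retained on ART assumed suppressed, patients LTFU assumed rebounded.
retained = rng.lognormal(mean=np.log(50), sigma=0.8, size=560)
ltfu = rng.lognormal(mean=np.log(15_000), sigma=0.8, size=440)
vl = np.concatenate([retained, ltfu])

cvl = vl.sum() / 1e6
print(f"CVL: {cvl:.1f} million copies/ml per 1000 patients")

def p_transmit(v, base=0.01, ratio_per_log10=2.45, ref=1_000.0):
    """Assumed yearly transmission probability, log-linear in viral load."""
    return np.clip(base * ratio_per_log10 ** np.log10(v / ref), 0.0, 1.0)

print(f"expected new infections per year: {p_transmit(vl).sum():.0f}")
```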
Abstract:
We present quantitative reconstructions of regional vegetation cover in north-western Europe, western Europe north of the Alps, and eastern Europe for five time windows in the Holocene [around 6k, 3k, 0.5k, 0.2k, and 0.05k calendar years before present (bp)] at a 1° × 1° spatial scale, with the objective of producing vegetation descriptions suitable for climate modelling. The REVEALS model was applied to 636 pollen records from lakes and bogs to reconstruct the past cover of 25 plant taxa grouped into 10 plant-functional types and three land-cover types [evergreen trees, summer-green (deciduous) trees, and open land]. The model corrects for some of the biases in pollen percentages by using pollen productivity estimates and fall speeds of pollen, and by applying simple but robust models of pollen dispersal and deposition. The emerging patterns of tree migration and deforestation between 6k bp and modern time in the REVEALS estimates agree with our general understanding of the vegetation history of Europe based on pollen percentages. However, the degree of anthropogenic deforestation (i.e. cover of cultivated and grazing land) at 3k, 0.5k, and 0.2k bp is significantly higher than deduced from pollen percentages. This is also the case at 6k bp in some parts of Europe, in particular Britain and Ireland. Furthermore, the relationship between summer-green and evergreen trees, and between individual tree taxa, differs significantly when expressed as pollen percentages or as REVEALS estimates of tree cover. For instance, where Pinus is dominant over Picea in pollen percentages, Picea is dominant over Pinus in the REVEALS estimates. These differences play a major role in the reconstruction of European landscapes and for the study of land cover-climate interactions, biodiversity and human resources.
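A minimal sketch of the kind of correction REVEALS applies: pollen counts are deflated by taxon-specific pollen productivity estimates (PPEs) before being converted to cover, which can reverse apparent dominance (as for Pinus and Picea above). The counts and PPE values are illustrative assumptions; the full model additionally accounts for fall speeds, dispersal and deposition basin size.

```python
# Toy PPE correction: high pollen producers (e.g. Pinus) are deflated,
# so their cover estimate drops relative to their raw pollen percentage.
pollen_counts = {"Pinus": 420, "Picea": 180, "Quercus": 250, "Poaceae": 150}
ppe = {"Pinus": 6.4, "Picea": 2.6, "Quercus": 5.8, "Poaceae": 1.0}  # rel. to Poaceae

adjusted = {taxon: n / ppe[taxon] for taxon, n in pollen_counts.items()}
total = sum(adjusted.values())
cover = {taxon: 100 * v / total for taxon, v in adjusted.items()}

total_pollen = sum(pollen_counts.values())
for taxon in cover:
    pct = 100 * pollen_counts[taxon] / total_pollen
    print(f"{taxon:8s} pollen {pct:5.1f}%  ->  estimated cover {cover[taxon]:5.1f}%")
```

With these assumed values, Pinus dominates Picea in pollen percentages (42% vs 18%) but not in estimated cover, mirroring the reversal reported in the abstract.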
Abstract:
In this paper, we propose a new method for stitching multiple fluoroscopic images taken by a C-arm instrument. We employ an X-ray radiolucent ruler with numbered graduations while acquiring the images, and the image stitching is based on detecting ruler parts in the images and matching them to the corresponding parts of a virtual ruler. To achieve this goal, we first detect the regularly spaced graduations on the ruler and the numbers. After graduation labeling, for each image we have the location and the associated number of every graduation on the ruler. Then, we initialize the panoramic X-ray image with the virtual ruler, and we “paste” each image by aligning the detected ruler part in the original image to the corresponding part of the virtual ruler in the panoramic image. Our method is based on ruler matching and does not require matching similar feature points in pairwise images; thus, we do not necessarily require overlap between the images. We tested our method on eight different datasets of X-ray images, including long bones and a complete spine. Qualitative and quantitative experiments show that our method achieves good results.
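A minimal sketch of the pasting step, assuming graduation detection is already done: the offset that aligns an image with the virtual ruler on the panorama is estimated from the detected (number, pixel position) pairs by a least-squares fit. The scale, spacing and coordinates are illustrative assumptions, not the paper's calibration.

```python
# Toy placement step: estimate the panorama offset of one image from its
# detected ruler graduations (graduation number, x position in pixels).
import numpy as np

mm_per_px = 0.5                           # assumed ruler scale on the panorama
detections = [(12, 104.2), (13, 124.5), (14, 143.8), (15, 164.1)]

numbers = np.array([n for n, _ in detections], dtype=float)
x_img = np.array([x for _, x in detections])

# Virtual-ruler position of each numbered graduation on the panorama,
# assuming 10 mm between numbered marks.
x_panorama = numbers * 10.0 / mm_per_px

# Offset that pastes the image onto the panorama: panorama_x = image_x + offset.
offset = np.mean(x_panorama - x_img)
residual = np.max(np.abs(x_panorama - (x_img + offset)))
print(f"paste image at offset {offset:.1f} px (max residual {residual:.2f} px)")
```

Because each image is registered to the virtual ruler rather than to its neighbour, no overlap between consecutive images is needed, which is the key property claimed above.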
Abstract:
Energy consumption modelling by state-based approaches often assumes constant energy consumption in each state. However, in certain situations the energy consumption is not constant but fluctuates, during state transitions or even within a state. This paper discusses these issues by presenting examples of such cases from wireless sensor networks and wireless local area networks, together with possible solutions.
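A minimal sketch of a state-based energy model extended with explicit transition costs, in the spirit of the issues discussed above; the power levels and transition overheads are illustrative assumptions loosely in the range of a wireless sensor node, not measurements from the paper.

```python
# State-based energy model with per-transition energy overheads,
# rather than per-state constant power alone. Illustrative values only.
state_power_mw = {"sleep": 0.02, "idle": 1.5, "rx": 20.0, "tx": 25.0}
# Energy overhead of each transition (mJ), e.g. radio wake-up ramps.
transition_mj = {("sleep", "idle"): 0.30, ("idle", "tx"): 0.10,
                 ("tx", "idle"): 0.05, ("idle", "sleep"): 0.01}

def energy_mj(schedule):
    """schedule: list of (state, duration_s). Sums state + transition energy."""
    total = 0.0
    for i, (state, duration_s) in enumerate(schedule):
        total += state_power_mw[state] * duration_s      # mW * s = mJ
        if i > 0:
            total += transition_mj.get((schedule[i - 1][0], state), 0.0)
    return total

duty_cycle = [("sleep", 0.95), ("idle", 0.02), ("tx", 0.01), ("idle", 0.02)]
print(f"energy per 1 s cycle: {energy_mj(duty_cycle):.3f} mJ")
```

In this toy cycle the transition overheads contribute more than half of the total, which is exactly the kind of error a constant-per-state model would make.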
Abstract:
Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures, which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. Starting in the mid-80s there has been significant progress in the development of parallelizing compilers for logic programming (and, more recently, constraint programming), resulting in quite capable parallelizers. The typical applications of these paradigms frequently involve irregular computations and make heavy use of dynamic data structures with pointers, since logical variables represent in practice a well-behaved form of pointers. This arguably makes the techniques used in these compilers potentially interesting. In this paper, we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs, and provide pointers to some of the significant progress made in the area. In particular, this work has resulted in a series of achievements in the areas of inter-procedural pointer aliasing analysis for independence detection, cost models and cost analysis, cactus-stack memory management, and techniques for managing speculative and irregular computations through task granularity control and dynamic task allocation (such as work-stealing schedulers), etc.
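As a toy illustration of the work-stealing idea mentioned above, the sketch below gives each worker its own deque, popping its own tasks from one end and stealing from the other end of a random victim when it runs dry; real schedulers for parallel logic and constraint programs are, of course, far more involved.

```python
# Toy work-stealing scheduler: per-worker deques, LIFO own pops, FIFO steals.
import random
from collections import deque

random.seed(3)

def run(task_lists):
    deques = [deque(tasks) for tasks in task_lists]
    done = []
    while any(deques):
        for me in range(len(deques)):
            if deques[me]:
                task = deques[me].pop()                  # own work: LIFO end
            else:
                victims = [v for v in range(len(deques)) if v != me and deques[v]]
                if not victims:
                    continue
                task = deques[random.choice(victims)].popleft()  # steal: FIFO end
            done.append(task)
    return done

# Unevenly sized task lists stand in for irregular computations.
tasks = [[f"goal-{w}.{i}" for i in range(random.randint(0, 6))] for w in range(4)]
print(len(run(tasks)), "tasks executed")
```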
Abstract:
In computer science, different types of reusable components for building software applications have been proposed as a direct consequence of the emergence of new software programming paradigms. The success of these components for building applications depends on factors such as the flexibility of their combination or the ease of their selection in centralised or distributed environments such as the internet. In this article, we propose a general type of reusable component, called a primitive of representation, inspired by a knowledge-based approach that can promote reusability. The proposal can be understood as a generalisation of existing partial solutions that is applicable to both software and knowledge engineering, for the development of hybrid applications that integrate conventional and knowledge-based techniques. The article presents the structure and use of the component and describes our recent experience in the development of real-world applications based on this approach.