891 results for Computer aided software engineering
Abstract:
This paper presents an optimization study of a distillation column for methanol and aqueous glycerol separation in a biodiesel production plant. Given the available physical data on the column configuration, a steady-state model of the column was built using Aspen-HYSYS as the process simulator. Several sensitivity analyses were performed in order to better understand the relations between the variables of the distillation process. With the information obtained from the simulator, it is possible to define the best ranges for some operational variables that keep the composition of the desired product within specifications, and to choose operating conditions that minimize energy consumption.
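The abstract does not reproduce the simulator workflow, but the sensitivity studies it describes follow a familiar pattern: sweep one operating variable, record product purity and energy demand, and keep the range that stays on spec at minimum cost. A minimal Python sketch of that pattern, with a toy function standing in for the Aspen-HYSYS model (every function and number below is an illustrative assumption, not the paper's data):

```python
import numpy as np

def toy_column(reflux_ratio):
    """Hypothetical stand-in for an Aspen-HYSYS column model: returns
    (methanol_purity, reboiler_duty_kW) for a given reflux ratio."""
    purity = 1.0 - 0.25 * np.exp(-1.8 * reflux_ratio)  # diminishing returns
    duty = 120.0 + 95.0 * reflux_ratio                 # roughly linear cost
    return purity, duty

SPEC = 0.995  # assumed purity specification (mole fraction)

# Sweep the manipulated variable, as in a simulator sensitivity analysis.
feasible = []
for rr in np.linspace(0.5, 5.0, 46):
    purity, duty = toy_column(rr)
    if purity >= SPEC:
        feasible.append((rr, duty))

# The "best range" keeps the product on spec at minimum energy use.
best_rr, best_duty = min(feasible, key=lambda t: t[1])
print(f"feasible reflux ratio range: {feasible[0][0]:.2f} to {feasible[-1][0]:.2f}")
print(f"minimum-energy point: reflux ratio {best_rr:.2f}, duty {best_duty:.1f} kW")
```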
Abstract:
The paper presents a multi-robot cooperative framework to estimate the 3D position of dynamic targets based on bearing-only vision measurements. The uncertainty of the observation provided by each robot equipped with a bearing-only vision system is effectively addressed for cooperative triangulation purposes by weighting the contribution of each monocular bearing ray in a probabilistic manner. The envisioned framework is evaluated in an outdoor scenario with a team of heterogeneous robots composed of an Unmanned Ground Vehicle and an Unmanned Aerial Vehicle.
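A standard way to realise cooperative triangulation from weighted bearing rays is a weighted least-squares intersection: each ray penalises the target's squared perpendicular distance to it, scaled by a confidence weight. The sketch below implements that classic formulation in Python as one plausible reading of the probabilistic weighting; it is not necessarily the paper's exact estimator.

```python
import numpy as np

def triangulate(origins, directions, weights):
    """Weighted least-squares intersection of 3D bearing rays.

    Each ray i (origin o_i, unit bearing d_i) penalises the squared
    perpendicular distance of the target to that ray, scaled by a
    confidence weight w_i. The minimiser solves
        (sum_i w_i (I - d_i d_i^T)) x = sum_i w_i (I - d_i d_i^T) o_i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d, w in zip(origins, directions, weights):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += w * P
        b += w * P @ o
    return np.linalg.solve(A, b)

# Two robots (say, a UGV and a UAV) observing the same target at (5, 5, 2):
origins = [np.array([0.0, 0.0, 0.5]), np.array([10.0, 0.0, 8.0])]
target = np.array([5.0, 5.0, 2.0])
directions = [target - o for o in origins]
weights = [1.0, 0.5]  # e.g. lower confidence for the aerial bearing
print(triangulate(origins, directions, weights))  # ~ [5. 5. 2.]
```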
Abstract:
Based on the report for the unit "Sociology of New Information Technologies" of the Master's in Computer Science at FCT/University Nova Lisbon in 2015-16. The professor responsible for this curricular unit is Prof. António Moniz.
Abstract:
Adoptive cell transfer using engineered T cells is emerging as a promising treatment for metastatic melanoma. Such an approach allows one to introduce T cell receptor (TCR) modifications that, while maintaining the specificity for the targeted antigen, can enhance the binding and kinetic parameters for the interaction with peptides (p) bound to major histocompatibility complexes (MHC). Using the well-characterized 2C TCR/SIYR/H-2K(b) structure as a model system, we demonstrated that a binding free energy decomposition based on the MM-GBSA approach provides a detailed and reliable description of the TCR/pMHC interactions at the structural and thermodynamic levels. Building on this result, we developed a new structure-based approach to rationally design new TCR sequences, and applied it to the BC1 TCR targeting the HLA-A2-restricted NY-ESO-1(157-165) cancer-testis epitope. Fifty-four percent of the designed sequence replacements exhibited improved pMHC binding compared to the native TCR, with up to a 150-fold increase in affinity, while preserving specificity. Genetically engineered CD8(+) T cells expressing these modified TCRs showed improved functional activity compared to those expressing the BC1 TCR. We measured maximum levels of activity for TCRs within the upper limit of natural affinity, K_D ≈ 1-5 μM. Beyond the affinity threshold at K_D < 1 μM we observed an attenuation in cellular function, in line with the "half-life" model of T cell activation. Our computer-aided protein-engineering approach requires the 3D structure of the TCR-pMHC complex of interest, which can be obtained from X-ray crystallography. We have also developed a homology-modeling-based approach, TCRep 3D, to obtain accurate structural models of any TCR-pMHC complex when experimental data are not available. Since the accuracy of the models depends on the prediction of the TCR orientation over the pMHC, we have complemented the approach with a simplified rigid method to predict this orientation and successfully assessed it using all non-redundant TCR-pMHC crystal structures available. These methods potentially extend the use of our TCR engineering method to entire TCR repertoires for which no X-ray structure is available. We have also performed a steered molecular dynamics study of the unbinding of the TCR-pMHC complex to better understand how TCRs interact with pMHCs. This entire rational TCR design pipeline is now being used to produce rationally optimized TCRs for adoptive cell therapies of stage IV melanoma.
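The MM-GBSA decomposition mentioned above has a well-known overall form, ΔG_bind ≈ ΔE_MM + ΔG_solv - TΔS, and its per-residue breakdown is what makes it useful for rational design: residues with large favorable contributions are candidates for engineering. The sketch below only illustrates that bookkeeping on invented numbers; it is not the authors' pipeline.

```python
# Per-residue bookkeeping in the spirit of an MM-GBSA decomposition:
# dG_bind ~ dE_MM + dG_solv - T*dS. All residue names and energies are
# invented for illustration; they are not data from the paper.

T_DS = 12.0  # assumed entropy penalty -T*dS (kcal/mol) for the complex

per_residue = {
    # residue: (dE_MM, dG_solv) contributions in kcal/mol (illustrative)
    "TYR31":  (-6.2, 2.1),
    "ASP52":  (-3.8, 1.0),
    "ARG98":  (-5.1, 2.9),
    "GLY100": (-0.4, 0.1),
}

contrib = {res: mm + solv for res, (mm, solv) in per_residue.items()}
dG_bind = sum(contrib.values()) + T_DS

# Ranking residues by contribution highlights positions worth mutating,
# the kind of signal a structure-based TCR design step builds on.
for res, dg in sorted(contrib.items(), key=lambda kv: kv[1]):
    print(f"{res}: {dg:+.1f} kcal/mol")
print(f"total dG_bind ~ {dG_bind:+.1f} kcal/mol")
```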
Abstract:
Virtual Laboratories are an indispensable space for developing practical activities in a Virtual Environment. In the field of Computer and Software Engineering, different types of practical activities have to be performed in order to obtain basic competences which are impossible to achieve by other means. This paper specifies an ontology for a general virtual laboratory. The proposed ontology provides a mechanism to select the best resources needed in a Virtual Laboratory once a specific practical activity has been defined and the main competences that students have to achieve in the learning process have been fixed. Furthermore, the proposed ontology can be used to develop an automatic wizard tool that creates a Moodle Classroom using the practical activity specification and the related competences.
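The selection mechanism described above can be pictured as a query over the ontology: given a practical activity and the competences it must develop, return the laboratory resources whose declared capabilities cover them. A plain-Python sketch of that idea (all resource and competence names are invented for illustration; a real implementation would query the ontology itself):

```python
# Plain-Python stand-in for the ontology-backed selection mechanism:
# resources declare which competences they support; selection returns
# those covering the activity's target competences. All names invented.

RESOURCES = {
    "gdb-sandbox":   {"supports": {"debugging"},            "kind": "tool"},
    "git-server":    {"supports": {"version-control"},      "kind": "service"},
    "unit-test-rig": {"supports": {"testing", "debugging"}, "kind": "tool"},
}

def select_resources(activity, competences):
    """Return resources whose supported competences overlap the ones the
    activity must develop (a crude analogue of an ontology query)."""
    wanted = set(competences)
    return sorted(
        name for name, res in RESOURCES.items() if res["supports"] & wanted
    )

print(select_resources("lab-2-debugging", ["debugging", "testing"]))
# -> ['gdb-sandbox', 'unit-test-rig']
```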
Abstract:
Visualization of high-dimensional data requires a mapping to a visual space. Whenever the goal is to preserve similarity relations, a frequent strategy is to use 2D projections, which afford intuitive interactive exploration, e.g., by users locating and selecting groups and gradually drilling down to individual objects. In this paper, we propose a framework for projecting high-dimensional data to 3D visual spaces, based on a generalization of the Least-Square Projection (LSP). We compare projections to 2D and 3D visual spaces both quantitatively and through a user study considering certain exploration tasks. The quantitative analysis confirms that 3D projections outperform 2D projections in terms of precision. The user study indicates that certain tasks can be more reliably and confidently answered with 3D projections. Nonetheless, as 3D projections are displayed on 2D screens, interaction is more difficult. Therefore, we incorporate suitable interaction functionalities into a framework that supports 3D transformations, predefined optimal 2D views, coordinated 2D and 3D views, and hierarchical 3D cluster definition and exploration. For visually encoding data clusters in a 3D setup, we employ color coding of projected data points as well as four types of surface renderings. A second user study evaluates the suitability of these visual encodings. Several examples illustrate the framework's applicability to both visual exploration of multidimensional abstract (non-spatial) data and the feature space of multi-variate spatial data.
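Precision comparisons between 2D and 3D projections are typically based on neighborhood preservation: how many of a point's nearest neighbors in the original space remain neighbors after projection. A sketch of one such measure (an illustrative choice; the paper's exact metric may differ):

```python
import numpy as np

def neighborhood_preservation(X_high, X_low, k=10):
    """Fraction of each point's k nearest neighbours in the original
    space that are also among its k nearest neighbours after projection.
    One common precision-style measure for comparing 2D vs 3D layouts."""
    def knn(X):
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(D, np.inf)
        return np.argsort(D, axis=1)[:, :k]
    nn_hi, nn_lo = knn(X_high), knn(X_low)
    overlap = [len(set(a) & set(b)) / k for a, b in zip(nn_hi, nn_lo)]
    return float(np.mean(overlap))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))   # synthetic high-dimensional data
P3 = X[:, :3]                    # toy stand-ins for 3D and 2D projections
P2 = X[:, :2]
print("3D:", neighborhood_preservation(X, P3))
print("2D:", neighborhood_preservation(X, P2))
```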
Abstract:
In Information Visualization, adding and removing data elements can strongly impact the underlying visual space. We have developed an inherently incremental technique (incBoard) that maintains a coherent disposition of elements from a dynamic multidimensional data set on a 2D grid as the set changes. Here, we introduce a novel layout that uses pairwise similarity from grid neighbors, as defined in incBoard, to reposition elements on the visual space, free from constraints imposed by the grid. The board continues to be updated and can be displayed alongside the new space. As similar items are placed together, while dissimilar neighbors are moved apart, it supports users in the identification of clusters and subsets of related elements. Densely populated areas identified in the incSpace can be efficiently explored with the corresponding incBoard visualization, which is not susceptible to occlusion. The solution remains inherently incremental and maintains a coherent disposition of elements, even for fully renewed sets. The algorithm considers relative positions for the initial placement of elements, and raw dissimilarity to fine-tune the visualization. It has low computational cost, with complexity depending only on the size of the currently viewed subset, V. Thus, a data set of size N can be sequentially displayed in O(N) time, reaching O(N^2) only if the complete set is simultaneously displayed.
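One simple way to picture the grid-free layout is a spring model: each element is moved so that its screen distance to each grid neighbor approaches their raw dissimilarity, similar items pulling together and dissimilar ones pushing apart. The sketch below implements that reading; the authors' actual update rule may differ.

```python
import numpy as np

def refine_positions(pos, neighbors, dissim, iters=50, lr=0.05):
    """Free the points from the grid: move each element so its distance
    to each grid neighbour approaches their raw dissimilarity (a
    spring-model sketch of the idea, not the authors' exact update).

    pos       : (n, 2) initial positions taken from the incBoard grid
    neighbors : list of index lists, grid neighbours of each element
    dissim    : (n, n) raw dissimilarity matrix
    """
    pos = pos.astype(float).copy()
    for _ in range(iters):
        for i, nbrs in enumerate(neighbors):
            for j in nbrs:
                delta = pos[j] - pos[i]
                d = np.linalg.norm(delta) + 1e-9
                # positive when too far apart, negative when too close
                pos[i] += lr * (d - dissim[i, j]) * delta / d
    return pos

# Three elements on a unit grid; 0 and 1 are similar, 2 is an outlier.
pos0 = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
dis = np.array([[0.0, 0.3, 2.5],
                [0.3, 0.0, 2.5],
                [2.5, 2.5, 0.0]])
nbrs = [[1], [0, 2], [1]]
print(refine_positions(pos0, nbrs, dis))
```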
Abstract:
While watching TV, viewers use the remote control to turn the TV set on and off, change the channel and volume, adjust the image and audio settings, etc. Worldwide, research institutes collect information for audience measurement, which can also be used to provide personalization and recommendation services, among others. Interactive digital TV offers viewers the opportunity to interact with interactive applications associated with the broadcast program, and its infrastructure supports capturing the user-TV interaction at fine-grained levels. In this paper we propose capturing all user interaction with the TV remote control, including short-term and instant interactions: we argue that the corresponding captured information can be used to create content pervasively and automatically, and that this content can be used by a wide variety of services, such as audience measurement, personalization and recommendation services. The capture of fine-grained data about instant and interval-based interactions also allows the underlying infrastructure to offer services at the same scale, such as annotation services and adaptive applications. We present the main modules of an infrastructure for TV-based services, along with a detailed example of a document used to record the user-remote control interaction. Our approach is evaluated by means of a proof-of-concept prototype which uses the Brazilian Digital TV System and the Ginga-NCL middleware.
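The abstract mentions a document that records the user-remote control interaction but does not reproduce its schema. The sketch below shows one plausible shape for a captured event, covering both instant and interval-based interactions; every field name is invented for illustration.

```python
import json, time

# Hypothetical shape of one captured remote-control event; field names
# are invented for illustration, not the paper's actual document schema.
def capture_event(user_id, key, channel, kind="instant", duration_s=None):
    return {
        "user": user_id,
        "key": key,                # e.g. "VOL_UP", "CH_NEXT", "OK"
        "channel": channel,        # channel tuned when the key was pressed
        "kind": kind,              # "instant" or "interval"
        "duration_s": duration_s,  # set for interval-based interactions
        "timestamp": time.time(),
    }

log = [
    capture_event("viewer-42", "CH_NEXT", 7),
    capture_event("viewer-42", "VOL_UP", 8, kind="interval", duration_s=1.4),
]
print(json.dumps(log, indent=2))
```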
Abstract:
Multidimensional Visualization techniques are invaluable tools for analysis of structured and unstructured data with variable dimensionality. This paper introduces PEx-Image (Projection Explorer for Images), a tool aimed at supporting analysis of image collections. The tool supports a methodology that employs interactive visualizations to aid user-driven feature detection and classification tasks, thus offering improved analysis and exploration capabilities. The visual mappings employ similarity-based multidimensional projections and point placement to lay out the data on a plane for visual exploration. In addition to its application to image databases, we also illustrate how the proposed approach can be successfully employed in the simultaneous analysis of different data types, such as text and images, offering a common visual representation for data expressed in different modalities.
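The pipeline sketched in the abstract (extract features from each image, compute similarities, place points on a plane) can be illustrated end to end with a toy feature extractor and classical MDS standing in for the tool's own projection technique (both are illustrative substitutes, not PEx-Image's actual methods):

```python
import numpy as np

def color_histogram(img, bins=8):
    """Toy feature extractor: a normalised grey-level histogram.
    PEx-Image supports richer, user-driven features; this is only a
    minimal stand-in to make the pipeline runnable."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def classical_mds(D, dims=2):
    """Project from a distance matrix with classical MDS, used here as
    a generic similarity-based projection (not the tool's own method)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J          # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dims]  # keep the top eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

rng = np.random.default_rng(1)
images = [rng.integers(0, 256, size=(32, 32)) for _ in range(10)]
feats = np.array([color_histogram(im) for im in images])
D = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
print(classical_mds(D))  # one 2D point per image, similar images nearby
```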
Abstract:
Pervasive and ubiquitous computing has motivated research on multimedia adaptation, which aims to match video quality to user needs and device restrictions. This technique has a high computational cost, which needs to be studied and estimated when designing architectures and applications. This paper presents an analytical model to quantify these video transcoding costs in a hardware-independent way. The model was used to analyze the impact of transcoding delays on end-to-end live-video transmissions over LANs, MANs and WANs. Experiments confirm that the proposed model helps to define the best transcoding architecture for different scenarios.
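The abstract does not reproduce the analytical model, but its use can be illustrated with a deliberately simple delay budget: per-frame transcoding time modeled as linear in frame size, plus transmission and propagation delay for each network class. All coefficients below are invented, not the paper's calibrated values:

```python
# Hedged sketch of an end-to-end delay budget for live video with an
# in-path transcoder. The linear cost model and every coefficient are
# illustrative assumptions, not the paper's calibrated model.

def transcode_delay_s(pixels_per_frame, cost_per_pixel_s=4e-8):
    """Per-frame transcoding time, modelled as linear in frame size."""
    return pixels_per_frame * cost_per_pixel_s

def network_delay_s(frame_bits, bandwidth_bps, propagation_s):
    return frame_bits / bandwidth_bps + propagation_s

def end_to_end_s(width, height, bits_per_frame, bw, prop):
    return transcode_delay_s(width * height) + network_delay_s(
        bits_per_frame, bw, prop)

for name, bw, prop in [("LAN", 100e6, 0.0005),
                       ("MAN", 20e6, 0.005),
                       ("WAN", 5e6, 0.05)]:
    d = end_to_end_s(1280, 720, 2e6, bw, prop)
    print(f"{name}: {d * 1000:.1f} ms per frame")
```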
Abstract:
Modern database applications are increasingly employing database management systems (DBMS) to store multimedia and other complex data. To adequately support the queries required to retrieve these kinds of data, the DBMS needs to answer similarity queries. However, the standard structured query language (SQL) does not provide effective support for such queries. This paper proposes an extension to SQL that seamlessly integrates syntactical constructions to express similarity predicates into the existing SQL syntax, and describes the implementation of a similarity retrieval engine that allows posing similarity queries using the language extension in a relational DBMS. The engine allows the evaluation of every aspect of the proposed extension, including the data definition language and data manipulation language statements, and employs metric access methods to accelerate the queries. Copyright (c) 2008 John Wiley & Sons, Ltd.
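The abstract does not quote the extended syntax, so the snippet below only conveys the flavor of a similarity predicate embedded in SQL; the keywords are invented for this sketch, and the paper defines its own constructs, which its engine rewrites into metric-access-method lookups:

```python
# Illustration of the *flavour* of a similarity extension to SQL.
# The keywords below (NEAR, BY METRIC, STOP AFTER) are invented for this
# sketch; the paper defines its own syntax, executed inside the DBMS.

def knn_query(table, feature_col, k, metric="euclidean"):
    """Compose a hypothetical k-nearest-neighbour similarity query."""
    return (
        f"SELECT * FROM {table} "
        f"WHERE {feature_col} NEAR ? "
        f"BY METRIC {metric} STOP AFTER {k}"
    )

print(knn_query("exams", "xray_features", k=5))
# A conventional DBMS would reject this; the described engine rewrites
# such predicates into metric-access-method lookups before execution.
```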
Abstract:
Techniques devoted to generating triangular meshes from intensity images either take as input a segmented image or generate a mesh without distinguishing individual structures contained in the image. These facts may cause difficulties in using such techniques in some applications, such as numerical simulations. In this work we reformulate a previously developed technique for mesh generation from intensity images called Imesh. This reformulation makes Imesh more versatile thanks to a unified framework that allows an easy change of refinement metric, rendering it effective for constructing meshes for applications with varied requirements, such as numerical simulation and image modeling. Furthermore, a deeper study of the point insertion problem and the development of a geometric criterion for segmentation are also reported in this paper. Meshes with a theoretical guarantee of quality can also be obtained for each individual image structure as a post-processing step, a characteristic not usually found in other methods. The tests demonstrate the flexibility and the effectiveness of the approach.
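The practical payoff of the unified framework is a pluggable refinement metric: a predicate that decides, per triangle, whether the mesh approximates the image well enough there. A sketch of one such metric, refining wherever a planar fit to the intensities inside a triangle leaves too much error (threshold and metric are illustrative choices, not the paper's settings):

```python
import numpy as np

def fit_plane(tri_pixels):
    """Least-squares plane z = a*x + b*y + c over (x, y, intensity) rows."""
    A = np.c_[tri_pixels[:, :2], np.ones(len(tri_pixels))]
    coef, *_ = np.linalg.lstsq(A, tri_pixels[:, 2], rcond=None)
    return coef

def needs_refinement(tri_pixels, tri_plane, threshold=12.0):
    """Pluggable refinement metric in the spirit of the unified Imesh
    framework: refine a triangle when a planar fit to the image
    intensities inside it leaves too much error."""
    a, b, c = tri_plane
    predicted = a * tri_pixels[:, 0] + b * tri_pixels[:, 1] + c
    return np.max(np.abs(tri_pixels[:, 2] - predicted)) > threshold

# A flat region fits a plane well (no refinement); an edge does not.
xs, ys = np.meshgrid(np.arange(8), np.arange(8))
flat = np.c_[xs.ravel(), ys.ravel(), np.full(64, 100.0)]
edge = np.c_[xs.ravel(), ys.ravel(), np.where(xs.ravel() < 4, 0.0, 255.0)]
print(needs_refinement(flat, fit_plane(flat)))  # False
print(needs_refinement(edge, fit_plane(edge)))  # True
```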
Abstract:
Point placement strategies aim at mapping data points represented in higher dimensions to bi-dimensional spaces and are frequently used to visualize relationships amongst data instances. They have been valuable tools for analysis and exploration of data sets of various kinds. Many conventional techniques, however, do not behave well when the number of dimensions is high, as in the case of document collections. Later approaches handle that shortcoming, but may cause too much clutter to allow flexible exploration to take place. In this work we present a novel hierarchical point placement technique that is capable of dealing with these problems. While good grouping and separation of data with high similarity is maintained without increasing computation cost, its hierarchical structure lends itself both to exploration at various levels of detail and to handling data in subsets, improving analysis capability and also allowing manipulation of larger data sets.
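The hierarchical idea can be pictured as placement by levels: form groups first, lay out group representatives, then place each group's members around their representative, so exploration can drill down one level at a time. A two-level toy sketch of this (the paper's actual placement and refinement steps are more elaborate):

```python
import numpy as np

def hierarchical_placement(X, n_clusters=4, seed=0):
    """Two-level sketch of hierarchical point placement: lay out cluster
    centers first, then scatter each cluster's members around its center.
    This only illustrates the explore-by-level idea."""
    rng = np.random.default_rng(seed)
    # Crude k-means-style grouping forms the hierarchy's first level.
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(10):
        labels = np.argmin(
            np.linalg.norm(X[:, None] - centers[None, :], axis=-1), axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    # Level 1: centers on a circle; level 2: members jittered around them.
    angles = np.linspace(0, 2 * np.pi, n_clusters, endpoint=False)
    anchor = np.c_[np.cos(angles), np.sin(angles)] * 10.0
    placed = anchor[labels] + rng.normal(scale=0.8, size=(len(X), 2))
    return placed, labels

X = np.random.default_rng(2).normal(size=(100, 20))
pos2d, labels = hierarchical_placement(X)
print(pos2d.shape, np.bincount(labels))
```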
Abstract:
The problem of projecting multidimensional data into lower dimensions has been pursued by many researchers due to its potential application to data analyses of various kinds. This paper presents a novel multidimensional projection technique based on least square approximations. The approximations compute the coordinates of a set of projected points based on the coordinates of a reduced number of control points with defined geometry. We name the technique Least Square Projection (LSP). From an initial projection of the control points, LSP defines the positioning of their neighboring points through a numerical solution that aims at preserving a similarity relationship between the points, given by a metric in the m-dimensional space. In order to perform the projection, a small number of distance calculations are necessary, and no repositioning of the points is required to obtain a final solution with satisfactory precision. The results show the capability of the technique to form groups of points by degree of similarity in 2D. We illustrate that capability through its application to mapping collections of textual documents from varied sources, a strategic yet difficult application. LSP is faster and more accurate than other existing high-quality methods, particularly for mapping text sets, where it was most extensively tested.
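The construction described above maps to a sparse linear system: each projected point is asked to sit at the average of its high-dimensional neighbors, and the control points are pinned at their given coordinates. A dense NumPy sketch of that system follows; the neighborhood size and constraint weight are illustrative choices:

```python
import numpy as np

def lsp(X, ctrl_idx, ctrl_pos, k=8, w=10.0):
    """Least Square Projection sketch: each projected point should be the
    average of its k nearest (high-dimensional) neighbours, with control
    points pinned to their given 2D coordinates. Follows the construction
    described in the abstract; k and the constraint weight w are
    illustrative parameter choices."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    nbrs = np.argsort(D, axis=1)[:, :k]

    L = np.zeros((n + len(ctrl_idx), n))
    for i in range(n):
        L[i, i] = 1.0
        L[i, nbrs[i]] = -1.0 / k          # x_i - mean(neighbours) = 0
    b = np.zeros((n + len(ctrl_idx), 2))
    for r, (i, p) in enumerate(zip(ctrl_idx, ctrl_pos)):
        L[n + r, i] = w                   # pin control point i at p
        b[n + r] = w * np.asarray(p)

    Y, *_ = np.linalg.lstsq(L, b, rcond=None)
    return Y

# Two well-separated 10-D blobs; one control point per blob, placed apart.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (30, 10)), rng.normal(6, 1, (30, 10))])
Y = lsp(X, ctrl_idx=[0, 30], ctrl_pos=[(-5, 0), (5, 0)])
print(Y[:3], Y[30:33])  # each group lands around its control point
```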