986 results for Application programming interfaces (API)
Abstract:
Applications are subject to a continuous evolution process that has a profound impact on their underlying data model, hence requiring frequent updates to the applications' class structure and to the database structure as well. This twofold problem, schema evolution and instance adaptation, usually known as database evolution, is addressed in this thesis. Additionally, we address concurrency and error recovery problems with a novel meta-model and its aspect-oriented implementation. Modern object-oriented databases provide features that help programmers deal with object persistence, as well as with related problems such as database evolution, concurrency and error handling. Most systems offer transparent mechanisms to address these problems; nonetheless, the database evolution problem still requires human intervention, which consumes much of programmers' and database administrators' effort. Earlier research has demonstrated that aspect-oriented programming (AOP) techniques enable the development of flexible and pluggable systems. In those earlier works, the schema evolution and instance adaptation problems were addressed as database management concerns; however, none of that research focused on orthogonal persistent systems. We argue that AOP techniques are well suited to address these problems in orthogonal persistent systems. Regarding concurrency and error recovery, earlier research showed that only syntactic obliviousness between the base program and aspects is possible. Our meta-model and framework follow an aspect-oriented approach focused on the object-oriented orthogonal persistent context. The proposed meta-model is characterized by its simplicity, in order to achieve efficient and transparent database evolution mechanisms. It supports multiple versions of a class structure by applying a class versioning strategy, thus enabling bidirectional application compatibility among versions of each class structure. That is, the database structure can be updated while earlier applications continue to work, as do later applications that know only the updated class structure. The specific characteristics of orthogonal persistent systems, together with a metadata enrichment strategy within the application's source code, complete the inception of the meta-model and motivated our research. To test the feasibility of the approach, a prototype was developed. It is a framework that mediates the interaction between applications and the database, providing them with orthogonal persistence mechanisms. These mechanisms are introduced into applications as an aspect, in the aspect-oriented sense. Objects are not required to extend any superclass, to implement any interface, or to carry any particular annotation. Classes with parametric types are also handled correctly by our framework; however, classes that belong to the programming environment must not be treated as versionable, due to restrictions imposed by the Java Virtual Machine. Regarding concurrency support, the framework provides applications with a multithreaded environment that supports database transactions and error recovery. The framework keeps applications oblivious to the database evolution problem, as well as to persistence itself. Programmers can update an application's class structure, and the framework will produce a new version of it at the database metadata layer.
Using our XML-based pointcut/advice constructs, the framework's instance adaptation mechanism can be extended, keeping the framework oblivious to this problem as well. The potential development gains provided by the prototype were benchmarked. In our case study, the results confirm that the transparency of these mechanisms has positive repercussions on programmer productivity, simplifying the entire evolution process at both the application and database levels. The meta-model itself was also benchmarked in terms of complexity and agility; compared with other meta-models, it requires fewer meta-object modifications in each schema evolution step. Further tests were carried out to validate the robustness of the prototype and the meta-model; for these we used a small-size OO7 database, chosen for the complexity of its data model. Since the developed prototype offers features not observed in other known systems, performance benchmarks against them were not possible; however, the developed benchmark is now available for future performance comparisons with equivalent systems. To test our approach in a real-world scenario, we developed a proof-of-concept application, initially without any persistence mechanisms; using our framework and minor changes to the application's source code, we then added those mechanisms. Furthermore, we tested the application in a schema evolution scenario. This real-world experience with our framework showed that applications remain oblivious to persistence and database evolution. In this case study, our framework proved to be a useful tool for programmers and database administrators. Performance issues and the single Java Virtual Machine concurrency model are the major limitations found in the framework.
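[Editor's note] To illustrate the class-versioning strategy this abstract describes, the following minimal sketch keeps several versions of a class's schema in a registry and adapts stored instances between versions in either direction. It is written in Python purely for illustration (the actual framework targets Java and AspectJ), and every name in it is hypothetical:

    # Hypothetical sketch of class versioning with bidirectional instance adaptation.
    # Each class may have several schema versions; adapters convert stored
    # field dictionaries between consecutive versions in either direction.
    class VersionRegistry:
        def __init__(self):
            self.versions = {}   # class name -> list of field-name tuples
            self.adapters = {}   # (class, from_version, to_version) -> function

        def register_version(self, cls_name, fields):
            self.versions.setdefault(cls_name, []).append(tuple(fields))
            return len(self.versions[cls_name]) - 1  # new version number

        def register_adapter(self, cls_name, src, dst, fn):
            self.adapters[(cls_name, src, dst)] = fn

        def adapt(self, cls_name, data, src, dst):
            # Walk one version at a time, so only neighbouring adapters are needed.
            step = 1 if dst > src else -1
            while src != dst:
                data = self.adapters[(cls_name, src, src + step)](data)
                src += step
            return data

    registry = VersionRegistry()
    registry.register_version("Person", ["name"])                   # version 0
    registry.register_version("Person", ["first_name", "surname"])  # version 1
    registry.register_adapter("Person", 0, 1,
        lambda d: dict(zip(("first_name", "surname"), d["name"].split(" ", 1))))
    registry.register_adapter("Person", 1, 0,
        lambda d: {"name": d["first_name"] + " " + d["surname"]})

    old_record = {"name": "Ada Lovelace"}              # stored by a version-0 application
    print(registry.adapt("Person", old_record, 0, 1))  # a version-1 view of the same object

Because adapters exist in both directions, applications written against either version can read objects stored by the other, which is the bidirectional compatibility the abstract claims.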
Abstract:
Internship report submitted to obtain the Master's degree in Education and Multimedia Communication.
Abstract:
Dissertation presented to the Instituto Politécnico de Castelo Branco in fulfilment of the requirements for the degree of Master in Software Development and Interactive Systems, carried out under the scientific supervision of Doctor Pedro Nuno Moreira da Silva, Adjunct Professor in the Technical-Scientific Unit of Informatics of the Department of the Escola Superior de Tecnologia of the Instituto Politécnico de Castelo Branco.
Abstract:
Humans have a remarkable ability to extract information from visual data acquired by sight. Through a learning process that starts at birth and continues throughout life, image interpretation becomes almost instinctive. At a glance, one can easily describe a scene with reasonable precision, naming its main components. Usually, this is done by extracting low-level features such as edges, shapes and textures, and associating them with high-level meanings; in this way, a semantic description of the scene is produced. An example of this is the human capacity to recognize and describe other people's physical and behavioral characteristics, or biometrics. Soft biometrics also represent inherent characteristics of the human body and behaviour, but do not allow unique identification of a person. The computer vision field aims to develop methods capable of performing visual interpretation with performance similar to that of humans. This thesis proposes computer vision methods that allow high-level information to be extracted from images in the form of soft biometrics. The problem is approached in two ways: with unsupervised and with supervised learning methods. The first seeks to group images via automatic feature-extraction learning, using convolution techniques, evolutionary computing and clustering; the images employed in this approach contain faces and people. The second approach employs convolutional neural networks, which can operate on raw images, learning both the feature-extraction and the classification processes. Here, images are classified according to gender and clothing, divided into the upper and lower parts of the human body. The first approach, tested with different image datasets, obtained an accuracy of approximately 80% for faces versus non-faces and 70% for people versus non-people. The second, tested on images and videos, obtained an accuracy of about 70% for gender, 80% for upper-body clothing and 90% for lower-body clothing. The results of these case studies show that the proposed methods are promising, enabling automatic high-level annotation of images. This opens possibilities for applications in diverse areas, such as content-based image and video retrieval and automatic video surveillance, reducing the human effort spent on manual annotation and monitoring.
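[Editor's note] As a rough indication of the supervised approach described above, the sketch below (PyTorch) defines a small convolutional network that maps raw images to a binary soft-biometric label such as gender. The architecture, layer sizes and input resolution are illustrative assumptions, not the thesis's actual model:

    # Minimal CNN sketch for a binary soft-biometric label (e.g. gender).
    # Architecture and sizes are invented for illustration.
    import torch
    import torch.nn as nn

    class SoftBiometricCNN(nn.Module):
        def __init__(self, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                      # 64x64 -> 32x32
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                      # 32x32 -> 16x16
            )
            self.classifier = nn.Linear(32 * 16 * 16, num_classes)

        def forward(self, x):
            x = self.features(x)             # learned feature extraction
            return self.classifier(x.flatten(1))  # learned classification

    model = SoftBiometricCNN()
    batch = torch.randn(4, 3, 64, 64)   # four fake 64x64 RGB images
    logits = model(batch)               # shape: (4, 2)
    print(logits.shape)

The point the abstract makes is visible in the structure: feature extraction and classification are both learned end to end from raw pixels, with no hand-crafted edge, shape or texture descriptors.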
Abstract:
This document presents GEmSysC, a unified cryptographic API for embedded systems. Software layers implementing this API can be built over existing libraries, allowing embedded software to access cryptographic functions in a consistent way that does not depend on the underlying library. The API complies with good practices for API design and for embedded software development, and took its inspiration from other cryptographic libraries and standards. The main inspiration for creating GEmSysC was the CMSIS-RTOS standard, which defines a unified API for embedded software in an implementation-independent way, but targets operating systems instead of cryptographic functions. GEmSysC is made of a generic core and attachable modules, one for each cryptographic algorithm. This document contains the specification of the core of GEmSysC and three of its modules: AES, RSA and SHA-256. GEmSysC was built targeting embedded systems, but this does not restrict its use to such systems – after all, embedded systems are just very limited computing devices. As a proof of concept, two implementations of GEmSysC were made. One of them was built over wolfSSL, an open-source library for embedded systems. The other was built over OpenSSL, which is open source and a de facto standard; unlike wolfSSL, OpenSSL does not specifically target embedded systems. The implementation built over wolfSSL was evaluated on a Cortex-M3 processor with no operating system, while the implementation built over OpenSSL was evaluated on a personal computer running the Windows 10 operating system. This document presents test results showing GEmSysC to be simpler than other libraries in some respects. The results show that both implementations incur little computation-time overhead compared to the underlying cryptographic libraries themselves: measured per cryptographic algorithm, the overhead is between roughly 0% and 0.17% for the implementation over wolfSSL and between 0.03% and 1.40% for the one over OpenSSL. This document also presents the memory costs of each implementation.
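[Editor's note] To give a flavour of the core-plus-modules, library-independent design described above, here is a thin facade sketched in Python with invented names (GEmSysC itself is specified as a C API; only the SHA-256 backend below, based on Python's standard hashlib, is real):

    # Illustrative facade: one stable hashing API over swappable backends.
    # GEmSysC's actual interface is in C; names and structure here are hypothetical.
    import hashlib

    class Sha256Backend:
        """Backend adapter; a wolfSSL- or OpenSSL-backed version would match this shape."""
        def digest(self, data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

    class CryptoFacade:
        def __init__(self, sha256_backend):
            # An attachable module, echoing GEmSysC's core-plus-modules design.
            self._sha256 = sha256_backend

        def sha256(self, data: bytes) -> bytes:
            # Callers depend only on this signature, never on the backend library.
            return self._sha256.digest(data)

    crypto = CryptoFacade(Sha256Backend())
    print(crypto.sha256(b"hello").hex())

Swapping the backend changes no calling code, which is exactly the consistency property the abstract claims for layers built over wolfSSL or OpenSSL.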
Abstract:
A mobile application has been designed that allows the risk of melanoma to be assessed by analysing a photograph. This report documents the implementation of a server application that forms part of an eHealth solution whose client, an Android application, had already been developed in a previous final-year project. The server application exposes a REST web-services API and presents a dynamically extensible architecture through the implementation of a plug-in pattern.
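[Editor's note] As a generic illustration of the plug-in pattern mentioned above (the report does not describe its server at this level; all names below are hypothetical), plug-ins register handlers with the server core, which dispatches REST requests to them without being modified itself:

    # Generic plug-in registry sketch: the core discovers handlers at runtime.
    PLUGINS = {}

    def plugin(route):
        """Decorator that registers a handler for a REST route."""
        def register(fn):
            PLUGINS[route] = fn
            return fn
        return register

    @plugin("/api/risk")
    def melanoma_risk(photo_bytes):
        # A real plug-in would run image analysis here.
        return {"risk": "low", "analyzed_bytes": len(photo_bytes)}

    def dispatch(route, payload):
        # The core knows nothing about individual plug-ins.
        return PLUGINS[route](payload)

    print(dispatch("/api/risk", b"\x89PNG..."))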
Abstract:
People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time-dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction, because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming, so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which has implications for traffic simulation as well. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of their correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into the usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
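[Editor's note] As background on why dynamic programming is central here (a standard formulation for this class of models, not a result specific to this thesis): with i.i.d. extreme value error terms of scale mu, the expected value function V(k) at state k, with action set A(k) and instantaneous utilities v(a|k), satisfies the logsum recursion, and choice probabilities take a logit form:

    V(k) = \mu \ln \sum_{a \in A(k)} e^{\left( v(a \mid k) + V(a) \right)/\mu},
    \qquad
    P(a \mid k) = \frac{e^{\left( v(a \mid k) + V(a) \right)/\mu}}{\sum_{a' \in A(k)} e^{\left( v(a' \mid k) + V(a') \right)/\mu}}.

Solving this system of equations over the network yields all choice probabilities at once, which is what makes the approach attractive for both time-dependent choices and large-scale static ones.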
Abstract:
Policy and decision makers dealing with environmental conservation and land-use planning often need to identify potential sites that can contribute to minimizing the sediment flow reaching riverbeds. This is the case for reforestation initiatives, which can have sediment flow minimization among their objectives. This paper proposes an Integer Programming (IP) formulation and a heuristic solution method for selecting a predefined number of locations to be reforested so as to minimize the sediment load at a given outlet in a watershed. Although the core structure of both methods can be applied to different sorts of flow, the formulations are targeted at the minimization of sediment delivery. The proposed approaches use a Single Flow Direction (SFD) raster map covering the watershed to construct a tree structure in which the outlet cell corresponds to the root node. The results obtained with both approaches are in agreement with expert assessments of erosion levels, slopes and distances to the riverbeds, which in turn allows us to conclude that the approach is suitable for minimizing sediment flow. Since the results obtained with the IP formulation are the same as those obtained with the heuristic approach, an optimality proof is included in the present work. Taking into consideration that the heuristic requires much less computation time, it is the more suitable solution method for large problem instances.
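[Editor's note] To make the tree construction concrete, here is a deliberately simplified sketch: in an SFD raster, each cell drains to exactly one downstream neighbour, so following those pointers yields a tree rooted at the outlet, along which sediment contributions can be propagated. The data, and the assumption that a reforested cell retains a fixed fraction of the sediment passing through it, are invented for illustration; the paper's formulation is richer than this:

    # Simplified SFD sketch: every cell drains to exactly one downstream cell,
    # so the watershed is a tree rooted at the outlet. All numbers and the
    # retention model are illustrative, not the paper's.
    downstream = {"A": "C", "B": "C", "C": "outlet", "D": "outlet"}
    produced = {"A": 5.0, "B": 3.0, "C": 2.0, "D": 4.0}
    RETENTION = 0.5   # fraction retained by a reforested cell

    def load_at_outlet(protected=frozenset()):
        total = 0.0
        for cell, amount in produced.items():
            # Follow the flow path from the producing cell down to the outlet.
            node = cell
            while node != "outlet":
                if node in protected:
                    amount *= (1.0 - RETENTION)
                node = downstream[node]
            total += amount
        return total

    print(load_at_outlet())                 # 14.0: no reforestation
    print(load_at_outlet(protected={"C"}))  # 9.0: cell C filters A's, B's and its own load

Selecting a fixed number of cells to "protect" so that this outlet load is minimal is the optimization problem the IP formulation and the heuristic solve.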
Abstract:
During the lifetime of a research project, different partners develop several research prototype tools that share many common aspects. This is equally true for researchers as individuals and as groups: over a period of time they often develop several related tools to pursue a specific research line. Making research prototype tools easily accessible to the community is of utmost importance to promote the corresponding research, get feedback, and increase the tools' lifetime beyond the duration of a specific project. One way to achieve this is to build graphical user interfaces (GUIs) that facilitate trying the tools; in particular, with web interfaces one avoids the overhead of downloading and installing them. Building GUIs from scratch is a tedious task, in particular for web interfaces, and thus it typically gets low priority when developing a research prototype. Often we opt for copying the GUI of one tool and modifying it to fit the needs of a new related tool. Apart from code duplication, these tools will then “live” separately, even though we might benefit from having them all in a common environment, since they are related. This work aims at simplifying the process of building GUIs for research prototype tools. In particular, we present EasyInterface, a toolkit based on a novel methodology that provides an easy way to make research prototype tools available via different common environments, such as a web interface, within Eclipse, etc. It includes a novel text-based output language that allows results to be presented graphically without requiring any knowledge of GUI/web programming. For example, an output of a tool could be (a structured version of) “highlight line number 10 of file ex.c” and “when the user clicks on line 10, open a dialog box with the text ...”. The environment interprets this output and converts it into the corresponding visual effects. The advantage of this approach is that the output is interpreted uniformly by all environments of EasyInterface, e.g., the web interface, the Eclipse plugin, etc. EasyInterface has been developed in the context of the Envisage [5] project and has been evaluated on tools developed in that project, which include static analyzers, test-case generators, compilers, simulators, etc. EasyInterface is open source and available on GitHub.
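[Editor's note] The abstract's example of the output language might be rendered, in spirit, as a structured command stream like the following. This is a hypothetical rendering in Python literals; EasyInterface defines its own concrete syntax, which may differ:

    # Hypothetical rendering of the structured output described above.
    # The tool emits declarative commands; each environment (web interface,
    # Eclipse plugin, ...) interprets them with its own widgets.
    tool_output = [
        {"command": "highlight", "file": "ex.c", "line": 10},
        {"command": "on-click", "file": "ex.c", "line": 10,
         "action": {"command": "dialog", "text": "..."}},
    ]

    def render(commands):
        for cmd in commands:
            print("environment interprets:", cmd["command"], "->", cmd)

    render(tool_output)

Because the commands are declarative and environment-agnostic, the same tool output produces equivalent visual effects in every EasyInterface environment.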
Abstract:
The liver is an important metabolic and endocrine organ in the fetus, but the extent to which its hormone receptor (R) sensitivity is developmentally regulated in early life is not fully established. We therefore examined developmental changes in mRNA abundance for the growth hormone (GH) and prolactin (PRL) receptors, plus insulin-like growth factor (IGF)-I and -II and their receptors. Fetal and postnatal sheep were sampled at 80 or 140 days of gestation, or at 1 day, 30 days or six months of age. The effect of maternal nutrient restriction between early and mid gestation (i.e. 28 to 80 days of gestation, the time of early liver growth) on gene expression was also examined in the fetus and in juvenile offspring. Gene expression for the GHR, PRLR and IGF-IR increased through gestation, peaking at birth, whereas IGF-I was maximal near to term. In contrast, IGF-II mRNA decreased between mid and late gestation and increased after birth, whereas IGF-IIR remained unchanged. A substantial decline in mRNA abundance for GHR, PRLR and IGF-IR then occurred up to six months of age. Maternal nutrient restriction reduced GHR and IGF-IIR mRNA abundance in the fetus, but caused a precocious increase in the PRLR. Gene expression for IGF-I and -II was increased in juvenile offspring born to nutrient-restricted mothers. In conclusion, there are marked differences in the developmental ontogeny and nutritional programming of specific hormones and their receptors involved in hepatic growth and development in the fetus. These could contribute to changes in liver function during adult life.
Abstract:
This study investigated the developmental and nutritional programming of two important mitochondrial proteins, namely the voltage-dependent anion channel (VDAC) and cytochrome c, in the sheep kidney, liver and lung. The effect of maternal nutrient restriction between early and mid gestation (i.e. 28 to 80 days of gestation, the period of maximal placental growth) on the abundance of these proteins was examined in fetal and juvenile offspring. Fetuses were sampled at 80 and 140 days of gestation (term ~147 days), and postnatal animals at 1 and 30 days and 6 months of age. The abundance of VDAC peaked at 140 days of gestation in the lung, compared with 1 day after birth in the kidney and liver, whereas cytochrome c abundance was greatest at 140 days of gestation in the liver, 1 day after birth in the kidney and 6 months of age in the lung. This differential ontogeny of mitochondrial protein abundance between tissues was accompanied by very different tissue-specific responses to changes in maternal food intake. In the liver, maternal nutrient restriction increased mitochondrial protein abundance only at 80 days of gestation, compared with no effect in the kidney. In contrast, in the lung, mitochondrial protein abundance was raised near to term, whereas VDAC abundance was decreased by 6 months of age. These findings demonstrate the tissue-specific nature of mitochondrial protein development, which reflects differences in functional adaptation after birth. The divergence between tissues in the mitochondrial response to maternal nutrient restriction early in pregnancy further reflects these differential ontogenies.
Abstract:
It is known that most real-life problems involve uncertainty. In the first part of the dissertation, basic concepts and properties of Stochastic Programming, also known as Optimization under Uncertainty, are introduced to the reader. Moreover, since stochastic programs are computationally demanding, we present some simpler related models, such as the wait-and-see model, the expected value model and the expected result of using the expected value solution. The expected value of perfect information (EVPI) and the value of the stochastic solution (VSS) quantify how worthwhile Stochastic Programming is with respect to these other models. In the second part, an application has been designed and implemented with the modelling system GAMS and the optimizer CPLEX that optimizes the distribution of non-perishable products, guaranteeing certain nutritional requirements at minimum cost. It has been developed within the Hazia project, managed by the Sortarazi association and associated with the Food Bank of Biscay and the Basic Social Services of several districts of Biscay.
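[Editor's note] For reference, the standard relations behind these two measures (textbook definitions, not specific to this dissertation): writing WS for the wait-and-see value, RP for the optimal value of the stochastic (recourse) problem, and EEV for the expected result of using the expected value solution, one has, for a minimization problem,

    EVPI = RP - WS \ge 0, \qquad VSS = EEV - RP \ge 0.

A large EVPI indicates that better information about the uncertainty would be valuable, while a large VSS indicates that solving the stochastic program, rather than its deterministic expected value approximation, pays off.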
Abstract:
This thesis has two objectives. The first is to answer the question: how can people with disabilities be included in the regular gyms of the city of Medellín and, through the practice of physical activity, increase their quality of life? The second objective is the application of a design process built from three methodologies: IDEO's HCD Toolkit, Ulrich and Eppinger's methodology for the design and development of new products, and Kevin Otto and Kristin Wood's techniques for reverse engineering and new-product development, its distinguishing feature being the placement of usability and social innovation as a cross-cutting axis throughout the design process. To achieve these objectives, an applied and explanatory investigation was designed, drawing on documentary, field and experimental sources such as books, interviews, findings guided by the community of people with disabilities at the VIVO gym of Universidad EAFIT, and usage tests of the designed products with real users in the real context. Additionally, this thesis was developed under a qualitative action-research approach, maintaining constant contact with the population and case study ("people with disabilities in gyms of the city of Medellín") throughout its execution; however, the product development stage required quantitative methods, so the work has a mixed character. An investigation was carried out into the difficulties and needs that people with disabilities face in accessing a regular gym, by means of a validation exercise in which these users were taken to the VIVO gym of Universidad EAFIT. After analysing this exercise, the need was identified to design a kit of interaction objects that would allow people with disabilities to use the equipment of regular gyms, specifically people with gripping difficulties, such as quadriplegics and people with arthritis, and wheelchair users with balance difficulties when interacting with some of the machines. The design process applied to this development was built by combining the three methodologies mentioned above, with usability and social innovation as the cross-cutting axis of the creation process. Finally, a wheelchair brake was designed, which prevents the chair from tipping the user backwards when force is exerted, together with a glove-like element that replaces the hand's grip and gross pinch. With these objects, most people with difficulties gripping objects or handles can access regular gyms and perform a wide range of exercises with which they can increase the functionality of their bodies and therefore their quality of life.
Abstract:
This thesis examines all the phases that lead to the creation of a generic video game, applying them to build a 3D game with Unity from scratch. It analyses the initial concept, the design of the environments as well as of the implemented algorithms, the production phase and thus the writing of the code, and concludes with the tests that were carried out.