975 results for API (Application Programming Interface)
Abstract:
The persistence concern implemented as an aspect has been studied since the appearance of the Aspect-Oriented paradigm. Persistence is frequently given as an example of a concern that can be aspectized, but to date no real-world solution has applied that paradigm. Such a solution should enhance programmer productivity and make applications less error-prone. To test the viability of this concept, in a previous study we developed a prototype that implements Orthogonal Persistence as an aspect. This first version of the prototype was already fully functional with all Java types, including arrays. In this work we present the results of our new research to overcome some limitations that we identified in the prototype's data type abstraction and transparency. One of our goals was to avoid the standard Java idiom for genericity, based on casts, type tests and subtyping. Moreover, we also found the need to introduce some dynamic data type abilities. We consider reflection to be the solution to these issues. To achieve this, we have extended our prototype with a new static weaver that preprocesses the application source code in order to change the normal behavior of the Java compiler through newly generated reflective code.
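The prototype above is implemented in Java/AspectJ and its source is not reproduced here; the following Python sketch only illustrates the underlying idea of reflection-driven persistence, where an object's fields are discovered and restored by introspection rather than by per-type, cast-based code (all names below are invented for illustration):

```python
# Language-neutral illustration of reflection-driven persistence: object
# fields are discovered and restored by introspection instead of
# per-type, cast-based code. All names here are invented.
import json

def persist(obj):
    """Serialize any object's instance fields, discovered reflectively."""
    return json.dumps({"class": type(obj).__name__, "fields": vars(obj)})

def restore(cls, blob):
    """Rebuild an instance without type-specific loading code."""
    data = json.loads(blob)
    obj = cls.__new__(cls)          # bypass the constructor, as a weaver might
    obj.__dict__.update(data["fields"])
    return obj

class Account:
    def __init__(self, owner, balance):
        self.owner, self.balance = owner, balance

blob = persist(Account("ada", 42))
acc = restore(Account, blob)
print(acc.owner, acc.balance)       # ada 42
```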
Abstract:
Based on the presupposition that the arts in the West have always drawn on the resources, supports, and devices of their time, this work reflects on scenic compositions mediated by digital technologies. Such technologies are embedded in daily routine and also figure in artistic experiments, thus playing a dialogical role at the art/technology intersection. The proposal is therefore to investigate what relationships are established in the contemporary theatrical scene through contagion by digital technologies, aiming to establish this parallel through a dialogue with the authors discussing the subject, and also based on the practices of groups for which technological resources are a determinant factor in their plays. Furthermore, it reflects on the scene that incorporates, or is carried out in, intermedial events, analyzing how digital technologies (re)configure the compositional processes of the plays by GAG Phila7, in the city of São Paulo/SP. To that end, the dissertation is organized in three sections comprising four moments, namely: a brief overview of the field, contextualization, poetic analysis, and synthesis. Qualitative methods make up the methodological proposal: semi-structured interviews, note-taking, and document collection (programs, website, playbook, publicity material, photographs, and videos). Within the universe of qualitative research, it works with the epistemological perspective of Gadamer's philosophical hermeneutics. The possibilities opened up by the virtual double (Internet/web) generated a type of theater with another material basis and new forms of organization and structure; it is possible to perceive that such technological advances and the arts mutually contaminate each other, dislocating the logics of theatrical composition, a movement that began with the artistic avant-gardes and has gradually intensified, offering new possibilities of construction and hybridization of the most diverse types. The experiment "Profanações_superfície de eventos de construção coletiva", conceived by Phila7, is inserted in this perspective. As the object of discussion of this research, the experiment works with the possible poetics arising from the intersection with digital technologies, aiming to identify and problematize the challenges posed by technological evolution and expansion in a scenic context.
Abstract:
The number of pages available on the Web has grown so large that it has become impossible to retrieve information manually, requiring mechanisms that can assist in this process. In this context, search engines can be considered an important category of cyberspace, especially for the field of Information Science, because they concern the organization of knowledge in that environment, to the point that Google has been regarded as the gateway to cyberspace. This highlights the influence that the interfaces of such engines can have on people's information behavior. Recent research shows that in the last few years new elements containing structured data have been inserted into Google's results pages, which may create conditions for changes in user behavior. This work presents the characteristics of eye-tracking technology and its use in User Experience, reporting results obtained through an experimental investigation comparing user behavior on Google and Yahoo results pages. In the tests with Google, participants needed about 30% less time to decide which link to choose. It is believed that participants may have been influenced by the element known as the rich snippet. The results show that the interface was able to influence participants' behavior when choosing the best link, highlighting the importance of results presentation in users' decision-making processes.
Abstract:
Humans have a high ability to extract information from visual data acquired by sight. Through a learning process, which starts at birth and continues throughout life, image interpretation becomes almost instinctive. At a glance, one can easily describe a scene with reasonable precision, naming its main components. Usually, this is done by extracting low-level features such as edges, shapes and textures, and associating them with high-level meanings. In this way, a semantic description of the scene is produced. An example of this is the human capacity to recognize and describe other people's physical and behavioral characteristics, or biometrics. Soft biometrics also represent inherent characteristics of the human body and behavior, but do not allow unique identification of a person. The computer vision field aims to develop methods capable of performing visual interpretation with performance similar to humans. This thesis proposes computer vision methods that allow high-level information extraction from images in the form of soft biometrics. The problem is approached in two ways: unsupervised and supervised learning methods. The first seeks to group images by automatically learning feature extraction, combining convolution techniques, evolutionary computation and clustering; the images employed in this approach contain faces and people. The second approach employs convolutional neural networks, which can operate on raw images, learning both the feature extraction and classification steps. Here, images are classified according to gender and clothing, divided into upper and lower parts of the human body. The first approach, tested on different image datasets, obtained an accuracy of approximately 80% for faces versus non-faces and 70% for persons versus non-persons. The second, tested on images and videos, obtained an accuracy of about 70% for gender, 80% for upper-body clothing and 90% for lower-body clothing. The results of these case studies show that the proposed methods are promising, enabling automatic high-level image annotation. This opens possibilities for applications in diverse areas, such as content-based image and video retrieval and automatic video surveillance, reducing human effort in manual annotation and monitoring tasks.
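The thesis does not specify its network architecture in this abstract; as a minimal sketch of the second approach (a convolutional network that learns feature extraction and classification on raw images, here for the binary gender attribute), assuming PyTorch and an illustrative 64x64 input:

```python
# Minimal sketch of a CNN classifier of the kind described above. The
# thesis's actual architecture is not given here; layer sizes, input
# resolution and class count are illustrative assumptions.
import torch
import torch.nn as nn

class GenderCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Feature extraction learned directly from raw RGB images.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Classification head: two classes (the "gender" attribute).
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 2),
        )

    def forward(self, x):                       # x: (batch, 3, 64, 64)
        return self.classifier(self.features(x))

model = GenderCNN()
logits = model(torch.randn(4, 3, 64, 64))       # 4 dummy 64x64 crops
print(logits.shape)                             # torch.Size([4, 2])
```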
Abstract:
This document describes the work carried out with the company MedSUPPORT[1] in the development of a digital platform for analyzing the satisfaction of health unit patients. Today, assessing customer satisfaction is an important procedure that companies should use as one more tool for evaluating their products or services. For health units, patient satisfaction assessment is currently considered a fundamental objective of health services and has been occupying a progressively more important place in the evaluation of their quality. In this context, the goal was to develop a digital platform for analyzing the satisfaction of health unit patients. The initial study on the concept of consumer and patient satisfaction consolidated the concepts associated with the topic under study. Understanding the eight dimensions that, according to researchers, make up patient satisfaction is one of the relevant points of this initial study. To assess patient satisfaction, it is necessary to question the patient directly. For this purpose, a satisfaction survey was developed, carefully studying each of its elements. The development of the survey followed these steps: planning the questionnaire, going from the eight dimensions of patient satisfaction down to the metrics to be assessed with the patient; analysis of the data to be collected, defining, for each metric, whether the data would be nominal, ordinal or derived from balanced scales; and, finally, careful wording of the survey questions to ensure that the patient perceives the purpose of each question as clearly as possible. The definition of the platform and questionnaire specifications involved several studies, among them a benchmarking analysis[2], which established that the survey will be located in an accessible area of the health unit, answered on a touch screen (tablet), and hosted on the web. Web applications developed today present an appealing and intuitive design, so it was essential to study the design of the web application to ensure that the colors, typography and placement of information are the most adequate. The web application was developed in the Ruby programming language using the Ruby on Rails framework. For the implementation, the different available technologies were studied, with a focus on the database management system to be used. The development of the web application also aimed to improve the management of the information generated by the survey responses. The MedSUPPORT employee is responsible for information management, so their needs were addressed: an information management menu is made available to the application administrator (the MedSUPPORT employee), allowing a simplified analysis of the current state through a dashboard-style panel and, to improve internal data analysis, providing a function to export the data to a spreadsheet. To validate the study, tests of the platform were carried out, covering both its functionality and its use in a real context by the patients surveyed in the health units. The real-context tests aimed to validate the concept with the surveyed patients.
Abstract:
When performing Particle Image Velocimetry (PIV) measurements in complex fluid flows with moving interfaces and a two-phase flow, it is necessary to develop a mask to remove non-physical measurements. This is the case when studying, for example, the complex bubble sweep-down phenomenon observed on oceanographic research vessels. Indeed, in such a configuration, the presence of an unsteady free surface, of a solid–liquid interface and of bubbles in the PIV frame generates numerous laser reflections and therefore spurious velocity vectors. In this note, an image masking process is developed to successively identify the boundaries of the ship and the free surface interface. As the presence of the solid hull surface induces laser reflections, the hull edge contours are simply detected in the first PIV frame and dynamically estimated for consecutive ones. As for the determination of the unsteady surface, a specific process is implemented as follows: i) edge detection on the gradient magnitude in the PIV frame, ii) extraction of the particles by filtering out high-intensity large areas related to the bubbles and/or hull reflections, iii) extraction of the rough region containing these particles and their reflections, iv) removal of these reflections. The unsteady surface is finally obtained with a fifth-order polynomial interpolation. The resulting free surface is successfully validated by Fourier analysis and by visualizing selected PIV images containing numerous spurious high-intensity areas. This paper demonstrates how this data analysis process leads to a PIV image database free of reflections and to an automatic detection of both the free surface and the rigid body. An application of this new mask is finally detailed, allowing a preliminary analysis of the hydrodynamic flow.
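The note's actual implementation and parameter values are not given in this abstract; a minimal Python/OpenCV sketch of steps i)–iv) and the fifth-order polynomial fit, with illustrative thresholds and blob sizes, could look like this:

```python
# Hedged sketch of the free-surface masking steps described above
# (edge detection, removal of large bright reflection areas, 5th-order
# polynomial fit). Thresholds and area limits are illustrative guesses,
# not the authors' values.
import cv2
import numpy as np

def free_surface(frame):
    """frame: 8-bit grayscale PIV image; returns y(x) of the surface."""
    # i) edges of the gradient magnitude
    gx = cv2.Sobel(frame, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(frame, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    edges = cv2.Canny(np.uint8(np.clip(mag, 0, 255)), 50, 150)

    # ii) keep particles: drop high-intensity *large* areas (bubbles,
    # hull reflections) by size-filtering connected components
    _, bright = cv2.threshold(frame, 200, 255, cv2.THRESH_BINARY)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(bright)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] > 50:   # "large" blob: a reflection
            edges[labels == i] = 0            # iv) remove its edges

    # iii) rough surface region: topmost remaining edge pixel per column
    xs, ys = [], []
    for x in range(edges.shape[1]):
        col = np.flatnonzero(edges[:, x])
        if col.size:
            xs.append(x); ys.append(col[0])

    # fifth-order polynomial interpolation of the unsteady surface
    coeffs = np.polyfit(xs, ys, 5)
    return np.polyval(coeffs, np.arange(frame.shape[1]))
```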
Abstract:
A mobile application has been designed that allows the risk of melanoma to be assessed by analyzing a photo. This report documents the implementation of a server application that is part of an eHealth solution whose client, an Android application, was already developed in a previous final-year project. The server application exposes a REST web services API and presents a dynamically extensible architecture through the implementation of a plug-in pattern.
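The report's server stack and routes are not detailed in this summary; as a hedged, language-neutral sketch of a REST service made dynamically extensible through a plug-in pattern (Flask, the /analyze route and the registry below are assumptions, not the project's actual design):

```python
# Sketch of a plug-in-extensible REST service. Plug-ins register an
# analysis callable under a name; the route dispatches to whichever
# plug-in is requested, so new analyzers can be added without touching
# the routing code. All names here are illustrative.
from flask import Flask, request, jsonify

app = Flask(__name__)
PLUGINS = {}                          # name -> analysis callable

def register_plugin(name):
    """Decorator: plug-ins register themselves at import time."""
    def wrap(fn):
        PLUGINS[name] = fn
        return fn
    return wrap

@register_plugin("dummy-risk")
def dummy_risk(image_bytes):
    # Placeholder analyzer; a real plug-in would run image analysis here.
    return {"risk": "low", "score": 0.1}

@app.route("/analyze/<plugin>", methods=["POST"])
def analyze(plugin):
    if plugin not in PLUGINS:
        return jsonify(error="unknown plugin"), 404
    return jsonify(PLUGINS[plugin](request.data))

if __name__ == "__main__":
    app.run()    # POST a photo to /analyze/dummy-risk to exercise it
```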
Abstract:
People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
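To make the dynamic programming step concrete: in a recursive, logit-based route choice model, the expected downstream utility V satisfies a logsum fixed-point equation, V(i) = log Σ_j exp(v(i,j) + V(j)) with V = 0 at the destination, and arc choice probabilities then take a logit form. A toy sketch (the network and arc utilities below are invented, not taken from the thesis):

```python
# Fixed-point (value) iteration for a recursive logit route choice model
# on a toy 4-node network. v[i][j] is the deterministic utility
# (negative cost) of arc i->j; node 3 is the destination with V = 0.
import numpy as np

v = {0: {1: -1.0, 2: -1.5}, 1: {3: -1.0}, 2: {3: -0.5}, 3: {}}
n = 4

V = np.zeros(n)
for _ in range(100):                       # iterate until convergence
    V_new = np.zeros(n)
    for i in range(n - 1):                 # destination keeps V = 0
        V_new[i] = np.logaddexp.reduce([v[i][j] + V[j] for j in v[i]])
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

# arc choice probabilities follow a logit form over outgoing arcs:
# P(j | i) = exp(v(i,j) + V(j) - V(i)), which sums to 1 at each node
probs = {j: np.exp(v[0][j] + V[j] - V[0]) for j in v[0]}
print(V, probs)
```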
Abstract:
The use of mobile devices with larger and better screens, more memory, greater multimedia capabilities and more refined data input methods keeps growing. Devices that integrate communications, internet access and several types of sensors will surely enable innovative and creative approaches to pedagogical activities, in contrast with current uses on personal computers. An analysis of the applications that currently integrate Moodle modules on mobile devices shows that there is still a long way to go. Almost all existing applications merely aim to adapt the interface to mobile devices, which is only the first step towards exploiting the full potential of these devices. It is thus possible to imagine a near future in which the capabilities of mobile devices give rise to applications with enormous learning potential, stemming from the fact that students find connections between their lives and their education by carrying out in-context activities on the ever-present mobile device. This research and development work aims to: a) assess the state of the art of mobile learning in the area of Learning Management Systems (LMS); b) reflect on the functionality that an application for mobile devices should offer, focusing on the Android operating system, to allow the management and updating of Moodle forums and files; c) design and build that application according to the specifications considered relevant; d) evaluate its educational and functional impact. This study demonstrates that the use of mobile devices enhances LMS-based learning, identifying the advantages of such use. It also presents the functionality of the Mais(f) application developed within this research, its evaluation by the study participants, and future perspectives for the use of Mais(f).
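The abstract does not describe the internals of Mais(f); as a hedged sketch of how a mobile client can talk to Moodle at all, this is roughly how course forums can be listed through Moodle's standard web-service REST endpoint (the site URL, token and course id are placeholders, and the function's availability depends on the site's web-service configuration):

```python
# Sketch: querying Moodle's web-service REST API from a client.
# SITE, TOKEN and the course id are placeholders.
import requests

SITE = "https://moodle.example.org"     # placeholder Moodle site
TOKEN = "YOUR_WS_TOKEN"                 # issued by the Moodle admin

def moodle_call(wsfunction, **params):
    params.update(wstoken=TOKEN, wsfunction=wsfunction,
                  moodlewsrestformat="json")
    r = requests.get(f"{SITE}/webservice/rest/server.php", params=params)
    r.raise_for_status()
    return r.json()

# List the forums of course 2 via the mod_forum web-service function.
forums = moodle_call("mod_forum_get_forums_by_courses",
                     **{"courseids[0]": 2})
for f in forums:
    print(f["id"], f["name"])
```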
Abstract:
Policy and decision makers dealing with environmental conservation and land use planning often need to identify potential sites for minimizing the sediment flow reaching riverbeds. This is the case of reforestation initiatives, which can have sediment flow minimization among their objectives. This paper proposes an Integer Programming (IP) formulation and a heuristic solution method for selecting a predefined number of locations to be reforested in order to minimize the sediment load at a given outlet in a watershed. Although the core structure of both methods can be applied to different sorts of flow, the formulations target the minimization of sediment delivery. The proposed approaches use a Single Flow Direction (SFD) raster map covering the watershed to construct a tree structure in which the outlet cell corresponds to the root node. The results obtained with both approaches agree with expert assessments of erosion levels, slopes and distances to the riverbeds, which supports the conclusion that this approach is suitable for minimizing sediment flow. Since the results obtained with the IP formulation are the same as those obtained with the heuristic approach, an optimality proof is included in the present work. Given that the heuristic requires much less computation time, it is the more suitable solution method for large problems.
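The paper's exact IP model is not reproduced in this abstract; as an illustrative sketch only, assume each cell produces a local sediment load routed down the SFD tree to the outlet with a per-hop delivery ratio, and that reforesting a cell removes its own contribution. Under that additive assumption, selecting the k best cells reduces to picking the largest delivered loads, echoing the paper's observation that the heuristic matched the IP optimum. A toy grid:

```python
# Toy 6-cell watershed encoded as an SFD tree: parent[i] is the
# downstream cell of i, and the outlet is its own parent. The loads,
# delivery ratio and additive model are illustrative assumptions.
import numpy as np

parent = [2, 2, 4, 4, 5, 5]                       # cell 5 = outlet/root
load = np.array([3.0, 1.0, 2.0, 5.0, 0.5, 0.0])  # local sediment loads
alpha = 0.9                                       # per-hop delivery ratio

def depth(i):
    d = 0
    while parent[i] != i:
        i, d = parent[i], d + 1
    return d

# delivered[i]: sediment from cell i that actually reaches the outlet
delivered = np.array([load[i] * alpha ** depth(i)
                      for i in range(len(load))])

def greedy_select(k):
    # With additive, independent contributions, taking the k largest
    # delivered loads is optimal for this toy model.
    return list(np.argsort(delivered)[-k:])

chosen = greedy_select(2)
print(chosen, delivered.sum() - delivered[chosen].sum())
```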
Abstract:
During the lifetime of a research project, different partners develop several research prototype tools that share many common aspects. This is equally true for researchers as individuals and as groups: over a period of time they often develop several related tools to pursue a specific research line. Making research prototype tools easily accessible to the community is of utmost importance to promote the corresponding research, get feedback, and increase the tools' lifetime beyond the duration of a specific project. One way to achieve this is to build graphical user interfaces (GUIs) that facilitate trying tools; in particular, with web interfaces one avoids the overhead of downloading and installing the tools. Building GUIs from scratch is a tedious task, in particular for web interfaces, and thus it typically gets low priority when developing a research prototype. Often we opt for copying the GUI of one tool and modifying it to fit the needs of a new related tool. Apart from code duplication, these tools will then "live" separately, even though we might benefit from having them all in a common environment, since they are related. This work aims at simplifying the process of building GUIs for research prototype tools. In particular, we present EasyInterface, a toolkit based on a novel methodology that provides an easy way to make research prototype tools available via different environments such as a web interface, within Eclipse, etc. It includes a novel text-based output language that allows results to be presented graphically without requiring any knowledge of GUI/web programming. For example, an output of a tool could be (a structured version of) "highlight line number 10 of file ex.c" and "when the user clicks on line 10, open a dialog box with the text ...". The environment interprets this output and converts it to the corresponding visual effects. The advantage of this approach is that the output is interpreted equally by all environments of EasyInterface, e.g., the web interface, the Eclipse plugin, etc. EasyInterface has been developed in the context of the Envisage [5] project and has been evaluated on tools developed in this project, which include static analyzers, test-case generators, compilers, simulators, etc. EasyInterface is open source and available on GitHub.
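As an illustration of the kind of structured output the text describes, a tool might emit something like the following (the real EasyInterface schema may differ; the field names here are invented). The tool prints this instead of drawing any GUI, and each environment, web or Eclipse, renders it in its own way:

```python
# Hypothetical structured version of "highlight line 10 of ex.c" plus a
# click-triggered dialog, serialized as JSON for the GUI environment to
# interpret. Field names are invented for illustration.
import json

output = {
    "actions": [
        {"type": "highlight", "file": "ex.c", "line": 10},
        {"type": "onclick", "file": "ex.c", "line": 10,
         "do": {"type": "dialog", "text": "..."}},
    ]
}
print(json.dumps(output, indent=2))   # consumed by the environment
```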
Abstract:
The liver is an important metabolic and endocrine organ in the fetus, but the extent to which its hormone receptor (R) sensitivity is developmentally regulated in early life is not fully established. We therefore examined developmental changes in mRNA abundance for the growth hormone (GH) and prolactin (PRL) receptors, plus insulin-like growth factor (IGF)-I and -II and their receptors. Fetal and postnatal sheep were sampled at 80 or 140 days gestation, or at 1 day, 30 days or six months of age. The effect of maternal nutrient restriction between early and mid gestation (i.e. 28 to 80 days gestation, the time of early liver growth) on gene expression was also examined in the fetus and in juvenile offspring. Gene expression for the GHR, PRLR and IGF-IR increased through gestation, peaking at birth, whereas IGF-I was maximal near term. In contrast, IGF-II mRNA decreased between mid and late gestation and increased after birth, whereas IGF-IIR remained unchanged. A substantial decline in mRNA abundance for GHR, PRLR and IGF-IR then occurred up to six months. Maternal nutrient restriction reduced GHR and IGF-IIR mRNA abundance in the fetus, but caused a precocious increase in the PRLR. Gene expression for IGF-I and -II was increased in juvenile offspring born to nutrient-restricted mothers. In conclusion, there are marked differences in the developmental ontogeny and nutritional programming of specific hormones and their receptors involved in hepatic growth and development in the fetus. These could contribute to changes in liver function during adult life.
Abstract:
This study investigated the developmental and nutritional programming of two important mitochondrial proteins, namely the voltage-dependent anion channel (VDAC) and cytochrome c, in the sheep kidney, liver and lung. The effect of maternal nutrient restriction between early and mid gestation (i.e. 28 to 80 days gestation, the period of maximal placental growth) on the abundance of these proteins was also examined in fetal and juvenile offspring. Fetuses were sampled at 80 and 140 days gestation (term ~147 days), and postnatal animals at 1 and 30 days and 6 months of age. The abundance of VDAC peaked at 140 days gestation in the lung, compared with 1 day after birth in the kidney and liver, whereas cytochrome c abundance was greatest at 140 days gestation in the liver, 1 day after birth in the kidney and 6 months of age in the lung. This differential ontogeny in mitochondrial protein abundance between tissues was accompanied by very different tissue-specific responses to changes in maternal food intake. In the liver, maternal nutrient restriction increased mitochondrial protein abundance only at 80 days gestation, compared with no effect in the kidney. In contrast, in the lung, mitochondrial protein abundance was raised near term, whereas VDAC abundance was decreased by 6 months of age. These findings demonstrate the tissue-specific nature of mitochondrial protein development, reflecting differences in functional adaptation after birth. The divergence between tissues in the mitochondrial response to maternal nutrient restriction early in pregnancy further reflects these differential ontogenies.
Abstract:
It is known that most real-life problems involve uncertainty. In the first part of the dissertation, the basic concepts and properties of Stochastic Programming, also known as Optimization under Uncertainty, are introduced. Moreover, since stochastic programs are complex to compute, we also present some alternative models: the wait-and-see model, the expected value model and the expected result of using the expected value solution. Two measures, the expected value of perfect information (EVPI) and the value of the stochastic solution (VSS), quantify how worthwhile Stochastic Programming is with respect to these other models. In the second part, an application that optimizes the distribution of non-perishable products, guaranteeing certain nutritional requirements at minimum cost, has been designed and implemented with the modelling system GAMS and the optimizer CPLEX. It has been developed within the Hazia project, managed by the Sortarazi association and associated with the Food Bank of Biscay and the Basic Social Services of several districts of Biscay.
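The dissertation's notation is not reproduced in this summary, but the measures mentioned have standard definitions for a minimization problem: with WS the wait-and-see value, RP the optimal value of the stochastic (recourse) program, and EEV the expected result of using the expected-value solution,

```latex
% Standard definitions for a minimization problem:
% WS  -- wait-and-see value, RP -- recourse (stochastic) problem value,
% EEV -- expected result of using the expected-value solution.
\begin{align*}
  \mathrm{WS}  &= \mathbb{E}_{\xi}\bigl[\min_{x} z(x,\xi)\bigr], &
  \mathrm{RP}  &= \min_{x} \mathbb{E}_{\xi}\bigl[z(x,\xi)\bigr], \\
  \mathrm{EEV} &= \mathbb{E}_{\xi}\bigl[z\bigl(\bar{x}(\bar{\xi}),\xi\bigr)\bigr], &
  &\text{where } \bar{x}(\bar{\xi}) = \arg\min_{x} z(x,\bar{\xi}), \\
  \mathrm{EVPI} &= \mathrm{RP} - \mathrm{WS} \;\ge\; 0, &
  \mathrm{VSS} &= \mathrm{EEV} - \mathrm{RP} \;\ge\; 0.
\end{align*}
```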
Abstract:
The northeastern region of Brazil has a large number of wells producing oil by steam injection, a secondary recovery method, since the oil produced in this region is essentially viscous. This recovery method subjects the cement/casing interface to thermal cycling: the difference in the coefficients of thermal expansion between the cement and the metal casing causes cracks to appear at this interface, allowing the passage of annular fluid, which is associated with serious socioeconomic and environmental risks. When such cracks appear, a remedial operation is required, resulting in additional costs and a temporary halt of the well's production. As an alternative, the oil industry has developed technologies for adding new materials to oil-well cement slurries, providing high ductility and low density in order to withstand the thermo-mechanical loads generated by steam injection. In this context, vermiculite, a clay mineral found in abundance in Brazil, has been applied in its expanded form in the construction industry for the manufacture of lightweight concrete with excellent thermal and acoustic insulation, owing to its high melting point and the air present in its lamellar layers. Vermiculite is therefore used here to provide a low-density cement slurry able to withstand the high temperatures caused by steam injection. Thus, the present study compared a reference slurry containing cement and water with slurries containing 6%, 8% and 10% micronized vermiculite, conducting free-water, rheology and compressive strength tests, from which the 8% concentration gave the best results. Subsequently, the selected concentration was compared with the values recommended by the API standard in fluid-loss and stability tests. Finally, the results of specific gravity and thickening time tests were analyzed. In light of this study, it was possible to produce a low-density slurry that can be used in oil-well cementing to withstand the thermo-mechanical loads generated by steam injection.