993 results for Pruning composed
Abstract:
Dissertation submitted for a PhD degree in Electrical Engineering, speciality of Robotics and Integrated Manufacturing from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
The acetohydroxamic acid synthesis reaction was studied using whole cells, cell-free extract and purified amidase from Pseudomonas aeruginosa strains L10 and AI3 entrapped in a reverse micellar system composed of the cationic surfactant tetradecyltrimethylammonium bromide (TTAB). The specific activity of amidase, yield of synthesis and storage stability were determined for the reversed micellar system as well as for free amidase in conventional buffer medium. The results revealed that amidase solutions in the reverse micellar system exhibited a substantial increase in specific activity, yield of synthesis and storage stability. In fact, whole cells from P. aeruginosa L10 and AI3 in reverse micellar medium showed increases in specific activity of 9.3- and 13.9-fold, respectively, relative to the buffer medium. Yields of approximately 92% and 66% of acetohydroxamic acid synthesis were obtained for encapsulated cell-free extract from P. aeruginosa L10 and AI3, respectively. Moreover, the half-life values obtained for the amidase solutions encapsulated in reverse micelles were overall higher than those obtained for the free amidase solution in buffer medium. Half-life values obtained for encapsulated purified amidase from P. aeruginosa strain L10 and encapsulated cell-free extract from P. aeruginosa strain AI3 were 17.0 and 26.0 days, respectively. As far as the different biocatalyst sources are concerned, the data presented in this work revealed that the best results, in both storage stability and biocatalytic efficiency, were obtained with encapsulated cell-free extract from P. aeruginosa strain AI3 at a W0 of 10. Conformational changes occurring upon encapsulation of both strains' enzymes in reverse micelles of TTAB in heptane/octanol were additionally identified by FTIR spectroscopy, clarifying the biocatalysts' performance.
Abstract:
We numerically study a simple fluid composed of particles having a hard-core repulsion complemented by two patchy attractive sites on the particle poles. An appropriate choice of the patch angular width allows for the formation of ring structures which, at low temperatures and low densities, compete with the growth of linear aggregates. The simplicity of the model makes it possible to compare simulation results and theoretical predictions based on the Wertheim perturbation theory, specialized to the case in which ring formation is allowed. Such a comparison offers a unique framework for establishing the quality of the analytic predictions. We find that the Wertheim theory describes remarkably well the simulation results.
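For reference, in the standard first-order Wertheim theory (TPT1) for a two-site model, before the ring correction discussed above is added, the fraction X of non-bonded patches obeys a mass-action law whose inverse gives the mean chain length. A minimal statement of these textbook relations, with ρ the particle number density and Δ the patch-patch bonding volume integral (notation assumed here, not taken from the paper):

\[
\rho \Delta X^{2} = 1 - X,
\qquad
X = \frac{\sqrt{1 + 4\rho\Delta} - 1}{2\rho\Delta},
\qquad
\langle L \rangle = \frac{1}{X}.
\]

The ring-extended version of the theory that the simulations are compared against supplements this chain balance with the partition-function contribution of closed loops.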
Abstract:
Journal of Human Evolution, Vol. 55, pp. 148-163
Abstract:
Most traditional software and database development approaches tend to be serial, not evolutionary and certainly not agile, especially in their data-oriented aspects. Most of the more commonly used methodologies are strict, meaning they are composed of several stages, each with very specific associated tasks. A clear example is the Rational Unified Process (RUP), divided into Business Modeling, Requirements, Analysis & Design, Implementation, Testing and Deployment. But what happens when the need for a well-designed and structured plan meets the reality of a small starting company that aims to build an entire user experience solution? Here resource control and time productivity are vital, requirements are in constant change, and so is the product itself. To succeed in this environment, a highly collaborative and evolutionary development approach is mandatory, and constantly changing requirements imply an iterative development process. The project focus is on Data Warehouse development and business modeling. This area is usually a tricky one: business knowledge belongs to the enterprise, and how they work, their goals and what is relevant for analysis are internal business processes. This document explains why Agile Modeling development was chosen, and how an iterative and evolutionary methodology allowed for reasonable planning and documentation while permitting development flexibility, from idea to product. More importantly, it shows how this was applied to the development of a Retail-Focused Data Warehouse: a productized Data Warehouse built on the knowledge of not one but several clients' needs, one that aims not just to store the usual business areas but to create an innovative set of business metrics by joining them with store environment analysis, converting Business Intelligence into Actionable Business Intelligence.
Abstract:
Internship report presented to the Escola Superior de Educação de Lisboa to obtain the degree of Master in Teaching of the 1st and 2nd Cycles of Basic Education
Abstract:
Dissertation presented to the Escola Superior de Educação de Lisboa to obtain the degree of Master in Teaching of the 1st and 2nd Cycles
Abstract:
Research on resilience suggests that a child who develops in an adverse context may benefit from relevant personal and environmental attributes. In this sense, we aimed to study the extent to which the child's sensory modulation competencies and the quality of mother-child interactions influenced risk trajectories and could promote the child's opportunities for resilience. The study involved 136 children, 67 female and 69 male, aged between 7 and 36 months. We analysed maternal sensitivity in a free-play situation using the CARE-Index scale, and sensory processing through an interview based on Dunn's (1997) protocol, grounded in the four patterns of sensory processing: low registration, sensory sensitivity, sensory seeking and sensory avoidance, a previously validated construct. Based on the premises of the authentic assessment model, we built an index of capabilities, which served as our reference for assessing risk and resilience. The results indicated that child resilience in a context of poverty was associated with indicators of high maternal sensitivity and with adequate sensory processing indices. The discussion of the results was framed within current and emerging models of neurobiological and environmental influences on risk and resilience processes.
Abstract:
Doctorate in Financial Economics and Accounting
Abstract:
The quantity and variety of multimedia content currently available pose a challenge for users, since the space for searching and choosing sources and content exceeds users' time and processing capacity. This problem of selecting, according to the user's profile, information within large heterogeneous data sets is complex and requires specific tools. Recommender Systems arise in this context and are able to suggest to the user items that match their tastes, interests or needs, i.e., their profile, using artificial intelligence methodologies. The main goal of this thesis is to demonstrate that it is possible to recommend multimedia content in useful time from the user's personal and social profile, relying exclusively on public, heterogeneous data sources. To this end, a content-based multimedia Recommender System was designed and developed, drawing on item features, the user's history and personal preferences, and the user's social interactions. The recommended multimedia content, i.e., the items suggested to the user, comes from the British television broadcaster, the British Broadcasting Corporation (BBC), and is classified according to the BBC programme categories. The user profile is built taking into account history, context, personal preferences and social activities. YouTube is the source of personal history used, making it possible to simulate the main source of this type of data, the Set-Top Box (STB). The user's history consists of the set of YouTube videos and BBC programmes watched by the user. YouTube video content is classified according to YouTube's own video categories, which are mapped onto the BBC programme categories. Social information, which comes from the social networks Facebook and Twitter, is collected through the Beancounter platform. The user's social activities are filtered to extract films and series, which are in turn semantically enriched using open linked data repositories. In this case, films and series are classified using IMDb genres and subsequently mapped onto the BBC programme categories. Finally, the user's context information and explicit preferences, expressed through ratings of the recommended items, are also taken into account. The developed system makes recommendations in real time based on Facebook and Twitter social network activities, the history of YouTube videos and BBC programmes watched, and explicit preferences. Tests were carried out with five users, and the system's average response time to create the initial set of recommendations was 30 s. Personalized recommendations are generated and updated at the user's explicit request.
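A minimal sketch, in Python, of the category-mapping and profile-scoring step described above; the mapping table, field names and scoring rule are illustrative assumptions, not the thesis's actual implementation:

    # Map source-side labels (YouTube categories, IMDb genres) onto BBC
    # programme categories, then rank candidate programmes by how often
    # their category appears in the user's viewing history.
    from collections import Counter

    # Hypothetical label mapping; the real system maintains curated tables.
    YOUTUBE_TO_BBC = {
        "Science & Technology": "Science & Nature",
        "News & Politics": "News",
        "Comedy": "Comedy",
    }

    def build_profile(watched):
        """Count BBC categories over the user's viewing history."""
        profile = Counter()
        for item in watched:
            profile[YOUTUBE_TO_BBC.get(item["category"], item["category"])] += 1
        return profile

    def recommend(candidates, profile, k=5):
        """Rank candidate BBC programmes by affinity with the profile."""
        ranked = sorted(candidates, key=lambda c: profile.get(c["category"], 0),
                        reverse=True)
        return ranked[:k]

    history = [{"title": "Rocket test", "category": "Science & Technology"}]
    catalogue = [{"title": "Horizon", "category": "Science & Nature"},
                 {"title": "Panorama", "category": "News"}]
    print(recommend(catalogue, build_profile(history)))  # Horizon ranks first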
Abstract:
A serologic study was undertaken in a group of 43 patients with active paracoccidioidomycosis who were treated in the same form (ketoconazole), for identical periods of time (6 months), and followed up for various periods post-therapy. The tests employed were agar gel immunodiffusion (AGID) and complement fixation (CF). Also studied were 50 sera from patients with proven histoplasmosis and pulmonary aspergilloma, sera from 30 patients with culturally proven tuberculosis, as well as 92 specimens from healthy individuals resident in the endemic area for paracoccidioidomycosis. A single lot of yeast filtrate antigen was used throughout the study. The value of each test was measured according to Galen and Gambino. Both tests were highly sensitive, 89% and 93% respectively. Regarding their specificity, the AGID was totally specific, while the CF exhibited 96.6% and 97% specificity against sera from tuberculosis patients and healthy individuals respectively, and 82% in comparison with patients with other mycoses. The concept of predictive value, that is, the certainty one has in accepting a positive test as diagnostic of paracoccidioidomycosis, favored the AGID procedure (100%) over the CF test. The latter could sort out with 93% certainty a patient with paracoccidioidomycosis among a group of healthy individuals, and with 97.5% in the case of TB patients; when the group in question was composed of individuals with other deep mycoses, such certainty was lower (81%). The above results indicate that both the AGID and the CF tests furnish results of high confidence; one should not rely, however, on the CF alone as a means to establish the specific diagnosis of paracoccidioidomycosis.
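Since the predictive values quoted above follow directly from sensitivity, specificity and the composition of the comparison group, the standard Bayes-rule identity used in this kind of evaluation is stated here for reference, with Se the sensitivity, Sp the specificity and p the proportion of paracoccidioidomycosis patients in the group tested:

\[
\mathrm{PPV} = \frac{Se \cdot p}{Se \cdot p + (1 - Sp)(1 - p)}
\]

A totally specific test (Sp = 1), such as the AGID above, therefore has a positive predictive value of 100% in any group, consistent with the figures reported.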
Abstract:
Vishnu is a tool for XSLT visual programming in Eclipse, a popular and extensible integrated development environment. Rather than writing the XSLT transformations, the programmer loads or edits two document instances, a source document and its corresponding target document, and pairs texts between them by drawing lines over the documents. This form of XSLT programming is intended for simple transformations between related document types, such as HTML formatting or conversion among similar formats. Complex XSLT programs involving, for instance, recursive templates or second-order transformations are out of the scope of Vishnu. We present the architecture of Vishnu, composed of a graphical editor and a programming engine. The editor is an Eclipse plug-in where the programmer loads and edits document examples and pairs their content using graphical primitives. The programming engine receives the data collected by the editor and produces an XSLT program. The design of the engine and the process of creating an XSLT program from examples are also detailed. The process starts with the generation of an initial transformation that maps the source document to the target document. This transformation is fed to a rewrite process in which each step produces a refined version of the transformation. Finally, the transformation is simplified before being presented to the programmer for further editing.
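A minimal sketch, in Python, of the kind of "initial transformation" generation described above: from text pairings collected in the editor, emit a naive XSLT stylesheet that a later rewrite and simplification pass would refine. The pairing format and names are illustrative assumptions, not Vishnu's actual interfaces:

    # Each pair links a source element to the target element it should become.
    PAIRS = [("title", "h1"), ("para", "p")]

    def initial_xslt(pairs):
        """Emit one naive template per source/target pairing."""
        templates = "\n".join(
            f'  <xsl:template match="{src}">\n'
            f'    <{dst}><xsl:apply-templates/></{dst}>\n'
            f'  </xsl:template>'
            for src, dst in pairs)
        return ('<xsl:stylesheet version="1.0"\n'
                '    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">\n'
                f'{templates}\n'
                '</xsl:stylesheet>')

    print(initial_xslt(PAIRS))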
Abstract:
Consolidation consists of scheduling multiple virtual machines onto fewer servers in order to improve resource utilization and to reduce operational costs due to power consumption. However, virtualization technologies do not offer performance isolation, causing applications to slow down. In this work, we propose a performance-enforcing mechanism composed of a slowdown estimator and an interference- and power-aware scheduling algorithm. The slowdown estimator determines, based on noisy slowdown data samples obtained from state-of-the-art slowdown meters, whether tasks will complete within their deadlines, invoking the scheduling algorithm if needed. When invoked, the scheduling algorithm builds performance- and power-aware virtual clusters to successfully execute the tasks. We conduct simulations injecting synthetic jobs whose characteristics follow the latest version of the Google Cloud tracelogs. The results indicate that our strategy can be efficiently integrated with state-of-the-art slowdown meters to fulfil contracted SLAs in real-world environments, while reducing operational costs by about 12%.
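A minimal sketch, in Python, of the slowdown-estimator decision described above: from noisy slowdown samples, predict the completion time and trigger rescheduling when the deadline would be missed. The averaging rule and all names are illustrative assumptions, not the paper's actual estimator:

    from statistics import mean

    def deadline_at_risk(samples, elapsed, base_runtime, deadline):
        """Predict completion under the mean observed slowdown.

        samples: slowdown meter readings (1.4 means 40% slower than alone);
        elapsed: wall-clock time spent so far;
        base_runtime: runtime the task needs without interference.
        """
        slowdown = mean(samples)
        done = elapsed / slowdown                  # progress in interference-free units
        remaining = max(base_runtime - done, 0.0)  # interference-free work left
        return elapsed + remaining * slowdown > deadline

    if deadline_at_risk([1.3, 1.5, 1.4], elapsed=100.0,
                        base_runtime=200.0, deadline=260.0):
        print("invoke the interference- and power-aware scheduler")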
Abstract:
XSLT is a powerful and widely used language for transforming XML documents. However, its power and complexity can be overwhelming for novice or infrequent users, many of whom simply give up on using this language. On the other hand, many XSLT programs of practical use are simple enough to be automatically inferred from examples of source and target documents. An inferred XSLT program is seldom adequate for production use, but it can serve as a skeleton of the final program, or at least as scaffolding in the process of coding it. It should be noted that the authors do not claim that XSLT programs, in general, can be inferred from examples. The aim of Vishnu, the XSLT generator engine described in this paper, is to produce XSLT programs for processing documents similar to the given examples, with enough readability to be easily understood by a programmer not familiar with the language. The architecture of Vishnu is composed of a graphical editor and a programming engine. In this paper we focus on the editor, a GWT web application where the programmer loads and edits document examples and pairs their content using graphical primitives. The programming engine receives the data collected by the editor and produces an XSLT program.