962 results for catch databases
Abstract:
This publication is a support and resource document for the "National Action Plan for Promotion, Prevention and Early Intervention for Mental Health 2000". It includes indicators, measurement tools and databases relevant to assessing the implementation of the outcomes and strategies identified in the action plan.
Abstract:
Cyclic peptides are appealing targets in the drug-discovery process. Unfortunately, no robust solid-phase strategies currently exist that allow the synthesis of large arrays of discrete cyclic peptides. Existing strategies are complicated, when synthesizing large libraries, by the extensive workup required to extract the cyclic product from the deprotection/cleavage mixture. To overcome this, we have developed a new safety-catch linker. The safety-catch concept described here involves the use of a protected catechol derivative in which one of the hydroxyls is masked with a benzyl group during peptide synthesis, rendering the linker inactive toward aminolysis. This masked derivative of the linker allows Boc solid-phase assembly of the linear peptide precursor. Prior to cyclization, the linker is activated and the linear peptide deprotected under commonly employed conditions (TFMSA), leaving the deprotected peptide attached to the activated form of the linker. Scavengers and deprotection adducts are removed by simple washing and filtration. Upon neutralization of the N-terminal amine, cyclization with concomitant cleavage from the resin yields the cyclic peptide in DMF solution; workup is simple solvent removal. To exemplify this strategy, several cyclic peptides targeted toward the somatostatin and integrin receptors were synthesized. Building on this initial study, and to demonstrate the strength of the method, we synthesized a cyclic-peptide library containing over 400 members. This linker technology provides a new solid-phase avenue to access large arrays of cyclic peptides.
Abstract:
Master's dissertation, Integrated Ocean Studies, 11 October 2013, Universidade dos Açores.
Abstract:
The changes introduced into the European Higher Education Area (EHEA) by the Bologna Process, together with renewed pedagogical and methodological practices, have created a new teaching-learning paradigm: Student-Centred Learning. In addition, recent years have been characterized by the application of Information Technologies, especially the Semantic Web, not only to the teaching-learning process, but also to administrative processes within learning institutions. The aim of this study was twofold: on the one hand, to present a model for identifying and classifying Competencies and Learning Outcomes; on the other, to develop the computer applications of the information management model, namely a relational Database and an Ontology.
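As a purely illustrative sketch of the kind of relational schema such a model could rest on, the snippet below links Competencies to Learning Outcomes; the table and column names are assumptions of this sketch, not the study's actual database design.

```python
# Hypothetical relational schema linking Competencies to Learning Outcomes,
# sketched with sqlite3; table and column names are assumptions of this
# illustration, not the study's actual design.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE competency (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE learning_outcome (
    id INTEGER PRIMARY KEY,
    description TEXT NOT NULL,
    competency_id INTEGER REFERENCES competency(id)
);
""")
conn.execute("INSERT INTO competency VALUES (1, 'Database design')")
conn.execute("INSERT INTO learning_outcome VALUES (1, 'Normalize a schema to 3NF', 1)")
for name, description in conn.execute(
        "SELECT c.name, lo.description FROM competency c "
        "JOIN learning_outcome lo ON lo.competency_id = c.id"):
    print(name, "->", description)
```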
Abstract:
The clinical content of administrative databases includes, among other elements, patient demographic characteristics and codes for diagnoses and procedures. The data in these databases are standardized, clearly defined, readily available, less expensive than data collected by other means, and normally cover hospitalizations across entire geographic areas. Despite some limitations, these data are often used to evaluate the quality of healthcare. Under these circumstances, the quality of the data, for instance their errors or their completeness, is of central importance and should never be ignored. Both the minimization of data quality problems and a deep knowledge of the data (e.g., how to select a patient group) are important if users are to trust and correctly interpret results. In this paper we present, discuss and give recommendations for some problems found in these administrative databases. We also present a simple tool that can be used to screen the quality of data through the use of domain-specific data quality indicators. These indicators can contribute significantly to better data, to steps towards a continuous increase in data quality and, certainly, to better-informed decision-making.
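A minimal sketch of how screening with domain-specific data quality indicators might look; the field names, code sets and indicator definitions below are illustrative assumptions, not the tool described in the paper.

```python
# Hypothetical screening of administrative hospital data with domain-specific
# data quality indicators; field names, code sets and indicators are
# assumptions of this sketch, not the paper's actual tool.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Indicator:
    name: str
    check: Callable[[dict], bool]  # returns True when a record passes

def present(field: str) -> Callable[[dict], bool]:
    return lambda rec: rec.get(field) not in (None, "")

INDICATORS = [
    Indicator("diagnosis code present", present("dx_code")),
    Indicator("valid sex code", lambda r: r.get("sex") in {"M", "F"}),
    Indicator("discharge not before admission",
              lambda r: r.get("discharge_day", 0) >= r.get("admission_day", 0)),
]

def screen(records: list) -> dict:
    """Return each indicator's pass rate over a list of record dicts."""
    return {ind.name: sum(ind.check(r) for r in records) / len(records)
            for ind in INDICATORS}

sample = [
    {"dx_code": "I21.9", "sex": "M", "admission_day": 1, "discharge_day": 4},
    {"dx_code": "", "sex": "X", "admission_day": 3, "discharge_day": 2},
]
for name, rate in screen(sample).items():
    print(f"{name}: {rate:.0%}")
```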
Abstract:
Dissertation submitted to obtain the degree of Master in Informatics Engineering.
Abstract:
The emergence of Cloud-based software solutions has democratized access to applications that support business activity, allowing micro and small companies to use tools that only large companies could once afford, thanks to new monthly payment models based on flexible contracts, access via the internet, and the absence of specific hardware installations or per-user licence purchases: the true use of software as a service, commonly known as SaaS (Software as a Service). SaaS applications bring numerous benefits to companies, and even significant competitive advantages, with solutions available in several areas, notably Project Management, as well as CRM (Customer Relationship Management) and CMS (Content Management System) tools, among others. Marketing and Communication companies, such as the one on which this Project focuses, therefore now have access to a set of SaaS applications whose affordable cost and easy online access allow smaller companies to become as competitive as larger ones, which typically rely on heavier, more traditional processes. In addition, we are witnessing the consumerization of IT, in which consumers expect the same kind of User Experience (UX) they enjoy in the applications they use outside work to carry over into their professional lives. This Project argues that Usability should be one of the key criteria in the correct selection of an online (SaaS-type) Project Management application, something that should be facilitated by a usability-testing methodology available on a freely accessible online platform. The methodology should be effective and usable by the staff of a micro or small company, whether or not they are specialists in the field, supporting their investment decision-making. The methodology proposed in this exploratory project combines Nielsen's heuristic usability evaluation with the Purdue method, the Purdue Usability Testing Questionnaire (PUTQ).
Abstract:
Current computer systems have evolved from featuring only a single processing unit and limited RAM, on the order of kilobytes or a few megabytes, to including several multicore processors, offering on the order of several tens of concurrent execution contexts, with main memory on the order of several tens to hundreds of gigabytes. This makes it possible to keep all the data of many applications in main memory, leading to the development of in-memory databases. Compared to disk-backed databases, in-memory databases (IMDBs) are expected to provide better performance by incurring less I/O overhead. In this dissertation, we present a scalability study of two general-purpose IMDBs on multicore systems. The results show that current general-purpose IMDBs do not scale on multicores, due to contention among threads running concurrent transactions. In this work, we explore different directions to overcome the scalability issues of IMDBs on multicores, while enforcing strong isolation semantics. First, we present a solution that requires no modification to either the database systems or the applications, called MacroDB. MacroDB replicates the database among several engines, using a master-slave replication scheme, where update transactions execute on the master, while read-only transactions execute on the slaves. This reduces contention, allowing MacroDB to offer scalable performance under read-only workloads, while update-intensive workloads suffer a performance loss compared to the standalone engine. Second, we delve into the database engine and identify the concurrency control mechanism used by the storage sub-component as a scalability bottleneck. We then propose a new locking scheme that allows the removal of such mechanisms from the storage sub-component. This modification offers performance improvements under all workloads, when compared to the standalone engine, although scalability remains limited to read-only workloads. Next, we address the scalability limitations for update-intensive workloads, and propose reducing the locking granularity from the table level to the attribute level. This further improves performance for intensive and moderate update workloads, at a slight cost for read-only workloads; scalability is limited to read-intensive and read-only workloads. Finally, we investigate the impact applications have on the performance of database systems, by studying how the order of operations inside transactions influences database performance. We then propose a Read-before-Write (RbW) interaction pattern, under which transactions perform all read operations before executing write operations. The RbW pattern allowed TPC-C to achieve scalable performance on our modified engine for all workloads. Additionally, the RbW pattern allowed our modified engine to achieve scalable performance on multicores, almost up to the total number of cores, while enforcing strong isolation.
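A minimal sketch of the Read-before-Write (RbW) interaction pattern, assuming a toy representation of transactions as lists of operation tuples over an in-memory store; this illustrates the pattern only, not the dissertation's modified engine.

```python
# Toy sketch of the Read-before-Write (RbW) interaction pattern: inside a
# transaction, every read is issued before any write. The operation-tuple
# representation and in-memory store are assumptions of this illustration.
store = {"stock:42": 10, "orders:42": 0}

def run_rbw(ops):
    """Run a transaction whose operations must follow the RbW pattern.

    ops: ("read", key) entries followed by ("write", key, fn) entries,
    where fn maps the dict of values read so far to the new value.
    """
    reads, writing = {}, False
    for op in ops:
        if op[0] == "read":
            assert not writing, "RbW violated: read issued after a write"
            reads[op[1]] = store[op[1]]
        else:  # ("write", key, fn)
            writing = True
            store[op[1]] = op[2](reads)

# A new-order style transaction: read stock and order count, then write both.
run_rbw([
    ("read", "stock:42"),
    ("read", "orders:42"),
    ("write", "stock:42", lambda r: r["stock:42"] - 1),
    ("write", "orders:42", lambda r: r["orders:42"] + 1),
])
print(store)  # {'stock:42': 9, 'orders:42': 1}
```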
Abstract:
The increase in the amount of generated data seen in recent years, which has come to be known as Big Data, has exposed weaknesses in relational technology for storing and handling such data, leading to the emergence of NoSQL databases. These are divided into four distinct types: key/value, document, graph, and column-family. This article focuses on column-based databases and analyses the two systems of this type considered most relevant: Cassandra and HBase.
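As a rough illustration of the column-family data model shared by Cassandra and HBase, the toy in-memory store below addresses data by row key, column family and column; it is an assumption-laden sketch, not either system's actual API.

```python
# Toy in-memory model of a column-family store: values are addressed by
# (row key, column family, column), the data model shared by Cassandra and
# HBase. This dict-based sketch is an illustration, not either system's API.
from collections import defaultdict

class ColumnFamilyStore:
    def __init__(self, families):
        self.families = set(families)
        # table[row_key][family][column] = value
        self.table = defaultdict(lambda: defaultdict(dict))

    def put(self, row_key, family, column, value):
        assert family in self.families, f"unknown column family: {family}"
        self.table[row_key][family][column] = value

    def get(self, row_key, family):
        """Return all columns of one family for a row (rows may be sparse)."""
        return dict(self.table[row_key][family])

users = ColumnFamilyStore(families=["profile", "activity"])
users.put("user:1", "profile", "name", "Ana")
users.put("user:1", "activity", "last_login", "2015-03-02")
users.put("user:2", "profile", "name", "Rui")   # rows need not share columns
print(users.get("user:1", "profile"))           # {'name': 'Ana'}
```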
Abstract:
Master's dissertation in Informatics Engineering.
Abstract:
Sport fishing for peacock bass Cichla spp. in the Brazilian Amazon has increased in popularity and attracts anglers who generate significant economic benefits in rural regions. The sustainability of this fishery depends in part on the survival of fish caught through catch-and-release fishing. The objective of this work was to investigate hooking mortality of Cichla spp., including speckled peacock bass (C. temensis Humboldt), butterfly peacock bass (C. orinocensis Humboldt), and popoca peacock bass (C. monoculus Agassiz), in the basin of the Negro River, the largest tributary of the Amazon River. Fish were caught at two different sites using artificial lures, transported to pens anchored in the river and monitored for 72 hours. A total of 162 individual peacock bass were captured, and hooking mortality (mean % ± 95% confidence intervals) was calculated. Mean mortality was 3.5% (± 5.0), 2.3% (± 3.5) and 5.2% (± 10.2) for speckled, butterfly and popoca peacock bass, respectively. Lengths of captured fish ranged from 26 to 79 cm (standard length); however, only fish under 42 cm died. This research suggests that catch-and-release sport fishing for peacock bass does not result in substantial mortality in the Negro River basin.
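A short sketch of how a mean hooking-mortality rate and its 95% confidence interval can be computed from catch counts, assuming a normal approximation for the binomial proportion; the counts below are hypothetical and are not the study's raw data, and the paper's exact CI method may differ.

```python
# Illustrative computation of hooking mortality (mean % +/- 95% CI) for one
# species, using a normal approximation for the binomial proportion.
# The counts are hypothetical; they are not the study's raw data.
import math

def mortality_ci(deaths: int, caught: int, z: float = 1.96):
    """Return (mean mortality %, CI half-width %) for deaths out of caught."""
    p = deaths / caught
    half_width = z * math.sqrt(p * (1 - p) / caught)
    return 100 * p, 100 * half_width

mean, hw = mortality_ci(deaths=2, caught=57)  # hypothetical counts
print(f"mortality: {mean:.1f}% (+/- {hw:.1f})")
```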
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management.
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management.
Abstract:
Propositionalization, Inductive Logic Programming, Multi-Relational Data Mining