909 results for Differential Inclusions with Constraints


Relevance: 100.00%

Abstract:

Modern geographical databases, which are at the core of geographic information systems (GIS), store a rich set of aspatial attributes in addition to geographic data. Typically, aspatial information comes in textual and numeric format. Retrieving information constrained on both spatial and aspatial data from geodatabases gives GIS users the ability to perform more interesting spatial analyses and lets applications support composite location-aware searches; for example, in a real estate database: “Find the homes for sale nearest to my current location that have a backyard and whose prices are between $50,000 and $80,000”. Efficient processing of such queries requires combined indexing strategies over multiple types of data. Existing spatial query engines commonly apply a two-filter approach (a spatial filter followed by a nonspatial filter, or vice versa), which can incur large performance overheads. At the same time, the amount of geolocation data in databases has grown rapidly, due in part to advances in geolocation technologies (e.g., GPS-enabled smartphones) that allow users to associate location data with objects or events. This poses data-ingestion challenges for practical GIS databases handling large data volumes. In this dissertation, we first show how indexing spatial data with R-trees (a typical data pre-processing task) can be scaled with MapReduce, a widely adopted parallel programming model for data-intensive problems. Evaluation of our algorithms on a Hadoop cluster showed close to linear scalability in building R-tree indexes. Subsequently, we develop efficient algorithms for processing spatial queries with aspatial conditions. Novel techniques for simultaneously indexing spatial, textual, and numeric data are developed to that end. Experimental evaluations with real-world, large spatial datasets measured query response times within the sub-second range for most cases, and up to a few seconds for a small number of cases, which is reasonable for interactive applications. Overall, these results show that the MapReduce parallel model is suitable for indexing tasks in spatial databases, and that an adequate combination of spatial and aspatial attribute indexes can attain acceptable response times for interactive spatial queries with constraints on aspatial data.
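
A minimal sketch of the kind of combined query the dissertation targets, written as the naive two-filter evaluation it aims to outperform (a spatial nearest-neighbour ranking followed by an aspatial attribute filter); the listings table, field names and coordinates below are hypothetical and purely illustrative:

    import math

    # Hypothetical listings: location plus aspatial attributes (illustrative data only).
    listings = [
        {"id": 1, "lat": 25.76, "lon": -80.19, "price": 62000, "backyard": True},
        {"id": 2, "lat": 25.79, "lon": -80.13, "price": 95000, "backyard": True},
        {"id": 3, "lat": 25.77, "lon": -80.21, "price": 55000, "backyard": False},
        {"id": 4, "lat": 25.80, "lon": -80.20, "price": 71000, "backyard": True},
    ]

    def distance(lat1, lon1, lat2, lon2):
        # Planar approximation; adequate for a toy example.
        return math.hypot(lat1 - lat2, lon1 - lon2)

    def nearest_with_constraints(lat, lon, price_lo, price_hi, k=3):
        # Spatial filter: rank by distance (a real engine would use an R-tree here).
        by_distance = sorted(listings, key=lambda h: distance(lat, lon, h["lat"], h["lon"]))
        # Aspatial filter: price range and backyard predicate.
        hits = [h for h in by_distance if price_lo <= h["price"] <= price_hi and h["backyard"]]
        return hits[:k]

    print(nearest_with_constraints(25.761, -80.191, 50000, 80000))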

Relevance: 100.00%

Abstract:

This dissertation consists of four studies examining two constructs related to time orientation in organizations: polychronicity and multitasking. The first study investigates the internal structure of polychronicity and its external correlates in a sample of undergraduate students (N = 732). Results converge to support a one-factor model and show measures of polychronicity to be significantly related to extraversion, agreeableness, and openness to experience. The second study quantitatively reviews the existing research examining the relationship between polychronicity and the Big Five factors of personality. Results reveal significant relationships with extraversion and openness to experience across studies. Studies three and four examine the usefulness of multitasking ability in the prediction of work-related criteria using two organizational samples (N = 175 and 119, respectively). Multitasking ability demonstrated predictive validity; however, the incremental validity over that of traditional predictors (i.e., cognitive ability and the Big Five factors of personality) was minimal. The relationships between multitasking ability, polychronicity, and other individual differences were also investigated. Polychronicity and multitasking ability proved to be distinct constructs, demonstrating differential relationships with cognitive ability, personality, and performance. Results supported multitasking performance as a mediator of the relationship between multitasking ability and overall job performance. Additionally, in Study four, polychronicity moderated the relationship between multitasking ability and both ratings of multitasking performance and overall job performance. Clarification of the factor structure of polychronicity and its correlates will facilitate future research in the time orientation literature. Results from two organizational samples point to work-related measures of multitasking ability as a worthwhile tool for predicting the performance of job applicants.

Relevance: 100.00%

Abstract:

Today, modern System-on-a-Chip (SoC) designs have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially as energy consumption and chip area become two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware-acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified by using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or frequently repeated functions. The performance of SoC systems can then be improved if hardware acceleration is applied to the element that incurs the performance overhead. The concepts presented in this study can be readily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified using critical attributes such as cycles per loop and loop counts. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, a central bus design and a co-processor design, are implemented for comparison in the proposed architecture. (3) System characteristics, such as performance, energy consumption, and resource costs, are measured and analyzed, and the trade-off among these three factors is compared and balanced. Different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, and hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves a 7.9X performance improvement and saves 75.85% of energy consumption.
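
The profiling-then-offloading flow above lends itself to a back-of-the-envelope estimate of the attainable end-to-end gain. A small sketch using Amdahl's law; the hotspot fraction and accelerator speedup below are made up for illustration and are not the thesis's measured figures:

    def overall_speedup(hotspot_fraction, accel_speedup):
        # Amdahl's law: only the profiled hotspot benefits from the accelerator.
        return 1.0 / ((1.0 - hotspot_fraction) + hotspot_fraction / accel_speedup)

    # Illustrative numbers: a hotspot taking 70% of cycles, accelerated 10x in hardware.
    print(round(overall_speedup(0.70, 10.0), 2))  # ~2.7x end-to-end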

Relevance: 100.00%

Abstract:

This thesis develops a new technique for the design of composite microstructures through topology optimization, aiming to maximize stiffness, making use of the strain energy method and an h-adaptive refinement scheme to better define the topological contours of the microstructure. This is done by distributing material optimally within a pre-established design region called the base cell. The finite element method is used to describe the fields and to solve the governing equations. The mesh is refined iteratively so that it resolves all elements representing solid material and all void elements containing at least one node in a solid-material region. The finite element chosen for the model is the three-node linear triangle. The constrained nonlinear programming problem is solved with an augmented Lagrangian method, together with a minimization algorithm based on quasi-Newton search directions and the Armijo-Wolfe conditions to drive the descent process. The base cell that represents the composite is found from the equivalence between a fictitious material and a prescribed material, distributed optimally in the design domain. The use of the strain energy method is justified by its lower computational cost, owing to a simpler formulation than the traditional homogenization method. Results are presented for changes in the prescribed displacement, changes in the volume constraint, and various initial values of the relative densities.
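
A minimal sketch of the augmented Lagrangian iteration mentioned above, applied to a toy equality-constrained quadratic rather than the thesis's topology optimization problem; using plain gradient steps instead of quasi-Newton directions with an Armijo-Wolfe line search is a deliberate simplification:

    import numpy as np

    # Toy problem: minimize f(x) = x1^2 + 2*x2^2  subject to  c(x) = x1 + x2 - 1 = 0.
    def f_grad(x):
        return np.array([2.0 * x[0], 4.0 * x[1]])

    def c(x):
        return x[0] + x[1] - 1.0

    def c_grad(x):
        return np.array([1.0, 1.0])

    x = np.zeros(2)
    lam, mu = 0.0, 10.0            # multiplier estimate and (fixed) penalty parameter
    for outer in range(15):
        # Approximately minimize the augmented Lagrangian with plain gradient steps.
        for inner in range(300):
            grad = f_grad(x) + (lam + mu * c(x)) * c_grad(x)
            x = x - 0.02 * grad
        lam += mu * c(x)           # first-order multiplier update
    print(x)                       # approaches the constrained minimizer (2/3, 1/3)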

Relevance: 100.00%

Abstract:

Single-phase multiferroic materials are of considerable interest for future memory and sensing applications. Thin films of Aurivillius phase Bi7Ti3Fe3O21 and Bi6Ti2.8Fe1.52Mn0.68O18 (possessing six and five perovskite units per half-cell, respectively) have been prepared by chemical solution deposition on c-plane sapphire. Superconducting quantum interference device magnetometry reveals Bi7Ti3Fe3O21 to be antiferromagnetic (TN = 190 K) and weakly ferromagnetic below 35 K, whereas Bi6Ti2.8Fe1.52Mn0.68O18 gives a distinct room-temperature in-plane ferromagnetic signature (Ms = 0.74 emu/g, μ0Hc = 7 mT). Microstructural analysis, coupled with statistical analysis of the data, allows us to conclude that the ferromagnetism does not originate from second-phase inclusions, with a confidence level of 99.5%. Piezoresponse force microscopy (PFM) demonstrates room-temperature ferroelectricity in both films, and PFM observations on Bi6Ti2.8Fe1.52Mn0.68O18 show that Aurivillius grains undergo ferroelectric domain polarization switching induced by an applied magnetic field. Here, we show for the first time that Bi6Ti2.8Fe1.52Mn0.68O18 thin films are both ferroelectric and ferromagnetic, and demonstrate magnetic field-induced switching of ferroelectric polarization in individual Aurivillius phase grains at room temperature.

Relevance: 100.00%

Abstract:

Photoacoustic tomography (PAT) of genetically encoded probes allows imaging of targeted biological processes deep in tissue with high spatial resolution; however, high background signals from blood can limit the achievable detection sensitivity. Here we describe BphP1, a reversibly switchable nonfluorescent bacterial phytochrome with the most red-shifted absorption among genetically encoded probes, for use in multiscale photoacoustic imaging. BphP1 binds a heme-derived biliverdin chromophore and is reversibly photoconvertible between red and near-infrared light-absorption states. We combined single-wavelength PAT with efficient BphP1 photoswitching, which enabled differential imaging with substantially decreased background signals, enhanced detection sensitivity, increased penetration depth and improved spatial resolution. We monitored tumor growth and metastasis with ∼100-μm resolution at depths approaching 10 mm using photoacoustic computed tomography, and we imaged individual cancer cells with a sub-optical-diffraction resolution of ∼140 nm using photoacoustic microscopy. This technology is promising for biomedical studies at several scales.
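
The differential-imaging step admits a very small sketch: an image acquired with the probe photoswitched ON minus one acquired with it OFF cancels the state-independent blood background while retaining the probe signal. The arrays below are synthetic and purely illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    blood_background = rng.uniform(0.5, 1.0, size=(64, 64))  # strong, state-independent signal
    probe = np.zeros((64, 64))
    probe[30:34, 30:34] = 0.2                                 # weak signal from labelled cells

    image_on = blood_background + probe      # probe photoswitched to its absorbing (ON) state
    image_off = blood_background             # probe switched OFF; blood signal unchanged
    differential = image_on - image_off      # background cancels, probe signal remains

    print(float(differential.max()))         # ~0.2: the probe, free of blood background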

Relevance: 100.00%

Abstract:

The problem of decentralized sequential detection is studied in this thesis, where local sensors are memoryless, receive independent observations, and get no feedback from the fusion center. In addition to the traditional criteria of detection delay and error probability, we introduce a new constraint: the number of communications between the local sensors and the fusion center. This metric reflects both the cost of establishing communication links and the overall energy consumption over time. A new formulation for communication-efficient decentralized sequential detection is proposed in which the overall detection delay is minimized subject to constraints on both the error probabilities and the communication cost. Two types of problems are investigated under this communication-efficient formulation: decentralized hypothesis testing and decentralized change detection. In the former case, an asymptotically person-by-person optimum detection framework is developed, where the fusion center performs a sequential probability ratio test based on dependent observations. The proposed algorithm utilizes not only the statistics reported by the local sensors but also the reporting times. The asymptotic relative efficiency of the proposed algorithm with respect to the centralized strategy is expressed in closed form. When the probabilities of false alarm and missed detection are close to one another, a reduced-complexity algorithm is proposed based on a Poisson arrival approximation. Decentralized change detection with a communication cost constraint is also investigated. A person-by-person optimum change detection algorithm is proposed, where transmissions of sensing reports are modeled as a Poisson process. The optimum threshold value is obtained through dynamic programming. An alternative method with a simpler fusion rule is also proposed, where the threshold values are determined by a combination of sequential detection analysis and constrained optimization. In both the decentralized hypothesis testing and change detection problems, tradeoffs in parameter choices are investigated through Monte Carlo simulations.
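
A minimal sketch of the sequential probability ratio test performed at the fusion center, shown here for a centralized Gaussian mean-shift problem with Wald's thresholds; the dissertation's dependent-observation, reporting-time-aware version is considerably more involved:

    import math, random

    def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
        # Wald's SPRT for H0: mean = mu0 vs H1: mean = mu1; returns (decision, samples used).
        lower = math.log(beta / (1.0 - alpha))        # accept-H0 threshold
        upper = math.log((1.0 - beta) / alpha)        # accept-H1 threshold
        llr = 0.0
        for n, x in enumerate(samples, start=1):
            llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
            if llr >= upper:
                return "H1", n
            if llr <= lower:
                return "H0", n
        return "undecided", len(samples)

    random.seed(1)
    observations = [random.gauss(1.0, 1.0) for _ in range(1000)]  # generated under H1
    print(sprt(observations))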

Relevance: 100.00%

Abstract:

Some research has investigated lower-level and higher-level visual processing in neurotypical individuals and in individuals with autism spectrum disorder (ASD). However, the developmental interaction between these levels of visual processing is still not well understood. This thesis therefore has two main objectives. The first objective (Study 1) is to assess the developmental interaction between low-level and mid-level visual analysis across different developmental periods (school age, adolescence, and adulthood). The second objective (Study 2) is to assess the functional relationship between low-level and mid-level visual processing in adolescents and adults with ASD. Both objectives were assessed using the same stimuli and procedures. Specifically, sensitivity to complex circular shapes (radial frequency, or RF, patterns), defined either by luminance or by texture, was measured with a two-alternative forced-choice procedure. The results of the first study showed that the local information of the RF patterns underlying mid-level visual processes affects sensitivity differently across distinct developmental periods. Specifically, when the contour is defined by luminance, children's performance is lower than that of adolescents and adults for RF patterns engaging global perception. When the RF patterns are defined by texture, children's sensitivity is lower than that of adolescents and adults in both the local and global conditions. Consequently, the type of local information defining the local elements of the global shape influences the period at which visual sensitivity reaches a developmental level similar to that of adults. Weak visual integration between low-level and mid-level mechanisms may explain children's reduced sensitivity to RF patterns. This may be attributable to immature feedback and horizontal connections, as well as to the underdevelopment of certain cortical areas of the visual system. The results of the second study showed that visual sensitivity in autism is influenced by the manipulation of local information. Specifically, with luminance-defined stimuli, sensitivity is affected only in conditions engaging local processing in individuals with ASD. With texture-defined stimuli, however, sensitivity is reduced for both global and local visual processing. These results suggest that shape perception in autism is related to the efficiency with which local elements (luminance versus texture) are processed. Lateral and feedforward/feedback connections in early visual areas may be affected by an imbalance between excitatory and inhibitory signals, thereby influencing the efficiency with which luminance- and texture-defined visual information is processed in autism. These results support the hypothesis that alterations in low-level (local) visual perception underlie higher-level atypicalities in individuals with ASD.
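
For readers unfamiliar with the stimuli, a small sketch of how a radial frequency contour is commonly parameterized (a circle whose radius is sinusoidally modulated with polar angle); the radius, amplitude and frequency values are illustrative, not those used in the thesis:

    import numpy as np

    def radial_frequency_contour(r0=1.0, amplitude=0.05, frequency=5, phase=0.0, n=360):
        # Points on an RF contour: r(theta) = r0 * (1 + amplitude * sin(frequency*theta + phase)).
        theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        r = r0 * (1.0 + amplitude * np.sin(frequency * theta + phase))
        return r * np.cos(theta), r * np.sin(theta)

    x, y = radial_frequency_contour()
    print(len(x), float(x.max()))   # 360 contour points; the max radius reflects the modulation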

Relevance: 100.00%

Abstract:

The authors present three cases of symptomatic, large, benign, nonparasitic hepatic cysts. The diagnosis was established by US and CT scan, the latter enabling differential diagnosis with neoplastic or hydatid cysts. All patients were treated with open hepatic resection. In two cases, laparoscopy was performed to complete the diagnosis. The authors used the LigaSure™ (Covidien, USA) instrument, avoiding bleeding complications and reducing surgery time. Histological examination confirmed the diagnosis of benign hepatic cysts. CT follow-up at 6 months and 1 year demonstrated the efficacy of the surgery, with no recurrences.

Relevance: 100.00%

Abstract:

Cops and robbers games have been studied for some thirty years in computer science and mathematics. As in pursuit games in general, pursuers (the cops) try to capture evaders (the robbers); here, however, the players move in turns and are constrained to a discrete structure. It is always assumed that the players know the exact positions of their opponents; in other words, the game is played with perfect information. The first definition of a cops-and-robbers game goes back to Nowakowski and Winkler [39] and, independently, Quilliot [46]. This first definition presents a game between a single cop and a single robber, with constraints on their movement speeds. Extensions were gradually proposed, such as adding cops and increasing the movement speeds. In 2014, Bonato and MacGillivray [6] proposed a generalization of cops-and-robbers games allowing them to be studied in full generality. However, their model does not cover games with stochastic components, such as those in which the robbers may move randomly. This thesis therefore presents a new model that includes stochastic aspects. Secondly, this thesis presents a concrete application of these games in the form of a method for solving a problem from search theory. While cops-and-robbers games assume perfect information, search problems cannot make that assumption. It turns out, however, that the cops-and-robbers game can be analysed as a constraint relaxation of a search problem. This new viewpoint is exploited to design an upper bound on the objective function of a search problem, which can be used within a branch-and-bound method.
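
As a concrete illustration of the Nowakowski-Winkler / Quilliot setting cited above, the sketch below tests whether a single cop suffices on a given graph, using the classical characterization of cop-win graphs as dismantlable graphs (repeatedly delete a vertex whose closed neighbourhood is contained in that of another); it does not implement the thesis's stochastic model or its branch-and-bound bound:

    def is_cop_win(adj):
        # adj: dict vertex -> set of neighbours (undirected). True iff the graph is dismantlable.
        verts = set(adj)
        closed = {v: set(adj[v]) | {v} for v in adj}
        while len(verts) > 1:
            corner = next(
                (u for u in verts for w in verts
                 if u != w and (closed[u] & verts) <= (closed[w] & verts)),
                None,
            )
            if corner is None:
                return False       # no dominated vertex left: the robber can evade forever
            verts.remove(corner)
        return True

    path = {1: {2}, 2: {1, 3}, 3: {2}}                      # paths are cop-win
    cycle4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}   # the 4-cycle needs two cops
    print(is_cop_win(path), is_cop_win(cycle4))             # True False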

Relevance: 100.00%

Abstract:

Abstract Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and it is difficult to make small changes to a solution. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used on a mapped solution space and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, and thus has the ability to finish a schedule using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable; here each variable corresponds to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed from an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning can amount to 'counting' in the case of multinomial distributions.
In the LCS approach, each rule has a strength indicating its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following steps. The initialization step assigns each rule at each stage a constant initial strength. Rules are then selected by using the Roulette Wheel strategy. The next step reinforces the strengths of the rules used in the previous solution, keeping the strengths of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms, and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation. References 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press). 2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
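
A hedged sketch of the 'counting' step described above, simplified to independent per-position rule probabilities rather than a full Bayesian network (so it behaves like a univariate estimation-of-distribution algorithm rather than BOA proper); the rule identifiers, string length and toy fitness function are purely illustrative:

    import random

    RULES = [0, 1, 2, 3]      # identifiers of construction rules (illustrative)
    LENGTH = 10               # number of construction steps per schedule

    def fitness(rule_string):
        # Toy stand-in for "quality of the schedule built by this rule string".
        return sum(1 for i, r in enumerate(rule_string) if r == i % len(RULES))

    def learn_probabilities(promising):
        # Estimate P(rule | position) by counting over the promising rule strings.
        probs = []
        for pos in range(LENGTH):
            counts = {r: 1 for r in RULES}                  # Laplace smoothing
            for s in promising:
                counts[s[pos]] += 1
            total = sum(counts.values())
            probs.append({r: counts[r] / total for r in RULES})
        return probs

    def sample(probs):
        return [random.choices(RULES, weights=[p[r] for r in RULES])[0] for p in probs]

    random.seed(0)
    population = [[random.choice(RULES) for _ in range(LENGTH)] for _ in range(60)]
    for generation in range(30):
        population.sort(key=fitness, reverse=True)
        probs = learn_probabilities(population[:20])        # the promising subset
        population = population[:20] + [sample(probs) for _ in range(40)]
    print(max(fitness(s) for s in population))              # approaches the maximum of 10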

Relevance: 100.00%

Abstract:

Advances in, and the spread of, Information and Communication Technologies (ICT) open up new perspectives for education supported by internet-based digital learning environments (Fiolhais & Trindade, 2003). The platform used in the Projeto Matemática Ensino (PmatE) of the Universidade de Aveiro (UA) is one of the software tools supporting these environments through assessment based on the Question Generator Model (Modelo Gerador de Questões, MGQ), making it possible to obtain a picture of students' progress (Vieira, Carvalho & Oliveira, 2004). Recognizing the didactic importance of this tool, already demonstrated in other studies (e.g., Carvalho, 2011; Pais de Aquino, 2013; Peixoto, 2009), the general aim of the present study is to develop digital physics teaching materials, within the context of the Mozambican 12th-grade physics syllabus, for students and teachers, on radiation and modern physics content. It also aimed to propose ICT-based work strategies to improve the quality of learning in this subject. The study was based on the following three research questions: (a) How can learning assessment instruments based on the question generator model be designed for the study of radiation and modern physics content in the context of the Mozambican 12th-grade physics syllabus? (b) What potential and constraints do these instruments present when implemented with students and teachers? (c) How can the knowledge built be transferred to other physics topics and to science teaching in general? The study followed a mixed-methods Development Studies methodology comprising the phases of Analysis, Design, Development and Evaluation, within an exploratory paradigm with a case-study component. In the Analysis phase, the context of education in Mozambique and the problem of addressing radiation and modern physics content in secondary education were discussed, within the challenging framework currently facing science education. In the Design phase, ICT approaches to the teaching and learning of physics, and of science in general, were assessed, and the objective tree for the content mentioned in the previous phase was built. In the Development phase, the data collection instruments were constructed, and the MGQ prototypes were developed and subsequently programmed, validated and tested in printed format in the exploratory study. In the Evaluation phase, the main study was conducted with the application of the models in digital format and their evaluation, which included administering questionnaire surveys to students and teachers. The results indicate that, in the design of MGQs, defining learning objectives in behavioural terms is fundamental for formulating questions and for analysing assessment results with a view to readjusting teaching strategies. They also indicate that the PmatE platform supporting the MGQs, despite constraints arising from its dependence on the internet and some didactic limitations, contributes positively to learning and to identifying students' difficulties and main errors, on the one hand, and, on the other, stimulates the processes of knowledge assimilation and accommodation through assessment. The study recommends changes in teaching and learning practices so that digital content can be used as a complement to the didactic approach to content.
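
A small illustrative sketch of the parametric question-generation idea behind an MGQ: a template whose numerical parameters are drawn at random and whose answer is computed automatically. The physics content, parameter ranges and wording below are invented for illustration and are not taken from the PmatE platform:

    import random

    PLANCK = 6.626e-34        # Planck constant, J*s
    EV = 1.602e-19            # one electronvolt in joules

    def generate_photoelectric_question(seed=None):
        # One instance of a parameterized photoelectric-effect question, with its answer.
        rng = random.Random(seed)
        frequency = rng.uniform(6.0e14, 1.5e15)              # Hz
        work_function_ev = rng.uniform(2.0, 4.5)              # eV
        kinetic_j = max(PLANCK * frequency - work_function_ev * EV, 0.0)  # no emission below threshold
        question = (f"Light of frequency {frequency:.2e} Hz strikes a metal with work function "
                    f"{work_function_ev:.2f} eV. What is the maximum kinetic energy, in joules, "
                    f"of the emitted photoelectrons?")
        return question, kinetic_j

    q, answer = generate_photoelectric_question(seed=42)
    print(q)
    print(f"{answer:.2e} J")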

Relevance: 100.00%

Abstract:

In this paper we consider the a posteriori and a priori error analysis of discontinuous Galerkin interior penalty methods for second-order partial differential equations with nonnegative characteristic form on anisotropically refined computational meshes. In particular, we discuss the question of error estimation for linear target functionals, such as the outflow flux and the local average of the solution. Based on our a posteriori error bound we design and implement the corresponding adaptive algorithm to ensure reliable and efficient control of the error in the prescribed functional to within a given tolerance. This involves exploiting both local isotropic and anisotropic mesh refinement. The theoretical results are illustrated by a series of numerical experiments.
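
The functional-based error control described here is commonly obtained from a dual-weighted-residual representation; a schematic statement of that standard form (not necessarily the exact estimator of this paper) reads:

    % Schematic dual-weighted-residual representation for a linear target functional J:
    % u_h is the DG approximation, z the solution of the dual (adjoint) problem associated
    % with J, z_h its discrete approximation, and R_K, r_{\partial K} element and face residuals.
    J(u) - J(u_h) \;=\; \sum_{K \in \mathcal{T}_h}
        \Big( \int_K R_K(u_h)\,(z - z_h)\,\mathrm{d}x
            + \int_{\partial K} r_{\partial K}(u_h)\,(z - z_h)\,\mathrm{d}s \Big)
    \;\lesssim\; \sum_{K \in \mathcal{T}_h} \eta_K .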

Relevance: 100.00%

Abstract:

Zika virus (Flavivirus) is an arbovirus transmitted mainly by mosquitoes, but also through mother-to-child and sexual transmission. There is evidence that Zika virus infections may be associated with Guillain-Barré syndrome and with congenital cases of microcephaly and other malformations of the central nervous system. Zika, Dengue and Chikungunya virus infections currently share their mosquito vectors, their symptoms and their geographical distribution. The Centro de Estudos de Vetores e Doenças Infeciosas of the Instituto Nacional de Saúde Doutor Ricardo Jorge, through its Laboratório Nacional de Referência de Vírus Transmitidos por Vetores, has carried out diagnosis and epidemiological studies of arthropod-borne viruses since the early 1990s. Zika diagnosis was developed and standardized in 2007. The laboratory has developed molecular and serological diagnostic tests, has identified several cases imported into Portuguese territory, has performed differential diagnosis with Dengue and Chikungunya, and has screened for infection in pregnant women and in cases of sexual transmission.

Relevance: 100.00%

Abstract:

We consider the a priori error analysis of hp-version interior penalty discontinuous Galerkin methods for second-order partial differential equations with nonnegative characteristic form under weak assumptions on the mesh design and the local finite element spaces employed. In particular, we prove a priori hp-error bounds for linear target functionals of the solution, on (possibly) anisotropic computational meshes with anisotropic tensor-product polynomial basis functions. The theoretical results are illustrated by a numerical experiment.