973 results for Graphical representation, Textual discourse


Relevance: 100.00%

Abstract:

Research report presented to the Escola Superior de Educação de Lisboa for the degree of Master in Teaching of the 1st and 2nd Cycles of Basic Education.

Relevance: 100.00%

Abstract:

The design of any structure (buildings, bridges, vehicles, machines, etc.) requires knowledge of the loading conditions, geometry and behaviour of all its parts, as well as compliance with the standards in force in the countries where the structure will be used. The first part of any project in this area is the structural analysis phase, in which all interactions and load effects on the physical structure and its components are calculated in order to verify the structure's fitness for its intended use. The starting point is a structure of simplified geometry, setting aside physically irrelevant elements (fasteners, cladding, etc.) so as to simplify the calculation of complex structures and, depending on the results of the structural analysis, to improve the structure where necessary. Finite element analysis is the main tool in this first phase of the design, and nowadays, given market demands, computational support is indispensable to speed up this phase. A wide range of programs exists for this purpose, covering tasks such as structural drafting, static load analysis, dynamic and vibration analysis, and real-time visualization of physical behaviour (deformations), allowing the structure under analysis to be optimized. However, these programs show a certain complexity when parameters are entered, often leading to incorrect results. It is therefore essential for the designer to have a reliable, easy-to-use tool for structural design and optimization. On this basis, this thesis project developed a program with a graphical interface in the Matlab® environment for finite element analysis of structures with bar (truss) and beam elements, in both 2D and 3D. The program allows the structure to be defined, quickly and clearly, by means of coordinates, the mechanical properties of the elements, boundary conditions and the loads to be applied. As output, it returns to the user the reactions, deformations and stress distribution in the elements, both in tabular form and as a graphical representation over the structure under analysis. Data can also be imported and results exported as XLS and XLSX files, to ease information management. Several tests and structural analyses were carried out to validate the program's results and its integrity. All results were satisfactory and converge to those of other programs, to results published in books, and to hand calculations performed by the author.
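The program described above is a Matlab® application and its source is not included here; purely as an illustrative sketch of the underlying finite element computation, the Python snippet below assembles the global stiffness matrix for a small planar truss of bar elements, applies supports, and solves for nodal displacements and reactions. The geometry, section properties and load are invented for the example.

```python
# Minimal 2D truss FEM sketch (not the author's Matlab program): assemble the
# global stiffness matrix for bar elements, apply supports and solve K u = f.
import numpy as np

# Hypothetical example data: node coordinates [m], element connectivity,
# Young's modulus E [Pa] and cross-sectional area A [m^2].
nodes = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.5]])
elements = [(0, 2), (1, 2)]          # two bars meeting at node 2
E, A = 210e9, 1e-4

ndof = 2 * len(nodes)
K = np.zeros((ndof, ndof))
for n1, n2 in elements:
    dx, dy = nodes[n2] - nodes[n1]
    L = np.hypot(dx, dy)
    c, s = dx / L, dy / L
    # Stiffness of a bar element expressed in global coordinates.
    k = (E * A / L) * np.array([[ c*c,  c*s, -c*c, -c*s],
                                [ c*s,  s*s, -c*s, -s*s],
                                [-c*c, -c*s,  c*c,  c*s],
                                [-c*s, -s*s,  c*s,  s*s]])
    dofs = [2*n1, 2*n1 + 1, 2*n2, 2*n2 + 1]
    K[np.ix_(dofs, dofs)] += k

f = np.zeros(ndof)
f[2*2 + 1] = -10e3                   # 10 kN downward load at node 2

fixed = [0, 1, 2, 3]                 # nodes 0 and 1 fully pinned
free = [d for d in range(ndof) if d not in fixed]
u = np.zeros(ndof)
u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])

reactions = K @ u - f                # support reactions at the fixed DOFs
print("displacements [m]:", u.round(6))
print("reactions [N]:", reactions[fixed].round(1))
```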

Relevance: 100.00%

Abstract:

This dissertation describes a system for supporting the rational use of electrical energy, developed within the Thesis/Dissertation course unit. The application domain falls within the context of European Union Directive 2006/32/EC, which states that consumers must be given the information and the means to reduce consumption and increase individual energy efficiency. The objective is to develop a solution that provides graphical representation of consumption/production, the definition of consumption ceilings, automatic generation of alerts and alarms, anonymous comparison with customers with a similar profile by region and, for industrial customers, consumption/production forecasting. It is a distributed system composed of a front-end and a back-end. The front-end consists of the user interface applications developed for Android mobile devices and Web browsers. The back-end stores and processes the information and is hosted on a cloud computing platform (Google App Engine) that exposes a standard Web service interface. This choice ensures interoperability, scalability and robustness. The design, development and testing of the prototype are described in detail, including: (i) the implemented features for managing and analysing energy consumption and production; (ii) the data structures; (iii) the database and the Web service; and (iv) the tests and debugging carried out. Finally, an assessment of the project is presented and suggestions for improvement are made.
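The dissertation's actual service code is not given in the abstract; the snippet below is only a minimal Python sketch of one of the features described, namely checking consumption readings against a user-defined ceiling to raise alerts and alarms. The data layout, threshold values and function names are assumptions made for illustration, not the system's API.

```python
# Illustrative sketch (not the dissertation's actual service): flag readings
# that exceed a user-defined consumption ceiling, as a basis for alerts/alarms.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reading:
    timestamp: datetime
    kwh: float                      # energy consumed in the interval

def check_ceiling(readings, ceiling_kwh, alarm_factor=1.5):
    """Return (level, reading) pairs: 'alert' above the ceiling,
    'alarm' above ceiling * alarm_factor."""
    events = []
    for r in readings:
        if r.kwh > ceiling_kwh * alarm_factor:
            events.append(("alarm", r))
        elif r.kwh > ceiling_kwh:
            events.append(("alert", r))
    return events

# Example usage with invented data.
data = [Reading(datetime(2011, 5, 1, h), kwh) for h, kwh in
        [(10, 1.2), (11, 2.8), (12, 4.9)]]
for level, r in check_ceiling(data, ceiling_kwh=2.5):
    print(f"{level.upper()} at {r.timestamp:%H:%M}: {r.kwh} kWh")
```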

Relevance: 100.00%

Abstract:

Nowadays, real-time systems are growing in importance and complexity. With the shift from uniprocessor to multiprocessor environments, the work done for the former is not fully applicable to the latter, since the level of complexity differs, mainly because of the presence of multiple processors in the system. It was soon realized that the complexity of the problem does not grow linearly with the addition of processors. In fact, this complexity stands as a barrier to scientific progress in an area that, for now, remains largely uncharted, and this is seen above all in task scheduling. The move to this new environment, whether for real-time systems or not, promises the opportunity to carry out work that would never be possible in the uniprocessor case, bringing new performance guarantees, lower monetary costs and lower energy consumption. This last factor proved early on to be perhaps the greatest barrier to the development of new uniprocessor chips: as new processors reached the market offering ever higher performance, a heat-generation limit became apparent that forced the emergence of the multiprocessor area. In the future, the number of processors on a chip is expected to keep increasing and, naturally, new techniques must be developed to exploit their inherent advantages; scheduling algorithms are no exception. Over the years, different categories of multiprocessor scheduling algorithms have been developed to address this problem, most notably global, partitioned and semi-partitioned algorithms. The global approach assumes a single global queue accessible by all available processors. This makes task migration possible, that is, the execution of a task can be stopped and resumed on a different processor. At any given instant, the m highest-priority tasks in the ready set are selected for execution on the m processors. This class promises high utilization bounds, at the cost of a large number of task preemptions and migrations. In contrast, partitioned algorithms place tasks into partitions, and each partition is assigned to one of the available processors, i.e. one partition per processor. For this reason task migration is not possible, so the utilization bound is not as high as in the previous case, but the number of task preemptions decreases significantly. The semi-partitioned scheme is a hybrid between the two previous cases: some tasks are split so that they can be executed by a group of processors, while others are assigned to a single processor. The result is a solution able to distribute the work to be done in a more efficient and balanced way. Unfortunately, in all these cases there is a discrepancy between theory and practice, because assumptions are made that do not hold in real life. To address this problem, these scheduling algorithms must be implemented in real operating systems and their applicability verified so that, where it falls short, the necessary changes can be made at both the theoretical and the practical level.
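As an illustration of the partitioned approach described above, the following Python sketch assigns a task set to processors by first-fit decreasing on utilization; the task parameters are invented, and the simple utilization bound of 1.0 per processor is an assumption chosen for brevity, not a claim about any particular schedulability test.

```python
# Illustrative sketch of the partitioned approach: assign tasks to processors
# by first-fit decreasing on utilization (C_i / T_i), so that no processor's
# total utilization exceeds 1. Task parameters are invented.
def partition_first_fit(tasks, m):
    """tasks: list of (name, wcet, period); m: number of processors.
    Returns (partitions, per-processor load) or raises if a task does not fit."""
    bins = [[] for _ in range(m)]
    load = [0.0] * m
    # Sort by decreasing utilization to improve packing.
    for name, c, t in sorted(tasks, key=lambda x: -x[1] / x[2]):
        u = c / t
        for p in range(m):
            if load[p] + u <= 1.0:          # simple per-processor bound
                bins[p].append(name)
                load[p] += u
                break
        else:
            raise ValueError(f"task {name} does not fit on any processor")
    return bins, load

tasks = [("t1", 2, 5), ("t2", 3, 10), ("t3", 1, 4), ("t4", 4, 20), ("t5", 2, 8)]
bins, load = partition_first_fit(tasks, m=2)
for p, (names, u) in enumerate(zip(bins, load)):
    print(f"CPU{p}: {names} (utilization {u:.2f})")
```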

Relevance: 100.00%

Abstract:

INTRODUCTION: New scores have been developed and validated in the US for in-hospital mortality risk stratification in patients undergoing coronary angioplasty: the National Cardiovascular Data Registry (NCDR) risk score and the Mayo Clinic Risk Score (MCRS). We sought to validate these scores in a European population with acute coronary syndrome (ACS) and to compare their predictive accuracy with that of the GRACE risk score. METHODS: In a single-center ACS registry of patients undergoing coronary angioplasty, we used the area under the receiver operating characteristic curve (AUC), a graphical representation of observed vs. expected mortality, and net reclassification improvement (NRI)/integrated discrimination improvement (IDI) analysis to compare the scores. RESULTS: A total of 2148 consecutive patients were included, mean age 63 years (SD 13), 74% male and 71% with ST-segment elevation ACS. In-hospital mortality was 4.5%. The GRACE score showed the best AUC (0.94, 95% CI 0.91-0.96) compared with NCDR (0.87, 95% CI 0.83-0.91, p=0.0003) and MCRS (0.85, 95% CI 0.81-0.90, p=0.0003). In model calibration analysis, GRACE showed the best predictive power. With GRACE, patients were more often correctly classified than with MCRS (NRI 78.7, 95% CI 59.6-97.7; IDI 0.136, 95% CI 0.073-0.199) or NCDR (NRI 79.2, 95% CI 60.2-98.2; IDI 0.148, 95% CI 0.087-0.209). CONCLUSION: The NCDR and Mayo Clinic risk scores are useful for risk stratification of in-hospital mortality in a European population of patients with ACS undergoing coronary angioplasty. However, the GRACE score is still to be preferred.
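As a hedged illustration of the discrimination analysis reported above, the Python snippet below computes the area under the ROC curve for two competing risk scores; since the registry data are not available, outcomes and scores are simulated, and the scores are generic stand-ins rather than the GRACE, NCDR or Mayo Clinic formulas.

```python
# Hedged sketch of the discrimination comparison: compute the area under the
# ROC curve (AUC) for two competing risk scores on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
died = rng.binomial(1, 0.045, n)                 # ~4.5% in-hospital mortality

# Simulated scores: 'score_a' tracks the outcome more closely than 'score_b'.
score_a = died * rng.normal(2.0, 1.0, n) + rng.normal(0, 1, n)
score_b = died * rng.normal(1.0, 1.0, n) + rng.normal(0, 1, n)

for name, score in [("score A", score_a), ("score B", score_b)]:
    print(f"{name}: AUC = {roc_auc_score(died, score):.3f}")
```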

Relevance: 100.00%

Abstract:

The main objective of this thesis on flooding was to produce a detailed report on flooding with specific reference to the Clare River catchment. Past flooding in the Clare River catchment was assessed with specific reference to the November 2009 flood event. A Geographic Information System was used to produce a graphical representation of the spatial distribution of the November 2009 flood. Flood risk is prominent within the Clare River catchment especially in the region of Claregalway. The recent flooding events of November 2009 produced significant fluvial flooding from the Clare River. This resulted in considerable flood damage to property. There were also hidden costs such as the economic impact of the closing of the N17 until floodwater subsided. Land use and channel conditions are traditional factors that have long been recognised for their effect on flooding processes. These factors were examined in the context of the Clare River catchment to determine if they had any significant effect on flood flows. Climate change has become recognised as a factor that may produce more significant and frequent flood events in the future. Many experts feel that climate change will result in an increase in the intensity and duration of rainfall in western Ireland. This would have significant implications for the Clare River catchment, which is already vulnerable to flooding. Flood estimation techniques are a key aspect in understanding and preparing for flood events. This study uses methods based on the statistical analysis of recorded data and methods based on a design rainstorm and rainfall-runoff model to estimate flood flows. These provide a mathematical basis to evaluate the impacts of various factors on flooding and also to generate practical design floods, which can be used in the design of flood relief measures. The final element of the thesis includes the author’s recommendations on how flood risk management techniques can reduce existing flood risk in the Clare River catchment. Future implications to flood risk due to factors such as climate change and poor planning practices are also considered.
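As a minimal sketch of the statistical flood estimation methods mentioned above, the Python snippet below fits a Gumbel (EV1) distribution to a series of annual maximum flows and derives design floods for selected return periods; the flow series is invented and is not Clare River data, and the distribution choice is an assumption made for illustration.

```python
# Hedged sketch of statistical flood estimation: fit a Gumbel (EV1) distribution
# to annual maximum flows and estimate design floods for chosen return periods.
import numpy as np
from scipy import stats

annual_max_flow = np.array([112, 95, 130, 88, 150, 105, 99, 160, 120, 140,
                            108, 90, 175, 125, 98, 135, 118, 102, 145, 110.0])  # m^3/s

loc, scale = stats.gumbel_r.fit(annual_max_flow)
for T in (2, 25, 100):                       # return periods in years
    q = stats.gumbel_r.ppf(1 - 1.0 / T, loc=loc, scale=scale)
    print(f"{T:>3}-year design flood: {q:.1f} m^3/s")
```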

Relevance: 100.00%

Abstract:

This project covers the design and development of a prototype Methodology for the Assessment of Environmental Learning, which we call "MEVA-Ambiental". To make this possible, we rely on ontological and constructivist foundations for representing and analysing knowledge, so that the Knowledge Increment (IC) can be quantified. For us, the IC becomes a socio-educational indicator used to determine, as a percentage, the effectiveness of environmental education workshops. Proceeding in this way, the resulting scores can be taken as a starting point for longitudinal studies and for understanding how new knowledge is "anchored" to the learners' cognitive structure. Beyond the theoretical formulation of the method, we also provide the technical solution that shows how functional and applicable the empirical part of the methodology is. This solution, which we have called "MEVA-Tool", is a virtual tool that automates data collection and processing with a dynamic structure based on web questionnaires to be filled in by the students, a database that stores the information and allows selective filtering, and an Excel workbook that processes the information, produces the graphical representation of the results, and carries out the analysis and conclusions.
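The abstract does not give the exact formula behind the Knowledge Increment (IC), so the snippet below is only a hedged sketch of one plausible reading: expressing the gain between pre- and post-workshop questionnaire scores as a percentage of the possible improvement. The function name and the scores are invented for illustration.

```python
# Hedged sketch only: the exact IC formula is not given in the abstract, so this
# illustrates one plausible "Knowledge Increment" expressed as a percentage,
# using normalized gain between pre- and post-workshop scores.
def knowledge_increment(pre, post, max_score):
    """Percentage of the possible improvement actually achieved."""
    room_to_improve = max_score - pre
    if room_to_improve == 0:
        return 0.0
    return 100.0 * (post - pre) / room_to_improve

# Invented example: a student scores 12/30 before and 21/30 after the workshop.
print(f"IC = {knowledge_increment(12, 21, 30):.1f}%")   # -> 50.0%
```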

Relevance: 100.00%

Abstract:

In order to interpret the biplot it is necessary to know which points (usually the variables) are the important contributors to the solution, and this information is available separately as part of the biplot's numerical results. We propose a new scaling of the display, called the contribution biplot, which incorporates this diagnostic directly into the graphical display, showing visually the important contributors and thus facilitating the biplot interpretation and often simplifying the graphical representation considerably. The contribution biplot can be applied to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. In the contribution biplot, one set of points, usually the rows of the data matrix, optimally represents the spatial positions of the cases or sample units, according to some distance measure that usually incorporates some form of standardization unless all data are comparable in scale. The other set of points, usually the columns, is represented by vectors that are related to their contributions to the low-dimensional solution. A fringe benefit is that usually only one common scale for row and column points is needed on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot legible. Furthermore, this version of the biplot also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important, when they are in fact contributing minimally to the solution.
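As a minimal sketch of the contribution diagnostic the display is built on, the Python snippet below computes, from the singular value decomposition of a centred data matrix, the contribution of each variable to the first two principal axes; this follows the generic SVD definition of contributions and does not reproduce the paper's exact scaling. The data are simulated.

```python
# Hedged sketch: compute, via the singular value decomposition, the contribution
# of each column (variable) to the first two principal axes, the diagnostic that
# the contribution biplot displays. Data are invented.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))
Xc = X - X.mean(axis=0)                   # column-centre the data matrix

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
# Contribution of variable j to axis k is the squared loading v_{jk}^2;
# each column of 'contrib' sums to 1 over the variables.
contrib = Vt.T ** 2
for k in range(2):
    print(f"axis {k + 1} contributions:", np.round(contrib[:, k], 3))
```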

Relevance: 100.00%

Abstract:

The graphical representation of spatial soil properties in a digital environment is complex because it requires a conversion of data collected in a discrete form onto a continuous surface. The objective of this study was to apply three-dimension techniques of interpolation and visualization on soil texture and fertility properties and establish relationships with pedogenetic factors and processes in a slope area. The GRASS Geographic Information System was used to generate three-dimensional models and ParaView software to visualize soil volumes. Samples of the A, AB, BA, and B horizons were collected in a regular 122-point grid in an area of 13 ha, in Pinhais, PR, in southern Brazil. Geoprocessing and graphic computing techniques were effective in identifying and delimiting soil volumes of distinct ranges of fertility properties confined within the soil matrix. Both three-dimensional interpolation and the visualization tool facilitated interpretation in a continuous space (volumes) of the cause-effect relationships between soil texture and fertility properties and pedological factors and processes, such as higher clay contents following the drainage lines of the area. The flattest part with more weathered soils (Oxisols) had the highest pH values and lower Al3+ concentrations. These techniques of data interpolation and visualization have great potential for use in diverse areas of soil science, such as identification of soil volumes occurring side-by-side but that exhibit different physical, chemical, and mineralogical conditions for plant root growth, and monitoring of plumes of organic and inorganic pollutants in soils and sediments, among other applications. The methodological details for interpolation and a three-dimensional view of soil data are presented here.
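The study itself used the GRASS GIS and ParaView; purely as an illustrative sketch of the kind of three-dimensional interpolation involved, the Python snippet below interpolates scattered soil samples (x, y, depth) onto a regular grid that could then be rendered as a volume. The sample locations and clay contents are invented.

```python
# Illustrative sketch (the study used GRASS GIS and ParaView, not this code):
# interpolate scattered 3D soil samples (x, y, depth -> clay content) onto a
# regular grid, the kind of volume that can then be rendered and sliced.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
pts = rng.uniform(0, 100, size=(122, 3))             # invented sample locations
clay = 20 + 0.3 * pts[:, 2] + rng.normal(0, 2, 122)  # clay increases with depth

# Regular 3D grid covering the sampled volume.
gx, gy, gz = np.mgrid[0:100:20j, 0:100:20j, 0:100:10j]
volume = griddata(pts, clay, (gx, gy, gz), method="linear")

print("grid shape:", volume.shape)
print("mean interpolated clay content: %.1f %%" % np.nanmean(volume))
```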

Relevance: 100.00%

Abstract:

The broad aim of biomedical science in the postgenomic era is to link genomic and phenotype information to allow deeper understanding of the processes leading from genomic changes to altered phenotype and disease. The EuroPhenome project (http://www.EuroPhenome.org) is a comprehensive resource for raw and annotated high-throughput phenotyping data arising from projects such as EUMODIC. EUMODIC is gathering data from the EMPReSSslim pipeline (http://www.empress.har.mrc.ac.uk/) which is performed on inbred mouse strains and knock-out lines arising from the EUCOMM project. The EuroPhenome interface allows the user to access the data via the phenotype or genotype. It also allows the user to access the data in a variety of ways, including graphical display, statistical analysis and access to the raw data via web services. The raw phenotyping data captured in EuroPhenome is annotated by an annotation pipeline which automatically identifies statistically different mutants from the appropriate baseline and assigns ontology terms for that specific test. Mutant phenotypes can be quickly identified using two EuroPhenome tools: PhenoMap, a graphical representation of statistically relevant phenotypes, and mining for a mutant using ontology terms. To assist with data definition and cross-database comparisons, phenotype data is annotated using combinations of terms from biological ontologies.
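The annotation pipeline's actual statistics and ontology mapping cannot be reproduced from the abstract; the Python snippet below is only a hedged sketch of the general idea of flagging a mutant line whose measurements differ significantly from the baseline and proposing a phenotype annotation. The measurements, significance threshold and wording of the annotation are assumptions.

```python
# Hedged sketch of a mutant-vs-baseline comparison of the kind the annotation
# pipeline performs; the real pipeline's statistics and ontology mapping are
# not reproduced here, and all measurements below are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
baseline = rng.normal(25.0, 2.0, 60)     # e.g. body weight of baseline mice (g)
mutant = rng.normal(21.5, 2.0, 12)       # invented knock-out line measurements

t, p = stats.ttest_ind(mutant, baseline, equal_var=False)
if p < 0.001:
    direction = "decreased" if mutant.mean() < baseline.mean() else "increased"
    print(f"significant phenotype (p={p:.2e}): {direction} body weight")
    print(f"candidate annotation: ontology term for '{direction} body weight'")
else:
    print(f"no significant difference (p={p:.3f})")
```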

Relevance: 100.00%

Abstract:

In 1851 the French Social economist Auguste Ott discussed the problem of gluts and commercial crises, together with the issue of distributive justice between workers in co-operative societies. He did so by means of a 'simple reproduction scheme' sharing some features with modern intersectoral transactions tables, in particular in terms of their graphical representation. This paper presents Ott's theory of crises (which was based on the disappointment of expectations) and the context of his model, and discusses its peculiarities, supplying a new piece for the reconstruction of the prehistory of input-output analysis.

Relevance: 100.00%

Abstract:

This study was conducted with high school Chemistry teachers from Florianópolis (SC) and the surrounding region. It examines the pedagogical implications of these teachers' views on environmental issues, and discusses the possibilities and difficulties of bringing these issues into the Chemistry classroom. The semi-structured interviews were analyzed using Textual Discourse Analysis principles. The dominance of content-based teaching and traditional pedagogical approaches appears to hinder curricular change. Most subjects pay little heed to environmental issues and their relation to Chemistry, and endorse a view of science as neutral and of the environment as anthropocentric, views far removed from Green Chemistry principles.

Relevance: 100.00%

Abstract:

Filtration is a widely used unit operation in chemical engineering. The huge variation in the properties of materials to be filtered makes the study of filtration a challenging task. One of the objectives of this thesis was to show that conventional filtration theories are difficult to use when the system to be modelled contains all of the stages and features that are present in a complete solid/liquid separation process. Furthermore, most of the filtration theories require experimental work to be performed in order to obtain critical parameters required by the theoretical models. Creating a good overall understanding of how the variables affect the final product in filtration is somewhat impossible on a purely theoretical basis. The complexity of solid/liquid separation processes requires experimental work, and when tests are needed, it is advisable to use experimental design techniques so that the goals can be achieved. The statistical design of experiments provides the necessary tools for recognising the effects of variables. It also helps to perform experimental work more economically. Design of experiments is a prerequisite for creating empirical models that can describe how the measured response is related to changes in the values of the variables. A software package was developed that provides a filtration practitioner with experimental designs and calculates the parameters for linear regression models, along with the graphical representation of the responses. The developed software consists of two modules, LTDoE and LTRead. The LTDoE module is used to create experimental designs for different filter types. The filter types considered in the software are the automatic vertical pressure filter, double-sided vertical pressure filter, horizontal membrane filter press, vacuum belt filter and ceramic capillary action disc filter. It is also possible to create experimental designs for cases where the variables are totally user defined, say for a customized filtration cycle or a different piece of equipment. The LTRead module is used to read the experimental data gathered from the experiments, to analyse the data and to create models for each of the measured responses. Introducing the structure of the software in more detail and showing some of its practical applications is the main part of this thesis. This approach to the study of cake filtration processes, as presented in this thesis, has been shown to have good practical value when making filtration tests.
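The LTDoE and LTRead modules themselves are not listed in the abstract; as a hedged sketch of the workflow they support, the Python snippet below builds a two-level full factorial design for two filtration variables and fits a linear regression model to an invented response. The factor names, levels and response values are assumptions made for illustration.

```python
# Hedged sketch of the workflow described above (not the LTDoE/LTRead code):
# build a two-level full factorial design for two filtration variables and fit
# a linear regression model to the measured response. All numbers are invented.
import itertools
import numpy as np

# Coded factor levels: filtration pressure and slurry solids concentration.
levels = [-1, 1]
design = np.array(list(itertools.product(levels, repeat=2)), dtype=float)

# Invented measured response, e.g. cake moisture content [%] for each run.
moisture = np.array([31.0, 27.5, 29.0, 24.8])

# Fit y = b0 + b1*x1 + b2*x2 by least squares.
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, moisture, rcond=None)
b0, b1, b2 = coef
print(f"model: moisture = {b0:.2f} + {b1:.2f}*pressure + {b2:.2f}*solids")
print("predicted at centre point:", round(b0, 2))
```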

Relevance: 100.00%

Abstract:

Although both are accounts of events, history and the novel are not treated alike: the content of a novel is usually regarded as the opposite of that of a historical text. History is assumed to recount true things, while the novel excels in the imaginary. In representing genocide, however, novelistic and historical discourse share many narrative strategies through which the rereading of the tragic experience is carried out. Numerous devices lead discourse to content itself with being the bearer of a sovereign consciousness that transcends the facts, the time and the space connected to the event. Neither history nor the novel is an experiential reconstruction, but the process of narrativization must display a discursive depth capable of producing in the reader the representation of a world. This thesis takes as its object the literary modalities of the narratives and novels that attempt to represent the experience of genocide. By analysing this discursive apparatus, which no longer distinguishes between the real, the true and the plausible, the books in our corpus present the experience of genocide and reflect on the ruptures and tears in humanity observed in different regions of the world (in the Ottoman Empire, in Nazi Germany, in Bosnia, in Rwanda, etc.). From this perspective, we examine the literarization of these horrific events, which unfolds according to a narrative scheme made up of truthful sequences and imaginary scenes, highlighting the stylistic and linguistic innovations that make these works singular and original. Building on these specificities, the four main novels of our corpus (Journal de déportation, Être sans destin, Le soldat et le gramophone and Le Passé devant soi) rely on a literary or poetic verisimilitude that allows them to pursue a truth; a literary truth that is not only subjective but capable of accompanying historical truth.