944 results for Data processing methods
Abstract:
The present study started from the problem "How can learning in Mathematics and Estudo do Meio (Environmental Studies) be promoted from an interdisciplinary perspective, exploring the real world?". Accordingly, its objectives are: to select resources and activities that prove motivating for students; to demonstrate the relevance of interrelating different concepts and the importance of connecting them to students' lived experiences; to foster students' engagement in learning Mathematics through Estudo do Meio and real-world situations; to stimulate students' perception of the presence of Mathematics in Estudo do Meio content; and to promote understanding of the relationship between Mathematics and Estudo do Meio content. To this end, the following questions were formulated: (1) What kinds of activities can be provided so as to motivate students towards Estudo do Meio and Mathematics content? (2) How can the exploration of everyday situations and conceptions promote students' engagement in learning Mathematics and Estudo do Meio? (3) How can experimental science teaching, from an interdisciplinary perspective, contribute to developing both conceptual competences (environmental factors: temperature and humidity / data organization and handling / rational numbers) and the inherent capacities for critical thinking and decision making? In view of the study's objectives, four formative situations involving the subjects of Mathematics and Estudo do Meio were developed with a 2nd-year class. The predominant content domain in Estudo do Meio was "Discovering the Natural Environment", while in Mathematics the predominant domains were "Data Organization and Handling" and "Numbers and Operations". Several experimental activities were carried out, in which students played an active role in constructing their own knowledge.
The investigation follows a qualitative methodology, centred on a case study that characterizes an interdisciplinary experience involving the subjects of Mathematics and Estudo do Meio. Data were collected by the teacher-researcher through video and audio recordings, photographs, students' work, and the teacher-researcher's own notes. The results showed how students mobilized and appropriated Mathematics and Estudo do Meio content. Content analysis of the data suggests that students' performance improved at several levels, namely: in cooperative work, in task engagement, in the interactions established, and in motivation for learning Mathematics and Estudo do Meio.
Abstract:
The study of chemical diffusion in biological tissues is a research field of high importance, with applications in many clinical, research and industrial areas. Evaluating the diffusion and viscosity properties of chemicals in tissues is necessary to characterize treatments or the inclusion of preservatives in tissues or organs for low-temperature conservation. Recently, we demonstrated experimentally that the diffusion properties and dynamic viscosity of sugars and alcohols can be evaluated from optical measurements. Our studies were performed on skeletal muscle, but our results revealed that the same methodology can be used with other tissues and different chemicals. Considering the significant number of studies that can be made with this method, it becomes necessary to make data processing and calculation easier. With this objective, we developed a software application that integrates all processing and calculations, making the researcher's work easier and faster. Using the same experimental data that was previously used to estimate the diffusion and viscosity of glucose in skeletal muscle, we repeated the calculations with the new application. Comparing the results obtained with the new application against those from the previous independent routines, we found great similarity and thereby validated the application. This new tool is now available for use in similar research to obtain the diffusion properties of other chemicals in different tissues or organs.
Abstract:
Nowadays, Information Technologies (IT) are increasingly vital within organizations; IT is the engine that supports the business. For most organizations, the operation and development of IT rest on dedicated infrastructures (internal or external) known as Data Centers (DC). These infrastructures concentrate an organization's data processing and storage equipment, and so they are, and will increasingly be, challenged on several fronts, such as scalability, availability, fault tolerance, performance, available or provisioned resources, security, energy efficiency and, inevitably, the associated costs. With the emergence of cloud computing and virtualization technologies, a whole range of new ways to address these challenges opens up. Under this new paradigm, new opportunities for DC consolidation arise, which may pose new challenges for DC managers. It is therefore unrealistic, to say the least, for organizations simply to eliminate their DCs or to transform them to the highest quality standards. Organizations must optimize their DCs; however, an efficient project of this nature, capable of supporting the demands imposed by the market, the needs of the business and the pace of technological evolution, requires complex and costly solutions both to implement and to manage. This is the context of the present work. With the aim of studying DCs, the work begins with a study of the subject, detailing the concept, its historical evolution, its topology and architecture, and the existing standards that govern it. The study then details some of the main trends shaping the future of DCs. Building on the theoretical knowledge resulting from this study, a DC evaluation methodology based on decision criteria is developed.
The study culminates in the analysis of a new technological solution and the evaluation of three possible implementation scenarios: the first based on keeping the current DC; the second based on implementing the new solution in another DC under an external hosting arrangement; and the third based on an IaaS implementation.
Abstract:
The tourism sector is a markedly growing area in Portugal, and one that has been developing its promotion and marketing strategy. However, this strategy relies only on performance and installed-capacity indicators (number of rooms, hotels, flights, overnight stays), leaving statistical indicators in the background. The World Economic Forum's "Travel & Tourism Competitiveness Report 2013" ranks Portugal 72nd with regard to the quality and coverage of the statistical information available for the tourism sector; Spain, by comparison, ranks 3rd. A market strategy without an analytical basis to support a specific and objective framework of guidelines, informed by relevant knowledge of the target markets, is hardly comprehensible, let alone realizable. Implementing a Business Intelligence structure that enables the collection and processing of data, making it possible to relate and support the results obtained in the tourism sector, is fundamental and crucial to creating market strategies. These strategies are built from information about the tourists who visit us, and about potential tourists, so that they can be attracted in the future. Analysing tourists' characteristics and behavioural patterns makes it possible to define distinct profiles and thus detect market trends, in order to promote the most suitable products and services. The knowledge obtained makes it possible, on the one hand, to create and offer the most attractive products to tourists and, on the other, to inform them of the existence of those products in a targeted way. Thus, personalized recommendation which, based on knowledge of tourist profiles, advises on the best products emerges as an essential tool for capturing and expanding the market.
Abstract:
Oral busulfan is the historical backbone of the busulfan+cyclophosphamide regimen for autologous stem cell transplantation. However, intravenous busulfan has more predictable pharmacokinetics and less toxicity than oral busulfan; we, therefore, retrospectively analyzed data from 952 patients with acute myeloid leukemia who received intravenous busulfan for autologous stem cell transplantation. Most patients were male (n=531, 56%), and the median age at transplantation was 50.5 years. Two-year overall survival, leukemia-free survival, and relapse incidence were 67±2%, 53±2%, and 40±2%, respectively. The non-relapse mortality rate at 2 years was 7±1%. Five patients died from veno-occlusive disease. Overall leukemia-free survival and relapse incidence at 2 years did not differ significantly between the 815 patients transplanted in first complete remission (52±2% and 40±2%, respectively) and the 137 patients transplanted in second complete remission (58±5% and 35±5%, respectively). Cytogenetic risk classification and age were significant prognostic factors: the 2-year leukemia-free survival was 63±4% in patients with good risk cytogenetics, 52±3% in those with intermediate risk cytogenetics, and 37±10% in those with poor risk cytogenetics (P=0.01); patients ≤50 years old had better overall survival (77±2% versus 56±3%; P<0.001), leukemia-free survival (61±3% versus 45±3%; P<0.001), relapse incidence (35±2% versus 45±3%; P<0.005), and non-relapse mortality (4±1% versus 10±2%; P<0.001) than older patients. The combination of intravenous busulfan and high-dose melphalan was associated with the best overall survival (75±4%). Our results suggest that the use of intravenous busulfan simplifies the autograft procedure and confirm the usefulness of autologous stem cell transplantation in acute myeloid leukemia. As in allogeneic transplantation, veno-occlusive disease is an uncommon complication after an autograft using intravenous busulfan.
Abstract:
Nowadays, the 3D scanning cameras and microscopes on the market use digital or discrete sensors, such as CCDs or CMOS, for object detection applications. However, these combined systems are not fast enough for some application scenarios, since they require large data processing resources and can be cumbersome. There is therefore a clear interest in exploring the possibilities and performance of analogue sensors, such as arrays of position sensitive detectors, with the final goal of integrating them into 3D scanning cameras or microscopes for object detection purposes. The work performed in this thesis deals with the implementation of prototype systems to explore object detection using amorphous silicon position sensors of 32 and 128 lines, produced in the clean room at CENIMAT-CEMOP. During the first phase of this work, the fabrication and the study of the static and dynamic specifications of the sensors, as well as their conditioning in relation to existing scientific and technological knowledge, served as a starting point. Subsequently, the relevant data acquisition and suitable signal processing electronics were assembled. Various prototypes were developed for the 32 and 128 array PSD sensors. Appropriate optical solutions were integrated to work together with the constructed prototypes, allowing the required experiments to be carried out and the results presented in this thesis to be achieved. All control, data acquisition and 3D rendering platform software was implemented for the existing systems. All these components were combined to form several integrated systems for the 32 and 128 line PSD 3D sensors. The performance of the 32 PSD array sensor and system was evaluated for machine vision applications, such as 3D object rendering, as well as for microscopy applications, such as micro object movement detection. Trials were also performed with the 128 array PSD sensor systems.
Sensor channel non-linearities of approximately 4 to 7% were obtained. The overall results show that a linear array of 32/128 1D line sensors based on amorphous silicon technology can be used to render 3D profiles of objects. The system and setup presented allow 3D rendering at high speeds and high frame rates. The minimum detail or gap that the sensor system can detect is approximately 350 μm with the current setup. It is also possible to render an object in 3D within a scanning angle range of 15º to 85º and to identify its real height as a function of the scanning angle and the image displacement distance on the sensor. Both simple and less simple objects, such as a rubber and a plastic fork, can be rendered in 3D properly and accurately, including at high resolution, using this sensor and system platform. The n-i-p structure sensor system can detect the primary and even derived colors of objects through proper adjustment of the system's integration time and by combining white, red, green and blue (RGB) light sources. A mean colorimetric error of 25.7 was obtained. It is also possible to detect the movement of micrometer-scale objects using the 32 PSD sensor system. This kind of setup makes it possible to detect whether a micro object is moving, its dimensions, and its position in two dimensions, even at high speeds. Results show a non-linearity of about 3% and a spatial resolution of < 2 µm.
Abstract:
In the recent past, hardly anyone could have predicted this course of GIS development. GIS is moving from the desktop to the cloud. Web 2.0 enabled people to input data into the web, and these data are increasingly geolocated. Large amounts of data formed what is called "Big Data", which scientists still do not fully know how to handle. Different Data Mining tools are used to try to extract useful information from this Big Data. In our study, we also deal with one part of these data: User Generated Geographic Content (UGGC). The Panoramio initiative allows people to upload photos and describe them with tags. These photos are geolocated, which means that they have an exact location on the Earth's surface according to a certain spatial reference system. Using Data Mining tools, we try to answer whether it is possible to extract land use information from Panoramio photo tags, and to what extent this information can be accurate. Finally, we compared different Data Mining methods in order to establish which performs best for this kind of data, which is text. Our answers are quite encouraging: with more than 70% accuracy, we showed that extracting land use information is possible to some extent. We also found the Memory-Based Reasoning (MBR) method to be the most suitable for this kind of data in all cases.
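Memory-Based Reasoning, as used in the study above, is essentially lazy k-nearest-neighbour classification over the photos held in memory. A minimal sketch of the idea, with invented tags and land-use labels (the real Panoramio data and the study's actual feature encoding are not reproduced here):

```python
# Sketch of Memory-Based Reasoning (lazy k-NN) over photo tags.
# The tag lists and land-use labels below are hypothetical examples.
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words tag vectors (Counters)."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (sum(v * v for v in a.values()) ** 0.5) * \
          (sum(v * v for v in b.values()) ** 0.5)
    return num / den if den else 0.0

def mbr_classify(train, tags, k=3):
    """Label a photo by the majority land-use class among the k training
    photos whose tag vectors are most similar to the query's."""
    query = Counter(tags)
    ranked = sorted(train, key=lambda ex: cosine(Counter(ex[0]), query),
                    reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical "memory" of tagged, labelled photos
train = [
    (["beach", "sand", "sea"], "coastal"),
    (["harbour", "boats", "sea"], "coastal"),
    (["wheat", "field", "tractor"], "agricultural"),
    (["vineyard", "grapes", "field"], "agricultural"),
    (["street", "cafe", "buildings"], "urban"),
    (["square", "church", "buildings"], "urban"),
]

print(mbr_classify(train, ["sea", "sand", "sunset"]))  # → coastal
```

Because MBR defers all work to query time, adding newly tagged photos to the memory requires no retraining, which suits continuously growing UGGC collections.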
Abstract:
The purpose of this study is to explore the humorous side of television advertising and its impact on Portuguese consumers' hearts, minds and wallets. Both qualitative (in-depth interviews) and quantitative (an on-line survey and subsequent statistical data analysis) methods were used, ensuring more consistent, robust and valid research. Twenty-five face-to-face interviews with randomly chosen consumers were conducted, and three interviews with marketers and television advertisers were performed via e-mail, in order to explore the subject in depth. Moreover, 360 people answered the on-line survey. The analysis of the collected data showed humor perception to be positively correlated with persuasion and intention to purchase the product; intention to share the advert; message comprehension; product liking; and the development of positive feelings towards the brand, as well as brand credibility. The main implication of these findings is that humor in advertising can boost its effectiveness.
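The positive correlations reported above would typically be quantified with a coefficient such as Pearson's r. A minimal sketch with made-up rating scores (the study's actual survey data and statistical procedure are not available here):

```python
# Pearson correlation between two rating series, from its definition:
# r = cov(x, y) / (std(x) * std(y)). The scores below are hypothetical.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

humor_perception = [1, 2, 2, 3, 4, 4, 5, 5]  # hypothetical 1-5 ratings
purchase_intent  = [1, 1, 2, 3, 3, 4, 4, 5]

print(round(pearson_r(humor_perception, purchase_intent), 2))  # → 0.94
```

A value close to +1 indicates that respondents who perceived an advert as funnier also tended to report higher purchase intention; it does not, of course, by itself establish causation.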
Abstract:
In the following text I will develop three major aspects. The first is to draw attention to what seem to have been the disciplinary fields where, despite everything, the Digital Humanities (in the broad sense adopted here) have asserted themselves most comprehensively. It is here, I think, that I run the greatest risks, not only for the reasons mentioned above, but also because a significant part, perhaps, of the achievements and of the researchers may have escaped the look I sought to cast over the past few decades, a look always influenced by my own experience and by work carried out in the field of History. But this can be considered a work in progress, open to criticism and suggestions. A second point to note is that emphasis will be given to the main lines of development in the relationship between historical research and digital methodologies, resources and tools. Finally, I will attempt a brief analysis of how the Digital Humanities discourse has been appropriated in recent years, with admittedly debatable data and methods, because studies are still scarce and too little systematic information is available to go beyond an introductory reflection.
Abstract:
Driven by the growth of the internet through the semantic web, together with improvements in communication speed and the rapid growth of storage capacity, the volume of data and information rises considerably every day. Because of this, in recent years there has been growing interest in structures for formal representation with suitable characteristics, such as the ability to organize data and information, as well as the reuse of their contents for the generation of new knowledge. Controlled vocabularies, specifically ontologies, stand out as one such representation structure with high potential. They allow not only data representation but also the reuse of that data for knowledge extraction, coupled with its subsequent storage through relatively simple formalisms. However, to ensure that the knowledge in an ontology is always up to date, ontologies need maintenance. Ontology Learning is the area that studies the details of updating and maintaining ontologies. It is worth noting that the relevant literature already presents first results on automatic ontology maintenance, but still at a very early stage. Human-driven processes are still the usual way to update and maintain an ontology, which makes this a cumbersome task. The generation of new knowledge for ontology growth can be based on Data Mining techniques; Data Mining is an area that studies techniques for data processing, pattern discovery and knowledge extraction in IT systems. This work proposes a novel semi-automatic method for knowledge extraction from unstructured data sources, using Data Mining techniques, namely pattern discovery, focused on improving the precision of the concepts and semantic relations present in an ontology. To verify the applicability of the proposed method, a proof of concept was developed, and its results, applied to the building and construction sector, are presented.
Abstract:
The protection of personal data is an undeniably complex and cross-cutting subject, and it is the subject of this report, which results from a curricular internship at the Portuguese Commission for Data Protection, the authority responsible for the control and supervision of personal data processing. The topic around which this report was prepared is the protection of personal data, analysed from several angles. The protection of personal data has long been a topic that raises many concerns, because it is closely linked to constitutionally protected fundamental rights. The fundamental rights inherent to each of us follow from Article 1 of the Constitution of the Portuguese Republic, in the sense that the dignity of the human person is affirmed as the primary value on which the Portuguese legal system must be based; in other words, the dignity of the human person is the highest value in the Portuguese legal system. It was the development of societies to the point we know today that gave importance to citizens' personal data. In modern societies it is possible to know everything about everyone, and the curiosity of others seems unconcerned with the injuries inflicted on citizens' rights. Where new technologies serve as an excuse for the excessive processing of personal data, and where data subjects do not seem to mind their personal data travelling across the world, it is important that legal systems value the protection of personal data and weigh the implications of its misuse, since such data mirror the identity of each of us and can be used against their owners, causing irreparable damage to their fundamental rights. If we understand the protection of personal data as the possibility for each citizen to decide on the use of their data and how it may be used, we can say that its protection depends essentially on each of us, as data subjects. Therefore, the protection of our data begins with ourselves.
Abstract:
In orthopaedics, the management and treatment of osteochondral (OC) defects remains an ongoing clinical challenge. Autologous osteochondral mosaicplasty has been used as a valid option for OC treatment, although donor site morbidity remains a source of concern [1]. Engineering a whole structure capable of mimicking different tissues (cartilage and subchondral bone) in an integrated manner could be a possible approach to regenerate OC defects. In our group we have been proposing the use of bilayered structures to regenerate osteochondral defects [2,3]. The present study aims to investigate the pre-clinical performance of bilayered hydrogels and spongy-like hydrogels in vivo (in mice and rabbits, respectively), in both subcutaneous and orthotopic models. The bilayered structures were produced from Low Acyl Gellan Gum (LAGG) from Sigma-Aldrich, USA. Cartilage-like layers were obtained from a 2 wt% LAGG solution. The bone-like layers were made of 2 wt% LAGG with incorporation of hydroxyapatite at 20% and 30% (w/v). Hydrogels and spongy-like hydrogels were subcutaneously implanted in mice to evaluate the inflammatory response. Then, OC defects of critical size (4 mm diameter and 5 mm depth) were induced in the rabbit knee, and the hydrogels and sponges implanted. The two structures followed different processing methods: the hydrogels were injected, allowing in situ crosslinking, whereas the spongy-like hydrogels were pre-formed by freeze-drying. The subcutaneous implantation and critical-size OC defect studies were performed for 2 and 4 weeks, respectively. Cellular behavior and inflammatory responses were assessed by histology staining, and biochemical function and matrix deposition by immunohistochemistry. Additionally, the stability of both OC structures and new cartilage and bone formation were evaluated by in vivo computed tomography (Scanco 80). The results showed no acute inflammatory response for either approach.
New tissue formation and integration into the adjacent tissues were also observed, with different characteristic behaviors when comparing the hydrogel and sponge responses. As a future direction, a novel strategy for the regeneration of OC defects can be designed encompassing both hydrogel and spongy-like structures, together with cellular approaches. References: 1. Espregueira-Mendes J. et al. Osteochondral transplantation using autografts from the upper tibio-fibular joint for the treatment of knee cartilage lesions. Knee Surgery, Sports Traumatology, Arthroscopy 20, 1136, 2012. 2. Oliveira JM. et al. Novel hydroxyapatite/chitosan bilayered scaffold for osteochondral tissue-engineering applications: Scaffold design and its performance when seeded with goat bone marrow stromal cells. Biomaterials 27, 6123, 2006. 3. Pereira DR. et al. Gellan Gum-Based Hydrogel Bilayered Scaffolds for Osteochondral Tissue Engineering. Key Engineering Materials 587, 255, 2013.
Abstract:
Master's internship report in Teaching in the 1st and 2nd Cycles of Basic Education
Abstract:
Master's internship report in Teaching of Mathematics in the 3rd Cycle of Basic Education and in Secondary Education
Abstract:
Integrated master's dissertation in Industrial Electronics and Computers Engineering