987 results for R software
Abstract:
Introduction – This study evaluated the effect of caffeine on the contrast-to-noise ratio (CNR) in SWI images. Objectives – To evaluate the effect of caffeine qualitatively and quantitatively, through calculation of the CNR value in magnitude and MIP images, for the following structures: internal cerebral vein, superior sagittal sinus, torcula, and middle cerebral artery. Methodology – The study population included 24 healthy volunteers who had abstained from caffeine intake for at least 24 h. SWI images were acquired before and after the ingestion of 100 ml of coffee. The volunteers were divided into four groups of six individuals each and evaluated separately after a different time interval for each group (15, 25, 30, or 45 min post-caffeine). A Siemens Avanto 1.5 T scanner with a standard head coil was used, with the following parameters: high-resolution 3D T2* GRE in the axial plane, TR=49; TE=40; FA=15; FOV=187x230; matrix=221x320. Image processing was performed in the OsiriX® software and statistical analysis in GraphPad Prism®. Results and Discussion – Signal changes and contrast differences predominated in the venous structures and were not significant in white matter, CSF, or the middle cerebral artery. Pre-caffeine CNR values differed significantly from post-caffeine values in the magnitude and MIP images of the internal cerebral vein, and in the magnitude images of the superior sagittal sinus and the torcula (p<0.0001). No significant differences were found between the groups evaluated at the different post-caffeine time points. Conclusions – We speculate that caffeine may come to be used as a cheap, effective, and easily administered contrast agent for SWI imaging.
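The abstract does not state the CNR definition used; a common formulation for ROI-based measurements, given here as an assumption rather than the authors' exact expression, is:

```latex
% Assumed ROI-based CNR definition (not stated in the abstract):
% mean structure signal vs. adjacent background signal, over noise SD.
\[
  \mathrm{CNR} = \frac{\lvert \bar{S}_{\text{structure}} - \bar{S}_{\text{background}} \rvert}{\sigma_{\text{noise}}}
\]
```

where the S terms are mean signal intensities measured within regions of interest and the denominator is the standard deviation of background noise.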
Abstract:
Microarrays allow the simultaneous monitoring of thousands of genes, quantifying the abundance of transcripts under the same experimental condition at the same time. Among the various available array technologies, two-channel cDNA microarray experiments, the focus of this work, appear in numerous technical protocols associated with genomic studies. Microarray experiments involve many steps, and each one can affect the quality of the raw data. Background correction and normalization are preprocessing techniques for cleaning and correcting the raw data when undesirable fluctuations arise from technical factors. Several recent studies showed that no preprocessing strategy outperforms the others in all circumstances, so it seems difficult to provide general recommendations. In this work, we propose using exploratory techniques to visualize the effects of preprocessing methods on the statistical analysis of two-channel cancer microarray data sets in which the cancer types (classes) are known. The arrow plot was used to select differentially expressed genes, and the graph of profiles resulting from correspondence analysis was used to visualize the results. Six background-correction methods and six normalization methods were used, yielding 36 preprocessing combinations, which were analyzed on a published cDNA microarray database (Liver), available at http://genome-www5.stanford.edu/, whose microarrays were already classified by cancer type. All statistical analyses were performed using the R statistical software.
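The abstract names R but not the packages used; a minimal sketch of the kind of background-correction/normalization grid described, assuming the limma package (a choice of ours, not stated in the paper):

```r
# Sketch: crossing background-correction and normalization methods with
# limma. Package choice, method names, and data are assumptions; the
# paper does not say which implementations produced its 36 combinations.
library(limma)

set.seed(1)
RG <- new("RGList", list(
  R  = matrix(rexp(2000, 1/500), 500, 4),   # red-channel foreground
  G  = matrix(rexp(2000, 1/500), 500, 4),   # green-channel foreground
  Rb = matrix(50, 500, 4),                  # red-channel background
  Gb = matrix(50, 500, 4),                  # green-channel background
  printer = list(ngrid.r = 1, ngrid.c = 1, nspot.r = 25, nspot.c = 20)
))

bg.methods   <- c("none", "subtract", "half", "minimum", "normexp", "edwards")
# A sixth normalization method (e.g. "composite") needs control-spot
# annotation, so this runnable sketch crosses 6 x 5 rather than the
# 6 x 6 grid used in the paper.
norm.methods <- c("none", "median", "loess", "printtiploess", "robustspline")

preprocessed <- list()
for (bg in bg.methods) {
  RGb <- backgroundCorrect(RG, method = bg)
  for (nm in norm.methods) {
    preprocessed[[paste(bg, nm, sep = "+")]] <-
      normalizeWithinArrays(RGb, method = nm)   # normalizes the log-ratios
  }
}
length(preprocessed)   # number of preprocessing combinations
```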
Abstract:
Conference: 2nd Experiment@ International Conference (exp.at), Univ. Coimbra, Coimbra, Portugal, Sep 18-20, 2013
Abstract:
Learning and teaching processes, like all human activities, can be mediated through the use of tools. Information and communication technologies are now widespread within education. Their use in the daily life of teachers and learners affords engagement with educational activities at any place and time, not necessarily linked to an institution or a certificate. In the absence of formal certification, learning under these circumstances is known as informal learning. Despite the lack of certification, learning with technology in this way presents opportunities to gather information about, and new ways of exploiting, an individual's learning. Cloud technologies provide ways to achieve this through new architectures, methodologies, and workflows that facilitate semantic tagging, recognition, and acknowledgment of informal learning activities. The transparency and accessibility of cloud services mean that institutions and learners can exploit existing knowledge to their mutual benefit. The TRAILER project facilitates this aim by providing a technological framework using cloud services, a workflow, and a methodology. The services facilitate the exchange of information and knowledge associated with informal learning activities, ranging from the use of social software through widgets, computer gaming, and remote laboratory experiments. Data from these activities are shared among institutions, learners, and workers. The project demonstrates the possibility of gathering information related to informal learning activities independently of the context or tools used to carry them out.
Abstract:
In today's world, the impact of software buying decisions has rising relevance in social and economic terms. This research tries to explain it by focusing on organizations' buying decisions for Operating Systems and Office Suites for personal computers, and on the impact on competition between incumbent and alternative players in these software categories, although the research hypotheses and conclusions may extend to other software categories and platforms. We concluded that in this market, besides brand image, product features, or price, other factors can influence buying choices. Network effects, switching costs, local network effects, lock-in, and consumer heterogeneity all influence the buying decision, protecting the incumbent and making it difficult for competitive alternatives, based mainly on product features and price, to gain market share from the incumbent. This happens more strongly in the Operating Systems category.
Abstract:
The development of neonatal intensive care has led to an increase in the prevalence of children with low birth weight and associated morbidity. The objectives of this study are to answer three questions: (1) Is there an association between birth weight (BW) and neuromotor performance? (2) Is the neuromotor performance of twins within the normal range? (3) Are intra-pair similarities in the neuromotor development of monozygotic (MZ) and dizygotic (DZ) twins of unequal magnitude? The sample consisted of 191 children (78 MZ and 113 DZ), aged 8.9±3.1 years and with an average BW of 2246.3±485.4 g. In addition to gestational characteristics, sports participation and the Zurich Neuromotor Assessment (ZNA) were observed at childhood age. The statistical analysis was carried out with the SPSS 18.0 and STATA 10 software and the ZNA performance scores. The significance level was 0.05. High intra- and inter-investigator reliabilities were obtained for the neuromotor items (0.793<R<1). BW, gestational length, and Apgar 5' accounted for <26% of the variance. Twins showed elevated percentages of subjects (32.7%–76.9%) with low performance.
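The abstract compares intra-pair similarity in MZ and DZ twins without naming its estimator; a minimal R sketch of such a comparison, using the one-way ANOVA intraclass correlation on simulated data (both the estimator and the data are assumptions):

```r
# Minimal sketch: intra-pair similarity as a one-way ANOVA intraclass
# correlation (ICC), computed separately for MZ and DZ pairs.
# All data below are simulated; the paper's exact estimator is not stated.
set.seed(42)

icc_oneway <- function(score, pair.id) {
  fit <- aov(score ~ factor(pair.id))
  ms  <- summary(fit)[[1]][["Mean Sq"]]
  (ms[1] - ms[2]) / (ms[1] + ms[2])   # k = 2 members per pair
}

n.pairs <- 40
pair.id <- rep(seq_len(n.pairs), each = 2)
# MZ pairs simulated as more alike (smaller within-pair noise) than DZ
mz <- rnorm(n.pairs)[pair.id] + rnorm(2 * n.pairs, sd = 0.4)
dz <- rnorm(n.pairs)[pair.id] + rnorm(2 * n.pairs, sd = 0.9)

icc_oneway(mz, pair.id)   # higher intra-pair similarity
icc_oneway(dz, pair.id)   # lower intra-pair similarity
```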
Abstract:
In standard Schumpeterian growth models, only follower firms invest in R&D activities and larger economies grow faster. Since these results are counterfactual, this paper shows that leader firms often support R&D activities and that economic growth can be independent of market size. In particular, the likelihood that R&D leadership is maintained increases with: (i) the technological-knowledge gap between the leader and followers, since a firm-specific learning effect of technological knowledge accumulated from past R&D is considered; (ii) the leader's strategies that delay the next successful R&D supported by some follower firm; (iii) the market size; and (iv) the size of each innovation upgrade.
Abstract:
A repository of learning objects is a system that stores electronic resources in a technology-mediated learning process. The need for this kind of repository is growing as more educators become eager to use digital educational content and more of it becomes available. The sharing and use of these resources rely on content and communication standards as a means to describe and exchange educational resources, commonly known as learning objects. This paper presents the design and implementation of a service-oriented repository of learning objects called crimsonHex. This repository supports new definitions of learning objects for specialized domains; we illustrate this feature with the definition of programming exercises as learning objects and their validation by the repository. The repository is also fully compliant with existing communication standards, and we propose extensions by adding new functions, formalizing message interchange, and providing a REST interface. To validate the interoperability features of the repository, we developed a repository plug-in for Moodle that is expected to be included in the next release of this popular learning management system.
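The abstract mentions a REST interface without documenting its endpoints; a purely illustrative retrieval call in R, in which the host, resource path, and response handling are all hypothetical:

```r
# Hypothetical sketch of fetching a learning object from a repository
# over REST. The host and path are invented for illustration; the actual
# crimsonHex API is not described in the abstract.
library(httr)

base <- "http://repository.example.org/crimsonhex"    # hypothetical host
resp <- GET(paste0(base, "/learning-objects/42"),     # hypothetical resource
            accept("application/xml"))                # IMS packaging is XML-based

if (status_code(resp) == 200) {
  lo <- content(resp, as = "text", encoding = "UTF-8")
  cat(substr(lo, 1, 200))   # peek at the learning-object manifest
}
```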
Abstract:
The Robuter is a robotic mobile platform located in the “Hands-On” Laboratory of the IPP-Hurray! Research Group, at the School of Engineering of the Polytechnic Institute of Porto. Recently, the Robuter was the subject of an upgrade addressing two essential areas: the Hardware Architecture and the Software Architecture. This upgrade was triggered by technical problems on board the robot and by the fact that the hardware/software architecture had become obsolete. This Technical Report gives an overview of the most important aspects of the new Hardware and Software Architectures of the Robuter. It also presents a first approach to the first steps towards using the Robuter platform and provides some hints on future work that may be carried out with this mobile platform.
Abstract:
The development of new products or processes involves the creation, re-creation, and integration of conceptual models from the related scientific and technical domains. In particular, in the context of collaborative networks of organisations (CNO) (e.g. a multi-partner, international project), such developments can be seriously hindered by conceptual misunderstandings and misalignments resulting, for example, from participants with different backgrounds or organisational cultures. The research described in this article addresses this problem by proposing a method and tools to support the collaborative development of shared conceptualisations in the context of a collaborative network of organisations. The theoretical model is based on a socio-semantic perspective, while the method is inspired by conceptual integration theory from the field of cognitive semantics. The modelling environment is built upon a semantic wiki platform. The majority of the article is devoted to the development of an informal ontology in the context of a European R&D project, studied using action research. The case-study results validated the logical structure of the method and showed its utility.
Abstract:
From the 1990s onwards, calculation tools began to appear on the market with the aim of streamlining construction engineering design. Until the end of the 1970s, existing computers were enormous, and only organizations with great economic power could acquire them. In the 1980s the PC (Personal Computer) appeared on the market; these small machines began to be acquired by companies in general, and in Portugal, by the end of that decade, it was possible to find individuals who already owned their own PC. In the 1990s, the entry of recent graduates from higher-education institutions into the market fostered the appearance of software companies dedicated to developing software according to the needs of the market itself, resulting in custom commercial software and commercial off-the-shelf (COTS) software. Commercial software, being used by a large number of people (easily reaching the thousands in the case of COTS), is well positioned to evolve according to the systematic demands of the market itself, reaching high levels of compliance with quality requirements, namely functionality, reliability, usability, maintainability, efficiency, portability, and quality in use. The use of commercial software in construction engineering design is nowadays an absolutely widespread practice. Software selection can become a complex process, especially in areas where supply is plentiful. The use of well-defined evaluation criteria can streamline the process and give greater confidence at the moment of the final decision. This document presents a proposed methodology for evaluating and comparing software packages.
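The abstract lists quality criteria without giving its scoring scheme; a minimal weighted-sum sketch in R, in which the weights and scores are invented for illustration and only the criteria names come from the abstract:

```r
# Minimal sketch of a weighted-criteria comparison of software packages.
# Weights, scores, and package names are hypothetical; the paper's actual
# methodology and scale are not given in the abstract.
criteria <- c("functionality", "reliability", "usability",
              "maintainability", "efficiency", "portability")
weights  <- c(0.25, 0.20, 0.20, 0.15, 0.10, 0.10)   # must sum to 1

scores <- rbind(                 # 1 (poor) to 5 (excellent), invented
  SoftwareA = c(5, 4, 3, 4, 4, 3),
  SoftwareB = c(4, 4, 5, 3, 3, 4)
)
colnames(scores) <- criteria

totals <- drop(scores %*% weights)        # weighted score per package
sort(totals, decreasing = TRUE)           # higher score = preferred
```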
Abstract:
Final Master's project submitted to obtain the degree of Master in Electronics and Telecommunications Engineering
Abstract:
In the past years, Software Architecture has attracted increased attention from academia and industry as the unifying concept for structuring the design of complex systems. One particular research area deals with the possibility of reconfiguring architectures to adapt the systems they describe to new requirements. Reconfiguration amounts to adding and removing components and connections, and may have to occur without stopping the execution of the system being reconfigured. This work contributes to the formal description of such a process. Taking as a premise that a single formalism hardly ever satisfies all requirements in every situation, we present three approaches, each one with its own assumptions about the systems it can be applied to and with different advantages and disadvantages. Each approach builds on the work of other researchers and has the aesthetic concern of changing the original formalism as little as possible, keeping its spirit. The first approach shows how a given reconfiguration can be specified in the same manner as the system it is applied to and in a way that can be executed efficiently. The second approach explores the Chemical Abstract Machine, a formalism for rewriting multisets of terms, to describe architectures, computations, and reconfigurations in a uniform way. The last approach uses a UNITY-like parallel programming design language to describe computations, represents architectures by diagrams in the sense of Category Theory, and specifies reconfigurations by graph transformation rules.
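To give the flavour of the second approach: a Chemical Abstract Machine represents a state as a multiset (a "solution") of molecules that evolves by reaction rules, so an architecture can be a solution of component and connection molecules. A hypothetical reconfiguration rule replacing a component C by C' while keeping its connection to S might be written as follows (illustrative notation only, not taken from the work being summarized):

```latex
% Hypothetical CHAM-style reaction rule: the multiset containing the old
% component and its connection reacts into one with the new component.
\[
  C,\; \mathit{conn}(C, S) \;\longrightarrow\; C',\; \mathit{conn}(C', S)
\]
```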
Abstract:
Final Master's project submitted to obtain the degree of Master in Electronics and Telecommunications Engineering