849 results for Build tools
Abstract:
Dissertation presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the degree of Master in Audiovisual and Multimedia.
Abstract:
In recent years there has been a marked increase in the use of mobile devices worldwide, and the applications developed for this specific type of device, known as apps, have gained enormous popularity. More and more companies seek a presence on the various mobile operating systems in order to support and grow their business, widening their range of potential customers. Several tools have therefore emerged to make mobile application development easier, known as cross-platform frameworks. These frameworks led to the appearance of web platforms that allow cross-platform applications to be created without requiring programming knowledge. Based on an analysis of several online mobile application builders and of the different existing mobile application development strategies, the implementation of a web platform capable of creating native Android and iOS applications, two of the most widely used operating systems today, was proposed. Once the web platform, named MobileAppBuilder, had been developed, its quality and the applications it creates were evaluated through a questionnaire completed by 10 individuals with a background in computer science engineering, resulting in an overall rating of "excellent". To analyse the performance of the applications produced by the platform, comparative tests were carried out between a MobileAppBuilder application and two counterparts built with two of the online builders studied, namely Andromo and Como. The results of these tests showed that MobileAppBuilder generates lighter, faster applications that are more efficient in some respects, notably at startup.
Abstract:
The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find them difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure and then incrementally extends the program, adding code and proving after each addition that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover. Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be detected efficiently. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that are not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem, with the aid of the tool, into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula.
Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early, practically oriented programming courses. Our hypothesis is that verification could be introduced early in CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
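To make the workflow concrete, the sketch below shows what the total-correctness verification conditions of a single invariant-annotated loop look like and how they can be discharged by an SMT solver. It is only an illustration of the general technique, not the Socos tool itself: it uses the z3-solver Python bindings instead of PVS/Yices, and the summation loop, invariant, and variant are hypothetical examples.

    # Minimal sketch (not Socos): verification conditions for an invariant-
    # annotated loop, discharged with the z3-solver bindings for illustration.
    from z3 import Int, IntVal, Solver, Implies, And, Not, substitute, unsat

    n, i, s = Int("n"), Int("i"), Int("s")

    pre  = n >= 0
    inv  = And(0 <= i, i <= n, 2 * s == i * (i - 1))   # invariant of: s = 0 + 1 + ... + (i-1)
    post = 2 * s == n * (n - 1)
    body = [(s, s + i), (i, i + 1)]                    # loop body: s += i; i += 1

    vcs = {
        # initiation: the invariant holds on entry with i = 0, s = 0
        "init": Implies(pre, substitute(inv, (i, IntVal(0)), (s, IntVal(0)))),
        # consecution: one iteration under the guard i < n preserves the invariant
        "step": Implies(And(inv, i < n), substitute(inv, *body)),
        # exit: invariant plus negated guard establishes the postcondition
        "exit": Implies(And(inv, Not(i < n)), post),
        # termination: the variant n - i is bounded below and strictly decreases
        "term": Implies(And(inv, i < n), And(n - i >= 0, substitute(n - i, *body) < n - i)),
    }

    for name, vc in vcs.items():
        solver = Solver()
        solver.add(Not(vc))          # a VC is valid iff its negation is unsatisfiable
        print(name, "proved" if solver.check() == unsat else "not proved")

In the tool described above, conditions of this kind would be generated automatically from the invariant diagram and handed to the solver, with any unproved conditions left for interactive proof in the PVS proof assistant.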
Abstract:
Building software for Web 2.0 and the Social Media world is non-trivial. It requires understanding how to create infrastructure that will survive at Web scale, meaning that it may have to deal with tens of millions of individual items of data, and cope with hits from hundreds of thousands of users every minute. It also requires you to build tools that will be part of a much larger ecosystem of software and application families. In this lecture we will look at how traditional relational database systems have tried to cope with the scale of Web 2.0, and explore the NoSQL movement that seeks to simplify data-storage and create ultra-swift data systems at the expense of immediate consistency. We will also look at the range of APIs, libraries and interoperability standards that are trying to make sense of the Social Media world, and ask what trends we might be seeing emerge.
Abstract:
The central purpose of this study is to decipher, understand, contextualize, and reinvent managerial work as it is carried out today. The text begins by examining the realities of managerial work from the perspective and studies of two principal authors, Henry Mintzberg and Stefan Tengblad, who present the day-to-day life of managers from a different angle. It then considers the influence of national context on managerial behaviours and models of management, comparing Swedish and US managers in order to identify the cultural, economic, political, and social variables that shape and personalize managerial work. Finally, having understood managerial work in its nature and its external sources of influence, the last section proposes reinventing it on the basis of different postulates and models, on the understanding that there is no magic formula or single solution, only a set of tools that each manager must define and build over the course of their career.
Abstract:
An important objective of the INTEGRATE project is to build tools that support the efficient execution of post-genomic multi-centric clinical trials in breast cancer, which includes the automatic assessment of the eligibility of patients for available trials. The population suited to be enrolled in a trial is described by a set of free-text eligibility criteria that are both syntactically and semantically complex. At the same time, assessing the eligibility of a patient for a trial requires a machine-processable understanding of the semantics of the eligibility criteria in order to evaluate whether the patient data available, for example in the hospital EHR, satisfies these criteria. This paper presents an analysis of the semantics of clinical trial eligibility criteria based on relevant medical ontologies in the clinical research domain: SNOMED-CT, LOINC, and MedDRA. We detect subsets of these widely adopted ontologies that characterize the semantics of the eligibility criteria of trials in various clinical domains and compare these sets. Next, we evaluate the occurrence frequency of the concepts in the concrete case of breast cancer (our first application domain) in order to provide meaningful priorities for the task of binding/mapping these ontology concepts to the actual patient data. We further assess the effort required to extend our approach to new domains in terms of the additional semantic mappings that need to be developed.
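As a rough illustration of the frequency analysis, the sketch below ranks ontology concepts by how often they occur across annotated eligibility criteria, so that the most frequent concepts can be mapped to patient data first. The trial identifiers and concept codes are illustrative placeholders, not data from the INTEGRATE project.

    # Hypothetical sketch: rank ontology concepts by occurrence frequency across
    # eligibility criteria that have already been annotated with concept codes.
    # Codes below are illustrative placeholders, not verified ontology entries.
    from collections import Counter

    annotated_criteria = {
        "trial_A": ["SNOMED:254837009", "LOINC:21893-3", "SNOMED:373873005"],
        "trial_B": ["SNOMED:254837009", "LOINC:21893-3", "MedDRA:10019211"],
    }

    concept_counts = Counter(
        code for codes in annotated_criteria.values() for code in codes
    )

    # Concepts occurring in the most criteria sets are the highest-priority
    # targets for binding/mapping to the actual patient data.
    for code, count in concept_counts.most_common():
        print(f"{code}\toccurs in {count} criteria sets")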
Abstract:
This lecture covers the use of Agile design tools: Storyboards and Scenarios, used in conjunction with Personas. These are also used in participatory design.
Abstract:
Construction projects are risky. However, the characteristics of the risk depend strongly on the type of procurement adopted for managing the project. A build-operate-transfer (BOT) project is recognized as one of the most risky project schemes, and there are instances of project failure where a BOT scheme was employed, with ineffective risk management among the causes. Projects are increasingly being managed using various risk management tools and techniques, but the application of those tools depends on the nature of the project, the organization's policy, the project management strategy, the risk attitude of the project team members, and the availability of resources. Understanding the contents and contexts of BOT projects, together with a thorough understanding of risk management tools and techniques, helps in selecting risk management processes for effective project implementation in a BOT scheme. This paper studies the application of risk management tools and techniques in BOT projects through a review of the relevant literature and develops a model for selecting a risk management process for BOT projects. The application to BOT projects is considered from the viewpoints of the major project participants, and political risks are also discussed. This study contributes to the establishment of a framework for systematic risk management in BOT projects.
Abstract:
Grade three students used tablet computers with a pre-selected series of applications over a seven-month period at school and through a community afterschool program. The study determined that these students benefited from differentiated learning in the school environment and online collaborative play in the afterschool centre. Benefits of the exposure to digital tools included intergenerational learning, as children assisted both parents and teachers with digital applications; problem-solving; and enhanced collaborative play for students across environments. Although this study makes a contribution to the field of digital literacy and young learners, the researchers conclude that further investigation is warranted into the inter-relationships between home, school, and community as spaces for the learning and teaching of digital technologies.
Abstract:
Virtual screening is a central technique in drug discovery today. Millions of molecules can be tested in silico with the aim of selecting only the most promising ones and testing them experimentally. The topic of this thesis is ligand-based virtual screening tools, which take existing active molecules as the starting point for finding new drug candidates. One goal of this thesis was to build a model that gives the probability that two molecules are biologically similar as a function of one or more chemical similarity scores. Another important goal was to evaluate how well different ligand-based virtual screening tools are able to distinguish active molecules from inactive ones. A further criterion set for the virtual screening tools was their applicability to scaffold-hopping, i.e. finding new active chemotypes. In the first part of the work, a link was defined between the abstract chemical similarity score given by a screening tool and the probability that the two molecules are biologically similar. These results help to decide objectively which virtual screening hits to test experimentally. The work also resulted in a new type of data fusion method for use when combining two or more tools. In the second part, five ligand-based virtual screening tools were evaluated and their performance was found to be generally poor. Three reasons for this were proposed: false negatives in the benchmark sets, active molecules that do not share the binding mode, and activity cliffs. In the third part of the study, a novel visualization and quantification method is presented for evaluating the scaffold-hopping ability of virtual screening tools.
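The link between a raw similarity score and the probability of biological similarity can be thought of as a calibration problem. The sketch below fits a simple logistic model to labelled molecule pairs and combines the scores of two tools in one model, which also acts as an elementary form of data fusion; it is a hypothetical illustration, not the model developed in the thesis, and the scores and labels are made up.

    # Hypothetical sketch: calibrate chemical similarity scores into probabilities
    # of biological similarity, fusing two screening tools in one logistic model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Similarity scores from two tools for known molecule pairs, labelled 1 if
    # the pair is biologically similar (e.g. shares a target), else 0.
    scores = np.array([[0.91, 0.72], [0.35, 0.20], [0.80, 0.65], [0.15, 0.40],
                       [0.60, 0.55], [0.25, 0.10], [0.95, 0.88], [0.45, 0.30]])
    labels = np.array([1, 0, 1, 0, 1, 0, 1, 0])

    # P(biologically similar | scores), estimated directly from both scores.
    model = LogisticRegression().fit(scores, labels)

    candidate_pairs = np.array([[0.85, 0.70], [0.30, 0.25]])
    probs = model.predict_proba(candidate_pairs)[:, 1]
    for pair, p in zip(candidate_pairs, probs):
        print(f"scores={pair} -> P(biologically similar) = {p:.2f}")

In practice such a calibration would be fitted on benchmark data with known activities, and the resulting probabilities used to decide objectively which virtual screening hits to test experimentally.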
Abstract:
Introduction: Biomedical scientists need to choose among hundreds of publicly available bioinformatics applications, tools, and databases. Librarian challenges include raising awareness of valuable resources, as well as providing support in finding and evaluating specific resources. Our objective is to implement an education program in bioinformatics similar to those offered in other North American academic libraries. Description: Our initial target clientele included four research departments of the Faculty of Medicine at Université de Montréal. In January 2010, I attended two departmental meetings and interviewed a few stakeholders in order to propose a basic bioinformatics service: one-to-one consultations and a workshop on NCBI databases. The response was favourable. The workshop was thus offered once a month during the Winter and Fall semesters, and participants were invited to evaluate the workshop via an online survey. In addition, a bioinformatics subject guide was launched on the library's website in December 2010. Outcomes: One hundred and two participants attended one of the nine NCBI workshops offered in 2010; most were graduate students (74%). The survey's response rate was 54%. A majority of respondents thought that the bioinformatics resources featured in the workshop were relevant (95%) and that the difficulty level of the exercises was appropriate (84%). Respondents also thought that their future information searches would be more efficient (93%) and that the workshop should be integrated into a course (78%). Furthermore, five bioinformatics-related reference questions were answered and two one-to-one consultations with students were carried out. Discussion: The success of our bioinformatics service is growing. Future directions include extending the service to other biomedical departments, integrating the workshop into an undergraduate course, promoting the subject guide to other francophone universities, and creating a bioinformatics blog that would feature specific databases, news, and library resources.
Abstract:
In early stages of architectural design, as in other design domains, the language used is often very abstract. In architectural design, for example, architects and their clients use experiential terms such as "private" or "open" to describe spaces. If we are to build programs that can help designers during this early-stage design, we must give those programs the capability to deal with concepts on the level of such abstractions. The work reported in this thesis sought to do that, focusing on two key questions: How are abstract terms such as "private" and "open" translated into physical form? How might one build a tool to assist designers with this process? The Architect's Collaborator (TAC) was built to explore these issues. It is a design assistant that supports iterative design refinement, and that represents and reasons about how experiential qualities are manifested in physical form. Given a starting design and a set of design goals, TAC explores the space of possible designs in search of solutions that satisfy the goals. It employs a strategy we've called dependency-directed redesign: it evaluates a design with respect to a set of goals, then uses an explanation of the evaluation to guide proposal and refinement of repair suggestions; it then carries out the repair suggestions to create new designs. A series of experiments was run to study TAC's behavior. Issues of control structure, goal set size, goal order, and modification operator capabilities were explored. In addition, TAC's use as a design assistant was studied in an experiment using a house in the process of being redesigned. TAC's use as an analysis tool was studied in an experiment using Frank Lloyd Wright's Prairie houses.
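Dependency-directed redesign is, at its core, a generate-evaluate-repair loop: evaluate the design against the goals, use the failed goals as the explanation that directs which repairs to propose, apply them, and iterate. The sketch below shows only that control structure; the room model, the "open" and "bright" goal predicates, and the repair operators are hypothetical stand-ins, not TAC's actual representations.

    # Hypothetical sketch of a dependency-directed redesign loop: evaluate a
    # design against goals, let the failed goals direct the repairs, iterate.
    from dataclasses import dataclass

    @dataclass
    class Room:
        name: str
        area: float          # floor area in square metres
        window_area: float   # glazing area in square metres

    def goal_open(room):     # an "open" space: generous floor area
        return room.area >= 20.0

    def goal_bright(room):   # a "bright" space: enough glazing relative to floor area
        return room.window_area >= 0.15 * room.area

    GOALS = {"open": goal_open, "bright": goal_bright}

    # Repair operators keyed by the goal whose failure suggests them.
    REPAIRS = {
        "open":   lambda r: Room(r.name, r.area + 4.0, r.window_area),
        "bright": lambda r: Room(r.name, r.area, r.window_area + 1.0),
    }

    def redesign(room, max_iterations=10):
        for _ in range(max_iterations):
            failed = [name for name, goal in GOALS.items() if not goal(room)]
            if not failed:
                return room                  # all goals satisfied
            # The evaluation's explanation (failed goals) directs the repairs.
            for name in failed:
                room = REPAIRS[name](room)
        return room

    print(redesign(Room("study", area=14.0, window_area=1.0)))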
Abstract:
The objective of this study was to build general models of the mechanics of tree felling with a chainsaw and to compare the felling torque produced by different tools. The theoretical models are completed and validated with a comparative study covering a large number of felling tools, some of which are used with different methods. Felling torque was measured using a naturalistic measuring arrangement in which a tree is cut at about 3.7 m height and then anchored with a dynamometer to a tree opposite the felling direction. The notch and felling cut were made as usual, except that the hinge was made extra thin to reduce bending resistance. The tree was consequently not felled during the trials, so several combinations of felling tools and operators could be tested on the same tree. The results show large differences between tools, methods, and persons. The differences were, however, not general, but varied with conditions, above all tree diameter. Tools and methods that push or pull on the stem are little affected by the size of the tree, while tools that press on the stump depend heavily on a large stump diameter. Hand force exerted on a simple pole is consequently a powerful tool on small trees. For trees of medium size there are several alternative methods using different sizes and brands of felling levers and wedges. Larger and more unwieldy tools and methods, such as the tree pusher, winch, etc., develop very high felling torque on all tree sizes. On large trees, the felling wedge, and especially the use of several wedges together, also develops very high felling torque.
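The quantity being compared, felling torque about the hinge, follows from elementary lever mechanics: torque equals the applied force times its lever arm. The sketch below illustrates, under deliberately simplified assumptions, why stem-pushing tools are largely insensitive to tree size while stump-pressing tools depend on stump diameter; all forces and dimensions are illustrative, not measurements from the study.

    # Illustrative lever-arm comparison (not the study's validated model):
    # torque about the hinge = applied force * effective lever arm.

    def torque_stem_push(force_n, height_m):
        """Pole, tree pusher or winch acting on the stem at a given height."""
        return force_n * height_m

    def torque_stump_press(lifting_force_n, stump_diameter_m):
        """Wedge or lever pressing upward at the stump; the lever arm is
        bounded roughly by the distance across the stump from the hinge."""
        return lifting_force_n * stump_diameter_m

    # Hand force of ~300 N on a 4 m pole vs. an assumed ~20 kN wedge lift
    # on a small and a large stump.
    print("pole, any tree size:", torque_stem_push(300, 4.0), "Nm")
    print("wedge, 0.2 m stump: ", torque_stump_press(20_000, 0.2), "Nm")
    print("wedge, 0.6 m stump: ", torque_stump_press(20_000, 0.6), "Nm")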
Abstract:
As neuroscience gains social traction and entices media attention, the notion that education has much to benefit from brain research becomes increasingly popular. However, it has been argued that the fundamental bridge toward education is cognitive psychology, not neuroscience. We discuss four specific cases in which neuroscience synergizes with other disciplines to serve education, ranging from very general physiological aspects of human learning such as nutrition, exercise and sleep, to brain architectures that shape the way we acquire language and reading, and neuroscience tools that increasingly allow the early detection of cognitive deficits, especially in preverbal infants. Neuroscience methods, tools and theoretical frameworks have broadened our understanding of the mind in a way that is highly relevant to educational practice. Although the bridge’s cement is still fresh, we argue why it is prime time to march over it.
Abstract:
The Swiss Consultant Trust Fund (CTF) support covered the period from July to December 2007 and comprised four main tasks: (1) analysis of historic land degradation trends in the four watersheds of Zerafshan, Surkhob, Toirsu, and Vanj; (2) translation of standard CDE GIS training materials into Russian and Tajik to enable local government staff and other specialists to use geospatial data and tools; (3) demonstration of geospatial tools that show land degradation trends associated with land use and vegetative cover data in the project areas; and (4) preliminary training of government staff in using appropriate data, including existing information, global datasets, inexpensive satellite imagery and other datasets, and web-based visualization tools such as spatial data viewers. The project built local awareness of, and skills in, up-to-date, inexpensive, easy-to-use GIS technologies, data sources, and applications relevant to natural resource management and especially to sustainable land management. In addition to supporting the implementation of the World Bank technical assistance activity to build capacity in the use of geospatial tools for natural resource management, the Swiss CTF support also aimed at complementing the Bank's supervision work on the ongoing Community Agriculture and Watershed Management Project (CAWMP).