978 results for software framework
Abstract:
The corporate world is becoming more and more competitive. This leads organisations to adapt to this reality by adopting more efficient processes, which result in a decrease in cost as well as an increase in product quality. One of these processes consists in making proposals to clients, which necessarily include a cost estimation of the project. This estimation is the main focus of this project. In particular, one of the goals is to evaluate which estimation models best fit the Altran Portugal software factory, the organisation where the fieldwork of this thesis will be carried out. There is no broad agreement about which type of estimation model is most suitable for software projects. In contexts where plenty of objective information is available to be used as input to an estimation model, model-based methods usually yield better results than expert judgment. However, what happens more frequently is not having this volume and quality of information, which has a negative impact on the performance of model-based methods, favouring the use of expert judgment. In practice, most organisations use expert judgment, making themselves dependent on the expert. A common problem is that the accuracy of an expert's estimate depends on their previous experience with identical projects. This means that when new types of projects arrive, the estimates will have unpredictable accuracy. Moreover, different experts will make different estimates, based on their individual experience. As a result, the company will not directly attain continuously growing knowledge about how estimates should be carried out. Estimation models depend on the input information collected from previous projects, the size of the project database and the resources available. Altran currently does not store the input information from previous projects in a systematic way. It has a small project database and a team of experts. Our work is targeted at companies that operate in similar contexts. We start by gathering information from the organisation in order to identify which estimation approaches can be applied considering the organisation's context. A gap analysis is used to understand what type of information the company would have to collect so that other approaches would become available. Based on our assessment, in our opinion, expert judgment is the most adequate approach for Altran Portugal in the current context. We analysed past development and evolution projects from Altran Portugal and assessed their estimates. This resulted in the identification of common estimation deviations, errors, and patterns, which led to the proposal of metrics to help estimators produce estimates leveraging past projects' quantitative and qualitative information in a convenient way. This dissertation aims to contribute to more realistic estimates by identifying shortcomings in the current estimation process and supporting the self-improvement of the process, by gathering as much relevant information as possible from each finished project.
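As an illustration of the kind of measures such an assessment of past estimates can build on, the sketch below computes two standard estimation-accuracy metrics, MMRE and PRED(25), over a hypothetical past-project database. The abstract does not name the thesis's actual metrics, so these standard ones are assumptions for illustration only.

```python
# Illustrative only: standard estimation-accuracy metrics computed over
# (actual, estimate) effort pairs from finished projects.

def mre(actual: float, estimate: float) -> float:
    """Magnitude of Relative Error of a single project estimate."""
    return abs(actual - estimate) / actual

def mmre(projects: list[tuple[float, float]]) -> float:
    """Mean MRE over all past projects."""
    return sum(mre(a, e) for a, e in projects) / len(projects)

def pred(projects: list[tuple[float, float]], level: float = 0.25) -> float:
    """Fraction of estimates whose MRE falls within the given level."""
    return sum(mre(a, e) <= level for a, e in projects) / len(projects)

past = [(120, 100), (80, 95), (200, 180)]  # hypothetical effort data
print(f"MMRE={mmre(past):.2f}  PRED(25)={pred(past):.2f}")
```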
Abstract:
Primary ciliary dyskinesia (PCD) results from ciliary dysfunction in humans and is associated with a very diverse set of symptoms. It is a rare respiratory disease characterized by respiratory infections, situs inversus, infertility and hydrocephalus. In Portugal there is no diagnostic centre for the disease, but the intention to create one has emerged, following the approach of the PCD diagnostic centres used in other countries. The diagnosis consists of collecting samples of cilia from the nose, using the nasal brushing method, and recording the beating of the ciliated cells with a high-speed camera coupled to a microscope with high-resolution objectives. PCD can be studied by analysing the physical behaviour of the cilia and, to support this, an executable program was developed in C# to analyse these samples. After the user selects a region of interest (ROI) in the image sequence, the program detects the ciliary beat frequencies, presenting a list with the percentages of the frequencies obtained, and creates a frequency map of the ROI. The tool also allows measuring the length of a cilium and studying its movement, something that has not yet been addressed by other programs. The code developed will thus make it possible to obtain a PCD diagnosis in Portugal that is fast and, in some cases, performs better than the visual inspection used in other diagnostic centres.
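A minimal sketch of the frequency-analysis step described above, assuming the ROI is available as a grayscale frame stack. The actual tool is written in C#, so this Python version only illustrates an FFT-based approach to beat-frequency mapping, not the thesis's implementation.

```python
# Estimate the dominant ciliary beat frequency of each ROI pixel by
# taking the FFT of its intensity over time (illustrative sketch).
import numpy as np

def beat_frequency_map(frames: np.ndarray, fps: float) -> np.ndarray:
    """frames: (T, H, W) grayscale ROI stack; returns (H, W) map in Hz."""
    signal = frames - frames.mean(axis=0)           # remove DC component
    spectrum = np.abs(np.fft.rfft(signal, axis=0))  # per-pixel spectrum
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    return freqs[np.argmax(spectrum, axis=0)]       # dominant frequency

# Hypothetical 500 fps recording of a 64x64 ROI:
roi = np.random.rand(256, 64, 64)
fmap = beat_frequency_map(roi, fps=500.0)
hist, edges = np.histogram(fmap, bins=10)
print(hist / fmap.size)  # percentage distribution of beat frequencies
```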
Abstract:
Cloud computing has been one of the most important topics in Information Technology, aiming to assure scalable and reliable on-demand services over the Internet. Expanding the application scope of cloud services requires cooperation between clouds from different providers that have heterogeneous functionalities. This collaboration between different cloud vendors can provide better Quality of Service (QoS) at a lower price. However, current cloud systems have been developed without concern for seamless cloud interconnection, and they do not support inter-cloud interoperability to enable collaboration between cloud service providers. Hence, this PhD work addresses the interoperability issue between cloud providers as a challenging research objective. This thesis proposes a new framework that supports inter-cloud interoperability in a heterogeneous computing-resource cloud environment, with the goal of dispatching the workload to the most effective clouds available at runtime. Analysing different methodologies that have been applied to resolve various problem scenarios related to interoperability led us to exploit Model Driven Architecture (MDA) and Service Oriented Architecture (SOA) methods as appropriate approaches for our inter-cloud framework. Moreover, since distributing operations in a cloud-based environment is a nondeterministic polynomial time (NP-complete) problem, a Genetic Algorithm (GA) based job scheduler is proposed as part of the interoperability framework, offering workload migration with the best performance at the least cost. A new Agent Based Simulation (ABS) approach is proposed to model the inter-cloud environment with three types of agents: Cloud Subscriber agent, Cloud Provider agent, and Job agent. The ABS model is used to evaluate the proposed framework.
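To make the scheduling idea concrete, here is a toy GA sketch that assigns jobs to clouds while minimising total cost. The chromosome encoding, operators and rates are illustrative assumptions, not the thesis's actual scheduler.

```python
# Toy genetic algorithm: a chromosome maps each job to a cloud, and
# fitness penalises the total dispatch cost (lower cost = fitter).
import random

JOBS, CLOUDS = 8, 3
cost = [[random.uniform(1, 10) for _ in range(CLOUDS)] for _ in range(JOBS)]

def fitness(chrom):
    return -sum(cost[j][c] for j, c in enumerate(chrom))

def evolve(pop_size=30, generations=100):
    pop = [[random.randrange(CLOUDS) for _ in range(JOBS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, JOBS)        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:              # mutation
                child[random.randrange(JOBS)] = random.randrange(CLOUDS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print("best job-to-cloud assignment:", evolve())
```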
Abstract:
The Intel® Xeon Phi™ is the first processor based on Intel's MIC (Many Integrated Cores) architecture. It is a co-processor specially tailored for data-parallel computations, whose basic architectural design is similar to that of GPUs (Graphics Processing Units), leveraging the use of many integrated cores of low computational power to perform parallel computations. The main novelty of the MIC architecture, relative to GPUs, is its compatibility with the Intel x86 architecture. This enables the use of many of the tools commonly available for the parallel programming of x86-based architectures, which may lead to a shallower learning curve. However, programming the Xeon Phi still entails aspects intrinsic to accelerator-based computing in general, and to the MIC architecture in particular. In this thesis we advocate the use of algorithmic skeletons for programming the Xeon Phi. Algorithmic skeletons abstract the complexity inherent to parallel programming, hiding details such as resource management, parallel decomposition, and inter-execution-flow communication, thus removing these concerns from the programmer's mind. In this context, the goal of the thesis is to lay the foundations for the development of a simple but powerful and efficient skeleton framework for the programming of the Xeon Phi processor. For this purpose we build upon Marrow, an existing framework for the orchestration of OpenCL™ computations in multi-GPU and CPU environments. We extend Marrow to execute both OpenCL and C++ parallel computations on the Xeon Phi. We evaluate the newly developed framework using several well-known benchmarks, such as Saxpy and N-Body, comparing its performance not only to that of the existing framework when executing on the co-processor, but also on the Xeon Phi versus a multi-GPU environment.
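To illustrate what a skeleton hides from the programmer, here is a toy map skeleton in Python running the Saxpy kernel over data partitions in parallel. Marrow's real API is C++/OpenCL and is not reproduced here; this sketch only conveys the abstraction.

```python
# A toy map skeleton: the user supplies a kernel; decomposition,
# scheduling and result assembly are handled by the skeleton.
from concurrent.futures import ThreadPoolExecutor

class MapSkeleton:
    def __init__(self, fn, workers=4):
        self.fn, self.workers = fn, workers

    def __call__(self, *arrays):
        chunks = zip(*(self._split(a) for a in arrays))
        with ThreadPoolExecutor(self.workers) as pool:
            parts = pool.map(lambda args: self.fn(*args), chunks)
        return [x for part in parts for x in part]

    def _split(self, a):
        n = max(1, len(a) // self.workers)
        return [a[i:i + n] for i in range(0, len(a), n)]

def saxpy(xs, ys, a=2.0):                  # the Saxpy benchmark kernel
    return [a * x + y for x, y in zip(xs, ys)]

skel = MapSkeleton(saxpy)
print(skel(list(range(8)), list(range(8))))
```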
Abstract:
In this thesis, a multi-user online platform was developed whose main goal is to compare image analysis algorithms in order to determine their degree of effectiveness. One application example is the comparison of retinal image analysis algorithms for drusen detection. The comparison takes one of the algorithms as the reference standard, against which the remaining ones are evaluated. The platform works much like a forum, where a user can create topics by publishing images and their description. Once a topic is created, any user can view it and is given the chance to comment on it or to add images processed with their own analysis algorithms. As the number of processed images grows, a database of image analysis algorithms is obtained on which their effectiveness can be evaluated. The platform also aims to create communities where users can interact with one another by commenting on the topics, thereby contributing to the improvement of the algorithms. In this way, besides a database that any user can use, a source of information made available by other professionals in the field is obtained.
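A hedged sketch of how a candidate algorithm could be scored against the reference standard, assuming detections are represented as sets of pixel coordinates; the platform's actual representation and scoring are not given in the abstract.

```python
# Compare a candidate algorithm's detections against the reference
# standard using simple set overlap (illustrative representation).
def compare_to_reference(reference: set, candidate: set) -> dict:
    tp = len(reference & candidate)   # detections agreeing with reference
    fp = len(candidate - reference)   # spurious detections
    fn = len(reference - candidate)   # missed detections
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
    }

ref = {(1, 1), (1, 2), (3, 4)}        # hypothetical drusen pixels
alg = {(1, 1), (3, 4), (5, 5)}
print(compare_to_reference(ref, alg))
```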
Abstract:
Contains an audio CD as an attachment.
Abstract:
Nowadays, the consumption of goods and services on the Internet is constantly increasing. Small and Medium Enterprises (SMEs), mostly from traditional industry sectors, usually do business in weak and fragile market sectors, where customized products and services prevail. To survive and compete in today's markets they have to readjust their business strategies by creating new manufacturing processes and establishing new business networks through new technological approaches. In order to compete with big enterprises, these partnerships aim at sharing resources, knowledge and strategies to boost the sector's business consolidation through the creation of dynamic manufacturing networks. To meet this demand, the development of a centralized information system is proposed, which allows enterprises to select and create dynamic manufacturing networks capable of monitoring the entire manufacturing process, including the assembly, packaging and distribution phases. Even networking partners that come from the same area have multiple, heterogeneous representations of the same knowledge, denoting their own view of the domain. Thus, conceptually, semantically and, consequently, lexically diverse knowledge representations may occur in the network, causing non-transparent sharing of information and interoperability inconsistencies. A framework, supported by a tool that flexibly enables the identification, classification and resolution of such semantic heterogeneities, is therefore required. This tool will support the network in establishing semantic mappings, facilitating the integration of the various enterprises' information systems.
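As a toy illustration of the mapping problem, the sketch below pairs terms from two partners' vocabularies by lexical similarity alone; real semantic matching, as the thesis argues, must also handle conceptual and structural heterogeneity. The vocabularies are hypothetical.

```python
# Pair heterogeneous terms from two partners' vocabularies by lexical
# similarity (a deliberately naive stand-in for semantic mapping).
from difflib import SequenceMatcher

def best_mappings(source_terms, target_terms, threshold=0.6):
    mappings = []
    for s in source_terms:
        score, match = max(
            (SequenceMatcher(None, s.lower(), t.lower()).ratio(), t)
            for t in target_terms)
        if score >= threshold:
            mappings.append((s, match, round(score, 2)))
    return mappings

partner_a = ["ClientOrder", "PackagingUnit", "DeliveryDate"]
partner_b = ["customer_order", "package_unit", "shipment_date"]
print(best_mappings(partner_a, partner_b))
```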
Abstract:
As the complexity of markets and the dynamicity of systems evolve, the need for interoperable systems capable of strengthening enterprise communication effectiveness increases. This is particularly significant when it comes to collaborative enterprise networks, like manufacturing supply chains, where several companies work, communicate, and depend on each other in order to achieve a specific goal. Once interoperability is achieved, that is, once all network parties are able to communicate with and understand each other, organisations are able to exchange information in a stable environment that follows agreed laws. However, as markets adapt to new requirements and demands, an evolutionary behaviour is triggered, giving rise to interoperability problems, thus disrupting the sustainability of interoperability and raising the need to develop monitoring activities capable of detecting and preventing unexpected behaviour. This work seeks to contribute to the development of monitoring techniques for interoperable SOA-based enterprise networks. It focuses on the automatic detection of harmonisation-breaking events during real-time communications, and strives to develop and propose a methodological approach to handle these disruptions with minimal or no human intervention, hence providing existing service-based networks with the ability to detect and promptly react to interoperability issues.
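A minimal sketch of the detection idea, assuming messages are records checked against the network's agreed schema; the field names and the shape of the "agreed laws" here are illustrative assumptions, not the thesis's actual mechanism.

```python
# Flag any deviation of an incoming message from the agreed schema as
# a harmonisation-breaking event (illustrative sketch).
AGREED_SCHEMA = {"order_id": int, "quantity": int, "due_date": str}

def check_message(msg: dict) -> list[str]:
    events = []
    for field, ftype in AGREED_SCHEMA.items():
        if field not in msg:
            events.append(f"missing field: {field}")
        elif not isinstance(msg[field], ftype):
            events.append(f"type mismatch on {field}")
    for field in msg.keys() - AGREED_SCHEMA.keys():
        events.append(f"unexpected field: {field}")
    return events  # non-empty => harmonisation-breaking event detected

print(check_message({"order_id": 42, "quantity": "ten", "ship_to": "PT"}))
```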
Abstract:
Digital Businesses have become a major driver of economic growth and have seen an explosion of new startups. At the same time, the field also includes mature enterprises that have become global giants in a relatively short period of time. Digital Businesses have unique characteristics that make running and managing them much different from traditional offline businesses. Digital businesses respond to online users who are highly interconnected and networked. This enables a rapid flow of word of mouth, at a pace far greater than ever envisioned for traditional products and services. The relatively low cost of adding an incremental user has led to a variety of innovations in the pricing of digital products, including various forms of free and freemium pricing models. This thesis explores the unique characteristics and complexities of Digital Businesses and their implications for the design of Digital Business Models and Revenue Models. The thesis proposes an Agent Based Modeling Framework that can be used to develop simulation models that capture the complex dynamics of Digital Businesses and the interactions between users of a digital product. Such simulation models can be used for a variety of purposes, such as simple forecasting, analysing the impact of market disturbances, analysing the impact of changes in pricing models, and optimising pricing for maximum revenue generation or for a balance between growth in usage and revenue generation. These models can be developed for a mature enterprise with a large historical record of user growth as well as for early-stage enterprises without much historical data. Through three case studies, the thesis demonstrates the applicability of the Framework and its potential applications.
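A toy agent-based sketch of the core dynamic such a framework models: word-of-mouth adoption among networked users plus freemium conversion. All rates and parameters are hypothetical, not taken from the thesis's case studies.

```python
# Simulate word-of-mouth user growth and freemium revenue over weekly
# steps (illustrative agent-based-style sketch).
import random

def simulate(population=10_000, steps=52, contacts=3,
             adopt_p=0.02, convert_p=0.01, price=5.0):
    adopters, payers, revenue = 10, 0, 0.0
    for _ in range(steps):
        exposed = min(population - adopters, adopters * contacts)
        adopters += sum(random.random() < adopt_p
                        for _ in range(exposed))     # word of mouth
        payers += sum(random.random() < convert_p
                      for _ in range(adopters - payers))  # conversions
        revenue += payers * price                    # freemium revenue
    return adopters, payers, revenue

users, paying, rev = simulate()
print(f"users={users} paying={paying} revenue={rev:.0f}")
```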
Abstract:
The lives of humans and most living beings depend on sensation and perception for the best assessment of the surrounding world. Sensory organs acquire a variety of stimuli that are interpreted and integrated in our brain for immediate use or stored in memory for later recall. Among the reasoning aspects, a person has to decide what to do with available information. Emotions are classifiers of collected information, assigning a personal meaning to objects, events and individuals, and forming part of our own identity. Emotions play a decisive role in cognitive processes such as reasoning, decision-making and memory by assigning relevance to collected information. Access to pervasive computing devices, empowered by the ability to sense and perceive the world, provides new forms of acquiring and integrating information. But prior to assessing data for its usefulness, systems must capture it and ensure that it is properly managed for diverse possible goals. Portable and wearable devices are now able to gather and store information, from the environment and from our body, using cloud-based services and Internet connections. Systems' limitations in handling sensorial data, compared with our own sensorial capabilities, constitute an identified problem. Another problem is the lack of interoperability between humans and devices, as devices do not properly understand humans' emotional states and needs. Addressing those problems is the motivation for the present research work. The mission hereby assumed is to feed sensorial and physiological data into a Framework that manages collected data in the manner of human cognitive functions, supported by a new data model. By learning from selected human functional and behavioural models and reasoning over collected data, the Framework aims to evaluate a person's emotional state, empowering human-centric applications, along with the capability of storing episodic information about a person's life, with physiological indicators of emotional states, to be used by new-generation applications.
Abstract:
Introduction. The genera Enterococcus, Staphylococcus and Streptococcus are recognized as important Gram-positive human pathogens. The aim of this study was to evaluate the performance of Vitek 2 in identifying Gram-positive cocci and determining their antimicrobial susceptibilities. Methods. One hundred four isolates were analyzed to determine the accuracy of the automated system in identifying the bacteria and their susceptibility to oxacillin and vancomycin. Results. The system correctly identified 77.9% and 97.1% of the isolates at the species and genus levels, respectively. Additionally, 81.8% of the Vitek 2 results agreed with the known antimicrobial susceptibility profiles. Conclusion. Vitek 2 correctly identified the commonly isolated strains; however, the limitations of the method may lead to ambiguous findings.
Abstract:
Software Product Line (SPL) engineering aims at the efficient development of software products in a specific domain. New products are obtained via a process which entails creating a new configuration specifying the desired product's features. This configuration must conform to a variability model, which describes the scope of the SPL, or else it is not viable. To ensure this, configuration tools are used that do not allow invalid configurations to be expressed. A different concern, however, is making sure that a product addresses the stakeholders' needs as well as possible. The stakeholders may not be experts in the domain, so they may have unrealistic expectations. Also, the scope of the SPL is determined not only by the domain but also by limitations of the development platforms. It is therefore possible that the desired set of features goes beyond what can currently be created with the SPL. This means that configuration tools should provide support not only for creating valid products, but also for improving the satisfaction of user concerns. We address this goal by providing a user-centric configuration process that offers suggestions during configuration, based on the use of soft constraints, and that identifies and explains potential conflicts that may arise. Suggestions help mitigate stakeholder uncertainty and poor domain knowledge by helping stakeholders address well-known and desirable domain-related concerns. On the other hand, automated conflict identification and explanation helps the stakeholders understand the trade-offs required for realizing their vision, allowing informed resolution of conflicts. Additionally, we propose a prototype-based approach to configuration, which addresses order-dependency issues by allowing the complete (or partial) specification of the features in a single step. A subsequent resolution process then identifies possible repairs, or trade-offs, that may be required to make the configuration viable.
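To make the soft-constraint idea concrete, here is a small sketch in which hard constraints rule configurations out while soft constraints merely score them, letting a configurator rank valid configurations and suggest improvements. The feature names, rule, and weights are invented for illustration.

```python
# Hard constraints determine validity; soft constraints only score,
# so valid configurations can be ranked by stakeholder concerns.
HARD = [lambda f: "Encryption" in f or "OfflineMode" not in f]
SOFT = [("Encryption", 3), ("Caching", 2), ("DarkTheme", 1)]

def valid(features: set) -> bool:
    return all(rule(features) for rule in HARD)

def score(features: set) -> int:
    return sum(w for name, w in SOFT if name in features)

candidates = [{"OfflineMode", "Encryption"}, {"Caching"},
              {"OfflineMode"}]                  # last one is invalid
ranked = sorted((c for c in candidates if valid(c)),
                key=score, reverse=True)
print(ranked)  # valid configurations, best-scoring first
```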
Abstract:
Information systems are widespread and used by anyone with a computing device, as well as by corporations and governments. It is often the case that security leaks are introduced during the development of an application. The reasons for these security bugs are multiple, but among them one can easily identify that it is very hard to define and enforce relevant security policies in modern software. This is because modern applications often rely on container sharing and multi-tenancy where, for instance, data can be stored in the same physical space but is logically mapped into different security compartments or data structures. In turn, these security compartments, into which data is classified by security policies, can also be dynamic and depend on runtime data. In this thesis we introduce and develop the novel notion of dependent information flow types, and focus on the problem of ensuring data confidentiality in data-centric software. Dependent information flow types fit within the standard framework of dependent type theory but, unlike usual dependent types, crucially allow the security level of a type, rather than just the structural data type itself, to depend on runtime values. Our dependent function and dependent sum information flow types provide a direct, natural and elegant way to express and enforce fine-grained security policies on programs, namely programs that manipulate structured data types in which the security level of a structure field may depend on values dynamically stored in other fields. The main contribution of this work is an efficient analysis that allows programmers to verify, during the development phase, whether programs have information leaks, that is, whether programs protect the confidentiality of the information they manipulate. As such, we also implemented a prototype typechecker that can be found at http://ctp.di.fct.unl.pt/DIFTprototype/.
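The following Python sketch is only a dynamic approximation of the idea, for intuition: the security level attached to a record depends on a runtime value (here, a user id), and reads are checked against a clearance. The thesis's contribution is a static dependent type system verified at development time, not runtime checks, and the names below are hypothetical.

```python
# Dynamic approximation: a value's security level depends on runtime
# data, and flows to lower clearances are rejected.
class LabeledValue:
    def __init__(self, value, level):
        self.value, self.level = value, level   # level depends on data

def read(record: LabeledValue, clearance: str):
    order = ["public", "user", "admin"]
    if order.index(clearance) < order.index(record.level):
        raise PermissionError("information-flow violation")
    return record.value

def profile(uid: str) -> LabeledValue:
    # the security level depends on the runtime value of uid
    return LabeledValue({"uid": uid}, "admin" if uid == "root" else "user")

print(read(profile("alice"), clearance="user"))    # allowed
try:
    print(read(profile("root"), clearance="user")) # rejected
except PermissionError as e:
    print("blocked:", e)
```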