913 results for automated warehouse
Abstract:
The refinement calculus is a well-established theory for deriving program code from specifications. Recent research has extended the theory to handle timing requirements as well as functional ones, and we have developed an interactive programming tool based on these extensions. Through a number of case studies completed with the tool, this paper explains how it helps the programmer by supporting the many forms of variables needed in the theory: simple state variables as in the untimed calculus, trace variables that model the evolution of properties over time, auxiliary variables that exist only to support formal reasoning, subroutine parameters, and variables shared between parallel processes.
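For readers new to the calculus, the following is a textbook-style example of a single untimed refinement step in Morgan-style notation; this law is standard material and is not drawn from the paper itself, whose timed extensions layer trace and auxiliary variables on top of such steps.

```latex
% Assignment introduction: the specification statement x:[pre, post]
% (frame x, precondition, postcondition, with x_0 the initial value of x)
% is refined by an assignment that establishes the postcondition.
x : [\, \mathit{true},\ x = x_0 + 1 \,] \;\sqsubseteq\; x := x + 1
```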
Abstract:
Many organisations need to extract useful information from huge amounts of movement data. One example is found in maritime transportation, where the automated identification of a diverse range of traffic routes is a key management issue for improving the maintenance of ports and ocean routes and for speeding up ship traffic. As a first stage, this paper addresses the research challenge of developing an approach for the automated identification of traffic routes based on clustering motion vectors rather than reconstructed trajectories. The immediate benefit of the proposed approach is to avoid reconstructing trajectories in terms of the geometric shape of the path, the position in space, the life span, and changes of speed, direction, and other attributes over time. An adapted version of the Shared Nearest Neighbour algorithm is used to cluster the moving objects: the motion vectors, each with a position and a direction, are analysed to identify clusters of vectors moving in the same direction. These clusters represent traffic routes, and preliminary results are promising for the automated identification of traffic routes with different shapes and densities, as well as for handling noisy data.
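A minimal sketch of shared-nearest-neighbour clustering over motion vectors, under assumed details: the paper uses its own adapted SNN variant, and the parameters k and min_shared, the feature layout, and the noise handling here are illustrative only.

```python
# Sketch: SNN clustering of motion vectors (x, y, direction in radians).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def snn_clusters(vectors, k=10, min_shared=4):
    """vectors: (n, 3) array of x, y, direction; returns a label per vector."""
    # Embed direction on the unit circle so 359 degrees and 1 degree are
    # close; in practice positions should be scaled to a comparable range.
    feats = np.column_stack([
        vectors[:, 0], vectors[:, 1],
        np.cos(vectors[:, 2]), np.sin(vectors[:, 2]),
    ])
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(feats).kneighbors(feats)
    neigh = [set(row[1:]) for row in idx]        # drop each point's self-entry

    labels = np.full(len(feats), -1)
    cluster = 0
    for i in range(len(feats)):
        if labels[i] != -1:
            continue
        labels[i] = cluster
        stack = [i]
        while stack:                             # grow the cluster over SNN links
            p = stack.pop()
            for q in neigh[p]:
                # Link p and q only if they share enough nearest neighbours.
                if labels[q] == -1 and len(neigh[p] & neigh[q]) >= min_shared:
                    labels[q] = cluster
                    stack.append(q)
        cluster += 1                             # singleton clusters ~ noise
    return labels
```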
Abstract:
Background: Regulating mechanisms of branching morphogenesis of fetal lung rat explants have been an essential tool for molecular research. This work presents a new methodology to accurately quantify the epithelium, the outer contour, and the peripheral airway buds of lung explants during cellular development from microscopic images. Methods: The outer contour was defined using an adaptive, multi-scale threshold algorithm whose level was automatically calculated based on an entropy maximization criterion. The inner lung epithelium was defined by a clustering procedure that groups small image regions according to the minimum description length principle and local statistical properties. Finally, the number of peripheral buds was counted as the branched ends of the skeleton in a skeletonized image of the inner lung epithelium. Results: The time for lung branching morphometric analysis was reduced by 98% compared with the manual method. The best results were obtained in the first two days of cellular development, with smaller standard deviations. Non-significant differences were found between the automatic and manual results on all culture days. Conclusions: The proposed method introduces a series of advantages related to its intuitive use and accuracy, making the technique suitable for images with different lighting characteristics and allowing a reliable comparison between different researchers.
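A compact sketch of the two image steps just described, with assumed details: an entropy-maximisation (Kapur-style) threshold for the contour and bud counting as skeleton branch ends. The abstract does not give the authors' exact formulation, so this is illustrative.

```python
# Entropy-maximisation threshold and skeleton-endpoint bud counting.
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def entropy_threshold(gray):
    """Pick the level t that maximises background + foreground entropy."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))   # 8-bit image
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        pb, pf = p[:t].sum(), p[t:].sum()
        if pb == 0 or pf == 0:
            continue
        b, f = p[:t] / pb, p[t:] / pf
        h = (-np.sum(b[b > 0] * np.log(b[b > 0]))
             - np.sum(f[f > 0] * np.log(f[f > 0])))
        if h > best_h:
            best_t, best_h = t, h
    return best_t

def count_buds(mask):
    """Count peripheral buds as skeleton pixels with exactly one neighbour."""
    skel = skeletonize(mask.astype(bool))
    neighbours = convolve(skel.astype(int), np.ones((3, 3), int),
                          mode='constant') - skel
    return int(np.sum(skel & (neighbours == 1)))
```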
Abstract:
With the purpose of lowering costs and making the requested information available to users without internet access, service companies have adopted automated interaction technologies in their call centers, which may or may not meet users' expectations. Based on different areas of knowledge (human-machine interaction, consumer behavior, and use of IT), 13 propositions are raised and research is carried out in three parts: a focus group, a field study with users, and interviews with experts. Eleven characteristics of automated service that help explain user satisfaction are listed, a preference model is proposed, and evidence for or against each of the 13 propositions is presented. Using balanced scorecard concepts, a managerial assessment model is proposed for the use of automated call center technology. In future work, the propositions may become verifiable hypotheses through conclusive empirical research.
Abstract:
INTRODUCTION: The correct identification of the underlying cause of death and its precise assignment to a code from the International Classification of Diseases are important issues for achieving accurate and universally comparable mortality statistics. These factors, among others, led to the development of computer software programs to automatically identify the underlying cause of death. OBJECTIVE: This work was conceived to compare the underlying causes of death processed respectively by the Automated Classification of Medical Entities (ACME) and the "Sistema de Seleção de Causa Básica de Morte" (SCB) programs. MATERIAL AND METHOD: The comparative evaluation was performed using the input data file for the ACME system, covering deaths that occurred in the State of São Paulo from June to December 1993 and totalling 129,104 death certificate records. The differences between the underlying causes selected by the ACME and SCB systems in the month of June, when considered SCB errors, were used to correct and improve the SCB processing logic and its decision tables. RESULTS: Processing the underlying causes of death through the two systems resulted in 3,278 differences, which were analysed and ascribed to unanswered dialogue boxes during processing, to deaths from human immunodeficiency virus (HIV) disease for which neither system made specific provision, to coding and/or keying errors, and to actual problems. Detailed analysis of the latter disclosed that the majority of the underlying causes of death processed by the SCB system were correct, that the two systems interpreted some mortality coding rules differently, that some particular problems could not be explained with the available documentation, and that a smaller proportion of problems were genuine SCB errors. CONCLUSION: These results, disclosing a very low and insignificant number of actual problems, warrant the use of this version of the SCB system for the Ninth Revision of the International Classification of Diseases and assure the continuity of the work being undertaken for the Tenth Revision version.
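A minimal sketch of the record-by-record comparison implied above; the tuple layout and field names are assumptions for illustration, not the actual ACME/SCB file format.

```python
# Tally certificates where the two systems select different underlying-cause
# codes, so the differences can be reviewed and classified (dialogue gaps,
# keying errors, rule-interpretation differences, genuine errors, ...).
def compare_selections(records):
    """records: list of (certificate_id, acme_code, scb_code) tuples."""
    differences = [(cid, a, s) for cid, a, s in records if a != s]
    return differences, len(differences) / len(records)

# At the reported scale, 3,278 differences out of 129,104 records is a
# disagreement rate of about 2.5%.
```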
Abstract:
To determine the precision and agreement of hemoglobin (Hb) measurements in capillary and venous blood samples by the HemoCue® and an automated counter, Hb was determined by both devices in blood samples from 29 pregnant women. The HemoCue® showed low repeatability of duplicate Hb measurements in capillary (CR = 0.53 g/dL, CV = 13.6%) and venous blood (CR = 0.53 g/dL, CV = 13.6%). Hb measurements in capillary blood were higher than those in venous blood (12.4 and 11.7 g/dL, respectively; p < 0.05). There was high agreement between Hb in capillary blood by the HemoCue® and in venous blood by the counter (ICC = 0.86; p < 0.01), and also between the diagnoses of anemia by the two devices (κ = 0.81; p < 0.01). The HemoCue® seems to be more appropriate for capillary blood and requires training of the examiners.
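A short sketch of two of the agreement statistics reported above, under assumed definitions: one common form of the coefficient of repeatability (1.96 × SD of the differences between duplicates) and Cohen's kappa for the anemia classifications; the 11 g/dL cutoff is illustrative, not taken from the study.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def repeatability(first, second):
    """CR as 1.96 * SD of differences between duplicate measurements."""
    diffs = np.asarray(first, float) - np.asarray(second, float)
    return 1.96 * diffs.std(ddof=1)

def anemia_kappa(hb_a, hb_b, cutoff=11.0):
    """Kappa between anemia classifications (Hb < cutoff, in g/dL)."""
    return cohen_kappa_score(np.asarray(hb_a) < cutoff,
                             np.asarray(hb_b) < cutoff)
```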
Abstract:
A genetic algorithm used to design radio-frequency binary-weighted differential switched capacitor arrays (RFDSCAs) is presented in this article. The algorithm provides a set of circuits, all having the same maximum performance. This article also describes the design, implementation, and measurement results of a 0.25 µm BiCMOS 3-bit RFDSCA. The experimental results show that the circuit presents the expected performance up to 40 GHz. The similarity between the evolutionary solutions, circuit simulations, and measured results indicates that the genetic synthesis method is a very useful tool for designing optimum-performance RFDSCAs.
Abstract:
The paper presents an automated RFDSCA synthesis procedure. The algorithm determines several RFDSCA circuits from the top-level system specifications, all with the same maximum performance. The genetic synthesis tool optimizes a fitness function proportional to the RFDSCA quality factor and uses the ε-concept and a maximin sorting scheme to achieve a set of solutions well distributed along a non-dominated front. To confirm the results of the algorithm, three RFDSCAs were simulated in SpectreRF and one of them was implemented and tested, using a 0.25 µm BiCMOS process. All the results (synthesized, simulated, and measured) are very close, which indicates that the genetic synthesis method is a very useful tool for designing optimum-performance RFDSCAs.
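An illustrative numpy version of the maximin fitness underlying such a sorting scheme (Balling's formulation): solutions with negative fitness are non-dominated, and the magnitudes favour an even spread along the front. The authors' exact fitness, which also folds in the RFDSCA quality factor, is not given in the abstract.

```python
import numpy as np

def maximin_fitness(objectives):
    """objectives: (n, m) array; minimisation assumed for every column."""
    n = objectives.shape[0]
    fitness = np.empty(n)
    for i in range(n):
        others = np.delete(objectives, i, axis=0)
        # min over objectives of f_k(i) - f_k(j), then max over all j != i.
        fitness[i] = np.max(np.min(objectives[i] - others, axis=1))
    return fitness  # fitness < 0 marks non-dominated solutions
```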
Abstract:
Most traditional software and database development approaches tend to be serial, not evolutionary, and certainly not agile, especially in their data-oriented aspects. Most of the commonly used methodologies are strict, meaning they are composed of several stages, each with very specific associated tasks. A clear example is the Rational Unified Process (RUP), divided into Business Modeling, Requirements, Analysis & Design, Implementation, Testing, and Deployment. But what happens when the need for a well-designed and structured plan meets the reality of a small start-up company that aims to build an entire user-experience solution? Here resource control and time productivity are vital, requirements are in constant change, and so is the product itself. To succeed in this environment, a highly collaborative and evolutionary development approach is mandatory, and constantly changing requirements imply an iterative development process. The project focuses on Data Warehouse development and business modeling, usually a tricky area: business knowledge belongs to the enterprise, and how it works, its goals, and what is relevant for analysis are internal business processes. This document explains why Agile Modeling was chosen, and how an iterative and evolutionary methodology allowed for reasonable planning and documentation while permitting development flexibility, from idea to product. More importantly, it shows how this was applied to the development of a retail-focused Data Warehouse: a productized Data Warehouse built on the knowledge of not one but several clients' needs, one that aims not just to store the usual business areas but to create an innovative set of business metrics by joining them with store-environment analysis, converting Business Intelligence into Actionable Business Intelligence.
Abstract:
Final Master's project submitted to obtain the degree of Master in Informatics and Computer Engineering
Abstract:
Within the Dissertation curricular unit of the second year of the Master's in Mechanical Engineering – Industrial Management at the Instituto Superior de Engenharia do Porto, a project was proposed that is being developed by the Industrial Engineering and Management team of the AMT business unit, entitled "India Project – Development of the new manufacturing plant". The main objective of this project is the development of a plant of excellence in India for manufacturing medium-voltage components, that is, one with well-defined logistics processes and production lines that are as automated as possible. This new greenfield plant will be managed on, and modelled after, the plant EFACEC currently operates in Portugal for medium-voltage components. In the first phase of the project, a building of about 1600 m² was selected in Nashik, a town about 171 km from Mumbai, where 80% of EFACEC's suppliers are located. The products to be manufactured were identified and their annual demand quantified. Each of the lines was balanced and the layout was designed, covering the production areas, laboratory, team leaders' offices, shipping, reception, and warehouse. After defining the assembly areas for each product, the design of the production lines, mostly automated, began, together with the definition of the production rate. The manufacturing line detailed in this document is the assembly line for the CI operating mechanisms, the product with the highest demand. The logistics process for the plant's internal flow was also defined: a flow-control system based on Kanban cards was implemented on the production lines, and a new product control and location concept, "Aquiles", was created in the warehouse. Aquiles automatically indexes items to shelves through barcode reading: each item and each shelf (or location) is coded, and when material is received the item code is associated with the shelf code. To explore all possible solutions for the better development of this new plant, topics such as JIT, pull flow, Kanban, and takt time were addressed.
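A hypothetical sketch of the "Aquiles" association step described above, in which a scanned article barcode is bound to a scanned shelf barcode at goods reception; the class, method, and barcode names are illustrative, not EFACEC's.

```python
# Toy model of barcode-driven article-to-shelf indexing.
class Aquiles:
    def __init__(self):
        self.locations = {}                 # article code -> shelf code

    def receive(self, article_code, shelf_code):
        """Called after scanning both barcodes at goods reception."""
        self.locations[article_code] = shelf_code

    def locate(self, article_code):
        """Return the shelf where the article was stored, if known."""
        return self.locations.get(article_code)

warehouse = Aquiles()
warehouse.receive("ART-0042", "SHELF-A3-07")
assert warehouse.locate("ART-0042") == "SHELF-A3-07"
```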
Abstract:
This dissertation addresses the problem of building a data warehouse for AdClick, a company operating in digital marketing. Digital marketing uses digital communication channels for the same purpose as the traditional method: promoting goods, businesses, and services and attracting new customers. Several digital marketing strategies exist to achieve these goals, most notably organic traffic and paid traffic. Organic traffic is characterised by marketing actions that involve no costs for promotion and/or acquisition of potential customers, whereas paid traffic requires investment in campaigns capable of boosting reach and attracting new customers. The dissertation first reviews the state of the art in business intelligence and data warehousing and presents their main advantages for companies. Business intelligence systems are necessary because companies today hold large volumes of information-rich data that can only be properly exploited using the capabilities of such systems. The first step in developing a business intelligence system is therefore to concentrate all the data in a single integrated system capable of supporting decision-making, and the data warehouse is precisely that system. This dissertation surveys the data sources that will feed the data warehouse and contextualises the company's existing business processes. The data warehouse was then built: the dimensions and fact tables were created, the processes for extracting and loading data into the data warehouse were defined, and the various views were created. Regarding the impact of this dissertation, the partner company gains several business advantages from the implementation of the data warehouse and of the ETL processes that load all the information sources: information is centralised, managers have more flexibility in how they access it, and the data are treated so that information can be extracted from them.
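A minimal sketch of the star-schema step (one dimension, one fact table, one view), using sqlite3 for illustration; the table, column, and view names are assumptions, not the dissertation's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_campaign (
        campaign_key  INTEGER PRIMARY KEY,
        campaign_name TEXT,
        traffic_type  TEXT            -- 'organic' or 'paid'
    );
    CREATE TABLE fact_visits (
        campaign_key INTEGER REFERENCES dim_campaign(campaign_key),
        visit_date   TEXT,
        visits       INTEGER,
        cost         REAL             -- 0 for organic traffic
    );
""")
conn.execute("INSERT INTO dim_campaign VALUES (1, 'spring_promo', 'paid')")
conn.execute("INSERT INTO fact_visits VALUES (1, '2014-05-01', 320, 45.0)")

# One of the "views" mentioned above: visits and cost per campaign.
conn.execute("""
    CREATE VIEW v_campaign_cost AS
    SELECT d.campaign_name, SUM(f.visits) AS visits, SUM(f.cost) AS cost
    FROM fact_visits f JOIN dim_campaign d USING (campaign_key)
    GROUP BY d.campaign_name
""")
print(conn.execute("SELECT * FROM v_campaign_cost").fetchall())
```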
Abstract:
Submitted in partial fulfillment of the requirements for the degree of Master in Computer Science
Abstract:
This paper discusses the results of applied research in the eco-driving domain, based on a huge data set produced by a fleet of Lisbon's public transportation buses over a three-year period. The data set is based on events automatically extracted from the controller area network (CAN) bus and enriched with GPS coordinates, weather conditions, and road information. We apply online analytical processing (OLAP) and knowledge discovery (KD) techniques to deal with the high volume of this data set, to determine the major factors that influence average fuel consumption, and then to classify the drivers involved according to their driving efficiency. Consequently, we identify the most appropriate driving practices and styles. Our findings show that introducing simple practices, such as optimal use of the clutch, engine rotation, and engine idling, can reduce fuel consumption by 3 to 5 L/100 km on average, meaning a saving of 30 L per bus per day. These findings have been strongly considered in the drivers' training sessions.
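A hedged sketch of the final classification step: average consumption per driver in L/100 km, bucketed into efficiency bands. The column names and band thresholds are assumptions about the enriched CAN-bus data set, not values from the paper.

```python
import pandas as pd

def classify_drivers(df, bands=(40.0, 45.0)):
    """df columns (assumed): driver_id, fuel_litres, distance_km per trip."""
    per_driver = df.groupby("driver_id").agg(
        fuel=("fuel_litres", "sum"), dist=("distance_km", "sum"))
    per_driver["l_per_100km"] = 100 * per_driver["fuel"] / per_driver["dist"]
    per_driver["efficiency"] = pd.cut(
        per_driver["l_per_100km"],
        bins=[0.0, bands[0], bands[1], float("inf")],
        labels=["efficient", "average", "inefficient"])
    return per_driver
```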