925 results for Automation and robotics


Relevance: 80.00%

Abstract:

Reliability has emerged as a critical design constraint, especially in memories. Designers go to great lengths to guarantee fault-free operation of the underlying silicon by adopting redundancy-based techniques, which essentially try to detect and correct every single error. However, such techniques come at the cost of large area, power, and performance overheads, leading many researchers to doubt their efficiency, especially for error-resilient systems where 100% accuracy is not always required. In this paper, we present an alternative method focusing on the confinement of the output error induced by reliability issues. Focusing on memory faults, the proposed method does not correct every single error; instead, it exploits the statistical characteristics of the target application and replaces erroneous data with the best available estimate of that data. To realize the proposed method, a RISC processor is augmented with custom instructions and special-purpose functional units. We apply the method to the proposed enhanced processor by studying the statistical characteristics of the various algorithms involved in a popular multimedia application. Our experimental results show that, in contrast to state-of-the-art fault-tolerance approaches, we are able to reduce runtime and area overhead by 71.3% and 83.3%, respectively.
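
To make the idea concrete, here is a minimal software sketch of the confinement principle, assuming a per-word fault flag and a mean-of-recent-values estimator; the paper realizes this in hardware through custom instructions and functional units, so all names here are illustrative.

```python
# Illustrative sketch of fault confinement by statistical estimation
# (hypothetical; the paper implements this in hardware via custom
# RISC instructions, not in software).
import statistics

def read_with_confinement(memory, faulty, history, addr):
    """Return the stored value, or a statistical estimate if the
    word at `addr` is flagged as faulty."""
    if not faulty[addr]:
        history.append(memory[addr])   # track recent good reads
        return memory[addr]
    # Erroneous data is replaced with the best available estimate
    # rather than corrected -- here, the mean of recent good values.
    return statistics.mean(history) if history else 0

memory = {0: 118, 1: 121, 2: 5000, 3: 119}   # word 2 hit by a fault
faulty = {0: False, 1: False, 2: True, 3: False}
history = []
print([read_with_confinement(memory, faulty, history, a) for a in range(4)])
```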

Relevance: 80.00%

Abstract:

The automation of logistics processes poses a major technical challenge due to dynamic process characteristics and economic requirements. There is a need for novel, highly flexible automation and robotics solutions capable of handling variable goods or carrying out different processes and functionalities. This contribution addresses the increase in flexibility by means of two concrete examples from the fields of piece-goods handling and material-flow technology.

Relevance: 80.00%

Abstract:

This study provides the first spatially detailed and complete inventory of Ambrosia pollen sources in Italy – the third largest centre of ragweed in Europe. The inventory relies on a well-tested top-down approach that combines local knowledge, detailed land cover, pollen observations and a digital elevation model, assuming that permanent ragweed populations mainly grow below 745 m. The pollen data were obtained from 92 volumetric pollen traps located throughout Italy during 2004-2013. Land cover is derived from Corine Land Cover information with 100 m resolution. The digital elevation model is based on the NASA shuttle radar mission with 90 m resolution. The inventory is produced using a combination of ArcGIS and Python for automation, is validated using cross-correlation, and has a final resolution of 5 km x 5 km. The method includes a harmonization of the inventory with other European inventories for the Pannonian Plain, France and Austria in order to provide a coherent picture of all major ragweed sources. The results show that the mean annual pollen index varies from 0 in South Italy to 6779 in the Po Valley. They also show that very large pollen indices are observed in the Milan region, even though this region has fewer ragweed habitats than other parts of the Po Valley and known ragweed areas in France and the Pannonian Plain. A significant decrease in Ambrosia pollen concentrations was recorded in 2013 by pollen monitoring stations located in the Po Valley, particularly to the northwest of Milan. This was the same year as the appearance of the Ophraella communa leaf beetle in Northern Italy. These results suggest that ragweed habitats near the Milan region have very high densities of Ambrosia plants compared to other known ragweed habitats in Europe. The Milan region therefore appears to contain the most heavily infested ragweed habitats in Europe, but the smaller number of habitats is a likely cause of the lower pollen index compared to central parts of the Pannonian Plain. A low number of densely packed habitats may have increased the impact of the Ophraella beetle and might account for the documented decrease in airborne Ambrosia pollen levels, an event that cannot be explained by meteorology alone. Further investigations that model atmospheric pollen before and after the appearance of the beetle in this part of Northern Italy are needed to assess its influence on airborne Ambrosia pollen concentrations. Future work will focus on short-distance transport episodes for stations located in the Po Valley, and long-distance transport events for stations in Central Italy that exhibit peaks in daily airborne Ambrosia pollen levels.
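
As an illustration of the top-down masking step, the following sketch assumes gridded NumPy inputs with illustrative values; the actual inventory was built with ArcGIS and Python at 5 km resolution, so the array names and weighting are assumptions.

```python
# Minimal sketch of the top-down source-inventory idea, assuming
# gridded inputs as NumPy arrays (the study used ArcGIS + Python;
# array names and the habitat weighting here are illustrative).
import numpy as np

ELEV_LIMIT_M = 745            # permanent populations assumed below this
habitat = np.array([[0.8, 0.1], [0.6, 0.9]])   # habitat fraction per cell
elevation = np.array([[300, 900], [500, 700]]) # DEM, metres
pollen_index = 6779           # annual pollen index at a nearby trap

# Mask out cells above the elevation limit, then distribute the
# observed pollen index over the remaining habitat area.
viable = np.where(elevation < ELEV_LIMIT_M, habitat, 0.0)
source_strength = pollen_index * viable / viable.sum()
print(source_strength.round(1))
```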

Relevance: 80.00%

Abstract:

Contemporary integrated circuits are designed and manufactured in a globalized environment, leading to concerns of piracy, overproduction and counterfeiting. One class of techniques to combat these threats is circuit obfuscation, which seeks to modify the gate-level (or structural) description of a circuit without affecting its functionality in order to increase the complexity and cost of reverse engineering. Most existing circuit obfuscation methods are based on inserting additional logic (called "key gates") or camouflaging existing gates in order to make it difficult for a malicious user to obtain the complete layout information without extensive computations to determine key-gate values. However, when the netlist or the circuit layout, although camouflaged, is available to the attacker, he or she can use advanced logic analysis, circuit simulation tools and Boolean SAT solvers to reveal the unknown gate-level information without exhaustively trying all input vectors, thus bringing down the complexity of reverse engineering. To counter this problem, some 'provably secure' logic encryption algorithms that emphasize methodical selection of camouflaged gates have been proposed in the literature [1,2,3]. The contribution of this paper is the creation and simulation of a new layout obfuscation method that uses don't care conditions. We also present a proof of concept of a new functional or logic obfuscation technique that not only conceals but modifies the circuit functionality in addition to the gate-level description, and that can be applied automatically during the design process. Our layout obfuscation technique utilizes don't care conditions (namely, Observability and Satisfiability Don't Cares) inherent in the circuit to camouflage selected gates and modify sub-circuit functionality while meeting the overall circuit specification. Here, camouflaging or obfuscating a gate means replacing the candidate gate by a 4x1 multiplexer which can be configured to perform all possible 2-input/1-output functions, as proposed by Bao et al. [4]. It is important to emphasize that our approach not only obfuscates but also alters sub-circuit-level functionality in an attempt to make IP piracy difficult. The choice of gates to obfuscate determines the effort required to reverse engineer or brute-force the design. We therefore propose a method of camouflaged-gate selection based on the intersection of output logic cones. By choosing these candidate gates methodically, the complexity of reverse engineering can be made exponential, thus making it computationally very expensive to determine the true circuit functionality. We propose several heuristic algorithms to maximize the reverse engineering complexity based on don't-care-based obfuscation and methodical gate selection. Thus, the goal of protecting the design IP from malicious end-users is achieved. It also makes it significantly harder for rogue elements in the supply chain to use, copy or replicate the same design with a different logic. We analyze the reverse engineering complexity by applying our obfuscation algorithm to ISCAS-85 benchmarks. Our experimental results indicate that significant reverse engineering complexity can be achieved at minimal design overhead (the average area overhead for the proposed layout obfuscation methods is 5.51% and the average delay overhead is about 7.732%). We discuss the strengths and limitations of our approach and suggest directions that may lead to improved logic encryption algorithms in the future.

References: [1] R. Chakraborty and S. Bhunia, "HARPOON: An Obfuscation-Based SoC Design Methodology for Hardware Protection," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 28, no. 10, pp. 1493–1502, 2009. [2] J. A. Roy, F. Koushanfar, and I. L. Markov, "EPIC: Ending Piracy of Integrated Circuits," in Design, Automation and Test in Europe (DATE), 2008, pp. 1069–1074. [3] J. Rajendran, M. Sam, O. Sinanoglu, and R. Karri, "Security Analysis of Integrated Circuit Camouflaging," in ACM Conference on Computer and Communications Security, 2013. [4] B. Liu and B. Wang, "Embedded Reconfigurable Logic for ASIC Design Obfuscation Against Supply Chain Attacks," in Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, pp. 1–6.
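
The gate-selection heuristic can be illustrated with a small sketch: gates lying in the intersection of several output logic cones affect many outputs at once, making them attractive camouflage candidates. The netlist encoding and names below are assumptions, not the paper's implementation.

```python
# Hedged sketch of methodical gate selection: pick gates that lie in
# the intersection of output logic cones, so each camouflaged gate
# influences many outputs. The netlist format is illustrative.
from functools import reduce

netlist = {                      # gate -> fan-in gates/primary inputs
    "g1": ["a", "b"], "g2": ["b", "c"],
    "g3": ["g1", "g2"], "out1": ["g3", "g1"], "out2": ["g3", "g2"],
}

def cone(netlist, node):
    """All nodes in the transitive fan-in cone of `node`."""
    seen = set()
    stack = [node]
    while stack:
        n = stack.pop()
        for pred in netlist.get(n, []):
            if pred not in seen:
                seen.add(pred)
                stack.append(pred)
    return seen | {node}

cones = [cone(netlist, o) for o in ("out1", "out2")]
candidates = reduce(set.intersection, cones) - {"a", "b", "c"}
print(candidates)   # internal gates shared by all output cones
```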

Relevance: 80.00%

Abstract:

Neuropeptides affect the activity of the myriad neuronal circuits in the brain. They are under tight spatial and chemical control, and the dynamics of their release and catabolism directly modify neuronal network activity. Understanding neuropeptide function requires approaches that determine their chemical and spatial heterogeneity within neural tissue, but most imaging techniques do not provide the complete information desired. To provide chemical information, most imaging techniques used to study the nervous system require preselection and labeling of the peptides of interest; mass spectrometry imaging (MSI), however, detects analytes across a broad mass range without the need to target a specific analyte. When used with matrix-assisted laser desorption/ionization (MALDI), MSI detects analytes in the mass range of neuropeptides. MALDI MSI simultaneously provides spatial and chemical information, resulting in images that plot the spatial distributions of neuropeptides over the surface of a thin slice of neural tissue. Here, a variety of approaches for neuropeptide characterization are developed. Specifically, several computational approaches are combined with MALDI MSI to create improved methods that provide spatial distributions and neuropeptide characterizations. After successful validation of these MALDI MSI protocols, the methods are applied to characterize both known and unidentified neuropeptides from neural tissues. The methods are further adapted from tissue analysis to perform tandem MS (MS/MS) imaging on neuronal cultures, enabling the study of network formation. MALDI MSI has also been carried out over the time course of nervous system regeneration in planarian flatworms, resulting in the discovery of two novel neuropeptides that may be involved in planarian regeneration. In addition, several bioinformatic tools are developed to predict final neuropeptide structures and the associated masses, which can be compared to experimental MSI data in order to assign neuropeptide identities. The integration of computational approaches into the experimental design of MALDI MSI has allowed improved instrument automation and enhanced data acquisition and analysis. These tools also make the methods versatile and adaptable to new sample types.
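
As a hedged illustration of the identity-assignment step, the sketch below compares predicted [M+H]+ masses of candidate peptides to an observed m/z within a ppm tolerance; the residue masses are standard monoisotopic values, while the function names, candidate list, and tolerance are illustrative.

```python
# Illustrative sketch of the mass-matching step: predicted neuropeptide
# monoisotopic masses are compared to observed MSI peaks within a ppm
# tolerance. The candidate sequences here are assumptions.
MONO = {"G": 57.02146, "A": 71.03711, "F": 147.06841, "L": 113.08406,
        "R": 156.10111, "M": 131.04049}   # monoisotopic residue masses
WATER, PROTON = 18.01056, 1.00728

def mh_plus(seq):
    """[M+H]+ for an unmodified peptide sequence."""
    return sum(MONO[aa] for aa in seq) + WATER + PROTON

def match(observed_mz, candidates, ppm=10.0):
    """Return candidates whose theoretical mass is within `ppm`."""
    hits = []
    for seq in candidates:
        theo = mh_plus(seq)
        if abs(observed_mz - theo) / theo * 1e6 <= ppm:
            hits.append((seq, theo))
    return hits

print(match(mh_plus("GAFL"), ["GAFL", "RML"]))   # matches GAFL only
```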

Relevance: 80.00%

Abstract:

The search for patterns or motifs in data represents a problem area of key interest to finance and economics researchers. In this paper we introduce the Motif Tracking Algorithm, a novel immune-inspired pattern identification tool that is able to identify unknown motifs of unspecified length which repeat within time series data. The power of the algorithm comes from the fact that it uses a small number of parameters with minimal assumptions regarding the data being examined or the underlying motifs. Our interest lies in applying the algorithm to financial time series data to identify previously unknown patterns. The algorithm is tested using three separate data sets; its particular suitability to financial data is shown by applying it to oil price data. In all cases the algorithm identifies the motif population in a fast and efficient manner thanks to an intuitive symbolic representation. The resulting population of motifs is shown to have considerable potential value for other applications such as forecasting and algorithm seeding.
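
A minimal sketch of the symbolic-representation idea follows: the series is discretised into letters and repeated words are counted. The equal-width binning and fixed word length are simplifying assumptions, not the Motif Tracking Algorithm itself.

```python
# Minimal sketch of symbolic motif search: discretise a series into
# letters, then count repeated fixed-length words. The binning scheme
# and word length are illustrative, not the paper's method.
from collections import Counter
import numpy as np

def symbolise(series, n_bins=4):
    """Map each point to a letter via equal-width binning."""
    edges = np.linspace(min(series), max(series), n_bins + 1)[1:-1]
    return "".join(chr(ord("a") + int(np.digitize(x, edges))) for x in series)

def motifs(series, word=4):
    s = symbolise(series)
    words = Counter(s[i:i + word] for i in range(len(s) - word + 1))
    return [(w, c) for w, c in words.items() if c > 1]   # repeated motifs

rng = np.random.default_rng(0)
data = list(rng.normal(size=30)) + [0.1, 0.9, 0.2, 0.8] * 2
print(motifs(data))   # the planted pattern appears at least twice
```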

Relevance: 80.00%

Abstract:

The anticipated growth of air traffic worldwide requires enhanced Air Traffic Management (ATM) technologies and procedures to increase system capacity, efficiency, and resilience, while reducing environmental impact and maintaining operational safety. To deal with these challenges, new automation and information exchange capabilities are being developed through different modernisation initiatives toward a new global operational concept called Trajectory Based Operations (TBO), in which aircraft trajectory information becomes the cornerstone of advanced ATM applications. This transformation will lead to higher levels of system complexity, requiring enhanced Decision Support Tools (DST) to aid humans in the decision-making process. These will rely on accurate predicted aircraft trajectories, provided by advanced Trajectory Predictors (TP). The trajectory prediction process is subject to stochastic effects that introduce uncertainty into the predictions. Regardless of the assumptions that define the aircraft motion model underpinning the TP, deviations between predicted and actual trajectories are unavoidable. This thesis proposes an innovative method to characterise the uncertainty associated with a trajectory prediction, based on the mathematical theory of Polynomial Chaos Expansions (PCE). Assuming univariate PCEs of the trajectory prediction inputs, the method describes how to generate multivariate PCEs of the prediction outputs that quantify their associated uncertainty. Arbitrary PCE (aPCE) was chosen because it allows a higher degree of flexibility in modelling input uncertainty. The obtained polynomial description can be used in subsequent prediction sensitivity analyses thanks to the relationship between polynomial coefficients and Sobol indices. The Sobol indices enable ranking the input parameters according to their influence on trajectory prediction uncertainty. The applicability of the aPCE-based uncertainty quantification detailed herein is analysed through a case study. This case study represents a typical aircraft trajectory prediction problem in ATM, in which uncertain parameters regarding aircraft performance, aircraft intent description, weather forecast, and initial conditions are considered simultaneously. Numerical results are compared to those obtained from a Monte Carlo simulation, demonstrating the advantages of the proposed method. The thesis includes two examples of DSTs (a Demand and Capacity Balancing tool and an Arrival Manager) to illustrate the potential benefits of exploiting the proposed uncertainty quantification method.
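
To illustrate how Sobol indices fall out of PCE coefficients, the following sketch fits a Legendre expansion (the orthonormal basis for uniform inputs) to a stand-in model and reads first-order indices directly from the squared coefficients; the thesis's aPCE generalises this to arbitrary input distributions, and the model here is illustrative.

```python
# Hedged sketch of PCE-based uncertainty quantification with Sobol
# indices recovered from the coefficients (Legendre basis for U(-1,1)
# inputs; the stand-in model and sample sizes are assumptions).
import numpy as np
from numpy.polynomial import legendre as L
from itertools import product

def psi(n, x):
    """Legendre polynomial of degree n, orthonormal for U(-1, 1)."""
    c = np.zeros(n + 1); c[n] = 1.0
    return L.legval(x, c) * np.sqrt(2 * n + 1)

def model(x1, x2):                    # stand-in trajectory predictor
    return 1.0 + 2.0 * x1 + 0.5 * x2 + 0.8 * x1 * x2

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=(2000, 2))
y = model(x[:, 0], x[:, 1])

deg = 2                               # total-degree truncation
alphas = [a for a in product(range(deg + 1), repeat=2) if sum(a) <= deg]
A = np.column_stack([psi(a, x[:, 0]) * psi(b, x[:, 1]) for a, b in alphas])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# With orthonormal basis functions, variance = sum of squared
# coefficients over the non-constant terms; Sobol indices follow.
var = sum(c**2 for (a, b), c in zip(alphas, coef) if (a, b) != (0, 0))
S1 = sum(c**2 for (a, b), c in zip(alphas, coef) if a > 0 and b == 0) / var
S2 = sum(c**2 for (a, b), c in zip(alphas, coef) if b > 0 and a == 0) / var
print(f"S1={S1:.3f}  S2={S2:.3f}  (remainder is interaction)")
```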

Relevance: 80.00%

Abstract:

The quality and speed of genome sequencing have advanced as technology boundaries are stretched. This advancement has so far been divided into three generations. First-generation methods enabled sequencing of clonal DNA populations; the second generation massively increased throughput by parallelizing many reactions, while third-generation methods allow direct sequencing of single DNA molecules. The first techniques to sequence DNA were not developed until the mid-1970s, when two distinct sequencing methods emerged almost simultaneously, one by Allan Maxam and Walter Gilbert, and the other by Frederick Sanger. The first is a chemical method that cleaves DNA at specific points; the second uses ddNTPs that terminate synthesis of a copy from the DNA template. Both methods generate fragments of varying lengths that are subsequently electrophoresed. Until the 1990s, DNA sequencing remained relatively expensive and was seen as a long process; moreover, the use of radiolabeled nucleotides compounded the problem through safety concerns and prevented automation. Advancements within the first generation include the replacement of radioactive labels with fluorescently labeled ddNTPs and cycle sequencing with thermostable DNA polymerases, which allow automation and signal amplification, making the process cheaper, safer and faster. Another method is pyrosequencing, which is based on the "sequencing by synthesis" principle. It differs from Sanger sequencing in that it relies on the detection of pyrophosphate release upon nucleotide incorporation. By the end of the last millennium, parallelization of this method started Next Generation Sequencing (NGS), with 454 as the first of many platforms able to process multiple samples, marking second-generation sequencing; here electrophoresis was completely eliminated. Another method sometimes used is SOLiD, based on sequencing by ligation of fluorescently dye-labeled di-base probes which compete to ligate to the sequencing primer; specificity of the di-base probe is achieved by interrogating every first and second base in each ligation reaction. The widely used Solexa/Illumina method uses modified dNTPs containing so-called "reversible terminators" which block further polymerization; the terminator also carries a fluorescent label that can be detected by a camera. The step preceding the third generation came from Ion Torrent, which developed a "sequencing-by-synthesis" technique whose main feature is the detection of hydrogen ions released during base incorporation. Likewise, the third generation draws on nanotechnology advancements for processing single DNA molecules, from real-time synthesis sequencing systems like PacBio to nanopore sequencing, envisioned since 1995, which uses nanosensors forming channels obtained from bacteria that conduct the sample to a sensor able to detect each nucleotide residue in the DNA strand. The technological advancements we now enjoy have come so quickly that one may wonder: how do we imagine the next generation?

Relevance: 80.00%

Abstract:

Master's dissertation presented to the Instituto Superior de Psicologia Aplicada to obtain the degree of Master in the specialty of Clinical Psychology.

Relevance: 80.00%

Abstract:

The human immune system has numerous properties that make it ripe for exploitation in the computational domain, such as robustness and fault tolerance, and many different algorithms, collectively termed Artificial Immune Systems (AIS), have been inspired by it. Two generations of AIS are currently in use, with the first generation relying on simplified immune models and the second generation utilising interdisciplinary collaboration to develop a deeper understanding of the immune system and hence produce more complex models. Both generations of algorithms have been successfully applied to a variety of problems, including anomaly detection, pattern recognition, optimisation and robotics. In this chapter an overview of AIS is presented, its evolution is discussed, and it is shown that the diversification of the field is linked to the diversity of the immune system itself, leading to a number of algorithms as opposed to one archetypal system. Two case studies are also presented to help provide insight into the mechanisms of AIS; these are the idiotypic network approach and the Dendritic Cell Algorithm.
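
As a taste of the second-generation algorithms mentioned above, here is a deliberately simplified signal-fusion step in the spirit of the Dendritic Cell Algorithm; the weights, thresholds, and two-score aggregation are assumptions for illustration, not the published algorithm.

```python
# Simplified, illustrative take on the Dendritic Cell Algorithm's
# signal-fusion step (weights and thresholds here are assumptions;
# the full algorithm aggregates over many cells and antigens).
def fuse(pamp, danger, safe):
    """Combine input signals into costimulation and maturity scores."""
    csm    = 2 * pamp + 1 * danger + 2 * safe     # costimulation
    mature = 2 * pamp + 1 * danger - 3 * safe     # inflammation-leaning
    return csm, mature

def classify(observations, csm_threshold=10.0):
    csm_total = mature_total = 0.0
    for pamp, danger, safe in observations:
        csm, mature = fuse(pamp, danger, safe)
        csm_total += csm
        mature_total += mature
        if csm_total >= csm_threshold:            # cell "migrates"
            return "anomalous" if mature_total > 0 else "normal"
    return "undecided"

print(classify([(3, 2, 0), (4, 1, 0)]))   # danger-heavy -> anomalous
print(classify([(0, 1, 4), (0, 0, 5)]))   # safe-heavy   -> normal
```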

Relevance: 80.00%

Abstract:

This work aims to improve the quality of production tests in vertical tanks with free-water drain pipes by means of a device that controls the draining system. The proposal consists of an interface detector close to the tank bottom and a control valve on the drain pipe, both attached to a remote supervisory system, thereby minimizing human influence on the test result. For context, the work shows the importance of well production tests in monitoring and diagnosing the production process, reporting the large number of tests executed and the problems of the procedure currently adopted in the field. There are many possible sources of uncertainty in this kind of test, as shown by the experiments performed in the field; the prototype of this dissertation will be built in the field, based upon the definition of parameters and characteristics of the proposed devices. For a better definition of the draining process, the results of the assessment tests are shown, some of them specially modified to aid understanding of the real process. The work details the proposal and the configuration that will be used in the tank of the Monte Alegre field production station, explaining the type of interface detector and the control system. It is the basis for a pilot project now in development, classified as a new-technology and production-improvement project of PETROBRAS in Rio Grande do Norte and Ceará. The dissertation concludes that automating the conventional test with the draining system will bring both economic and metrological benefits, because it reduces the uncertainty of test procedures with free-water draining and decreases the number of problematic tests.
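
A minimal control-loop sketch of the proposed drain automation is given below, with a simulated tank standing in for the interface detector and valve; the class, threshold, and drain rate are hypothetical.

```python
# Illustrative supervisory loop for the proposed drain automation:
# an interface detector near the tank bottom drives the drain valve.
# The simulation class and all numeric values are hypothetical.
import time

INTERFACE_LOW_M = 0.30    # stop draining when the water layer is this thin

class TankSim:
    """Stand-in for the field instrumentation (detector + valve)."""
    def __init__(self, water_m=1.2):
        self.water_m = water_m
        self.valve_open = False
    def read_interface_height(self):
        if self.valve_open:
            self.water_m -= 0.15      # water drains while valve is open
        return self.water_m
    def set_drain_valve(self, open_):
        self.valve_open = open_

def supervise(tank, poll_s=0.0):
    """Drain free water, closing the valve before oil reaches the drain."""
    tank.set_drain_valve(True)
    while tank.read_interface_height() > INTERFACE_LOW_M:
        time.sleep(poll_s)            # supervisory polling interval
    tank.set_drain_valve(False)       # interface near the drain: stop
    return "drain complete"

print(supervise(TankSim()))
```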

Relevance: 80.00%

Abstract:

Master's thesis — Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2016.

Relevance: 80.00%

Abstract:

In the medical field, images obtained from high-definition cameras and other medical imaging systems are an integral part of medical diagnosis. The analysis of these images is usually performed by physicians, who sometimes need to spend long hours reviewing them before they are able to reach a diagnosis and decide on a course of action. In this dissertation we present a framework for computer-aided analysis of medical imagery via the use of an expert system. While this problem has been discussed before, we consider a system based on mobile devices. Since the release of the iPhone in 2007, the popularity of mobile devices has increased rapidly and our lives have become more reliant on them. This popularity and the ease of developing mobile applications now make it possible to perform on these devices many of the image analyses that previously required a personal computer. All of this has opened the door to a whole new set of possibilities and freed physicians from their reliance on desktop machines. The approach proposed in this dissertation aims to capitalize on these newfound opportunities by providing a framework for the analysis of medical images that physicians can use from their mobile devices, thus removing their reliance on desktop computers. We also provide an expert system to aid in the analysis and to advise on the selection of medical procedures. Finally, we allow for other mobile applications to be developed by providing a generic mobile application development framework that brings other applications into the mobile domain. In this dissertation we outline our work toward the development of the proposed methodology and the remaining work needed to find a solution to the problem. In order to make this difficult problem tractable, we divide it into three parts: the development of a user interface modeling language and tooling, the creation of a game development modeling language and tooling, and the development of a generic mobile application framework. To make the problem more manageable, we narrow the initial scope to the hair-transplant and glaucoma domains.
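
As a sketch of the kind of procedure advice an expert system could give, the following toy rule engine matches findings against rule conditions; the rules and finding tokens are invented for illustration and are not drawn from the dissertation.

```python
# Minimal rule-based expert-system sketch (rules, facts, and domains
# are illustrative, not the dissertation's knowledge base).
RULES = [
    # (required findings, recommendation)
    ({"elevated_iop", "optic_disc_cupping"}, "refer: glaucoma work-up"),
    ({"norwood_stage>=3", "donor_density_ok"}, "candidate: hair transplant"),
]

def advise(findings):
    """Return all recommendations whose conditions are satisfied."""
    findings = set(findings)
    return [rec for cond, rec in RULES if cond <= findings] or ["no advice"]

print(advise(["elevated_iop", "optic_disc_cupping", "age>40"]))
print(advise(["donor_density_ok"]))
```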

Relevance: 80.00%

Abstract:

The selection of this theme is due not only to the fact that it is a very interesting topic, but also to its very promising outlook, especially when framed as a solution to problems such as energy dependence, pollution, and overpopulation. Although many of the concepts tied to the theme are not that recent, the truth is that recent technological advances (computing and robotics), together with new ideologies (environmental preservation and renewable energy) and particular circumstances, notably the 2009 economic crisis, have brought a new perspective on the subject and its potential. Searching the Internet, it is possible to gather a great deal of information on how to build intelligently, spanning topics such as intelligent buildings, sustainability, or even home healthcare services; nevertheless, it is still somewhat difficult to grasp the importance these aspects may come to play in the future. This stems largely from the authors' own difficulty in agreeing on what an intelligent building really is and what its importance to the cities of the future may be. Accordingly, the goal of this work is to address the basic concepts of intelligent buildings from their genesis to the present, not only from the technological point of view but also from the environmental one, and the way the two are articulated, especially in a period of economic difficulty such as the present. Nowadays it is too reductive to think of intelligent buildings only through their technological component, since their adaptability allows them to be a solution to a much larger set of problems, some of which displace technology from the spotlight in favour of the environmental component, where intelligent design can yield major improvements at minimal economic cost. In conclusion, an intelligent building today is much more than the sum of its parts, be they technological, economic, or environmental. The "true" intelligence lies in combining the best of each strand, leveraging their strengths so as to provide increased quality of life to the user without compromising the environment and without entailing unaffordable economic costs.

Relevance: 80.00%

Abstract:

The pervasive availability of connected devices in any industrial and societal sector is pushing for an evolution of the well-established cloud computing model. The emerging paradigm of the cloud continuum embraces this decentralization trend and envisions virtualized computing resources physically located between traditional datacenters and data sources. By totally or partially executing closer to the network edge, applications can react more quickly to events, thus enabling advanced forms of automation and intelligence. However, these applications also induce new data-intensive workloads with low-latency constraints that require the adoption of specialized resources, such as high-performance communication options (e.g., RDMA, DPDK, XDP, etc.). Unfortunately, cloud providers still struggle to integrate these options into their infrastructures. That risks undermining the principle of generality that underlies cloud computing's economy of scale by forcing developers to tailor their code to low-level APIs, non-standard programming models, and static execution environments. This thesis proposes a novel system architecture to empower cloud platforms across the whole cloud continuum with Network Acceleration as a Service (NAaaS). To provide commodity yet efficient access to acceleration, this architecture defines a layer of agnostic high-performance I/O APIs, exposed to applications and clearly separated from the heterogeneous protocols, interfaces, and hardware devices that implement it. A novel system component embodies this decoupling by offering a set of agnostic OS features to applications: memory management for zero-copy transfers, asynchronous I/O processing, and efficient packet scheduling. This thesis also explores the design space of possible implementations of this architecture by proposing two reference middleware systems and by adopting them to support interactive use cases in the cloud continuum: a serverless platform and an Industry 4.0 scenario. A detailed discussion and a thorough performance evaluation demonstrate that the proposed architecture is suitable for enabling the easy-to-use, flexible integration of modern network acceleration into next-generation cloud platforms.
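
A minimal sketch of the agnostic-API idea follows, assuming a Python-style facade: applications code against one asynchronous, zero-copy-flavoured interface while backends (RDMA, DPDK, XDP, or a plain fallback) are swapped underneath; every name here is illustrative, not the thesis's API.

```python
# Hedged sketch of a backend-neutral high-performance I/O facade:
# one async interface for applications, interchangeable backends
# beneath it. All class and method names are assumptions.
import asyncio
from abc import ABC, abstractmethod

class AcceleratedChannel(ABC):
    """Backend-neutral facade for high-performance messaging."""
    @abstractmethod
    def register(self, size: int) -> memoryview: ...   # zero-copy buffer
    @abstractmethod
    async def send(self, buf: memoryview) -> None: ...
    @abstractmethod
    async def recv(self) -> memoryview: ...

class LoopbackChannel(AcceleratedChannel):
    """Fallback backend; an RDMA/DPDK backend would share this API."""
    def __init__(self):
        self._queue = asyncio.Queue()
    def register(self, size):
        return memoryview(bytearray(size))   # stands in for pinned memory
    async def send(self, buf):
        await self._queue.put(buf)           # no copy: pass the view along
    async def recv(self):
        return await self._queue.get()

async def main():
    ch = LoopbackChannel()
    buf = ch.register(16)
    buf[:5] = b"hello"
    await ch.send(buf)
    print(bytes((await ch.recv())[:5]))      # b'hello'

asyncio.run(main())
```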