973 results for automated systems


Relevance:

40.00%

Publisher:

Abstract:

Nowadays, technological advancements have driven industry and research towards the automation of various processes. Automation reduces costs and improves product quality, and for this reason companies are pushing research to investigate new technologies. The agriculture industry has always looked towards automating various processes, from product processing to storage. In recent years, the automation of the harvest and cultivation phases has also become attractive, driven by the advancement of autonomous driving. Nevertheless, ADAS alone are not enough: merging different technologies will be the key to fully automating agricultural processes. For example, sensors that estimate a product's physical and chemical properties can be used to evaluate the maturation level of fruit. The fusion of these technologies therefore plays a key role in industrial process automation. This dissertation treats both ADAS and sensors for precision agriculture. Several measurement procedures for characterizing commercial 3D LiDARs are proposed and tested to cope with the growing need for comparison tools; both axial and transversal errors are investigated. Moreover, a measurement method and setup for evaluating the effect of fog on 3D LiDARs is proposed. Each presented measurement procedure has been tested, and the obtained results highlight the versatility and effectiveness of the proposed approaches. Regarding precision agriculture sensors, a measurement approach for estimating the moisture content and density of crops directly in the field is presented. The approach employs a near-infrared (NIR) spectrometer together with Partial Least Squares (PLS) statistical analysis. The approach and the model are described together with a first laboratory prototype used to evaluate the NIRS approach. Finally, a prototype for in-field analysis is realized and tested. The test results are promising, showing that the proposed approach is suitable for moisture content and density estimation.
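The NIR-plus-PLS idea can be sketched numerically. Below is a minimal PLS1 (NIPALS) regression fitted to synthetic "spectra"; the data, dimensions, and component count are invented for illustration and are not taken from the dissertation's prototype.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal PLS1 (NIPALS): returns regression coefficients and intercept."""
    X, y = X.astype(float), y.astype(float)
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    m = Xc.shape[1]
    W = np.zeros((m, n_components))   # weight vectors
    P = np.zeros((m, n_components))   # loadings
    q = np.zeros(n_components)        # y-loadings
    for k in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)        # direction most correlated with y
        t = Xc @ w                    # scores
        tt = t @ t
        P[:, k] = Xc.T @ t / tt
        q[k] = yc @ t / tt
        Xc = Xc - np.outer(t, P[:, k])  # deflate X
        yc = yc - q[k] * t              # deflate y
        W[:, k] = w
    b = W @ np.linalg.solve(P.T @ W, q)
    return b, y_mean - x_mean @ b

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))              # 40 hypothetical spectra, 10 bands
true_b = rng.normal(size=10)
y = X @ true_b + 0.01 * rng.normal(size=40)  # "moisture content", arbitrary units
b, b0 = pls1_fit(X, y, n_components=8)
pred = X @ b + b0
print("RMSE:", float(np.sqrt(np.mean((pred - y) ** 2))))
```

With enough latent components the PLS fit approaches ordinary least squares; in real NIR calibration far fewer components than wavelengths are used, which is the point of PLS.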

Relevance:

30.00%

Publisher:

Abstract:

This paper reports on a system for automated agent negotiation, based on a formal and executable approach to capture the behavior of parties involved in a negotiation. It uses the JADE agent framework, and its major distinctive feature is the use of declarative negotiation strategies. The negotiation strategies are expressed in a declarative rules language, defeasible logic, and are applied using the implemented system DR-DEVICE. The key ideas and the overall system architecture are described, and a particular negotiation case is presented in detail.
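To make the idea of a declarative, defeasible negotiation strategy concrete, here is a toy sketch of defeasible inference with a superiority relation. The rules, literals, and the simplified proof condition are invented for illustration; DR-DEVICE's actual rule language and defeasible logic proof theory are considerably richer.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    name: str
    premises: frozenset  # literals that must hold for the rule to fire
    conclusion: str      # literal the rule argues for ("~p" negates "p")

def defeasibly_proved(goal, facts, rules, superior):
    """Simplified test: some applicable rule concludes the goal, and every
    applicable rule for the opposite literal is beaten by a superior
    applicable pro-goal rule (pairs in `superior` mean "first beats second")."""
    opposite = goal[1:] if goal.startswith("~") else "~" + goal
    applicable = lambda r: r.premises <= facts
    pro = [r for r in rules if r.conclusion == goal and applicable(r)]
    con = [r for r in rules if r.conclusion == opposite and applicable(r)]
    return bool(pro) and all(
        any((p.name, c.name) in superior for p in pro) for c in con
    )

# Invented negotiation strategy: a price threshold rule vs. a loyalty rule.
facts = frozenset({"offer_below_limit", "long_term_customer"})
rules = [
    Rule("r1", frozenset({"offer_below_limit"}), "~accept"),
    Rule("r2", frozenset({"long_term_customer"}), "accept"),
]
superior = {("r2", "r1")}  # customer loyalty overrides the price threshold
print(defeasibly_proved("accept", facts, rules, superior))  # True
```

The declarative style is the point: changing the strategy means editing rules and priorities, not procedural agent code.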

Relevance:

30.00%

Publisher:

Abstract:

The fabrication of heavy-duty printer heads involves a great deal of grinding work. Previously in the printer manufacturing industry, four grinding procedures were manually conducted in four grinding machines, respectively. The productivity of the whole grinding process was low due to the long loading time. Also, the machine floor space occupation was large because of the four separate grinding machines. The manual operation also caused inconsistent quality. This paper reports the system and process development of a highly integrated and automated high-speed grinding system for printer heads. The developed system, which is believed to be the first of its kind, not only produces printer heads of consistently good quality, but also significantly reduces the cycle time and machine floor space occupation.

Relevance:

30.00%

Publisher:

Abstract:

Quantification of stress echocardiography may overcome the training requirements and subjective nature of visual wall motion score (WMS) assessment, but quantitative approaches may be difficult to apply and require significant time for image processing. The integral of long-axis myocardial velocity is displacement, which may be represented as a color map over the left ventricular myocardium. This study was designed to explore the feasibility and accuracy of measuring long-axis myocardial displacement, derived from tissue Doppler, for the detection of coronary artery disease (CAD) during dobutamine stress echocardiography (DBE). One hundred thirty patients underwent standard DBE, including 30 patients at low risk of CAD, 30 patients with normal coronary angiography (both groups studied to define normal ranges of displacement), and 70 patients who underwent coronary angiography in whom the accuracy of the normal ranges was tested. Regional myocardial displacement was obtained by analysis of color tissue Doppler apical images acquired at peak stress. Displacement was compared with WMS, and with the presence of CAD by angiography. The analysis time was 3.2 ± 1.5 minutes per patient. Segmental displacement was correlated with wall motion (normal 7.4 ± 3.2 mm, ischemia 5.8 ± 4.2 mm, viability 4.6 ± 3.0 mm, scar 4.5 ± 3.5 mm, p < 0.001). Reversal of normal base-apex displacement was an insensitive (19%) but specific (90%) marker of CAD. The sum of displacements within each vascular territory had a sensitivity and specificity of 89% and 79%, respectively, for prediction of significant CAD, compared with 86% and 78%, respectively, for WMS (p = NS). The displacements in the basal segments had a sensitivity and specificity of 83% and 78%, respectively (p = NS). Regional myocardial displacement during DBE is feasible and offers a fast and accurate method for the diagnosis of CAD. © 2002 by Excerpta Medica, Inc.
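The quantity at the core of this method is simple: displacement is the time integral of the tissue Doppler velocity trace. A minimal sketch with a synthetic systolic velocity profile (sampling rate, duration, and amplitude are all invented, not patient data):

```python
import numpy as np

fs = 200.0                                   # sampling rate, Hz (assumed)
t = np.arange(0.0, 0.35, 1.0 / fs)           # ~350 ms systolic interval
velocity = 0.04 * np.sin(np.pi * t / 0.35)   # m/s, half-sine systolic profile

# displacement = time integral of velocity (trapezoidal rule)
displacement = np.sum(0.5 * (velocity[1:] + velocity[:-1]) * np.diff(t))
print(f"long-axis displacement: {displacement * 1e3:.1f} mm")
```

With these invented numbers the result lands near the normal segmental values reported above (7.4 ± 3.2 mm), which is why a per-segment color map of this integral can stand in for visual wall motion scoring.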

Relevance:

30.00%

Publisher:

Abstract:

ARINC specification 653-2 describes the interface between application software and the underlying middleware in a distributed real-time avionics system. The real-time workload in this system comprises partitions, where each partition consists of one or more processes. Processes incur blocking and preemption overheads and can communicate with other processes in the system. In this work, we develop compositional techniques for the automated scheduling of such partitions and processes. At present, system designers manually schedule partitions based on interactions with the partition vendors. This approach is not only time consuming but can also result in underutilization of resources. In contrast, the technique proposed in this paper is a principled approach to scheduling ARINC-653 partitions and should therefore facilitate system integration.
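The two-level structure (partitions scheduled on the processor, processes scheduled inside each partition) can be sketched with crude textbook tests. The numbers and both checks below are invented simplifications: the scaled rate-monotonic bound is an optimistic stand-in for a real supply-bound-function analysis, and the paper's compositional technique additionally models process communication and blocking/preemption overheads.

```python
def partitions_fit(partitions):
    """Partition level: cyclic time slices fit if total utilization <= 1."""
    return sum(budget / period for period, budget in partitions) <= 1.0

def processes_fit_rm(processes, supply_fraction):
    """Process level, inside one partition: Liu-Layland rate-monotonic
    bound n*(2^(1/n)-1), scaled by the fraction of the CPU the partition
    receives. This ignores supply latency, hence only a rough sketch."""
    n = len(processes)
    util = sum(wcet / period for period, wcet in processes)
    return util <= supply_fraction * n * (2 ** (1 / n) - 1)

parts = [(100, 40), (50, 20)]            # (period ms, budget ms): 0.4 + 0.4
print(partitions_fit(parts))             # True
print(processes_fit_rm([(25, 2), (50, 4)], supply_fraction=0.4))  # True
```

An automated scheduler searches over partition periods and budgets until both levels pass, which is precisely what manual vendor negotiation approximates today.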

Relevance:

30.00%

Publisher:

Abstract:

Submitted in partial fulfillment of the requirements for the degree of Master in Computer Science

Relevance:

30.00%

Publisher:

Abstract:

OCEANS 2001, MTS/IEEE Conference and Exhibition (Volume 2)

Relevance:

30.00%

Publisher:

Abstract:

The design of work organisation systems with automated equipment faces new challenges and the emergence of new concepts. The social aspects related to new concepts in complex work environments (CWE) are becoming more relevant to that design. Working with autonomous systems implies choices in the design of workplaces, especially in such complex environments. The concepts of “agents”, “co-working” or “human-centred technical systems” reveal new dimensions of human-computer interaction (HCI). As the number and complexity of these human-technology interfaces increase, the capacity for human intervention can become limited, creating further problems. The case of robotics is used to exemplify the issues related to automation in working environments and the emergence of new HCI approaches that include social implications. We conclude that studies on the technology assessment of industrial robotics and autonomous agents in manufacturing environments should also focus on human involvement strategies in organisations. The needed participatory strategy implies a new approach to workplace design: the research focus must be on the relation between the technological and social dimensions, not as separate entities but integrated in the design of an interaction system.

Relevance:

30.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance:

30.00%

Publisher:

Abstract:

INTRODUCTION: Enterobacteriaceae strains are a leading cause of bloodstream infections (BSI). The aim of this study is to assess differences in the clinical outcomes of patients with BSI caused by Enterobacteriaceae strains before and after the introduction of an automated microbiologic system in the microbiology laboratory. METHODS: We conducted a retrospective cohort study to evaluate the impact of the introduction of an automated microbiologic system (Phoenix™ automated microbiology system, Becton, Dickinson and Company (BD) - Diagnostic Systems, Sparks, MD, USA) on the outcomes of BSIs caused by Enterobacteriaceae strains. The study was undertaken at Hospital São Paulo, a 750-bed teaching hospital in São Paulo, Brazil. Patients with BSI caused by Enterobacteriaceae strains before the introduction of the automated system were compared with patients with BSI caused by the same pathogens after its introduction, with regard to treatment adequacy, clinical cure/improvement, and 14- and 28-day mortality rates. RESULTS: We evaluated 90 and 106 patients in the non-automated and automated testing periods, respectively. The most prevalent species in both periods were Klebsiella spp. and Proteus spp. Clinical cure/improvement occurred in 70% and 67.9% of patients in the non-automated and automated periods, respectively (p = 0.75). The 14-day mortality rates were 22.2% and 30% (p = 0.94), and the 28-day mortality rates were 24.5% and 40.5% (p = 0.12). There were no significant differences between the two testing periods with regard to treatment adequacy, clinical cure/improvement, or 14- and 28-day mortality rates. CONCLUSIONS: The introduction of the BD Phoenix™ automated microbiology system did not impact the clinical outcomes of BSIs caused by Enterobacteriaceae strains in our setting.

Relevance:

30.00%

Publisher:

Abstract:

IP networks are currently the major communication infrastructure used by an increasing number of applications and heterogeneous services, including voice services. In this context, the Session Initiation Protocol (SIP) is a signaling protocol widely used for controlling multimedia communication sessions, such as voice or video calls, over IP networks, performing vital functions in an extensive set of public and enterprise solutions. However, the dissemination of the SIP protocol also entails challenges, such as the complexity of the testing/validation processes of IMS/SIP networks. As a consequence, manual IMS/SIP testing is an inherently costly and time-consuming task, and it is crucial to develop automated approaches in this specific area. In this perspective, this article presents an experimental approach for the automated testing/validation of SIP scenarios in IMS networks. For that purpose, an automation framework is proposed that replicates the configuration of SIP equipment from the production network and submits that equipment to a battery of tests in the testing network. The proposed solution drastically reduces test and validation times compared with traditional manual approaches, while also enhancing testing reliability and coverage. The automation framework comprises several freely available tools, conveniently integrated with other specific modules implemented within the context of this work. To illustrate the advantages of the proposed framework, a real case study taken from a PT Inovação customer is presented, comparing the time required by a manual SIP testing approach with that required by the proposed automated framework. The presented results clearly corroborate the advantages of using the framework.
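One elementary building block of any automated SIP test is composing a syntactically valid request and asserting on it. The sketch below builds a SIP OPTIONS request per RFC 3261's mandatory headers; the host names, tag, and branch value are hypothetical, and a real framework such as the article's would send this over UDP/TCP and parse the response class.

```python
def sip_options(target_uri, from_uri, call_id, branch):
    """Compose a minimal RFC 3261 OPTIONS request (CRLF line endings)."""
    return "\r\n".join([
        f"OPTIONS {target_uri} SIP/2.0",
        f"Via: SIP/2.0/UDP tester.example.com;branch={branch}",
        f"From: <{from_uri}>;tag=42",
        f"To: <{target_uri}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 OPTIONS",
        "Max-Forwards: 70",
        "Content-Length: 0",
        "", "",                      # blank line terminates the headers
    ])

msg = sip_options("sip:sut.example.com", "sip:test@tester.example.com",
                  "abc123@tester.example.com", "z9hG4bK776asdhds")
print(msg.split("\r\n", 1)[0])       # OPTIONS sip:sut.example.com SIP/2.0
```

Batteries of such requests, generated from the replicated equipment configuration, are what let an automated run cover far more scenarios than a manual session.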

Relevance:

30.00%

Publisher:

Abstract:

Identification and characterization of the problem. One of the most important problems associated with building software is its correctness. Seeking to provide guarantees of correct software behavior, a variety of development techniques with solid mathematical and logical foundations, known as formal methods, have emerged. Due to their nature, applying formal methods requires considerable experience and knowledge, above all in mathematics and logic, which makes their application costly in practice. As a result, their main application has been limited to critical systems, that is, systems whose malfunction can cause major damage, even though the benefits these techniques provide are relevant to every kind of software. Transferring the benefits of formal methods to software development contexts broader than critical systems would have a high impact on productivity in those contexts. Hypothesis. Having automated analysis tools is an element of great importance. Examples are several powerful formal-methods-based analysis tools that target source code directly. For the large majority of these tools, the gap between the notions developers are used to and those needed to apply these formal analysis tools remains too wide. Many tools use assertion languages that fall outside developers' usual knowledge and habits. Moreover, in many cases the output of the analysis tool requires some command of the underlying formal method. This problem can be alleviated by producing suitable tools. Another problem intrinsic to automated analysis techniques is how they behave as the size and complexity of the artifacts to be analyzed grow (scalability). 
This limitation is widely known and is considered critical for the applicability of formal analysis methods in practice. One way to attack this problem is to exploit information and characteristics of specific application domains. Objectives. This project aims at building formal analysis tools that contribute to the quality, in terms of functional correctness, of specifications, models, or code in the context of software development. More precisely, it seeks, on one hand, to identify specific settings in which certain automated analysis techniques, such as analysis based on SMT or SAT solving, or model checking, can reach levels of scalability beyond those known for these techniques in general settings. We will attempt to implement the adaptations of the chosen techniques in tools usable by developers familiar with the application context, but not necessarily knowledgeable about the underlying methods or techniques. Materials and methods. The materials employed will be bibliography relevant to the area and computing equipment. The methods will be those of discrete mathematics, logic, and software engineering. Expected results. One expected result of the project is the identification of specific domains for the application of formal analysis methods. We expect the project to produce analysis tools whose level of usability is adequate for developers without specific training in the formal methods used. Importance of the project. The main impact of this project will be its contribution to the practical application of formal analysis techniques at different stages of software development, with the aim of increasing software quality and reliability. A crucial factor for software quality is correctness. 
Traditionally, formal approaches to software development concentrate on functional correctness and tackle this problem by building on well-defined notations founded on solid mathematical grounds. This makes formal methods well suited for analysis, due to their precise semantics, but they are usually more complex and require familiarity and experience with the manipulation of mathematical definitions. Their acceptance by software engineers is therefore rather restricted, and formal methods applications have been confined to critical systems, even though the advantages that formal methods provide apply to any kind of software system. It is accepted that appropriate tool support for formal analysis is essential if one seeks to support software development based on formal methods. Indeed, some of the relatively recent successes of formal methods are accompanied by good-quality tools that automate powerful analysis mechanisms and are even integrated into widely used development environments. Still, most of these tools concentrate on code analysis, and in many cases they are still far from being simple enough to be employed by software engineers without experience in formal methods. Another important obstacle to the adoption of tool support for formal methods is scalability: automated software analysis is intrinsically complex, and thus techniques do not scale well in the general case. In this project, we will attempt to identify particular modelling, design, specification, or coding activities in software development processes in which to apply automated formal analysis techniques. By focusing on very specific application domains, we expect to find characteristics that can be exploited to increase the scalability of the corresponding analyses compared with the general case.
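As a toy illustration of the engine behind "SAT-based analysis", here is a tiny DPLL satisfiability checker over CNF clauses (integers are literals; a negative integer is a negated variable). Real analysis tools encode program semantics into vastly larger formulas and dispatch them to industrial solvers; this sketch only shows the core search.

```python
def dpll(clauses, assignment=()):
    """Return a satisfying tuple of literals, or None if unsatisfiable."""
    # drop clauses already satisfied by the current assignment
    clauses = [c for c in clauses if not any(l in assignment for l in c)]
    if not clauses:
        return assignment                      # every clause satisfied
    # remove literals falsified by the assignment
    clauses = [[l for l in c if -l not in assignment] for c in clauses]
    if any(not c for c in clauses):
        return None                            # empty clause: conflict
    lit = clauses[0][0]                        # branch on an unassigned literal
    return (dpll(clauses, assignment + (lit,))
            or dpll(clauses, assignment + (-lit,)))

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = dpll([[1, 2], [-1, 3], [-2, -3]])
print(model is not None)   # True: a satisfying assignment exists
print(dpll([[1], [-1]]))   # None: contradictory unit clauses
```

The scalability problem the project targets is visible even here: the branching is worst-case exponential, which is why exploiting domain-specific structure matters.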

Relevance:

30.00%

Publisher:

Abstract:

Abstract: In the field of fingerprints, the rise of computer tools has made it possible to create powerful automated search algorithms. These algorithms allow, among other things, a fingermark to be compared to a fingerprint database and therefore a link to be established between the mark and a known source. With the growth of the capacities of these systems and of data storage, as well as increasing collaboration between police services at the international level, the size of these databases increases. The current challenge for the field of fingerprint identification is the growth of these databases, which makes it possible to find impressions that are very similar but come from distinct fingers. At the same time, however, these data and systems allow a description of the variability between different impressions from the same finger and between impressions from different fingers. This statistical description of the within- and between-finger variabilities, computed on the basis of minutiae and their relative positions, can then be used in a statistical approach to interpretation. The computation of a likelihood ratio, employing simultaneously the comparison between the mark and the print of the case, the within-variability of the suspect's finger, and the between-variability of the mark with respect to a database, can then be based on representative data. These data thus allow an evaluation which may be more detailed than that obtained by applying rules established long before the advent of these large databases, or by the specialist's experience alone. The goal of the present thesis is to evaluate likelihood ratios computed from the scores of an automated fingerprint identification system (AFIS) when the source of the tested and compared marks is known. These ratios must support the hypothesis that is known to be true and, moreover, should support this hypothesis more and more strongly as information is added in the form of additional minutiae. 
For the modeling of within- and between-variability, the necessary data were defined and acquired for one finger of a first donor and two fingers of a second donor. The database used for between-variability includes approximately 600,000 inked prints. The minimal number of observations necessary for a robust estimation was determined for the two distributions used. Factors that influence these distributions were also analyzed: the number of minutiae included in the configuration and the configuration as such, for both distributions; the finger number and the general pattern, for between-variability; and the orientation of the minutiae, for within-variability. In the present study, the only factor for which no influence was shown is the orientation of minutiae. The results show that the likelihood ratios derived from the scores of an AFIS can be used for evaluation. Relatively low rates of likelihood ratios supporting the hypothesis known to be false were obtained. The maximum rate of likelihood ratios supporting the hypothesis that the two impressions were left by the same finger, when the impressions in fact came from different fingers, is 5.2%, for a configuration of 6 minutiae. When a 7th and then an 8th minutia are added, this rate falls to 3.2% and then to 0.8%. In parallel, for these same configurations, the likelihood ratios obtained when the two impressions come from the same finger are on average of the order of 100, 1000, and 10000 for 6, 7, and 8 minutiae. These likelihood ratios can therefore be an important aid for decision making. Both positive effects of adding minutiae (a drop in the rate of likelihood ratios that could lead to an erroneous decision, and an increase in the value of the likelihood ratio) were observed systematically within the framework of the study. 
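The score-based likelihood ratio described above can be sketched directly: the AFIS score of the case comparison is evaluated under a within-finger score distribution and a between-finger score distribution. Here both distributions are modelled with a simple Gaussian kernel density estimate over synthetic scores; all numbers are invented, whereas the thesis estimates these distributions from real AFIS scores.

```python
import numpy as np

def gaussian_kde_pdf(samples, x, bandwidth):
    """Density at x of a Gaussian KDE built on 1-D `samples`."""
    z = (x - samples) / bandwidth
    return np.mean(np.exp(-0.5 * z ** 2)) / (bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
within = rng.normal(800, 60, size=200)     # scores: same-finger comparisons
between = rng.normal(300, 80, size=5000)   # scores: mark vs other fingers

case_score = 700.0                         # hypothetical case comparison score
lr = (gaussian_kde_pdf(within, case_score, 25)
      / gaussian_kde_pdf(between, case_score, 25))
print("LR supports same-finger hypothesis:", lr > 1)
```

LR > 1 supports the same-finger hypothesis, LR < 1 the different-finger hypothesis; the thesis evaluates how often this support points the wrong way as minutiae are added.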
Approximations based on 3 scores for within-variability and on 10 scores for between-variability were found, and showed satisfactory results.

Relevance:

30.00%

Publisher:

Abstract:

Advances in clinical virology for detecting respiratory viruses have focused on nucleic acid amplification techniques, which have become the reference method for diagnosing acute respiratory infections of viral aetiology. Improvements to current commercial molecular assays to reduce hands-on time rely on two strategies: stepwise automation (semi-automation) and complete automation of the whole procedure. Contributions to the former strategy have been the use of automated nucleic acid extractors, multiplex PCR, real-time PCR and/or DNA arrays for the detection of amplicons. Commercial fully automated molecular systems are now available for the detection of respiratory viruses. Some of them could become point-of-care methods, substituting antigen tests for the detection of respiratory syncytial virus and influenza A and B viruses. This article describes laboratory methods for the detection of respiratory viruses. A cost-effective and rational diagnostic algorithm is proposed, considering the technical aspects of the available assays, the infrastructure possibilities of each laboratory, and the clinical-epidemiological factors of the infection.

Relevance:

30.00%

Publisher:

Abstract:

Among various advantages, their small size makes model organisms preferred subjects of investigation. Yet, even in model systems, detailed analysis of numerous developmental processes at the cellular level is severely hampered by their scale. For instance, secondary growth of Arabidopsis hypocotyls creates a radial pattern of highly specialized tissues comprising several thousand cells, starting from a few dozen. This dynamic process is difficult to follow because of its scale and because it can only be investigated invasively, precluding a comprehensive understanding of the cell proliferation, differentiation, and patterning events involved. To overcome this limitation, we established an automated quantitative histology approach. We acquired hypocotyl cross-sections from tiled high-resolution images and extracted their information content using custom high-throughput image processing and segmentation. Coupled with automated cell type recognition through machine learning, we could establish a cellular-resolution atlas that reveals vascular morphodynamics during secondary growth, for example, equidistant phloem pole formation. DOI: http://dx.doi.org/10.7554/eLife.01567.001.
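The last step of such a pipeline, automated cell type recognition, can be sketched as a classifier over per-cell features extracted from the segmentation. The sketch below uses a nearest-centroid rule on two invented geometric features with synthetic data; the study trains a proper machine-learning classifier on real features from segmented hypocotyl sections, and the class means and spreads here are made up.

```python
import numpy as np

rng = np.random.default_rng(7)
# per-cell features: (area in um^2, circularity); two hypothetical cell types
type_a = rng.normal([400, 0.9], [40, 0.03], size=(50, 2))
type_b = rng.normal([150, 0.6], [20, 0.05], size=(50, 2))
X = np.vstack([type_a, type_b])
y = np.array([0] * 50 + [1] * 50)

# "training": one centroid per class in feature space
centroids = np.array([X[y == k].mean(axis=0) for k in (0, 1)])

def classify(features):
    """Assign a segmented cell to the nearest class centroid."""
    return int(np.argmin(np.linalg.norm(centroids - features, axis=1)))

acc = float(np.mean([classify(f) == label for f, label in zip(X, y)]))
print("training accuracy:", acc)
```

Applied to every segmented cell in a tiled cross-section, such a classifier is what turns raw segmentation output into the tissue-level atlas described above.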