972 results for Automated sorting system


Relevance: 30.00%

Publisher:

Abstract:

Scientific workflows provide the means to define, execute and reproduce computational experiments. However, reusing existing workflows still poses challenges for workflow designers. Workflows are often too large and too specific to reuse in their entirety, so reuse is more likely to happen for fragments of workflows. These fragments may be identified manually by users as sub-workflows, or detected automatically. In this paper we present the FragFlow approach, which detects workflow fragments automatically by analyzing existing workflow corpora with graph mining algorithms. FragFlow detects the most common workflow fragments, links them to the original workflows and visualizes them. We evaluate our approach by comparing FragFlow results against user-defined sub-workflows from three different corpora of the LONI Pipeline system. Based on this evaluation, we discuss how automated workflow fragment detection could facilitate workflow reuse.
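The abstract describes FragFlow's mining step only at a high level. As a rough illustration of the general idea, and not the actual FragFlow algorithm, the sketch below counts labeled two-step chains (the simplest kind of connected fragment) across a toy corpus of workflow graphs, reports those above a support threshold, and links each back to the workflows that contain it; the corpus, labels, and threshold are invented for the example, whereas real graph-mining algorithms handle arbitrary subgraphs.

```python
# A minimal, hypothetical sketch of frequent-fragment detection in workflow
# graphs. Real systems such as FragFlow use full graph-mining algorithms;
# here we only count labeled two-step chains (A -> B -> C) across a corpus,
# which is the simplest kind of workflow fragment.
from collections import Counter, defaultdict

# Each workflow is a list of directed edges between labeled steps (invented data).
corpus = {
    "wf1": [("Align", "Smooth"), ("Smooth", "Segment")],
    "wf2": [("Align", "Smooth"), ("Smooth", "Segment"), ("Segment", "Render")],
    "wf3": [("Align", "Smooth"), ("Smooth", "Render")],
}

def two_step_chains(edges):
    """Yield every labeled chain a -> b -> c present in one workflow."""
    successors = defaultdict(set)
    for src, dst in edges:
        successors[src].add(dst)
    for a, b in edges:
        for c in successors[b]:
            yield (a, b, c)

support = Counter()             # fragment -> number of workflows containing it
occurrences = defaultdict(set)  # fragment -> which workflows, for linking back
for name, edges in corpus.items():
    for chain in set(two_step_chains(edges)):
        support[chain] += 1
        occurrences[chain].add(name)

MIN_SUPPORT = 2  # report fragments found in at least two workflows
for chain, count in support.most_common():
    if count >= MIN_SUPPORT:
        print(" -> ".join(chain), f"(in {count} workflows: {sorted(occurrences[chain])})")
```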

Relevance: 30.00%

Publisher:

Abstract:

Automated Teller Machines (ATMs) are sensitive self-service systems that require significant investment in security and testing. ATM certifications are testing processes for machines that integrate software components from different vendors and are performed before their deployment for public use. This project originated from the need to optimize the certification process at an ATM manufacturing company. The process identifies compatibility problems between software components through testing. It comprises a huge number of manual user tasks, which makes the process expensive and error-prone. Moreover, it is not possible to fully automate the process, as it requires human intervention to manipulate ATM peripherals. The project presented important challenges for the development team. First, this is a critical process, as all ATM operations rely on the software under test. Second, the context of use of ATM applications is vastly different from that of ordinary software. Third, the useful lifetime of ATMs exceeds 15 years, and both new and old models need to be supported. Fourth, the know-how for efficient testing resides with individual specialists and is not explicitly documented. Fifth, the huge number of tests and their importance imply the need for user efficiency and accuracy. All these factors led us to conclude that, beyond the technical challenges, the usability of the intended software solution was critical to the project's success. This business context is the motivation of this Master's Thesis project. Our proposal focused on the development process applied: by combining user-centered design (UCD) with agile development, we ensured both the high priority given to usability and the early mitigation of the software development risks posed by the technology constraints. We performed 23 development iterations and delivered a working solution on time that met users' expectations. The project was evaluated through usability tests in which four real users participated in different tests in the real context of use. The results were positive according to several metrics: error rate, efficiency, effectiveness, and user satisfaction. We discuss the problems found, the benefits obtained, and the lessons learned in the process. Finally, we measured the expected project benefits by comparing the effort required by the current process and the new one (once the new software tool is adopted); the savings amount to 40% less effort (man-hours) per certification. Future work includes additional evaluation of product usability in a real scenario (with customers) and measuring the benefits in terms of quality improvement.

Relevance: 30.00%

Publisher:

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power, or throughput are heavily constrained. To produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for their slower clock frequencies and less efficient area utilization with respect to ASICs. As FPGAs become commonly used for scientific computation, designs grow ever larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them.

Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that exercise each of them, and extracts the system's statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with such control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values.

A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals in the system, introduces the noise terms of each group independently, and then combines the results. In this way the number of noise sources active at any given time is controlled and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible.

This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that reduce execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is then used to guide the optimization effort. Second, the incremental method builds on the fact that, although a given confidence level must be guaranteed for the final results of the optimization process, more relaxed levels, and hence considerably fewer simulation samples, suffice in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be reduced by factors of up to ×240 for small and medium-sized problems.

Finally, this work introduces HOPLITE, an automated, flexible, and modular quantization framework that implements the above techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
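As a rough, hypothetical illustration of this setting (not the thesis's algorithms or the HOPLITE API), the sketch below runs a classical greedy search over the fractional bits of a toy datapath, evaluating each candidate assignment with a Monte-Carlo error estimate and, in the spirit of the incremental method, starting with a relaxed sample count that is tightened when the search stalls; the datapath, error budget, and sample schedule are all invented for the example.

```python
# A hypothetical, much-simplified sketch of greedy word-length optimization
# driven by Monte-Carlo error estimates. It only illustrates the incremental
# idea: few simulation samples early on, more as decisions become borderline.
import random

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with the given number of fractional bits."""
    step = 2.0 ** -frac_bits
    return round(x / step) * step

def mc_error(frac_bits, n_samples):
    """Monte-Carlo estimate of mean squared output error for a toy datapath."""
    total = 0.0
    for _ in range(n_samples):
        a, b = random.uniform(-1, 1), random.uniform(-1, 1)
        exact = a * b + a
        approx = quantize(quantize(a, frac_bits[0]) * quantize(b, frac_bits[1]),
                          frac_bits[2]) + quantize(a, frac_bits[0])
        total += (exact - approx) ** 2
    return total / n_samples

ERROR_BUDGET = 1e-6
bits = [16, 16, 16]   # start from a generous word-length assignment
samples = 200         # relaxed confidence early in the search
while True:
    # Try shrinking each signal by one bit; keep only moves within budget.
    scored = []
    for i in range(len(bits)):
        trial = bits[:]
        trial[i] -= 1
        err = mc_error(trial, samples)
        if err <= ERROR_BUDGET:
            scored.append((err, trial))
    if scored:
        bits = min(scored)[1]  # shrink the least error-sensitive signal
    elif samples < 5000:
        samples *= 5           # decisions are borderline: tighten the estimate
    else:
        break                  # stuck even at the tight confidence level
print("word-lengths (fractional bits):", bits)
```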

Relevance: 30.00%

Publisher:

Abstract:

We report automated DNA sequencing in 16-channel microchips. A microchip prefilled with sieving matrix is aligned on a heating plate affixed to a movable platform. Samples are loaded into sample reservoirs by using an eight-tip pipetting device, and the chip is docked with an array of electrodes in the focal plane of a four-color scanning detection system. Under computer control, high voltage is applied to the appropriate reservoirs in a programmed sequence that injects and separates the DNA samples. An integrated four-color confocal fluorescent detector automatically scans all 16 channels. The system routinely yields more than 450 bases in 15 min in all 16 channels. In the best case using an automated base-calling program, 543 bases have been called at an accuracy of >99%. Separations, including automated chip loading and sample injection, normally are completed in less than 18 min. The advantages of DNA sequencing on capillary electrophoresis chips include uniform signal intensity and tolerance of high DNA template concentration. To understand the fundamentals of these unique features we developed a theoretical treatment of cross-channel chip injection that we call the differential concentration effect. We present experimental evidence consistent with the predictions of the theory.

Relevance: 30.00%

Publisher:

Abstract:

Several mutations that cause severe forms of the human disease autosomal dominant retinitis pigmentosa cluster in the C-terminal region of rhodopsin. Recent studies have implicated the C-terminal domain of rhodopsin in its trafficking on specialized post-Golgi membranes to the rod outer segment of the photoreceptor cell. Here we used synthetic peptides as competitive inhibitors of rhodopsin trafficking in the frog retinal cell-free system to delineate the potential regulatory sequence within the C terminus of rhodopsin and to model the effects of severe retinitis pigmentosa alleles on rhodopsin sorting. The rhodopsin C-terminal sequence QVS(A)PA is highly conserved among different species. Peptides that correspond to the C terminus of bovine (amino acids 324–348) and frog (amino acids 330–354) rhodopsin inhibited post-Golgi trafficking by 50% and 60%, respectively, and arrested newly synthesized rhodopsin in the trans-Golgi network. Peptides corresponding to the cytoplasmic loops of rhodopsin and other control peptides had no effect. When three naturally occurring mutations, Q344ter (which lacks the last five amino acids, QVAPA), V345M, and P347S, were introduced into the frog C-terminal peptide, the inhibitory activity of the peptides was no longer detectable. These observations suggest that the amino acids QVS(A)PA comprise a signal that is recognized by specific factors in the trans-Golgi network. A lack of recognition of this sequence, caused by mutations in the last five amino acids in autosomal dominant retinitis pigmentosa, most likely results in abnormal post-Golgi membrane formation and in an aberrant subcellular localization of rhodopsin.

Relevance: 30.00%

Publisher:

Abstract:

Neural connections in the adult central nervous system are highly precise. In the visual system, retinal ganglion cells send their axons to target neurons in the lateral geniculate nucleus (LGN) in such a way that axons originating from the two eyes terminate in adjacent but nonoverlapping eye-specific layers. During development, however, inputs from the two eyes are intermixed, and the adult pattern emerges gradually as axons from the two eyes sort out to form the layers. Experiments indicate that the sorting-out process, even though it occurs in utero in higher mammals and always before vision, requires retinal ganglion cell signaling; blocking retinal ganglion cell action potentials with tetrodotoxin prevents the formation of the layers. These action potentials are endogenously generated by the ganglion cells, which fire spontaneously and synchronously with each other, generating "waves" of activity that travel across the retina. Calcium imaging of the retina shows that the ganglion cells undergo correlated calcium bursting to generate the waves and that amacrine cells also participate in the correlated activity patterns. Physiological recordings from LGN neurons in vitro indicate that the quasiperiodic activity generated by the retinal ganglion cells is transmitted across the synapse between ganglion cells to drive target LGN neurons. These observations suggest that (i) a neural circuit within the immature retina is responsible for generating specific spatiotemporal patterns of neural activity; (ii) spontaneous activity generated in the retina is propagated across central synapses; and (iii) even before the photoreceptors are present, nerve cell function is essential for correct wiring of the visual system during early development. Since spontaneously generated activity is known to be present elsewhere in the developing CNS, this process of activity-dependent wiring could be used throughout the nervous system to help refine early sets of neural connections into their highly precise adult patterns.

Relevance: 30.00%

Publisher:

Abstract:

National Highway Traffic Safety Administration, Washington, D.C.

Relevance: 30.00%

Publisher:

Abstract:

Federal Highway Administration, Washington, D.C.

Relevance: 30.00%

Publisher:

Abstract:

Federal Highway Administration, Washington, D.C.

Relevance: 30.00%

Publisher:

Abstract:

Federal Transit Administration, Washington, D.C.

Relevance: 30.00%

Publisher:

Abstract:

Federal Highway Administration, Office of Safety and Traffic Operations Research and Development, McLean, Va.

Relevance: 30.00%

Publisher:

Abstract:

The c-fms gene encodes the receptor for macrophage colony-stimulating factor (CSF-1). The gene is expressed selectively in the macrophage and trophoblast cell lineages. Previous studies have indicated that sequences in intron 2 control transcript elongation in tissue-specific and regulated expression of c-fms. In humans, an alternative promoter was implicated in expression of the gene in trophoblasts. We show that in mice, c-fms transcripts in trophoblasts initiate from multiple points within the 2-kilobase (kb) region flanking the first coding exon. A reporter gene construct containing 3.5 kb of 5' flanking sequence and the downstream intron 2 directed expression of enhanced green fluorescent protein (EGFP) to both trophoblasts and macrophages. EGFP was detected in trophoblasts from the earliest stage of implantation examined, at embryonic day 7.5. During embryonic development, EGFP highlighted the large numbers of c-fms-positive macrophages, including those that originate from the yolk sac. In adult mice, EGFP localization was consistent with known F4/80-positive macrophage populations, including Langerhans cells of the skin, and permitted convenient sorting of isolated tissue macrophages from disaggregated tissue. Expression of EGFP in transgenic mice was dependent on intron 2, as no lines with detectable EGFP expression were obtained when either the whole of intron 2 or the conserved enhancer element FIRE (the Fms intronic regulatory element) was removed. We have therefore defined the elements required to generate myeloid- and trophoblast-specific transgenes, as well as a model system for the study of mononuclear phagocyte development and function. © 2003 by The American Society of Hematology.

Relevance: 30.00%

Publisher:

Abstract:

Good-quality concept lattice drawings are required to communicate logical structure effectively in Formal Concept Analysis. Data-analysis frameworks such as the Toscana System use manually arranged concept lattices to avoid the problem of automatically producing high-quality lattice drawings. This limits Toscana systems to a finite number of concept lattices that have been prepared a priori. To extend the use of Formal Concept Analysis, automated techniques are required that can produce high-quality concept lattice drawings on demand. This paper proposes and evaluates an adaptation of layer diagrams to improve automated lattice drawing. © Springer-Verlag Berlin Heidelberg 2006.
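The paper's specific adaptation is not detailed in the abstract, but the core step of any layered drawing, assigning each concept to a layer so that every Hasse-diagram edge points strictly downward, can be sketched briefly. The example below uses longest-path layering from the top element; the lattice and function names are invented for illustration and do not come from the paper.

```python
# A minimal sketch of the layer-assignment step behind layered lattice
# drawing: each concept gets the layer given by its longest path from the
# top element, so all cover-relation edges point strictly downward. This is
# the generic first step of layer diagrams, not the paper's adaptation.
from collections import defaultdict

# Hasse diagram (cover relation) of a small concept lattice, top to bottom.
covers = {
    "top": ["a", "b"],
    "a": ["c"],
    "b": ["c", "d"],
    "c": ["bottom"],
    "d": ["bottom"],
    "bottom": [],
}

def assign_layers(covers, top="top"):
    """Longest-path layering: layer(v) = 1 + max(layer of its upper covers)."""
    layer = {top: 0}
    changed = True
    while changed:  # repeated relaxation suffices for a small acyclic diagram
        changed = False
        for upper, lowers in covers.items():
            if upper not in layer:
                continue
            for v in lowers:
                if layer.get(v, -1) < layer[upper] + 1:
                    layer[v] = layer[upper] + 1
                    changed = True
    return layer

rows = defaultdict(list)
for node, depth in sorted(assign_layers(covers).items()):
    rows[depth].append(node)
for depth in sorted(rows):
    print(f"layer {depth}: {rows[depth]}")
```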