839 results for automated
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power, or throughput are heavily constrained. To produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, since it compensates for their slower clock frequencies and less efficient area utilization with respect to ASICs. As FPGAs become commonly used for scientific computation, designs grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them.

Techniques based on interval extensions have produced very accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art methodology based on Statistical Modified Affine Arithmetic (MAA) in order to model systems that contain control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that exercise each of them, and extracts the statistical moments of the system from these partial solutions. We use this technique to estimate both the dynamic range and the round-off noise in systems with such control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04% from simulation-based reference values.

A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources of each group separately, and finally combines the partial results. In this way the number of noise sources present at any given time is kept under control and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, so that the results remain as accurate as possible.

This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that attack the execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method builds on the fact that, although a given confidence interval must be guaranteed for the final results of the search, more relaxed confidence levels, and therefore considerably fewer samples per simulation, can be used in the initial stages, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be reduced by factors of up to ×240 for small/medium-sized problems.

Finally, this work introduces HOPLITE, an automated, flexible, and modular quantization framework that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new quantization methodologies. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions can be plugged into the existing interfaces to expand and improve the capabilities of HOPLITE.
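A minimal sketch of the sampling-cost reasoning behind the incremental method, assuming the usual normal-approximation confidence interval for a Monte-Carlo mean estimate; the function samples_required, the coefficient of variation, and the tolerance values are illustrative assumptions, not part of HOPLITE or the thesis code.

```python
import math
from statistics import NormalDist

def samples_required(confidence, rel_tolerance, cv=1.0):
    """Rough Monte-Carlo sample count so the mean estimate stays within
    rel_tolerance of the true value with the given two-sided confidence.
    cv is the assumed coefficient of variation of the simulated metric."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)  # two-sided quantile
    return math.ceil((z * cv / rel_tolerance) ** 2)

# Relaxed confidence early in the greedy search, tight only at the end:
for conf in (0.80, 0.95, 0.99):
    print(f"confidence {conf:.0%}: {samples_required(conf, 0.05)} samples")
```

With a 5% relative tolerance, relaxing the confidence from 99% to 80% cuts the per-evaluation sample count by roughly a factor of four, which is where the early-stage savings of the incremental method come from.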
Abstract:
Acknowledgements: We thank Andrew Spink (Noldus Information Technology) and the Blogging Birds team members Peter Kindness and Abdul Adeniyi for their valuable contributions to this paper. John Fryxell, Chris Thaxter and Arjun Amar provided valuable comments on an earlier version. The study was part of the Digital Conservation project of dot.rural, the University of Aberdeen's Digital Economy Research Hub, funded by RCUK (grant reference EP/G066051/1).
Abstract:
Acknowledgments: The authors thank H. H. Nguyen for his early development work on the BeeWatch interface; E. O'Mahony, I. Pearce, and R. Comont for identifying numerous photographed bumblebees; B. Darvill, D. Ewing, and G. Perkins for enabling our partnership with the Bumblebee Conservation Trust; and S. Blake for his investments in developing the NLG feedback. The study was part of the Digital Conservation project of dot.rural, the University of Aberdeen's Digital Economy Research Hub, funded by RCUK (grant reference EP/G066051/1).
Abstract:
We report automated DNA sequencing in 16-channel microchips. A microchip prefilled with sieving matrix is aligned on a heating plate affixed to a movable platform. Samples are loaded into sample reservoirs by using an eight-tip pipetting device, and the chip is docked with an array of electrodes in the focal plane of a four-color scanning detection system. Under computer control, high voltage is applied to the appropriate reservoirs in a programmed sequence that injects and separates the DNA samples. An integrated four-color confocal fluorescent detector automatically scans all 16 channels. The system routinely yields more than 450 bases in 15 min in all 16 channels. In the best case using an automated base-calling program, 543 bases have been called at an accuracy of >99%. Separations, including automated chip loading and sample injection, normally are completed in less than 18 min. The advantages of DNA sequencing on capillary electrophoresis chips include uniform signal intensity and tolerance of high DNA template concentration. To understand the fundamentals of these unique features we developed a theoretical treatment of cross-channel chip injection that we call the differential concentration effect. We present experimental evidence consistent with the predictions of the theory.
Abstract:
A de novo sequencing program for proteins is described that uses tandem MS data from electron capture dissociation and collisionally activated dissociation of electrosprayed protein ions. Computer automation is used to convert the fragment ion mass values derived from these spectra into the most probable protein sequence, without distinguishing Leu/Ile. Minimum human input is necessary for the data reduction and interpretation. No extra chemistry is necessary to distinguish N- and C-terminal fragments in the mass spectra, as this is determined from the electron capture dissociation data. With parts-per-million mass accuracy (now available by using higher field Fourier transform MS instruments), the complete sequences of ubiquitin (8.6 kDa) and melittin (2.8 kDa) were predicted correctly by the program. The data available also provided 91% of the cytochrome c (12.4 kDa) sequence (essentially complete except for the tandem MS-resistant region K13–V20 that contains the cyclic heme). Uncorrected mass values from a 6-T instrument still gave 86% of the sequence for ubiquitin, except for distinguishing Gln/Lys. Extensive sequencing of larger proteins should be possible by applying the algorithm to pieces of ≈10-kDa size, such as products of limited proteolysis.
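The core ladder-differencing step can be pictured with a short sketch: consecutive fragment-ion masses differ by exactly one residue mass, so matching each difference against the residue-mass table within a ppm tolerance reads the backbone sequence off directly, with Leu/Ile sharing a single mass and Gln/Lys separated by only 0.036 Da (hence the need for ppm-level accuracy). This is a simplified illustration, not the paper's program; the example ladder is invented.

```python
# Monoisotopic amino-acid residue masses in daltons (Leu/Ile coincide).
RESIDUE_MASSES = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "C": 103.00919, "L/I": 113.08406,
    "N": 114.04293, "D": 115.02694, "Q": 128.05858, "K": 128.09496,
    "E": 129.04259, "M": 131.04049, "H": 137.05891, "F": 147.06841,
    "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}

def read_ladder(masses, tol_ppm=10.0):
    """Turn a sorted ladder of fragment-ion masses into residue calls."""
    calls = []
    for lo, hi in zip(masses, masses[1:]):
        diff = hi - lo
        tol = tol_ppm * 1e-6 * hi  # absolute tolerance at this mass
        matches = [aa for aa, m in RESIDUE_MASSES.items()
                   if abs(diff - m) <= tol]
        calls.append(matches[0] if len(matches) == 1 else "?")
    return calls

# Invented four-rung ladder spelling ...-Ala-Gly-Leu/Ile-...:
print(read_ladder([500.000, 571.037, 628.059, 741.143]))  # ['A', 'G', 'L/I']
```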
Abstract:
The 5' noncoding region of poliovirus RNA contains an internal ribosome entry site (IRES) for cap-independent initiation of translation. Utilization of the IRES requires the participation of one or more cellular proteins that mediate events in the translation initiation reaction, but whose biochemical roles have not been defined. In this report, we identify a cellular RNA binding protein isolated from the ribosomal salt wash of uninfected HeLa cells that specifically binds to stem-loop IV, a domain located in the central part of the poliovirus IRES. The protein was isolated by specific RNA affinity chromatography, and 55% of its sequence was determined by automated liquid chromatography-tandem mass spectrometry. The sequence obtained matched that of poly(rC) binding protein 2 (PCBP2), previously identified as an RNA binding protein from human cells. PCBP2, as well as a related protein, PCBP1, was over-expressed in Escherichia coli after cloning the cDNAs into an expression plasmid to produce a histidine-tagged fusion protein. Specific interaction between recombinant PCBP2 and poliovirus stem-loop IV was demonstrated by RNA mobility shift analysis. The closely related PCBP1 showed no stable interaction with the RNA. Stem-loop IV RNA containing a three nucleotide insertion that abrogates translation activity and virus viability was unable to bind PCBP2.
Abstract:
Detection of loss of heterozygosity (LOH) by comparison of normal and tumor genotypes using PCR-based microsatellite loci provides considerable advantages over traditional Southern blotting-based approaches. However, current methodologies are limited by several factors, including the numbers of loci that can be evaluated for LOH in a single experiment, the discrimination of true alleles versus "stutter bands," and the use of radionucleotides in detecting PCR products. Here we describe methods for high throughput simultaneous assessment of LOH at multiple loci in human tumors; these methods rely on the detection of amplified microsatellite loci by fluorescence-based DNA sequencing technology. Data generated by this approach are processed by several computer software programs that enable the automated linear quantitation and calculation of allelic ratios, allowing rapid ascertainment of LOH. As a test of this approach, genotypes at a series of loci on chromosome 4 were determined for 58 carcinomas of the uterine cervix. The results underscore the efficacy, sensitivity, and remarkable reproducibility of this approach to LOH detection and provide subchromosomal localization of two regions of chromosome 4 commonly altered in cervical tumors.
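The allelic-ratio arithmetic at the heart of this kind of analysis is easy to sketch: the ratio of the two allele peak intensities in tumor DNA is normalized by the same ratio in matched normal DNA, and marked imbalance is scored as LOH. The function below is a generic illustration with invented peak areas and a commonly used 0.5 threshold, not necessarily the software or cut-off used in the paper.

```python
def allelic_imbalance(normal_a1, normal_a2, tumor_a1, tumor_a2):
    """Ratio of the tumor allele ratio to the matched normal allele ratio."""
    return (tumor_a1 / tumor_a2) / (normal_a1 / normal_a2)

def has_loh(ratio, threshold=0.5):
    # Fold the ratio so loss of either allele is detected symmetrically.
    folded = min(ratio, 1.0 / ratio)
    return folded <= threshold

# Invented fluorescence peak areas for one informative microsatellite locus:
r = allelic_imbalance(normal_a1=1040, normal_a2=980,
                      tumor_a1=310, tumor_a2=890)
print(round(r, 2), has_loh(r))  # 0.33 True -> scored as LOH at this locus
```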
Abstract:
Transmission of human immunodeficiency virus 1 (HIV-1) from an infected woman to her offspring during gestation and delivery was found to be influenced by the infant's major histocompatibility complex class II DRB1 alleles. Forty-six HIV-infected infants and 63 seroreverting infants, born with passively acquired anti-HIV antibodies but not becoming detectably infected, were typed by an automated nucleotide-sequence-based technique that uses low-resolution PCR to select either the simpler Taq or the more demanding T7 sequencing chemistry. One or more DR13 alleles, including DRB1*1301, 1302, and 1303, were found in 31.7% of seroreverting infants and 15.2% of those becoming HIV-infected [OR (odds ratio) = 2.6 (95% confidence interval 1.0-6.8); P = 0.048]. This association was influenced by ethnicity, being seen more strongly among the 80 Black and Hispanic children [OR = 4.3 (1.2-16.4); P = 0.023], with the most pronounced effect among Black infants, where 7 of 24 seroreverters inherited these alleles compared with none of 12 HIV-infected infants (Haldane OR = 12.3; P = 0.037). The previously recognized association of DR13 alleles with some situations of long-term nonprogression of HIV suggests that similar mechanisms may regulate both the occurrence of infection and disease progression after infection. Upon examining for residual associations, only the DR2 allele DRB1*1501 was associated with seroreversion in Caucasoid infants (OR = 24; P = 0.004). Among Caucasoids, the DRB1*03011 allele was positively associated with the occurrence of HIV infection (P = 0.03).
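The "Haldane OR" quoted for the Black infants refers to the Haldane-Anscombe continuity correction, which adds 0.5 to every cell of the 2×2 table so that an odds ratio remains defined when one cell (here, infected infants carrying DR13) is zero. The sketch below shows the correction with placeholder counts; the abstract does not spell out the full table used in the paper.

```python
def haldane_odds_ratio(a, b, c, d):
    """Odds ratio for the 2x2 table [[a, b], [c, d]] with 0.5 added to
    every cell (Haldane-Anscombe correction), so a zero cell is allowed."""
    return ((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5))

# Placeholder counts: allele+/outcome+, allele+/outcome-,
# allele-/outcome+, allele-/outcome- (not the paper's exact table).
print(round(haldane_odds_ratio(9, 11, 0, 15), 1))  # 25.6
```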
Abstract:
An automated oligonucleotide synthesizer has been developed that can simultaneously and rapidly synthesize up to 96 different oligonucleotides in a 96-well microtiter format using phosphoramidite synthesis chemistry. A modified 96-well plate is positioned under reagent valve banks, and appropriate reagents are delivered into individual wells containing the growing oligonucleotide chain, which is bound to a solid support. Each well has a filter bottom that enables the removal of spent reagents while retaining the solid support matrix. A seal design is employed to control synthesis environment and the entire instrument is automated via computer control. Synthesis cycle times for 96 couplings are < 11 min, allowing a plate of 96 20-mers to be synthesized in < 5 hr. Oligonucleotide synthesis quality is comparable to commercial machines, with average coupling efficiencies routinely > 98% across the entire 96-well plate. No significant well-to-well variations in synthesis quality have been observed in > 6000 oligonucleotides synthesized to date. The reduced reagent usage and increased capacity allow the overall synthesis cost to drop by at least a factor of 10. With the development of this instrument, it is now practical and cost-effective to synthesize thousands to tens of thousands of oligonucleotides.
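As a quick check of what a >98% average coupling efficiency implies, the standard stepwise-yield relation gives the expected full-length fraction as efficiency^(n-1), assuming the first base is pre-loaded on the solid support. The short sketch below works this out for a 20-mer; the formula is textbook phosphoramidite arithmetic, not taken from the paper.

```python
def full_length_yield(coupling_efficiency, length):
    """Expected full-length fraction after length - 1 couplings,
    assuming the first base is pre-attached to the solid support."""
    return coupling_efficiency ** (length - 1)

# A 20-mer at the reported 98% average coupling efficiency:
print(f"{full_length_yield(0.98, 20):.1%}")  # ~68.1% full-length product
```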
Abstract:
This document examines the transport value of Automated People Mover systems, moving from a general historical overview to the use of such systems in airports from the early 1960s to the present day. After analysing the typical configurations and characteristics of APM systems in the airport setting, it identifies which type of infrastructure has been favoured in European and worldwide airports. It then reviews the state of the art of the APM systems installed at the main airport of Paris and one of the busiest in the world, Charles de Gaulle International Airport, and of the system installed at Galileo Galilei International Airport in Pisa.
Abstract:
-tabletutorial- illustrates how Stata can be used to export statistical results and generate customized reports. Part 1 explains how results from Stata routines can be accessed and how they can be exported using the -file- command or a wrapper such as -mat2txt-. Part 2 shows how model estimation results can be archived using -estwrite- and how models can be tabulated and exported to LaTeX, MS Excel, or MS Word using -estout-. Part 3 illustrates how to set up automatic reports in LaTeX or MS Word. The tutorial is based on a talk given at CEPS/INSTEAD in Luxembourg in October 2008. After installation, type -help tabletutorial- to start the tutorial (in Stata 8, type -whelp tabletutorial-). The -mat2txt-, -estwrite-, and -estout- packages, also available from SSC, are required to run the examples.
Abstract:
This study was carried out to detect differences in locomotion and feeding behavior between lame (group L; n = 41; gait score ≥ 2.5) and non-lame (group C; n = 12; gait score ≤ 2) multiparous Holstein cows in a cross-sectional study design. A model for automatic lameness detection was created using data from accelerometers attached to the hind limbs and noseband sensors attached to the head. Each cow's gait was videotaped and scored on a 5-point scale before and after a period of 3 consecutive days of behavioral data recording. The mean value from 3 independent experienced observers was taken as the definitive gait score and considered the gold standard. For statistical analysis, data from the noseband sensor and from one of the two accelerometers per cow (randomly selected), on 2 of the 3 recording days (also randomly selected), were used. For comparisons between group L and group C, the t-test, the Aspin-Welch test, and the Wilcoxon test were used. The sensitivity and specificity of lameness detection were determined with logistic regression and ROC analysis. Compared with group C, group L had significantly lower eating and ruminating times; fewer eating chews, ruminating chews, and ruminating boluses; longer lying times and lying bout durations; lower standing times; fewer standing and walking bouts; fewer, slower, and shorter strides; and a lower walking speed. The model considering the number of standing bouts and walking speed was the best predictor of lameness, with a sensitivity of 90.2% and a specificity of 91.7%. The sensitivity and specificity of the lameness detection model were considered very high, even without the use of halter data. It was concluded that, under the conditions of the study farm, accelerometer data were suitable for accurately distinguishing between lame and non-lame dairy cows, even in cases of slight lameness with a gait score of 2.5.
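A minimal sketch of the final model described above: a logistic regression on the two retained predictors (number of standing bouts and walking speed), with sensitivity and specificity read off a ROC curve. The data are synthetic placeholders generated so that lame cows show fewer standing bouts and lower speeds; nothing here reproduces the study's measurements or its exact 90.2%/91.7% figures.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 200
# Synthetic labels and features: lame cows (label 1) tend to show fewer
# standing bouts and a lower walking speed.
lame = rng.integers(0, 2, n)
standing_bouts = rng.normal(14 - 4 * lame, 3, n)
walking_speed = rng.normal(1.3 - 0.4 * lame, 0.2, n)
X = np.column_stack([standing_bouts, walking_speed])

model = LogisticRegression().fit(X, lame)
prob = model.predict_proba(X)[:, 1]

fpr, tpr, thresholds = roc_curve(lame, prob)
best = np.argmax(tpr - fpr)  # cut-off maximizing Youden's J
print(f"AUC={roc_auc_score(lame, prob):.2f} "
      f"sensitivity={tpr[best]:.1%} specificity={1 - fpr[best]:.1%}")
```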