918 results for High-speed digital imaging
Abstract:
Most modern computer systems are processor-dominant: memory is treated as a slave element whose single major task is to serve the execution units' data requirements. This organization is based on the classical von Neumann computer model, proposed seven decades ago, in the 1950s. The model suffers from a substantial processor-memory bottleneck because of the huge disparity between processor and memory speeds. To address this problem, in this paper we propose a novel architecture and organization of processors and computers that attempts to provide a stronger match between the processing and memory elements in the system. The proposed model uses a memory-centric architecture, in which execution hardware is added to the memory code blocks, allowing them to perform instruction scheduling and execution, manage data requests and responses, and communicate directly with the data memory blocks without using registers. This organization allows concurrent execution of all threads, processes, or program segments that fit in memory at a given time. We therefore describe several possibilities for organizing the proposed memory-centric system with multiple merged data and logic-memory blocks, using a high-speed interconnection switching network.
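As a thought experiment only, the toy Python sketch below illustrates the organizational idea described above: merged logic-memory blocks that schedule and execute their own instructions and fetch operands from other blocks over a switching interconnect, with no register file. All class and method names are hypothetical; this is not a reproduction of the paper's actual design.

```python
# Toy sketch of a memory-centric organization: each merged logic-memory
# block owns a code segment plus execution hardware, and blocks exchange
# data requests over a switching interconnect. All names are hypothetical.

class Interconnect:
    """High-speed switching network: routes data requests between blocks."""
    def __init__(self):
        self.blocks = {}

    def register(self, block):
        self.blocks[block.block_id] = block

    def read(self, target_id, addr):
        return self.blocks[target_id].data[addr]

class LogicMemoryBlock:
    """A memory block with its own execution unit; operands are fetched
    directly from data memory blocks, with no register file in between."""
    def __init__(self, block_id, program, data, network):
        self.block_id = block_id
        self.program = program   # list of (op, dst, src_block, src_addr)
        self.data = data         # local data memory
        self.network = network
        network.register(self)

    def step(self):
        # Each block schedules and executes its own instructions, so all
        # program segments resident in memory can run concurrently.
        if not self.program:
            return False
        op, dst, src_block, src_addr = self.program.pop(0)
        operand = self.network.read(src_block, src_addr)
        if op == "add":
            self.data[dst] += operand
        return True

net = Interconnect()
b0 = LogicMemoryBlock(0, [("add", "x", 1, "y")], {"x": 1}, net)
b1 = LogicMemoryBlock(1, [], {"y": 41}, net)
while any(b.step() for b in (b0, b1)):
    pass
print(b0.data["x"])  # 42
```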
Abstract:
In applications such as cold forming, where hardmetals coated with ceramic films are widely used, repetitive mechanical contact induces Hertzian stresses and leads to fatigue failure. In this work, several ceramic coatings deposited by physical vapour deposition on different grades of cemented carbide and on a powder-metallurgy high-speed steel are investigated to evaluate their respective contact response and fatigue behaviour. The experimental work includes characterization of the systems by scratch and nanoindentation testing and evaluation of the spherical-indentation stress-strain curves of the substrates, both uncoated and coated, paying special attention to determining the critical contact stresses associated with plastic deformation and with the appearance of circular cracks on the coated surface. This study is followed by numerous fatigue tests at loads below those identified as critical under monotonic loading, for numbers of cycles between 1,000 and 1,000,000. The experimental results indicate that the ceramic films do not appear to play a relevant role in the onset of plastic yielding, the overall plastic deformation being controlled by deformation of the substrate. Nevertheless, at high indentation stresses within the plastic regime, circular cracks do appear in the ceramic coatings, and their appearance is sensitive to contact fatigue. This mechanical analysis is complemented by a detailed inspection of the damage generated at depth and at the surface.
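For readers unfamiliar with the terminology, the indentation stresses and strains referred to above follow the standard Hertzian relations for spherical contact (textbook contact mechanics, not formulas quoted from this work):

```latex
% Hertzian spherical contact: load P, indenter radius R, contact radius a.
% Effective modulus from indenter (i) and specimen (s) properties:
\[
  \frac{1}{E^{*}} = \frac{1-\nu_i^{2}}{E_i} + \frac{1-\nu_s^{2}}{E_s},
  \qquad
  a^{3} = \frac{3PR}{4E^{*}}
\]
% Indentation stress (mean contact pressure), indentation strain, and
% peak pressure at the contact centre:
\[
  p_m = \frac{P}{\pi a^{2}}, \qquad
  \varepsilon_{\mathrm{ind}} = \frac{a}{R}, \qquad
  p_0 = \frac{3}{2}\,p_m
\]
```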
Abstract:
The intensive, prolonged use of high-performance computers to run computationally intensive applications, together with the large number of elements that compose them, drastically increases the probability of failures during operation. The goal of this work is to solve the fault-tolerance problem for high-performance interconnection networks, starting from the design of fault-tolerant routing policies. We aim to handle a given number of link and node failures, considering their impact factors and probability of occurrence. To do so, we exploit the redundancy of existing communication paths, building on adaptive routing approaches capable of fulfilling the four phases of fault tolerance: error detection, damage containment, error recovery, and fault treatment with service continuity. Experiments show a performance degradation of less than 5%. Future work will address the loss of in-transit information.
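A minimal sketch of the path-redundancy idea the abstract relies on: once error detection has marked links as failed, an adaptive policy recovers by routing over any surviving alternative path. The graph, names, and BFS fallback below are illustrative assumptions, not the routing policy designed in this work.

```python
# Toy sketch of fault-tolerant routing by path redundancy: on link
# failure, fall back to any surviving alternative path.
from collections import deque

def shortest_path(nodes, links, failed, src, dst):
    """BFS over surviving links (error detection is assumed to have
    already marked the 'failed' links -- the containment phase)."""
    alive = {l for l in links if l not in failed}
    adj = {n: [] for n in nodes}
    for a, b in alive:
        adj[a].append(b)
        adj[b].append(a)
    prev, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:  # recovery succeeded: rebuild the path
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None  # dst unreachable: service continuity not possible

nodes = range(4)
links = {(0, 1), (1, 3), (0, 2), (2, 3)}
print(shortest_path(nodes, links, failed={(1, 3)}, src=0, dst=3))  # [0, 2, 3]
```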
Abstract:
Transmission electron microscopy has been employed for the rapid detection of mycoplasma in sera and cell cultures. High-speed centrifugation of sera, or low-speed centrifugation of cell debris, followed by negative staining of the resuspended pellet, detected mycoplasma contamination more frequently than a culture method followed by direct fluorescence (DAPI), which was used as a control procedure. The appearance of the mycoplasma cell border and contents gives some indication of particle viability.
Abstract:
This paper analyzes, both theoretically and empirically, the relationship between distance and the frequency of scheduled transportation services. We study the interaction between a monopoly firm providing high-speed scheduled service and personal transportation (i.e., car). Most interestingly, the carrier chooses to increase the frequency of service on longer routes when competing with personal transportation because, by providing a higher frequency (at extra cost), it can also charge higher fares that boost its profits. However, when driving is not a relevant option, frequency of service decreases for longer flights, consistent with prior studies. An empirical application of our analysis to the European airline industry confirms the predictions of our theoretical model.
Abstract:
The aeronautical environment is, today, one of the most challenging scenarios in which to establish reliable communication links. This is mainly due to the high speeds at which aircraft travel, which cause severe degradation of system performance unless the channel is estimated continuously. Moreover, the aeronautical environment is susceptible to many other effects that degrade the signal, such as diffraction, reflection, etc. For this reason, this project studies two typical flight scenarios: arrival (landing) and en-route flight. In the en-route scenario aircraft travel at more than twice the speed of the arrival scenario, which makes the effect of a larger Doppler shift observable. The study uses a multicarrier system with overlapping subchannels, OFDM, initially taking typical parameters of WiMAX technology, which are then varied with the aim of improving system performance.
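To make the Doppler argument concrete, here is a back-of-the-envelope sketch comparing the maximum Doppler shift in the two scenarios against an OFDM subcarrier spacing. The carrier frequency, aircraft speeds, and WiMAX-like spacing below are illustrative assumptions, not the project's actual parameters.

```python
# Back-of-the-envelope Doppler check for the two flight scenarios.
# All parameter values are illustrative assumptions (WiMAX-like OFDM).

C = 3e8                        # speed of light, m/s
FC = 5.1e9                     # assumed carrier frequency, Hz
SUBCARRIER_SPACING = 10.94e3   # Hz, a typical mobile-WiMAX value

for scenario, speed_kmh in [("arrival", 400), ("en route", 900)]:
    v = speed_kmh / 3.6                 # m/s
    f_d = v * FC / C                    # maximum Doppler shift, Hz
    ratio = f_d / SUBCARRIER_SPACING    # inter-carrier interference proxy
    print(f"{scenario:9s}: f_d = {f_d:6.0f} Hz "
          f"({100 * ratio:.1f}% of subcarrier spacing)")
```

The larger the Doppler shift relative to the subcarrier spacing, the greater the inter-carrier interference, which is why the en-route scenario stresses the channel estimator more than the arrival scenario.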
Abstract:
Report for the scientific sojourn carried out at the Université Catholique de Louvain, Belgium, from March until June 2007. In the first part, the impact of important geometrical parameters, such as source and drain thickness, fin spacing, and spacer width, on the parasitic fringing-capacitance component of multiple-gate field-effect transistors (MuGFETs) is analyzed in depth using finite element simulations. Several architectures, such as single-gate, FinFET (double-gate), and triple-gate devices represented by Pi-gate MOSFETs, are simulated and compared in terms of channel and fringing capacitances for the same occupied die area. The simulations highlight the great impact of reducing the spacing between fins in MuGFETs, and the trade-off between the reduction of parasitic source and drain resistances and the increase of fringing capacitances when Selective Epitaxial Growth (SEG) technology is introduced. The impact of these technological solutions on the transistor cut-off frequencies is also discussed. The second part deals with the effect of volume inversion (VI) on the capacitances of undoped double-gate (DG) MOSFETs. For that purpose, we present simulation results for the capacitances of undoped DG MOSFETs using an explicit, analytical compact model. The model demonstrates that the transition from the volume-inversion regime to dual-gate behaviour is well captured, and it shows an accurate dependence on the silicon layer thickness, consistent with two-dimensional numerical simulations, for both thin and thick silicon films. Whereas the current drive and transconductance are enhanced in the volume-inversion regime, our results show that the intrinsic capacitances present higher values as well, which may limit the high-speed (delay time) behaviour of DG MOSFETs under volume inversion.
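The connection between fringing capacitance and cut-off frequency mentioned above is captured by the standard first-order expression (a textbook relation, not a formula taken from this report):

```latex
% First-order cut-off frequency of a FET: the fringing term C_{fr} adds
% directly to the intrinsic gate capacitances and degrades f_T.
\[
  f_T \simeq \frac{g_m}{2\pi \left( C_{gs} + C_{gd} + C_{fr} \right)}
\]
```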
Abstract:
This paper presents a theoretical and empirical analysis of the relationship between the frequency of scheduled transportation services and their substitutability with personal transportation (using distance as a proxy). We study the interaction between a monopoly firm providing a high-speed scheduled service and private transportation (i.e., car). Interestingly, the carrier chooses to increase the frequency of service on longer routes when competing with personal transportation because, by providing higher frequency (at extra cost), it can also charge higher fares which can boost its profits. However, in line with the results of earlier studies, frequency decreases for longer flights when driving is not a viable option. An empirical application of our analysis to the European airline industry confirms the predictions of our theoretical model.
Keywords: short-haul routes; long-haul routes; flight frequency; distance
JEL classification: L13; L2; L93
Abstract:
The 30-million-m³ rockslide that occurred on the east face of Turtle Mountain in the Crowsnest Pass area (Alberta) in 1903 is one of the most famous landslides in the world. In this paper, the structural features of the southern part of Turtle Mountain are investigated in order to understand the present-day scar morphology and to identify the most important failure mechanisms. The structural features were mapped using a high-resolution digital elevation model (DEM) to obtain a broad overview of the relevant structures. At the same time, a field survey was carried out, and small-scale fractures were analyzed in different parts of southern Turtle Mountain to confirm the DEM analysis. The results allow six main discontinuity sets that influence the Turtle Mountain morphology to be identified. These discontinuity sets were then used to identify the potential failure mechanisms affecting the Third Peak and South Peak areas.
Abstract:
High-performance computing is a rapidly evolving area of computer science that attempts to solve complicated computational problems by combining computational nodes connected through high-speed networks. This work concentrates on the problems that appear in such networks, focusing especially on deadlock, which can decrease the efficiency of communication or even destroy the balance and paralyze the network. The goal of this work is deadlock avoidance through the use of virtual channels in the switches of the network where the problem appears. Deadlock avoidance ensures that no data will be lost inside the network, at the cost of increased latency for the served packets, due to the extra computation the switches must perform to apply the policy.
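As an illustration of the general technique, the sketch below shows one classic virtual-channel scheme (the "dateline" scheme on a unidirectional ring); it is a hedged example of how virtual channels break cyclic channel dependencies, not the specific policy implemented in this work.

```python
# Toy sketch of deadlock avoidance with virtual channels on a ring
# network: the classic "dateline" scheme. Packets start on VC0 and move
# to the escape channel VC1 once they cross the wrap-around link, which
# breaks the cyclic channel dependency that causes deadlock.

N = 4  # ring of N switches, unidirectional links i -> (i + 1) % N

def route(src, dst):
    """Return the hops (node, next_node, virtual channel) of a packet."""
    hops, node, vc = [], src, 0
    while node != dst:
        nxt = (node + 1) % N
        if nxt <= node:  # this hop crosses the dateline link (N-1 -> 0)
            vc = 1       # switch to the escape channel for good
        hops.append((node, nxt, f"VC{vc}"))
        node = nxt
    return hops

# A packet that wraps around the ring changes channel at the dateline:
print(route(2, 1))  # [(2, 3, 'VC0'), (3, 0, 'VC1'), (0, 1, 'VC1')]
```

Because packets on VC1 never re-enter VC0, the extended channel-dependency graph is acyclic, so no cycle of packets can block forever; the price is the extra channel-selection logic in each switch, matching the latency trade-off noted above.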
Abstract:
PURPOSE: This study investigated maximal cardiometabolic response while running on a lower-body positive-pressure (antigravity) treadmill (AG), which reduces body weight (BW) and impact. The AG is used in the rehabilitation of injuries but could have potential for high-speed running if workload is maximally elevated. METHODS: Fourteen trained (nine male) runners (age 27 ± 5 yr; 10-km personal best, 38.1 ± 1.1 min) completed a treadmill incremental test (CON) to measure aerobic capacity and heart rate (V̇O2max and HRmax). They completed four identical tests (48 h apart, randomized order) on the AG at BW of 100%, 95%, 90%, and 85% (AG100 to AG85). Stride length and rate were measured at peak velocities (Vpeak). RESULTS: V̇O2max (mL·kg⁻¹·min⁻¹) was similar across all conditions (men: CON = 66.6 (3.0), AG100 = 65.6 (3.8), AG95 = 65.0 (5.4), AG90 = 65.6 (4.5), and AG85 = 65.0 (4.8); women: CON = 63.0 (4.6), AG100 = 61.4 (4.3), AG95 = 60.7 (4.8), AG90 = 61.4 (3.3), and AG85 = 62.8 (3.9)). Similar results were found for HRmax, except for AG85 in men and AG100 and AG90 in women, which were lower than CON. Vpeak (km·h⁻¹) in men was 19.7 (0.9) in CON, which was lower than in every other condition: AG100 = 21.0 (1.9) (P < 0.05), AG95 = 21.4 (1.8) (P < 0.01), AG90 = 22.3 (2.1) (P < 0.01), and AG85 = 22.6 (1.6) (P < 0.001). In women, Vpeak (km·h⁻¹) was similar between CON (17.8 (1.1)) and AG100 (19.3 (1.0)) but higher at AG95 = 19.5 (0.4) (P < 0.05), AG90 = 19.5 (0.8) (P < 0.05), and AG85 = 21.2 (0.9) (P < 0.01). CONCLUSIONS: The AG can be used at maximal exercise intensities at BW of 85% to 95%, reaching faster running speeds than normally feasible. The AG could be used for overspeed running programs at the highest metabolic response levels.
Abstract:
The basal sliding surfaces of large rockslides are often composed of several surfaces and possess a complex geometry. The exact morphology and three-dimensional location of the sliding surface generally remain unknown, in spite of extensive field and subsurface investigations, such as those at the Åknes rockslide (western Norway). This knowledge is crucial for volume estimation, failure-mechanism analysis, and numerical slope stability modeling. This paper focuses on the geomorphological characterization of the basal sliding surface of a postglacial rockslide scar in the vicinity of Åknes. The scar displays a stepped basal sliding surface formed by dip slopes of the gneiss foliation linked together by steeply dipping fractures. A detailed characterization of the rockslide scar by means of high-resolution digital elevation models permits statistical parameters of dip angle, spacing, persistence, and roughness of the foliation surfaces and step fractures to be obtained. These characteristics are used for stochastic simulations of stepped basal sliding surfaces at the Åknes rockslide, and the findings are compared with previous models based on geophysical investigations. The study discusses the investigation of rockslide scars and rock outcrops for a better understanding of potential rockslides, and it identifies possible basal sliding surface locations, a valuable input for volume estimates, the design and siting of monitoring instrumentation, and numerical slope stability modeling.
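A toy sketch of the kind of stochastic construction described above: a 2-D stepped profile assembled by alternating foliation-parallel segments and steep step fractures whose dips and lengths are sampled from fitted distributions. All distribution parameters below are placeholders, not the statistics obtained at Åknes.

```python
# Toy 2-D stochastic simulation of a stepped basal sliding surface:
# foliation-parallel segments (gentle dip) alternate with short, steeply
# dipping step fractures. Distribution parameters are placeholders.
import math
import random

random.seed(1)

def simulate_profile(n_segments=6):
    x, z, points = 0.0, 0.0, [(0.0, 0.0)]
    for _ in range(n_segments):
        # Foliation surface: dip and persistence (trace length) sampled.
        dip = math.radians(random.gauss(30, 5))   # degrees -> radians
        length = max(random.gauss(20, 5), 1.0)    # m
        x += length * math.cos(dip)
        z -= length * math.sin(dip)
        points.append((x, z))
        # Step fracture: short, steeply dipping connector.
        dip = math.radians(random.gauss(70, 8))
        length = max(random.gauss(5, 2), 0.5)
        x += length * math.cos(dip)
        z -= length * math.sin(dip)
        points.append((x, z))
    return points

for px, pz in simulate_profile():
    print(f"x = {px:6.1f} m, z = {pz:7.1f} m")
```

Repeating such simulations many times yields an ensemble of plausible sliding-surface geometries, from which volume estimates and their spread can be derived.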
Abstract:
Research project carried out during a stay at Stanford University, USA, between 2007 and 2009. In recent years there has been spectacular progress in the technology applied to genome and proteome analysis (microarrays, real-time quantitative PCR, two-dimensional electrophoresis, mass spectrometry, etc.), allowing the resolution of complex samples and the quantitative detection of different genes and proteins in a single experiment. The importance of these techniques also lies in their capacity to identify potential therapeutic targets and candidate drugs, as well as their application in the design and development of new diagnostic tools. Their applicability, however, is limited by the level to which the tissue can be dissected. Although they provide valuable information on the expression of genes and proteins involved in a disease or in response to a drug, for example, they give no in situ information, no spatial information or temporal resolution, and no information from in vivo systems. The aim of this project is to develop and validate a new high-resolution, ultrasensitive, easy-to-use microscope that allows both the detection of metabolites, genes, or proteins in the living cell in real time and the study of their function, thereby providing a detailed description of the protein/gene interactions that take place within the cell. This microscope will be a sensitive, selective, fast, robust, automated, and moderately priced instrument that will perform genetic, medical, chemical, and pharmaceutical high-throughput screening (for diagnostic applications and for the identification and selection of active compounds) more efficiently. To achieve these goals the microscope will make use of the newest technologies: 1) optical microscopy and imaging, to improve spatial visualization and image sensitivity; 2) new detection methods, including the latest advances in nanoparticles; 3) computational methods to acquire, store, and process the images obtained.
Abstract:
The widespread use of digital imaging devices for surveillance (CCTV) and entertainment (e.g., mobile phones, compact cameras) has increased the number of images recorded and the opportunities to consider those images as traces or documentation of criminal activity. The forensic science literature focuses almost exclusively on technical issues and evidence assessment [1]; the earlier steps of the investigation phase have been neglected and must be considered. This article is the first comprehensive description of a methodology for event reconstruction using images. The formal methodology was conceptualised from practical experience and applied to different contexts and case studies to test and refine it. Based on this practical analysis, we propose a systematic approach that includes a preliminary analysis followed by four main steps. These steps form a sequence in which the results of each step rely on the previous one; however, the methodology is not linear but a cyclic, iterative progression toward knowledge about an event. The preliminary analysis is a pre-evaluation phase in which the potential relevance of images is assessed. In the first step, images are detected and collected as pertinent trace material; the second step involves organising them and assessing their quality and informative potential. The third step comprises reconstruction using clues about space, time, and actions. Finally, in the fourth step, the images are evaluated and selected as evidence. These steps are described and illustrated using practical examples. The paper outlines how images elicit information about persons, objects, space, time, and actions throughout the investigation process to reconstruct an event step by step. We emphasise the hypothetico-deductive reasoning framework, which demonstrates the contribution of images to generating, refining, or eliminating propositions or hypotheses. This methodology provides a sound basis for extending the use of images as evidence and, more generally, as clues in investigation and crime reconstruction processes.
Abstract:
The unstable rock slope Stampa, above the village of Flåm, Norway, shows signs of both active and postglacial gravitational deformation over an area of 11 km². Detailed structural field mapping, annual differential Global Navigation Satellite System (GNSS) surveys, and geomorphic analysis of high-resolution digital elevation models based on airborne and terrestrial laser scanning indicate that slope deformation is complex and spatially variable. Numerical modeling was used to investigate the influence of former rockslide activity and to better understand the failure mechanism. Field observations, kinematic analysis, and numerical modeling indicate strong structural control of the unstable area. Based on the integration of these analyses, we propose that the failure mechanism is dominated by (1) a toppling component, (2) subsiding bilinear wedge failure, and (3) planar sliding along the foliation at the toe of the unstable slope. Using differential GNSS, 18 points were measured annually over a period of up to 6 years. Two of these points show an average movement of around 10 mm/year; they are located at the frontal cliff, on almost completely detached blocks with volumes smaller than 300,000 m³. Large fractures indicate deep-seated gravitational deformation of volumes reaching several hundred million m³, but movement rates in these areas are below 2 mm/year. Two lobes of prehistoric rock slope failure deposits were dated with terrestrial cosmogenic nuclides. While the northern lobe gave an average age of 4,300 years BP, the southern one yielded two different ages (2,400 and 12,000 years BP), which most likely represent multiple rockfall events. This reflects the currently observable deformation style, with unstable blocks in the northern part, between Joasete and Furekamben, and no distinct blocks but high rockfall activity around Ramnanosi in the south. A relative susceptibility analysis concludes that small collapses of blocks along the frontal cliff will be more frequent. Larger collapses of free-standing blocks along the cliff with volumes > 100,000 m³, large enough to reach the fjord, cannot be ruled out. A larger collapse involving several million m³ is presently considered very unlikely.