899 results for compression bandages


Relevance:

20.00%

Publisher:

Abstract:

In this work, a new methodology is devised to obtain the fracture properties of nuclear fuel cladding in the hoop direction. The proposed method combines ring compression tests and a finite element method that includes a damage model based on cohesive crack theory, applied to unirradiated hydrogen-charged ZIRLO™ nuclear fuel cladding. Samples with hydrogen concentrations from 0 to 2000 ppm were tested at 20 °C. Agreement between the finite element simulations and the experimental results is excellent in all cases. The parameters of the cohesive crack model are obtained from the simulations, and the fracture energy and fracture toughness are calculated in turn. The evolution of fracture toughness in the hoop direction with hydrogen concentration (up to 2000 ppm) is reported for the first time for ZIRLO™ cladding. Additionally, the fracture micromechanisms are examined as a function of hydrogen concentration. In the as-received samples, the micromechanism is the nucleation, growth and coalescence of voids, whereas in the samples with 2000 ppm a combination of quasi-cleavage and plastic deformation, along with secondary microcracking, is observed.
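The abstract does not give the conversion it uses, but in linear elastic fracture mechanics a cohesive fracture energy is commonly related to fracture toughness through Irwin's relation. A minimal sketch, assuming plane strain and purely illustrative material values (not taken from the paper):

```python
import math

def toughness_from_fracture_energy(G_f, E, nu, plane_strain=True):
    """Irwin's relation: K_Ic = sqrt(E' * G_f), with
    E' = E / (1 - nu^2) in plane strain and E' = E in plane stress."""
    E_eff = E / (1.0 - nu**2) if plane_strain else E
    return math.sqrt(E_eff * G_f)

# Illustrative values only (not from the paper):
E = 99e9      # Young's modulus of a Zr alloy, Pa
nu = 0.37     # Poisson's ratio
G_f = 20e3    # cohesive fracture energy, J/m^2
K_Ic = toughness_from_fracture_energy(G_f, E, nu)
print(f"K_Ic ~ {K_Ic / 1e6:.1f} MPa*m^0.5")
```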

Relevance:

20.00%

Publisher:

Abstract:

The cyclic compression of several granular systems has been simulated with a molecular dynamics code. All the samples consisted of two-dimensional, soft, frictionless, equal-sized particles that were initially arranged on a square lattice and were compressed by randomly generated irregular walls. The compression protocols can be described by some control variables (the volume or the external force acting on the walls) and by some dimensionless factors that relate stiffness, density, diameter, damping ratio and water surface tension to the external forces, displacements and periods. Each protocol, which is associated with a dynamic process, results in an arrangement with its own macroscopic features: volume (or packing ratio), coordination number and stress; the differences between packings can be highly significant. The statistical distribution of the force-moment state of the particles (i.e. the equivalent average stress multiplied by the volume) is analyzed. Despite the lack of a statistical-mechanics framework specific to these protocols, the obtained distributions of mean and relative deviatoric force-moment are characterized. The nature of these distributions and their relation to the specific protocols are then discussed.
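Under the usual granular-mechanics convention (the Love–Weber formula), the per-particle force-moment named here is the sum of dyadic products of contact-point vectors and contact forces. A minimal 2D sketch of how its mean and deviatoric parts could be extracted, with made-up contact data:

```python
import numpy as np

def force_moment_2d(contacts):
    """Force-moment tensor of one particle: sum over contacts of the
    outer product r_c (x) f_c, where r_c points from the particle centre
    to the contact and f_c is the contact force."""
    S = np.zeros((2, 2))
    for r_c, f_c in contacts:
        S += np.outer(r_c, f_c)
    return S

def mean_and_deviatoric(S):
    """Split a 2x2 force-moment tensor into its isotropic (mean) part
    and the Frobenius norm of its deviatoric part."""
    p = np.trace(S) / 2.0
    dev = S - p * np.eye(2)
    return p, np.sqrt(np.sum(dev * dev))

# Hypothetical particle with three contacts (position, force):
contacts = [
    (np.array([ 0.5,  0.0]), np.array([-1.0,  0.1])),
    (np.array([-0.3,  0.4]), np.array([ 0.6, -0.8])),
    (np.array([-0.2, -0.4]), np.array([ 0.4,  0.7])),
]
p, q = mean_and_deviatoric(force_moment_2d(contacts))
print(f"mean force-moment: {p:.3f}, deviatoric norm: {q:.3f}")
```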

Relevance:

20.00%

Publisher:

Abstract:

In many applications (such as social or sensor networks) the information generated can be represented as a continuous stream of RDF items, where each item describes an application event (a social network post, a sensor measurement, etc.). In this paper we focus on compressing RDF streams. In particular, we propose an approach for lossless RDF stream compression, named RDSZ (RDF Differential Stream compressor based on Zlib). This approach takes advantage of the structural similarities among items in a stream by combining a differential item encoding mechanism with the general-purpose stream compressor Zlib. Empirical evaluation using several RDF stream datasets shows that this combination produces gains in compression ratios with respect to using Zlib alone.
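RDSZ's exact encoding is not reproduced here, but the core idea — exploit structural repetition across stream items before handing the bytes to Zlib — can be sketched. The pattern/bindings split below is a hypothetical simplification, not the published format:

```python
import zlib

class DifferentialStreamCompressor:
    """Sketch of differential item encoding + Zlib, in the spirit of RDSZ.
    Each item is split into a structural pattern (its shape with values
    blanked out) and the concrete bindings; a repeated pattern is sent as
    a small cache index instead of the full text."""

    def __init__(self):
        self.patterns = {}            # pattern -> cache id
        self.z = zlib.compressobj()

    def encode(self, pattern: str, bindings: str) -> bytes:
        if pattern in self.patterns:  # pattern seen before: send its id
            payload = f"#{self.patterns[pattern]}|{bindings}"
        else:                         # new pattern: send it in full
            self.patterns[pattern] = len(self.patterns)
            payload = f"P{pattern}|{bindings}"
        data = self.z.compress(payload.encode("utf-8"))
        return data + self.z.flush(zlib.Z_SYNC_FLUSH)  # keep item boundaries

c = DifferentialStreamCompressor()
out1 = c.encode("<?s> :temperature ?v", "s=sensor1,v=21.5")
out2 = c.encode("<?s> :temperature ?v", "s=sensor2,v=19.8")  # cached pattern
print(len(out1), len(out2))
```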

Relevance:

20.00%

Publisher:

Abstract:

LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber–Fechner law to encode the error between colour component predictions and the actual values of those components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels, and the error between the predictions and the actual values is then logarithmically quantised. The main advantage of LHE is that, although it is capable of low-bit-rate encoding with high-quality results in terms of peak signal-to-noise ratio (PSNR) and of full-reference (FSIM) and no-reference (blind/referenceless image spatial quality evaluator) image quality metrics, its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, in which the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bit-per-pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG and slightly lower quality than JPEG 2000, while being more computationally efficient.
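The published LHE quantiser tables are not reproduced here; the sketch below only illustrates the underlying idea of logarithmic "hops": prediction residuals are mapped to a small set of geometrically spaced levels, so quantisation is finer near zero, matching Weber–Fechner perception. The hop values are illustrative:

```python
import numpy as np

def log_hops(h1=4.0, ratio=2.0, n=4):
    """Geometrically spaced hop magnitudes, e.g. 4, 8, 16, 32,
    mirrored for negative residuals, plus a zero hop."""
    hops = h1 * ratio ** np.arange(n)
    return np.concatenate([-hops[::-1], [0.0], hops])

def quantise_residual(prediction, actual, hops):
    """Encode a pixel as the index of the hop closest to its residual."""
    residual = float(actual) - float(prediction)
    idx = int(np.argmin(np.abs(hops - residual)))
    return idx, prediction + hops[idx]   # (code, reconstructed value)

hops = log_hops()
# Prediction from neighbours (a made-up average) vs. the actual pixel:
code, recon = quantise_residual(prediction=120.0, actual=131.0, hops=hops)
print(code, recon)   # small residuals land on the fine hops near zero
```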

Relevance:

20.00%

Publisher:

Abstract:

Optical fiber sensors are a technology that has matured in recent years; however, further development is needed for applications to natural materials such as rocks, which, being complex aggregates, may contain mineral particles and fractures much larger than the electrical strain gauges traditionally used to measure strain in laboratory tests, so that the results obtained may not be representative. In this work, large-area, curved strain sensors based on fiber Bragg gratings (FBG) were designed, manufactured and tested, with the aim of obtaining representative measurements on rocks containing minerals and structures of diverse compositions, sizes and orientations. The manufacturing process of the transducer, its mechanical characterization, its calibration and its evaluation in uniaxial compression tests on rock samples are presented. To verify the efficiency of the strain transmission from the rock to the bonded sensor, the strain transfer was also analyzed, including the effects of the adhesive, the sample and the transducer. The experimental results indicate that the developed sensor provides reliable strain measurement and transfer, an advance needed for use on rocks and other heterogeneous materials. This points to an interesting perspective for applications on irregular surfaces, since the size and shape of the sensing area can be enlarged at will; it also makes more reliable results possible on small samples, and suggests the sensor's suitability for field works, where traditional electrical systems have limitations.
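The abstract does not state the demodulation used; the standard FBG relation between Bragg wavelength shift and strain is a reasonable reference point. A minimal sketch, assuming a typical effective photo-elastic coefficient for silica fibre and a hypothetical reading:

```python
def strain_from_bragg_shift(delta_lambda_nm, lambda_b_nm=1550.0, p_e=0.22):
    """Standard FBG relation (temperature effects ignored):
    delta_lambda / lambda_B = (1 - p_e) * strain,
    with p_e the effective photo-elastic coefficient (~0.22 for silica)."""
    return (delta_lambda_nm / lambda_b_nm) / (1.0 - p_e)

# Hypothetical reading during a uniaxial compression test:
eps = strain_from_bragg_shift(delta_lambda_nm=-1.2)  # compression -> negative shift
print(f"strain ~ {eps * 1e6:.0f} microstrain")
```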

Relevance:

20.00%

Publisher:

Abstract:

Due to the growing size of the data in many current information systems, many of the algorithms that traverse these structures lose performance when searching them. Since in many cases these data are represented by node-vertex structures (graphs), the Graph500 challenge was created in 2009. Earlier challenges such as Top500 measured performance in terms of the computing capacity of systems, through LINPACK tests. In Graph500, performance is measured by executing a breadth-first search (BFS) algorithm on graphs. The BFS algorithm is one of the pillars of many other graph algorithms, such as SSSP, shortest path or betweenness centrality; an improvement to it would help improve the others that build on it. Problem analysis: the BFS algorithm used in high-performance computing (HPC) systems is usually a distributed version of the original sequential algorithm. In this distributed version, execution starts by partitioning the graph; each distributed processor then computes one part and distributes its results to the other systems. Because the difference in speed between the processing on each of these nodes and the data transfer over the interconnection network is very large (with the interconnection network at a disadvantage), quite a few approaches have been taken to reduce the performance lost in transfers. Regarding the initial partitioning of the graph, the traditional approach (called 1D graph partitioning) assigns each node a fixed set of vertices that it will process. To reduce data traffic, another partitioning (2D) was proposed, in which the distribution is based on the edges of the graph instead of on the vertices. This partitioning reduces the network traffic from a proportion of O(N×M) to O(log(N)). Although there have been other approaches to reducing transfers, such as an initial reordering of the vertices to add locality within the nodes, or dynamic partitionings, the approach proposed in this work consists of applying recent compression techniques from large data systems, such as high-volume databases or internet search engines, to compress the data transferred between nodes.---ABSTRACT---The Breadth First Search (BFS) algorithm is the foundation and building block of many higher-level graph-based operations such as spanning trees, shortest paths and betweenness centrality. The importance of this algorithm grows every day, since it is a key requirement for many data structures that are becoming popular nowadays and that turn out to be internally graph structures. When the BFS algorithm is parallelized and the data are distributed over several processors, some research shows a performance limitation introduced by the interconnection network [31]. Hence, improvements in the area of communications may benefit the global performance of this key algorithm. In this work an alternative compression mechanism is presented. It differs from currently existing methods in that it is aware of characteristics of the data that may benefit the compression. Apart from this, another test is performed to see how this algorithm (in a distributed scenario) benefits from traditional instruction-based optimizations. Finally, current supercomputing techniques and related work in the area are reviewed.
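The work's own compression mechanism is not reproduced here; the sketch below only illustrates the general point of attack: in a level-synchronous distributed BFS, the frontier exchanged between nodes is highly compressible, e.g. as sorted vertex ids delta-encoded and deflated before transfer. Both the encoding and the toy graph are illustrative assumptions:

```python
import zlib
from collections import deque

def bfs_levels(adj, source):
    """Plain level-synchronous BFS; returns vertex -> BFS level."""
    level = {source: 0}
    frontier = deque([source])
    while frontier:
        v = frontier.popleft()
        for w in adj.get(v, ()):
            if w not in level:
                level[w] = level[v] + 1
                frontier.append(w)
    return level

def compress_frontier(vertex_ids):
    """Delta-encode a sorted frontier and deflate it, mimicking what a
    node could send over the interconnect instead of raw vertex ids."""
    ids = sorted(vertex_ids)
    deltas = [ids[0]] + [b - a for a, b in zip(ids, ids[1:])]
    raw = b"".join(d.to_bytes(4, "little") for d in deltas)
    return zlib.compress(raw)

adj = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5]}
levels = bfs_levels(adj, 0)
frontier2 = [v for v, l in levels.items() if l == 2]
print(levels, len(compress_frontier(frontier2)))
```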

Relevance:

20.00%

Publisher:

Abstract:

The optimal design of a vertical cantilever beam is presented in this paper. The beam is assumed immersed in an elastic Winkler soil and subjected to several loads: a point force at the tip section, its self-weight and a uniformly distributed load along its length. The optimal design problem is to find the beam of a given length and minimum volume such that the resulting compressive stresses are admissible. This problem is analyzed according to linear elasticity theory and within different alternative structural models: column, Navier-Bernoulli beam-column and Timoshenko beam-column (i.e. with shear strain), under conservative loads, typically constant-direction loads. The results obtained in each case are compared in order to evaluate the sensitivity of the numerical results to the choice of model. The beam optimal design is described by the section distribution layout (area, second moment of area, shear area, etc.) along the beam span and the corresponding beam total volume. Other situations with follower loads (the Beck and Leipholz problems), some of them very interesting from a theoretical point of view, are also discussed, leaving numerical details and results for future work.
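For reference, the Navier-Bernoulli variant of the problem leads to the classical beam-column equation on a Winkler foundation; the form below is the standard textbook statement (not copied from the paper), with the axial force gathering the tip load and the accumulated self-weight:

```latex
% Beam-column on an elastic Winkler foundation (Navier-Bernoulli model):
%   EI(x) : bending stiffness of the (variable) cross section
%   N(x)  : axial compressive force (tip load plus accumulated self-weight)
%   k     : Winkler soil modulus,  q(x) : distributed lateral load
\[
  \frac{d^2}{dx^2}\!\left( EI(x)\,\frac{d^2 w}{dx^2} \right)
  + \frac{d}{dx}\!\left( N(x)\,\frac{dw}{dx} \right)
  + k\,w(x) = q(x)
\]
```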

Relevance:

20.00%

Publisher:

Abstract:

The propagation of inhomogeneous, weakly nonlinear waves is considered in a cochlear model having two degrees of freedom that represent the transverse motions of the tectorial and basilar membranes within the organ of Corti. It is assumed that nonlinearity arises from the saturation of outer hair cell active force generation. I use multiple-scale asymptotics and treat nonlinearity as a correction to a linear hydroelastic wave. The resulting theory is used to explain experimentally observed features of the response of the cochlear partition to a pure tone, including: the amplification of the response in a healthy cochlea versus a dead one; the less-than-linear growth rate of the response with increasing sound pressure level; and the amount of distortion to be expected at high and low frequencies at basal and apical locations, respectively. I also show that the outer hair cell nonlinearity generates retrograde waves.

Relevance:

20.00%

Publisher:

Abstract:

Objectives: To examine the delay in presentation, diagnosis, and treatment of malignant spinal cord compression and to define the effect of this delay on motor and bladder function at the time of treatment.

Relevance:

20.00%

Publisher:

Abstract:

Constant-pressure and constant-temperature molecular dynamics techniques have been employed to investigate the changes in structure and volume of two globular proteins, superoxide dismutase and lysozyme, under pressure. Compression (the relative change in the proteins' volumes), computed with the Voronoi technique, is closely related to the so-called protein intrinsic compressibility, estimated by sound velocity measurements. In particular, compression computed with Voronoi volumes predicts, in agreement with experimental estimates, a negative bound-water contribution to the apparent protein compression. While the use of van der Waals and molecular volumes underestimates the intrinsic compressibilities of proteins, Voronoi volumes produce results closer to experimental estimates. Remarkably, for two globular proteins of very different secondary structures, we compute identical (within statistical error) protein intrinsic compressions, as predicted by recent experimental studies. Changes in the protein interatomic distances under compression are also investigated. It is found that, on average, short distances compress less than longer ones. This nonuniform contraction underlines the peculiar nature of the structural changes due to pressure, in contrast with temperature effects, which instead produce spatially uniform changes in proteins. The structural effects observed in the simulations at high pressure can explain protein compressibility measurements carried out by fluorimetric and hole-burning techniques. Finally, the calculation of the proteins' static structure factor shows significant shifts in the peaks at short wavenumbers as pressure changes. These effects might provide an alternative way to obtain information about the compressibilities of selected protein regions.
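The Voronoi technique named here assigns each atom the volume of its Voronoi cell; a minimal sketch of computing per-atom cell volumes with SciPy, from which a relative volume change between pressure frames could be tracked. Boundary handling and the coordinates are stand-ins (a real calculation would use periodic or solvated boundaries):

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def voronoi_volumes(points):
    """Volume of each atom's Voronoi cell; NaN for unbounded cells on
    the boundary of the point set."""
    vor = Voronoi(points)
    vols = np.full(len(points), np.nan)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if len(region) > 0 and -1 not in region:   # bounded cells only
            vols[i] = ConvexHull(vor.vertices[region]).volume
    return vols

rng = np.random.default_rng(0)
atoms = rng.random((200, 3)) * 10.0        # stand-in coordinates
v0 = voronoi_volumes(atoms)
v1 = voronoi_volumes(atoms * 0.99)         # uniformly "compressed" frame
compression = np.nanmean((v0 - v1) / v0)   # relative cell-volume change
print(f"mean relative cell-volume change: {compression:.4f}")
```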

Relevance:

20.00%

Publisher:

Abstract:

The use of 3D data in mobile robotics applications provides valuable information about the robot’s environment, but the huge amount of 3D information is usually unmanageable by the robot’s storage and computing capabilities. Data compression is necessary to store and manage this information while preserving as much of it as possible. In this paper, we propose a 3D lossy compression system based on plane extraction, which represents the points of each scene plane as a Delaunay triangulation plus a set of point/area information. The compression system can be customized to achieve different data compression or accuracy ratios. It also supports a color segmentation stage to preserve the original scene color information and provide a realistic scene reconstruction. The design of the method provides a fast scene reconstruction useful for further visualization or processing tasks.
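The paper's full pipeline is not reproduced here; the sketch below shows the two core steps for one extracted plane — a least-squares plane fit via SVD and a Delaunay triangulation of the points projected onto that plane — assuming plane segmentation has already been done and using synthetic points:

```python
import numpy as np
from scipy.spatial import Delaunay

def fit_plane(points):
    """Least-squares plane through 3D points via SVD: returns the
    centroid and an orthonormal in-plane basis (u, v)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0], vt[1]     # vt[2] would be the plane normal

def triangulate_plane(points):
    """Represent one scene plane as a Delaunay triangulation of its
    points projected onto the fitted plane; the compressed form keeps
    the triangulation instead of the raw cloud."""
    c, u, v = fit_plane(points)
    uv = np.column_stack(((points - c) @ u, (points - c) @ v))
    return Delaunay(uv)

rng = np.random.default_rng(1)
# Stand-in segment: noisy points near the plane z = 0.2x + 0.1y
xy = rng.random((300, 2))
z = 0.2 * xy[:, 0] + 0.1 * xy[:, 1] + rng.normal(0, 0.002, 300)
tri = triangulate_plane(np.column_stack((xy, z)))
print(tri.simplices.shape)            # triangles over the plane segment
```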

Relevance:

20.00%

Publisher:

Abstract:

There are a large number of image processing applications that work with different performance requirements and available resources. Recent advances in image compression focus on reducing image size and processing time, but offer no real-time solutions for adjusting the time/quality trade-off of the resulting image, as is needed, for example, when transmitting the image contents of web pages. In this paper we propose a method for encoding still images, based on the JPEG standard, that allows the compression/decompression time cost and image quality to be adjusted to the needs of each application and to the bandwidth conditions of the network. The real-time control is based on a collection of adjustable parameters relating both to aspects of the implementation and to the hardware on which the algorithm runs. The proposed encoding system is evaluated in terms of compression ratio, processing delay and quality of the compressed image, compared with the standard method.
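The paper's adjustable parameters go beyond a single quality factor, but the basic size/time/quality trade-off such a controller tunes can be illustrated with the standard codec. A sketch using Pillow; the input file name is a placeholder:

```python
import io
import time
from PIL import Image

def encode_profile(img, qualities=(25, 50, 75, 95)):
    """Measure encoded size and encoding time of the same image at
    several JPEG quality settings -- the kind of knob a real-time
    controller would tune against bandwidth and deadline."""
    rows = []
    for q in qualities:
        buf = io.BytesIO()
        t0 = time.perf_counter()
        img.save(buf, format="JPEG", quality=q)
        rows.append((q, buf.tell(), time.perf_counter() - t0))
    return rows

img = Image.open("input.png").convert("RGB")   # placeholder file name
for q, size, dt in encode_profile(img):
    print(f"quality={q:3d}  size={size:7d} B  time={dt*1e3:6.2f} ms")
```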

Relevance:

20.00%

Publisher:

Abstract:

The use of 3D data in mobile robotics applications provides valuable information about the robot’s environment. However, the huge amount of 3D information is usually difficult to manage because the robot’s storage and computing capabilities are insufficient. Therefore, a data compression method is necessary to store and process this information while preserving as much of it as possible. A few methods have been proposed to compress 3D information; nevertheless, no consistent public benchmark exists for comparing the results (compression level, reconstructed distance error, etc.) obtained with different methods. In this paper, we propose a dataset composed of a set of 3D point clouds with different structure and texture variability for evaluating the results obtained from 3D data compression methods. We also provide useful tools for comparing compression methods, using as a baseline the results obtained by existing relevant compression methods.
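One of the benchmark quantities mentioned — the reconstructed distance error — can be computed as a symmetric nearest-neighbour RMS distance between the original and the decompressed cloud. A minimal sketch with SciPy; the metric choice is an assumption, not necessarily the one the dataset tools use:

```python
import numpy as np
from scipy.spatial import cKDTree

def symmetric_rms_error(original, reconstructed):
    """Symmetric point-to-point RMS distance between two clouds:
    each point is matched to its nearest neighbour in the other cloud."""
    d_ab, _ = cKDTree(reconstructed).query(original)
    d_ba, _ = cKDTree(original).query(reconstructed)
    return 0.5 * (np.sqrt(np.mean(d_ab**2)) + np.sqrt(np.mean(d_ba**2)))

rng = np.random.default_rng(2)
cloud = rng.random((5000, 3))
# Stand-in "decompressed" cloud: subsampled and slightly perturbed
lossy = cloud[::4] + rng.normal(0, 1e-3, (1250, 3))
print(f"RMS reconstruction error: {symmetric_rms_error(cloud, lossy):.5f}")
```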

Relevance:

20.00%

Publisher:

Abstract:

Introduction: Mechanical stress is often associated with intervertebral disc (IVD) degeneration, and the effect of mechanical loading on the IVD has been studied and reviewed.1,2 Previously, expression of the heat shock proteins HSP70 and HSP27 has been found in pathological discs.3 However, there is no direct evidence on whether IVD cells respond to mechanical loading by expressing HSPs. The objective of this study is to investigate the stress response of IVD cells during compressive loading in an organ culture. Materials and Methods: Fresh adult bovine caudal discs were cultured with compressive loading applied in the physiological range. The effects of loading type (static and dynamic) and of repeated loading (2 hours per day for 2 days) were studied. The nucleus pulposus (NP) and annulus fibrosus (AF) of the IVD were retrieved at different time points: right after loading and right after resting. Positive control discs were heat-shocked (43°C). Cell activity was assessed, and the expression of stress response genes (HSP70 and HSF1) and matrix remodeling genes (ACAN, COL2, COL1, ADAMTS4, MMP3 and MMP13) was studied. Results: Cell activity was maintained in all groups. Both the NP and the AF expressed high levels of HSP70 in the heat shock groups, confirming their expression in response to stress. In the NP, expression of HSP70 was up-regulated after both static and dynamic loading, with a higher fold change observed after static loading. During repeated loading, HSP70 appeared to be up-regulated right after loading and to decrease after resting. This trend was not observed in the AF or in HSF1 levels. Expression of the matrix remodeling genes did not change significantly with loading, except that ADAMTS4 decreased in the AF during static loading. Conclusion: This study demonstrates that NP cells up-regulate the expression of HSP70 in response to loading-induced stress without significant changes in cell activity or matrix remodeling. Acknowledgments: This project was funded by AO Spine (AOSPN) (grant number: SRN_2011_14) and a fellowship exchange award from the AO Spine Scientific Research Network (SRN).

Relevance:

20.00%

Publisher:

Abstract:

COO 1469-0194.