934 results for compression
Abstract:
Results of impact and compression tests on Chojuro, Twentieth Century, Tsu Li, and Ya Li varieties of Asian pears indicate that Chojuro pears are the firmest and most resistant to mechanical damage. At the time of harvest, Tsu Li and Ya Li pears could resist mechanical damage nearly as well as Chojuro pears, but they became more susceptible to bruising in cold storage. Twentieth Century pears are the most sensitive to impact and compression bruising. Increased time in the ripening room produces greater softening and a greater increase in bruise resistance in Chojuro and Twentieth Century pears than in Tsu Li and Ya Li pears.
Abstract:
Apple fruits, cv. Granny Smith, were subjected to mechanical impact and compression loads using a steel rod with a spherical tip (19 mm diameter, 50.6 g mass). The energies applied were low but sufficient to produce an enzymatic reaction: 0.0120 J for impact and 0.0199 J for compression. Bruised material was cut and examined with a transmission electron microscope. In both compression and impact, bruises showed a central region located in the flesh parenchyma, at a distance approximately equal to the indenter tip radius. The parenchyma cells of this region were more altered than cells from the epidermis and hypodermis. Tissues under compression presented numerous deformed parenchyma cells with broken tonoplasts and tissue degradation, as predicted by several investigators. The impacted cells supported different kinds of stresses than the compressed cells, resulting in intensive vesiculation, either in the vacuole or in the middle lamella region between the cell walls of adjacent cells. A large proportion of parenchyma cells had completely split, or had begun to split, at the middle lamella. Bruising may develop with or without cell rupture; therefore, cell wall rupture is not essential for the development of a bruise, at least the smallest one, as predicted previously.
Abstract:
A novel compression scheme is proposed, in which hollow targets with specifically curved structures, initially filled with uniform matter, are driven by converging shock waves. The self-similar dynamics is analyzed for converging and diverging shock waves. The shock-compressed densities and pressures are much higher than those achieved using spherical shocks, owing to the geometric accumulation. The dynamic behavior is demonstrated using two-dimensional hydrodynamic simulations. A linear stability analysis for the spherical geometry reveals a new dispersion relation with cut-off mode numbers as a function of the specific heat ratio, above which eigenmode perturbations are smeared out in the converging phase.
Abstract:
The effect of the temperature on the compressive stress–strain behavior of Al/SiC nanoscale multilayers was studied by means of micropillar compression tests at 23 °C and 100 °C. The multilayers (composed of alternating layers of 60 nm in thickness of nanocrystalline Al and amorphous SiC) showed a very large hardening rate at 23 °C, which led to a flow stress of 3.1 ± 0.2 GPa at 8% strain. However, the flow stress (and the hardening rate) was reduced by 50% at 100 °C. Plastic deformation of the Al layers was the dominant deformation mechanism at both temperatures, but the Al layers were extruded out of the micropillar at 100 °C, while Al plastic flow was constrained by the SiC elastic layers at 23 °C. Finite element simulations of the micropillar compression test indicated the role played by different factors (flow stress of Al, interface strength and friction coefficient) on the mechanical behavior and were able to rationalize the differences in the stress–strain curves between 23 °C and 100 °C.
Abstract:
In this work, a new methodology is devised to obtain the fracture properties of nuclear fuel cladding in the hoop direction. The proposed method combines ring compression tests and a finite element method that includes a damage model based on cohesive crack theory, applied to unirradiated hydrogen-charged ZIRLO™ nuclear fuel cladding. Samples with hydrogen concentrations from 0 to 2000 ppm were tested at 20 °C. Agreement between the finite element simulations and the experimental results is excellent in all cases. The parameters of the cohesive crack model are obtained from the simulations, with the fracture energy and fracture toughness being calculated in turn. The evolution of fracture toughness in the hoop direction with the hydrogen concentration (up to 2000 ppm) is reported for the first time for ZIRLO™ cladding. Additionally, the fracture micromechanisms are examined as a function of the hydrogen concentration. In the as-received samples, the micromechanism is the nucleation, growth, and coalescence of voids, whereas in the samples with 2000 ppm a combination of quasi-cleavage and plastic deformation, along with secondary microcracking, is observed.
Abstract:
The cyclic compression of several granular systems has been simulated with a molecular dynamics code. All the samples consisted of two-dimensional, soft, frictionless, equal-sized particles that were initially arranged in a square lattice and were compressed by randomly generated irregular walls. The compression protocols can be described by some control variables (volume or external force acting on the walls) and by some dimensionless factors that relate stiffness, density, diameter, damping ratio, and water surface tension to the external forces, displacements, and periods. Each protocol, which is associated with a dynamic process, results in an arrangement with its own macroscopic features: volume (or packing ratio), coordination number, and stress; the differences between packings can be highly significant. The statistical distribution of the force-moment state of the particles (i.e., the equivalent average stress multiplied by the volume) is analyzed. Despite the lack of a theoretical framework based on statistical mechanics specific to these protocols, the obtained distributions of mean and relative deviatoric force-moment are characterized, and their nature and their relation to the specific protocols are discussed.
Abstract:
In many applications (like social or sensor networks) the information generated can be represented as a continuous stream of RDF items, where each item describes an application event (social network post, sensor measurement, etc.). In this paper we focus on compressing RDF streams. In particular, we propose an approach for lossless RDF stream compression, named RDSZ (RDF Differential Stream compressor based on Zlib). This approach takes advantage of the structural similarities among items in a stream by combining a differential item encoding mechanism with the general purpose stream compressor Zlib. Empirical evaluation using several RDF stream datasets shows that this combination produces gains in compression ratios with respect to using Zlib alone.
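The core idea above — differentially encoding structurally similar stream items and then handing the result to a general-purpose compressor — can be sketched in a few lines. This is a minimal illustration under an assumed, made-up wire format; it is not the RDSZ implementation:

```python
import zlib

def diff_encode(prev, curr):
    """Encode curr as the list of (index, value) fields that differ from prev."""
    return [(i, v) for i, v in enumerate(curr)
            if i >= len(prev) or prev[i] != v]

def compress_stream(items):
    """Differentially encode consecutive items, then deflate the result."""
    prev, lines = [], []
    for item in items:
        # consecutive items with similar structure collapse to short deltas
        lines.append(";".join(f"{i}={v}" for i, v in diff_encode(prev, item)))
        prev = item
    return zlib.compress("\n".join(lines).encode("utf-8"))
```

Two consecutive events that differ only in one field collapse to a one-field delta before Zlib sees them, which is where the gain over plain Zlib comes from.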
Abstract:
LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber–Fechner law to encode the error between colour component predictions and the actual value of such components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels, and the error between the predictions and the actual values is then logarithmically quantised. The main advantage of LHE is that, although it is capable of achieving low-bit-rate encoding with high-quality results in terms of peak signal-to-noise ratio (PSNR) and image quality metrics with full-reference (FSIM) and no-reference (blind/referenceless image spatial quality evaluator) measures, its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, where the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bit-per-pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG and slightly lower quality than JPEG-2000, while being more computationally efficient.
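The predict-then-logarithmically-quantise step can be illustrated with a toy one-dimensional encoder. The hop table, the left-neighbour predictor, and all names below are assumptions for illustration, not LHE's actual tables or predictor:

```python
HOPS = [0, 4, 8, 16, 32, 64]  # illustrative logarithmically spaced hop magnitudes

def quantize_error(err):
    """Map a prediction error to the nearest signed logarithmic hop."""
    sign = 1 if err >= 0 else -1
    return sign * min(HOPS, key=lambda h: abs(h - abs(err)))

def encode_row(pixels):
    """Predict each pixel from its left neighbour, emit quantised hops."""
    pred = pixels[0]
    codes = [pixels[0]]                       # first pixel sent verbatim
    for p in pixels[1:]:
        hop = quantize_error(p - pred)
        codes.append(hop)
        pred = max(0, min(255, pred + hop))   # decoder-synchronised prediction
    return codes
```

Updating the predictor with the quantised hop (not the true value) keeps encoder and decoder in lockstep, which is what makes the O(1) memory, single-pass scheme possible.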
Abstract:
Optical fiber sensors are a technology that has matured in recent years; however, further development of applications is required for natural materials such as rocks, which, being complex aggregates, may contain mineral particles and fractures much larger than the electrical strain gauges traditionally used to measure strain in laboratory tests, so that the results obtained may be unrepresentative. In this work, large-area, curved strain sensors were designed, manufactured, and tested using fiber Bragg gratings (FBG), with the aim of obtaining representative readings on rocks containing minerals and structures of diverse compositions, sizes, and orientations. The fabrication process of the transducer, its mechanical characterization, its calibration, and its evaluation in uniaxial compression tests on rock samples are presented. To verify the efficiency of the transmission of strain from the rock to the bonded sensor, a strain-transfer analysis was also performed, including the effects of the adhesive, the sample, and the transducer. The experimental results indicate that the developed sensor provides reliable strain measurement and transfer, an advance necessary for its use in rocks and other heterogeneous materials, and points to an interesting perspective for applications on irregular surfaces, since the size and shape of the sensing area can be enlarged at will; it also makes more reliable results possible on small samples and suggests its convenience for field works, in which traditional electrical systems have limitations.
ABSTRACT: Optical fiber sensors are a technology that has matured in recent years; however, further development for rock applications is needed. Rocks contain mineral particles and features larger than the electrical strain gauges traditionally used in laboratory tests, causing the results to be unrepresentative. In this work, large-area, curved-shape strain gauges were designed, manufactured, and tested using fiber Bragg gratings (FBG), in order to obtain representative measurements on the surface of rock samples containing minerals and structures of different compositions, sizes, and directions. This report presents the processes of manufacturing, mechanical characterization, calibration, and evaluation under uniaxial compression tests on rock samples. To verify the efficiency of rock strain transmission to the attached sensor, an analysis of the strain transfer was also performed, including the effects of the bonding, the sample, and the transducer. The experimental results indicate that the developed sensor enables reliable measurement of the strain and its transmission from rock to sensor, appropriate for use in heterogeneous materials, pointing to an interesting perspective for applications on irregular surfaces, as it allows increasing the size and shape of the measurement area at will. This research suggests the suitability of the optical strain gauge for real-scale use, where traditional electrical systems have shown some limitations.
Abstract:
Due to the growing size of the data in many current information systems, many of the algorithms that traverse these structures lose performance when searching them. Because in many cases these data are represented by node-vertex structures (graphs), the Graph500 challenge was created in 2009. Earlier challenges such as Top500 measured performance based on the computing capacity of systems, using LINPACK tests. In Graph500, the measurement is made by executing a breadth-first search (BFS) algorithm on graphs. BFS is one of the pillars of many other graph algorithms, such as single-source shortest paths (SSSP) and betweenness centrality; an improvement to BFS would also improve the algorithms that build on it.

Problem analysis: the BFS algorithm used in high-performance computing (HPC) systems is usually a distributed version of the original sequential algorithm. In this distributed version, execution starts by partitioning the graph; each distributed processor then computes its part and distributes its results to the other systems. Because the speed gap between the processing in each node and the data transfer over the interconnection network is very large (with the interconnection network at a disadvantage), many approaches have been taken to reduce the performance lost in transfers. Regarding the initial graph partitioning, the traditional approach (called 1D-partitioned graph) assigns to each node a fixed set of vertices that it will process. To reduce data traffic, another partitioning (2D) was proposed, in which the distribution is based on the edges of the graph instead of the vertices. This partitioning reduced network traffic from a proportion of O(N×M) to O(log(N)). Although there have been other approaches to reduce transfers, such as initial reordering of the vertices to add locality in the nodes, or dynamic partitioning, the approach proposed in this work consists of applying recent compression techniques from large data systems, such as high-volume databases and internet search engines, to compress the data transferred between nodes.

ABSTRACT: The breadth-first search (BFS) algorithm is the foundation and building block of many higher graph-based operations such as spanning trees, shortest paths, and betweenness centrality. The importance of this algorithm increases each day because it is a key requirement for many data structures that are becoming popular nowadays, which turn out to be internally graph structures. When the BFS algorithm is parallelized and the data are distributed over several processors, some research shows a performance limitation introduced by the interconnection network [31]. Hence, improvements in the area of communications may benefit the global performance of this key algorithm. This work presents an alternative compression mechanism; it differs from existing methods in that it is aware of characteristics of the data that may benefit compression. In addition, another test is performed to see how this algorithm, in a distributed scenario, benefits from traditional instruction-based optimizations. Finally, the current supercomputing techniques and the related work being done in the area are reviewed.
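As a rough illustration of the proposal — compressing the data exchanged between BFS nodes — a frontier of vertex ids can be delta-encoded and deflated before being sent. The varint encoding below is a generic sketch under assumed names, not the thesis' actual mechanism:

```python
import zlib

def delta_bytes(sorted_ids):
    """Delta-encode a sorted vertex-id list as varints; small gaps pack tightly."""
    out, prev = bytearray(), 0
    for v in sorted_ids:
        gap = v - prev
        while gap >= 0x80:            # 7 data bits per byte, high bit = "more"
            out.append((gap & 0x7F) | 0x80)
            gap >>= 7
        out.append(gap)
        prev = v
    return bytes(out)

def compress_frontier(frontier):
    """Shrink a BFS frontier before sending it to another process."""
    return zlib.compress(delta_bytes(sorted(frontier)))
```

Frontiers with locality (clustered vertex ids) produce long runs of small gaps, which is exactly the data characteristic a stream-aware compressor can exploit.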
Abstract:
The optimal design of a vertical cantilever beam is presented in this paper. The beam is assumed to be immersed in an elastic Winkler soil and subjected to several loads: a point force at the tip section, its self-weight, and a uniform distributed load along its length. The optimal design problem is to find the beam of a given length and minimum volume such that the resulting compressive stresses are admissible. This problem is analyzed according to linear elasticity theory and within different alternative structural models: column, Navier-Bernoulli beam-column, and Timoshenko beam-column (i.e., with shear strain) under conservative loads, typically constant-direction loads. Results obtained in each case are compared in order to evaluate the sensitivity of the numerical results to the model. The beam optimal design is described by the section distribution layout (area, second moment, shear area, etc.) along the beam span and the corresponding beam total volume. Other situations, some of them very interesting from a theoretical point of view, with follower loads (Beck and Leipholz problems) are also discussed, leaving numerical details and results for future work.
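In compact form, the minimum-volume design problem described above can be written as follows (the notation is assumed for illustration, not the paper's):

```latex
\min_{A(x)}\; V = \int_0^L A(x)\,dx
\quad\text{s.t.}\quad
\sigma(x) = \frac{N(x)}{A(x)} \le \sigma_{\mathrm{adm}}, \qquad 0 \le x \le L,
```

where $N(x)$ collects the axial resultant of the tip force, the self-weight, and the distributed load at section $x$, and $\sigma_{\mathrm{adm}}$ is the admissible compressive stress.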
Abstract:
The propagation of inhomogeneous, weakly nonlinear waves is considered in a cochlear model having two degrees of freedom that represent the transverse motions of the tectorial and basilar membranes within the organ of Corti. It is assumed that nonlinearity arises from the saturation of outer hair cell active force generation. I use multiple scale asymptotics and treat nonlinearity as a correction to a linear hydroelastic wave. The resulting theory is used to explain experimentally observed features of the response of the cochlear partition to a pure tone, including: the amplification of the response in a healthy cochlea vs a dead one; the less than linear growth rate of the response to increasing sound pressure level; and the amount of distortion to be expected at high and low frequencies at basal and apical locations, respectively. I also show that the outer hair cell nonlinearity generates retrograde waves.
Abstract:
Objectives: To examine the delay in presentation, diagnosis, and treatment of malignant spinal cord compression and to define the effect of this delay on motor and bladder function at the time of treatment.
Abstract:
Constant pressure and temperature molecular dynamics techniques have been employed to investigate the changes in structure and volume of two globular proteins, superoxide dismutase and lysozyme, under pressure. Compression (the relative change in the proteins' volumes), computed with the Voronoi technique, is closely related to the so-called protein intrinsic compressibility, estimated by sound velocity measurements. In particular, compression computed with Voronoi volumes predicts, in agreement with experimental estimates, a negative bound-water contribution to the apparent protein compression. While the use of van der Waals and molecular volumes underestimates the intrinsic compressibilities of proteins, Voronoi volumes produce results closer to experimental estimates. Remarkably, for two globular proteins of very different secondary structures, we compute identical (within statistical error) protein intrinsic compressions, as predicted by recent experimental studies. Changes in the protein interatomic distances under compression are also investigated. It is found that, on average, short distances compress less than longer ones. This nonuniform contraction underlines the peculiar nature of the structural changes due to pressure, in contrast with temperature effects, which instead produce spatially uniform changes in proteins. The structural effects observed in the simulations at high pressure can explain protein compressibility measurements carried out by fluorimetric and hole-burning techniques. Finally, the calculation of the proteins' static structure factor shows significant shifts in the peaks at short wavenumber as pressure changes. These effects might provide an alternative way to obtain information concerning the compressibilities of selected protein regions.
Abstract:
The use of 3D data in mobile robotics applications provides valuable information about the robot's environment, but the huge amount of 3D information is usually unmanageable by the robot's storage and computing capabilities. Data compression is necessary to store and manage this information while preserving as much of it as possible. In this paper, we propose a 3D lossy compression system based on plane extraction, which represents the points of each scene plane as a Delaunay triangulation and a set of point/area information. The compression system can be customized to achieve different data compression or accuracy ratios. It also supports a color segmentation stage to preserve the original scene color information and provide a realistic scene reconstruction. The design of the method provides a fast scene reconstruction useful for further visualization or processing tasks.