976 results for Compression Analysis
Abstract:
Northern Irish (and all UK-based) health care is facing major challenges. This article uses a specific theory to recommend and construct a framework for addressing challenges faced by the author, such as deficits in compression bandaging technique in the healing of venous leg ulcers and the resistance encountered when applying evidence-based research within this practice. The article investigates the challenges faced by a newly formed community nursing team. It explores how specialist knowledge and skills in tissue viability are employed and how they enhance the community nursing team's management of venous leg ulceration. Following a process of reflection, Lewin's force-field analysis model of change management is used as a framework for the recommendations made to address these challenges.
Abstract:
A set of cylindrical porous titanium test samples were produced using the three-dimensional printing and sintering method with samples sintered at 900 °C, 1000 °C, 1100 °C, 1200 °C or 1300 °C. Following compression testing, it was apparent that the stress-strain curves were similar in shape to the curves that represent cellular solids. This is despite a relative density twice as high as what is considered the threshold for defining a cellular solid. As final sintering temperature increased, the compressive behaviour developed from being elastic-brittle to elastic-plastic and while Young's modulus remained fairly constant in the region of 1.5 GPa, there was a corresponding increase in 0.2% proof stress of approximately 40-80 MPa. The cellular solid model consists of two equations that predict Young's modulus and yield or proof stress. By fitting to experimental data and consideration of porous morphology, appropriate changes to the geometry constants allow modification of the current models to predict with better accuracy the behaviour of porous materials with higher relative densities (lower porosity).
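For reference, the two cellular-solid (Gibson–Ashby) scaling relations of the kind referred to above are commonly written as follows; C1 and C2 are the geometry constants the study adjusts, and the standard open-cell exponents are shown rather than values fitted to these samples:

    \[
      \frac{E^{*}}{E_{s}} = C_{1}\left(\frac{\rho^{*}}{\rho_{s}}\right)^{2},
      \qquad
      \frac{\sigma_{y}^{*}}{\sigma_{y,s}} = C_{2}\left(\frac{\rho^{*}}{\rho_{s}}\right)^{3/2}
    \]

where rho*/rho_s is the relative density and the subscript s denotes the fully dense solid.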
Analysis of deformation behavior and workability of advanced 9Cr-Nb-V ferritic heat resistant steels
Abstract:
Hot compression tests were carried out on 9Cr–Nb–V heat resistant steels in the temperature range of 600–1200 °C and the strain rate range of 10⁻²–10⁰ s⁻¹ to study their deformation characteristics. The full recrystallization temperature and the carbon-free bainite phase transformation temperature were determined from the slope-change points in the curve of mean flow stress versus the inverse of temperature. The parameters of the constitutive equation for the experimental steels, including the stress exponent and the activation energy, were calculated. The lower carbon content in the steel was found to increase the fraction of precipitates by increasing the volume of dynamic strain-induced transformation (DSIT) ferrite formed during deformation. The ln(εc) versus ln(Z) and the ln(σc) versus ln(Z) plots for both steels show similar trends. The power dissipation efficiency maps merged with the instability maps show excellent workability from a strain of 0.05 to 0.6. The microstructure of the experimental steels was fully recrystallized upon deformation at low Z values owing to dynamic recrystallization (DRX), and exhibited a necklace structure under the condition of 1050 °C/0.1 s⁻¹ due to the suppression of the secondary flow of DRX. However, there were barely any DRX grains but elongated pancake grains under the condition of 1000 °C/1 s⁻¹ because of the suppression of metadynamic recrystallization (MDRX).
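For reference, the constitutive analysis described above is normally based on the hyperbolic-sine (Sellars–Tegart) equation and the Zener–Hollomon parameter Z; the material constant A, the multiplier α, the stress exponent n and the activation energy Q are the quantities fitted from the flow curves (their values are not reproduced here):

    \[
      \dot{\varepsilon} = A\left[\sinh(\alpha\sigma)\right]^{n}\exp\!\left(-\frac{Q}{RT}\right),
      \qquad
      Z = \dot{\varepsilon}\exp\!\left(\frac{Q}{RT}\right)
    \]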
Abstract:
An adhesive elasto-plastic contact model for the discrete element method with three dimensional non-spherical particles is proposed and investigated to achieve quantitative prediction of cohesive powder flowability. Simulations have been performed for uniaxial consolidation followed by unconfined compression to failure using this model. The model has been shown to be capable of predicting the experimental flow function (unconfined compressive strength vs. the prior consolidation stress) for a limestone powder which has been selected as a reference solid in the Europe-wide PARDEM research network. Contact plasticity in the model is shown to affect the flowability significantly and is thus essential for producing satisfactory computations of the behaviour of a cohesive granular material. The model predicts a linear relationship between a normalized unconfined compressive strength and the product of coordination number and solid fraction. This linear relationship is in line with the Rumpf model for the tensile strength of particulate agglomerates. Even when the contact adhesion is forced to remain constant, the increasing unconfined strength arising from stress consolidation is still predicted, which has its origin in the contact plasticity leading to microstructural evolution of the coordination number. The filled porosity is predicted to increase as the contact adhesion increases. Under confined compression, the porosity reduces more gradually for the load-dependent adhesion compared to constant adhesion. It was found that the contribution of adhesive force to the limiting friction has a significant effect on the bulk unconfined strength. The results provide new insights and propose a micromechanics-based measure for characterising the strength and flowability of cohesive granular materials.
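For orientation, the Rumpf relation mentioned above links the tensile strength of an agglomerate to the mean coordination number k, the solid fraction φ = 1 − ε, the interparticle adhesive force F and the particle diameter d; the textbook form (up to a constant prefactor, and not the specific normalisation used in the paper) is:

    \[
      \sigma_{t} \approx \frac{k\,\phi}{\pi}\,\frac{F}{d^{2}}
    \]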
Abstract:
This paper contributes to the understanding of lime-mortar masonry strength and deformation (which determine durability and allowable stresses/stiffness in design codes) by measuring the mechanical properties of brick bound with lime and lime-cement mortars. Based on the regression analysis of experimental results, models to estimate lime-mortar masonry compressive strength are proposed (less accurate for hydrated lime (CL90s) masonry due to the disparity between mortar and brick strengths). Also, three relationships between masonry elastic modulus and its compressive strength are proposed for cement-lime; hydraulic lime (NHL3.5 and 5); and hydrated/feebly hydraulic lime masonries respectively.
Disagreement between the experimental results and former mathematical prediction models (proposed primarily for cement masonry) is caused by a lack of provision for the significant deformation of lime masonry and for the relative changes in strength and stiffness between mortar and brick over time (at 6 months and 1 year, the NHL 3.5 and 5 mortars are often stronger than the brick). Eurocode 6 provided the best predictions for the compressive strength of lime and cement-lime masonry based on the strength of their components. All models vastly overestimated the strength of CL90s masonry at 28 days; however, Eurocode 6 became an accurate predictor after 6 months, when the mortar had acquired most of its final strength and stiffness.
The experimental results agreed with former stress-strain curves. It was evidenced that mortar strongly affects masonry deformation, and that the masonry stress/strain relationship becomes increasingly non-linear as mortar strength decreases. It was also noted that the influence of masonry stiffness on its compressive strength becomes smaller as mortar hydraulicity increases.
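For orientation, the Eurocode 6 prediction referred to above takes the following form for masonry built with general-purpose mortar, where f_b and f_m are the normalised compressive strengths of the unit and the mortar and K is a tabulated constant depending on unit and mortar type; the exponents are the code's standard values, not ones derived in this study:

    \[
      f_{k} = K\, f_{b}^{0.7}\, f_{m}^{0.3}
    \]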
Abstract:
A three-dimensional continuum damage mechanics-based material model has been implemented in an implicit Finite Element code to simulate the progressive degradation of advanced composite materials. The damage model uses seven damage variables assigned to tensile, compressive and non-linear shear damage at the lamina level. The objectivity of the numerical discretization is assured using a smeared formulation. The material model was benchmarked against experimental uniaxial coupon tests and is shown to reproduce key aspects observable during failure, such as the inclined fracture plane in matrix compression and the shear band in a ±45° tension specimen.
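The essential mechanism of such a continuum damage mechanics model is the degradation of the ply stiffnesses by scalar damage variables; a minimal sketch of that idea (illustrative only, not the authors' exact seven-variable formulation) is:

    \[
      E_{11}^{d} = (1-d_{1})\,E_{11}, \qquad
      G_{12}^{d} = (1-d_{6})\,G_{12}, \qquad
      \boldsymbol{\sigma} = \mathbf{C}(d_{1},\dots,d_{7})\,\boldsymbol{\varepsilon}
    \]

where each d_i grows from 0 (undamaged) to 1 (fully failed) as the corresponding tensile, compressive or shear failure criterion is exceeded.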
Abstract:
This paper builds on previous work to show how holistic and iterative design optimisation tools can be used to produce a commercially viable product that reduces a costly assembly to a single moulded structure. An assembly consisting of a structural metallic support and a compression-moulded outer shell undergoes design optimisation and analysis to remove the support from the assembly process in favour of a structural moulding. The support is analysed and a sheet moulding compound (SMC) alternative is presented; this is then combined into a manufacturable shell design, which is assessed for viability as an alternative to the original.
Alongside this, a robust material selection system is implemented that removes user bias towards particular materials. This system builds on work set out by the Cambridge Material Selector and Boothroyd and Dewhurst, while using a selection of applicable materials currently available for the compression moulding process. The material selection process has been linked to the design and analysis stage via scripts for use in the finite element environment. This builds towards an analysis toolkit intended to develop and enhance the manufacturability of design studies.
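As an illustration of the kind of bias-free ranking such a material selection system performs, the sketch below scores candidate compression-moulding materials with an Ashby-style performance index in the spirit of the Cambridge Material Selector approach cited above; the material names and property values are placeholders, not data from the paper.

    # Illustrative material-ranking sketch in the spirit of the Ashby /
    # Cambridge Material Selector approach. Property values below are
    # placeholders, NOT data from the paper.

    # Candidate compression-moulding materials: (name, E [GPa], density [kg/m^3])
    candidates = [
        ("SMC (glass/polyester)", 11.0, 1850.0),
        ("GMT (glass/PP)",         6.0, 1200.0),
        ("Long-fibre PA6",         9.0, 1350.0),
    ]

    def panel_stiffness_index(E_gpa, rho):
        """Ashby index E^(1/3)/rho for a stiffness-limited flat panel."""
        return (E_gpa ** (1.0 / 3.0)) / rho

    ranked = sorted(candidates,
                    key=lambda m: panel_stiffness_index(m[1], m[2]),
                    reverse=True)

    for name, E, rho in ranked:
        print(f"{name:25s}  index = {panel_stiffness_index(E, rho):.5f}")

In a scripted finite-element workflow, an index of this kind can be evaluated automatically for every candidate, so the shortlist is driven by the numbers rather than by the designer's familiarity with particular materials.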
Abstract:
Data compression is the computing technique that aims to reduce the size of information in order to minimize the required storage space and to speed up data transmission over bandwidth-limited networks. Several compression techniques, such as LZ77 and its variants, suffer from a problem that we call redundancy caused by the multiplicity of encodings. Multiplicity of encodings (ME) means that the source data can be encoded in different ways. In its simplest case, ME occurs when a compression technique has the possibility, during the encoding process, of coding a symbol in different ways. The bit-recycling compression technique was introduced by D. Dubé and V. Beaudoin to minimize the redundancy caused by ME. Variants of bit recycling have been applied to LZ77, and the experimental results obtained lead to better compression (a reduction of approximately 9% in the size of files compressed by Gzip when ME is exploited). Dubé and Beaudoin pointed out that their technique may not perfectly minimize the redundancy caused by ME, because it is built on Huffman coding, which cannot handle codewords of fractional length; that is, it only generates codewords of integral length. Moreover, Huffman-based bit recycling (HuBR) imposes additional constraints to avoid certain situations that degrade its performance. Unlike Huffman codes, arithmetic coding (AC) can handle codewords of fractional length. In addition, over recent decades arithmetic codes have attracted many researchers because they are more powerful and more flexible than Huffman codes. Consequently, this work aims to adapt bit recycling to arithmetic codes in order to improve coding efficiency and flexibility. We have addressed this problem through our four (published) contributions. These contributions are presented in this thesis and can be summarized as follows. First, we propose a new technique for adapting Huffman-based bit recycling (HuBR) to arithmetic coding. This technique is named arithmetic-coding-based bit recycling (ACBR). It describes the framework and the principles of adapting HuBR to ACBR. We also present the theoretical analysis needed to estimate the redundancy that can be reduced by HuBR and ACBR for applications that suffer from ME. This analysis shows that ACBR achieves perfect recycling in all cases, whereas HuBR achieves such performance only in very specific cases. Second, the problem with the aforementioned ACBR technique is that it requires arbitrary-precision computations, which demands unlimited (or infinite) resources. In order to benefit from the technique, we propose a new finite-precision version. The technique thus becomes efficient and applicable on computers with conventional fixed-size registers, and can easily be interfaced with applications that suffer from ME. Third, we propose the use of HuBR and ACBR as a means of reducing redundancy in order to obtain a variable-to-fixed binary code.
We have shown, theoretically and experimentally, that both techniques yield a significant improvement (less redundancy). In this respect, ACBR outperforms HuBR and provides a broader class of binary sources that can benefit from a plurally parsable dictionary. We also show that ACBR is more flexible than HuBR in practice. Fourth, we use HuBR to reduce the redundancy of the balanced codes generated by Knuth's algorithm. In order to compare the performance of HuBR and ACBR, the corresponding theoretical results for HuBR and ACBR are presented. The results show that the two techniques achieve almost the same redundancy reduction on the balanced codes generated by Knuth's algorithm.
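As a concrete illustration of the multiplicity of encodings that bit recycling exploits, the toy sketch below lists every (offset, length) back-reference an LZ77-style encoder could legally choose for the same phrase; it demonstrates ME only, and is not an implementation of HuBR or ACBR.

    # Minimal illustration of the "multiplicity of encodings" (ME) discussed
    # above: in LZ77, a phrase may match at several earlier offsets, so the
    # encoder has a free choice among equivalent (offset, length) pairs.
    # Bit recycling uses that free choice to convey extra information.
    # This is a toy demonstration, not the HuBR/ACBR algorithms themselves.

    def lz77_matches(data: bytes, pos: int, min_len: int = 3):
        """Every (offset, length) back-reference that could encode the phrase
        starting at `pos` (overlapping matches allowed, as in standard LZ77)."""
        matches = []
        for start in range(pos):
            length = 0
            while (pos + length < len(data)
                   and data[start + length] == data[pos + length]):
                length += 1
            if length >= min_len:
                matches.append((pos - start, length))
        return matches

    text = b"abcabcabcabc"
    # At position 6 the phrase "abcabc" can be encoded from two different offsets:
    print(lz77_matches(text, 6))   # [(6, 6), (3, 6)]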
Abstract:
OBJECTIVES: To determine the incidence, underlying mechanisms and effectiveness of treatment strategies in patients with central airway and pulmonary parenchymal aorto-bronchial fistulation after thoracic endovascular aortic repair (TEVAR). METHODS: Analysis of an international multicentre registry (European Registry of Endovascular Aortic Repair Complications) between 2001 and 2012 with a total caseload of 4680 TEVAR procedures (14 centres). RESULTS: Twenty-six patients with a median age of 70 years (interquartile range: 60-77) (35% female) were identified. The incidence of either central airway (aorto-bronchial) or pulmonary parenchymal (aorto-pulmonary) fistulation (ABPF) in the entire cohort after TEVAR in the study period was 0.56% (central airway 58%, peripheral parenchymal 42%). Atherosclerotic aneurysm formation was the leading indication for TEVAR in 15 patients (58%). The incidence of primary endoleaks after initial TEVAR was n = 10 (38%); of these, 80% were either type I or type III endoleaks. Fourteen patients (54%) developed central left bronchial tree lesions, 11 patients (42%) pulmonary parenchymal lesions and 1 patient (4%) developed a tracheal lesion. The recognized mechanism of ABPF was external compression of the bronchial tree in 13 patients (50%), the majority being due to endoleak formation, followed by ischaemia due to extensive coverage of bronchial feeding arteries in 3 patients (12%). Inflammation and graft erosion accounted for 4 patients (30%) each. Cumulative survival during the entire study period was 39%. Among deaths, 71% were attributed to ABPF. There was no difference in survival in patients having either central airway or pulmonary parenchymal ABPF (33 vs 45%, log-rank P = 0.55). Survival with a radical surgical approach was significantly better when compared with any other treatment strategy in terms of overall survival (63 vs 32% and 63 vs 21% at 1 and 2 years, respectively), as well as in terms of fistula-related survival (63 vs 43% and 63 vs 43% at 1 and 2 years, respectively). CONCLUSIONS: ABPF is a rare but highly lethal complication after TEVAR. The leading mechanism behind ABPF seems to be a continuing external compression of either the bronchial tree or left upper lobe parenchyma. In this setting, persisting or newly developing endoleak formation seems to play a crucial role. Prognosis does not differ in patients with central airway or pulmonary parenchymal fistulation. Radical bronchial or pulmonary parenchymal repair in combination with stent graft removal and aortic reconstruction seems to be the most durable treatment strategy.
Abstract:
This study reports the details of the finite element analysis of eleven shear-critical partially prestressed concrete T-beams having steel fibers over partial or full depth. Prestressed T-beams having shear span to depth ratios of 2.65 and 1.59 that failed in shear have been analyzed using the ‘ANSYS’ program. The ‘ANSYS’ model accounts for nonlinearity such as bond-slip of the longitudinal reinforcement, post-cracking tensile stiffness of the concrete, stress transfer across the cracked blocks of the concrete and load sustenance through the bridging action of steel fibers at the crack interface. The concrete is modeled using ‘SOLID65’, an eight-node brick element capable of simulating the cracking and crushing behavior of brittle materials. The reinforcement, namely deformed bars, prestressing wires and steel fibers, has been modeled discretely using ‘LINK8’, a 3D spar element. The slip between the reinforcement (rebars, fibers) and the concrete has been modeled using ‘COMBIN39’, a nonlinear spring element connecting the nodes of the ‘LINK8’ elements representing the reinforcement and the nodes of the ‘SOLID65’ elements representing the concrete. The ‘ANSYS’ model correctly predicted the diagonal tension failure and shear compression failure of the prestressed concrete beams observed in the experiment. The capability of the model to capture the critical crack regions, loads and deflections for various types of shear failures in prestressed concrete beams has been illustrated.
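For illustration, a COMBIN39 spring is defined by a piecewise-linear force-deflection table; the sketch below shows the kind of bond-slip curve such a table encodes and how the element interpolates between its points. The numerical values are placeholders, not values from this study.

    # Illustrative bond-slip curve of the type a COMBIN39 nonlinear spring
    # is given as a force-deflection table. The (slip, force) pairs below
    # are placeholders, not values from the study.
    import numpy as np

    # slip [mm], force [N] pairs, with monotonically increasing slip
    bond_slip_table = np.array([
        [0.00,    0.0],
        [0.10,  800.0],
        [0.50, 1500.0],
        [1.00, 1800.0],   # plateau: bond strength reached
        [3.00, 1800.0],
    ])

    def spring_force(slip_mm: float) -> float:
        """Piecewise-linear interpolation, as the element does between table points."""
        return float(np.interp(slip_mm, bond_slip_table[:, 0], bond_slip_table[:, 1]))

    print(spring_force(0.25))  # force transferred at 0.25 mm slip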
Abstract:
Image analysis and graphics synthesis can be achieved with learning techniques that work directly on image examples, without physically-based 3D models. In our technique: -- the mapping from novel images to a vector of "pose" and "expression" parameters can be learned from a small set of example images using a function approximation technique that we call an analysis network; -- the inverse mapping from input "pose" and "expression" parameters to output images can be synthesized from a small set of example images and used to produce new images using a similar synthesis network. The techniques described here have several applications in computer graphics, special effects, interactive multimedia and very low bandwidth teleconferencing.
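A toy sketch of the analysis direction described above, with a small radial-basis-function regressor standing in for the paper's analysis network and synthetic data standing in for the example images:

    # Toy sketch of the "analysis" direction: learn a mapping from example
    # images to (pose, expression) parameters with a small RBF regressor
    # (a stand-in for the paper's analysis network; the data is synthetic,
    # not the authors' examples).
    import numpy as np

    rng = np.random.default_rng(0)
    n_examples, n_pixels, n_params = 20, 64, 2       # tiny synthetic "images"
    X = rng.normal(size=(n_examples, n_pixels))      # example images (flattened)
    Y = rng.normal(size=(n_examples, n_params))      # their pose/expression labels

    def rbf_kernel(A, B, sigma=5.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    # Fit RBF coefficients with a small ridge term for numerical stability.
    K = rbf_kernel(X, X)
    coeffs = np.linalg.solve(K + 1e-6 * np.eye(n_examples), Y)

    def analyse(new_image):
        """Map a novel image to estimated pose/expression parameters."""
        return rbf_kernel(new_image[None, :], X) @ coeffs

    print(analyse(X[0]))   # should be close to Y[0]

The synthesis direction would invert the roles of X and Y: the parameters become the inputs and the example images the outputs of an analogous regressor.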
Abstract:
This paper presents a new paradigm for signal reconstruction and superresolution, Correlation Kernel Analysis (CKA), that is based on the selection of a sparse set of bases from a large dictionary of class-specific basis functions. The basis functions that we use are the correlation functions of the class of signals we are analyzing. To choose the appropriate features from this large dictionary, we use Support Vector Machine (SVM) regression and compare this to traditional Principal Component Analysis (PCA) for the tasks of signal reconstruction, superresolution, and compression. The testbed we use in this paper is a set of images of pedestrians. This paper also presents results of experiments in which we use a dictionary of multiscale basis functions and then use Basis Pursuit De-Noising to obtain a sparse, multiscale approximation of a signal. The results are analyzed and we conclude that 1) when used with a sparse representation technique, the correlation function is an effective kernel for image reconstruction and superresolution, 2) for image compression, PCA and SVM have different tradeoffs, depending on the particular metric that is used to evaluate the results, 3) in sparse representation techniques, L_1 is not a good proxy for the true measure of sparsity, L_0, and 4) the L_epsilon norm may be a better error metric for image reconstruction and compression than the L_2 norm, though the exact psychophysical metric should take into account high order structure in images.
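For reference, Basis Pursuit De-Noising as used above is the L1-penalised least-squares problem; the sketch below solves its Lagrangian (Lasso) form with scikit-learn on synthetic data rather than the pedestrian images of the paper.

    # Minimal sketch of the sparse-approximation step described above:
    # Basis Pursuit De-Noising in its Lagrangian (Lasso) form,
    #   min_x  (1/(2n)) * ||D x - s||_2^2 + alpha * ||x||_1   (scikit-learn scaling),
    # run on synthetic data rather than the pedestrian images of the paper.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n_samples, n_atoms = 100, 400            # overcomplete dictionary
    D = rng.normal(size=(n_samples, n_atoms))

    x_true = np.zeros(n_atoms)               # a sparse ground-truth code
    x_true[rng.choice(n_atoms, size=5, replace=False)] = rng.normal(size=5)
    signal = D @ x_true + 0.01 * rng.normal(size=n_samples)

    bpdn = Lasso(alpha=0.01, max_iter=10000).fit(D, signal)
    print("non-zero coefficients:", np.sum(np.abs(bpdn.coef_) > 1e-6))

The L1 penalty is the practical surrogate for the L0 sparsity measure discussed in point 3) of the abstract.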
Abstract:
Image registration is an important component of image analysis used to align two or more images. In this paper, we present a new framework for image registration based on compression. The basic idea underlying our approach is the conjecture that two images are correctly registered when we can maximally compress one image given the information in the other. The contribution of this paper is twofold. First, we show that the image registration process can be dealt with from the perspective of a compression problem. Second, we demonstrate that the similarity metric, introduced by Li et al., performs well in image registration. Two different versions of the similarity metric have been used: the Kolmogorov version, computed using standard real-world compressors, and the Shannon version, calculated from an estimation of the entropy rate of the images.
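A minimal sketch of the Kolmogorov (compressor-based) version of the similarity metric, using zlib as the real-world compressor; the search over candidate alignments that the registration framework wraps around it is omitted.

    # Sketch of the compressor-based (Kolmogorov) version of the similarity
    # metric discussed above, using zlib as the real-world compressor.
    # Lower values mean the two images share more information, i.e. are
    # better registered. The search over transformations is omitted.
    import zlib

    def c(data: bytes) -> int:
        """Compressed length in bytes."""
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized Compression Distance of Li et al."""
        cx, cy, cxy = c(x), c(y), c(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    # Toy usage: identical "images" compress well together, unrelated ones do not.
    a = bytes(range(256)) * 8
    b = bytes(reversed(range(256))) * 8
    print(ncd(a, a), ncd(a, b))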
Abstract:
The images taken by the Heliospheric Imagers (HIs), part of the SECCHI imaging package onboard the pair of STEREO spacecraft, provide information on the radial and latitudinal evolution of the plasma compressed inside corotating interaction regions (CIRs). A plasma density wave imaged by the HI instrument onboard STEREO-B was found to propagate towards STEREO-A, enabling a comparison between simultaneous remote-sensing and in situ observations of its structure to be performed. In situ measurements made by STEREO-A show that the plasma density wave is associated with the passage of a CIR. The magnetic field compressed after the CIR stream interface (SI) is found to have a planar distribution. Minimum variance analysis of the magnetic field vectors shows that the SI is inclined at 54° to the orbital plane of the STEREO-A spacecraft. This inclination of the CIR SI is comparable to the inclination of the associated plasma density wave observed by HI. A small-scale magnetic cloud with a flux rope topology and radial extent of 0.08 AU is also embedded prior to the SI. The pitch-angle distribution of suprathermal electrons measured by the STEREO-A SWEA instrument shows that an open magnetic field topology in the cloud replaced the heliospheric current sheet locally. These observations confirm that HI observes CIRs in difference images when a small-scale transient is caught up in the compression region.
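Minimum variance analysis as applied above takes the eigenvector of the magnetic-field covariance matrix with the smallest eigenvalue as the estimate of the boundary normal; a generic sketch on synthetic field vectors (not the STEREO-A data) is:

    # Generic sketch of minimum variance analysis (MVA) as applied above:
    # the eigenvector of the magnetic-field covariance matrix with the
    # smallest eigenvalue estimates the normal of the stream interface.
    # Synthetic vectors stand in for the STEREO-A time series.
    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic B-field series: large variance in-plane, small along z.
    B = rng.normal(size=(500, 3)) * np.array([5.0, 3.0, 0.3])

    M = np.cov(B, rowvar=False)              # 3x3 magnetic variance matrix
    eigvals, eigvecs = np.linalg.eigh(M)     # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # minimum-variance direction

    # Angle between the estimated normal and the z-axis of this toy frame.
    inclination = np.degrees(np.arccos(abs(normal[2])))
    print("estimated normal:", normal, " inclination [deg]:", inclination)

The inclination of the SI quoted in the abstract is obtained in the same way, but with the STEREO-A orbital plane as the reference.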
Abstract:
We compare the use of plastically compressed collagen gels to conventional collagen gels as scaffolds onto which corneal limbal epithelial cells (LECs) are seeded to construct an artificial corneal epithelium. LECs were isolated from bovine corneas (limbus) and seeded onto either conventional uncompressed or novel compressed collagen gels and grown in culture. Scanning electron microscopy (SEM) results showed that fibers within the uncompressed gel were loose and irregularly ordered, whereas the fibers within the compressed gel were densely packed and more evenly arranged. Quantitative analysis of LEC expansion across the surface of the two gels showed similar growth rates (p > 0.05). Under SEM, the LECs expanded on uncompressed gels showed a rough and heterogeneous morphology, whereas on the compressed gel the cells displayed a smooth and homogeneous morphology. Transmission electron microscopy (TEM) results showed the compressed scaffold to contain collagen fibers of regular diameter and similar orientation, resembling the collagen fibers within the normal cornea. TEM and light microscopy also showed that cell–cell and cell–matrix attachment, stratification, and cell density were superior in LECs expanded upon compressed collagen gels. This study demonstrated that the compressed collagen gel is an excellent biomaterial scaffold highly suited to the construction of an artificial corneal epithelium and a significant improvement upon conventional collagen gels.