816 results for compression refrigeration system

at Queensland University of Technology - ePrints Archive


Relevance:

100.00%

Abstract:

There are an increasing number of compression systems available for the treatment of venous leg ulcers, and limited evidence on the relative effectiveness of these systems. The purpose of this study was to conduct a randomised controlled trial comparing the effectiveness of a 4-layer compression bandage system with Class 3 compression hosiery on healing and quality of life in patients with venous leg ulcers. Data were collected from 103 participants on demographics, health, ulcer status, treatments, pain, depression and quality of life for 24 weeks. After 24 weeks, 86% of the 4-layer bandage group and 77% of the hosiery group were healed (p=0.24). Median time to healing for the bandage group was 10 weeks, compared with 14 weeks for the hosiery group (p=0.018). Cox proportional hazards regression found participants in the 4-layer system were 2.1 times (95% CI 1.2–3.5) more likely to heal than those in hosiery, while longer ulcer duration, larger ulcer area and higher depression scores significantly delayed healing. No differences between groups were found in quality of life or pain measures. Findings indicate these systems were equally effective in healing patients by 24 weeks; however, the 4-layer system may produce a more rapid response.
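As a hedged illustration of the survival analysis described above, the sketch below fits a Cox proportional hazards model with the covariates named in the abstract, using the Python lifelines library. The file name and column names are hypothetical; only the modelling approach comes from the study.

```python
# Minimal sketch of the time-to-healing analysis, assuming one row per
# participant. Column names are hypothetical stand-ins for the covariates
# listed in the abstract.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("ulcer_trial.csv")  # hypothetical trial dataset

cph = CoxPHFitter()
cph.fit(
    df[["weeks_observed", "healed", "four_layer",
        "ulcer_duration", "ulcer_area", "depression_score"]],
    duration_col="weeks_observed",  # follow-up time, censored at 24 weeks
    event_col="healed",             # 1 = ulcer healed, 0 = censored
)
cph.print_summary()  # hazard ratio for four_layer ~ the reported 2.1 (95% CI 1.2-3.5)
```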

Relevance:

40.00%

Abstract:

H. Simon and B. Szörényi have found an error in the proof of Theorem 52 of “Shifting: One-inclusion mistake bounds and sample compression”, Rubinstein et al. (2009). In this note we provide a corrected proof of a slightly weakened version of this theorem. Our new bound on the density of one-inclusion hypergraphs is again in terms of the capacity of the multilabel concept class. Simon and Szörényi have recently proved an alternate result in Simon and Szörényi (2009).

Relevance:

40.00%

Abstract:

A numerical time-dependent model of an active magnetic regenerator (AMR) was developed for cooling in the kilowatt range. Earlier numerical models have mostly targeted cooling powers around 0.4 kW; in contrast, this paper reports the applicability of magnetic refrigeration to the 50 kW range. A packed-bed active magnetic regenerator was modelled, and the influence of geometric and operating parameters was studied for different geometries. The pressure drop as a function of AMR bed length and particle diameter was also studied. High cooling power and coefficient of performance (COP) were achieved by optimizing the diameter of the magnetocaloric powder particles and the operating frequency. The optimum operating conditions of the AMR for a cooling capacity of 50 kW were determined for a temperature span of 15 K. The predicted COP was found to be ∼6, making it an attractive alternative to vapour compression systems.
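For context, the COP quoted above is the ratio of cooling power to drive power; taking the reported figures at face value gives a rough back-calculation of the required input power (an illustrative reading, not a number from the paper):

$$\mathrm{COP} = \frac{\dot{Q}_c}{\dot{W}} \quad\Longrightarrow\quad \dot{W} = \frac{\dot{Q}_c}{\mathrm{COP}} \approx \frac{50\ \mathrm{kW}}{6} \approx 8.3\ \mathrm{kW}.$$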

Relevance:

30.00%

Abstract:

Fire design is an essential element of the overall design procedure for structural steel members and systems. Conventionally, the fire rating of load-bearing stud wall systems made of light gauge steel frames (LSF) is based on approximate prescriptive methods developed from limited fire tests. This design approach is limited to the standard wall configurations used by industry, and increased fire rating is provided simply by adding more plasterboard to the stud walls. This is not an acceptable situation, as it not only inhibits innovation and structural and cost efficiencies but also casts doubt over the fire safety of these light gauge steel stud wall systems. Hence a detailed fire research study into the performance and effectiveness of a recently developed innovative composite panel wall system was undertaken at Queensland University of Technology using both full-scale fire tests and numerical studies. Experimental results for LSF walls using the new composite panels under axial compression load have shown improvements in fire performance and fire resistance rating. Numerical analyses are currently being undertaken using the finite element program ABAQUS. Measured temperature profiles of the studs are used in the numerical models, and the results are calibrated against full-scale test results. The validated model will be used in a detailed parametric study with the aim of developing suitable design rules within current cold-formed steel structures and fire design standards. This paper will present the results of experimental and numerical investigations into the structural and fire behaviour of light gauge steel stud walls protected by the new composite panel, and will demonstrate the improvements provided by the new composite panel system in comparison to traditional wall systems.
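The modelling step described above feeds measured stud temperatures into the finite element model as time-dependent input. A minimal preprocessing sketch is shown below, assuming thermocouple readings stored as (time, temperature) pairs; the file name, sampling and step size are hypothetical.

```python
# Resample a measured stud temperature profile onto the FE solver's time
# steps. Input format, file name and step size are assumptions.
import numpy as np

# (time [min], temperature [degC]) pairs from a thermocouple on the stud
t_meas, T_meas = np.loadtxt("stud_hot_flange.csv", delimiter=",", unpack=True)

solver_times = np.arange(0.0, t_meas[-1], 0.5)      # analysis time steps
T_interp = np.interp(solver_times, t_meas, T_meas)  # linear interpolation

# The resulting (time, temperature) table can then be supplied to the
# ABAQUS model as a predefined temperature field for the stud region.
for t, T in zip(solver_times, T_interp):
    print(f"{t:6.1f}, {T:7.1f}")
```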

Relevance:

30.00%

Abstract:

Adiabatic compression testing of components in gaseous oxygen is a test method that is utilized worldwide and is commonly required to qualify a component for ignition tolerance under its intended service. This testing is required by many industry standards organizations and government agencies; however, a thorough evaluation of the test parameters and test system influences on the thermal energy produced during the test has not yet been performed. This paper presents a background for adiabatic compression testing and discusses an approach to estimating potential differences in the thermal profiles produced by different test laboratories. A "Thermal Profile Test Fixture" (TPTF) is described that is capable of measuring and characterizing the thermal energy of a typical pressure shock produced by any test system. The test systems at Wendell Hull & Associates, Inc. (WHA) in the USA and at the BAM Federal Institute for Materials Research and Testing in Germany are compared in this manner, and some of the data obtained are presented. The paper also introduces a new way of comparing the test method to idealized processes in order to perform system-by-system comparisons. Thus, the paper introduces an "Idealized Severity Index" (ISI) of the thermal energy to characterize a rapid pressure surge. From the TPTF data a "Test Severity Index" (TSI) can also be calculated, so that the thermal energies developed by different test systems can be compared to each other and to the ISI for the equivalent isentropic process. Finally, a "Service Severity Index" (SSI) is introduced to characterize the thermal energy of actual service conditions. This paper is the second in a series of publications planned on the subject of adiabatic compression testing.
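The "equivalent isentropic process" referenced above is, for an ideal gas, the textbook reversible adiabatic compression. As a hedged illustration (a standard formula with example numbers, not values from the paper), the ideal final temperature of a pressure surge is:

$$T_2 = T_1\left(\frac{P_2}{P_1}\right)^{(\gamma-1)/\gamma}, \qquad \gamma_{\mathrm{O_2}} \approx 1.40,$$

so ideally compressing oxygen at 293 K from 0.1 MPa to 20 MPa gives $T_2 \approx 293 \times 200^{0.286} \approx 1330\ \mathrm{K}$, which is why rapid pressurization is such a potent ignition mechanism and why real test systems are benchmarked against this ideal.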

Relevance:

30.00%

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision.

To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed, based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed: no assumption about the lattice parameters is made, and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage.

Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented.

The quality of the reconstructed images is important for reliable identification, so enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
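As a hedged sketch of the coefficient-modelling step described above: the generalized Gaussian shape parameter is often estimated from the ratio of the mean absolute value to the root mean square of the subband coefficients. The snippet below uses that standard moment-ratio estimator, not necessarily the exact least-squares formulation of the thesis.

```python
# Fit the shape parameter of a generalized Gaussian distribution (GGD)
# to wavelet subband coefficients via the moment-ratio method.
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def ggd_shape(coeffs):
    """Estimate the GGD shape parameter beta from coefficient statistics."""
    c = np.asarray(coeffs, dtype=float)
    rho = np.mean(np.abs(c)) / np.sqrt(np.mean(c**2))   # E|x| / sqrt(E[x^2])
    # Solve M(beta) = Gamma(2/b) / sqrt(Gamma(1/b) Gamma(3/b)) = rho
    M = lambda b: gamma(2.0 / b) / np.sqrt(gamma(1.0 / b) * gamma(3.0 / b)) - rho
    return brentq(M, 0.1, 10.0)

# Sanity check: a Laplacian source is a GGD with beta = 1
rng = np.random.default_rng(0)
print(ggd_shape(rng.laplace(size=100_000)))  # ~1.0
```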

Relevance:

30.00%

Abstract:

This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product-code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook.

In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, yielding an improved overall compression rate. An approach using a statistical model of the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20 millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have also been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology.

The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification. The motivation for this aspect of the research arose from a need to simultaneously preserve speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved, such as mobile communications where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective, maximizing the identification rate, the starting points are quite different: the speech material used for training the identification algorithm may or may not be available in compressed form, while the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model based on compressed speech are put forward.
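A minimal sketch of product-code VQ is given below: the spectral vector is split into subvectors, each quantized against its own codebook, and the bit cost is the sum of the per-part index sizes. The 3-way split and 256-entry codebooks are illustrative assumptions, chosen so the total matches the 24 bits per 20 ms frame cited above.

```python
# Product-code vector quantization of a spectral parameter vector.
import numpy as np

def pcvq_encode(x, codebooks, splits):
    """Quantize each subvector of x independently; return one index per part."""
    idxs = []
    for (lo, hi), cb in zip(splits, codebooks):
        d = np.sum((cb - x[lo:hi]) ** 2, axis=1)  # squared error to codewords
        idxs.append(int(np.argmin(d)))            # nearest-neighbour search
    return idxs

rng = np.random.default_rng(1)
splits = [(0, 3), (3, 6), (6, 10)]                # 10-dim LSF-like vector
codebooks = [rng.normal(size=(256, hi - lo)) for lo, hi in splits]

x = rng.normal(size=10)
print(pcvq_encode(x, codebooks, splits))
bits = sum(int(np.log2(len(cb))) for cb in codebooks)    # 8 + 8 + 8 = 24
print(f"{bits} bits / 20 ms frame = {bits / 0.02:.0f} bit/s")
```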

Relevance:

30.00%

Abstract:

In the field of tissue engineering, new polymers are needed to fabricate scaffolds with specific properties depending on the targeted tissue. This work aimed at designing and developing a 3D scaffold with variable mechanical strength, a fully interconnected porous network, and controllable hydrophilicity and degradability. For this, a desktop-robot-based melt-extrusion rapid prototyping technique was applied to a novel tri-block co-polymer, namely poly(ethylene glycol)-block-poly(ε-caprolactone)-block-poly(DL-lactide), PEG-PCL-P(DL)LA. This co-polymer was melted by electrical heating and directly extruded using computer-controlled rapid prototyping by means of compressed purified air to build porous scaffolds. Various lay-down patterns (0/30/60/90/120/150°, 0/45/90/135°, 0/60/120° and 0/90°) were produced by appropriate positioning of the robotic control system. Scanning electron microscopy and micro-computed tomography showed that the 3D scaffold architectures were honeycomb-like, with completely interconnected and controlled channel characteristics. Compression tests were performed, and the data obtained agreed well with the typical behavior of a porous material undergoing deformation. Preliminary cell response to the as-fabricated scaffolds was studied with primary human fibroblasts. The results demonstrated the suitability of the process and the cell biocompatibility of the polymer, two important properties among the many required for effective clinical use and efficient tissue-engineering scaffolding.
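The lay-down patterns listed above define the strand orientation of each successive layer; the deposition angle simply cycles through the pattern. A small sketch of that scheduling follows (pattern handling only; the robot control itself is not shown):

```python
# Per-layer deposition angles for a given lay-down pattern: each new layer
# is rotated by the next angle in the pattern, cycling back to the start.
from itertools import cycle, islice

def layer_angles(pattern_deg, n_layers):
    """Return the strand orientation (degrees) for each deposited layer."""
    return list(islice(cycle(pattern_deg), n_layers))

print(layer_angles([0, 60, 120], 7))  # [0, 60, 120, 0, 60, 120, 0]
print(layer_angles([0, 90], 5))       # [0, 90, 0, 90, 0]
```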

Relevance:

30.00%

Abstract:

Purpose – This paper aims to present a novel rapid prototyping (RP) fabrication method and preliminary characterization for chitosan scaffolds. Design – A desktop rapid prototyping robot dispensing (RPBOD) system has been developed to fabricate scaffolds for tissue engineering (TE) applications. The system is a computer-controlled four-axis machine with a multiple-dispenser head. Neutralization of the acetic acid by the sodium hydroxide causes precipitation, forming a gel-like chitosan strand. The scaffold properties were characterized by scanning electron microscopy, porosity calculation and compression testing. An example of the fabrication of a freeform hydrogel scaffold is demonstrated. The required geometric data for the freeform scaffold were obtained from CT-scan images, and the dispensing path control data were converted from its volume model. The applications of the scaffolds are discussed based on their potential for TE. Findings – It is shown that the RPBOD system can be interfaced with imaging techniques and computational modeling to produce scaffolds which can be customized in overall size and shape, allowing tissue-engineered grafts to be tailored to specific applications or even to individual patients. Research limitations/implications – Important challenges for further research are the incorporation of growth factors, as well as cell seeding, into the 3D dispensing plotting materials. Improvements regarding the mechanical properties of the scaffolds are also necessary. Originality/value – One of the important aspects of TE is the design of scaffolds. For customized TE, it is essential to be able to fabricate 3D scaffolds of various geometric shapes in order to repair tissue defects. RP or solid free-form fabrication techniques hold great promise for designing 3D customized scaffolds; yet traditional cell-seeding techniques may not provide enough cell mass for larger constructs. This paper presents a novel attempt to fabricate 3D scaffolds using hydrogels which, in the future, can be combined with cells.
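The porosity calculation mentioned above is commonly done gravimetrically: the scaffold's apparent density (dry mass over apparent volume) is compared with the bulk density of the solid material. A sketch under that assumption follows; the numbers, including the chitosan bulk density, are illustrative, not values from the paper.

```python
# Gravimetric porosity: 1 - (apparent density / bulk density of material).
def porosity(mass_g, apparent_volume_cm3, bulk_density_g_cm3):
    apparent_density = mass_g / apparent_volume_cm3
    return 1.0 - apparent_density / bulk_density_g_cm3

# Hypothetical measurement: 0.12 g scaffold occupying 1.0 cm^3,
# with an assumed chitosan bulk density of ~1.34 g/cm^3.
print(f"{porosity(0.12, 1.0, 1.34):.1%}")  # ~91.0% porous
```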

Relevance:

30.00%

Abstract:

We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this paper is a density bound of $n \binom{n-1}{\le d-1} / \binom{n}{\le d} < d$, which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic topological property of maximum classes of VC-dimension d: they are d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth (the second part of a conjectured proof of correctness for Peeling) that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.
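Written out with the standard notation for partial binomial sums (a clean rendering of the bound, not verbatim from the paper):

$$\frac{n\binom{n-1}{\le d-1}}{\binom{n}{\le d}} < d, \qquad \text{where}\ \binom{n}{\le d} = \sum_{i=0}^{d}\binom{n}{i}.$$

As in Haussler et al., dividing the one-inclusion graph density by n bounds the expected risk of the one-inclusion strategy, which is how the sharper density bound yields the improved mistake bound mentioned above.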

Relevance:

30.00%

Abstract:

Buildings are among the most significant infrastructure in modern societies. The construction and operation of modern buildings consume a considerable amount of energy and materials, and therefore contribute significantly to climate change. In order to reduce the environmental impact of buildings, various green building rating tools have been developed. In this paper, energy use in the building sector in Australia and worldwide is first reviewed. This is followed by discussion of the development and scope of various green building rating tools, with a particular focus on the Green Star rating scheme developed in Australia. It is shown that Green Star has significant implications for almost every aspect of the design of HVAC systems, including the selection of air handling and distribution systems, fluid handling systems, refrigeration systems, heat rejection systems and building control systems.

Relevance:

30.00%

Abstract:

Alternative fuels and injection technologies are a necessary component of particulate emission reduction strategies for compression ignition engines. Consequently, this study undertakes a physicochemical characterization of diesel particulate matter (DPM) for engines equipped with alternative injection technologies (direct injection and common rail) and alternative fuels (ultra low sulfur diesel, a 20% biodiesel blend, and a synthetic diesel). Particle physical properties were addressed by measuring particle number size distributions, and particle chemical properties were addressed by measuring polycyclic aromatic hydrocarbons (PAHs) and reactive oxygen species (ROS). Particle volatility was determined by passing the polydisperse size distribution through a thermodenuder set to 300 °C. The results from this study, conducted over a four-point test cycle, showed that both fuel type and injection technology have an impact on particle emissions, but injection technology was the more important factor. Significant particle number emission reductions (54%–84%) were achieved at half load operation (ranging from a 1% increase to a 43% decrease at full load) with the common rail injection system; however, the particles had a significantly higher PAH fraction (by a factor of 2 to 4) and higher ROS concentrations (by a factor of 6 to 16), both expressed on a test-cycle averaged basis. The results of this study have significant implications for the health effects of DPM emissions from both direct injection and common rail engines utilizing various alternative fuels.
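The "test-cycle averaged basis" above means per-mode results are combined with the cycle's mode weighting factors. A toy sketch of that averaging follows; the weights and emission values are hypothetical, since the abstract does not give them.

```python
# Test-cycle-averaged emission value from per-mode measurements.
import numpy as np

weights = np.array([0.25, 0.25, 0.25, 0.25])   # assumed equal mode weights
pah_by_mode = np.array([2.1, 3.4, 1.8, 2.7])   # hypothetical per-mode values

print(f"cycle average: {np.average(pah_by_mode, weights=weights):.2f}")
```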

Relevance:

30.00%

Abstract:

Compression ignition (CI) engine design is subject to many constraints, which presents a multi-criteria optimisation problem that the engine researcher must solve. In particular, the modern CI engine must not only be efficient, but must also deliver low gaseous, particulate and life cycle greenhouse gas emissions so that its impact on urban air quality, human health and global warming is minimised. Consequently, this study undertakes a multi-criteria analysis which seeks to identify alternative fuels, injection technologies and combustion strategies that could potentially satisfy these CI engine design constraints. Three datasets are analysed with the Preference Ranking Organization Method for Enrichment Evaluations and Geometrical Analysis for Interactive Aid (PROMETHEE-GAIA) algorithm to explore the impact of (1) an ethanol fumigation system; (2) alternative fuels (20% biodiesel and synthetic diesel) and alternative injection technologies (mechanical direct injection and common rail injection); and (3) various biodiesel fuels made from three feedstocks (soy, tallow and canola) tested at several blend percentages (20-100%), on the resulting emissions and efficiency profile of the various test engines. The results show that moderate ethanol substitutions (~20% by energy) at moderate load, high percentage soy blends (60-100%), and alternative fuels (biodiesel and synthetic diesel) provide an efficiency and emissions profile that yields the most "preferred" solutions to this multi-criteria engine design problem. Further research is, however, required to reduce reactive oxygen species (ROS) emissions with alternative fuels, and to deliver technologies that do not significantly reduce the median diameter of particle emissions.
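For readers unfamiliar with PROMETHEE, the sketch below implements the core of PROMETHEE II (net outranking flows) with the simple "usual" preference function; the alternatives, criteria, weights and scores are hypothetical placeholders, not data from the study, and GAIA's visual projection is omitted.

```python
# PROMETHEE II: rank alternatives by net outranking flow.
import numpy as np

def promethee_ii(X, weights, maximize):
    """Net flows for an (alternatives x criteria) score matrix X."""
    n, m = X.shape
    pi = np.zeros((n, n))                 # aggregated preference indices
    for j in range(m):
        d = X[:, j][:, None] - X[:, j][None, :]
        if not maximize[j]:
            d = -d                        # lower is better for this criterion
        pi += weights[j] * (d > 0)        # "usual" criterion: step preference
    phi_plus = pi.sum(axis=1) / (n - 1)   # how strongly a beats the others
    phi_minus = pi.sum(axis=0) / (n - 1)  # how strongly the others beat a
    return phi_plus - phi_minus           # higher net flow = more preferred

# Hypothetical example: 3 fuels scored on efficiency (max) and PM, NOx (min)
X = np.array([[0.42, 0.15, 5.1],
              [0.40, 0.08, 4.3],
              [0.38, 0.05, 4.8]])
print(promethee_ii(X, np.array([0.4, 0.3, 0.3]), [True, False, False]))
```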

Relevance:

30.00%

Abstract:

In this study, an LPG fumigation system was fitted to a Euro III compression ignition (CI) engine to explore its impact on performance and on gaseous and particulate emissions. LPG was introduced to the intake air stream (as a secondary fuel) using a low pressure fuel injector situated upstream of the turbocharger. LPG substitutions were test-mode dependent, but varied in the range of 14-29% by energy. The engine was tested over a 5-point test cycle using ultra low sulphur diesel (ULSD), and a low and a high LPG substitution at each test mode. The results show that LPG fumigation shifts the combustion towards pre-mixed mode, as increases in the peak combustion pressure (and the rate of pressure rise) were observed in most tests. The emissions results show decreases in nitric oxide (NO) and particulate matter (PM2.5) emissions; however, very significant increases in carbon monoxide (CO) and hydrocarbon (HC) emissions were observed. A more detailed investigation of the particulate emissions showed that the number of particles emitted was reduced with LPG fumigation at all test settings, apart from mode 6 of the ECE R49 test cycle. Furthermore, the particles emitted generally had a slightly larger median diameter with LPG fumigation, and a smaller semi-volatile fraction relative to ULSD. Overall, the results show that, with some modifications, LPG fumigation systems could be used to extend ULSD supplies without adversely impacting engine performance and emissions.
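The "substitution by energy" figures above are the LPG share of total fuel energy. A short sketch of that arithmetic follows; the flow rates are hypothetical and the heating values are typical textbook figures, not numbers from the paper.

```python
# LPG energy substitution = LPG fuel energy / total fuel energy.
def lpg_substitution_by_energy(m_diesel_kg_h, m_lpg_kg_h,
                               lhv_diesel_mj_kg=42.8, lhv_lpg_mj_kg=46.0):
    e_diesel = m_diesel_kg_h * lhv_diesel_mj_kg
    e_lpg = m_lpg_kg_h * lhv_lpg_mj_kg
    return e_lpg / (e_diesel + e_lpg)

# e.g. 10 kg/h diesel supplemented with 2 kg/h LPG:
print(f"{lpg_substitution_by_energy(10.0, 2.0):.1%}")  # ~17.7%, within 14-29%
```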