235 results for analogy calculation
Abstract:
Appropriate pipe insulation on domestic, pumped-storage (split) solar water heating systems forms an integral part of the energy conservation measures of well-engineered systems. However, its importance over the life of the system is often overlooked. This study outlines the findings of computer modelling to quantify the energy and cost savings achieved by using pipe insulation between the collector and storage tank. Systems with a 270-litre storage tank, together with either selectively surfaced flat-plate collectors (4 m2 area) or 30 evacuated-tube collectors, were used. Insulation thicknesses of 13 mm and 15 mm, pipe runs (both ways) of 10, 15 and 20 metres, and both electric and gas boosting of systems were all considered. The TRNSYS program was used to model system performance at a representative city in each of the six climate zones for Australia and New Zealand, according to AS/NZS 4234 (Heated Water Systems – Calculation of Energy Consumption) and the ORER RECs calculation method. The results show that energy savings from pipe insulation are very significant, even in mild climates such as Rockhampton. Across all climate zones, savings ranged from 0.16 to 3.5 GJ per system per year, or about 2 to 23 percent of the annual load. There is very little advantage in increasing the insulation thickness from 13 to 15 mm. For electricity at 19 c/kWh and gas at 2 c/MJ, cost savings of between $27 and $100 per year are achieved across the climate zones. Both energy and cost savings would increase in colder climates and with increased system size, solar contribution and water temperatures. The pipe insulation substantially improves the solar contribution (or fraction) and Renewable Energy Certificates (RECs), as well as giving small savings in circulating-pump running costs in milder climates. Solar contribution increased by up to 23 percentage points, and RECs by over 7, in some cases.
The study highlights the need to install, and maintain the integrity of, appropriate pipe insulation on solar water heaters over their lifetime in Australia and New Zealand.
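The cost-savings arithmetic quoted above can be sketched directly from the tariffs given in the abstract (19 c/kWh for electricity, 2 c/MJ for gas); the example saving value in the final comment is illustrative, not one of the study's reported results.

```python
# Hedged sketch of the cost-savings arithmetic implied by the abstract.
# The tariffs (19 c/kWh electricity, 2 c/MJ gas) are taken from the text;
# the 1 GJ/year figure below is purely illustrative.

KWH_PER_GJ = 1000.0 / 3.6  # 1 kWh = 3.6 MJ, so 1 GJ is about 277.8 kWh

def electric_saving_dollars(energy_gj: float, cents_per_kwh: float = 19.0) -> float:
    """Annual cost saving for electrically boosted systems."""
    return energy_gj * KWH_PER_GJ * cents_per_kwh / 100.0

def gas_saving_dollars(energy_gj: float, cents_per_mj: float = 2.0) -> float:
    """Annual cost saving for gas-boosted systems (1 GJ = 1000 MJ)."""
    return energy_gj * 1000.0 * cents_per_mj / 100.0

# A 1 GJ/year reduction is worth about $52.78 with electric boosting
# and $20.00 with gas boosting at these tariffs.
```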
Abstract:
Small element spacing in compact arrays results in strong mutual coupling between array elements. Performance degradation associated with the strong coupling can be avoided through the introduction of a decoupling network consisting of interconnected reactive elements. We present a systematic design procedure for decoupling networks of symmetrical arrays with more than three elements and characterized by circulant scattering parameter matrices. The elements of the decoupling network are obtained through repeated decoupling of the characteristic eigenmodes of the array, which allows the calculation of element values using closed-form expressions.
Abstract:
Reduced element spacing in antenna arrays gives rise to strong mutual coupling between array elements and may cause significant performance degradation. These effects can be alleviated by introducing a decoupling network consisting of interconnected reactive elements. The existing design approach for the synthesis of a decoupling network for circulant symmetric arrays allows calculation of element values using closed-form expressions, but the resulting circuit configuration requires multilayer technology for implementation. In this paper, a new structure for the decoupling of circulant symmetric arrays of more than four elements is presented. Element values are no longer obtained in closed form, but the resulting circuit is much simpler and can be implemented on a single layer.
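The closed-form eigenmode decoupling that both abstracts above build on rests on a standard property: a circulant scattering matrix is diagonalised by the DFT, so its characteristic eigenmodes are fixed phase-mode vectors regardless of the coupling values. A minimal numpy sketch, with made-up S-parameter values:

```python
import numpy as np

# Any N x N circulant matrix is diagonalised by the DFT matrix, so the
# characteristic eigenmodes of a circulant scattering matrix are the
# phase-mode vectors F[:, k] independent of the coupling values.
# The S-parameter entries below are hypothetical, chosen only so that
# the matrix is circulant and symmetric (reciprocal array).

N = 4
first_row = np.array([0.1, 0.4j, 0.05, 0.4j])          # hypothetical S row
S = np.array([np.roll(first_row, k) for k in range(N)])  # circulant S matrix

F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT matrix (columns = modes)
eigvals = np.fft.fft(first_row)          # modal reflection coefficients

# S acts diagonally on each phase mode: S @ F[:, k] == eigvals[k] * F[:, k]
for k in range(N):
    assert np.allclose(S @ F[:, k], eigvals[k] * F[:, k])
```

Decoupling each mode independently (matching each modal reflection coefficient) is what allows the closed-form element values mentioned in the first abstract.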
Abstract:
The Dark Ages are generally held to be a time of technological and intellectual stagnation in western development. But that is not necessarily the case. Indeed, from a certain perspective, nothing could be further from the truth. In this paper we draw historical comparisons, focusing especially on the thirteenth and fourteenth centuries, between the technological and intellectual ruptures in Europe during the Dark Ages, and those of our current period. Our analysis is framed in part by Harold Innis's notion of "knowledge monopolies". We give an overview of how these were affected by new media, new power struggles, and new intellectual debates that emerged in thirteenth- and fourteenth-century Europe. The historical salience of our focus may seem elusive. Our world has changed so much, and history seems to be an increasingly far-from-favoured method for understanding our own period and its future potentials. Yet our seemingly distant historical focus provides some surprising insights into the social dynamics that are at work today: the fracturing of established knowledge and power bases; the democratisation of certain "sacred" forms of communication and knowledge, and, conversely, the "sacrosanct" appropriation of certain vernacular forms; challenges and innovations in social and scientific method and thought; the emergence of social world-shattering media practices; struggles over control of vast networks of media and knowledge monopolies; and the enclosure of public discursive and social spaces for singular, manipulative purposes. The period between the eleventh and fourteenth centuries in Europe prefigured what we now call the Enlightenment, perhaps more so than any other period before or after; it shaped what the Enlightenment was to become. We claim no knowledge of the future here.
But in the "post-everything" society, where history is as much up for sale as it is for argument, we argue that our historical perspective provides a useful analogy for grasping the wider trends in the political economy of media, and for recognising clear and actual threats to the future of the public sphere in supposedly democratic societies.
Abstract:
Many optical networks are limited in speed and processing capability by the necessity for the optical signal to be converted to an electrical signal and back again. In addition, electronically manipulated interconnects in an otherwise optical network lead to overly complicated systems. Optical spatial solitons are optical beams that propagate without spatial divergence. They are capable of phase-dependent interactions and have therefore been extensively researched as suitable all-optical interconnects for over 20 years. However, they require additional external components: initially, high-voltage power sources were required; several years later, high-power background illumination replaced the high voltage. These additional components have remained the greatest hurdle to realising the applications of the interactions of spatial optical solitons as all-optical interconnects. Recently, however, self-focusing was observed in an otherwise self-defocusing photorefractive crystal. This observation raises the possibility of the formation of soliton-like fields in unbiased self-defocusing media, without the need for an applied electric field or background illumination. This thesis will present an examination of the possibility of the formation of soliton-like, low-divergence fields in unbiased self-defocusing photorefractive media. The optimal incident beam and photorefractive media parameters for the formation of these fields will be presented, together with an analytical and numerical study of the effect of these parameters. In addition, a preliminary examination of the interactions of two of these fields will be presented. In order to complete an analytical examination of the field propagating through the photorefractive medium, the spatial profile of the beam after propagation through the medium was determined. For a low-power solution, it was found that an incident Gaussian field maintains its Gaussian profile as it propagates.
This allowed the beam to be described at all times by a single complex beam parameter, while also allowing simple analytical solutions to the appropriate wave equation. An analytical model was developed to describe the effect of the photorefractive medium on the Gaussian beam. Using this model, expressions for the required intensity-dependent change in both the real and imaginary components of the refractive index were found. Numerical investigation showed that, under certain conditions, a low-power Gaussian field could propagate in self-defocusing photorefractive media with divergence of approximately 0.1% per metre. An investigation into the parameters of a Ce:BaTiO3 crystal showed that the intensity-dependent absorption is wavelength dependent and can in fact transition to intensity-dependent transparency. Thus, with careful wavelength selection, the required intensity-dependent changes in both the real and imaginary components of the refractive index for the formation of a low-divergence Gaussian field are physically realisable. A theoretical model incorporating the dependence of the change in the real and imaginary components of the refractive index on propagation distance was developed. Analytical and numerical results from this model are consistent with the results from the previous model, showing low-divergence fields with divergence of less than 0.003% over the propagation length of the photorefractive medium. In addition, this approach also confirmed the previously mentioned self-focusing effect of the self-defocusing media, and provided an analogy to a negative-index GRIN lens with an intensity-dependent focal length. Experimental results supported the findings of the numerical analysis. Two low-divergence fields were found to possess the ability to interact in a Ce:BaTiO3 crystal in a soliton-like fashion. The strength of these interactions was found to depend on the degree of divergence of the individual beams.
This research found that low-divergence fields are possible in unbiased self-defocusing photorefractive media, and that soliton-like interactions between two of these fields are possible. However, in order for these types of fields to be used in future all-optical interconnects, the manipulation of these interactions, together with the ability of these fields to guide a second beam at a different wavelength, must be investigated.
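The complex-beam-parameter description used in the analysis above is the standard Gaussian-beam q-parameter formalism. The sketch below uses that textbook formalism with illustrative wavelength and waist values; it is not the thesis's photorefractive model.

```python
import math

# Standard Gaussian-beam bookkeeping with the complex beam parameter
# 1/q = 1/R - i*lambda/(pi*w^2); free-space propagation is q -> q + z.
# Wavelength and waist values are illustrative, not taken from the thesis.

wavelength = 633e-9                       # m (hypothetical HeNe line)
w0 = 0.5e-3                               # beam waist radius, m
z_R = math.pi * w0 ** 2 / wavelength      # Rayleigh range

def waist_after(z: float) -> float:
    """Beam radius after propagating distance z from the waist."""
    return w0 * math.sqrt(1.0 + (z / z_R) ** 2)

# Far-field divergence half-angle of an ordinary (non-soliton) Gaussian beam;
# a soliton-like field suppresses this spreading.
theta = wavelength / (math.pi * w0)       # rad
```

Because the field stays Gaussian, the single parameter q captures both the spot size and the wavefront curvature at every plane, which is what makes the simple analytical treatment possible.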
Abstract:
Phospholipid (PL) molecules form the main structure of the membrane that prevents direct contact of opposing articular cartilage layers. In this paper we conceptualise articular cartilage as a giant reverse micelle (GRM) in which the highly hydrated three-dimensional network of phospholipids is electrically charged and able to resist compressive forces during joint movement, and hence loading. Using this hypothetical base, we describe a hydrophilic-hydrophilic (HL-HL) biopair model of joint lubrication by contacting cartilages, whose mechanism relies on lamellar cushioning. To demonstrate the viability of our concept, the electrokinetic properties of the membranous layer on the articular surface were determined by measuring, via microelectrophoresis, the adsorption of H+, OH-, Na+ and Cl- ions on the phospholipid membranes of liposomes, leading to the calculation of the effective surface charge density. The surface charge density was found to be -0.08 ± 0.002 C m-2 (mean ± S.D.) for phospholipid membranes in 0.155 M NaCl solution at physiological pH. This value was approximately five times less than that measured in 0.01 M NaCl. The addition of synovial fluid (SF) to the 0.155 M NaCl solution reduced the surface charge density by 30%, which was attributed to the binding of synovial fluid macromolecules to the phospholipid membrane. Our experiments show that particles charge and interact strongly with the polar core of a reverse micelle (RM). We demonstrate that particles can have strong electrostatic interactions when ions and macromolecules are solubilised by the RM. Since ions are solubilised by the reverse micelle, the surface entropy influences the change in the charge density of the phospholipid membrane on cartilage surfaces. Reverse micelles stabilise ions, maintaining equilibrium; their surface charges contribute to the stability of particles while providing additional screening for electrostatic processes. © 2008 Elsevier Ireland Ltd. All rights reserved.
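For readers unfamiliar with the screening argument, the standard Grahame equation for a 1:1 electrolyte relates surface charge density, surface potential and bulk salt concentration. The sketch below uses a hypothetical surface potential and is not the authors' calculation; it only shows how strongly the inferred charge-potential relationship depends on ionic strength (the measured effective charge also depends on how the potential itself responds to salt and to ion binding).

```python
import math

# Hedged illustration (not the paper's method): the Grahame equation for a
# 1:1 electrolyte, sigma = sqrt(8*eps*eps0*n*kT) * sinh(e*psi0 / (2*kT)).
# The surface potential value used below is hypothetical.

EPS0 = 8.854e-12          # vacuum permittivity, F/m
EPS_R = 78.5              # relative permittivity of water
KB_T = 1.381e-23 * 298.0  # thermal energy at ~25 C, J
E = 1.602e-19             # elementary charge, C
N_A = 6.022e23            # Avogadro's number

def grahame_sigma(psi0_volts: float, conc_molar: float) -> float:
    """Surface charge density (C/m^2) for surface potential psi0 (V)."""
    n = conc_molar * 1000.0 * N_A   # ion number density per m^3
    return math.sqrt(8.0 * EPS_R * EPS0 * n * KB_T) \
        * math.sinh(E * psi0_volts / (2.0 * KB_T))

# At a fixed surface potential, the Grahame charge scales as sqrt(c):
ratio = grahame_sigma(-0.05, 0.155) / grahame_sigma(-0.05, 0.01)
```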
Abstract:
Emotional responses can incite and entice consumers to select a particular product from a row of similar items and thus have a considerable impact on purchase decisions. Consequently, more and more companies are challenging designers to address the emotional impact of their work and to design for emotion and consumer-product relationships. Furthermore, the creation of emotional attachment to one's possessions is one way of approaching a sustainable consumer-product relationship. The aim of this research is to gain a deeper understanding of the instantaneous emotional attachment that consumers form with products and its subsequent implications for product development. The foci of the study are visceral design, consumer hedonics and product rhetoric. Studied together, they become an area of new investigation: visceral hedonic rhetoric. In this context, the term "visceral hedonic rhetoric" is defined as the properties of a product that persuasively elicit the pursuit of pleasure at an instinctual level of cognition. This study explores visceral hedonic rhetoric evident in the design of interactive products and resides within the context of emotional design research. It employs an empirical approach to understand how consumers respond hedonically, on a visceral level, to rhetoric in products. Specifically, it examines visceral hedonic responses given by thirty participants to the stimuli of six mobile telephones, six MP3 players and six USB memory flash drives. The study findings demonstrate a hierarchy of visceral hedonic rhetoric evident in interactive products. This hierarchy of visceral hedonic attributes includes: colour, size, shape, intrigue, material, perceived usability, portability, perceived function, novelty, analogy, brand, quality, texture and gender. However, it is the interrelationships between these visceral hedonic attributes that are the most significant findings of this research.
Certain associations were revealed between product attribute combinations and consumer perception. The most predominant of these were: gender bias associated with colour selection; the creation of intrigue through a vibrant attention-grabbing colour; perceived ease of use and function; product confidence as a result of brand familiarity and perceived usability; analogous association through familiarity with similar objects and shapes; and the association of longevity with quality, novelty or recent technology. A significant outcome of the research is the distillation of visceral hedonic rhetoric design principles, and a tool to assist designers in harnessing the full potential of visceral hedonic rhetoric. This study contributes to the identification of the emerging research field of visceral hedonic rhetoric. Application of this study’s findings has the potential to provide a hedonic consumer-product relationship that is more meaningful, less disposable and more sustainable. This theory of visceral hedonic rhetoric is not only a significant contribution to design knowledge but is also generally transferable to other research domains, as later suggested in future research avenues.
Abstract:
One of the impediments to large-scale use of wind generation within power systems is its variable and uncertain real-time availability. Due to the low marginal cost of wind power, its output changes the merit order of power markets and influences the Locational Marginal Price (LMP). With large-scale wind power, LMP calculation cannot ignore the essentially variable and uncertain nature of wind power. This paper proposes an algorithm to estimate LMP. The estimate from a conventional Monte Carlo simulation is taken as the benchmark against which accuracy is examined. A case study is conducted on a simplified SE Australian power system, and the simulation results show the feasibility of the proposed method.
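The merit-order effect invoked above can be illustrated with a toy uniform-price dispatch. All capacities and costs below are made up, and this is not the paper's LMP algorithm, which must also handle network constraints and wind uncertainty:

```python
# Minimal merit-order sketch: zero-marginal-cost wind displaces expensive
# units and lowers the uniform clearing price. Illustrative values only.

def clearing_price(offers, demand_mw):
    """offers: list of (marginal_cost_$_per_MWh, capacity_MW), any order."""
    served = 0.0
    for cost, cap in sorted(offers):       # dispatch cheapest units first
        served += cap
        if served >= demand_mw:
            return cost                    # price set by the marginal unit
    raise ValueError("insufficient capacity to meet demand")

offers = [(40.0, 500.0), (80.0, 300.0), (120.0, 200.0)]   # hypothetical units
p_no_wind = clearing_price(offers, 700.0)                 # -> 80.0 $/MWh
p_wind = clearing_price(offers + [(0.0, 250.0)], 700.0)   # -> 40.0 $/MWh
```

Adding 250 MW of zero-cost wind pushes the 80 $/MWh unit out of the margin, which is the price-suppression effect the paper's stochastic LMP estimation has to capture.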
Abstract:
A magneto-rheological (MR) fluid damper is a semi-active control device that has recently begun to receive more attention in the vibration control community. However, the inherently nonlinear nature of the MR fluid damper makes it challenging to use this device to achieve high damping-control system performance. The development of an accurate modeling method for an MR fluid damper is therefore necessary to take advantage of its unique characteristics. Our goal was to develop an alternative method for modeling an MR fluid damper using a self-tuning fuzzy (STF) method based on a neural technique. The behavior of the studied damper is estimated directly through a fuzzy mapping system. In order to improve the accuracy of the STF model, back-propagation and a gradient descent method are used to train the fuzzy parameters online, minimizing the model error function. A series of simulations was performed to validate the effectiveness of the suggested modeling method against data measured from experiments on a test rig with the studied MR fluid damper. The modeling results show that the proposed STF inference system, trained online using the neural technique, describes the behavior of the MR fluid damper well, without the need for additional calculation time to generate the model parameters.
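The online gradient-descent tuning described above can be sketched with a toy zero-order Sugeno-style fuzzy model. Everything below (membership functions, target map, learning rate) is illustrative, not the authors' damper model:

```python
import math

# Toy self-tuning fuzzy sketch: Gaussian memberships, weighted-average
# output, and rule consequents w trained online by gradient descent on
# the squared model error. All values are illustrative.

centers = [-1.0, 0.0, 1.0]   # membership-function centres (hypothetical)
sigma = 0.7
w = [0.0, 0.0, 0.0]          # rule consequents, tuned online

def fire(x):
    """Firing strength of each rule for input x."""
    return [math.exp(-((x - c) / sigma) ** 2) for c in centers]

def predict(x):
    mu = fire(x)
    return sum(m * wi for m, wi in zip(mu, w)) / sum(mu)

def train_step(x, target, lr=0.5):
    """One online gradient step on 0.5 * (predict(x) - target)^2."""
    mu = fire(x)
    s = sum(mu)
    err = predict(x) - target
    for i in range(len(w)):
        w[i] -= lr * err * mu[i] / s

# Learn a simple made-up input/output map online:
for _ in range(200):
    for x, t in [(-1.0, -0.5), (0.0, 0.2), (1.0, 0.9)]:
        train_step(x, t)
```

The damper model in the paper maps measured inputs (e.g. displacement, velocity, current) to damping force the same way, with the consequents updated at each time step.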
Abstract:
Radiotherapy is a cancer treatment modality in which a dose of ionising radiation is delivered to a tumour. Accurate calculation of the dose to the patient is very important in the design of an effective therapeutic strategy. This study aimed to systematically examine the accuracy of the radiotherapy dose calculations performed by clinical treatment planning systems, by comparison against Monte Carlo simulations of the treatment delivery. A suite of software tools known as MCDTK (Monte Carlo DICOM ToolKit) was developed for this purpose, and is capable of: • Importing DICOM-format radiotherapy treatment plans and producing Monte Carlo simulation input files (allowing simple simulation of complex treatments), and calibrating the results; • Analysing the predicted doses from, and deviations between, the Monte Carlo simulation results and treatment planning system calculations in regions of interest (tumours and organs-at-risk), and generating dose-volume histograms so that conformity with dose prescriptions can be evaluated. The code has been tested against various treatment planning systems, linear accelerator models and treatment complexities. Six clinical head and neck cancer treatments were simulated and the results analysed using this software. The deviations were greatest where the treatment volume encompassed tissues on both sides of an air cavity. This was likely due to the method the planning system used to model low-density media.
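The dose-volume histogram generation mentioned in the second capability can be sketched generically; this mirrors the kind of output described, not MCDTK's own code:

```python
import numpy as np

# Illustrative cumulative dose-volume histogram (DVH) for a region of
# interest: the fraction of ROI volume receiving at least each dose level.
# The voxel doses below are made up.

def cumulative_dvh(doses_gy: np.ndarray, bin_width: float = 0.5):
    """Return (dose_edges, volume_fraction >= dose) for an ROI dose array."""
    edges = np.arange(0.0, doses_gy.max() + bin_width, bin_width)
    frac = np.array([(doses_gy >= d).mean() for d in edges])
    return edges, frac

roi = np.array([10.0, 20.0, 30.0, 40.0])        # hypothetical voxel doses, Gy
edges, frac = cumulative_dvh(roi, bin_width=10.0)
# frac[0] is always 1.0: every voxel receives at least 0 Gy.
```

Comparing such curves between the planning system and the Monte Carlo result is one way the toolkit evaluates conformity with the dose prescription.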
Abstract:
The research objective of this thesis was to contribute to Bayesian statistical methodology, specifically to risk assessment methodology and to spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. I hoped to consider two applied areas and to use these applications as a springboard for developing new statistical methods, as well as undertaking analyses that might answer particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater, and a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were: to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure incorporating all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day's data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days' data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations.
This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as an 'errors-in-variables' problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models were used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and unstructured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and depth, but doing so allows little insight into how the treatment effects vary with depth. Hence, a number of essentially non-parametric approaches were taken to examine the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals.
These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that, with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days' data, and we show that moisture in the soil for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with variances that increase with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, it does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
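The credible intervals discussed throughout are read off MCMC output as posterior quantiles; a generic sketch on synthetic draws (not WinBUGS or pyMCMC output):

```python
import random

# Equal-tailed credible interval from MCMC samples: sort the draws and
# take the appropriate quantiles. The draws below are synthetic.

def credible_interval(samples, level=0.95):
    """Equal-tailed credible interval from a list of posterior draws."""
    s = sorted(samples)
    lo = int(len(s) * (1.0 - level) / 2.0)
    hi = int(len(s) * (1.0 + level) / 2.0) - 1
    return s[lo], s[hi]

random.seed(1)
draws = [random.gauss(0.0, 1.0) for _ in range(20000)]
lo, hi = credible_interval(draws, 0.95)   # close to (-1.96, 1.96) for N(0, 1)
```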
Abstract:
The mineral schlossmacherite, (H3O,Ca)Al3(AsO4,PO4,SO4)2(OH)6, a multi-cation, multi-anion mineral of the beudantite mineral subgroup, has been characterised by Raman spectroscopy. The mineral and related minerals function as heavy metal collectors and are often amorphous or poorly crystalline, such that XRD identification is difficult. The Raman spectra are dominated by an intense band at 864 cm-1, assigned to the symmetric stretching mode of the AsO43- anion. Raman bands at 809 and 819 cm-1 are assigned to the antisymmetric stretching mode of AsO43-. The sulphate anion is characterised by bands at 1000 cm-1 (ν1) and at 1031, 1082 and 1139 cm-1 (ν3). Two sets of bands are observed in the OH stretching region: firstly between 2800 and 3000 cm-1, with bands at 2850, 2868 and 2918 cm-1, and secondly between 3300 and 3600 cm-1, with bands at 3363, 3382, 3410, 3449 and 3537 cm-1. These bands enabled the calculation of hydrogen bond distances, which span a wide range.
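The band-to-distance calculation mentioned in the final sentence is commonly done with the Libowitzky (1999) correlation between OH stretching wavenumber and O-H···O donor-acceptor distance; assuming a correlation of that form is what the abstract refers to, a sketch:

```python
import math

# Hedged sketch of the Libowitzky-style correlation
#   nu(cm^-1) = 3592 - 304e9 * exp(-d / 0.1321),  d in Angstrom,
# inverted to estimate the O...O distance from an observed OH stretch.
# We assume this (or a similar) correlation is the one used.

def oo_distance_angstrom(nu_cm1: float) -> float:
    """Estimate d(O...O) in Angstrom from an OH stretch at nu_cm1 (cm^-1)."""
    return -0.1321 * math.log((3592.0 - nu_cm1) / 304.0e9)

# Lower-wavenumber OH bands imply shorter (stronger) hydrogen bonds:
d_strong = oo_distance_angstrom(2850.0)   # band from the 2800-3000 cm^-1 set
d_weak = oo_distance_angstrom(3537.0)     # band from the 3300-3600 cm^-1 set
```

Applied to the two sets of bands reported above, the correlation yields distances of roughly 2.6 Å for the low-wavenumber set and close to 3.0 Å for the high-wavenumber set, consistent with the "wide range" of H-bond distances the abstract describes.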
Abstract:
The single-crystal Raman spectra of the natural mineral paulmooreite, Pb2As2O5, from the Långban locality, Filipstad district, Värmland province, Sweden, are presented for the first time. It is a monoclinic mineral containing isolated [As2O5]4- groups. Depolarised and single-crystal spectra of the natural and synthetic samples compare favorably and are characterized by strong bands around 186 and 140 cm-1 and three medium bands in the 700–800 cm-1 region. Band assignments were made based on band symmetry and spectral comparison between experimental band positions and those resulting from a Hartree-Fock calculation of an isolated [As2O5]4- ion. Spectral comparison was also made with lead arsenites such as synthetic PbAs2O4 and Pb2(AsO2)3Cl and natural finnemanite, in order to determine the contributions of the terminal and bridging O in paulmooreite. Bands at 733–760 cm-1 were assigned to terminal As-O vibrations, whereas stretches of the bridging O occur at 562 and 503 cm-1. The single-crystal spectra showed good mode separation, allowing bands to be assigned a symmetry species of Ag or Bg.
Abstract:
The quality assurance of stereotactic radiotherapy and radiosurgery treatments requires the use of small-field dose measurements that can be experimentally challenging. This study used Monte Carlo simulations to establish that PAGAT dosimetry gel can be used to provide accurate, high-resolution, three-dimensional dose measurements of stereotactic radiotherapy fields. A small cylindrical container (4 cm height, 4.2 cm diameter) was filled with PAGAT gel, placed in the parietal region inside a CIRS head phantom, and irradiated with a 12-field stereotactic radiotherapy plan. The resulting three-dimensional dose measurement was read out using an optical CT scanner and compared with the treatment planning prediction of the dose delivered to the gel during the treatment. A BEAMnrc/DOSXYZnrc simulation of this treatment was completed to provide a standard against which the accuracy of the gel measurement could be gauged. The three-dimensional dose distributions obtained from Monte Carlo and from the gel measurement were found to be in better agreement with each other than with the dose distribution provided by the treatment planning system's pencil beam calculation. Both sets of data showed close agreement with the treatment planning system's dose distribution through the centre of the irradiated volume and substantial disagreement with the treatment planning system at the penumbrae. The Monte Carlo calculations and gel measurements both indicated that the treated volume was up to 3 mm narrower, with steeper penumbrae and more variable out-of-field dose, than predicted by the treatment planning system. The Monte Carlo simulations allowed the accuracy of the PAGAT gel dosimeter to be verified in this case, allowing PAGAT gel to be utilised in the measurement of dose from stereotactic and other radiotherapy treatments with greater confidence in the future.
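The voxel-wise comparison of three-dimensional dose distributions described above can be sketched generically; the grids and values below are illustrative, not the study's analysis code:

```python
import numpy as np

# Illustrative per-voxel comparison of two 3D dose grids, the kind of
# agreement check described above. Dose values are made up.

def dose_difference_stats(reference: np.ndarray, evaluated: np.ndarray):
    """Mean and max-absolute percent dose difference, relative to the
    reference maximum (a common global-normalisation convention)."""
    diff = 100.0 * (evaluated - reference) / reference.max()
    return float(diff.mean()), float(np.abs(diff).max())

ref = np.full((4, 4, 4), 2.0)    # hypothetical uniform 2 Gy reference grid
ev = ref.copy()
ev[0, 0, 0] = 1.9                # a single discrepant voxel
mean_diff, max_abs_diff = dose_difference_stats(ref, ev)
```

In practice such comparisons are concentrated in the penumbral regions, which is where the abstract reports the largest disagreement with the pencil-beam calculation.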