107 results for mixture of distribution hypothesis
Abstract:
The presence of a large number of single-phase distributed energy resources (DERs) can cause severe power quality problems in distribution networks. Because DERs can be installed at random locations, the generation in a particular phase may exceed the load demand in that phase, and the excess power in that phase will be fed back to the transmission network. To avoid this problem, the paper proposes the use of a distribution static compensator (DSTATCOM) connected at the first bus after a substation. When operated properly, the DSTATCOM can ensure that a balanced set of currents flows from the substation, even when excess power is generated by the DERs. The proposals are validated through extensive digital computer simulation studies using PSCAD and MATLAB.
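To illustrate the balancing principle described above (not the paper's actual control scheme), the following Python sketch uses symmetrical components: the compensator at the first bus injects whatever negative- and zero-sequence current remains after the balanced positive-sequence component is extracted, so the substation supplies only balanced currents. The phasor values and variable names are illustrative assumptions.

```python
# Minimal sketch (not the paper's control scheme): phasor arithmetic showing
# how a shunt compensator at the first bus can make the substation currents
# balanced. All quantities and values here are illustrative.
import numpy as np

a = np.exp(2j * np.pi / 3)                      # 120-degree rotation operator

# Unbalanced net feeder currents (A, complex phasors), e.g. phase b exporting
# power because single-phase DER generation exceeds the local phase-b load.
i_feeder = np.array([80 * np.exp(-0.2j),
                     -30 * np.exp(-0.1j),       # reverse flow on phase b
                     60 * np.exp(-0.3j)])

# Positive-sequence component: the balanced set we want the substation to see.
i_pos = (i_feeder[0] + a * i_feeder[1] + a**2 * i_feeder[2]) / 3
i_balanced = np.array([i_pos, a**2 * i_pos, a * i_pos])

# Compensator injects whatever is left over (negative- and zero-sequence parts).
i_dstatcom = i_feeder - i_balanced

print("substation currents:", np.round(i_balanced, 2))
print("DSTATCOM currents:  ", np.round(i_dstatcom, 2))
# The substation now supplies only the balanced positive-sequence currents.
```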
Abstract:
In the context of increasing demand for potable water and the depletion of water resources, stormwater is a logical alternative. However, stormwater contains pollutants, among which metals are of particular interest due to their toxicity and persistence in the environment. Hence, it is imperative to remove toxic metals from stormwater to the levels prescribed by drinking water guidelines for potable use. Consequently, various techniques have been proposed, among which sorption using low-cost sorbents is economically viable and environmentally benign in comparison to other techniques. However, sorbents show affinity towards certain toxic metals, which results in poor removal of other toxic metals. It was hypothesised in this study that a mixture of sorbents with different metal affinity patterns can be used for the efficient removal of the range of toxic metals commonly found in stormwater. The performance of six sorbents in the sorption of Al, Cr, Cu, Pb, Ni, Zn and Cd, the toxic metals commonly found in urban stormwater, was investigated to select suitable sorbents for creating the mixtures. For this purpose, a multi-criteria analytical protocol was developed using the decision-making methods PROMETHEE (Preference Ranking Organisation METHod for Enrichment Evaluations) and GAIA (Graphical Analysis for Interactive Assistance). Zeolite and seaweed were selected for the creation of trial mixtures based on their metal affinity patterns and their performance against predetermined selection criteria. The metal sorption mechanisms of seaweed and zeolite were defined using kinetics, isotherm and thermodynamics parameters determined from batch sorption experiments. Additionally, the kinetics rate-limiting steps were identified using an innovative approach based on GAIA and Spearman correlation techniques, developed as part of the study to overcome the limitations of conventional graphical methods in predicting the degree to which each kinetics step limits the overall metal removal rate. The sorption kinetics of zeolite was found to be limited primarily by intraparticle diffusion, followed by the sorption reaction steps, which were governed mainly by the hydrated ionic diameter of the metals. The isotherm study indicated that the metal sorption mechanism of zeolite was primarily physical in nature. The thermodynamics study confirmed that the energetically favourable nature of sorption increased in the order Zn < Cu < Cd < Ni < Pb < Cr < Al, which agrees with the metal sorption affinity of zeolite. Hence, sorption thermodynamics has an influence on the metal sorption affinity of zeolite. On the other hand, the primary kinetics rate-limiting step for seaweed was the sorption reaction process, followed by intraparticle diffusion. Boundary layer diffusion was also found to limit the metal sorption kinetics at low concentration. According to the sorption isotherm study, Cd, Pb, Cr and Al were sorbed by seaweed via ion exchange, whilst sorption of Ni occurred via physisorption and ionic bonding was responsible for the sorption of Zn. The thermodynamics study confirmed that sorption by seaweed was energetically favourable in the order Zn < Cu < Cd < Cr ≈ Al < Pb < Ni. However, this did not agree with the affinity series derived for seaweed, suggesting a limited influence of sorption thermodynamics on the metal affinity of seaweed.
The investigation of zeolite-seaweed mixtures indicated that mixing sorbents has an effect on the kinetics rates and the sorption affinity. Additionally, theoretical relationships were derived to predict the boundary layer diffusion rate, intraparticle diffusion rate, sorption reaction rate and enthalpy of mixtures from those of the individual sorbents. In general, the low coefficients of determination (R²) for the relationships between theoretical and experimental data indicated that the relationships were not statistically significant, which was attributed to the heterogeneity of the sorbent properties. Nevertheless, in relative terms, the intraparticle diffusion rate, sorption reaction rate and enthalpy of sorption had higher R² values than the boundary layer diffusion rate, suggesting that there was some relationship between the former set of parameters for the mixtures and those of the individual sorbents. The mixture containing 80% zeolite and 20% seaweed showed similar affinity for the sorption of Cu, Ni, Cd, Cr and Al, which was attributed to the approximately equal sorption enthalpies of these metal ions. Therefore, it was concluded that the seaweed-zeolite mixture can be used to obtain the same affinity for various metals present in a multi-metal system, provided the metal ions have similar enthalpies during sorption by the mixture.
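As a concrete (and purely illustrative) companion to the kinetics analysis summarised above, the sketch below fits the standard Weber-Morris intraparticle-diffusion and pseudo-second-order forms to invented batch uptake data and compares their R² values. It is not the thesis's code; the data, initial guesses and model choice are assumptions.

```python
# Minimal sketch (illustrative data, standard model forms, not the thesis code):
# fit intraparticle-diffusion and pseudo-second-order kinetics to batch uptake
# data and compare R^2, mirroring the kind of parameters discussed above.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([5, 10, 20, 40, 60, 90, 120, 180], dtype=float)   # contact time, min
q = np.array([1.1, 1.6, 2.3, 3.1, 3.5, 3.9, 4.1, 4.3])         # uptake, mg/g (invented)

def intraparticle(t, k_id, c):
    """Weber-Morris: q_t = k_id * sqrt(t) + C."""
    return k_id * np.sqrt(t) + c

def pseudo_second_order(t, k2, q_e):
    """q_t = k2 * q_e^2 * t / (1 + k2 * q_e * t)."""
    return k2 * q_e**2 * t / (1 + k2 * q_e * t)

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

for name, model, p0 in [("intraparticle", intraparticle, (0.3, 0.5)),
                        ("pseudo-2nd-order", pseudo_second_order, (0.01, 4.5))]:
    popt, _ = curve_fit(model, t, q, p0=p0)
    print(name, "params:", np.round(popt, 4),
          "R2:", round(r_squared(q, model(t, *popt)), 3))
```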
Abstract:
Structural health monitoring (SHM) refers to the procedures used to assess the condition of structures so that their performance can be monitored and any damage can be detected early. Early detection of damage and appropriate retrofitting help prevent failure of the structure, save money spent on maintenance or replacement, and ensure that the structure operates safely and efficiently throughout its intended life. Although visual inspection and other techniques, such as vibration-based methods, are available for SHM of structures such as bridges, the acoustic emission (AE) technique is an attractive option whose use is increasing. AE waves are high-frequency stress waves generated by the rapid release of energy from localised sources within a material, such as crack initiation and growth. The AE technique involves recording these waves by means of sensors attached to the surface and then analysing the signals to extract information about the nature of the source. High sensitivity to crack growth, the ability to locate sources, its passive nature (no external energy needs to be supplied, as the energy from the damage source itself is utilised) and the possibility of real-time monitoring (detecting a crack as it occurs or grows) are some of the attractive features of the AE technique. In spite of these advantages, challenges remain in using the AE technique for monitoring applications, especially in the analysis of the recorded AE data, as large volumes of data are usually generated during monitoring. The need for effective data analysis can be linked to three main aims of monitoring: (a) accurately locating the source of damage; (b) identifying and discriminating signals from different sources of acoustic emission; and (c) quantifying the level of damage of an AE source for severity assessment. In the AE technique, the location of the emission source is usually calculated using the arrival times and velocities of the AE signals recorded by a number of sensors. But complications arise because AE waves can travel in a structure in a number of different modes that have different velocities and frequencies. Hence, to accurately locate a source it is necessary to identify the modes recorded by the sensors. This study proposed and tested the use of time-frequency analysis tools such as the short-time Fourier transform to identify the modes, and the use of the velocities of these modes to achieve very accurate results. Further, this study explored the possibility of reducing the number of sensors needed for data capture by using the velocities of the modes captured by a single sensor for source localization. A major problem in the practical use of the AE technique is the presence of AE sources other than crack-related ones, such as rubbing and impacts between different components of a structure. These spurious AE signals often mask the signals from crack activity; hence, discriminating signals to identify their sources is very important. This work developed a model that uses signal processing tools such as cross-correlation, magnitude squared coherence and energy distribution across frequency bands, as well as modal analysis (comparing the amplitudes of identified modes), to accurately differentiate signals from different simulated AE sources. Quantification tools to assess the severity of damage sources are highly desirable in practical applications.
Though different damage quantification methods have been proposed within the AE technique, not all have achieved universal approval or been shown to be suitable for all situations. The b-value analysis, which involves studying the distribution of amplitudes of AE signals, and its modified form (known as the improved b-value analysis) were investigated for their suitability for damage quantification in ductile materials such as steel. They were found to give encouraging results for the analysis of laboratory data, extending the possibility of their use to real-life structures. By addressing these primary issues, this thesis has helped improve the effectiveness of the AE technique for structural health monitoring of civil infrastructure such as bridges.
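The b-value analysis mentioned above has a compact conventional form that a short sketch can make concrete: fit log10 of the cumulative hit count against amplitude/20 dB and take the negative slope as the b-value. The amplitudes below are synthetic and the binning choices are assumptions; the improved b-value variant examined in the thesis, which adds statistically chosen amplitude limits, is not reproduced.

```python
# Minimal sketch (synthetic amplitudes, not the thesis data): estimate the AE
# b-value by fitting log10(cumulative count) against amplitude/20 dB, the
# conventional Gutenberg-Richter-style formulation used in b-value analysis.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic AE hit amplitudes in dB (exponentially distributed above a 40 dB threshold).
amp_db = 40 + rng.exponential(scale=8.0, size=2000)

thresholds = np.arange(40, 80, 2.0)                       # amplitude bins, dB
cum_counts = np.array([(amp_db >= a).sum() for a in thresholds])

# Keep bins with enough hits for a stable log count.
mask = cum_counts >= 10
slope, intercept = np.polyfit(thresholds[mask] / 20.0,
                              np.log10(cum_counts[mask]), deg=1)
b_value = -slope
print("estimated b-value:", round(b_value, 2))
# A falling b-value over successive data windows is commonly read as a shift
# from distributed microcracking towards fewer, larger (macro)crack events.
```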
Abstract:
Particulate matter (PM) emissions comprise a complex mixture of solid and liquid particles suspended in a gas, and PM emissions from diesel engines are a major contributor to ambient air pollution. Whilst epidemiological studies have shown a link between increased ambient PM concentrations and respiratory morbidity and mortality, studies of this design are not able to identify the PM constituents responsible for driving adverse respiratory health effects. This review explores in detail the physico-chemical properties of diesel particulate matter (DPM) and identifies the constituents of this pollution source that are responsible for the development of respiratory disease. In particular, this review shows that the DPM surface area and adsorbed organic compounds play a significant role in driving chemical and cellular processes that, if sustained, can lead to adverse respiratory health effects. The mechanisms of injury involved include inflammation, innate and acquired immunity, and oxidative stress. Understanding the mechanisms of lung injury from DPM will enhance efforts to protect at-risk individuals from the harmful respiratory effects of air pollutants.
Abstract:
Vehicle emissions are a significant source of fine particles (Dp < 2.5 µm) in the urban environment. These fine particles have been shown to have detrimental health effects, with children thought to be more susceptible. Vehicle emissions are mainly carbonaceous in nature, and carbonaceous aerosols can be classified as either elemental carbon (EC) or organic carbon (OC). EC is a soot-like material emitted from primary sources, while the OC fraction is a complex mixture of hundreds of organic compounds from either primary or secondary sources (Cao et al., 2006). Therefore, the OC/EC ratio can aid in source identification. The purpose of this paper is to use the concentrations of OC and EC in fine particles to determine the levels of vehicle emissions at schools. It is expected that this will improve the understanding of the potential exposure of children in a school environment to vehicle emissions.
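As a minimal illustration of the screening idea, the sketch below computes OC/EC ratios for hypothetical school sites and flags low ratios as consistent with a strong primary vehicle contribution. The concentrations and the cut-off value are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (illustrative values and threshold, not from the paper):
# compute OC/EC ratios for school sites and flag those consistent with a
# strong primary vehicle-emission contribution (low OC/EC).
samples = {                      # µg/m3 of OC and EC in fine particles (invented)
    "school_A": {"OC": 3.2, "EC": 2.1},
    "school_B": {"OC": 5.8, "EC": 1.0},
    "school_C": {"OC": 2.4, "EC": 1.8},
}

VEHICLE_RATIO_MAX = 2.0          # assumed cut-off for "vehicle-dominated" sites

for site, c in samples.items():
    ratio = c["OC"] / c["EC"]
    tag = "vehicle-dominated" if ratio <= VEHICLE_RATIO_MAX else "mixed/secondary"
    print(f"{site}: OC/EC = {ratio:.2f} -> {tag}")
```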
Abstract:
Background. We have characterised a new, highly divergent geminivirus species, Eragrostis curvula streak virus (ECSV), found infecting a hardy perennial South African wild grass. ECSV represents a new genus-level geminivirus lineage and has a mixture of features normally associated with other specific geminivirus genera. Results. Whereas the ECSV genome is predicted to express a replication-associated protein (Rep) from an unspliced complementary-strand transcript that is most similar to those of begomoviruses, curtoviruses and topocuviruses, its Rep also contains what is apparently a canonical retinoblastoma-related protein interaction motif such as that found in mastreviruses. Similarly, while ECSV has the same unusual TAAGATTCC virion-strand replication origin nonanucleotide found in another recently described divergent geminivirus, Beet curly top Iran virus (BCTIV), the rest of the transcription and replication origin is structurally more similar to those found in begomoviruses and curtoviruses than to those found in BCTIV and mastreviruses. ECSV also has what might be a homologue of the begomovirus transcription activator protein gene, a mastrevirus-like coat protein gene and two intergenic regions. Conclusion. Although it superficially resembles a chimaera of geminiviruses from different genera, the ECSV genome is not obviously recombinant, implying that the features it shares with other geminiviruses are those that were probably present in the last common ancestor of these viruses. In addition to inferring how the ancestral geminivirus genome may have looked, we use the discovery of ECSV to refine various hypotheses regarding the recombinant origins of the major geminivirus lineages. © 2009 Varsani et al; licensee BioMed Central Ltd.
Abstract:
Airports and cities inevitably recognise the value that each brings to the other; however, the separation in decision-making authority over what to build, where, when and how presents a conundrum for both parties. Airports often want a say in what is developed outside the airport fence, and cities often want a say in what is developed inside the airport fence. Defining how much of a say airports and cities have in decisions beyond their jurisdictional control is likely to remain a live issue so long as airports and cities maintain separate formal decision-making processes for what to build, where, when and how. However, the recent Green and White Papers for a new National Aviation Policy have made early inroads into formalising relationships between Australia’s major airports and their host cities. At present, there is no clear indication (in either practice or the literature) of the appropriateness of different governance arrangements for development decisions that bring together the opposing strategic interests of airports and cities; this leaves decisions about infrastructure development as complex decision-making spaces in which airport and city/regional interests are at stake. The line of enquiry is motivated by a lack of empirical research on networked decision-making domains outside the realm of institutional theorists (Agranoff & McGuire, 2001; Provan, Fish & Sydow, 2007). That is, the governance literature has remained focused on abstract conceptualisations of organisation, without attending to the minutiae of how organisation influences action in real-world applications. A recent study by Black (2008) has provided an initial foothold for governance researchers into networked decision-making domains. This study builds upon Black’s (2008) work by aiming to explore and understand the problem space of making decisions subject to complex jurisdictional and relational interdependencies. That is, the research examines the formal and informal structures, relationships and forums that operationalise debates and interactions between decision-making actors as they vie for influence over deciding what to build, where, when and how in airport-proximal development projects. The research mobilises a mixture of qualitative and quantitative methods to examine three embedded cases of airport-proximal development from a network governance perspective. Findings from the research provide a new understanding of the ways in which informal actor networks underpin and combine with formal decision-making networks to create new (or realigned) governance spaces that facilitate decision-making during complex phases of development planning. The research is timely, and responds to Isett, Mergel, LeRoux, Mischen and Rethemeyer’s (2011) recent critique of limitations within the current network governance literature, specifically their noted absence of empirical studies that acknowledge and interrogate the simultaneity of formal and informal network structures within network governance arrangements (Isett et al., 2011, pp. 162-166). The combination of social network analysis (SNA) techniques and thematic enquiry has enabled the findings to document and interpret the ways in which decision-making actors organise to overcome complex problems in planning infrastructure.
An innovative approach to using association networks provides insights into the importance of the different ways actors interact with one another, offering a simple yet valuable addition to the increasingly popular discipline of SNA. The research also identifies when and how different types of networks (i.e. formal and informal) are able to overcome currently known limitations of network governance (see McGuire & Agranoff, 2011), thus adding depth to the emerging body of network governance literature on the limitations of network ways of working (i.e. Rhodes, 1997a; Keast & Brown, 2002; Rethemeyer & Hatmaker, 2008; McGuire & Agranoff, 2011). Contributions are made to practice through a timely understanding of how horizontal fora between airports and their regions are used, particularly in how they reframe the governance of decision-making for airport-proximal infrastructure development. This new understanding will enable government and industry actors to better understand the structural impacts of governance arrangements before they design or adopt them, particularly for factors such as efficiency of information, oversight, and responsiveness to change.
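For readers unfamiliar with SNA, the sketch below illustrates one simple way to examine formal and informal ties together: build both layers over the same actors, overlay them, and compare betweenness centralities. The actors, ties and the networkx-based workflow are illustrative assumptions, not the study's data or analysis protocol.

```python
# Minimal sketch (invented actors/ties, not the study's data): overlay formal
# and informal networks for the same set of actors and compare centralities,
# one simple way to look at how informal ties supplement formal structures.
import networkx as nx

actors = ["airport", "state_gov", "local_gov", "developer", "consultant"]

formal = nx.Graph()
formal.add_edges_from([("airport", "state_gov"),
                       ("state_gov", "local_gov"),
                       ("local_gov", "developer")])

informal = nx.Graph()
informal.add_edges_from([("airport", "developer"),
                         ("airport", "consultant"),
                         ("consultant", "developer"),
                         ("consultant", "local_gov")])

combined = nx.compose(formal, informal)     # union of both tie types

for name, g in [("formal", formal), ("informal", informal), ("combined", combined)]:
    g.add_nodes_from(actors)                # keep isolates so layers are comparable
    bc = nx.betweenness_centrality(g)
    top = max(bc, key=bc.get)
    print(f"{name:9s} network: most central broker = {top} ({bc[top]:.2f})")
```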
Abstract:
The majority of distribution utilities do not have accurate information on the constituents of their loads. This information is very useful for managing and planning the network adequately and economically. Customer loads are normally categorized into three main sectors: 1) residential; 2) industrial; and 3) commercial. In this paper, penalized least-squares regression and Euclidean distance methods are developed for this application to identify and quantify the makeup of a feeder load with unknown sectors/subsectors. This process is done on a monthly basis to account for seasonal and other load changes. The error between the actual and estimated load profiles is used as a benchmark of accuracy. This approach has been shown to be accurate in identifying customer types in unknown load profiles, and is used in cross-validation of the results and initial assumptions.
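A minimal sketch of the decomposition idea follows: express an unknown 24-hour feeder profile as a penalized least-squares combination of known sector profiles. The profiles, the ridge-style penalty and the normalisation step are illustrative assumptions; the paper's exact formulation and its Euclidean-distance matching step are not reproduced.

```python
# Minimal sketch (invented 24-h profiles, ridge penalty as an assumption):
# estimate sector weights for an unknown feeder load as a penalized
# least-squares mix of residential / industrial / commercial profiles.
import numpy as np

hours = np.arange(24)
residential = 0.4 + 0.6 * np.exp(-((hours - 19) / 3.0) ** 2)     # evening peak
industrial = np.where((hours >= 7) & (hours <= 17), 1.0, 0.3)    # daytime plateau
commercial = 0.3 + 0.7 * np.exp(-((hours - 13) / 4.0) ** 2)      # midday peak
A = np.column_stack([residential, industrial, commercial])

true_w = np.array([0.5, 0.2, 0.3])
feeder = A @ true_w + 0.02 * np.random.default_rng(1).normal(size=24)

lam = 0.1                                          # ridge penalty
w = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ feeder)
w = np.clip(w, 0, None)
w /= w.sum()                                       # report as shares of the mix

rmse = np.sqrt(np.mean((feeder - A @ w) ** 2))
print("estimated sector shares:", np.round(w, 3))
print("profile RMSE:", round(rmse, 4))
```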
Abstract:
According to Karl Popper, widely regarded as one of the greatest philosophers of science of the 20th century, falsifiability is the primary characteristic that distinguishes scientific theories from ideologies or dogma. For example, for creationism to be treated in schools as a scientific theory comparable to modern theories of evolution, its advocates would need to engage in generating falsifiable hypotheses and abandon the practice of discouraging questioning and inquiry. Ironically, scientific theories themselves are accepted or rejected based on a principle that might be called survival of the fittest. So, for healthy theories of development to emerge, four Darwinian functions should operate: (a) variation – avoid orthodoxy and encourage divergent thinking; (b) selection – submit all assumptions and innovations to rigorous testing; (c) diffusion – encourage the shareability of new and/or viable ways of thinking; and (d) accumulation – encourage the reusability of viable aspects of productive innovations.
'Going live': establishing the creative attributes of the live multi-camera television professional
Abstract:
In my capacity as a television professional and teacher specialising in multi-camera live television production for over 40 years, I was drawn to the conclusion that opaque or inadequately formed understandings of how creativity applies to the field of live television have impeded the development of pedagogies suitable for the teaching of live television in universities. In pursuing this hypothesis, the thesis shows that television degrees were born out of film studies degrees, where intellectual creativity was aligned with single-camera production and the 'creative roles' of producers, directors and scriptwriters. At the same time, multi-camera live television production was subsumed under the 'mass communication' banner, leading to an understanding that roles other than producer and director are simply technical and bereft of creative intent or acumen. The thesis goes on to show that this attitude towards other television production personnel, for example the vision mixer, videotape operator and camera operator, relegates their roles to that of 'button pusher'. This has resulted in university teaching models with inappropriate resources and unsuitable teaching practices. As a result, the industry is struggling to find people with the skills to fill the demands of the multi-camera live television sector. In specific terms, the central hypothesis is pursued through the following sequenced approach. Firstly, the thesis outlines the problems and traces the origins of the misconception that intellectual creativity does not exist in live multi-camera television. Secondly, this more adequately conceptualised account of the origins of the misconceptions about live television and creativity is anchored to the field of examination by presenting the foundations of the roles involved in making live television programs using multi-camera production techniques. Thirdly, this more nuanced rendition of the field sets the stage for a thorough analysis of education and training in the industry and of teaching models at Australian universities. The findings clearly establish that the pedagogical models are aimed at single-camera production, a position that de-emphasises the creative aspects of multi-camera live television production. Informed by an examination of theories of learning, qualitative interviews, professional reflective practice and observations, analysis of the roles of four multi-camera live production crew members (camera operator, vision mixer, EVS/videotape operator and director's assistant) demonstrates the existence of intellectual creativity during live production. Finally, supported by the theories of learning and by the development and explication of a successful teaching model, a new approach to teaching students how to work in live television is proposed and substantiated.
Abstract:
Fouling of industrial surfaces by silica and calcium oxalate can be detrimental to a number of process streams. Solution chemistry plays a large role in the rate and type of scale formed on industrial surfaces. This study examines the kinetics and thermodynamics of SiO2 and calcium oxalate composite formation in solutions containing Mg2+ ions, trans-aconitic acid and sucrose, to mimic factory sugar cane juices. The induction time (ti) of silicic acid polymerization is found to depend on the sucrose concentration and the SiO2 supersaturation ratio (SS). Generalized kinetic and solubility models are developed for SiO2 and calcium oxalate in binary systems using response surface methodology. The roles of sucrose, Mg, trans-aconitic acid, a mixture of Mg and trans-aconitic acid, the SiO2 SS ratio and Ca in the formation of composites are explained using the solution properties of these species, including their ability to form complexes.
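To make the response-surface step concrete, the sketch below fits a generic second-order polynomial in coded sucrose concentration and SiO2 supersaturation ratio to invented induction-time data. The design points, factor coding and values are assumptions, not the study's measurements.

```python
# Minimal sketch (invented design points, generic quadratic response surface):
# fit induction time t_i as a second-order function of sucrose concentration
# and SiO2 supersaturation ratio (SS), the two factors highlighted above.
import numpy as np

# Coded factor levels (x1 = sucrose, x2 = SS) and invented induction times (min).
x1 = np.array([-1, -1, 1, 1, 0, 0, 0, -1, 1], dtype=float)
x2 = np.array([-1, 1, -1, 1, 0, -1, 1, 0, 0], dtype=float)
ti = np.array([95, 40, 70, 22, 48, 80, 30, 65, 42], dtype=float)

# Full quadratic model: 1, x1, x2, x1*x2, x1^2, x2^2.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(X, ti, rcond=None)

def predict(a, b):
    return beta @ np.array([1.0, a, b, a * b, a**2, b**2])

print("coefficients:", np.round(beta, 2))
print("predicted t_i at mid sucrose, high SS:", round(predict(0, 1), 1), "min")
```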
Abstract:
Thermogravimetric analysis (TG) and powder X-ray diffraction (PXRD) were used to study selected Mg/Al and Zn/Al layered double hydroxides (LDHs) prepared by co-precipitation. A Mg/Al hydrotalcite was investigated before and after reformation in fluoride and nitrate solutions. Little change in the TG or PXRD patterns was observed. It was proposed that successful intercalation of nitrate anions had occurred. However, the absence of any change in the d(003) interlayer spacing suggests that fluoride anions were not intercalated between the LDH layers; any fluoride anions removed from solution were most likely adsorbed onto the outer surfaces of the hydrotalcite. As fluoride removal was not quantified, this cannot be confirmed without further experimentation. Carbonate is probably intercalated into the interlayer of these hydrotalcites, as well as fluoride or nitrate; it most likely originates either from incomplete decarbonation during thermal activation or from carbonate adsorbed from the atmosphere or dissolved in the deionised water. Small- and large-scale co-precipitation syntheses of a Zn/Al LDH were also investigated to determine whether there was any change in the product. While the small-scale experiment produced a good-quality LDH of reasonable purity, the large-scale synthesis resulted in several additional phases. Imprecise measurement and difficulty in handling the large quantities of reagents appeared to be sufficient to alter the reaction conditions, causing a mixture of phases to be formed.
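The d(003) argument rests on Bragg's law, n·λ = 2d·sin θ: if the (003) reflection does not move, the basal spacing, and hence the interlayer anion, has not changed. The sketch below shows the calculation with illustrative 2θ values and an assumed Cu Kα wavelength; neither is taken from the study.

```python
# Minimal sketch (illustrative 2-theta values, Cu K-alpha assumed): compute the
# d(003) basal spacing from the PXRD reflection position via Bragg's law,
# n * lambda = 2 * d * sin(theta).
import math

WAVELENGTH = 1.5406   # Cu K-alpha radiation, angstroms (common assumption)

def d_spacing(two_theta_deg, n=1, wavelength=WAVELENGTH):
    theta = math.radians(two_theta_deg / 2.0)
    return n * wavelength / (2.0 * math.sin(theta))

# Hypothetical (003) positions before and after the fluoride treatment.
for label, two_theta in [("as-prepared (carbonate)", 11.6),
                         ("after fluoride treatment", 11.5)]:
    print(f"{label}: 2theta = {two_theta} deg -> d(003) = {d_spacing(two_theta):.2f} A")
# Near-identical d(003) values imply the interlayer spacing, and hence the
# interlayer anion, did not change.
```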
Abstract:
From the early-to-mid 2000s, the Australian horror film production sector achieved growth and prosperity of a kind not seen since its heyday of the 1980s. Australian horror films can be traced back to the early 1970s, when they experienced a measure of commercial success. However, throughout the twenty-first century Australian horror has gained levels of international recognition that surpass the cult status enjoyed by some of the films of the 1970s and 1980s. In recent years, Australia has emerged as a significant producer of breakout, cult, and solid B-grade horror films, which have circulated in markets worldwide. Australian horror’s recent successes have been driven by one of its distinguishing features: its international dimensions. As this chapter argues, the Australian horror film production sector is an export-oriented industry that relies heavily on international partnerships and presales (the sale of distribution rights prior to a film’s completion), and on its relationships with overseas distributors. Yet these traits vary from film to film, as the sector comprises several distinct domains of production activity, from guerrilla films destined for niche video markets, such as specialist cult video stores and online mail-order websites, to high(er)-end pictures made for theatrical markets. Furthermore, the content and style of Australian horror movies have often been tailored for export. While some horror filmmakers have sought to play up the Australianness of their product, others have attempted to pass off their films as faux-American or as placeless films effaced of national reference points.
Abstract:
The reactions of pyrrole and thiophene monomers in copper-exchanged mordenite have been investigated using EPR and UV–VIS absorption spectroscopy. The EPR spectra show a decrease in the intensity of the Cu2+ signal and the appearance of a radical signal due to the formation of oxidatively coupled oligomeric and/or polymeric species in the zeolite host. The reaction ceases when ca. 50% of the copper has reacted and differences in the form of the residual Cu2+ signal between the thiophene and pyrrole reactions suggest a greater degree of penetration of the reaction into the zeolite host for pyrrole, in agreement with previous XPS measurements. The EPR signal intensities show that the average length of the polymer chain that is associated with each radical centre is 15–20 and 5–7 monomer units for polypyrrole and polythiophene, respectively. The widths of the EPR signals suggest that these are at least partly due to small oligomers. The UV–VIS absorption spectra of the thiophene system show bands in three main regions: 2.8–3.0 eV (A), 2.3 eV (B) and 1.6–1.9 eV (D, E, F). Bands A and D–F occur in regions which have previously been observed for small oligomers, 4–6 monomer units in length. Band B is assigned to longer chain polythiophene molecules. We therefore conclude that the reaction between thiophene and copper-loaded mordenite produces a mixture of short oligomers together with some long chain polythiophene. The UV–VIS spectra of the pyrrole system show bands in the regions 3.6 eV (A), 2.7–3.0 eV (B, C) and 1.5–1.9 eV (D, F). Assignments of these bands are less certain than for the thiophene case because of the lack of literature data on the spectra of pyrrole oligomers.
Abstract:
This thesis developed and applied Bayesian models for the analysis of survival data. Gene expression was considered as explanatory variables within the Bayesian survival model, which can be considered the new contribution to the analysis of such data. The censoring inherent in survival data was also addressed in terms of its impact on the fitting of a finite mixture of Weibull distributions, with and without covariates. To investigate this, a simulation study was carried out under several censoring percentages. Censoring percentages as high as 80% are acceptable here, as the work involved high-dimensional data. Lastly, a Bayesian model averaging approach was developed to incorporate model uncertainty into the prediction of survival.
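As a minimal illustration of how right-censoring enters the likelihood, the sketch below simulates censored survival times and fits a single Weibull distribution by maximum likelihood: observed events contribute the log-density and censored cases contribute the log-survival function. The simulation settings are assumptions, and the thesis's finite mixture of Weibulls, Bayesian fitting and model averaging are not reproduced here.

```python
# Minimal sketch (simulated data, single Weibull, maximum likelihood): how
# right-censoring enters the likelihood -- events contribute log f(t), censored
# cases contribute log S(t). The thesis's finite mixture of Weibulls and
# Bayesian fitting/model averaging are not reproduced here.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
n = 300
shape_true, scale_true = 1.5, 10.0
t_event = scale_true * rng.weibull(shape_true, size=n)
t_cens = rng.uniform(0, 8, size=n)                  # tune upper bound -> % censored

time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(float)           # 1 = observed, 0 = censored
print("censoring percentage:", round(100 * (1 - event.mean()), 1), "%")

def neg_log_lik(params):
    log_k, log_lam = params
    k, lam = np.exp(log_k), np.exp(log_lam)
    z = time / lam
    log_f = np.log(k / lam) + (k - 1) * np.log(z) - z**k   # Weibull log-density
    log_S = -z**k                                          # log survival function
    return -np.sum(event * log_f + (1 - event) * log_S)

res = minimize(neg_log_lik, x0=[0.0, np.log(time.mean())], method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)
print("estimated shape, scale:", round(k_hat, 2), round(lam_hat, 2))
```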