142 results for Sessile Drop


Relevance: 10.00%

Abstract:

The Libyan regime’s attacks on its own civilian population are a test case for the international community’s commitment to the notion of a “responsibility to protect” (R2P). The UN Security Council’s statement on 22 February 2011 explicitly invoked this concept by calling on “the Government of Libya to meet its responsibility to protect its population”. Yet, with Muammar Gaddafi encouraging further violence against protesters and threatening to fight “until the last drop of blood” it seems unlikely that the Security Council’s warning will be heeded. Greater pressure from the international community will be needed to bring an end to the atrocities in Libya. The international response to the Libyan crisis represents an opportunity to translate the theory of R2P into practice.

Relevance: 10.00%

Abstract:

Background: Previous studies have found that high temperatures increase the risk of mortality in summer. However, little is known about whether a sharp decrease or increase in temperature between neighbouring days has any effect on mortality. Method: Poisson regression models were used to estimate the association between temperature change and mortality in summer in Brisbane, Australia during 1996–2004 and Los Angeles, United States during 1987–2000. The temperature change was calculated as the current day's mean temperature minus the previous day's mean. Results: In Brisbane, a drop of more than 3 °C in temperature between days was associated with relative risks (RRs) of 1.157 (95% confidence interval (CI): 1.024, 1.307) for total non-external mortality (NEM), 1.186 (95% CI: 1.002, 1.405) for NEM in females, and 1.442 (95% CI: 1.099, 1.892) for people aged 65–74 years. An increase of more than 3 °C was associated with RRs of 1.353 (95% CI: 1.033, 1.772) for cardiovascular mortality and 1.667 (95% CI: 1.146, 2.425) for people aged < 65 years. In Los Angeles, only a drop of more than 3 °C was significantly associated with mortality, with RRs of 1.133 (95% CI: 1.053, 1.219) for total NEM, 1.252 (95% CI: 1.131, 1.386) for cardiovascular mortality, and 1.254 (95% CI: 1.135, 1.385) for people aged ≥ 75 years. In both cities, there were joint effects of temperature change and mean temperature on NEM. Conclusion: A significant change in temperature of more than 3 °C, whether positive or negative, has an adverse impact on mortality even after controlling for the current temperature.
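The exposure variable here is straightforward to reconstruct. A minimal sketch (with made-up daily means, not the Brisbane or Los Angeles data) of the day-to-day temperature change and the >3 °C categorisation used as the exposure in the Poisson models:

```python
# Day-to-day temperature change: today's mean minus yesterday's mean,
# categorised as a sharp drop (< -3 °C), sharp rise (> +3 °C), or neither.
def temperature_change(daily_means):
    return [t1 - t0 for t0, t1 in zip(daily_means, daily_means[1:])]

def categorise(change, threshold=3.0):
    if change < -threshold:
        return "drop"
    if change > threshold:
        return "rise"
    return "stable"

# Example with hypothetical daily mean temperatures (°C):
means = [24.0, 25.5, 21.0, 25.2, 25.8]
changes = temperature_change(means)        # approx. [1.5, -4.5, 4.2, 0.6]
labels = [categorise(c) for c in changes]  # the -4.5 day counts as a "drop"
```

In the study, these categorised changes would then enter a Poisson regression alongside the current day's mean temperature, which is why the paper can report effects "even after controlling for the current temperature".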

Relevance: 10.00%

Abstract:

Particle number concentrations and size distributions, visibility and particulate mass concentrations and weather parameters were monitored in Brisbane, Australia, on 23 September 2009, during the passage of a dust storm that originated 1400 km away in the dry continental interior. The dust concentration peaked at about mid-day when the hourly average PM2.5 and PM10 values reached 814 and 6460 µg m⁻³, respectively, with a sharp drop in atmospheric visibility. A linear regression analysis showed a good correlation between the coefficient of light scattering by particles (Bsp) and both PM10 and PM2.5. The particle number in the size range 0.5–20 µm exhibited a lognormal size distribution with modal and geometrical mean diameters of 1.6 and 1.9 µm, respectively. The modal mass was around 10 µm with less than 10% of the mass carried by particles smaller than 2.5 µm. The PM10 fraction accounted for about 68% of the total mass. By mid-day, as the dust began to increase sharply, the ultrafine particle number concentration fell from about 6×10³ cm⁻³ to 3×10³ cm⁻³ and then continued to decrease to less than 1×10³ cm⁻³ by 14:00, showing a power-law decrease with Bsp with an R² value of 0.77 (p < 0.01). Ultrafine particle size distributions also showed a significant decrease in number during the dust storm. This is the first scientific study of particle size distributions in an Australian dust storm.
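The reported power-law decrease of ultrafine particle number with Bsp is the kind of relationship normally fitted by linear regression in log-log space. A sketch with synthetic data (the coefficients below are illustrative, not the measured values):

```python
import numpy as np

# Fit a power law N = a * Bsp**b by linear regression in log-log space:
# log N = log a + b * log Bsp, so the slope of the fitted line is the
# exponent and exp(intercept) is the prefactor.
def fit_power_law(x, y):
    slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(intercept), slope  # (a, b)

# Synthetic illustration (not the measured data): N = 5000 * Bsp**-0.8
bsp = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
n = 5000.0 * bsp ** -0.8
a, b = fit_power_law(bsp, n)  # recovers a ≈ 5000, b ≈ -0.8
```

With real, noisy measurements the R² of the log-log fit (0.77 in the study) quantifies how well the power law describes the data.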

Relevance: 10.00%

Abstract:

Students struggle with learning to program. In recent years, not only has there been a dramatic drop in the number of students enrolling in IT and Computer Science courses, but attrition from these courses continues to be significant. Introductory programming subjects traditionally have high failure rates and, as they tend to be core to IT and Computer Science degrees, can be a road block for many students in their university studies. Is programming really that difficult — or are there other barriers to learning that have a serious and detrimental effect on student progression? In-class experiments were conducted in introductory programming units to test our hypothesis that pair-programming would benefit students' learning to program. We investigated the social and cultural barriers to learning programming by questioning students' perceptions of confidence, difficulty and enjoyment of programming. The results of paired and non-paired students were compared to determine the effect of pair-programming on learning outcomes. Both the empirical and anecdotal results of our experiments strongly supported our hypothesis.

Relevance: 10.00%

Abstract:

OBJECTIVE: Depression, anxiety and alcohol misuse frequently co-occur. While there is an extensive literature reporting on the efficacy of psychological treatments that target depression, anxiety or alcohol misuse separately, less research has examined treatments that address these disorders when they co-occur. We conducted a systematic review to determine whether psychological interventions that target alcohol misuse among people with co-occurring depressive or anxiety disorders are effective. DATA SOURCES: We systematically searched the PubMed and PsycINFO databases from inception to March 2010. Individual searches in alcohol, depression and anxiety were conducted, and were limited to 'human' published 'randomized controlled trials' or 'sequential allocation' articles written in English. STUDY SELECTION: We identified randomized controlled trials that compared manual-guided psychological interventions for alcohol misuse among individuals with depressive or anxiety disorders. Of 1540 articles identified, eight met inclusion criteria for the review. DATA EXTRACTION: From each study, we recorded alcohol and mental health outcomes, and other relevant clinical factors including age, gender ratio, follow-up length and drop-out rates. Quality of studies was also assessed. DATA SYNTHESIS: Motivational interviewing and cognitive-behavioral interventions were associated with significant reductions in alcohol consumption and depressive and/or anxiety symptoms. Although brief interventions were associated with significant improvements in both mental health and alcohol use variables, longer interventions produced even better outcomes. CONCLUSIONS: There is accumulating evidence for the effectiveness of motivational interviewing and cognitive behavior therapy for people with co-occurring alcohol and depressive or anxiety disorders.

Relevance: 10.00%

Abstract:

In this paper, we seek to expand the use of direct methods in real-time applications by proposing a vision-based strategy for pose estimation of aerial vehicles. The vast majority of approaches make use of features to estimate motion. Conversely, the strategy we propose is based on a multi-resolution (MR) implementation of an image registration technique (Inverse Compositional Image Alignment, ICIA) using direct methods. An on-board camera in a downwards-looking configuration, and the assumption of planar scenes, are the bases of the algorithm. The motion between frames (rotation and translation) is recovered by decomposing the frame-to-frame homography obtained by the ICIA algorithm applied to a patch that covers around 80% of the image. When visual estimation is required (e.g. during a GPS drop-out), this motion is integrated with the previously known estimate of the vehicle's state, obtained from the on-board sensors (GPS/IMU), and subsequent estimates are based only on the vision-based motion estimation. The proposed strategy is tested with real flight data in representative stages of a flight (cruise, landing and take-off), two of which (take-off and landing) are considered critical. The performance of the pose estimation strategy is analyzed by comparing it with the GPS/IMU estimates. Results show correlation between the visual estimates obtained with MR-ICIA and the GPS/IMU data, demonstrating that visual estimation can be used to provide a good approximation of the vehicle's state when required (e.g. during GPS drop-outs). In terms of performance, the proposed strategy is able to maintain an estimate of the vehicle's state for more than one minute, at real-time frame rates, based only on visual information.
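The planar-scene assumption is what makes the frame-to-frame homography decomposable into rotation and translation. A sketch of the underlying model H = R + t·nᵀ/d for a calibrated camera (the motion values are hypothetical; the paper's MR-ICIA estimates H from image intensities rather than constructing it as done here):

```python
import numpy as np

# For a calibrated camera over a planar scene, the frame-to-frame
# homography is H = R + t @ n.T / d, where (R, t) is the camera motion,
# n the plane normal and d the distance to the plane. Decomposing H
# recovers (R, t), which is what direct methods like ICIA exploit.
def planar_homography(R, t, n, d):
    return R + np.outer(t, n) / d

# Illustrative motion: a small yaw plus a lateral translation.
yaw = 0.05
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([0.2, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0])  # ground-plane normal (downward-looking camera)
d = 10.0                       # distance to the ground plane
H = planar_homography(R, t, n, d)

# A point on the plane, in normalised camera-1 coordinates:
P = np.array([1.0, 2.0, d])
x1 = P / P[2]
x2 = (R @ P + t); x2 = x2 / x2[2]      # same point seen from frame 2
x2_h = H @ x1;    x2_h = x2_h / x2_h[2]  # H maps x1 to x2 (up to scale)
```

The check at the end verifies the defining property of the planar homography: for points on the plane, H applied to the first view reproduces the second view up to scale.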

Relevance: 10.00%

Abstract:

The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers and one poster presentation, of which five have been published and the other two are under review. This project was financially supported by a QUTPRA grant. The twenty-first century started with the resurrection of lignocellulosic biomass as a potential substitute for petrochemicals. Petrochemicals, which underpinned sustained economic growth during the past century, have reached or are approaching their peak. The world energy situation is complicated by political uncertainty and by the environmental impact associated with petrochemical import and usage. In particular, greenhouse gases and toxic emissions produced by petrochemicals have been implicated as a significant cause of climate change. Lignocellulosic biomass (e.g. sugarcane biomass and bagasse), which potentially enjoys a more abundant, widely distributed and cost-effective resource base, can play an indispensable role in the paradigm transition from a fossil-based to a carbohydrate-based economy. Poly(3-hydroxybutyrate) (PHB) has attracted much commercial interest as a biodegradable plastic because some of its physical properties are similar to those of polypropylene (PP), even though the two polymers have quite different chemical structures. PHB exhibits a high degree of crystallinity, has a high melting point of approximately 180 °C and, most importantly, unlike PP, is rapidly biodegradable. Two major factors currently inhibit the widespread use of PHB: its high cost and its poor mechanical properties. The production costs of PHB are significantly higher than for plastics produced from petrochemical resources (e.g.
PP costs US$1 kg⁻¹, whereas PHB costs US$8 kg⁻¹), and its stiff and brittle nature makes processing difficult and impedes its ability to handle high impact. Lignin, together with cellulose and hemicellulose, is one of the three main components of every lignocellulosic biomass. It is a natural polymer occurring in the plant cell wall and, after cellulose, is the most abundant polymer in nature. It is extracted mainly as a by-product of the pulp and paper industry. Although lignin is traditionally burnt in industry for energy, it has many value-adding properties. Lignin, which to date has not been fully exploited, is an amorphous polymer with hydrophobic behaviour, which makes it a good candidate for blending with PHB; technically, blending can be a viable route to cost reduction and enhanced properties. Theoretically, lignin and PHB affect each other's physicochemical properties when they become miscible in a composite. A comprehensive study of the structural, thermal, rheological and environmental properties of lignin/PHB blends, together with neat lignin and PHB, is the targeted scope of this thesis. An introduction to this research, including a description of the research problem, a literature review and an account of the research progress linking the research papers, is presented in Chapter 1. In this research, lignin was obtained from bagasse through extraction with sodium hydroxide. A novel two-step pH precipitation procedure was used to recover soda lignin with a purity of 96.3 wt% from the black liquor (i.e. the spent sodium hydroxide solution). The precipitation process is presented in Chapter 2. A sequential solvent extraction process was used to fractionate the soda lignin into three fractions. These fractions, together with the soda lignin, were characterised to determine elemental composition, purity, carbohydrate content, molecular weight and functional group content. The thermal properties of the lignins were also determined.
The results are presented and discussed in Chapter 2. On the basis of the type and quantity of functional groups, attempts were made to identify potential applications for each of the individual lignins. As an addendum to the general section on the development of lignin composite materials, which includes Chapters 1 and 2, studies on the kinetics of bagasse thermal degradation are presented in Appendix 1. The work showed that the distinct stages of mass loss depend on residual sucrose. As the development of value-added products from lignin will improve the economics of cellulosic ethanol, a review of lignin applications, which includes lignin/PHB composites, is presented in Appendix 2. Chapters 3, 4 and 5 are dedicated to investigations of the properties of soda lignin/PHB composites. Chapter 3 reports on the thermal stability and miscibility of the blends. Although the addition of soda lignin shifts the onset of PHB decomposition to lower temperatures, the lignin/PHB blends are thermally more stable over a wider temperature range. The results from the thermal study also indicated that blends containing up to 40 wt% soda lignin were miscible. The Tg data for these blends fitted well to the Gordon-Taylor and Kwei models. Fourier transform infrared spectroscopy (FT-IR) evaluation showed that the miscibility of the blends was due to specific hydrogen bonding (and similar interactions) between reactive phenolic hydroxyl groups of lignin and the carbonyl group of PHB. The thermophysical and rheological properties of soda lignin/PHB blends are presented in Chapter 4. In this chapter, the kinetics of thermal degradation of the blends is studied using thermogravimetric analysis (TGA). This preliminary investigation is limited to the processing temperature of blend manufacturing. Of significance in the study is the drop in the apparent activation energy, Ea, from 112 kJ mol⁻¹ for pure PHB to half that value for the blends.
This means that the addition of lignin to PHB reduces the thermal stability of PHB, and that the comparatively reduced weight loss observed in the TGA data is associated with the slower rate of lignin degradation in the composite. The Tg of PHB, as well as its melting temperature, melting enthalpy and crystallinity, decreases with increasing lignin content. Results from the rheological investigation showed that at low lignin content (≤30 wt%), lignin acts as a plasticiser for PHB, while at high lignin content it acts as a filler. Chapter 5 is dedicated to the environmental study of soda lignin/PHB blends. The biodegradability of lignin/PHB blends is compared to that of PHB using the standard soil burial test. To obtain acceptable biodegradation data, samples were buried for 12 months under controlled conditions. Gravimetric analysis, TGA, optical microscopy, scanning electron microscopy (SEM), differential scanning calorimetry (DSC), FT-IR and X-ray photoelectron spectroscopy (XPS) were used in the study. The results clearly demonstrated that lignin retards the biodegradation of PHB, and that the miscible blends were more resistant to degradation than the immiscible blends. To understand the relationship between the structure of lignin and the properties of the blends, a methanol-soluble lignin, which contains about three times fewer phenolic hydroxyl groups than the parent soda lignin used in preparing the blends reported in Chapters 3 and 4, was blended with PHB and the properties of the blends investigated. The results are reported in Chapter 6. At up to 40 wt% methanol-soluble lignin, the experimental data fitted the Gordon-Taylor and Kwei models, similar to the results obtained for the soda lignin-based blends. However, the values obtained for the interaction parameters of the methanol-soluble lignin blends were slightly lower than those for the soda lignin blends, indicating a weaker association between methanol-soluble lignin and PHB.
FT-IR data confirmed that hydrogen bonding is the main interactive force between the reactive functional groups of lignin and the carbonyl group of PHB. In summary, the structural differences between the two lignins did not manifest themselves in the properties of their blends.
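The Gordon-Taylor and Kwei fits mentioned for the blend Tg data follow standard forms. A sketch of both models (the Tg values and fitting constants below are illustrative placeholders, not the thesis measurements):

```python
# Gordon-Taylor: Tg of a miscible blend as a weighted mean of the
# component Tgs, with fitting constant k. Kwei adds an interaction term
# q*w1*w2, often attributed to hydrogen bonding between the components.
def gordon_taylor(w2, tg1, tg2, k):
    """Blend Tg for weight fraction w2 of component 2 (w1 = 1 - w2)."""
    w1 = 1.0 - w2
    return (w1 * tg1 + k * w2 * tg2) / (w1 + k * w2)

def kwei(w2, tg1, tg2, k, q):
    """Gordon-Taylor plus an interaction term q*w1*w2."""
    return gordon_taylor(w2, tg1, tg2, k) + q * (1.0 - w2) * w2

# Illustrative values only (°C): a low-Tg polymer blended with a
# high-Tg lignin, k and q chosen arbitrarily for the sketch.
tg_phb, tg_lignin = 4.0, 140.0
tg_blend = gordon_taylor(0.3, tg_phb, tg_lignin, k=0.5)  # ≈ 28 °C
```

Both models reduce to the pure-component Tg at w2 = 0 and w2 = 1; a positive q in the Kwei model raises the blend Tg above the Gordon-Taylor prediction, consistent with favourable specific interactions.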

Relevance: 10.00%

Abstract:

Determining the placement and rating of transformers and feeders is the main objective of basic distribution network planning. The bus voltage and the feeder current are two constraints which should be maintained within their standard ranges. Distribution network planning becomes harder when the planning area is located far from the sources of power generation and the infrastructure, mainly as a consequence of voltage drop, line loss and reduced system reliability. Long supply distances cause a significant voltage drop across the distribution lines. Capacitors and Voltage Regulators (VRs) can be installed to decrease the voltage drop. Long distances also increase the probability of failure, which lowers network reliability. Cross-Connections (CCs) and Distributed Generators (DGs) are devices which can be employed to improve system reliability. Another main factor which should be considered in planning distribution networks (in both rural and urban areas) is load growth. To accommodate this factor, transformers and feeders are conventionally upgraded, which incurs a large cost. Installation of DGs and capacitors in a distribution network can alleviate this issue while providing other benefits. In this research, a comprehensive planning method is presented for distribution networks. Since the distribution network is composed of low and medium voltage networks, both are included in this procedure; however, the main focus of this research is on medium voltage network planning. The main objective is to minimize the investment cost, the line loss and the reliability indices for a study timeframe while supporting load growth. The investment cost relates to the distribution network elements such as the transformers, feeders, capacitors, VRs, CCs and DGs. The voltage drop and the feeder current, as the constraints, are maintained within their standard ranges.
In addition to minimizing the reliability and line loss costs, the planned network should support continual load growth, which is an essential concern in planning distribution networks. In this thesis, a novel segmentation-based strategy is proposed for including this factor. Using this strategy, the computation time is significantly reduced compared with the exhaustive search method, while accuracy remains acceptable. In addition to being applicable for considering load growth, this strategy is appropriate for the inclusion of practical (dynamic) load characteristics, as demonstrated in this thesis. The allocation and sizing problem has a discrete nature with several local minima, which highlights the importance of selecting a proper optimization method. Modified discrete particle swarm optimization, a heuristic method, is introduced in this research to solve this complex planning problem. Discrete nonlinear programming and a genetic algorithm (an analytical and a heuristic method, respectively) are also applied to this problem to evaluate the proposed optimization method.
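The feeder voltage-drop constraint at the heart of this planning problem is commonly approximated per phase as ΔV ≈ (R·P + X·Q)/V. A sketch with illustrative line and load values (not taken from the thesis):

```python
# Approximate per-phase voltage drop over a radial line:
#   dV ≈ (R*P + X*Q) / V
# with R, X the line resistance and reactance, P, Q the load, and V the
# nominal voltage. Planning then checks dV against the standard range.
def voltage_drop(r_ohm, x_ohm, p_w, q_var, v_volt):
    return (r_ohm * p_w + x_ohm * q_var) / v_volt

# Illustrative case: 2 km of line at 0.3 + j0.25 ohm/km feeding a
# 150 kW / 50 kvar load at 11 kV (all values are assumptions).
dv = voltage_drop(0.3 * 2, 0.25 * 2, 150e3, 50e3, 11e3)
percent = 100.0 * dv / 11e3  # drop as a percentage of nominal voltage
```

This is why long supply distances are problematic: R and X grow with line length, so for the same load the drop scales roughly linearly with distance, eventually violating the voltage constraint unless capacitors, VRs or DGs are added.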

Relevance: 10.00%

Abstract:

Magnesium alloys have been of growing interest for various engineering applications, such as the automobile, aerospace, communication and computer industries, due to their low density, high specific strength, good machinability and availability compared with other structural materials. However, most Mg alloys suffer from poor plasticity due to their Hexagonal Close Packed structure. Grain refinement has been proved to be an effective method to enhance the strength and alter the ductility of these materials. Several methods have been proposed to produce materials with nanocrystalline grain structures. So far, most of the research work on nanocrystalline materials has been carried out on Face-Centered Cubic and Body-Centered Cubic metals; however, there has been little investigation of nanocrystalline Mg alloys. In this study, bulk coarse-grained and nanocrystalline Mg alloys were fabricated by a mechanical alloying method. The mixed powder of Mg chips and Al powder was mechanically milled under an argon atmosphere for durations of 0 hours (MA0), 10 hours (MA10), 20 hours (MA20), 30 hours (MA30) and 40 hours (MA40), followed by compaction and sintering. The sintered billets were then hot-extruded into metallic rods with a 7 mm diameter. The obtained Mg alloys have a nominal composition of Mg–5 wt% Al, with grain sizes ranging from 13 μm down to 50 nm, depending on the milling duration. The microstructure characterization and its evolution after deformation were carried out by means of optical microscopy, X-ray diffraction, scanning electron microscopy, transmission electron microscopy, scanning probe microscopy and neutron diffraction techniques. Nanoindentation, compression and micro-compression tests on micro-pillars were used to study the size effects on the mechanical behaviour of the Mg alloys. Two kinds of size effects on the mechanical behaviour and deformation mechanisms were investigated: the grain size effect and the sample size effect.
The nanoindentation tests comprised constant strain rate, constant loading rate and indentation creep tests. The indentation size effect normally reported in single crystals and coarse-grained crystals was observed in both the coarse-grained and nanocrystalline Mg alloys. Since the indentation size effect is correlated with the Geometrically Necessary Dislocations under the indenter that accommodate the plastic deformation, the good agreement between the experimental results and the Indentation Size Effect model indicated that, in the nanocrystalline MA20 and MA30, dislocation plasticity was still the dominant deformation mechanism. Significant hardness enhancement with decreasing grain size, down to 58 nm, was found in the nanocrystalline Mg alloys; further reduction of grain size led to a drop in hardness. The failure of grain refinement strengthening, together with the relatively high strain rate sensitivity of the nanocrystalline Mg alloys, suggested a change in the deformation mechanism. Indentation creep tests showed that the stress exponent depended on the loading rate during the loading section of the indentation, which was related to the dislocation structures before the creep started. The influence of grain size on the mechanical behaviour and strength of the extruded coarse-grained and nanocrystalline Mg alloys was investigated using uniaxial compression tests. The macroscopic response of the Mg alloys transitioned from strain hardening to strain softening behaviour as the grain size was reduced from 13 μm to 50 nm. The strain hardening was related to twinning-induced hardening and the dislocation hardening effect, while the strain softening was attributed to localized deformation in the nanocrystalline grains. Tension–compression yield asymmetry was observed in the nanocrystalline region, demonstrating the twinning effect in the ultra-fine-grained and nanocrystalline region.
The relationship k_tension < k_compression failed to hold in the nanocrystalline Mg alloys; this was attributed to the twofold effect of grain size on twinning. The nanocrystalline Mg alloys were found to exhibit increased strain rate sensitivity with decreasing grain size, for strain rates ranging from 0.0001/s to 0.01/s. The strain rate sensitivity of coarse-grained MA0 was increased by more than 10 times in MA40. The Hall-Petch relationship broke down at a critical grain size in the nanocrystalline region. The breakdown of the Hall-Petch relationship and the increased strain rate sensitivity were due to localized dislocation activities (generation and annihilation at grain boundaries) and the more significant contribution from grain-boundary-mediated mechanisms. In the micro-compression tests, the sample size effects on the mechanical behaviour were studied on MA0, MA20 and MA40 micro-pillars. In contrast to the bulk samples under compression, the stress-strain curves of the MA0 and MA20 micro-pillars were characterized by a number of discrete strain burst events separated by nearly elastic strain segments. Unlike MA0 and MA20, the stress-strain curves of the MA40 micro-pillars were smooth, without obvious strain bursts. The deformation of the MA0 and MA20 micro-pillars under micro-compression was considered to be initially dominated by deformation twinning, followed by dislocation mechanisms. For the MA40 pillars, the deformation mechanisms were believed to be localized dislocation activities and grain boundary related mechanisms. The strain hardening behaviour of the micro-pillars suggested that the grain boundaries in the nanocrystalline micro-pillars would reduce the source (nucleation sources for twins/dislocations) starvation hardening effect.
The power-law dependence of the yield strength on pillar dimensions in MA0 and MA20 supported the view that the twinning mechanism was correlated with pre-existing defects, which can promote the nucleation of twins. We then provided a comparison of the results and conclusions derived from the different techniques used for testing the coarse-grained and nanocrystalline Mg alloys; this helps to better understand the deformation mechanisms of the Mg alloys as a whole. Finally, we summarized the thesis, highlighted the conclusions, contributions, innovations and outcomes of the research, and outlined recommendations for future work.
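The Hall-Petch relation referred to above has the standard form σ_y = σ₀ + k·d^(-1/2). A sketch of the grain-size dependence (the constants are illustrative, not fitted to the MA alloys; as the thesis reports, the relation breaks down below a critical grain size where grain-boundary mechanisms take over):

```python
import math

# Hall-Petch: yield strength rises as the inverse square root of the
# grain size d. sigma0 (friction stress, MPa) and k (MPa * sqrt(m)) are
# material constants; the values below are placeholders for illustration.
def hall_petch(d_m, sigma0_mpa=50.0, k_mpa_sqrt_m=0.2):
    return sigma0_mpa + k_mpa_sqrt_m / math.sqrt(d_m)

coarse = hall_petch(13e-6)  # ~13 um grains, as in MA0
nano = hall_petch(58e-9)    # ~58 nm grains: much stronger, near the
                            # regime where the relation stops holding
```

The extrapolation to the nanocrystalline regime is exactly what fails experimentally: below the critical grain size, measured hardness drops instead of continuing to rise as d^(-1/2) would predict.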

Relevance: 10.00%

Abstract:

What is the future for public health in the twenty-first century? Can we glean an idea about the future of public health from its past? As Winston Churchill once said, 'the further backward you look, the further forward you can see'. What, then, can we see in the history of public health that gives us an idea of where public health might be headed in the future? In the twentieth century there was substantial progress in public health in Australia. These improvements were brought about through a number of factors. In part, improvements were due to improved knowledge about the natural history of disease and its treatment. Added to this knowledge was a shifting focus from legislative measures to protect health towards improved promotion and prevention strategies, and a general improvement in social and economic conditions for people living in countries like Australia. The same could not, however, be said for poorer countries, many of which still have the most fundamental sanitary and health protection issues to deal with. For example, in sub-Saharan Africa and Russia, the decline in life expectancy may be an aberration or it may be related to a range of interconnected factors. In Russia, factors such as alcoholism, violence, suicide, accidents and cardiovascular disease could be contributing to the falling life expectancy (McMichael & Butler 2007). In sub-Saharan Africa, a range of issues such as HIV/AIDS, poverty, malaria, tuberculosis, undernutrition, totally inadequate infrastructure, gender inequality, conflict and violence, political taboos and a complete lack of political will have all contributed to a dramatic drop in life expectancy (McMichael & Butler 2007).

Relevance: 10.00%

Abstract:

The growth of solid tumours beyond a critical size is dependent upon angiogenesis, the formation of new blood vessels from an existing vasculature. Tumours may remain dormant at microscopic sizes for some years before switching to a mode in which growth of a supportive vasculature is initiated. The new blood vessels supply nutrients, oxygen, and access to routes by which tumour cells may travel to other sites within the host (metastasize). In recent decades an abundance of biological research has focused on tumour-induced angiogenesis in the hope that treatments targeted at the vasculature may result in a stabilisation or regression of the disease: a tantalizing prospect. The complex and fascinating process of angiogenesis has also attracted the interest of researchers in the field of mathematical biology, a discipline that is, for mathematics, relatively new. The challenge in mathematical biology is to produce a model that captures the essential elements and critical dependencies of a biological system. Such a model may ultimately be used as a predictive tool. In this thesis we examine a number of aspects of tumour-induced angiogenesis, focusing on growth of the neovasculature external to the tumour. Firstly we present a one-dimensional continuum model of tumour-induced angiogenesis in which elements of the immune system or other tumour-cytotoxins are delivered via the newly formed vessels. This model, based on observations from experiments by Judah Folkman et al., is able to show regression of the tumour for some parameter regimes. The modelling highlights a number of interesting aspects of the process that may be characterised further in the laboratory. The next model we present examines the initiation positions of blood vessel sprouts on an existing vessel, in a two-dimensional domain. 
This model hypothesises that a simple feedback inhibition mechanism may be used to describe the spacing of these sprouts, with the inhibitor being produced by breakdown of the existing vessel's basement membrane. Finally, we have developed a stochastic model of blood vessel growth and anastomosis in three dimensions. The model has been implemented in C++, includes an OpenGL interface, and uses a novel algorithm for calculating the proximity of the line segments representing a growing vessel. This choice of programming language and graphics interface allows for near-simultaneous calculation and visualisation of blood vessel networks using a contemporary personal computer. In addition, the visualised results may be transformed interactively, and drop-down menus facilitate changes in the parameter values. Visualisation of results is of vital importance in the communication of mathematical information to a wide audience, and we aim to incorporate this philosophy in the thesis. As biological research further uncovers the intriguing processes involved in tumour-induced angiogenesis, we conclude with a comment from mathematical biologist Jim Murray: 'Mathematical biology is ... the most exciting modern application of mathematics.'
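The proximity calculation for vessel segments can be illustrated with a standard clamped closest-point computation between two 3D segments (a sketch in Python rather than the thesis's C++, and not its novel algorithm):

```python
import numpy as np

# Minimum distance between segments P0P1 and Q0Q1: solve the unclamped
# closest-point parameters (s, t), then clamp each to [0, 1] and re-solve
# for the other. Anastomosis can be flagged when this distance falls
# below a vessel-radius threshold.
def segment_distance(p0, p1, q0, q1):
    u, v, w = p1 - p0, q1 - q0, p0 - q0
    a, b, c = u @ u, u @ v, v @ v  # squared lengths and cross dot product
    d, e = u @ w, v @ w
    denom = a * c - b * b          # zero when the segments are parallel
    s = 0.0 if denom < 1e-12 else float(np.clip((b * e - c * d) / denom, 0.0, 1.0))
    t = float(np.clip((b * s + e) / c, 0.0, 1.0)) if c > 1e-12 else 0.0
    s = float(np.clip((b * t - d) / a, 0.0, 1.0)) if a > 1e-12 else 0.0
    return float(np.linalg.norm(w + s * u - t * v))

# Two crossing vessels passing 1 unit apart:
d_min = segment_distance(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                         np.array([0.5, -0.5, 1.0]), np.array([0.5, 0.5, 1.0]))
```

Naively checking every pair of segments is O(n²) per growth step, which is why a vessel-growth simulation benefits from a more careful proximity algorithm such as the one the thesis describes.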

Relevance: 10.00%

Abstract:

Voltage drop and rise at network peak and off-peak periods, along with voltage unbalance, are the major power quality problems in low voltage distribution networks. Utilities usually adjust the transformer tap changers as a solution for voltage drop, and try to distribute the loads equally across phases as a solution for the network voltage unbalance problem. On the other hand, the ever increasing energy demand, along with the necessity of cost reduction and higher reliability requirements, is driving modern power systems towards Distributed Generation (DG) units. These can take the form of small rooftop photovoltaic cells (PVs), Plug-in Electric Vehicles (PEVs) or Micro Grids (MGs). Rooftop PVs, typically with power levels ranging from 1–5 kW and installed by householders, are gaining popularity due to their financial benefits for the householders. PEVs will also soon emerge in residential distribution networks; they behave as large residential loads while being charged, while later generations are also expected to support the network as small DG units which transfer the energy stored in their batteries into the grid. Furthermore, MGs, which are clusters of loads and several DG units such as diesel generators, PVs, fuel cells and batteries, have recently been introduced to distribution networks. The voltage unbalance in the network can be increased due to the uncertainties in the random connection points of the PVs and PEVs to the network, their nominal capacities and their times of operation. Therefore, it is of high interest to investigate the voltage unbalance in these networks as the result of MG, PV and PEV integration into low voltage networks. In addition, the network might experience non-standard voltage drop due to high penetration of PEVs being charged at night, or non-standard voltage rise due to high penetration of PVs and PEVs generating electricity back into the grid during network off-peak periods.
In this thesis, a voltage unbalance sensitivity analysis and a stochastic evaluation are carried out for householder-installed PVs, treating installation point, nominal capacity and penetration level as the uncertain parameters. A similar analysis is carried out for PEV penetration in the network in two operating modes: grid-to-vehicle and vehicle-to-grid. The conventional methods for improving voltage unbalance in these networks are then discussed, followed by new and efficient methods for improving the voltage profile at network peak and off-peak periods and for reducing voltage unbalance. In addition, voltage unbalance reduction is investigated for MGs, and new improvement methods are proposed and applied to the MG test bed planned to be established at Queensland University of Technology (QUT). MATLAB and PSCAD/EMTDC simulation software are used to verify the analyses and proposals.
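As context for the sensitivity analysis described above, voltage unbalance is commonly quantified by the voltage unbalance factor (VUF), the ratio of the negative- to positive-sequence voltage magnitude. A minimal sketch, not the thesis code, with hypothetical phase values:

```python
import cmath

def vuf_percent(va, vb, vc):
    """Voltage unbalance factor, %VUF = 100 * |V_neg| / |V_pos|,
    computed via the Fortescue (symmetrical components) transform."""
    a = cmath.exp(2j * cmath.pi / 3)         # 120-degree rotation operator
    v_pos = (va + a * vb + a * a * vc) / 3   # positive-sequence voltage
    v_neg = (va + a * a * vb + a * vc) / 3   # negative-sequence voltage
    return 100 * abs(v_neg) / abs(v_pos)

a = cmath.exp(2j * cmath.pi / 3)
balanced = vuf_percent(230, 230 * a**2, 230 * a)     # 0% unbalance
# A single-phase load (e.g. a charging PEV) dragging one phase down by 10 V:
unbalanced = vuf_percent(220, 230 * a**2, 230 * a)   # ~1.47% unbalance
```

A uniformly loaded feeder gives zero VUF; randomly placed single-phase PVs or PEVs perturb one phase at a time, which is exactly the effect the stochastic evaluation varies.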

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Long undersea debris runout can be facilitated by a boundary layer formed by weak marine sediments under a moving slide mass. Undrained loading of such offshore sediment results in a profound drop in basal shear resistance compared to subaerial conditions, enabling long undersea runout. Large long-runout submarine landslides are therefore not truly enigmatic (Voight and Elsworth 1992, 1997), but are understandable in terms of conventional geotechnical principles. A corollary is that remoulded undrained strength, not friction angle, should be used for basal resistance in numerical simulations. This hypothesis is testable by drilling and examining the structure at the soles of undersea debris avalanches for indications of incorporated sheared marine sediments, by tests of soil properties, and by simulations. Such considerations of emplacement processes are an aim of ongoing research in the Lesser Antilles (Caribbean Sea), where multiple offshore debris avalanche and dome-collapse debris deposits have been identified since 1999 on swath bathymetric surveys collected in five oceanographic cruises. This paper reviews the prehistoric and historic collapses that have occurred offshore of Antilles arc islands and summarizes ongoing research on emplacement processes.
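The corollary above, that remoulded undrained strength rather than friction angle should set basal resistance, can be illustrated with a back-of-envelope comparison. The stresses and strengths below are hypothetical, chosen only to show the order of magnitude:

```python
import math

def drained_resistance_kpa(normal_stress_kpa, friction_angle_deg):
    """Drained (frictional) shear resistance: tau = sigma_n' * tan(phi)."""
    return normal_stress_kpa * math.tan(math.radians(friction_angle_deg))

# A slide mass imposing 500 kPa effective normal stress on a phi = 30 deg base:
tau_drained = drained_resistance_kpa(500, 30)   # roughly 289 kPa

# Under undrained loading of weak, remoulded marine sediment, basal resistance
# is instead capped at the remoulded undrained strength, here taken as 5 kPa.
tau_undrained = 5.0

# The order-of-magnitude drop in basal resistance is what permits long runout.
resistance_ratio = tau_drained / tau_undrained
```

With these illustrative numbers the undrained basal layer offers nearly sixty times less resistance, which is why a simulation using friction angle alone would badly underpredict runout.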

Relevância:

10.00% 10.00%

Publicador:

Resumo:

A breaker restrike is an abnormal arcing phenomenon and a possible precursor of breaker failure. Such a failure interrupts the transmission and distribution of the electricity supply until the breaker is replaced. Before 2008, there was little evidence in the literature of monitoring techniques based on the measurement and interpretation of restrikes produced during switching of capacitor banks and shunt reactor banks in power systems. In 2008 a non-intrusive radiometric restrike measurement method and a restrike hardware detection algorithm were developed by M.S. Ramli and B. Kasztenny. The limitations of the radiometric method, however, are a band-limited frequency response and limited amplitude determination, while current restrike detection methods and algorithms require wide-bandwidth current transformers and high voltage dividers. This thesis proposes a restrike switch model, built with the Alternative Transient Program (ATP) and Wavelet Transforms, to support diagnostics; restrike phenomena thereby become the basis of a new diagnostic process for online interrupter monitoring. The project investigates the restrike switch model parameter 'A', the dielectric voltage gradient, for normal and slowed contact opening velocities, together with the escalation voltages, which can be used as a diagnostic tool for a vacuum circuit-breaker (CB) at service voltages between 11 kV and 63 kV. During interruption of an inductive load, a transient voltage develops across the contact gap at current quenching or chopping. The dielectric strength of the gap must rise quickly enough to withstand this transient voltage; if it does not, the gap flashes over, resulting in a restrike. A straight line is fitted through the voltage points at flashover of the contact gap, the point at which the gap voltage exceeds the dielectric strength of the gap.
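The restrike condition just described, the gap's recovering dielectric strength racing against the transient recovery voltage (TRV), can be sketched numerically. This is an illustrative toy model, not the thesis's ATP switch model: the withstand voltage is assumed to ramp linearly as dielectric gradient x opening velocity x time, the TRV is a damped 1-cos oscillation, and all parameter values are hypothetical.

```python
import math

def first_restrike_time(velocity, grad_a=50e6, v_peak=30e3, f_trv=500.0,
                        damping=1000.0, dt=1e-7, t_max=2e-3):
    """Return the first time [s] at which the transient recovery voltage
    exceeds the recovering gap withstand voltage (a restrike), or None
    if the gap holds throughout.

    Withstand: grad_a * velocity * t   (V/m gradient x m/s velocity x s)
    TRV:       damped 1-cos oscillation, peaking near 2 * v_peak."""
    t = dt
    while t < t_max:
        withstand = grad_a * velocity * t
        trv = (v_peak * (1 - math.cos(2 * math.pi * f_trv * t))
               * math.exp(-damping * t))
        if trv > withstand:
            return t
        t += dt
    return None

# A slowed contact opening lowers the withstand ramp, so the TRV catches it:
t_slow = first_restrike_time(velocity=0.2)   # restrikes early
t_fast = first_restrike_time(velocity=1.0)   # gap holds, no restrike
```

Under these assumptions the slow-opening case restrikes while the fast-opening case does not, mirroring the diagnostic link between contact opening velocity and restrike risk that the thesis exploits.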
This research shows that a change in the opening contact velocity of a vacuum CB produces a corresponding change in the slope of the gap escalation voltage envelope. To investigate the diagnostic process, the ATP restrike switch model was extended with contact opening velocity computation for restrike waveform signature analyses, alongside experimental investigations. A mathematical CB model was also enhanced with an empirical dielectric model for SF6 (sulphur hexafluoride) CBs at service voltages above 63 kV and a generalised dielectric curve model for 12 kV CBs. A CB restrike can be predicted when the measured and simulated restrike waveform signatures are of a similar type. The restrike switch model is applied to: computer simulations as virtual experiments, including predicting breaker restrikes; estimating the remaining interrupter life of SF6 puffer CBs; checking system stresses; assessing point-on-wave (POW) operations; and developing a restrike detection algorithm using Wavelet Transforms. A simulated high-frequency nozzle current magnitude was applied to an equation derived from the literature that estimates the life extension of the interrupter of an SF6 high voltage CB. The restrike waveform signatures of medium and high voltage CBs identify possible failure mechanisms such as delayed opening, degraded dielectric strength and improper contact travel. The simulated and measured restrike waveform signatures are analysed using Matlab software for automatic detection. An experimental diagnostic investigation of a 12 kV vacuum CB was carried out for parameter determination, and a passive antenna calibration was successfully developed for field implementation.
The degradation features were also evaluated with a predictive interpretation technique from the experiments; the subsequent simulation indicates that the voltage drop associated with a slow opening velocity can be measured to give a degree of contact degradation. A predictive interpretation technique is a computer-modelling approach for assessing switching device performance that allows one parameter to be varied at a time, which is often difficult to do experimentally because of the variable contact opening velocity. The significance of this thesis is a non-intrusive method, developed using measurements, ATP and Wavelet Transforms, to predict and interpret breaker restrike risk. Measurements on high voltage circuit-breakers can identify degradation that could otherwise interrupt the distribution and transmission of the electricity supply. It is hoped that the restrike monitoring techniques developed by this research will form part of a diagnostic process for detecting breaker stresses related to interrupter lifetime. Suggestions for future research, including a field implementation proposal to validate the restrike switch model for ATP system studies and the hot dielectric strength curve model for SF6 CBs, are given in Appendix A.
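The wavelet-based detection step can be illustrated with a minimal single-level Haar decomposition: detail coefficients are small for the smooth power-frequency waveform and spike where a high-frequency restrike transient occurs. This is an illustrative sketch only, not the thesis's Matlab algorithm; the waveform and threshold are synthetic.

```python
import math

def haar_detail(signal):
    """Single-level Haar wavelet detail coefficients:
    (x[2i] - x[2i+1]) / sqrt(2). Large magnitudes flag sharp transients."""
    return [(signal[i] - signal[i + 1]) / math.sqrt(2)
            for i in range(0, len(signal) - 1, 2)]

def detect_transients(signal, threshold):
    """Indices (in coefficient space) where detail magnitude exceeds threshold."""
    return [i for i, d in enumerate(haar_detail(signal)) if abs(d) > threshold]

# A smooth 50 Hz waveform with a short high-frequency burst (mock restrike):
n, fs = 1024, 51200
signal = [math.sin(2 * math.pi * 50 * t / fs) for t in range(n)]
for t in range(500, 520):                    # inject the transient
    signal[t] += 0.8 * math.sin(2 * math.pi * 8000 * t / fs)

hits = detect_transients(signal, threshold=0.05)
# Flagged coefficients cluster around index 250 (sample 500 / 2).
```

Because each coefficient covers two samples, the flagged indices localise the transient in time, which is the property that lets a wavelet detector time-stamp restrike events automatically.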

Relevância:

10.00% 10.00%

Publicador:

Resumo:

In 2012 the existing eight disciplines of QUT's Creative Industries Faculty combined with the School of Design (formerly part of the Faculty of Built Environment and Engineering) to create a super faculty comprising the following disciplines: Architecture, Creative Writing & Literary Studies, Dance, Drama, Fashion, Film & Television, Industrial Design, Interior Design, Journalism, Media & Communication, Landscape Architecture, Music & Sound and Urban Design. The university's research training unit AIRS (Advanced Information Retrieval Skills) provides a systematic introduction to research-level information literacies. It is currently being redesigned to reflect today's data-intensive research environment and to build capacity for life-long learning. Upon completion participants are expected to be able to: 1. Demonstrate an understanding of advanced search and evaluative strategies to efficiently yield appropriate resources for creating original research. 2. Apply appropriate data management strategies to organise and use information proficiently, ethically and legally. 3. Identify strategies to ensure best practice in the use of information sources, information technologies, information access tools and investigative methods. All Creative Industries Faculty research students must complete this unit, into which CI Librarians deliver discipline-specific material. The library employs a team of research-specific experts as well as Liaison Librarians for each faculty.
Together they develop and deliver a generic research training program covering: Managing Research Data; QUT ePrints: New features for tracking your research impact; Tracking Research Impact; Research Students and the Library: Overview of Library Research Support Services; Technologies for Research Collaboration; Open Access Publishing; Greater Impact via Creative Commons Licence; CAMBIA - Navigating the patent literature; Uploading Publications to QUT ePrints Workshop; AIRS for supervisors; Finding Existing Research Data; Keeping up to date: Discovering and managing current awareness information; and Getting Published. In 2011 Creative Industries initiated a faculty-specific research training program to build research capacity within the Faculty, with workshops designed and developed with Faculty Research Leaders, the Office of Research and Liaison Librarians. "Show me the money", a session helping staff pursue alternative funding sources, was well attended and generated much discussion and interest. Drop-in support sessions for ePrints, EndNote referencing software and Tracking Research Impact for the Creative Industries were also popular options. Liaison Librarians continue to provide one-on-one consultations with individual researchers on request; this service helps Librarians get to know, and monitor, their researchers' changing needs. The CI Faculty has appointed two Research Leaders, one for each of its two Schools (Design, and Media, Entertainment & Creative Arts), whose role is to mentor newer research staff. Similarly, within the CI library liaison team one librarian is assigned the role of Research Coordinator, responsible for primary liaison with the Assistant Dean, Research and other key Faculty research managers, and most likely to attend Faculty committees and meetings relating to research support.