133 results for VACUUM MISALIGNMENT
Abstract:
A breaker restrike is an abnormal arcing phenomenon that can lead to breaker failure and, eventually, to interruption of the transmission and distribution of the electricity supply until the breaker is replaced. Before 2008, there was little evidence in the literature of monitoring techniques based on the measurement and interpretation of restrikes produced during switching of capacitor banks and shunt reactor banks in power systems. In 2008 a non-intrusive radiometric restrike measurement method and a restrike hardware detection algorithm were developed by M.S. Ramli and B. Kasztenny. However, the radiometric measurement method is limited by its band-limited frequency response and by restricted amplitude determination. Current restrike detection methods and algorithms require the use of wide-bandwidth current transformers and high voltage dividers. A restrike switch model using the Alternative Transients Program (ATP) and Wavelet Transforms, which together support diagnostics, is proposed. Restrike phenomena thereby become the basis of a new diagnostic process for online interrupter monitoring, combining measurements, ATP and Wavelet Transforms. This research project investigates the restrike switch model parameter 'A' (the dielectric voltage gradient) for normal and slowed contact opening velocities, together with the escalation voltages, which can be used as a diagnostic tool for a vacuum circuit-breaker (CB) at service voltages between 11 kV and 63 kV. During current interruption of an inductive load at current quenching or chopping, a transient voltage develops across the contact gap. The dielectric strength of the gap must rise quickly enough to withstand this transient voltage; if it does not, the gap flashes over, resulting in a restrike. A straight line is fitted through the voltage points at flashover of the contact gap, that is, the points at which the gap voltage has reached a value exceeding the dielectric strength of the gap. This research shows that a change in the opening contact velocity of the vacuum CB produces a corresponding change in the slope of the gap escalation voltage envelope. To investigate the diagnostic process, the ATP restrike switch model was extended with contact opening velocity computation for restrike waveform signature analyses, alongside experimental investigations. The work also enhanced a mathematical CB model with an empirical dielectric model for SF6 (sulphur hexafluoride) CBs at service voltages above 63 kV, and a generalised dielectric curve model for 12 kV CBs. A CB restrike can be predicted if the measured and simulated waveforms show similar restrike waveform signatures. The restrike switch model is applied to: computer simulations as virtual experiments, including predicting breaker restrikes; estimating the remaining interrupter life of SF6 puffer CBs; checking system stresses; assessing point-on-wave (POW) operations; and developing a restrike detection algorithm using Wavelet Transforms. A simulated high-frequency nozzle current magnitude was applied to an equation (derived from the literature) that calculates the life extension of the interrupter of an SF6 high voltage CB. The restrike waveform signatures for medium and high voltage CBs identify possible failure mechanisms such as delayed opening, degraded dielectric strength and improper contact travel. The simulated and measured restrike waveform signatures are analysed using Matlab software for automatic detection.
An experimental investigation of a 12 kV vacuum CB was carried out for parameter determination, and a passive antenna calibration was also successfully developed, with applications for field implementation. Degradation features were evaluated with a predictive interpretation technique from the experiments, and the subsequent simulation indicates that the voltage drop associated with a slow opening velocity of the mechanism gives a measure of the degree of contact degradation. A predictive interpretation technique is a computer-modelling approach for assessing switching-device performance that allows one parameter to be varied at a time; this is often difficult to do experimentally because of the variable contact opening velocity. The significance of this thesis is a non-intrusive method, developed using measurements, ATP and Wavelet Transforms, for predicting and interpreting breaker restrike risk. Measurements on high voltage circuit-breakers can identify degradation that could otherwise interrupt the distribution and transmission of an electricity supply system. It is hoped that the techniques for monitoring restrike phenomena developed by this research will form part of a diagnostic process valuable for detecting breaker stresses related to interrupter lifetime. Suggestions for future research, including a field implementation proposal to validate the restrike switch model for ATP system studies and the hot dielectric strength curve model for SF6 CBs, are given in Appendix A.
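As an illustration of the two signal-processing steps described above, the following Python sketch flags high-frequency restrike transients with a wavelet decomposition and fits a straight line through the flashover voltage points to estimate the slope of the escalation voltage envelope. The pywt/numpy toolchain, the db4 wavelet, the threshold factor and all names are illustrative assumptions; the thesis itself used ATP models and Matlab.

```python
# Hedged sketch (not the thesis implementation): wavelet-based restrike
# detection plus escalation-voltage slope estimation.
import numpy as np
import pywt

def detect_restrikes(gap_voltage, fs, wavelet="db4", level=4, k=5.0):
    """Return approximate times (s) of high-frequency transients.

    Restrikes appear as high-frequency bursts superimposed on the
    recovery voltage, so we threshold the finest wavelet detail band.
    The wavelet, level and threshold factor k are assumptions.
    """
    coeffs = pywt.wavedec(gap_voltage, wavelet, level=level)
    d1 = coeffs[-1]                       # finest-scale detail coefficients
    thresh = k * np.median(np.abs(d1))    # robust noise-based threshold
    hits = np.nonzero(np.abs(d1) > thresh)[0]
    # map coefficient indices back to approximate sample times
    return hits * (len(gap_voltage) / len(d1)) / fs

def escalation_slope(times, flashover_voltages):
    """Fit a straight line through the voltages at successive flashovers.

    Per the abstract, a change in contact-opening velocity changes the
    slope of this escalation-voltage envelope, so the slope can serve
    as a diagnostic indicator.
    """
    slope, _intercept = np.polyfit(times, flashover_voltages, 1)
    return slope
```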
Abstract:
Organizations adopt a Supply Chain Management System (SCMS) expecting benefits to the organization and its functions. However, organizations face mounting challenges in realizing benefits through SCMS. Studies suggest growing dissatisfaction among client organizations due to an increasing gap between expected and realized SCMS benefits. Further, echoing Enterprise Systems studies such as Seddon et al. (2010), SCMS benefits are expected to flow to the organization throughout its lifecycle rather than being realized all at once. This research therefore develops a lifecycle-wide understanding of SCMS benefits and their realization in order to derive a benefit expectation management framework for attaining the full potential of an SCMS. The primary research question of this study is: How can client organizations better manage their benefit expectations of SCM systems? The specific research goals are: (1) to better understand the misalignment between received and expected benefits of SCM systems; (2) to identify the key factors influencing SCM system expectations and to develop a framework to manage SCMS benefits; (3) to explore how organizational satisfaction is influenced by the lack of SCMS benefit confirmation; and (4) to explore how to improve the realization of SCM system benefits. Expectation-Confirmation Theory (ECT) provides the theoretical underpinning for this study. ECT has been widely used in the consumer behavior literature to study customer satisfaction, post-purchase behavior and service marketing in general. Recently, ECT has been extended into Information Systems (IS) research, focusing on individual user satisfaction and IS continuance. However, only a handful of studies have employed ECT to study organizational satisfaction with large-scale IS. The current study enriches this research stream by extending ECT to organizational-level analysis and verifying the preliminary findings of relevant works by Staples et al. (2002), Nevo and Chan (2007) and Nevo and Wade (2007). Moreover, this study goes further by operationalizing the constructs of ECT in the context of SCMS. The empirical findings commence with a content analysis through which 41 vendor and academic reports are analyzed, yielding sixty expected benefits of SCMS. These expected benefits are then compared with the benefits realized at a case organization in the Fast Moving Consumer Goods industry sector that had implemented an SAP Supply Chain Management System seven years earlier. The study develops an SCMS Benefit Expectation Management (SCMS-BEM) Framework. The comparison of benefit expectations and confirmations highlights that, while certain benefits are realized early in the lifecycle, others can take almost a decade to materialize. Further analysis and discussion consider how the developed SCMS-BEM Framework informs the application of ECT to SCMS. It is recommended that, when establishing their expectations of an SCMS, clients remember that confirmation of these expectations has a long lifecycle, as shown in the different time periods of the SCMS-BEM Framework. Moreover, the SCMS-BEM Framework allows organizations to maintain high levels of satisfaction through careful mitigation and confirmation of expectations appropriate to each lifecycle phase. In addition, the study reveals that different stakeholder groups have different expectations of the same SCMS.
The perspective of multiple stakeholders has significant implications for the application of ECT in the SCMS context. When forming expectations of the SCMS, the collection of organizational benefits should represent the perceptions of all stakeholder groups, and the same mechanism should be employed when measuring received SCMS benefits. Moreover, for SCMS there exists interdependence of satisfaction among the various stakeholders. The satisfaction of decision-makers or authorized staff is driven not only by their own expectation confirmation level but also by the confirmation level of other stakeholders' expectations in the organization. The satisfaction of any one particular stakeholder group cannot reflect the true satisfaction of the client organization. Furthermore, it is inferred from the SCMS-BEM Framework that organizations should emphasize the viewpoints of operational and management staff when evaluating the benefits of SCMS in the short and medium term, while paying more attention to the perspectives of strategic staff when evaluating the performance of the SCMS in the long term.
Abstract:
Background: Outside the mass spectrometer, proteomics research does not take place in a vacuum. It is affected by policies on funding and research infrastructure, and it both shapes and is shaped by potential clinical applications. It provides new techniques and clinically relevant findings, but the possibilities for such innovations (and thus funders' perception of the field's potential) are also affected by regulatory practices and the readiness of the health sector to incorporate proteomics-related tools and findings. Key to this process is how knowledge is translated. Methods: We present preliminary results from a multi-year social science project, funded by the Canadian Institutes of Health Research, on the processes and motivations for knowledge translation in the health sciences. The proteomics case within this wider study uses qualitative methods to examine the interplay between proteomics science and regulatory and policy makers regarding clinical applications of proteomics. Results: Adopting an interactive format to encourage conference attendees' feedback, our poster focuses on deficits in effective knowledge translation strategies from the laboratory to policy, clinical and regulatory arenas. An analysis of the interviews conducted to date suggests five significant choke points: the changing priorities of funding agencies; the complexity of proteomics research; the organisation of proteomics research; the relationship of proteomics to genomics and other omics sciences; and conflict over the appropriate role of standardisation. Conclusion: We suggest that engagement with such aspects of knowledge translation is crucially important for the eventual clinical application of proteomics science on any meaningful scale.
Abstract:
Characterization of mass transfer properties was carried out in the longitudinal, radial, and tangential directions for four Australian hardwood species: spotted gum, blackbutt, jarrah, and messmate. Measurement of mass transfer properties for these species was necessary to complement current vacuum drying modeling research. Water-vapour diffusivity was determined in steady state using a specific vapometer. Permeability was determined using a specialized device developed to measure over a wide range of permeability values. Permeability values of some species and material directions were extremely low and undetectable by the mass flow meter device; hence, a custom system based on volume evolution was conceived to determine very low, previously unpublished, wood permeability values. Mass diffusivity and permeability were lowest for spotted gum and highest for messmate. Except for messmate in the radial direction, the four species measured were less permeable in all directions than the lowest previously published values, demonstrating the high impermeability of Australian hardwoods and partly accounting for their relatively slow drying rates. The permeability, water-vapour diffusivity, and associated anisotropy ratio data obtained for messmate were extreme or did not follow typical trends; messmate is consequently the most difficult of the four woods to dry in terms of collapse and checking degradation. © The State of Queensland, Department of Agriculture, Fisheries and Forestry, 2012.
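For context, gas permeability is commonly reduced from such flow measurements via Darcy's law, k = Q*mu*L / (A*dP); in a volume-evolution system the flow rate Q is inferred from the rate of volume change. The sketch below applies this textbook relation with placeholder numbers; it is not the custom device's actual data-reduction procedure, which the abstract does not describe.

```python
# Hedged sketch: superficial gas permeability from a steady-state flow
# measurement via Darcy's law. All numbers are placeholders, not data
# from the study.
import math

MU_AIR = 1.85e-5  # dynamic viscosity of air near room temperature, Pa*s

def darcy_permeability(q_m3_s, length_m, area_m2, delta_p_pa, mu=MU_AIR):
    """k = Q * mu * L / (A * dP), returned in m^2."""
    return q_m3_s * mu * length_m / (area_m2 * delta_p_pa)

# Example: a very low flow of 1e-10 m^3/s (as might be inferred from
# volume evolution) through a 20 mm thick, 15 mm diameter specimen
# under a 50 kPa pressure difference.
area = math.pi * (0.015 / 2) ** 2
print(f"k ~ {darcy_permeability(1e-10, 0.020, area, 50e3):.2e} m^2")
```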
Abstract:
Understanding the impacts of traffic and climate change on water quality helps decision makers to develop better policy and plans for dealing with unsustainable urban and transport development. This chapter presents detailed methodologies developed for sample collection and testing for heavy metals and total petroleum hydrocarbons, as part of a research study to investigate the impacts of climate change and changes to urban traffic characteristics on pollutant build-up and wash-off from urban road surfaces. Cadmium, chromium, nickel, copper, lead, iron, aluminium, manganese and zinc were the target heavy metals, and selected gasoline and diesel range organics were the target total petroleum hydrocarbons for this study. The study sites were selected to encompass the urban traffic characteristics of the Gold Coast region, Australia. An improved sample collection method referred to as ‘the wet and dry vacuum system’ for the pollutant build-up, and an effective wash-off plan to incorporate predicted changes to rainfall characteristics due to climate change, were implemented. The novel approach to sample collection for pollutant build-up helped to maintain the integrity of collection efficiency. The wash-off plan helped to incorporate the predicted impacts of climate change in the Gold Coast region. The robust experimental methods developed will help in field sample collection and chemical testing of different stormwater pollutants in build-up and wash-off.
Abstract:
Molecular dynamics simulations were carried out on single-chain models of linear low-density polyethylene in vacuum to study the effects of branch length, branch content, and branch distribution on the polymer's crystalline structure at 300 K. The trans/gauche (t/g) ratios of the backbones of the modeled molecules were calculated and used to characterize their degree of crystallinity. The results show that the t/g ratio decreases with increasing branch content regardless of branch length and branch distribution, indicating that branch content is the key molecular parameter controlling the degree of crystallinity. Although the t/g ratios of models with the same branch content vary, branch length and distribution are of secondary importance. However, our data suggest that branch distribution (regular or random) has a significant effect on the degree of crystallinity for models containing 10 hexyl branches per 1,000 backbone carbons. The fractions of branches residing in the equilibrium crystalline structures of the models were also calculated. On average, 9.8% and 2.5% of the branches were found in the crystallites of the molecules with ethyl and hexyl branches respectively, while 13C NMR experiments showed that the corresponding probabilities of branch inclusion for ethyl and hexyl branches are 10% and 6% [Hosoda et al., Polymer 1990, 31, 1999–2005]. The degree of branch inclusion, however, appears insensitive to branch content and branch distribution.
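As a concrete illustration of the characterization metric, the sketch below computes a trans/gauche ratio from backbone dihedral angles, counting dihedrals near 180° as trans and the rest (clustered near ±60°) as gauche. The dihedral routine, angle window and array layout are generic conventions assumed for illustration, not the authors' analysis code.

```python
# Hedged sketch: trans/gauche ratio from backbone dihedral angles.
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Dihedral angle (degrees) defined by four consecutive backbone carbons."""
    b1, b2, b3 = p1 - p0, p2 - p1, p3 - p2
    n1, n2 = np.cross(b1, b2), np.cross(b2, b3)
    m1 = np.cross(n1, b2 / np.linalg.norm(b2))
    return np.degrees(np.arctan2(np.dot(m1, n2), np.dot(n1, n2)))

def tg_ratio(coords, trans_window=60.0):
    """coords: (N, 3) positions of N backbone carbons along one chain.

    A dihedral within trans_window/2 of 180 degrees counts as trans;
    everything else (clustered near +/-60 degrees) counts as gauche.
    """
    angles = np.array([dihedral(*coords[i:i + 4])
                       for i in range(len(coords) - 3)])
    trans = int(np.sum(np.abs(np.abs(angles) - 180.0) < trans_window / 2))
    gauche = len(angles) - trans
    return trans / max(gauche, 1)  # guard against division by zero
```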
Abstract:
Authigenic illite-smectite and chlorite in reservoir sandstones from several Pacific rim sedimentary basins in Australia and New Zealand have been examined using an Electroscan Environmental Scanning Electron Microscope (ESEM) before, during, and after treatment with fresh water and HCl, respectively. These dynamic experiments are possible in the ESEM because, unlike conventional SEMs that require a high vacuum in the sample chamber (10⁻⁶ torr), the ESEM operates at pressures up to 20 torr. This means that materials and processes can be examined at high magnifications in their natural states, wet or dry, and over a range of temperatures (-20 to 1000 degrees C) and pressures. Sandstones containing the illite-smectite (60-70% illite interlayers) were flushed with fresh water for periods of up to 12 hours. Close examination of the same illite-smectite linings or filled pores, both before and after freshwater treatment, showed that the morphology of the illite-smectite was not changed by prolonged freshwater treatment. Chlorite-bearing sandstones (Fe-rich chlorite) were reacted with 1M to 10M HCl at temperatures of up to 80 degrees C and for periods of up to 48 hours. Before treatment the chlorites showed typically platy morphologies. After HCl treatment the chlorite grains were coated with an amorphous gel composed of Ca, Cl, and possibly amorphous Si, as determined by EDS analyses on the freshly treated rock surface. Brief washing in water removed this surface coating and revealed apparently unchanged chlorite showing no signs of dissolution or acid attack. However, although the chlorite showed no morphological changes, elemental analysis detected only silicon and oxygen.
Abstract:
Many methods exist at present for deformable face fitting. A drawback of nearly all these approaches is that (i) their landmark estimates are noisy, and (ii) the noise is biased across frames (i.e. the misalignment is toward common directions across all frames). In this paper we propose a grouped $\mathcal{L}_1$-norm anchored method for simultaneously aligning an ensemble of deformable face images stemming from the same subject, given noisy heterogeneous landmark estimates. Impressive alignment performance improvement and refinement are obtained using very weak initializations as "anchors".
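For readers unfamiliar with the objective, a grouped $\mathcal{L}_1$ (i.e. l2,1) norm sums the l2 norms of per-landmark residual groups, so a grossly wrong landmark is penalized linearly rather than quadratically. The toy sketch below illustrates only this norm on hypothetical landmark arrays; it is not the paper's alignment algorithm.

```python
# Toy illustration of a grouped L1 (l2,1) norm over landmark residuals;
# shapes and noise levels are hypothetical.
import numpy as np

def grouped_l1(residuals):
    """residuals: (num_frames, num_landmarks, 2) landmark errors.

    Each landmark is a group: take the L2 norm within the group and
    sum the norms, so one bad landmark grows the cost linearly instead
    of quadratically, which is what confers robustness to outliers.
    """
    return np.linalg.norm(residuals, axis=-1).sum()

rng = np.random.default_rng(0)
truth = rng.normal(size=(10, 68, 2))                   # 10 frames, 68 landmarks
noisy = truth + rng.normal(scale=0.3, size=truth.shape)
print(grouped_l1(noisy - truth))                       # ensemble misalignment cost
```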
Abstract:
Different types of HTS joints of Bi-2212/Ag tapes and laminates, fabricated by dip-coating and partial-melt processes, have been investigated. All joints are prepared using green single and laminated tapes, according to the scheme coating-joining-processing. The heat-treated tapes have critical currents (Ic) between 7 and 27 A, depending on tape thickness and the number of Bi-2212 ceramic layers in the laminated tapes. It is found that the current transport properties of joints depend on the type of laminate, joint configuration and joint treatment. Ic losses in joints of Bi-2212 tapes and laminates are attributed to defects in their structure, such as pores, secondary phases and misalignment of Bi-2212 grains near the Ag edges. By optimizing joint configuration, current transmission of up to 100% is achieved for both single and laminated tapes.
Abstract:
The effects of electron irradiation on NiO-containing solid solution systems are described. Partially hydrated NiO solid solutions, e.g., NiO-MgO, undergo surface reduction to Ni metal after examination by TEM. This surface layer results in the formation of Moiré interference patterns.
Abstract:
In his 2007 PESA keynote address, Paul Smeyers discussed the increasing regulation of child-rearing through government intervention and the generation of “experts,” citing particular examples from Europe where cases of childhood obesity and parental neglect have stirred public opinion and political debate. In his paper (this issue), Smeyers touches on a number of tensions before concluding that child rearing qualifies as a practice in which liberal governments should be reluctant to intervene. In response, I draw on recent experiences in Australia and argue that certain tragic events of late are the result of an ethical, moral and social vacuum in which these tensions coalesce. While I agree with Smeyers that governments should be reluctant to “intervene” in the private domain of the family, I argue that there is a difference between intervention and support. In concluding, I maintain that if certain Western liberal democracies did a more comprehensive job of supporting children and their families through active social investment in primary school education, then both families and schools would be better equipped to deal with the challenges they now face.
Abstract:
In the field of face recognition, Sparse Representation (SR) has received considerable attention during the past few years. Most of the relevant literature focuses on holistic descriptors in closed-set identification applications. The underlying assumption in SR-based methods is that each class in the gallery has sufficient samples and the query lies on the subspace spanned by the gallery of the same class. Unfortunately, this assumption is easily violated in the more challenging face verification scenario, where an algorithm is required to determine whether two faces (where one or both have not been seen before) belong to the same person. In this paper, we first discuss why previous attempts with SR might not be applicable to verification problems. We then propose an alternative approach to face verification via SR. Specifically, we propose to use explicit SR encoding on local image patches rather than the entire face. The obtained sparse signals are pooled via averaging to form multiple region descriptors, which are then concatenated to form an overall face descriptor. Due to the deliberate loss of spatial relations within each region (caused by averaging), the resulting descriptor is robust to misalignment and various image deformations. Within the proposed framework, we evaluate several SR encoding techniques: l1-minimisation, Sparse Autoencoder Neural Network (SANN), and an implicit probabilistic technique based on Gaussian Mixture Models. Thorough experiments on the AR, FERET, exYaleB, BANCA and ChokePoint datasets show that the proposed local SR approach obtains considerably better and more robust performance than several previous state-of-the-art holistic SR methods, in both verification and closed-set identification problems. The experiments also show that l1-minimisation based encoding has a considerably higher computational cost than the other techniques, but leads to higher recognition rates.
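A minimal sketch of the local SR descriptor pipeline the abstract describes: sparse-code small patches, average-pool the codes within each region, and concatenate the region descriptors. The region/patch sizes, the l1 solver choice and the scikit-learn toolchain are illustrative assumptions, and the dictionary is assumed to have been learned elsewhere.

```python
# Hedged sketch of a local sparse-representation face descriptor:
# sparse-code patches, average-pool per region, concatenate. Sizes,
# solver and dictionary handling are illustrative assumptions.
import numpy as np
from sklearn.decomposition import sparse_encode
from sklearn.feature_extraction.image import extract_patches_2d

def face_descriptor(face, dictionary, regions=(4, 4), patch=(8, 8)):
    """face: 2-D grayscale array (e.g. 64x64); dictionary: (n_atoms, 64)."""
    rh, rw = face.shape[0] // regions[0], face.shape[1] // regions[1]
    pooled = []
    for i in range(regions[0]):
        for j in range(regions[1]):
            block = face[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw]
            patches = extract_patches_2d(block, patch)
            patches = patches.reshape(len(patches), -1)
            # explicit l1 sparse coding of each local patch
            codes = sparse_encode(patches, dictionary,
                                  algorithm="lasso_lars", alpha=0.1)
            # average pooling deliberately discards spatial layout inside
            # the region, which buys robustness to misalignment
            pooled.append(codes.mean(axis=0))
    return np.concatenate(pooled)  # overall face descriptor
```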
Abstract:
An oriented graphitic nanostructured carbon film has been employed as a conductometric hydrogen gas sensor. The carbon film was energetically deposited using a filtered cathodic vacuum arc with a -75 V bias applied to a stainless steel grid placed 1 cm from the surface of the Si substrate. The substrate was heated to 400°C prior to deposition. Electron microscopy showed evidence that the film consisted largely of vertically oriented graphitic sheets and had a density of 2.06 g/cm3; 76% of the atoms were bonded in sp2 or graphitic configurations. A change in the device resistance of more than 1.5% was exhibited upon exposure to 1% hydrogen gas (in synthetic, zero-humidity air) at 100°C. The time for the sensor resistance to increase by 1.5% under these conditions was approximately 60 s, and the baseline (zero hydrogen exposure) resistance remained constant to within 0.01% during and after the hydrogen exposures.
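As a small illustration of how the quoted response figures are defined, the sketch below extracts the relative resistance change (delta-R over the pre-exposure baseline R0) and the time to cross a 1.5% threshold from a resistance time series. The array names, exposure marker and threshold are assumptions for illustration, not the authors' measurement code.

```python
# Hedged sketch: relative resistance response and threshold-crossing
# time from a measured resistance time series.
import numpy as np

def sensor_response(t, resistance, t_exposure, threshold=0.015):
    """Return (max relative change, time to cross `threshold`).

    t, resistance: 1-D arrays; t_exposure: time hydrogen is introduced;
    threshold=0.015 corresponds to the 1.5% figure quoted above.
    """
    r0 = resistance[t < t_exposure].mean()   # baseline before exposure
    rel = (resistance - r0) / r0             # delta-R / R0
    after = t >= t_exposure
    crossings = t[after][rel[after] >= threshold]
    t_resp = crossings[0] - t_exposure if crossings.size else None
    return rel.max(), t_resp
```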
Abstract:
Pt/anodized TiO2/SiC based metal-oxide-semiconductor (MOS) devices were fabricated and characterized for their sensitivity towards propene (C3H6). Titanium (Ti) thin films were deposited onto the SiC substrates using a filtered cathodic vacuum arc (FCVA) method. A fluoride-ion-containing neutral electrolyte (0.5 wt% NH4F in ethylene glycol) was used to anodize the Ti films. The anodized films were subsequently annealed at 600°C for 4 hours in an oxygen-rich environment to obtain TiO2. The current-voltage (I-V) characteristics of the Pt/TiO2/SiC devices were measured at different concentrations of propene. Exposure to the analyte gas caused a change in the Schottky barrier height and hence a lateral shift in the I-V characteristics. The effective change in the barrier height for 1% propene was calculated as 32.8 meV at 620°C. The dynamic response of the sensors was also investigated, and a voltage shift of 157 mV was measured at 620°C during exposure to 1% propene.
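For context, under standard thermionic-emission theory the barrier-height change of a Schottky diode can be estimated from the ratio of forward currents at a fixed bias before and during gas exposure, delta-phi_B = (kT/q) * ln(I_before / I_during). The sketch below applies this textbook relation with made-up currents; it is not the extraction procedure used in the paper.

```python
# Hedged sketch: Schottky barrier-height change estimated from the
# forward-current ratio at fixed bias, assuming thermionic emission.
# Currents below are made up for illustration.
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def barrier_height_change(i_before, i_during, temp_k):
    """Positive result = barrier increase; negative = barrier lowering."""
    return K_B * temp_k * np.log(i_before / i_during)

# At 620 C (~893 K), a current ratio of ~1.5 corresponds to ~31 meV,
# the same order as the 32.8 meV reported in the abstract.
print(barrier_height_change(1.5e-6, 1.0e-6, 893.15))
```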
Abstract:
Model calculations that include the effects of turbulence during subsequent solar nebula evolution, after the collapse of a cool interstellar cloud, can reconcile some of the apparent differences between physical parameters obtained from theory and the cosmochemical record. Two important aspects of turbulence in a protoplanetary cloud are the growth and transport of solid grains. While the physical effects of the process can be calculated and compared with the probable remains of the nebula formation period, the more subtle effects on primitive grains and their survival in the cosmochemical record cannot be readily evaluated. The environment offered by the Space Station (or Space Shuttle) experimental facility can provide the vacuum and low-gravity conditions, for sufficiently long time periods, required for experimental verification of these cosmochemical models.