Abstract:
Polycrystalline silver is used to catalytically oxidise methanol to formaldehyde. This paper reports the results of extensive investigations using environmental scanning electron microscopy (ESEM) to monitor structural changes in silver under simulated industrial reaction conditions. The interaction of oxygen, nitrogen, and water, either singly or in combination, with a silver catalyst at temperatures up to 973 K resulted in the appearance of a reconstructed silver surface. More spectacular was the effect an oxygen/methanol mixture had on the silver morphology. At a temperature of ca. 713 K, pinholes were created in the vicinity of defects as a consequence of subsurface explosions. These holes gradually increased in size and large platelet features were created. Elevation of the catalyst temperature to 843 K facilitated the wholesale, oxygen-induced restructuring of the entire silver surface. Methanol reacted with subsurface oxygen to produce subsurface hydroxyl species, which ultimately formed water in the subsurface layers of silver. The resultant hydrostatic pressure forced the silver surface to adopt a "hill and valley" conformation in order to minimise the surface free energy. Upon approaching typical industrial operating conditions, widespread explosions occurred on the catalyst, and it was also apparent that the silver surface was extremely mobile under the applied conditions. The interaction of methanol alone with silver resulted in the initial formation of pinholes, primarily in the vicinity of defects, due to reaction with oxygen species incorporated in the catalyst during electrochemical synthesis. However, the hole concentration fell dramatically with time as all the available oxygen was consumed. A remarkable correlation between formaldehyde production and hole concentration was found.
Abstract:
The effect of oxidation and reduction conditions upon the morphology of polycrystalline silver catalysts has been investigated by means of in situ Fourier-transform infrared (FTIR) spectroscopy. Characterization of the sample was achieved by inspection of the νas(COO) band profile of adsorbed formate, recorded after dosing with formic acid at ambient temperature. Evidence was obtained for the existence of a silver surface reconstructed by the presence of subsurface oxygen in addition to the conventional family of Ag(111) and Ag(110) crystal faces. Oxidation at 773 K facilitated the reconstruction of silver planes due to the formation of subsurface oxygen species. Prolonged oxygen treatment at 773 K also caused particle fragmentation as a consequence of excessive oxygen penetration of the silver catalyst at defect sites. It was also deduced that the presence of oxygen in the gas phase stabilized the growth of silver planes which could form stronger bonds with oxygen. In contrast, high-temperature thermal treatment in vacuum induced significant sintering of the silver catalyst. Reduction at 773 K resulted in substantial quantities of dissolved hydrogen (and probably hydroxy species) in the bulk silver structure. Furthermore, enhanced defect formation in the catalyst was also noted, as evidenced by the increased concentration of formate species associated with oxygen-reconstructed silver faces.
Abstract:
Operational modal analysis (OMA) is prevalent in the modal identification of civil structures. It requires response measurements of the underlying structure under ambient loads, and a valid OMA method requires the excitation to be white noise in time and space. Although there are numerous applications of OMA in the literature, few have investigated the statistical distribution of a measurement and the influence of such randomness on modal identification. This research applies a modified kurtosis index to evaluate the statistical distribution of raw measurement data. In addition, a windowing strategy employing this index is proposed to select quality datasets. To demonstrate how the data selection strategy works, ambient vibration measurements of a laboratory bridge model and a real cable-stayed bridge are considered. The analysis incorporated frequency domain decomposition (FDD) as the target OMA approach for modal identification. The modal identification results using data segments with different randomness have been compared. The discrepancy in the FDD spectra of the results indicates that, in order to fulfil the assumptions of an OMA method, special care must be taken in processing a long vibration measurement record. The proposed data selection strategy is easy to apply and is verified to be effective in modal analysis.
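As a rough illustration of the windowing idea described above, the following Python sketch computes a sample excess-kurtosis index over non-overlapping windows of an ambient vibration record and keeps the near-Gaussian segments. The window length, the threshold, and the use of plain excess kurtosis (rather than the paper's exact modified kurtosis definition) are assumptions for illustration only.

    import numpy as np

    def excess_kurtosis(x):
        # Sample excess kurtosis: approximately 0 for a Gaussian (white-noise-like) segment.
        x = np.asarray(x, dtype=float)
        m = x.mean()
        s2 = np.mean((x - m) ** 2)
        return np.mean((x - m) ** 4) / (s2 ** 2) - 3.0

    def select_windows(signal, win_len, threshold=0.5):
        # Keep (start, stop) indices of windows whose |excess kurtosis| is small,
        # i.e. the segments closest to the white-noise assumption underlying OMA.
        selected = []
        for start in range(0, len(signal) - win_len + 1, win_len):
            seg = signal[start:start + win_len]
            if abs(excess_kurtosis(seg)) < threshold:
                selected.append((start, start + win_len))
        return selected

    # Example: 10 minutes of a single channel sampled at 100 Hz (placeholder data).
    fs, duration = 100, 600
    record = np.random.randn(fs * duration)
    windows = select_windows(record, win_len=fs * 60)  # 60 s non-overlapping windows

Segments passing such a test would then be handed to FDD for modal identification.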
Abstract:
Grading osteoarthritic tissue has, until now, been a laboratory process confined to research activities. This thesis establishes a scientific protocol that extends osteoarthritic tissue ranking to surgical practice. The innovative protocol, which now incorporates the structural degeneration of collagen, enhances the traditional Modified Mankin ranking system, enabling its application to real-time decision-making during surgery. Because it is fast and does not require time-consuming laboratory processing, it could potentially enable the cataloguing of tissues in all compartments of diseased joints during surgery, for epidemiological study and insight into the manifestation of osteoarthritis across age, gender, occupation, physical activities and race.
Abstract:
NAPLAN results have gained socio-political prominence and have been used as indicators of educational outcomes for all students, including Indigenous students. Despite the promise of open and in-depth access to NAPLAN data as a vehicle for intervention, we argue that the use of NAPLAN data as a basis for teachers and schools to reduce variance in learning outcomes is insufficient. NAPLAN tests are designed to show statistical variance at the level of the school and the individual, yet do not factor in the sociocultural and cognitive conditions Indigenous students experience when taking the tests. We contend that further understanding of these influences may help teachers understand how to develop their classroom practices to secure better numeracy and literacy outcomes for all students. Empirical research findings demonstrate how teachers can develop their classroom practices from an understanding of the extraneous cognitive load imposed by test taking. We have analysed Indigenous students' experience of solving mathematical test problems to discover evidence of extraneous cognitive load. We have also explored conditions that are more supportive of learning, derived from a classroom intervention which provides an alternative way to both assess and build learning for Indigenous students. We conclude that conditions to support assessment for more equitable learning outcomes require a reduction in cognitive load for Indigenous students while maintaining a high level of expectation and participation in problem solving.
Abstract:
A significant amount of speech data is required to develop a robust speaker verification system, but it is difficult to find enough development speech to match all expected conditions. In this paper we introduce a new approach to Gaussian probabilistic linear discriminant analysis (GPLDA) to estimate reliable model parameters as a linearly weighted model taking more input from the large volume of available telephone data and smaller proportional input from limited microphone data. In comparison to a traditional pooled training approach, where the GPLDA model is trained over both telephone and microphone speech, this linear-weighted GPLDA approach is shown to provide better EER and DCF performance in microphone and mixed conditions in both the NIST 2008 and NIST 2010 evaluation corpora. Based upon these results, we believe that linear-weighted GPLDA will provide a better approach than pooled GPLDA, allowing for the further improvement of GPLDA speaker verification in conditions with limited development data.
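The linear weighting itself can be pictured with a short sketch. Assuming the GPLDA between- and within-speaker covariance matrices have already been estimated separately on telephone and microphone speech, a fixed weight alpha (an illustrative assumption, not the authors' estimation procedure) combines them:

    import numpy as np

    def linear_weighted_gplda(Sb_tel, Sw_tel, Sb_mic, Sw_mic, alpha=0.8):
        # alpha weights the large telephone set; (1 - alpha) the limited microphone set.
        Sb = alpha * Sb_tel + (1.0 - alpha) * Sb_mic   # between-speaker covariance
        Sw = alpha * Sw_tel + (1.0 - alpha) * Sw_mic   # within-speaker covariance
        return Sb, Sw

    # Illustration with random positive semi-definite matrices in a 400-dimensional i-vector space.
    dim = 400
    raw = np.random.randn(4, dim, dim)
    Sb_tel, Sw_tel, Sb_mic, Sw_mic = (m @ m.T for m in raw)
    Sb, Sw = linear_weighted_gplda(Sb_tel, Sw_tel, Sb_mic, Sw_mic, alpha=0.8)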
Abstract:
The reduction of the health literacy concept to a functional relationship with text does not acknowledge the range of information sources that people draw upon in order to make informed decisions about their health and treatment. Drawing on two studies that explored how people live with two different but complex and life-threatening chronic health conditions, chronic kidney disease and HIV, a socio-cultural understanding of the practice of health literacy is described. Health information is experienced by patients as a chronic health condition landscape and develops from three information sources: epistemic, social and corporeal. Participants in both studies used activities that involved orienting, sharing and creating information to map this landscape, which in turn informed their decision-making. These findings challenge traditional conceptions of health literacy and suggest an approach that views the landscape of chronic illness as socially, physically and contextually constructed. This approach necessitates a recasting of health literacy away from a sole interest in skills and towards understanding how information practices facilitate people becoming health literate.
A methodology to develop an urban transport disadvantage framework: the case of Brisbane, Australia
Abstract:
Most individuals travel in order to participate in a network of activities which are important for attaining a good standard of living. Because such activities are commonly widely dispersed rather than located locally, regular access to a vehicle is important to avoid exclusion. However, planning transport system provisions that can engage members of society in an acceptable degree of activity participation remains a great challenge. The main challenges in most cities of the world stem from significant population growth and rapid urbanisation, which produce increased demand for transport. Keeping pace with these challenges in most urban areas is difficult due to the widening gap between the supply of and demand for transport systems, which places the urban population at a transport disadvantage. The key element in mitigating the issue of urban transport disadvantage is to accurately identify the urban transport disadvantaged. Although wide-ranging variables and multi-dimensional methods have been used to identify this group, variables are commonly selected using ad hoc techniques and unsound methods. This raises questions about whether the variables currently used are accurately linked with urban transport disadvantage, and about the effectiveness of current policies. To fill these gaps, the research conducted for this thesis develops an operational urban transport disadvantage framework (UTDAF) based on key statistical urban transport disadvantage variables to accurately identify the urban transport disadvantaged. The methodology combines qualitative and quantitative statistical approaches, and the reliability and applicability of the methodology, rather than the accuracy of the estimations, is the prime concern. Relevant concepts that bear on the identification and measurement of urban transport disadvantage, together with a wide range of urban transport disadvantage variables, were identified through a review of the existing literature. Based on this review, a conceptual urban transport disadvantage framework was developed on the basis of causal theory. Variables identified during the literature review were selected and consolidated based on the recommendations of international and local experts during a Delphi study. The conceptual framework was then statistically assessed to identify key variables. Using the statistical outputs, the key variables were weighted and aggregated to form the UTDAF. Before the variables' weights were finalised, they were adjusted based on the results of a correlation analysis between the elements forming the framework, to improve the framework's accuracy. The UTDAF was then applied to three contextual conditions to determine its effectiveness in identifying urban transport disadvantage. The framework is likely to provide policy makers with a robust measure for justifying infrastructure investments and for generating awareness of the issue of urban transport disadvantage.
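The weighting and aggregation step can be illustrated with a minimal sketch. The indicator names, values, and weights below are hypothetical placeholders, not the weights derived in the thesis; it is assumed that each variable has already been normalised per spatial unit.

    import numpy as np

    # Hypothetical normalised indicators (rows = zones, columns = variables such as
    # car ownership, public transport service frequency, household income, distance to activities).
    indicators = np.array([
        [0.2, 0.8, 0.5, 0.7],
        [0.9, 0.1, 0.3, 0.4],
        [0.6, 0.5, 0.9, 0.2],
    ])

    # Illustrative weights; in the thesis these are derived statistically and then
    # adjusted using the correlation analysis between framework elements.
    weights = np.array([0.35, 0.25, 0.20, 0.20])

    utdaf_score = indicators @ weights          # higher score = greater transport disadvantage
    ranking = np.argsort(utdaf_score)[::-1]     # zones ordered from most to least disadvantaged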
Abstract:
This article examines the conditions of penal hope behind suggestions that the penal expansionism of the last three decades may be at a ‘turning point’. The article proceeds by outlining David Green’s (2013b) suggested catalysts of penal reform and considers how applicable they are in the Australian context. Green’s suggested catalysts are: the cycles and saturation thesis; shifts in the dominant conception of the offender; the global financial crisis (GFC) and budgetary constraints; the drop in crime; the emergence of the prisoner re‐entry movement; apparent shifts in public opinion; the influence of evangelical Christian ideas; and the Right on Crime initiative. The article then considers a number of other possible catalysts or forces: the role of trade unions; the role of courts; the emergence of recidivism as a political issue; the influence of ‘evidence based’/‘what works’ discourse; and the emergence of justice reinvestment (JR). The article concludes with some comments about the capacity of criminology and criminologists to contribute to penal reductionism, offering an optimistic assessment of the prospects for a reflexive criminology that engages in and engenders a wider politics around criminal justice issues.
Abstract:
The ability of the technique of large-amplitude Fourier-transformed (FT) ac voltammetry to facilitate the quantitative evaluation of electrode processes involving electron transfer and catalytically coupled chemical reactions has been evaluated. Predictions derived on the basis of detailed simulations imply that the rate of electron transfer is crucial, as confirmed by studies on the ferrocenemethanol (FcMeOH)-mediated electrocatalytic oxidation of ascorbic acid. Thus, at glassy carbon, gold, and boron-doped diamond electrodes, the introduction of the coupled electrocatalytic reaction, while producing significantly enhanced dc currents, does not affect the ac harmonics. This outcome is as expected if the FcMeOH(0/+) process remains fully reversible in the presence of ascorbic acid. In contrast, the ac harmonic components available from FT ac voltammetry are predicted to be highly sensitive to the homogeneous kinetics when an electrocatalytic reaction is coupled to a quasi-reversible electron-transfer process. The required quasi-reversible scenario is available at an indium tin oxide electrode. Consequently, reversible potential, heterogeneous charge-transfer rate constant, and charge-transfer coefficient values of 0.19 V vs Ag/AgCl, 0.006 cm s⁻¹, and 0.55, respectively, along with a second-order homogeneous chemical rate constant of 2500 M⁻¹ s⁻¹ for the rate-determining step in the catalytic reaction, were determined by comparison of simulated responses with experimental voltammograms derived from the dc and first to fourth ac harmonic components generated at an indium tin oxide electrode. The theoretical concepts derived for large-amplitude FT ac voltammetry are believed to be applicable to a wide range of important solution-based mediated electrocatalytic reactions.
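For readers unfamiliar with the mediated (EC′) catalytic scheme implied here, the following LaTeX sketch spells it out using the parameters reported above; treating ascorbic acid (AA) as the substrate of the homogeneous step is an interpretation of the abstract, not a quotation from the paper.

    % Quasi-reversible electron transfer at the ITO electrode (E step)
    \begin{align*}
      \mathrm{FcMeOH} &\rightleftharpoons \mathrm{FcMeOH}^{+} + e^{-}
        && E^{0} \approx 0.19\ \mathrm{V\ vs.\ Ag/AgCl},\ k^{0} \approx 0.006\ \mathrm{cm\,s^{-1}},\ \alpha \approx 0.55 \\
      % Coupled homogeneous catalytic step (C' step), second order
      \mathrm{FcMeOH}^{+} + \mathrm{AA} &\xrightarrow{\;k_{2}\;} \mathrm{FcMeOH} + \text{products}
        && k_{2} \approx 2500\ \mathrm{M^{-1}\,s^{-1}}
    \end{align*}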
Abstract:
Today, the majority of semiconductor fabrication plants (fabs) conduct equipment preventive maintenance based on statistically derived, time- or wafer-count-based intervals. While these practices have had relative success in managing equipment availability and product yield, the cost, both in time and materials, remains high. Condition-based maintenance has been successfully adopted in several industries, where costs associated with equipment downtime range from potential loss of life to unacceptable effects on companies’ bottom lines. In this paper, we present a method for the monitoring of complex systems in the presence of multiple operating regimes. In addition, the new representation of degradation processes will be used to define an optimization procedure that facilitates concurrent maintenance and operational decision-making in a manufacturing system. This decision-making procedure metaheuristically maximizes a customizable cost function that reflects the benefits of production uptime and the losses incurred due to deficient quality and downtime. The new degradation monitoring method is illustrated through the monitoring of a deposition tool operating over a prolonged period of time in a major fab, while the operational decision-making is demonstrated using simulated operation of a generic cluster tool.
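As a rough sketch of the kind of customisable cost function described above (the terms and rates are illustrative assumptions, not the paper's formulation), a candidate maintenance/operations schedule could be scored as follows; a metaheuristic such as a genetic algorithm or simulated annealing would then search over schedules using this score.

    def operating_profit(uptime_h, downtime_h, scrap_wafers, maintenance_events=0,
                         revenue_per_h=1200.0, downtime_cost_per_h=3000.0,
                         scrap_cost_per_wafer=450.0, maintenance_cost=20000.0):
        # Benefit of production uptime minus losses from downtime, deficient quality,
        # and the cost of performing maintenance itself (all figures hypothetical).
        return (uptime_h * revenue_per_h
                - downtime_h * downtime_cost_per_h
                - scrap_wafers * scrap_cost_per_wafer
                - maintenance_events * maintenance_cost)

    # Comparing a run-to-failure week against one with a condition-triggered maintenance stop.
    baseline = operating_profit(uptime_h=160, downtime_h=8, scrap_wafers=30)
    with_pm = operating_profit(uptime_h=150, downtime_h=4, scrap_wafers=5, maintenance_events=1)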
Abstract:
In order to establish the influence of the drying air characteristics on the drying performance and fluidization quality of bovine intestine for pet food, several drying tests were carried out in a laboratory-scale heat pump assisted fluid bed dryer equipped with a continuous monitoring system. Bovine intestine samples were heat pump fluidized bed dried at atmospheric pressure and at temperatures below and above the material's freezing point. The drying characteristics were investigated over the temperature range −10 to 25 °C and airflow velocities in the range 1.5–2.5 m/s. Some experiments were conducted as single-temperature drying experiments and others as two-stage drying experiments employing two temperatures. An Arrhenius-type equation was used to interpret the influence of the drying air temperature on the effective diffusivity, calculated with the method of slopes in terms of activation energy, which was found to be sensitive to temperature. The effective diffusion coefficient of moisture transfer was determined by the Fickian method assuming uni-dimensional moisture movement, both for moisture removal by evaporation and for combined sublimation and evaporation. Correlations expressing the effective moisture diffusivity as a function of drying temperature are reported. Bovine particles were characterized according to the Geldart classification, and the minimum fluidization velocity was calculated using the Ergun equation and a generalized equation for all drying conditions at the beginning and end of the trials. Walli's model was used to categorize the stability of fluidization at the beginning and end of drying for each trial. The determined Walli's values were positive at the beginning and end of all trials, indicating stable fluidization for each drying condition.
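Two of the relationships referred to above can be sketched briefly in Python: the Arrhenius-type expression for effective moisture diffusivity and the Ergun-based estimate of minimum fluidization velocity. All numerical values (pre-exponential factor, activation energy, particle and gas properties, voidage, sphericity) are illustrative assumptions rather than the fitted results of the study.

    import numpy as np

    R = 8.314  # universal gas constant, J mol^-1 K^-1

    def d_eff_arrhenius(T_kelvin, d0=1.0e-6, ea=35_000.0):
        # Arrhenius-type effective moisture diffusivity: D_eff = D0 * exp(-Ea / (R T)).
        return d0 * np.exp(-ea / (R * T_kelvin))

    def u_mf_ergun(dp, rho_p, rho_g, mu, eps_mf=0.45, phi=0.8, g=9.81):
        # Minimum fluidization velocity from the Ergun equation at incipient fluidization.
        ar = dp**3 * rho_g * (rho_p - rho_g) * g / mu**2          # Archimedes number
        a = 1.75 / (eps_mf**3 * phi)
        b = 150.0 * (1.0 - eps_mf) / (eps_mf**3 * phi**2)
        re_mf = (-b + np.sqrt(b**2 + 4.0 * a * ar)) / (2.0 * a)   # positive root of the quadratic
        return re_mf * mu / (rho_g * dp)

    print(d_eff_arrhenius(T_kelvin=298.15))                          # drying air at 25 degC
    print(u_mf_ergun(dp=5e-3, rho_p=1050.0, rho_g=1.2, mu=1.8e-5))   # illustrative particle/air properties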
Abstract:
Spatial organisation of proteins according to their function plays an important role in the specificity of their molecular interactions. Emerging proteomics methods seek to assign proteins to sub-cellular locations by partial separation of organelles and computational analysis of protein abundance distributions among partially separated fractions. Such methods permit simultaneous analysis of unpurified organelles and promise proteome-wide localisation in scenarios wherein perturbation may prompt dynamic re-distribution. Resolving organelles that display similar behavior during a protocol designed to provide partial enrichment represents a possible shortcoming. We employ the Localisation of Organelle Proteins by Isotope Tagging (LOPIT) organelle proteomics platform to demonstrate that combining information from distinct separations of the same material can improve organelle resolution and assignment of proteins to sub-cellular locations. Two previously published experiments, whose distinct gradients are alone unable to fully resolve six known protein-organelle groupings, are subjected to a rigorous analysis to assess protein-organelle association via a contemporary pattern recognition algorithm. Upon straightforward combination of single-gradient data, we observe significant improvement in protein-organelle association via both a non-linear support vector machine algorithm and partial least-squares discriminant analysis. The outcome yields suggestions for further improvements to present organelle proteomics platforms, and a robust analytical methodology via which to associate proteins with sub-cellular organelles.
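A minimal sketch of the "straightforward combination of single-gradient data" followed by supervised classification, using scikit-learn; the array shapes, random data, and kernel settings are placeholders for illustration, not the published LOPIT analysis pipeline.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Hypothetical abundance distributions: rows = proteins, columns = gradient fractions.
    n_proteins, n_fractions = 500, 8
    rng = np.random.default_rng(0)
    gradient_a = rng.random((n_proteins, n_fractions))   # first separation
    gradient_b = rng.random((n_proteins, n_fractions))   # second separation of the same material
    labels = rng.integers(0, 6, size=n_proteins)         # six known protein-organelle groupings

    # Straightforward combination: concatenate the fraction profiles from the two separations.
    combined = np.hstack([gradient_a, gradient_b])

    clf = SVC(kernel="rbf", C=10.0, gamma="scale")       # non-linear support vector machine
    score_single = cross_val_score(clf, gradient_a, labels, cv=5).mean()
    score_combined = cross_val_score(clf, combined, labels, cv=5).mean()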
Abstract:
The E&P sector can learn much about asset maintenance from the space and satellite industry. Practitioners from both the upstream oil and gas industry and the space and satellite sector have repeatedly noted several striking similarities between the two industries over the years, which have in turn resulted in many direct comparisons in the media and industry press. The similarities between the two industries have even resulted in a modest amount of cross-pollination between the respective supply chains. Because the operating conditions of both industries are so extreme, some oil and gas equipment vendors have occasionally sourced motors and other parts from aerospace contractors. Also, satellites are now being used to assess oil fires, detect subsidence in oil fields, measure oil spills, collect and transmit operational data from oil and gas fields, and monitor the movement of icebergs that might potentially collide with offshore oil and gas installations.
Abstract:
According to a study conducted by the International Maritime Organization (IMO), the shipping sector is responsible for 3.3% of global greenhouse gas (GHG) emissions. The 1997 Kyoto Protocol calls upon states to pursue limitation or reduction of emissions of GHG from marine bunker fuels, working through the IMO. In 2011, 14 years after the adoption of the Kyoto Protocol, the Marine Environment Protection Committee (MEPC) of the IMO adopted mandatory energy efficiency measures for international shipping, which can be regarded as the first mandatory global GHG reduction instrument for an international industry. The MEPC approved an amendment to Annex VI of the 1973 International Convention for the Prevention of Pollution from Ships (MARPOL 73/78) to introduce a mandatory Energy Efficiency Design Index (EEDI) for new ships and the Ship Energy Efficiency Management Plan (SEEMP) for all ships. Considering the growth projections for human population and world trade, the technical and operational measures may not be able to reduce GHG emissions from international shipping to a satisfactory level. Therefore, the IMO is considering introducing market-based mechanisms that may serve two purposes: providing a fiscal incentive for the maritime industry to invest in a more energy-efficient manner, and offsetting growing ship emissions. Some leading developing countries have already voiced serious reservations about the newly adopted IMO regulations, stating that by imposing the same obligation on all countries, irrespective of their economic status, the amendment has rejected the Principle of Common but Differentiated Responsibility (the CBDR Principle), which has always been the cornerstone of international climate change law discourses. They also claimed that negotiations for a market-based mechanism should not continue without a clear commitment from the developed countries to promote technical co-operation and transfer of technology relating to the improvement of the energy efficiency of ships. Against this backdrop, this article explores the challenges for developing countries in the implementation of the already adopted technical and operational measures.