936 results for Process analysis
Abstract:
Synthesis of fine-particle α-alumina and related oxide materials such as MgAl2O4, CaAl2O4, Y3Al5O12 (YAG), β′-alumina, LaAlO3 and ruby powder has been achieved at low temperatures (500°C) by the combustion of the corresponding metal nitrate-urea mixtures. The solid combustion products have been identified by their characteristic X-ray diffraction patterns. The fine-particle nature of α-alumina and the related oxide materials has been investigated using SEM, TEM, particle size analysis and surface area measurements.
Abstract:
Time series, from a narrow point of view, is a sequence of observations on a stochastic process made at discrete and equally spaced time intervals. Its future behavior can be predicted by identifying, fitting, and confirming a mathematical model. In this paper, time series analysis is applied to problems concerning runway-induced vibrations of an aircraft. A simple mathematical model based on this technique is fitted to obtain the impulse response coefficients of an aircraft system considered as a whole for a particular type of operation. Using this model, the output, which is the aircraft response, can be obtained with less computation time for any runway profile given as the input.
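A minimal sketch of how impulse response coefficients identified from such a model can be convolved with a runway profile to predict the aircraft response; the coefficient values and profile below are synthetic illustrations, not the paper's data:

    import numpy as np

    def predict_response(impulse_coeffs, runway_profile):
        """Discrete convolution of fitted impulse response coefficients with a
        runway elevation profile, truncated to the input length (causal output)."""
        return np.convolve(runway_profile, impulse_coeffs)[:len(runway_profile)]

    # hypothetical identified coefficients and a synthetic runway profile (metres)
    h = np.exp(-0.3 * np.arange(20)) * np.cos(0.8 * np.arange(20))
    profile = np.random.default_rng(0).normal(scale=0.01, size=500)
    response = predict_response(h, profile)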
Abstract:
Crime analysts have traditionally received little guidance from academic researchers in key tasks in the analysis process, specifically the testing of multiple hypotheses and the evaluation of evidence in a scientific fashion. This article attempts to fill this gap by outlining a method (the Analysis of Competing Hypotheses) for systematically analysing multiple explanations for crime problems. The method is systematic, explicit, and avoids many cognitive errors common in analysis. It is argued that the implementation of this approach makes analytic products auditable, makes the reasoning underpinning them transparent, and provides intelligence managers with a rational professional development tool for individual analysts.
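As a rough illustration of the scoring idea behind the Analysis of Competing Hypotheses (the hypotheses, evidence count and ratings below are invented for the example, not drawn from the article):

    # rating per item of evidence: "C" consistent, "I" inconsistent, "N" neutral
    matrix = {
        "H1: series offender active in the area": ["C", "C", "I", "N"],
        "H2: unrelated opportunistic offences":   ["C", "I", "C", "C"],
        "H3: staged incidents / fraud":           ["I", "I", "N", "C"],
    }

    def inconsistency(ratings):
        """ACH favours the hypothesis with the least inconsistent evidence,
        not the one with the most confirming evidence."""
        return sum(r == "I" for r in ratings)

    for hypothesis, ratings in sorted(matrix.items(), key=lambda kv: inconsistency(kv[1])):
        print(inconsistency(ratings), hypothesis)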
Abstract:
A basic lectin (pI approximately 10.0) was purified to homogeneity from the seeds of winged bean (Psophocarpus tetragonolobus) by affinity chromatography on Sepharose 6-aminocaproyl-D-galactosamine. The lectin agglutinated trypsinized rabbit erythrocytes and had a relative molecular mass of 58,000, consisting of two subunits of Mr 29,000. The lectin binds to N-dansylgalactosamine, leading to a 15-fold increase in dansyl fluorescence with a concomitant 25-nm blue shift in the emission maximum. The lectin has two binding sites/dimer for this sugar and an association constant of 4.17 × 10^5 M^-1 at 25°C. The strong binding to N-dansylgalactosamine is due to a relatively positive entropic contribution, as revealed by the thermodynamic parameters ΔH = -33.62 kJ mol^-1 and ΔS° = -5.24 J mol^-1 K^-1. Binding of this sugar to the lectin shows that it can accommodate a large hydrophobic substituent on the C-2 carbon of D-galactose. Studies with other sugars indicate that a hydrophobic substituent in the alpha-conformation at the anomeric position increases the affinity of binding. The C-4 and C-6 hydroxyl groups are critical for sugar binding to this lectin. Lectin difference absorption spectra in the presence of N-acetylgalactosamine indicate perturbation of tryptophan residues on sugar binding. The results of stopped-flow kinetics with N-dansylgalactosamine and the lectin are consistent with a simple one-step mechanism for which k+1 = 1.33 × 10^4 M^-1 s^-1 and k-1 = 3.2 × 10^-2 s^-1 at 25°C. This k-1 is slower than any reported for a lectin-monosaccharide complex so far. The activation parameters indicate an enthalpically controlled association process.
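As a consistency check added here (not part of the original abstract), the reported kinetic and thermodynamic values agree with one another at T ≈ 298 K:

    K_a = \frac{k_{+1}}{k_{-1}} = \frac{1.33 \times 10^{4}\,\mathrm{M^{-1}\,s^{-1}}}{3.2 \times 10^{-2}\,\mathrm{s^{-1}}} \approx 4.2 \times 10^{5}\,\mathrm{M^{-1}}
    \Delta G^{\circ} = \Delta H^{\circ} - T\Delta S^{\circ} = -33.62\,\mathrm{kJ\,mol^{-1}} - (298\,\mathrm{K})(-5.24\,\mathrm{J\,mol^{-1}\,K^{-1}}) \approx -32.1\,\mathrm{kJ\,mol^{-1}}
    -RT\ln K_a = -(8.314\,\mathrm{J\,mol^{-1}\,K^{-1}})(298\,\mathrm{K})\ln(4.17 \times 10^{5}) \approx -32.1\,\mathrm{kJ\,mol^{-1}}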
Abstract:
The aim of this thesis is to develop a fully automatic lameness detection system that operates in a milking robot. The instrumentation, measurement software, algorithms for data analysis and a neural network model for lameness detection were developed. Automatic milking has become a common practice in dairy husbandry, and in the year 2006 about 4000 farms worldwide used over 6000 milking robots. There is a worldwide movement with the objective of fully automating every process from feeding to milking. The increase in automation is a consequence of increasing farm sizes, the demand for more efficient production and the growth of labour costs. As the level of automation increases, the time that the cattle keeper uses for monitoring animals often decreases. This has created a need for systems that automatically monitor the health of farm animals. The popularity of milking robots also offers a new and unique possibility to monitor animals in a single confined space up to four times daily. Lameness is a crucial welfare issue in the modern dairy industry. Limb disorders cause serious welfare, health and economic problems, especially in loose housing of cattle. Lameness causes losses in milk production and leads to early culling of animals. These costs could be reduced with early identification and treatment. At present, only a few methods for automatically detecting lameness have been developed, and the most common methods used for lameness detection and assessment are various visual locomotion scoring systems. The problem with locomotion scoring is that it requires experience to be conducted properly, it is labour intensive as an on-farm method, and the results are subjective. A four-balance system for measuring the leg load distribution of dairy cows during milking, in order to detect lameness, was developed and set up at the University of Helsinki Research farm Suitia. The leg weights of 73 cows were successfully recorded during almost 10,000 robotic milkings over a period of 5 months. The cows were locomotion scored weekly, and the lame cows were inspected clinically for hoof lesions. Unsuccessful measurements, caused by cows standing outside the balances, were removed from the data with a special algorithm, and the mean leg loads and the number of kicks during milking were calculated. In order to develop an expert system to automatically detect lameness cases, a model was needed. A probabilistic neural network (PNN) classifier model was chosen for the task. The data were divided into two parts, and 5,074 measurements from 37 cows were used to train the model. The model was evaluated for its ability to detect lameness in the validation dataset, which contained 4,868 measurements from 36 cows. The model was able to classify 96% of the measurements correctly as sound or lame cows, and 100% of the lameness cases in the validation data were identified. The number of measurements causing false alarms was 1.1%. The developed model has the potential to be used for on-farm decision support and can be used in a real-time lameness monitoring system.
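A minimal sketch of a probabilistic neural network (PNN) classifier of the kind described, using one Gaussian Parzen kernel per training example; the feature layout (four leg loads plus a kick count) and the smoothing parameter are illustrative assumptions, not the thesis implementation:

    import numpy as np

    class PNN:
        """Probabilistic neural network: class score = mean Gaussian kernel
        activation over that class's training examples."""
        def __init__(self, sigma=1.0):
            self.sigma = sigma

        def fit(self, X, y):
            self.X = np.asarray(X, float)
            self.y = np.asarray(y)
            self.classes = np.unique(self.y)
            return self

        def predict(self, X):
            out = []
            for x in np.asarray(X, float):
                k = np.exp(-np.sum((self.X - x) ** 2, axis=1) / (2 * self.sigma ** 2))
                scores = [k[self.y == c].mean() for c in self.classes]
                out.append(self.classes[int(np.argmax(scores))])
            return np.array(out)

    # hypothetical milking records: mean load per leg (4 values) and kick count
    rng = np.random.default_rng(1)
    X_train = rng.normal(size=(200, 5))
    y_train = rng.integers(0, 2, size=200)          # 0 = sound, 1 = lame
    model = PNN(sigma=0.8).fit(X_train, y_train)
    predictions = model.predict(X_train[:10])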
Abstract:
We compare two popular methods for estimating the power spectrum from short data windows, namely the adaptive multivariate autoregressive (AMVAR) method and the multitaper method. By analyzing a simulated signal (embedded in a background Ornstein-Uhlenbeck noise process) we demonstrate that the AMVAR method performs better at detecting short bursts of oscillations compared to the multitaper method. However, both methods are immune to jitter in the temporal location of the signal. We also show that coherence can still be detected in noisy bivariate time series data by the AMVAR method even if the individual power spectra fail to show any peaks. Finally, using data from two monkeys performing a visuomotor pattern discrimination task, we demonstrate that the AMVAR method is better able to determine the termination of the beta oscillations when compared to the multitaper method.
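For reference, a minimal multitaper power-spectrum estimate of the kind compared above, averaging DPSS-tapered periodograms; the sampling rate, taper parameters and test signal are illustrative assumptions, not the paper's analysis settings:

    import numpy as np
    from scipy.signal import windows

    def multitaper_psd(x, fs, NW=3.0, K=5):
        """Average of K DPSS-tapered periodograms (basic multitaper estimate)."""
        x = np.asarray(x, float) - np.mean(x)
        tapers = windows.dpss(len(x), NW, Kmax=K)              # shape (K, N)
        spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
        return np.fft.rfftfreq(len(x), d=1.0 / fs), spectra.mean(axis=0) / fs

    # synthetic example: a short 20 Hz burst embedded in noise
    fs = 200.0
    t = np.arange(0, 1.0, 1.0 / fs)
    x = np.random.default_rng(2).normal(size=t.size)
    x[60:120] += np.sin(2 * np.pi * 20.0 * t[60:120])
    freqs, psd = multitaper_psd(x, fs)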
Abstract:
The research field of Business Process Management (BPM) has gradually developed as a discipline situated within the computer, management and information systems sciences. Its evolution has been shaped by its own conference series, the BPM conference. Still, as with any other academic discipline, debates accrue and persist about the identity, quality and maturity of the BPM field. In this paper, we contribute to the debate on the identity and progress of the BPM conference research community through an analysis of the BPM conference proceedings. We develop an understanding of the signs of progress in research presented at this conference; of where, how and why its papers have had an impact; and of the most appropriate formats for disseminating influential research in the conference. Based on our findings from this analysis, we provide conclusions about the state of the conference series and develop a set of recommendations to further develop the conference community in terms of research maturity, methodological advance, quality, impact and progression.
Abstract:
The Australian hardwood plantation industry is challenged to identify profitable markets for the sale of its wood fibre. The majority of the hardwood plantations already established in Australia have been managed for the production of pulpwood; however, there is interest in identifying more profitable, value-added markets. As a consequence of a predominantly pulpwood-focused management regime, this plantation resource contains a range of qualities and performance. Identifying alternative processing strategies and products that suit young plantation-grown hardwoods has proved challenging, with low product recoveries and/or unmarketable products being the outcome of many studies. Simple spindleless lathe technology was used to process 918 billets from six commercially important Australian hardwood species. The study has demonstrated that the production of rotary-peeled veneer is an effective method for converting plantation hardwood trees. Recovery rates significantly higher than those reported for more traditional processing techniques (e.g., sawmilling) were achieved. Veneer visually graded to industry standards exhibited favourable recoveries suitable for the manufacture of structural products.
Abstract:
In irrigated cropping, as with any other industry, profit and risk are inter-dependent. An increase in profit would normally coincide with an increase in risk, which means that risk can be traded for profit. It is desirable to manage a farm so that it achieves the maximum possible profit for the desired level of risk. This paper identifies risk-efficient cropping strategies that allocate land and water between crop enterprises for a case study of an irrigated farm in Southern Queensland, Australia. This is achieved by applying stochastic frontier analysis to the output of a simulation experiment. The simulation experiment involved changes to the levels of business risk by systematically varying the crop sowing rules in a bioeconomic model of the case study farm. This model utilises the multi-field capability of the process-based Agricultural Production System Simulator (APSIM) and is parameterised using data collected from interviews with a collaborating farmer. We found that sowing rules which increased the farm area sown to cotton caused the greatest increase in risk-efficiency. Increasing maize area also improved risk-efficiency, but to a lesser extent than cotton. Sowing rules that increased the areas sown to wheat reduced the risk-efficiency of the farm business. Sowing rules were identified that had the potential to improve the expected farm profit by ca. $50,000 annually without significantly increasing risk. The concept of the shadow price of risk is discussed, and an expression that quantifies the trade-off between profit and risk is derived from the estimated frontier equation.
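To make the last point concrete, under an illustrative quadratic frontier (an assumed functional form, not the frontier actually estimated in the paper), the shadow price of risk is simply the slope of expected profit with respect to the standard deviation of profit:

    E[\pi] = \beta_0 + \beta_1\,\sigma_\pi + \beta_2\,\sigma_\pi^{2},
    \qquad \text{shadow price of risk} = \frac{\partial E[\pi]}{\partial \sigma_\pi} = \beta_1 + 2\beta_2\,\sigma_\pi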
Abstract:
Miniaturized mass spectrometric ionization techniques for environmental analysis and bioanalysis
Novel miniaturized mass spectrometric ionization techniques based on atmospheric pressure chemical ionization (APCI) and atmospheric pressure photoionization (APPI) were studied and evaluated in the analysis of environmental samples and biosamples. The three analytical systems investigated here were gas chromatography-microchip atmospheric pressure chemical ionization-mass spectrometry (GC-µAPCI-MS) and gas chromatography-microchip atmospheric pressure photoionization-mass spectrometry (GC-µAPPI-MS), where sample pretreatment and chromatographic separation precede ionization, and desorption atmospheric pressure photoionization-mass spectrometry (DAPPI-MS), where the samples are analyzed either as such or after minimal pretreatment. The gas chromatography-microchip atmospheric pressure ionization-mass spectrometry (GC-µAPI-MS) instruments were used in the analysis of polychlorinated biphenyls (PCBs) in negative ion mode and of 2-quinolinone-derived selective androgen receptor modulators (SARMs) in positive ion mode. The analytical characteristics (i.e., limits of detection, linear ranges, and repeatabilities) of the methods were evaluated with PCB standards and with SARMs in urine. All methods showed good analytical characteristics and potential for quantitative environmental analysis or bioanalysis. Desorption and ionization mechanisms in DAPPI were studied. Desorption was found to be a thermal process whose efficiency depends strongly on the thermal conductivity of the sampling surface; the size and polarity of the analyte probably also play a role. In positive ion mode, the ionization depends on the ionization energy and proton affinity of the analyte and the spray solvent, while in negative ion mode the ionization mechanism is determined by the electron affinity and gas-phase acidity of the analyte and the spray solvent. DAPPI-MS was tested in the fast screening analysis of environmental, food, and forensic samples, and the results demonstrated the feasibility of DAPPI-MS for rapid screening analysis of authentic samples.
Abstract:
Business Process Management (BPM) as a research field integrates different perspectives from the disciplines of computer science, management science and information systems research. Its evolution has been shaped by the corresponding conference series, the International Conference on Business Process Management (BPM conference). As in any other academic discipline, there is an ongoing debate about the identity, quality and maturity of the BPM field. In this paper, we review and summarize the major findings of a larger study that will be published in the Business & Information Systems Engineering journal in 2016. In that study, we investigate the identity and progress of the BPM conference research community through an analysis of the BPM conference proceedings. Based on our findings from this analysis, we formulate recommendations to further develop the conference community in terms of methodological advance, quality, impact and progression.
Abstract:
Digital elevation models (DEMs) have been an important topic in geography and the surveying sciences for decades, owing to their geomorphological importance as the reference surface for gravitation-driven material flow as well as their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, errors of the model propagate into the analysis results. The investigation of this phenomenon is known as error propagation analysis, which has a direct influence on decision-making based on interpretations and applications of terrain analysis, and may also have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine toposcale DEMs, which are typically represented on a 5-50 m grid and used at application scales of 1:10 000-1:50 000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of the DEM error, carrying out analytical and simulation-based error propagation analysis, and interpreting the error propagation analysis results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and to morphological gross errors, which are detectable with the presented visualisation methods. In addition, the use of a global characterisation of DEM error is a gross generalisation of reality, because the areas within which the assumption of stationarity is not violated are small in extent. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning and local semivariogram analysis. The error propagation analysis revealed that, as expected, an increase in the DEM vertical error increases the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis, depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this opinion is now challenged because none of the DEM derivatives investigated in the study had maximum variation with spatially uncorrelated random error. Significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution in generating realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
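A minimal sketch of simulation-based error propagation of the kind described, generating spatially autocorrelated DEM error by process convolution (smoothing white noise with a Gaussian kernel) and tracking its effect on slope; the grid, error standard deviation and correlation length are illustrative assumptions, not the values used in the thesis:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def slope(dem, cell=10.0):
        """Slope magnitude (rise over run) from finite differences."""
        gy, gx = np.gradient(dem, cell)
        return np.hypot(gx, gy)

    def correlated_error(shape, sd, corr_cells, rng):
        """Process convolution: smooth white noise, then rescale to the target sd."""
        field = gaussian_filter(rng.normal(size=shape), corr_cells)
        return field * (sd / field.std())

    rng = np.random.default_rng(0)
    dem = gaussian_filter(rng.normal(size=(200, 200)), 15) * 50.0   # synthetic terrain, metres
    realisations = [
        slope(dem + correlated_error(dem.shape, sd=0.5, corr_cells=3.0, rng=rng))
        for _ in range(100)
    ]
    slope_sd = np.std(realisations, axis=0)   # per-cell uncertainty of the slope derivative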
Abstract:
In an earlier paper [1], it has been shown that velocity ratio, defined with reference to the analogous circuit, is a basic parameter in the complete analysis of a linear one-dimensional dynamical system. In this paper it is shown that the terms constituting the velocity ratio can be readily determined by means of an algebraic algorithm developed from a heuristic study of the process of transfer matrix multiplication. The algorithm permits the set of most significant terms at a particular frequency of interest to be identified from a knowledge of the relative magnitudes of the impedances of the constituent elements of a proposed configuration. This feature makes the algorithm a potential tool in a first approach to the rational design of a complex dynamical filter. The algorithm is particularly suited to the desk analysis of a medium-sized system with lumped as well as distributed elements.
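A minimal sketch of the transfer matrix multiplication the algorithm is built around, for a lumped spring-mass chain with a (displacement, force) state vector; the element values and layout are illustrative assumptions, not the system analysed in the paper:

    import numpy as np

    def spring(k):
        """Massless spring of stiffness k: displacement picks up F/k, force is transmitted."""
        return np.array([[1.0, 1.0 / k], [0.0, 1.0]])

    def mass(m, omega):
        """Point mass m at angular frequency omega: force drops by m*omega**2 * displacement."""
        return np.array([[1.0, 0.0], [-m * omega ** 2, 1.0]])

    def chain_matrix(elements, omega):
        """Overall transfer matrix as the ordered product of the element matrices."""
        T = np.eye(2)
        for kind, value in elements:
            T = (mass(value, omega) if kind == "m" else spring(value)) @ T
        return T

    # hypothetical chain: spring-mass-spring-mass, evaluated at 50 rad/s
    T = chain_matrix([("k", 1.0e5), ("m", 2.0), ("k", 5.0e4), ("m", 1.5)], omega=50.0)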
Abstract:
A direct method of preparing cast aluminium alloy-graphite particle composites using uncoated graphite particles is reported. The method consists of introducing and dispersing uncoated but suitably pretreated graphite particles in aluminium alloy melts, and casting the resulting composite melts in suitable permanent moulds. The optimal pretreatment required for the dispersion of the uncoated graphite particles in aluminium alloy melts consists of heating the graphite particles to 400°C in air for 1 h just prior to their dispersion in the melts. The effects of alloying elements such as Si, Cu and Mg on the dispersability of pretreated graphite in molten aluminium are also reported. It was found that additions of about 0.5% Mg or 5% Si significantly improve the dispersability of graphite particles in aluminium alloy melts, as indicated by the high recoveries of graphite in the castings of these composites. It was also possible to disperse up to 3% graphite in LM 13 alloy melts and to retain the graphite particles in a well-distributed fashion in the castings using the pre-heat-treated graphite particles. The observations in this study have been related to the information presently available on wetting between graphite and molten aluminium in the presence of different elements, and to our own thermogravimetric analysis studies on graphite particles. The physical and mechanical properties of the LM 13-3% graphite composite made using pre-heat-treated graphite powder were found to be adequate for many applications, including pistons, which have been successfully used in internal combustion engines.
Abstract:
The aim of this study was to examine the actions of geographically dispersed process stakeholders (doctors, community pharmacists and residential aged care facilities, RACFs) in coping with the information silos that exist within and across different settings. The study setting involved three metropolitan RACFs in Sydney, Australia, and employed a qualitative approach using semi-structured interviews, non-participant observations and artefact analysis. Findings showed that medication information was stored in silos, which required specific actions by each setting to translate this information to fit its local requirements. A salient example of this was the way in which community pharmacists used the RACF medication charts to prepare residents' pharmaceutical records. This translation of medication information across settings was often accompanied by telephone or face-to-face conversations to cross-check, validate or obtain new information. Findings highlighted that technological interventions that work in silos can negatively impact the quality of medication management processes in RACF settings. Commercial software applications such as electronic medication charts need to be appropriately integrated to satisfy the collaborative information requirements of the RACF medication process.