Abstract:
The security of industrial control systems in critical infrastructure is a concern for the Australian government and other nations. There is a need to provide local Australian training and education for both control system engineers and information technology professionals. This paper proposes a postgraduate curriculum of four courses to provide the knowledge and skills needed to protect critical infrastructure industrial control systems. Our curriculum is unique in that it provides not only security awareness but also the advanced skills required of security specialists in this area. We are aware that in the Australian context there is a cultural gap between the thinking of control system engineers, who are responsible for maintaining and designing critical infrastructure, and information technology professionals, who are responsible for protecting these systems from cyber attacks. Our curriculum aims to bridge this gap by providing theoretical and practical exercises that will raise the awareness and preparedness of both groups of professionals.
Abstract:
In this paper we construct earthwork allocation plans for a linear infrastructure road project. Fuel consumption metrics and an innovative block partitioning and modelling approach are applied to reduce costs. 2D and 3D variants of the problem were compared to see what effect, if any, dimensionality has on solution quality. 3D variants were also considered to identify the additional complexities and difficulties that arise. The numerical investigation shows a significant improvement and a reduction in fuel consumption, as theorised. The proposed solutions differ considerably from plans constructed for a distance-based metric, as commonly used in other approaches. Under certain conditions, 3D problem instances can be solved optimally as 2D problems.
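The claim that fuel-based and distance-based metrics yield different plans can be illustrated with a toy allocation. All positions, elevations and the climb penalty below are invented for illustration; the paper's fuel model and block partitioning are far more detailed:

```python
from itertools import permutations

# Hypothetical cut and fill blocks: (position in metres, elevation in metres).
cuts = [(0, 2.0), (300, 10.0)]
fills = [(100, 9.0), (500, 1.0)]

def distance_cost(c, f):
    # Distance-based metric: haul length only.
    return abs(f[0] - c[0])

def fuel_cost(c, f):
    # Illustrative fuel metric: haul length plus a penalty for hauling uphill.
    climb = max(0.0, f[1] - c[1])
    return abs(f[0] - c[0]) + 100.0 * climb

def best_plan(cost):
    # Brute-force one-to-one allocation of cut blocks to fill blocks.
    return min(permutations(fills),
               key=lambda p: sum(cost(c, f) for c, f in zip(cuts, p)))

print(best_plan(distance_cost))  # shortest hauls
print(best_plan(fuel_cost))      # longer hauls, but downhill
```

Here the distance metric sends each cut to its nearest fill, while the fuel metric prefers longer downhill hauls, so the two plans differ, mirroring the effect the abstract reports.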
Abstract:
In March 2008, the Australian Government announced its intention to introduce a national Emissions Trading Scheme (ETS), now expected to start in 2015. This impending development provides an ideal setting to investigate the impact an ETS in Australia will have on the market valuation of Australian Securities Exchange (ASX) firms. This is the first empirical study into the pricing effects of the ETS in Australia. Primarily, we hypothesize that firm value will be negatively related to a firm's carbon intensity profile. That is, there will be a greater impact on firm value for high carbon emitters in the period prior (2007) to the introduction of the ETS, whether for reasons relating to the existence of unbooked liabilities associated with future compliance and/or abatement costs, or for reasons relating to reduced future earnings. Using a sample of 58 Australian listed firms (constrained by the current availability of emissions data) which comprise larger, more profitable and less risky listed Australian firms, we first undertake an event study focusing on five distinct information events argued to impact the probability of the proposed ETS being enacted. Here, we find direct evidence that the capital market is indeed pricing the proposed ETS. Second, using a modified version of the Ohlson (1995) valuation model, we undertake a valuation analysis designed not only to complement the event study results, but more importantly to provide insights into the capital market's assessment of the magnitude of the economic impact of the proposed ETS as reflected in market capitalization. Here, our results show that the market assigns the most carbon-intensive sample firms a market value decrement, relative to other sample firms, of between 7% and 10% of market capitalization. Further, based on the carbon emission profile of the sample firms, we infer a ‘future carbon permit price’ of between AUD$17 per tonne and AUD$26 per tonne of carbon dioxide emitted.
This estimate is more precise than industry reports, which set a carbon price of between AUD$15 and AUD$74 per tonne.
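The implied-price calculation is, in essence, the observed decrement in market value divided by the tonnes of emissions the market is pricing. A sketch with purely hypothetical figures (none of these numbers come from the study):

```python
# Hypothetical figures for illustration only (not the paper's data).
market_cap = 2.0e9          # firm market capitalisation (AUD)
decrement_fraction = 0.085  # value decrement observed for carbon-intensive firms
capitalised_tonnes = 8.0e6  # tonnes of CO2 the market is assumed to be pricing

value_decrement = decrement_fraction * market_cap
implied_permit_price = value_decrement / capitalised_tonnes  # AUD per tonne
print(round(implied_permit_price, 2))  # 21.25, inside the AUD$17-26 range
```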
Abstract:
Despite advances in the field of workflow flexibility, there is still insufficient support for dealing with unforeseen exceptions. In particular, it is challenging to find a solution which preserves the intent of the process as much as possible when such exceptions are encountered. This challenge can be alleviated by making the connection between a process and its objectives more explicit. This paper presents a demo illustrating the blended workflow approach where two specifications are fused together, a "classic" process model and a goal model. End users are guided by the process model but may deviate from this model whenever unexpected situations are encountered. The two models involved provide views on the process and the demo shows how one can switch between these views and how they are kept consistent by the blended workflow engine. A simple example involving the making of a doctor's appointment illustrates the potential advantages of the proposed approach to both researchers and developers.
Abstract:
Thermogravimetric analysis (TG) and powder X-ray diffraction (PXRD) were used to study selected Mg/Al and Zn/Al layered double hydroxides (LDHs) prepared by co-precipitation. A Mg/Al hydrotalcite was investigated before and after reformation in fluoride and nitrate solutions. Little change in the TG or PXRD patterns was observed. It was proposed that successful intercalation of nitrate anions had occurred. However, the absence of any change in the d(003) interlayer spacing suggests that fluoride anions were not intercalated between the LDH layers. Any fluoride anions that were removed from solution are most likely adsorbed onto the outer surfaces of the hydrotalcite. As fluoride removal was not quantified, it is not possible to confirm this without further experimentation. Carbonate is probably intercalated into the interlayer of these hydrotalcites, as well as fluoride or nitrate. The carbonate most likely originates from incomplete decarbonation during thermal activation, adsorption from the atmosphere, or carbonate dissolved in the deionised water. Small and large scale co-precipitation syntheses of a Zn/Al LDH were also investigated to determine if there was any change in the product. While the small scale experiment produced a good quality LDH of reasonable purity, the large scale synthesis resulted in several additional phases. Imprecise measurement and difficulty in handling the large quantities of reagents appeared to be sufficient to alter the reaction conditions, causing a mixture of phases to be formed.
Abstract:
The name apophyllite refers to a specific group of phyllosilicates, a class of minerals that also includes the micas. The apophyllite group comprises minerals of similar chemical makeup that form a solid solution series, with the members apophyllite-(KF), apophyllite-(KOH) and apophyllite-(NaF). Fluorapophyllite, apophyllite-(KF), and hydroxyapophyllite, apophyllite-(KOH), are distinct minerals only because of the difference in the proportions of fluorine and hydroxyl ions. Three apophyllite minerals have been characterised by thermogravimetric analysis and infrared spectroscopy. Dehydration takes place in several steps. Major mass losses occur at around 205–220 °C and at 400–429 °C. Minor mass losses are observed around 242–292 °C. It is proposed that dehydration occurs in the first decomposition step. Water is lost over the temperature ranges 125–250, 250–325 and 325–525 °C, with losses of 4.5, 0.5 and 3.0 mol of water respectively. Water functions as zeolitic water and is also coordinated to the silica surfaces.
Abstract:
A theoretical framework for a construction management decision evaluation system for project selection is presented, based on a literature review. The theory is developed by examining the major factors concerning the project selection decision from a deterministic viewpoint, where the decision-maker is assumed to possess 'perfect knowledge' of all the aspects involved. Four fundamental project characteristics are identified together with three meaningful outcome variables. The relationships within and between these variables are considered together with some possible solution techniques. The theory is next extended to the time-related dynamic aspects of the problem, leading to the implications of imperfect knowledge and a nondeterministic model. A solution technique is proposed in which Gottinger's sequential machines are utilised to model the decision process.
Abstract:
This extended abstract summarizes the state-of-the-art solution to the structuring problem for models that describe existing real world or envisioned processes. Special attention is devoted to models that allow for the true concurrency semantics. Given a model of a process, the structuring problem deals with answering the question of whether there exists another model that describes the process and is solely composed of structured patterns, such as sequence, selection, option for simultaneous execution, and iteration. Methods and techniques for structuring developed by academia as well as products and standards proposed by industry are discussed. Expectations and recommendations on the future advancements of the structuring problem are suggested.
Abstract:
This paper presents an Image Based Visual Servo control design for Fixed Wing Unmanned Aerial Vehicles tracking locally linear infrastructure in the presence of wind using a body fixed imaging sensor. Visual servoing offers improved data collection by posing the tracking task as one of controlling a feature as viewed by the inspection sensor, although it is complicated by the introduction of wind, as aircraft heading and course angle no longer align. In this work it is shown that the effects of wind alter the desired line angle required for continuous tracking, which becomes equal to the wind correction angle that would be calculated to set a desired course. A control solution is then sought by linearizing the interaction matrix about the new feature pose such that the kinematics of the feature can be augmented with the lateral dynamics of the aircraft, from which a state feedback control design is developed. Simulation results are presented comparing no compensation, integral control and the proposed controller using the wind correction angle, followed by an assessment of the response to atmospheric disturbances in the form of turbulence and wind gusts.
Abstract:
The health impacts of exposure to ambient temperature have been drawing increasing attention from the environmental health research community, government, society, industries, and the public. Case-crossover and time series models are most commonly used to examine the effects of ambient temperature on mortality. However, some key methodological issues remain to be addressed. For example, few studies have used spatiotemporal models to assess the effects of spatial temperatures on mortality. Few studies have used a case-crossover design to examine the delayed (distributed lag) and non-linear relationship between temperature and mortality. Also, little evidence is available on the effects of temperature changes on mortality, and on differences in heat-related mortality over time. This thesis aimed to address the following research questions: 1. How can the case-crossover design and distributed lag non-linear models be combined? 2. Is there any significant difference in effect estimates between time series and spatiotemporal models? 3. How can the effects on mortality of temperature changes between neighbouring days be assessed? 4. Is there any change in temperature effects on mortality over time? To combine the case-crossover design and the distributed lag non-linear model, datasets of deaths, weather conditions (minimum temperature, mean temperature, maximum temperature, and relative humidity), and air pollution were acquired from Tianjin, China, for the years 2005 to 2007. I demonstrated how to combine the case-crossover design with a distributed lag non-linear model. This allows the case-crossover design to estimate the non-linear and delayed effects of temperature whilst controlling for seasonality. There was a consistent U-shaped relationship between temperature and mortality. Cold effects were delayed by 3 days, and persisted for 10 days.
Hot effects were acute and lasted for three days, and were followed by mortality displacement for non-accidental, cardiopulmonary, and cardiovascular deaths. Mean temperature was a better predictor of mortality (based on model fit) than maximum or minimum temperature. It is still unclear whether spatiotemporal models using spatial temperature exposure produce better estimates of mortality risk compared with time series models that use a single site's temperature or averaged temperature from a network of sites. Daily mortality data were obtained from 163 locations across Brisbane city, Australia from 2000 to 2004. Ordinary kriging was used to interpolate spatial temperatures across the city based on 19 monitoring sites. A spatiotemporal model was used to examine the impact of spatial temperature on mortality. A time series model was used to assess the effects of a single site's temperature, and of averaged temperature from 3 monitoring sites, on mortality. Squared Pearson scaled residuals were used to check the model fit. The results of this study show that even though spatiotemporal models gave a better model fit than time series models, spatiotemporal and time series models gave similar effect estimates. Time series analyses using temperature recorded from a single monitoring site, or the average temperature of multiple sites, were as good at estimating the association between temperature and mortality as a spatiotemporal model. A time series Poisson regression model was used to estimate the association between temperature change and mortality in summer in Brisbane, Australia during 1996–2004 and Los Angeles, United States during 1987–2000. Temperature change was calculated as the current day's mean temperature minus the previous day's mean.
In Brisbane, a drop of more than 3 °C in temperature between days was associated with relative risks (RRs) of 1.16 (95% confidence interval (CI): 1.02, 1.31) for non-external mortality (NEM), 1.19 (95% CI: 1.00, 1.41) for NEM in females, and 1.44 (95% CI: 1.10, 1.89) for NEM aged 65–74 years. An increase of more than 3 °C was associated with RRs of 1.35 (95% CI: 1.03, 1.77) for cardiovascular mortality and 1.67 (95% CI: 1.15, 2.43) for people aged < 65 years. In Los Angeles, only a drop of more than 3 °C was significantly associated with RRs of 1.13 (95% CI: 1.05, 1.22) for total NEM, 1.25 (95% CI: 1.13, 1.39) for cardiovascular mortality, and 1.25 (95% CI: 1.14, 1.39) for people aged ≥ 75 years. In both cities, there were joint effects of temperature change and mean temperature on NEM. A change in temperature of more than 3 °C, whether positive or negative, has an adverse impact on mortality even after controlling for mean temperature. I examined the variation in the effects of high temperatures on elderly mortality (age ≥ 75 years) by year, city and region for 83 large US cities between 1987 and 2000. High temperature days were defined as two or more consecutive days with temperatures above the 90th percentile for each city during each warm season (May 1 to September 30). The mortality risk for high temperatures was decomposed into: a "main effect" due to high temperatures using a distributed lag non-linear function, and an "added effect" due to consecutive high temperature days. I pooled yearly effects across regions and overall effects at both regional and national levels. The effects of high temperature (both main and added effects) on elderly mortality varied greatly by year, city and region. The years with higher heat-related mortality were often followed by those with relatively lower mortality. Understanding this variability in the effects of high temperatures is important for the development of heat-warning systems.
In conclusion, this thesis makes contributions in several respects. The case-crossover design was combined with a distributed lag non-linear model to assess the effects of temperature on mortality in Tianjin. This allows the case-crossover design to flexibly estimate the non-linear and delayed effects of temperature. Both extreme cold and high temperatures increased the risk of mortality in Tianjin. Time series models using a single site's temperature, or temperature averaged over several sites, can be used to examine the effects of temperature on mortality. Temperature change between neighbouring days, whether a large drop or a large increase, increases the risk of mortality. The effect of high temperature on mortality is highly variable from year to year.
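The temperature-change exposure defined above (current day's mean minus the previous day's mean, with changes beyond 3 °C of interest) can be sketched in a few lines; the daily means below are invented for illustration:

```python
# Daily mean temperatures (degrees C) for a hypothetical week.
means = [24.0, 27.5, 23.9, 24.5, 28.1, 24.6, 25.0]

# Temperature change: current day's mean minus the previous day's mean.
changes = [round(b - a, 1) for a, b in zip(means, means[1:])]

# Days whose change exceeds 3 degrees C in either direction (index 1 = second day).
flagged = [i + 1 for i, d in enumerate(changes) if abs(d) > 3.0]

print(changes)  # [3.5, -3.6, 0.6, 3.6, -3.5, 0.4]
print(flagged)  # days [1, 2, 4, 5]
```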
Abstract:
In this paper, spectral approximations are used to compute the fractional integral and the Caputo derivative. Effective recursive formulae based on the Legendre, Chebyshev and Jacobi polynomials are developed to approximate the fractional integral, and a succinct scheme for approximating the Caputo derivative is also derived. A collocation method is proposed to solve fractional initial value problems and boundary value problems. Numerical examples are provided to illustrate the effectiveness of the derived methods.
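The paper's recursive spectral formulae are not reproduced here, but the object they approximate, the Riemann–Liouville fractional integral I^α f(t) = (1/Γ(α)) ∫₀ᵗ (t−s)^{α−1} f(s) ds, can be illustrated with a standard product-trapezoidal rule (a textbook scheme, not the paper's method):

```python
from math import gamma

def frac_integral(f_vals, h, alpha):
    """Riemann-Liouville fractional integral I^alpha f at t = n*h.
    Product-trapezoidal rule: f is interpolated piecewise linearly
    and the weakly singular kernel (t-s)^(alpha-1) is integrated exactly."""
    n = len(f_vals) - 1
    t = n * h
    total = 0.0
    for j in range(n):
        s0, s1 = j * h, (j + 1) * h
        c1 = (f_vals[j + 1] - f_vals[j]) / h   # slope of f on [s0, s1]
        c0 = f_vals[j] - c1 * s0               # intercept of f on [s0, s1]
        u1, u2 = t - s1, t - s0                # substitution u = t - s
        total += (c0 + c1 * t) * (u2**alpha - u1**alpha) / alpha \
                 - c1 * (u2**(alpha + 1) - u1**(alpha + 1)) / (alpha + 1)
    return total / gamma(alpha)

# Check against the closed form I^alpha t = Gamma(2)/Gamma(2+alpha) * t^(1+alpha).
alpha, n, h = 0.5, 4, 0.25
approx = frac_integral([j * h for j in range(n + 1)], h, alpha)
exact = gamma(2) / gamma(2 + alpha) * 1.0**(1 + alpha)
print(abs(approx - exact) < 1e-10)  # True: the rule is exact for linear f
```

Because the integrand f(s) = s is globally linear, the piecewise-linear interpolation is exact and the rule reproduces the closed form up to roundoff.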
Abstract:
Fractional partial differential equations have been applied to many problems in physics, finance, and engineering. Numerical methods and error estimates for these equations are currently a very active area of research. In this paper we consider a fractional diffusion-wave equation with damping. We derive the analytical solution of the equation using the method of separation of variables. An implicit difference approximation is constructed. Stability and convergence are proved by the energy method. Finally, two numerical examples are presented to show the effectiveness of this approximation.
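The paper's scheme targets the fractional diffusion-wave equation; as a minimal sketch of the same implicit-difference machinery, here is backward Euler for the classical diffusion equation u_t = u_xx, checked against a separation-of-variables solution (this is a classical analogue, not the paper's fractional scheme):

```python
from math import sin, exp, pi

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a = sub-, b = main, c = super-diagonal)."""
    n = len(d)
    bp, cp, dp = b[:], c[:], d[:]
    for i in range(1, n):
        m = a[i] / bp[i - 1]
        bp[i] -= m * cp[i - 1]
        dp[i] -= m * dp[i - 1]
    x = [0.0] * n
    x[-1] = dp[-1] / bp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (dp[i] - cp[i] * x[i + 1]) / bp[i]
    return x

# Backward Euler for u_t = u_xx on (0, pi), u(0)=u(pi)=0, u(x,0)=sin(x).
nx, nt, T = 64, 200, 0.1
h, dt = pi / nx, T / nt
r = dt / h**2
u = [sin(i * h) for i in range(1, nx)]  # interior nodes
a = [-r] * (nx - 1); b = [1 + 2 * r] * (nx - 1); c = [-r] * (nx - 1)
for _ in range(nt):
    u = thomas(a, b, c, u)  # each implicit step solves a tridiagonal system

# Separation of variables gives the exact solution u(x,t) = exp(-t) sin(x).
err = max(abs(u[i - 1] - exp(-T) * sin(i * h)) for i in range(1, nx))
print(err < 1e-3)  # True: first order in dt, second order in h
```

The unconditional stability seen here (r ≈ 0.2, but any r would do) is the property that the paper establishes for its fractional scheme via the energy method.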
Abstract:
A new optimal control model of the interactions between a growing tumour and the host immune system along with an immunotherapy treatment strategy is presented. The model is based on an ordinary differential equation model of interactions between the growing tumour and the natural killer, cytotoxic T lymphocyte and dendritic cells of the host immune system, extended through the addition of a control function representing the application of a dendritic cell treatment to the system. The numerical solution of this model, obtained from a multi-species Runge–Kutta forward-backward sweep scheme, is described. We investigate the effects of varying the maximum allowed amount of dendritic cell vaccine administered to the system and find that control of the tumour cell population is best effected via a high initial vaccine level, followed by reduced treatment and finally cessation of treatment. We also found that increasing the strength of the dendritic cell vaccine causes an increase in the number of natural killer cells and lymphocytes, which in turn reduces the growth of the tumour.
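A forward-backward sweep of the kind described can be sketched on a scalar linear-quadratic toy problem (the problem, Euler integration and step sizes are chosen for illustration; the paper uses a multi-species Runge–Kutta variant on the tumour-immune system):

```python
from math import cosh

# Toy problem: minimise the integral of (x^2 + u^2) over [0,1], x' = u, x(0) = 1.
# Pontryagin gives u = -lambda/2 and the adjoint lambda' = -2x with lambda(1) = 0.
n = 1000
dt = 1.0 / n
u = [0.0] * (n + 1)           # initial control guess
for _ in range(100):
    # Forward sweep: integrate the state with the current control.
    x = [1.0] * (n + 1)
    for i in range(n):
        x[i + 1] = x[i] + dt * u[i]
    # Backward sweep: integrate the adjoint from its terminal condition.
    lam = [0.0] * (n + 1)
    for i in range(n, 0, -1):
        lam[i - 1] = lam[i] + dt * 2 * x[i]
    # Relaxed control update from the optimality condition u = -lambda/2.
    u = [0.5 * u[i] + 0.5 * (-lam[i] / 2) for i in range(n + 1)]

# Analytic optimum of the toy problem: x(t) = cosh(1 - t)/cosh(1).
print(abs(x[-1] - 1.0 / cosh(1.0)) < 1e-2)  # True
```

The relaxation in the control update is a common stabiliser for the sweep; without it, forward-backward iterations can fail to converge on longer horizons.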
Abstract:
A considerable amount of research has proposed optimization-based approaches employing various vibration parameters for structural damage diagnosis. Damage detection by these methods is in effect the result of updating the analytical structural model to match the current physical model. The feasibility of these approaches has been proven, but most of the verification has been done on simple structures, such as beams or plates. Applied to a complex structure, such as a steel truss bridge, a traditional optimization process demands massive computational resources and converges slowly. This study presents a multi-layer genetic algorithm (ML-GA) to overcome this problem. Unlike the tedious convergence process in a conventional damage optimization process, in each layer the proposed algorithm divides the GA's population into groups with a smaller number of damage candidates; the converged population in each group then serves as the initial population of the next layer, where the groups merge into larger groups. In a damage detection process featuring ML-GA, parallel computation can be implemented, enhancing both optimization performance and computational efficiency. To assess the proposed algorithm, the modal strain energy correlation (MSEC) is used as the objective function. Several damage scenarios of a complex steel truss bridge's finite element model are employed to evaluate the effectiveness and performance of ML-GA against a conventional GA. In both single- and multiple-damage scenarios, the analytical and experimental study shows that the MSEC index achieves excellent damage indication and efficiency using the proposed ML-GA, whereas the conventional GA converges only to a local solution.
Abstract:
The objective of this PhD research program is to investigate numerical methods for simulating variably-saturated flow and sea water intrusion in coastal aquifers in a high-performance computing environment. The work is divided into three overlapping tasks: to develop an accurate and stable finite volume discretisation and numerical solution strategy for the variably-saturated flow and salt transport equations; to implement the chosen approach in a high performance computing environment that may have multiple GPUs or CPU cores; and to verify and test the implementation. The geological description of aquifers is often complex, with porous materials possessing highly variable properties that are best described using unstructured meshes. The finite volume method is a popular method for the solution of the conservation laws that describe sea water intrusion, and is well-suited to unstructured meshes. In this work we apply a control volume-finite element (CV-FE) method to an extension of a recently proposed formulation (Kees and Miller, 2002) for variably saturated groundwater flow. The CV-FE method evaluates fluxes at points where material properties and gradients in pressure and concentration are consistently defined, making it both suitable for heterogeneous media and mass conservative. Using the method of lines, the CV-FE discretisation gives a set of differential algebraic equations (DAEs) amenable to solution using higher-order implicit solvers. Heterogeneous computer systems that use a combination of computational hardware such as CPUs and GPUs are attractive for scientific computing due to the potential advantages offered by GPUs for accelerating data-parallel operations. We present a C++ library that implements data-parallel methods on both CPUs and GPUs. The finite volume discretisation is expressed in terms of these data-parallel operations, which gives an efficient implementation of the nonlinear residual function.
This makes the implicit solution of the DAE system possible on the GPU, because the inexact Newton-Krylov method used by the implicit time stepping scheme can approximate the action of a matrix on a vector using residual evaluations. We also propose preconditioning strategies that are amenable to GPU implementation, so that all computationally-intensive aspects of the implicit time stepping scheme are implemented on the GPU. Results are presented that demonstrate the efficiency and accuracy of the proposed numerical methods and formulation. The formulation offers excellent conservation of mass, and higher-order temporal integration increases both the numerical efficiency and the accuracy of the solutions. Flux limiting produces accurate, oscillation-free solutions on coarse meshes, where much finer meshes are required to obtain solutions with equivalent accuracy using upstream weighting. The computational efficiency of the software is investigated using CPUs and GPUs on a high-performance workstation. The GPU version offers considerable speedup over the CPU version, with one GPU giving a speedup factor of 3 over the eight-core CPU implementation.
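The matrix-free Jacobian action at the heart of inexact Newton-Krylov methods approximates J(x)v from two residual evaluations, (F(x + εv) − F(x))/ε, so no Jacobian matrix is ever assembled. A sketch with a toy residual F invented for illustration:

```python
# Toy nonlinear residual, chosen only to demonstrate the technique.
def F(x):
    return [x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0]

def jac_vec(F, x, v, eps=1e-7):
    # Finite-difference directional derivative: J(x) v ~ (F(x+eps*v)-F(x))/eps.
    Fx = F(x)
    Fp = F([xi + eps * vi for xi, vi in zip(x, v)])
    return [(a - b) / eps for a, b in zip(Fp, Fx)]

x, v = [1.5, 0.5], [1.0, -1.0]
approx = jac_vec(F, x, v)
# Analytic Jacobian of F is [[2*x0, 1], [1, 2*x1]], so J v = [2*x0 - 1, 1 - 2*x1].
exact = [2 * x[0] - 1.0, 1.0 - 2 * x[1]]
print(all(abs(a - e) < 1e-5 for a, e in zip(approx, exact)))  # True
```

This is what lets the whole implicit time step run on the GPU: the Krylov solver only ever asks for residual evaluations, which the data-parallel library already provides.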