922 results for Traditional enrichment method
Abstract:
The Lockyer Valley in southeast Queensland supports important and intensive irrigation which is dependent on the quality and availability of groundwater. Prolonged drought conditions from ~1997 resulted in a depletion of the alluvial aquifers and concern for the long-term sustainability of this resource. By 2008, many areas of the valley were at < 20% of storage. Some relief occurred with rain events in early 2009; then in December 2010 - January 2011, most of southeast Queensland experienced unprecedented flooding. These storm-based events have caused a shift in research focus from investigations of drought conditions and mitigation to flood response analysis. For the alluvial aquifer system of the valley, a preliminary assessment of groundwater observation bore data, prior to and during the flood, indicates a spatially variable aquifer response. While water levels in some bores screened in unconfined shallow aquifers have recovered by more than 10 m within a short period of time (months), others show only a small or moderate response. Measurements of pre- and post-flood groundwater levels and high-resolution time-series records from data loggers are considered within the framework of a 3D geological model of the Lockyer Valley using the Groundwater Visualisation System (GVS). Groundwater level fluctuations covering both drought and flood periods are used to estimate groundwater recharge using the water table fluctuation (WTF) method, supplemented by estimates derived using chloride mass balance. The presentation of hydraulic and recharge information in a 3D format has considerable advantages over the traditional 2D presentation of data. The 3D approach allows the distillation of multiple types of information (topographic, geological, hydraulic and spatial) into one representation that provides valuable insights into the major controls on groundwater flow and recharge. The influence of aquifer lithology on the spatial variability of groundwater recharge is also demonstrated.
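To make the two recharge estimators concrete, here is a minimal sketch assuming a single recharge event, with an invented bore hydrograph and an assumed specific yield: the WTF method scales the event water-level rise by specific yield, while chloride mass balance scales rainfall by the ratio of rainwater to groundwater chloride concentration.

```python
import numpy as np

def wtf_recharge(levels, specific_yield):
    """Water table fluctuation (WTF) method: event recharge is the
    specific yield times the water-level rise, R = Sy * dh, taking
    the rise from the pre-event level to the peak."""
    levels = np.asarray(levels, dtype=float)
    rise = levels.max() - levels[0]   # water-table rise over the event (m)
    return specific_yield * rise      # recharge depth (m)

def cmb_recharge(precip_mm_per_yr, cl_rain, cl_groundwater):
    """Chloride mass balance: R = P * Cl_rain / Cl_gw, with both
    chloride concentrations in the same units (e.g. mg/L)."""
    return precip_mm_per_yr * cl_rain / cl_groundwater

# Hypothetical bore hydrograph spanning a flood event (m AHD)
levels = [78.2, 78.4, 80.1, 86.5, 88.3, 87.9]
print(wtf_recharge(levels, specific_yield=0.08))              # ~0.81 m
print(cmb_recharge(900.0, cl_rain=3.0, cl_groundwater=60.0))  # 45 mm/yr
```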
Abstract:
Given global demand for new infrastructure, governments face substantial challenges in funding new infrastructure while simultaneously delivering Value for Money (VfM). The paper begins with an update on a key development in a new early/first-order procurement decision-making model that deploys production cost/benefit theory and theories concerning transaction costs from the New Institutional Economics, in order to identify the procurement mode that is likely to deliver the best ratio of production costs and transaction costs to production benefits, and therefore deliver superior VfM relative to alternative procurement modes. In doing so, the new procurement model is also able to address the uncertainty concerning the relative merits of Public-Private Partnerships (PPP) and non-PPP procurement approaches. The main aim of the paper is to develop competition as a dependent variable/proxy for VfM and a hypothesis (overarching proposition), as well as to develop a research method to test the new procurement model. Competition reflects both production costs and benefits (absolute level of competition) and transaction costs (level of realised competition) and is a key proxy for VfM. Using competition as a proxy for VfM, the overarching proposition is given as: when the actual procurement mode matches the predicted (theoretical) procurement mode (informed by the new procurement model), actual competition is expected to match potential competition (based on actual capacity). To collect data to test this proposition, the research method developed in this paper combines a survey and case study approach. More specifically, data collection instruments for the surveys on actual procurement, actual competition and potential competition are outlined. Finally, plans for analysing the survey data are briefly mentioned, along with the planned use of analytical pattern matching in deploying the new procurement model to develop the predicted (theoretical) procurement mode.
Abstract:
The emergence of Twenty20 cricket at the elite level has been marketed on the excitement of the big hitter, where it seems that winning is a result of the muscular batter hitting boundaries at will. This version of the game has captured the imagination of many young players, who all want to score runs with “big hits”. However, in junior cricket, boundary hitting is often more difficult due to the size limitations of children and games played on outfields where the ball does not travel quickly. As a result, winning is often achieved via a less spectacular route – by scoring more singles than your opponents. However, most standard coaching texts only describe how to play boundary-scoring shots (e.g. the drives, pulls, cuts and sweeps) and defensive shots to protect the wicket. Learning to bat appears to have been reduced to extremes of force production, i.e. maximal force production to hit boundaries or minimal force production to stop the ball from hitting the wicket. Initially, this is not a problem because the typical innings of a young player (<12 years) would be based on the concept of “block” or “bash” – they “block” the good balls and “bash” the short balls. This approach works because there are many opportunities to hit boundaries off the numerous inaccurate deliveries of novice bowlers. Most runs are scored behind the wicket by using the pace of the bowler’s delivery to re-direct the ball, because the intrinsic dynamics (i.e. lack of strength) of most children mean that they can only create sufficient power by playing shots where the whole body can contribute to force production. This method works well until the novice player comes up against more accurate bowling and finds they have no way of scoring runs. Once batters begin to face “good” bowlers, they have to learn to score runs via singles. In cricket coaching manuals (e.g. ECB, n.d.), running between the wickets is treated as a separate task to batting, and the “basics” of running, such as how to “back up”, carry the bat, call, turn, and slide the bat into the crease, are “drilled” into players. This task decomposition strategy focussing on techniques is a common approach to skill acquisition in many highly traditional sports, typified in cricket by activities where players hit balls off tees and receive “throw-downs” from coaches. However, the relative usefulness of these approaches in the acquisition of sporting skills is increasingly being questioned (Pinder, Renshaw & Davids, 2009). We will discuss why this is the case in the next section.
Abstract:
Business practices vary from one company to another, and business practices often need to be changed due to changes in business environments. To satisfy different business practices, enterprise systems need to be customized. To keep up with ongoing business practice changes, enterprise systems need to be adapted. Because of rigidity and complexity, the customization and adaptation of enterprise systems often takes excessive time, with potential failures and budget shortfalls. Moreover, enterprise systems often drag business behind because they cannot be rapidly adapted to support business practice changes. Extensive literature has addressed this issue by identifying success or failure factors, implementation approaches, and project management strategies. Those efforts were aimed at learning lessons from post-implementation experiences to help future projects. This research looks into this issue from a different angle. It attempts to address this issue by delivering a systematic method for developing flexible enterprise systems which can be easily tailored for different business practices or rapidly adapted when business practices change. First, this research examines the role of system models in the context of enterprise system development, and the relationship of system models with software programs in the contexts of computer aided software engineering (CASE), model driven architecture (MDA) and workflow management systems (WfMS). Then, by applying the analogical reasoning method, this research initiates a concept of model driven enterprise systems. The novelty of model driven enterprise systems is that it extracts system models from software programs and makes system models able to stay independent of software programs. In the paradigm of model driven enterprise systems, system models act as instructors to guide and control the behavior of software programs. Software programs function by interpreting instructions in system models. This mechanism exposes the opportunity to tailor such a system by changing system models. To make this possible, system models should be represented in a language which can be easily understood by human beings and can also be effectively interpreted by computers. In this research, various semantic representations are investigated to support model driven enterprise systems. The significance of this research is 1) the transplantation of the successful structure for flexibility in modern machines and WfMS to enterprise systems; and 2) the advancement of MDA by extending the role of system models from guiding system development to controlling system behaviors. This research contributes to the area relevant to enterprise systems from three perspectives: 1) a new paradigm of enterprise systems, in which enterprise systems consist of two essential elements: system models and software programs. These two elements are loosely coupled and can exist independently; 2) semantic representations, which can effectively represent business entities, entity relationships, business logic and information processing logic in a semantic manner. Semantic representations are the key enabling techniques of model driven enterprise systems; and 3) a brand new role of system models: traditionally the role of system models is to guide developers in writing system source code. This research promotes the role of system models to control the behaviors of enterprise systems.
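A toy sketch of the paradigm described above, not the authors' implementation: the system model is data held apart from the program, and a generic engine behaves by interpreting it, so tailoring the system means editing the model rather than the code. All names and rules here are hypothetical.

```python
# A hypothetical order-approval model: pure data, kept outside the
# program, describing steps and outcomes rather than hard-coding them.
approval_model = {
    "steps": [
        {"name": "check_credit", "rule": lambda order: order["credit"] >= 600},
        {"name": "check_stock",  "rule": lambda order: order["qty"] <= order["stock"]},
    ],
    "on_pass": "approve",
    "on_fail": "reject",
}

def run(model, order):
    """Generic engine: the program's behavior comes entirely from
    interpreting the model, so changing business practice means
    editing the model, not the software."""
    for step in model["steps"]:
        if not step["rule"](order):
            return model["on_fail"], step["name"]
    return model["on_pass"], None

print(run(approval_model, {"credit": 720, "qty": 5, "stock": 9}))
# -> ('approve', None); tightening the credit rule in the model alone
# changes the system's behavior, with no change to the engine.
```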
Abstract:
In this paper, I show clear links between the theoretical underpinnings of SFL and those of specific sociological, anthropological, and communication research traditions. My purpose in doing so is to argue that SFL is an excellent interdisciplinary research method for the social sciences, especially considering the emergent form of political economy being touted by new media enthusiasts: the so-called knowledge (or information) economy. To demonstrate the flexibility and salience of SFL in diverse traditions of social research, and as evidence of its ability to be deployed as a flexible research method across formerly impermeable disciplinary and social boundaries, I use analyses from my doctoral research, relating these, theoretically speaking, to specific research traditions in sociology, communication, and anthropology.
Abstract:
Green energy is a key factor for green buildings, driving down electricity bills and generating electricity with zero carbon emissions. Climate change and environmental policies are pushing people to use renewable energy for green buildings instead of coal-fired (conventional) energy, which is not environmentally friendly. Solar energy is one such clean energy, reducing environmental impact while lowering electricity fees. A solar energy system collects sunlight with a solar array and stores the energy in batteries, which then provide the necessary electricity to the whole house with zero carbon emissions. However, because the market contains many solar array suppliers, this paper applies the superiority and inferiority ranking (SIR) multi-criteria method with 13 constraints, establishing the I-flow and S-flow matrices to evaluate four alternative solar arrays and determine which alternative is best for providing power to a sustainable building. SIR is a well-known structured multi-criteria decision support approach that is gradually being adopted in construction and building. The outcome of this paper gives users a clear indication for selecting solar energy systems.
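To illustrate how the S-flows and I-flows are formed, here is a minimal sketch under simplifying assumptions: a "true" (step) preference function, criteria where higher scores are always better, and only three invented criteria rather than the paper's thirteen constraints.

```python
import numpy as np

def sir_flows(scores, weights):
    """Superiority (S) and inferiority (I) flows for the SIR method.
    scores[i, j] is the performance of alternative i on criterion j
    (higher is better here, a simplifying assumption); weights are
    the criterion weights. Alternative a is preferred to b on
    criterion j whenever scores[a, j] > scores[b, j]."""
    n = scores.shape[0]
    S, I = np.zeros(n), np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a != b:
                S[a] += weights @ (scores[a] > scores[b])  # a beats b
                I[a] += weights @ (scores[b] > scores[a])  # b beats a
    return S, I

# Hypothetical data: four solar arrays scored on three criteria
scores = np.array([[0.9, 0.6, 0.7],
                   [0.8, 0.8, 0.5],
                   [0.6, 0.9, 0.9],
                   [0.7, 0.5, 0.6]])
weights = np.array([0.5, 0.3, 0.2])
S, I = sir_flows(scores, weights)
print(S - I)  # a crude net flow; the best alternative maximises it
```

Ranking by descending S-flow and ascending I-flow (or, crudely, by the net flow printed above) identifies the preferred alternative.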
Abstract:
This paper describes an effective method for signal authentication and spoofing detection for civilian GNSS receivers using the GPS L1 C/A signal and the Galileo E1-B Safety of Life service. The paper discusses various spoofing attack profiles and how the proposed method is able to detect these attacks. The method is relatively low-cost and can be suitable for numerous mass-market applications. This paper is the subject of a pending patent.
Abstract:
There are many applications in aeronautics where strong couplings exist between disciplines. One practical example is Unmanned Aerial Vehicle (UAV) automation, where there is strong coupling between operational constraints, aerodynamics, vehicle dynamics, mission and path planning. UAV path planning can be done either online or offline. The current state of online path planning optimisation on UAVs with high-performance computation is not at the same level as its ground-based offline counterpart, mainly due to the volume, power and weight limitations on the UAV; some small UAVs do not have the computational power needed for some optimisation and path planning tasks. In this paper, we describe an optimisation method which can be applied to Multi-disciplinary Design Optimisation problems and UAV path planning problems. Hardware-based design optimisation techniques are used. The power and physical limitations of UAVs, which may not be a problem in PC-based solutions, can be addressed by utilising a Field Programmable Gate Array (FPGA) as an algorithm accelerator. The inevitable latency produced by the iterative process of an Evolutionary Algorithm (EA) is concealed by exploiting the parallelism within the dataflow paradigm of the EA on an FPGA architecture. Results compare software PC-based solutions and hardware-based solutions for benchmark mathematical problems as well as a simple real-world engineering problem. Results also indicate the practicality of the method, which can be used for more complex single- and multi-objective coupled problems in aeronautical applications.
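For intuition, a generic software EA on a benchmark objective (not the paper's hardware design): the fitness evaluations inside each generation are mutually independent, and that is the parallelism an FPGA dataflow implementation can exploit to hide the EA's iterative latency.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Benchmark objective f(x) = sum(x_i^2), minimum 0 at the origin."""
    return np.sum(x * x, axis=-1)

def evolve(pop_size=40, dim=8, gens=200, sigma=0.1):
    """Minimal truncation-selection evolutionary loop. The fitness
    evaluations within each generation are independent of one
    another; a dataflow architecture can evaluate them in parallel."""
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(gens):
        fit = sphere(pop)                         # parallelisable step
        parents = pop[np.argsort(fit)[:pop_size // 2]]
        children = parents + rng.normal(scale=sigma, size=parents.shape)
        pop = np.vstack([parents, children])      # elitist replacement
    return pop[np.argmin(sphere(pop))]

print(sphere(evolve()))  # fitness of the best individual found
```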
Abstract:
Lean product design has the potential to reduce overall product development time and cost and can improve the quality of a product. However, little or no work has been carried out to provide an integrated framework of "lean design" and to quantitatively evaluate the effectiveness of lean practices/principles in the product development process. This research proposed an integrated framework for the lean design process and developed a dynamic decision-making tool based on the Methods Time Measurement (MTM) approach for assessing the impact of lean design on the assembly process. The proposed integrated lean framework demonstrates the lean processes to be followed in product design and assembly in order to achieve overall leanness. The decision tool consists of a central database, the lean design guidelines, and MTM analysis. Microsoft Access and C# were used to develop the user interface that employs the MTM analysis as a decision-making tool. The MTM-based dynamic tool is capable of estimating the assembly time and the parts and labour costs of various design alternatives, and hence helps to achieve an optimum design. A case study is conducted to test and validate the functionality of the MTM analysis as well as to verify the lean guidelines proposed for product development.
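A minimal sketch of the MTM calculation behind such a tool, in Python rather than the C# mentioned above, and with invented TMU values (real MTM tables grade each motion by distance, case and difficulty); only the conversion 1 TMU = 0.00001 hour = 0.036 s is standard.

```python
# Illustrative TMU values only; real MTM tables are far more detailed.
TMU_TABLE = {"reach": 10.5, "grasp": 8.7, "move": 11.3,
             "position": 21.8, "release": 2.0}
TMU_TO_SECONDS = 0.036  # by definition, 1 TMU = 0.00001 hour

def assembly_time(motions):
    """Sum the time measurement units for a motion sequence and
    convert to seconds."""
    return sum(TMU_TABLE[m] for m in motions) * TMU_TO_SECONDS

def labour_cost(motions, rate_per_hour=40.0):
    """Labour cost of the sequence at a given hourly rate."""
    return assembly_time(motions) / 3600.0 * rate_per_hour

# Comparing two hypothetical design alternatives for one fastening step
snap_fit = ["reach", "grasp", "move", "position", "release"]
screwed  = snap_fit + ["position", "position"]  # extra alignment motions
print(assembly_time(snap_fit), assembly_time(screwed))  # ~1.95 s vs ~3.52 s
```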
Abstract:
We study Krylov subspace methods for approximating the matrix-function vector product φ(tA)b where φ(z) = [exp(z) - 1]/z. This product arises in the numerical integration of large stiff systems of differential equations by the Exponential Euler Method, where A is the Jacobian matrix of the system. Recently, this method has found application in the simulation of transport phenomena in porous media within mathematical models of wood drying and groundwater flow. We develop an a posteriori upper bound on the Krylov subspace approximation error and provide a new interpretation of a previously published error estimate. This leads to an alternative Krylov approximation to φ(tA)b, the so-called Harmonic Ritz approximant, which we find does not exhibit oscillatory behaviour of the residual error.
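For context, a sketch of the standard Arnoldi-based Krylov approximation of φ(tA)b, i.e. the baseline approximant rather than the Harmonic Ritz variant studied in the paper: project A onto the Krylov subspace, evaluate φ on the small Hessenberg matrix, and lift back. The small dense φ is computed as phi(H) = H^{-1}(e^H - I), which assumes H is nonsingular.

```python
import numpy as np
from scipy.linalg import expm, solve

def phi(H):
    """phi(z) = (exp(z) - 1)/z on a small dense matrix:
    phi(H) = H^{-1} (e^H - I); assumes H is nonsingular."""
    return solve(H, expm(H) - np.eye(H.shape[0]))

def krylov_phi(A, b, t, m):
    """Arnoldi-based approximation phi(t*A) b ~= beta * V_m phi(t*H_m) e_1."""
    n = len(b)
    beta = np.linalg.norm(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    return beta * V[:, :m] @ phi(t * H[:m, :m])[:, 0]

# Stiff test problem: a 1D Laplacian, the kind of Jacobian arising in
# the transport models mentioned above
n = 200
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
b = np.ones(n)
approx = krylov_phi(A, b, t=1.0, m=30)
print(np.linalg.norm(approx - phi(1.0 * A) @ b))  # small approximation error
```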
Abstract:
Cracks are a significant factor in soil slopes that can lead to rainfall-induced slope instability. Cracks at the soil surface decrease the shear strength and increase the hydraulic conductivity of a soil slope. Although previous research has shown the effect of surface cracks on soil stability, the influence of deep cracks on soil stability is still unknown. The limited availability of deep-crack data, due to the difficulty of finding effective investigation methods, could be one of the obstacles. Current electrical resistivity technology can be used to detect deep cracks in soil. This paper discusses deep cracks in unsaturated residual soil slopes in Indonesia investigated using the electrical resistivity method. Field investigations such as borehole and SPT tests were carried out at multiple locations in the area where the electrical resistivity testing had been conducted. Subsequently, the results from the borehole and SPT tests were used to verify the results of the electrical resistivity tests. This study demonstrates the benefits and limitations of electrical resistivity in detecting deep cracks in residual soil slopes.
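For reference, the standard data reduction used in such surveys, here for a Wenner electrode array with equal spacing a: the apparent resistivity is rho_a = 2 * pi * a * (dV / I). The reading below is hypothetical.

```python
import math

def wenner_apparent_resistivity(spacing_m, delta_v, current_a):
    """Apparent resistivity (ohm*m) for a Wenner array with electrode
    spacing a: rho_a = 2 * pi * a * (dV / I)."""
    return 2.0 * math.pi * spacing_m * delta_v / current_a

# Hypothetical field reading at 5 m spacing
print(wenner_apparent_resistivity(spacing_m=5.0, delta_v=0.12,
                                  current_a=0.02))  # ~188.5 ohm*m
```

Zones of anomalously high apparent resistivity at depth (e.g. air-filled voids) are the kind of signature interpreted as deep cracks.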
Abstract:
The uniformization method (also known as randomization) is a numerically stable algorithm for computing transient distributions of a continuous time Markov chain. When the solution is needed after a long run or when the convergence is slow, the uniformization method involves a large number of matrix-vector products. Despite this, the method remains very popular due to its ease of implementation and its reliability in many practical circumstances. Because calculating the matrix-vector product is the most time-consuming part of the method, overall efficiency in solving large-scale problems can be significantly enhanced if the matrix-vector product is made more economical. In this paper, we incorporate a new relaxation strategy into the uniformization method to compute the matrix-vector products only approximately. We analyze the error introduced by these inexact matrix-vector products and discuss strategies for refining the accuracy of the relaxation while reducing the execution cost. Numerical experiments drawn from computer systems and biological systems are given to show that significant computational savings are achieved in practical applications.
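A minimal sketch of the exact (non-relaxed) uniformization iteration, which shows why the vector-matrix product dominates the cost: each Poisson term requires one product with the uniformized matrix P. The relaxation strategy proposed in the paper would replace this exact product with a cheaper approximate one.

```python
import numpy as np

def uniformization(Q, pi0, t, tol=1e-10):
    """Transient distribution pi(t) = pi0 expm(Q t) of a CTMC via
    uniformization: choose q >= max_i |Q_ii|, set P = I + Q/q, then
    pi(t) = sum_k e^{-qt} (qt)^k / k! * pi0 P^k.
    Plain sketch: for very large q*t the leading Poisson weight
    underflows, and left-truncation or scaling would be needed."""
    q = -np.diag(Q).min()
    P = np.eye(Q.shape[0]) + Q / q
    v = np.asarray(pi0, dtype=float)   # v holds pi0 P^k
    weight = np.exp(-q * t)            # Poisson weight for k = 0
    result = weight * v
    k, accumulated = 0, weight
    while 1.0 - accumulated > tol:     # stop once Poisson mass ~ 1
        k += 1
        v = v @ P                      # the dominant cost per term
        weight *= q * t / k
        result += weight * v
        accumulated += weight
    return result

# Small illustrative generator (rows sum to zero)
Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  1.0, -1.0]])
print(uniformization(Q, np.array([1.0, 0.0, 0.0]), t=1.5))
```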