843 results for Transmission of data flow model driven development
Abstract:
The exponential growth of studies on the biological response to ocean acidification over the last few decades has generated a large amount of data. To facilitate data comparison, a data compilation hosted at the data publisher PANGAEA was initiated in 2008 and is updated on a regular basis (doi:10.1594/PANGAEA.149999). By January 2015, a total of 581 data sets (over 4 000 000 data points) from 539 papers had been archived. Here we present the developments of this data compilation five years since its first description by Nisumaa et al. (2010). Most of the study sites from which data have been archived are still in the Northern Hemisphere, and the number of archived data sets from studies in the Southern Hemisphere and polar oceans remains relatively low. Data from 60 studies that investigated the response of a mix of organisms or natural communities were all added after 2010, indicating a welcome shift from the study of individual organisms to communities and ecosystems. The initial imbalance of considerably more data archived on calcification and primary production than on other processes has improved. There is also a clear tendency towards more data archived from multifactorial studies after 2010. For easier and more effective access to ocean acidification data, the ocean acidification community is strongly encouraged to contribute to the data archiving effort, to help develop standard vocabularies describing the variables and to define best practices for archiving ocean acidification data.
Groundwater flow model of the Logan River alluvial aquifer system, Josephville, South East Queensland
Abstract:
The study focuses on an alluvial plain situated within a large meander of the Logan River at Josephville near Beaudesert which supports a factory that processes gelatine. The plant draws water from on-site bores, as well as the Logan River, for its production processes and produces approximately 1.5 ML per day (Douglas Partners, 2004) of waste water containing high levels of dissolved ions. At present a series of treatment ponds are used to aerate the waste water, reducing the level of organic matter; the water is then used to irrigate grazing land around the site. Within the study the hydrogeology is investigated, a conceptual groundwater model is produced and a numerical groundwater flow model is developed from this. On the site are several bores that access groundwater, plus a network of monitoring bores. Assessment of drilling logs shows the area is formed from a mixture of poorly sorted Quaternary alluvial sediments with a laterally continuous aquifer composed of coarse sands and fine gravels that is in contact with the river. This aquifer occurs at a depth of between 11 and 15 metres and is overlain by a heterogeneous mixture of silts, sands and clays. The study investigates the degree of interaction between the river and the groundwater within the fluvially derived sediments for reasons of both environmental monitoring and sustainability of the potential local groundwater resource. A conceptual hydrogeological model of the site proposes two hydrostratigraphic units: a basal aquifer of coarse-grained materials overlain by a thick semi-confining unit of finer materials. From this, a two-layer groundwater flow model and hydraulic conductivity distribution were developed from bore monitoring and rainfall data using MODFLOW (McDonald and Harbaugh, 1988) and PEST (Doherty, 2004) within the GMS 6.5 software (EMSI, 2008). A second model was also considered with the alluvium represented as a single hydrogeological unit. Both models were calibrated to steady state conditions, and sensitivity analyses of the parameters have demonstrated that both models are very stable for changes in the range of ± 10% for all parameters and still reasonably stable for changes up to ± 20%, with RMS errors in the model always less than 10%. The preferred two-layer model was found to give the more realistic representation of the site: water level variations and the numerical modelling showed that the basal layer of coarse sands and fine gravels is hydraulically connected to the river, while the upper layer, comprising a poorly sorted mixture of silt-rich clays and sands of very low permeability, limits infiltration from the surface to the lower layer. The paucity of historical data has limited the numerical modelling to a steady state model based on groundwater levels during a drought period, and forecasts for varying hydrological conditions (e.g. short term as well as prolonged dry and wet conditions) cannot reasonably be made from such a model. If future modelling is to be undertaken, it is necessary to establish a regular program of groundwater monitoring and maintain a long term database of water levels to enable a transient model to be developed at a later stage. This will require a valid monitoring network to be designed, with additional bores required for adequate coverage of the hydrogeological conditions at the Josephville site. Further investigations would also be enhanced by undertaking pump testing to investigate hydrogeological properties of the aquifer.
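The ± 10% / ± 20% stability check described above can be illustrated with a minimal sketch. The head function below is only a toy stand-in for the calibrated MODFLOW/PEST model; the parameter names and values (k_lower, k_upper, recharge) are hypothetical, chosen purely to show how perturbing each parameter and recomputing the RMS error against observed heads works.

```python
import numpy as np

# Hypothetical stand-in for the calibrated two-layer model: returns simulated
# heads at a line of monitoring bores for a given parameter set.
def simulate_heads(params, x=np.linspace(0.0, 500.0, 6)):
    river_stage = 30.0
    gradient = params["recharge"] / params["k_lower"]        # toy head gradient
    damping = np.exp(-x / (1000.0 * params["k_upper"]))      # toy leakage damping
    return river_stage + gradient * x * damping

base = {"k_lower": 25.0, "k_upper": 0.05, "recharge": 0.002}  # illustrative units
observed = simulate_heads(base)                                # stand-in for field heads

def rms_error(params):
    return float(np.sqrt(np.mean((simulate_heads(params) - observed) ** 2)))

# Perturb each parameter by +/-10% and +/-20% and report the RMS error,
# mirroring the stability check described in the abstract.
for name in base:
    for frac in (-0.2, -0.1, 0.1, 0.2):
        trial = dict(base)
        trial[name] *= (1.0 + frac)
        print(f"{name} {frac:+.0%}: RMS = {rms_error(trial):.3f} m")
```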
Abstract:
A Simulink Matlab control system model of a heavy vehicle suspension has been developed. The aim of the exercise presented in this paper was to produce a working model of a heavy vehicle (HV) suspension that could be used for future research. A working computer model is easier and cheaper to re-configure than a HV axle group installed on a truck; it presents less risk should something go wrong and allows more scope for variation and sensitivity analysis before embarking on further "real-world" testing. Empirical data recorded as the input and output signals of a HV suspension were used to develop the parameters for computer simulation of a linear time invariant system described by a second-order differential equation (i.e. a "2nd-order" system). Using the empirical data as an input to the computer model allowed validation of its output against the empirical data. The errors ranged from less than 1% to approximately 3% for any parameter, when comparing like-for-like inputs and outputs. The model is presented along with the results of the validation. This model will be used in future research in the QUT/Main Roads project Heavy vehicle suspensions – testing and analysis, particularly for a theoretical model of a multi-axle HV suspension with varying values of dynamic load sharing. Allowance will need to be made for the errors noted when using the computer models in this future work.
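A minimal sketch of the validation step, assuming a generic second-order transfer function G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2); the natural frequency, damping ratio and signals here are placeholders, not the parameters identified in the study, and the "measured" response is synthesised for the example.

```python
import numpy as np
from scipy import signal

# Placeholder parameters for a generic 2nd-order linear time-invariant system.
wn, zeta = 12.0, 0.3                      # natural frequency (rad/s), damping ratio
sys = signal.TransferFunction([wn**2], [1.0, 2.0 * zeta * wn, wn**2])

# Synthetic "empirical" data: a step input at t = 0.1 s plus measurement noise.
t = np.linspace(0.0, 2.0, 2001)
u = np.where(t >= 0.1, 1.0, 0.0)
_, y_measured, _ = signal.lsim(sys, U=u, T=t)
y_measured = y_measured + np.random.normal(0.0, 0.005, size=t.size)

# Drive the computer model with the same input and compare outputs,
# as is done when validating the simulation against recorded test data.
_, y_model, _ = signal.lsim(sys, U=u, T=t)
peak_error_pct = 100.0 * np.max(np.abs(y_model - y_measured)) / np.max(np.abs(y_measured))
print(f"peak output error: {peak_error_pct:.2f}%")
```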
Abstract:
The performance of International Construction Joint Ventures (ICJVs) is uncertain and dynamic, shaped by many critical factors that make partner relationships more complex when decisions must be made to maintain a cohesive environment. To address this, a generic system dynamics performance model for ICJVs is developed by integrating a number of variables, so that their overall impact on ICJV performance can be assessed and effective decisions made on that basis. To formulate and validate the model both structurally and behaviourally, qualitative and quantitative data were gathered through intensive interviews at two ICJVs in Thailand. Intensive simulations of the model identified three major problems in both ICJVs, related to a negative value gap, low productivity in construction and a high rate of ineffective information sharing. Several policies are suggested, and the integrated application of these policies provides the greatest improvement in ICJV performance.
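As an illustration of the system dynamics approach only, the sketch below integrates a single hypothetical "accumulated value gap" stock with simple inflow/outflow rates and one feedback to productivity; the variable names, equations and rates are invented for the example and are not the model formulated from the Thai ICJV case studies.

```python
# Minimal stock-and-flow simulation (Euler integration), illustrating the
# general mechanics of a system dynamics model; all rates are hypothetical.
dt, horizon = 0.25, 36.0                     # time step and horizon (months)
value_gap = 0.0                              # stock: accumulated value gap
productivity = 1.0                           # auxiliary: relative productivity
info_sharing_effectiveness = 0.6             # policy lever (0..1)

t = 0.0
while t < horizon:
    # Ineffective information sharing widens the value gap; productivity closes it.
    inflow = 0.8 * (1.0 - info_sharing_effectiveness)
    outflow = 0.3 * productivity
    value_gap += dt * (inflow - outflow)
    # Feedback loop: a growing value gap erodes productivity.
    productivity = max(0.2, 1.0 - 0.05 * value_gap)
    t += dt

print(f"value gap after {horizon:.0f} months: {value_gap:.2f}")
```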
Abstract:
The health system is one sector dealing with a deluge of complex data. Many healthcare organisations struggle to utilise these volumes of health data effectively and efficiently, and many still operate stand-alone systems that are not integrated for information management and decision-making. This shows that there is a need for an effective system to capture, collate and distribute health data; implementing the data warehouse concept in healthcare is therefore potentially one solution for integrating health data. Data warehousing has been used to support business intelligence and decision-making in many other sectors, such as engineering, defence and retail. The research problem addressed is: "How can data warehousing assist the decision-making process in healthcare?" To address this problem, the investigation was narrowed to focus on a cardiac surgery unit, using the cardiac surgery unit at the Prince Charles Hospital (TPCH) as the case study. The cardiac surgery unit at TPCH uses a stand-alone database of patient clinical data, which supports clinical audit, service management and research functions. However, much of the time the interaction between the cardiac surgery unit information system and other units is minimal, and there is only limited, basic two-way interaction with other clinical and administrative databases at TPCH that support decision-making processes. The aims of this research are to investigate what decision-making issues are faced by healthcare professionals with the current information systems, and how decision-making might be improved within this healthcare setting by implementing an aligned data warehouse model or models. As part of the research, a suitable data warehouse prototype is proposed and developed based on the cardiac surgery unit's needs, integrating the Intensive Care Unit database, the Clinical Costing unit database (Transition II) and the Quality and Safety unit database [electronic discharge summary (e-DS)], with the goal of improving the current decision-making processes. The main objectives of this research are to improve access to integrated clinical and financial data, providing potentially better information for decision-making. Drawing on the questionnaire results and the literature, the findings indicate a centralised data warehouse model for the cardiac surgery unit at this stage. A centralised data warehouse model addresses current needs and can also be upgraded to an enterprise-wide warehouse model or a federated data warehouse model, as discussed in the many consulted publications. The data warehouse prototype was developed using SAS enterprise data integration studio 4.2 and the data were analysed using SAS enterprise edition 4.3. In the final stage, the data warehouse prototype was evaluated by collecting feedback from the end users, using output created from the prototype as examples of the data desired and possible in a data warehouse environment. According to this feedback, implementation of a data warehouse was seen as a useful tool to inform management options, provide a more complete representation of factors related to a decision scenario and potentially reduce information product development time. However, many constraints exist in this research. For example, there were technical issues such as data incompatibilities and the integration of the cardiac surgery database and e-DS database servers, as well as Queensland Health information restrictions (Queensland Health information-related policies, patient data confidentiality and ethics requirements), limited availability of support from IT technical staff, and time restrictions. These factors influenced the process of warehouse model development, necessitating an incremental approach, and highlight the presence of many practical barriers to data warehousing and integration at the clinical service level. Limitations included the use of a small convenience sample of survey respondents and a single-site case report study design. As mentioned previously, the proposed data warehouse is a prototype and was developed using only four database repositories. Despite this constraint, the research demonstrates that by implementing a data warehouse at the service level, decision-making is supported and data quality issues related to access and availability can be reduced, providing many benefits. Output reports produced from the data warehouse prototype demonstrated usefulness for improving decision-making in the management of clinical services, and for quality and safety monitoring for better clinical care. In the future, the centralised model selected can be upgraded to an enterprise-wide architecture by integrating additional hospital units' databases.
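A minimal sketch of the kind of integration a service-level warehouse performs, merging hypothetical extracts from the cardiac surgery, ICU, clinical costing and e-DS sources into a single patient-episode fact table; all column names and values are illustrative only, not the structures used in the prototype.

```python
import pandas as pd

# Hypothetical extracts from the four source systems (columns are illustrative).
cardiac = pd.DataFrame({"patient_id": [101, 102], "procedure": ["CABG", "AVR"],
                        "surgery_date": ["2011-03-02", "2011-03-05"]})
icu = pd.DataFrame({"patient_id": [101, 102], "icu_hours": [42, 18]})
costing = pd.DataFrame({"patient_id": [101, 102], "episode_cost": [48200.0, 39750.0]})
eds = pd.DataFrame({"patient_id": [101, 102], "discharge_summary_complete": [True, False]})

# Integrate into a single fact table, the core step a centralised warehouse
# would perform on each scheduled load.
fact = (cardiac.merge(icu, on="patient_id")
               .merge(costing, on="patient_id")
               .merge(eds, on="patient_id"))

# Example decision-support query: mean cost and ICU hours by procedure.
print(fact.groupby("procedure")[["episode_cost", "icu_hours"]].mean())
```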
Abstract:
The common presupposition of Enterprise Systems (ES) is that they lead to significant efficiency gains. However, this is only the case for well-implemented ES that meet organisational requirements. The list of major ES implementation failures is as long as the list of success stories. We argue here that this arises from a more fundamental problem: the functionalist approach to ES development and provision. As long as vendors continue to develop generic, difficult-to-adapt ES packages, this problem will prevail, because organisations have a non-generic character. A solution to this problem can only lie in rethinking the way ES packages are provided. We propose a strict abstraction layer of ES functionalities and their representation as conceptual models; ES vendors must then provide sufficient means for configuring these conceptual models. In this paper we discuss the generic situations that can occur during process model configuration, in order to understand process model configuration in depth.
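A toy sketch of what configuring a conceptual process model can look like, loosely in the spirit of configurable process models where each activity carries an ON/OFF/OPT setting; the activity names and configuration values are invented for illustration and are not taken from the paper.

```python
# Toy configurable process model: each activity carries a configuration setting.
# "ON" keeps the activity, "OFF" removes it, "OPT" defers the choice to runtime.
reference_model = [
    ("check_credit", "ON"),
    ("export_customs_declaration", "OFF"),   # e.g. not needed for domestic orders
    ("express_shipping", "OPT"),
    ("invoice_customer", "ON"),
]

def configure(model):
    """Derive an organisation-specific variant from the generic reference model."""
    variant = []
    for activity, setting in model:
        if setting == "ON":
            variant.append(activity)
        elif setting == "OPT":
            variant.append(f"{activity} (runtime choice)")
        # "OFF" activities are dropped from the variant.
    return variant

print(configure(reference_model))
```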
Abstract:
This thesis studies the water resources of the Laidley Creek catchment within the Lockyer Valley, where groundwater is used for intensive irrigation of crops. A holistic approach was used to consider groundwater within the total water cycle. The project mapped the geology, measured stream flows and groundwater levels, and analysed the chemistry of the waters. These data were integrated within a catchment-wide conceptual model, together with historical records and rainfall data. From this, a numerical simulation was produced to test data validity and develop predictions of behaviour, which can support management decisions, particularly in times of variable climate.
Abstract:
Multivariate predictive models are widely used tools for the assessment of aquatic ecosystem health, and models have been successfully developed for the prediction and assessment of aquatic macroinvertebrates, diatoms, local stream habitat features and fish. We evaluated the ability of a modelling method based on the River InVertebrate Prediction and Classification System (RIVPACS) to accurately predict freshwater fish assemblage composition and assess aquatic ecosystem health in rivers and streams of south-eastern Queensland, Australia. The predictive model was developed, validated and tested in a region of comparatively high environmental variability due to the unpredictable nature of rainfall and river discharge. The model was found to provide sufficiently accurate and precise predictions of species composition and was sensitive enough to distinguish test sites impacted by several common types of human disturbance (particularly impacts associated with catchment land use and associated local riparian, in-stream habitat and water quality degradation). The total number of fish species available for prediction was low in comparison with similar applications of multivariate predictive models based on other indicator groups, yet the accuracy and precision of our model was comparable to outcomes from such studies. In addition, our model, developed for sites sampled on one occasion and in one season only (winter), was able to accurately predict fish assemblage composition at sites sampled during other seasons and years, provided that they were not subject to unusually extreme environmental conditions (e.g. extended periods of low flow that restricted fish movement or resulted in habitat desiccation and local fish extinctions).
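A minimal sketch of the observed-to-expected (O/E) index that RIVPACS-style assessments use, under the usual convention of restricting the calculation to taxa with modelled occurrence probability of at least 0.5; the probabilities below are made up for illustration and are not outputs of the south-east Queensland fish model.

```python
# RIVPACS-style assessment: expected richness E is the sum of modelled occurrence
# probabilities; O is the number of those taxa actually collected at the test site.
predicted_probs = {            # hypothetical model output for one test site
    "Melanotaenia duboulayi": 0.85,
    "Retropinna semoni": 0.70,
    "Hypseleotris galii": 0.55,
    "Tandanus tandanus": 0.40,
}
observed_species = {"Melanotaenia duboulayi", "Retropinna semoni"}

# Restrict to taxa with probability >= 0.5, a common RIVPACS convention.
candidates = {sp: p for sp, p in predicted_probs.items() if p >= 0.5}
expected = sum(candidates.values())
observed = sum(1 for sp in candidates if sp in observed_species)

# O/E near 1 indicates reference-like condition; values well below 1 indicate impairment.
print(f"O = {observed}, E = {expected:.2f}, O/E = {observed / expected:.2f}")
```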
Abstract:
Objective: The aim of this study was to develop a model capable of predicting variability in the mental workload experienced by frontline operators under routine and nonroutine conditions. Background: Excess workload is a risk that needs to be managed in safety-critical industries. Predictive models are needed to manage this risk effectively yet are difficult to develop. Much of the difficulty stems from the fact that workload prediction is a multilevel problem. Method: A multilevel workload model was developed in Study 1 with data collected from an en route air traffic management center. Dynamic density metrics were used to predict variability in workload within and between work units while controlling for variability among raters. The model was cross-validated in Studies 2 and 3 with the use of a high-fidelity simulator. Results: Reported workload generally remained within the bounds of the 90% prediction interval in Studies 2 and 3. Workload crossed the upper bound of the prediction interval only under nonroutine conditions. Qualitative analyses suggest that nonroutine events caused workload to cross the upper bound of the prediction interval because the controllers could not manage their workload strategically. Conclusion: The model performed well under both routine and nonroutine conditions and over different patterns of workload variation. Application: Workload prediction models can be used to support both strategic and tactical workload management. Strategic uses include the analysis of historical and projected workflows and the assessment of staffing needs. Tactical uses include the dynamic reallocation of resources to meet changes in demand.
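A minimal sketch of fitting a multilevel (mixed-effects) model of workload on a traffic-density metric, with work units as the grouping factor, and checking a new observation against a simple 90% band; the data are simulated, and the interval construction here is a rough approximation for illustration, not the procedure used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated data: workload ratings nested within air-traffic sectors (work units).
n_sectors, n_obs = 8, 40
rows = []
for sector in range(n_sectors):
    sector_offset = rng.normal(0.0, 0.5)
    density = rng.uniform(0.0, 10.0, n_obs)          # dynamic density metric
    workload = 2.0 + 0.6 * density + sector_offset + rng.normal(0.0, 0.8, n_obs)
    rows.append(pd.DataFrame({"sector": sector, "density": density, "workload": workload}))
data = pd.concat(rows, ignore_index=True)

# Multilevel model: fixed effect of density, random intercept per sector.
fit = smf.mixedlm("workload ~ density", data, groups=data["sector"]).fit()
print(fit.params)

# Rough 90% band around the fixed-effect prediction (z = 1.645), combining
# between-sector and residual variance.
total_sd = np.sqrt(fit.scale + float(fit.cov_re.iloc[0, 0]))
new_density = 7.5
pred = fit.params["Intercept"] + fit.params["density"] * new_density
low, high = pred - 1.645 * total_sd, pred + 1.645 * total_sd
print(f"predicted workload {pred:.2f}, 90% interval [{low:.2f}, {high:.2f}]")
```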
Abstract:
The EEG time series has been subjected to various formalisms of analysis to extract meaningful information regarding the underlying neural events. In this paper the linear prediction (LP) method has been used for the analysis and presentation of spectral array data for better visualisation of background EEG activity. It has also been used for signal generation, efficient data storage and transmission of EEG. The LP method is compared with the standard Fourier method of compressed spectral array (CSA) of multichannel EEG data. The autocorrelation method of autoregressive (AR) modelling is used for obtaining the LP coefficients with a model order of 15. While the Fourier method reduces the data only by half, the LP method requires the storage of only the signal variance and the LP coefficients. The signal generated using white Gaussian noise as the input to the LP filter has a high correlation coefficient of 0.97 with the original signal, thus making LP a useful tool for the storage and transmission of EEG. The biological significance of the Fourier method and the LP method with respect to the microstructure of neuronal events in the generation of EEG is discussed.
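A minimal sketch of the LP workflow described above, assuming the Yule-Walker (autocorrelation) method with model order 15 and resynthesis by filtering white Gaussian noise through the all-pole LP filter; the synthetic "EEG" here is just a noisy 10 Hz oscillation, not real data, and the comparison shown is spectral rather than the sample-level correlation reported in the paper.

```python
import numpy as np
from scipy import signal
from statsmodels.regression.linear_model import yule_walker

rng = np.random.default_rng(1)
fs = 256.0                                            # sampling rate (Hz)
t = np.arange(0, 4.0, 1.0 / fs)

# Synthetic stand-in for one EEG channel: 10 Hz alpha-like rhythm plus noise.
x = np.sin(2 * np.pi * 10.0 * t) + 0.5 * rng.standard_normal(t.size)

# Autocorrelation (Yule-Walker) AR fit with model order 15: only the 15 LP
# coefficients and the residual variance need to be stored.
order = 15
ar_coefs, noise_sd = yule_walker(x, order=order, method="mle")

# Resynthesise by driving the all-pole LP filter 1/A(z) with white Gaussian noise.
a = np.concatenate(([1.0], -ar_coefs))                # A(z) denominator polynomial
regenerated = signal.lfilter([noise_sd], a, rng.standard_normal(t.size))

# Compare the spectra of the original and LP-regenerated signals.
f, p_orig = signal.welch(x, fs=fs, nperseg=256)
_, p_regen = signal.welch(regenerated, fs=fs, nperseg=256)
print(f"log-spectral correlation: {np.corrcoef(np.log(p_orig), np.log(p_regen))[0, 1]:.2f}")
```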
Abstract:
The development of the flow of a granular material down an inclined plane starting from rest is studied as a function of the base roughness. In the simulations, the particles are rough frictional spheres interacting via the Hertz contact law. The rough base is made of a random configuration of fixed spheres with a diameter different from that of the flowing particles, and the base roughness is decreased by decreasing the diameter of the base particles. The transition from an ordered to a disordered flowing state at a critical value of the base particle diameter, first reported by Kumaran and Maheshwari [Phys. Fluids 24, 053302 (2012)] for particles with the linear contact model, is observed for the Hertzian contact model as well. The flow development for the ordered and disordered flows is very different. During the development of the disordered flow for the rougher base, there is shearing throughout the height. During the development of the ordered flow for the smoother base, there is a shear layer at the bottom and a plug region with no internal shearing above. In the shear layer, the particles are layered and hexagonally ordered in the plane parallel to the base, and the velocity profile is well approximated by the Bagnold law. The flow develops in two phases. In the first phase, the thickness of the shear layer and the maximum velocity increase linearly in time until the shear front reaches the top. In the second phase, after the shear layer encompasses the entire flow, there is a much slower increase in the maximum velocity until the steady state is reached. (C) 2013 AIP Publishing LLC.
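For reference, the Bagnold velocity profile mentioned above is usually written in the generic form below (taken from the general granular-flow literature, not the paper's own notation):

\[
u(y) \;=\; \frac{2}{3}\,A\,\frac{\sqrt{g}}{d}\,\Big[\,h^{3/2} - (h - y)^{3/2}\,\Big],
\]

where h is the flow height, y the distance from the base, d the particle diameter, g the gravitational acceleration, and A a dimensionless constant that depends on the inclination angle and material parameters. The characteristic feature is the \(h^{3/2} - (h-y)^{3/2}\) dependence, i.e. a shear rate that increases with depth.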
Abstract:
Effective air flow distribution through perforated tiles is required to efficiently cool servers in a raised-floor data center. We present detailed computational fluid dynamics (CFD) modeling of air flow through a perforated tile and its entrance to the adjacent server rack. The realistic geometrical details of the perforated tile, as well as of the rack, are included in the model. Generally, models for air flow through perforated tiles specify a step pressure loss across the tile surface based on the tile porosity (the porous jump model). An improvement to this adds a momentum source specification above the tile to simulate the acceleration of the air flow through the pores (the body force model). In both of these models, geometrical details of the tile, such as pore locations and shapes, are not included. More detail increases the grid size as well as the computational time; however, the grid refinement can be controlled to achieve a balance between accuracy and computational time. We compared the results from CFD with the tile geometry resolved against the porous jump and body force model solutions, as well as against the flow field measured in particle image velocimetry (PIV) experiments. We observe that including the tile geometrical details gives better results than omitting them and specifying physical models across and above the tile surface. A modification to the body force model is also suggested, and improved results were achieved.
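For context, the porous jump approach referred to above imposes a step pressure loss of the usual quadratic-loss form (a generic statement of the model, with the loss factor treated as an empirical function of tile porosity, not a value from the paper):

\[
\Delta p \;=\; K\,\tfrac{1}{2}\,\rho\,V^{2},
\]

where \(\rho\) is the air density, V the approach (superficial) velocity through the tile, and K a resistance factor that increases as the open-area fraction of the tile decreases. The body force model additionally applies a momentum source just above the tile so that the emerging jets accelerate to roughly the pore velocity, i.e. V divided by the porosity.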
Abstract:
This paper proposes an extended version of the basic New Keynesian monetary (NKM) model which incorporates revision processes for output and inflation data, in order to assess the importance of data revisions for the estimated monetary policy rule parameters and the transmission of policy shocks. Our empirical evidence, based on a structural econometric approach, suggests that although the initial announcements of output and inflation are not rational forecasts of revised output and inflation data, ignoring the presence of revision processes that are not well behaved may not be a serious drawback in the analysis of monetary policy in this framework. However, the transmission of inflation-push shocks is largely affected by considering data revisions, especially when the nominal stickiness parameter is estimated taking data revision processes into account.
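For context, the core of the basic NKM model referred to above is the standard three-equation system, written here in its textbook log-linearised form; the paper's extension augments these equations with revision processes linking initial announcements to revised data, which is not reproduced here:

\[
y_t = E_t\,y_{t+1} - \frac{1}{\sigma}\big(i_t - E_t\,\pi_{t+1}\big) + g_t,
\]
\[
\pi_t = \beta\,E_t\,\pi_{t+1} + \kappa\,y_t + u_t,
\]
\[
i_t = \rho\,i_{t-1} + (1-\rho)\big(\phi_\pi\,\pi_t + \phi_y\,y_t\big) + v_t,
\]

where \(y_t\) is the output gap, \(\pi_t\) inflation, \(i_t\) the nominal interest rate, and \(g_t\), \(u_t\), \(v_t\) are demand, cost-push and monetary policy shocks; \(\kappa\) embeds the nominal stickiness parameter whose estimate the paper finds sensitive to data revisions.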
Abstract:
An analytical mathematical model for friction between a fabric strip and the volar forearm has been developed and validated experimentally. The model generalizes the common assumption of a cylindrical arm to any convex prism, and makes predictions for pressure and tension based on Amontons' law. This includes a relationship between the coefficient of static friction (μ) and the forces on either end of a fabric strip in contact with part of the surface of the arm and perpendicular to its axis. Coefficients of friction were determined from experiments between arm phantoms of circular and elliptical cross-section (made from Plaster of Paris covered in Neoprene) and a nonwoven fabric. As predicted by the model, all values of μ calculated from experimental results agreed within ±8 per cent, and showed very little systematic variation with the deadweight, geometry, or arc of contact used. With an appropriate choice of coordinates, the relationship predicted by this model for the forces on either end of a fabric strip reduces to the prediction of the common model for circular arms. This helps to explain the surprisingly accurate values of μ obtained by applying the cylindrical model to experimental data on real arms.
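The "common model for circular arms" mentioned above is the capstan relation, so in that limiting case the coefficient of friction follows directly from the end tensions and the wrap angle (generic form only, not the paper's generalised convex-prism result):

\[
T_2 = T_1\,e^{\mu\theta} \quad\Longrightarrow\quad \mu = \frac{1}{\theta}\,\ln\!\frac{T_2}{T_1},
\]

where \(T_2\) is the tension on the loaded (higher-tension) end of the strip, \(T_1\) the tension on the other end, and \(\theta\) the arc of contact in radians.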
Abstract:
The combinatorial model of nuclear level densities has now reached a level of accuracy comparable to that of the best global analytical expressions, without suffering from the limits imposed by the statistical hypothesis on which the latter expressions rely. In particular, it naturally provides non-Gaussian spin distributions as well as non-equipartition of parities, which are known to have an impact on cross section predictions at low energies [1, 2, 3]. Our previous global models developed in Refs. [1, 2] suffered from deficiencies, in particular in the way the collective effects, both vibrational and rotational, were treated. We have recently improved this treatment by simultaneously using the single-particle levels and collective properties predicted by a newly derived Gogny interaction [4], thereby enabling a microscopic description of energy-dependent shell, pairing and deformation effects. In addition, for deformed nuclei, the transition to sphericity is coherently taken into account on the basis of a temperature-dependent Hartree-Fock calculation which provides, at each temperature, the structure properties needed to build the level densities. This new method is described and shown to give promising results with respect to the available experimental data.