107 results for Plant architecture model
Abstract:
Plant tissue has a complex cellular structure: an aggregate of individual cells bonded by the middle lamella. During drying, plant tissue undergoes extreme deformations driven mainly by moisture removal and turgor loss. Numerical modelling of this problem is challenging because conventional grid-based techniques such as the Finite Element Method (FEM) and the Finite Difference Method (FDM) struggle with such large deformations. This work presents a meshfree approach to model and simulate the deformations of plant tissues during drying, demonstrating the fundamental capability of meshfree methods to handle extreme deformations of multiphase systems. A simplified 2D tissue model is developed by aggregating individual cells while accounting for the stiffness of the middle lamella. Each cell is treated as consisting of two main components: cell fluid and cell wall. The cell fluid is modelled using Smoothed Particle Hydrodynamics (SPH) and the cell wall using a Discrete Element Method (DEM). During drying, moisture removal is represented by a reduction of cell fluid and wall mass, which causes local shrinkage of cells and eventually tissue-scale shrinkage. The cellular deformations are quantified using several cellular geometrical parameters, and good agreement is observed with experiments on apple tissue. The model is also capable of visually replicating dry tissue structures and can serve as a step towards more complex tissue models for simulating extreme deformations during drying.
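The SPH half of such a model rests on kernel-weighted sums over neighbouring particles. The sketch below is not the authors' code; the cubic-spline kernel choice, particle layout, and smoothing length are assumptions. It shows the basic 2D density summation that underlies a particle-based cell-fluid model:

```python
import numpy as np

def cubic_spline_2d(r, h):
    """Standard 2D cubic-spline smoothing kernel W(r, h)."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h ** 2)  # 2D normalisation constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    elif q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(positions, masses, h):
    """Density at each particle: rho_i = sum_j m_j * W(|r_i - r_j|, h)."""
    n = len(positions)
    rho = np.zeros(n)
    for i in range(n):
        for j in range(n):
            r = np.linalg.norm(positions[i] - positions[j])
            rho[i] += masses[j] * cubic_spline_2d(r, h)
    return rho
```

In a full cell-fluid model this density would feed an equation of state for pressure, which in turn drives the particle accelerations.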
Abstract:
Fundamental understanding of the microscopic physical changes of plant materials is vital to optimizing product quality and processing techniques, particularly in food engineering. Although grid-based numerical modelling can assist in this regard, it is quite challenging to overcome the inherent complexities of these biological materials, especially under critical processing conditions such as drying, where the cellular structure undergoes extreme deformations. In this context, a meshfree particle-based model was developed that is fundamentally capable of handling extreme deformations of plant tissues during drying. The model couples a particle-based meshfree technique, Smoothed Particle Hydrodynamics (SPH), with a Discrete Element Method (DEM). Plant cells were initiated as hexagons and aggregated to form a tissue that also accounts for the characteristics of the middle lamella. In each cell, SPH was used to model the cell protoplasm and DEM to model the cell wall. Drying was incorporated by varying the moisture content, the turgor pressure, and cell wall contraction effects. Compared with state-of-the-art grid-based microscale plant tissue drying models, the proposed model can simulate tissues under excessive moisture reductions, incorporating cell wall wrinkling. Compared with state-of-the-art SPH-DEM tissue models, it better replicates real tissues, and the cell-cell interactions used ensure efficient computation. Model predictions showed good qualitative and quantitative agreement with experimental findings on dried plant tissues. The proposed modelling approach is fundamentally flexible and can be used to study the microscale morphological changes of different cellular structures during dehydration.
Abstract:
Design Science is the process of solving ‘wicked problems’ through designing, developing, instantiating, and evaluating novel solutions (Hevner, March, Park and Ram, 2004). Wicked problems are described as agent finitude in combination with problem complexity and normative constraint (Farrell and Hooker, 2013). In Information Systems Design Science, determining that problems are ‘wicked’ differentiates Design Science research from Solutions Engineering (Winter, 2008) and is a necessary part of establishing the relevance of Information Systems Design Science research (Hevner, 2007; Iivari, 2007). Problem complexity is characterised by many problem components with nested, dependent and co-dependent relationships interacting through multiple feedback and feed-forward loops. Farrell and Hooker (2013) specifically state that for wicked problems “it will often be impossible to disentangle the consequences of specific actions from those of other co-occurring interactions”. This paper discusses the application of an Enterprise Information Architecture modelling technique to disentangle the complexity of a wicked problem in one case. It proposes that such a modelling technique can be applied to other wicked problems and can lay the foundations for establishing relevance to Design Science Research (DSR), provide solution pathways for artefact development, and help substantiate the elements required to produce Design Theory.
Abstract:
A single plant cell was modeled with smoothed particle hydrodynamics (SPH) and a discrete element method (DEM) to study the basic micromechanics that govern cellular structural deformations during drying. This two-dimensional particle-based model consists of two components: a cell fluid model and a cell wall model. The cell fluid was approximated as a highly viscous Newtonian fluid and modeled with SPH. The cell wall was treated as a stiff semi-permeable solid membrane with visco-elastic properties and modeled as a neo-Hookean solid material using a DEM. Compared with existing meshfree particle-based plant cell models, we have specifically introduced cell wall–fluid attraction forces and cell wall bending stiffness effects to address the critical shrinkage characteristics of plant cells during drying. A novel moisture-domain-based approach was also used to simulate drying mechanisms within the particle scheme. The model performance was found to be mainly influenced by the particle resolution, the initial gap between the outermost fluid particles and the wall particles, and the number of particles in the SPH influence domain. A higher-order smoothing kernel was used with adaptive smoothing length to improve the stability and accuracy of the model. Cell deformations at different states of cell dryness were qualitatively and quantitatively compared with microscopic experimental findings on apple cells, and fairly good agreement was observed with some exceptions. The wall–fluid attraction forces and cell wall bending stiffness were found to significantly improve the model predictions. A detailed sensitivity analysis was also conducted to further investigate the influence of wall–fluid attraction forces, cell wall bending stiffness, cell wall stiffness and particle resolution. This novel meshfree modeling approach is highly applicable to cellular-level deformation studies of plant food materials during drying, which are characterized by large deformations.
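A minimal sketch of the DEM cell-wall idea, assuming a closed ring of wall particles joined by linear springs plus a simple bending term that pulls each vertex toward the midpoint of its neighbours. The stiffness values and force forms are illustrative, not the paper's constitutive model:

```python
import numpy as np

def wall_forces(pos, k_s, rest_len, k_b):
    """Spring + bending forces on a closed ring of cell-wall particles.

    pos: (n, 2) array of wall particle positions, ordered around the ring.
    k_s: stretching stiffness; rest_len: spring rest length;
    k_b: bending stiffness (all illustrative values).
    """
    n = len(pos)
    f = np.zeros_like(pos)
    # Linear springs between consecutive wall particles (equal and
    # opposite on each pair, so they conserve momentum).
    for i in range(n):
        j = (i + 1) % n
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        fs = k_s * (length - rest_len) * d / length
        f[i] += fs
        f[j] -= fs
    # Simple bending penalty: pull each vertex toward the midpoint of
    # its two neighbours, resisting local wall wrinkling.
    for i in range(n):
        f[i] += k_b * (0.5 * (pos[i - 1] + pos[(i + 1) % n]) - pos[i])
    return f
```

In a drying simulation these wall forces would be combined with the SPH fluid pressure and the wall–fluid attraction forces described above, then integrated in time.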
Abstract:
We present a new algorithm to compute the voxel-wise genetic contribution to brain fiber microstructure using diffusion tensor imaging (DTI) in a dataset of 25 monozygotic (MZ) and 25 dizygotic (DZ) twin pairs (100 subjects total). First, the structural and DT scans were linearly co-registered. Structural MR scans were nonlinearly mapped via a 3D fluid transformation to a geometrically centered mean template, and the deformation fields were applied to the DTI volumes. After tensor re-orientation to realign them to the anatomy, we computed several scalar and multivariate DT-derived measures including the geodesic anisotropy (GA), the tensor eigenvalues and the full diffusion tensors. A covariance-weighted distance was measured between twins in the Log-Euclidean framework [2], and used as input to a maximum-likelihood based algorithm to compute the contributions from genetics (A), common environmental factors (C) and unique environmental factors (E) to fiber architecture. Quantitative genetic studies can take advantage of the full information in the diffusion tensor, using covariance-weighted distances and statistics on the tensor manifold.
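The Log-Euclidean framework compares diffusion tensors (symmetric positive-definite matrices) via the Frobenius norm of the difference of their matrix logarithms. A small sketch with synthetic tensors, not the study's covariance-weighted variant:

```python
import numpy as np

def spd_log(T):
    """Matrix logarithm of a symmetric positive-definite matrix,
    computed via its eigendecomposition."""
    w, V = np.linalg.eigh(T)
    return V @ np.diag(np.log(w)) @ V.T

def log_euclidean_distance(T1, T2):
    """d(T1, T2) = || log(T1) - log(T2) ||_F in the Log-Euclidean framework."""
    return np.linalg.norm(spd_log(T1) - spd_log(T2), ord="fro")
```

Working in the log domain makes tensor averaging and distance computation Euclidean while guaranteeing that interpolated tensors remain positive-definite.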
Abstract:
In this chapter we consider biosecurity surveillance as part of a complex system comprising many different biological, environmental and human factors and their interactions. Modelling and analysis of surveillance strategies should take these complexities into account, and should also facilitate the use and integration of the many different types of information that can provide insight into the system as a whole. After a brief discussion of a range of options, we focus on Bayesian networks for representing such complex systems. We summarize the features of Bayesian networks and describe them in the context of surveillance.
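At its core, a Bayesian network updates belief in a hidden state from observed evidence. A toy two-node surveillance example, with a hidden "pest present" node, an observed "detection" node, and entirely invented probabilities:

```python
def posterior_pest_given_detection(p_pest, p_det_given_pest, p_det_given_clear):
    """Bayes' rule: P(pest | detection observed) for a two-node network.

    p_pest: prior probability the pest is present.
    p_det_given_pest: detection sensitivity.
    p_det_given_clear: false-alarm rate when no pest is present.
    """
    joint_present = p_pest * p_det_given_pest          # pest and detection
    joint_absent = (1.0 - p_pest) * p_det_given_clear  # false alarm
    return joint_present / (joint_present + joint_absent)
```

With an invented 10% prior, 80% sensitivity and a 5% false-alarm rate, one detection raises the posterior to 0.64; a full surveillance network chains many such conditional tables over biological, environmental and human factors.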
Abstract:
Providing support for reversible transformations as a basis for round-trip engineering is a significant challenge in model transformation research. While there are a number of current approaches, they require the underlying transformation to exhibit injective behaviour when reversing changes. This, however, does not serve all practical transformations well. In this paper, we present a novel approach to round-trip engineering that does not place restrictions on the nature of the underlying transformation. Based on abductive logic programming, it allows us to compute a set of legitimate source changes that equate to a given change to the target model. Encouraging results are derived from an initial prototype that supports most concepts of the Tefkat transformation language.
Abstract:
With the advent of Service Oriented Architecture, Web services have gained tremendous popularity. Given the large number of available Web services, finding an appropriate service for a user's requirement is a challenge, which warrants an effective and reliable Web service discovery process. A considerable body of research has emerged on improving the accuracy of Web service discovery to match the best service. Web service discovery typically suggests many individual services that each partially fulfil the user's interest. Considering the semantic relationships of the words used to describe services, together with their input and output parameters, can lead to more accurate discovery, and appropriate linking of individually matched services should then fully satisfy the user's requirements. This research integrates a semantic model and a data mining technique to enhance the accuracy of Web service discovery, through a novel three-phase methodology. The first phase performs match-making to find semantically similar Web services for a user query. To perform semantic analysis on the content of the Web Service Description Language document, a support-based latent semantic kernel is constructed using an innovative concept of binning and merging over a large quantity of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to uncover hidden meanings of the query terms that could not otherwise be found. Sometimes a single Web service is unable to fully satisfy the user's requirement; in such cases, a composition of multiple inter-related Web services is presented to the user. Checking the feasibility of linking multiple Web services is the task of the second phase.
Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In the link-analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path at minimum traversal cost. The third phase, system integration, integrates the results of the preceding two phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, an integral part of the system integration phase, makes the final recommendations, including individual and composite Web services, to the user. To evaluate the performance of the proposed method, extensive experimentation was performed. Results of the proposed support-based semantic-kernel method of Web service discovery are compared with those of a standard keyword-based information-retrieval method and a clustering-based machine-learning method; the proposed method outperforms both. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 of the Web services found in phase I for linking. Empirical results further confirm that the fusion engine boosts the accuracy of Web service discovery by systematically combining the inputs from the semantic analysis (phase I) and the link analysis (phase II). Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
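The link-analysis phase can be illustrated with a standard all-pairs shortest-path computation (Floyd-Warshall); the graph representation and link costs below are hypothetical, not the thesis's implementation:

```python
import math

def floyd_warshall(n, edges):
    """All-pairs cheapest composition costs.

    n: number of services (graph nodes).
    edges: {(u, v): cost} for each feasible directed service link.
    Returns an n x n matrix of minimum total link costs.
    """
    dist = [[0.0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for (u, v), cost in edges.items():
        dist[u][v] = min(dist[u][v], cost)
    for k in range(n):            # allow service k as an intermediate hop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

An unreachable pair stays at infinity, signalling that no composition chains the two services; a finite entry gives the cheapest cost of chaining intermediate services between them.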
Abstract:
In architecture courses, instilling a wider understanding of the industry-specific representations practised in the building industry is normally done under the auspices of Technology and Science subjects. Traditionally, building industry professionals communicated their design intentions using industry-specific representations. Originally these mainly two-dimensional representations, such as plans, sections, elevations and schedules, were produced manually using a drawing board. This manual process has since been digitised in the form of Computer Aided Design and Drafting (CADD), or ubiquitously simply CAD. While CAD has significant productivity and accuracy advantages over the earlier manual method, it still only produces industry-specific representations of the design intent; essentially, CAD is a digital version of the drawing board. CAD remains the main tool used for producing these representations in industry, and this is also the approach taken in most traditional university courses, mirroring the reality of the building industry. A successor to CAD, in the form of Building Information Modelling (BIM), is presently evolving in the construction industry. CAD is mostly a technical tool that conforms to existing industry practices; BIM, on the other hand, is revolutionary both as a technical tool and as an industry practice. Rather than producing representations of design intent, BIM produces an exact virtual prototype of any building that, in an ideal situation, is centrally stored and freely exchanged between the project team. Essentially, BIM builds any building twice: once in the virtual world, where any faults are resolved, and then in the real world. There is, however, no established model for learning through the use of this technology in architecture courses.
Queensland University of Technology (QUT), a tertiary institution that maintains close links with industry, recognises the importance of equipping its graduates with skills that are relevant to industry. BIM skills are in increasing demand as construction industry practices evolve. As such, during the second half of 2008, QUT fourth-year architectural students were formally introduced to BIM for the first time, both as a technology and as an industry practice. This paper outlines the teaching team's experiences and methodologies in offering a BIM unit (Architectural Technology and Science IV) at QUT for the first time and provides a description of the learning model. The paper presents the results of a survey on the learners' perspectives of both BIM and their learning experiences as they learn about, and through, this technology.
Abstract:
Decision Support Systems (DSS) have played a significant role in construction project management, as evidenced by the many DSS implementations throughout the construction project life cycle. However, most research has concentrated on model development and has neglected fundamental aspects of information system development. As a result, research outputs are difficult for lay users, particularly those from non-technical backgrounds, to adopt. A DSS should therefore hide the abstraction and complexity of its models behind a more usable, user-oriented system. To demonstrate a desirable DSS architecture, particularly for public sector planning, we propose a generic DSS framework for consultant selection, focusing on the engagement of engineering consultants for irrigation and drainage infrastructure. The framework spans the operational to the strategic decision level, and the expected result of the research is a robust DSS framework for consultant selection. The paper also discusses issues related to existing DSS frameworks and the integration of enabling technologies from computing. It is based on a preliminary case study conducted via literature review and archival documents at the Department of Irrigation and Drainage (DID) Malaysia, and contributes directly to the enhancement of consultant pre-qualification assessment and selection tools. With the introduction of a DSS in this area, the selection process will be more time-efficient, will aid qualitative judgement, and will make decisions transparent through the aggregation of decisions among stakeholders.
Abstract:
The analysis and value of digital evidence in an investigation has been a domain of discourse in the digital forensic community for several years. While many works have considered different approaches to modelling digital evidence, a comprehensive understanding of the process of merging different evidence items recovered during a forensic analysis is still a distant dream. With the advent of modern technologies, pro-active measures are integral to keeping abreast of all forms of cyber crime and attack. This paper motivates the need to formalize the process of analyzing digital evidence from multiple sources simultaneously. We present the forensic integration architecture (FIA), which provides a framework for abstracting the evidence source and storage format information from digital evidence and explores the concept of integrating evidence information from multiple sources. FIA identifies evidence information from multiple sources, enabling an investigator to build theories to reconstruct the past. It is hierarchically composed of multiple layers and adopts a technology-independent approach; it is also open and extensible, making it simple to adapt to technological changes. We present a case study using a hypothetical car theft case to demonstrate the concepts and illustrate the value FIA brings to the field.
Groundwater flow model of the Logan River alluvial aquifer system, Josephville, South East Queensland
Abstract:
The study focuses on an alluvial plain situated within a large meander of the Logan River at Josephville, near Beaudesert, which supports a factory that processes gelatine. The plant draws water from on-site bores, as well as the Logan River, for its production processes and produces approximately 1.5 ML per day (Douglas Partners, 2004) of waste water containing high levels of dissolved ions. At present a series of treatment ponds is used to aerate the waste water, reducing the level of organic matter; the water is then used to irrigate grazing land around the site. Within the study the hydrogeology is investigated, a conceptual groundwater model is produced, and a numerical groundwater flow model is developed from it. The site includes several bores that access groundwater, plus a network of monitoring bores. Assessment of drilling logs shows the area is formed from a mixture of poorly sorted Quaternary alluvial sediments, with a laterally continuous aquifer comprising coarse sands and fine gravels that is in contact with the river. This aquifer occurs at a depth of between 11 and 15 metres and is overlain by a heterogeneous mixture of silts, sands and clays. The study investigates the degree of interaction between the river and the groundwater within the fluvially derived sediments, both for environmental monitoring and for the sustainability of the potential local groundwater resource. A conceptual hydrogeological model of the site proposes two hydrostratigraphic units: a basal aquifer of coarse-grained materials overlain by a thick semi-confining unit of finer materials. From this, a two-layer groundwater flow model and hydraulic conductivity distribution were developed, based on bore monitoring and rainfall data, using MODFLOW (McDonald and Harbaugh, 1988) and PEST (Doherty, 2004) within the GMS 6.5 software (EMSI, 2008). A second model was also considered with the alluvium represented as a single hydrogeological unit.
Both models were calibrated to steady-state conditions, and sensitivity analyses of the parameters have demonstrated that both models are very stable for changes in the range of ±10% for all parameters and still reasonably stable for changes up to ±20%, with RMS errors in the model always less than 10%. The preferred two-layer model was found to give the more realistic representation of the site: water level variations and the numerical modelling showed that the basal layer of coarse sands and fine gravels is hydraulically connected to the river, while the upper layer, comprising a poorly sorted mixture of silt-rich clays and sands of very low permeability, limits infiltration from the surface to the lower layer. The paucity of historical data has limited the numerical modelling to a steady-state model based on groundwater levels during a drought period, so forecasts for varying hydrological conditions (e.g. short-term as well as prolonged dry and wet conditions) cannot reasonably be made from it. If future modelling is to be undertaken, a regular program of groundwater monitoring should be established and a long-term database of water levels maintained, to enable a transient model to be developed at a later stage. This will require a valid monitoring network to be designed, with additional bores for adequate coverage of the hydrogeological conditions at the Josephville site. Further investigations would also be enhanced by pump testing to investigate the hydrogeological properties of the aquifer.
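The cell-by-cell head calculation that MODFLOW performs can be illustrated in one dimension. The sketch below solves steady-state confined flow with fixed heads at both ends; the geometry, conductivity and boundary values are invented, not the Josephville model:

```python
import numpy as np

def steady_heads_1d(n, h_left, h_right, conductivity):
    """Steady-state heads on n cells of a 1D confined aquifer.

    Solves d/dx (K dh/dx) = 0 by finite differences with fixed
    (Dirichlet) heads at both ends; uniform cell spacing assumed.
    """
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0        # boundary cells: head is prescribed
    b[0], b[-1] = h_left, h_right
    for i in range(1, n - 1):        # interior mass balance per cell
        A[i, i - 1] = A[i, i + 1] = conductivity
        A[i, i] = -2.0 * conductivity
    return np.linalg.solve(A, b)
```

With uniform conductivity the heads fall linearly between the boundary values; MODFLOW writes the same balance per cell in three dimensions, with conductivities varying by layer, which is how the two-layer structure above expresses itself numerically.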
Molecular architecture of the human sinus node: insights into the function of the cardiac pacemaker.
Abstract:
BACKGROUND: Although we know much about the molecular makeup of the sinus node (SN) in small mammals, little is known about it in humans. The aims of the present study were to investigate the expression of ion channels in the human SN and to use the data to predict electrical activity. METHODS AND RESULTS: Quantitative polymerase chain reaction, in situ hybridization, and immunofluorescence were used to analyze 6 human tissue samples. Messenger RNA (mRNA) for 120 ion channels (and some related proteins) was measured in the SN, a novel paranodal area, and the right atrium (RA). The results showed, for example, that in the SN compared with the RA, there was a lower expression of Na(v)1.5, K(v)4.3, K(v)1.5, ERG, K(ir)2.1, K(ir)6.2, RyR2, SERCA2a, Cx40, and Cx43 mRNAs but a higher expression of Ca(v)1.3, Ca(v)3.1, HCN1, and HCN4 mRNAs. The expression pattern of many ion channels in the paranodal area was intermediate between that of the SN and RA; however, compared with the SN and RA, the paranodal area showed greater expression of K(v)4.2, K(ir)6.1, TASK1, SK2, and MiRP2. Expression of ion channel proteins was in agreement with expression of the corresponding mRNAs. The levels of mRNA in the SN, as a percentage of those in the RA, were used to estimate conductances of key ionic currents as a percentage of those in a mathematical model of human atrial action potential. The resulting SN model successfully produced pacemaking. CONCLUSIONS: Ion channels show a complex and heterogeneous pattern of expression in the SN, paranodal area, and RA in humans, and the expression pattern is appropriate to explain pacemaking.
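The abstract's scaling step (estimating SN conductances by scaling an atrial model's conductances with the SN/RA mRNA ratio) can be sketched as follows; the channel names, mRNA levels and baseline conductances below are invented placeholders, not the study's measurements:

```python
def scale_conductances(g_atrial, mrna_sn, mrna_ra):
    """Channel-by-channel scaling: g_SN = g_atrial * (mRNA_SN / mRNA_RA).

    g_atrial: baseline conductances from an atrial action-potential model.
    mrna_sn, mrna_ra: measured mRNA levels in the SN and right atrium.
    """
    return {ch: g_atrial[ch] * mrna_sn[ch] / mrna_ra[ch] for ch in g_atrial}
```

Feeding the rescaled conductances back into the atrial model is what allows the resulting SN model to be tested for spontaneous pacemaking.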
Abstract:
This paper argues for a model of complex system design for sustainable architecture within a framework of entropy evolution. The spectrum of sustainable architecture consists of the efficient use of energy and material resources over the life cycle of buildings, the active involvement of occupants in micro-climate control within buildings, and the natural environmental context. The interactions of these parameters compose a complex system of sustainable architectural design, for which conventional linear and fragmented design technologies are insufficient to indicate holistic and ongoing environmental performance. The complexity theory of dissipative structures provides a microscopic formulation of open-system evolution, which offers a system design framework for the evolution of building environmental performance towards an optimisation of sustainability in architecture.