902 results for physically based modeling
Abstract:
If the land sector is to make significant contributions to mitigating anthropogenic greenhouse gas (GHG) emissions in coming decades, it must do so while concurrently expanding production of food and fiber. In our view, mathematical modeling will be required to provide scientific guidance to meet this challenge. In order to be useful in GHG mitigation policy measures, models must simultaneously meet scientific, software engineering, and human capacity requirements. They can be used to understand GHG fluxes, to evaluate proposed GHG mitigation actions, and to predict and monitor the effects of specific actions; the latter applications require a change in mindset that has parallels with the shift from research modeling to decision support. We compare and contrast six agro-ecosystem models (FullCAM, DayCent, DNDC, APSIM, WNMM, and AgMod), chosen because they are used in Australian agriculture and forestry. Underlying structural similarities in the representations of carbon flows through plants and soils in these models are complemented by a diverse range of emphases and approaches to the subprocesses within the agro-ecosystem. None of these agro-ecosystem models handles all land sector GHG fluxes, and considerable model-based uncertainty exists for soil C fluxes and enteric methane emissions. The models also show diverse approaches to the initialisation of model simulations, software implementation, distribution, licensing, and software quality assurance; each of these will differentially affect their usefulness for policy-driven GHG mitigation prediction and monitoring. Specific requirements imposed on the use of models by Australian mitigation policy settings are discussed, and areas for further scientific development of agro-ecosystem models for use in GHG mitigation policy are proposed.
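The structural similarity the abstract refers to, carbon entering plant and soil pools and leaving by first-order decay, can be illustrated with a deliberately minimal one-pool soil carbon balance. The pool structure and parameter values below are illustrative assumptions, not taken from FullCAM, DayCent, DNDC, APSIM, WNMM, or AgMod:

```python
# Minimal one-pool soil carbon balance, illustrating the shared
# "carbon in, first-order decay out" skeleton of agro-ecosystem models.
# Pool structure and parameter values are illustrative assumptions only.

def simulate_soil_carbon(c0, inputs, k, years):
    """Euler-step the one-pool model dC/dt = inputs - k*C (t C/ha, 1/yr)."""
    c = c0
    trajectory = [c]
    for _ in range(years):
        c += inputs - k * c   # annual time step
        trajectory.append(c)
    return trajectory

# Example: 40 t C/ha initial stock, 2 t C/ha/yr residue input, 4% annual decay
print(simulate_soil_carbon(c0=40.0, inputs=2.0, k=0.04, years=5))
```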
Abstract:
Large power transformers are critical components of the power supply chain, and these networks of engineering assets form an essential base of a nation's energy infrastructure. This research identifies the key factors influencing normal transformer operating conditions and predicts asset lifespan for management purposes. Engineering asset research has produced few lifespan forecasting methods that combine real-time monitoring with transformer maintenance and replacement decisions. Utilizing the rich data source of a remote terminal unit (RTU) system for sensor-data-driven analysis, this research develops an innovative real-time lifespan forecasting approach applying logistic regression based on the Weibull distribution. The methodology and an implementation prototype are verified using a data series from 161 kV transformers to evaluate their efficiency and accuracy for energy sector applications. Asset stakeholders and suppliers benefit significantly from real-time power transformer lifespan evaluation for maintenance and replacement decision support.
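As a rough sketch of the Weibull side of such an approach (the paper couples it with logistic regression on real-time RTU data, which is not reproduced here), one can fit a Weibull lifetime distribution to observed failure ages and query survival probabilities; the ages below are hypothetical:

```python
# A minimal sketch of Weibull-based lifespan estimation from failure ages.
# The data and the use of scipy's weibull_min are illustrative assumptions,
# not the paper's actual method.

import numpy as np
from scipy.stats import weibull_min

ages_at_failure = np.array([18.0, 22.5, 25.1, 27.8, 30.2, 33.9])  # years, hypothetical

shape, loc, scale = weibull_min.fit(ages_at_failure, floc=0)  # fix location at 0
print(f"shape (beta) = {shape:.2f}, scale (eta) = {scale:.1f} years")

# Probability a transformer survives past 30 years under the fitted model
print("P(T > 30) =", weibull_min.sf(30, shape, loc=0, scale=scale))
```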
Abstract:
Structural equation modeling (SEM) is a powerful statistical approach for the testing of networks of direct and indirect theoretical causal relationships in complex data sets with intercorrelated dependent and independent variables. SEM is commonly applied in ecology, but the spatial information commonly found in ecological data remains difficult to model in a SEM framework. Here we propose a simple method for spatially explicit SEM (SE-SEM) based on the analysis of variance/covariance matrices calculated across a range of lag distances. This method provides readily interpretable plots of the change in path coefficients across scale and can be implemented using any standard SEM software package. We demonstrate the application of this method using three studies examining the relationships between environmental factors, plant community structure, nitrogen fixation, and plant competition. By design, these data sets had a spatial component, but were previously analyzed using standard SEM models. Using these data sets, we demonstrate the application of SE-SEM to regularly spaced, irregularly spaced, and ad hoc spatial sampling designs and discuss the increased inferential capability of this approach compared with standard SEM. We provide an R package, sesem, to easily implement spatial structural equation modeling.
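The core computation of the proposed SE-SEM method, estimating a separate variance/covariance matrix from the site pairs in each lag-distance bin, can be sketched as follows. The authors provide the full implementation in the R package sesem, so this Python version with simulated sites and variables only illustrates the binning step:

```python
# Sketch of lag-binned variance/covariance estimation (SE-SEM first step).
# Variable names, data, and binning choices are illustrative assumptions.

import numpy as np
from scipy.spatial.distance import pdist, squareform

def lag_covariances(coords, X, bin_edges):
    """Return one covariance matrix of X (n_sites x n_vars) per lag bin,
    estimated from all site pairs whose separation falls in the bin."""
    d = squareform(pdist(coords))
    i, j = np.triu_indices(len(coords), k=1)
    mats = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (d[i, j] >= lo) & (d[i, j] < hi)
        pairs = np.vstack([X[i[mask]], X[j[mask]]])  # both members of each pair
        mats.append(np.cov(pairs, rowvar=False))
    return mats

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(50, 2))   # hypothetical site locations
X = rng.normal(size=(50, 3))                 # three measured variables
for m in lag_covariances(coords, X, [0, 20, 40, 60]):
    print(m.round(2))
```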
Abstract:
Pesticide use in paddy rice production may contribute to adverse ecological effects in surface waters. Risk assessments conducted for regulatory purposes depend on the use of simulation models to determine predicted environmental concentrations (PECs) of pesticides. Tiered approaches are often used, in which assessments at lower tiers are based on relatively simple models with conservative scenarios, while those at higher tiers have more realistic representations of physical and biochemical processes. This chapter reviews models commonly used for predicting the environmental fate of pesticides in rice paddies. Theoretical considerations, unique features, and applications are discussed. This review is expected to provide information to guide model selection for pesticide registration, regulation, and mitigation in rice production areas.
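In the spirit of the conservative lower tiers, a screening-level PEC can be computed from a simple mass balance that assumes instantaneous, uniform mixing and no dissipation; this back-of-the-envelope calculation is illustrative and does not correspond to any specific regulatory model reviewed in the chapter:

```python
# Screening-level PEC in paddy water: mass applied per unit area divided
# by the water volume per unit area. Conservative, no-dissipation sketch.

def pec_paddy_water(rate_kg_per_ha, water_depth_m):
    """Predicted environmental concentration (mg/L) immediately after
    application, assuming instantaneous, uniform mixing."""
    mass_g_per_m2 = rate_kg_per_ha * 1000.0 / 10000.0   # kg/ha -> g/m^2
    volume_l_per_m2 = water_depth_m * 1000.0            # m of water -> L/m^2
    return mass_g_per_m2 / volume_l_per_m2 * 1000.0     # g/L -> mg/L

# 0.5 kg a.i./ha applied to a paddy holding 10 cm of water
print(pec_paddy_water(0.5, 0.10), "mg/L")  # -> 0.5 mg/L
```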
Abstract:
Analyzing and redesigning business processes is a complex task, which requires the collaboration of multiple actors. Current approaches focus on collaborative modeling workshops where process stakeholders verbally contribute their perspective on a process while modeling experts translate their contributions and integrate them into a model using traditional input devices. Limiting participants to verbal contributions affects not only the outcome of collaboration but also the collaboration itself. We created CubeBPM – a system that allows groups of actors to interact with process models through a touch-based interface on a large interactive touch display wall. We are currently conducting a study that aims at assessing the impact of CubeBPM on collaboration and modeling performance. Initial results presented in this paper indicate that the setting helped participants to become more active in collaboration.
Abstract:
Analyzing and redesigning business processes is a complex task, which requires the collaboration of multiple actors. Current approaches focus on workshops where process stakeholders, together with modeling experts, create a graphical visualization of a process in a model. Within these workshops, stakeholders are mostly limited to verbal contributions, which are integrated into a process model by a modeling expert using traditional input devices. This limitation negatively affects both the outcome of the collaboration and the perception of the collaboration itself. To overcome this problem we created CubeBPM – a system that allows groups of actors to interact with process models through a touch-based interface on a large interactive touch display wall. Using this system for collaborative modeling, we expect to provide a more effective collaboration environment and thus improve modeling performance and collaboration.
Abstract:
The mesoscale simulation of a lamellar mesophase based on a free energy functional is examined with the objective of determining the relationship between the parameters in the model and molecular parameters. Attention is restricted to a symmetric lamellar phase with equal volumes of hydrophilic and hydrophobic components. Apart from the lamellar spacing, there are two parameters in the free energy functional. One parameter, r, determines the sharpness of the interface, and it is shown how this parameter can be obtained from the interface profile in a molecular simulation. The other parameter, A, provides an energy scale. Analytical expressions are derived relating r and A to the bending and compression moduli, and relating the permeation constant in the macroscopic equation to the Onsager coefficient in the concentration diffusion equation. The linear hydrodynamic response predicted by the theory is verified by carrying out a mesoscale simulation using the lattice-Boltzmann technique and confirming that the analytical predictions agree with the simulation results. A macroscale model based on the layer thickness field and the layer normal field is proposed, and the relationship between the parameters in the macroscale model and those in the mesoscale free energy functional is obtained.
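The abstract does not reproduce the functional itself; as an illustrative stand-in, a Landau-Brazovskii form for a symmetric lamellar phase with a scaled concentration field psi, lamellar spacing lambda, energy scale A, and interface-sharpness parameter r might be written as:

```latex
% Illustrative form only; the placement of r and the couplings in the
% paper's functional may differ.
F[\psi] \;=\; A \int d^3x \left[ \frac{\left(\psi^2 - 1\right)^2}{4}
      \;+\; \frac{1}{2\, r\, q_0^4}\left(\left(\nabla^2 + q_0^2\right)\psi\right)^2 \right],
\qquad q_0 = \frac{2\pi}{\lambda}.
```

With this assumed placement, a larger r weakens the penalty on harmonics away from the preferred wavenumber q_0, permitting sharper interfaces, while A multiplies the whole functional and sets the energy scale, consistent with the roles the abstract assigns to the two parameters.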
Abstract:
The nicotinic Acetylcholine Receptor (nAChR) is the major class of neurotransmitter receptors involved in many neurodegenerative conditions such as schizophrenia, Alzheimer's and Parkinson's diseases. The N-terminal region or Ligand Binding Domain (LBD) of nAChR is located on pre- and post-synaptic neurons, where it mediates synaptic transmission. nAChR acts as the drug target for agonist and competitive antagonist molecules that modulate signal transmission at the nerve terminals. Using the Acetylcholine Binding Protein (AChBP) from Lymnaea stagnalis as the structural template, a homology modeling approach was carried out to build a three-dimensional model of the N-terminal region of human alpha(7)nAChR. This theoretical model is an assembly of five alpha(7) subunits with 5-fold axis symmetry, constituting a channel, with the binding pocket present at the interface region of the subunits. alpha-neurotoxin is a potent nAChR competitive antagonist that readily blocks the channel, resulting in paralysis. The molecular interaction of alpha-Bungarotoxin, a long-chain alpha-neurotoxin from Bungarus multicinctus, with human alpha(7)nAChR was studied. Agonists such as acetylcholine and nicotine, which are implicated in a diverse array of biological activities, such as enhancement of cognitive performance, were also docked with the theoretical model of human alpha(7)nAChR. These docked complexes were analyzed further to identify the crucial residues involved in the interaction. These results provide the details of the interaction of agonists and competitive antagonists with the three-dimensional model of the N-terminal region of human alpha(7)nAChR and thereby point to the design of novel lead compounds.
Abstract:
This paper presents the modeling and analysis of a voltage source converter (VSC) based back-to-back (BTB) HVDC link. The case study considers the response to changes in active and reactive power and to a disturbance caused by a single line-to-ground (SLG) fault. The controllers at each terminal are designed to inject a variable (in magnitude and phase angle) sinusoidal, balanced set of voltages to regulate the active and reactive power. It is also possible to regulate the converter bus (AC) voltage by controlling the injected reactive power. The analysis is carried out using both a d-q model (neglecting the harmonics in the output voltages of the VSC) and a three-phase detailed model of the VSC. While the eigenvalue analysis and controller design are based on the d-q model, the transient simulation considers both models.
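The d-q averaged model rests on transforming the three-phase quantities into a rotating reference frame. A minimal sketch of the abc-to-dq0 (Park) transformation is shown below; the amplitude-invariant 2/3 scaling and the example values are illustrative choices, not necessarily those of the paper:

```python
# Park (abc -> dq0) transformation underlying the d-q averaged VSC model,
# using the 2/3 amplitude-invariant convention.

import numpy as np

def abc_to_dq0(v_abc, theta):
    """Transform a three-phase vector into the rotating dq0 frame at angle theta."""
    a = theta
    b = theta - 2 * np.pi / 3
    c = theta + 2 * np.pi / 3
    T = (2 / 3) * np.array([
        [np.cos(a),  np.cos(b),  np.cos(c)],
        [-np.sin(a), -np.sin(b), -np.sin(c)],
        [0.5,        0.5,        0.5],
    ])
    return T @ v_abc

# A balanced set aligned with theta: the d-axis picks up the full magnitude
t, f, Vm = 0.01, 50.0, 1.0
theta = 2 * np.pi * f * t
v_abc = Vm * np.cos(theta + np.array([0, -2 * np.pi / 3, 2 * np.pi / 3]))
print(abc_to_dq0(v_abc, theta))   # ~[1, 0, 0]
```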
Abstract:
Algorithms for planning quasistatic attitude maneuvers based on the Jacobian of the forward kinematic mapping of fully-reversed (FR) sequences of rotations are proposed in this paper. An FR sequence of rotations is a series of finite rotations that consists of initial rotations about the axes of a body-fixed coordinate frame and subsequent rotations that undo these initial rotations. Unlike the Jacobian of conventional systems such as a robot manipulator, the Jacobian of the system manipulated through FR rotations is a null matrix at the identity, which leads to a total breakdown of the traditional Jacobian formulation. Therefore, the Jacobian algorithm is reformulated and implemented so as to synthesize an FR sequence for a desired rotational displacement. The Jacobian-based algorithm presented in this paper identifies particular six-rotation FR sequences that synthesize desired orientations. We developed the single-step and the multiple-step Jacobian methods to accomplish a given task using six-rotation FR sequences. The single-step Jacobian method identifies a specific FR sequence for a given desired orientation and the multiple-step Jacobian algorithm synthesizes physically feasible FR rotations on an optimal path. A comparison with existing algorithms verifies the fast convergence ability of the Jacobian-based algorithm. Unlike closed-form solutions to the inverse kinematics problem, the Jacobian-based algorithm determines the most efficient FR sequence that yields a desired rotational displacement through a simple and inexpensive numerical calculation. The procedure presented here is useful for those motion planning problems wherein the Jacobian is singular or null.
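The abstract's reformulated Jacobian scheme for FR sequences is not spelled out, but the general shape of a damped least-squares Jacobian iteration for orientation synthesis can be sketched with a placeholder forward map. Here `forward` is a hypothetical stand-in (a plain Euler-angle map) for the FR forward kinematics, whose actual Jacobian vanishes at the identity:

```python
# Generic damped least-squares Jacobian iteration for orientation synthesis.
# `forward` is a hypothetical stand-in for the FR forward kinematics.

import numpy as np
from scipy.spatial.transform import Rotation as R

def forward(angles):
    """Placeholder forward map: intrinsic XYZ Euler rotation."""
    return R.from_euler("XYZ", angles)

def solve_orientation(target, angles, iters=50, eps=1e-6, damping=1e-3):
    for _ in range(iters):
        err = (target * forward(angles).inv()).as_rotvec()  # orientation error
        if np.linalg.norm(err) < 1e-10:
            break
        # Finite-difference Jacobian of the error with respect to the angles
        J = np.column_stack([
            ((target * forward(angles + eps * e).inv()).as_rotvec() - err) / eps
            for e in np.eye(len(angles))
        ])
        # Damped least-squares (Levenberg-Marquardt style) update
        angles = angles - np.linalg.solve(J.T @ J + damping * np.eye(len(angles)),
                                          J.T @ err)
    return angles

target = R.from_rotvec([0.3, -0.2, 0.5])
print(forward(solve_orientation(target, np.array([0.1, 0.1, 0.1]))).as_rotvec())
```

The damping term keeps the update well-posed when J.T @ J is near-singular, which is precisely the regime (a vanishing Jacobian) that motivates the paper's reformulation.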
Abstract:
Water temperature measurements from Wivenhoe Dam offer a unique opportunity for studying fluctuations of temperatures in a subtropical dam as a function of time and depth. Cursory examination of the data indicates a complicated structure across both time and depth. We propose simplifying the task of describing these data by breaking the time series at each depth into physically meaningful components that individually capture daily, subannual, and annual (DSA) variations. Precise definitions for each component are formulated in terms of a wavelet-based multiresolution analysis. The DSA components are approximately pairwise uncorrelated within a given depth and between different depths. They also satisfy an additive property in that their sum is exactly equal to the original time series. Each component is based upon a set of coefficients that decomposes the sample variance of each time series exactly across time and that can be used to study both time-varying variances of water temperature at each depth and time-varying correlations between temperatures at different depths. Each DSA component is amenable to studying a certain aspect of the relationship between the series at different depths. The daily component is in general weakly correlated between depths, including those that are adjacent to one another. The subannual component quantifies seasonal effects and in particular isolates phenomena associated with the thermocline, thus simplifying its study across time. The annual component can be used for a trend analysis. The descriptive analysis provided by the DSA decomposition is a useful precursor to a more formal statistical analysis.
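An additive wavelet decomposition with the same flavor can be sketched with PyWavelets; the grouping of detail levels into "daily", "subannual", and "annual" bands below is an illustrative choice for one year of hourly data, whereas the paper defines the DSA components through a multiresolution analysis with precise scale boundaries:

```python
# Additive wavelet decomposition of an hourly temperature series into
# coarse "daily"/"subannual"/"annual" bands. Level grouping is illustrative.

import numpy as np
import pywt

rng = np.random.default_rng(1)
hours = np.arange(24 * 365)
temp = (20 + 3 * np.sin(2 * np.pi * hours / (24 * 365))   # annual cycle
          + 1 * np.sin(2 * np.pi * hours / 24)            # daily cycle
          + 0.3 * rng.normal(size=hours.size))            # noise

level = 9
coeffs = pywt.wavedec(temp, "sym8", level=level)  # [cA9, cD9, ..., cD1]

def band(coeffs, keep):
    """Reconstruct using only the coefficient arrays at indices in `keep`."""
    masked = [c if i in keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
    return pywt.waverec(masked, "sym8")[: temp.size]

daily = band(coeffs, range(6, level + 1))   # cD1..cD4: periods ~2-32 h
subannual = band(coeffs, range(1, 6))       # cD5..cD9: ~32-1024 h
annual = band(coeffs, {0})                  # cA9: smooth long-term trend

# Additivity: the three components sum back to the original series
print(np.allclose(daily + subannual + annual, temp))
```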
Abstract:
A modeling paradigm is proposed for covariate, variance, and working correlation structure selection in longitudinal data analysis. Appropriate selection of covariates is pertinent to correct variance modeling, and selecting the appropriate covariates and variance function is in turn vital to correlation structure selection. This leads to a stepwise model selection procedure that deploys a combination of different model selection criteria. Although these criteria share a common theoretical root based on approximating the Kullback-Leibler distance, they are designed to address different aspects of model selection and have different merits and limitations. For example, the extended quasi-likelihood information criterion (EQIC) with a covariance penalty performs well for covariate selection even when the working variance function is misspecified, but EQIC contains little information on correlation structures. The proposed model selection strategies are outlined, and a Monte Carlo assessment of their finite sample properties is reported. Two longitudinal studies are used for illustration.
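The skeleton of such a stepwise, criterion-driven procedure looks like the loop below. AIC with an ordinary least-squares fit is used as a simple stand-in for EQIC and the quasi-likelihood machinery, and the data are simulated, so this only illustrates the selection logic, not the paper's criteria:

```python
# Forward-stepwise covariate selection driven by an information criterion.
# AIC + OLS stand in for EQIC + quasi-likelihood; data are simulated.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
X = rng.normal(size=(n, 5))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=n)  # x0, x2 matter

selected, remaining = [], list(range(X.shape[1]))
best_aic = sm.OLS(y, np.ones((n, 1))).fit().aic  # intercept-only baseline

while remaining:
    scores = {j: sm.OLS(y, sm.add_constant(X[:, selected + [j]])).fit().aic
              for j in remaining}
    j_best = min(scores, key=scores.get)
    if scores[j_best] >= best_aic:       # stop when no candidate improves AIC
        break
    best_aic = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

print("selected covariates:", selected, "AIC:", round(best_aic, 1))
```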
Abstract:
Miniaturization of analytical instrumentation is attracting growing interest in response to the explosive demand for rapid yet sensitive analytical methods and low-cost, highly automated instruments for pharmaceutical and bioanalyses and environmental monitoring. Microfabrication technology in particular has enabled fabrication of low-cost microdevices with a high degree of integrated functions, such as sample preparation, chemical reaction, separation, and detection, on a single microchip. These miniaturized total chemical analysis systems (microTAS or lab-on-a-chip) can also be arrayed for parallel analyses in order to accelerate sample throughput. Other motivations include reduced sample consumption and waste production as well as increased speed of analysis. One of the most promising hyphenated techniques in analytical chemistry is the combination of a microfluidic separation chip and a mass spectrometer (MS). In this work, emerging polymer microfabrication techniques, ultraviolet lithography in particular, were exploited to develop a capillary electrophoresis (CE) separation chip that incorporates a monolithically integrated electrospray ionization (ESI) emitter for efficient coupling with MS. An epoxy photoresist, SU-8, was adopted as the structural material and characterized with respect to its physicochemical properties relevant to chip-based CE and ESI/MS, namely surface charge, surface interactions, heat transfer, and solvent compatibility. As a result, SU-8 was found to be a favorable material to substitute for the more commonly used glass and silicon in microfluidic applications. In addition, infrared (IR) thermography was introduced as a direct, non-intrusive method to examine the heat transfer and thermal gradients during microchip-CE. The IR data were validated through numerical modeling. The analytical performance of SU-8-based microchips was established for qualitative and quantitative CE-ESI/MS analysis of small drug compounds, peptides, and proteins. The CE separation efficiency was found to be similar to that of commercial glass microchips and conventional CE systems. Typical analysis times were only 30-90 s per sample, indicating feasibility for high-throughput analysis. Moreover, a mass detection limit at the low-attomole level, as low as 10^5 molecules, was achieved utilizing MS detection. The SU-8 microchips developed in this work could also be mass produced at low cost and with nearly identical performance from chip to chip. Until this work, attempts to combine CE separation with ESI in a chip-based system, amenable to batch fabrication and capable of high, reproducible analytical performance, had not been successful. Thus, the CE-ESI chip developed in this work is a substantial step toward lab-on-a-chip technology.
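The quoted detection limit can be sanity-checked with Avogadro's number: 10^5 molecules is indeed a low-attomole quantity.

```python
# Sanity check: express 1e5 molecules in moles.
N_A = 6.022e23              # Avogadro's number, molecules per mole
molecules = 1e5
moles = molecules / N_A
print(moles)                # ~1.7e-19 mol, i.e. ~0.17 attomole (1 amol = 1e-18 mol)
```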
Abstract:
In this dissertation, I present an overall methodological framework for studying linguistic alternations, focusing specifically on lexical variation in denoting a single meaning, that is, synonymy. As the practical example, I employ the synonymous set of the four most common Finnish verbs denoting THINK, namely ajatella, miettiä, pohtia and harkita 'think, reflect, ponder, consider'. As a continuation of previous work, I describe in considerable detail the extension of statistical methods from dichotomous linguistic settings (e.g., Gries 2003; Bresnan et al. 2007) to polytomous ones, that is, those concerning more than two possible alternative outcomes. The applied statistical methods are arranged into a succession of stages of increasing complexity, proceeding from univariate via bivariate to multivariate techniques. As the central multivariate method, I argue for the use of polytomous logistic regression and demonstrate its practical implementation for the studied phenomenon, thus extending the work by Bresnan et al. (2007), who applied simple (binary) logistic regression to a dichotomous structural alternation in English. The results of the various statistical analyses confirm that a wide range of contextual features across different categories are indeed associated with the use and selection of the selected THINK lexemes; however, a substantial part of these features are not exemplified in current Finnish lexicographical descriptions. The multivariate analysis results indicate that the semantic classifications of syntactic argument types are on average the most distinctive feature category, followed by overall semantic characterizations of the verb chains, and then syntactic argument types alone, with morphological features pertaining to the verb chain and extra-linguistic features relegated to the last position. In terms of the overall performance of the multivariate analysis and modeling, the prediction accuracy seems to reach a ceiling at a recall rate of roughly two-thirds of the sentences in the research corpus. The analysis of these results suggests a limit to what can be explained and determined within the immediate sentential context by applying the conventional descriptive and analytical apparatus based on currently available linguistic theories and models. The results also support Bresnan's (2007) and others' (e.g., Bod et al. 2003) probabilistic view of the relationship between linguistic usage and the underlying linguistic system, in which only a minority of linguistic choices are categorical, given the known context – represented as a feature cluster – that can be analytically grasped and identified. Instead, most contexts exhibit degrees of variation as to their outcomes, resulting in proportionate choices over longer stretches of usage in texts or speech.
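A minimal polytomous (multinomial) logistic regression can be sketched with scikit-learn; the simulated features below are hypothetical stand-ins for the contextual feature clusters extracted from the corpus, and the four outcome classes stand for the four THINK verbs:

```python
# Polytomous (multinomial) logistic regression on simulated stand-in data.
# Features and class assignments are hypothetical, for illustration only.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 400
X = rng.normal(size=(n, 6))              # hypothetical contextual features
logits = X @ rng.normal(size=(6, 4))     # 4 outcomes: the four verbs
y = logits.argmax(axis=1)                # 0=ajatella, 1=miettia, 2=pohtia, 3=harkita

model = LogisticRegression(max_iter=1000).fit(X, y)

# Predicted probability distribution over the four verbs for one context
print(model.predict_proba(X[:1]).round(3))
print("training accuracy:", model.score(X, y))
```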