908 results for Model Development
Abstract:
To simulate cropping systems, crop models must not only give reliable predictions of yield across a wide range of environmental conditions, they must also quantify water and nutrient use well, so that the status of the soil at maturity is a good representation of the starting conditions for the next cropping sequence. To assess their suitability for this task, a range of crop models currently used in Australia was tested. The models differed in their design objectives, complexity and structure. They were (i) tested on diverse, independent data sets from a wide range of environments, and (ii) their components were further evaluated with one detailed data set from a semi-arid environment. All models were coded into the cropping systems shell APSIM, which provides a common soil water and nitrogen balance. Crop development was supplied as input, so differences between simulations were caused entirely by differences in the simulation of crop growth. Under nitrogen non-limiting conditions, between 73 and 85% of the observed kernel yield variation across environments was explained by the models. This ranged from 51 to 77% under varying nitrogen supply. Water and nitrogen effects on leaf area index were predicted poorly by all models, resulting in erroneous predictions of dry matter accumulation and water use. When measured light interception was used as input, most models improved in their prediction of dry matter and yield. This test highlighted a range of compensating errors in all modelling approaches. The time course and final amount of water extraction were simulated well by two models, while others left up to 25% of potentially available soil water in the profile. Kernel nitrogen percentage was predicted poorly by all models due to its sensitivity to small dry matter changes. Yield and dry matter could be estimated adequately for a range of environmental conditions using the general concepts of radiation use efficiency and transpiration efficiency. However, leaf area and kernel nitrogen dynamics need to be improved to achieve better estimates of water and nitrogen use if such models are to be used to evaluate cropping systems. (C) 1998 Elsevier Science B.V.
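The general radiation-use-efficiency (RUE) and transpiration-efficiency (TE) concepts mentioned above can be illustrated with a minimal sketch. The following is not APSIM or any of the models tested in the paper; the extinction coefficient, RUE and TE values, and the way the two limits are combined are all assumptions chosen for illustration only.

```python
"""Illustrative sketch of estimating daily dry-matter gain from the general RUE and
TE concepts. All parameter values are assumptions, not taken from the paper."""

import math

K_EXT = 0.45          # assumed canopy light-extinction coefficient (-)
RUE = 1.3             # assumed radiation use efficiency (g DM per MJ intercepted)
TE_COEFF = 4.7        # assumed transpiration-efficiency coefficient (Pa)

def daily_dry_matter_gain(radiation_mj, lai, transpiration_mm, vpd_kpa):
    """Return g DM m-2 day-1 as the minimum of the radiation-limited and
    water-limited estimates (one common way such concepts are combined)."""
    intercepted = radiation_mj * (1.0 - math.exp(-K_EXT * lai))   # MJ m-2
    dm_radiation = RUE * intercepted                              # g m-2
    # TE concept: biomass per unit water scales inversely with vapour pressure deficit
    dm_water = TE_COEFF * (transpiration_mm * 1000.0) / (vpd_kpa * 1000.0)  # g m-2
    return min(dm_radiation, dm_water)

# Example: 20 MJ m-2 of radiation, LAI 3, 4 mm transpiration, VPD 1.5 kPa
print(round(daily_dry_matter_gain(20.0, 3.0, 4.0, 1.5), 1), "g DM m-2")
```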
Abstract:
The removal of chemicals in solution by overland flow from agricultural land has the potential to be a significant source of chemical loss where chemicals are applied to the soil surface, as in zero tillage and surface-mulched farming systems. Currently, we lack detailed understanding of the transfer mechanism between the soil solution and overland flow, particularly under field conditions. A model of solute transfer from soil solution to overland flow was developed. The model is based on the hypothesis that a solute is initially distributed uniformly throughout the soil pore space in a thin layer at the soil surface. A fundamental assumption of the model is that at the time runoff commences, any solute at the soil surface that could be transported into the soil with the infiltrating water will already have been convected away from the area of potential exchange. Solute remaining at the soil surface is therefore not subject to further infiltration and may be approximated as a layer of tracer on a plane impermeable surface. The model fitted experimental data very well in all but one trial. The model in its present form focuses on the exchange of solute between the soil solution and surface water after the commencement of runoff. Future model development requires the relationship between the mass transfer parameters of the model and the time to runoff to be defined. This would enable the model to be used for extrapolation beyond the specific experimental results of this study. The close agreement between experimental results and model simulations shows that the simple transfer equation proposed in this study has promise for estimating solute loss to surface runoff. Copyright (C) 2000 John Wiley & Sons, Ltd.
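The abstract does not reproduce the transfer equation itself; purely as an illustration, a generic first-order mass-transfer formulation shows the general idea of a thin surface layer of solute being depleted into overland flow after runoff begins. The rate constant, runoff rate, and applied mass below are hypothetical and are not taken from the study.

```python
"""Generic first-order mass-transfer sketch (not the paper's own equation): a thin
surface layer of solute is depleted into overland flow after runoff begins.
All numeric values are hypothetical."""

import math

def solute_mass_remaining(m0, k, t):
    """Mass per unit area left in the surface exchange layer at time t after
    runoff starts, assuming first-order transfer: dM/dt = -k * M."""
    return m0 * math.exp(-k * t)

def runoff_concentration(m0, k, t, runoff_rate):
    """Concentration in overland flow = transfer flux / runoff rate (per unit area)."""
    flux = k * solute_mass_remaining(m0, k, t)   # mass per area per time
    return flux / runoff_rate

# Hypothetical example: 10 g m-2 applied, k = 0.05 min-1, runoff 0.5 L m-2 min-1
for t in (0, 10, 30, 60):
    c = runoff_concentration(10.0, 0.05, t, 0.5)
    print(f"t = {t:3d} min: {c:.2f} g per litre of runoff")
```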
Abstract:
An extensive research program focused on the characterization of various complex metallurgical smelting and coal combustion slags is being undertaken. The research combines both experimental and thermodynamic modeling studies. The approach is illustrated by work on the PbO-ZnO-Al2O3-FeO-Fe2O3-CaO-SiO2 system. Experimental measurements of the liquidus and solidus have been undertaken under oxidizing and reducing conditions using equilibration, quenching, and electron probe X-ray microanalysis. The experimental program has been planned so as to obtain data for thermodynamic model development as well as for pseudo-ternary liquidus diagrams that can be used directly by process operators. Thermodynamic modeling has been carried out using the computer system FACT, which contains thermodynamic databases with over 5000 compounds and evaluated solution models. The FACT package is used for the calculation of multiphase equilibria in multicomponent systems of industrial interest. A modified quasi-chemical solution model is used for the liquid slag phase. New optimizations have been carried out, which significantly improve the accuracy of the thermodynamic models for lead/zinc smelting and coal combustion processes. Examples of experimentally determined and calculated liquidus diagrams are presented. These examples provide information of direct relevance to various metallurgical smelting and coal combustion processes.
Abstract:
Systems approaches can help to evaluate and improve the agronomic and economic viability of nitrogen application in frequently water-limited environments. This requires a sound understanding of crop physiological processes and well tested simulation models. Thus, this experiment on spring wheat aimed to better quantify water x nitrogen effects on wheat by deriving some key crop physiological parameters that have proven useful in simulating crop growth. For spring wheat grown in Northern Australia under four levels of nitrogen (0 to 360 kg N ha(-1)) and either entirely on stored soil moisture or under full irrigation, kernel yields ranged from 343 to 719 g m(-2). Yield increases were strongly associated with increases in kernel number (9150-19950 kernels m(-2)), indicating the sensitivity of this parameter to water and N availability. Total water extraction under a rain shelter was 240 mm with a maximum extraction depth of 1.5 m. A substantial amount of mineral nitrogen available deep in the profile (below 0.9 m) was taken up by the crop. This was the source of nitrogen uptake observed after anthesis. Under dry conditions this late uptake accounted for approximately 50% of total nitrogen uptake and resulted in high (>2%) kernel nitrogen percentages even when no nitrogen was applied. Anthesis LAI values were reduced by 63% under sub-optimal water supply and by 50% under sub-optimal nitrogen supply. Radiation use efficiency (RUE) based on total incident short-wave radiation was 1.34 g MJ(-1) and did not differ among treatments. The conservative nature of RUE was the result of the crop reducing leaf area rather than leaf nitrogen content (which would have affected photosynthetic activity) under these moderate levels of nitrogen limitation. The transpiration efficiency coefficient was also conservative and averaged 4.7 Pa in the dry treatments. Kernel nitrogen percentage varied from 2.08 to 2.42%. The study provides a data set and a basis to consider ways to improve simulation capabilities of water and nitrogen effects on spring wheat. (C) 1997 Elsevier Science B.V.
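A quick arithmetic check on the numbers reported above supports the stated link between yield and kernel number: if the extremes of the reported ranges are assumed to co-occur (an assumption made only for this back-of-envelope check), mean kernel weight is roughly constant across treatments.

```python
"""Consistency check using only the numbers reported in the abstract: if yield
increases track kernel number, individual kernel weight should be roughly constant
across the extremes of the reported ranges (pairing of extremes is assumed)."""

treatments = {
    # (kernel yield g m-2, kernel number m-2) at the reported extremes
    "low water/N":  (343.0, 9150),
    "high water/N": (719.0, 19950),
}

for name, (yield_g_m2, kernels_m2) in treatments.items():
    kernel_weight_mg = yield_g_m2 / kernels_m2 * 1000.0
    print(f"{name}: mean kernel weight = {kernel_weight_mg:.1f} mg")
# Both come out at roughly 36-38 mg, consistent with kernel number (not kernel size)
# driving the yield response to water and nitrogen.
```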
Abstract:
Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, many people need to code models themselves using the simulation packages available to them. Quality assurance of such models is difficult. While benchmarking problems have been developed and are available, the comparison of simulation data with that of commercial models leads only to the detection, not the isolation, of errors. Identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: firstly, possible errors are classified according to their place in the model structure and a feature matrix is established for each class of errors. Secondly, an observer is designed to generate residuals, such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals. Finally, localising the residuals in a subspace isolates the coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM1 activated sludge model. In this paper a newly coded model was verified against a known implementation. The method is also applicable to simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
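A minimal sketch of the isolation step described above (not of the observer design): given a residual vector and a feature matrix per error class, the residual is attributed to the class whose column space it lies closest to. The error classes, feature matrices and residual below are toy values, not taken from the ASM1 case study.

```python
"""Toy sketch of residual-subspace matching for coding-error isolation."""

import numpy as np

def subspace_distance(residual, feature_matrix):
    """Norm of the component of `residual` orthogonal to span(feature_matrix)."""
    q, _ = np.linalg.qr(feature_matrix)          # orthonormal basis of the subspace
    projection = q @ (q.T @ residual)
    return np.linalg.norm(residual - projection)

def isolate_error(residual, feature_matrices):
    """Return the error class whose feature subspace best explains the residual."""
    distances = {name: subspace_distance(residual, f)
                 for name, f in feature_matrices.items()}
    return min(distances, key=distances.get), distances

# Toy example with a 4-dimensional residual and two hypothetical error classes
feature_matrices = {
    "stoichiometric coefficient": np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]]),
    "kinetic rate expression":    np.array([[0.0], [0.0], [1.0], [0.5]]),
}
residual = np.array([0.02, -0.01, 0.9, 0.45])    # lies almost in the second subspace
best, dists = isolate_error(residual, feature_matrices)
print("isolated error class:", best)
```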
Abstract:
Many granulation plants operate well below design capacity, suffering from high recycle rates and even periodic instabilities. This behaviour cannot be fully predicted using the present models. The main objective of the paper is to provide an overview of the current status of model development for granulation processes and suggest future directions for research and development. The end-use of the models is focused on the optimal design and control of granulation plants using the improved predictions of process dynamics. The development of novel models involving mechanistically based structural switching methods is proposed in the paper. A number of guidelines are proposed for the selection of control relevant model structures. (C) 2002 Published by Elsevier Science B.V.
Abstract:
Seasonal climate forecasting offers potential for improving management of crop production risks in the cropping systems of NE Australia. But how is this capability best connected to management practice? Over the past decade, we have pursued participative systems approaches involving simulation-aided discussion with advisers and decision-makers. This has led to the development of discussion support software as a key vehicle for facilitating infusion of forecasting capability into practice. In this paper, we set out the basis of our approach, its implementation and preliminary evaluation. We outline the development of the discussion support software Whopper Cropper, which was designed for, and in close consultation with, public and private advisers. Whopper Cropper consists of a database of simulation output and a graphical user interface to generate analyses of risks associated with crop management options. The charts produced provide conversation pieces for advisers to use with their farmer clients in relation to the significant decisions they face. An example application, detail of the software development process and an initial survey of user needs are presented. We suggest that discussion support software is about moving beyond traditional notions of supply-driven decision support systems. Discussion support software is largely demand-driven and can complement participatory action research programs by providing cost-effective general delivery of simulation-aided discussions about relevant management actions. The critical role of farm management advisers and dialogue among key players is highlighted. We argue that the discussion support concept, as exemplified by the software tool Whopper Cropper and the group processes surrounding it, provides an effective means to infuse innovations, like seasonal climate forecasting, into farming practice. Crown Copyright (C) 2002 Published by Elsevier Science Ltd. All rights reserved.
Abstract:
Functional genomics is the systematic study of genome-wide effects of gene expression on organism growth and development, with the ultimate aim of understanding how networks of genes influence traits. Here, we use a dynamic biophysical cropping systems model (APSIM-Sorg) to generate a state space of genotype performance based on 15 genes controlling four adaptive traits and then search this space using a quantitative genetics model of a plant breeding program (QU-GENE) to simulate recurrent selection. Complex epistatic and gene X environment effects were generated for yield even though gene action at the trait level had been defined as simple additive effects. Given alternative breeding strategies that restricted either the cultivar maturity type or the drought environment type, the positive (+) alleles for 15 genes associated with the four adaptive traits were accumulated at different rates over cycles of selection. While early maturing genotypes were favored in the Severe-Terminal drought environment type, late genotypes were favored in the Mild-Terminal and Midseason drought environment types. In the Severe-Terminal environment, there was an interaction of the stay-green (SG) trait with other traits: selection for + alleles of the SG genes was delayed until + alleles for genes associated with the transpiration efficiency and osmotic adjustment traits had been fixed. Given limitations in our current understanding of trait interaction and genetic control, the results are not conclusive. However, they demonstrate how the per se complexity of gene X gene X environment interactions will challenge the application of genomics and marker-assisted selection in crop improvement for dryland adaptation.
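The gene-to-phenotype idea can be sketched with a toy model: additive gene action at the trait level, combined with trait interactions in a simple "crop model", already produces non-additive effects at the yield level. The gene-to-trait grouping, effect sizes and yield function below are hypothetical stand-ins for APSIM-Sorg and QU-GENE, not the configuration used in the study.

```python
"""Toy gene-to-phenotype sketch: additive gene action per trait, interacting traits
at the yield level. All groupings, effects and the yield function are hypothetical."""

import itertools

# 15 genes split across four adaptive traits (hypothetical grouping)
GENES_PER_TRAIT = {"phenology": 4, "stay_green": 4, "transpiration_eff": 4, "osmotic_adj": 3}
ADDITIVE_EFFECT = 0.1   # each '+' allele raises its trait by 10% (assumed)

def trait_values(genotype):
    """genotype: dict trait -> number of '+' alleles. Purely additive action per trait."""
    return {t: 1.0 + ADDITIVE_EFFECT * genotype[t] for t in GENES_PER_TRAIT}

def yield_index(genotype, terminal_drought=True):
    """Toy 'crop model': traits interact multiplicatively, and earliness only pays
    off under terminal drought - this interaction creates apparent epistasis."""
    tv = trait_values(genotype)
    water_use = tv["transpiration_eff"] * tv["osmotic_adj"]
    escape = (2.0 - tv["phenology"]) if terminal_drought else tv["phenology"]
    return water_use * tv["stay_green"] * escape

best = max(
    (dict(zip(GENES_PER_TRAIT, combo))
     for combo in itertools.product(*(range(n + 1) for n in GENES_PER_TRAIT.values()))),
    key=lambda g: yield_index(g, terminal_drought=True),
)
print("best '+' allele counts under terminal drought:", best)
```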
Abstract:
Introduction: Emotional Intelligence (EI) is considered a predictor of success more significant than other types of intelligence, and its study has received increasing attention with the aim of raising performance levels in management (Goleman, 2009). Studies of EI development in management training show contradictory results, so the potential for developing EI through specific training programmes still needs to be confirmed. Objectives: To confirm the importance of EI for health management and understand its potential for development in specific training programmes; to analyse the optional module Emotion, Leadership and Coaching in Health Management; and to build a proposed model to assess whether participation in that curricular unit increases EI levels. Methods: A literature review provided access to the relevant concepts and theories; a case study of the optional module then allowed it to be compared with existing theories. Finally, a proposed EI evaluation model with a quasi-experimental design was built. Conclusions: EI is an essential factor for success, particularly in health management, given the characteristics of the market and of health organisations. EI assessment instruments based on the measurement of abilities are those with the fewest limitations. The optional module carries little weight in the Master's course in Health Management (3.33% of the ECTS) and only 36.6% of the students took it. The structure of the module is aligned with the guidelines of other theories, but its short duration (six weeks) may be a limitation. The creation of individualised, extended tutorial support is suggested. The proposed evaluation model represents the first attempt to evaluate EI development in health management training in Portugal, and its application would improve the development of managers' competences.
Abstract:
This thesis presents a literature review of physics-based modelling of power semiconductors, followed by a performance analysis of two stochastic methods, Particle Swarm Optimization (PSO) and Simulated Annealing (SA), when used for efficient identification of the parameters of physics-based power semiconductor device models. Knowledge of these parameter values for each device is fundamental for an accurate simulation of the semiconductor's dynamic behaviour. The parameters are extracted step by step during transient simulation and play a significant role. Another aspect addressed in this thesis is that, in recent years, modelling methods for power devices based on the Ambipolar Diffusion Equation (ADE) have emerged for power diodes, offering high accuracy and low execution time, with implementation in MATLAB within a formal optimization strategy. The ADE is solved numerically under various injection conditions, and the model is developed and implemented as a subcircuit in the IsSpice simulator. Depletion-layer widths, total device area and doping level are among the parameters extracted from the model. Parameter extraction is an important part of model development. The goal of parameter extraction and optimization is to determine the device-model parameter values that minimize the differences between a set of measured characteristics and the results obtained by simulating the device model. This minimization process is often called fitting the model characteristics to the measured data. The implemented algorithm, PSO, is a promising and efficient heuristic optimization technique recently proposed by Kennedy and Eberhart, based on social behaviour. The proposed techniques are found to be robust and capable of reaching a solution that is accurate and global. Compared with the SA algorithm already implemented, the performance of the proposed technique was tested using experimental data to extract the parameters of real devices from measured I-V characteristics. To validate the model, a comparison between the results of the developed model and another previously developed model is presented.
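A minimal PSO sketch of the parameter-extraction loop described above, using the simple diode equation as a stand-in for the physics-based ambipolar-diffusion model of the thesis. The bounds, swarm settings and synthetic I-V data are illustrative only.

```python
"""Minimal PSO sketch for fitting device-model parameters to I-V data. The 'model'
here is the simple diode equation, not the thesis's ADE model; all data are synthetic."""

import math
import random

VT = 0.0259  # thermal voltage at ~300 K (V)

def diode_current(v, i_sat, ideality):
    return i_sat * (math.exp(v / (ideality * VT)) - 1.0)

def fitting_error(params, data):
    """Sum of squared relative differences between measured and simulated currents."""
    i_sat, ideality = params
    return sum(((i_meas - diode_current(v, i_sat, ideality)) / i_meas) ** 2
               for v, i_meas in data)

def pso(data, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    pos = [[random.uniform(*bounds[d]) for d in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_err = [fitting_error(p, data) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_err[i])][:]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            err = fitting_error(pos[i], data)
            if err < pbest_err[i]:
                pbest[i], pbest_err[i] = pos[i][:], err
                if err < fitting_error(gbest, data):
                    gbest = pos[i][:]
    return gbest

# Synthetic 'measurements' generated from known parameters, then recovered by PSO
true_params = (1e-12, 1.8)
data = [(v, diode_current(v, *true_params)) for v in (0.4, 0.5, 0.6, 0.65, 0.7)]
print(pso(data, bounds=[(1e-14, 1e-10), (1.0, 2.5)]))
```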
Abstract:
Background: Several researchers seek methods for the selection of homogeneous groups of animals in experimental studies, because homogeneity is an indispensable prerequisite for the randomization of treatments. The lack of robust methods that comply with statistical and biological principles is the reason why researchers resort to empirical or subjective methods, which influence their results. Objective: To develop a multivariate statistical model for the selection of a homogeneous group of animals for experimental research and to develop a computational package to apply it. Methods: The set of echocardiographic data of 115 male Wistar rats with supravalvular aortic stenosis (AoS) was used as an example of model development. Initially, the data were standardized and thus made dimensionless. Then, the variance matrix of the set was submitted to principal components analysis (PCA), aiming to reduce the parametric space and retain the relevant variability. That technique established a new Cartesian system into which the animals were allocated, and finally the confidence region (ellipsoid) was built for the profile of the animals' homogeneous responses. The animals located inside the ellipsoid were considered as belonging to the homogeneous batch; those outside the ellipsoid were considered spurious. Results: The PCA established eight descriptive axes that captured 88.71% of the accumulated variance of the data set. The allocation of the animals in the new system and the construction of the confidence region revealed six spurious animals, leaving a homogeneous batch of 109 animals. Conclusion: The biometric criterion presented proved to be effective, because it considers the animal as a whole, analyzing all measured parameters jointly, in addition to having a small discard rate.
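A sketch of the selection procedure as described in the abstract: standardize the variables, project onto principal components, and flag as spurious any animal outside a confidence ellipsoid in the reduced space. The 95% chi-square cut-off, the number of retained components and the placeholder data are assumptions; the paper's exact construction of the confidence region may differ.

```python
"""Sketch of PCA plus a confidence ellipsoid for selecting a homogeneous batch.
Cut-off, component count and data are placeholders, not the paper's values."""

import numpy as np
from scipy.stats import chi2

def homogeneous_batch(data, n_components=8, confidence=0.95):
    """data: (animals x variables). Returns a boolean mask of animals inside the ellipsoid."""
    z = (data - data.mean(axis=0)) / data.std(axis=0, ddof=1)   # standardize (dimensionless)
    cov = np.cov(z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]            # leading components
    scores = z @ eigvecs[:, order]                              # animals in PC space
    # Mahalanobis-type distance of each animal in the retained component space
    d2 = np.sum(scores**2 / eigvals[order], axis=1)
    cutoff = chi2.ppf(confidence, df=n_components)              # ellipsoid boundary
    return d2 <= cutoff

# Placeholder example: 115 animals x 12 echocardiographic variables
rng = np.random.default_rng(0)
data = rng.normal(size=(115, 12))
inside = homogeneous_batch(data)
print(f"{inside.sum()} animals in the homogeneous batch, {(~inside).sum()} flagged as spurious")
```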
Abstract:
Several population pharmacokinetic (PPK) analyses of the anticancer drug imatinib have been performed to investigate different patient populations and covariate effects. The present analysis offers a systematic qualitative and quantitative summary and comparison of those. Its primary objective was to provide useful information for evaluating the expectedness of imatinib plasma concentration measurements in the context of therapeutic drug monitoring. The secondary objective was to review clinically important concentration-effect relationships to help in evaluating the potential suitability of plasma concentration values. Nine PPK models describing total imatinib plasma concentration were identified. Parameter estimates were standardized to common covariate values whenever possible. Predicted median exposure (Cmin) was derived by simulations and ranged between models from 555 to 1388 ng/mL (grand median: 870 ng/mL and interquartile "reference" range: 520-1390 ng/mL). Covariates of potential clinical importance (up to 30% change in pharmacokinetics predicted by at least one model) included body weight, albumin, α1 acid glycoprotein, and white blood cell count. Various other covariates were included but were statistically not significant, seemed clinically less important, or were physiologically controversial. Concentration-response relationships were more important below the average reference range, and concentration-toxicity relationships above it. Therapeutic drug monitoring-guided dosage adjustment seems justified for imatinib, but a formal predictive therapeutic range remains difficult to propose in the absence of prospective target concentration intervention trials. To evaluate the expectedness of a drug concentration measurement in practice, this review allows comparison of the measurement either to the average reference range or to a specific range accounting for individual patient characteristics. For future research, external PPK model validation or meta-model development should be considered.
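How a median Cmin can be derived by simulation from a population pharmacokinetic model can be sketched as follows. The one-compartment structure, parameter values and between-subject variabilities are illustrative placeholders, not the estimates of any of the nine reviewed imatinib models.

```python
"""Sketch of simulating a median steady-state trough (Cmin) from a population PK
model. All parameters and variabilities are illustrative placeholders."""

import numpy as np

def steady_state_conc(t, dose, cl, v, ka, tau, f=1.0):
    """One-compartment, first-order absorption, steady state (standard superposition formula)."""
    ke = cl / v
    return (f * dose * ka) / (v * (ka - ke)) * (
        np.exp(-ke * t) / (1 - np.exp(-ke * tau)) -
        np.exp(-ka * t) / (1 - np.exp(-ka * tau)))

rng = np.random.default_rng(1)
n = 10_000
dose, tau = 400.0, 24.0                           # 400 mg once daily
cl = 14.0 * np.exp(rng.normal(0, 0.3, n))         # L/h, ~30% between-subject CV (assumed)
v = 250.0 * np.exp(rng.normal(0, 0.2, n))         # L (assumed)
cmin = steady_state_conc(tau, dose, cl, v, ka=0.6, tau=tau) * 1000.0   # mg/L -> ng/mL
print(f"median Cmin = {np.median(cmin):.0f} ng/mL "
      f"(IQR {np.percentile(cmin, 25):.0f}-{np.percentile(cmin, 75):.0f})")
```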
Abstract:
The Center for Transportation Research and Education (CTRE) used the traffic simulation model CORSIM to assess proposed capacity and safety improvement strategies for the U.S. 61 corridor through Burlington, Iowa. The comparison between the base and alternative models allows for evaluation of traffic flow performance under the existing conditions as well as under other design scenarios. The models also provide visualization of performance for interpretation by technical staff, public policy makers, and the public. The objectives of this project are to evaluate the use of traffic simulation models for future use by the Iowa Department of Transportation (DOT) and to develop procedures for employing simulation modeling to conduct the analysis of alternative designs. This report presents both the findings of the U.S. 61 evaluation and an overview of model development procedures. The first part of the report covers the simulation model development procedures. The simulation analysis is illustrated through the Burlington U.S. 61 corridor case study application. Part I is not intended to be a user manual but simply introductory guidelines for traffic simulation modeling. Part II of the report evaluates the proposed improvement concepts in a side-by-side comparison of the base and alternative models.
Abstract:
The diffusion of mobile telephony began in 1971 in Finland, when the first car phones, called ARP, were taken into use. Technologies changed from ARP to NMT and later to GSM. The main application of the technology, however, was voice transfer. The birth of the Internet created an open public data network and easy access to other types of computer-based services over networks. Telephones had been used as modems, but the development of cellular technologies enabled automatic access from mobile phones to the Internet. Other wireless technologies, for instance wireless LANs, were also introduced. Telephony had developed from analog to digital in fixed networks, which allowed easy integration of fixed and mobile networks. This development opened completely new functionality to computers and mobile phones. It also initiated the merger of the information technology (IT) and telecommunication (TC) industries. Despite the opportunity this created for new competition among firms, applications based on the new functionality were rare. Furthermore, technology development combined with innovation can be disruptive to industries. This research focuses on the new technology's impact on competition in the ICT industry through understanding the strategic needs and alternative futures of the industry's customers. The speed of change in the ICT industry is high, and it was therefore valuable to integrate the Dynamic Capability view of the firm into this research. Dynamic capabilities are an application of the Resource-Based View (RBV) of the firm. As is stated in the literature, strategic positioning complements the RBV. This theoretical framework leads the research to focus on three areas: customer strategic innovation and business model development, external future analysis, and process development combining these two. The theoretical contribution of the research is the development of a methodology integrating theories of the RBV, dynamic capabilities and strategic positioning. The research approach has been constructive, owing to the actual managerial problems that initiated the study. The requirement for iterative and innovative progress in the research supported the chosen approach. The study applies known methods in product development, for instance the innovation process in the Group Decision Support Systems (GDSS) laboratory and Quality Function Deployment (QFD), and combines them with known strategy analysis tools such as industry analysis and the scenario method. As the main result, the thesis presents the strategic innovation process, in which new business concepts are used to describe alternative resource configurations and scenarios as alternative competitive environments; this can be a new way for firms to achieve competitive advantage in high-velocity markets. In addition to the strategic innovation process, the study has also resulted in approximately 250 new innovations for the participating firms, reduced technology uncertainty, helped strategic infrastructural decisions in the firms, and produced a knowledge bank including data from 43 ICT and 19 paper industry firms between the years 1999 and 2004. The methods presented in this research are also applicable to other industries.
Abstract:
Software engineering is criticized as not being engineering or a 'well-developed' science at all. Software engineers seem not to know exactly how long their projects will last, what they will cost, or whether the software will work properly after release. Measurements have to be taken in software projects to improve this situation. It is of limited use to only collect metrics afterwards; the values of the relevant metrics have to be predicted, too. The predictions (i.e. estimates) form the basis for proper project management. One of the most painful problems in software projects is effort estimation. It has a clear and central effect on other project attributes like cost and schedule, and on product attributes like size and quality. Effort estimation can be used for several purposes; in this thesis only effort estimation in software projects for project management purposes is discussed. There is a short introduction to measurement issues, and some metrics relevant in the estimation context are presented. Effort estimation methods are covered quite broadly. The main new contribution of this thesis is the new estimation model that has been created. It makes use of the basic concepts of Function Point Analysis but avoids the problems and pitfalls found in that method. It is relatively easy to use and learn. Effort estimation accuracy has improved significantly after taking this model into use. A major innovation related to the new estimation model is the identified need for hierarchical software size measurement. The author of this thesis has developed a three-level solution for the estimation model. All currently used size metrics are static in nature, but this newly proposed metric is dynamic: it makes use of the increased understanding of the nature of the work as specification and design work proceeds, and thus 'grows up' along with the software project. Effort estimation model development is not possible without gathering and analyzing history data. However, there are many problems with data in software engineering; a major roadblock is the amount and quality of data available. This thesis shows some useful techniques that have been successful in gathering and analyzing the data needed. An estimation process is needed to ensure that methods are used in a proper way, that estimates are stored, reported and analyzed properly, and that they are used for project management activities. A higher-level mechanism called a measurement framework is also introduced briefly. The purpose of the framework is to define and maintain a measurement or estimation process. Without a proper framework, the estimation capability of an organization declines; it requires effort even to maintain an achieved level of estimation accuracy. Estimation results in several successive releases are analyzed. It is clearly seen that the new estimation model works and that the estimation improvement actions have been successful. The calibration of the hierarchical model is a critical activity. An example is shown to shed more light on the calibration and the model itself. There are also remarks about the sensitivity of the model. Finally, an example of usage is shown.
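The abstract does not give the estimation model itself; the sketch below only illustrates the general idea of size-based effort estimation with a hierarchical size measure that is refined (and its uncertainty reduced) as specification and design proceed. The three levels, the numbers and the effort formula are hypothetical.

```python
"""Generic illustration of size-based effort estimation with a hierarchical,
progressively refined size measure. All values and formulas are hypothetical."""

from dataclasses import dataclass

@dataclass
class SizeEstimate:
    level: str          # "initial", "specification", or "design"
    size_points: float  # functional size in (hypothetical) points
    uncertainty: float  # relative uncertainty attached to this level

# As understanding grows, the size measure is re-expressed at a finer level
# with lower uncertainty (the 'dynamic' metric idea from the abstract).
HIERARCHY = {
    "initial":       SizeEstimate("initial", 120.0, 0.50),
    "specification": SizeEstimate("specification", 150.0, 0.25),
    "design":        SizeEstimate("design", 165.0, 0.10),
}

def estimate_effort(size_points, productivity=0.8, exponent=1.05):
    """Effort in person-hours; slightly super-linear in size (assumed diseconomy of scale)."""
    return (size_points ** exponent) / productivity

for level, s in HIERARCHY.items():
    effort = estimate_effort(s.size_points)
    lo, hi = effort * (1 - s.uncertainty), effort * (1 + s.uncertainty)
    print(f"{level:13s}: {effort:6.0f} person-hours (range {lo:.0f}-{hi:.0f})")
```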