933 results for Model-In-the-loop


Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

The aim of this paper is to compare 18 reference evapotranspiration models to the standard Penman-Monteith model in the Jaboticabal, Sao Paulo, region at the following time scales: daily, 5-day, 15-day and seasonal. Five years of daily meteorological data were used for the following analyses: accuracy (mean absolute percentage error, MAPE), precision (R²) and tendency or bias (systematic error, SE). The results were also compared at the 95% probability level with Tukey's test. The Priestley-Taylor (1972) method was the most accurate for all time scales, the Tanner-Pelton (1960) method was the most accurate in the winter, and the Thornthwaite (1948) method was the most accurate of the methods that used only temperature data in their equations.
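As an illustration of the comparison statistics named above (not the paper's own code), a minimal Python sketch of MAPE, R² and systematic error for one candidate model against Penman-Monteith values might look like this; the function name and the example arrays are assumptions.

import numpy as np

def compare_to_penman_monteith(et_model, et_pm):
    """Accuracy (MAPE), precision (R^2) and bias (SE) of a candidate
    reference-evapotranspiration series against Penman-Monteith values."""
    et_model, et_pm = np.asarray(et_model, float), np.asarray(et_pm, float)
    mape = 100.0 * np.mean(np.abs(et_model - et_pm) / et_pm)   # accuracy
    r2 = np.corrcoef(et_model, et_pm)[0, 1] ** 2               # precision
    se = np.mean(et_model - et_pm)                              # systematic error (bias)
    return mape, r2, se

# Hypothetical 5-day ET totals (mm) for one model and for Penman-Monteith
print(compare_to_penman_monteith([20.1, 22.4, 18.9], [19.5, 23.0, 18.2]))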

Relevance:

100.00%

Publisher:

Abstract:

We use Hirota's method, formulated as a recursive scheme, to construct a complete set of soliton solutions for the affine Toda field theory based on an arbitrary Lie algebra. Our solutions include a new class of solitons connected with two different types of degeneracies encountered in Hirota's perturbation approach. We also derive a universal mass formula for all Hirota's solutions to the affine Toda model, valid for all underlying Lie groups. Embedding of the affine Toda model in the conformal affine Toda model plays a crucial role in this analysis.
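For readers unfamiliar with the recursive scheme mentioned above, the generic Hirota construction (shown here only as an illustration, not in the paper's affine Toda notation) expands each tau function in a formal parameter,

\[
\tau = 1 + \epsilon\,\tau^{(1)} + \epsilon^{2}\,\tau^{(2)} + \cdots,
\qquad
\tau^{(1)} = \sum_{i=1}^{N} e^{\eta_i},
\qquad
\eta_i = k_i x + \omega_i t + \delta_i ,
\]

with the bilinear (Hirota) equation fixing the dispersion relation \(\omega_i(k_i)\) and the higher terms \(\tau^{(n)}\) order by order; for an N-soliton solution the expansion terminates at order \(\epsilon^{N}\).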

Relevance:

100.00%

Publisher:

Abstract:

The lattice Boltzmann method is a popular approach for simulating hydrodynamic interactions in soft matter and complex fluids. The solvent is represented on a discrete lattice whose nodes are populated by particle distributions that propagate on the discrete links between the nodes and undergo local collisions. On large length and time scales, the microdynamics leads to a hydrodynamic flow field that satisfies the Navier-Stokes equation. In this thesis, several extensions to the lattice Boltzmann method are developed.

In complex fluids, for example suspensions, Brownian motion of the solutes is of paramount importance. However, it cannot be simulated with the original lattice Boltzmann method because the dynamics is completely deterministic. It is possible, though, to introduce thermal fluctuations in order to reproduce the equations of fluctuating hydrodynamics. In this work, a generalized lattice gas model is used to systematically derive the fluctuating lattice Boltzmann equation from statistical mechanics principles. The stochastic part of the dynamics is interpreted as a Monte Carlo process, which is then required to satisfy the condition of detailed balance. This leads to an expression for the thermal fluctuations which implies that it is essential to thermalize all degrees of freedom of the system, including the kinetic modes. The new formalism guarantees that the fluctuating lattice Boltzmann equation is simultaneously consistent with both fluctuating hydrodynamics and statistical mechanics. This establishes a foundation for future extensions, such as the treatment of multi-phase and thermal flows.

Microfluidics forms an important range of applications for the lattice Boltzmann method. Fostered by the "lab-on-a-chip" paradigm, there is an increasing need for computer simulations that can complement the achievements of theory and experiment. Microfluidic systems are characterized by a large surface-to-volume ratio and, therefore, boundary conditions are of special relevance. On the microscale, the standard no-slip boundary condition used in hydrodynamics has to be replaced by a slip boundary condition. In this work, a boundary condition for lattice Boltzmann is constructed that allows the slip length to be tuned by a single model parameter. Furthermore, a conceptually new approach for constructing boundary conditions is explored, in which the reduced symmetry at the boundary is explicitly incorporated into the lattice model. The lattice Boltzmann method is systematically extended to the reduced-symmetry model. For a Poiseuille flow in a plane channel, it is shown that a special choice of the collision operator is required to reproduce the correct flow profile. This systematic approach sheds light on the consequences of the reduced symmetry at the boundary and leads to a deeper understanding of boundary conditions in the lattice Boltzmann method. This can help to develop improved boundary conditions that lead to more accurate simulation results.
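The propagate-and-collide microdynamics described above can be made concrete with a minimal sketch. The Python code below is a generic single-relaxation-time (BGK) D2Q9 update with periodic boundaries, not the thesis' fluctuating or reduced-symmetry formulation; the relaxation time tau and the grid size are assumed for illustration.

import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8                    # assumed relaxation time; kinematic viscosity ~ (tau - 0.5)/3
nx, ny = 64, 32              # assumed grid size

f = np.ones((9, nx, ny)) * w[:, None, None]   # start from rest (rho = 1, u = 0)

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium distributions."""
    feq = np.empty((9, *rho.shape))
    usq = ux**2 + uy**2
    for i in range(9):
        cu = c[i, 0]*ux + c[i, 1]*uy
        feq[i] = w[i] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
    return feq

def step(f):
    """One lattice Boltzmann update: collide locally, then stream along the links."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau           # BGK collision
    for i in range(9):                                    # streaming (periodic)
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f

for _ in range(100):
    f = step(f)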

Relevance:

100.00%

Publisher:

Abstract:

Simulations of forest stand dynamics in modelling frameworks such as the Forest Vegetation Simulator (FVS) are diameter driven, so the diameter or basal area increment model needs special attention. This dissertation critically evaluates diameter and basal area increment models and modelling approaches in the context of the Great Lakes region of the United States and Canada. A set of related studies is presented that critically evaluates the sub-model for change in individual tree basal diameter used in the Forest Vegetation Simulator (FVS), a dominant forestry model in the Great Lakes region. Various historical implementations of the STEMS (Stand and Tree Evaluation and Modeling System) family of diameter increment models, including the current public release of the Lake States variant of FVS (LS-FVS), were tested for the 30 most common tree species using data from the Michigan Forest Inventory and Analysis (FIA) program. The results showed that the current public release of the LS-FVS diameter increment model over-predicts 10-year diameter increment by 17% on average. The study also affirms that a simple adjustment factor expressed as a function of a single predictor, dbh (diameter at breast height), as used in past versions, provides an inadequate correction of model prediction bias.

In order to re-engineer the basal diameter increment model, the historical, conceptual and philosophical differences among the individual tree increment model families and their modelling approaches were analyzed and discussed. Two underlying conceptual approaches to diameter or basal area increment modelling have often been used: the potential-modifier (POTMOD) and composite (COMP) approaches, exemplified by the STEMS/TWIGS and Prognosis models, respectively. It is argued that both approaches essentially use a similar base function and that neither is conceptually different from a biological perspective, even though their model forms look different. No matter which modelling approach is used, the base function is the foundation of an increment model. Two base functions, gamma and Box-Lucas, were identified as candidate base functions for forestry applications. A comparative analysis of empirical fits showed that the quality of fit is essentially similar and that both are sufficiently detailed and flexible for forestry applications. The choice between the two base functions for modelling diameter or basal area increment is largely a matter of preference; however, the gamma base function may be preferred over the Box-Lucas, as it fits periodic increment data in both linear and nonlinear composite model forms.

Finally, the utility of site index as a predictor variable is criticized: it has been widely used in models for complex, mixed-species forest stands even though it is not well suited for this purpose. An alternative to site index in an increment model was explored using site index, a combination of climate variables and Forest Ecosystem Classification (FEC) ecosites, and data from the Province of Ontario, Canada. The results showed that a combination of climate and FEC ecosite variables can replace site index in the diameter increment model.
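To make the notion of a "base function" concrete, the sketch below fits a gamma-shaped increment curve of the assumed form b1 * dbh^b2 * exp(-b3 * dbh) to hypothetical periodic-increment data. The functional form, variable names and data are illustrative assumptions, not the dissertation's fitted models.

import numpy as np
from scipy.optimize import curve_fit

def gamma_base(dbh, b1, b2, b3):
    """Gamma-shaped base function: increment rises, peaks, then declines with tree size."""
    return b1 * dbh**b2 * np.exp(-b3 * dbh)

# Hypothetical 10-year diameter increments (cm) observed at various dbh (cm)
dbh = np.array([5, 10, 15, 20, 30, 40, 50], dtype=float)
inc = np.array([1.2, 2.0, 2.4, 2.5, 2.2, 1.7, 1.2])

params, _ = curve_fit(gamma_base, dbh, inc, p0=[0.5, 1.0, 0.05])
print("fitted b1, b2, b3:", params)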

Relevance:

100.00%

Publisher:

Abstract:

Retinal degenerative diseases, e.g. retinitis pigmentosa, with resulting photoreceptor damage account for the majority of vision loss in the industrialized world. Animal models are of pivotal importance for studying such diseases. In this regard, the photoreceptor-specific toxin N-methyl-N-nitrosourea (MNU) has been widely used in rodents to pharmacologically induce retinal degeneration. Previously, we established an MNU-induced retinal degeneration model in the zebrafish, another popular model system in visual research. A fascinating difference from mammals is the persistent neurogenesis in the adult zebrafish retina and its regeneration after damage. To quantify this observation, we employed visual acuity measurements in the adult zebrafish, using the optokinetic reflex to follow functional changes in non-anesthetized fish. This was supplemented with histology as well as immunohistochemical staining for apoptosis (TUNEL) and proliferation (PCNA) to correlate the developing morphological changes. In summary, apoptosis of photoreceptors occurs three days after MNU treatment and is followed by a marked reduction of cells in the outer nuclear layer (ONL). Thereafter, proliferation of cells in the inner nuclear layer (INL) and ONL is observed. Herein, we show that not only complete histological but also functional regeneration occurs over a time course of 30 days. We illustrate, in video format, the methods used to quantify and follow zebrafish retinal de- and regeneration after MNU treatment.

Relevance:

100.00%

Publisher:

Abstract:

With the advancement of Information and Communication Technology (ICT), which favors increasingly fast, easy, and accessible communication for all and which can reach large groups of people, there have been changes in our society in recent years that have modified the way we interact, communicate and transmit information. Access to information is possible not only through computers situated in a fixed location; new mobile devices make it available wherever the user happens to be. Information now "travels" with the user. These forms of communication, transmission and access to information have also affected the way business is conceived and managed. To the new forms of business that the Internet has brought is now added the concept of companies in Cloud Computing (ClC). ClC technology is based on the supply and consumption of services on demand, paid per use, and it gives the concept of business management a 180-degree turn. Small and large businesses may use the latest developments in ICT to manage their organizations without the need for expensive investments. This enables enterprises to focus more specifically on the scope of their business, leaving ICT control to the experts. We believe that education can also, and should, benefit from these new philosophies. Due to the global economic crisis in general and in each country in particular, economic cutbacks have come to most universities; these are seen in the need to raise tuition rates, which means that increasingly fewer students have the opportunity to pursue higher education. In this paper we propose using ClC technologies in universities and discuss the advantages they can provide to both universities and students. For the universities, we present two focuses: one, to reorganize university ICT structures with the ClC philosophy, and the other, to extend the offer of university education with education on demand. Regarding the former, we propose to use public or private clouds, to reuse resources across the education community, to save costs on infrastructure investment, upgrades and maintenance of ICT, and to pay only for what is used, with the ability to scale according to needs. Regarding the latter, we propose an educational model in the ClC to increase current university offerings, using educational units in the form of low-cost services where students pay only for the units consumed on demand. Students could study at any university in the world (virtually), from anywhere, without travel costs in money and time, and, most importantly, paying only for what they consume. We think that this proposal of education on demand may represent a great change in the current educational model, because strict registration deadlines disappear, as does the problem of economically disadvantaged students, who will not have to raise large amounts of money for an annual tuition. It will also reduce the problem of losing the money invested in an enrollment when a student drops out. In summary, we think this proposal is interesting for both universities and students; we aim for "higher education from anywhere, with access from any mobile device, at any time, without requiring large investments for students, and with reuse and optimization of resources by universities; cost by consumption and consumption by service". We argue for a Universal University: "wisdom and knowledge accessible to all".

Relevance:

100.00%

Publisher:

Abstract:

Subunits a and c of Fo are thought to cooperatively catalyze proton translocation during ATP synthesis by the Escherichia coli F1Fo ATP synthase. Optimizing mutations in subunit a at residues A217, I221, and L224 improve the partial function of the cA24D/cD61G double mutant and, on this basis, these three residues were proposed to lie on one face of a transmembrane helix of subunit a, which then interacted with the transmembrane helix of subunit c anchoring the essential aspartyl group. To test this model, in the present work Cys residues were introduced into the second transmembrane helix of subunit c and the predicted fourth transmembrane helix of subunit a. After treating the membrane vesicles of these mutants with Cu(1,10-phenanthroline)2SO4 at 0°, 10°, or 20°C, strong a–c dimer formation was observed at all three temperatures in membranes of 7 of the 65 double mutants constructed, i.e., in the aS207C/cI55C, aN214C/cA62C, aN214C/cM65C, aI221C/cG69C, aI223C/cL72C, aL224C/cY73C, and aI225C/cY73C double mutant proteins. The pattern of cross-linking aligns the helices in a parallel fashion over a span of 19 residues with the aN214C residue lying close to the cA62C and cM65C residues in the middle of the membrane. Lesser a–c dimer formation was observed in nine other double mutants after treatment at 20°C in a pattern generally supporting that indicated by the seven landmark residues cited above. Cross-link formation was not observed between helix-1 of subunit c and helix-4 of subunit a in 19 additional combinations of doubly Cys-substituted proteins. These results provide direct chemical evidence that helix-2 of subunit c and helix-4 of subunit a pack close enough to each other in the membrane to interact during function. The proximity of helices supports the possibility of an interaction between Arg210 in helix-4 of subunit a and Asp61 in helix-2 of subunit c during proton translocation, as has been suggested previously.

Relevance:

100.00%

Publisher:

Abstract:

Plant breeders use many different breeding methods to develop superior cultivars. However, it is difficult, cumbersome, and expensive to evaluate the performance of a breeding method or to compare the efficiencies of different breeding methods within an ongoing breeding program. To facilitate comparisons, we developed a QU-GENE module called QuCim that can simulate a large number of breeding strategies for self-pollinated species. The wheat breeding strategy Selected Bulk, used by CIMMYT's wheat breeding program, was defined in QuCim as an example of how this is done. This selection method was simulated in QuCim to investigate the effects of deviations from the additive genetic model, in the form of dominance and epistasis, on selection outcomes. The simulation results indicate that the partial dominance model does not greatly influence genetic advance compared with the pure additive model. Genetic advance in genetic systems with overdominance and epistasis is slower than when gene effects are purely additive or partially dominant. The additive gene effect is an appropriate indicator of the change in gene frequency following selection when epistasis is absent. In the absence of epistasis, the additive variance decreases rapidly with selection; however, after several cycles of selection it remains relatively fixed when epistasis is present. The variance from partial dominance is relatively small and therefore hard to detect from the covariances among half sibs and among full sibs. The dominance variance from the overdominance model can be identified successfully, but it does not change significantly, which confirms that overdominance cannot be utilized in an inbred breeding program. QuCim is an effective tool to compare selection strategies and to validate some theories in quantitative genetics.
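QuCim explores the effect of dominance and epistasis on selection response by stochastic simulation. As a purely illustrative sketch (not QuCim or QU-GENE code), the following simulates truncation selection on a purely additive multi-locus trait and tracks the change in favourable-allele frequency across cycles; the population size, locus count, heritability and selected fraction are assumed.

import numpy as np

rng = np.random.default_rng(1)
n_ind, n_loci, h2, sel_frac = 500, 20, 0.5, 0.1   # assumed parameters

# Diploid genotypes coded as 0/1/2 copies of the favourable allele, effect +1 per copy
geno = rng.binomial(2, 0.2, size=(n_ind, n_loci))

for cycle in range(5):
    g = geno.sum(axis=1).astype(float)                          # additive genotypic value
    env_sd = np.sqrt(g.var() * (1 - h2) / h2) if g.var() > 0 else 0.0
    pheno = g + rng.normal(0, env_sd, n_ind)                    # phenotype = genotype + environment
    parents = geno[np.argsort(pheno)[-int(sel_frac * n_ind):]]  # truncation selection
    # Random mating among selected parents: each offspring draws one gamete from each parent
    dads = parents[rng.integers(len(parents), size=n_ind)]
    mums = parents[rng.integers(len(parents), size=n_ind)]
    geno = rng.binomial(1, dads / 2) + rng.binomial(1, mums / 2)
    print(f"cycle {cycle + 1}: mean favourable-allele frequency = {geno.mean() / 2:.3f}")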

Relevance:

100.00%

Publisher:

Abstract:

This thesis is based upon a case study of the introduction of automated production technologies at the Longbridge plant of British Leyland in the period 1978 to 1980. The investment in automation was part of an overall programme of modernization to manufacture the new 'Mini Metro' model. In the first section of the thesis, the different theoretical perspectives on technological change are discussed. Particular emphasis is placed upon the social role of management as the primary controllers of technological change. Their actions are seen to be oriented towards the overall strategy of the firm, integrating the firm's competitive strategy with production methods and techniques. This analysis is grounded in an examination of British Leyland's strategies during the 1970s. The greater part of the thesis deals with the efforts made by management to secure their strategic objectives in the process of technological change against the conflicting claims of their work-force. Examination of these efforts is linked to the development of industrial relations conflict at Longbridge and in British Leyland as a whole. Emphasis is placed upon the struggle between management, in pursuit of their version of efficiency, and the trade unions, in defence of job controls and demarcations. The thesis concludes that the process of technological change in the motor industry is controlled by social forces, with the introduction of new technologies being closely intertwined with management's political relations with the trade unions.

Relevance:

100.00%

Publisher:

Abstract:

How are innovative new business models established if organizations constantly compare themselves against existing criteria and expectations? The objective is to address this question from the perspective of innovators and their ability to redefine established expectations and evaluation criteria. The research questions ask whether there are discernible patterns of discursive action through which innovators theorize institutional change, and what role such theorizations play in mobilizing support and realizing change projects. These questions are investigated through a case study on a critical area of enterprise computing software, Java application servers. In the present case, business practices and models were already well established among incumbents, with critical market areas allocated to a few dominant firms. Fringe players started experimenting with a new business approach of selling services around freely available open-source application servers. While most new players struggled, one new entrant succeeded in leading incumbents to adopt and compete on the new model. The case demonstrates that innovative and substantially new models and practices are established in organizational fields when innovators are able to redefine expectations and evaluation criteria within the field. The study addresses the theoretical paradox of embedded agency: actors who are embedded in prevailing institutional logics and structures find it hard to perceive potentially disruptive opportunities that fall outside existing ways of doing things, and changing those logics and structures requires strategic and institutional work aimed at overcoming barriers to innovation. The study addresses this problem through the lens of (new) institutional theory, and its discourse methodology traces the process through which innovators were able to establish a new social and business model in the field.

Relevance:

100.00%

Publisher:

Abstract:

Most prior new product diffusion (NPD) models do not specifically consider the role of the business model in the process. However, the context of NPD in today's market has been changed dramatically by the introduction of new business models. Through reinterpretation and extension, this paper empirically examines the feasibility of applying Bass-type NPD models to products that are commercialized by different business models. More specifically, the results and analysis of this study consider the subscription business model for service products, the freemium business model for digital products, and a pre-paid and post-paid business model that is widely used by mobile network providers. The paper offers new insights derived from implementing the models in real-life cases. It also highlights three themes for future research.
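For reference, the classic Bass diffusion curve that "Bass-type" models extend gives cumulative adoption in closed form from the innovation coefficient p, the imitation coefficient q and the market potential m. The sketch below is an illustration only; the parameter values are assumptions, not estimates from the paper's cases.

import numpy as np

def bass_cumulative_adopters(t, p, q, m):
    """Closed-form Bass model: cumulative adopters m * F(t) at time t."""
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

t = np.arange(0, 11)                        # e.g. years since launch
print(bass_cumulative_adopters(t, p=0.03, q=0.38, m=100_000))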

Relevance:

100.00%

Publisher:

Abstract:

This paper describes the use of a formal optimisation procedure to optimise a plug-in hybrid electric bus in two different case studies, each targeting a different performance criterion: minimum journey cost and maximum battery life. The approach is to choose a commercially available vehicle and seek to improve its performance by varying key design parameters. Central to this approach is the ability to develop a representative backward-facing model of the vehicle in MATLAB/Simulink along with appropriate optimisation objective and penalty functions, the penalty functions being the margin by which a particular design fails to meet the performance specification. The model is validated against data collected from an actual vehicle and is used to estimate the vehicle performance parameters in a model-in-the-loop process within an optimisation routine. For the purposes of this paper, the journey cost/battery life over a drive cycle is optimised whilst other performance indices are met (or exceeded). Among the available optimisation methods, Powell's method and Simulated Annealing are adopted. The results show this method to be a valid alternative approach to vehicle powertrain optimisation. © 2012 IEEE.
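The model-in-the-loop arrangement described above can be sketched generically: an optimiser repeatedly evaluates a vehicle model over a drive cycle and adds penalties for any unmet performance targets. The code below only illustrates that structure; the simulate_drive_cycle stub, parameter names, targets and penalty weight are all assumptions, not the paper's MATLAB/Simulink model.

import numpy as np
from scipy.optimize import minimize

def simulate_drive_cycle(params):
    """Stand-in for the backward-facing vehicle model: returns journey cost and an
    acceleration time for a given design (purely illustrative relationships)."""
    battery_kwh, final_drive = params
    journey_cost = 10.0 / battery_kwh + 0.2 * final_drive        # toy relationship
    accel_time = 15.0 - 1.5 * final_drive + 0.05 * battery_kwh   # toy relationship
    return journey_cost, accel_time

def objective(params):
    """Journey cost plus a penalty proportional to the shortfall against the target."""
    cost, accel_time = simulate_drive_cycle(params)
    target_accel = 12.0                                 # assumed performance target (s)
    penalty = 100.0 * max(0.0, accel_time - target_accel)
    return cost + penalty

result = minimize(objective, x0=[20.0, 3.0], method="Powell",
                  bounds=[(5.0, 40.0), (2.0, 5.0)])
print(result.x, result.fun)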

Relevance:

100.00%

Publisher:

Abstract:

The standard highway assignment model in the Florida Standard Urban Transportation Modeling Structure (FSUTMS) is based on the equilibrium traffic assignment method. This method involves running several iterations of all-or-nothing capacity-restraint assignment with an adjustment of travel time to reflect delays encountered in the associated iteration. The iterative link time adjustment process is accomplished through the Bureau of Public Roads (BPR) volume-delay equation. Since FSUTMS' traffic assignment procedure outputs daily volumes and the input capacities are given in hourly volumes, it is necessary to convert the hourly capacities to their daily equivalents when computing the volume-to-capacity ratios used in the BPR function. The conversion is accomplished by dividing the hourly capacity by a factor called the peak-to-daily ratio, referred to as CONFAC in FSUTMS. The ratio is computed as the highest hourly volume of a day divided by the corresponding total daily volume.

While several studies have indicated that CONFAC is a decreasing function of the level of congestion, a constant value is used for each facility type in the current version of FSUTMS. This ignores the different congestion level associated with each roadway and is believed to be one of the culprits of traffic assignment errors. Traffic count data from across the state of Florida were used to calibrate CONFACs as a function of a congestion measure using the weighted least squares method. The calibrated functions were then implemented in FSUTMS through a procedure that takes advantage of the iterative nature of FSUTMS' equilibrium assignment method.

The assignment results based on constant and variable CONFACs were then compared against the ground counts for three selected networks. It was found that the accuracy of the two assignments was not significantly different and that the hypothesized improvement in assignment results from the variable CONFAC model was not empirically evident. It was recognized that many other factors beyond the scope and control of this study could contribute to this finding. It was recommended that further studies focus on the use of the variable CONFAC model with recalibrated parameters for the BPR function and/or with other forms of volume-delay functions.
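As a worked illustration of the conversion described above, a link's congested travel time can be computed from a daily assigned volume and an hourly capacity divided by CONFAC. The BPR coefficients 0.15 and 4 below are the commonly used defaults, assumed here because the abstract does not give the calibrated values; the example link data are hypothetical.

def bpr_travel_time(free_flow_time, daily_volume, hourly_capacity, confac,
                    alpha=0.15, beta=4.0):
    """BPR volume-delay function with the hourly capacity converted to a daily
    equivalent by dividing by the peak-to-daily ratio (CONFAC)."""
    daily_capacity = hourly_capacity / confac     # e.g. 1800 veh/h / 0.09 = 20000 veh/day
    vc_ratio = daily_volume / daily_capacity
    return free_flow_time * (1.0 + alpha * vc_ratio ** beta)

# Hypothetical link: 2-minute free-flow time, 18,000 veh/day, 1,800 veh/h capacity
print(bpr_travel_time(2.0, 18_000, 1_800, confac=0.09))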

Relevance:

100.00%

Publisher:

Abstract:

This thesis considers Eliot's critical writing from the late 1910s until the mid-1930s in the light of his PhD thesis, Knowledge and Experience in the Philosophy of F. H. Bradley, and a range of unpublished material: T. S. Eliot's Philosophical Essays and Notes (1913-4) in the Hayward Bequest (King's College, Cambridge University); T. S. Eliot's Family Papers in the T. S. Eliot Collection at the Houghton Library (Harvard University); and items from the Harvard University Archives at the Pusey Library. The thesis offers a comprehensive view of Eliot's critical development throughout this important period. It starts by considering The Sacred Wood's ambivalence towards the metaphysical philosophy of F. H. Bradley and Eliot's apparent adoption of a scientific method under the influence of Bertrand Russell. It will be argued that Eliot uses rhetorical strategies which simultaneously subvert the method he is propounding and which set the tone for an assessment of his criticism throughout the 1920s. His indecision, in this period, about the label 'Metaphysical' for some poets of the seventeenth century reveals the persistence of the philosophical thought he apparently rejects in 1916, when he chooses not to pursue a career in philosophy at Harvard. This rhetorical tactic achieves its fulfilment in Dante (1929), where Eliot finds a model in the medieval allegorical method and 'philosophical' poetry. Allegory is also examined in connection with the evaluation of Eliot's critical writings themselves, to determine, for instance, the figurative dimension of his early scientific vocabulary and to uncover metaphysical residues he had explicitly disowned but would later embrace. Finally, it is suggested that the hermeneutics of allegory is historical, and it is used here to test the relationship between Eliot's early and later critical writings, that is, the early physics and the later metaphysics.