953 results for model complexity
Abstract:
We review the study of flower color polymorphisms in the morning glory as a model for the analysis of adaptation. The pathway involved in the determination of flower color phenotype is traced from the molecular and genetic levels to the phenotypic level. Many of the genes that determine the enzymatic components of flavonoid biosynthesis are redundant, but, despite this complexity, it is possible to associate discrete floral phenotypes with individual genes. An important finding is that almost all of the mutations that determine phenotypic differences are the result of transposon insertions. Thus, the flower color diversity seized on by early human domesticators of this plant is a consequence of the rich variety of mobile elements that reside in the morning glory genome. We then consider a long history of research aimed at uncovering the ecological fate of these various flower phenotypes in the southeastern U.S. A large body of work has shown that insect pollinators discriminate against white phenotypes when white flowers are rare in populations. Because the plant is self-compatible, pollinator bias causes an increase in self-fertilization in white maternal plants, which should lead to an increase in the frequency of white genes, according to modifier gene theory. Studies of geographical distributions indicate other, as yet undiscovered, disadvantages associated with the white phenotype. The ultimate goal of connecting ecology to molecular genetics through the medium of phenotype is yet to be attained, but this approach may represent a model for analyzing the translation between these two levels of biological organization.
Abstract:
We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events but have not been found to show small event complexity like the self-similar (power law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale or for which the numerical procedure imposes an abrupt strength drop at the onset of slip have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size.
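As a rough illustration of the discreteness criterion discussed in this abstract, the following sketch (Python) estimates a slip-weakening nucleation size h* from a common order-of-magnitude formula and checks whether a given cell size resolves it. The formula's prefactor and every parameter value are illustrative assumptions, not quantities taken from the paper.

```python
# Rough estimate of the nucleation (coherent slip patch) size h* for a
# slip-weakening fault, and a check of whether a numerical grid resolves it.
# All numbers below are illustrative assumptions, not values from the paper.

def nucleation_size(shear_modulus, slip_weakening_distance, strength_drop, c=1.0):
    """One common order-of-magnitude estimate: h* ~ c * mu * d_c / (tau_p - tau_r)."""
    return c * shear_modulus * slip_weakening_distance / strength_drop

mu = 30e9        # shear modulus [Pa]
d_c = 0.01       # slip-weakening distance [m]
dtau = 10e6      # strength drop tau_p - tau_r [Pa]
dx = 100.0       # numerical cell size [m]

h_star = nucleation_size(mu, d_c, dtau)
print(f"h* ~ {h_star:.0f} m, grid dx = {dx:.0f} m")

# In the paper's terminology: dx << h* gives a well-resolved continuum model;
# dx > h* (or h* = 0) makes the model "inherently discrete", the class that
# can produce Gutenberg-Richter-like small-event complexity.
print("inherently discrete" if dx > h_star else "continuum-resolved")
```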
Abstract:
We study a simple antiplane fault of finite length embedded in a homogeneous isotropic elastic solid to understand the origin of seismic source heterogeneity in the presence of nonlinear rate- and state-dependent friction. All the mechanical properties of the medium and friction are assumed homogeneous. Friction includes a characteristic length that is longer than the grid size so that our models have a well-defined continuum limit. Starting from a heterogeneous initial stress distribution, we apply a slowly increasing uniform stress load far from the fault and we simulate the seismicity for a few thousand events. The style of seismicity produced by this model is determined by a control parameter associated with the degree of rate dependence of friction. For classical friction models with rate-independent friction, no complexity appears and seismicity is perfectly periodic. For weakly rate-dependent friction, large ruptures are still periodic, but small seismicity becomes increasingly nonstationary. When friction is highly rate-dependent, seismicity becomes nonperiodic and ruptures of all sizes occur inside the fault. Highly rate-dependent friction destabilizes the healing process, producing premature healing of slip and partial stress drop. Partial stress drop produces large variations in the state of stress that, in turn, produce earthquakes of different sizes. Similar results have been found by other authors using the Burridge and Knopoff model. We conjecture that all models in which the static stress drop is only a fraction of the dynamic stress drop produce stress heterogeneity.
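For readers unfamiliar with rate- and state-dependent friction with a characteristic slip distance, the sketch below evaluates the standard Dieterich-Ruina "aging" form, a common concrete instance of the class of laws discussed above. It is not claimed to be the exact law or parameter set used in the paper; all values are illustrative.

```python
import numpy as np

# Minimal sketch of a rate- and state-dependent friction law with a
# characteristic slip distance d_c (Dieterich-Ruina "aging" form).
# Illustrative parameters only; not the law or values used in the paper.

a, b = 0.010, 0.015              # rate and state sensitivity (b > a: velocity weakening)
mu0, v0, d_c = 0.6, 1e-6, 1e-3   # reference friction, reference slip rate [m/s], slip distance [m]
sigma_n = 50e6                   # normal stress [Pa]

def friction_stress(v, theta):
    """Shear strength tau = sigma_n * [mu0 + a ln(v/v0) + b ln(v0 theta / d_c)]."""
    return sigma_n * (mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / d_c))

def theta_rate(v, theta):
    """Aging-law state evolution: d(theta)/dt = 1 - v*theta/d_c."""
    return 1.0 - v * theta / d_c

# At steady state (theta = d_c / v) the strength decreases with slip rate when
# b > a, i.e. the fault is velocity weakening, the regime that destabilizes
# healing and produces partial stress drops in the simulations described above.
for v in (1e-6, 1e-4, 1e-2):
    tau_ss = friction_stress(v, d_c / v)
    print(f"v = {v:.0e} m/s  ->  steady-state strength {tau_ss/1e6:.2f} MPa")
```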
Abstract:
We summarize recent evidence that models of earthquake faults with dynamically unstable friction laws but no externally imposed heterogeneities can exhibit slip complexity. Two models are described here. The first is a one-dimensional model with velocity-weakening stick-slip friction; the second is a two-dimensional elastodynamic model with slip-weakening friction. Both exhibit small-event complexity and chaotic sequences of large characteristic events. The large events in both models are composed of Heaton pulses. We argue that the key ingredients of these models are reasonably accurate representations of the properties of real faults.
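The sketch below is a toy single-block spring-slider with a velocity-weakening kinetic friction law, a minimal instance of the "dynamically unstable friction, no imposed heterogeneity" ingredient described above. The weakening form F_k(v) = F0 / (1 + v/v_c) and all parameter values are illustrative assumptions, not the laws or values used in the paper's one- and two-dimensional models.

```python
import numpy as np

# Toy single-block spring-slider with velocity-weakening kinetic friction.
# Illustrative only; not the models of the paper.

m, k, v_plate = 1.0, 1.0, 1e-3        # block mass, spring stiffness, loading rate
F_static, F0, v_c = 1.0, 0.9, 0.1     # static strength, kinetic level, weakening velocity
dt, t_end = 1e-2, 5000.0

x = v = t = 0.0
events = []                            # (time, slip) of completed stick-slip events
x_at_onset = 0.0

while t < t_end:
    spring = k * (v_plate * t - x)     # elastic load from the slowly driven spring
    if v == 0.0 and spring < F_static:
        pass                           # stuck phase: static strength not yet exceeded
    else:
        if v == 0.0:
            x_at_onset = x             # slip event begins
        a = (spring - F0 / (1.0 + v / v_c)) / m
        v += a * dt
        if v <= 0.0:                   # block decelerated to rest: event ends
            events.append((t, x - x_at_onset))
            v = 0.0
        x += v * dt
    t += dt

print(len(events), "stick-slip events; slips:", [round(s, 3) for _, s in events])
```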
Abstract:
Conceptual frameworks of dryland degradation commonly include ecohydrological feedbacks between landscape spatial organization and resource loss, so that decreasing cover and size of vegetation patches result in higher water and soil losses, which lead to further vegetation loss. However, the impacts of these feedbacks on dryland dynamics in response to external stress have barely been tested. Using a spatially explicit model, we represented feedbacks between vegetation pattern and landscape resource loss by establishing a negative dependence of plant establishment on the connectivity of runoff-source areas (e.g., bare soils). We assessed the impact of various feedback strengths on the response of dryland ecosystems to changing external conditions. In general, for a given external pressure, these connectivity-mediated feedbacks decrease vegetation cover at equilibrium, which indicates a decrease in ecosystem resistance. Along a gradient of gradually increasing environmental pressure (e.g., aridity), the connectivity-mediated feedbacks decrease the amount of pressure required to cause a critical shift to a degraded state (ecosystem resilience). If environmental conditions improve, these feedbacks increase the pressure release needed to achieve ecosystem recovery (restoration potential). The impact of these feedbacks on dryland response to external stress is markedly non-linear, which stems from the non-linear negative relationship between bare-soil connectivity and vegetation cover. Modelling studies on dryland vegetation dynamics that do not account for the connectivity-mediated feedbacks studied here may overestimate the resistance, resilience and restoration potential of drylands in response to environmental and human pressures. Our results also suggest that changes in vegetation pattern and associated hydrological connectivity may be more informative early-warning indicators of dryland degradation than changes in vegetation cover.
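As a minimal sketch of such a connectivity-mediated feedback, the code below reduces plant establishment probability as the connectivity of bare (runoff-source) cells increases, with connectivity measured as a crude downslope "flow length" proxy. The metric, the functional form of the feedback, and all parameters are illustrative assumptions, not the rules of the model used in the paper.

```python
import numpy as np

# Toy lattice model: establishment on bare cells decreases with bare-soil
# connectivity (mean downslope run length of bare cells).  Illustrative only.

rng = np.random.default_rng(0)
grid = rng.random((50, 50)) < 0.4          # True = vegetated, False = bare

def bare_connectivity(veg):
    """Mean downslope run length of bare cells, averaged over the lattice."""
    runs = []
    for col in veg.T:
        length = 0
        for vegetated in col:
            if vegetated:
                if length:
                    runs.append(length)
                length = 0
            else:
                length += 1
        if length:
            runs.append(length)
    return np.mean(runs) if runs else 0.0

def step(veg, p_est0=0.1, p_mort=0.05, feedback=0.3):
    """One update: establishment probability is reduced by bare-soil connectivity."""
    conn = bare_connectivity(veg)
    p_est = p_est0 / (1.0 + feedback * conn)       # assumed negative dependence
    establish = (~veg) & (rng.random(veg.shape) < p_est)
    die = veg & (rng.random(veg.shape) < p_mort)
    return (veg | establish) & ~die

for _ in range(100):
    grid = step(grid)
print(f"final vegetation cover: {grid.mean():.2f}")
```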
Abstract:
Business Intelligence (BI) applications have been gradually ported to the Web in search of a global platform for the consumption and publication of data and services. On the Internet, apart from techniques for data/knowledge management, BI Web applications need interfaces with a high level of interoperability (similar to traditional desktop interfaces) for the visualisation of data/knowledge. In some cases, this has been provided by Rich Internet Applications (RIA). The development of these BI RIAs has traditionally been performed manually and, given the complexity of the final application, is prone to errors. The application of model-driven engineering techniques can reduce the cost of development and maintenance (in terms of time and resources) of these applications, as has been demonstrated for other types of Web applications. In the light of these issues, this paper introduces the Sm4RIA-B methodology, i.e., a model-driven methodology for the development of RIAs as BI Web applications. In order to overcome the limitations of RIA regarding knowledge management from the Web, this paper also presents a new RIA platform for BI, called RI@BI, which extends the functionalities of traditional RIAs by means of Semantic Web technologies and B2B techniques. Finally, we evaluate the whole approach on a case study: the development of a social network site for an enterprise project manager.
Abstract:
This paper introduces a new mathematical model for the simultaneous synthesis of heat exchanger networks (HENs), wherein pressure manipulation of process streams is used to enhance the heat integration. The proposed approach combines generalized disjunctive programming (GDP) and a mixed-integer nonlinear programming (MINLP) formulation in order to minimize the total annualized cost, composed of operational and capital expenses. A multi-stage superstructure is developed for the HEN synthesis, assuming constant heat capacity flow rates and isothermal mixing, and allowing for stream splits. In this model, the pressure and temperature of streams must be treated as optimization variables, further increasing the complexity and difficulty of solving the problem. In addition, the model allows for coupling of compressors and turbines to save energy. A case study is performed to verify the accuracy of the proposed model. In this example, the optimal integration between heat and work decreases the need for thermal utilities in the HEN design. As a result, the total annualized cost is also reduced due to the decrease in the operational expenses related to the heating and cooling of the streams.
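To make the objective concrete, the sketch below evaluates the kind of total annualized cost (TAC) that a simultaneous HEN synthesis model minimizes: annualized capital cost of exchanger areas plus operating cost of utilities and work. The cost law, exponents and prices are generic textbook-style assumptions, not the coefficients of the paper's model.

```python
# Generic total annualized cost (TAC) evaluation for a heat exchanger network.
# Cost coefficients and exponents below are illustrative assumptions only.

def exchanger_capital(area, a=8000.0, b=500.0, c=0.75):
    """Annualized capital cost of one exchanger match: a + b * area**c."""
    return a + b * area ** c

def total_annualized_cost(areas, hot_utility, cold_utility, work=0.0,
                          c_hu=80.0, c_cu=10.0, c_w=120.0):
    """TAC = capital cost of all matches + utility and work operating costs."""
    capital = sum(exchanger_capital(A) for A in areas)
    operating = c_hu * hot_utility + c_cu * cold_utility + c_w * work
    return capital + operating

# Example: three matches plus residual utility demands (illustrative numbers).
print(total_annualized_cost(areas=[120.0, 45.0, 60.0],
                            hot_utility=350.0, cold_utility=200.0, work=50.0))
```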
Abstract:
Purpose: Retinitis pigmentosa includes a group of progressive retinal degenerative diseases that affect the structure and function of photoreceptors. Secondary to the loss of photoreceptors, there is a reduction in retinal vascularization, which seems to influence the cellular degenerative process. Retinal macroglial cells, astrocytes, and Müller cells provide support for retinal neurons and are fundamental for maintaining normal retinal function. The aim of this study was to investigate the evolution of macroglial changes during retinal degeneration in P23H rats. Methods: Homozygous P23H line-3 rats aged from P18 to 18 months were used to study the evolution of the disease, and SD rats were used as controls. Immunolabeling with antibodies against GFAP, vimentin, and transducin was used to visualize macroglial cells and cone photoreceptors. Results: In P23H rats, increased GFAP labeling in Müller cells was observed as an early indicator of retinal gliosis. At 4 and 12 months of age, the apical processes of Müller cells in P23H rats clustered in firework-like structures, which were associated with ring-shaped areas of cone degeneration in the outer nuclear layer. These structures were not observed at 16 months of age. The number of astrocytes was higher in P23H rats than in the matched SD controls at 4 and 12 months of age, supporting the idea of astrocyte proliferation. As the disease progressed, astrocytes exhibited a deteriorated morphology and marked hypertrophy. The increase in the complexity of the astrocytic processes correlated with greater connexin 43 expression and a higher density of connexin 43 immunoreactive puncta within the ganglion cell layer (GCL) of P23H vs. SD rat retinas. Conclusions: In the P23H rat model of retinitis pigmentosa, the loss of photoreceptors triggers major changes in the number and morphology of glial cells affecting the inner retina.
Abstract:
Model Hamiltonians have been, and still are, a valuable tool for investigating the electronic structure of systems for which mean field theories work poorly. This review will concentrate on the application of Pariser–Parr–Pople (PPP) and Hubbard Hamiltonians to investigate some relevant properties of polycyclic aromatic hydrocarbons (PAH) and graphene. When presenting these two Hamiltonians we will resort to second quantisation, which, although not the formalism chosen in the original proposal of the former, is much clearer. We will not attempt to be comprehensive; rather, our objective will be to provide the reader with information on what kinds of problems they will encounter and what tools they will need to solve them. One of the key issues concerning model Hamiltonians that will be treated in detail is the choice of model parameters. Although model Hamiltonians reduce the complexity of the original Hamiltonian, in most cases they cannot be solved exactly. So, we shall first consider the Hartree–Fock approximation, still the only tool for handling large systems, besides density functional theory (DFT) approaches. We proceed by discussing to what extent one may solve model Hamiltonians exactly, and we describe the Lanczos approach. We shall describe the configuration interaction (CI) method, a common technology in quantum chemistry but one rarely used to solve model Hamiltonians. In particular, we propose a variant of the Lanczos method, inspired by CI, that has the novelty of using as the seed of the Lanczos process a mean field (Hartree–Fock) determinant (the method will be named LCI). Two questions of interest related to model Hamiltonians will be discussed: (i) when including long-range interactions, how crucial is it to include in the Hamiltonian the electronic charge that compensates the ion charges? (ii) Is it possible to reduce a Hamiltonian incorporating Coulomb interactions (PPP) to an 'effective' Hamiltonian including only on-site interactions (Hubbard)? The performance of CI will be checked on small molecules. The electronic structure of azulene and fused azulene will be used to illustrate several aspects of the method. As regards graphene, several questions will be considered: (i) paramagnetic versus antiferromagnetic solutions, (ii) forbidden gap versus dot size, (iii) graphene nano-ribbons, and (iv) optical properties.
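As a minimal example of solving a model Hamiltonian exactly, the sketch below diagonalizes the two-site Hubbard model at half filling in the S_z = 0 sector and checks the result against the analytic ground-state energy. This toy is only meant to illustrate the idea of exact diagonalization; it is not the LCI scheme proposed in the review, and t and U below are arbitrary illustrative values.

```python
import numpy as np

# Exact diagonalization of the two-site Hubbard model at half filling,
# S_z = 0 sector.  Basis ordering: {|up,down>, |down,up>, |updown,0>, |0,updown>}.
# The sign pattern reflects the chosen fermionic ordering convention and does
# not affect the spectrum.  t, U are illustrative values.

t, U = 1.0, 4.0

H = np.array([[0.0, 0.0, -t,  -t ],
              [0.0, 0.0,  t,   t ],
              [-t,   t,   U,  0.0],
              [-t,   t,  0.0,  U ]])

evals = np.linalg.eigvalsh(H)
e0_exact = 0.5 * (U - np.sqrt(U**2 + 16.0 * t**2))   # analytic ground-state energy

print(f"numerical E0 = {evals[0]:.6f}, analytic E0 = {e0_exact:.6f}")
```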
Abstract:
Electrical energy storage is an increasingly important issue. Because electricity is difficult to store directly, it can be stored in other forms and converted back to electricity when needed. As a consequence, storage technologies for electricity can be classified by the form of storage; here we focus on electrochemical energy storage systems, better known as electrochemical batteries. By far the most widespread batteries are lead-acid batteries, which come in two main types: flooded and valve-regulated. Batteries are required in many important applications, such as renewable energy systems and motor vehicles. Consequently, in order to simulate these complex electrical systems, reliable battery models are needed. Although models developed by chemists exist, they are too complex and are not expressed in terms of electrical networks. Thus, they are not convenient for practical use by electrical engineers, who need to interface these models with other electrical system models, usually described by means of electrical circuits. Many techniques are available in the literature by which a battery can be modeled. Starting from the Thevenin-based electrical model, it can be adapted to represent lead-acid batteries more reliably by adding a parasitic reaction branch and a parallel network. The third-order formulation of this model is chosen as a trustworthy general-purpose model, characterized by a good ratio between accuracy and complexity. Considering the equivalent circuit network, all the useful equations describing the battery model are discussed and then implemented one by one in Matlab/Simulink. The model is finally validated and then used to simulate the battery behaviour under different typical conditions.
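For orientation, the sketch below implements a stripped-down first-order Thevenin-type equivalent circuit (open-circuit voltage behind a series resistance and one RC branch), discretized in time. The thesis described above uses a richer third-order lead-acid model with a parasitic-reaction branch; this simplified version, its linear open-circuit-voltage curve, and all parameter values are illustrative assumptions only.

```python
import numpy as np

# Minimal first-order Thevenin-type equivalent-circuit battery sketch.
# Illustrative parameters; not the third-order lead-acid model of the thesis.

def ocv(soc):
    """Assumed linear open-circuit-voltage curve (per 12 V block)."""
    return 11.8 + 1.0 * soc

def simulate(current, dt=1.0, capacity_ah=60.0,
             r0=0.01, r1=0.02, c1=2000.0, soc0=0.9):
    soc, v_rc = soc0, 0.0
    voltages = []
    for i in current:                      # positive current = discharge [A]
        soc -= i * dt / (capacity_ah * 3600.0)
        # RC branch: dv/dt = -v/(R1*C1) + i/C1  (explicit Euler update)
        v_rc += dt * (-v_rc / (r1 * c1) + i / c1)
        voltages.append(ocv(soc) - r0 * i - v_rc)
    return np.array(voltages), soc

i_profile = np.concatenate([np.full(600, 20.0), np.zeros(600)])  # 10 min load, 10 min rest
v, soc = simulate(i_profile)
print(f"terminal voltage: {v[0]:.2f} V -> {v[599]:.2f} V (end of load), "
      f"{v[-1]:.2f} V after rest; final SoC = {soc:.3f}")
```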
Abstract:
Based on the three-dimensional elastic inclusion model proposed by Dobrovolskii, we developed a rheological inclusion model to study earthquake preparation processes. By using the Correspondence Principle of the theory of rheologic mechanics, we derived the analytic expressions of the viscoelastic displacements U(r, t), V(r, t) and W(r, t), the normal strains ε_xx(r, t), ε_yy(r, t) and ε_zz(r, t), and the bulk strain θ(r, t) at an arbitrary point (x, y, z) in the three directions of the X, Y and Z axes produced by a three-dimensional inclusion in a semi-infinite rheologic medium described by the standard linear rheologic model. After computing the spatial-temporal variation of the bulk strain produced at the ground surface by such a spherical rheologic inclusion, interesting results are obtained, suggesting that the bulk strain produced by a hard inclusion changes with time in three stages (α, β, γ) with different characteristics, similar to geodetic deformation observations, but different from the results for a soft inclusion. These theoretical results can be used to explain the characteristics of the spatial-temporal evolution, patterns and quadrant distribution of earthquake precursors, and the changeability, spontaneity and complexity of short-term and imminent precursors. This offers a theoretical basis for building physical models of earthquake precursors and for earthquake prediction.
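The time dependence in such viscoelastic solutions ultimately comes from the creep response of the standard linear (Zener) rheologic body. The sketch below evaluates that creep response under a constant stress step, assuming a spring E1 in parallel with a Maxwell branch (spring E2 in series with a dashpot of viscosity eta); the configuration and all parameter values are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

# Creep strain of a standard linear (Zener) solid under a constant stress step.
# Configuration assumed: spring E1 in parallel with a Maxwell branch (E2, eta).
# Illustrative parameter values only.

E1, E2, eta = 30e9, 60e9, 1e19       # Pa, Pa, Pa*s
sigma0 = 1e6                         # applied stress step [Pa]

tau_retard = eta * (E1 + E2) / (E1 * E2)             # retardation time [s]

def creep_strain(t):
    """epsilon(t) = sigma0 * [1/E1 - (E2 / (E1*(E1+E2))) * exp(-t/tau)]"""
    return sigma0 * (1.0 / E1 - E2 / (E1 * (E1 + E2)) * np.exp(-t / tau_retard))

for t in (0.0, tau_retard, 10 * tau_retard):
    print(f"t = {t:9.3e} s   strain = {creep_strain(t):.3e}")

# Limits: instantaneous elastic strain sigma0/(E1+E2), fully relaxed strain sigma0/E1.
print(f"check: {sigma0/(E1+E2):.3e} (t=0), {sigma0/E1:.3e} (t->inf)")
```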
Abstract:
New tools derived from advances in molecular biology have not been widely adopted in plant breeding because of the inability to connect information at the gene level to the phenotype in a manner that is useful for selection. We explore whether a crop growth and development modelling framework can link phenotype complexity to underlying genetic systems in a way that strengthens molecular breeding strategies. We use gene-to-phenotype simulation studies on sorghum to consider the value to marker-assisted selection of intrinsically stable QTLs that might be generated by physiological dissection of complex traits. The consequences on grain yield of genetic variation in four key adaptive traits – phenology, osmotic adjustment, transpiration efficiency, and staygreen – were simulated for a diverse set of environments by placing the known extent of genetic variation in the context of the physiological determinants framework of a crop growth and development model. It was assumed that the three to five genes associated with each trait had two alleles per locus acting in an additive manner. The effects on average simulated yield, generated by differing combinations of positive alleles for the traits incorporated, varied with environment type. The full matrix of simulated phenotypes, which consisted of 547 location-season combinations and 4235 genotypic expression states, was analysed for genetic and environmental effects. The analysis was conducted in stages with gradually increased understanding of gene-to-phenotype relationships, as would arise from physiological dissection and modelling. It was found that environmental characterisation and physiological knowledge helped to explain and unravel gene and environment context dependencies. We simulated a marker-assisted selection (MAS) breeding strategy based on the analyses of gene effects. When marker scores were allocated based on the contribution of gene effects to yield in a single environment, there was a wide divergence in the rate of yield gain over all environments with breeding cycle, depending on the environment chosen for the QTL analysis. It was suggested that knowledge resulting from trait physiology and modelling would overcome this dependency by identifying stable QTLs. The improved predictive power would increase the utility of the QTLs in MAS. Developing and implementing this gene-to-phenotype capability in crop improvement requires enhanced attention to phenotyping, ecophysiological modelling, and validation studies to test the stability of candidate QTLs.
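A toy version of the additive gene-to-trait assumption described above is sketched below: each trait is controlled by a few loci, here simplified to presence or absence of the positive allele per locus, and a genotype's trait value is the baseline plus the summed allele effects. Trait names, locus counts and effect sizes are illustrative assumptions, not the values of the sorghum study.

```python
import itertools

# Toy additive gene-to-trait mapping.  Illustrative names and effect sizes only.

traits = {   # trait: (baseline, per-locus positive-allele effect, number of loci)
    "phenology":                (0.0, 0.5, 4),
    "osmotic_adjustment":       (0.0, 0.3, 3),
    "transpiration_efficiency": (0.0, 0.4, 3),
    "staygreen":                (0.0, 0.6, 5),
}

def trait_value(trait, alleles):
    base, effect, _ = traits[trait]
    return base + effect * sum(alleles)      # additive action of positive alleles

# Enumerate all genotypic states for one trait (positive allele present/absent per locus).
name = "staygreen"
states = list(itertools.product([0, 1], repeat=traits[name][2]))
values = [trait_value(name, g) for g in states]
print(f"{name}: {len(states)} genotypes, trait value range "
      f"{min(values):.1f}..{max(values):.1f}")
```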
Abstract:
The Operator Choice Model (OCM) was developed to model the behaviour of operators attending to complex tasks involving interdependent concurrent activities, such as in Air Traffic Control (ATC). The purpose of the OCM is to provide a flexible framework for modelling and simulation that can be used for quantitative analyses in human reliability assessment, comparison between human-computer interaction (HCI) designs, and analysis of operator workload. The OCM virtual operator is essentially a cycle of four processes: Scan, Classify, Decide Action, and Perform Action. Once a cycle is complete, the operator returns to the Scan process. It is also possible to truncate a cycle and return to Scan after each of the processes. These processes are described using Continuous Time Probabilistic Automata (CTPA). The details of the probability and timing models are specific to the domain of application, and need to be specified using domain experts. We are building an application of the OCM for use in ATC. In order to develop a realistic model, we are calibrating the probability and timing models that comprise each process using experimental data from a series of experiments conducted with student subjects. These experiments have identified the factors that influence perception and decision making in simplified conflict detection and resolution tasks. This paper presents an application of the OCM approach to a simple ATC conflict detection experiment. The aim is to calibrate the OCM so that its behaviour resembles that of the experimental subjects when it is challenged with the same task. Its behaviour should also interpolate when challenged with scenarios similar to those used to calibrate it. The approach illustrated here uses logistic regression to model the classifications made by the subjects. This model is fitted to the calibration data, and provides an extrapolation to classifications in scenarios outside the calibration data. A simple strategy is used to calibrate the timing component of the model, and the resulting reaction times are compared between the OCM and the student subjects. While this approach to timing does not capture the full complexity of the reaction time distribution seen in the data from the student subjects, the mean and the tail of the distributions are similar.
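The calibration idea can be illustrated with a small sketch: fit a logistic regression to conflict/no-conflict classifications as a function of scenario features, then query it as the Classify stage of a virtual operator. The feature names and the synthetic data below are illustrative assumptions, not the experimental variables or data of the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Logistic-regression classifier standing in for the OCM Classify stage.
# Hypothetical features and synthetic labels; not the study's data.

rng = np.random.default_rng(1)
n = 400
min_sep = rng.uniform(0.0, 15.0, n)     # hypothetical minimum predicted separation [NM]
t_close = rng.uniform(30.0, 600.0, n)   # hypothetical time to closest approach [s]
X = np.column_stack([min_sep, t_close])

# Synthetic "subject said conflict" labels: more likely for small separations.
p_true = 1.0 / (1.0 + np.exp(-(3.0 - 0.6 * min_sep - 0.002 * t_close)))
y = rng.random(n) < p_true

model = LogisticRegression(max_iter=1000).fit(X, y)

def classify_as_conflict(separation_nm, time_s):
    """Probability that the virtual operator classifies the pair as a conflict."""
    return model.predict_proba([[separation_nm, time_s]])[0, 1]

print(f"P(conflict | 2 NM, 120 s)  = {classify_as_conflict(2.0, 120.0):.2f}")
print(f"P(conflict | 12 NM, 300 s) = {classify_as_conflict(12.0, 300.0):.2f}")
```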
Abstract:
As advances in molecular biology continue to reveal additional layers of complexity in gene regulation, computational models need to incorporate additional features to explore the implications of new theories and hypotheses. It has recently been suggested that eukaryotic organisms owe their phenotypic complexity and diversity to the exploitation of small RNAs as signalling molecules. Previous models of genetic systems are, for several reasons, inadequate to investigate this theory. In this study, we present an artificial genome model of genetic regulatory networks based upon previous work by Torsten Reil, and demonstrate how this model generates networks with biologically plausible structural and dynamic properties. We also extend the model to explore the implications of incorporating regulation by small RNA molecules in a gene network. We demonstrate how, using these signals, highly connected networks can display dynamics that are more stable than expected given their level of connectivity.
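A minimal sketch in the spirit of Reil's artificial genome is shown below: a random digit string in which genes are the fixed-length substrings following a promoter motif, each gene's product is obtained by a simple digit transformation, and a product regulates every gene whose upstream region contains it. The promoter string, gene length, base-4 alphabet and matching rule are illustrative assumptions; they differ between published variants and are not necessarily those used in this study.

```python
import numpy as np

# Toy artificial-genome construction of a gene regulatory network.
# All conventions below (promoter, gene length, product rule) are assumptions.

rng = np.random.default_rng(2)
BASE, GENE_LEN, PROMOTER, UPSTREAM = 4, 6, "0101", 30
genome = "".join(str(d) for d in rng.integers(0, BASE, size=20000))

# Locate genes: the GENE_LEN digits immediately after each promoter occurrence.
gene_starts = [i + len(PROMOTER) for i in range(len(genome) - len(PROMOTER))
               if genome[i:i + len(PROMOTER)] == PROMOTER
               and i + len(PROMOTER) + GENE_LEN <= len(genome)]
genes = [genome[s:s + GENE_LEN] for s in gene_starts]

# Product of a gene: each digit incremented by 1 modulo BASE (an assumption).
products = ["".join(str((int(d) + 1) % BASE) for d in g) for g in genes]

# Regulatory edge i -> j if product i occurs in the region upstream of gene j.
edges = [(i, j)
         for j, s in enumerate(gene_starts)
         for i, p in enumerate(products)
         if p in genome[max(0, s - UPSTREAM):s]]

print(f"{len(genes)} genes, {len(edges)} regulatory links")
```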
Abstract:
This thesis is concerned with the inventory control of items that can be considered independent of one another. The decisions of when to order and in what quantity are the controllable or independent variables in the cost expressions which are minimised. The four systems considered are referred to as (Q, R), (nQ, R, T), (M, T) and (M, R, T). With (Q, R), a fixed quantity Q is ordered each time the order cover (i.e., stock in hand plus on order) equals or falls below R, the re-order level. With the other three systems, reviews are made only at intervals of T. With (nQ, R, T), an order for nQ is placed if on review the inventory cover is less than or equal to R, where n, an integer, is chosen at the time so that the new order cover just exceeds R. In (M, T), each order increases the order cover to M. Finally, in (M, R, T), when on review the order cover does not exceed R, enough is ordered to increase it to M. The (Q, R) system is examined at several levels of complexity, so that the theoretical savings in inventory costs obtained with more exact models can be compared with the increases in computational costs. Since the exact model was preferable for the (Q, R) system, only exact models were derived for the other three systems. Several methods of optimization were tried, but most were found inappropriate for the exact models because of non-convergence; however, one method did work for each of the exact models. Demand is considered continuous and, with one exception, the distribution assumed is the normal distribution truncated so that demand is never less than zero. Shortages are assumed to result in backorders, not lost sales. However, the shortage cost is a function of three items, one of which, the backorder cost, may be either a linear, quadratic or exponential function of the length of time of a backorder, with or without a period of grace. Lead times are assumed constant or gamma distributed. Lastly, the actual supply quantity is allowed to follow a distribution. All the sets of equations were programmed for a KDF 9 computer, and the computed performances of the four inventory control procedures are compared under each assumption.
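For concreteness, the sketch below simulates the (Q, R) policy described above: whenever the order cover (stock on hand plus on order) falls to R or below, a fixed quantity Q is ordered and arrives after a constant lead time. Demand parameters, costs and the discrete-period setting are illustrative assumptions; the thesis itself treats demand as continuous and compares exact cost models rather than simulating.

```python
import numpy as np

# Discrete-period simulation of a (Q, R) reorder-point policy with
# zero-truncated normal demand and backordering.  Illustrative parameters only.

rng = np.random.default_rng(3)

def simulate_qr(Q=200, R=120, lead_time=4, periods=2000,
                mean_demand=25.0, sd_demand=10.0, start_stock=200):
    on_hand = float(start_stock)
    pipeline = []                        # list of (arrival_period, quantity)
    orders = stockout_periods = 0
    for t in range(periods):
        on_hand += sum(q for due, q in pipeline if due == t)   # receive arrivals
        pipeline = [(due, q) for due, q in pipeline if due > t]
        demand = max(0.0, rng.normal(mean_demand, sd_demand))  # truncated at zero
        on_hand -= demand                                      # negative = backorders
        if on_hand < 0:
            stockout_periods += 1
        cover = on_hand + sum(q for _, q in pipeline)          # order cover
        if cover <= R:
            pipeline.append((t + lead_time, Q))
            orders += 1
    return orders, stockout_periods, on_hand

orders, stockouts, final = simulate_qr()
print(f"{orders} orders placed, {stockouts} periods with backorders, "
      f"final on-hand = {final:.1f}")
```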