926 results for model complexity


Relevance:

30.00%

Publisher:

Abstract:

Purpose: Retinitis pigmentosa includes a group of progressive retinal degenerative diseases that affect the structure and function of photoreceptors. Secondary to the loss of photoreceptors, there is a reduction in retinal vascularization, which seems to influence the cellular degenerative process. Retinal macroglial cells, astrocytes, and Müller cells provide support for retinal neurons and are fundamental for maintaining normal retinal function. The aim of this study was to investigate the evolution of macroglial changes during retinal degeneration in P23H rats. Methods: Homozygous P23H line-3 rats aged from P18 to 18 months were used to study the evolution of the disease, and SD rats were used as controls. Immunolabeling with antibodies against GFAP, vimentin, and transducin was used to visualize macroglial cells and cone photoreceptors. Results: In P23H rats, increased GFAP labeling in Müller cells was observed as an early indicator of retinal gliosis. At 4 and 12 months of age, the apical processes of Müller cells in P23H rats clustered in firework-like structures, which were associated with ring-shaped areas of cone degeneration in the outer nuclear layer. These structures were not observed at 16 months of age. The number of astrocytes was higher in P23H rats than in matched SD controls at 4 and 12 months of age, supporting the idea of astrocyte proliferation. As the disease progressed, astrocytes exhibited a deteriorated morphology and marked hypertrophy. The increase in the complexity of the astrocytic processes correlated with greater connexin 43 expression and a higher density of connexin 43 immunoreactive puncta within the ganglion cell layer (GCL) of P23H vs. SD rat retinas. Conclusions: In the P23H rat model of retinitis pigmentosa, the loss of photoreceptors triggers major changes in the number and morphology of glial cells, affecting the inner retina.

Relevance:

30.00%

Publisher:

Abstract:

Model Hamiltonians have been, and still are, a valuable tool for investigating the electronic structure of systems for which mean field theories work poorly. This review concentrates on the application of Pariser–Parr–Pople (PPP) and Hubbard Hamiltonians to investigate some relevant properties of polycyclic aromatic hydrocarbons (PAH) and graphene. When presenting these two Hamiltonians we will resort to second quantisation which, although not the formalism chosen in the original proposal of the former, is much clearer. We will not attempt to be comprehensive; rather, our objective is to provide the reader with information on what kinds of problems they will encounter and what tools they will need to solve them. One of the key issues concerning model Hamiltonians that will be treated in detail is the choice of model parameters. Although model Hamiltonians reduce the complexity of the original Hamiltonian, in most cases they cannot be solved exactly. So, we shall first consider the Hartree–Fock approximation, still the only tool for handling large systems besides density functional theory (DFT) approaches. We proceed by discussing to what extent model Hamiltonians may be solved exactly, and describe the Lanczos approach. We shall describe the configuration interaction (CI) method, a common technology in quantum chemistry but one rarely used to solve model Hamiltonians. In particular, we propose a variant of the Lanczos method, inspired by CI, that has the novelty of using a mean field (Hartree–Fock) determinant as the seed of the Lanczos process (the method will be named LCI). Two questions of interest related to model Hamiltonians will be discussed: (i) when including long-range interactions, how crucial is it to include in the Hamiltonian the electronic charge that compensates the ion charges? (ii) Is it possible to reduce a Hamiltonian incorporating Coulomb interactions (PPP) to an 'effective' Hamiltonian including only on-site interactions (Hubbard)? The performance of CI will be checked on small molecules. The electronic structure of azulene and fused azulene will be used to illustrate several aspects of the method. As regards graphene, several questions will be considered: (i) paramagnetic versus antiferromagnetic solutions, (ii) forbidden gap versus dot size, (iii) graphene nano-ribbons, and (iv) optical properties.
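A minimal sketch of the plain Lanczos recursion mentioned above, in Python/NumPy. The seed vector plays the role the abstract assigns to the Hartree–Fock determinant in LCI; the toy Hamiltonian (a tight-binding chain, i.e. a Hubbard model with U = 0) and all dimensions are illustrative assumptions, not taken from the review.

```python
import numpy as np

def lanczos_ground_energy(H, v0, n_iter=50):
    """Project the Hermitian matrix H onto the Krylov space grown from
    the seed vector v0, then diagonalize the resulting tridiagonal
    matrix for the lowest eigenvalue."""
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros_like(v)
    alphas, betas, beta = [], [], 0.0
    for _ in range(n_iter):
        w = H @ v - beta * v_prev          # three-term recurrence
        alpha = v @ w
        w = w - alpha * v
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:                   # Krylov space exhausted
            break
        v_prev, v = v, w / beta
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return np.linalg.eigvalsh(T)[0]

# Toy check: an 8-site tight-binding chain (Hubbard model with U = 0)
H = -(np.eye(8, k=1) + np.eye(8, k=-1))
print(lanczos_ground_energy(H, np.random.rand(8)))  # about -1.879
```

In a real LCI-style calculation, H would be the many-body Hamiltonian in a determinant basis, which is where the choice of seed determinant matters.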

Relevance:

30.00%

Publisher:

Abstract:

Electrical energy storage is an increasingly important issue. Because electricity is not easy to store directly, it can be stored in other forms and converted back to electricity when needed. As a consequence, storage technologies for electricity can be classified by the form of storage; here we focus on electrochemical energy storage systems, better known as electrochemical batteries. By far the most widespread batteries are lead-acid batteries, which come in two main types: flooded and valve-regulated. Batteries are present in many important applications, such as renewable energy systems and motor vehicles. Consequently, in order to simulate these complex electrical systems, reliable battery models are needed. Although some models have been developed by chemistry experts, they are too complex and are not expressed in terms of electrical networks. They are therefore inconvenient for practical use by electrical engineers, who need to interface these models with other electrical system models, usually described by means of electrical circuits. There are many techniques available in the literature by which a battery can be modeled. Starting from the Thevenin-based electrical model, it can be adapted to better represent the lead-acid battery type, with the addition of a parasitic reaction branch and a parallel network. The third-order formulation of this model can be chosen, being a trustworthy general-purpose model characterized by a good trade-off between accuracy and complexity. Considering the equivalent circuit network, all the equations describing the battery model are discussed and then implemented one by one in Matlab/Simulink. The model was finally validated, and then used to simulate the battery behaviour in different typical conditions.
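The idea behind Thevenin-based battery models is easy to see in a first-order sketch: an open-circuit voltage source behind a series resistance and one parallel RC branch. The Python sketch below (not the thesis's Matlab/Simulink third-order model) integrates that circuit with forward Euler; all parameter values are illustrative.

```python
import numpy as np

def simulate_thevenin_battery(i_load, dt, ocv=12.6, r0=0.05, r1=0.015, c1=2000.0):
    """First-order Thevenin equivalent circuit:
    v_terminal = OCV - R0*i - v1, with the RC-branch state v1 obeying
    dv1/dt = i/C1 - v1/(R1*C1). Positive current = discharge."""
    v1, out = 0.0, []
    for i in i_load:
        v1 += dt * (i / c1 - v1 / (r1 * c1))   # RC branch dynamics (Euler step)
        out.append(ocv - r0 * i - v1)
    return np.array(out)

# A 10 A discharge pulse for 60 s followed by 60 s of rest, 1 s steps
current = np.concatenate([np.full(60, 10.0), np.zeros(60)])
voltage = simulate_thevenin_battery(current, dt=1.0)
print(voltage[0], voltage[59], voltage[-1])   # instant drop, sag, recovery
```

The third-order model described in the abstract adds further state branches (including the parasitic reaction branch) but is simulated in essentially this state-update fashion.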

Relevance:

30.00%

Publisher:

Abstract:

Based on the three-dimensional elastic inclusion model proposed by Dobrovolskii, we developed a rheological inclusion model to study earthquake preparation processes. By using the Correspondence Principle of rheological mechanics, we derived analytic expressions for the viscoelastic displacements U(r, t), V(r, t) and W(r, t), the normal strains epsilon_xx(r, t), epsilon_yy(r, t) and epsilon_zz(r, t), and the bulk strain theta(r, t) at an arbitrary point (x, y, z), along the X, Y and Z axes, produced by a three-dimensional inclusion in a semi-infinite rheological medium described by the standard linear rheological model. After computing the spatio-temporal variation of the bulk strain at the ground surface produced by such a spherical rheological inclusion, interesting results are obtained: the bulk strain produced by a hard inclusion changes with time in three stages (alpha, beta, gamma) with different characteristics, similar to geodetic deformation observations, but different from the results for a soft inclusion. These theoretical results can be used to explain the characteristics of the spatio-temporal evolution, patterns, and quadrant distribution of earthquake precursors, as well as the changeability, spontaneity and complexity of short-term and imminent-term precursors. This offers a theoretical basis for building physical models of earthquake precursors and for predicting earthquakes.
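The time dependence that distinguishes the rheological model from Dobrovolskii's purely elastic one comes from the standard linear (Zener) solid. A minimal sketch of its creep compliance is below; the parameter values are illustrative crustal-scale guesses, not figures from the paper.

```python
import numpy as np

def zener_creep_compliance(t, e1=50e9, e2=50e9, eta=1e18):
    """Creep compliance J(t) of the standard linear rheological model:
    a spring E1 in series with a Kelvin-Voigt element (E2 parallel to a
    dashpot eta). J(t) = 1/E1 + (1 - exp(-t/tau))/E2 with tau = eta/E2,
    so a step load gives an instant elastic strain followed by delayed
    viscoelastic creep toward a finite limit."""
    tau = eta / e2
    return 1.0 / e1 + (1.0 - np.exp(-t / tau)) / e2

tau = 1e18 / 50e9                            # relaxation time, seconds
t = np.linspace(0.0, 5 * tau, 200)
strain = 1e6 * zener_creep_compliance(t)     # response to a 1 MPa step load
print(strain[0], strain[-1])                 # elastic part, long-time limit
```

In the paper this time dependence is carried through the elastic solution via the correspondence principle, which is what produces the staged (alpha, beta, gamma) evolution of the surface bulk strain.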

Relevance:

30.00%

Publisher:

Abstract:

New tools derived from advances in molecular biology have not been widely adopted in plant breeding because of the inability to connect information at gene level to the phenotype in a manner that is useful for selection. We explore whether a crop growth and development modelling framework can link phenotype complexity to underlying genetic systems in a way that strengthens molecular breeding strategies. We use gene-to-phenotype simulation studies on sorghum to consider the value to marker-assisted selection of intrinsically stable QTLs that might be generated by physiological dissection of complex traits. The consequences on grain yield of genetic variation in four key adaptive traits – phenology, osmotic adjustment, transpiration efficiency, and staygreen – were simulated for a diverse set of environments by placing the known extent of genetic variation in the context of the physiological determinants framework of a crop growth and development model. It was assumed that the three to five genes associated with each trait had two alleles per locus acting in an additive manner. The effects on average simulated yield, generated by differing combinations of positive alleles for the traits incorporated, varied with environment type. The full matrix of simulated phenotypes, which consisted of 547 location-season combinations and 4235 genotypic expression states, was analysed for genetic and environmental effects. The analysis was conducted in stages with gradually increased understanding of gene-to-phenotype relationships, which would arise from physiological dissection and modelling. It was found that environmental characterisation and physiological knowledge helped to explain and unravel gene and environment context dependencies. We simulated a marker-assisted selection (MAS) breeding strategy based on the analyses of gene effects. When marker scores were allocated based on the contribution of gene effects to yield in a single environment, there was a wide divergence in the rate of yield gain over all environments with breeding cycle, depending on the environment chosen for the QTL analysis. It was suggested that knowledge resulting from trait physiology and modelling would overcome this dependency by identifying stable QTLs. The improved predictive power would increase the utility of the QTLs in MAS. Developing and implementing this gene-to-phenotype capability in crop improvement requires enhanced attention to phenotyping, ecophysiological modelling, and validation studies to test the stability of candidate QTLs.
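The genetic architecture assumed in these simulations is simple to state: a handful of loci per trait, two alleles each, acting additively. The sketch below enumerates such genotype-to-trait values in Python; locus names, effect sizes, and the baseline are invented for illustration, and in the study the trait values feed a full crop growth model rather than yield directly.

```python
import itertools

effects = {"locus_a": 0.30, "locus_b": 0.20, "locus_c": 0.10}  # hypothetical per-allele effects
baseline = 1.0   # trait value when no positive alleles are present

# For inbred lines each locus is fixed at the negative (0) or positive (1)
# allele, giving 2**n genotypic expression states per trait.
for alleles in itertools.product([0, 1], repeat=len(effects)):
    trait = baseline + sum(a * e for a, e in zip(alleles, effects.values()))
    print(dict(zip(effects, alleles)), "->", round(trait, 2))
```

Crossing four such traits with hundreds of location-season combinations is what yields the large genotype-by-environment matrix analysed in the paper.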

Relevance:

30.00%

Publisher:

Abstract:

The Operator Choice Model (OCM) was developed to model the behaviour of operators attending to complex tasks involving interdependent concurrent activities, such as in Air Traffic Control (ATC). The purpose of the OCM is to provide a flexible framework for modelling and simulation that can be used for quantitative analyses in human reliability assessment, comparison between human computer interaction (HCI) designs, and analysis of operator workload. The OCM virtual operator is essentially a cycle of four processes: Scan, Classify, Decide Action, Perform Action. Once a cycle is complete, the operator returns to the Scan process. It is also possible to truncate a cycle and return to Scan after each of the processes. These processes are described using Continuous Time Probabilistic Automata (CTPA). The details of the probability and timing models are specific to the domain of application, and need to be specified in consultation with domain experts. We are building an application of the OCM for use in ATC. In order to develop a realistic model, we are calibrating the probability and timing models that comprise each process using data from a series of experiments conducted with student subjects. These experiments have identified the factors that influence perception and decision making in simplified conflict detection and resolution tasks. This paper presents an application of the OCM approach to a simple ATC conflict detection experiment. The aim is to calibrate the OCM so that its behaviour resembles that of the experimental subjects when it is challenged with the same task. Its behaviour should also interpolate when challenged with scenarios similar to those used to calibrate it. The approach illustrated here uses logistic regression to model the classifications made by the subjects. This model is fitted to the calibration data, and provides an extrapolation to classifications in scenarios outside of the calibration data. A simple strategy is used to calibrate the timing component of the model, and the resulting reaction times are compared between the OCM and the student subjects. While this approach to timing does not capture the full complexity of the reaction time distribution seen in the data from the student subjects, the mean and the tail of the distributions are similar.
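The calibration step for the Classify process is a standard logistic regression fit. The sketch below shows the shape of such a fit on synthetic data; the feature set (predicted miss distance and time to closest approach) is an illustrative guess, not the paper's actual variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for the experimental trials: each row describes an
# aircraft pair, each label is the subject's conflict/no-conflict call.
miss_distance = rng.uniform(0.0, 10.0, 200)    # nautical miles (assumed feature)
time_to_cpa = rng.uniform(30.0, 300.0, 200)    # seconds (assumed feature)
X = np.column_stack([miss_distance, time_to_cpa])
# Subjects tend to call "conflict" when the predicted miss distance is small.
y = (miss_distance + rng.normal(0.0, 1.5, 200) < 5.0).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
# The fitted model then serves as the Classify process of the virtual
# operator, extrapolating to scenarios outside the calibration set.
print(clf.predict_proba([[3.0, 120.0]]))       # [P(no conflict), P(conflict)]
```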

Relevance:

30.00%

Publisher:

Abstract:

As advances in molecular biology continue to reveal additional layers of complexity in gene regulation, computational models need to incorporate additional features to explore the implications of new theories and hypotheses. It has recently been suggested that eukaryotic organisms owe their phenotypic complexity and diversity to the exploitation of small RNAs as signalling molecules. Previous models of genetic systems are, for several reasons, inadequate to investigate this theory. In this study, we present an artificial genome model of genetic regulatory networks based upon previous work by Torsten Reil, and demonstrate how this model generates networks with biologically plausible structural and dynamic properties. We also extend the model to explore the implications of incorporating regulation by small RNA molecules in a gene network. We demonstrate how, using these signals, highly connected networks can display dynamics that are more stable than expected given their level of connectivity.
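A much-simplified sketch of the gene-detection step in a Reil-style artificial genome: a random base string is scanned for a fixed promoter motif, and the bases following each promoter are read off as a gene. Motif, lengths, and alphabet size are arbitrary choices here, and the regulatory-matching and small-RNA layers of the actual model are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

GENOME_LEN, N_BASES, GENE_LEN = 5000, 4, 6
PROMOTER = (0, 1, 0, 1)                       # arbitrary promoter motif

genome = rng.integers(0, N_BASES, GENOME_LEN)
genes, i = [], 0
while i <= GENOME_LEN - len(PROMOTER) - GENE_LEN:
    if tuple(genome[i:i + len(PROMOTER)]) == PROMOTER:
        start = i + len(PROMOTER)
        genes.append(tuple(genome[start:start + GENE_LEN]))  # gene = bases after promoter
        i = start + GENE_LEN
    else:
        i += 1

print(f"{len(genes)} genes found in a {GENOME_LEN}-base genome")
```

In the full model, each gene's product sequence is matched against the other genes to draw regulatory edges, and the small-RNA extension described in the abstract adds a second class of diffusible signals into that network.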

Relevance:

30.00%

Publisher:

Abstract:

This thesis is concerned with the inventory control of items that can be considered independent of one another. The decisions of when to order and in what quantity are the controllable or independent variables in the cost expressions to be minimised. The four systems considered are referred to as (Q,R), (nQ,R,T), (M,T) and (M,R,T). With (Q,R), a fixed quantity Q is ordered each time the order cover (i.e. stock in hand plus stock on order) equals or falls below R, the re-order level. With the other three systems, reviews are made only at intervals of T. With (nQ,R,T), an order for nQ is placed if on review the inventory cover is less than or equal to R, where n, an integer, is chosen at the time so that the new order cover just exceeds R. In (M,T), each order increases the order cover to M. Finally, in (M,R,T), when on review the order cover does not exceed R, enough is ordered to increase it to M. The (Q,R) system is examined at several levels of complexity, so that the theoretical savings in inventory costs obtained with more exact models could be compared with the increases in computational costs. Since the exact model proved preferable for the (Q,R) system, only exact models were derived for the other three systems. Several methods of optimization were tried, but most were found inappropriate for the exact models because of non-convergence; however, one method did work for each of the exact models. Demand is considered continuous and, with one exception, the distribution assumed is the normal distribution truncated so that demand is never less than zero. Shortages are assumed to result in backorders, not lost sales. The shortage cost, however, is a function of three items, one of which, the backorder cost, may be a linear, quadratic or exponential function of the length of time of a backorder, with or without a period of grace. Lead times are assumed constant or gamma distributed. Lastly, the actual supply quantity is allowed to follow a distribution. All the sets of equations were programmed for a KDF 9 computer, and the computed performances of the four inventory control procedures are compared under each assumption.
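For orientation, the textbook approximate treatment of the (Q,R) system with linear backorder costs and untruncated normal lead-time demand looks as follows; this is the simpler setting against which exact models of the kind developed in the thesis would be compared, and every number below is illustrative.

```python
import math
from scipy.stats import norm

D, K, h, p = 1000.0, 50.0, 2.0, 25.0   # annual demand, order cost, holding cost, backorder cost
mu_L, sigma_L = 40.0, 10.0             # lead-time demand: mean and standard deviation

Q = math.sqrt(2.0 * D * K / h)         # EOQ as the starting point
for _ in range(20):                    # standard alternating refinement of Q and R
    R = mu_L + sigma_L * norm.ppf(1.0 - Q * h / (p * D))
    z = (R - mu_L) / sigma_L
    shortage = sigma_L * (norm.pdf(z) - z * (1.0 - norm.cdf(z)))  # expected units short per cycle
    Q = math.sqrt(2.0 * D * (K + p * shortage) / h)

print(f"Q = {Q:.1f}, R = {R:.1f}")
```

The exact formulations in the thesis instead use the truncated normal, allow nonlinear backorder costs with periods of grace, and let both lead time and supply quantity vary, which is what rules out closed-form optimization and raises the convergence issues noted above.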

Relevance:

30.00%

Publisher:

Abstract:

The soil-plant-moisture subsystem is an important component of the hydrological cycle. Over the last 20 or so years, a number of computer models of varying complexity have represented this subsystem with differing degrees of success. The aim of the present work has been to improve and extend an existing model. The new model is less site-specific, thus allowing for the simulation of a wide range of soil types and profiles. Several processes not included in the original model are simulated by the inclusion of new algorithms, covering macropore flow, hysteresis and plant growth. Changes have also been made to the infiltration, water uptake and water flow algorithms. Using field data from various sources, regression equations have been derived which relate parameters in the suction-conductivity-moisture content relationships to easily measured soil properties such as particle-size distribution data. Independent tests have been performed on laboratory data produced by Hedges (1989). The parameters found by regression for the suction relationships were then used in equations describing the infiltration and macropore processes. An extensive literature review produced a new model for calculating plant growth from actual transpiration, which was itself partly determined by the root densities and leaf area indices derived by the plant growth model. The new infiltration model uses intensity/duration curves to disaggregate daily rainfall inputs into hourly amounts. The final model has been calibrated and tested against field data, and its performance compared to that of the original model. Simulations have also been carried out to investigate the effects of various parameters on infiltration, macropore flow, actual transpiration and plant growth. Qualitative comparisons have been made between these results and data given in the literature.
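The suction-conductivity-moisture content relationships at the heart of such models are usually parametric curves. As one common example (not necessarily the form used in this thesis), here is the van Genuchten retention curve in Python, with illustrative parameter values of the kind the regression equations would supply from particle-size data.

```python
import numpy as np

def van_genuchten_theta(psi, theta_r=0.05, theta_s=0.45, alpha=0.08, n=1.6):
    """Volumetric moisture content theta as a function of suction psi
    (cm of water), van Genuchten (1980) form:
    theta = theta_r + (theta_s - theta_r) / (1 + (alpha*|psi|)**n)**m,
    with m = 1 - 1/n. Parameter values here are illustrative."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(psi)) ** n) ** m

suction = np.logspace(0, 4, 5)            # 1 to 10^4 cm of water
print(van_genuchten_theta(suction))       # moisture content falls as suction rises
```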

Relevance:

30.00%

Publisher:

Abstract:

A main unsolved problem in the RNA World scenario for the origin of life is how a template-dependent RNA polymerase ribozyme emerged from short RNA oligomers obtained by random polymerization on mineral surfaces. A number of computational studies have shown that the structural repertoire yielded by that process is dominated by topologically simple structures, notably hairpin-like ones. A fraction of these could display RNA ligase activity and catalyze the assembly of larger, eventually functional RNA molecules retaining their previous modular structure: molecular complexity increases but template replication is absent. This allows us to build up a stepwise model of ligation-based, modular evolution that could pave the way to the emergence of a ribozyme with RNA replicase activity, the step at which information-driven Darwinian evolution would be triggered. Copyright © 2009 RNA Society.

Relevance:

30.00%

Publisher:

Abstract:

Current models of word production assume that words are stored as linear sequences of phonemes which are structured into syllables only at the moment of production. This is because syllable structure is always recoverable from the sequence of phonemes. In contrast, we present theoretical and empirical evidence that syllable structure is lexically represented. Storing syllable structure would have the advantage of making representations more stable and resistant to damage. On the other hand, re-syllabifications affect only a minimal part of phonological representations and occur only in some languages and depending on speech register. Evidence for these claims comes from analyses of aphasic errors which not only respect phonotactic constraints, but also avoid transformations which move the syllabic structure of the word further away from the original structure, even when equating for segmental complexity. This is true across tasks, types of errors, and, crucially, types of patients. The same syllabic effects are shown by apraxic patients and by phonological patients who have more central difficulties in retrieving phonological representations. If syllable structure were only computed after phoneme retrieval, it would have no way to influence the errors of phonological patients. Our results have implications for psycholinguistic and computational models of language as well as for clinical and educational practices.

Relevance:

30.00%

Publisher:

Abstract:

Purpose – The purpose of this research is to develop a holistic approach to maximize the customer service level while minimizing the logistics cost by using an integrated multiple criteria decision making (MCDM) method for the contemporary transshipment problem. Unlike the prevalent optimization techniques, this paper proposes an integrated approach which considers both quantitative and qualitative factors in order to maximize the benefits of service deliverers and customers under uncertain environments. Design/methodology/approach – This paper proposes a fuzzy-based integer linear programming model, based on the existing literature and validated with an example case. The model integrates the developed fuzzy modification of the analytic hierarchy process (FAHP) and solves the multi-criteria transshipment problem. Findings – This paper provides several novel insights about how to transform a company from a cost-based model to a service-dominated model by using an integrated MCDM method. It suggests that the contemporary customer-driven supply chain maintains and increases its competitiveness in two ways: optimizing the cost and providing the best service simultaneously. Research limitations/implications – This research used one illustrative industry case to exemplify the developed method. Considering the generalization of the research findings and the complexity of the transshipment service network, more cases across multiple industries are necessary to further enhance the validity of the research output. Practical implications – The paper includes implications for the evaluation and selection of transshipment service suppliers, the construction of an optimal transshipment network, and the management of that network. Originality/value – The major advantages of this generic approach are that both quantitative and qualitative factors under a fuzzy environment are considered simultaneously, and that the viewpoints of both service deliverers and customers are taken into account. Therefore, it is believed to be useful and applicable for transshipment service network design.
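Stripped of the fuzzy-AHP service weights, the quantitative core of such a model is a small transshipment linear program. The sketch below solves a toy instance (two suppliers, one hub, two customers) with scipy; all costs, demands, and capacities are invented, and the integrality and fuzzy-coefficient aspects of the paper's model are omitted.

```python
from scipy.optimize import linprog

# Decision variables: x = [s1->hub, s2->hub, hub->c1, hub->c2]
cost = [4.0, 3.0, 2.5, 3.5]                 # unit shipping costs (illustrative)

A_eq = [
    [1, 1, -1, -1],                         # flow conservation at the hub
    [0, 0, 1, 0],                           # customer 1 demand met exactly
    [0, 0, 0, 1],                           # customer 2 demand met exactly
]
b_eq = [0, 30, 50]
bounds = [(0, 60), (0, 60), (0, None), (0, None)]   # supplier capacities

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x, res.fun)                       # flows and minimum logistics cost
```

In the paper, this cost objective is traded off against FAHP-derived service scores, which is what turns the plain LP into a multi-criteria model.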

Relevance:

30.00%

Publisher:

Abstract:

Let v be an array. The range query problem concerns the design of data structures for implementing the following operations. The operation update(j, x) has the effect v_j ← v_j + x, and the query operation retrieve(i, j) returns the partial sum v_i + ... + v_j. These tasks are to be performed on-line. We define an algebraic model – based on the use of matrices – for the study of the problem. In this paper we also establish a lower bound for the sum of the average complexity of both kinds of operations, and demonstrate that this lower bound is near-optimal in terms of asymptotic complexity.
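The classic near-optimal data structure for this problem is the binary indexed (Fenwick) tree, which supports both operations in O(log n) time; a compact Python version is below as context for the bounds discussed in the abstract (the paper itself is about lower bounds, not this particular structure).

```python
class FenwickTree:
    """Binary indexed tree over v_1..v_n supporting point updates and
    range-sum queries, each in O(log n) time."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)            # 1-indexed internal array

    def update(self, j, x):                  # v_j <- v_j + x
        while j <= self.n:
            self.tree[j] += x
            j += j & (-j)                    # jump to the next covering node

    def _prefix(self, j):                    # v_1 + ... + v_j
        s = 0
        while j > 0:
            s += self.tree[j]
            j -= j & (-j)
        return s

    def retrieve(self, i, j):                # v_i + ... + v_j
        return self._prefix(j) - self._prefix(i - 1)

ft = FenwickTree(8)
ft.update(3, 5)
ft.update(6, 2)
print(ft.retrieve(2, 6))                     # -> 7
```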

Relevance:

30.00%

Publisher:

Abstract:

In this letter, a nonlinear semi-analytical model (NSAM) for the simulation of few-mode fiber transmission is proposed. The NSAM considers the mode mixing arising from the Kerr effect and waveguide imperfections. An analytical explanation of the model is presented, as well as simulation results for the transmission over a two-mode fiber (TMF) of 112 Gb/s using coherently detected, polarization-multiplexed quadrature phase-shift-keying modulation. The simulations show that by transmitting over only one of the two modes of a TMF, long-haul transmission can be realized without an increase in receiver complexity. For a 6000-km transmission link, a small modal dispersion penalty is observed in the linear domain, while a significant increase of the nonlinear threshold is observed due to the large core of the TMF. © 2006 IEEE.
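For contrast with the semi-analytical approach, the brute-force alternative is full split-step numerical integration of the propagation equations; its cost is part of what motivates reduced models. Below is one step of the standard split-step Fourier method for the scalar nonlinear Schrodinger equation (single mode, no mode mixing), with illustrative fiber parameters.

```python
import numpy as np

def ssfm_step(A, dz, dt, beta2=-21.7e-27, gamma=1.3e-3, alpha=0.0):
    """Advance the complex field envelope A(t) by one fiber step dz,
    using the symmetric split-step scheme: half a linear step
    (dispersion beta2, loss alpha) in the frequency domain, a full
    nonlinear Kerr step in the time domain, then another half step."""
    omega = 2.0 * np.pi * np.fft.fftfreq(len(A), d=dt)
    half_linear = np.exp((0.5j * beta2 * omega**2 - 0.5 * alpha) * dz / 2.0)
    A = np.fft.ifft(np.fft.fft(A) * half_linear)
    A = A * np.exp(1j * gamma * np.abs(A) ** 2 * dz)   # Kerr nonlinearity
    return np.fft.ifft(np.fft.fft(A) * half_linear)

t = np.linspace(-50e-12, 50e-12, 1024)
A = np.exp(-((t / 10e-12) ** 2))              # Gaussian input pulse
A = ssfm_step(A, dz=1000.0, dt=t[1] - t[0])   # propagate 1 km
print(np.max(np.abs(A)))
```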

Relevance:

30.00%

Publisher:

Abstract:

Modern advances in technology have led to more complex manufacturing processes whose success centres on the ability to control them with a very high level of accuracy. Plant complexity inevitably leads to poor models that exhibit a high degree of parametric or functional uncertainty. The situation becomes even more complex if the plant to be controlled is characterised by a multivalued function, or if it exhibits a number of modes of behaviour during its operation. Since an intelligent controller is expected to operate and guarantee the best performance where complexity and uncertainty coexist and interact, control engineers and theorists have recently developed new control techniques, under the framework of intelligent control, to enhance controller performance for more complex and uncertain plants. These techniques are based on incorporating model uncertainty, and the newly developed control algorithms that do so have been shown to give more accurate control results under uncertain conditions. In this paper, we survey some approaches that appear promising for enhancing the performance of intelligent control systems in the face of higher levels of complexity and uncertainty.