915 results for models of computation
Abstract:
Previous genetic association studies have overlooked the potential for biased results when analyzing different population structures in ethnically diverse populations. The purpose of the present study was to quantify this bias in two-locus association studies conducted on an admixed urban population. We studied the genetic structure distribution of angiotensin-converting enzyme insertion/deletion (ACE I/D) and angiotensinogen methionine/threonine (M/T) polymorphisms in 382 subjects from three subgroups in a highly admixed urban population. Group I included 150 white subjects; group II, 142 mulatto subjects; and group III, 90 black subjects. We conducted sample size simulation studies using these data under different genetic models of gene action and interaction, and used genetic distance calculation algorithms to help determine the population structure for the studied loci. Our results showed a statistically different population structure distribution of both the ACE I/D (P = 0.02, OR = 1.56, 95% CI = 1.05-2.33 for the D allele, white versus black subgroup) and angiotensinogen M/T polymorphisms (P = 0.007, OR = 1.71, 95% CI = 1.14-2.58 for the T allele, white versus black subgroup). Sample size is predicted to be a determinant of the power to detect a given genotypic association with a particular phenotype when conducting two-locus association studies in admixed populations. In addition, the postulated genetic model is also a major determinant of the power to detect any association for a given sample size. The present simulation study helped to demonstrate the complex interrelation among ethnicity, power of the association, and the postulated genetic model of action of a particular allele in the context of clustering studies. This information is essential for the correct planning and interpretation of future association studies conducted on this population.
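As a rough illustration of the allele-level comparisons reported above, the sketch below computes an odds ratio and a Wald 95% confidence interval from a 2x2 table of allele counts (allele D vs. I, one subgroup vs. another). The counts used are hypothetical, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 allele-count table:
                 D allele   I allele
    subgroup 1      a          b
    subgroup 2      c          d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts for a D-allele comparison between two subgroups
print(odds_ratio_ci(180, 120, 95, 85))
```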
Abstract:
The viscoelastic properties of edible films can provide information at the structural level about the biopolymers used. The objective of this work was to test three simple models of linear viscoelastic theory (Maxwell, generalized Maxwell with two units in parallel, and Burgers) against the results of stress relaxation tests on edible films of myofibrillar proteins of Nile tilapia. The films were prepared by a casting technique and pre-conditioned at 58% relative humidity and 22°C for 4 days. The test samples (15 mm x 118 mm) were submitted to stress relaxation tests in a TA.XT2i texture analyzer. The deformation imposed on the sample was 1%, ensuring that the material remained within the linear viscoelastic domain. The models were fitted to the experimental data (stress vs. time) by nonlinear regression. The generalized Maxwell model with two units in parallel and the Burgers model represented the stress relaxation curves satisfactorily. The viscoelastic properties varied in a way that made them less dependent on the thickness of the films.
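A minimal sketch of the kind of nonlinear fit described above, using a two-unit generalized Maxwell relaxation form sigma(t) = E1*exp(-t/tau1) + E2*exp(-t/tau2) at unit strain. The data, parameter values, and starting guesses are illustrative only, not the thesis's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def gen_maxwell2(t, E1, tau1, E2, tau2):
    """Stress relaxation of two Maxwell units in parallel at unit strain."""
    return E1 * np.exp(-t / tau1) + E2 * np.exp(-t / tau2)

# Illustrative stress-relaxation data (time in s, stress in MPa)
np.random.seed(0)
t = np.linspace(0, 100, 50)
sigma = gen_maxwell2(t, 3.0, 5.0, 1.5, 60.0) + np.random.normal(0, 0.02, t.size)

popt, _ = curve_fit(gen_maxwell2, t, sigma, p0=[1.0, 1.0, 1.0, 10.0])
print(dict(zip(["E1", "tau1", "E2", "tau2"], popt)))
```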
Abstract:
Increased rotational speed brings many advantages to an electric motor. One of the benefits is that when the desired power is generated at increased rotational speed, the torque demanded from the rotor decreases linearly, and as a consequence, a motor of smaller size can be used. Using a rotor with high rotational speed in a system with mechanical bearings can, however, create undesirable vibrations, and therefore active magnetic bearings (AMBs) are often considered a good option for the main bearings, as the rotor then has no mechanical contact with other parts of the system but levitates on the magnetic forces. On the other hand, such systems can experience overloading or a sudden shutdown of the electrical system, whereupon the magnetic field collapses, and as a result of rotor delevitation, mechanical contact occurs. To manage such nonstandard operations, AMB systems require mechanical touchdown bearings with an oversized bore diameter. The need for touchdown bearings seems to be one of the barriers preventing greater adoption of AMB technology, because in the event of an uncontrolled touchdown, failure may occur, for example, in the bearing's cage or balls, or in the rotor. This dissertation consists of two parts. First, touchdown bearing misalignment in the contact event is studied. It is found that misalignment increases the likelihood of a potentially damaging whirling motion of the rotor. A model for analysis of the stresses occurring in the rotor is proposed. In the studies of misalignment and stresses, a flexible rotor model based on a finite element approach is applied. Simplified models of cageless and caged bearings are used to describe the touchdown bearings. The results indicate that an increase in misalignment can have a direct influence on the bending and shear stresses occurring in the rotor during the contact event. Thus, it was concluded that analysis of the stresses arising in the contact event is essential to guarantee appropriate system dimensioning for possible contact events with misaligned touchdown bearings. One of the conclusions drawn from the first part of the study is that knowledge of the forces affecting the balls and cage of the touchdown bearings can enable a more reliable estimation of the service life of the bearing. Therefore, the second part of the dissertation investigates the forces occurring in the cage and balls of touchdown bearings and introduces two detailed models of touchdown bearings in which all bearing parts are modelled as independent bodies. Two multibody-based two-dimensional models of touchdown bearings are introduced for dynamic analysis of the contact event. All parts of the bearings are modelled with geometrical surfaces, and the bodies interact with each other through elastic contact forces. To assist in identification of the forces affecting the balls and cage in the contact event, the first model describes a touchdown bearing without a cage, and the second model describes a touchdown bearing with a cage. The introduced models are compared with the simplified models used in the first part of the dissertation through a parametric study. Damage to the rotor, cage, and balls is among the main causes of failures of AMB systems. The stresses in the rotor in the contact event are defined in this work. Furthermore, the forces affecting the key bodies of the bearings, the cage and balls, can be studied using the models of touchdown bearings introduced in this dissertation. Knowledge obtained from the introduced models is valuable since it can enable an optimum structure for a rotor and touchdown bearings to be designed.
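Elastic contact forces of the kind mentioned above are commonly modelled with a Hertzian point-contact law, F = k * delta^(3/2). The sketch below is a generic illustration of that law, not the dissertation's bearing model; the stiffness value is an arbitrary assumption.

```python
def hertz_contact_force(delta, k=1.0e9):
    """Hertzian point-contact force F = k * delta**1.5.
    delta: penetration depth (m); k: contact stiffness (N/m^1.5), assumed."""
    return k * delta**1.5 if delta > 0 else 0.0  # no force without penetration

print(hertz_contact_force(1e-5))  # ~31.6 N for the assumed stiffness
```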
Abstract:
There is much evidence to support an age-related decline in source memory ability. However, the underlying mechanisms responsible for this decline are not well understood. The current study was carried out to determine the electrophysiological correlates of source memory discrimination in younger and older adults. Event-related potentials (ERPs) and continuous electrocardiographic (ECG) data were collected from younger (M = 21 years) and older (M = 71 years) adults during a source memory task. Older adults were more likely to make source memory errors for recently repeated, non-target words than were younger adults. Moreover, their ERP records for correct trials showed an increased amplitude in the late positive (LP) component (400-800 ms) for the most recently presented non-target stimuli relative to the LP noted for target items. Younger adults showed the opposite pattern, with a large LP component for target items and a much smaller LP component for the recently repeated non-target items. Parasympathetic activity in the vagus nerve was computed from the ECG data (Porges, 1985). The resulting measure, vagal tone, was used as an index of physiological responsivity. Vagal tone was negatively related to the LP amplitude for the most recently repeated, non-target words in both groups, after accounting for age effects. The ERP data support the hypothesis that the tendency of older adults to make source memory errors is related to the ability to selectively control attentional processes during task performance. Furthermore, the relationship between vagal tone and ERP reactivity suggests that there is a physiological basis to the heightened reactivity measured in the LP response to recently repeated non-target items: under decreased physiological resources, older adults show an impairment in the ability to selectively inhibit bottom-up, stimulus-based properties in favour of task-related goals. The inconsistency of these results with other explanatory models of source memory deficits is discussed. It is concluded that the data are consistent with a physiological reactivity model requiring inhibition of reactivity to irrelevant, but perceptually fluent, stimuli.
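As an aside on the ERP measure used above: the mean amplitude of the LP component is typically obtained by averaging the epoched signal over the 400-800 ms window. The numpy sketch below illustrates this on synthetic epochs; the sampling rate and array shapes are assumptions, not the study's recording parameters.

```python
import numpy as np

fs = 250                                     # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
epochs = rng.normal(size=(40, fs))           # 40 trials x 1 s post-stimulus EEG (uV)

# Mean amplitude of the late positive (LP) component, 400-800 ms window
i0, i1 = int(0.4 * fs), int(0.8 * fs)
lp_amplitude = epochs[:, i0:i1].mean()       # grand mean over trials and window
print(f"LP mean amplitude: {lp_amplitude:.3f} uV")
```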
Abstract:
The present research focused on the pathways through which the symptoms of posttraumatic stress disorder (PTSD) may negatively impact intimacy. Previous research has confirmed a link between self-reported PTSD symptoms and intimacy; however, a thorough examination of mediating paths, partner effects, and secondary traumatization has not yet been realized. With a sample of 297 heterosexual couples, intraindividual and dyadic models were developed to explain the relationships between PTSD symptoms and intimacy in the context of interdependence theory, attachment theory, and models of self-preservation (e.g., fight-or-flight). The current study replicated the findings of others and supported a process in which affective (alexithymia, negative affect, positive affect) and communication (demand-withdraw behaviour, self-concealment, and constructive communication) pathways mediate the intraindividual and dyadic relationships between PTSD symptoms and intimacy. Moreover, it also found that the PTSD symptoms of each partner were significantly related; however, this was only the case for those dyads in which the partners had disclosed nearly everything about their traumatic experiences. As such, secondary traumatization was supported. Finally, although the overall pattern of results suggests a total negative effect of PTSD symptoms on intimacy, a sex difference was evident such that the direct effect of the woman's PTSD symptoms was positively associated with both her and her partner's intimacy. It is possible that the Tend-and-Befriend model of threat response, wherein women are said to foster social bonds in the face of distress, may account for this sex difference. Overall, however, it is clear that PTSD symptoms were negatively associated with relationship quality, and attention to this impact in the development of diagnostic criteria and treatment protocols is necessary.
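The mediated paths described above are commonly quantified as a product of regression coefficients (indirect effect = a x b). The sketch below shows that computation on simulated data; it is a generic illustration of path mediation, not the study's dyadic model, and all variable names and effect sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 297
ptsd = rng.normal(size=n)                      # predictor (PTSD symptoms)
alexithymia = 0.5 * ptsd + rng.normal(size=n)  # hypothetical mediator
intimacy = -0.4 * alexithymia + rng.normal(size=n)

# Path a: PTSD -> mediator; path b: mediator -> intimacy, controlling for PTSD
a = np.polyfit(ptsd, alexithymia, 1)[0]
X = np.column_stack([alexithymia, ptsd, np.ones(n)])
b = np.linalg.lstsq(X, intimacy, rcond=None)[0][0]
print(f"indirect effect a*b = {a * b:.3f}")    # expected near 0.5 * -0.4 = -0.2
```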
Abstract:
Volume (density)-independent pair potentials cannot describe metallic cohesion adequately, as the presence of the free electron gas renders the total energy strongly dependent on the electron density. The embedded atom method (EAM) addresses this issue by replacing part of the total energy with an explicitly density-dependent term called the embedding function. Finnis and Sinclair proposed a model where the embedding function is taken to be proportional to the square root of the electron density. Models of this type are known as Finnis-Sinclair many-body potentials. In this work we study a particular parametrization of the Finnis-Sinclair type potential, called the "Sutton-Chen" model, and a later version, called the "Quantum Sutton-Chen" model, to study the phonon spectra and the temperature variation of thermodynamic properties of fcc metals. Both models give poor results for thermal expansion, which can be traced to rapid softening of transverse phonon frequencies with increasing lattice parameter. We identify the power-law decay of the electron density with distance assumed by the model as the main cause of this behaviour and show that an exponentially decaying form of charge density improves the results significantly. Results for the Sutton-Chen model and our improved version of it are compared for four fcc metals: Cu, Ag, Au and Pt. The calculated properties are the phonon spectra, thermal expansion coefficient, isobaric heat capacity, adiabatic and isothermal bulk moduli, atomic root-mean-square displacement and Grüneisen parameter. For the sake of comparison we have also considered two other models where the distance dependence of the charge density is an exponential multiplied by polynomials. None of these models exhibits the instability against thermal expansion (premature melting) shown by the Sutton-Chen model. We also present results obtained via pure pair potential models, in order to identify advantages and disadvantages of the methods used to obtain the parameters of these potentials.
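For orientation, the Sutton-Chen form referred to above writes the total energy as E = eps * [ (1/2) sum_{i!=j} (a/r_ij)^n - c * sum_i sqrt(rho_i) ] with local density rho_i = sum_{j!=i} (a/r_ij)^m. The sketch below evaluates this energy for a small cluster with illustrative, unfitted parameters (the published models use element-specific values).

```python
import numpy as np

def sutton_chen_energy(pos, eps=1.0, a=1.0, n=10, m=8, c=3.0):
    """Total Sutton-Chen energy: repulsive pair term plus sqrt-density embedding."""
    r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(r, np.inf)              # exclude self-interaction
    pair = 0.5 * ((a / r) ** n).sum()        # (1/2) sum over i != j of (a/r)^n
    rho = ((a / r) ** m).sum(axis=1)         # local electron density per atom
    return eps * (pair - c * np.sqrt(rho).sum())

cluster = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1.0]])
print(sutton_chen_energy(cluster))
```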
Object-Oriented Genetic Programming for the Automatic Inference of Graph Models for Complex Networks
Abstract:
Complex networks are systems of entities that are interconnected through meaningful relationships. The relations between entities form a structure whose statistical complexity is not the product of random chance. In the study of complex networks, many graph models have been proposed to model the behaviours observed. However, constructing graph models manually is tedious and problematic, and many of the models proposed in the literature have been cited as having inaccuracies with respect to the complex networks they represent. Recently, an approach that automates the inference of graph models was proposed by Bailey [10]. The proposed methodology employs genetic programming (GP) to produce graph models that approximate various properties of an exemplary graph of a targeted complex network. However, a great deal is already known about complex networks in general, and often specific knowledge is held about the network being modelled. Such knowledge, albeit incomplete, is important in constructing a graph model, yet it is difficult to incorporate using existing GP techniques. Thus, this thesis proposes a novel GP system which can incorporate incomplete expert knowledge to assist in the evolution of a graph model. Inspired by existing graph models, an abstract graph model was developed to serve as an embryo for inferring graph models of some complex networks. The GP system and abstract model were used to reproduce well-known graph models. The results indicated that the system was able to evolve models that produced networks with structural similarities to the networks generated by the respective target models.
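One of the well-known graph models of the kind mentioned above is preferential attachment (Barabási-Albert), in which new nodes attach to existing nodes with probability proportional to degree. The hand-rolled sketch below is offered as an example of such a target model, not as the thesis's GP system or abstract embryo.

```python
import random

def preferential_attachment(n, m=2, seed=42):
    """Grow a graph where each new node attaches to m existing nodes chosen
    with probability proportional to degree (via a stub list)."""
    rng = random.Random(seed)
    edges, stubs = [], list(range(m))        # start from m seed nodes, one stub each
    for v in range(m, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))   # degree-biased sampling
        for u in targets:
            edges.append((u, v))
            stubs += [u, v]                  # each endpoint gains a stub
    return edges

print(len(preferential_attachment(100)))     # (100 - 2) * 2 = 196 edges
```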
Abstract:
This thesis, entitled "Reliability Modelling and Analysis in Discrete Time: Some Concepts and Models Useful in the Analysis of Discrete Lifetime Data", consists of five chapters. In Chapter II we take up the derivation of some general results useful in reliability modelling that involve two-component mixtures. Expressions for the failure rate, mean residual life and second moment of residual life of the mixture distributions in terms of the corresponding quantities in the component distributions are investigated. Some applications of these results are also pointed out. The role of the geometric, Waring and negative hypergeometric distributions as models of life lengths in the discrete time domain has already been discussed. While describing various reliability characteristics, it was found that they can often be considered as a class. The applicability of these models in single populations naturally extends to the case of populations composed of sub-populations, making mixtures of these distributions worth investigating. Accordingly, the general properties, various reliability characteristics and characterizations of these models are discussed in Chapter III. Inference of parameters in a mixture distribution is usually a difficult problem, because the mass function of the mixture is a linear function of the component masses, which makes manipulation of the likelihood equations, least-squares function, etc. and the resulting computations very difficult. We show that one of our characterizations helps in inferring the parameters of the geometric mixture without computational hazards. As mentioned in the review of results in the previous sections, partial moments have not been studied extensively in the literature, especially in the case of discrete distributions. Chapters IV and V deal with descending and ascending partial factorial moments. Apart from studying their properties, we prove characterizations of distributions by functional forms of partial moments and establish recurrence relations between successive moments for some well-known families. It is further demonstrated that partial moments are equally efficient and convenient compared to many of the conventional tools for resolving practical problems in reliability modelling and analysis. The study concludes by indicating some new problems that surfaced during the course of the present investigation which could be the subject of future work in this area.
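For concreteness, two-component mixture results of the kind discussed above rest on identities of the following form, standard in the discrete-time reliability literature (not copied from the thesis), where f, S and h denote the mass function, survival function S(t) = P(X >= t), and failure rate h(t) = f(t)/S(t):

```latex
S(t) = p\,S_1(t) + (1-p)\,S_2(t), \qquad
h(t) = \frac{p\,f_1(t) + (1-p)\,f_2(t)}{S(t)}
     = w(t)\,h_1(t) + \bigl(1 - w(t)\bigr)\,h_2(t),
\quad w(t) = \frac{p\,S_1(t)}{S(t)} .
```

That is, the mixture failure rate is a time-varying convex combination of the component failure rates, with weights given by the conditional probability of membership in each component among survivors.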
Abstract:
Pollutants that enter the earth's atmosphere become part of it, and hence their dispersion, dilution, direction of transport, etc. are governed by the meteorological conditions. The thesis deals with the study of the atmospheric dispersion capacity, wind climatology, atmospheric stability, and pollutant distribution by means of a model, along with suggestions for comprehensive planning for the industrially developing city of Cochin. The definition, sources, types and effects of air pollution are dealt with briefly. The influence of various meteorological parameters, such as the vector wind, temperature and its vertical structure, and atmospheric stability, on pollutant dispersal has been studied. The importance of inversions, mixing heights and ventilation coefficients is brought out. The spatial variation of mixing heights, studied for the first time over a microscale region, serves to delineate the regions of good and poor dispersal capacity. A study of the wind direction fluctuation σθ and its relation to stability and mixing heights was shown to be very useful, and it was shown that the method of computing σθ needs to be re-examined. The development of the Gaussian plume model, along with its application to multiple sources, is presented. The pollutant chosen was sulphur dioxide, and industrial sources alone were considered. The percentage frequency of occurrence of inversions and isothermals is found to be low in all months of the year. The spatial variation of mixing heights revealed that a single mixing height cannot be taken as representative of the whole city, parts of which have low mixing heights; the monsoon months showed the lowest mixing heights. The study of ventilation coefficients showed values less than the required optimum value of 6000 m²/s. However, the low values may be due to the consideration of the surface wind alone instead of the vertically averaged wind. Relatively more calm conditions and light winds during the night and strong winds during the daytime were observed. During most of the year, westerlies during the daytime and northeasterlies during the night are the dominant winds. Unstable conditions with high values of σθ during the daytime and stable conditions with lower values of σθ during the night are the prominent features. The monsoon months showed neutral stability most of the time. A study of σθ and the Pasquill stability categories revealed the difficulty of assigning a unique value of σθ to each stability category. For the first time, regression equations have been developed relating mixing heights and σθ. A closer examination of σθ revealed that half of the range of wind direction fluctuations should be taken, instead of one-sixth, to compute σθ. The spatial distribution of SO2 showed a more or less uniform distribution with a slight intrusion towards the south. The winter months showed low concentrations, contrary to expectations. The variation of the concentration is found to be influenced more by the mixing height and the stack height than by the wind speed. In the densely populated areas the concentration is more than the threshold limit value. However, the values reported appear to be high, because no depletion of the material through dry or wet deposition is assumed, and also because of the inclusion of calm conditions with a very light wind speed. A reduction of emissions during the night with a consequent rise during the daytime would bring down the levels of pollution.

The probable locations for new industries could be the extreme southeast parts, because the concentration towards the north falls off very quickly, resulting in low concentrations. In such a case the pollutant spread would be towards the south and west, thus keeping the city interior relatively free from pollution. A more detailed examination of the pollutant spread by means of models that take dry and wet deposition into account may be necessary. Nevertheless, the present model serves to give the trend of the distribution of pollutant concentration, with which one can suggest optimum locations for new industries.
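The ground-level form of the Gaussian plume model used above, for an elevated point source of strength Q at effective stack height H (with ground reflection), is C(x, y) = Q/(pi * u * sigma_y * sigma_z) * exp(-y²/2sigma_y²) * exp(-H²/2sigma_z²). The sketch below evaluates it and sums over multiple sources; all source strengths, heights and dispersion parameters are illustrative, not the thesis's inputs.

```python
import numpy as np

def plume_glc(q, u, y, sigma_y, sigma_z, h):
    """Ground-level concentration (g/m^3) from one elevated point source."""
    return (q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * np.exp(-h**2 / (2 * sigma_z**2)))

# Two hypothetical SO2 sources: (emission g/s, crosswind offset m, stack height m)
sources = [(500.0, 0.0, 60.0), (300.0, 200.0, 40.0)]
u, sigma_y, sigma_z = 3.0, 80.0, 40.0        # wind speed and dispersion at a given x
total = sum(plume_glc(q, u, y, sigma_y, sigma_z, h) for q, y, h in sources)
print(f"total SO2 concentration: {total * 1e6:.1f} ug/m^3")
```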
Abstract:
This thesis is an outcome of investigations carried out on the development of an Artificial Neural Network (ANN) model to implement the 2-D DFT at high speed. A new definition of the 2-D DFT relation is presented. This new definition enables DFT computation to be organized in stages involving only real addition except at the final stage of computation. The number of stages is always fixed at 4. Two different strategies are proposed: 1) a visual representation of 2-D DFT coefficients; 2) a neural network approach. The visual representation scheme can be used to compute, analyze and manipulate 2-D signals such as images in the frequency domain in terms of symbols derived from the 2x2 DFT. This, in turn, can be represented in terms of real data. This approach can help analyze signals in the frequency domain even without computing the DFT coefficients. A hierarchical neural network model is developed to implement the 2-D DFT. Presently, this model is capable of implementing the 2-D DFT for a particular order N such that ((N))4 = 2, i.e., N mod 4 = 2. The model can be developed into one that can implement the 2-D DFT for any order N up to a set maximum limited by the hardware constraints. The reported method shows potential for implementing the 2-D DFT in hardware as a VLSI/ASIC.
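To illustrate the 2x2 building block mentioned above: the DFT of a real 2x2 block needs only real additions and subtractions, since the 2x2 DFT kernel contains only +1 and -1. The numpy check below demonstrates this generic property; it is not the thesis's new definition.

```python
import numpy as np

block = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
F = np.fft.fft2(block)
# For a 2x2 block every coefficient is a real sum/difference:
# [[a+b+c+d, a-b+c-d], [a+b-c-d, a-b-c+d]]
print(F.real)                   # [[10. -2.] [-4.  0.]]
print(np.allclose(F.imag, 0))   # True: no complex arithmetic is needed
```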
Abstract:
The objective of the study of "Queueing models with vacations and working vacations" was twofold: to minimize the server idle time and to improve the efficiency of the service system. Keeping this in mind we considered queueing models in different settings in this thesis. Chapter 1 introduced the concepts and techniques used in the thesis and also provided a summary of the work done. In Chapter 2 we considered an M/M/2 queueing model, where one of the two heterogeneous servers takes multiple vacations. We studied the performance of the system with the help of busy period analysis and computation of the mean waiting time of a customer in the stationary regime. A conditional stochastic decomposition of the queue length was derived. To improve the efficiency of this system we came up with a modified model in Chapter 3. In this model the vacationing server attends to customers during vacation, at a slower service rate. Chapter 4 analyzed a working vacation queueing model in a more general setting. The introduction of the N-policy makes this MAP/PH/1 model different from all working vacation models available in the literature. A detailed analysis of the performance of the model was provided with the help of computed measures such as the mean waiting time of a customer who gets service in normal mode and in vacation mode.
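As context for the decomposition results mentioned above: in the classical M/M/1 queue with multiple exponential vacations, the mean wait decomposes as the plain M/M/1 wait plus the mean residual vacation time. The sketch below evaluates this simpler textbook case, not the thesis's heterogeneous-server or MAP/PH/1 models.

```python
def mm1_multiple_vacations_wait(lam, mu, theta):
    """Mean wait: M/M/1 queueing delay plus mean residual of an Exp(theta) vacation."""
    assert lam < mu, "stability requires lambda < mu"
    w_mm1 = lam / (mu * (mu - lam))   # mean queueing delay in a plain M/M/1 queue
    residual_vacation = 1.0 / theta   # Exp(theta): E[V^2] / (2 E[V]) = 1/theta
    return w_mm1 + residual_vacation

print(mm1_multiple_vacations_wait(lam=0.8, mu=1.0, theta=0.5))  # 4.0 + 2.0 = 6.0
```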
Abstract:
In this thesis we have presented several inventory models of utility. Of these, inventory with retrial of unsatisfied demands and inventory with postponed work are quite recently introduced concepts, the latter being introduced here for the first time. Inventory with service time is relatively new, with only a handful of research works reported. The difficulty encountered in inventory with service, unlike the queueing process, is that even the simplest case needs a 2-dimensional process for its description. Only in certain specific cases can we introduce generating functions to solve for the system state distribution. However, numerical procedures can be developed for solving these problems.
Abstract:
The restarting automaton is a restricted model of computation that was introduced by Jancar et al. to model the so-called analysis by reduction, which is a technique used in linguistics to analyze sentences of natural languages. The most general models of restarting automata make use of auxiliary symbols in their rewrite operations, although this ability does not directly correspond to any aspect of the analysis by reduction. Here we put restrictions on the way in which restarting automata use auxiliary symbols, and we investigate the influence of these restrictions on their expressive power. In fact, we consider two types of restrictions. First, we consider the number of auxiliary symbols in the tape alphabet of a restarting automaton as a measure of its descriptional complexity. Secondly, we consider the number of occurrences of auxiliary symbols on the tape as a dynamic complexity measure. We establish some lower and upper bounds with respect to these complexity measures concerning the ability of restarting automata to recognize the (deterministic) context-free languages and some of their subclasses.
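As a toy illustration of analysis by reduction (here without auxiliary symbols): repeatedly scanning for an "ab" factor, deleting it, and restarting accepts exactly the Dyck language over {a, b} (a as an opening bracket, b as a closing one). The sketch below mimics that scan-rewrite-restart cycle; it is a didactic simplification, not one of the restricted models studied in the paper.

```python
def accepts_dyck(word: str) -> bool:
    """Analysis by reduction for the Dyck language over {a, b}:
    delete one 'ab' factor, restart, and repeat until the tape is empty."""
    while word:
        i = word.find("ab")             # scan the tape for a reducible factor
        if i < 0:
            return False                # no rewrite applies: reject
        word = word[:i] + word[i + 2:]  # rewrite (delete) and restart
    return True                         # tape reduced to the empty word: accept

print(accepts_dyck("aabbab"), accepts_dyck("aabbb"))  # True False
```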
Abstract:
The identification of chemical mechanisms that can exhibit oscillatory phenomena in reaction networks is currently of intense interest. In particular, the parametric question of the existence of Hopf bifurcations has gained increasing popularity due to its relation to the oscillatory behavior around the fixed points. However, the detection of oscillations in high-dimensional systems and systems with constraints by the available symbolic methods has proven to be difficult. The development of new efficient methods is therefore required to tackle the complexity caused by the high dimensionality and non-linearity of these systems. In this thesis, we mainly present efficient algorithmic methods to detect Hopf bifurcation fixed points in (bio-)chemical reaction networks with symbolic rate constants, thereby yielding information about the oscillatory behavior of the networks. The methods use representations of the systems in convex coordinates that arise from stoichiometric network analysis. One of the methods, called HoCoQ, reduces the problem of determining the existence of Hopf bifurcation fixed points to a first-order formula over the ordered field of the reals that can then be solved using computational-logic packages. The second method, called HoCaT, uses ideas from tropical geometry to formulate a more efficient method that is incomplete in theory but worked very well for the attempted high-dimensional models involving more than 20 chemical species. The instability of reaction networks may lead to oscillatory behaviour; therefore, we investigate some criteria for their stability using convex coordinates and quantifier elimination techniques. We also study Muldowney's extension of the classical Bendixson-Dulac criterion for excluding periodic orbits to higher dimensions for polynomial vector fields, and we discuss the use of simple conservation constraints and of parametric constraints for describing simple convex polytopes on which periodic orbits can be excluded by Muldowney's criteria. All developed algorithms have been integrated into a common software framework called PoCaB (platform to explore bio-chemical reaction networks by algebraic methods), allowing for automated computation workflows from the problem descriptions. PoCaB also contains a database for the algebraic entities computed from the models of chemical reaction networks.
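As a minimal example of the kind of symbolic Hopf condition such methods decide: for a planar system, a Hopf bifurcation occurs at a fixed point where the Jacobian has zero trace and positive determinant. The sympy sketch below finds that locus for the Brusselator, a standard small reaction network; it is far simpler than the high-dimensional convex-coordinate methods described above.

```python
import sympy as sp

a, b, x, y = sp.symbols("a b x y", positive=True)
f = a - (b + 1) * x + x**2 * y          # Brusselator rate equations
g = b * x - x**2 * y

fp = {x: a, y: b / a}                    # the unique positive fixed point
J = sp.Matrix([f, g]).jacobian([x, y]).subs(fp)

# Planar Hopf condition: trace(J) = 0 with det(J) > 0
print(sp.simplify(J.trace()))            # b - a**2 - 1
print(sp.simplify(J.det()))              # a**2 > 0, so the determinant condition holds
print(sp.solve(sp.Eq(J.trace(), 0), b))  # [a**2 + 1]: Hopf locus b = 1 + a**2
```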
Abstract:
The performance of a model-based diagnosis system can be affected by several sources of uncertainty, such as model errors, uncertainty in measurements, and disturbances. This uncertainty can be handled by means of interval models. The aim of this thesis is to propose a methodology for fault detection, isolation and identification based on interval models. The methodology includes algorithms to obtain, in an automatic way, the symbolic expressions of the residual generators enhancing the structural isolability of the faults, in order to design the fault detection tests. These algorithms are based on the structural model of the system. The stages of fault detection, isolation, and identification are stated as constraint satisfaction problems in continuous domains and solved by means of interval-based consistency techniques. The qualitative fault isolation is enhanced by reasoning in which the signs of the symptoms are derived from analytical redundancy relations or bond graph models of the system. An initial and empirical analysis regarding the differences between interval-based and statistical-based techniques is presented in this thesis. The performance and efficiency of the contributions are illustrated through several application examples covering different levels of complexity.
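A minimal sketch of an interval-based fault detection test of the kind described above: propagate interval parameters through the model to a predicted output envelope and flag a fault when the measurement falls outside it. The static model, bounds, and data are all illustrative assumptions, not the thesis's residual generators.

```python
def interval_residual_test(u, y_meas, k_lo=0.9, k_hi=1.1):
    """Static model y = k*u with k in [k_lo, k_hi]: declare a fault when
    y_meas is inconsistent with every model in the interval family."""
    y_lo, y_hi = sorted((k_lo * u, k_hi * u))   # predicted output envelope
    return not (y_lo <= y_meas <= y_hi)         # True -> fault detected

print(interval_residual_test(u=2.0, y_meas=2.05))  # False: consistent, no fault
print(interval_residual_test(u=2.0, y_meas=2.60))  # True: fault detected
```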