974 results for code rewriting model


Relevance: 80.00%

Abstract:

Satellite laser ranging (SLR) to the satellites of the global navigation satellite systems (GNSS) provides substantial and valuable information about the accuracy and quality of GNSS orbits and allows for SLR-GNSS co-location in space. In the framework of the NAVSTAR-SLR experiment, two GPS satellites of Block IIA were equipped with laser retroreflector arrays (LRAs), whereas all satellites of the GLONASS system are equipped with LRAs in an operational mode. We summarize the outcome of the NAVSTAR-SLR experiment by processing 20 years of SLR observations to GPS and 12 years of SLR observations to GLONASS satellites, using the reprocessed microwave orbits provided by the Center for Orbit Determination in Europe (CODE). The dependency of the SLR residuals on the size, shape, and number of corner cubes in LRAs is studied. We show that the mean SLR residuals and the RMS of the residuals depend on the coating of the LRAs and on the block or type of GNSS satellite. The mean SLR residuals are also a function of the equipment used at SLR stations, including single-photon and multi-photon detection modes. We also show that SLR observations to GNSS satellites are important for validating GNSS orbits and for assessing deficiencies in solar radiation pressure models. We found that the satellite signature effect, defined as the spread of optical pulse signals due to reflection from multiple reflectors, causes variations of the mean SLR residuals of up to 15 mm between observations at nadir angles of 0° and 14° in the case of multi-photon SLR stations. For single-photon SLR stations this effect does not exceed 1 mm. When using the new empirical CODE orbit model (ECOM), the mean SLR residual falls into the range 0.1–1.8 mm for high-performing single-photon SLR stations observing GLONASS-M satellites with uncoated corner cubes. For the best-performing multi-photon stations the mean SLR residuals are between −12.2 and −25.6 mm due to the satellite signature effect.
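
The nadir-angle dependence described above is, at its core, a binned statistic of range residuals. A minimal sketch of that computation, with hypothetical arrays standing in for the residuals (mm) and nadir angles (degrees), not data from the paper:

    # Sketch: mean and RMS of SLR residuals binned by nadir angle.
    # `residuals_mm` and `nadir_deg` below are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    nadir_deg = rng.uniform(0.0, 14.0, size=5000)
    # Toy signature effect: residual grows with nadir angle, plus noise.
    residuals_mm = 1.0 * nadir_deg + rng.normal(0.0, 10.0, size=5000)

    bins = np.arange(0.0, 15.0, 2.0)              # 2-degree nadir bins
    idx = np.digitize(nadir_deg, bins) - 1
    for b in range(len(bins) - 1):
        r = residuals_mm[idx == b]
        print(f"{bins[b]:4.0f}-{bins[b+1]:2.0f} deg: "
              f"mean = {r.mean():6.2f} mm, RMS = {np.sqrt((r**2).mean()):6.2f} mm")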

Relevance: 80.00%

Abstract:

On the eastern flank of the Juan de Fuca Ridge, reaction between upwelling basement fluid and sediment alters hydrothermal fluxes of Ca, SiO2(aq), SO4, PO4, NH4, and alkalinity. We used the Global Implicit Multicomponent Reactive Transport (GIMRT) code to model the processes occurring in the sediment column (diagenesis, sediment burial, fluid advection, and multicomponent diffusion) and to estimate net seafloor fluxes of solutes. Within the sediment section, the reactions controlling the concentrations of the solutes listed above are organic matter degradation via SO4 reduction, dissolution of amorphous silica, reductive dissolution of amorphous Fe(III)-(hydr)oxide, and precipitation of calcite, carbonate fluorapatite, and amorphous Fe(II)-sulfide. Rates of specific discharge estimated from pore-water Mg profiles are 2 to 3 mm/yr. At this site the basement hydrothermal system is a source of NH4, SiO2(aq), and Ca, and a sink of SO4, PO4, and alkalinity. Reaction within the sediment column increases the hydrothermal sources of NH4 and SiO2(aq), increases the hydrothermal sinks of SO4 and PO4, and decreases the hydrothermal source of Ca. Reaction within the sediment column has a spatially variable effect on the hydrothermal flux of alkalinity. Because the model we used was capable of simulating the observed pore-water chemistry by using mechanistic descriptions of the biogeochemical processes occurring in the sediment column, it could be used to examine the physical controls on hydrothermal fluxes of solutes in this setting. Two series of simulations in which we varied fluid flow rate (1 to 100 mm/yr) and sediment thickness (10 to 100 m) predict that given the reactions modeled in this study, the sediment section will contribute most significantly to fluxes of SO4 and NH4 at slow flow rates and intermediate sediment thickness and to fluxes of SiO2(aq) at slow flow rates and large sediment thickness. Reaction within the sediment section could approximately double the hydrothermal sink of PO4 over a range of flow rates and sediment thickness, and could slightly decrease (by
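
Specific discharge in settings like this is commonly inferred by fitting a steady one-dimensional advection-diffusion solution to a near-conservative pore-water profile such as Mg. A minimal sketch under those assumptions (synthetic profile; the thickness, diffusivity, and boundary concentrations are hypothetical; GIMRT itself solves the full multicomponent reactive problem):

    # Sketch: estimate upward fluid velocity v from a pore-water Mg profile by
    # fitting the steady 1-D advection-diffusion solution
    #   C(z) = C0 + (CL - C0) * (exp(v*z/D) - 1) / (exp(v*L/D) - 1).
    import numpy as np
    from scipy.optimize import curve_fit

    L = 50.0             # sediment thickness [m], assumed
    D = 0.01             # effective diffusivity [m^2/yr], assumed
    C0, CL = 53.0, 2.0   # Mg at seafloor and basement [mmol/L], assumed

    def profile(z, v):
        return C0 + (CL - C0) * np.expm1(v * z / D) / np.expm1(v * L / D)

    z_obs = np.linspace(0.0, L, 20)
    c_obs = profile(z_obs, 0.0025) + np.random.default_rng(1).normal(0, 0.3, 20)

    (v_fit,), _ = curve_fit(profile, z_obs, c_obs, p0=[0.001])
    print(f"fitted specific discharge: {v_fit*1000:.2f} mm/yr")  # ~2-3 mm/yr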

Relevance: 80.00%

Abstract:

Concentrating solar power ("energía termosolar de concentración" in Spanish) is a technology based on capturing the thermal power of solar radiation in a way that reaches temperatures able to drive a conventional (or advanced) thermodynamic cycle to generate electricity; its future depends mainly on our ability to concentrate solar radiation cheaply and efficiently. The present thesis addresses some of the important issues related to this problem. The need to reduce the cost of concentrating direct solar radiation, without jeopardizing the thermodynamic objective of heating a fluid up to the required temperature, is of prime importance. Linear Fresnel collectors have been identified in the scientific literature as a technology with high potential to achieve this cost reduction, and were selected here for a number of reasons, particularly the degrees of freedom of this concentrating configuration and its currently immature state.
In order to respond to this challenge, a very detailed study of the optical properties of linear Fresnel collectors has been carried out, combining analytic and numerical methods. First, the effect of the design variables on the energy impinging onto the reflecting surface was studied using analytically developed equations, together with models that predict the location and direct normal irradiance of the sun at any moment. Similarly, the errors due to off-axis aberration, to the aperture of the reflected energy beam, and to shading and blocking between mirrors were obtained analytically. This allowed the comparison of different mirror shapes (flat, cylindrical or parabolic), as well as a preliminary optimization of the location and width of mirrors and receiver, with no need for time-consuming numerical models. Second, a Monte Carlo ray-trace model was developed, both to check the validity of the analytic results and, above all, because the analytic equations are not precise enough to describe the reflection process. The code is designed specifically for linear Fresnel collectors, which reduces the computing time by several orders of magnitude compared with a more general commercial program; this justified developing a new code rather than buying a license for an existing one. The model was first used to compare the radiation flux intensities and efficiencies of linear Fresnel collectors, with both multitube and secondary-reflector receivers, against parabolic trough collectors. Finally, the results of the analytic study were combined with the numerical model to optimize the solar field for different orientations (North-South and East-West), different locations (Almería and Aswan), different tilts of the field towards the Tropic (from 0 deg to 32 deg) and different minimum flux intensities at the center of the receiver (10 kW/m2 and 25 kW/m2).
This work has led to several important findings that should be considered in the design of Fresnel solar fields. First, flat mirrors should not be used, as cylindrical and parabolic mirrors lead to higher flux intensities and efficiencies. Second, East-West orientations are more efficient in locations relatively far from the Tropics, such as Almería, whereas in locations closer to the Tropics, such as Aswan, North-South fields lead to higher annual efficiencies. Notably, East-West solar fields require approximately half as many mirrors as North-South fields, can be tilted towards the Equator to increase efficiency, and attain similar values of flux intensity at the receiver every day at midday; North-South fields, on the other hand, deliver a more even flux over the course of a single day. Finally, it has been shown that analytically pre-optimized designs, with mirror width and mirror spacing varying across the field, can increase the energy generated per square meter of reflecting surface by up to 6%. The annual optical efficiency of parabolic troughs is 23% higher than that of Fresnel fields in Almería, but only around 9% higher in Aswan. This implies that, to attain the same levelized cost of electricity as parabolic troughs, the required reduction in installation cost per square meter of mirror is in the range of 10-25%, and that linear Fresnel collectors are more suitable for low-latitude areas. As a consequence of the studies carried out in this thesis, an innovative storage system has been patented. It takes into account the variation of the flux intensity at the receiver over the day, especially for East-West oriented fields, and would allow the impinging radiation to be exploited for a longer time each day, appreciably increasing the optical and thermal efficiencies.
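
Stripped to the transversal plane, the core of such a ray-trace code is a Monte Carlo estimate of the fraction of reflected rays intercepted by the receiver. A deliberately simplified sketch (one ideally focused mirror row, sun at zenith, one combined Gaussian optical error; all values hypothetical and far simpler than the thesis code):

    # Sketch: 2-D Monte Carlo intercept factor for one ideally focused mirror
    # row. Each mirror point reflects toward the receiver center; a Gaussian
    # angular error (slope + tracking folded into one sigma) scatters the ray.
    import numpy as np

    rng = np.random.default_rng(42)
    n_rays = 200_000
    mirror_x0, mirror_w = 3.0, 0.6      # row center offset and width [m], assumed
    receiver_h, receiver_w = 8.0, 0.15  # receiver height and width [m], assumed
    sigma = 4e-3                        # total angular error [rad], assumed

    x = mirror_x0 + mirror_w * (rng.random(n_rays) - 0.5)  # hit point on mirror
    path = np.hypot(x, receiver_h)                         # distance to receiver
    err = rng.normal(0.0, sigma, n_rays)                   # angular error per ray
    miss = path * np.tan(err)           # lateral offset at the receiver plane
    gamma = np.mean(np.abs(miss) <= receiver_w / 2)
    print(f"intercept factor: {gamma:.3f}")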

Relevance: 80.00%

Abstract:

A numerical and experimental study of ballistic impacts on precipitation-hardened Inconel 718 nickel-base superalloy plates at various temperatures has been performed. A coupled elastoplastic-damage constitutive model with a Lode-angle-dependent failure criterion has been implemented in the LS-DYNA non-linear finite element code to model the mechanical behaviour of the alloy. The ballistic impact tests were carried out at three temperatures: room temperature (25 °C), 400 °C and 700 °C. The numerical study showed that the mesh size is crucial for correctly predicting the shear bands detected in the tested plates; mesh convergence was achieved for element sizes of the same order as the shear-band width. The predictions of residual velocity and ballistic limit are excellent for the high-temperature tests. The model is less accurate for the simulations performed at room temperature, though still in reasonable agreement with the experimental data. Additionally, the influence of the Lode angle on quasi-static failure patterns such as cup-cone and slanted failure has been studied numerically. The study revealed that the combined action of weakened constitutive equations and a Lode-angle-dependent failure criterion is necessary to predict these failure patterns.
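
The Lode angle that the failure criterion depends on is a function of the second and third invariants of the deviatoric stress. A short sketch of that computation, using the standard continuum-mechanics formulas rather than the paper's implementation:

    # Sketch: Lode angle from a Cauchy stress tensor via deviatoric invariants,
    #   J2 = (1/2) s:s,  J3 = det(s),  cos(3*theta) = (3*sqrt(3)/2)*J3/J2**1.5.
    import numpy as np

    def lode_angle(sigma):
        """Return the Lode angle theta in [0, pi/3] for a 3x3 stress tensor."""
        s = sigma - np.trace(sigma) / 3.0 * np.eye(3)   # deviatoric part
        j2 = 0.5 * np.tensordot(s, s)                   # second invariant
        j3 = np.linalg.det(s)                           # third invariant
        arg = np.clip(1.5 * np.sqrt(3.0) * j3 / j2**1.5, -1.0, 1.0)
        return np.arccos(arg) / 3.0

    # Uniaxial tension: theta = 0 with this convention.
    print(lode_angle(np.diag([200.0, 0.0, 0.0])))       # ~0.0 rad
    # Pure shear: cos(3*theta) = 0, so theta = pi/6.
    print(lode_angle(np.array([[0, 100, 0], [100, 0, 0], [0, 0, 0]], float)))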

Relevance: 80.00%

Abstract:

Almost a decade has passed since the objectives and benefits of autonomic computing were stated, yet even the latest system designs and deployments exhibit only limited and isolated elements of autonomic functionality. In previous work, we identified several of the key challenges behind this delay in the adoption of autonomic solutions, and proposed a generic framework for the development of autonomic computing systems that overcomes these challenges. In this article, we describe how existing technologies and standards can be used to realise our autonomic computing framework, and present its implementation as a service-oriented architecture. We show how this implementation employs a combination of automated code generation, model-based and object-oriented development techniques to ensure that the framework can be used to add autonomic capabilities to systems whose characteristics are unknown until runtime. We then use our framework to develop two autonomic solutions for the allocation of server capacity to services of different priorities and variable workloads, thus illustrating its application in the context of a typical data-centre resource management problem.
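
The article does not spell out its allocation policy, so the following is a hypothetical sketch of the kind of policy such a solution might implement (priority-weighted proportional sharing with demand caps), not the policy of the article:

    # Hypothetical policy sketch: split a fixed server capacity across services
    # in proportion to priority-weighted demand, capping each at its demand and
    # redistributing any leftover capacity.
    def allocate(capacity, services):
        """services: list of (name, priority, demand); returns {name: share}."""
        alloc = {name: 0.0 for name, _, _ in services}
        active = list(services)
        while capacity > 1e-9 and active:
            total_w = sum(p for _, p, _ in active)
            leftover, still_active = 0.0, []
            for name, p, demand in active:
                share = capacity * p / total_w
                grant = min(share, demand - alloc[name])
                alloc[name] += grant
                leftover += share - grant
                if alloc[name] < demand - 1e-9:
                    still_active.append((name, p, demand))
            capacity, active = leftover, still_active
        return alloc

    print(allocate(100.0, [("gold", 3, 50.0), ("silver", 2, 80.0), ("bronze", 1, 40.0)]))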

Relevance: 40.00%

Abstract:

Flash memory is a leading storage medium with excellent features such as random access and high storage density. However, it also faces significant reliability and endurance challenges. In flash memory, the charge level of a cell can easily be increased, but removing charge requires an expensive erasure operation. In this thesis we study rewriting schemes that enable the data stored in a set of cells to be rewritten by only increasing the charge levels of the cells. We consider two types of modulation scheme: conventional modulation based on the absolute levels of the cells, and a recently proposed scheme based on the relative cell levels, called rank modulation. The contributions of this thesis to the study of rewriting schemes for rank modulation include the following (a toy sketch of the push-to-the-top primitive follows the list): we

•propose a new method of rewriting in rank modulation, beyond the previously proposed method of “push-to-the-top”;

•study the limits of rewriting with the newly proposed method, and derive a tight upper bound of 1 bit per cell;

•extend the rank-modulation scheme to support rankings with repetitions, in order to improve the storage density;

•derive a tight upper bound of 2 bits per cell for rewriting in rank modulation with repetitions;

•construct an efficient rewriting scheme that asymptotically approaches the upper bound of 2 bits per cell.
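
As a concrete baseline, here is a toy sketch of the "push-to-the-top" rewrite in rank modulation: the state is the permutation of cells ordered by charge, and the only primitive raises one cell's charge above all others. The encoding shown is purely illustrative, not one of the thesis constructions:

    # Toy rank-modulation sketch: state = cells ordered from highest to lowest
    # charge; the only primitive is push-to-the-top (raise one cell above all).
    from itertools import permutations

    def push_to_top(ranking, cell):
        """Return the new ranking after pushing `cell` to the highest charge."""
        return (cell,) + tuple(c for c in ranking if c != cell)

    # Illustrative encoding: enumerate the 3! = 6 permutations of 3 cells and
    # use each permutation's index as the stored symbol.
    states = sorted(permutations(range(3)))
    decode = {perm: i for i, perm in enumerate(states)}

    state = (0, 1, 2)                     # cell 0 holds the highest charge
    print("stored symbol:", decode[state])
    state = push_to_top(state, 2)         # rewrite without erasing anything
    print("after push-to-top(2):", state, "->", decode[state])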

The next part of this thesis studies rewriting schemes for conventional absolute-level modulation. The model considered is called "write-once memory" (WOM). We focus on WOM schemes that achieve the capacity of the model. In recent years several capacity-achieving WOM schemes have been proposed, based on polar codes and randomness extractors. The contributions of this thesis to the study of WOM schemes include the following (a classic two-write WOM example follows the list): we

•propose a new capacity-achieving WOM scheme based on sparse-graph codes, and show its attractive properties for practical implementation;

•improve the design of polar WOM schemes to remove the reliance on shared randomness and include an error-correction capability.
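
As background for what "capacity-achieving" improves on, the classic Rivest-Shamir construction writes 2 bits twice into 3 write-once cells (rate 4/3); a sketch:

    # Rivest-Shamir write-once memory code: 2 bits, written twice, in 3 cells
    # whose bits can only change 0 -> 1.
    FIRST = {(0, 0): (0, 0, 0), (0, 1): (1, 0, 0),
             (1, 0): (0, 1, 0), (1, 1): (0, 0, 1)}

    def decode(cells):
        if sum(cells) <= 1:                    # first-generation codeword
            return next(m for m, c in FIRST.items() if c == cells)
        comp = tuple(1 - b for b in cells)     # second generation: complement
        return next(m for m, c in FIRST.items() if c == comp)

    def rewrite(cells, msg):
        """Second write: raise bits only."""
        if decode(cells) == msg:
            return cells
        target = tuple(1 - b for b in FIRST[msg])   # complement codeword
        assert all(t >= c for t, c in zip(target, cells)), "0->1 only"
        return target

    c = FIRST[(0, 1)]          # first write of message 01 -> (1, 0, 0)
    print(c, decode(c))
    c = rewrite(c, (1, 0))     # second write of message 10 -> (1, 0, 1)
    print(c, decode(c))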

The last part of the thesis studies the local rank-modulation (LRM) scheme, in which a sliding window going over a sequence of real-valued variables induces a sequence of permutations. The LRM scheme is used to simulate a single conventional multi-level flash cell. The simulated cell is realized by a Gray code traversing all the relative-value states where, physically, the transition between two adjacent states in the Gray code is achieved by using a single “push-to-the-top” operation. The main results of the last part of the thesis are two constructions of Gray codes with asymptotically-optimal rate.

Relevance: 40.00%

Abstract:

An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.
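
A minimal sketch of the population-coding idea behind such a model (Gaussian orientation tuning, pooled responses decoded by a population vector; parameters are hypothetical, not those of the paper). Pooling a target with a flanker decodes to their average, the "compulsory averaging" signature:

    # Sketch: orientation population code; pooling target + flanker responses
    # decodes to an average orientation (crowding's "compulsory averaging").
    import numpy as np

    prefs = np.linspace(0.0, 180.0, 64, endpoint=False)  # preferred orientations
    sigma = 20.0                                         # tuning width [deg]

    def response(theta):
        d = (prefs - theta + 90.0) % 180.0 - 90.0        # circular difference
        return np.exp(-0.5 * (d / sigma) ** 2)

    def decode(pop):
        # Population vector on the doubled angle (orientation is 180-periodic).
        ang = np.deg2rad(2.0 * prefs)
        return np.rad2deg(np.arctan2(pop @ np.sin(ang), pop @ np.cos(ang))) / 2.0 % 180.0

    target, flanker = 80.0, 100.0
    pooled = response(target) + response(flanker)        # unwanted integration
    print(decode(response(target)))  # ~80: isolated target read out correctly
    print(decode(pooled))            # ~90: pooled code reports the average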

Relevance: 40.00%

Abstract:

The code STATFLUX, implementing a new and simple statistical procedure for the calculation of transfer coefficients in radionuclide transport to animals and plants, is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. Flow parameters were estimated by employing two different least-squares procedures, the Derivative and Gauss-Marquardt methods, with the available experimental data of radionuclide concentrations as the input functions of time. The solution of the inverse problem, which relates a given set of flow parameters with the time evolution of the concentration functions, is achieved via a Monte Carlo simulation procedure.

Program summary
Title of program: STATFLUX
Catalogue identifier: ADYS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYS_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computer for which the program is designed and others on which it has been tested: micro-computer with Intel Pentium III, 3.0 GHz
Installation: Laboratory of Linear Accelerator, Department of Experimental Physics, University of São Paulo, Brazil
Operating system: Windows 2000 and Windows XP
Programming language used: Fortran-77 as implemented in Microsoft Fortran 4.0. NOTE: Microsoft Fortran includes non-standard features which are used in this program. Standard Fortran compilers such as g77, f77, ifort and NAG95 are not able to compile the code, and therefore it has not been possible for the CPC Program Library to test the program.
Memory required to execute with typical data: 8 MB of RAM and 100 MB of hard disk
No. of bits in a word: 16
No. of lines in distributed program, including test data, etc.: 6912
No. of bytes in distributed program, including test data, etc.: 229 541
Distribution format: tar.gz
Nature of the physical problem: the investigation of transport mechanisms for radioactive substances through environmental pathways is very important for the radiological protection of populations. One such pathway, associated with the food chain, is the grass-animal-man sequence. The distribution of trace elements in humans and laboratory animals has been intensively studied over the past 60 years [R.C. Pendlenton, C.W. Mays, R.D. Lloyd, A.L. Brooks, Differential accumulation of iodine-131 from local fallout in people and milk, Health Phys. 9 (1963) 1253-1262]. In addition, investigations on the incidence of cancer in humans and a possible causal relationship to radioactive fallout have been undertaken [E.S. Weiss, M.L. Rallison, W.T. London, W.T. Carlyle Thompson, Thyroid nodularity in southwestern Utah school children exposed to fallout radiation, Amer. J. Public Health 61 (1971) 241-249; M.L. Rallison, B.M. Dobyns, F.R. Keating, J.E. Rall, F.H. Tyler, Thyroid diseases in children, Amer. J. Med. 56 (1974) 457-463; J.L. Lyon, M.R. Klauber, J.W. Gardner, K.S. Udall, Childhood leukemia associated with fallout from nuclear testing, N. Engl. J. Med. 300 (1979) 397-402]. Of the pathways by which radionuclides enter the human (or animal) body, ingestion is the most important because it is closely related to life-long alimentary (or dietary) habits. Radionuclides that are able to enter living cells by metabolic or other processes give rise to localized doses which can be very high. The evaluation of these internally localized doses is of paramount importance for the assessment of radiobiological risks and radiological protection. The time behavior of trace concentrations in organs is the principal input for the prediction of internal doses after acute or chronic exposure. The general multiple-compartment model (GMCM) is a powerful and widely accepted method for biokinetic studies; it allows the calculation of the concentration of trace elements in organs as a function of time when the flow parameters of the model are known. However, few biokinetic data exist in the literature, and the determination of flow and transfer parameters by statistical fitting for each system is an open problem.
Restriction on the complexity of the problem: this version of the code works with the constant-volume approximation, which is valid for many situations where the biological half-life of a tracer is shorter than the volume rise time. Another restriction relates to the central-flux model: the code assumes that there is one central compartment (e.g., blood) that connects the flow to all other compartments; flow between the other compartments is not included.
Typical running time: depends on the chosen calculation. Using the Derivative Method the time is very short (a few minutes) for any number of compartments. When the Gauss-Marquardt iterative method is used, the calculation can take approximately 5-6 hours when ~15 compartments are considered.
(C) 2006 Elsevier B.V. All rights reserved.
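
A minimal sketch of the central-flux compartment model and a least-squares fit of its transfer coefficients, with SciPy's Levenberg-Marquardt playing the role of the Gauss-Marquardt step (all rates and data are hypothetical, and the sketch is far simpler than STATFLUX):

    # Sketch: two peripheral compartments exchanging with a central one (blood),
    # constant volumes; fit transfer coefficients to concentration-vs-time data.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def rhs(t, c, k01, k10, k02, k20):
        c0, c1, c2 = c                   # central, organ 1, organ 2
        return [-(k01 + k02) * c0 + k10 * c1 + k20 * c2,
                k01 * c0 - k10 * c1,
                k02 * c0 - k20 * c2]

    t_obs = np.linspace(0.0, 10.0, 25)

    def model(k):
        sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0],
                        t_eval=t_obs, args=tuple(k))
        return sol.y.ravel()

    true_k = [0.8, 0.3, 0.5, 0.2]        # hypothetical transfer coefficients
    data = model(true_k) + np.random.default_rng(3).normal(0, 0.01, 3 * len(t_obs))

    fit = least_squares(lambda k: model(k) - data, x0=[0.5, 0.5, 0.5, 0.5], method="lm")
    print("fitted transfer coefficients:", np.round(fit.x, 3))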

Relevance: 40.00%

Abstract:

According to the literature, electronic systems design largely employs a top-down methodology, which is vital for success in the synthesis and implementation of electronic systems. In this context, this paper presents a new computational tool, named BD2XML, to support electronic systems design. From a mixed-signal block diagram it generates object code in the XML markup language. XML is interesting because it offers great flexibility and readability. BD2XML was developed under the object-oriented paradigm. The AD7528 converter, modeled in MATLAB/Simulink, was used as a case study; MATLAB/Simulink was chosen as a target due to its wide dissemination in academia and industry. This case study demonstrates the functionality of BD2XML and prompts a reflection on the design challenges. An automatic tool for electronic systems design thus reduces design time and cost.
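
The paper does not list BD2XML's schema, so the element names below are invented; the sketch only illustrates the block-diagram-to-XML step with Python's standard library:

    # Hypothetical sketch of a block-diagram -> XML step (element names are
    # assumptions; the actual BD2XML schema is not given in the abstract).
    import xml.etree.ElementTree as ET

    blocks = [{"id": "dac1", "type": "AD7528", "inputs": ["bus"], "outputs": ["vout"]}]
    wires = [("bus", "dac1")]

    root = ET.Element("blockDiagram", name="mixed_signal_demo")
    for b in blocks:
        e = ET.SubElement(root, "block", id=b["id"], type=b["type"])
        for p in b["inputs"]:
            ET.SubElement(e, "port", name=p, direction="in")
        for p in b["outputs"]:
            ET.SubElement(e, "port", name=p, direction="out")
    for src, dst in wires:
        ET.SubElement(root, "wire", **{"from": src, "to": dst})

    ET.indent(root)                      # pretty-print (Python 3.9+)
    print(ET.tostring(root, encoding="unicode"))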

Relevance: 40.00%

Abstract:

The aim of this thesis is to explore the production of software systems for embedded systems using techniques from the world of Model Driven Software Development. The most important phase of the development is the definition of a meta-model that characterizes the fundamental concepts of embedded systems. The model tries to abstract away from the particular platform used and to identify the abstractions that characterize the world of embedded systems in general; the meta-model is therefore platform-independent. For automatic code generation a reference platform was adopted, namely Arduino. Arduino is an embedded system that is becoming increasingly established because it combines a good level of performance with a relatively low price. The platform allows the development of special-purpose systems that use sensors and actuators of various kinds, easily connected to the available pins. The meta-model defined is an instance of the MOF meta-metamodel, formally defined by the OMG. This allows the developer to think of a system in the form of a model, an instance of the defined meta-model. A meta-model can also be regarded as the abstract syntax of a language, and can therefore be defined by a set of EBNF rules. The technology used to define the meta-model is Xtext: a framework that allows EBNF rules to be written and automatically generates the Ecore model associated with the defined meta-model (Ecore is the implementation of EMOF in the Eclipse environment). Xtext also generates plugins that provide an editor guided by the syntax defined in the meta-model. Automatic code generation was implemented using the Xtend2 language, which makes it possible to explore the Abstract Syntax Tree produced by translating the model into Ecore and to generate all the necessary code files. The generated code provides essentially the whole schematic part of the application, while leaving the development of the business logic to the application designer. After the definition of the meta-model of a single embedded system, the level of abstraction was raised towards the part of the meta-model concerned with the interaction of an embedded system with other systems, moving to the perspective of a System understood as a set of interacting localized systems; this definition is made from the point of view of the localized system whose model is being defined. The thesis also introduces a case study which, although fairly simple, provides an example of and tutorial for the development of applications using the meta-model, and shows that the task of the application designer becomes rather simple and immediate, provided it rests on a good analysis of the problem. The results obtained are of good quality, and the meta-model is translated into code that works correctly.
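
The generation step is a model-to-text transformation. As a language-neutral illustration of the idea (the thesis uses Xtend2 templates; the model layout below is invented), generating the schematic part of an Arduino sketch from a model could look like:

    # Illustrative model-to-text sketch (the thesis uses Xtend2; the model
    # layout here is hypothetical): emit the schematic part of an Arduino
    # sketch, leaving the business logic to the application designer.
    model = {
        "name": "GreenhouseNode",
        "sensors":   [{"name": "tempIn", "pin": "A0"}],
        "actuators": [{"name": "fan",    "pin": "9"}],
    }

    def generate(m):
        lines = [f"// generated from model '{m['name']}' -- schematic part only"]
        for s in m["sensors"]:
            lines.append(f"const int {s['name']}Pin = {s['pin']};")
        for a in m["actuators"]:
            lines.append(f"const int {a['name']}Pin = {a['pin']};")
        lines += ["", "void setup() {"]
        lines += [f"  pinMode({a['name']}Pin, OUTPUT);" for a in m["actuators"]]
        lines += ["}", "", "void loop() {",
                  "  // business logic left to the application designer", "}"]
        return "\n".join(lines)

    print(generate(model))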

Relevance: 40.00%

Abstract:

The objective of this work is to characterize chromosome 1 of the genome of A. thaliana, a small flowering plant used as a model organism in studies of biology and genetics, on the basis of a recent mathematical model of the genetic code. I analyze and compare different portions of the genome: genes, exons, coding sequences (CDS), introns, long introns, intergenic regions, untranslated regions (UTR) and regulatory sequences. To accomplish the task, I transformed nucleotide sequences into binary sequences based on the definition of three dichotomic classes. The descriptive analysis of the binary strings indicates the presence of regularities in each portion of the genome considered. In particular, there are remarkable differences between coding sequences (CDS and exons) and non-coding sequences, suggesting that the reading frame is important only for coding sequences and that dichotomic classes can be useful for recognizing them. I then assessed the existence of short-range dependence between binary sequences computed on the basis of the different dichotomic classes, using three different measures of dependence: the well-known chi-squared test and two indices derived from the concept of entropy, i.e., Mutual Information (MI) and Sρ, a normalized version of the Bhattacharyya-Hellinger-Matusita distance. The results show a significant short-range dependence structure only for the coding sequences, which hints at an underlying error detection and correction mechanism. Further studies are certainly needed to assess how the information carried by dichotomic classes could discriminate between coding and non-coding sequences and thereby contribute to unveiling the role of the mathematical structure in error detection and correction mechanisms. Still, I have shown the potential of the presented approach for understanding the management of genetic information.
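
The dependence measures used are standard; a minimal sketch of the plug-in mutual-information estimate between two binary sequences (synthetic input, not genome data):

    # Sketch: plug-in estimate of the mutual information (in bits) between two
    # binary sequences, as used to test dependence between dichotomic classes.
    import numpy as np

    def mutual_information(x, y):
        x, y = np.asarray(x), np.asarray(y)
        joint = np.zeros((2, 2))
        for a in (0, 1):
            for b in (0, 1):
                joint[a, b] = np.mean((x == a) & (y == b))
        px, py = joint.sum(axis=1), joint.sum(axis=0)
        mask = joint > 0
        return float(np.sum(joint[mask] * np.log2(joint[mask] / np.outer(px, py)[mask])))

    rng = np.random.default_rng(7)
    x = rng.integers(0, 2, 10_000)
    y = np.where(rng.random(10_000) < 0.8, x, 1 - x)   # y copies x 80% of the time
    print(f"MI(x, y) = {mutual_information(x, y):.3f} bits")   # ~0.28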

Relevance: 40.00%

Abstract:

We have extended the Boltzmann code CLASS and studied a specific scalar-tensor dark energy model: Induced Gravity.
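
The abstract stops short of implementation detail. As a generic illustration of driving CLASS from Python, its public wrapper classy can be used as follows; only standard ΛCDM parameters are shown, since the Induced Gravity parameters of the extended code are not public and none are invented here:

    # Generic illustration of CLASS via its Python wrapper "classy"; the
    # thesis's Induced Gravity extension would add its own parameters to the
    # dict below (not shown, as they are not part of stock CLASS).
    from classy import Class

    cosmo = Class()
    cosmo.set({
        'output': 'tCl,pCl,lCl',
        'lensing': 'yes',
        'h': 0.67,
        'omega_b': 0.0224,
        'omega_cdm': 0.120,
    })
    cosmo.compute()

    cls = cosmo.lensed_cl(2500)    # dict with 'ell', 'tt', 'te', 'ee', ...
    print(cls['tt'][:5])           # dimensionless C_ell^TT values

    cosmo.struct_cleanup()         # free memory allocated by CLASS
    cosmo.empty()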

Relevance: 40.00%

Abstract:

A Reynolds-stress turbulence model has been successfully incorporated into the KIVA code, a computational fluid dynamics hydrocode for the three-dimensional simulation of fluid flow in engines. The newly implemented Reynolds-stress turbulence model greatly improves the robustness of KIVA, which in its original version offers only eddy-viscosity turbulence models. The Reynolds-stress turbulence model is validated by conducting pipe-flow and channel-flow simulations and comparing the computed results with experimental and direct numerical simulation data. Flows in engines of various geometries and operating conditions are calculated using the model, to study the complex flow fields and to confirm the model's validity. The results show that the Reynolds-stress turbulence model is able to resolve flow details such as swirl and recirculation bubbles. The model proves to be an appropriate choice for engine simulations, being consistent and robust while requiring relatively low computational effort.
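
For validation against DNS data, the quantity being compared is the Reynolds-stress tensor, i.e. the second moments of the velocity fluctuations. A minimal sketch on hypothetical velocity samples (synthetic data standing in for DNS or measurements):

    # Sketch: Reynolds-stress tensor R_ij = <u_i' u_j'> from velocity samples
    # at one point, plus the turbulent kinetic energy k = (1/2) trace(R).
    import numpy as np

    rng = np.random.default_rng(5)
    u = rng.normal([10.0, 0.0, 0.0], [1.0, 0.5, 0.5], size=(100_000, 3))  # m/s

    u_mean = u.mean(axis=0)
    fluct = u - u_mean                   # u_i' = u_i - <u_i>
    R = fluct.T @ fluct / len(fluct)     # R_ij = <u_i' u_j'>
    tke = 0.5 * np.trace(R)              # turbulent kinetic energy

    print("R =\n", np.round(R, 3))
    print("k =", round(tke, 3), "m^2/s^2")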