Abstract:
This thesis deals with the use of simulation as a problem-solving tool for a number of logistic-system problems, specifically studies of transport terminals. Transport terminals are key elements in the supply chains of industrial systems. One difficulty in applying simulation is the multiplicity of models needed to study different problems, so methodologies for conceptual modelling are needed that reduce the number of models required. Three different logistic terminal systems, viz. a railway yard, the container terminal of a port, and an airport terminal, were selected as cases for this study. The standard methodology for simulation development was followed: system study and data collection, conceptual model design, detailed model design and development, model verification and validation, experimentation, analysis of results, and reporting of findings. We found that the systems could be classified as tightly pre-scheduled, moderately pre-scheduled, or unscheduled. Three types of simulation models (called TYPE 1, TYPE 2 and TYPE 3) of various terminal operations were developed in the simulation package Extend; all were discrete-event simulation models. The simulation models were successfully used to help solve strategic, tactical and operational problems related to three important logistic terminals, as set out in our objectives. As a contribution to conceptual modelling, we have demonstrated that grouping problems into operational, tactical and strategic classes and matching them with tightly pre-scheduled, moderately pre-scheduled and unscheduled systems is a workable approach that reduces the number of models needed to study different terminal-related problems.
Abstract:
Agriculture plays a central role in the Earth system. It contributes to the greenhouse effect through emissions of CO2, CH4 and N2O, can cause soil degradation and eutrophication, can alter regional water cycles, and will itself be strongly affected by climate change. Since all of these processes are closely interlinked through the underlying nutrient and water fluxes, they should be treated within a consistent modelling framework. Until recently, however, a lack of data and insufficient process understanding have prevented this at the global scale. This thesis presents the first version of such a consistent global modelling framework, with emphasis on the simulation of agricultural yields and the resulting N2O emissions. The reason for this focus is that a correct representation of plant growth is an essential prerequisite for simulating all other processes. Furthermore, current and potential agricultural yields are important driving forces of land-use change and will be strongly affected by climate change. The second focus is the estimation of agricultural N2O emissions, since no process-based N2O model has previously been applied at the global scale. The existing agro-ecosystem model Daycent was chosen as the basis for the global modelling. In addition to setting up the simulation environment, the required global data sets for soil parameters, climate and agricultural management were first compiled. Since no global data base of planting dates exists so far, and since planting dates will shift with climate change, a routine for computing planting dates was developed. The results show good agreement with the FAO crop calendars that are available for some crops and countries.
The Daycent model was then parameterized and calibrated for the yield simulation of wheat, rice, maize, soybean, millet, pulses, potato, cassava and cotton. The simulation results show that Daycent correctly captures the most important climate, soil and management effects on yield formation. Simulated country averages agree well with FAO data (R2 = 0.66 for wheat, rice and maize; R2 = 0.32 for soybean), and the spatial yield patterns largely match the observed distribution of crops and subnational statistics. The modelling of agricultural N2O emissions with Daycent was preceded by a statistical analysis of N2O and NO emission measurements from natural and agricultural ecosystems. The parameters identified as significant for N2O (fertilizer amount, soil carbon content, soil pH, texture, crop type, fertilizer type) and for NO (fertilizer amount, soil nitrogen content, climate) largely agree with the results of an earlier analysis. For emissions from soils under natural vegetation, for which no such statistical analysis existed before, soil carbon content, soil pH, bulk density, drainage and vegetation type significantly influence N2O emissions, while NO emissions depend significantly on soil carbon content and vegetation type. Based on the statistical models derived from these results, global emissions from arable soils amount to 3.3 Tg N/yr for N2O and 1.4 Tg N/yr for NO. Such statistical models are useful for deriving estimates and uncertainty ranges of N2O and NO emissions from a large number of measurements. The dynamics of soil nitrogen, in particular as influenced by plant growth, climate change and land-use change, can, however, only be captured by applying process-oriented models.
To model N2O emissions with Daycent, its trace gas module was first extended by a more detailed representation of nitrification and denitrification and by accounting for freeze-thaw emissions. This revised model version was then tested against N2O emission measurements under different climates and crops. Both the dynamics and the totals of N2O emissions are reproduced satisfactorily, with model efficiencies for monthly means between 0.1 and 0.66 for most sites. Based on the revised model version, N2O emissions were computed for the previously parameterized crops. Emission rates and crop-specific differences largely agree with values reported in the literature. Fertilizer-induced emissions, currently estimated by the IPCC as 1.25 +/- 1% of the applied fertilizer amount, range from 0.77% (rice) to 2.76% (maize). The total of the computed emissions from agricultural soils amounts to 2.1 Tg N2O-N/yr for the mid-1990s, in agreement with estimates from other studies.
Abstract:
The investigations carried out in this work show that it is possible to simulate complex thermal systems with PSpice using thermal-electrical analogies. The investigations focused on extruded heat sinks for cooling electronic components. It was shown that all modes of heat transfer (conduction, convection and radiation) can be accounted for in the simulation. Several methods were derived for computing convection. These apply, on the one hand, to different heat sink geometries, such as plane surfaces and the channels between cooling fins; on the other hand, they differ depending on whether free or forced convection is considered. Different calculation methods were developed for the radiative exchange between the cooling fins. For the simulation with PSpice, the computation of radiation between the fins was simplified, and it was shown that the errors introduced by this simplification are negligibly small. A general model was designed for the thermal behaviour of a heat source to be cooled, and several measurement procedures were developed to determine the model parameters. Using these procedures, the parameters were determined for a heat source developed in the Electromechanics department for testing cooling devices. Creating the thermal model of a heat sink for simulation in PSpice requires an analysis of the heat sink geometry. To automate this analysis as far as possible, several algorithms were developed in Matlab. One algorithm decomposes the heat sink into the elementary cells that are needed to build the simulation model.
Furthermore, the simulation requires knowing which elementary cells lie at the edge of the heat sink, which lie adjacent to a fin channel, and which heat sink edges run obliquely. Several algorithms were developed to solve these tasks as well. These algorithms were combined into a program that allows different extruded heat sinks to be simulated and the simulation results to be displayed graphically as the temperature distribution on the mounting surface of the heat sink. Both steady-state and transient simulations can be performed. In addition, the thermal resistance of the heat sink, RthK, can be plotted as a function of the dissipated power of the heat source. To verify the simulation results, temperature measurements on heat sinks were carried out and compared with the simulations. These comparisons show that the deviations lie within the scatter of the temperature measurements. The simulation method for extruded heat sinks developed here can therefore be judged to perform well.
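The thermal-electrical analogy on which the PSpice simulations above rest (temperature as voltage, heat flow as current, thermal resistance as electrical resistance) can be illustrated outside PSpice with a minimal steady-state sketch; all component values below are invented for illustration, not taken from the thesis:

```python
# Steady-state thermal network for a heat source on a heat sink, using the
# thermal-electrical analogy: temperature ~ voltage, heat flow ~ current,
# thermal resistance ~ electrical resistance.  All values are illustrative.
def node_temperatures(power_w, t_ambient_c, resistances_k_per_w):
    """Series thermal chain: source -> ... -> ambient.

    The same heat flow ("current") passes through every series resistance,
    so each node sits at T_ambient plus the sum of the downstream drops.
    Returns the node temperatures, source side first.
    """
    temps = []
    t = t_ambient_c
    for r in reversed(resistances_k_per_w):   # walk from ambient to source
        t += power_w * r                      # "voltage drop" = I * R
        temps.append(t)
    return list(reversed(temps))              # source-side first

# Junction -> case -> heat sink -> ambient (illustrative resistances, K/W)
temps = node_temperatures(power_w=20.0, t_ambient_c=25.0,
                          resistances_k_per_w=[1.2, 0.5, 2.0])
print(temps)  # [99.0, 75.0, 65.0]
```

A transient simulation would add a thermal capacitance (a capacitor in the analogy) at each node, which is exactly what the PSpice netlists in the thesis do with RC networks.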
Abstract:
The research of this thesis covers developments and applications of short- and long-term climate predictions. The short-term predictions address monthly and seasonal climate, i.e. forecasting from the next month over a season up to about a year ahead. The long-term predictions pertain to the analysis of inter-annual and decadal climate variations over the whole 21st century. These two climate prediction methods are validated and applied in the study area, the Khlong Yai (KY) water basin located on the eastern seaboard of Thailand, which is a major industrial zone of the country and has been suffering from severe drought and water shortage in recent years. Since water resources are essential for further industrial development in this region, a thorough analysis of potential climate change and its subsequent impact on the water supply in the area is at the heart of this thesis research. The short-term forecast of the next-season climate, such as temperatures and rainfall, offers a potential general guideline for water management and reservoir operation. To that end, statistical models based on autoregressive techniques, i.e. AR, ARIMA and ARIMAex (the latter including additional external regressors), as well as multiple linear regression (MLR) models, are developed and applied in the study region. Teleconnections between ocean states and the local climate are investigated, used as extra external predictors in the ARIMAex and MLR models, and shown to enhance the accuracy of the short-term predictions significantly. However, as the teleconnective relationships between ocean state and local climate provide only a one- to four-month lead time, the ocean state indices can support only a one-season-ahead forecast. Hence, GCM climate predictors are also suggested as an additional predictor set for a more reliable and somewhat longer short-term forecast.
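The role of an external (teleconnective) predictor in an ARIMAex/MLR-type model can be sketched with a toy autoregressive model that carries one exogenous regressor and is fitted by ordinary least squares; the data, coefficients and the "ocean index" are synthetic illustrations, not values from the thesis:

```python
# Toy ARX model: y[t] = c + a*y[t-1] + b*x[t-1], where x is an external
# (e.g. ocean-state) predictor.  Fitted by OLS on synthetic data; all
# numbers are illustrative, not results from the thesis.
import random

def fit_arx(y, x):
    """Fit y[t] = c + a*y[t-1] + b*x[t-1] via the 3x3 normal equations."""
    rows = [(1.0, y[t - 1], x[t - 1]) for t in range(1, len(y))]
    rhs = [y[t] for t in range(1, len(y))]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(3)]
    for i in range(3):                 # naive Gauss-Jordan, fine for 3x3
        p = ata[i][i]
        ata[i] = [v / p for v in ata[i]]
        atb[i] /= p
        for k in range(3):
            if k != i:
                f = ata[k][i]
                ata[k] = [vk - f * vi for vk, vi in zip(ata[k], ata[i])]
                atb[k] -= f * atb[i]
    return atb                         # [c, a, b]

random.seed(0)
x = [random.gauss(0, 1) for _ in range(600)]      # synthetic ocean index
y = [0.0]
for t in range(1, 600):                           # true model: c=2, a=0.5, b=1.5
    y.append(2.0 + 0.5 * y[t - 1] + 1.5 * x[t - 1] + random.gauss(0, 0.1))
c, a, b = fit_arx(y, x)
print(round(c, 2), round(a, 2), round(b, 2))
```

A significant coefficient on the lagged index is the toy analogue of the thesis finding that ocean-state predictors improve the short-term forecast; dropping the x term reduces the model to a plain AR(1).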
To prepare "pre-warning" information on possible future climate change with potentially adverse hydrological impacts in the study region, the long-term climate prediction methodology is applied. It is based on the downscaling of climate predictions from several single- and multi-domain GCMs, using the two well-known downscaling methods SDSM and LARS-WG and a newly developed MLR downscaling technique that allows the incorporation of a multitude of monthly or daily climate predictors from one or several (multi-domain) parent GCMs. The numerous downscaling experiments indicate that the MLR method is more accurate than SDSM and LARS-WG in predicting the recent past 20th-century (1971-2000) long-term monthly climate in the region. The MLR model is consequently employed to downscale 21st-century GCM climate predictions under SRES scenarios A1B, A2 and B1. However, since the hydrological watershed model requires daily-scale climate input data, a new stochastic daily climate generator is developed to rescale monthly observed or predicted climate series to daily series, while adhering to the statistical and geospatial distributional attributes of observed (past) daily climate series in the calibration phase. Employing this daily climate generator, 30 realizations of future daily climate series from downscaled monthly GCM climate predictor sets are produced and used as input to the SWAT distributed watershed model, to simulate future streamflow and other hydrological water budget components in the study region in a multi-realization manner. In addition to a general examination of future changes in the hydrological regime of the KY basin, potential future changes in the water budgets of three main reservoirs in the basin are analysed, as these are a major source of water supply in the study region.
The results of the long-term 21st-century downscaled climate predictions provide evidence that, compared with the 20th-century reference period, the future climate in the study area will be more extreme, particularly for SRES A1B. Thus temperatures will be higher and exhibit larger fluctuations. Although the future intensity of the rainfall is nearly constant, its spatial distribution across the region changes in part. There is further evidence that sequential rainfall occurrence will decrease, so that short periods of high intensity will be followed by longer dry spells. This change in the sequential rainfall pattern will also lead to seasonal reductions of streamflow and seasonal decreases of the water storage in the reservoirs. In any case, these predicted future climate changes and their hydrological impacts should encourage water planners and policy makers to develop adaptation strategies to properly manage the future water supply in this area, following the guidelines suggested in this study.
Abstract:
The service independence and flexibility of ATM networks make the control of such networks very critical. One of the main challenges in ATM networks is to design traffic control mechanisms that enable both economically efficient use of the network resources and the desired quality of service for higher-layer applications. Window flow control mechanisms of traditional packet-switched networks are not well suited to real-time services at the speeds envisaged for future networks. In this work, the utilisation of the Probability of Congestion (PC) as a bandwidth decision parameter is presented. The validity of using PC is compared with QoS parameters in bufferless environments, where only the cell loss ratio (CLR) parameter is relevant. The convolution algorithm is a good solution for connection admission control (CAC) in ATM networks with small buffers. If the source characteristics are known, the actual CLR can be estimated very well. Furthermore, this estimate is always conservative, allowing the network performance guarantees to be retained. Several experiments have been carried out and analysed to explain the deviation between the proposed method and simulation. Time parameters for burst length and different buffer sizes have been considered. Experiments to establish the limits of the burst length with respect to the buffer size show that a minimum buffer size is necessary to achieve adequate cell contention. Note that propagation delay cannot be neglected for long-distance and interactive communications, so small buffers must be used in order to minimise delay. Under these premises, the convolution approach is the most accurate method for bandwidth allocation, giving sufficient accuracy in both homogeneous and heterogeneous networks. However, the convolution approach has a considerable computational cost and a high number of accumulated calculations.
To overcome these drawbacks, a new evaluation method is analysed: the Enhanced Convolution Approach (ECA). In ECA, traffic is grouped into classes of identical parameters. By using the multinomial distribution function instead of the formula-based convolution, a partial state corresponding to each traffic class is obtained. Finally, the global state probabilities are evaluated by multi-convolution of the partial results. This method avoids accumulated calculations and saves storage, especially in complex scenarios. Sorting is the dominant factor for the formula-based convolution, whereas cost evaluation is the dominant factor for the enhanced convolution. A set of cut-off mechanisms is introduced to reduce the complexity of the ECA evaluation. The ECA also computes the CLR for each traffic class j (CLRj); an expression for evaluating CLRj is also presented. We conclude that, by combining the ECA method with cut-off mechanisms, the use of ECA in real-time CAC environments as a single-level scheme is always possible.
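The class-based idea can be illustrated for bufferless CAC with on-off sources: within a class of identical sources, the number of simultaneously active sources is binomial (a special case of the per-class multinomial state), and the per-class distributions are then convolved into the global state, from which a congestion probability is read off. A toy sketch with invented traffic parameters, not the ECA algorithm itself:

```python
# Class-based convolution for bufferless CAC: each class has n identical
# on-off sources, each active with probability p and contributing 'rate'
# units when active.  The per-class load distribution is binomial; class
# distributions are then convolved into the global state.  Illustrative only.
from math import comb

def class_distribution(n, p, rate):
    """P(total load of the class = k*rate) for k = 0..n (binomial)."""
    return {k * rate: comb(n, k) * p**k * (1 - p)**(n - k)
            for k in range(n + 1)}

def convolve(d1, d2):
    """Distribution of the sum of two independent discrete loads."""
    out = {}
    for a, pa in d1.items():
        for b, pb in d2.items():
            out[a + b] = out.get(a + b, 0.0) + pa * pb
    return out

def overflow_probability(classes, capacity):
    """P(aggregate load > capacity): a bufferless congestion estimate."""
    dist = {0: 1.0}
    for n, p, rate in classes:
        dist = convolve(dist, class_distribution(n, p, rate))
    return sum(pr for load, pr in dist.items() if load > capacity)

# Two traffic classes (invented): 10 sources of peak rate 2, 5 of peak rate 6.
pc = overflow_probability([(10, 0.3, 2), (5, 0.1, 6)], capacity=30)
print(f"{pc:.6f}")
```

Because each class contributes at most n+1 distinct load values, grouping identical sources keeps the state space far smaller than convolving the sources one by one, which is the storage saving the abstract refers to.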
Abstract:
The performance of the atmospheric component of the new Hadley Centre Global Environmental Model (HadGEM1) is assessed in terms of its ability to represent a selection of key aspects of variability in the Tropics and extratropics. These include midlatitude storm tracks and blocking activity, synoptic variability over Europe, and the North Atlantic Oscillation together with tropical convection, the Madden-Julian oscillation, and the Asian summer monsoon. Comparisons with the previous model, the Third Hadley Centre Coupled Ocean-Atmosphere GCM (HadCM3), demonstrate that there has been a considerable increase in the transient eddy kinetic energy (EKE), bringing HadGEM1 into closer agreement with current reanalyses. This increase in EKE results from the increased horizontal resolution and, in combination with the improved physical parameterizations, leads to improvements in the representation of Northern Hemisphere storm tracks and blocking. The simulation of synoptic weather regimes over Europe is also greatly improved compared to HadCM3, again due to both increased resolution and other model developments. The variability of convection in the equatorial region is generally stronger and closer to observations than in HadCM3. There is, however, still limited convective variance coincident with several of the observed equatorial wave modes. Simulation of the Madden-Julian oscillation is improved in HadGEM1: both the activity and interannual variability are increased and the eastward propagation, although slower than observed, is much better simulated. While some aspects of the climatology of the Asian summer monsoon are improved in HadGEM1, the upper-level winds are too weak and the simulation of precipitation deteriorates. The dominant modes of monsoon interannual variability are similar in the two models, although in HadCM3 this is linked to SST forcing, while in HadGEM1 internal variability dominates. 
Overall, analysis of the phenomena considered here indicates that HadGEM1 performs well and, in many important respects, improves upon HadCM3. Together with the improved representation of the mean climate, this improvement in the simulation of atmospheric variability suggests that HadGEM1 provides a sound basis for future studies of climate and climate change.
Abstract:
Transient and continuous recombinant protein expression by HEK cells was evaluated in a perfused monolithic bioreactor. Highly porous synthetic cryogel scaffolds (10 ml bed volume) were characterised by scanning electron microscopy and tested as cell substrates. Efficient seeding was achieved (94% of the inoculum retained, with 91-95% viability). Metabolite monitoring indicated continuous cell growth, and endpoint cell density was estimated by genomic DNA quantification to be 5.2×10⁸, 1.1×10⁹ and 3.5×10¹⁰ cells at days 10, 14 and 18, respectively. Culture of stably transfected cells allowed continuous production of the Drosophila cytokine Spätzle by the bioreactor at the same rate as in monolayer culture (1.2 mg in total by day 18), and this protein was active. In transient transfection experiments, more protein was produced per cell than in monolayer culture. Confocal microscopy confirmed homogeneous GFP expression after transient transfection within the bioreactor. Monolithic bioreactors are thus shown to be a flexible and powerful tool for manufacturing recombinant proteins.
Abstract:
Estimates of the response of crops to climate change rarely quantify the uncertainty inherent in the simulation of both climate and crops. We present a crop simulation ensemble for a location in India, perturbing the response of both crop and climate under both baseline (12 720 simulations) and doubled-CO2 (171 720 simulations) climates. Some simulations used parameter values representing genotypic adaptation to mean temperature change. Firstly, observed and simulated yields in the baseline climate were compared. Secondly, the response of yield to changes in mean temperature was examined and compared to that found in the literature; no consistent response to temperature change was found across studies. Thirdly, the relative contributions of uncertainty in crop and climate simulation to the total uncertainty in projected yield changes were examined. In simulations without genotypic adaptation, most of the uncertainty came from the climate model parameters. Comparison with the simulations with genotypic adaptation and with a previous study suggested that the relatively low crop parameter uncertainty derives from the observational constraints on the crop parameters used in this study. Fourthly, the simulations were used, together with an observed dataset and a simple analysis of crop cardinal temperatures and thermal time, to estimate the potential for adaptation using existing cultivars. The results suggest that the germplasm for complete adaptation of groundnut cultivation in western India to a doubled-CO2 environment may not exist. In conjunction with analyses of germplasm and local management
Abstract:
The control of fishing mortality via fishing effort remains fundamental to most fisheries management strategies, even at the local community or co-management level. Decisions to support such strategies require knowledge of the underlying response of the catch to changes in effort. Even under adaptive management strategies, imprecise knowledge of the response is likely to help accelerate the adaptive learning process. Data and institutional capacity requirements to employ multi-species biomass dynamics and age-structured models invariably render their use impractical, particularly in less developed regions of the world. Surplus production models fitted to catch and effort data aggregated across all species offer viable alternatives. The current paper seeks models of this type that best describe the multi-species catch-effort responses in floodplain-rivers, lakes and reservoirs, and reef-based fisheries based upon among-fishery comparisons, building on earlier work. Three alternative surplus production models were fitted to estimates of catch per unit area (CPUA) and fisher density for 258 fisheries in Africa, Asia and South America. In all cases examined, the best or equal-best fitting model was the Fox type, explaining up to 90% of the variation in CPUA. For lake and reservoir fisheries in Africa and Asia, the Schaefer and an asymptotic model fitted equally well. The Fox model estimates of fisher density (fishers km⁻²) at maximum yield (iMY) for floodplain-rivers, African lakes and reservoirs, and reef-based fisheries are 13.7 (95% CI [11.8, 16.4]), 27.8 (95% CI [17.5, 66.7]) and 643 (95% CI [459, 1075]), respectively, and compare well with earlier estimates. Corresponding estimates of maximum yield are also given. The significantly higher value of iMY for reef-based fisheries compared to the estimates for rivers and lakes reflects the use of a different measure of fisher density, based upon human population size estimates.
The models predict that maximum yield is achieved at a higher fishing intensity in Asian lakes compared to those in Africa. This may reflect the common practice in Asia of stocking lakes to augment natural recruitment. Because of the equilibrium assumptions underlying the models, all the estimates of maximum yield and corresponding levels of effort should be treated with caution.
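The Fox surplus production model used above takes the form Y = a·f·exp(−b·f) for yield Y and fisher density f, so CPUA = Y/f is log-linear in f and the model can be fitted by linear regression of ln(CPUA) on f, with maximum yield at f_MY = 1/b. A minimal sketch of such a fit on synthetic, noise-free data, with coefficients chosen only to echo the floodplain-river estimate:

```python
# Fox surplus production model: yield Y = a * f * exp(-b * f), so
# ln(CPUA) = ln(a) - b * f is linear in fisher density f, and dY/df = 0
# gives maximum yield at f_MY = 1/b.  All numbers are illustrative.
import math

def fit_fox(f_obs, cpua_obs):
    """Least-squares fit of ln(CPUA) = ln(a) - b*f; returns (a, b, f_MY)."""
    ys = [math.log(c) for c in cpua_obs]
    n = len(f_obs)
    fm = sum(f_obs) / n
    ym = sum(ys) / n
    b = -sum((f - fm) * (y - ym) for f, y in zip(f_obs, ys)) \
        / sum((f - fm) ** 2 for f in f_obs)
    a = math.exp(ym + b * fm)
    return a, b, 1.0 / b

# Synthetic fisheries generated from a "true" Fox curve with b = 0.073,
# i.e. maximum yield near 13.7 fishers/km^2 (the floodplain-river figure).
true_a, true_b = 5.0, 0.073
f_obs = [2.0, 5.0, 8.0, 12.0, 16.0, 20.0, 25.0]
cpua_obs = [true_a * math.exp(-true_b * f) for f in f_obs]
a, b, f_my = fit_fox(f_obs, cpua_obs)
print(round(f_my, 1))  # 13.7
```

With real catch-effort data the regression residuals would of course not vanish, and the confidence intervals quoted in the abstract come from the uncertainty of the fitted b.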
Abstract:
Inhibition of myostatin signalling or its biological activity has recently emerged as a potential remedial approach against muscle wasting and degenerative diseases such as muscular dystrophies. In the present study we systemically administered a recombinant AAV8 vector expressing a mutated myostatin propeptide (AAV8ProMyo) to healthy mice in order to assess its impact on the histological, cellular and physiological properties of the skeletal muscle, exploiting the fact that myostatin is naturally inhibited by its own propeptide. We report that a single intravenous administration of AAV8ProMyo leads to increases in the muscle mass of the tibialis anterior, extensor digitorum longus and gastrocnemius muscles 8 weeks post-injection, and of the tibialis anterior, gastrocnemius and rectus femoris muscles 17 weeks post-injection. Moreover, treatment resulted in muscle fibre hypertrophy but not hyperplasia, with IIB myofibres responding to the greatest extent following propeptide-induced myostatin inhibition. Additionally, the myofibre nuclear:cytoplasmic ratio was decreased in the AAV8ProMyo-treated animals. Importantly, the hypertrophic EDL muscle 8 weeks after AAV8ProMyo treatment did not show the dramatic decrease in specific force displayed by germline myostatin null mice. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
The new dioxatetraazamacrocycle (L¹) was synthesized by a 2 + 2 condensation and characterized. Stability constants of its copper(II) complexes were determined by spectrophotometry in DMSO at 298.2 K in 0.10 mol dm⁻³ KClO₄. Mainly dinuclear complexes are formed, and the presence of mononuclear species depends on the counterion (Cl⁻ or ClO₄⁻). The association constants of the dinuclear copper(II) complexes with dicarboxylate anions [oxalate (oxa²⁻), malonate (mal²⁻), succinate (suc²⁻), and glutarate (glu²⁻)] were also determined by spectrophotometry at 298.2 K in DMSO, and it was found that the values decrease with an increase of the alkyl chain between the carboxylate groups. X-band EPR spectra of the dicopper(II) complexes and of their cascade species in frozen DMSO exhibit dipole-dipole coupling, and their simulation, together with their UV-vis spectra, showed that the copper centres of the complexes in solution had square pyramidal geometries, though with different distortions. From the experimental data it was also possible to predict the Cu···Cu distances, the minimum being found at 6.4 Å for the Cu₂L¹Cl₄ complex; this distance then increases slightly as the chloride anions are replaced by dicarboxylate anions, from 6.6 Å for oxa²⁻ to 7.8 Å for glu²⁻. The crystal structures of the dinuclear copper cascade species with oxa²⁻ and suc²⁻ were determined and showed one anion bridging both copper centres, with Cu···Cu distances of 5.485(7) Å and 6.442(8) Å, respectively.
Abstract:
Analysis of X-ray powder data for the melt-crystallisable aromatic poly(thioether thioether ketone) [-S-Ar-S-Ar-CO-Ar]ₙ ('PTTK', Ar = 1,4-phenylene) reveals that it adopts a crystal structure very different from that established for its ether analogue PEEK. Molecular modelling and diffraction-simulation studies of PTTK show that the structure of this polymer is analogous to that of melt-crystallised poly(thioether ketone) [-S-Ar-CO-Ar]ₙ, in which the carbonyl linkages in symmetry-related chains are aligned anti-parallel to one another, and that these bridging units are crystallographically interchangeable. The final model for the crystal structure of PTTK is thus disordered, in the monoclinic space group I2₁a (two chains per unit cell), with cell dimensions a = 7.83, b = 6.06, c = 10.35 Å, β = 93.47°. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
An important goal in computational neuroanatomy is the complete and accurate simulation of neuronal morphology. We are developing computational tools to model three-dimensional dendritic structures based on sets of stochastic rules. This paper reports an extensive, quantitative anatomical characterization of simulated motoneurons and Purkinje cells. We used several local and global algorithms implemented in the L-Neuron and ArborVitae programs to generate sets of virtual neurons. Parameter statistics for all algorithms were measured from experimental data, thus providing a compact and consistent description of these morphological classes. We compared the emergent anatomical features of each group of virtual neurons with those of the experimental database in order to gain insight into the plausibility of the model assumptions, potential improvements to the algorithms, and non-trivial relations among morphological parameters. Algorithms mainly based on local constraints (e.g., branch diameter) were successful in reproducing many morphological properties of both motoneurons and Purkinje cells (e.g., total length, asymmetry, number of bifurcations). The addition of global constraints (e.g., trophic factors) improved the angle-dependent emergent characteristics (average Euclidean distance from the soma to the dendritic terminations, dendritic spread). Virtual neurons systematically displayed greater anatomical variability than real cells, suggesting the need for additional constraints in the models. For several emergent anatomical properties, a specific algorithm reproduced the experimental statistics better than the others did. However, relative performances were often reversed for different anatomical properties and/or morphological classes. Thus, combining the strengths of alternative generative models could lead to comprehensive algorithms for the complete and accurate simulation of dendritic morphology.
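The flavour of a local-rule generative algorithm can be caricatured in a few lines: a branch terminates once its diameter falls below a threshold, and otherwise bifurcates into daughters obeying a Rall-type power rule on diameters. This is a toy sketch only; the rules and all parameter values are invented for illustration, not those of L-Neuron or ArborVitae:

```python
# Toy dendrogram generator in the spirit of local-rule algorithms: a branch
# of diameter d either terminates (d below a threshold) or bifurcates into
# two daughters whose diameters satisfy a Rall-type power rule
# d^e = d1^e + d2^e.  Parameters are illustrative, not from L-Neuron.
import random

def grow(diam, rng, exponent=1.5, min_diam=0.5, asym_low=0.4, asym_high=0.6):
    """Recursively grow a binary tree; returns the terminal tip diameters."""
    if diam < min_diam:
        return [diam]                              # termination rule
    ratio = rng.uniform(asym_low, asym_high)       # stochastic asymmetry
    total = diam ** exponent
    d1 = (ratio * total) ** (1.0 / exponent)       # daughters obey the
    d2 = ((1.0 - ratio) * total) ** (1.0 / exponent)  # power rule exactly
    return grow(d1, rng) + grow(d2, rng)

rng = random.Random(42)
tips = grow(4.0, rng)
print(len(tips))            # number of dendritic terminations in this sample
```

Emergent statistics such as the number of terminations or tree asymmetry are then compared against measured morphologies, which is the kind of comparison the paper carries out at much larger scale.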
Abstract:
This paper first points out the important fact that the rectangle formula for discretizing the continuous convolution, which has been widely used in conventional digital deconvolution algorithms, can result in a zero-time error. An improved digital deconvolution equation is then suggested, which is equivalent to the trapezoid formula for discretizing the continuous convolution and overcomes the disadvantage of the conventional equation satisfactorily. Finally, a computer simulation is given, confirming the theoretical result.
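The difference between the two discretizations, and a recursive deconvolution based on the trapezoid form, can be sketched as follows; the test signals (h(t) = e^(−t), x(t) = 1, so y(t) = 1 − e^(−t)) are chosen only for illustration:

```python
# Rectangle vs. trapezoid discretisation of y(t) = ∫_0^t h(u) x(t-u) du,
# plus step-by-step deconvolution based on the trapezoid form.
# Note the rectangle formula's zero-time error: y_0 = dt*h_0*x_0 != 0,
# although the convolution integral is exactly 0 at t = 0.
import math

def conv_rect(h, x, dt):
    """Rectangle formula: y_n = dt * sum_{k=0}^{n} h_k x_{n-k}."""
    return [dt * sum(h[k] * x[i - k] for k in range(i + 1))
            for i in range(len(x))]

def conv_trap(h, x, dt):
    """Trapezoid formula: endpoint terms get weight 1/2, and y_0 = 0."""
    y = [0.0]
    for i in range(1, len(x)):
        s = 0.5 * (h[0] * x[i] + h[i] * x[0])
        s += sum(h[k] * x[i - k] for k in range(1, i))
        y.append(dt * s)
    return y

def deconv_trap(h, y, dt, x0):
    """Invert conv_trap recursively: solve the n-th equation for x_n."""
    x = [x0]
    for i in range(1, len(y)):
        s = 0.5 * h[i] * x[0] + sum(h[k] * x[i - k] for k in range(1, i))
        x.append((y[i] / dt - s) * 2.0 / h[0])
    return x

dt, n = 0.01, 200
t = [i * dt for i in range(n)]
h = [math.exp(-ti) for ti in t]
x = [1.0] * n
y_exact = [1.0 - math.exp(-ti) for ti in t]   # analytic convolution

err_rect = max(abs(a - b) for a, b in zip(conv_rect(h, x, dt), y_exact))
err_trap = max(abs(a - b) for a, b in zip(conv_trap(h, x, dt), y_exact))
print(err_trap < err_rect)    # trapezoid formula is more accurate: True

# Round trip: deconvolving the trapezoid convolution recovers x.
x_rec = deconv_trap(h, conv_trap(h, x, dt), dt, x0=1.0)
print(max(abs(a - b) for a, b in zip(x_rec, x)) < 1e-6)   # True
```

The rectangle formula's worst error here is exactly the zero-time term dt·h₀·x₀, while the trapezoid formula's error shrinks quadratically with the step size, which is the paper's point.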
Abstract:
Variations in the Atlantic Meridional Overturning Circulation (MOC) exert an important influence on climate, particularly on decadal time scales. Simulation of the MOC in coupled climate models is compromised, to a degree that is unknown, by their lack of fidelity in resolving some of the key processes involved. There is an overarching need to increase the resolution and fidelity of climate models, but also to assess how increases in resolution influence the simulation of key phenomena such as the MOC. In this study we investigate the impact of significantly increasing the (ocean and atmosphere) resolution of a coupled climate model on the simulation of MOC variability by comparing high and low resolution versions of the same model. In both versions, decadal variability of the MOC is closely linked to density anomalies that propagate from the Labrador Sea southward along the deep western boundary. We demonstrate that the MOC adjustment proceeds more rapidly in the higher resolution model due to the increased speed of western boundary waves. However, the response of the Atlantic sea surface temperatures (SSTs) to MOC variations is relatively robust - in pattern if not in magnitude - across the two resolutions. The MOC also excites a coupled ocean-atmosphere response in the tropical Atlantic in both model versions. In the higher resolution model, but not the lower resolution model, there is evidence of a significant response in the extratropical atmosphere over the North Atlantic 6 years after a maximum in the MOC. In both models there is evidence of a weak negative feedback on deep density anomalies in the Labrador Sea, and hence on the MOC (with a time scale of approximately ten years). Our results highlight the need for further work to understand the decadal variability of the MOC and its simulation in climate models.