Abstract:
We report the first tungsten isotopic measurements in stardust silicon carbide (SiC) grains recovered from the Murchison carbonaceous chondrite. The isotopes (182,183,184,186)W and (179,180)Hf were measured on both an aggregate (KJB fraction) and single stardust SiC grains (LS+LU fraction) believed to have condensed in the outflows of low-mass carbon-rich asymptotic giant branch (AGB) stars with close-to-solar metallicity. The SiC aggregate shows small deviations from terrestrial (= solar) composition in the (182)W/(184)W and (183)W/(184)W ratios, with deficits in (182)W and (183)W with respect to (184)W. The (186)W/(184)W ratio, however, shows no apparent deviation from the solar value. Tungsten isotopic measurements in single mainstream stardust SiC grains revealed lower than solar (182)W/(184)W, (183)W/(184)W, and (186)W/(184)W ratios. We have compared the SiC data with theoretical predictions of the evolution of W isotopic ratios in the envelopes of AGB stars. These ratios are affected by the slow neutron-capture process, and the predictions match the SiC data in their (182)W/(184)W, (183)W/(184)W, and (179)Hf/(180)Hf isotopic compositions, although a small adjustment in the s-process production of (183)W is needed for better agreement between the SiC data and model predictions. The models cannot explain the (186)W/(184)W ratios observed in the SiC grains, even when the current (185)W neutron-capture cross section is increased by a factor of two. Further study is required to better assess how model uncertainties (e.g., the formation of the (13)C neutron source, the mass-loss law, the modeling of the third dredge-up, and the efficiency of the (22)Ne neutron source) may affect current s-process predictions.
Abstract:
The present work is inserted into the broad context of the upgrading of lignocellulosic fibers. Sisal was chosen for the present study because more than 50% of the world's sisal is cultivated in Brazil, it has a short life cycle, and its fiber has a high cellulose content. Specifically, the subject addressed was the hydrolysis of sisal pulp using sulfuric acid as the catalyst. To assess the influence of parameters such as sulfuric acid concentration and temperature on this process, the pulp was hydrolyzed with various concentrations of sulfuric acid (30-50%) at 70 °C and with 30% acid (v/v) at various temperatures (60-100 °C). During hydrolysis, aliquots were withdrawn from the reaction media, and the solid (non-hydrolyzed pulp) was separated from the liquid (liquor) by filtering each aliquot. The sugar composition of the liquor was analyzed by HPLC, and the non-hydrolyzed pulps were characterized by viscometry (average molar mass) and X-ray diffraction (crystallinity). The results support the following conclusions: acid hydrolysis using 30% H2SO4 at 100 °C can produce sisal microcrystalline cellulose, and the conditions that led to the largest glucose yield and lowest decomposition rate were 50% H2SO4 at 70 °C. In summary, the study of sisal pulp hydrolysis using concentrated acid showed that certain conditions are suitable for high recovery of xylose and good yield of glucose. Moreover, the unreacted cellulose can be targeted for different applications in bio-based materials. A kinetic study based on the glucose yield was performed for all reaction conditions using the kinetic model proposed by Saeman. The results showed that the model fitted all reactions with 30-35% H2SO4 but not those with greater concentrations of sulfuric acid.
The present study is part of an ongoing research program, and the results reported here will be used as a comparison against the results obtained when using treated sisal pulp as the starting material.
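The Saeman model mentioned above treats hydrolysis as two consecutive first-order reactions, cellulose → glucose → decomposition products. A minimal sketch of the resulting glucose-yield expression, with illustrative rate constants k1 and k2 (not the values fitted in the study):

```python
import math

def saeman_glucose(t, k1, k2, c0=1.0):
    """Glucose concentration at time t for the consecutive first-order
    scheme cellulose -(k1)-> glucose -(k2)-> decomposition products.
    c0 is the potential glucose from the initial cellulose."""
    if abs(k1 - k2) < 1e-12:  # degenerate case k1 == k2
        return c0 * k1 * t * math.exp(-k1 * t)
    return c0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))

# The yield peaks at t* = ln(k1/k2) / (k1 - k2), after which decomposition
# of glucose dominates over its production.
```

Fitting k1 and k2 for each acid concentration and temperature is what reveals where the model stops describing the data, as reported above for acid concentrations above 35%.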
Abstract:
Abstract Background Obstructive sleep apnea (OSA) is a respiratory disease characterized by collapse of the extrathoracic airway and has important social implications related to accidents and cardiovascular risk. The main objective of the present study was to investigate whether the drop in expiratory flow and the volume expired in 0.2 s during the application of negative expiratory pressure (NEP) are associated with the presence and severity of OSA in a population of professional interstate bus drivers who travel medium and long distances. Methods/Design An observational, analytic study will be carried out involving adult male employees of an interstate bus company. Those who agree to participate will undergo detailed history taking and a physical examination including blood pressure measurement, anthropometric data, circumference measurements (hips, waist and neck), tonsil evaluation and the Mallampati index. Moreover, specific questionnaires addressing sleep apnea and excessive daytime sleepiness will be administered. Data acquisition will be completely anonymous. Following the medical examination, the participants will undergo spirometry, the NEP test and standard overnight polysomnography. The NEP test is performed by administering negative pressure at the mouth during expiration. It is a practical test performed while awake and requires little cooperation from the subject. In the absence of expiratory flow limitation, the increase in the pressure gradient between the alveoli and the open upper airway caused by NEP results in an increase in expiratory flow. Discussion Despite the abundance of scientific evidence, OSA is still underdiagnosed in the general population. In addition, diagnostic procedures are expensive and predictive criteria are still unsatisfactory. Because increased upper airway collapsibility is one of the main determinants of OSA, the response to the application of NEP could be a predictor of this disorder.
Through this study protocol, we expect to find predictive NEP values for different degrees of OSA, in order to contribute toward an early diagnosis of this condition and reduce its impact and complications among commercial interstate bus drivers.
Abstract:
Abstract Background In recent years, biorefining of lignocellulosic biomass to produce multi-products such as ethanol and other biomaterials has become a dynamic research area. Pretreatment technologies that fractionate sugarcane bagasse are essential for the successful use of this feedstock in ethanol production. In this paper, we investigate modifications in the morphology and chemical composition of sugarcane bagasse submitted to a two-step treatment, using diluted acid followed by a delignification process with increasing sodium hydroxide concentrations. Detailed chemical and morphological characterization of the samples after each pretreatment condition, studied by high performance liquid chromatography, solid-state nuclear magnetic resonance, diffuse reflectance Fourier transformed infrared spectroscopy and scanning electron microscopy, is reported, together with sample crystallinity and enzymatic digestibility. Results Chemical composition analysis performed on samples obtained after different pretreatment conditions showed that up to 96% and 85% of hemicellulose and lignin fractions, respectively, were removed by this two-step method when sodium hydroxide concentrations of 1% (m/v) or higher were used. The efficient lignin removal resulted in an enhanced hydrolysis yield reaching values around 100%. Considering the cellulose loss due to the pretreatment (maximum of 30%, depending on the process), the total cellulose conversion increases significantly from 22.0% (value for the untreated bagasse) to 72.4%. The delignification process, with consequent increase in the cellulose to lignin ratio, is also clearly observed by nuclear magnetic resonance and diffuse reflectance Fourier transformed infrared spectroscopy experiments. We also demonstrated that the morphological changes contributing to this remarkable improvement occur as a consequence of lignin removal from the sample. 
Bagasse unstructuring is favored by the loss of cohesion between neighboring cell walls, as well as by changes in the inner cell wall structure, such as damage, hole formation and loss of mechanical resistance, facilitating liquid and enzyme access to crystalline cellulose. Conclusions The results presented herewith show the efficiency of the proposed method for improving the enzymatic digestibility of sugarcane bagasse and provide understanding of the pretreatment's mechanism of action. Combining the different techniques applied in this work provided thorough information about the underlying morphological and chemical changes and was an efficient approach to understand the morphological effects resulting from sample delignification and their influence on the enhanced hydrolysis results.
Abstract:
We present a one-dimensional nonlocal hopping model with exclusion on a ring. The model is related to the Raise and Peel growth model. A nonnegative parameter u controls the ratio of the local backwards and nonlocal forwards hopping rates. The phase diagram, and consequently the values of the current, depend on u and the density of particles. In the special case of half-filling and u = 1, the system is conformally invariant, and an exact value of the current for any size L of the system is conjectured and checked for large lattice sizes in Monte Carlo simulations. For u > 1 the current has a non-analytic dependence on the density when the latter approaches the half-filling value.
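The Monte Carlo check of the current mentioned above can be illustrated with a much simpler stand-in: a plain *local* asymmetric exclusion process on a ring (the abstract's model has nonlocal forward hops, whose rules are not given here, so this sketch only shows the current-measurement idea, not the actual dynamics).

```python
import random

def asep_ring_current(L=100, density=0.5, p_right=1.0, p_left=0.25,
                      sweeps=5000, seed=1):
    """Measure the time-averaged particle current (net rightward hops per
    attempted move) for a local ASEP on a ring with random-sequential
    updates. All rates here are illustrative, not the model's u-parametrized
    nonlocal rates."""
    rng = random.Random(seed)
    n_particles = int(L * density)
    occ = [1] * n_particles + [0] * (L - n_particles)
    rng.shuffle(occ)  # stationary measure on the ring is uniform
    net_hops = 0
    attempts = sweeps * L
    for _ in range(attempts):
        i = rng.randrange(L)
        j = (i + 1) % L
        if occ[i] == 1 and occ[j] == 0:
            if rng.random() < p_right:
                occ[i], occ[j] = 0, 1
                net_hops += 1
        elif occ[i] == 0 and occ[j] == 1:
            if rng.random() < p_left:
                occ[i], occ[j] = 1, 0
                net_hops -= 1
    return net_hops / attempts
```

For this local model at density ρ the current per attempt is (p_right − p_left)·N(L−N)/(L(L−1)) ≈ 0.19 with the defaults, which the simulation reproduces; in the nonlocal model the analogous average would be compared against the conjectured exact value at half-filling.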
Abstract:
In electronic commerce, systems development is based on two fundamental types of models: business models and process models. A business model is concerned with value exchanges among business partners, while a process model focuses on operational and procedural aspects of business communication. Thus, a business model defines the what in an e-commerce system, while a process model defines the how. Business process design can be facilitated and improved by a method for systematically moving from a business model to a process model. Such a method would provide support for traceability, evaluation of design alternatives, and a seamless transition from analysis to realization. This work proposes a unified framework that can be used as a basis to analyze, interpret and understand the different concepts associated with different stages of e-commerce system development. In this thesis, we illustrate how UN/CEFACT's recommended metamodels for business and process design can be analyzed, extended and then integrated into final solutions based on the proposed unified framework. As an application of the framework, we also demonstrate how process-modeling tasks can be facilitated in e-commerce system design. The proposed methodology, called BP3 (for Business Process Patterns Perspective), uses a question-answer interface to capture different business requirements from the designers. It is based on pre-defined process patterns, and the final solution is generated by applying the captured business requirements, by means of a set of production rules, to complete the inter-process communication among these patterns.
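The production-rule step described above can be sketched as a tiny forward-chaining engine: designer answers seed a fact set, and rules fire until a fixpoint selects the applicable process patterns. The rule names and facts below are hypothetical illustrations, not BP3's actual rule base.

```python
def apply_rules(answers, rules):
    """Forward-chain over (conditions, conclusion) rules until no rule can
    add a new fact; returns the closed set of facts (answers + patterns)."""
    facts = set(answers)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Illustrative rules mapping questionnaire answers to process patterns.
rules = [
    ({"payment_required"}, "pattern:payment"),
    ({"goods_physical"}, "pattern:shipment"),
    ({"pattern:payment", "pattern:shipment"}, "pattern:escrow"),
]
```

A designer answering "payment_required" and "goods_physical" would thus be routed to the payment, shipment and escrow patterns, whose inter-process communication the methodology then completes.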
Abstract:
The increasing aversion of society to technological risks requires the development of inherently safer and environmentally friendlier processes, besides assuring the economic competitiveness of industrial activities. The different forms of impact (e.g. environmental, economic and societal) are frequently characterized by conflicting reduction strategies and must be taken into account holistically in order to identify the optimal solutions in process design. Though the literature reports an extensive discussion of strategies and specific principles, quantitative assessment tools are required to identify the marginal improvements in alternative design options, to allow trade-offs among contradictory aspects and to prevent "risk shift". In the present work a set of integrated quantitative tools for design assessment (i.e. a design support system) was developed. The tools were specifically dedicated to the implementation of sustainability and inherent safety in process and plant design activities, with respect to chemical and industrial processes in which substances dangerous to humans and the environment are used or stored. The tools were mainly devoted to application in the "conceptual" and "basic design" stages, when the project is still open to changes (due to the large number of degrees of freedom), which may include strategies to improve sustainability and inherent safety. The set of developed tools covers different phases of the design activities, throughout the lifecycle of a project (inventories, process flow diagrams, preliminary plant layout plans). The development of such tools makes a substantial contribution to filling the present gap in the availability of sound supports for implementing safety and sustainability in the early phases of process design.
The proposed decision support system was based on the development of a set of leading key performance indicators (KPIs), which ensure the assessment of the economic, societal and environmental impacts of a process (i.e. its sustainability profile). The KPIs were based on impact models (including complex ones), but are easy and swift in practical application. Their full evaluation is possible even starting from the limited data available during early process design. Innovative reference criteria were developed to compare and aggregate the KPIs on the basis of the actual site-specific impact burden and the sustainability policy. Particular attention was devoted to the development of reliable criteria and tools for the assessment of inherent safety in different stages of the project lifecycle. The assessment follows an innovative approach to the analysis of inherent safety, based on both the calculation of the expected consequences of potential accidents and the evaluation of the hazards related to equipment. The methodology overcomes several problems present in previous methods proposed for quantitative inherent safety assessment (use of arbitrary indexes, subjective judgement, built-in assumptions, etc.). A specific procedure was defined for the assessment of the hazards related to the formation of undesired substances in chemical systems undergoing "out of control" conditions. In the assessment of layout plans, "ad hoc" tools were developed to account for the hazard of domino escalations and for safety economics. The effectiveness and value of the tools were demonstrated by application to a large number of case studies concerning different kinds of design activities (choice of materials; design of the process, the plant and the layout) and different types of processes/plants (chemical industry, storage facilities, waste disposal).
An experimental survey (analysis of the thermal stability of isomers of nitrobenzaldehyde) provided the input data necessary to demonstrate the method for inherent safety assessment of materials.
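The comparison and aggregation of KPIs against site-specific reference burdens, as described above, can be sketched schematically: each raw impact is normalized by its reference burden and the normalized indicators are combined under policy-defined weights. The indicator names and the simple weighted mean below are illustrative assumptions, not the thesis's actual indicator set or aggregation criteria.

```python
def sustainability_profile(impacts, references, weights):
    """Normalize each KPI against its site-specific reference burden and
    aggregate into one weighted score (lower = smaller relative burden).
    KPI names, references and weights are purely illustrative."""
    normalized = {k: impacts[k] / references[k] for k in impacts}
    total_w = sum(weights[k] for k in impacts)
    score = sum(weights[k] * normalized[k] for k in impacts) / total_w
    return normalized, score

profile, score = sustainability_profile(
    impacts={"environmental": 2.0, "societal": 1.0, "economic": 3.0},
    references={"environmental": 4.0, "societal": 2.0, "economic": 6.0},
    weights={"environmental": 1.0, "societal": 1.0, "economic": 1.0},
)
```

Keeping the normalized indicators alongside the aggregate score is what allows trade-offs to be inspected rather than hidden, which is the point of avoiding arbitrary composite indexes.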
Abstract:
This paper presents our research on nucleation and its dependence on external conditions, as well as on the internal characteristics of the solution itself. Among the research lines of our group, we have been studying the influence of electric fields on two different but related compounds, lithium-potassium sulfate and lithium-ammonium sulfate, both of which show a variation in the nucleation rate when an electric field is applied during crystal growth. Moreover, this paper describes a laboratory protocol to teach university science students the nucleation process itself and how it depends on externally applied conditions, e.g. electric fields.
Abstract:
Subduction zones, where friction between oceanic and continental plates causes strong seismicity, are the most common settings for tsunamigenic earthquakes. The topics and methodologies discussed in this thesis are focused on understanding the rupture process of the seismic sources of great earthquakes that generate tsunamis. Tsunamigenesis is controlled by several kinematic characteristics of the parent earthquake, such as the focal mechanism, the depth of the rupture and the slip distribution along the fault area, as well as by the mechanical properties of the source zone. Each of these factors plays a fundamental role in tsunami generation. Therefore, inferring the source parameters of tsunamigenic earthquakes is crucial to understand the generation of the consequent tsunami and thus to mitigate the risk along the coasts. The typical way to gather information regarding the source process is to invert the available geophysical data. Tsunami data, moreover, are useful to constrain the portion of the fault area that extends offshore, generally close to the trench, which, on the contrary, other kinds of data are not able to constrain. In this thesis I have discussed the rupture process of some recent tsunamigenic events, as inferred by means of an inverse method. I have presented the 2003 Tokachi-Oki (Japan) earthquake (Mw 8.1). In this study the slip distribution on the fault was inferred by inverting tsunami waveform, GPS and bottom-pressure data. The joint inversion of tsunami and geodetic data constrained the slip distribution on the fault much better than separate inversions of the single datasets. We then studied the earthquake that occurred in 2007 in southern Sumatra (Mw 8.4). By inverting several tsunami waveforms, both in the near and in the far field, we determined the slip distribution and the mean rupture velocity along the causative fault.
Since the largest patch of slip was concentrated on the deepest part of the fault, this is the likely reason for the small tsunami waves that followed the earthquake, pointing out how crucial a role the depth of the rupture plays in controlling tsunamigenesis. Finally, we presented a new rupture model for the great 2004 Sumatra earthquake (Mw 9.2). We performed the joint inversion of tsunami waveform, GPS and satellite altimetry data to infer the slip distribution, the slip direction and the rupture velocity on the fault. Furthermore, in this work we presented a novel method to estimate, in a self-consistent way, the average rigidity of the source zone. The estimation of the source zone rigidity is important since it may play a significant role in tsunami generation; particularly for slow earthquakes, a low rigidity value is sometimes necessary to explain how an earthquake with a relatively low seismic moment may generate significant tsunamis. This latter point may be relevant for explaining the mechanics of tsunami earthquakes, one of the open issues in present-day seismology. The investigation of these tsunamigenic earthquakes has underlined the importance of using a joint inversion of different geophysical data to determine the rupture characteristics. The results shown here have important implications for the implementation of new tsunami warning systems (particularly in the near field), for the improvement of the current ones, and for the planning of inundation maps for tsunami-hazard assessment along coastal areas.
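The joint-inversion idea running through these studies can be sketched as a damped linear least-squares problem: stack the Green's function matrices of each dataset (tsunami waveforms, GPS, altimetry) with relative weights and solve for the slip on the subfaults. This is only a minimal sketch; real inversions add smoothing, non-negativity constraints and careful data weighting, and the function and parameter names here are illustrative.

```python
import numpy as np

def joint_slip_inversion(G_list, d_list, w_list, damping=0.1):
    """Damped least squares for slip: minimize sum_k ||w_k (G_k s - d_k)||^2
    + damping^2 ||s||^2. G_list: Green's function matrices (one per dataset),
    d_list: data vectors, w_list: relative dataset weights."""
    G = np.vstack([w * Gk for Gk, w in zip(G_list, w_list)])
    d = np.concatenate([w * dk for dk, w in zip(d_list, w_list)])
    n = G.shape[1]
    A = np.vstack([G, damping * np.eye(n)])     # Tikhonov damping rows
    b = np.concatenate([d, np.zeros(n)])
    slip, *_ = np.linalg.lstsq(A, b, rcond=None)
    return slip
```

Stacking the datasets is precisely why the joint inversion constrains slip better than any single dataset: each block of rows removes a different part of the null space of the others.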
Abstract:
In territories where food production is mostly scattered across several small or medium size (or even domestic) farms, a large amount of heterogeneous residues is produced yearly, since farmers usually carry out different activities on their properties. The amount and composition of farm residues therefore change widely during the year, according to the particular production process periodically carried out. Coupling high-efficiency micro-cogeneration energy units with easily handled biomass conversion equipment suitable for treating different materials would provide many important advantages to farmers and the community as well, so that increasing the feedstock flexibility of gasification units is nowadays seen as a further paramount step towards their widespread use in rural areas and as a real necessity for their utilization at small scale. Two main research topics were considered of primary concern for this purpose and are therefore discussed in this work: the investigation of the impact of fuel properties on gasification process development, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. According to these two main aspects, the present work is thus divided into two main parts. The first focuses on the biomass gasification process, which was investigated in its theoretical aspects and then analytically modelled in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (the temperature profile, above all), in order to point out the main differences which prevent the use of the same conversion unit for different materials.
To this end, a kinetic-free gasification model was initially developed in Excel sheets, considering different values of the air-to-biomass ratio and taking downdraft gasification technology as the particular application examined. An attempt was made to connect the differences in syngas production and working conditions (process temperatures, above all) among the considered fuels to some biomass properties, such as elemental composition and ash and water contents. The novelty of this analytical approach was the use of ratios of kinetic constants to determine the oxygen distribution among the different oxidation reactions (regarding volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone, through which the energy and mass balances involved in the process algorithm were also linked together. Moreover, the main advantage of this analytical tool is the ease with which the input data corresponding to particular biomass materials can be inserted into the model, so that a rapid evaluation of their thermo-chemical conversion properties can be obtained, mainly based on their chemical composition. Good conformity of the model results with other literature and experimental data was found for almost all the considered materials (except for refuse-derived fuels, because their chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, based on an analysis of the fundamental thermo-physical and thermo-chemical mechanisms which are supposed to regulate the main solid conversion steps involved in the gasification process.
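The water-gas shift equilibrium assumption in the gasification zone can be illustrated with a common empirical fit for the equilibrium constant (Moe's correlation is used here as an assumption; the thesis does not state which correlation was adopted):

```python
import math

def wgs_equilibrium_constant(T_kelvin):
    """Equilibrium constant of the water-gas shift reaction
    CO + H2O <-> CO2 + H2, from Moe's empirical correlation
    K = exp(4577.8/T - 4.33). Illustrative only."""
    return math.exp(4577.8 / T_kelvin - 4.33)
```

In a kinetic-free model this single relation, K = (x_CO2 · x_H2) / (x_CO · x_H2O), fixes the syngas composition in the gasification zone once the temperature from the energy balance is known, which is how the mass and energy balances become linked.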
Gasification units were schematically subdivided into four reaction zones, respectively corresponding to the biomass heating, solids drying, pyrolysis and char gasification processes, and the time required for the full development of each of these steps was correlated to the kinetic rates (for the pyrolysis and char gasification processes only) and to the heat and mass transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (particle size, above all), it was found that for all the considered materials the char gasification step is kinetically limited, and therefore temperature is the main working parameter controlling this step. Solids drying is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is almost entirely achieved by radiative heat transfer from the hot walls of the reactor to the bed of material. For pyrolysis, instead, working temperature, particle size and the very nature of the biomass (through its own pyrolysis heat) all have comparable weights on the process development, so that the corresponding time can depend on any one of these factors, according to the particular fuel being gasified and the particular conditions established inside the gasifier. The same analysis also led to the estimation of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units was finally accomplished. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit seems suitable for more than one biomass species.
Nevertheless, since the reactor diameters were found to be quite similar for all the examined materials, it could be envisaged to design a single unit for all of them by adopting the largest diameter and combining the maximum heights of each reaction zone, as calculated for the different biomasses. A total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air-injecting nozzles at different levels along the reactor, the gasification zone could be properly set up according to the particular material being gasified. Finally, since the gasification and pyrolysis times were found to change considerably with even small temperature variations, it could also be envisaged to regulate the air feeding rate for each gasified material (on which the process temperatures depend), so that the available reactor volumes would be suitable for the complete development of solid conversion in each case, without noticeably changing the fluid dynamic behaviour of the unit or the air/biomass ratio. The second part of this work dealt with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially if multi-fuel gasifiers are to be used, more substantial gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can simultaneously be present in the exit gas stream, and, as a consequence, suitable gas cleaning systems have to be designed. In this work, an overall study on the assessment of gas cleaning lines is carried out.
Differently from other research efforts carried out in the same field, the main scope is to define general arrangements for gas cleaning lines suitable for removing several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed according to three different plant sizes, respectively corresponding to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h. Their performances were examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops, above all) and their consumption of energy and materials. Subsequently, the designed units were combined into different overall gas cleaning line arrangements ("paths"), following some technical constraints which were mainly determined from the same performance analysis of the cleaning units and from the likely synergistic effects of contaminants on the proper working of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues to be addressed in the design of the paths was the removal of tars from the gas stream, to prevent filter plugging and/or clogging of the line pipes. For this purpose, a catalytic tar cracking unit was envisaged as the only solution to be adopted, and, therefore, a catalytic material able to work at relatively low temperatures was chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, with a consequent relevant air consumption for this operation, was calculated in all of the cases.
Other difficulties had to be overcome in the abatement of alkali metals, which condense at temperatures lower than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Apart from these two solutions, which seem unavoidable in gas cleaning line design, high-temperature gas cleaning lines could not be achieved for the two larger plant sizes either. Indeed, as the use of temperature control devices was precluded in the adopted design procedure, ammonia partial oxidation units (the only methods considered for the abatement of ammonia at high temperature) were not suitable for the large-scale units, because of the large increase in reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements for each considered plant size were finally designed, so that the possibility of cleaning the gas up to the required standard was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of some defined operational parameters, among which total pressure drops, total energy losses, number of units and secondary material consumption. On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all of the cases, especially because of the high water consumption of water scrubber units in the ammonia absorption process. This result is, however, connected to the possibility of using activated carbon units for ammonia removal and a Nahcolite adsorber for hydrochloric acid; the very high efficiency of this latter material is also remarkable.
Finally, as an estimation of the overall energy loss pertaining to the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter obtained on the basis of the lower heating value of the gas only. This overall study on gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.
Abstract:
The present work is motivated by biological questions concerning the behavior of membrane potentials in neurons. A widely studied model for spiking neurons is the following. Between spikes, the membrane potential behaves like a diffusion process X given by the SDE dX_t = beta(X_t) dt + sigma(X_t) dB_t, where (B_t) denotes a standard Brownian motion. Spikes are explained as follows: as soon as the potential X crosses a certain excitation threshold S, a spike occurs, after which the potential is reset to a certain value x_0. In applications it is sometimes possible to observe a diffusion process X between the spikes and to estimate the coefficients beta() and sigma() of the SDE. Nevertheless, the thresholds x_0 and S must be determined to fully specify the model. One way to approach this problem is to treat x_0 and S as parameters of a statistical model and to estimate them. The present work discusses four different cases, in which we assume that the membrane potential X between spikes is, respectively, a Brownian motion with drift, a geometric Brownian motion, an Ornstein-Uhlenbeck process or a Cox-Ingersoll-Ross process. In addition, we observe the times between consecutive spikes, which we interpret as iid hitting times of the threshold S by X started at x_0. The first two cases are very similar, and in each the maximum likelihood estimator can be given explicitly. Moreover, using LAN theory, the optimality of these estimators is shown. In the Ornstein-Uhlenbeck and Cox-Ingersoll-Ross cases we choose a minimum-distance method based on comparing the empirical and true Laplace transforms with respect to a Hilbert-space norm. We prove that all estimators are strongly consistent and asymptotically normally distributed.
Im letzten Kapitel werden wir die Effizienz der Minimum-Distanz-Schätzer anhand simulierter Daten überprüfen. Ferner, werden Anwendungen auf reale Datensätze und deren Resultate ausführlich diskutiert.
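As a rough illustration of the Laplace-transform minimum-distance idea, the sketch below estimates the unknown distance a = S - x_0 in the simplest case, Brownian motion with drift, where the hitting-time law is inverse Gaussian and the Laplace transform is available in closed form. All parameter values are assumptions for illustration; the thesis's actual treatment of the OU and CIR cases is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: drift mu and diffusion sigma are taken as known,
# the distance a = S - x_0 is the quantity to estimate.
mu, sigma, a_true = 1.0, 0.5, 2.0

# Hitting times of level a by a BM with drift mu > 0 are inverse Gaussian
# with mean a/mu and shape a^2/sigma^2; these play the role of the
# observed interspike intervals.
T = rng.wald(a_true / mu, a_true**2 / sigma**2, size=5000)

def model_lt(s, a):
    # Closed-form Laplace transform E[exp(-s*T)] of the hitting time
    return np.exp(a * (mu - np.sqrt(mu**2 + 2 * sigma**2 * s)) / sigma**2)

s_grid = np.linspace(0.1, 5.0, 50)
emp_lt = np.exp(-np.outer(s_grid, T)).mean(axis=1)  # empirical Laplace transform

# Minimum-distance estimate: minimise the squared L2 distance between the
# empirical and model Laplace transforms over a grid of candidate a values.
a_grid = np.linspace(0.5, 5.0, 451)
dists = [np.sum((emp_lt - model_lt(s_grid, a)) ** 2) for a in a_grid]
a_hat = a_grid[int(np.argmin(dists))]
print(a_hat)  # close to a_true = 2.0
```

The grid search stands in for the norm minimisation over a Hilbert space; in practice a continuous optimiser and a weighting measure on s would replace the uniform grid.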
Resumo:
This thesis presents a process-based modelling approach to quantify carbon uptake by lichens and bryophytes at the global scale. Based on the modelled carbon uptake, potential global rates of nitrogen fixation, phosphorus uptake and chemical weathering by the organisms are estimated. In this way, the significance of lichens and bryophytes for global biogeochemical cycles can be assessed. The model uses gridded climate data and key properties of the habitat (e.g. disturbance intervals) to predict the processes that control net carbon uptake, namely photosynthesis, respiration, water uptake and evaporation. It relies on equations used in many dynamic vegetation models, combined with concepts specific to lichens and bryophytes, such as poikilohydry and the effect of water content on CO2 diffusivity. To capture the great functional variation of lichens and bryophytes at the global scale, the model parameters are characterised by broad ranges of possible values instead of a single, globally uniform value. The predicted terrestrial net uptake of 0.34 to 3.3 Gt / yr of carbon and the global patterns of productivity are in accordance with empirically derived estimates. Based on the simulated net carbon uptake, further impacts of lichens and bryophytes on biogeochemical cycles are quantified at the global scale, with a focus on three processes: nitrogen fixation, phosphorus uptake and chemical weathering. The estimates take the form of potential rates: they quantify the amount of nitrogen and phosphorus the organisms need to build up biomass, also accounting for resorption and leaching of nutrients. Subsequently, the potential phosphorus uptake on bare ground is used to estimate chemical weathering by the organisms, assuming that they release weathering agents to obtain phosphorus.
The predicted requirement ranges from 3.5 to 34 Tg / yr for nitrogen and from 0.46 to 4.6 Tg / yr for phosphorus. Estimates of chemical weathering lie between 0.058 and 1.1 km³ / yr of rock. These values have a realistic order of magnitude and support the notion that lichens and bryophytes have the potential to play an important role in global biogeochemical cycles.
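A back-of-the-envelope sketch of how a potential nutrient requirement follows from net carbon uptake, in the spirit of the approach above: the N:C ratio and resorption fraction below are illustrative assumptions chosen so the output roughly matches the reported range, not the thesis's actual parameters.

```python
# Potential nitrogen requirement derived from the net carbon uptake range.
# All conversion factors are illustrative placeholders.
c_uptake_gt = (0.34, 3.3)  # net carbon uptake range, Gt C / yr (from the text)
n_to_c = 0.02              # assumed N:C mass ratio of newly built biomass
resorption = 0.5           # assumed fraction of N recovered by resorption

def n_requirement_tg(c_gt):
    # Gt C -> Tg C is a factor of 1000; N demand is reduced by resorption
    return c_gt * 1000.0 * n_to_c * (1.0 - resorption)

lo, hi = (n_requirement_tg(c) for c in c_uptake_gt)
print(f"potential N requirement: {lo:.1f} to {hi:.1f} Tg N / yr")
```

With these assumed factors the sketch yields roughly 3 to 33 Tg N / yr, the same order of magnitude as the 3.5 to 34 Tg / yr range reported above.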
Resumo:
In this article we propose a bootstrap test for the probability of ruin in the compound Poisson risk process. We adopt the P-value approach, which leads to a more complete assessment of the underlying risk than the probability of ruin alone. We provide second-order accurate P-values for this testing problem and consider both parametric and nonparametric estimators of the individual claim amount distribution. Simulation studies show that the suggested bootstrap P-values are very accurate and outperform their analogues based on the asymptotic normal approximation.
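The parametric variant of the bootstrap P-value can be sketched as follows, assuming exponential claim amounts so that the Cramér-Lundberg formula gives the ruin probability in closed form. The second-order corrections and the nonparametric estimator of the article are not reproduced, and all numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

u, theta = 10.0, 0.2  # initial capital and safety loading (assumed known)
psi0 = 0.10           # tolerated ruin probability under H0: psi(u) <= psi0

def ruin_prob(mean_claim):
    # Cramer-Lundberg for Exp(mean_claim) claims:
    # psi(u) = exp(-theta*u / ((1+theta)*mu)) / (1+theta)
    return np.exp(-theta * u / ((1 + theta) * mean_claim)) / (1 + theta)

claims = rng.exponential(1.0, size=200)  # observed claim amounts (simulated)
psi_hat = ruin_prob(claims.mean())

# Parametric bootstrap: redraw claims from the fitted exponential law and
# recompute the plug-in ruin probability.
B = 2000
boot = np.array([
    ruin_prob(rng.exponential(claims.mean(), size=claims.size).mean())
    for _ in range(B)
])

# Bootstrap P-value: how often the resampled excess over the estimate is at
# least as large as the estimate's excess over the tolerated level.
p_value = np.mean(boot - psi_hat >= psi_hat - psi0)
print(psi_hat, p_value)
```

A small P-value indicates that the estimated ruin probability exceeds the tolerated level by more than bootstrap variability explains; the article's second-order accurate construction refines exactly this quantity.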
Resumo:
Fully engaging in a new culture means translating oneself into a different set of cultural values, many of which can be foreign to the individual. The individual may face conflicting tensions between the psychological need to be part of the new society and feelings of guilt or betrayal towards the former society, culture or self. Many international students from Myanmar, most of whom have little international experience, undergo this value and cultural translation during their time in American colleges. It is commonly assumed that something will be lost in the process of translation and that the students become more Westernized or never fit into either Myanmar or US culture. However, a study of the narratives of Myanmar students studying in the US reveals a more complex reality. Because individuals have multifaceted identities, and many cultures and subcultures fluctuate and intertwine with one another, the students' cross-cultural interactions can also help them acquire new ways of seeing things. Through their struggle to engage in US college culture, many students enact "cosmopolitanism" in their transformative identity formation process and thus define and identify themselves beyond a single set of cultural norms.
Resumo:
Elevated transaminases are detected in more than 5 % of asymptomatic patients undergoing laboratory investigations. If there is no obvious cause, the finding should be confirmed within the next 3 months. Frequent causes are non-alcoholic fatty liver disease (NAFLD), non-alcoholic steatohepatitis (NASH), alcohol, hepatitis B or C, hemochromatosis, and drugs or toxins. Rarer causes are autoimmune hepatitis, Wilson's disease and α1-antitrypsin deficiency. There are also non-hepatic causes such as celiac disease, as well as hemolysis and myopathies in the case of an isolated increase of ASAT. I recommend a two-step investigational procedure in which the more frequent causes are examined before the rare causes are studied. The value of the proposed investigations is discussed.