Abstract:
Energy efficiency is one of the major objectives that must be achieved in order to use the world's limited energy resources in a sustainable way. Since radiative heat transfer is the dominant heat transfer mechanism in most fossil fuel combustion systems, more accurate insight and models can improve the energy efficiency of newly designed combustion systems. The radiative properties of combustion gases are highly wavelength dependent, so better models for calculating them are needed in the modeling of large-scale industrial combustion systems. With detailed knowledge of the spectral radiative properties of gases, the modeling of combustion processes in different applications can be made more accurate. In order to propose a new method for effective non-gray modeling of radiative heat transfer in combustion systems, different models for the spectral properties of gases, including the SNBM, EWBM, and WSGGM, were studied in this research. Building on this detailed analysis of the different approaches, the thesis presents new methods for gray and non-gray radiative heat transfer modeling in homogeneous and inhomogeneous H2O–CO2 mixtures at atmospheric pressure. The proposed method is able to support the modeling of a wide range of combustion systems, including the oxy-fired combustion scenario. The new methods are based on implementing pre-obtained correlations for the total emissivity and band absorption coefficient of H2O–CO2 mixtures at different temperatures, gas compositions, and optical path lengths. They can easily be used within any commercial CFD software for radiative heat transfer modeling, resulting in more accurate, simple, and fast calculations. The new methods were successfully used in CFD modeling by applying them to an industrial-scale backpass channel under oxy-fired conditions.
The developed approaches are more accurate than other methods; moreover, they provide a complete explanation and detailed analysis of radiative heat transfer in different systems under different combustion conditions. The methods were verified against several benchmarks and showed good accuracy and computational speed compared to other methods. Furthermore, the implementation of the suggested banded approach in CFD software is straightforward.
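The weighted-sum-of-gray-gases model (WSGGM) referred to above expresses the total emissivity of an H2O–CO2 mixture as a weighted sum over a few gray gases, ε = Σ aᵢ(T)(1 − e^(−kᵢ·pL)). A minimal sketch in Python of that functional form, with placeholder coefficients for illustration only (the thesis's actual correlations are not reproduced here):

```python
import math

def wsgg_emissivity(T, pL, gases):
    """Total emissivity at temperature T [K] and pressure path length pL [atm*m].

    gases: list of (weight_poly, k) pairs, where weight_poly holds the
    polynomial coefficients of the temperature-dependent weight a_i(T)
    and k is the gray-gas absorption coefficient [1/(atm*m)].
    """
    eps = 0.0
    for coeffs, k in gases:
        # a_i(T) as a polynomial in T, a common WSGG convention
        a = sum(c * T**j for j, c in enumerate(coeffs))
        eps += a * (1.0 - math.exp(-k * pL))
    return eps

# Placeholder three-gray-gas parameter set (illustrative values only,
# NOT the correlations developed in the thesis)
demo_gases = [
    ((0.4, -1.0e-4), 0.5),
    ((0.2,  5.0e-5), 5.0),
    ((0.1,  2.0e-5), 50.0),
]

eps = wsgg_emissivity(T=1200.0, pL=1.0, gases=demo_gases)
print(f"total emissivity: {eps:.3f}")
```

The weights not assigned to any gray gas implicitly belong to a transparent "clear gas", so the total emissivity saturates below 1 as the path length grows.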
Abstract:
Separation of carboxylic acids from aqueous streams is an important part of their manufacturing process. The aqueous solutions are usually dilute, containing less than 10 % acids. Separation by distillation is difficult because the boiling points of the acids are only marginally higher than that of water. Distillation is therefore not only difficult but also expensive, owing to the evaporation of large amounts of water. Carboxylic acids have traditionally been precipitated as calcium salts. The yields of these processes are usually relatively low and the chemical costs high. In particular, the decomposition of the calcium salts with sulfuric acid produces large amounts of calcium sulfate sludge. Solvent extraction has been studied as an alternative method for the recovery of carboxylic acids. Solvent extraction is based on mixing two immiscible liquids so that the desired components transfer from one liquid to the other owing to differences in their equilibrium distribution. In the case of carboxylic acids, the acids are transferred from the aqueous phase to the organic solvent through physical and chemical interactions. The acids and the extractant form complexes which are soluble in the organic phase. The extraction efficiency is affected by many factors, for instance the initial acid concentration, the type and concentration of the extractant, pH, temperature, and extraction time. In this work, the effects of initial acid concentration, type of extractant, and temperature on the extraction efficiency were studied. As carboxylic acids are usually the products of the processes, their recovery is desired. Hence, the acids have to be removed from the organic phase after the extraction. Removal of the acids from the organic phase also regenerates the extractant, which can then be recycled in the process. The regeneration of the extractant was studied by back-extracting, i.e. stripping, the acids from the organic solution into dilute sodium hydroxide solution.
In the solvent regeneration, the regenerability of different extractants and the effect of initial acid concentration and temperature were studied.
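The extraction equilibrium described above is conventionally quantified with the distribution ratio D = C_org/C_aq and, for a given phase ratio, the single-stage extraction efficiency. A small illustrative calculation (the concentrations are hypothetical, not results from the experiments reported here):

```python
def distribution_ratio(c_org, c_aq):
    """Distribution ratio D of the acid between organic and aqueous phases."""
    return c_org / c_aq

def extraction_efficiency(D, v_aq, v_org):
    """Single-stage efficiency: fraction of acid transferred to the organic phase."""
    return D * v_org / (D * v_org + v_aq)

# Hypothetical example: after equilibration, 8 g/L acid in the organic
# phase and 2 g/L remaining in the aqueous phase, equal phase volumes.
D = distribution_ratio(8.0, 2.0)
E = extraction_efficiency(D, v_aq=1.0, v_org=1.0)
print(f"D = {D}, E = {E:.0%}")
```

With D = 4 and equal volumes, 4/(4+1) = 80 % of the acid ends up in the organic phase; raising the organic-to-aqueous volume ratio or the distribution ratio pushes the efficiency higher.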
Abstract:
This thesis focuses on the molecular mechanisms regulating the photosynthetic electron transfer reactions upon changes in light intensity. To investigate these mechanisms, I used mutants of the model plant Arabidopsis thaliana impaired in various aspects of the regulation of the photosynthetic light reactions. These included mutants of photosystem II (PSII) and light harvesting complex II (LHCII) phosphorylation (stn7 and stn8), mutants of energy-dependent non-photochemical quenching (NPQ) (npq1 and npq4), and a mutant of the regulation of photosynthetic electron transfer (pgr5). All of these processes have been extensively investigated over the past decades, mainly in plants grown under steady-state conditions, and therefore many aspects of the acclimation processes may have been neglected. In this study, plants were grown under fluctuating light, i.e. alternating low and high light intensities, in order to maximally challenge the photosynthetic regulatory mechanisms. In the pgr5 and stn7 mutants, growth under fluctuating light mainly damaged PSI, while PSII was rather unaffected. It is shown that the PGR5 protein regulates linear electron transfer: it is essential for the induction of the transthylakoid ΔpH that, in turn, activates energy-dependent NPQ and downregulates the activity of cytochrome b6f. This regulation was shown to be essential for the photoprotection of PSI under fluctuations in light intensity. The stn7 mutants were able to acclimate under constant growth light by modulating the PSII/PSI ratio, while under fluctuating growth light they failed to implement this acclimation strategy. LHCII phosphorylation ensures a balanced distribution of excitation energy between PSII and PSI by increasing the probability of excitons being trapped by PSI. LHCII can be phosphorylated over the whole thylakoid membrane (grana cores as well as stroma lamellae), and when phosphorylated it constitutes a common antenna for PSII and PSI.
Moreover, LHCII was shown to work as a functional bridge that allows the energy transfer between PSII units in grana cores and between PSII and PSI centers in grana margins. Consequently, PSI can function as a quencher of excitation energy. Eventually, the LHCII phosphorylation, NPQ and the photosynthetic control of linear electron transfer via cytochrome b6f work in concert to maintain the redox poise of the electron transfer chain. This is a prerequisite for successful plant growth upon changing natural light conditions, both in short- and long-term.
Abstract:
Stochastic approximation methods for stochastic optimization are considered. The main stochastic approximation methods are reviewed: the stochastic quasi-gradient (SQG) algorithm, the Kiefer–Wolfowitz algorithm together with adaptive rules for it, and the simultaneous perturbation stochastic approximation (SPSA) algorithm. A model and a solution of the retailer's profit optimization problem are suggested, and an application of the SQG algorithm to optimization problems with objective functions given in the form of an ordinary differential equation is considered.
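The SPSA algorithm mentioned above estimates the gradient from only two noisy function evaluations per iteration, perturbing all coordinates simultaneously with a random ±1 vector. A minimal sketch with the standard Spall-style gain sequences; the noisy quadratic test function is an arbitrary stand-in, not the retailer's profit model from the work:

```python
import random

def spsa_minimize(f, theta, iterations=2000, a=0.1, c=0.1, alpha=0.602, gamma=0.101):
    """Simultaneous perturbation stochastic approximation (SPSA).

    f: noisy objective to minimize; theta: initial parameter list.
    Gains a_k = a/k**alpha and c_k = c/k**gamma follow the usual
    SPSA recommendations.
    """
    theta = list(theta)
    n = len(theta)
    for k in range(1, iterations + 1):
        ak = a / k**alpha
        ck = c / k**gamma
        # Rademacher (+1/-1) simultaneous perturbation of all coordinates
        delta = [random.choice((-1.0, 1.0)) for _ in range(n)]
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        diff = f(plus) - f(minus)
        # The same two evaluations yield the estimate for every coordinate
        theta = [t - ak * diff / (2.0 * ck * d) for t, d in zip(theta, delta)]
    return theta

# Noisy quadratic with minimum at (3, -2); the additive noise stands in
# for a stochastic objective such as a simulated profit function.
def noisy_quadratic(x):
    return (x[0] - 3.0)**2 + (x[1] + 2.0)**2 + random.gauss(0.0, 0.01)

random.seed(0)
opt = spsa_minimize(noisy_quadratic, [0.0, 0.0])
print([round(v, 2) for v in opt])
```

Unlike the Kiefer–Wolfowitz scheme, whose cost grows with the dimension (two evaluations per coordinate), SPSA needs two evaluations per iteration regardless of dimension, which is why it scales well to high-dimensional stochastic problems.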
Abstract:
Fireside deposits can be found in many types of utility and industrial furnaces. Deposits in furnaces are problematic because they can reduce heat transfer, block gas paths, and cause corrosion. To tackle these problems, it is vital to estimate the influence of deposits on heat transfer, to minimize deposit formation, and to optimize deposit removal; a good understanding of the mechanisms of fireside deposit formation is therefore beneficial. Numerical modeling is a powerful tool for investigating heat transfer in furnaces, and it can provide valuable information for understanding the mechanisms of deposit formation. In addition, a sub-model of deposit formation is generally an essential part of a comprehensive furnace model. This work investigates two specific processes of fireside deposit formation in two industrial furnaces. The first process is the slagging wall found in furnaces with molten deposits running on the wall. A slagging wall model is developed to take into account the two-layer structure of the deposits. With the slagging wall model, the thickness and the surface temperature of the molten deposit layer can be calculated. The slagging wall model is used to predict the surface temperature of and the heat transfer to a specific section of a superheater tube panel, with the boundary condition obtained from a Kraft recovery furnace model. The slagging wall model is also incorporated into the computational fluid dynamics (CFD)-based Kraft recovery furnace model and applied to the lower furnace walls. The implementation of the slagging wall model includes a grid simplification scheme. The wall surface temperature calculated with the slagging wall model is used as the heat transfer boundary condition. A simulation of a Kraft recovery furnace is performed and compared with two other cases and with measurements.
In the two other cases, a uniform wall surface temperature and a wall surface temperature calculated with a char bed burning model are used as the heat transfer boundary conditions. In this particular furnace, the wall surface temperatures from the three cases are similar and fall within the range of the measurements. Nevertheless, the wall surface temperature profiles with the slagging wall model and the char bed burning model differ because the deposits are represented differently in the two models. In addition, the slagging wall model proves to be computationally efficient. The second process is deposit formation due to thermophoresis of fine particles toward the heat transfer surface. This process is considered in the simulation of a heat recovery boiler of the flash smelting process. In order to determine whether the small dust particles stay on the wall, a criterion based on an analysis of the forces acting on the particle is applied. A time-dependent simulation of deposit formation in the heat recovery boiler is carried out, and the influence of the deposits on heat transfer is investigated. The locations prone to deposit formation are also identified in the heat recovery boiler. Modeling of the two processes in the two industrial furnaces enhances the overall understanding of the processes. The sub-models developed in this work can be applied to other similar deposit formation processes with carefully defined boundary conditions.
Abstract:
Lignocellulosic biomasses (e.g., wood and straws) are a potential renewable source for the production of a wide variety of chemicals that could replace those currently produced by the petrochemical industry. This would lead to lower greenhouse gas emissions and waste amounts, and to economic savings. There are many possible pathways for manufacturing chemicals from lignocellulosic biomasses. One option is to hydrolyze the cellulose and hemicelluloses of these biomasses into monosaccharides using concentrated sulfuric acid as a catalyst. This process is an efficient method for producing monosaccharides, which are valuable platform chemicals; other valuable products are also formed in the hydrolysis. Unfortunately, concentrated acid hydrolysis has been deemed unfeasible, mainly because of the high chemical consumption resulting from the need to remove sulfuric acid from the obtained hydrolysates prior to the downstream processing of the monosaccharides. Traditionally, this has been done by neutralization with lime, which results in high chemical consumption. In addition, the by-products formed in the hydrolysis are not removed and may thus hinder monosaccharide processing. To improve the feasibility of concentrated acid hydrolysis, the chemical consumption should be decreased by recycling the sulfuric acid without neutralization. Furthermore, the monosaccharides and the other products formed in the hydrolysis should be recovered selectively for efficient downstream processing. Selective recovery of the hydrolysis by-products would bring additional economic benefits to the process because of their high value. In this work, the use of chromatographic fractionation for the recycling of sulfuric acid and the selective recovery of the main components of the hydrolysates formed in concentrated acid hydrolysis was investigated.
Chromatographic fractionation based on electrolyte exclusion, with gel-type strong acid cation exchange resins in acid (H+) form as the stationary phase, was studied. A systematic experimental and model-based study of the separation task at hand was conducted. The phenomena affecting the separation were determined and their effects elucidated. Mathematical models that accurately take these phenomena into account were derived and used in the simulation of the fractionation process. The main components of the concentrated acid hydrolysates (sulfuric acid, monosaccharides, and acetic acid) were included in this model. The performance of the fractionation process was investigated experimentally and by simulations. The use of different process options was also studied. Sulfuric acid was found to have a significant co-operative effect on the sorption of the other components. This brings about interesting and beneficial effects in the column operations; it is especially beneficial for the separation of sulfuric acid and the monosaccharides. Two different approaches to modelling the sorption equilibria were investigated in this work: a simple empirical approach and a thermodynamically consistent approach (the Adsorbed Solution theory). Accurate modelling of the phenomena observed in this work was found to be possible using the simple empirical models, whereas the use of the Adsorbed Solution theory is complicated by the nature of the theory and the complexity of the studied system. In addition to the sorption models, a dynamic column model that takes the volume changes of the gel-type resins into account as a changing resin bed porosity was derived. Using chromatographic fractionation, all the main components of the hydrolysates can be recovered selectively, and the sulfuric acid consumption of the hydrolysis process can be lowered considerably.
Investigation of the performance of the chromatographic fractionation showed that the highest separation efficiency in this separation task is obtained with a gel-type resin with a high crosslinking degree (8 wt. %), especially when the hydrolysates contain high amounts of acetic acid. In addition, the concentrated acid hydrolysis should be carried out with as low a sulfuric acid concentration as possible to obtain good separation performance. The column loading and flow rate also have large effects on the performance. In this work, it was demonstrated that when the fractions obtained in the chromatographic fractionation are recycled to preceding unit operations, these unit operations should be included in the performance evaluation of the fractionation. When this was done, the separation performance and the feasibility of the concentrated acid hydrolysis process were found to improve considerably. The use of multi-column chromatographic fractionation processes, the Japan Organo process and the Multi-Column Recycling Chromatography process, was also investigated. In the studied case, neither of these processes could compete with the single-column batch process in productivity. However, owing to its internal recycling steps, Multi-Column Recycling Chromatography was found to be superior to the batch process when the product yield and the eluent consumption were taken into account.
Abstract:
Because of the increased availability of different kinds of business intelligence technologies and tools, it is easy to fall into the illusion that new technologies will automatically solve a company's data management and reporting problems. Management, however, is not only about managing technology but also about managing processes and people. This thesis focuses on traditional data management and on the performance management of production processes, both of which can be seen as prerequisites for long-lasting development; some operative BI solutions are also considered as part of the ideal state of the reporting system. The objectives of this study are to examine what requirements effective performance management of production processes places on a company's data management and reporting, and how these requirements affect their efficiency. The research is carried out as a theoretical literature review of the subjects and as a qualitative case study of a reporting development project at Finnsugar Ltd. The case study is examined through theoretical frameworks and through active participant observation. To obtain a better picture of the ideal state of the reporting system, simple investment calculations are performed. According to the results of the research, the requirements for effective performance management of production processes are automated data collection, integration of operative databases, use of efficient data management technologies such as ETL (Extract, Transform, Load) processes, a data warehouse (DW) and Online Analytical Processing (OLAP), and efficient management of processes, data, and roles.
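The ETL processes named among the requirements above follow a simple extract-transform-load pattern. A minimal sketch in Python, using the standard library only; the file name, column names, and table schema are illustrative and not taken from the case study:

```python
import csv
import sqlite3

def etl(csv_path, db_path):
    """Minimal ETL: extract production measurements from a CSV export,
    transform them into hourly aggregates, and load them into a
    reporting table (a stand-in for a data warehouse fact table)."""
    # Extract: read the raw operative data
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))

    # Transform: aggregate production output per hour
    totals = {}
    for r in rows:
        hour = r["timestamp"][:13]  # e.g. "2024-01-01T08"
        totals[hour] = totals.get(hour, 0.0) + float(r["output_tonnes"])

    # Load: write the aggregates into the reporting database
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS hourly_output (hour TEXT PRIMARY KEY, tonnes REAL)"
    )
    con.executemany(
        "INSERT OR REPLACE INTO hourly_output VALUES (?, ?)", totals.items()
    )
    con.commit()
    con.close()
```

In production settings this role is typically filled by dedicated ETL tooling feeding a data warehouse, with OLAP cubes built on top; the sketch only illustrates the three stages the abstract refers to.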
Improving the competitiveness of the electrolytic zinc process by a chemical reaction engineering approach
Abstract:
This doctoral thesis describes the development work performed on the leach and purification sections of the electrolytic zinc plant in Kokkola to increase the efficiency of these two stages, and thus the competitiveness of the plant. Since metallic zinc is a typical bulk product, improving the competitiveness of a plant is mostly a matter of decreasing unit costs. The problems in the leaching were the low recovery of valuable metals from raw materials, and the fact that the available technology offered only complicated and expensive processes to overcome this problem. In the purification, the main problem was the consumption of zinc powder, up to four to six times the stoichiometric demand. This reduced the capacity of the plant, as this zinc is re-circulated through the electrolysis, which is the absolute bottleneck in a zinc plant. Low selectivity gave low-grade and low-value precipitates for further processing to metallic copper, cadmium, cobalt, and nickel. Knowledge of the underlying chemistry was poor, and process interruptions causing losses of zinc production were frequent. The studies on leaching comprised the kinetics of ferrite leaching and jarosite precipitation, as well as the stability of jarosite in acidic plant solutions. A breakthrough came with the finding that jarosite could precipitate under conditions where ferrite would leach satisfactorily. Based on this discovery, a one-step process for the treatment of ferrite was developed. In the plant, the new process almost doubled the recovery of zinc from ferrite in the same equipment in which the two-step jarosite process was operated at that time. In a later expansion of the plant, the investment savings were substantial compared with other available technologies. In the solution purification, the key finding was that Co, Ni, and Cu form specific arsenides in the “hot arsenic zinc dust” step. This was utilized in the development of a three-step purification stage based on fluidized bed technology in all three steps, i.e.
removal of Cu, Co and Cd. Both the precipitation rates and the selectivity increased, which strongly decreased the zinc powder consumption through a substantially suppressed hydrogen gas evolution. Better selectivity improved the value of the precipitates: cadmium, which caused environmental problems in the copper smelter, was reduced from the normally reported 1–3 % down to 0.05 %, and a cobalt cake with 15 % Co was easily produced in laboratory experiments in the cobalt removal. The zinc powder consumption in the plant for a solution containing Cu, Co, Ni and Cd (1000, 25, 30 and 350 mg/l, respectively) was around 1.8 g/l, i.e. only 1.4 times the stoichiometric demand, or about a 60 % saving in powder consumption. Two processes for direct leaching of the concentrate under atmospheric conditions were developed, one of which was implemented in the Kokkola zinc plant. Compared with the existing pressure leach technology, the savings were obtained mostly in investment. The scientific basis for the most important processes and process improvements is given in the doctoral thesis, including mathematical modeling and thermodynamic evaluation of the experimental results and the hypotheses developed. Five of the processes developed in this research and development program were implemented in the plant and are still in operation. Even though these processes were developed with the focus on the plant in Kokkola, they can also be implemented at low cost in most zinc plants globally, and thus have great significance for the development of the electrolytic zinc process in general.
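The stoichiometric figure quoted above can be checked directly: cementation replaces each divalent impurity ion (Cu²⁺, Co²⁺, Ni²⁺, Cd²⁺) with one zinc atom, so the stoichiometric zinc demand follows from the molar amounts of the impurities. A quick check in Python, using standard atomic masses and assuming the simple 1:1 displacement stoichiometry:

```python
# Molar masses [g/mol] and impurity concentrations [mg/L] from the abstract
M = {"Cu": 63.55, "Co": 58.93, "Ni": 58.69, "Cd": 112.41}
c = {"Cu": 1000.0, "Co": 25.0, "Ni": 30.0, "Cd": 350.0}
M_Zn = 65.38

# 1:1 cementation, e.g. Zn + Cu2+ -> Zn2+ + Cu,
# so mol of Zn required = total mol of impurity ions
mol_impurities = sum(c[m] / M[m] for m in c)   # mmol/L
zn_stoich = mol_impurities * M_Zn / 1000.0     # g/L of zinc powder

ratio = 1.8 / zn_stoich                        # actual / stoichiometric
print(f"stoichiometric Zn demand: {zn_stoich:.2f} g/L, ratio: {ratio:.1f}")
# -> stoichiometric Zn demand: 1.29 g/L, ratio: 1.4
```

The 1.8 g/l actually consumed comes out at roughly 1.4 times the stoichiometric 1.29 g/l, matching the figure in the abstract.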
Abstract:
This study combines several projects related to flows in vessels with complex shapes representing different chemical apparatuses. Three major cases were studied. The first is a two-phase plate reactor with a complex structure of intersecting microchannels engraved on one plate, which is covered by another, plain plate. The second case is a tubular microreactor, consisting of two subcases. The first subcase is a multi-channel two-component commercial micromixer (slit interdigital) used to mix two liquid reagents before they enter the reactor. The second subcase is a micro-tube, in which the distribution of the heat generated by the reaction was studied. The third case is a conventionally packed column. However, flow, reactions, and mass transfer were not modeled in this case. Instead, the research focused on how to describe mathematically the realistic geometry of the column packing, which is rather random and cannot be created using conventional computer-aided design or engineering (CAD/CAE) methods. Several modeling approaches were used to describe the performance of the processes in the considered vessels. Computational fluid dynamics (CFD) was used to describe the details of the flow in the plate microreactor and the micromixer. A space-averaged mass transfer model based on Fick’s law was used to describe the exchange of species through the gas–liquid interface in the microreactor. This model utilized data, namely the values of the interfacial area, obtained with the corresponding CFD model. A common heat transfer model was used to find the heat distribution in the micro-tube. To generate the column packing, an additional multibody dynamics model was implemented. An auxiliary simulation was carried out to determine the position and orientation of every packing element in the column. These data were then exported into a CAD system to generate the desired geometry, which could further be used for CFD simulations.
The results demonstrated that the CFD model of the microreactor predicted the flow pattern sufficiently well and agreed with experiments. The mass transfer model made it possible to estimate the mass transfer coefficient. Modeling of the second case showed that the flow in the micromixer and the heat transfer in the tube could be excluded from the larger model that describes the chemical kinetics in the reactor. The results of the third case demonstrated that the auxiliary simulation could successfully generate complex random packings, not only for the column but also for other similar cases.
Abstract:
The purpose of this study is to analyse lateral rigidity in the framework of pre-internationalisation in order to find out how it is reflected in managerial decision making. The interest of the study lies at the intersection of the meaningful but relatively stagnant concept of lateral rigidity and the pre-internationalisation phase of companies, which has received only a limited amount of research attention. The theoretical basis for the study is drawn from the managerial decision making and internationalisation literature. The study first aims to define the concept of lateral rigidity and then to find out how it influences managers’ pre-internationalisation decision making. The study is theoretical in nature and is based solely on an examination of the literature. The concept analysis method is used to determine the attributes of lateral rigidity for the purpose of recognising the concept in the pre-internationalisation framework. The attributes found to comprise lateral rigidity are culture, know-how, uncertainty, and attitude. These attributes are, more specifically, found to consist of environmental, personal, and operational matters. Through the analysis of the pre-internationalisation literature it is discovered that all the attributes appear there and exert a variety of influences on pre-internationalisation decision making that can be characterised as negative. The study finds that culture influences managers’ decision making via subjective reasoning and behaviour that stem from a domestic inclination, and via unfamiliarity with foreign markets. Contrary to assumptions, home cultural factors, e.g. values and customs, do not appear to have an influence. Know-how is found to influence decision making via managers’ previous experiences, subjective abiding perceptions, and the use of previous operation patterns. Uncertainty, in turn, influences managers’ risk perception, their avoidance of the unfamiliar, and the scope of potential international operations.
Attitude is found to have a robust influence on managerial decision making via the use of familiar processes and decision regimes, a subjective preference for convention, and the plausible results of operations. Thus, the effects of lateral rigidity show themselves as an encumbrance on managers in the pre-internationalisation phase; even when internationalisation does take place, the related decisions and actions are highly constrained. In particular, the subjectivity of managers is seen to play a meaningful role in the decision making process.
Abstract:
Decreasing fossil fuel resources combined with increasing world energy demand have raised interest in renewable energy sources. The alternatives can be solar, wind, and geothermal energy, but only biomass can substitute for the carbon-based feedstock suitable for the production of transportation fuels and chemicals. However, the high oxygen content of biomass creates challenges for the future chemical industry, forcing the development of new processes that allow complete or selective oxygen removal without any significant carbon loss. Therefore, understanding and optimizing biomass deoxygenation processes is crucial for the future bio-based chemical industry. In this work, the deoxygenation of fatty acids and their derivatives was studied over Pd/C and TiO2-supported noble metal catalysts (Pt, Pt–Re, Re and Ru) to obtain future fuel components. The 5 % Pd/C catalyst was investigated in semibatch and fixed bed reactors at 300 °C and 1.7–2 MPa under inert and hydrogen-containing atmospheres. Based on extensive kinetic studies, plausible reaction mechanisms and pathways were proposed. The influence of unsaturation on the deoxygenation of model compounds and an industrial feedstock, tall oil fatty acids, over a Pd/C catalyst was demonstrated. Optimization of the reaction conditions suppressed the formation of by-products; hence, high yields and selectivities towards linear hydrocarbons, as well as catalyst stability, were achieved. Experiments in a fixed bed reactor filled with a 2 % Pd/C catalyst were performed with stearic acid as a model compound under different hydrogen-containing gas atmospheres to understand the catalyst stability under various conditions. Moreover, prolonged experiments were carried out with concentrated model compounds to reveal the catalyst deactivation.
New materials were proposed for the selective deoxygenation process at lower temperatures (~200 °C), with a selectivity tunable towards hydrodeoxygenation over a 4 % Pt/TiO2 catalyst or towards decarboxylation/decarbonylation over a 4 % Ru/TiO2 catalyst. A new method for the selective hydrogenation of fatty acids to fatty alcohols was demonstrated with a 4 % Re/TiO2 catalyst. A reaction pathway and mechanism for TiO2-supported metal catalysts were proposed, and optimization of the process conditions led to an increase in the formation of the desired products.
Abstract:
The future of privacy in the information age is a highly debated topic. In particular, new and emerging technologies such as ICTs and cognitive technologies are seen as threats to privacy. This thesis explores images of the future of privacy among non-experts within the time frame from the present until the year 2050. The aims of the study are to conceptualise privacy as a social and dynamic phenomenon, to understand how privacy is conceptualised among citizens and to analyse ideal-typical images of the future of privacy using the causal layered analysis method. The theoretical background of the thesis combines critical futures studies and critical realism, and the empirical material is drawn from three focus group sessions held in spring 2012 as part of the PRACTIS project. From a critical realist perspective, privacy is conceptualised as a social institution which creates and maintains boundaries between normative circles and preserves the social freedom of individuals. Privacy changes when actors with particular interests engage in technology-enabled practices which challenge current privacy norms. The thesis adopts a position of technological realism as opposed to determinism or neutralism. In the empirical part, the focus group participants are divided into four clusters based on differences in privacy conceptions and perceived threats and solutions. The clusters are fundamentalists, pragmatists, individualists and collectivists. Correspondingly, four ideal-typical images of the future are composed: ‘drift to low privacy’, ‘continuity and benign evolution’, ‘privatised privacy and an uncertain future’, and ‘responsible future or moral decline’. The images are analysed using the four layers of causal layered analysis: litany, system, worldview and myth. Each image has its strengths and weaknesses. The individualistic images tend to be fatalistic in character while the collectivistic images are somewhat utopian. 
In addition, the images have two common weaknesses: lack of recognition of ongoing developments and simplistic conceptions of privacy based on a dichotomy between the individual and society. The thesis argues for a dialectical understanding of futures as present images of the future and as outcomes of real processes and mechanisms. The first steps in promoting desirable futures are the awareness of privacy as a social institution, the awareness of current images of the future, including their assumptions and weaknesses, and an attitude of responsibility where futures are seen as the consequences of present choices.
Abstract:
In photosynthesis, light energy is converted to chemical energy, which is consumed for carbon assimilation in the Calvin-Benson-Bassham (CBB) cycle. Intensive research has significantly advanced the understanding of how photosynthesis can survive under ever-changing light conditions. However, precise details concerning the dynamic regulation of photosynthetic processes have remained elusive. The aim of my thesis was to specify some molecular mechanisms and interactions behind the regulation of photosynthetic reactions under environmental fluctuations. A genetic approach was employed, whereby Arabidopsis thaliana mutants deficient in specific photosynthetic protein components were subjected to adverse light conditions and assessed for functional deficiencies in the photosynthetic machinery. I examined three interconnected mechanisms: (i) auxiliary functions of the PsbO1 and PsbO2 isoforms in the oxygen evolving complex of photosystem II (PSII), (ii) the regulatory function of PGR5 in photosynthetic electron transfer and (iii) the involvement of the Calcium Sensing Receptor CaS in photosynthetic performance. Analysis of photosynthetic properties in psbo1 and psbo2 mutants demonstrated that PSII is sensitive to light-induced damage when PsbO2, rather than PsbO1, is present in the oxygen evolving complex. PsbO1 stabilizes PSII more efficiently than PsbO2 under light stress. However, PsbO2 shows a higher GTPase activity than PsbO1, and plants may partially compensate for the lack of PsbO1 by increasing the rate of the PSII repair cycle. PGR5 proved vital in the protection of photosystem I (PSI) under fluctuating light conditions. Biophysical characterization of photosynthetic electron transfer reactions revealed that PGR5 regulates linear electron transfer by controlling the proton motive force, which is crucial for the induction of photoprotective non-photochemical quenching and the control of electron flow from PSII to PSI.
I conclude that PGR5 controls linear electron transfer to protect PSI against light-induced oxidative damage. I also found that PGR5 physically interacts with CaS, which is not needed for photoprotection of PSII or PSI in higher plants. Rather, transcript profiling and quantitative proteomic analysis suggested that CaS is functionally connected with the CBB cycle. This conclusion was supported by lowered amounts of specific calcium-regulated CBB enzymes in cas mutant chloroplasts and by slow electron flow to PSI electron acceptors when leaves were reilluminated after an extended dark period. I propose that CaS is required for calcium regulation of the CBB cycle during periods of darkness. Moreover, CaS may also have a regulatory role in the activation of the chloroplast ATPase. Through their diverse interactions, components of the photosynthetic machinery ensure optimization of light-driven electron transport and efficient biomass production, while minimizing the harm caused by light-induced photodamage.
Resumo:
The main generator source of a longitudinal muscle contraction was identified as an M (mechanical-stimulus-sensitive) circuit, composed of a presynaptic M-1 neuron and a postsynaptic M-2 neuron, in the ventral nerve cord of the earthworm Amynthas hawayanus by simultaneous intracellular response recording and Lucifer Yellow-CH injection with two microelectrodes. Five-peaked responses were evoked in both neurons by a mechanical, but not by an electrical, stimulus to the mechanoreceptor in the shaft of a seta at the opposite side of an epidermis-muscle-nerve-cord preparation. This response accounted for 84% of the amplitude, 73% of the rising rate and 81% of the duration of a longitudinal muscle contraction recorded by a mechano-electrical transducer after the other possible generator sources were eliminated by partitioning the epidermis-muscle piece of this preparation. The pre- and postsynaptic relationship between these two neurons was determined by alternately stimulating and recording with two microelectrodes. Images of the Lucifer Yellow-CH-filled M-1 and M-2 neurons showed that both are composed of bundles of longitudinal processes situated on the side of the nerve cord opposite to stimulation. The M-1 neuron has an afferent process (A1) in the first nerve at the stimulated side of this preparation, and the M-2 neuron has two efferent processes (E1 and E3) in the first and third nerves at the recording side, where their effector muscle cell was identified by a third microelectrode.
Resumo:
In a market where companies of similar size and resources compete, it is challenging to gain any advantage over the others. To stay afloat, a company needs the capability to perform with fewer resources and yet provide better service. Hence the development of efficient processes that can cut costs and improve performance is crucial. As a business expands, its processes become complicated, and large amounts of data need to be managed and made available on request. Companies use different tools to store and manage data, which facilitates better production and transactions. In the modern business world, the most widely used tool for that purpose is the ERP (Enterprise Resource Planning) system. The focus of this research is to study how competitive advantage can be achieved by implementing a proprietary ERP system in a company: a system created in-house and tailor-made to match and align with business needs and processes. The market is full of ERP software, but choosing the right product is a major challenge. Identifying the key features of processes and data management that need improvement, choosing the right ERP, implementing it, and following up is a long and expensive journey for companies. Some companies prefer to invest in a ready-made package bought from a vendor and adjust it to their own business needs, while others focus on creating their own system with in-house IT capabilities. This research uses a case company, and the author seeks to identify and analyze why the organization in question decided to pursue the development of a proprietary ERP system, how it was implemented and whether it has been successful. The main conclusion and recommendation of this research is that companies should know their core capabilities and constraints before choosing and implementing an ERP system. Knowledge of the factors that affect the outcome of a system change is important for making the right decisions at the strategic level and implementing them at the operational level.
The project in the case company lasted longer than anticipated. However, it has been reported that projects based on ready-made products bought from vendors are often delayed and completed over budget as well. Overall, the case company's implementation of a proprietary ERP has been successful, both in terms of business performance figures and the system's usability as experienced by employees. For future research, a study that statistically compares the ROI of the two approaches, buying a ready-made product versus creating one's own ERP, would be beneficial.