965 results for "Freezing and processing"


Relevance: 90.00%

Abstract:

Bread dough, and particularly wheat dough, is probably the most dynamic and complicated rheological system because of its viscoelastic behaviour, and its characteristics are very important since they strongly affect the textural and sensorial properties of the final products. The study of dough rheology has been a challenging task for many researchers, since it can provide a wealth of information about dough formulation, structure and processing; this explains why dough rheology has been a matter of investigation for several decades. In this research, the rheological assessment of doughs and breads was performed using empirical and fundamental methods at both small and large deformation, in order to characterize different types of dough and final products such as bread. To study the structural aspects of food products, image analysis techniques were used to integrate the information coming from empirical and fundamental rheological measurements. Evaluation of dough properties was carried out by texture profile analysis (TPA), dough stickiness (Chen and Hoseney cell) and uniaxial extensibility determination (Kieffer test) using a texture analyser; small deformation rheological measurements were performed on a controlled stress-strain rheometer; the structure of the different doughs was observed by image analysis; and bread characteristics were studied by texture profile analysis (TPA) and image analysis. The objective of this research was to understand whether the different rheological measurements were able to characterize and differentiate the samples analysed, in order to investigate the effect of different formulations and processing conditions on dough and final product from a structural point of view.
To this aim, the following materials were prepared and analysed: frozen dough made without yeast; frozen dough and bread made from frozen dough; doughs obtained using different fermentation methods; doughs made with Kamut® flour; dough and bread made with the addition of ginger powder; and final products coming from different bakeries. The influence of sub-zero storage time on the viscoelastic performance of non-fermented and fermented dough and on the final product (bread) was evaluated using small and large deformation methods. In general, the longer the sub-zero storage time, the lower the positive viscoelastic attributes. The effect of fermentation time and of different types of fermentation (straight-dough method, sponge-and-dough procedure and poolish method) on the rheological properties of doughs was investigated using empirical and fundamental analyses, and image analysis was used to integrate this information through the evaluation of the dough's structure. The results of the fundamental rheological tests showed that the incorporation of sourdough (poolish method) provoked changes different from those seen with the other types of fermentation. The beneficial action of some ingredients (extra-virgin olive oil and a liposomic lecithin emulsifier) in improving the rheological characteristics of Kamut® dough was confirmed also when the dough was subjected to low temperatures (24 and 48 hours at 4°C). Small deformation oscillatory measurements and large deformation mechanical tests provided useful information on the rheological properties of samples made with different amounts of ginger powder, showing that the sample with the highest amount of ginger powder (6%) had worse rheological characteristics than the other samples. Moisture content, specific volume, texture and crumb grain characteristics are the major quality attributes of bread products.
The different samples analysed, "Coppia Ferrarese", "Pane Comune Romagnolo" and "Filone Terra di San Marino", showed a decrease in crumb moisture and an increase in hardness over the storage time. Parameters such as cohesiveness and springiness, evaluated by TPA, which are indicators of the quality of fresh bread, decreased during storage. Using empirical rheological tests, we found several differences among the samples, due to the different ingredients used in the formulations and the different processes adopted to prepare them; but since these products are handmade, the differences can be regarded as adding value. In conclusion, small deformation (in fundamental units) and large deformation methods played a significant role in monitoring the influence of different ingredients, processing and storage conditions on dough viscoelastic performance and on the final product. Finally, knowledge of formulation, processing and storage conditions, together with the evaluation of structural and rheological characteristics, is fundamental for the study of complex matrices like bakery products, where numerous variables can influence final quality (e.g. raw materials, bread-making procedure, time and temperature of fermentation and baking).
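The TPA parameters mentioned above (hardness, cohesiveness, springiness) are derived from a double-compression ("two-bite") force curve. Purely as an illustration of the standard definitions, and not of the thesis's own measurements, the following sketch computes them from a synthetic curve (the arrays and split point are invented for demonstration):

```python
import numpy as np

def tpa_parameters(t, f, split):
    """Basic TPA parameters from a double-compression force curve.
    t, f : time and force arrays; split : index separating the two bites."""
    t1, f1 = t[:split], f[:split]
    t2, f2 = t[split:], f[split:]
    hardness = f1.max()                # peak force of the first compression
    area1 = np.trapz(f1, t1)           # work done during the first bite
    area2 = np.trapz(f2, t2)           # work done during the second bite
    cohesiveness = area2 / area1
    # springiness: duration of the second loading phase relative to the first
    springiness = (t2[f2.argmax()] - t2[0]) / (t1[f1.argmax()] - t1[0])
    return hardness, cohesiveness, springiness

# synthetic curve: two half-sine "bites", the second 30% weaker
t = np.linspace(0.0, 2.0, 200)
bite = np.sin(np.pi * np.linspace(0.0, 1.0, 100))
f = np.concatenate([bite, 0.7 * bite])
h, c, s = tpa_parameters(t, f, 100)
```

On this synthetic curve the cohesiveness comes out as the imposed 0.7 ratio and the springiness as 1.0, since the two bites share the same loading duration.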

Relevance: 90.00%

Abstract:

Mining and processing of metal ores are important causes of soil and groundwater contamination in many regions worldwide. Metal contamination poses a serious risk to the environment and to human health, and the assessment of metal contamination in soil is therefore an important task. A common approach to assessing the environmental risk that inorganic contamination poses to soil and groundwater is the use of batch or column leaching tests; in this regard, the suitability of leaching tests is a controversial issue. In the first part of this work, the applicability and comparability of common leaching tests within the scope of groundwater risk assessment of inorganic contamination are reviewed and critically discussed. Soil water sampling methods (the suction cup method and centrifugation) are addressed as an alternative to leaching tests. Reasons for the limited comparability of leaching test results are identified, and recommendations are given for the expedient application of leaching tests in groundwater risk assessment. Leaching tests are usually carried out in open contact with the atmosphere, disregarding possible changes in redox conditions. This can affect the original metal speciation and distribution, particularly when anoxic samples are investigated. The influence of sample storage on the leaching test results of sulfide-bearing anoxic material from a former flotation dump is investigated in a long-term study. Since the oxidation of the sulfide-bearing samples leads to a significant overestimation of metal release, a feasible modification of common leaching tests for anoxic material is proposed, in which oxidation is prevented efficiently.
A comparison of leaching test results with soil water analyses showed that the modified saturation soil extraction (SSE) is the only one of the tested leaching procedures that can be recommended for the assessment of current soil water concentrations at anoxic sites when direct investigation of the soil water is impossible for technical reasons. The vertical distribution and speciation of Zn and Pb in the flotation residues, as well as metal concentrations in soil water and plants, were investigated to evaluate the environmental risk arising from this site due to the release of metals. The variations in pH and inorganic C content show an acidification of the topsoil, with pH values down to 5.5 in the soil and a soil water pH of 6 at 1 m depth, due to the oxidation of sulfides and the depletion of carbonates. In the anoxic subsoil, pH conditions are still neutral and soil water collected with suction cups is in equilibrium with carbonate minerals. Results from extended X-ray absorption fine-structure (EXAFS) spectroscopy confirm that Zn is mainly bound in sphalerite in the subsoil and that weathering reactions lead to a redistribution of Zn in the topsoil. A loss of 35% Zn and S from the topsoil compared to the parent material (10 g/kg Zn) was observed. According to sequential chemical extractions (SCE), 13% of total Zn in the topsoil can be regarded as mobile or easily mobilizable. Zn concentrations of 10 mg/L were found in the soil water where the pH is acidic. Electron supply and the buffer capacity of the soil were identified as the main factors controlling Zn mobility and release to the groundwater. Variable Pb concentrations up to 30 µg/L were observed in the soil water. In contrast to Zn, Pb is enriched in the mobile fraction of the oxidized topsoil by a factor of 2 compared to the subsoil (2 g/kg Pb). 80% of the cation exchange capacity in the topsoil is occupied by Pb; therefore, plant uptake and bioavailability are of major concern.
If further acidification of the site is not prevented, a significant release of Zn, S and Pb to the groundwater has to be expected. Results from this study show that the assessment of metal release, especially from sulfide-bearing anoxic material, requires an extensive comprehension of leaching mechanisms on the one hand and of the weathering processes that influence the speciation and mobility of metals on the other. Processes that may change redox and pH conditions in the future have to be addressed to enable sound decisions for soil and groundwater protection and remediation.

Relevance: 90.00%

Abstract:

Stone and its various types of processing have been very important in the vernacular architecture of the cross-border Carso, where they represent an important, centuries-old legacy with largely uniform typological characteristics. Stone was the main constituent of the local architecture, setting and shaping the human environment and incorporating the history of places through a specific symbolic and constructive language. The primary aim of this research is the recognition of the constructive rules and the values embedded in Carso rural architecture through the use and processing of stone. Central to this investigation is the typological reading, aimed at analysing the constructive language expressed by this legacy through the analysis of the relationship between type, technique and material.

Relevance: 90.00%

Abstract:

The wide diffusion of cheap, small, and portable sensors, integrated in an unprecedentedly large variety of devices, and the availability of almost ubiquitous Internet connectivity make it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly and timely analyzed, can be exploited to build new intelligent and pervasive services with the potential to improve people's quality of life in a variety of cross-cutting domains such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality of service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and by identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and present Quasit, its prototype implementation, offering a scalable and extensible platform that researchers can use to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing through a large experimental study on the prototype of our novel LAAR dynamic replication technique.
Our modeling, prototyping, and experimental work demonstrates that, by providing the data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
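Quasit's actual interfaces are not shown in this abstract; purely to illustrate the idea of per-flow QoS differentiation, the hypothetical sketch below lets a flow's declared QoS class decide how many active replicas of a processing operator get deployed (all names and the structure are invented for this example):

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class StreamFlow:
    name: str
    qos: str            # "guaranteed" or "best-effort" (invented labels)
    replicas: int = 1   # requested active replicas for guaranteed flows

@dataclass
class Operator:
    fn: Callable[[float], float]
    state: List[float] = field(default_factory=list)

def deploy(flow: StreamFlow, fn: Callable[[float], float]) -> List[Operator]:
    # guaranteed flows get active replication; best-effort flows run one copy
    n = flow.replicas if flow.qos == "guaranteed" else 1
    return [Operator(fn) for _ in range(n)]

def process(ops: List[Operator], value: float) -> float:
    # every replica handles the tuple, so losing one replica does not
    # lose data for a guaranteed flow
    for op in ops:
        op.state.append(op.fn(value))
    return ops[0].state[-1]

critical = deploy(StreamFlow("heart-rate", "guaranteed", replicas=3), lambda x: 2 * x)
casual = deploy(StreamFlow("room-temp", "best-effort", replicas=3), lambda x: 2 * x)
out = process(critical, 21.0)   # processed by all three replicas
```

The point of the trade-off discussed above is visible even in this toy: the guaranteed flow pays threefold processing cost for its fault tolerance, while the best-effort flow runs cheaply on a single copy.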

Relevance: 90.00%

Abstract:

Turbulent energy dissipation is presented in the theoretical context of the famous Kolmogorov theory, formulated in 1941. Some remarks and comments about this theory help the reader understand the approach to turbulence study and give some basic insights into the problem. A clear distinction is made among dissipation, pseudo-dissipation and dissipation surrogates. Dissipation governs how the turbulent kinetic energy of a flow is transformed into internal energy, which makes it a fundamental quantity to investigate in order to enhance our understanding of turbulence. The dissertation focuses on the experimental investigation of the pseudo-dissipation. This quantity is difficult to measure, as it requires knowledge of all the derivatives of the three-dimensional velocity field; when using a hot-wire technique to measure dissipation, we must deal with surrogates of dissipation, since not all the terms can be measured. The analysis of surrogates is the main topic of this work. In particular, two flows are considered: the turbulent channel and the turbulent jet. These canonical flows, briefly introduced, are often used as benchmarks for CFD solvers and experimental equipment due to their simple structure, and observations made in them are often transferable to more complicated and interesting cases with many industrial applications. The main tools of investigation are DNS simulations and experimental measurements. DNS data are used as a benchmark for the experimental results, since all the components of dissipation are known within a numerical simulation. The results of some DNS were already available at the start of this thesis, so the main work consisted in reading and processing the data. Experiments were carried out by means of hot-wire anemometry, described in detail at both a theoretical and a practical level.
The study of DNS data of a turbulent channel at Re=298 reveals that the traditional surrogate can be improved. Consequently, two new surrogates are proposed and analysed, based on terms of the velocity gradient that are easy to measure experimentally. We manage to find a formulation that improves the accuracy of surrogates by an order of magnitude. For the jet flow, results from a DNS of a temporal jet at Re=1600 and results from our experimental facility CAT at Re=70000 are compared to validate the experiment. It is found that the ratio between components of the dissipation differs between the DNS and the experimental data. Possible errors in both sets of data are discussed, and some ways to improve the data are proposed.
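A commonly used form of such a surrogate is the isotropic estimate built from a single measurable velocity derivative, ε ≈ 15ν⟨(∂u/∂x)²⟩, with the streamwise derivative obtained from a hot-wire time series via Taylor's frozen-turbulence hypothesis. The sketch below illustrates this on a synthetic signal; the viscosity, mean velocity and sampling rate are illustrative values, not those of the experiments described here:

```python
import numpy as np

nu = 1.5e-5    # kinematic viscosity of air, m^2/s (illustrative)
U = 10.0       # mean advection velocity for Taylor's hypothesis, m/s
fs = 50e3      # sampling frequency of the hot-wire signal, Hz

# synthetic fluctuating velocity signal standing in for a hot-wire record
rng = np.random.default_rng(0)
u = np.cumsum(rng.normal(0.0, 0.01, 100_000))  # smooth-ish random walk
u -= u.mean()

# Taylor's frozen-turbulence hypothesis: d/dx = -(1/U) d/dt
dudt = np.gradient(u, 1.0 / fs)
dudx = -dudt / U

# isotropic surrogate: eps ~ 15 * nu * <(du/dx)^2>
eps_iso = 15.0 * nu * np.mean(dudx**2)
```

The improved surrogates discussed in the thesis replace or supplement this single term with other, experimentally accessible components of the velocity gradient.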

Relevance: 90.00%

Abstract:

Data sets describing the state of the earth's atmosphere are of great importance in the atmospheric sciences. Over the last decades, the quality and sheer amount of the available data have increased significantly, resulting in a rising demand for new tools capable of handling and analysing these large, multidimensional sets of atmospheric data. The interdisciplinary work presented in this thesis covers the development and application of practical software tools and efficient algorithms from the field of computer science, with the goal of enabling atmospheric scientists to analyse and to gain new insights from these large data sets. For this purpose, our tools combine novel techniques with well-established methods from different areas such as scientific visualization and data segmentation. In this thesis, three practical tools are presented. Two of these tools are software systems (Insight and IWAL) for different types of processing and interactive visualization of data; the third tool is an efficient algorithm for data segmentation implemented as part of Insight. Insight is a toolkit for the interactive, three-dimensional visualization and processing of large sets of atmospheric data, originally developed as a testing environment for the novel segmentation algorithm. It provides a dynamic system for combining data from different sources at runtime, a variety of data processing algorithms, and several visualization techniques. Its modular architecture and flexible scripting support led to additional applications of the software, of which two examples are presented: the usage of Insight as a WMS (web map service) server, and the automatic production of image sequences for the visualization of cyclone simulations.
The core application of Insight is the provision of the novel segmentation algorithm for the efficient detection and tracking of 3D features in large sets of atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. Data segmentation usually leads to a significant reduction in the size of the considered data. This enables a practical visualization of the data, statistical analyses of the features and their events, and the manual or automatic detection of interesting situations for subsequent detailed investigation. The concepts of the novel algorithm, its technical realization, and several extensions for avoiding under- and over-segmentation are discussed. As example applications, this thesis covers the setup and the results of the segmentation of upper-tropospheric jet streams and cyclones as full 3D objects. Finally, IWAL is presented, a web application providing easy interactive access to meteorological data visualizations, primarily aimed at students. As a web application, it avoids the need to retrieve all input data sets and to install and handle complex visualization tools on a local machine. The main challenge in providing customizable visualizations to large numbers of simultaneous users was to find an acceptable trade-off between the available visualization options and the performance of the application. Besides the implementation details, benchmarks and the results of a user survey are presented.
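The thesis's algorithm itself is not reproduced in this abstract; as a minimal illustration of the underlying idea of feature segmentation, the sketch below labels 6-connected components in a thresholded 3D field with a plain BFS flood fill (a real implementation would add the tracking of features over time and the under-/over-segmentation safeguards discussed above):

```python
import numpy as np
from collections import deque

def label_3d(mask):
    """Label 6-connected components in a 3D boolean array via BFS flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for idx in zip(*np.nonzero(mask)):
        if labels[idx]:
            continue                      # already part of a labelled feature
        current += 1
        labels[idx] = current
        q = deque([idx])
        while q:
            z, y, x = q.popleft()
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                        and mask[n] and not labels[n]:
                    labels[n] = current
                    q.append(n)
    return labels, current

# two separated toy "features" in a 4x4x4 field, thresholded at 0.5
field = np.zeros((4, 4, 4))
field[0, 0, 0] = field[0, 0, 1] = 1.0   # feature A
field[3, 3, 3] = 1.0                     # feature B
labels, n_features = label_3d(field > 0.5)
```

Tracking can then be layered on top by matching labels between consecutive time steps, e.g. by voxel overlap, which is also where genesis, lysis, merging and splitting events become detectable.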

Relevance: 90.00%

Abstract:

This PhD thesis is focused on cold atmospheric gas plasma (GP) treatments for microbial inactivation in food applications. GP represents a promising emerging technology and an alternative to the traditional methods for the decontamination of foods. The objectives of this work were to evaluate: the effects of GP treatments on microbial inactivation in model systems and in real foods; and the stress response in L. monocytogenes following exposure to different GP treatments. Regarding the first aspect, inactivation curves were obtained for target pathogens, i.e. Listeria monocytogenes and Escherichia coli, by exposing microbial cells to GP generated with two different DBD devices and processing conditions (exposure time, electrode material). Concerning food applications, the effects of different GP treatments on the inactivation of the natural microflora and of Listeria monocytogenes, Salmonella Enteritidis and Escherichia coli on the surface of Fuji apples, soya sprouts and black pepper were evaluated. In particular, the efficacy of exposure to gas plasma was assessed immediately after treatment and during storage. Possible changes in quality parameters such as colour, pH, Aw, moisture content, oxidation, polyphenol-oxidase activity and antioxidant activity were also investigated. Since the limited knowledge of the cellular targets of GP may limit its application, the possible mechanism of action of GP was studied on two strains of Listeria monocytogenes by evaluating modifications in the fatty acids of the cytoplasmic membrane (through GC/MS analysis) and in metabolites detected by SPME-GC/MS and 1H-NMR analyses. Moreover, changes induced by the different treatments in the expression of selected genes related to the general stress response, virulence or metabolism were detected by reverse transcription qPCR.
In collaboration with the Scripps Research Institute (La Jolla, CA, USA), proteomic profiles following gas plasma exposure were also analysed through Multidimensional Protein Identification Technology (MudPIT) to evaluate possible changes in metabolic processes.
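Inactivation curves such as those mentioned above are often summarized, in the simplest log-linear case, by a decimal reduction time D: the exposure needed for a tenfold reduction in survivors. The sketch below fits D from hypothetical survivor counts; the data are invented for illustration and are not the thesis's own curves:

```python
import numpy as np

# hypothetical survivor counts (CFU/ml) vs. plasma exposure time (s)
t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])
N = np.array([1e7, 2.5e6, 6.0e5, 1.5e5, 4.0e4])

# log-linear survival model: log10(N/N0) = -t / D
logS = np.log10(N / N[0])
slope, intercept = np.polyfit(t, logS, 1)  # least-squares fit
D = -1.0 / slope                           # decimal reduction time, s
```

For these invented counts the fit gives D on the order of 50 s; real GP inactivation curves frequently deviate from log-linearity (shoulders, tails), in which case models such as the Weibull are used instead.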

Relevance: 90.00%

Abstract:

Background: During the production and processing of multi-walled carbon nanotubes (MWCNTs), they may be inhaled and may enter the pulmonary circulation. It is essential that interactions with the body fluids involved, such as the pulmonary surfactant and the blood, are investigated, particularly as these interactions could lead to a coating of the tubes and may affect their chemical and physical characteristics. The aim of this study was to characterize the possible coatings of differently functionalized MWCNTs in a cell-free environment. Results: To simulate the first contact in the lung, the tubes were coated with pulmonary surfactant and the bound lipids were subsequently characterized. The further coating in the blood circulation was simulated by incubating the tubes in blood plasma. MWCNTs were amino (-NH2)- and carboxyl (-COOH)-modified in order to investigate the influence on the bound lipid and protein patterns. It was shown that surfactant lipids bind unspecifically to the differently functionalized MWCNTs, in contrast to the blood plasma proteins, which showed characteristic binding patterns. Patterns of bound surfactant lipids were altered after a subsequent incubation in blood plasma. In addition, it was found that bound plasma protein patterns were altered when MWCNTs were previously coated with pulmonary surfactant. Conclusions: A pulmonary surfactant coating and the functionalization of MWCNTs both have the potential to alter the MWCNTs' blood plasma protein coating and thereby to determine their properties and behaviour in biological systems.

Relevance: 90.00%

Abstract:

Disturbances in melatonin - the neurohormone that signals environmental darkness as part of the circadian circuit of mammals - have been implicated in various psychopathologies in humans. At present, experimental evidence linking prenatal melatonin signaling to adult physiology, behavior, and gene expression is lacking. We hypothesized that administration of melatonin (5 mg/kg) or the melatonin receptor antagonist luzindole (5 mg/kg) to rats in utero would permanently alter the circadian circuit to produce differential growth, adult behavior, and hippocampal gene expression in the male rat. Prenatal treatment was found to increase growth in melatonin-treated animals. In addition, subjects exposed to melatonin prenatally displayed increased rearing in the open field test and an increased right turn preference in the elevated plus maze. Rats administered luzindole prenatally, however, displayed greater freezing and grooming behavior in the open field test and improved learning in the Morris water maze. Analysis of relative adult hippocampal gene expression with RT-PCR revealed increased expression of brain-derived neurotrophic factor (BDNF), with a trend toward increased expression of melatonin 1A (MEL1A) receptors in melatonin-exposed animals, whereas overall prenatal treatment had a significant effect on microtubule-associated protein 2 (MAP2) expression. Our data support the conclusion that the manipulation of maternal melatonin levels alters brain development and leads to physiological and behavioral abnormalities in adult offspring. We designate the term circadioneuroendocrine (CNE) axis and propose the CNE-axis hypothesis of psychopathology.

Relevance: 90.00%

Abstract:

Synapses of hippocampal neurons play important roles in learning and memory processes and are involved in aberrant hippocampal function in temporal lobe epilepsy. Major neuronal types in the hippocampus as well as their input and output synapses are well known, but it has remained an open question to what extent conventional electron microscopy (EM) has provided us with the real appearance of synaptic fine structure under in vivo conditions. There is reason to assume that conventional aldehyde fixation and dehydration lead to protein denaturation and tissue shrinkage, likely associated with the occurrence of artifacts. However, realistic fine-structural data of synapses are required for our understanding of the transmission process and for its simulation. Here, we used high-pressure freezing and cryosubstitution of hippocampal tissue that was not subjected to aldehyde fixation and dehydration in ethanol to monitor the fine structure of an identified synapse in the hippocampal CA3 region, that is, the synapse between granule cell axons, the mossy fibers, and the proximal dendrites of CA3 pyramidal neurons. Our results showed that high-pressure freezing nicely preserved ultrastructural detail of this particular synapse and allowed us to study rapid structural changes associated with synaptic plasticity.

Relevance: 90.00%

Abstract:

Recently, we have demonstrated that considerable inherent sensitivity gains are attained in MAS NMR spectra acquired by nonuniform sampling (NUS) and introduced maximum entropy interpolation (MINT) processing that assures the linearity of transformation between the time and frequency domains. In this report, we examine the utility of the NUS/MINT approach in multidimensional datasets possessing high dynamic range, such as homonuclear C-13-C-13 correlation spectra. We demonstrate on model compounds and on 1-73-(U-C-13,N-15)/74-108-(U-N-15) E. coli thioredoxin reassembly, that with appropriately constructed 50 % NUS schedules inherent sensitivity gains of 1.7-2.1-fold are readily reached in such datasets. We show that both linearity and line width are retained under these experimental conditions throughout the entire dynamic range of the signals. Furthermore, we demonstrate that the reproducibility of the peak intensities is excellent in the NUS/MINT approach when experiments are repeated multiple times and identical experimental and processing conditions are employed. Finally, we discuss the principles for design and implementation of random exponentially biased NUS sampling schedules for homonuclear C-13-C-13 MAS correlation experiments that yield high-quality artifact-free datasets.
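As an illustration of the last point, an exponentially biased random NUS schedule can be generated by weighting each indirect-dimension increment with a decaying exponential and drawing without replacement. The sketch below is a generic construction of this kind, not the authors' actual schedule generator; the decay constant, grid size and sampling fraction are arbitrary:

```python
import numpy as np

def nus_schedule(n_total, fraction, t2_decay, seed=0):
    """Randomly pick indirect-dimension increments, biased toward early
    points by an exponential weight mimicking signal decay."""
    rng = np.random.default_rng(seed)
    n_pick = int(round(fraction * n_total))
    idx = np.arange(n_total)
    w = np.exp(-idx / t2_decay)   # heavier weight where the signal is strong
    w /= w.sum()                  # normalize to a probability distribution
    picked = rng.choice(idx, size=n_pick, replace=False, p=w)
    return np.sort(picked)

# a 50% schedule on a 256-point grid with an arbitrary decay constant
sched = nus_schedule(256, 0.5, t2_decay=100.0)
```

In practice such schedules are usually further constrained (e.g. the first increment is always sampled, and candidate schedules are screened for artifact behavior), which is part of the design principles the report discusses.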

Relevance: 90.00%

Abstract:

In honeybees (Apis mellifera), the process of nectar collection is considered a straightforward example of task partitioning with two subtasks, or two intersecting cycles of activity: (1) foraging and (2) storing of nectar, linked via its transfer between foragers and food processors. Many observations suggest, however, that nectar collection and processing in honeybees is a complex process, involving workers of other sub-castes and depending on variables such as resource profitability or the amount of stored honey. It has been observed that food-processor bees often distribute food to other hive bees after receiving it from incoming foragers, instead of storing it immediately in honey cells. While there is little information about the sub-caste affiliation and the behaviour of these second-order receivers, this stage may be important for the rapid distribution of nutrients and related information. To investigate the identity of these second-order receivers, we quantified behaviours following nectar transfer and compared them with the behaviour of average hive-bee workers. Furthermore, we tested whether food quality (sugar concentration) affects the behaviour of the second-order receivers. Of all identified second-order receivers, 59.3% performed nurse duties, 18.5% performed food-processor duties and 22.2% performed forager duties. After food intake, these bees were more active, had more trophallaxes (especially offering contacts) than average workers, and were found mainly in the brood area, independent of food quality. Our results show that liquid food can be distributed rapidly among many bees of the three main worker sub-castes without first being stored in honey cells. Furthermore, the results suggest that the rapid distribution of food depends partly on the high activity of second-order receivers.

Relevance: 90.00%

Abstract:

By applying high-pressure freezing and freeze substitution, we observed large inclusions of homogeneous appearance, not described earlier, at the front of locomoting Walker carcinosarcoma cells. Live cell imaging revealed that these inclusions were poor in lipids and nucleic acids but had a high lysine (and hence protein) content. Usually one such structure, 2-5 µm in size, was present at the front of motile Walker cells, predominantly in the immediate vicinity of newly forming blebs. By correlating the lysine-rich areas in fixed and embedded cells with electron microscopic pictures, the inclusions could be assigned to confined, faintly stained cytoplasmic areas that lacked a surrounding membrane; they were therefore called pseudovacuoles. After high-pressure freezing and freeze substitution, pseudovacuoles appeared to be filled with electron-transparent patches about 20 nm in size, surrounded by particles 12 and 15 nm in size. The heat shock protein Hsp90 was identified by peptide sequencing as a major fluorescent band on SDS-PAGE of lysine-labelled Walker cell extracts. By immunofluorescence, Hsp90 was found to be enriched in pseudovacuoles. Colocalization of the lysine with a potassium-specific dye in living cells revealed that pseudovacuoles act as K+ stores in the vicinity of forming blebs. We propose that pseudovacuoles might support blebbing by locally regulating the intracellular hydrostatic pressure.

Relevance: 90.00%

Abstract:

Cellulose-polymer composites have potential applications in the aerospace and transportation areas, where lightweight materials with high mechanical properties are needed. In addition, these economical and biodegradable composites have been shown to be useful as polymer electrolytes, packaging structures, optoelectronic devices, and medical implants such as wound dressings and bone scaffolds. In spite of the above-mentioned advantages and potential applications, the difficulties associated with synthesis and processing techniques limit the use of cellulose crystals (micro- and nano-sized) in the preparation of new composite systems. Cellulose is hydrophilic and polar, as opposed to most common thermoplastics, which are non-polar. This complicates the addition of cellulose crystals to polymer matrices and hence the achievement of sufficient dispersion, which directly affects the mechanical properties of the composites. As in other composite materials, the properties of cellulose-polymer composites depend on the volume fraction and the properties of the individual phases (the reinforcement and the polymer matrix), the dispersion quality of the reinforcement through the matrix, and the interactions between the CNCs themselves and between the CNCs and the matrix (the interphase). In order to develop economical cellulose-polymer composites with superior qualities, the properties of individual cellulose crystals, as well as the effects of reinforcement dispersion and of the interphase on the properties of the final composites, should be understood. In this research, the mechanical properties of CNC-polymer composites were characterized at the macro and nano scales.
A direct correlation was made between: dispersion quality and macro-mechanical properties; nanomechanical properties at the surface and tensile properties; and CNC diameter and interphase thickness. Lastly, individual CNCs from different sources were characterized and, for the first time, the size-scale effect on their nanomechanical properties was reported. The effect of CNC surface modification on the mechanical properties was then studied and correlated to the crystalline structure of these materials.
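The dependence of composite stiffness on phase properties and volume fraction noted above is classically bracketed by the Voigt (iso-strain) and Reuss (iso-stress) rule-of-mixtures bounds. The sketch below evaluates them for illustrative numbers: a CNC axial modulus in the commonly cited ~100-150 GPa range and a generic ~3 GPa polymer matrix; these values are not taken from this study:

```python
def composite_modulus_bounds(Ef, Em, vf):
    """Voigt (iso-strain) and Reuss (iso-stress) bounds on the elastic
    modulus of a two-phase composite with filler volume fraction vf."""
    voigt = vf * Ef + (1.0 - vf) * Em            # upper bound, parallel loading
    reuss = 1.0 / (vf / Ef + (1.0 - vf) / Em)    # lower bound, series loading
    return voigt, reuss

# illustrative values: CNC modulus ~130 GPa, polymer matrix ~3 GPa, 5 vol%
upper, lower = composite_modulus_bounds(Ef=130.0, Em=3.0, vf=0.05)
```

The wide gap between the two bounds at even 5 vol% filler is one reason dispersion quality and the interphase, rather than volume fraction alone, dominate the measured properties.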

Relevance: 90.00%

Abstract:

Polycarbonate (PC) is an important engineering thermoplastic that is currently produced on a large industrial scale using bisphenol A and monomers such as phosgene. Since phosgene is highly toxic, a non-phosgene approach using diphenyl carbonate (DPC) as an alternative monomer, as developed by Asahi Corporation of Japan, is a significantly more environmentally friendly alternative. Other advantages include the use of CO2 instead of CO as a raw material and the elimination of major waste-water production. However, for the production of DPC to be economically viable, reactive distillation units are needed to obtain the necessary yields by shifting the reaction equilibrium toward the desired products and separating the products where the equilibrium reaction occurs. In chemical reaction engineering, many reactions suffer from a low equilibrium constant. The main goal of this research is to determine the optimal process needed to shift the reactions by using appropriate control strategies for the reactive distillation system. An extensive dynamic mathematical model has been developed to help us investigate different control and processing strategies of the reactive distillation units to increase the production of DPC. The high-fidelity dynamic models include extensive thermodynamic and reaction-kinetics sub-models, while incorporating the necessary mass and energy balances of the various stages of the reactive distillation units. The study presented in this document shows the possibility of producing DPC via one reactive distillation column instead of the conventional two, with a production rate of 16.75 tons/h from starting raw materials of 74.69 tons/h of phenol and 35.75 tons/h of dimethyl carbonate. This represents a threefold increase over the projected production rate given in the literature based on a two-column configuration.
In addition, the purity of the DPC produced can reach levels as high as 99.5% with the effective use of controls. These studies are based on simulations performed using high-fidelity dynamic models.
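The quoted flow rates can be checked with a back-of-the-envelope mass balance on the overall transesterification route, DMC + 2 PhOH → DPC + 2 MeOH. The sketch below converts the stated rates to molar flows; it suggests a near-stoichiometric 2:1 phenol-to-DMC feed and a phenol conversion to DPC of roughly 20% (an illustrative consistency check, not a figure from the study):

```python
# molecular weights, g/mol
MW_PHENOL, MW_DMC, MW_DPC = 94.11, 90.08, 214.22

# rates quoted in the text, tons/h
phenol_t_h, dmc_t_h, dpc_t_h = 74.69, 35.75, 16.75

phenol_kmol = phenol_t_h * 1000.0 / MW_PHENOL   # ~794 kmol/h
dmc_kmol = dmc_t_h * 1000.0 / MW_DMC            # ~397 kmol/h
dpc_kmol = dpc_t_h * 1000.0 / MW_DPC            # ~78 kmol/h

# overall stoichiometry: DMC + 2 PhOH -> DPC + 2 MeOH
feed_ratio = phenol_kmol / dmc_kmol             # ~2:1, i.e. stoichiometric
phenol_conversion = 2.0 * dpc_kmol / phenol_kmol
```

The modest single-pass conversion implied by this check is consistent with the low equilibrium constant discussed above and motivates the reactive distillation approach, which removes products as they form.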