919 results for "Modeling of purification operations in biotechnology"
Abstract:
The venom of Crotalus durissus terrificus snakes contains various substances, including a serine protease with thrombin-like activity, called gyroxin, that clots plasmatic fibrinogen and promotes fibrin formation. The aim of this study was to purify and structurally characterize the gyroxin enzyme from Crotalus durissus terrificus venom. For isolation and purification, the following methods were employed: gel filtration on a Sephadex G75 column and affinity chromatography on benzamidine Sepharose 6B; 12% SDS-PAGE under reducing conditions; N-terminal sequence analysis; cDNA cloning and expression through RT-PCR; and crystallization tests. Theoretical molecular modeling was performed using bioinformatics tools based on comparative analysis of other serine proteases deposited in the NCBI (National Center for Biotechnology Information) database. Protein N-terminal sequencing revealed a single chain with a molecular mass of approximately 30 kDa, while its full-length cDNA had 714 bp encoding a mature protein of 238 amino acids. Crystals were obtained from solutions 2 and 5 of the Crystal Screen Kit (R), two and one respectively, confirming the protein constitution of the sample. Multiple sequence alignment of gyroxin-like B2.1 with six other snake venom serine proteases (SVSPs) indicated the preservation of cysteine residues and of the main structural elements (alpha-helices, beta-barrels and loops). The catalytic triad was located at His57, Asp102 and Ser198, and the S1 and S2 specificity sites at Thr193 and Gly215. The fibrinogen recognition and cleavage region in SVSPs used for modeling the gyroxin B2.1 sequence was located at residues Arg60, Arg72, Gln75, Arg81, Arg82, Lys85, Glu86 and Lys87. Theoretical modeling of the gyroxin fraction generated a classical structure consisting of two alpha-helices, two beta-barrel structures, five disulfide bridges and loops at positions 37, 60, 70, 99, 148, 174 and 218. These results provide information about the functional structure of gyroxin, supporting its application in the design of new drugs.
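The cysteine conservation noted in the alignment above can be checked programmatically. The following is a minimal, illustrative Python sketch (not the authors' pipeline): the two one-letter fragments are hypothetical placeholders, and the sketch assumes they are already aligned so that positions are directly comparable.

```python
# Illustrative only: the sequences below are hypothetical placeholders,
# not the actual gyroxin-like B2.1 or SVSP sequences.
def cysteine_positions(seq):
    """Return 1-based positions of cysteine (C) residues in a one-letter sequence."""
    return {i + 1 for i, aa in enumerate(seq) if aa == "C"}

gyroxin_like = "VIGGDECNINEHRFLVALYTSRSC"   # hypothetical aligned fragment
other_svsp   = "VVGGDECNINEHRSLVALYDSRSC"   # hypothetical aligned fragment

pos_a = cysteine_positions(gyroxin_like)
pos_b = cysteine_positions(other_svsp)

print("Cysteines in gyroxin-like fragment:", sorted(pos_a))
print("Cysteines in SVSP fragment:        ", sorted(pos_b))
print("Conserved cysteine positions:      ", sorted(pos_a & pos_b))
```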
Abstract:
Pumped-storage (PS) systems are used to store electric energy as potential energy for release during peak demand. We investigate the impacts of a planned 1000 MW PS scheme connecting Lago Bianco with Lago di Poschiavo (Switzerland) on temperature and particle mass concentration in both basins. The upper (turbid) basin is a reservoir receiving large amounts of fine particles from the partially glaciated watershed, while the lower basin is a much clearer natural lake. Stratification, temperature and particle concentrations in the two basins were simulated with and without PS for four different hydrological conditions and 27 years of meteorological forcing using the software CE-QUAL-W2. The simulations showed that the PS operations lead to an increase in temperature in both basins during most of the year. The increase is most pronounced (up to 4°C) in the upper hypolimnion of the natural lake toward the end of summer stratification and is partially due to frictional losses in the penstocks, pumps and turbines. The remainder of the warming is from intense coupling to the atmosphere while water resides in the shallower upper reservoir. These impacts are most pronounced during warm and dry years, when the upper reservoir is strongly heated and the effects are least concealed by floods. The exchange of water between the two basins relocates particles from the upper reservoir to the lower lake, where they accumulate during summer in the upper hypolimnion (10 to 20 mg L−1) but also to some extent decrease light availability in the trophic surface layer.
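Part of the simulated warming is attributed to frictional losses in the penstocks, pumps and turbines. As a rough back-of-the-envelope illustration (not taken from the study), the sketch below converts a hypothetical dissipated power into a bulk temperature rise of a water volume; every number is a placeholder.

```python
# Back-of-the-envelope estimate of warming from frictional losses.
# All numbers are hypothetical placeholders, not values from the study.
RHO_WATER = 1000.0      # kg m^-3
CP_WATER = 4186.0       # J kg^-1 K^-1 (specific heat of water)

def bulk_warming(dissipated_power_w, volume_m3, duration_s):
    """Temperature rise (K) if the dissipated heat mixes evenly into the volume."""
    heat_j = dissipated_power_w * duration_s
    mass_kg = RHO_WATER * volume_m3
    return heat_j / (mass_kg * CP_WATER)

# e.g. 50 MW of losses over one month mixed into 1e8 m^3 of water
dt = bulk_warming(dissipated_power_w=50e6, volume_m3=1e8, duration_s=30 * 24 * 3600)
print(f"Bulk temperature rise: {dt:.2f} K")
```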
Abstract:
Bone marrow hematopoietic stem cells (HSCs) are responsible both for the lifelong daily maintenance of all blood cells and for repair after cell loss. Until recently, the cellular mechanisms by which HSCs accomplish these two very different tasks remained an open question. Biological evidence has now been found for the existence of two related mouse HSC populations: first, a dormant HSC (d-HSC) population, which harbors the highest self-renewal potential of all blood cells but is induced into active self-renewal only in response to hematopoietic stress; and second, an active HSC (a-HSC) subset that by and large produces the progenitors and mature cells required for the maintenance of day-to-day hematopoiesis. Here we present computational analyses further supporting the d-HSC concept through extensive modeling of experimental DNA label-retaining cell (LRC) data. Our conclusion that the presence of a slowly dividing subpopulation of HSCs is the most likely explanation (amongst the various possible causes, including stochastic cellular variation) of the observed long-term Bromodeoxyuridine (BrdU) retention is confirmed by the deterministic and stochastic models presented here. Moreover, modeling both HSC BrdU uptake and dilution in three stages, together with careful treatment of the BrdU detection sensitivity, permitted improved estimates of HSC turnover rates. This analysis predicts that d-HSCs cycle about once every 149-193 days and a-HSCs about once every 28-36 days. We further predict that, using LRC assays, a 75%-92.5% purification of d-HSCs can be achieved after 59-130 days of chase. Interestingly, the d-HSC proportion is now estimated to be around 30-45% of total HSCs, more than twice our previous estimate.
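To illustrate the kind of label-retention reasoning described above, here is a deliberately simplified deterministic sketch (my own toy model, not the authors' three-stage model): each division halves BrdU intensity, cells count as label-retaining while intensity stays above an assumed detection threshold, and the d-HSC/a-HSC mixture uses mid-range turnover times and proportions from the abstract.

```python
# Simplified deterministic BrdU dilution sketch (not the authors' model).
# Each division halves label intensity; a cell is detectable while its
# intensity exceeds the assumed detection threshold.
DETECTION_THRESHOLD = 0.5 ** 6   # assume BrdU stays detectable for ~6 divisions

def intensity(chase_days, cycle_days):
    """Label intensity after a chase, assuming one division per cycle time."""
    divisions = chase_days // cycle_days
    return 0.5 ** divisions

def lrc_purity(chase_days, d_frac=0.35, d_cycle=170, a_cycle=32):
    """Fraction of label-retaining cells that are dormant HSCs."""
    d_retained = d_frac * (intensity(chase_days, d_cycle) > DETECTION_THRESHOLD)
    a_retained = (1 - d_frac) * (intensity(chase_days, a_cycle) > DETECTION_THRESHOLD)
    total = d_retained + a_retained
    return d_retained / total if total > 0 else float("nan")

for day in (30, 90, 150, 210):
    print(f"day {day:3d}: d-HSC share of label-retaining cells = {lrc_purity(day):.2f}")
```

Even this crude version reproduces the qualitative point: as the chase lengthens, the rapidly cycling a-HSCs dilute their label below detection first, so the label-retaining pool becomes progressively enriched for d-HSCs.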
Abstract:
This study is concerned with Autoregressive Moving Average (ARMA) models of time series. ARMA models form a subclass of the class of general linear models that represents stationary time series, a phenomenon encountered most often in practice by engineers, scientists and economists. It is always desirable to employ models that use parameters parsimoniously; ARMA models achieve this parsimony because they have only a finite number of parameters. Although the discussion is primarily concerned with stationary time series, we later take up the case of homogeneous non-stationary time series, which can be transformed into stationary time series. Time series models, built from present and past data, are used for forecasting future values. Both the physical and the social sciences benefit from forecasting models. The role of forecasting cuts across all fields of management: finance, marketing, production and business economics, as well as signal processing, communication engineering, chemical processes, electronics, etc. This broad applicability of time series is the motivation for this study.
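As a concrete illustration of fitting an ARMA model and forecasting with it, here is a generic Python sketch using statsmodels on synthetic data; it is not tied to any dataset in the study, and the simulated coefficients are arbitrary.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Simulate a stationary ARMA(1,1)-style series: x_t = 0.6 x_{t-1} + e_t + 0.3 e_{t-1}
rng = np.random.default_rng(0)
n = 500
e = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + e[t] + 0.3 * e[t - 1]

# Fit an ARMA(1,1) model (ARIMA with d=0) and forecast the next 12 values
model = ARIMA(x, order=(1, 0, 1))
result = model.fit()
print(result.params)               # estimated constant, AR, MA and variance parameters
print(result.forecast(steps=12))   # point forecasts of future values
```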
Abstract:
Ignoring the existence of the physical mechanisms responsible for the structure of hydrological time series can lead to false conclusions regarding the presence of possible memory effects, i.e. of persistence. This doctoral thesis traces low-frequency climatic variability through the hydrological cycle and, on this "journey", offers new insights into the transformation of the characteristic properties of time series with long-term memory. The study combines statistical methods of time series analysis with empirically based modeling techniques in order to develop operational models that are able to (1) model the dynamics of runoff, (2) forecast its future behavior and (3) estimate runoff time series at ungauged locations. As such, the thesis presents a detailed investigation of the causes of low-frequency variability in hydrological time series in the German part of the Elbe river basin, of the consequences of this variability, and of the physically based responses of surface water and groundwater models to low-frequency precipitation inputs. The thesis is structured as follows: Chapter 1 describes the Hurst phenomenon as background information and gives a brief review of related studies. Chapter 2 discusses the influence of low-frequency periodic signals on the reliability of different Hurst parameter estimation techniques. Chapter 3 correlates low-frequency precipitation variability with the North Atlantic Oscillation (NAO) index. Chapters 4-6 focus on the German part of the Elbe basin. Chapter 4 examines the low-frequency variability of different hydro-meteorological parameters and describes models that simulate the dynamics of these low frequencies and their future behavior. Chapter 5 discusses the possible application of the results on characteristic scales and on the methods of temporal variability analysis to practical questions in hydraulic engineering, as well as to the temporal estimation of basin runoff at ungauged locations. Chapter 6 tracks the low-frequency cycles in precipitation through the individual components of the hydrological cycle, namely direct runoff, baseflow, groundwater flow and basin runoff, by means of empirical modeling. The conclusions are presented in Chapter 7. An appendix describes technical details of the statistical methods used and of the software tools developed.
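To make the Hurst parameter estimation discussed in Chapter 2 concrete, here is a generic rescaled-range (R/S) sketch in Python; it is a textbook estimator applied to synthetic white noise, not one of the specific estimators evaluated in the thesis, and the window sizes are arbitrary choices.

```python
import numpy as np

def rescaled_range(series, window):
    """Average R/S statistic over non-overlapping windows of a given size."""
    rs_values = []
    for start in range(0, len(series) - window + 1, window):
        segment = series[start:start + window]
        deviations = segment - segment.mean()
        cumulative = np.cumsum(deviations)
        r = cumulative.max() - cumulative.min()   # range of cumulative deviations
        s = segment.std(ddof=1)                   # sample standard deviation
        if s > 0:
            rs_values.append(r / s)
    return np.mean(rs_values)

def hurst_exponent(series, windows=(8, 16, 32, 64, 128)):
    """Estimate H as the slope of log(R/S) against log(window size)."""
    series = np.asarray(series, dtype=float)
    log_n = np.log(np.array(windows, dtype=float))
    log_rs = np.log([rescaled_range(series, w) for w in windows])
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(1)
white_noise = rng.normal(size=4096)
# Theoretical H = 0.5 for an uncorrelated series; simple R/S is biased upward
# for short windows, so the estimate typically lands somewhat above 0.5.
print("Estimated H for white noise:", round(hurst_exponent(white_noise), 2))
```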
Abstract:
The present study describes a methodology for the assay of glycerol kinase (GK) from baker's yeast. Standardization of the glycerol kinase activity assay was accomplished using a diluted enzymatic preparation containing glycerol phosphate oxidase (GPO) and glycerol kinase. The mixture was incubated at 60 degrees C for 15 min and the reaction was stopped by the addition of SDS solution. A first set of experiments was carried out to investigate the individual effects of temperature (T), pH and substrate concentration (S) on GK activity and stability. The pH and temperature stability tests showed that the enzyme presented high stability at pH 6.0-8.0, and thermal stability was completely maintained up to 50 degrees C for 1 h. The K(m) of the enzyme for glycerol was calculated to be 2 mM and V(max) to be 1.15 U/mL. In addition, modeling and optimization of the reaction conditions were attempted by response surface methodology (RSM). Higher activity values are attained at temperatures between 52 and 56 degrees C, pH around 10.2-10.5 and substrate concentrations from 150 to 170 mM. This low-cost method for the glycerol kinase assay in a sequence of reactions is of great importance for many industries, such as the food, sugar and alcohol industries. RSM proved to be an adequate approach for modeling the reaction and optimizing the reaction conditions to maximize glycerol kinase activity. (C) 2007 Elsevier B.V. All rights reserved.
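A small sketch of the Michaelis-Menten relationship behind the reported kinetic constants is given below; it simply evaluates v = Vmax·S/(Km + S) using the Km and Vmax values from the abstract, and the substrate concentrations chosen are arbitrary examples.

```python
# Michaelis-Menten velocity curve using the kinetic constants reported above:
# Km = 2 mM for glycerol, Vmax = 1.15 U/mL. Substrate values are arbitrary.
KM_MM = 2.0            # mM
VMAX_U_PER_ML = 1.15   # U/mL

def velocity(substrate_mm):
    """Reaction velocity (U/mL) at a given glycerol concentration (mM)."""
    return VMAX_U_PER_ML * substrate_mm / (KM_MM + substrate_mm)

for s in (0.5, 2.0, 10.0, 150.0):
    print(f"[glycerol] = {s:6.1f} mM  ->  v = {velocity(s):.3f} U/mL")
```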
Abstract:
The goal of this thesis was the study of the cement-bone interface in the tibial component of a cemented total knee prosthesis. Specimens retrieved after in vivo service show that bone resorption occurs in the interdigitated region between bone and cement. A stress shielding effect was investigated as a possible cause of this bone resorption. Stress shielding occurs when bone is loaded below physiological levels and therefore starts remodeling according to the new loading conditions. µCT images were used to obtain 3D models of the bone and cement structure, and finite element analysis was used to simulate different kinds of loads. Resorption was also simulated by performing erosion operations in the interdigitated bone region. Finally, four models were simulated: bone (trabecular), bone with cement, and two models of bone with cement after progressive erosions of the bone. Compression, tension and shear tests were simulated for each model under displacement control up to 2% strain. The results show how the principal strain and Von Mises stress decrease after adding the cement to the structure and after the erosion operations. These results show that a stress shielding effect does occur and increases after resorption starts.
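The Von Mises stress used above as an output measure is a standard scalar reduction of the stress tensor. The following generic numpy sketch evaluates it for a single, hypothetical stress state; it is not part of the thesis' finite element workflow.

```python
import numpy as np

def von_mises(stress):
    """Von Mises equivalent stress from a 3x3 Cauchy stress tensor (MPa)."""
    s = np.asarray(stress, dtype=float)
    return np.sqrt(
        0.5 * ((s[0, 0] - s[1, 1]) ** 2
               + (s[1, 1] - s[2, 2]) ** 2
               + (s[2, 2] - s[0, 0]) ** 2)
        + 3.0 * (s[0, 1] ** 2 + s[1, 2] ** 2 + s[0, 2] ** 2)
    )

# Hypothetical stress state at one element of the interdigitated region (MPa)
sigma = [[-4.0, 0.5, 0.2],
         [ 0.5, -1.0, 0.1],
         [ 0.2,  0.1, -0.5]]
print(f"Von Mises stress: {von_mises(sigma):.2f} MPa")
```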
Abstract:
Global climate change in recent decades has strongly influenced the Arctic, generating pronounced warming accompanied by a significant reduction of sea ice in seasonally ice-covered seas and a dramatic increase of open water regions exposed to wind [Stephenson et al., 2011]. By strongly scattering the wave energy, thick multiyear ice prevents swell from penetrating deeply into the Arctic pack ice. However, with the recent changes affecting Arctic sea ice, waves gain more energy from the extended fetch and can therefore penetrate further into the pack ice. Arctic sea ice also appears weaker during the melt season, extending the transition zone between thick multiyear ice and the open ocean. This region is called the Marginal Ice Zone (MIZ). In the Arctic, the MIZ is mainly encountered in the marginal seas, such as the Nordic Seas, the Barents Sea, the Beaufort Sea and the Labrador Sea. Formed by numerous blocks of sea ice of various diameters (floes), the MIZ under certain conditions allows maritime transportation, stimulating prospects of industrial and tourist exploitation of these regions and possibly allowing, in the near future, a maritime connection between the Atlantic and the Pacific. With the increasing human presence in the Arctic, waves pose security and safety issues. As marginal seas are targeted for oil and gas exploitation, understanding and predicting ocean waves and their effects on sea ice become crucial for structure design and for the real-time safety of operations. The juxtaposition of waves and sea ice represents a risk for personnel and equipment deployed on ice, and may complicate critical operations such as platform evacuations. The risk is difficult to evaluate because there are no long-term observations of waves in ice, swell events are difficult to predict from local conditions, ice breakup can occur on very short time scales and wave-ice interactions are beyond the scope of current forecasting models [Liu and Mollo-Christensen, 1988; Marko, 2003]. In this thesis, a newly developed Waves in Ice Model (WIM) [Williams et al., 2013a; Williams et al., 2013b] and its related Ocean and Sea Ice model (OSIM) will be used to study the MIZ and the improvements of wave modeling in ice-infested waters. This work has been conducted in collaboration with the Nansen Environmental and Remote Sensing Center and within the SWARP project, which aims to extend operational services supporting human activity in the Arctic by including forecasts of waves in ice-covered seas, forecasts of sea ice in the presence of waves, and remote sensing of both wave and sea ice conditions. The WIM will be included in the downstream forecasting services provided by the Copernicus marine environment monitoring service.
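To illustrate why scattering by the ice cover limits how far swell penetrates, here is a generic sketch of exponential wave-energy attenuation with distance into the ice. This is only an illustration of the qualitative mechanism described above, not the WIM formulation; the attenuation coefficient and incident wave height are hypothetical.

```python
import numpy as np

# Generic illustration: wave energy decaying exponentially with distance into
# the ice cover, E(x) = E0 * exp(-alpha * x). Since energy scales with Hs**2,
# Hs decays at half the energy attenuation rate. All values are hypothetical.
def significant_wave_height(hs0_m, alpha_per_m, distance_m):
    """Significant wave height after travelling a distance into the ice."""
    return hs0_m * np.exp(-0.5 * alpha_per_m * distance_m)

hs0 = 3.0        # incident significant wave height (m), hypothetical
alpha = 1e-4     # energy attenuation coefficient (1/m), hypothetical
for km in (0, 10, 50, 100):
    hs = significant_wave_height(hs0, alpha, km * 1000)
    print(f"{km:3d} km into the ice: Hs = {hs:.2f} m")
```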
Abstract:
Federal Highway Administration, Washington, D.C.
Abstract:
Federal Highway Administration, Office of Safety and Traffic Operations Research Development, McLean, Va.
Abstract:
Managed lane strategies are innovative road operation schemes for addressing congestion problems. These strategies operate a lane (or lanes) adjacent to a freeway that provides congestion-free trips to eligible users, such as transit or toll-payers. To ensure the successful implementation of managed lanes, the demand on these lanes needs to be accurately estimated. Among different approaches for predicting this demand, the four-step demand forecasting process is most common. Managed lane demand is usually estimated at the assignment step. Therefore, the key to reliably estimating the demand is the utilization of effective assignment modeling processes.

Managed lanes are particularly effective when the road is functioning at near-capacity. Therefore, capturing variations in demand and in network attributes and performance is crucial for their modeling, monitoring and operation. As a result, traditional modeling approaches, such as those used in the static traffic assignment of demand forecasting models, fail to correctly predict managed lane demand and the associated system performance. The present study demonstrates the power of the more advanced modeling approach of dynamic traffic assignment (DTA), as well as the shortcomings of conventional approaches, when used to model managed lanes in congested environments. In addition, the study develops processes to support an effective utilization of DTA to model managed lane operations.

Static and dynamic traffic assignments consist of demand, network, and route choice model components that need to be calibrated. These components interact with each other, and an iterative method for calibrating them is needed. In this study, an effective standalone framework that combines static demand estimation and dynamic traffic assignment has been developed to replicate real-world traffic conditions.

With advances in traffic surveillance technologies, collecting, archiving, and analyzing traffic data is becoming more accessible and affordable. The present study shows how data from multiple sources can be integrated, validated, and best used in different stages of modeling and calibration of managed lanes. Extensive and careful processing of demand, traffic, and toll data, as well as proper definition of performance measures, result in a calibrated and stable model, which closely replicates real-world congestion patterns and can reasonably respond to perturbations in network and demand properties.
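To make the assignment-step logic tangible, here is a heavily simplified Python sketch of splitting demand between a tolled managed lane and the general-purpose lanes: travel times come from BPR volume-delay functions and a binary logit model gives the managed-lane share, iterated to a fixed point. This is only an illustration, not the study's DTA framework, and all capacities, tolls and parameters are hypothetical.

```python
import math

def bpr_travel_time(free_flow_min, volume_vph, capacity_vph, alpha=0.15, beta=4):
    """Bureau of Public Roads volume-delay function."""
    return free_flow_min * (1 + alpha * (volume_vph / capacity_vph) ** beta)

def logit_managed_share(gc_managed, gc_general, theta=0.4):
    """Binary logit probability of choosing the managed lane."""
    return 1.0 / (1.0 + math.exp(theta * (gc_managed - gc_general)))

demand_vph = 6000.0
toll_equiv_min = 5.0    # toll expressed in equivalent minutes (hypothetical)
share = 0.3             # initial guess of managed-lane share

# Fixed-point iteration with method-of-successive-averages damping.
for k in range(1, 51):
    t_ml = bpr_travel_time(10.0, share * demand_vph, 1800.0)
    t_gp = bpr_travel_time(10.0, (1 - share) * demand_vph, 5400.0)
    new_share = logit_managed_share(t_ml + toll_equiv_min, t_gp)
    share += (new_share - share) / k

t_ml = bpr_travel_time(10.0, share * demand_vph, 1800.0)
t_gp = bpr_travel_time(10.0, (1 - share) * demand_vph, 5400.0)
print(f"Managed-lane share: {share:.2f} (ML {t_ml:.1f} min, GP {t_gp:.1f} min)")
```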
Abstract:
This paper presents an integer programming model for developing optimal shift schedules while allowing extensive flexibility in terms of alternate shift starting times, shift lengths, and break placement. The model combines the work of Moondra (1976) and Bechtold and Jacobs (1990) by implicitly matching meal breaks to implicitly represented shifts. Moreover, the new model extends the work of these authors to enable the scheduling of overtime and the scheduling of rest breaks. We compare the new model to Bechtold and Jacobs' model over a diverse set of 588 test problems. The new model generates optimal solutions more rapidly, solves problems with more shift alternatives, and does not generate schedules violating the operative restrictions on break timing.
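For readers unfamiliar with this class of models, below is a deliberately simplified explicit shift-covering integer program written with the PuLP library (assumed to be available); the paper's contribution is an implicit formulation with flexible shift starts, breaks and overtime, which this toy example does not reproduce, and the demand profile, shifts and costs are made up.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpInteger, LpStatus

# Deliberately simplified explicit shift-covering model (the implicit
# formulations discussed in the paper are far more compact and flexible).
# Demand profile, shift definitions and costs below are made up.
periods = list(range(8))                        # eight planning periods
demand = [2, 3, 5, 6, 6, 5, 3, 2]               # staff required per period
shifts = {                                       # shift -> periods covered
    "early":  {0, 1, 2, 3},
    "middle": {2, 3, 4, 5},
    "late":   {4, 5, 6, 7},
}
cost = {"early": 4, "middle": 4, "late": 4}      # cost per employee on a shift

prob = LpProblem("simple_shift_scheduling", LpMinimize)
x = {s: LpVariable(f"n_{s}", lowBound=0, cat=LpInteger) for s in shifts}

prob += lpSum(cost[s] * x[s] for s in shifts)    # minimize total staffing cost
for t in periods:                                # cover demand in every period
    prob += lpSum(x[s] for s in shifts if t in shifts[s]) >= demand[t]

prob.solve()
print(LpStatus[prob.status], {s: int(x[s].value()) for s in shifts})
```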
Abstract:
Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have been accessible via the Internet and Intranet. Many employees are working from remote locations and need access to secure corporate files. During this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the user's session. Therefore, highly secure authentication methods must be used. We posit that each of us is unique in our use of computer systems. It is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model essentially captures sequences and sub-sequences of user actions, their orderings, and temporal relationships that make them unique by providing a model of how each user typically behaves. Users are then continuously monitored during software operations. Large deviations from "normal behavior" can possibly indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in web logs generated in response to a user's actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques; binary and multi-class classification. We evaluate model-specific differences of user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis. A specific set of metrics are used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types. This tool is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems such as mobile devices and the analysis of network traffic.
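The core idea of n-gram modeling of user actions can be illustrated with a minimal Python sketch (not the Intruder Detector implementation): train bigram counts per user from action logs, then score new sessions by their average add-one-smoothed log probability; the training sessions and action names below are made up.

```python
import math
from collections import Counter

def train_bigrams(sessions):
    """Count bigram and unigram frequencies over lists of action tokens."""
    bigrams, unigrams = Counter(), Counter()
    for actions in sessions:
        unigrams.update(actions)
        bigrams.update(zip(actions, actions[1:]))
    return bigrams, unigrams

def avg_log_prob(actions, bigrams, unigrams, vocab_size):
    """Average add-one-smoothed log probability of a session's bigrams."""
    scores = []
    for prev, cur in zip(actions, actions[1:]):
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
        scores.append(math.log(p))
    return sum(scores) / max(len(scores), 1)

# Hypothetical training sessions for one user, plus two sessions to score.
training = [["login", "open_report", "filter", "export", "logout"],
            ["login", "open_report", "filter", "filter", "logout"]]
bigrams, unigrams = train_bigrams(training)
vocab = len(unigrams)

typical = ["login", "open_report", "filter", "logout"]
unusual = ["login", "delete_all", "export", "export", "logout"]
print("typical session score:", round(avg_log_prob(typical, bigrams, unigrams, vocab), 2))
print("unusual session score:", round(avg_log_prob(unusual, bigrams, unigrams, vocab), 2))
```

A session whose score falls far below the scores seen during training would be flagged as a possible deviation from that user's normal behavior.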
Abstract:
American tegumentary leishmaniasis (ATL) is a disease transmitted to humans by female sandflies of the genus Lutzomyia. Several factors are involved in the disease transmission cycle. In this work, only rainfall and deforestation were considered to assess the variability in the incidence of ATL. To this end, monthly records of ATL incidence in Orán, Salta, Argentina, for the period 1985-2007 were used. The square root of the relative incidence of ATL and the corresponding variance were formulated as time series, and these data were smoothed by moving averages of 12 and 24 months, respectively. The same procedure was applied to the rainfall data. Typical months, namely April, August, and December, were found and allowed us to describe the dynamical behavior of ATL outbreaks. These results were tested at the 95% confidence level. We concluded that the variability of rainfall would not be enough to explain the epidemic outbreaks of ATL in the period 1997-2000, but it consistently explains the situation observed in the years 2002 and 2004. Deforestation activities that occurred in this region could explain the epidemic peaks observed in both years and also during the entire observation period except 2005-2007.
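The smoothing step described above can be sketched in a few lines of Python using pandas; the incidence values below are synthetic placeholders, not the Orán data, and only the 12-month window is shown.

```python
import numpy as np
import pandas as pd

# Illustration of the smoothing step: take the square root of a monthly
# relative-incidence series and smooth it with a centered 12-month moving
# average. The incidence values below are synthetic, not the Orán data.
rng = np.random.default_rng(42)
months = pd.date_range("1985-01", periods=276, freq="MS")   # Jan 1985 - Dec 2007
relative_incidence = pd.Series(rng.gamma(shape=2.0, scale=0.5, size=len(months)),
                               index=months)

sqrt_incidence = np.sqrt(relative_incidence)
smoothed_12 = sqrt_incidence.rolling(window=12, center=True).mean()

print(smoothed_12.dropna().head())
```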