951 results for "vector quantization based Gaussian modeling"
Abstract:
DNA-based immunization has initiated a new era of vaccine research. One of the main goals of gene vaccine development is the control of the levels of expression in vivo for efficient immunization. Modifying the vector to modulate expression or immunogenicity is of critical importance for the improvement of DNA vaccines. The most frequently used vectors for genetic immunization are plasmids. In this article, we review some of the main elements relevant to their design, such as a strong promoter/enhancer region, introns, genes encoding antigens of interest from the pathogen (how to choose and modify them), a polyadenylation termination sequence, an origin of replication for plasmid production in Escherichia coli, an antibiotic resistance gene as a selectable marker, convenient cloning sites, and the presence of immunostimulatory sequences (ISS) that can be added to the plasmid to enhance adjuvanticity and to activate the immune system. The specific modifications that can increase overall expression, as well as the potential of DNA-based vaccination, are also discussed.
Abstract:
Fluid particle breakup and coalescence are important phenomena in a number of industrial flow systems. This study deals with a gas-liquid bubbly flow in a wastewater cleaning application. A three-dimensional geometric model of the dispersion water system was created in ANSYS CFD meshing software. A numerical study of the system was then carried out by means of unsteady simulations performed in ANSYS FLUENT CFD software. A single-phase water flow case was set up to calculate the entire flow field using the RNG k-epsilon turbulence model based on the Reynolds-averaged Navier-Stokes (RANS) equations. The bubbly flow case was based on a coupled computational fluid dynamics - population balance model (CFD-PBM) approach. Bubble breakup and coalescence were considered to determine the evolution of the bubble size distribution. The obtained results are regarded as steps toward optimization of the cleaning process and will be analyzed in order to make the process more efficient.
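For reference, a generic form of the population balance equation underlying a CFD-PBM approach is sketched below; this is a hedged reconstruction, and the study's specific breakup and coalescence kernels are not reproduced.

```latex
% Generic transport equation for the bubble number density n(v, t), with
% bubble volume v and carrier-phase velocity u; B and D denote birth and
% death rates due to coalescence (C) and breakup (B).
\frac{\partial n(v,t)}{\partial t} + \nabla \cdot \left(\mathbf{u}\, n(v,t)\right)
  = B_{C}(v,t) - D_{C}(v,t) + B_{B}(v,t) - D_{B}(v,t)
```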
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied to many different application areas such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computation of the posterior probability density function. Except for a very restricted number of models, it is impossible to compute this density function in closed form. Hence, we need approximation methods. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution; an inappropriate choice can lead to failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ particle filter convergence with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, estimation of parameters can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, an MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution. In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends highly on the chosen proposal distribution. A commonly used proposal distribution is Gaussian, in which case the covariance matrix must be well tuned; adaptive MCMC methods can be used for this. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
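As a concrete illustration of the particle filtering idea described above, here is a minimal bootstrap particle filter sketch in Python. This is not the thesis code: the one-dimensional model, noise levels, and particle count are illustrative assumptions. The bootstrap filter uses the transition density as its importance distribution, which is exactly the design choice the abstract's convergence analysis generalizes.

```python
# Minimal bootstrap particle filter for a 1-D non-linear, non-Gaussian
# state space model (illustrative model, not from the thesis).
import numpy as np

rng = np.random.default_rng(0)

def transition(x):
    # illustrative non-linear dynamics
    return 0.5 * x + 25 * x / (1 + x**2)

def simulate(T=50, q=1.0, r=1.0):
    xs, ys = [], []
    x = rng.normal()
    for _ in range(T):
        x = transition(x) + rng.normal(scale=np.sqrt(q))
        xs.append(x)
        ys.append(0.05 * x**2 + rng.normal(scale=np.sqrt(r)))
    return np.array(xs), np.array(ys)

def bootstrap_pf(ys, n=1000, q=1.0, r=1.0):
    # bootstrap proposal: propagate particles with the transition density
    particles = rng.normal(size=n)
    means = []
    for y in ys:
        particles = transition(particles) + rng.normal(scale=np.sqrt(q), size=n)
        w = np.exp(-0.5 * (y - 0.05 * particles**2) ** 2 / r)  # likelihood weights
        w /= w.sum()
        means.append(w @ particles)            # posterior-mean estimate
        # multinomial resampling to counteract weight degeneracy
        particles = rng.choice(particles, size=n, p=w)
    return np.array(means)

xs, ys = simulate()
print(np.mean((bootstrap_pf(ys) - xs) ** 2))   # filtering mean-squared error
```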
Abstract:
Acid sulfate (a.s.) soils constitute a major environmental issue. Severe ecological damage results from the considerable amounts of acidity and metals leached by these soils into the recipient watercourses. As even small hot spots may affect large areas of coastal waters, mapping represents a fundamental step in the management and mitigation of a.s. soil environmental risks (i.e. to target strategic areas). Traditional mapping in the field is time-consuming and therefore expensive. Additional, more cost-effective techniques thus have to be developed in order to narrow down and define in detail the areas of interest. The primary aim of this thesis was to assess different spatial modeling techniques for a.s. soil mapping, and the characterization of soil properties relevant for a.s. soil environmental risk management, using all available data: soil and water samples, as well as datalayers (e.g. geological and geophysical). Different spatial modeling techniques were applied at catchment or regional scale. Two artificial neural networks were assessed on the Sirppujoki River catchment (c. 440 km2) located in southwestern Finland, while fuzzy logic was assessed on several areas along the Finnish coast. Quaternary geology, aerogeophysics and slope data (derived from a digital elevation model) were utilized as evidential datalayers. The methods also required the use of point datasets (i.e. soil profiles corresponding to known a.s. or non-a.s. soil occurrences) for training and/or validation within the modeling processes. Applying these methods, various maps were generated: probability maps for a.s. soil occurrence, as well as predictive maps for different soil properties (sulfur content, organic matter content and critical sulfide depth). The two assessed artificial neural networks (ANNs) demonstrated good classification abilities for a.s. soil probability mapping at catchment scale. Slightly better results were achieved using a Radial Basis Function (RBF) -based ANN than a Radial Basis Functional Link Net (RBFLN) method, narrowing down the most probable areas for a.s. soil occurrence more accurately and defining the least probable areas more properly. The RBF-based ANN also demonstrated promising results for the characterization of different soil properties in the most probable a.s. soil areas at catchment scale. Since a.s. soil areas constitute highly productive lands for agricultural purposes, the combination of a probability map with more specific soil property predictive maps offers a valuable toolset to more precisely target strategic areas for subsequent environmental risk management. Notably, the use of laser scanning (i.e. Light Detection And Ranging, LiDAR) data enabled a more precise definition of a.s. soil probability areas, as well as of the soil property modeling classes for sulfur content and the critical sulfide depth. Given suitable training/validation points, ANNs can be trained to yield a more precise modeling of the occurrence of a.s. soils and their properties. By contrast, fuzzy logic represents a simple, fast and objective alternative for carrying out preliminary surveys, at catchment or regional scale, in areas offering a limited amount of data. This method enables delimiting and prioritizing the most probable areas for a.s. soil occurrence, which can be particularly useful in the field. Being easily transferable from area to area, fuzzy logic modeling can be carried out at regional scale. Mapping at this scale would be extremely time-consuming through manual assessment.
The use of spatial modeling techniques enables the creation of valid and comparable maps, which represents an important development within the a.s. soil mapping process. The a.s. soil mapping was also assessed using water chemistry data for 24 different catchments along the Finnish coast (in all, covering c. 21,300 km2) which were mapped with different methods (i.e. conventional mapping, fuzzy logic and an artificial neural network). Two a.s. soil related indicators measured in the river water (sulfate content and sulfate/chloride ratio) were compared to the extent of the most probable areas for a.s. soils in the surveyed catchments. High sulfate contents and sulfate/chloride ratios measured in most of the rivers demonstrated the presence of a.s. soils in the corresponding catchments. The calculated extent of the most probable a.s. soil areas is supported by independent data on water chemistry, suggesting that the a.s. soil probability maps created with different methods are reliable and comparable.
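To make the RBF-based ANN classification concrete, here is a minimal sketch in Python. It is illustrative only: the synthetic datalayer values, center count, kernel width, and ridge penalty are all assumptions, not the thesis setup.

```python
# Minimal RBF-network sketch for a.s. soil probability mapping: evidential
# datalayer values at sampled soil profiles are the inputs, known a.s. /
# non-a.s. occurrences the training targets (all data synthetic here).
import numpy as np

rng = np.random.default_rng(1)

def rbf_design(X, centers, width):
    # Gaussian basis functions centered on a subset of training points
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width**2))

# synthetic stand-ins for e.g. Quaternary geology, aerogeophysics, slope
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # 1 = a.s. soil occurrence

centers = X[rng.choice(len(X), 20, replace=False)]
Phi = rbf_design(X, centers, width=1.0)
# ridge-regularized least squares for the output-layer weights
w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(len(centers)), Phi.T @ y)

prob = np.clip(Phi @ w, 0, 1)                      # probability-map values
print(((prob > 0.5) == y.astype(bool)).mean())     # training accuracy
```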
Abstract:
We report here the construction of a vector derived from the pET3-His and pRSET plasmids for the expression and purification of recombinant proteins in Escherichia coli based on T7 phage RNA polymerase. The resulting pAE plasmid combined the advantages of both vectors: small size (pRSET), expression of a short 6XHis tag at the N-terminus (pET3-His) and a high plasmid copy number (pRSET). The small size of the vector (2.8 kb) and the high copy number/cell (200-250 copies) facilitate the subcloning and sequencing procedures when compared to the pET system (pET3-His, 4.6 kb and 40-50 copies) and also result in high-level expression of recombinant proteins (20 mg purified protein/liter of culture). In addition, the vector pAE enables the expression of a fusion protein with a minimal amino-terminal hexa-histidine affinity tag (a tag of 9 amino acids using the XhoI restriction enzyme for the 5' cloning site), as in the case of the pET3-His plasmid and in contrast to proteins expressed by pRSET plasmids (a tag of 36 amino acids using the BamHI restriction enzyme for the 5' cloning site). Thus, although proteins expressed by pRSET plasmids also have a hexa-histidine tag, the fusion peptide is much longer and may represent a problem for some recombinant proteins.
Abstract:
Serine proteases are involved in vital processes in virtually all species. They are important targets for researchers studying the relationships between protein structure and activity and for the rational design of new pharmaceuticals. Trypsin was used as a model to assess a possible differential contribution of hydration water to the binding of two synthetic inhibitors. Thermodynamic parameters for the association of bovine β-trypsin (homogeneous material, observed 23,294.4 ± 0.2 Da, theoretical 23,292.5 Da) with the inhibitors benzamidine and berenil at pH 8.0, 25 °C and with 25 mM CaCl2 were determined using isothermal titration calorimetry and the osmotic stress method. The association constant for berenil was about 12 times higher than the one for benzamidine (binding constants are K = 596,599 ± 25,057 and 49,513 ± 2,732 M-1, respectively; the number of binding sites is the same for both ligands, N = 0.99 ± 0.05). Apparently the driving force responsible for this large difference in affinity is not hydrophobic interactions, because the variation in heat capacity (ΔCp), a characteristic signature of these interactions, was similar in both systems tested (-464.7 ± 23.9 and -477.1 ± 86.8 J K-1 mol-1 for berenil and benzamidine, respectively). The results also indicated that the enzyme has a net gain of about 21 water molecules regardless of the inhibitor tested. Based on computational modeling, it was shown that the difference in affinity could be due to a larger number of interactions between berenil and the enzyme. The data support the view that pharmaceuticals derived from benzamidine that enable hydrogen bond formation outside the catalytic binding pocket of β-trypsin may result in more effective inhibitors.
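A quick back-of-the-envelope check (ours, not from the paper) relates the reported association constants to standard binding free energies:

```latex
% With R = 8.314 J K^-1 mol^-1 and T = 298.15 K:
\Delta G^{\circ} = -RT \ln K
\;\Rightarrow\;
\Delta G^{\circ}_{\mathrm{berenil}} \approx -RT\ln(596{,}599) \approx -33.0\ \mathrm{kJ\,mol^{-1}},
\quad
\Delta G^{\circ}_{\mathrm{benzamidine}} \approx -RT\ln(49{,}513) \approx -26.8\ \mathrm{kJ\,mol^{-1}},
% so the ~12-fold ratio of association constants corresponds to
\Delta\Delta G^{\circ} = RT \ln\!\left(\frac{596{,}599}{49{,}513}\right) \approx 6.2\ \mathrm{kJ\,mol^{-1}}.
```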
Abstract:
The objective of this thesis is to develop and further generalize the differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis, a differential evolution classifier with a pool of distances is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest prototype vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during the training phase. The differential evolution classifier, which applies an individually optimized distance measure to each new data set to be classified, is generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the selection of the optimal distance measure from a predefined pool of alternative measures is attempted systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, an attempt is made to optimize the values of the possible control parameters related to the selected distance measure. Specifically, a pool of alternative distance measures is first created, and the differential evolution algorithm is then applied to select the optimal distance measure that yields the highest classification accuracy with the current data. After the optimal distance measures for the given data set have been determined together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure, which is applied to the final classification decisions. The actual classification process is still based on the nearest prototype vector principle: a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process, the differential evolution algorithm determines the optimal class vectors, selects the optimal distance metrics, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors for obtaining higher classification accuracy. The results also demonstrate that it is possible to build a classifier that is able to select the optimal distance measure for the given data set automatically and systematically. The distances obtained with the optimal distance measures and their optimal parameters are aggregated to form a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy.
The main outcomes of the work are six new generalized versions of the previously presented method, the differential evolution classifier. All these DE classifiers demonstrated good results in their classification tasks.
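A minimal sketch of the core training idea in Python follows, under stated assumptions: synthetic two-class data, a single Euclidean distance rather than the optimized distance pool, and one prototype per class. The DE/rand/1/bin scheme shown is a standard variant, not necessarily the thesis's exact configuration.

```python
# Differential evolution optimizing the prototype vectors of a
# nearest-prototype classifier (toy illustration).
import numpy as np

rng = np.random.default_rng(2)

# synthetic two-class data with 2 features
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.repeat([0, 1], 50)

def accuracy(flat):
    protos = flat.reshape(2, 2)                      # one prototype per class
    d = ((X[:, None, :] - protos[None]) ** 2).sum(-1)
    return (d.argmin(1) == y).mean()                 # nearest-prototype rule

# DE/rand/1/bin over the flattened prototype coordinates
NP, D, F, CR = 20, 4, 0.8, 0.9
pop = rng.normal(0, 2, (NP, D))
fit = np.array([accuracy(p) for p in pop])
for _ in range(100):
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)                     # differential mutation
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True                # guarantee one mutated gene
        trial = np.where(cross, mutant, pop[i])
        t_fit = accuracy(trial)
        if t_fit >= fit[i]:                          # greedy selection
            pop[i], fit[i] = trial, t_fit
print(fit.max())                                     # best training accuracy
```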
Abstract:
Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance of FB filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, there was a higher human contrast sensitivity to radially than to angularly filtered images, but both functions peaked in the 11.3-16 frequency interval. The FB-based model presented similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response patterns of two alternative models, based on local FB analysis and on raw luminance, strongly diverged from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.
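For intuition about the Fourier-Bessel decomposition itself, here is a minimal one-dimensional radial sketch in Python. This is an illustration under stated assumptions: the paper's transform is two-dimensional with an angular index as well, the test profile is invented, and scipy is assumed available.

```python
# Radial Fourier-Bessel decomposition, band-pass filtering in the FB
# domain, and inverse transform (1-D analogue of the paper's pipeline).
import numpy as np
from scipy.special import jv, jn_zeros

R = 1.0
r = np.linspace(0, R, 512)
f = np.exp(-8 * r**2)                    # illustrative radial profile

alphas = jn_zeros(0, 20)                 # first 20 positive zeros of J0
basis = jv(0, np.outer(alphas, r) / R)   # J0(alpha_n r / R), shape (20, 512)
# coefficients via the orthogonality of the basis with weight r:
# integral_0^R J0(a_n r/R) J0(a_m r/R) r dr = delta_nm (R^2/2) J1(a_n)^2
norms = (R**2 / 2) * jv(1, alphas) ** 2
coeffs = np.trapz(f * basis * r, r, axis=1) / norms

band = np.zeros_like(coeffs)
band[5:10] = coeffs[5:10]                # band-pass: keep mid-range components
f_filtered = band @ basis                # inverse transform of the kept band

print(np.abs(f - coeffs @ basis).max())  # truncation error of the full series
print(f_filtered.max())                  # peak of the band-passed profile
```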
Abstract:
Human papillomavirus (HPV) infection is the most common sexually transmitted disease in the world and is related to the etiology of cervical cancer. The most common high-risk HPV types are 16 and 18; however, the second most prevalent type in the Midwestern region of Brazil is HPV-33. New vaccine strategies against HPV have shown that virus-like particles (VLP) of the major capsid protein (L1) induce efficient production of antibodies, which confer protection against the same viral type. The methylotrophic yeast Pichia pastoris is an efficient and inexpensive expression system for the stable production of high levels of heterologous proteins using a wild-type gene in combination with an integrative vector. It was recently demonstrated that P. pastoris can produce the HPV-16 L1 protein by using an episomal vector associated with the optimized L1 gene. However, the use of an episomal vector is not appropriate for protein production on an industrial scale. In the present study, the vectors were integrated into the Pichia genome and the results were positive for L1 gene transcription and protein production, both intracellularly and in the extracellular environment. Despite the great potential for expression by the P. pastoris system, our results suggest a low yield of L1 recombinant protein, which, however, does not make this system unworkable. Obtaining stable clones containing the expression cassettes integrated into the genome may permit optimizations that could enable the establishment of a platform for the production of VLP-based vaccines.
Abstract:
Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices should function as expected but also that the software should be of high quality, reliable, fault tolerant, efficient, etc. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the key aspects for succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, today, customers are asking for these high quality software products at an ever-increasing pace. This leaves companies with less time for development. Software testing is an expensive activity, because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is reducing the associated cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to only demonstrate that a piece of software is functioning correctly. Usually, many other aspects of the software, such as performance, security, scalability, usability, etc., also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented. This is due to the fact that non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches to automatically generate tests from behavioral models for solving some of these challenges. We show that model-based testing is not only applicable to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than the output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process. Requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor tool support or lack it entirely. Therefore, the second contribution of this thesis is proper tool support for the proposed approach that is integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry leading tools, and complete tool-chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
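As a small illustration of generating tests from a behavioral model, here is a sketch in Python. It is not the thesis toolchain: the state machine is an invented toy model rather than a UML artifact, and transition coverage is just one common adequacy criterion.

```python
# Derive one input sequence from a state-machine model so that every
# transition is exercised at least once (transition coverage).
from collections import deque

# toy behavioral model: (state, input) -> next state
model = {
    ("idle", "digit"): "entering",
    ("entering", "digit"): "entering",
    ("entering", "op"): "pending",
    ("pending", "digit"): "entering",
    ("entering", "eq"): "result",
    ("result", "digit"): "entering",
}

def transition_tour(model, start="idle"):
    """One input sequence exercising every transition at least once."""
    uncovered, state, tour = set(model), start, []
    while uncovered:
        # BFS for the shortest input path ending in an uncovered transition
        queue, seen, found = deque([(state, [])]), {state}, None
        while queue and found is None:
            s, path = queue.popleft()
            for (src, inp), dst in model.items():
                if src != s:
                    continue
                if (src, inp) in uncovered:
                    found = (path + [inp], dst, (src, inp))
                    break
                if dst not in seen:
                    seen.add(dst)
                    queue.append((dst, path + [inp]))
        if found is None:
            break                       # remaining transitions unreachable
        path, state, covered = found
        tour.extend(path)
        uncovered.discard(covered)
    return tour

print(transition_tour(model))  # e.g. ['digit', 'digit', 'op', 'digit', 'eq', 'digit']
```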
Abstract:
The objective of this work was to determine and model the infrared dehydration curves of apple slices of the Fuji and Gala varieties. The slices were dehydrated until constant mass in a prototype dryer with an infrared heating source. The applied temperatures ranged from 50 to 100 °C. Due to the physical characteristics of the product, the dehydration curve was divided into two periods, constant and falling, separated by the critical moisture content. A linear model was used to describe the constant dehydration period. Empirical models traditionally used to model the drying behavior of agricultural products were fitted to the experimental data of the falling dehydration period. Critical moisture contents of 2.811 and 3.103 kgw kgs-1 were observed for the Fuji and Gala varieties, respectively. Based on the results, it was concluded that the constant dehydration rates presented a direct relationship with the temperature; thus, it was possible to fit a model that describes the moisture content variation as a function of time and temperature. Among the tested models describing the falling dehydration period, the model proposed by Midilli presented the best fit for all studied conditions.
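A minimal sketch of fitting the Midilli model, MR = a·exp(-k·tⁿ) + b·t, to a falling-rate drying curve follows; the synthetic data stand in for the measured moisture ratios, and scipy is assumed available.

```python
# Least-squares fit of the Midilli thin-layer drying model.
import numpy as np
from scipy.optimize import curve_fit

def midilli(t, a, k, n, b):
    return a * np.exp(-k * t**n) + b * t

t = np.linspace(0.1, 10, 25)                      # drying time (h, illustrative)
mr = midilli(t, 1.0, 0.35, 1.1, -0.002)           # "true" moisture ratio curve
mr += np.random.default_rng(3).normal(0, 0.01, t.size)  # measurement noise

popt, _ = curve_fit(midilli, t, mr, p0=[1, 0.1, 1, 0])
ss_res = np.sum((mr - midilli(t, *popt)) ** 2)
r2 = 1 - ss_res / np.sum((mr - mr.mean()) ** 2)
print(popt, r2)   # fitted (a, k, n, b) and coefficient of determination
```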
Abstract:
A mathematical model to predict microbial growth in milk was developed and analyzed. The model consists of a system of two first-order differential equations. The equations are based on physical hypotheses of population growth. The model was applied to five different sets of data on microbial growth in dairy products selected from ComBase, the most important database in the area, with thousands of datasets from around the world, and the results showed a good fit. In addition, the model provides equations for the evaluation of the maximum specific growth rate and the duration of the lag phase, which may provide useful information about microbial growth.
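The paper's own equations are not reproduced in this abstract; for illustration, the sketch below integrates one well-known system of two first-order ODEs with an explicit lag phase and maximum specific growth rate, the Baranyi-Roberts model, with invented parameters.

```python
# Two-ODE microbial growth model with a lag phase (Baranyi-Roberts form).
import numpy as np
from scipy.integrate import solve_ivp

mu_max, N_max = 0.8, 1e9   # max specific growth rate (1/h), carrying capacity

def rhs(t, y):
    N, q = y
    alpha = q / (1 + q)                     # adjustment function -> lag phase
    dN = mu_max * alpha * N * (1 - N / N_max)
    dq = mu_max * q                         # physiological state of the cells
    return [dN, dq]

sol = solve_ivp(rhs, (0, 24), [1e3, 0.01], dense_output=True)
t = np.linspace(0, 24, 7)
print(np.log10(sol.sol(t)[0]))              # log counts: lag, growth, plateau
print(np.log(1 + 1 / 0.01) / mu_max)        # lag duration: ln(1 + 1/q0)/mu_max
```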
Abstract:
Celery (Apium graveolens L. var. secalinum Alef) leaves weighing 50±0.07 g and with 91.75±0.15% moisture content (~11.21 db) were dried using 8 different microwave power densities ranging from 1.8 to 20 W g-1, until the moisture content fell to 8.95±0.23% (~0.1 db). The microwave drying processes were completed in between 5.5 and 77 min, depending on the microwave power density. Measured values were compared with predicted values obtained from twenty theoretical, semi-empirical and empirical thin-layer drying equations, together with a new thin-layer drying equation. For each applied microwave power density, the models with the highest coefficient of determination (R²) were chosen as the best models. The Weibull distribution model gave the most suitable predictions at all power densities. At increasing microwave power densities, the effective moisture diffusivity values ranged from 1.595 × 10-10 to 6.377 × 10-12 m2 s-1. The activation energy was calculated using an exponential expression based on the Arrhenius equation. The linear relationship between the drying rate constant and the effective moisture diffusivity gave the best fit.
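For reference, the standard relations typically behind these two quantities are sketched below; this is a hedged reconstruction, and the paper's exact coefficients and sample geometry are not given here.

```latex
% Effective moisture diffusivity: from the first term of Fick's solution for
% a thin slab of half-thickness L, D_eff follows from the slope of ln(MR)
% versus drying time t:
MR = \frac{8}{\pi^{2}} \exp\!\left(-\frac{\pi^{2} D_{\mathrm{eff}}\, t}{4 L^{2}}\right)
% Activation energy: in microwave drying, an Arrhenius-type exponential in
% the sample mass m over the microwave power P commonly replaces 1/(RT):
D_{\mathrm{eff}} = D_{0} \exp\!\left(-\frac{E_{a}\, m}{P}\right)
```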
Abstract:
The aim of this work was to evaluate a non-agitated process of bioethanol production from soybean molasses and the kinetic parameters of the fermentation using a strain of Saccharomyces cerevisiae (ATCC® 2345). The kinetic experiment was conducted in a medium with 30% (w v-1) of soluble solids, without supplementation or pH adjustment. The maximum ethanol concentration was reached in 44 hours; the ethanol productivity was 0.946 g L-1 h-1, the yield over total initial sugars (Y1) was 47.87%, the yield over consumed sugars (Y2) was 88.08%, and the specific cell production rate was 0.006 h-1. A polynomial model was fitted to the experimental data and provided very similar parameters of yield and productivity. Based on this study, one ton of soybean molasses can yield 103 kg of anhydrous bioethanol.
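A consistency check using only the reported figures is shown below; the reading of the yields as percentages of the 0.511 g/g stoichiometric maximum is our assumption, not stated in the abstract.

```latex
% Volumetric productivity times fermentation time recovers the peak
% ethanol concentration:
P \approx Q_p\, t = 0.946\ \mathrm{g\,L^{-1}\,h^{-1}} \times 44\ \mathrm{h}
  \approx 41.6\ \mathrm{g\,L^{-1}}
% Assumed yield definitions, relative to total initial (S_0) or consumed
% (S_0 - S) sugars and the 0.511 g/g stoichiometric maximum:
Y_1 = \frac{P}{0.511\, S_0} \times 100\%, \qquad
Y_2 = \frac{P}{0.511\,(S_0 - S)} \times 100\%
```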
Abstract:
Building Information Modeling (BIM) is spreading widely in the Architecture, Engineering, and Construction (AEC) industries. Manufacturers of building elements are also starting to provide more and more BIM objects of their products. The ideal availability and distribution for these models has not yet stabilized. The usual goal of a manufacturer is to get their model into a design as early as possible. Finding ways to satisfy customer needs with a superior service would help to achieve this goal. This study seeks to determine what the case company's customers want from the models, what they consider the ideal way to obtain them, and what functionalities they desire from such a service. This master's thesis uses a modified version of the lead user method to gain an understanding of what the needs are in the longer term. Within this framework, current solutions and their common model functions are also benchmarked. Empirical data is collected with a survey and interviews. As a result, this thesis provides an understanding of what information customers use when obtaining a model, what kind of model they expect to receive, and how the process should optimally function. Based on these results, an ideal service is outlined.