951 results for vector quantization based Gaussian modeling
Abstract:
This paper presents scientific research that led to the modeling of an information system for maintaining traceability data in the Brazilian wine industry, following the principles of a service-oriented architecture (SOA). Since 2005, maintaining traceability data has been mandatory for all producers intending to export to any European Union country. In addition, end customers, including Brazilian ones, have been demanding information about food products. A solution encompassing the whole industry was sought, so that producer consortiums or associations could share its costs and benefits. Following an extensive bibliographic review, a series of interviews conducted with Brazilian researchers and wine producers in Bento Gonçalves - RS, Brazil, elucidated many aspects of the wine production process. Information technology issues related to the theme were also researched. The software was modeled with the Unified Modeling Language (UML) and uses web services for data exchange. A model for the wine production process was also proposed. A functional prototype showed that the adopted model is able to fulfill the demands of wine producers. These good results lead us to consider applying the model in other domains.
Abstract:
A new, simple approach for modeling and assessing the operation and response of multiline voltage-source converter (VSC)-based flexible AC transmission system controllers, namely the generalized interline power-flow controller (GIPFC) and the interline power-flow controller (IPFC), is presented in this paper. The model and the analysis are based on the converters' power-balance method, which uses the d-q orthogonal coordinates to arrive at a direct solution for these controllers through a quadratic equation. The main constraints and limitations that such devices present while controlling the two independent AC systems considered are also evaluated. To examine and validate the steady-state model initially proposed, a phase-shift VSC-based GIPFC was also built in the Alternate Transients Program (ATP), whose results are included in this paper. Where applicable, a comparative evaluation between the GIPFC and the IPFC is also presented.
Abstract:
The paper presents the development of a mechanical actuator using a shape memory alloy with a cooling system based on the thermoelectric effect (Seebeck-Peltier effect). Such a method has the advantages of reduced weight and a simpler control strategy compared to other forced cooling systems. A complete mathematical model of the actuator was derived, and an experimental prototype was implemented. Several experiments were used to validate the model and to identify all parameters. A robust, nonlinear controller based on sliding-mode theory was derived and implemented. Experiments were used to evaluate the actuator's closed-loop performance, stability, and robustness properties. The results showed that the proposed cooling system and controller are able to improve the dynamic response of the actuator. (C) 2009 Elsevier Ltd. All rights reserved.
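As a toy illustration of the sliding-mode idea mentioned above, the sketch below drives a first-order plant with a discontinuous switching law; the plant, gains, and time step are invented for illustration and are not the actuator model from the paper.

```python
import numpy as np

def simulate_sliding_mode(x0=0.0, target=1.0, K=5.0, dt=1e-3, steps=4000):
    """Simulate a toy first-order plant dx/dt = -a*x + b*u under the
    sliding-mode law u = -K*sign(s), with sliding surface s = x - target.
    All plant parameters are illustrative, not taken from the paper."""
    a, b = 1.0, 1.0
    x = x0
    for _ in range(steps):
        s = x - target           # sliding surface (first-order case)
        u = -K * np.sign(s)      # discontinuous switching control
        x += dt * (-a * x + b * u)
    return x
```

Despite the chattering inherent to the switching law, the state settles in a narrow band around the target, which is the robustness property the abstract refers to.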
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SOC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug. However, due to state explosion, their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as functional coverage definition, must be carefully planned to guarantee their efficacy. In general, static input-space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although the two act on different facets of the monitored system and are not mutually exclusive. This work presents a constrained-random simulation-based functional verification methodology in which, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. To this purpose, a tool to automatically generate PD-based stimuli sources was developed. Additionally, we developed a second tool to generate functional coverage models that fit the PD-based input space exactly. Together, the input stimuli and coverage model enhancements resulted in a notable increase in testbench efficiency compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and a 56% simulation time reduction when combining our stimuli sources with their corresponding, automatically generated, coverage models.
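The input-space pruning idea can be sketched as follows: enumerate parameter domains, discard invalid combinations before simulation, and draw constrained-random stimuli only from the remaining space. The domain names and the constraint below are hypothetical and stand in for the paper's PD formalism.

```python
import itertools, random

# Hypothetical parameter domains for a bus-interface testbench.
domains = {
    "burst_len": [1, 4, 8, 16],
    "addr_align": [1, 2, 4],
    "mode": ["read", "write"],
}

def is_valid(tc):
    # Example constraint (invented): long bursts require word alignment.
    return not (tc["burst_len"] >= 8 and tc["addr_align"] < 4)

# Static pruning: enumerate the cross product once and drop invalid
# scenarios before any simulation time is spent on them.
all_cases = [dict(zip(domains, vals)) for vals in itertools.product(*domains.values())]
valid_cases = [tc for tc in all_cases if is_valid(tc)]

def draw_stimulus(rng=random):
    """Constrained-random draw restricted to the pruned input space."""
    return rng.choice(valid_cases)
```

A coverage model built over `valid_cases` would, by construction, match the pruned space exactly, which is the pairing of stimuli and coverage the abstract describes.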
Abstract:
CD4-selective targeting of an antibody-polycation-DNA complex was investigated. The complex was synthesized with the anti-CD4 monoclonal antibody B-F5, polylysine(268) (pLL) and either the pGL3 control vector containing the luciferase reporter gene or the pGeneGrip vector containing the green fluorescent protein (GFP) gene. B-F5-pLL-DNA complexes inhibited the binding of I-125-B-F5 to CD4(+) Jurkat cells, while complexes synthesized either without B-F5 or using a non-specific mouse IgG1 antibody had little or no effect. Expression of the luciferase reporter gene was achieved in Jurkat cells using the B-F5-pLL-pGL3 complex and was enhanced in the presence of PMA. Negligible luciferase activity was detected with the non-specific antibody complex in Jurkat cells or with the B-F5-pLL-pGL3 complex in the CD4(-) K-562 cells. Using complexes synthesized with the pGeneGrip vector, the transfection efficiency in Jurkat and K-562 cells was examined using confocal microscopy. More than 95% of Jurkat cells expressed GFP and the level of this expression was markedly enhanced by PMA. Negligible GFP expression was seen in K-562 cells or when B-F5 was replaced by a non-specific antibody. Using flow cytometry, the fluorescein-labelled complex showed specific targeting to CD4(+) cells in a mixed cell population from human peripheral blood. These studies demonstrate the selective transfection of CD4(+) T-lymphoid cells using a polycation-based gene delivery system. The complex may provide a means of delivering anti-HIV gene therapies to CD4(+) cells in vivo.
Abstract:
The conventional convection-dispersion (also called axial dispersion) model is widely used to interrelate hepatic availability (F) and clearance (Cl) with the morphology and physiology of the liver and to predict effects such as changes in liver blood flow on F and Cl. An extended form of the convection-dispersion model has been developed to adequately describe the outflow concentration-time profiles for vascular markers at both short and long times after bolus injections into perfused livers. The model, based on flux concentration and a convolution of catheters and large vessels, assumes that solute elimination in hepatocytes follows either fast distribution into or radial diffusion in hepatocytes. The model includes a secondary vascular compartment, postulated to be interconnecting sinusoids. Analysis of the mean hepatic transit time (MTT) and normalized variance (CV2) of solutes with extraction showed that the predictions of MTT and CV2 for the extended and conventional models are essentially identical irrespective of the magnitude of the rate constants representing permeability, volume, and clearance parameters, provided that there is significant hepatic extraction. In conclusion, the application of the newly developed extended convection-dispersion model has shown that the unweighted conventional convection-dispersion model can be used to describe the disposition of extracted solutes and, in particular, to estimate hepatic availability and clearance in both experimental and clinical situations.
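For reference, the conventional convection-dispersion model that the abstract builds on is commonly written, in dimensionless form, as follows; these are standard textbook forms from the hepatic dispersion-model literature, not equations reproduced from this paper:

```latex
\frac{\partial C}{\partial T}
  = D_N \frac{\partial^2 C}{\partial Z^2}
  - \frac{\partial C}{\partial Z}
  - R_N C,
\qquad
F = \exp\!\left(\frac{1 - \sqrt{1 + 4\,R_N D_N}}{2 D_N}\right),
```

where $C$ is the normalized concentration, $Z$ and $T$ the reduced axial position and time, $D_N$ the dispersion number, $R_N$ the efficiency number, and $F$ the hepatic availability (the closed-form for $F$ holds for one commonly used set of boundary conditions).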
Abstract:
Functional magnetic resonance imaging (fMRI) is currently one of the most widely used methods for studying human brain function in vivo. Although many different approaches to fMRI analysis are available, the most widely used methods employ so-called "mass-univariate" modeling of responses in a voxel-by-voxel fashion to construct activation maps. However, it is well known that many brain processes involve networks of interacting regions, and for this reason multivariate analyses might seem to be attractive alternatives to univariate approaches. The current paper focuses on one multivariate application of statistical learning theory: the statistical discrimination maps (SDM) based on support vector machines, and seeks to establish some possible interpretations when the results differ from univariate approaches. In fact, when there are changes not only in the activation level of two conditions but also in functional connectivity, SDM seems more informative. We addressed this question using both simulations and applications to real data. We have shown that the combined use of univariate approaches and SDM yields significant new insights into brain activations not available using univariate methods alone. In the application to visual working memory fMRI data, we demonstrated that the interaction among brain regions plays a role in SDM's power to detect discriminative voxels. (C) 2008 Elsevier B.V. All rights reserved.
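The multivariate idea can be sketched with synthetic data: train a linear classifier on trial patterns and read its weight vector, mapped back onto voxels, as a discrimination map. The sketch below uses a plain logistic-regression stand-in for the paper's SVM; all data, dimensions, and signal levels are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 50

# Synthetic "fMRI" trial patterns: condition B adds signal to voxels 0-4.
X_a = rng.normal(0.0, 1.0, (n_trials, n_voxels))
X_b = rng.normal(0.0, 1.0, (n_trials, n_voxels))
X_b[:, :5] += 1.5
X = np.vstack([X_a, X_b])
y = np.concatenate([np.zeros(n_trials), np.ones(n_trials)])

# Linear classifier trained by gradient descent (a stand-in for the
# paper's SVM); its weight vector plays the role of the SDM.
w = np.zeros(n_voxels)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))      # logistic prediction
    w -= 0.1 * (X.T @ (p - y)) / len(y)     # gradient step
sdm = w
```

The informative voxels receive the largest weights, which is how a discrimination map highlights voxels that jointly separate the two conditions.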
Abstract:
We propose a simulated-annealing-based genetic algorithm for solving model parameter estimation problems. The algorithm incorporates the advantages of both genetic algorithms and simulated annealing. Tests on computer-generated synthetic data that closely resemble the optical constants of a metal were performed to compare the efficiency of plain genetic algorithms against the simulated-annealing-based genetic algorithm. These tests assess the ability of the algorithms to find the global minimum and the accuracy of the values obtained for the model parameters. Finally, the algorithm with the best performance is used to fit the model dielectric function to data for platinum and aluminum. (C) 1997 Optical Society of America.
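A minimal sketch of such a hybrid, assuming a Metropolis-style acceptance step inside an otherwise ordinary genetic loop; the operators, cooling schedule, and one-parameter test function are illustrative choices, not the paper's algorithm.

```python
import math, random

def sa_ga_minimize(f, lo, hi, pop_size=20, gens=200, t0=1.0, cool=0.97, seed=1):
    """Hybrid GA: arithmetic crossover plus Gaussian mutation, but a
    worse child replaces its parent only with Metropolis probability
    exp(-dE/T), where the temperature T cools every generation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    T = t0
    for _ in range(gens):
        new_pop = []
        for parent in pop:
            mate = rng.choice(pop)
            child = 0.5 * (parent + mate) + rng.gauss(0.0, 0.05 * (hi - lo))
            child = min(max(child, lo), hi)
            dE = f(child) - f(parent)
            # SA acceptance: keep improvements; occasionally accept
            # worse children while T is still high (escapes local minima).
            if dE < 0 or rng.random() < math.exp(-dE / T):
                new_pop.append(child)
            else:
                new_pop.append(parent)
        pop = new_pop
        T *= cool
    return min(pop, key=f)

best = sa_ga_minimize(lambda p: (p - 2.5) ** 2, 0.0, 10.0)
```

Early generations explore broadly (high temperature), while late generations behave like a greedy genetic search, mirroring the combination of advantages the abstract describes.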
Abstract:
293T and Sk-Hep-1 cells were transduced with a replication-defective self-inactivating HIV-1 derived vector carrying FVIII cDNA. The genomic DNA was sequenced to reveal LTR/human genome junctions and integration sites. One hundred and thirty-two sequences matched human sequences, with an identity of at least 98%. The integration sites in 293T-FVIIIDB and in Sk-Hep-FVIIIDB cells were preferentially located in gene regions. The integrations in both cell lines were distant from the CpG islands and from the transcription start sites. A comparison between the two cell lines showed that the lentiviral-transduced DNA had the same preferred regions in the two different cell lines.
Abstract:
A thermodynamic approach is developed in this paper to describe the behavior of a subcritical fluid in the neighborhood of the vapor-liquid interface and close to a graphite surface. The fluid is modeled as a system of parallel molecular layers. The Helmholtz free energy of the fluid is expressed as the sum of the intrinsic Helmholtz free energies of the separate layers and the potential energy of their mutual interactions, calculated with the 10-4 potential. This Helmholtz free energy is described by an equation of state (such as the Bender or Peng-Robinson equation), which provides a convenient means of obtaining the intrinsic Helmholtz free energy of each molecular layer as a function of its two-dimensional density. All molecular layers of the bulk fluid are in mechanical equilibrium, corresponding to the minimum of the total potential energy. In the case of adsorption, the external potential exerted by the graphite layers is added to the free energy. The state of the interface zone between the liquid and the vapor phases, or the state of the adsorbed phase, is determined by the minimum of the grand potential. In the case of phase equilibrium the approach leads to the distribution of density and pressure over the transition zone. The interrelation between the collision diameter and the potential well depth was determined from the surface tension. It was shown that the distance between neighboring molecular layers changes substantially in the vapor-liquid transition zone and in the adsorbed phase with loading. The approach is applied in this paper to the adsorption of argon and nitrogen on carbon black. In both cases excellent agreement with the experimental data was achieved without additional assumptions or fitting parameters, except for the fluid-solid potential well depth.
The approach has far-reaching consequences and can be readily extended to the model of adsorption in slit pores of carbonaceous materials and to the analysis of multicomponent adsorption systems. (C) 2002 Elsevier Science (USA).
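The 10-4 interaction used above for layer-layer and fluid-solid energies has a well-known closed form; the sketch below evaluates the standard single-plane 10-4 expression in reduced units. The expression and all parameter values are assumptions for illustration, not quantities taken from the paper.

```python
import numpy as np

def phi_10_4(z, eps=1.0, sigma=1.0, rho_s=1.0):
    """Standard 10-4 potential of a single solid lattice plane acting on
    a fluid molecule at distance z (reduced units, illustrative params):
    phi(z) = 2*pi*rho_s*eps*sigma^2 * [ (2/5)(sigma/z)^10 - (sigma/z)^4 ]."""
    u = sigma / z
    return 2.0 * np.pi * rho_s * eps * sigma**2 * (0.4 * u**10 - u**4)

# Locate the attractive well numerically.
z = np.linspace(0.8, 3.0, 2000)
values = phi_10_4(z)
z_min = z[np.argmin(values)]       # analytically, the minimum sits at z = sigma
well_depth = values.min()          # analytically, -(3/5)*2*pi*rho_s*eps*sigma^2
```

The well depth scales linearly with the fluid-solid energy parameter, which is why that single parameter suffices as the only fitted quantity in the abstract's comparison with experiment.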
Abstract:
A thermodynamic approach based on the Bender equation of state is suggested for the analysis of supercritical gas adsorption on activated carbons at high pressure. The approach accounts for the equality of the chemical potential in the adsorbed phase and that in the corresponding bulk phase, and for the distribution of elements of the adsorption volume (EAV) over the potential energy of gas-solid interaction. This scheme is extended to subcritical fluid adsorption and takes into account the phase transition in EAV. The method is adapted to gravimetric measurements of mass excess adsorption and has been applied to the adsorption of argon, nitrogen, methane, ethane, carbon dioxide, and helium on activated carbon Norit R1 in the temperature range from 25 to 70 °C. The distribution function of adsorption volume elements over potentials exhibits overlapping peaks and is consistently reproduced for different gases. It was found that the distribution function changes weakly with temperature, which was confirmed by comparison with the distribution function obtained by the same method using the nitrogen adsorption isotherm at 77 K. It was shown that parameters such as pore volume and skeleton density can be determined directly from adsorption measurements, while the conventional approach of helium expansion at room temperature can lead to erroneous results due to the adsorption of helium in small pores of activated carbon. The approach is a convenient tool for analysis and correlation of excess adsorption isotherms over a wide range of pressure and temperature. This approach can be readily extended to the analysis of multicomponent adsorption systems. (C) 2002 Elsevier Science (USA).
Abstract:
A new modeling approach, multiple mapping conditioning (MMC), is introduced to treat mixing and reaction in turbulent flows. The model combines the advantages of the probability density function and conditional moment closure methods and is based on a certain generalization of the mapping closure concept. An equivalent stochastic formulation of the MMC model is given. The validity of the closure hypothesis of the model is demonstrated by comparison with direct numerical simulation results for the three-stream mixing problem. (C) 2003 American Institute of Physics.
Abstract:
Polymers have become the reference material for high-reliability and high-performance applications. In this work, a multi-scale approach is proposed to investigate the mechanical properties of polymer-based materials under strain. To achieve a better understanding of the phenomena occurring at the smaller scales, a coupling of Finite Element Method (FEM) and Molecular Dynamics (MD) modeling in an iterative procedure was employed, enabling the prediction of the macroscopic constitutive response. As the mechanical response can be related to the local microstructure, which in turn depends on the nano-scale structure, the multi-scale method described above computes the stress-strain relationship at every analysis point of the macro-structure by detailed modeling of the underlying micro- and meso-scale deformation phenomena. The proposed multi-scale approach can enable the prediction of properties at the macroscale while taking into consideration phenomena that occur at the mesoscale, thus offering increased potential accuracy compared to traditional methods.
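The FEM-MD handshake can be caricatured as a loop in which the macro model queries a micro model for stress at each integration point. In this sketch a toy nonlinear law stands in for the MD computation; the function names and constitutive constants are invented.

```python
def micro_response(strain):
    """Stand-in for the MD computation: returns a stress for a given
    strain via a toy nonlinear law (not an actual MD result)."""
    return 100.0 * strain - 400.0 * strain**3

def macro_step(strain_points):
    """One macro (FEM-like) pass: query the micro model at every
    integration point to assemble the local constitutive response."""
    return [micro_response(e) for e in strain_points]

# Strains at three hypothetical integration points of the macro mesh.
stresses = macro_step([0.0, 0.01, 0.02])
```

In an actual FEM-MD coupling, `micro_response` would launch an MD simulation of a representative volume element and return the homogenized stress, iterating until the two scales agree.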
Abstract:
This paper aims to present a multi-agent model for a simulation whose goal is to help one specific participant of a multi-criteria group decision-making process. The model has five main intervenient types: the human participant, who uses the simulation and argumentation support system; the participant agents, one associated with the human participant and the others simulating the other human members of the decision meeting group; the directory agent; the proposal agents, representing the different alternatives for a decision (the alternatives are evaluated based on criteria); and the voting agent, responsible for all voting mechanisms. At this stage a two-phase algorithm is proposed. In the first phase each participant agent makes its own evaluation of the proposals under discussion, and the voting agent proposes a simulation of a voting process. In the second phase, after the dissemination of the voting results, each of the participant agents argues to convince the others to choose one of the possible alternatives. The arguments used to convince a specific participant depend on the agent's knowledge about that participant. This two-phase algorithm is applied iteratively.
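A toy rendering of the iterated two-phase loop, with invented scoring and persuasion rules (the paper's agents use criteria-based evaluation and knowledge-driven argumentation, which this sketch reduces to numeric scores and a fixed nudge):

```python
import random

def simulate_meeting(n_agents=4, proposals=("A", "B", "C"), rounds=3, seed=0):
    """Phase 1: each agent votes for its best-scored proposal and the
    voting agent tallies a simulated vote. Phase 2: each agent argues
    for its favourite, nudging the other agents' scores. Repeat."""
    rng = random.Random(seed)
    # Each participant agent holds a private score per proposal.
    scores = {a: {p: rng.random() for p in proposals} for a in range(n_agents)}
    winner = None
    for _ in range(rounds):
        # Phase 1: simulated voting round.
        votes = [max(proposals, key=scores[a].get) for a in range(n_agents)]
        winner = max(set(votes), key=votes.count)
        # Phase 2: argumentation nudges the other agents' evaluations.
        for a in range(n_agents):
            fav = max(proposals, key=scores[a].get)
            for b in range(n_agents):
                if b != a:
                    scores[b][fav] += 0.05  # persuasion strength (assumed)
    return winner

winner = simulate_meeting()
```

In the paper's model the phase-2 nudge would instead be an argument tailored to what the arguing agent knows about each listener, rather than a uniform score increment.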
Abstract:
The growing heterogeneity of networks, devices and consumption conditions calls for flexible and adaptive video coding solutions. The compression power of the HEVC standard and the benefits of the distributed video coding paradigm allow the design of novel scalable coding solutions with improved error robustness and low encoding complexity while still achieving competitive compression efficiency. In this context, this paper proposes a novel scalable video coding scheme using an HEVC Intra compliant base layer and a distributed coding approach in the enhancement layers (EL). This design inherits the HEVC compression efficiency while providing low encoding complexity at the enhancement layers. The temporal correlation is exploited at the decoder to create the EL side information (SI) residue, an estimation of the original residue. The EL encoder sends only the data that cannot be inferred at the decoder, thus exploiting the correlation between the original and SI residues; however, this correlation must be characterized with an accurate correlation model to obtain coding efficiency improvements. Therefore, this paper proposes a correlation modeling solution to be used at both encoder and decoder, without requiring a feedback channel. Experimental results confirm that the proposed scalable coding scheme has lower encoding complexity and provides BD-Rate savings of up to 3.43% in comparison with the HEVC Intra scalable extension under development. © 2014 IEEE.
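A common practice in distributed video coding, consistent with the correlation modeling discussed above, is to model the mismatch between the original and SI residues as Laplacian and estimate its scale from the sample variance. The sketch below does this on synthetic residues; all signals and parameter values are invented, and this is not the paper's specific model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in signals: "original" EL residue and its decoder-side
# estimate (SI residue); their gap is the correlation noise.
orig_residue = rng.normal(0.0, 4.0, 10_000)
si_residue = orig_residue + rng.laplace(0.0, 2.0, 10_000)

# Model the noise as Laplacian f(x) = (alpha/2) * exp(-alpha*|x|) and
# estimate alpha from the sample variance: alpha = sqrt(2 / var).
noise = si_residue - orig_residue
alpha = np.sqrt(2.0 / noise.var())
```

A sharper (larger) alpha means the SI residue is a better estimate, so fewer parity bits are needed; estimating alpha identically at encoder and decoder is what removes the need for a feedback channel.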