989 results for fixed path methods



BACKGROUND: Numerous studies have examined determinants leading to the preponderance of women in major depressive disorder (MDD), which is particularly accentuated for the atypical depression subtype. It is thus of interest to explore the specific indirect effects influencing the association between sex and established depression subtypes. METHODS: The data of 1624 subjects with a lifetime diagnosis of MDD derived from the population-based PsyCoLaus data were used. An atypical (n=256), a melancholic (n=422), a combined atypical and melancholic features subtype (n=198), and an unspecified MDD group (n=748) were constructed according to the DSM-IV specifiers. Path models with direct and indirect effects were applied to the data. RESULTS: Partial mediation of the female-related atypical and combined atypical-melancholic depression subtypes was found. Early anxiety disorders and high emotion-oriented coping acted as mediating variables between sex and the atypical depression subtype. In contrast, high Body Mass Index (BMI) served as a suppression variable, both for this association and for the association between sex and the combined atypical-melancholic subtype. The latter association was additionally mediated by an early age of MDD onset and early/late anxiety disorders. LIMITATIONS: The use of cross-sectional data does not allow causal conclusions. CONCLUSIONS: This is the first study that provides evidence for a differentiation, by depression subtype, of the general mechanisms explaining sex differences in overall MDD. Determinants affecting the pathways begin early in life. Since some of them are primarily of a behavioral nature, the present findings could be a valuable target in mental health care.
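The mediation analysis described above rests on the product-of-coefficients decomposition of a path model: the indirect effect is the path from sex to the mediator times the path from the mediator to the outcome. The sketch below illustrates that decomposition on synthetic data; the variable names and effect sizes are invented for illustration and have nothing to do with the PsyCoLaus dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic illustration (not the PsyCoLaus data): sex -> early anxiety -> subtype risk
sex = rng.integers(0, 2, n).astype(float)                   # 0 = male, 1 = female
anxiety = 0.5 * sex + rng.normal(0, 1, n)                   # mediator
subtype = 0.3 * sex + 0.4 * anxiety + rng.normal(0, 1, n)   # outcome score

def ols(y, *xs):
    """Least-squares coefficients (intercept first)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(anxiety, sex)[1]                 # path sex -> mediator
b = ols(subtype, sex, anxiety)[2]        # path mediator -> outcome, adjusting for sex
c_prime = ols(subtype, sex, anxiety)[1]  # direct effect of sex

indirect = a * b                         # indirect (mediated) effect
total = ols(subtype, sex)[1]             # total effect: equals direct + indirect for OLS

print(round(indirect, 2), round(c_prime, 2), round(total, 2))
```

For linear models the decomposition is exact: the total effect equals the direct effect plus the indirect effect, which is what makes the "partial mediation" reading above possible.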


Coverage Path Planning (CPP) is the task of determining a path that passes over all points of an area or volume of interest while avoiding obstacles. This task is integral to many robotic applications, such as vacuum cleaning robots, painter robots, autonomous underwater vehicles creating image mosaics, demining robots, lawn mowers, automated harvesters, window cleaners and inspection of complex structures, just to name a few. A considerable body of research has addressed the CPP problem. However, no updated surveys on CPP reflecting recent advances in the field have been presented in the past ten years. In this paper, we present a review of the most successful CPP methods, focusing on the achievements made in the past decade. Furthermore, we discuss reported field applications of the described CPP methods. This work aims to become a starting point for researchers who are initiating their endeavors in CPP. Likewise, it aims to present a comprehensive review of the recent breakthroughs in the field, providing links to the most interesting and successful works.
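As a toy illustration of what a CPP method must produce, the sketch below generates a simple back-and-forth (boustrophedon) sweep over a grid with an obstacle. It is only a minimal example of the cellular-sweep idea behind many CPP methods, not any specific algorithm from the survey.

```python
def boustrophedon_path(rows, cols, obstacles=frozenset()):
    """Back-and-forth (boustrophedon) coverage path on a rows x cols grid.

    Sweeps each row, alternating direction so consecutive cells stay
    adjacent; obstacle cells are skipped. A toy sketch of the coverage
    idea, not a complete CPP algorithm (it does not plan detours around
    obstacles, it merely omits them).
    """
    path = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            if (r, c) not in obstacles:
                path.append((r, c))
    return path

path = boustrophedon_path(3, 4, obstacles={(1, 1)})
print(len(path))  # covers 11 of the 12 cells
```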


Low quality mine drainage from tailings facilities persists as one of the most significant global environmental concerns related to sulphide mining. Due to the large variation in geological and environmental conditions at mine sites, universal approaches to the management of mine drainage are not always applicable. Instead, site-specific knowledge of the geochemical behaviour of waste materials is required for the design and closure of the facilities. In this thesis, tailings-derived water contamination and the factors causing the pollution were investigated at two coeval active sulphide mine sites in Finland: the Hitura Ni mine and the Luikonlahti Cu-Zn-Co-Ni mine and talc processing plant. A hydrogeochemical study was performed to characterise the tailings-derived water pollution at Hitura. Geochemical changes in the Hitura tailings were evaluated with a detailed mineralogical and geochemical investigation (solid-phase speciation, acid mine drainage potential, pore water chemistry) and using a spatial assessment to identify the mechanisms of water contamination. A similar spatial investigation, applying selective extractions, was carried out in the Luikonlahti tailings area for comparative purposes (Hitura low-sulphide tailings vs. Luikonlahti sulphide-rich tailings). At both sites, the hydrogeochemistry of tailings seepage waters was further characterised to examine the net results of the processes observed within the impoundments and to identify constraints for water treatment. At Luikonlahti, annual and seasonal variation in effluent quality was evaluated based on a four-year monitoring period. Observations pertinent to future assessment and mine drainage prevention in existing and future tailings facilities were presented based on the results. A combination of hydrogeochemical approaches provided a means to delineate the tailings-derived neutral mine drainage at Hitura.
Tailings effluents with elevated Ni, SO₄²⁻ and Fe content had dispersed to the surrounding aquifer through a levelled-out esker and underneath the seepage collection ditches. In future mines, this could be avoided with additional basal liners in tailings impoundments where the permeability of the underlying Quaternary deposits is inadequate, and with sufficiently deep ditches. Based on the studies, extensive sulphide oxidation with subsequent metal release may already begin during active tailings disposal. The intensity and onset of oxidation depended on e.g. the Fe sulphide content of the tailings, the water saturation level, and the time of exposure of fresh sulphide grains. Continuous disposal decreased sulphide weathering at the surface of low-sulphide tailings, but oxidation began if they were left uncovered after disposal ceased. In the sulphide-rich tailings, delayed burial of the unsaturated tailings had resulted in thick oxidized layers, despite the continuous operation. Sulphide weathering and contaminant release also occurred in the border zones. Based on the results, the prevention of sulphide oxidation should already be considered in the planning of tailings disposal, taking into account the border zones. Moreover, even low-sulphide tailings should be covered without delay after active disposal ceases. The quality of tailings effluents showed wide variation within a single impoundment and between the two different types of tailings facilities assessed. The affecting factors included the source materials, the intensity of weathering of tailings and embankment materials along the seepage flow path, inputs from the process waters, the water retention time in tailings, and climatic seasonality. In addition, modifications to the tailings impoundment may markedly change the effluent quality. The wide variation in tailings effluent quality poses challenges for treatment design.
The final decision on water management requires quantification of the spatial and seasonal fluctuation at the site, taking into account changes resulting from the eventual closure of the impoundment. Overall, comprehensive hydrogeochemical mapping was deemed essential in the identification of critical contaminants and their sources at mine sites. Mineralogical analysis, selective extractions, and pore water analysis were a good combination of methods for studying the weathering of tailings and for evaluating metal mobility from the facilities. Selective extractions with visual observations and pH measurements of tailings solids were, nevertheless, adequate in describing the spatial distribution of sulphide oxidation in tailings impoundments. Seepage water chemistry provided additional data on geochemical processes in tailings and was necessary for defining constraints for water treatment.


The simultaneous determination of two or more active components in pharmaceutical preparations, without previous chemical separation, is a common analytical problem. Published works describe the determination of AZT and 3TC separately, as raw material or in different pharmaceutical preparations. In this work, a method using UV spectroscopy and multivariate calibration is described for the simultaneous measurement of 3TC and AZT in fixed dose combinations. The methodology was validated and applied to determine the AZT+3TC contents in tablets from five different manufacturers, as well as their dissolution profile. The results obtained employing the proposed methodology were similar to those of methods using the first-derivative technique and HPLC.
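The core idea of multivariate calibration on UV spectra is that a mixture spectrum is (approximately) a linear combination of the pure-component spectra, so both concentrations can be recovered at once by least squares. The sketch below shows this classical-least-squares variant on synthetic Gaussian "spectra"; the band positions and values are invented, not real AZT/3TC absorptivities, and the published method's validated calibration is not reproduced.

```python
import numpy as np

# Synthetic pure-component "spectra" (absorptivity vs. wavelength) for two drugs.
# Peak positions and widths are illustrative only.
wavelengths = np.linspace(220, 300, 41)
eps_3tc = np.exp(-((wavelengths - 270) / 12) ** 2)   # hypothetical 3TC band
eps_azt = np.exp(-((wavelengths - 245) / 10) ** 2)   # hypothetical AZT band
K = np.column_stack([eps_3tc, eps_azt])              # pure-spectra matrix

# Beer's law for a mixture: A = K @ c (path length folded into K)
c_true = np.array([0.8, 1.2])                        # "unknown" concentrations
A = K @ c_true + np.random.default_rng(1).normal(0, 1e-3, len(wavelengths))

# Classical least squares: recover both concentrations from one mixture spectrum
c_est, *_ = np.linalg.lstsq(K, A, rcond=None)
print(np.round(c_est, 2))
```

Using the full spectrum over-determines the two unknowns, which is what lets the method tolerate noise and overlapping bands without a prior chemical separation.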


Three sensitive spectrophotometric methods are presented for the determination of finasteride in bulk drug and in tablets. The methods rely on the use of a bromate-bromide reagent and three dyes, namely methyl orange, indigo carmine and thymol blue. They involve the addition of a measured excess of bromate-bromide reagent to finasteride in acid medium and, after the bromination reaction is judged to be complete, determination of the unreacted bromine by reaction with a fixed amount of either methyl orange (absorbance measured at 520 nm, method A), indigo carmine (610 nm, method B) or thymol blue (550 nm, method C). In all the methods, the amount of in situ generated bromine that reacts corresponds to the amount of finasteride. The absorbance measured at the respective wavelength is found to increase linearly with the concentration of finasteride. Beer's law is obeyed in the ranges 0.25-2.0, 0.5-6.0 and 1-12 µg mL⁻¹ for methods A, B and C, respectively. The calculated molar absorptivity values are 5.7×10⁴, 3.12×10⁴ and 1.77×10⁴ L mol⁻¹ cm⁻¹, respectively, and the corresponding Sandell sensitivity values are 0.0065, 0.012 and 0.021 µg cm⁻². The limits of detection (LOD) and quantification (LOQ) are also reported for all the methods. Accuracy and intra-day and inter-day precision were established according to the current ICH guidelines. The methods were successfully applied to the determination of finasteride in commercially available tablets, and the results were found to agree closely with the label claim. The results were statistically compared with those of a reference method by applying Student's t-test and the F-test. The accuracy and reliability of the methods were further confirmed by recovery tests via the standard addition procedure.
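The reported figures of merit follow from the Beer-Lambert law, A = εbc. The sketch below converts an absorbance into a concentration for method A; the molar mass of finasteride (~372.5 g/mol) is an assumption added here for the unit conversion, and the closing check simply verifies that the quoted Sandell sensitivity is consistent with M/ε.

```python
# Beer-Lambert law: A = eps * b * c, so c = A / (eps * b).
# eps is the molar absorptivity reported above for method A; the molar
# mass of finasteride (~372.5 g/mol) is an assumption added here.

eps = 5.7e4          # L mol^-1 cm^-1 (method A)
b = 1.0              # cm, path length
molar_mass = 372.5   # g/mol (approximate)

def conc_ug_per_ml(absorbance):
    c_molar = absorbance / (eps * b)   # mol/L
    return c_molar * molar_mass * 1e3  # g/L -> ug/mL is a factor of 1000

# An absorbance of ~0.15 corresponds to ~1 ug/mL, inside method A's
# 0.25-2.0 ug/mL Beer's-law range.
print(round(conc_ug_per_ml(0.15), 2))  # 0.98

# Consistency check: Sandell sensitivity (ug/cm^2) = M / eps
print(round(molar_mass / eps, 4))      # ~0.0065, matching the reported value
```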


The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines, e.g. cranes and excavators. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines. Efficient dynamic simulation is a basic requirement for real-time simulation. In the real-time simulation of fluid power circuits, numerical problems arise from the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations. Efficient numerical methods are required since the differential equations must be solved in real time. Unfortunately, simulation software packages offer only a limited selection of numerical solvers. Numerical problems introduce noise into the results, which in many cases causes the simulation run to fail. Mathematically, fluid power circuit models are stiff systems of ordinary differential equations. Numerical solution of stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suitable for solving stiff systems. The second is to decrease the model stiffness itself by introducing models and algorithms that either decrease the highest eigenvalues or neglect them by introducing steady-state solutions of the stiff parts of the models. The thesis proposes novel methods using the latter approach. The study aims to develop practical methods usable in the dynamic simulation of fluid power circuits using explicit fixed-step integration algorithms. In this thesis, two mechanisms which make the system stiff are studied. These are the pressure drop approaching zero in the turbulent orifice model and the volume approaching zero in the equation of pressure build-up.
These are the critical areas for which alternative methods for modelling and numerical simulation are proposed. Generally, in hydraulic power transmission systems the orifice flow is clearly in the turbulent area. The flow becomes laminar as the pressure drop over the orifice approaches zero only in rare situations: e.g. when a valve is closed, when an actuator is driven against an end stopper, or when an external force makes the actuator switch its direction during operation. This means that in terms of accuracy, the description of laminar flow is not necessary. Unfortunately, however, when a purely turbulent description of the orifice is used, numerical problems occur when the pressure drop comes close to zero, since the first derivative of flow with respect to the pressure drop approaches infinity as the pressure drop approaches zero. Furthermore, the second derivative becomes discontinuous, which causes numerical noise and an infinitely small integration step when a variable-step integrator is used. A numerically efficient model for the orifice flow is proposed using a cubic spline function to describe the flow in the laminar and transition areas. Parameters for the cubic spline function are selected such that its first derivative is equal to the first derivative of the pure turbulent orifice flow model at the boundary. In the dynamic simulation of fluid power circuits, a trade-off exists between accuracy and calculation speed; this trade-off is investigated for the two-regime orifice flow model. Very small volumes exist especially inside many types of valves, as well as between them. The integration of pressures in small fluid volumes causes numerical problems in fluid power circuit simulation. Particularly in real-time simulation, these numerical problems are a great weakness. The system stiffness approaches infinity as the fluid volume approaches zero.
If fixed-step explicit algorithms for solving ordinary differential equations (ODEs) are used, stability is easily lost when integrating pressures in small volumes. To solve the problem caused by small fluid volumes, a pseudo-dynamic solver is proposed. Instead of integrating the pressure in a small volume, the pressure is solved as a steady-state pressure created in a separate cascade loop by numerical integration. The hydraulic capacitance V/Bₑ of the parts of the circuit whose pressures are solved by the pseudo-dynamic method should be orders of magnitude smaller than that of the parts whose pressures are integrated. The key advantage of this novel method is that the numerical problems caused by the small volumes are completely avoided. Also, the method is freely applicable regardless of the integration routine applied. A strength of both above-mentioned methods is that they are suited for use together with the semi-empirical modelling method, which does not necessarily require any geometrical data of the valves and actuators to be modelled. In this modelling method, most of the needed component information can be taken from the manufacturer's nominal graphs. This thesis introduces the methods and shows several numerical examples to demonstrate how the proposed methods improve the dynamic simulation of various hydraulic circuits.
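The cubic smoothing of the orifice model can be sketched as follows: above a transition pressure the usual turbulent law Q = K·sqrt(Δp) is used, and below it an odd cubic polynomial whose value and first derivative match the turbulent law at the transition, so the flow gain stays finite at zero pressure drop. The constants K and p_tr below are illustrative, not values from the thesis.

```python
import math

def orifice_flow(dp, K=2e-8, p_tr=1e5):
    """Signed orifice flow with cubic smoothing near zero pressure drop.

    For |dp| >= p_tr the turbulent law Q = K*sqrt(|dp|) is used (signed);
    for |dp| < p_tr an odd cubic a*x + b*x^3 replaces the square root.
    The coefficients are chosen so that value and first derivative are
    continuous at dp = +/- p_tr, which removes the infinite derivative
    at dp = 0 that breaks explicit fixed-step integrators.
    K and p_tr are illustrative values, not taken from the thesis.
    """
    s = math.copysign(1.0, dp)
    x = abs(dp)
    if x >= p_tr:
        return s * K * math.sqrt(x)
    # Matching Q(p_tr) = K*sqrt(p_tr) and Q'(p_tr) = K/(2*sqrt(p_tr)) gives:
    a = 5 * K / (4 * math.sqrt(p_tr))
    b = -K / (4 * p_tr ** 2.5)
    return s * (a * x + b * x ** 3)

# Continuity check at the regime boundary
q_in = orifice_flow(1e5 - 1e-6)
q_out = orifice_flow(1e5 + 1e-6)
print(abs(q_in - q_out) < 1e-12)  # True: no jump at the transition
```

The slope at dp = 0 is the finite constant a, instead of the unbounded K/(2·sqrt(dp)) of the pure turbulent model, which is exactly the stiffness-reduction mechanism the thesis describes.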


The drug discovery process is facing new challenges in the evaluation of lead compounds as the number of newly synthesized compounds increases. The potential of test compounds is most frequently assayed through the binding of the test compound to the target molecule or receptor, or by measuring functional secondary effects caused by the test compound in target model cells, tissues or organisms. Modern homogeneous high-throughput-screening (HTS) assays for purified estrogen receptors (ER) utilize various luminescence-based detection methods. Fluorescence polarization (FP) is a standard method for ER ligand binding assays. It was used to demonstrate the performance of two-photon excitation of fluorescence (TPFE) vs. the conventional one-photon excitation method. As a result, the TPFE method showed improved dynamics, was found to be comparable with the conventional method, and also held potential for efficient miniaturization. Other luminescence-based ER assays utilize energy transfer from a long-lifetime luminescent label, e.g. a lanthanide chelate (Eu, Tb), to a prompt luminescent label, the signal being read in a time-resolved mode. As an alternative to this method, a new single-label (Eu) time-resolved detection method was developed, based on the quenching of the label by a soluble quencher molecule when displaced from the receptor to the solution phase by an unlabeled competing ligand. The new method was compared with the standard FP method: it was shown to yield comparable results and to hold a significantly higher signal-to-background ratio than FP. Cell-based functional assays for determining the extent of cell surface adhesion molecule (CAM) expression, combined with microscopy analysis of the target molecules, would provide improved information content compared to an expression level assay alone.
In this work, an immune response was simulated by exposing endothelial cells to cytokine stimulation, and the resulting increase in the level of adhesion molecule expression was analyzed on fixed cells by means of immunocytochemistry utilizing specific long-lifetime luminophore-labeled antibodies against chosen adhesion molecules. Results showed that the method was capable of use in a multi-parametric assay for the protein expression levels of several CAMs simultaneously, combined with analysis of the cellular localization of the chosen adhesion molecules through time-resolved luminescence microscopy inspection.


The purpose of this thesis is twofold. The first and major part is devoted to sensitivity analysis of various discrete optimization problems, while the second part addresses methods applied for calculating measures of solution stability and solving multicriteria discrete optimization problems. Despite numerous approaches to stability analysis of discrete optimization problems, two major directions can be singled out: quantitative and qualitative. Qualitative sensitivity analysis is conducted for multicriteria discrete optimization problems with minisum, minimax and minimin partial criteria. The main results obtained here are necessary and sufficient conditions for different stability types of optimal solutions (or a set of optimal solutions) of the considered problems. Within the quantitative direction, various measures of solution stability are investigated. A formula for a quantitative characteristic called the stability radius is obtained for the generalized equilibrium situation invariant to changes of game parameters in the case of the Hölder metric. Quality of the problem solution can also be described in terms of robustness analysis. In this work, the concepts of accuracy and robustness tolerances are presented for a strategic game with a finite number of players where the initial coefficients (costs) of linear payoff functions are subject to perturbations. Investigation of the stability radius also aims to devise methods for its calculation. A new metaheuristic approach is derived for the calculation of the stability radius of an optimal solution to the shortest path problem. The main advantage of the developed method is that it is potentially applicable for calculating the stability radii of NP-hard problems. The last chapter of the thesis focuses on deriving innovative methods, based on an interactive optimization approach, for solving multicriteria combinatorial optimization problems.
The key idea of the proposed approach is to utilize a parameterized achievement scalarizing function for solution calculation and to direct the interactive procedure by changing the weighting coefficients of this function. To illustrate the introduced ideas, a decision-making process is simulated for a three-objective median location problem. The concepts, models, and ideas collected and analyzed in this thesis create a good and relevant foundation for developing more complicated and integrated models of post-optimal analysis and for solving the most computationally challenging problems related to it.
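A generic (Wierzbicki-type) achievement scalarizing function of the kind referred to above can be sketched as follows; the weights, reference point and candidate solutions are invented for illustration and are not taken from the thesis.

```python
def asf(f, ref, weights, rho=1e-6):
    """Weighted achievement scalarizing function (Wierzbicki-type).

    Maps an objective vector f (minimization) to a scalar relative to a
    reference point; minimizing it over the feasible set yields a weakly
    Pareto optimal solution. The small augmentation term rho*sum avoids
    weakly dominated ties. Changing `weights` redirects the interactive
    search, as in the procedure described above.
    """
    terms = [w * (fi - ri) for fi, ri, w in zip(f, ref, weights)]
    return max(terms) + rho * sum(terms)

# Three candidate solutions of a tri-objective minimization problem
candidates = [(4.0, 2.0, 7.0), (3.0, 3.0, 6.0), (5.0, 1.0, 8.0)]
ref = (3.0, 2.0, 6.0)

best = min(candidates, key=lambda f: asf(f, ref, (1.0, 1.0, 1.0)))
print(best)  # (3.0, 3.0, 6.0)
```

Re-running the selection with different weights, e.g. (0.1, 1.0, 0.1), picks a different candidate, which is exactly how the interactive procedure steers the search.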


The diagnosis of Mycoplasma hyopneumoniae infection is often performed through histopathology, immunohistochemistry (IHC) and polymerase chain reaction (PCR), or a combination of these techniques. PCR can be performed on samples using several conservation methods, including swabs, frozen tissue or formalin-fixed and paraffin-embedded (FFPE) tissue. However, the formalin fixation process often inhibits DNA amplification. To evaluate whether M. hyopneumoniae DNA could be recovered from FFPE tissues, 15 lungs with cranioventral consolidation lesions were collected in a slaughterhouse from swine bred in herds with respiratory disease. Bronchial swabs and fresh lung tissue were collected, and a fragment of the corresponding lung section was placed in neutral buffered formalin for 48 hours. A PCR assay was performed to compare FFPE tissue samples with samples that were only refrigerated (bronchial swabs) or frozen (tissue pieces). M. hyopneumoniae was detected by PCR in all 15 samples of the swab and frozen tissue, while it was detected in only 11 of the 15 FFPE samples. Histological features of M. hyopneumoniae infection were present in 11 cases, and 7 of these samples stained positive by IHC. Concordance between the histological features and detection results was observed in 13 of the FFPE tissue samples. PCR was the most sensitive technique. Comparison of the different sample conservation methods indicated that it is possible to detect M. hyopneumoniae from FFPE tissue. It is important to conduct further research using archived material because the efficiency of PCR could be compromised under these conditions.


Acid mine drainage is considered one of the most significant environmental pollution problems worldwide due to the extensive formation of acidic leachates containing heavy metals. Adsorption is a widely used method in water treatment owing to its easy operation and the availability of a wide variety of low-cost commercial adsorbents. The primary goal of this thesis was to investigate the efficiency of the neutralizing agents CaCO3 and CaSiO3 and of unmodified limestone from the company Nordkalk Oy as metal adsorption materials. In addition, the side materials of limestone mining were tested for iron adsorption from an acidic model solution. This study was executed at Lappeenranta University of Technology, Finland. The work utilised a fixed-bed adsorption column as the main equipment, together with a large fluidized column. Atomic absorption spectroscopy (AAS) and X-ray diffraction (XRD) were used to determine ferric removal and the composition of the materials, respectively. The results suggest a high potential for the studied materials to be used as low-cost adsorbents in acid mine drainage treatment. Of the two studied adsorbents, the FS material was more suitable than the Gotland material. Based on the findings, it is recommended that further studies include a detailed analysis of the Gotland materials.


Preparative liquid chromatography is one of the most selective separation techniques in the fine chemical, pharmaceutical, and food industries. Several process concepts have been developed and applied for improving the performance of classical batch chromatography. The most powerful approaches include various single-column recycling schemes, counter-current and cross-current multi-column setups, and hybrid processes where chromatography is coupled with other unit operations such as crystallization, a chemical reactor, and/or a solvent removal unit. To fully utilize the potential of stand-alone and integrated chromatographic processes, efficient methods for selecting the best process alternative as well as optimal operating conditions are needed. In this thesis, a unified method is developed for the analysis and design of the following single-column fixed-bed processes and corresponding cross-current schemes: (1) batch chromatography, (2) batch chromatography with an integrated solvent removal unit, (3) mixed-recycle steady state recycling chromatography (SSR), and (4) mixed-recycle steady state recycling chromatography with solvent removal from fresh feed, recycle fraction, or column feed (SSR–SR). The method is based on the equilibrium theory of chromatography with an assumption of negligible mass transfer resistance and axial dispersion. The design criteria are given in a general, dimensionless form that is formally analogous to that applied widely in the so-called triangle theory of counter-current multi-column chromatography. Analytical design equations are derived for binary systems that follow the competitive Langmuir adsorption isotherm model. For this purpose, the existing analytic solution of the ideal model of chromatography for binary Langmuir mixtures is completed by deriving the missing explicit equations for the height and location of the pure first-component shock in the case of a small feed pulse.
It is thus shown that the entire chromatographic cycle at the column outlet can be expressed in closed form. The developed design method allows predicting the feasible range of operating parameters that lead to desired product purities. It can be applied for the calculation of first estimates of optimal operating conditions, the analysis of process robustness, and the early-stage evaluation of different process alternatives. The design method is utilized to analyse the possibility of enhancing the performance of conventional SSR chromatography by integrating it with a solvent removal unit. It is shown that the amount of fresh feed processed during a chromatographic cycle, and thus the productivity of the SSR process, can be improved by removing solvent. The maximum solvent removal capacity depends on the location of the solvent removal unit and the physical solvent removal constraints, such as solubility, viscosity, and/or osmotic pressure limits. Usually, the most flexible option is to remove solvent from the column feed. The applicability of the equilibrium design for real, non-ideal separation problems is evaluated by means of numerical simulations. Due to the assumption of infinite column efficiency, the developed design method is most applicable for high-performance systems where thermodynamic effects are predominant, while significant deviations are observed under highly non-ideal conditions. The findings based on the equilibrium theory are applied to develop a shortcut approach for the design of chromatographic separation processes under strongly non-ideal conditions with significant dispersive effects. The method is based on a simple procedure applied to a single conventional chromatogram. The applicability of the approach for the design of batch and counter-current simulated moving bed processes is evaluated with case studies. It is shown that the shortcut approach works better the higher the column efficiency and the lower the purity constraints are.
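The competitive Langmuir isotherm underlying the analytical design equations has the closed form q_i = H_i·c_i / (1 + b_1·c_1 + b_2·c_2): each component's loading is suppressed by the presence of the other. A minimal sketch with illustrative (not fitted) parameters:

```python
def langmuir_competitive(c1, c2, H1=2.0, H2=3.0, b1=0.05, b2=0.1):
    """Competitive Langmuir adsorption isotherm for a binary mixture:

        q_i = H_i * c_i / (1 + b1*c1 + b2*c2)

    H_i are the Henry constants and b_i the equilibrium parameters.
    The numbers here are illustrative, not fitted to any real system.
    Returns the solid-phase loadings (q1, q2).
    """
    denom = 1.0 + b1 * c1 + b2 * c2
    return H1 * c1 / denom, H2 * c2 / denom

q1, q2 = langmuir_competitive(1.0, 1.0)
q1_alone, _ = langmuir_competitive(1.0, 0.0)
print(q1 < q1_alone)  # True: the second component competes for adsorption sites
```

This coupling between the components is what makes the binary design problem non-trivial and motivates the shock-tracking equations derived in the thesis.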


Over time the demand for quantitative portfolio management has increased among financial institutions, but there is still a lack of practical tools. In 2008 the EDHEC Risk and Asset Management Research Centre conducted a survey of European investment practices. It revealed that the majority of asset or fund management companies, pension funds and institutional investors do not use more sophisticated models to compensate for the flaws of Markowitz mean-variance portfolio optimization. Furthermore, tactical asset allocation managers employ a variety of methods to estimate the return and risk of assets, but also need sophisticated portfolio management models to outperform their benchmarks. Recent development in portfolio management suggests that new innovations are slowly gaining ground, but still need to be studied carefully. This thesis tries to provide a practical tactical asset allocation (TAA) application of the Black–Litterman (B–L) approach and an unbiased evaluation of the B–L model's qualities. The mean-variance framework, issues related to asset allocation decisions, and return forecasting are examined carefully to uncover issues affecting active portfolio management. European fixed income data is employed in an empirical study that tries to reveal whether a B–L model based TAA portfolio is able to outperform its strategic benchmark. The tactical asset allocation utilizes a Vector Autoregressive (VAR) model to create return forecasts from lagged values of asset classes as well as economic variables. The sample data (31.12.1999–31.12.2012) is divided into two parts: in-sample data is used for calibrating a strategic portfolio, and the out-of-sample period is for testing the tactical portfolio against the strategic benchmark. Results show that the B–L model based tactical asset allocation outperforms the benchmark portfolio in terms of risk-adjusted return and mean excess return.
The VAR model is able to pick up changes in investor sentiment, and the B–L model adjusts portfolio weights in a controlled manner. The TAA portfolio shows promise especially in moderately shifting the allocation to riskier assets while the market is turning bullish, but without overweighting investments with high beta. Based on the findings of this thesis, the Black–Litterman model offers a good platform for active asset managers to quantify their views on investments and implement their strategies. The B–L model shows potential and offers interesting research avenues. However, the success of tactical asset allocation is still highly dependent on the quality of the input estimates.
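At the heart of the B–L approach is the posterior-return formula that blends equilibrium returns with investor views in proportion to their precisions. A minimal sketch with invented numbers (two assets, one absolute view; none of the figures come from the thesis data):

```python
import numpy as np

# Minimal Black-Litterman posterior-return sketch (illustrative numbers).
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.02]])    # asset return covariance
pi = np.array([0.05, 0.03])        # implied equilibrium returns
tau = 0.05                         # uncertainty scaling of the prior
P = np.array([[1.0, 0.0]])         # one absolute view on asset 0 ...
q = np.array([0.07])               # ... forecasting a 7% return
Omega = np.array([[0.01]])         # view uncertainty

# Posterior mean: [(tau*Sigma)^-1 + P' Omega^-1 P]^-1 [(tau*Sigma)^-1 pi + P' Omega^-1 q]
inv_tS = np.linalg.inv(tau * Sigma)
inv_Om = np.linalg.inv(Omega)
post_cov = np.linalg.inv(inv_tS + P.T @ inv_Om @ P)
mu_bl = post_cov @ (inv_tS @ pi + P.T @ inv_Om @ q)

print(np.round(mu_bl, 4))  # posterior pulled from pi toward the view
```

Note how the view on asset 0 also lifts the posterior return of asset 1 through the positive covariance, which is the "controlled manner" of weight adjustment mentioned above.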


Motivation for speaker recognition work is presented in the first part of the thesis. An exhaustive survey of past work in this field is also presented. A low-cost system not involving complex computation has been chosen for implementation. Towards achieving this, a PC-based system is designed and developed. A front-end 12-bit analog-to-digital converter is built and interfaced to a PC. Software to control the ADC and to perform various analytical functions, including feature vector evaluation, is developed. It is shown that a fixed set of phrases incorporating evenly balanced phonemes is aptly suited for the speaker recognition work at hand. A set of phrases is chosen for recognition. Two new methods are adopted for feature evaluation. Some new measurements, involving a symmetry check method for pitch period detection and ACE, are used as features. Arguments are provided to show the need for a new model of speech production. Starting from heuristics, a knowledge-based (KB) speech production model is presented. In this model, a KB provides impulses to a voice-producing mechanism and constant correction is applied via a feedback path. It is this correction that differs from speaker to speaker. Methods of defining measurable parameters for use as features are described. Algorithms for speaker recognition are developed and implemented. Two methods are presented. The first is based on the model postulated: here the entropy of the utterance of a phoneme is evaluated, and the transitions of voiced regions are used as speaker-dependent features. The second method uses features found in other works, but evaluated differently. A knock-out scheme is used to provide the weighting values for the selection of features. Results of implementation are presented which show an average of 80% recognition. It is also shown that if there are long gaps between sessions, the performance deteriorates and is speaker dependent.
Cross-recognition percentages are also presented; these rise to 30% in the worst case, while the best case is 0%. Suggestions for further work are given in the concluding chapter.
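The thesis uses a symmetry check method for pitch period detection. As a point of reference, the sketch below estimates the pitch period of a voiced frame with the standard autocorrelation approach, which is a textbook alternative shown only to illustrate what pitch period detection computes; it is not the method developed in the thesis.

```python
import math

def pitch_period(signal, min_lag=20, max_lag=200):
    """Estimate the pitch period (in samples) of a voiced frame.

    Standard autocorrelation method: the lag at which the frame best
    correlates with a shifted copy of itself is the pitch period. This is
    a textbook alternative to the thesis' symmetry-check method, shown
    purely for illustration.
    """
    n = len(signal)
    best_lag, best_r = min_lag, float("-inf")
    for lag in range(min_lag, max_lag + 1):
        r = sum(signal[i] * signal[i - lag] for i in range(lag, n))
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag

# A 100 Hz tone sampled at 8 kHz has a period of 8000/100 = 80 samples.
fs, f0 = 8000, 100
frame = [math.sin(2 * math.pi * f0 * i / fs) for i in range(800)]
print(pitch_period(frame))  # 80
```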


Decimal multiplication is an integral part of financial, commercial, and internet-based computations. A novel design for single-digit decimal multiplication that reduces the critical path delay and area for an iterative multiplier is proposed in this research. The partial products are generated using single-digit multipliers and are accumulated based on a novel RPS algorithm. This design uses n single-digit multipliers for an n × n multiplication. The latency for the multiplication of two n-digit Binary Coded Decimal (BCD) operands is (n + 1) cycles, and a new multiplication can begin every n cycles. The accumulation of the final partial products and the first iteration of partial product generation for the next set of inputs are done simultaneously. This iterative decimal multiplier offers low latency and high throughput, and can be extended for decimal floating-point multiplication.
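The iterative digit-by-digit scheme can be illustrated in software as follows. Each iteration forms one partial product from n single-digit multiplies and accumulates it with a decimal shift; the novel RPS accumulation of the proposed hardware design is not reproduced here, only the generic scheme it accelerates.

```python
def bcd_digits(x, n):
    """Little-endian decimal digits of x, padded to length n."""
    return [(x // 10 ** i) % 10 for i in range(n)]

def decimal_multiply(a, b, n):
    """n-digit decimal multiplication built from single-digit products.

    Each iteration multiplies one digit of b by every digit of a (the n
    single-digit multiplies of one cycle) and accumulates the decimally
    shifted partial product. This shows only the generic iterative
    scheme; the RPS accumulation of the proposed design is not modelled.
    """
    da, db = bcd_digits(a, n), bcd_digits(b, n)
    acc = 0
    for j, dj in enumerate(db):                  # one iteration per digit of b
        partial = sum(di * dj * 10 ** i for i, di in enumerate(da))
        acc += partial * 10 ** j                 # shift and accumulate
    return acc

print(decimal_multiply(9876, 5432, 4))  # 53646432
```

The n iterations of the loop correspond to the n cycles between successive multiplications in the hardware pipeline described above.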


The identification of chemical mechanisms that can exhibit oscillatory phenomena in reaction networks is currently of intense interest. In particular, the parametric question of the existence of Hopf bifurcations has gained increasing popularity due to its relation to the oscillatory behavior around fixed points. However, the detection of oscillations in high-dimensional systems and systems with constraints by the available symbolic methods has proven to be difficult. The development of new efficient methods is therefore required to tackle the complexity caused by the high dimensionality and non-linearity of these systems. In this thesis, we mainly present efficient algorithmic methods to detect Hopf bifurcation fixed points in (bio)chemical reaction networks with symbolic rate constants, thereby yielding information about the oscillatory behavior of the networks. The methods use representations of the systems in convex coordinates that arise from stoichiometric network analysis. One of the methods, called HoCoQ, reduces the problem of determining the existence of Hopf bifurcation fixed points to a first-order formula over the ordered field of the reals that can then be solved using computational-logic packages. The second method, called HoCaT, uses ideas from tropical geometry to formulate a more efficient method that is incomplete in theory but worked very well for the attempted high-dimensional models involving more than 20 chemical species. The instability of reaction networks may lead to oscillatory behaviour. Therefore, we investigate some criteria for their stability using convex coordinates and quantifier elimination techniques.
We also study Muldowney's extension of the classical Bendixson-Dulac criterion for excluding periodic orbits to higher dimensions for polynomial vector fields, and we discuss the use of simple conservation constraints and of parametric constraints for describing simple convex polytopes on which periodic orbits can be excluded by Muldowney's criteria. All developed algorithms have been integrated into a common software framework called PoCaB (platform to explore biochemical reaction networks by algebraic methods), allowing for automated computation workflows from the problem descriptions. PoCaB also contains a database for the algebraic entities computed from the models of chemical reaction networks.
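The linear-stability signature of a Hopf bifurcation fixed point is a pair of purely imaginary eigenvalues of the Jacobian. The sketch below checks this condition numerically on the classical Brusselator model (a standard textbook example, not one of the PoCaB models), whose fixed point undergoes a Hopf bifurcation at b = 1 + a²; the symbolic methods of the thesis answer the same question with exact, parametric rate constants.

```python
import numpy as np

def brusselator_jacobian(a, b):
    """Jacobian of the Brusselator x' = a - (b+1)x + x^2 y, y' = bx - x^2 y
    evaluated at its fixed point (a, b/a)."""
    return np.array([[b - 1.0, a * a],
                     [-b, -a * a]])

def has_hopf_pair(J, tol=1e-9):
    """True if J has a pair of (nearly) purely imaginary eigenvalues,
    the linear-stability signature of a Hopf bifurcation point."""
    lam = np.linalg.eigvals(J)
    return any(abs(l.real) < tol and abs(l.imag) > tol for l in lam)

a = 1.0
print(has_hopf_pair(brusselator_jacobian(a, 1 + a * a)))  # True at b = 1 + a^2
print(has_hopf_pair(brusselator_jacobian(a, 1.5)))        # False below the threshold
```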