59 results for the SIMPLE algorithm
Abstract:
The reverse engineering problem addressed in the present research consists of estimating the thicknesses and the optical constants of two thin films deposited on a transparent substrate using only transmittance data through the whole stack. No functional dispersion relation assumptions are made on the complex refractive index. Instead, minimal physical constraints are employed, as in previous works of some of the authors where only one film was considered in the retrieval algorithm. To our knowledge this is the first report on the retrieval of the optical constants and the thickness of multiple film structures using only transmittance data that does not make use of dispersion relations. The same methodology may be used if the available data correspond to normal reflectance. The software used in this work is freely available through the PUMA Project web page (http://www.ime.usp.br/~egbirgin/puma/). (C) 2008 Optical Society of America
Abstract:
A novel flow-based strategy for implementing simultaneous determinations of different chemical species reacting with the same reagent(s) at different rates is proposed and applied to the spectrophotometric catalytic determination of iron and vanadium in Fe-V alloys. The method relies on the influence of Fe(II) and V(IV) on the rate of iodide oxidation by Cr(VI) under acidic conditions; the Jones reducing agent is therefore needed. Three different plugs of the sample are sequentially inserted into an acidic KI reagent carrier stream, and a confluent Cr(VI) solution is added downstream. Overlap between the inserted plugs leads to a complex sample zone with several regions of maximal and minimal absorbance values. Measurements performed on these regions reveal the different degrees of reaction development and tend to be more precise. Data are treated by multivariate calibration involving the PLS algorithm. The proposed system is very simple and rugged. Two latent variables accounted for ca. 95% of the analytical information, and the results are in agreement with ICP-OES. (C) 2010 Elsevier B.V. All rights reserved.
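For context, a minimal sketch of the multivariate calibration step, assuming the absorbance values measured at the selected regions of the sample zone form the predictor matrix X and the known Fe and V concentrations form Y (the data layout, variable names and use of scikit-learn are illustrative assumptions, not the authors' implementation):

import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic placeholders: 20 calibration standards, 6 absorbance readings each.
rng = np.random.default_rng(0)
X = rng.random((20, 6))        # absorbance at the selected sample-zone regions
Y = rng.random((20, 2))        # known concentrations, columns: [Fe, V]

# Two latent variables, as in the abstract (ca. 95% of the analytical information).
pls = PLSRegression(n_components=2)
pls.fit(X, Y)

# Predict Fe and V in an unknown alloy from its absorbance profile.
x_unknown = rng.random((1, 6))
print(pls.predict(x_unknown))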
Abstract:
Traditionally, chronotype classification is based on the Morningness-Eveningness Questionnaire (MEQ). It is implicit in the classification that intermediate individuals get intermediate scores on most of the MEQ questions. However, a small group of individuals has a different pattern of answers: in some questions they answer as "morning-types" and in others as "evening-types," resulting in an intermediate total score. "Evening-type" and "morning-type" answers were set as A1 and A4, respectively; intermediate answers were set as A2 and A3. The following algorithm was applied: Bimodality Index = (ΣA1 x ΣA4)^2 - (ΣA2 x ΣA3)^2. Neither-types with positive bimodality scores were classified as bimodal. If our hypothesis is validated by objective data, an update of chronotype classification will be required. (Author correspondence: brunojm@ymail.com)
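A minimal sketch of how such an index could be computed, assuming ΣA1...ΣA4 denote the number of answers of each type given by a respondent (the coding and function below are illustrative, not the authors' code):

def bimodality_index(answers):
    # answers: one category per MEQ question, where 1 = evening-type,
    # 4 = morning-type, and 2 / 3 = intermediate answers.
    a1 = sum(1 for a in answers if a == 1)
    a2 = sum(1 for a in answers if a == 2)
    a3 = sum(1 for a in answers if a == 3)
    a4 = sum(1 for a in answers if a == 4)
    return (a1 * a4) ** 2 - (a2 * a3) ** 2

# A mix of extreme answers yields a positive (bimodal) index,
# while mostly intermediate answers yield a negative one.
print(bimodality_index([1, 1, 4, 4, 2, 3]))   # (2*2)^2 - (1*1)^2 = 15
print(bimodality_index([2, 3, 2, 3, 2, 3]))   # (0*0)^2 - (3*3)^2 = -81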
Abstract:
The design of supplementary damping controllers to mitigate the effects of electromechanical oscillations in power systems is a highly complex and time-consuming process, which requires a significant amount of knowledge on the part of the designer. In this study, the authors propose an automatic technique that takes the burden of tuning the controller parameters away from the power engineer and places it on the computer. Unlike other approaches that do the same based on robust control theories or evolutionary computing techniques, our proposed procedure uses an optimisation algorithm that works over a formulation of the classical tuning problem in terms of bilinear matrix inequalities. Using this formulation, it is possible to apply linear matrix inequality solvers to find a solution to the tuning problem via an iterative process, with the advantage that these solvers are widely available and have well-known convergence properties. The proposed algorithm is applied to tune the parameters of supplementary controllers for thyristor controlled series capacitors placed in the New England/New York benchmark test system, aiming at the improvement of the damping factor of inter-area modes, under several different operating conditions. The results of the linear analysis are validated by non-linear simulation and demonstrate the effectiveness of the proposed procedure.
Abstract:
This paper presents an Adaptive Maximum Entropy (AME) approach for modeling biological species. The Maximum Entropy algorithm (MaxEnt) is one of the most widely used methods for modeling the geographical distribution of biological species. The approach presented here is an alternative to the classical algorithm: instead of using the same set of features throughout training, the AME approach tries to insert or remove a single feature at each iteration. The aim is to reach convergence faster without affecting the performance of the generated models. Preliminary experiments gave good results, showing gains in both accuracy and execution time. Comparisons with other algorithms are beyond the scope of this paper. Several important research directions are proposed as future work.
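As a rough illustration of the insert-or-remove-one-feature idea, a greedy wrapper is sketched below with a logistic-regression stand-in for the MaxEnt model and a cross-validation score as the selection criterion (all of these choices, and the synthetic data, are assumptions for illustration, not the AME algorithm itself):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def greedy_feature_update(X, y, selected, candidates):
    # Try inserting or removing a single feature; keep the best-scoring move.
    def score(cols):
        model = LogisticRegression(max_iter=1000)
        return cross_val_score(model, X[:, cols], y, cv=3).mean()

    best_cols, best_score = list(selected), score(selected)
    for f in candidates:                          # candidate insertions
        if f not in selected:
            s = score(selected + [f])
            if s > best_score:
                best_cols, best_score = selected + [f], s
    if len(selected) > 1:
        for f in selected:                        # candidate removals
            cols = [c for c in selected if c != f]
            s = score(cols)
            if s > best_score:
                best_cols, best_score = cols, s
    return best_cols, best_score

# Toy usage with synthetic occurrence data (placeholders).
rng = np.random.default_rng(0)
X = rng.random((200, 8))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)
print(greedy_feature_update(X, y, selected=[0], candidates=list(range(8))))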
Abstract:
Starting from the Durbin algorithm in polynomial space with an inner product defined by the signal autocorrelation matrix, an isometric transformation is defined that maps this vector space into another one where the Levinson algorithm is performed. Alternatively, for iterative algorithms such as discrete all-pole (DAP), an efficient implementation of a Gohberg-Semencul (GS) relation is developed for the inversion of the autocorrelation matrix which considers its centrosymmetry. In the solution of the autocorrelation equations, the Levinson algorithm is found to be less complex operationally than the procedures based on GS inversion for up to a minimum of five iterations at various linear prediction (LP) orders.
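For reference, a textbook Levinson-Durbin recursion for solving the autocorrelation (Yule-Walker) equations is sketched below (this is the classical algorithm only; the isometric-transformation and GS-inversion variants discussed above are not reproduced):

import numpy as np

def levinson_durbin(r, order):
    # Returns LP coefficients a (with a[0] = 1) and the final prediction
    # error for the Yule-Walker equations built from autocorrelation r.
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        k = -(r[m] + np.dot(a[1:m], r[m-1:0:-1])) / err   # reflection coefficient
        a[1:m] = a[1:m] + k * a[m-1:0:-1]
        a[m] = k
        err *= (1.0 - k * k)
    return a, err

# Toy autocorrelation sequence (illustrative numbers only).
r = np.array([1.0, 0.5, 0.2, 0.05])
coeffs, pred_err = levinson_durbin(r, order=3)
print(coeffs, pred_err)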
Abstract:
In this paper the continuous Verhulst dynamic model is used to synthesize a new distributed power control algorithm (DPCA) for use in direct sequence code division multiple access (DS-CDMA) systems. The Verhulst model was originally designed to describe the population growth of biological species under food and physical space restrictions. The discretization of the corresponding differential equation is accomplished via the Euler numeric integration (ENI) method. Analytical convergence conditions for the proposed DPCA are also established. Several properties of the proposed recursive algorithm, such as the Euclidean distance from the optimum vector after convergence, convergence speed, normalized mean squared error (NSE), average power consumption per user, performance under dynamic channels, and implementation complexity aspects, are analyzed through simulations. The simulation results are compared with two other DPCAs: the classic algorithm derived by Foschini and Miljanic and the sigmoidal algorithm of Uykan and Koivo. Under estimation error conditions, the proposed DPCA exhibits a smaller discrepancy from the optimum power vector solution and better convergence (under both fixed and adaptive convergence factors) than the classic and sigmoidal DPCAs. (C) 2010 Elsevier GmbH. All rights reserved.
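To illustrate only the discretization step (Euler numeric integration of the Verhulst, or logistic, equation; the mapping onto transmit powers and SINR targets used in the paper is not reproduced here):

import numpy as np

def verhulst_euler(n0, r, capacity, dt, steps):
    # Euler integration of the Verhulst equation dN/dt = r*N*(1 - N/capacity).
    n = np.empty(steps + 1)
    n[0] = n0
    for k in range(steps):
        n[k + 1] = n[k] + dt * r * n[k] * (1.0 - n[k] / capacity)
    return n

# Toy run: the trajectory saturates at the carrying capacity, the role played
# by the equilibrium (target) operating point in the power control analogy.
traj = verhulst_euler(n0=0.1, r=1.0, capacity=1.0, dt=0.1, steps=100)
print(traj[-1])   # approaches 1.0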
Abstract:
This work proposes the use of evolutionary computation to jointly solve the maximum-likelihood multiuser channel estimation (MuChE) and detection problems in direct sequence code division multiple access (DS/CDMA) systems. The effectiveness of the proposed heuristic approach is demonstrated by comparing performance and complexity figures of merit with those obtained by traditional methods found in the literature. Simulation results for a genetic algorithm (GA) applied to joint MuChE and multi-user detection (MuD) in multipath DS/CDMA show that the proposed genetic algorithm multi-user channel estimation (GAMuChE) yields a normalized mean square estimation error (nMSE) below 11% under slowly varying multipath fading channels, a large range of Doppler frequencies and medium system load, while exhibiting lower complexity than both maximum likelihood multi-user channel estimation (MLMuChE) and the gradient descent method (GrdDsc). A near-optimum multi-user detector based on the genetic algorithm (GAMuD), also proposed in this work, provides a significant reduction in computational complexity compared to the optimum multi-user detector (OMuD). In addition, the complexity of the GAMuChE and GAMuD algorithms was (jointly) analyzed in terms of the number of operations necessary to reach convergence, and compared to other joint MuChE and MuD strategies. The joint GAMuChE-GAMuD scheme can be regarded as a promising alternative for implementing third-generation (3G) and fourth-generation (4G) wireless systems in the near future. Copyright (C) 2010 John Wiley & Sons, Ltd.
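As a rough sketch of a GA-based multi-user detector, the toy below searches for the bit vector b maximizing the standard simplified log-likelihood metric 2*b'y - b'Rb for a synchronous CDMA model with matched-filter outputs y and crosscorrelation matrix R (the channel model, population size and operators are illustrative assumptions, not the GAMuD settings, which target multipath channels):

import numpy as np

rng = np.random.default_rng(1)
K = 8                                                   # number of users (toy)
R = np.eye(K) + 0.2 * (np.ones((K, K)) - np.eye(K))     # crosscorrelation matrix
b_true = rng.choice([-1, 1], K)
y = R @ b_true + 0.1 * rng.standard_normal(K)           # matched-filter outputs

def fitness(b):
    # Log-likelihood metric (up to constants) of a candidate bit vector.
    return 2.0 * b @ y - b @ R @ b

def ga_mud(pop_size=40, generations=60, pmut=0.05):
    pop = rng.choice([-1, 1], (pop_size, K))
    for _ in range(generations):
        scores = np.array([fitness(b) for b in pop])
        # Binary tournament selection.
        idx = rng.integers(0, pop_size, (pop_size, 2))
        winners = np.where(scores[idx[:, 0]] > scores[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # Uniform crossover with a shifted copy of the parent pool.
        mask = rng.random((pop_size, K)) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Bit-flip mutation and elitism (keep the best individual so far).
        flip = rng.random((pop_size, K)) < pmut
        children[flip] *= -1
        children[0] = pop[np.argmax(scores)]
        pop = children
    scores = np.array([fitness(b) for b in pop])
    return pop[np.argmax(scores)]

# On this easy toy instance the detector typically recovers the transmitted bits.
print(np.array_equal(ga_mud(), b_true))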
Abstract:
When building genetic maps, it is necessary to choose from several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of the algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD) and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores) and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated, containing one linkage group and 21 markers at a fixed distance of 3 cM between them. In all, 700 F(2) populations were randomly simulated with 100 and 400 individuals and with different combinations of dominant and co-dominant markers, as well as 10 and 20% of missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criteria may be used, even for a reduced population size. In the case of a smaller proportion of dominant markers, any of the algorithms and criteria investigated (except SALOD) may be used. In the presence of high proportions of dominant markers and smaller samples (around 100 individuals), the probability of linkage in repulsion between markers increases and, in this case, the algorithms TRY and SER associated with RIPPLE and the LHMC criterion provide better results. Heredity (2009) 103, 494-502; doi:10.1038/hdy.2009.96; published online 29 July 2009
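A minimal sketch of two of the ordering criteria mentioned above, evaluated for a candidate marker order from a pairwise recombination-fraction matrix (illustrative code and toy values, not the cited mapping software):

import numpy as np

def sarf(order, rf):
    # Sum of adjacent recombination fractions for a given marker order.
    return sum(rf[order[i], order[i + 1]] for i in range(len(order) - 1))

def parf(order, rf):
    # Product of adjacent recombination fractions for a given marker order.
    return float(np.prod([rf[order[i], order[i + 1]] for i in range(len(order) - 1)]))

# Toy pairwise recombination-fraction matrix for 4 markers (symmetric).
rf = np.array([[0.00, 0.05, 0.12, 0.20],
               [0.05, 0.00, 0.06, 0.15],
               [0.12, 0.06, 0.00, 0.08],
               [0.20, 0.15, 0.08, 0.00]])

# Better orders have lower SARF (and PARF): compare two candidate orders.
print(sarf([0, 1, 2, 3], rf), sarf([0, 2, 1, 3], rf))   # 0.19 vs 0.33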
Abstract:
We have characterized the kinetic properties of ectonucleoside triphosphate diphosphohydrolase 1 (E-NTPDase1) from rat osseous plate membranes. A novel finding of the present study is that the solubilized enzyme shows high- and low-affinity sites for the substrate, in contrast with a single substrate site for the membrane-bound enzyme. In addition, contrary to the Michaelian characteristics of the membrane-bound enzyme, the site-site interactions after solubilization with 0.5% digitonin plus 0.1% lysolecithin resulted in a less active ectonucleoside triphosphate diphosphohydrolase, showing activity of about 398.3 nmol Pi min(-1) mg(-1). The solubilized enzyme has an M(r) of 66-72 kDa, and its catalytic efficiency was significantly increased by magnesium and calcium ions, but the ATP/ADP activity ratio was always < 2.0. Partial purification and kinetic characterization of the rat osseous plate E-NTPDase1 in a solubilized form may lead to a better understanding of a possible function of the enzyme as a modulator of nucleotidase activity or purinergic signaling in matrix vesicle membranes. The simple procedure to obtain the enzyme in a solubilized form may also be attractive for comparative studies of particular features of the active sites from this and other ATPases.
Abstract:
Background: The use of synthetic mesh for abdominal wall closure after removal of the rectus abdominis is established but not standardised. This study compares two forms of mesh fixation: a simple suture, which fixes the mesh to the edges of the defect on the anterior rectus abdominis fascia; and total fixation, which incorporates the fasciae of the internal oblique, external oblique and transverse muscles in the suture, anchoring the mesh in the position of the removed muscle. Method: A total of 16 fresh cadavers were dissected. Two sutures were compared: simple and total. Three different sites were analysed: 5 cm above, 5 cm below and at the level of the umbilicus. The two sutures were tested in each region using a standardised technique. All sutures were performed with nylon 0, perpendicular to the linea alba. Each suture was secured to a dynamometer, which was pulled perpendicularly towards the midline until rupture of the aponeurosis. Rupture resistance was measured in kilogram-force. The means among the groups were compared using the paired Student's t-test at a significance level of 1% (p < 0.01). Results: The mean rupture resistance of the total suture was 160% higher than that of the simple suture. Conclusion: The total suture includes the external oblique, internal oblique and transverse fasciae, which are multi-directional, and creates a much higher resistance when compared with the simple suture. Total suture may reduce the incidence of bulging and hernias of the abdominal wall after harvesting the rectus abdominis muscle, but comparative clinical studies are necessary. (C) 2010 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Abstract:
A cross-sectional study was carried out with 288 male blood donors, aged between 40 and 60 years old, with the aim of comparing the prevalence of erectile dysfunction (ED) as defined by the International Index of Erectile Function (IIEF) with that resulting from simple questioning about the presence of ED. Socio-demographic, clinical, and behavioral factors associated with the presence of ED were also considered. Erectile dysfunction prevalence according to the IIEF was 31.9%, while self-reported ED prevalence was 3.1%. The factors associated with ED, as defined by the IIEF, were: professional inactivity, suspected depression and/or anxiety, reduced sexual desire, and self-reported ED.
Abstract:
Introduction: Reduction of automatic pressure support based on a target respiratory frequency, or mandatory rate ventilation (MRV), is available in the Taema-Horus ventilator for the weaning process in the intensive care unit (ICU) setting. We hypothesised that MRV is as effective as manual weaning in post-operative ICU patients. Methods: There were 106 patients selected in the postoperative period in a prospective, randomised, controlled protocol. When the patients arrived at the ICU after surgery, they were randomly assigned to either: traditional weaning, consisting of the manual reduction of pressure support every 30 minutes, keeping the respiratory rate/tidal volume ratio (RR/TV) below 80 until 5 to 7 cmH(2)O of pressure support ventilation (PSV); or automatic weaning, referring to MRV set with a respiratory frequency target of 15 breaths per minute (the ventilator automatically decreased the PSV level by 1 cmH(2)O every four respiratory cycles if the patient's RR was less than 15 per minute). The primary endpoint of the study was the duration of the weaning process. Secondary endpoints were the levels of pressure support, RR, TV (mL), RR/TV, positive end expiratory pressure levels, FiO(2) and SpO(2) required during the weaning process, the need for reintubation and the need for non-invasive ventilation in the 48 hours after extubation. Results: In the intention-to-treat analysis there were no statistically significant differences between the 53 patients selected for each group regarding gender (p = 0.541), age (p = 0.585) and type of surgery (p = 0.172). Nineteen patients presented complications during the trial (4 in the PSV manual group and 15 in the MRV automatic group, p < 0.05). Nine patients in the automatic group did not adapt to the MRV mode. The mean +/- sd (standard deviation) duration of the weaning process was 221 +/- 192 minutes for the manual group and 271 +/- 369 minutes for the automatic group (p = 0.375). PSV levels were significantly higher with MRV than with the manual PSV reduction (p < 0.05). Reintubation was not required in either group. Non-invasive ventilation was necessary for two patients, in the manual group, after cardiac surgery (p = 0.51). Conclusions: The automatic reduction of pressure support took a similar time to the manual one in the postoperative period in the ICU, but presented more complications, especially non-adaptation to the MRV algorithm. Trial registration: ISRCTN37456640
Abstract:
The Amazon Basin is crucial to global circulatory and carbon patterns due to its large areal extent and large flux magnitude. Biogeophysical models have had difficulty reproducing the annual cycle of net ecosystem exchange (NEE) of carbon in some regions of the Amazon, generally simulating uptake during the wet season and efflux during seasonal drought; in reality, the opposite occurs. Observational and modeling studies have identified several mechanisms that explain the observed annual cycle, including: (1) deep soil columns that can store large amounts of water, (2) the ability of deep roots to access moisture at depth when the near-surface soil dries during the annual drought, (3) movement of water in the soil via hydraulic redistribution, allowing for more efficient uptake of water during the wet season and moistening of near-surface soil during the annual drought, and (4) the photosynthetic response to elevated light levels as cloudiness decreases during the dry season. We incorporate these mechanisms into the third version of the Simple Biosphere model (SiB3), both singly and collectively, and confront the results with observations. For the forest to maintain function through seasonal drought, there must be sufficient water storage in the soil to sustain transpiration through the dry season, in addition to the ability of the roots to access the stored water. We find that none of these mechanisms by itself produces a simulation of the annual cycle of NEE that matches the observations. When these mechanisms are combined in the model, NEE follows the general trend of the observations, showing efflux during the wet season and uptake during seasonal drought.
Abstract:
A large amount of biological data has been produced in recent years. Important knowledge can be extracted from these data by the use of data analysis techniques. Clustering plays an important role in data analysis by organizing similar objects from a dataset into meaningful groups. Several clustering algorithms have been proposed in the literature. However, each algorithm has its own bias, being more adequate for particular datasets. This paper presents a mathematical formulation to support the creation of consistent clusters for biological data. Moreover, it presents a clustering algorithm to solve this formulation that uses GRASP (Greedy Randomized Adaptive Search Procedure). We compared the proposed algorithm with three other well-known algorithms. The proposed algorithm presented the best clustering results, a finding confirmed statistically. (C) 2009 Elsevier Ltd. All rights reserved.
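For illustration only, a bare-bones GRASP skeleton for a clustering objective is sketched below, using a k-medoid-style assignment cost; the construction and local-search details are generic placeholders, not the formulation proposed in the paper:

import numpy as np

def grasp_kmedoids(X, k, iters=20, rcl_size=3, seed=0):
    # GRASP: greedy randomized construction of k medoids + swap local search.
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)   # pairwise distances

    def cost(medoids):
        return d[:, medoids].min(axis=1).sum()

    best, best_cost = None, np.inf
    for _ in range(iters):
        # Construction: repeatedly pick the next medoid at random from the
        # rcl_size candidates that most reduce the assignment cost.
        medoids = [int(rng.integers(len(X)))]
        while len(medoids) < k:
            gains = sorted((cost(medoids + [j]), j)
                           for j in range(len(X)) if j not in medoids)
            medoids.append(gains[int(rng.integers(min(rcl_size, len(gains))))][1])
        # Local search: swap a medoid for a non-medoid while the cost improves.
        improved = True
        while improved:
            improved = False
            for i in range(k):
                for j in range(len(X)):
                    if j not in medoids:
                        trial = medoids[:i] + [j] + medoids[i + 1:]
                        if cost(trial) < cost(medoids):
                            medoids, improved = trial, True
        if cost(medoids) < best_cost:
            best, best_cost = medoids, cost(medoids)
    return best, best_cost

# Toy usage on random 2-D points (placeholder data).
X = np.random.default_rng(1).random((60, 2))
print(grasp_kmedoids(X, k=3))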