977 results for Simple methods
Abstract:
This paper presents simple, rapid, precise and accurate stability-indicating HPLC and CE methods, which were developed and validated for the determination of nitrendipine, nimodipine and nisoldipine. These drugs are calcium channel antagonists of the 1,4-dihydropyridine type which are used in the treatment of cardiovascular diseases. Experimental results showed a good linear correlation between peak area and drug concentration over a relatively wide concentration range in all cases. The linearity of the analytical procedures was in the range of 2.0-120.0 µg mL-1 for nitrendipine, 1.0-100.0 µg mL-1 for nimodipine and 100.0-600.0 µg mL-1 for nisoldipine, the regression determination coefficient being higher than 0.99 in all cases. The proposed methods were found to have good precision and accuracy. The chemical stability of these drugs was determined under various conditions and the methods showed adequate separation of their enantiomers and degradation products. In addition, degradation products produced as a result of stress studies did not interfere with the detection of the drugs' enantiomers and the assays can thus be considered stability-indicating.
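As an illustration of the linearity check described above, the sketch below fits a straight calibration line and computes the determination coefficient with numpy; the concentration and peak-area values are hypothetical, not data from the paper.

import numpy as np

# Hypothetical calibration points (concentration in ug/mL vs. peak area)
conc = np.array([2.0, 10.0, 30.0, 60.0, 90.0, 120.0])
area = np.array([1.1e4, 5.5e4, 1.64e5, 3.27e5, 4.90e5, 6.55e5])

slope, intercept = np.polyfit(conc, area, 1)          # least-squares calibration line
pred = slope * conc + intercept
r2 = 1.0 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)
print(f"slope = {slope:.1f}, intercept = {intercept:.1f}, r^2 = {r2:.4f}")  # expect r^2 > 0.99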
Abstract:
We propose simple heuristics for the assembly line worker assignment and balancing problem. This problem typically occurs in assembly lines in sheltered work centers for the disabled. Unlike in the well-known simple assembly line balancing problem, task execution times vary according to the assigned worker. We develop a constructive heuristic framework based on task and worker priority rules defining the order in which the tasks and workers should be assigned to the workstations. We present a number of such rules and compare their performance across three possible uses: as a stand-alone method, as an initial solution generator for meta-heuristics, and as a decoder for a hybrid genetic algorithm. Our results show that the heuristics are fast, obtain good results as a stand-alone method, and are efficient when used as an initial solution generator or as a solution decoder within more elaborate approaches.
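A minimal sketch of the kind of priority-rule constructive heuristic described above is given below. The rules used here (pick the unassigned worker with the lowest average task time, then fill each station with the precedence-feasible task that this worker executes fastest) are illustrative assumptions, not the specific rules proposed in the paper.

def constructive_heuristic(tasks, workers, time, preds, cycle_time):
    """Open one station per worker; fill it greedily with feasible tasks.

    time[(t, w)] -> execution time of task t by worker w
    preds[t]     -> set of tasks that must be finished before t
    """
    remaining, free_workers, done, stations = set(tasks), list(workers), set(), []
    while remaining and free_workers:
        # Worker priority rule: smallest average time over the remaining tasks.
        w = min(free_workers, key=lambda w: sum(time[(t, w)] for t in remaining) / len(remaining))
        free_workers.remove(w)
        load, assigned = 0, []
        while True:
            feasible = [t for t in remaining
                        if preds[t] <= done and time[(t, w)] <= cycle_time - load]
            if not feasible:
                break
            # Task priority rule: shortest execution time for this worker.
            t = min(feasible, key=lambda t: time[(t, w)])
            remaining.discard(t); done.add(t); assigned.append(t); load += time[(t, w)]
        stations.append((w, assigned))
    return stations, remaining   # non-empty 'remaining' means no feasible full assignment was found

# Tiny illustrative instance
tasks, workers = [1, 2, 3], ["A", "B"]
time = {(1, "A"): 3, (2, "A"): 4, (3, "A"): 2, (1, "B"): 5, (2, "B"): 2, (3, "B"): 6}
preds = {1: set(), 2: {1}, 3: {1}}
print(constructive_heuristic(tasks, workers, time, preds, cycle_time=7))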
Abstract:
Electronic polarization induced by the interaction of a reference molecule with a liquid environment is expected to affect the magnetic shielding constants. Understanding this effect using realistic theoretical models is important for the proper use of nuclear magnetic resonance in molecular characterization. In this work, we consider the pyridine molecule in water as a model system to briefly investigate this aspect. Thus, Monte Carlo simulations and quantum mechanics calculations at the B3LYP/6-311++G(d,p) level are used to analyze different aspects of the solvent effects on the N-15 magnetic shielding constant of pyridine in water. This includes, in particular, the geometry relaxation and the electronic polarization of the solute by the solvent. The polarization effect is found to be very important, but, as expected for pyridine, the geometry relaxation contribution is essentially negligible. Using an average electrostatic model of the solvent, the magnetic shielding constant is calculated as -58.7 ppm, in good agreement with the experimental value of -56.3 ppm. The explicit inclusion of hydrogen-bonded water molecules embedded in the electrostatic field of the remaining solvent molecules gives the value of -61.8 ppm.
Abstract:
The estimation of reference evapotranspiration (ETo), used in the water balance, allows the soil water content to be determined, assisting in irrigation management. The present study aimed to compare simple ETo estimation methods with the Penman-Monteith (FAO) method on the following time scales: daily, 5, 10, 15 and 30 days, and monthly, in the municipalities of Frederico Westphalen and Palmeira das Missões, in Rio Grande do Sul state, Brazil. The methods tested had their efficiency improved by increasing the time scale of the analysis, keeping the same performance for both locations. The highest and lowest ETo values occurred in December and June, respectively. Most methods underestimated ETo. At any of the time scales, the Makkink and FAO-24 Radiation methods can replace Penman-Monteith for estimating ETo.
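The comparison against Penman-Monteith described above reduces to simple agreement statistics computed on each time scale; a generic sketch (not the exact statistics used in the study) is shown below.

import numpy as np

def compare_eto(eto_pm, eto_alt, scale=1):
    """Aggregate daily ETo series (mm/day) to 'scale'-day sums, then compare an
    alternative method against Penman-Monteith with RMSE, mean bias and Pearson r."""
    pm = np.add.reduceat(np.asarray(eto_pm, float), np.arange(0, len(eto_pm), scale))
    alt = np.add.reduceat(np.asarray(eto_alt, float), np.arange(0, len(eto_alt), scale))
    rmse = np.sqrt(np.mean((alt - pm) ** 2))
    bias = np.mean(alt - pm)              # negative bias => the method underestimates ETo
    r = np.corrcoef(alt, pm)[0, 1]
    return rmse, bias, r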
Abstract:
Purpose: To describe a new computerized method for the analysis of lid contour based on the measurement of multiple radial midpupil lid distances. Design: Evaluation of diagnostic technology. Participants and Controls: Monocular palpebral fissure images of 35 patients with Graves' upper eyelid retraction and of 30 normal subjects. Methods: Custom software was used to measure the conventional midpupil upper lid distance (MPLD) and 12 oblique MPLDs at 15-degree intervals across the temporal (105 degrees, 120 degrees, 135 degrees, 150 degrees, 165 degrees, and 180 degrees) and nasal (75 degrees, 60 degrees, 45 degrees, 30 degrees, 15 degrees, and 0 degrees) sectors of the lid fissure. Main Outcome Measures: Mean, standard deviation, and 5th and 95th percentiles of the oblique MPLDs obtained for patients and controls; temporal/nasal MPLD ratios of the same angles with respect to the midline. Results: The MPLDs increased from the vertical midline in both nasal and temporal sectors of the fissure. In the control group the differences between the mean central MPLD (90 degrees) and those up to 30 degrees away in the nasal (75 degrees and 60 degrees) and temporal (105 degrees and 120 degrees) sectors were not significant. For greater eccentricities, all temporal and nasal mean MPLDs increased significantly. When the MPLDs of the same angles were compared between groups, the mean values of the Graves' patients differed from controls at all angles (F = 4192; P<0.0001). The greatest temporal/nasal asymmetry occurred 60 degrees from the vertical midline. Conclusions: The measurement of radial MPLDs is a simple and effective way to characterize lid contour abnormalities. In patients with Graves' upper eyelid retraction, the method demonstrated that the maximum amplitude of the lateral lid flare sign occurred at 60 degrees from the vertical midline. Financial Disclosure(s): The authors have no proprietary or commercial interest in any of the materials discussed in this article. Ophthalmology 2012; 119: 625-628 (C) 2012 by the American Academy of Ophthalmology.
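The sketch below illustrates the geometric idea behind the radial measurements: given the pupil center and digitized upper-lid margin points, the distance from the center to the margin is interpolated at the 13 angles listed above. The interpolation scheme, coordinate convention (y increasing upward) and variable names are assumptions for illustration, not the custom software of the study.

import numpy as np

def radial_mplds(pupil_center, lid_points, angles_deg=range(0, 181, 15)):
    """Midpupil-to-lid distances along rays at the requested angles (degrees,
    measured from one horizontal meridian through the other, 90 = vertical)."""
    cx, cy = pupil_center
    x, y = np.asarray(lid_points, float).T
    theta = np.degrees(np.arctan2(y - cy, x - cx))   # angle of each lid-margin point
    radius = np.hypot(x - cx, y - cy)                # distance of each point from the pupil center
    order = np.argsort(theta)
    return {a: float(np.interp(a, theta[order], radius[order])) for a in angles_deg}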
Abstract:
In this thesis some multivariate spectroscopic methods for the analysis of solutions are proposed. Spectroscopy and multivariate data analysis form a powerful combination for obtaining both quantitative and qualitative information, and it is shown how spectroscopic techniques in combination with chemometric data evaluation can be used to obtain rapid, simple and efficient analytical methods. These spectroscopic methods, consisting of spectroscopic analysis, a high level of automation and chemometric data evaluation, can lead to analytical methods with a high analytical capacity, and for these methods the term high-capacity analysis (HCA) is suggested. It is further shown how chemometric evaluation of the multivariate data in chromatographic analyses decreases the need for baseline separation. The thesis is based on six papers and the chemometric tools used are experimental design, principal component analysis (PCA), soft independent modelling of class analogy (SIMCA), partial least squares regression (PLS) and parallel factor analysis (PARAFAC). The analytical techniques utilised are scanning ultraviolet-visible (UV-Vis) spectroscopy, diode array detection (DAD) used in non-column chromatographic diode array UV spectroscopy, high-performance liquid chromatography with diode array detection (HPLC-DAD) and fluorescence spectroscopy. The methods proposed are exemplified in the analysis of pharmaceutical solutions and serum proteins. In Paper I a method is proposed for the determination of the content and identity of the active compound in pharmaceutical solutions by means of UV-Vis spectroscopy, orthogonal signal correction and multivariate calibration with PLS and SIMCA classification. Paper II proposes a new method for the rapid determination of pharmaceutical solutions by the use of non-column chromatographic diode array UV spectroscopy, i.e. a conventional HPLC-DAD system without any chromatographic column connected. In Paper III an investigation is made of the ability of a control sample of known content and identity to diagnose and correct errors in multivariate predictions, something that, together with the use of multivariate residuals, can make it possible to use the same calibration model over time. In Paper IV a method is proposed for the simultaneous determination of serum proteins with fluorescence spectroscopy and multivariate calibration. Paper V proposes a method for the determination of chromatographic peak purity by means of PCA of HPLC-DAD data. In Paper VI PARAFAC is applied for the decomposition of DAD data of some partially separated peaks into the pure chromatographic, spectral and concentration profiles.
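As a minimal illustration of the multivariate calibration step (spectra regressed onto analyte concentration with PLS), a generic scikit-learn sketch on synthetic spectra is shown below; it is not the procedure of any specific paper in the thesis, and all data are simulated.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
wavelengths = np.linspace(200, 400, 101)
conc = rng.uniform(0.1, 1.0, size=40)                      # synthetic concentrations
pure = np.exp(-0.5 * ((wavelengths - 280) / 15) ** 2)      # synthetic pure-component spectrum
X = np.outer(conc, pure) + rng.normal(0, 0.01, (40, 101))  # spectra = conc * spectrum + noise

pls = PLSRegression(n_components=2)
conc_cv = cross_val_predict(pls, X, conc, cv=5).ravel()    # cross-validated predictions
rmsecv = np.sqrt(np.mean((conc - conc_cv) ** 2))
print(f"RMSECV = {rmsecv:.4f}")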
Abstract:
The age and growth of the sand sole Pegusa lascaris from the Canarian Archipelago were studied from 2107 fish collected between January 2005 and December 2007. To find an appropriate method for age determination, sagittal otoliths were observed by surface-reading and frontal section and the results were compared. The two methods did not differ significantly in estimated age, but the surface-reading method is superior in terms of cost and time efficiency. The sand sole has a moderate life span, with ages up to 10 years recorded. Individuals grow quickly in their first two years, attaining approximately 48% of their maximum standard length; after the second year, their growth rate drops rapidly as energy is diverted to reproduction. Males and females show dimorphism in growth, with females reaching a slightly greater length and age than males. Von Bertalanffy, seasonalized von Bertalanffy, Gompertz, and Schnute growth models were fitted to length-at-age data. Akaike weights for the seasonalized von Bertalanffy growth model indicated that the probability of choosing the correct model from the group of models used was >0.999 for males and females. The seasonalized von Bertalanffy growth parameters estimated were: L-infinity = 309 mm standard length, k = 0.166 yr-1, t0 = -1.88 yr, C = 0.347, and ts = 0.578 for males; and L-infinity = 318 mm standard length, k = 0.164 yr-1, t0 = -1.653 yr, C = 0.820, and ts = 0.691 for females. Fish standard length and otolith radius are closely correlated (R2 = 0.902). The relation between standard length and otolith radius is described by a power function (a = 85.11, v = 0.906).
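Assuming the commonly used Somers parameterization of the seasonalized von Bertalanffy growth function (an assumption; the study may use a slightly different form), the reported parameters translate into predicted standard length at age as sketched below.

import numpy as np

def seasonal_vbgf(t, Linf, k, t0, C, ts):
    """Somers-type seasonalized von Bertalanffy growth function (assumed form):
    L(t) = Linf * (1 - exp(-k*(t - t0) - S(t) + S(t0))), S(x) = (C*k/2pi)*sin(2pi*(x - ts))."""
    S = lambda x: (C * k / (2 * np.pi)) * np.sin(2 * np.pi * (x - ts))
    return Linf * (1.0 - np.exp(-k * (t - t0) - S(t) + S(t0)))

ages = np.arange(0, 11, dtype=float)
males = seasonal_vbgf(ages, Linf=309, k=0.166, t0=-1.88, C=0.347, ts=0.578)
females = seasonal_vbgf(ages, Linf=318, k=0.164, t0=-1.653, C=0.820, ts=0.691)
print(np.round(males, 1))
print(np.round(females, 1))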
Abstract:
We analyze the discontinuity preserving problem in TV-L1 optical flow methods. This type of method typically creates rounded effects at flow boundaries, which usually do not coincide with object contours. A simple strategy to overcome this problem consists in inhibiting the diffusion at high image gradients. In this work, we first introduce a general framework for TV regularizers in optical flow and relate it to some standard approaches. Our survey takes into account several methods that use decreasing functions for mitigating the diffusion at image contours. Consequently, this kind of strategy may produce instabilities in the estimation of the optical flow. Hence, we study the problem of instabilities and show that it actually arises from an ill-posed formulation. From this study, it is possible to come up with different schemes to solve this problem. One of these consists in separating the pure TV process from the mitigating strategy. This has been used in another work, and we demonstrate here that it has good performance. Furthermore, we propose two alternatives to avoid the instability problems: (i) we study a fully automatic approach that solves the problem based on the information of the whole image; (ii) we derive a semi-automatic approach that takes into account the image gradients in a close neighborhood, adapting the parameter at each position. In the experimental results, we present a detailed study and comparison between the different alternatives. These methods provide very good results, especially for sequences with a few dominant gradients. Additionally, a surprising effect of these approaches is that they can cope with occlusions. This can be easily achieved by using strong regularizations and high penalizations at image contours.
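The "decreasing function at image contours" mentioned above is typically a weight of the form g(|grad I|) = exp(-alpha * |grad I|^beta) multiplying the regularization term; a small sketch of such a weight map is given below, where the functional form and parameter values are illustrative assumptions rather than the exact choice of the paper.

import numpy as np

def contour_weight(image, alpha=10.0, beta=1.0):
    """Weight map that decreases at strong image gradients, used to mitigate
    TV diffusion across contours: g = exp(-alpha * |grad I|**beta)."""
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    grad_mag /= grad_mag.max() + 1e-12     # normalize so alpha has a comparable effect across images
    return np.exp(-alpha * grad_mag ** beta)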
Abstract:
The Peer-to-Peer network paradigm is drawing the attention of both end users and researchers for its features. P2P networks shift from the classic client-server approach to a high level of decentralization where there is no central control and all the nodes should be able not only to request services, but to provide them to other peers as well. While on the one hand such a high level of decentralization might lead to interesting properties like scalability and fault tolerance, on the other hand it implies many new problems to deal with. A key feature of many P2P systems is openness, meaning that everybody is potentially able to join a network with no need for subscription or payment systems. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase its own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation between users. Given the nature of P2P systems, based on simple local interactions of many peers having partial knowledge of the whole system, an interesting way to achieve desired properties on a system scale might consist in obtaining them as emergent properties of the many interactions occurring at the local node level. Two methods are typically used to address the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying Game Theory techniques, especially to find Nash equilibria in the game and to reach them, making the system stable against possible deviant behaviors. In this work we present an evolutionary framework to enforce cooperative behaviour in P2P networks that is an alternative to both methods mentioned above. Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, in which each peer periodically tries to copy another peer which is performing better. The proposed algorithms, called SLAC and SLACER, draw inspiration from tag systems originating in computational sociology; the main idea behind the algorithms is to have low-performance nodes copy high-performance ones. The algorithm is run locally by every node and leads to an evolution of the network both from the topology and from the nodes' strategy point of view. Initial tests with a simple Prisoner's Dilemma application show how SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions. Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm. In fact, in some cases selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the cooperation-formation point of view. The final step is to apply our results to more realistic scenarios. We put our efforts into studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but also because it has many points in common with the SLAC and SLACER algorithms, ranging from the game-theoretical inspiration (tit-for-tat-like mechanism) to the swarm topology.
We found fairness, understood as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew on the knowledge of cooperation formation and maintenance mechanisms derived from the development and analysis of SLAC and SLACER to improve fairness and tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent called BitFair that has been evaluated through simulation and has shown its ability to enforce fairness and to counter free-riding and cheating nodes.
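A minimal sketch of the SLAC-style update (each node compares itself with a random other node, copies the strategy and links of a better performer, and then mutates both with small probability) is given below; the data structure, mutation rate and mutation operators are illustrative assumptions, not the exact SLAC/SLACER specification.

import random
from dataclasses import dataclass, field

@dataclass
class Peer:
    pid: int
    strategy: str                 # e.g. "cooperate" or "defect"
    links: set = field(default_factory=set)
    utility: float = 0.0

def slac_step(peers, p_mut=0.05):
    """One round of copy-and-mutate dynamics (sketch)."""
    for node in peers:
        other = random.choice(peers)
        if other is node:
            continue
        if other.utility > node.utility:          # copy the better performer
            node.strategy = other.strategy
            node.links = set(other.links) | {other.pid}
        if random.random() < p_mut:               # mutate strategy
            node.strategy = random.choice(["cooperate", "defect"])
        if random.random() < p_mut:               # mutate links: rewire to a random peer
            node.links = {random.choice(peers).pid}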
Abstract:
Computer aided design of Monolithic Microwave Integrated Circuits (MMICs) depends critically on active device models that are accurate, computationally efficient, and easily extracted from measurements or device simulators. Empirical models of active electron devices, which are based on actual device measurements, do not provide a detailed description of the electron device physics. However, they are numerically efficient and quite accurate. These characteristics make them very suitable for MMIC design in the framework of commercially available CAD tools. In the empirical model formulation it is very important to separate linear memory effects (parasitic effects) from the nonlinear effects (intrinsic effects). Thus an empirical active device model is generally described by an extrinsic linear part which accounts for the parasitic passive structures connecting the nonlinear intrinsic electron device to the external world. An important task circuit designers deal with is evaluating the ultimate potential of a device for specific applications. In fact, once the technology has been selected, the designer must choose the best device for the particular application and for the different blocks composing the overall MMIC. Thus, in order to accurately reproduce the behaviour of devices of different sizes, good scalability properties of the model are required. Another important aspect of empirical modelling of electron devices is the mathematical (or equivalent circuit) description of the nonlinearities inherently associated with the intrinsic device. Once the model has been defined, the proper measurements for the characterization of the device are performed in order to identify the model. Hence, the correct measurement of the device nonlinear characteristics (in the device characterization phase) and their reconstruction (in the identification or even simulation phase) are two of the most important aspects of empirical modelling. This thesis presents an original contribution to nonlinear electron device empirical modelling, treating the issues of model scalability and reconstruction of the device nonlinear characteristics. The scalability of an empirical model strictly depends on the scalability of the linear extrinsic parasitic network, which should possibly maintain the link between technological process parameters and the corresponding device electrical response. Since lumped parasitic networks, together with simple linear scaling rules, cannot provide accurate scalable models, only complicated technology-dependent scaling rules or computationally inefficient distributed models are available in the literature. This thesis shows how the above-mentioned problems can be avoided through the use of commercially available electromagnetic (EM) simulators. They enable the actual device geometry and material stratification, as well as losses in the dielectrics and electrodes, to be taken into account for any given device structure and size, providing an accurate description of the parasitic effects which occur in the device passive structure. It is shown how the electron device behaviour can be described as an equivalent two-port intrinsic nonlinear block connected to a linear distributed four-port passive parasitic network, which is identified by means of the EM simulation of the device layout, allowing for better frequency extrapolation and scalability properties than conventional empirical models.
Concerning the issue of the reconstruction of the nonlinear electron device characteristics, a data approximation algorithm has been developed for exploitation in the framework of empirical table look-up nonlinear models. Such an approach is based on the strong analogy between time-domain signal reconstruction from a set of samples and the continuous approximation of device nonlinear characteristics on the basis of a finite grid of measurements. According to this criterion, nonlinear empirical device modelling can be carried out by using, in the sampled voltage domain, typical methods of time-domain sampling theory.
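The analogy with time-domain sampling theory can be made concrete with a cardinal-series (sinc) reconstruction of a measured characteristic from a uniform voltage grid; the sketch below is a generic illustration of the idea, not the specific data approximation algorithm developed in the thesis, and the tanh-shaped characteristic is a made-up example.

import numpy as np

def sinc_reconstruct(v_grid, samples, v_query):
    """Reconstruct a nonlinear characteristic (e.g. current vs. voltage) at arbitrary
    voltages from uniformly spaced samples, using the cardinal series
    i(v) = sum_n i(v_n) * sinc((v - v_n) / dv), as in time-domain sampling theory."""
    v_grid, samples = np.asarray(v_grid, float), np.asarray(samples, float)
    dv = v_grid[1] - v_grid[0]
    return np.array([np.sum(samples * np.sinc((v - v_grid) / dv)) for v in np.atleast_1d(v_query)])

# Example: tanh-like I/V characteristic sampled every 0.1 V and evaluated off-grid
v = np.arange(-1.0, 1.0 + 1e-9, 0.1)
i = np.tanh(3 * v)
print(sinc_reconstruct(v, i, [0.05, 0.33]))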
Abstract:
This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist in assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part motivated by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods, while compensating for their weaknesses. In this work, we consider in the first instance an Allocation and Scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search. Next, we face Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches with outcomes not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take into account duration variability and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, thus demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
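A toy allocation-and-scheduling model in the spirit described above (tasks with precedences assigned to unary processors while minimizing the makespan) can be stated with an off-the-shelf CP solver; the sketch below uses Google OR-Tools CP-SAT purely as an illustration and is not one of the hybrid CP/OR methods developed in the thesis. The instance data are made up.

from ortools.sat.python import cp_model

durations = [3, 2, 4, 2]                 # toy task durations
precedences = [(0, 2), (1, 2), (2, 3)]   # (a, b): a must finish before b starts
n_procs, horizon = 2, sum(durations)

m = cp_model.CpModel()
starts, ends = [], []
proc_intervals = [[] for _ in range(n_procs)]
for t, d in enumerate(durations):
    s = m.NewIntVar(0, horizon, f"s{t}")
    e = m.NewIntVar(0, horizon, f"e{t}")
    m.Add(e == s + d)
    starts.append(s); ends.append(e)
    lits = [m.NewBoolVar(f"x{t}_{p}") for p in range(n_procs)]
    m.Add(sum(lits) == 1)                            # each task runs on exactly one processor
    for p, lit in enumerate(lits):
        proc_intervals[p].append(m.NewOptionalIntervalVar(s, d, e, lit, f"iv{t}_{p}"))
for a, b in precedences:
    m.Add(starts[b] >= ends[a])                      # precedence constraints
for p in range(n_procs):
    m.AddNoOverlap(proc_intervals[p])                # unary (non-shareable) processors
makespan = m.NewIntVar(0, horizon, "makespan")
m.AddMaxEquality(makespan, ends)
m.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(m) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("makespan =", solver.Value(makespan))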
Abstract:
The use of guided ultrasonic waves (GUW) has increased considerably in the fields of non-destructive evaluation (NDE) and structural health monitoring (SHM) due to their ability to perform long-range inspections, to probe hidden areas, and to provide complete monitoring of the entire waveguide. Guided waves can be fully exploited only once their dispersive properties are known for the given waveguide. In this context, well-established analytical and numerical methods are represented by the Matrix-family methods and the Semi-Analytical Finite Element (SAFE) methods. However, while the former are limited to simple geometries of finite or infinite extent, the latter can model arbitrary cross-section waveguides of finite domain only. This thesis is aimed at developing three different numerical methods for modelling wave propagation in complex translationally invariant systems. First, a classical SAFE formulation for viscoelastic waveguides is extended to account for a three-dimensional translationally invariant static prestress state. The effect of prestress, residual stress and applied loads on the dispersion properties of the guided waves is shown. Next, a two-and-a-half dimensional Boundary Element Method (2.5D BEM) for the dispersion analysis of damped guided waves in waveguides and cavities of arbitrary cross-section is proposed. The attenuation dispersive spectrum due to material damping and geometrical spreading of cavities with arbitrary shape is shown for the first time. Finally, a coupled SAFE-2.5D BEM framework is developed to study the dispersion characteristics of waves in viscoelastic waveguides of arbitrary geometry embedded in infinite solid or liquid media. Dispersion of leaky and non-leaky guided waves in terms of speed and attenuation, as well as the radiated wavefields, can be computed. The results obtained in this thesis can be helpful for the design of both actuation and sensing systems in practical applications, as well as for tuning experimental setups.
Abstract:
The multimodal biological activity of ergot alkaloids has been known to humankind since the Middle Ages. Synthetically modified ergot alkaloids are used for the treatment of various medical conditions. Despite the great progress in organic synthesis, the total synthesis of ergot alkaloids remains a great challenge due to the complexity of their polycyclic structure with multiple stereogenic centres. This project has developed a new domino reaction between indoles bearing a Michael acceptor at the 4-position and nitroethene, leading to potential ergot alkaloid precursors in highly enantioenriched form. The reaction was optimised and applied to a large variety of substrates with good results. Although all attempts to further modify the obtained polycyclic structure unfortunately failed, a reaction was found that produces the diastereoisomer of the polycyclic product in excellent yields. The compounds synthesised were characterized by NMR and ESI-MS analysis, confirming their structures, and their enantiomeric excess was determined by chiral stationary phase HPLC. The mechanism of the reaction was evaluated by DFT calculations, showing the formation of a key bicoordinated nitronate intermediate and fully accounting for the results observed with all substrates. The relative and absolute configurations of the adducts were determined by a combination of NMR, ECD and computational methods.
Abstract:
Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and dealing with practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a consistent training set and notable computational effort. Methods for cross-domain text categorization have been proposed, allowing a set of labeled documents from one domain to be leveraged to classify those of another. Most methods use advanced statistical techniques, usually involving tuning of parameters. A first contribution presented here is a method based on nearest centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification. Results show that classification accuracy still requires improvements, but models generated from one domain are shown to be effectively reusable in a different one.
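A bare-bones version of the centroid-based idea (build category profiles from the labeled source domain, then iteratively rebuild them from the target documents labeled by similarity to those profiles) is sketched below with scikit-learn; the vectorization choices, similarity measure and fixed iteration count are illustrative assumptions, not the exact method of the thesis.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def cross_domain_centroids(src_docs, src_labels, tgt_docs, iterations=5):
    """Nearest-centroid cross-domain categorization (sketch): source centroids
    are iteratively adapted to the target domain via its own predicted labels."""
    vec = TfidfVectorizer(sublinear_tf=True, stop_words="english")
    X_src, X_tgt = vec.fit_transform(src_docs), vec.transform(tgt_docs)
    cats = sorted(set(src_labels))
    centroids = np.vstack([np.asarray(X_src[[i for i, y in enumerate(src_labels) if y == c]].mean(axis=0))
                           for c in cats])
    for _ in range(iterations):
        pred = cosine_similarity(X_tgt, centroids).argmax(axis=1)   # label target docs
        centroids = np.vstack([np.asarray(X_tgt[pred == k].mean(axis=0)) if np.any(pred == k) else centroids[k]
                               for k in range(len(cats))])          # rebuild profiles on the target
    return [cats[k] for k in pred]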
Abstract:
This thesis details the development of quantum chemical methods for the accurate theoretical description of molecular systems with a complicated electronic structure. In simple cases, a single Slater determinant, in which the electrons occupy the energetically lowest molecular orbitals, offers a qualitatively correct model. The widely used coupled-cluster method CCSD(T) efficiently includes electron correlation effects starting from this determinant and provides reaction energies in error by only a few kJ/mol. However, the method often fails when several electronic configurations are important, as, for instance, in the course of many chemical reactions or in transition metal compounds. Internally contracted multireference coupled-cluster methods (ic-MRCC methods) cure this deficiency by using a linear combination of determinants as a reference function. Despite their theoretical elegance, the ic-MRCC equations involve thousands of terms and are therefore derived by the computer. Calculations of energy surfaces of BeH2, HF, LiF, H2O, N2 and Be3 reveal the theory's high accuracy compared to other approaches and the quality of various hierarchies of approximations. New theoretical advances include size-extensive techniques for removing linear dependencies in the ic-MRCC equations and a multireference analog of CCSD(T). Applications of the latter method to O3, Ni2O2, benzynes, C6H7NO and Cr2 underscore its potential to become a new standard method in quantum chemistry.