631 results for algorithmic skeletons
Abstract:
This work describes the creation of heuristic rules, based on 13C-NMR spectroscopy, that characterize several skeletal types of diterpenes. Using a collection of 2745 spectra, we built a database linked to the expert system SISTEMAT. Several programs were applied to the database to discover characteristic signals that identify, with good performance, a large diversity of skeletal types. The heuristic approach differentiates groups of skeletons first by the number of primary, secondary, tertiary and quaternary carbons; second, the program searches, within each group, for ranges of chemical shifts that identify a specific skeletal type. The program was checked against 100 recently published structures and identified the correct skeleton in 65 of the cases studied. When a skeleton comprises several hundred compounds, as with the labdanes, the program employs the concept of a subskeleton, so that labdanes with double bonds at different positions are not classified in the same group. The chemical shift ranges for each subskeleton type and the structures of all skeletal types are given. The consultation program can be obtained from the authors. © 1997 - IOS Press. All rights reserved.
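The two-stage heuristic described above (grouping by carbon multiplicities, then matching chemical-shift ranges) can be sketched as follows. The skeleton names, carbon counts and shift ranges below are hypothetical placeholders for illustration, not values from the SISTEMAT database.

```python
# Sketch of a two-stage skeletal-type classifier for 13C-NMR data.
# All numeric values below are hypothetical placeholders.

# Stage 1 key: (primary, secondary, tertiary, quaternary) carbon counts.
# Stage 2: for each candidate skeleton, diagnostic shift ranges (ppm),
# each of which must be matched by at least one observed signal.
RULES = {
    (4, 7, 4, 5): [
        ("skeleton_A", [(14.0, 16.5), (170.0, 175.0)]),
        ("skeleton_B", [(20.0, 23.0), (105.0, 110.0)]),
    ],
}

def classify(counts, shifts):
    """Return the first skeleton whose diagnostic ranges are all matched."""
    for name, ranges in RULES.get(tuple(counts), []):
        if all(any(lo <= s <= hi for s in shifts) for lo, hi in ranges):
            return name
    return None
```

For example, a spectrum with multiplicity counts (4, 7, 4, 5) and signals at 15.2 and 172.1 ppm would be assigned to the (hypothetical) skeleton_A.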
Abstract:
Graduate Program in Chemistry - IQ
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Hundreds of terabytes of CMS (Compact Muon Solenoid) data are accumulated for storage every day at the University of Nebraska-Lincoln, which is one of the eight US CMS Tier-2 sites. Managing this data includes retaining useful CMS data sets and clearing storage space for newly arriving data by deleting less useful data sets. This is an important task that is currently done manually, and it requires a large amount of time. The overall objective of this study was to develop a methodology to help identify the data sets to be deleted when storage space is required. CMS data is stored using HDFS (Hadoop Distributed File System). HDFS logs give information regarding file access operations. Hadoop MapReduce was used to feed the information in these logs to Support Vector Machines (SVMs), a machine learning algorithm applicable to classification and regression, which is used in this thesis to develop a classifier. The time elapsed in data set classification by this method depends on the size of the input HDFS log file, since the algorithmic complexity of the Hadoop MapReduce algorithms used here is O(n). The SVM methodology produces a list of data sets for deletion along with their respective sizes. This methodology was also compared with a heuristic called Retention Cost, which was calculated using the size of a data set and the time since its last access to help decide how useful the data set is. The accuracies of both were compared by calculating the percentage of data sets predicted for deletion that were accessed at a later time. Our methodology using SVMs proved to be more accurate than the Retention Cost heuristic. This methodology could be used to solve similar problems involving other large data sets.
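The Retention Cost heuristic above is described only as combining a data set's size with the time since its last access. A minimal sketch, assuming the simple product form size × idle time (the product form and the field names are assumptions, not taken from the thesis):

```python
import time

def retention_cost(size_bytes, last_access_epoch, now=None):
    """Heuristic score: larger, longer-idle data sets rank higher for deletion.
    The product form is an assumption; the thesis states only that size and
    time since last access are combined."""
    now = time.time() if now is None else now
    return size_bytes * (now - last_access_epoch)

def deletion_candidates(datasets, now=None):
    """Sort data sets (dicts with 'size' and 'last_access') by descending
    retention cost, i.e. best deletion candidates first."""
    return sorted(datasets,
                  key=lambda d: retention_cost(d["size"], d["last_access"], now),
                  reverse=True)
```

A large data set idle for a long time then appears at the head of the candidate list, mirroring the role the heuristic plays as the baseline the SVM classifier is compared against.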
Abstract:
The broad goals of verifiable visualization rely on correct algorithmic implementations. We extend a framework for verification of isosurfacing implementations to check topological properties. Specifically, we use stratified Morse theory and digital topology to design algorithms which verify topological invariants. Our extended framework reveals unexpected behavior and coding mistakes in popular publicly available isosurface codes.
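As one concrete (and much simpler) instance of a checkable topological invariant, the Euler characteristic χ = V − E + F of an extracted triangle mesh can be compared against the value expected for the isosurface's topology (χ = 2 for a sphere-like closed surface). This sketch is illustrative only; the framework in the abstract relies on stratified Morse theory and digital topology, not on this check.

```python
def euler_characteristic(triangles):
    """chi = V - E + F for a triangle mesh given as vertex-index triples."""
    vertices = set()
    edges = set()
    for a, b, c in triangles:
        vertices.update((a, b, c))
        for u, v in ((a, b), (b, c), (c, a)):
            edges.add((min(u, v), max(u, v)))  # undirected edge, deduplicated
    return len(vertices) - len(edges) + len(triangles)
```

A tetrahedron (the coarsest sphere-like closed surface) has V = 4, E = 6, F = 4, so χ = 2; an isosurfacer that produced a mesh with a different χ for a topological sphere would be flagged.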
Abstract:
The existence of a small partition of a combinatorial structure into random-like subparts, a so-called regular partition, has proven to be very useful in the study of extremal problems, and has deep algorithmic consequences. The main result in this direction is the Szemerédi Regularity Lemma in graph theory. In this note, we are concerned with regularity in permutations: we show that every permutation of a sufficiently large set has a regular partition into a small number of intervals. This refines the partition given by Cooper (2006) [10], which required an additional non-interval exceptional class. We also introduce a distance between permutations that plays an important role in the study of convergence of a permutation sequence. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
We discuss an algorithmic framework based on efficient graph algorithms and algebraic-topological computational tools. The framework is aimed at automatic computation of a database of global dynamics of a given m-parameter semidynamical system with discrete time on a bounded subset of the n-dimensional phase space. We introduce the mathematical background, which is based upon Conley's topological approach to dynamics, describe the algorithms for the analysis of the dynamics using rectangular grids both in phase space and parameter space, and show two sample applications. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4767672]
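The grid-based combinatorialization at the heart of such frameworks can be illustrated on a one-parameter interval map: cover the phase space with boxes, record which boxes each box can map into (a directed graph), and take the boxes lying in cycles of that graph as candidate recurrent sets. The toy sketch below conveys the general idea only; it is not the cited implementation, and its box images are sampled rather than computed rigorously with interval arithmetic.

```python
def box_map_graph(f, n_boxes, samples_per_box=50):
    """Approximate box-to-box transition graph for a map f on [0, 1].
    Box i -> box j if some sample point of box i maps into box j.
    (A rigorous outer approximation would use interval arithmetic.)"""
    edges = {i: set() for i in range(n_boxes)}
    for i in range(n_boxes):
        lo, hi = i / n_boxes, (i + 1) / n_boxes
        for k in range(samples_per_box + 1):
            y = f(lo + (hi - lo) * k / samples_per_box)
            if 0.0 <= y <= 1.0:
                edges[i].add(min(int(y * n_boxes), n_boxes - 1))
    return edges

def recurrent_boxes(edges):
    """Boxes lying on a directed cycle, found via boolean transitive closure:
    box i is recurrent if it can reach itself."""
    n = len(edges)
    reach = [[j in edges[i] for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    reach[i][j] = reach[i][j] or reach[k][j]
    return {i for i in range(n) if reach[i][i]}
```

For the logistic map f(x) = 3.2·x·(1 − x), the fixed point at x = 0.6875 makes its box self-looping, so it appears among the recurrent boxes; varying the parameter over a grid and repeating this computation is the flavor of building a "database of global dynamics".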
Abstract:
In past decades, efforts to quantify the complexity of systems with a general tool have usually relied on Shannon's classical information framework, addressing the disorder of the system through the Boltzmann-Gibbs-Shannon entropy or one of its extensions. In recent years, however, there have been attempts to quantify algorithmic complexity in quantum systems based on Kolmogorov algorithmic complexity, yielding results that disagree with the classical approach. An approach to the complexity measure is therefore proposed here which uses the quantum information formalism, takes advantage of the generality of the classical-based complexities, and is capable of expressing the complexity of these systems in a framework other than its algorithmic counterparts. To this end, the Shiner-Davison-Landsberg (SDL) complexity framework is considered jointly with the linear entropy of the density operators representing the analyzed systems, along with the tangle as the entanglement measure. The proposed measure is then applied to a family of maximally entangled mixed states.
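A minimal sketch of the SDL construction with linear entropy follows, assuming the standard SDL form complexity = disorder × order, with disorder given by the normalized linear entropy. This is an illustration of the ingredients only, not the paper's full measure, which also involves the tangle.

```python
import numpy as np

def linear_entropy(rho):
    """Normalized linear entropy S_L = d/(d-1) * (1 - Tr(rho^2)):
    0 for pure states, 1 for the maximally mixed state."""
    d = rho.shape[0]
    purity = np.trace(rho @ rho).real
    return d / (d - 1) * (1.0 - purity)

def sdl_complexity(rho):
    """SDL complexity Gamma = disorder * order = S_L * (1 - S_L); it vanishes
    at both extremes (pure state and maximally mixed state) and is positive
    in between, as a complexity measure should be."""
    s = linear_entropy(rho)
    return s * (1.0 - s)
```

A pure qubit state and the maximally mixed qubit both get zero SDL complexity, while a partially mixed state such as diag(0.9, 0.1) gets a positive value.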
Abstract:
The aims were both to determine the lactate and ventilatory thresholds during incremental resistance training and to analyze the acute cardiorespiratory and metabolic responses during constant-load resistance exercise at lactate threshold (LT) intensity. Ten healthy men performed 2 protocols on a leg press machine. The incremental test was performed to determine the lactate and ventilatory thresholds through an algorithmic adjustment method. After 48 h, a constant-load exercise at LT intensity was executed. The intensity of the LT and the ventilatory threshold was 27.1 +/- 3.7 and 30.3 +/- 7.9% of 1RM, respectively (P=0.142). During the constant-load resistance exercise, no significant variation was observed between set 9 and set 15 for blood lactate concentration (3.3 +/- 0.9 and 4.1 +/- 1.4 mmol·L-1, respectively; P=0.166) or the Borg scale (11.5 +/- 2.9 and 13.0 +/- 3.5, respectively; P=0.783). No significant variation was observed between set 6 and set 15 for minute ventilation (19.4 +/- 4.9 and 22.4 +/- 5.5 L·min-1, respectively; P=0.091) or between set 3 and set 15 for VO2 (0.77 +/- 0.18 and 0.83 +/- 0.16 L·min-1, respectively; P=1.0). Constant-load resistance exercise at LT intensity corresponds to a steady state of ventilatory and cardio-metabolic parameters and of ratings of perceived exertion.
Abstract:
We present two new constraint qualifications (CQs) that are weaker than the recently introduced relaxed constant positive linear dependence (RCPLD) CQ. RCPLD is based on the assumption that many subsets of the gradients of the active constraints preserve positive linear dependence locally. A major open question was to identify the exact set of gradients whose properties had to be preserved locally and that would still work as a CQ. This is done in the first new CQ, which we call the constant rank of the subspace component (CRSC) CQ. This new CQ also preserves many of the good properties of RCPLD, such as local stability and the validity of an error bound. We also introduce an even weaker CQ, called the constant positive generator (CPG), which can replace RCPLD in the analysis of the global convergence of algorithms. We close this work by extending convergence results of algorithms belonging to all the main classes of nonlinear optimization methods: sequential quadratic programming, augmented Lagrangians, interior point algorithms, and inexact restoration.
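Positive linear dependence, the building block of CPLD-type conditions, can be tested numerically: vectors v_1, ..., v_m are positively linearly dependent if some nonnegative, not-all-zero combination of them vanishes. A sketch via linear-programming feasibility using SciPy (an illustration of the underlying concept only, not of the paper's constraint qualifications):

```python
import numpy as np
from scipy.optimize import linprog

def positively_linearly_dependent(vectors):
    """True iff there exist lambda_i >= 0, not all zero, with
    sum_i lambda_i * v_i = 0.  Normalizing sum(lambda) = 1 excludes the
    trivial solution, turning the question into LP feasibility."""
    V = np.column_stack(vectors)                      # n x m matrix
    m = V.shape[1]
    A_eq = np.vstack([V, np.ones((1, m))])            # sum lambda_i v_i = 0
    b_eq = np.append(np.zeros(V.shape[0]), 1.0)       # and sum lambda_i = 1
    res = linprog(np.zeros(m), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * m)
    return res.status == 0                            # 0 = feasible/optimal
```

For instance, {(1, 0), (−1, 0)} is positively linearly dependent (take λ = (1, 1)), whereas {(1, 0), (0, 1)} is not; a CQ like CPLD asks that such dependence relations among active-constraint gradients persist in a neighborhood.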
Abstract:
Lagoa Santa, a karstic area in eastern Central Brazil, has been subject to research on human paleontology and archaeology for 175 years. Almost 300 Paleoindian human skeletons have been found since Danish naturalist Peter Lund's pioneering work. Even so, some critical issues such as the role of rockshelters in settlement systems, and the possible paleoclimatic implications of the peopling of the region have yet to be addressed. We present some results obtained from recent excavations at four rockshelters and two open-air sites, new dates for human Paleoindian skeletons, and a model to explain the cultural patterns observed so far. It is also argued that the Paleoindian subsistence system at Lagoa Santa was similar to other locations in South America: generalized small-game hunting complemented by fruits, seed, and root gathering.
Abstract:
Interactive theorem provers (ITPs for short) are tools whose final aim is to certify proofs written by human beings. To reach that objective they have to fill the gap between the high-level language used by humans to communicate and reason about mathematics and the lower-level language that a machine is able to “understand” and process. The user perceives this gap in terms of missing features or inefficiencies; the developer tries to accommodate the user's requests without increasing the already high complexity of these applications. We believe that satisfactory solutions can only come from a strong synergy between users and developers. We devoted most of our PhD to designing and developing the Matita interactive theorem prover. The software was born in the computer science department of the University of Bologna as the result of composing together all the technologies developed by the HELM team (to which we belong) for the MoWGLI project. The MoWGLI project aimed at giving web access to the libraries of formalised mathematics of various interactive theorem provers, taking Coq as the main test case. The motivations for giving life to a new ITP are: • to study the architecture of these tools, with the aim of understanding the source of their complexity; • to exploit that knowledge to experiment with new solutions that, for backward-compatibility reasons, would be hard (if not impossible) to test on a widely used system like Coq. Matita is based on the Curry-Howard isomorphism, adopting the Calculus of Inductive Constructions (CIC) as its logical foundation. Proof objects are thus, to some extent, compatible with the ones produced with the Coq ITP, which is itself able to import and process the ones generated using Matita. Although the systems have a lot in common, they share no code at all, and even most of the algorithmic solutions are different.
The thesis is composed of two parts, in which we respectively describe our experience as a user and as a developer of interactive provers. In particular, the first part is based on two different formalisation experiences: • our internship in the Mathematical Components team (INRIA), which is formalising the finite group theory required to attack the Feit-Thompson Theorem. To tackle this result, which gives an effective classification of finite groups of odd order, the team adopts the SSReflect Coq extension, developed by Georges Gonthier for the proof of the four-colour theorem; • our collaboration on the D.A.M.A. project, whose goal is the formalisation of abstract measure theory in Matita, leading to a constructive proof of Lebesgue's Dominated Convergence Theorem. The most notable issues we faced, analysed in this part of the thesis, are the following: the difficulties arising when using “black box” automation in large formalisations; the impossibility for a user (especially a newcomer) to master the contents of a library of already formalised results; the uncomfortable big-step execution of proof commands historically adopted in ITPs; and the difficult encoding of mathematical structures with a notion of inheritance in a type theory without subtyping, like CIC. In the second part of the manuscript many of these issues are analysed through the lens of an ITP developer, describing the solutions we adopted in the implementation of Matita to solve these problems: integrated searching facilities to assist the user in handling large libraries of formalised results; a small-step execution semantics for proof commands; a flexible implementation of coercive subtyping allowing multiple inheritance with shared substructures; and automatic tactics, integrated with the searching facilities, that generate proof commands (and not only proof objects, which are usually kept hidden from the user), one of which is specifically designed to be user driven.
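The Curry-Howard isomorphism that Matita and Coq build on identifies propositions with types and proofs with programs. For illustration, here is the idea written in Lean syntax (used only as a convenient vehicle; this is neither Matita's nor Coq's concrete syntax):

```lean
-- Under Curry-Howard, implication is the function type: a proof of P → Q
-- is literally a function turning proofs of P into proofs of Q.
theorem modus_ponens (P Q : Prop) (h : P → Q) (p : P) : Q := h p

-- Conjunction is a pair type: proving Q ∧ P from P ∧ Q just swaps the pair.
theorem and_swap (P Q : Prop) (h : P ∧ Q) : Q ∧ P := ⟨h.2, h.1⟩
```

It is such proof terms, elaborated in CIC, that make Matita's proof objects partially interchangeable with Coq's even though the two systems share no code.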
Abstract:
This thesis investigates combinatorial and robust optimisation models for solving railway problems. Railway applications represent a challenging area for operations research. In fact, most problems in this context can be modelled as combinatorial optimisation problems, in which the number of feasible solutions is finite. Yet, despite the astonishing success in the field of combinatorial optimisation, the current state of algorithmic research faces severe difficulties with highly complex and data-intensive applications such as those dealing with optimisation issues in large-scale transportation networks. One of the main issues concerns imperfect information. The idea of robust optimisation, as a way to represent and mathematically handle systems with imprecisely known data, dates back to the 1970s. Unfortunately, none of those techniques has proved successfully applicable in one of the most complex and largest-scale transportation settings: that of railway systems. Railway optimisation deals with planning and scheduling problems over several time horizons. Disturbances are inevitable and severely affect the planning process. Here we focus on two compelling aspects of planning: robust planning and online (real-time) planning.
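The core idea of robust planning under imperfect information can be illustrated with the simplest (absolute) robustness criterion: among candidate plans, pick the one whose worst-case cost over all scenarios is smallest. The toy sketch below is a generic illustration of that criterion, not a model from the thesis; the timetable names and costs are made up.

```python
def robust_choice(costs):
    """costs: dict mapping plan -> list of costs, one per scenario.
    Returns the plan minimizing the worst-case (maximum) cost."""
    return min(costs, key=lambda plan: max(costs[plan]))

# Hypothetical example: three timetables under three delay scenarios.
timetables = {
    "tight":    [10, 10, 40],   # best nominally, terrible when disturbed
    "buffered": [14, 15, 16],   # slightly worse nominally, but stable
    "slack":    [20, 20, 21],
}
```

Here the min-max criterion prefers "buffered": its worst case (16) beats both the nominally cheaper "tight" timetable (worst case 40) and the over-conservative "slack" one (worst case 21), which is exactly the trade-off robust railway planning formalizes.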
Abstract:
The ever-increasing spread of automation in industry puts the electrical engineer in a central role as a promoter of technological development in the use of electricity, which is the basis of all machinery and productive processes. Moreover, the spread of drives for motor control and of static converters with ever more complex structures confronts the electrical engineer with new challenges, whose solution critically involves the implementation of digital control techniques under the requirements of inexpensiveness and efficiency of the final product. The successful application of solutions using non-conventional static converters has awakened increasing interest in science and industry due to the promising opportunities; at the same time, however, new problems emerge whose solution is still under study and debate in the scientific community. During the Ph.D. course several themes were developed that, while attracting the recent and growing interest of the scientific community, leave much space for further research activity and for industrial applications. The first area of research is related to the control of three-phase induction motors with high dynamic performance and to sensorless control in the high-speed range. Operating the induction machine without position or speed sensors attracts interest in the industrial world due to the increased reliability and robustness of this solution, combined with a lower cost of production and purchase compared with the other technologies available on the market. In this dissertation, control techniques are proposed which are able to exploit the total dc-link voltage and, at the same time, the maximum torque capability over the whole speed range with good dynamic performance. The proposed solution preserves the simplicity of tuning of the regulators.
Furthermore, in order to validate the effectiveness of the presented solution, it is assessed in terms of performance and complexity and compared with two other algorithms presented in the literature. The feasibility of the proposed algorithm is also tested on an induction motor drive fed by a matrix converter. Another important research area is connected with the development of technology for vehicular applications. In this field, dynamic performance and low power consumption are among the most important goals for an effective algorithm. In this direction, a control scheme for the induction motor is presented that integrates, within a coherent solution, some of the features commonly required of an electric vehicle drive. The main features of the proposed control scheme are the capability to exploit the maximum torque over the whole speed range, a weak dependence on the motor parameters, a good robustness against variations of the dc-link voltage and, whenever possible, maximum efficiency. The second part of this dissertation is dedicated to multi-phase systems. This technology is characterized by a number of issues worthy of investigation that make it competitive with other technologies already on the market. Multiphase systems allow power to be redistributed over a larger number of phases, thus making possible the construction of electronic converters which would otherwise be very difficult to achieve due to the limits of present power electronics. Multiphase drives have an intrinsic reliability, given by the possibility that a fault in one phase, caused by the possible failure of a component of the converter, can be handled without loss of machine efficiency or the application of a pulsating torque. The control of the spatial harmonics of the air-gap magnetic field with order higher than one makes it possible to reduce torque ripple and to obtain high-torque-density motors and multi-motor applications.
In one of the following chapters, a control scheme able to increase the motor torque by adding a third-harmonic component to the air-gap magnetic field is presented. Above the base speed, the control system reduces the motor flux so as to ensure the maximum torque capability. The presented analysis considers the drive constraints and shows how these limits modify the motor performance. Multi-motor applications consist of a well-defined number of multiphase machines with series-connected stator windings; with a suitable permutation of the phases, these machines can be independently controlled by a single multi-phase inverter. In this dissertation this solution is presented, together with an electric drive consisting of two five-phase PM tubular actuators fed by a single five-phase inverter. Finally, the modulation strategies for a multi-phase inverter are illustrated. The problem of the space vector modulation of multiphase inverters with an odd number of phases is solved in different ways: an algorithmic approach and a look-up-table solution are proposed. The inverter output voltage capability is investigated, showing that the proposed modulation strategy is able to fully exploit the dc input voltage in both sinusoidal and non-sinusoidal operating conditions. All these aspects are considered in the following chapters. In particular, Chapter 1 summarizes the mathematical model of the induction motor. Chapter 2 is a brief state of the art on the three-phase inverter. Chapter 3 proposes a stator flux vector control for a three-phase induction machine and compares this solution with two other algorithms presented in the literature; furthermore, in the same chapter, a complete electric drive based on a matrix converter is presented. In Chapter 4 a control strategy suitable for electric vehicles is illustrated.
Chapter 5 describes the mathematical model of multi-phase induction machines, whereas Chapter 6 analyzes the multi-phase inverter and its modulation strategies. Chapter 7 discusses the minimization of the power losses in IGBT multi-phase inverters with carrier-based pulse width modulation. In Chapter 8 an extended stator flux vector control for a seven-phase induction motor is presented. Chapter 9 concerns high-torque-density applications, and in Chapter 10 different fault-tolerant control strategies are analyzed. Finally, the last chapter presents a positioning multi-motor drive consisting of two PM tubular five-phase actuators fed by a single five-phase inverter.
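Carrier-based modulation of an n-phase inverter, the setting of the power-loss study mentioned above, can be sketched generically: sinusoidal per-phase references plus a min-max zero-sequence injection that centres the references within the carrier band and extends the linear modulation range. This is a textbook-style illustration under assumed conventions, not the modulation strategy proposed in the dissertation.

```python
import math

def duty_cycles(m, theta, n_phases=5):
    """Per-phase duty cycles for carrier-based PWM of an n-phase inverter.
    m: modulation index (phase reference amplitude over half the dc-link
    voltage); theta: electrical angle in radians.  The min-max injection
    centres the references, extending the linear range beyond m = 1."""
    refs = [m * math.cos(theta - 2 * math.pi * k / n_phases)
            for k in range(n_phases)]
    offset = -(max(refs) + min(refs)) / 2.0   # zero-sequence injection
    return [0.5 + 0.5 * (r + offset) for r in refs]
```

With this injection the linear range for an odd-phase inverter extends to m = 1/cos(pi/(2n)); for five phases that is about 1.051, the standard way such schemes exploit the full dc input voltage.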
Abstract:
Osteoarthritis (OA), or degenerative joint disease (DJD), is a pathology which affects the synovial joints and is characterised by a focal loss of articular cartilage and a subsequent bony reaction of the subchondral and marginal bone. Its etiology is best explained by a multifactorial model including age, sex, genetic and systemic factors, other predisposing diseases, and functional stress. In this study the results of the investigation of a modern identified skeletal collection are presented. In particular, we focus on the relationship between the presence of OA at the various joints. The joint modifications have been analysed using a new methodology that allows the scoring of different degrees of expression of the features considered. Materials and Methods: The sample examined comes from the Sassari identified skeletal collection (part of the “Frassetto collections”). The individuals were born between 1828 and 1916 and died between 1918 and 1932. Information about sex and age is known for all the individuals; the occupation is known for 173 males and 125 females. Data concerning the occupation of the individuals indicate a preindustrial and rural society. OA was diagnosed when eburnation (EB) or loss of morphology (LM) was present, or when at least two of the following were present: marginal lipping (ML), exostosis (EX) or erosion (ER). For each articular surface affected, a “mean score” was calculated, reflecting the “severity” of the alterations; a further “score” was calculated for each joint. In the analysis, sexes and age classes were always kept separate. For the statistical analyses, non-parametric tests were used. Results: The results show an increase of OA with age in all the joints analyzed, in particular around 50 and 60 years of age. The shoulder, the hip and the knee are the joints most affected with ageing, while the ankle is the least affected; the correlation values confirm this result.
The lesion showing the strongest correlation with age is ML. In our sample, males are more frequently and more severely affected by OA than females, particularly at the upper limbs, while the hip and the knee are similarly affected in the two sexes. Lateralization shows some positive results, in particular in the right shoulder of males and in various articular surfaces, especially of the upper limb, of both males and females; articular surfaces and joints are almost always lateralized to the right. Occupational analyses did not show remarkable results, probably because of the homogeneity of the sample: the males, although performing different activities, are almost all employed in stressful work. No higher prevalence of knee and hip OA was found in farm-workers compared with the other males. Discussion and Conclusion: In this work we propose a methodology for scoring the different features necessary to diagnose OA that allows the investigation of the severity of joint degeneration. This method is easier than the one proposed by Buikstra and Ubelaker (1994), but at the same time allows quite detailed recording of the features. The epidemiological results can be interpreted quite simply, and they are in accordance with other studies; more difficult is the interpretation of the occupational results, because many questions concerning the activities performed by the individuals of the collection during their lifespan cannot be resolved. Because of this, caution is suggested in the interpretation of bioarcheological specimens. With this work we hope to contribute to the discussion of the puzzling problem of the etiology of OA. The possibility of studying identified skeletons will add important data to the description of the osseous features of OA, enriching the medical documentation based on different criteria. Even though we are aware that the clinical diagnosis is different from the palaeopathological one, we think our work will be useful in clarifying some epidemiological as well as pathological aspects of OA.