998 results for Program Optimization


Relevance: 100.00%

Abstract:

Purpose: The purpose of this study was to examine the influence of three different high-intensity interval training (HIT) regimens on endurance performance in highly trained endurance athletes. Methods: Before, and after 2 and 4 wk of training, 38 cyclists and triathletes (mean ± SD; age = 25 ± 6 yr; mass = 75 ± 7 kg; VO2peak = 64.5 ± 5.2 mL·kg⁻¹·min⁻¹) performed: 1) a progressive cycle test to measure peak oxygen consumption (VO2peak) and peak aerobic power output (PPO), 2) a time to exhaustion test (Tmax) at their VO2peak power output (Pmax), as well as 3) a 40-km time trial (TT40). Subjects were matched and assigned to one of four training groups (G1, N = 8, 8 × 60% Tmax at Pmax, 1:2 work:recovery ratio; G2, N = 9, 8 × 60% Tmax at Pmax, recovery at 65% HRmax; G3, N = 10, 12 × 30 s at 175% PPO, 4.5-min recovery; GCON, N = 11). In addition to G1, G2, and G3 performing HIT twice per week, all athletes maintained their regular low-intensity training throughout the experimental period. Results: All HIT groups improved TT40 performance (+4.4 to +5.8%) and PPO (+3.0 to +6.2%) significantly more than GCON (-0.9 to +1.1%; P < 0.05). Furthermore, G1 (+5.4%) and G2 (+8.1%) improved their VO2peak significantly more than GCON (+1.0%; P < 0.05). Conclusion: The present study has shown that when HIT incorporates Pmax as the interval intensity and 60% of Tmax as the interval duration, already highly trained cyclists can significantly improve their 40-km time trial performance. Moreover, the present data confirm prior research, in that repeated supramaximal HIT can significantly improve 40-km time trial performance.
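The G1 prescription above is pure arithmetic once an athlete's Tmax and Pmax are known: 8 intervals at Pmax, each lasting 60% of Tmax, with a 1:2 work:recovery ratio. A minimal sketch (the Tmax and Pmax values below are illustrative, not from the study):

```python
# Sketch of the G1 HIT prescription from the abstract:
# 8 work intervals at Pmax, each lasting 60% of Tmax,
# with a 1:2 work:recovery ratio. Tmax/Pmax values are illustrative.

def g1_session(tmax_s: float, pmax_w: float, reps: int = 8):
    """Return (work_s, recovery_s, intensity_w, total_s) for one session."""
    work_s = 0.6 * tmax_s     # interval duration = 60% of Tmax
    recovery_s = 2 * work_s   # 1:2 work:recovery ratio
    total_s = reps * (work_s + recovery_s)
    return work_s, recovery_s, pmax_w, total_s

work, rec, power, total = g1_session(tmax_s=240, pmax_w=420)
print(work, rec, total)  # 144.0 288.0 3456.0
```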

Relevance: 70.00%

Abstract:

We discuss a framework for the application of abstract interpretation as an aid during program development, rather than in the more traditional application of program optimization. Program validation and detection of errors are first performed statically by comparing (partial) specifications written in terms of assertions against information obtained from (global) static analysis of the program. The results of this process are expressed in the user assertion language. Assertions (or parts of assertions) which cannot be checked statically are translated into run-time tests. The framework allows the use of assertions to be optional. It also allows using very general properties in assertions, beyond the predefined set understandable by the static analyzer and including properties defined by user programs. We also report briefly on an implementation of the framework. The resulting tool generates and checks assertions for Prolog, CLP(R), and CHIP/CLP(fd) programs, and integrates compile-time and run-time checking in a uniform way. The tool allows using properties such as types, modes, non-failure, determinacy, and computational cost, and can treat modules separately, performing incremental analysis.

Relevance: 70.00%

Abstract:

Knowing the size of the terms to which program variables are bound at run-time in logic programs is required in a class of applications related to program optimization such as, for example, recursion elimination and granularity analysis. Such size is difficult to even approximate at compile time and is thus generally computed at run-time by using (possibly predefined) predicates which traverse the terms involved. We propose a technique based on program transformation which has the potential of performing this computation much more efficiently. The technique is based on finding program procedures which are called before those in which knowledge regarding term sizes is needed and which traverse the terms whose size is to be determined, and transforming such procedures so that they compute term sizes "on the fly". We present a systematic way of determining whether a given program can be transformed in order to compute a given term size at a given program point without additional term traversal. Also, if several such transformations are possible our approach allows finding minimal transformations under certain criteria. We also discuss the advantages and present some applications of our technique.
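The transformation described above can be illustrated outside of logic programming. The following is a Python sketch of the idea only, not the paper's technique as applied to Prolog: a procedure that already traverses a term is transformed to also accumulate the term's size, so a later consumer (e.g., a granularity check) needs no additional traversal. (In a logic language, obtaining a list's length costs a traversal; Python's len is constant-time, so the contrast here is purely illustrative.)

```python
# Illustrative contrast: the original procedure traverses the list once;
# the "transformed" procedure performs the same traversal but computes
# the size "on the fly" and returns it alongside the result.

def scale(xs, k):
    # Original procedure: one traversal, size not available afterwards
    # without traversing again.
    return [k * x for x in xs]

def scale_with_size(xs, k):
    # Transformed procedure: size accumulated during the same traversal.
    out, n = [], 0
    for x in xs:
        out.append(k * x)
        n += 1
    return out, n

ys, n = scale_with_size([1, 2, 3, 4], 10)
print(ys, n)  # [10, 20, 30, 40] 4
```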

Relevance: 70.00%

Abstract:

We present a framework for the application of abstract interpretation as an aid during program development, rather than in the more traditional application of program optimization. Program validation and detection of errors are first performed statically by comparing (partial) specifications written in terms of assertions against information obtained from static analysis of the program. The results of this process are expressed in the user assertion language. Assertions (or parts of assertions) which cannot be verified statically are translated into run-time tests. The framework allows the use of assertions to be optional. It also allows using very general properties in assertions, beyond the predefined set understandable by the static analyzer and including properties defined by means of user programs. We also report briefly on an implementation of the framework. The resulting tool generates and checks assertions for Prolog, CLP(R), and CHIP/CLP(fd) programs, and integrates compile-time and run-time checking in a uniform way. The tool allows using properties such as types, modes, non-failure, determinacy, and computational cost, and can treat modules separately, performing incremental analysis. In practice, this modularity allows statically detecting bugs in user programs even if they do not contain any assertions.
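The step of translating statically unverifiable assertions into run-time tests can be sketched in miniature. This is an illustrative Python analogue with invented names (rt_check, sum_abs), not the framework's actual assertion language: parts of an assertion that analysis could not prove are compiled into checks that run before and after the procedure.

```python
# Sketch (hypothetical names) of turning assertion parts that static
# analysis could not verify into run-time tests: the wrapper checks the
# unproven precondition and postcondition and raises on violation.
# Statically proven parts would simply be omitted from the wrapper.

def rt_check(pre, post):
    def deco(f):
        def wrapped(*args):
            assert pre(*args), f"precondition of {f.__name__} failed"
            r = f(*args)
            assert post(r), f"postcondition of {f.__name__} failed"
            return r
        return wrapped
    return deco

@rt_check(pre=lambda xs: all(isinstance(x, int) for x in xs),
          post=lambda r: r >= 0)
def sum_abs(xs):
    return sum(abs(x) for x in xs)

print(sum_abs([3, -4]))  # 7
```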

Relevance: 70.00%

Abstract:

Knowing the size of the terms to which program variables are bound at run-time in logic programs is required in a class of applications related to program optimization such as, for example, granularity analysis and selection among different algorithms or control rules whose performance may be dependent on such size. Such size is difficult to even approximate at compile time and is thus generally computed at run-time by using (possibly predefined) predicates which traverse the terms involved. We propose a technique based on program transformation which has the potential of performing this computation much more efficiently. The technique is based on finding program procedures which are called before those in which knowledge regarding term sizes is needed and which traverse the terms whose size is to be determined, and transforming such procedures so that they compute term sizes "on the fly". We present a systematic way of determining whether a given program can be transformed in order to compute a given term size at a given program point without additional term traversal. Also, if several such transformations are possible our approach allows finding minimal transformations under certain criteria. We also discuss the advantages and applications of our technique and present some performance results.

Relevance: 70.00%

Abstract:

Knowing the size of the terms to which program variables are bound at run-time in logic programs is required in a class of applications related to program optimization such as, for example, recursion elimination and granularity analysis. Such size is difficult to even approximate at compile time and is thus generally computed at run-time by using (possibly predefined) predicates which traverse the terms involved. We propose a technique based on program transformation which has the potential of performing this computation much more efficiently. The technique is based on finding program procedures which are called before those in which knowledge regarding term sizes is needed and which traverse the terms whose size is to be determined, and transforming such procedures so that they compute term sizes "on the fly". We present a systematic way of determining whether a given program can be transformed in order to compute a given term size at a given program point without additional term traversal. Also, if several such transformations are possible our approach allows finding minimal transformations under certain criteria. We also discuss the advantages and present some applications of our technique.

Relevance: 70.00%

Abstract:

The article describes the structure of an ontology model for the optimization of a sequential program. The components of an intelligent modeling system for program optimization are described, and the functions of the intelligent modeling system are defined.

Relevance: 60.00%

Abstract:

The goal of this thesis is to develop a service product for ABB that can be offered to power plant customers. The new service product must align with ABB's new strategy. The service offers customers the execution of the mandatory measures defined by the Energy Efficiency Act, which entered into force on 1 January 2015. The thesis collects, processes, and analyzes information to support decision-making in the productization process of a service aimed at power plant customers. To develop the service product, ABB's existing service products, expertise, and reference projects are studied, together with the Energy Efficiency Act, the energy efficiency potential of power plants, and various energy audit models. To support decision-making, an energy analysis of a power plant is carried out as a reference project, in which the power plant is modeled with the ipsePRO simulation program. The model and test runs are used to study the optimization of the power plant's minimum load. A market study examines the impact of legislation, the current market situation, potential customers, competitors, and ABB's opportunities to operate in the field by means of a SWOT analysis. Based on the results of the study, a decision is made to productize a service for power plants that covers all the measures required to meet the Energy Efficiency Act's requirements with respect to acting as the responsible person for a company energy audit and carrying out the company energy audit and the site audits. In addition, during the thesis the Energy Authority (Energiavirasto) granted ABB the qualification to act as the responsible person for a company energy audit, which is a prerequisite for offering the service.

Relevance: 60.00%

Abstract:

Objectives. A large-scale survey of doses to patients undergoing the most frequent radiological examinations was carried out in health services in Sao Paulo (347 radiological examinations per 1 000 inhabitants), the most populous Brazilian state. Methods. A postal dosimetric kit with thermoluminescence dosimeters was used to evaluate the entrance surface dose (ESD) to patients. A stratified sampling technique applied to the national health database furnished important data on the distribution of equipment and the annual number of examinations. Chest, head (skull and sinus), and spine (cervical, thoracic, and lumbar) examinations were included in the trial. A total of 83 rooms and 868 patients were included, and 1 415 values of ESD were measured. Results. The data show large coefficients of variation in tube charge, giving rise to large variations in ESD values. Also, a series of high ESD values associated with unnecessary localizing fluoroscopy were detected. Diagnostic reference levels were determined, based on the 75th percentile (third quartile) of the ESD distributions. For adult patients, the diagnostic reference levels achieved are very similar to those obtained in international surveys. However, the situation is different for pediatric patients: the ESD values found in this survey are twice as large as the international recommendations for chest radiographs of children. Conclusions. Despite the reduced number of ESD values and rooms for the pediatric patient group, it is recommended that practices in chest examinations be revised and that specific national reference doses and image quality be established after a broader survey is carried out.
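The derivation of a diagnostic reference level described above reduces to taking the third quartile of the measured ESD distribution. A minimal sketch with hypothetical ESD values (the survey's actual data are not reproduced here):

```python
# Computing a diagnostic reference level (DRL) as the 75th percentile
# (third quartile) of an ESD distribution, as described in the abstract.
# The ESD values below are illustrative, not the survey's data.
import statistics

esd_mgy = [0.12, 0.15, 0.18, 0.20, 0.22, 0.25, 0.30, 0.45]  # hypothetical
q1, q2, q3 = statistics.quantiles(esd_mgy, n=4)  # quartile cut points
drl = q3  # DRL = third quartile of the ESD distribution
print(drl)
```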

Relevance: 60.00%

Abstract:

Global data-flow analysis of (constraint) logic programs, which is generally based on abstract interpretation [7], is reaching a comparatively high level of maturity. A natural question is whether it is time for its routine incorporation in standard compilers, something which, beyond a few experimental systems, has not happened to date. Such incorporation arguably makes good sense only if:

• the range of applications of global analysis is large enough to justify the additional complication in the compiler, and
• global analysis technology can deal with all the features of "practical" languages (e.g., the ISO-Prolog built-ins) and "scales up" for large programs.

We present a tutorial overview of a number of concepts and techniques directly related to the issues above, with special emphasis on the first one. In particular, we concentrate on novel uses of global analysis during program development and debugging, rather than on the more traditional application area of program optimization. The idea of using abstract interpretation for validation and diagnosis has been studied in the context of imperative programming [2] and also of logic programming. The latter work includes issues such as using approximations to reduce the burden posed on programmers by declarative debuggers [6, 3] and automatically generating and checking assertions [4, 5] (which includes the more traditional type checking of strongly typed languages, such as Gödel or Mercury [1, 8, 9]). We also review some solutions for scalability including modular analysis, incremental analysis, and widening. Finally, we discuss solutions for dealing with meta-predicates, side-effects, delay declarations, constraints, dynamic predicates, and other such features which may appear in practical languages. In the discussion we will draw both from the literature and from our experience and that of others in the development and use of the CIAO system analyzer.

In order to emphasize the practical aspects of the solutions discussed, the presentation of several concepts will be illustrated by examples run on the CIAO system, which makes extensive use of global analysis and assertions.

Relevance: 60.00%

Abstract:

Abstract interpretation-based data-flow analysis of logic programs is at this point relatively well understood from the point of view of general frameworks and abstract domains. On the other hand, comparatively little attention has been given to the problems which arise when analysis of a full, practical dialect of the Prolog language is attempted, and only a few solutions to these problems have been proposed to date. Such problems relate to dealing correctly with all builtins, including meta-logical and extra-logical predicates, with dynamic predicates (where the program is modified during execution), and with the absence of certain program text during compilation. Existing proposals for dealing with such issues generally restrict in one way or another the classes of programs which can be analyzed if the information from analysis is to be used for program optimization. This paper attempts to fill this gap by considering a full dialect of Prolog, essentially following the recently proposed ISO standard, pointing out the problems that may arise in the analysis of such a dialect, and proposing a combination of known and novel solutions that together allow the correct analysis of arbitrary programs using the full power of the language.

Relevance: 40.00%

Abstract:

The technique of Abstract Interpretation has allowed the development of very sophisticated global program analyses which are at the same time provably correct and practical. We present in a tutorial fashion a novel program development framework which uses abstract interpretation as a fundamental tool. The framework uses modular, incremental abstract interpretation to obtain information about the program. This information is used to validate programs, to detect bugs with respect to partial specifications written using assertions (in the program itself and/or in system libraries), to generate and simplify run-time tests, and to perform high-level program transformations such as multiple abstract specialization, parallelization, and resource usage control, all in a provably correct way. In the case of validation and debugging, the assertions can refer to a variety of program points such as procedure entry, procedure exit, points within procedures, or global computations. The system can reason with much richer information than, for example, traditional types. This includes data structure shape (including pointer sharing), bounds on data structure sizes, and other operational variable instantiation properties, as well as procedure-level properties such as determinacy, termination, nonfailure, and bounds on resource consumption (time or space cost). CiaoPP, the preprocessor of the Ciao multi-paradigm programming system, which implements the described functionality, will be used to illustrate the fundamental ideas.

Relevance: 40.00%

Abstract:

We present a tutorial overview of Ciaopp, the Ciao system preprocessor. Ciao is a public-domain, next-generation logic programming system, which subsumes ISO-Prolog and is specifically designed to a) be highly extensible via libraries and b) support modular program analysis, debugging, and optimization. The latter tasks are performed in an integrated fashion by Ciaopp. Ciaopp uses modular, incremental abstract interpretation to infer properties of program predicates and literals, including types, variable instantiation properties (including modes), non-failure, determinacy, bounds on computational cost, bounds on sizes of terms in the program, etc. Using such analysis information, Ciaopp can find errors at compile-time in programs and/or perform partial verification. Ciaopp checks how programs call system libraries and also any assertions present in the program or in other modules used by the program. These assertions are also used to generate documentation automatically. Ciaopp also uses analysis information to perform program transformations and optimizations such as multiple abstract specialization, parallelization (including granularity control), and optimization of run-time tests for properties which cannot be checked completely at compile-time. We illustrate "hands-on" the use of Ciaopp in all these tasks. By design, Ciaopp is a generic tool, which can be easily tailored to perform these and other tasks for different LP and CLP dialects.

Relevance: 40.00%

Abstract:

We present in a tutorial fashion CiaoPP, the preprocessor of the Ciao multi-paradigm programming system, which implements a novel program development framework which uses abstract interpretation as a fundamental tool. The framework uses modular, incremental abstract interpretation to obtain information about the program. This information is used to validate programs, to detect bugs with respect to partial specifications written using assertions (in the program itself and/or in system libraries), to generate and simplify run-time tests, and to perform high-level program transformations such as multiple abstract specialization, parallelization, and resource usage control, all in a provably correct way. In the case of validation and debugging, the assertions can refer to a variety of program points such as procedure entry, procedure exit, points within procedures, or global computations. The system can reason with much richer information than, for example, traditional types. This includes data structure shape (including pointer sharing), bounds on data structure sizes, and other operational variable instantiation properties, as well as procedure-level properties such as determinacy, termination, non-failure, and bounds on resource consumption (time or space cost).

Relevance: 40.00%

Abstract:

An optimized structure for an educational program, consisting of a set of interconnected educational objects, is obtained by solving the problem of optimally partitioning an acyclic weighted graph. A condition for preserving acyclicity in the subgraphs is formulated, and a quantitative assessment of the decision options is carried out. An original algorithm for finding a quasi-optimal partition is proposed, based on a genetic algorithm scheme with chromosomes coded as permutations. An object-oriented implementation of the algorithm in C++ is described, and the results of numerical experiments are presented.
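The core idea above can be sketched in miniature. This is an illustrative Python sketch, not the paper's C++ implementation, and it replaces full genetic operators with elitist selection plus random immigrants for brevity: chromosomes are topological orders (permutations) of a small weighted DAG, decoded into contiguous blocks; cutting a topological order guarantees the resulting partition preserves acyclicity between parts; fitness is the weight of edges crossing block boundaries, to be minimized.

```python
# Minimal sketch (not the paper's algorithm) of permutation-coded search
# for partitioning a weighted DAG: a chromosome is a topological order,
# decoded into k contiguous blocks; cutting a topological order keeps all
# cross-block dependencies pointing forward, so acyclicity is preserved.
import random

edges = {("a", "b"): 3, ("a", "c"): 1, ("b", "d"): 2, ("c", "d"): 4}
nodes = ["a", "b", "c", "d"]

def random_topo_order(rng):
    # Kahn's algorithm with random tie-breaking among ready nodes.
    indeg = {v: 0 for v in nodes}
    for (_, v) in edges:
        indeg[v] += 1
    ready = [v for v in nodes if indeg[v] == 0]
    order = []
    while ready:
        v = ready.pop(rng.randrange(len(ready)))
        order.append(v)
        for (u, w) in edges:
            if u == v:
                indeg[w] -= 1
                if indeg[w] == 0:
                    ready.append(w)
    return order

def blocks(order, k):
    # Decode a chromosome into k near-equal contiguous blocks.
    size = -(-len(order) // k)  # ceiling division
    return [order[i:i + size] for i in range(0, len(order), size)]

def fitness(order, k):
    # Total weight of edges whose endpoints land in different blocks.
    part = {v: i for i, b in enumerate(blocks(order, k)) for v in b}
    return sum(w for (u, v), w in edges.items() if part[u] != part[v])

rng = random.Random(0)
population = [random_topo_order(rng) for _ in range(20)]
for _ in range(30):  # elitist selection + random immigrants, for brevity
    population.sort(key=lambda o: fitness(o, 2))
    population = population[:10] + [random_topo_order(rng) for _ in range(10)]
best = min(population, key=lambda o: fitness(o, 2))
print(best, fitness(best, 2))  # ['a', 'b', 'c', 'd'] 3
```

With this tiny graph only two topological orders exist; the partition {a, b} | {c, d} cuts weight 1 + 2 = 3, whereas {a, c} | {b, d} cuts 3 + 4 = 7, so the search settles on the former.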