957 results for Optimization algorithms
Abstract:
There is a positive correlation between the intensity of use of a given antibiotic and the prevalence of resistant strains. The more you treat, the more patients infected with resistant strains appear and, as a consequence, the higher the mortality due to the infection and the longer the hospitalization time. In contrast, the less you treat, the higher the mortality rates and the longer the hospitalization times of patients infected with sensitive strains that could have been successfully treated. The hypothesis proposed in this paper is an attempt to resolve this conflict: there must be an optimum treatment intensity that minimizes both the additional mortality and the hospitalization time due to infection by both sensitive and resistant bacterial strains. To test this hypothesis we applied a simple mathematical model that allowed us to estimate the optimum proportion of patients to be treated in order to minimize the total number of deaths and total hospitalization time due to the infection in a hospital setting.
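A minimal sketch of the trade-off the abstract poses, assuming hypothetical cost curves; the quadratic burden functions below are invented for illustration and are not the paper's model:

```python
import numpy as np

# Total burden (deaths + hospitalization) as a function of the fraction p
# of patients treated. Both curves are assumed shapes, not fitted data.

def burden_sensitive(p):
    # Burden from untreatable sensitive infections grows as fewer are treated.
    return (1.0 - p) ** 2

def burden_resistant(p):
    # Burden from resistance grows with treatment intensity.
    return 0.8 * p ** 2

p = np.linspace(0.0, 1.0, 1001)
total = burden_sensitive(p) + burden_resistant(p)
p_opt = p[np.argmin(total)]
print(f"optimum fraction of patients treated: {p_opt:.2f}")
```

With any pair of opposing monotone cost curves like these, the sum has an interior minimum, which is the optimum treatment intensity the paper hypothesizes.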
Abstract:
We suggest a new notion of behaviour-preserving transition refinement based on partial-order semantics. We introduced transition refinement for elementary (low-level) Petri nets earlier. For modelling and verifying complex distributed algorithms, high-level (algebraic) Petri nets are usually used. In this paper, we define transition refinement for algebraic Petri nets. This notion is more powerful than transition refinement for elementary Petri nets because it corresponds to the simultaneous refinement of several transitions in an elementary net. Transition refinement is particularly suitable for refinement steps that increase the degree of distribution of an algorithm, e.g. when synchronous communication is replaced by asynchronous message passing. We also study how to prove that the replacement of a transition is a transition refinement.
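As a rough illustration of the kind of refinement step the abstract mentions (synchronous communication replaced by asynchronous message passing), here is a toy sketch over a dictionary encoding of an elementary net; the net, place names, and transition names are all hypothetical and do not reproduce the paper's formalism:

```python
# Toy elementary net: transition "sync" is a synchronous handshake
# between agents A and B. Each transition maps to (preset, postset).
places = {"ready_A", "ready_B", "done_A", "done_B"}
transitions = {
    "sync": ({"ready_A", "ready_B"}, {"done_A", "done_B"}),
}

# Refinement step: replace "sync" by asynchronous send/receive
# transitions communicating through a fresh message place.
transitions.pop("sync")
places.add("msg")
transitions["send"] = ({"ready_A"}, {"msg", "done_A"})  # A sends and proceeds
transitions["recv"] = ({"msg", "ready_B"}, {"done_B"})  # B consumes the message

print(transitions)
```

The refined net is more distributed: A no longer waits for B, which is exactly the increase in the degree of distribution that transition refinement is meant to justify as behaviour-preserving.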
Abstract:
This study aimed to develop a plate to treat fractures of the mandibular body in dogs and to validate the design using finite element analysis and biomechanical tests. Mandible prototypes were produced with 10 oblique ventrorostral fractures (favorable) and 10 oblique ventrocaudal fractures (unfavorable). Three groups were established for each fracture type. Osteosynthesis used a pure titanium plate of double-arch geometry and locking monocortical screws of free angulation. The mechanical resistance of the prototype with the unfavorable fracture was lower than that of the favorable fracture. In both fracture types, deflection increased and relative stiffness decreased proportionally as the number of screws diminished. The finite element analysis validated the plate design, since the maximum tension concentration observed on the plate was lower than the resistance limit tension of the titanium. In conclusion, the double-arch geometry plate fixed with locking monocortical screws has sufficient resistance to stabilize oblique fractures without compromising mandibular dental or neurovascular structures. J Vet Dent 24(7): 212-221, 2010
Abstract:
A data warehouse is a data repository that collects and maintains a large amount of data from multiple distributed, autonomous and possibly heterogeneous data sources. Often the data is stored in the form of materialized views in order to provide fast access to the integrated data. One of the most important decisions in designing a data warehouse is the selection of views to materialize. The objective is to select an appropriate set of views that minimizes the total query response time, under the constraint that the total maintenance time for these materialized views is within a given bound. This view selection problem is fundamentally different from the view selection problem under a disk space constraint. In this paper the view selection problem under the maintenance time constraint is investigated. Two efficient heuristic algorithms for the problem are proposed. The key to devising the proposed algorithms is to define good heuristic functions and to reduce the problem to well-solved optimization problems. As a result, an approximate solution of the known optimization problem gives a feasible solution of the original problem.
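To make the setting concrete, here is a minimal greedy sketch of view selection under a maintenance-time budget; the benefit/cost figures and the benefit-per-cost ranking rule are illustrative assumptions, not the specific heuristic functions proposed in the paper:

```python
# Candidate views: view -> (query_time_saved, maintenance_time).
# All numbers are invented for the example.
views = {
    "v1": (120, 30),
    "v2": (90, 10),
    "v3": (60, 25),
    "v4": (40, 5),
}

def greedy_select(views, budget):
    """Pick views by benefit per unit of maintenance cost until the budget is spent."""
    chosen, spent = [], 0
    ranked = sorted(views, key=lambda v: views[v][0] / views[v][1], reverse=True)
    for v in ranked:
        saved, cost = views[v]
        if spent + cost <= budget:
            chosen.append(v)
            spent += cost
    return chosen, spent

print(greedy_select(views, budget=40))
```

Ranking by benefit per unit cost is the classic knapsack-style heuristic that such reductions to well-solved optimization problems typically target.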
Abstract:
The problem of designing spatially cohesive nature reserve systems that meet biodiversity objectives is formulated as a nonlinear integer programming problem. The multiobjective function minimises a combination of boundary length, area and failed representation of the biological attributes we are trying to conserve. The task is to reserve a subset of sites that best meet this objective. We use data on the distribution of habitats in the Northern Territory, Australia, to show how simulated annealing and a greedy heuristic algorithm can be used to generate good solutions to such large reserve design problems, and to compare the effectiveness of these methods.
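A compact simulated-annealing sketch for this kind of site selection; the weighted objective mirrors the abstract, while the toy sites, weights, cooling schedule, and the simplification of treating boundary length as a per-site quantity (rather than depending on spatial adjacency) are assumptions made for the example:

```python
import math
import random

random.seed(0)
N = 20
area = [random.uniform(1, 5) for _ in range(N)]
boundary = [random.uniform(1, 4) for _ in range(N)]
# Each site covers a random subset of 10 habitat types to be represented.
habitats = [set(random.sample(range(10), 3)) for _ in range(N)]
TARGETS = set(range(10))
W_AREA, W_BOUND, W_MISS = 1.0, 1.0, 50.0  # penalty weights (assumed)

def cost(selected):
    covered = set().union(*(habitats[i] for i in selected)) if selected else set()
    missed = len(TARGETS - covered)  # failed representation
    return (W_AREA * sum(area[i] for i in selected)
            + W_BOUND * sum(boundary[i] for i in selected)
            + W_MISS * missed)

best = current = frozenset(random.sample(range(N), 5))
T = 10.0
for step in range(5000):
    i = random.randrange(N)  # propose flipping one site in or out
    candidate = current - {i} if i in current else current | {i}
    delta = cost(candidate) - cost(current)
    if delta < 0 or random.random() < math.exp(-delta / T):
        current = candidate
        if cost(current) < cost(best):
            best = current
    T *= 0.999  # geometric cooling

print(sorted(best), round(cost(best), 2))
```

In a real reserve design the boundary term depends on adjacency, since selecting neighbouring sites shortens the shared perimeter; that spatial coupling is what drives the cohesive shapes the paper seeks.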
Abstract:
Purpose: The purpose of this study was to examine the influence of three different high-intensity interval training (HIT) regimens on endurance performance in highly trained endurance athletes. Methods: Before, and after 2 and 4 wk of training, 38 cyclists and triathletes (mean +/- SD; age = 25 +/- 6 yr; mass = 75 +/- 7 kg; VO2peak = 64.5 +/- 5.2 mL·kg⁻¹·min⁻¹) performed: 1) a progressive cycle test to measure peak oxygen consumption (VO2peak) and peak aerobic power output (PPO), 2) a time-to-exhaustion test (Tmax) at their VO2peak power output (Pmax), and 3) a 40-km time trial (TT40). Subjects were matched and assigned to one of four training groups (G1, N = 8, 8 x 60% of Tmax at Pmax, 1:2 work:recovery ratio; G2, N = 9, 8 x 60% of Tmax at Pmax, recovery at 65% HRmax; G3, N = 10, 12 x 30 s at 175% PPO, 4.5-min recovery; GCON, N = 11). In addition to G1, G2, and G3 performing HIT twice per week, all athletes maintained their regular low-intensity training throughout the experimental period. Results: All HIT groups improved TT40 performance (+4.4 to +5.8%) and PPO (+3.0 to +6.2%) significantly more than GCON (-0.9 to +1.1%; P < 0.05). Furthermore, G1 (+5.4%) and G2 (+8.1%) improved their VO2peak significantly more than GCON (+1.0%; P < 0.05). Conclusion: The present study has shown that when HIT incorporates Pmax as the interval intensity and 60% of Tmax as the interval duration, already highly trained cyclists can significantly improve their 40-km time trial performance. Moreover, the present data confirm prior research in that repeated supramaximal HIT can significantly improve 40-km time trial performance.