890 results for problem-solving algorithms
Abstract:
Injection drug use (the injection of illicit opiates) poses serious public health problems in many countries. Research indicates that injection drug users are at elevated risk of morbidity in the form of HIV/AIDS and hepatitis B and C, of drug-related mortality, and of increased criminal activity. Methadone maintenance treatment is the most prominent form of pharmacotherapy for illicit opiate dependence in several countries, and its application varies internationally with respect to treatment regulations and delivery modes. In order to effectively treat patients who have previously been resistant to methadone maintenance treatment, several countries have been studying or considering heroin-assisted treatment as a complementary form of opiate pharmacotherapy. This paper provides an overview of the prevalence of injection drug use and opiate dependence internationally, the current opiate dependence treatment landscape in several countries, and the status of ongoing or planned heroin-assisted treatment trials in Australia, Canada and certain European countries.
Abstract:
As in the standard land assembly problem, a developer wants to buy two adjacent blocks of land belonging to two different owners. The value of the two blocks of land to the developer is greater than the sum of the individual values of the blocks for each owner. Unlike the land assembly literature, however, our focus is on the incentive that each lot owner has to delay the start of negotiations, rather than on the public goods nature of the problem. An incentive for delay exists, for example, when owners perceive that being last to sell will allow them to capture a larger share of the joint surplus from the development. We show that competition at the point of sale can cause equilibrium delay, and that cooperation at the point of sale will eliminate delay. This suggests that strategic delay is another source of inefficient allocation of land, in addition to the public-good type externality pointed out by Grossman and Hart [Bell Journal of Economics 11 (1980) 42] and O'Flaherty [Regional Science and Urban Economics 24 (1994) 287].
Abstract:
This article intends to rationally reconstruct Locke's theory of knowledge as incorporated in a research program concerning the nature and structure of the theories and models of rationality. In previous articles we argued that the rationalist program can be subdivided into the classical rationalistic subprogram, which includes the knowledge theories of Descartes, Locke, Hume and Kant; the neoclassical subprogram, which includes the approaches of Duhem, Poincaré and Mach; and the critical subprogram of Popper. The subdivision results from the different views of rationality proposed by each of these subprograms, as well as from the tools each makes available: theoretical instruments used to arrange, organize and develop the discussion of rationality, the main one being the structure of problem solving. In this essay we reconstruct the assumptions of Locke's theory of knowledge, which in our view belongs to the classical rationalistic subprogram because it shares with it the thesis of the identity of (scientific) knowledge and certain knowledge.
Abstract:
This paper develops a multi-regional general equilibrium model for climate policy analysis based on the latest version of the MIT Emissions Prediction and Policy Analysis (EPPA) model. We develop two versions so that we can solve the model either as a fully inter-temporal optimization problem (forward-looking, perfect foresight) or recursively. The standard EPPA model on which these models are based is solved recursively, and it is necessary to simplify some aspects of it to make inter-temporal solution possible. The forward-looking capability allows one to better address economic and policy issues such as borrowing and banking of GHG allowances, efficiency implications of environmental tax recycling, endogenous depletion of fossil resources, international capital flows, and optimal emissions abatement paths, among others. To evaluate the solution approaches, we benchmark each version to the same macroeconomic path, and then compare the behavior of the two versions under a climate policy that restricts greenhouse gas emissions. We find that the energy sector and CO2 price behavior are similar in both versions (in the recursive version of the model we force the inter-temporal theoretical efficiency result that abatement through time should be allocated such that the CO2 price rises at the interest rate). The main difference that arises is that the macroeconomic costs are substantially lower in the forward-looking version of the model, since it allows consumption shifting as an additional avenue of adjustment to the policy. On the other hand, the simplifications required for solving the model as an optimization problem, such as dropping the full vintaging of the capital stock and fewer explicit technological options, likely have effects on the results. Moreover, inter-temporal optimization with perfect foresight poorly represents the real economy, where agents face high levels of uncertainty that likely lead to higher costs than if they knew the future with certainty. We conclude that while the forward-looking model has value for some problems, the recursive model produces similar behavior in the energy sector and provides greater flexibility in the details of the system that can be represented.
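The efficiency condition imposed on the recursive version is the standard Hotelling-type rule for bankable emissions allowances. As a textbook statement of the condition the abstract references (not a formula taken from the paper itself), with interest rate r:

```latex
% Hotelling-type efficiency condition for banked CO2 allowances:
% abatement is allocated over time so that the allowance price p_t
% rises at the interest rate r, which equalizes discounted prices.
p_{t+1} = (1+r)\,p_t \qquad\Longrightarrow\qquad p_t = (1+r)^{t}\,p_0
```

Under this rule, one more unit of abatement today and one less tomorrow leaves the discounted cost of compliance unchanged, which is why the recursive and forward-looking versions can be made to produce similar energy-sector behavior.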
Abstract:
Teen Triple P is a multilevel system of intervention designed to provide parents with specific strategies to promote the positive development of their teenage children as they make the transition into high school and through puberty. The program is based on a combination of education about the developmental needs of adolescents, skills training to improve communication and problem-solving, plus specific modules to deal with common problems encountered by parents and adolescents that can escalate into major conflict and violence. It is designed to increase the engagement of parents of adolescent and pre-adolescent children by providing them with easy access to evidence-based parenting advice and support. This paper presents data collected as part of a survey of over 1400 first-year high school students at 9 Brisbane schools. The survey instrument was constructed to obtain students' reports about behaviour known to be associated with their health and wellbeing, and also about the extent to which their parents promoted or discouraged such behaviour at home, at school, and in their social and recreational activities in the wider community. Selected data from the survey were extracted and presented to parents at a series of parenting seminars held at the schools to promote appropriate parenting of teenagers. The objectives were to provide parents with accurate data about teenagers' behaviour, and about teenagers' reports of how they perceived their parents' behaviour. Normative data on parent and teenager behaviour will be presented from the survey, as well as psychometric data relating to the reliability and validity of this new measure. Implications of this strategy for increasing parent engagement in parenting programs that aim to reduce behavioural and emotional problems in adolescents will be discussed.
Abstract:
This paper proposes the use of the q-Gaussian mutation with self-adaptation of the shape of the mutation distribution in evolutionary algorithms. The shape of the q-Gaussian mutation distribution is controlled by a real parameter q. In the proposed method, the real parameter q of the q-Gaussian mutation is encoded in the chromosome of individuals and hence is allowed to evolve during the evolutionary process. In order to test the new mutation operator, evolution strategy and evolutionary programming algorithms with self-adapted q-Gaussian mutation generated from anisotropic and isotropic distributions are presented. The theoretical analysis of the q-Gaussian mutation is also provided. In the experimental study, the q-Gaussian mutation is compared to Gaussian and Cauchy mutations in the optimization of a set of test functions. Experimental results show the efficiency of the proposed method of self-adapting the mutation distribution in evolutionary algorithms.
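The abstract does not give implementation details. The sketch below shows one common way to draw q-Gaussian deviates (the generalized Box-Muller method of Thistleton et al.) and to self-adapt q by carrying it with the individual; the function names, the log-normal update of q and the clipping range are illustrative assumptions, not the paper's method.

```python
import numpy as np

def q_log(x, q):
    """q-logarithm: ln_q(x) = (x**(1-q) - 1) / (1-q); ordinary log as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_gaussian(q, size, rng):
    """Standard q-Gaussian deviates via the generalized Box-Muller method
    (valid for q < 3); reduces to the ordinary Gaussian at q = 1."""
    q_prime = (1.0 + q) / (3.0 - q)
    u1, u2 = rng.random(size), rng.random(size)
    return np.sqrt(-2.0 * q_log(u1, q_prime)) * np.cos(2.0 * np.pi * u2)

def mutate(x, sigma, q, rng, tau=0.1):
    """Self-adaptive isotropic q-Gaussian mutation: q is carried with the
    individual and perturbed before mutating the solution vector x.
    The multiplicative update of q is an illustrative assumption."""
    q_new = float(np.clip(q * np.exp(tau * rng.standard_normal()), 1.0, 2.9))
    x_new = x + sigma * q_gaussian(q_new, x.shape, rng)
    return x_new, q_new

rng = np.random.default_rng(0)
x, q = np.zeros(5), 1.5          # solution vector and its encoded q
x, q = mutate(x, sigma=0.3, q=q, rng=rng)
```

At q = 1 the mutation reduces to the ordinary Gaussian; larger q gives heavier tails, closer to the Cauchy mutation the paper compares against.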
Abstract:
Electrical impedance tomography is a technique to estimate the impedance distribution within a domain, based on measurements on its boundary. In other words, given the mathematical model of the domain, its geometry and boundary conditions, a nonlinear inverse problem of estimating the electric impedance distribution can be solved. Several impedance estimation algorithms have been proposed to solve this problem. In this paper, we present a three-dimensional algorithm, based on the topology optimization method, as an alternative. A sequence of linear programming problems, allowing for constraints, is solved utilizing this method. In each iteration, the finite element method provides the electric potential field within the model of the domain. An electrode model is also proposed (thus, increasing the accuracy of the finite element results). The algorithm is tested using numerically simulated data and also experimental data, and absolute resistivity values are obtained. These results, corresponding to phantoms with two different conductive materials, exhibit relatively well-defined boundaries between them, and show that this is a practical and potentially useful technique to be applied to monitor lung aeration, including the possibility of imaging a pneumothorax.
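The abstract describes the iteration (an FEM forward solve followed by a constrained linear programming step) without the details. Below is a heavily simplified sequential-linear-programming skeleton of that loop; the L1 misfit, the move limits, and all names are illustrative assumptions, and fem_forward is only a stub for the finite element and electrode models.

```python
import numpy as np
from scipy.optimize import linprog

def fem_forward(rho):
    """Stub: solve the FEM forward problem for resistivity field rho,
    returning predicted boundary potentials and their Jacobian."""
    raise NotImplementedError

def slp_eit(rho0, v_measured, steps=50, move=0.05):
    """Sequential-LP skeleton: linearize the misfit around the current
    resistivity and solve a bounded LP; move limits keep each step small."""
    rho = rho0.copy()
    for _ in range(steps):
        v_pred, J = fem_forward(rho)        # FEM gives potentials + sensitivities
        r = v_measured - v_pred             # residual to be reduced
        n, m = rho.size, r.size
        # minimize sum(t) subject to |r - J @ drho| <= t and |drho| <= move
        c = np.concatenate([np.zeros(n), np.ones(m)])
        A = np.block([[J, -np.eye(m)], [-J, -np.eye(m)]])
        b = np.concatenate([r, -r])
        bounds = [(-move, move)] * n + [(0, None)] * m
        res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
        if not res.success:
            break
        rho += res.x[:n]                    # take the LP step
    return rho
```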
Abstract:
Extended gcd computation is interesting in itself, and it also plays a fundamental role in other calculations. We present a new algorithm for solving the extended gcd problem. The algorithm has a particularly simple description, is practical, and provides refined bounds on the size of the multipliers obtained.
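The new algorithm itself is not reproduced in the abstract. For reference, a minimal implementation of the classical extended Euclidean algorithm that such work refines, returning g = gcd(a, b) together with multipliers x, y satisfying a*x + b*y = g:

```python
def extended_gcd(a, b):
    """Classical extended Euclidean algorithm: returns (g, x, y) with
    a*x + b*y = g = gcd(a, b). The paper additionally bounds the size
    of the multipliers; this reference version makes no such guarantee."""
    x0, y0, x1, y1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

g, x, y = extended_gcd(240, 46)
assert g == 2 and 240 * x + 46 * y == 2
```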
Abstract:
In studies assessing trends in coronary events, such as the World Health Organization (WHO) MONICA Project (multinational MONItoring of trends and determinants of CArdiovascular disease), the main emphasis has been on coronary deaths and non-fatal definite myocardial infarctions (MI). It is, however, possible that the proportion of milder MIs may be increasing because of improvements in treatment and reductions in levels of risk factors. We used the MI register data of the WHO MONICA Project to investigate several definitions for mild non-fatal MIs that would be applicable in various settings and could be used to assess trends in milder coronary events. Of 38 populations participating in the WHO MONICA MI register study, more than half registered a sufficiently wide spectrum of events that it was possible to identify subsets of milder cases. The event rates and case fatality rates of MI clearly depend on the spectrum of non-fatal MIs that is included. On clinical grounds we propose that the original MONICA category "non-fatal possible MI" could be divided into two groups: "non-fatal probable MI" and "prolonged chest pain." Non-fatal probable MIs are cases which, in addition to "typical symptoms," have electrocardiogram (ECG) or enzyme changes suggesting cardiac ischemia, but not severe enough to fulfil the criteria for non-fatal definite MI. In more than half of the MONICA Collaborating Centers, the registration of MI covers these milder events reasonably well. Proportions of non-fatal probable MIs vary less between populations than do proportions of non-fatal possible MIs. Also, rates of non-fatal probable MI are somewhat more highly correlated with rates of fatal events and non-fatal definite MI. These findings support the validity of the category of non-fatal probable MI. In each center, the increase in event rates and the decrease in case fatality due to the inclusion of non-fatal probable MI was larger for women than for men. For the WHO MONICA Project and other epidemiological studies, the proposed category of non-fatal probable MI can be used for assessing trends in rates of milder MI.
Abstract:
A G-design of order n is a pair (P, B) where P is the vertex set of the complete graph K_n and B is an edge-disjoint decomposition of K_n into copies of the simple graph G. Following design terminology, we call these copies "blocks". Here K_4 - e denotes the complete graph K_4 with one edge removed. It is well known that a K_4 - e design of order n exists if and only if n ≡ 0 or 1 (mod 5), n ≥ 6. The intersection problem here asks: for which k is it possible to find two K_4 - e designs (P, B_1) and (P, B_2) of order n with |B_1 ∩ B_2| = k, that is, with precisely k common blocks? Here we completely solve this intersection problem for K_4 - e designs.
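Since K_4 - e has 5 edges, the stated existence condition is just divisibility: a decomposition of K_n requires 5 to divide the n(n-1)/2 edges of K_n, i.e. n ≡ 0 or 1 (mod 5), and each design then has n(n-1)/10 blocks. A quick check of the admissible orders:

```python
# K_4 - e has 5 edges, so a K_4 - e design of order n needs 10 | n(n-1);
# each such design contains n*(n-1)//10 blocks.
admissible = [(n, n * (n - 1) // 10) for n in range(6, 31)
              if n * (n - 1) % 10 == 0]
print(admissible)  # [(6, 3), (10, 9), (11, 11), (15, 21), (16, 24), ...]
```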
Abstract:
Objective: In this study we assessed how often patients manifesting a myocardial infarction (MI) would not be considered candidates for intensive lipid-lowering therapy based on the current guidelines. Methods: In 355 consecutive patients manifesting ST elevation MI (STEMI), admission plasma C-reactive protein (CRP) was measured and the Framingham risk score (FRS), PROCAM risk score, Reynolds risk score, ASSIGN risk score, QRISK, and SCORE algorithms were applied. Cardiac computed tomography and carotid ultrasound were performed to assess the coronary artery calcium score (CAC), carotid intima-media thickness (cIMT) and the presence of carotid plaques. Results: Fewer than 50% of STEMI patients would be identified as high risk before the event by any of these algorithms. With the exception of FRS (9%), all other algorithms would assign low risk to about half of the enrolled patients. Plasma CRP was <1.0 mg/L in 70% and >2 mg/L in 14% of the patients. The average cIMT was 0.8 +/- 0.2 mm and was >= 1.0 mm in only 24% of patients. Carotid plaques were found in 74% of patients, and CAC > 100 was found in 66% of patients. Adding CAC > 100 plus the presence of carotid plaque, a high-risk condition would be identified in 100% of the patients using any of the above-mentioned algorithms. Conclusion: More than half of patients manifesting STEMI would not be considered candidates for intensive preventive therapy by the current clinical algorithms. The addition of anatomical parameters such as CAC and the presence of carotid plaques can substantially reduce this underestimation of CVD risk.
Orofacial granulomatosis: A diagnostic problem for the unwary and a management dilemma. Case reports
Abstract:
Orofacial granulomatosis is a condition that may be difficult to diagnose for those unfamiliar with the entity. This paper describes two cases and addresses the presentation, pathogenesis and treatment. The clinical recognition of this condition is important, as is the subsequent investigation by an appropriate specialist. Management of patients needs to take into account the results of further investigations, the patient's expectations, and the severity of the condition.
Abstract:
We propose a simulated-annealing-based genetic algorithm for solving model parameter estimation problems. The algorithm incorporates advantages of both genetic algorithms and simulated annealing. Tests on computer-generated synthetic data that closely resemble the optical constants of a metal were performed to compare the efficiency of plain genetic algorithms against the simulated-annealing-based genetic algorithm. These tests assess the ability of the algorithms to find the global minimum and the accuracy of the values obtained for the model parameters. Finally, the algorithm with the best performance is used to fit the model dielectric function to data for platinum and aluminum.
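The abstract does not specify how the two heuristics are combined. A minimal sketch of one common hybrid, in which offspring produced by crossover and mutation replace their parents only under a Metropolis acceptance test at a cooling temperature; all names, operators and constants here are illustrative assumptions:

```python
import numpy as np

def sa_ga_minimize(f, bounds, pop=40, gens=200, t0=1.0, cool=0.97, seed=0):
    """Sketch of a simulated-annealing/genetic-algorithm hybrid: blend
    crossover and Gaussian mutation propose offspring, and a Metropolis
    test at temperature T decides whether a child replaces its parent."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    F = np.apply_along_axis(f, 1, X)
    T = t0
    for _ in range(gens):
        for i in range(pop):
            j = rng.integers(pop)                    # random mate
            w = rng.uniform(-0.5, 1.5, len(lo))      # blend crossover weights
            child = X[i] + w * (X[j] - X[i])
            child += 0.05 * (hi - lo) * rng.standard_normal(len(lo))
            child = np.clip(child, lo, hi)
            fc = f(child)
            # Metropolis rule: accept improvements always, others sometimes
            if fc < F[i] or rng.random() < np.exp(-(fc - F[i]) / T):
                X[i], F[i] = child, fc
        T *= cool                                    # geometric cooling
    k = int(F.argmin())
    return X[k], F[k]

# Toy usage: minimize a 2-D sphere function on [-5, 5]^2.
x_best, f_best = sa_ga_minimize(lambda x: float(np.sum(x ** 2)),
                                bounds=[(-5, 5), (-5, 5)])
```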
Abstract:
The concept of parameter-space size adjustment is proposed in order to enable the successful application of genetic algorithms to continuous optimization problems. The performance of genetic algorithms with six different combinations of selection and reproduction mechanisms, with and without parameter-space size adjustment, was rigorously tested on eleven multiminima test functions. The algorithm with the best performance was then employed to determine the model parameters of the optical constants of Pt, Ni and Cr.
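The adjustment rule itself is not spelled out in the abstract. A minimal sketch of the idea as it is commonly implemented: the search box is periodically re-centered on the best individual and shrunk, so that later generations concentrate on promising regions; the shrink factor and schedule are illustrative assumptions.

```python
import numpy as np

def adjust_space(lo, hi, x_best, lo0, hi0, shrink=0.7):
    """Parameter-space size adjustment: re-center the search box on the
    current best individual and shrink its width by a fixed factor,
    never leaving the original box [lo0, hi0]."""
    half = 0.5 * shrink * (hi - lo)
    return np.maximum(x_best - half, lo0), np.minimum(x_best + half, hi0)

# Inside a GA loop, e.g. every 10 generations:
#   lo, hi = adjust_space(lo, hi, population[best], lo0, hi0)
#   then re-encode (or re-sample) individuals within the new, smaller box.
```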