985 results for Reasonable time


Relevance: 60.00%

Publisher:

Abstract:

The folding mechanism of a 125-bead heteropolymer model for proteins is investigated with Monte Carlo simulations on a cubic lattice. Sequences that do and do not fold in a reasonable time are compared. The overall folding behavior is found to be more complex than that of models for smaller proteins. Folding begins with a rapid collapse followed by a slow search through the semi-compact globule for a sequence-dependent stable core with about 30 out of 176 native contacts which serves as the transition state for folding to a near-native structure. Efficient search for the core is dependent on structural features of the native state. Sequences that fold have large amounts of stable, cooperative structure that is accessible through short-range initiation sites, such as those in anti-parallel sheets connected by turns. Before folding is completed, the system can encounter a second bottleneck, involving the condensation and rearrangement of surface residues. Overly stable local structure of the surface residues slows this stage of the folding process. The relation of the results from the 125-mer model studies to the folding of real proteins is discussed.
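For readers unfamiliar with lattice Monte Carlo folding simulations of this kind, the sketch below illustrates a Metropolis acceptance step on a cubic lattice with a contact-based energy. It is only a minimal caricature under stated assumptions: a short toy chain, an end-bead move set (not ergodic on its own), and a uniform contact energy of -1; it is not the 125-bead heteropolymer model or move set used in the paper.

```python
import math
import random

def contacts(chain):
    """Count non-bonded nearest-neighbour contacts on the cubic lattice."""
    index = {pos: i for i, pos in enumerate(chain)}
    n = 0
    for i, (x, y, z) in enumerate(chain):
        for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
            j = index.get((x + dx, y + dy, z + dz))
            if j is not None and abs(i - j) > 1:   # lattice neighbours that are not chain bonds
                n += 1
    return n

def metropolis_end_move(chain, temperature, rng=random):
    """One Metropolis step: move a terminal bead to a free site adjacent to its
    bonded neighbour, accepting with probability min(1, exp(-dE/T))."""
    end = 0 if rng.random() < 0.5 else len(chain) - 1
    ax, ay, az = chain[1] if end == 0 else chain[-2]
    occupied = set(chain)
    sites = [(ax + dx, ay + dy, az + dz)
             for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                (0, -1, 0), (0, 0, 1), (0, 0, -1))
             if (ax + dx, ay + dy, az + dz) not in occupied]
    if not sites:
        return chain
    trial = list(chain)
    trial[end] = rng.choice(sites)
    d_energy = -contacts(trial) - (-contacts(chain))   # E = -(number of contacts)
    if d_energy <= 0 or rng.random() < math.exp(-d_energy / temperature):
        return trial
    return chain

# toy 5-bead chain laid out along the x-axis
chain = [(i, 0, 0) for i in range(5)]
for _ in range(1000):
    chain = metropolis_end_move(chain, temperature=1.0)
print("contacts in final conformation:", contacts(chain))
```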

Relevance: 60.00%

Publisher:

Abstract:

The right to a reasonable duration of proceedings, expressly inserted into the Brazilian legal system with the advent of Constitutional Amendment 45/2004, could already be inferred from the incorporation of the American Convention on Human Rights, and can also be regarded as a corollary of the guarantee of due process of law. Every individual has the right to a trial without undue delay, especially one held in preventive detention, a personal precautionary measure of extreme severity. In this context arises the right of the preventively detained individual to have the case judged within a reasonable time, or to be released if the detention outlasts the factual necessity of the specific case. The interpretation of this guarantee, however, cannot be left solely to the discretion of those who apply the law; an effective statutory regulation of the duration of preventive detention is needed, with concrete time limits after which the individual must be released in the face of state inertia. Drawing on foreign experience, the national legislator should adopt statutory time limits at which preventive detention must cease when excessively prolonged. Although the treatment of personal precautionary measures in the Code of Criminal Procedure was reformed in 2011, the ordinary legislator did not approve the imposition of limits on the duration of preventive detention, leaving the interpretation of this guarantee to the free discretion of the judicial authorities. Thus, the Bill for the New Code of Criminal Procedure, currently before the National Congress, by providing maximum limits on the duration of preventive detention, gives effective regulation to the guarantee of a reasonable duration of detention for the imprisoned accused, and it is hoped that this provision will be retained in the final approved text.

Relevance: 60.00%

Publisher:

Abstract:

The delineation of functional economic areas, or market areas, is a problem of high practical relevance, since functional units such as economic areas in the US, Travel-to-Work Areas in the United Kingdom, and their counterparts in other OECD countries are the basis of many statistical operations and of policy-making decisions at the local level. This is a combinatorial optimisation problem defined as the partition of a given set of indivisible spatial units (covering a territory) into regions characterised by being (a) self-contained and (b) cohesive in terms of spatial interaction data (flows, relationships). Usually, each region must reach a minimum size and self-containment level and must be contiguous. Although these optimisation problems have typically been solved through greedy methods, a recent strand of the literature in this field has been concerned with the use of evolutionary algorithms with ad hoc operators. Although these algorithms have proved successful in improving on the results of some of the most widely applied official procedures, they are so time-consuming that they cannot be applied directly to solve real-world problems. In this paper we propose a new set of group-based mutation operators, featuring general operations over disjoint groups, tailored to ensure that all the constraints are respected during the operation, thereby improving efficiency. A comparative analysis of our results with those from previous approaches shows that the proposed algorithm systematically improves on them in terms of both quality and processing time, which is of crucial relevance since it makes it possible to deal with most large, real-world problems in reasonable time.
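As a hedged illustration of what a constraint-respecting, group-based mutation can look like in a regionalisation setting, the sketch below reassigns a single border unit between regions only if the donor region keeps its minimum size and remains contiguous. The partition representation, the adjacency structure, and the single border-unit move are illustrative assumptions; they are simpler than the general operations over disjoint groups proposed in the paper.

```python
import random
from collections import deque

def is_contiguous(units, adjacency):
    """True if the set of units forms one connected block in the adjacency graph."""
    units = set(units)
    start = next(iter(units))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v in units and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == units

def border_move_mutation(partition, adjacency, min_size, rng=random):
    """Group-based mutation: reassign one border unit from a donor region to an
    adjacent region, only if the donor keeps its minimum size and stays contiguous."""
    regions = list(partition)
    rng.shuffle(regions)
    for donor in regions:
        if len(partition[donor]) <= min_size:
            continue
        for unit in sorted(partition[donor], key=lambda _: rng.random()):
            # regions adjacent to this unit, other than the donor itself
            neighbours = {r for r, members in partition.items()
                          if r != donor and any(v in members for v in adjacency[unit])}
            if not neighbours:
                continue
            remaining = partition[donor] - {unit}
            if not is_contiguous(remaining, adjacency):
                continue
            receiver = rng.choice(sorted(neighbours))
            child = {r: set(m) for r, m in partition.items()}
            child[donor].remove(unit)
            child[receiver].add(unit)
            return child
    return partition  # no feasible move found

# toy example: a one-dimensional strip of 6 units split into two regions of 3
adjacency = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
partition = {"A": {0, 1, 2}, "B": {3, 4, 5}}
print(border_move_mutation(partition, adjacency, min_size=2))
```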

Relevance: 60.00%

Publisher:

Abstract:

This paper aims to identify ways of pursuing the EU–Mercosul negotiations towards a free trade agreement (FTA). After reviewing their already long history, it outlines a basic framework, in goods, services and other themes, judged to be possible. The main point is that, given the prevailing conditions on both sides, an agreement that can be signed within a reasonable time must be modest, i.e. along the lines described. The paper then sets out clearly the decision confronting the negotiators: either to pursue the modest, feasible option or to terminate negotiations under the FTA heading. The latter, however, does not imply an end to the dialogue. Many actions and measures may be taken – and these are easier to discuss and settle – that could pave the way for a closer-to-ideal FTA to be considered again in due time. These are the subjects of a final section.

Relevance: 60.00%

Publisher:

Abstract:

Abstract: Purpose – The aim of this research is to determine the optimal upgrade and preventive maintenance actions that minimize the total expected cost (maintenance costs + penalty costs). Design/methodology/approach – The problem is a four-parameter optimization with two of the parameters being k-dimensional. The optimal solution is obtained by using a four-stage approach in which a one-parameter optimization is solved at each stage. Findings – An upgrade action is an extra option available before the lease of used equipment, in addition to preventive maintenance actions. The upgrade action effectively makes the equipment younger, while preventive maintenance lowers the ROCOF (rate of occurrence of failures). Practical implications – There is a growing trend towards leasing equipment rather than owning it. The lease contract contains penalties if the equipment fails too often or repairs are not completed within a reasonable time period. This implies that the lessor needs to look at optimal preventive maintenance strategies in the case of a new equipment lease, and at upgrade actions plus preventive maintenance in the case of a used equipment lease. The paper deals with this topic and is of great significance to businesses involved in leasing equipment. Originality/value – Nowadays many organizations are interested in leasing equipment and outsourcing maintenance. The model in this paper addresses the preventive maintenance problem for leased equipment and provides an approach to dealing with it.
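The abstract does not give the cost model, so the sketch below only illustrates the general idea of solving a sequence of one-parameter optimizations: it alternates bounded scalar minimizations over a hypothetical upgrade level and a preventive maintenance interval against a placeholder expected-cost function. The function `expected_total_cost` and all of its parameters are invented for illustration; they are not the paper's model or its four-stage decomposition.

```python
from scipy.optimize import minimize_scalar

def expected_total_cost(upgrade_level, pm_interval, cm_cost=200.0, pm_cost=60.0,
                        penalty_rate=500.0, lease_length=24.0):
    """Placeholder cost model: upgrading lowers the failure intensity, shorter PM
    intervals lower it further but cost more; penalties accrue with expected failures."""
    base_rocof = 0.25 / (1.0 + upgrade_level)          # failures per month after upgrade
    expected_failures = base_rocof * lease_length * (pm_interval / (pm_interval + 2.0))
    n_pm = lease_length / pm_interval
    upgrade_cost = 300.0 * upgrade_level
    return (upgrade_cost + n_pm * pm_cost
            + expected_failures * (cm_cost + penalty_rate))

# stage-wise optimisation: fix one decision variable, optimise the other, iterate
upgrade, pm = 1.0, 6.0
for _ in range(4):
    pm = minimize_scalar(lambda t: expected_total_cost(upgrade, t),
                         bounds=(1.0, 24.0), method="bounded").x
    upgrade = minimize_scalar(lambda u: expected_total_cost(u, pm),
                              bounds=(0.0, 5.0), method="bounded").x
print(f"upgrade level = {upgrade:.2f}, PM interval = {pm:.2f} months")
```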

Relevance: 60.00%

Publisher:

Abstract:

This paper is concerned with evaluating the performance of loss networks. Accurate determination of loss network performance can assist in the design and dimensioning of telecommunications networks. However, exact determination can be difficult and generally cannot be done in reasonable time. For these reasons there is much interest in developing fast and accurate approximations. We develop a reduced load approximation which improves on the famous Erlang fixed point approximation (EFPA) in a variety of circumstances. We illustrate our results with reference to a range of networks for which the EFPA may be expected to perform badly.
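For context, the Erlang fixed point approximation that the paper improves upon can be written down compactly: each link's blocking probability is computed from Erlang's B formula applied to a reduced offered load, and the equations are iterated to a fixed point. The sketch below implements that baseline EFPA for a toy fixed-routing network; the paper's improved reduced load approximation is not reproduced here.

```python
from math import prod

def erlang_b(offered_load, capacity):
    """Erlang B blocking probability via the standard stable recursion."""
    b = 1.0
    for c in range(1, capacity + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

def erlang_fixed_point(routes, capacities, tol=1e-10, max_iter=1000):
    """routes: list of (offered_traffic, [link indices]); capacities: circuits per link.
    Returns per-link blocking probabilities under the EFPA independence assumption."""
    blocking = [0.0] * len(capacities)
    for _ in range(max_iter):
        new = []
        for j, cap in enumerate(capacities):
            # reduced load on link j: each route's traffic thinned by blocking elsewhere
            load = sum(a * prod(1.0 - blocking[k] for k in links if k != j)
                       for a, links in routes if j in links)
            new.append(erlang_b(load, cap))
        if max(abs(n - o) for n, o in zip(new, blocking)) < tol:
            return new
        blocking = new
    return blocking

# two-link network: one single-link route per link plus one two-link route
routes = [(5.0, [0]), (5.0, [1]), (2.0, [0, 1])]
print(erlang_fixed_point(routes, capacities=[10, 10]))
```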

Relevance: 60.00%

Publisher:

Abstract:

Very large spatially referenced datasets, for example those derived from satellite-based sensors that sample across the globe or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over short time periods. In many applications the generation of maps, or of predictions at specific locations, from the data in (near) real time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process and, in emergency situations, the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful in less time-critical applications, for example when interacting directly with the data for exploratory analysis, for the algorithms to respond within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly when maximum likelihood estimation is used. Although the storage requirements scale only linearly with the number of observations in the dataset, the computational complexity in terms of memory and speed scales quadratically and cubically, respectively. Most modern commodity hardware has at least two processor cores, if not more, and other mechanisms for parallel computation, such as Grid-based systems, are also becoming increasingly common. However, there currently seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics. By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms, we show that computational time can be significantly reduced. We demonstrate this with both sparsely and densely sampled data on a variety of architectures, ranging from the dual-core processors found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods, we employ synthetic datasets and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake dataset.
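As a rough illustration of the "embarrassingly parallel" flavour of likelihood approximation discussed here, the sketch below evaluates a block-composite Gaussian log-likelihood under an exponential covariance model, with one block per worker process. This block-independence approximation is a simplification chosen for brevity; it is not the Vecchia [1988] or Tresp [2000] construction, nor the specific parallel algorithms developed in the paper, and the covariance model and parameters are assumptions.

```python
import numpy as np
from multiprocessing import Pool

def block_loglik(args):
    """Gaussian log-likelihood of one block under an exponential covariance model."""
    coords, values, sill, range_, nugget = args
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = sill * np.exp(-d / range_) + nugget * np.eye(len(values))
    sign, logdet = np.linalg.slogdet(cov)
    alpha = np.linalg.solve(cov, values)
    return -0.5 * (logdet + values @ alpha + len(values) * np.log(2 * np.pi))

def composite_loglik(coords, values, sill, range_, nugget, n_blocks=4, pool=None):
    """Approximate the full likelihood by summing independent block likelihoods,
    one block per worker (the simplest 'dense data' parallelisation)."""
    blocks = np.array_split(np.arange(len(values)), n_blocks)
    tasks = [(coords[b], values[b], sill, range_, nugget) for b in blocks]
    results = pool.map(block_loglik, tasks) if pool else map(block_loglik, tasks)
    return sum(results)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 100, size=(2000, 2))
    values = rng.normal(size=2000)
    with Pool(4) as pool:
        print(composite_loglik(coords, values, sill=1.0, range_=20.0,
                               nugget=0.1, pool=pool))
```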

Relevance: 60.00%

Publisher:

Abstract:

Post-disaster recovery of micro, small and medium-scale enterprises (SMEs) remains an issue of interest for policy and practice, given the wide-scale occurrence of natural disasters around the globe and their significant impacts on local economies and SMEs. The Asian tsunami of December 2004 affected many SMEs in southern Sri Lanka. This study was developed to identify the main issues encountered by tsunami-affected SMEs in southern Sri Lanka in the process of their post-tsunami recovery. The study: a) identifies tsunami damage and loss in micro and SMEs in the Galle district; b) ascertains the type of benefits received from various parties by the affected micro and SMEs; c) evaluates the problems and difficulties faced by the beneficiary organizations in the benefit distribution process; and d) recommends strategies and policies for the tsunami-affected micro and SMEs so that they can become self-sustaining within a reasonable time frame. Fifty randomly selected tsunami-affected micro and SMEs were surveyed for this study. Interviews were conducted in person with the business owners in order to identify the damage, recovery, rehabilitation and re-establishment, and the difficulties faced in the benefit distribution process. The analysis shows that the benefits were given the wrong priorities and were not sufficient for the recovery process. In addition, the many governance-related problems that arose while distributing benefits are discussed. Overall, the business recovery rate was approximately 65%, and approximately 88% of the business organizations were sole proprietorships. Therefore, the policies of the tsunami relief agencies should adequately address the needs of sole proprietorship businesses. Consideration should also be given to strengthening the capacity and skills of the entrepreneurs by improving their operational, technological, management and marketing skills and capabilities.

Relevance: 60.00%

Publisher:

Abstract:

In this article, the results achieved by applying an electromagnetism-inspired (EM) metaheuristic to the uncapacitated multiple allocation hub location problem (UMAHLP) are discussed. An objective function that naturally conforms to the problem, a 1-swap local search, and a scaling technique contribute to good overall performance. Computational tests demonstrate the reliability of this method, since the EM-inspired metaheuristic reaches all optimal/best-known solutions for the UMAHLP, except one, in a reasonable time.
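The 1-swap local search mentioned above admits a compact sketch: starting from a set of open hubs, repeatedly exchange one open hub for one closed node whenever the exchange reduces the total cost. The cost function below uses a generic multiple-allocation model (cheapest hub pair per origin-destination flow, a transfer discount factor, and node fixed costs); the instance data, the discount value, and the fixed costs are illustrative assumptions, and the surrounding EM-inspired metaheuristic is not shown.

```python
import itertools
import numpy as np

def umahlp_cost(hubs, w, c, fixed, alpha=0.75):
    """Cost of serving all flows w through the open `hubs` (multiple allocation:
    each origin-destination pair uses its cheapest hub pair k, m)."""
    hubs = np.asarray(sorted(hubs))
    A = c[:, hubs]                       # collection:   origin -> hub k
    B = alpha * c[np.ix_(hubs, hubs)]    # transfer:     hub k -> hub m (discounted)
    D = c[hubs, :]                       # distribution: hub m -> destination
    path = A[:, :, None, None] + B[None, :, :, None] + D[None, None, :, :]
    return float((w * path.min(axis=(1, 2))).sum() + fixed[hubs].sum())

def one_swap_local_search(hubs, w, c, fixed, alpha=0.75):
    """1-swap local search: exchange one open hub with one closed node
    as long as the exchange improves the objective."""
    hubs = set(hubs)
    best = umahlp_cost(hubs, w, c, fixed, alpha)
    improved = True
    while improved:
        improved = False
        for out_hub, in_hub in itertools.product(sorted(hubs),
                                                 sorted(set(range(len(c))) - hubs)):
            candidate = (hubs - {out_hub}) | {in_hub}
            cost = umahlp_cost(candidate, w, c, fixed, alpha)
            if cost < best - 1e-9:
                hubs, best, improved = candidate, cost, True
                break
    return hubs, best

# tiny random instance: 8 nodes, Euclidean costs, uniform flows and fixed costs
rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(8, 2))
c = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
w = rng.uniform(1, 10, size=(8, 8))
fixed = np.full(8, 500.0)
print(one_swap_local_search({0, 1}, w, c, fixed))
```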

Relevance: 60.00%

Publisher:

Abstract:

Secondary pyrolysis in fluidized bed fast pyrolysis of biomass is the focus of this work. A novel computational fluid dynamics (CFD) model coupled with a comprehensive chemistry scheme (134 species and 4169 reactions, in CHEMKIN format) has been developed to investigate this complex phenomenon. Previous results from a transient three-dimensional model of primary pyrolysis were used as the source terms for primary products in this model. A parametric study of reaction atmospheres (H2O, N2, H2, CO2, CO) has been performed. For the N2 and H2O atmospheres, the results of the model compared favorably to experimentally obtained yields after the temperature was adjusted to a value higher than that used in the experiments. Notable deviations from experiment are the pyrolytic water yield and the yield of higher hydrocarbons. The model suggests that the impact of the reaction atmosphere is not overly strong; however, both chemical and physical effects were observed. Most notably, effects could be seen on the yields of various compounds, the temperature profile throughout the reactor system, the residence time, the radical concentration, and the turbulent intensity. At the investigated temperature (873 K), turbulent intensity appeared to have the strongest influence on liquid yield. With the aid of acceleration techniques, most importantly dimension reduction, chemistry agglomeration, and in-situ tabulation, a converged solution could be obtained within a reasonable time (∼30 h). As such, a new and potentially useful method has been suggested for the numerical analysis of fast pyrolysis.

Relevance: 60.00%

Publisher:

Abstract:

Homogeneous secondary pyrolysis is a category of reactions that follows primary pyrolysis and is presumed to be important in fast pyrolysis. To handle the comprehensive chemistry within the fluid dynamics, a probability density function (PDF) approach is used, with a kinetic scheme comprising 134 species and 4169 reactions. With the aid of acceleration techniques, most importantly dimension reduction, chemistry agglomeration and in-situ tabulation (ISAT), a solution was obtained within a reasonable time. More work is required; however, a solution has been obtained for levoglucosan (C6H10O5) fed through the inlet with the fluidizing gas at 500 °C: 88.6% of the levoglucosan remained undecomposed, and 19 different decomposition product species were found above 0.01% by weight. The proposed homogeneous secondary pyrolysis scheme can thus be implemented in a CFD environment, and acceleration techniques can speed up the calculation for application in engineering settings.
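To convey the idea behind the tabulation-based acceleration mentioned above, the sketch below caches the results of an "expensive" reaction-step integration and reuses a stored result whenever a new query state lies within a tolerance of a tabulated one. This is a deliberately crude, zeroth-order caricature: real ISAT stores mapping gradients and ellipsoids of accuracy and is embedded in the PDF transport solver, and the toy one-step reaction here is purely an assumption for demonstration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def reaction_step(y0, dt=1e-3, k=50.0):
    """'Expensive' operation: integrate a toy A -> B reaction over one time step."""
    rhs = lambda t, y: np.array([-k * y[0], k * y[0]])
    return solve_ivp(rhs, (0.0, dt), y0, rtol=1e-8, atol=1e-10).y[:, -1]

class Table:
    """Much-simplified in-situ tabulation: store (state, mapping) pairs and reuse a
    stored mapping whenever the query lies within `tol` of a tabulated state.
    (Real ISAT stores gradients and ellipsoids of accuracy; this is zeroth order.)"""
    def __init__(self, tol=1e-3):
        self.tol, self.keys, self.values = tol, [], []
        self.hits = self.misses = 0

    def lookup(self, y0):
        for key, val in zip(self.keys, self.values):
            if np.linalg.norm(y0 - key) < self.tol:
                self.hits += 1
                return val
        result = reaction_step(y0)
        self.keys.append(np.array(y0))
        self.values.append(result)
        self.misses += 1
        return result

table = Table(tol=1e-3)
rng = np.random.default_rng(2)
for _ in range(5000):
    a0 = 0.5 + 1e-4 * rng.standard_normal()       # many nearly identical queries
    table.lookup(np.array([a0, 1.0 - a0]))
print("retrievals:", table.hits, "direct integrations:", table.misses)
```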

Relevance: 60.00%

Publisher:

Abstract:

In this study the author presents a new harmony search metaheuristic that maximises the net present value of a project over the set of makespan-minimal resource-constrained schedules. Theoretically, finding the optimal schedule amounts to solving two integer (zero-one) programming problems: in the first step the makespan of the resource-constrained schedules is minimised, and in the second step, treating the optimal makespan as a constraint, the net present value is maximised over the set of makespan-minimal resource-constrained schedules. Because of the NP-hard nature of the problem, an exact solution within acceptable time is conceivable only for small projects. The metaheuristic presented here is a further development of the harmony search metaheuristic developed by Csébfalvi (2007) for determining the makespan of resource-constrained schedules and scheduling the activities accordingly, which resolves resource-usage conflicts by inserting precedence relations. To illustrate the efficiency and viability of the proposed metaheuristic, we report computational results obtained on the J30 subset of the well-known and popular PSPLIB test library. A state-of-the-art MILP solver (CPLEX) was used to generate the exact solutions. _______________ This paper presents a harmony search metaheuristic for the resource-constrained project scheduling problem with discounted cash flows. In the proposed approach, a resource-constrained project is characterized by its "best" schedule, where best means a makespan-minimal resource-constrained schedule for which the net present value (NPV) measure is maximal. Theoretically, the optimal schedule searching process is formulated as a two-phase mixed integer linear programming (MILP) problem, which can be solved for small-scale projects in reasonable time. The applied metaheuristic is based on the "conflict repairing" version of the "Sounds of Silence" harmony search metaheuristic developed by Csébfalvi (2007) for the resource-constrained project scheduling problem (RCPSP). In order to illustrate the essence and viability of the proposed harmony search metaheuristic, we present computational results for the J30 subset of the well-known and popular PSPLIB. To generate the exact solutions, a state-of-the-art MILP solver (CPLEX) was used.
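For readers unfamiliar with harmony search, the sketch below shows the generic algorithm on a toy continuous maximisation problem: a memory of candidate solutions, component-wise memory consideration, pitch adjustment, and replacement of the worst memory member. It is only the bare metaheuristic skeleton under illustrative parameter choices; the conflict-repairing "Sounds of Silence" variant, the precedence-relation repair, and the RCPSP/NPV encoding described above are not reproduced.

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   bandwidth=0.05, iterations=2000, rng=random):
    """Generic harmony search (maximisation) over box-constrained real variables."""
    dim = len(bounds)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iterations):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                       # memory consideration
                value = rng.choice(memory)[d]
                if rng.random() < par:                    # pitch adjustment
                    value += rng.uniform(-1, 1) * bandwidth * (hi - lo)
            else:                                         # random selection
                value = rng.uniform(lo, hi)
            new.append(min(max(value, lo), hi))
        score = objective(new)
        worst = min(range(hms), key=scores.__getitem__)
        if score > scores[worst]:                         # replace worst harmony
            memory[worst], scores[worst] = new, score
    best = max(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# toy surrogate for an NPV-style objective with two decision variables
npv = lambda x: -(x[0] - 3.0) ** 2 - (x[1] - 1.5) ** 2 + 10.0
print(harmony_search(npv, bounds=[(0.0, 10.0), (0.0, 10.0)]))
```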

Relevance: 60.00%

Publisher:

Abstract:

This dissertation aims to improve the performance of existing assignment-based dynamic origin-destination (O-D) matrix estimation models in order to successfully apply Intelligent Transportation Systems (ITS) strategies for traffic congestion relief and to support dynamic traffic assignment (DTA) in transportation network modeling. The methodology framework has two advantages over existing assignment-based dynamic O-D matrix estimation models. First, it incorporates an initial O-D estimation model into the estimation process to provide a high-confidence initial input for the dynamic O-D estimation model, which has the potential to improve the final estimation results and reduce the associated computation time. Second, the proposed framework can automatically convert traffic volume deviation to traffic density deviation in the objective function under congested traffic conditions. Traffic density is a better indicator of traffic demand than traffic volume under congested conditions, so this conversion contributes to improving the estimation performance. The proposed method shows better performance than a typical assignment-based estimation model (Zhou et al., 2003) in several case studies. In the case study for I-95 in Miami-Dade County, Florida, the proposed method produces a good result in seven iterations, with a root mean square percentage error (RMSPE) of 0.010 for traffic volume and an RMSPE of 0.283 for speed. In contrast, Zhou's model requires 50 iterations to obtain an RMSPE of 0.023 for volume and an RMSPE of 0.285 for speed. In the case study for Jacksonville, Florida, the proposed method reaches a convergent solution in 16 iterations with an RMSPE of 0.045 for volume and an RMSPE of 0.110 for speed, while Zhou's model needs 10 iterations to obtain its best solution, with an RMSPE of 0.168 for volume and an RMSPE of 0.179 for speed. The successful application of the proposed methodology framework to real road networks demonstrates its ability to provide results with satisfactory accuracy and within a reasonable time, establishing its potential usefulness for supporting dynamic traffic assignment modeling, ITS applications, and other strategies.
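Since the case-study comparisons above are reported in terms of the root mean square percentage error, a short reference implementation of that measure may be helpful; the definition below is the standard one, and the example volumes are invented purely for illustration.

```python
import numpy as np

def rmspe(observed, estimated):
    """Root mean square percentage error between observed and estimated values
    (e.g., link volumes or speeds)."""
    observed = np.asarray(observed, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    return np.sqrt(np.mean(((estimated - observed) / observed) ** 2))

# toy example: observed vs. estimated link volumes (vehicles per interval)
observed = [950, 1210, 870, 1440]
estimated = [940, 1195, 890, 1430]
print(f"RMSPE = {rmspe(observed, estimated):.3f}")
```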

Relevance: 60.00%

Publisher:

Abstract:

The teaching of the lumbar puncture (LP) technique with simulators is not well systematized in medical school curricula. Studies show that simulator training supports the learning of technical skills, promotes the acquisition and retention of knowledge, improves learners' self-confidence, and enables transfer to clinical practice. This study introduces simulated LP training into the medical course at the Universidade Federal do Rio Grande do Norte and evaluates the experience in both quantitative terms (performance on standardized assessments) and qualitative terms (students' perception of the method and of the teaching-learning process). The study was conducted in two phases. In the first phase, practical training in LP was introduced in the 3rd year of medical school. Seventy-seven students were trained in small groups, guided by a checklist developed on the Objective Structured Assessment of Technical Skill (OSATS) model; at this point they knew their performance was not being assessed. They were also asked whether they had previously had the opportunity to perform an LP on a patient. At the end of the first phase, the students evaluated the training in the following areas: teaching technique, simulator realism, time available per group, number of participants per group, and relevance to medical practice. In the second phase, two years later, 18 students trained in the first phase performed a new LP on the mannequin simulator, and their performance was evaluated with the same training checklist in order to verify retention of the technique. In addition, they answered a multiple-choice test on practical aspects of the LP technique. Each participant received individual feedback on their performance at the end of their participation in the study. In the first phase of the study we found that only 4% of the students had performed a lumbar puncture on a patient by the 3rd year. Training in the LP technique on a mannequin simulator was considered relevant, and the teaching method was thoroughly evaluated. In the second phase, all participants successfully performed the lumbar puncture on the mannequin simulator, complying with most of the steps in a reasonable time, which suggests that they would be able to perform the procedure on a patient.

Relevance: 60.00%

Publisher:

Abstract:

The computational modeling of ocean waves and ocean-faring devices poses numerous challenges. Among these are the need to represent, stably and accurately, both the fluid-fluid interface between water and air and the fluid-structure interfaces arising between solid devices and one or more fluids. As techniques are developed to stably and accurately balance the interactions between fluid and structural solvers at these boundaries, a similarly pressing challenge is the development of algorithms that are massively scalable and capable of performing large-scale three-dimensional simulations on reasonable time scales. This dissertation introduces two separate methods for approaching this problem: the first focuses on the development of sophisticated fluid-fluid interface representations, and the second focuses primarily on scalability and extensibility to higher-order methods.

We begin by introducing the narrow-band gradient-augmented level set method (GALSM) for incompressible multiphase Navier-Stokes flow. This is the first use of the high-order GALSM for a fluid flow application, and its reliability and accuracy in modeling ocean environments are tested extensively. The method demonstrates numerous advantages over the traditional level set method, among them improved conservation of fluid volume and the representation of subgrid structures.
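As background for the level set discussion, the sketch below advects a one-dimensional signed distance field with a first-order upwind scheme and tracks the interface as the zero crossing. It conveys only the basic level set idea; the narrow-band gradient-augmented method additionally transports the gradient of the level set and uses Hermite interpolation, none of which is shown here, and the velocity, grid, and time step are arbitrary assumptions.

```python
import numpy as np

def advect_level_set(phi, velocity, dx, dt, steps):
    """First-order upwind advection of a level set field phi by a constant velocity.
    The zero contour of phi tracks the interface."""
    phi = phi.copy()
    for _ in range(steps):
        if velocity >= 0.0:
            dphi = (phi - np.roll(phi, 1)) / dx     # backward difference
        else:
            dphi = (np.roll(phi, -1) - phi) / dx    # forward difference
        phi -= dt * velocity * dphi
    return phi

# interface (zero crossing) initially at x = 0.3, advected to the right for t = 0.2
x = np.linspace(0.0, 1.0, 201)
phi0 = x - 0.3                       # signed distance to the interface
phi = advect_level_set(phi0, velocity=1.0, dx=x[1] - x[0], dt=0.002, steps=100)
print("interface position is approximately", x[np.argmin(np.abs(phi))])
```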

Next, we present a finite-volume algorithm for solving the incompressible Euler equations in two and three dimensions in the presence of a flow-driven free surface and a dynamic rigid body. In this development, the chief concerns are efficiency, scalability, and extensibility (to higher-order and truly conservative methods). These priorities informed a number of important choices: The air phase is substituted by a pressure boundary condition in order to greatly reduce the size of the computational domain, a cut-cell finite-volume approach is chosen in order to minimize fluid volume loss and open the door to higher-order methods, and adaptive mesh refinement (AMR) is employed to focus computational effort and make large-scale 3D simulations possible. This algorithm is shown to produce robust and accurate results that are well-suited for the study of ocean waves and the development of wave energy conversion (WEC) devices.