953 results for Random Number Generation
Abstract:
Invariant integrals are derived for nematic liquid crystals and applied to materials with small Ericksen number and topological defects. The nematic material is confined between two infinite plates located at y = -h and y = h (h ∈ R+), with a semi-infinite plate at y = 0 and x < 0. Planar and homeotropic strong anchoring boundary conditions for the director field are assumed at the two infinite plates and at the semi-infinite plate, respectively. Thus, a line disclination appears in the system, coinciding with the z-axis. Analytical solutions for the director field in the neighbourhood of the singularity are obtained. However, these solutions depend on an arbitrary parameter. The nematic elastic force is therefore evaluated from an invariant integral of the energy-momentum tensor over a closed surface which does not contain the singularity. This allows one to determine the parameter, which is a function of the nematic cell thickness and the strength of the disclination. Analytical solutions are also deduced for the director field in the whole region using the conformal mapping method.
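As a hedged illustration of the kind of invariant integral involved, the generic one-constant form of the Frank energy and the Eshelby-type (J-) integral giving the configurational force on a disclination can be written as below; this is the textbook one-constant sketch, not necessarily the paper's exact tensor:

```latex
% One-constant Frank energy density W and the energy-momentum invariant
% integral J_k over a contour Gamma (outward normal nu_j) that excludes
% the singular line; generic sketch, the paper's expression may differ.
W = \tfrac{K}{2}\,\partial_j n_i\,\partial_j n_i , \qquad
J_k = \oint_{\Gamma}\Big( W\,\nu_k
      - \frac{\partial W}{\partial(\partial_j n_i)}\,\partial_k n_i\,\nu_j \Big)\,\mathrm{d}s .
```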
Abstract:
Applied Mathematical Modelling, Vol.33
Abstract:
European Transactions on Telecommunications, vol. 18
Abstract:
Many learning problems require handling high-dimensional datasets with a relatively small number of instances. Learning algorithms are thus confronted with the curse of dimensionality and need to address it in order to be effective. Examples of such data include the bag-of-words representation in text classification problems and gene expression data for tumor detection/classification. Usually, among the high number of features characterizing the instances, many may be irrelevant (or even detrimental) for the learning tasks. There is thus a clear need for adequate techniques for feature representation, reduction, and selection that improve classification accuracy and reduce memory requirements. In this paper, we propose combined unsupervised feature discretization and feature selection techniques, suitable for medium and high-dimensional datasets. The experimental results on several standard datasets, with both sparse and dense features, show the efficiency of the proposed techniques as well as improvements over previous related techniques.
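A generic sketch of the discretize-then-select idea, assuming equal-frequency binning and a dispersion-based unsupervised relevance score; this is illustrative only, not the authors' exact method:

```python
# Generic sketch of unsupervised feature discretization followed by
# feature selection; illustrative, not the paper's exact technique.
import numpy as np

def discretize_equal_frequency(X, n_bins=8):
    """Map each feature to integer bin indices using its own quantiles."""
    Xd = np.empty_like(X, dtype=int)
    for j in range(X.shape[1]):
        edges = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
        Xd[:, j] = np.digitize(X[:, j], edges)
    return Xd

def select_by_dispersion(Xd, k):
    """Keep the k features whose discretized values vary the most
    (a simple unsupervised relevance proxy)."""
    scores = Xd.var(axis=0)
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))           # toy high-dimensional data
Xd = discretize_equal_frequency(X)
kept = select_by_dispersion(Xd, k=10)    # indices of retained features
X_reduced = Xd[:, kept]
```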
Abstract:
Swarm Intelligence (SI) is the property of a system whereby the collective behaviors of (unsophisticated) agents interacting locally with their environment cause coherent functional global patterns to emerge. Particle swarm optimization (PSO) is a form of SI and a population-based search algorithm that is initialized with a population of random solutions, called particles. These particles fly through hyperspace and have two essential reasoning capabilities: memory of their own best position and knowledge of the swarm's best position. In a PSO scheme each particle flies through the search space with a velocity that is dynamically adjusted according to its historical behavior. The particles therefore tend to fly towards the best search area over the course of the search. This work proposes a PSO-based algorithm for logic circuit synthesis. The results show the statistical characteristics of this algorithm with respect to the number of generations required to achieve the solutions. A comparison with two other evolutionary algorithms, namely genetic and memetic algorithms, is also presented.
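A minimal PSO sketch of the velocity update described above (inertia plus own memory plus swarm knowledge), run on a toy continuous objective rather than the paper's circuit-synthesis encoding:

```python
# Minimal PSO sketch: velocity blends inertia, personal best (memory)
# and global best (swarm knowledge); toy sphere objective only.
import numpy as np

def pso(f, dim=5, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(42)
    x = rng.uniform(-5, 5, (n_particles, dim))    # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, best_val = pso(lambda p: float(np.sum(p ** 2)))  # sphere function
```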
Abstract:
A nationwide seroepidemiologic survey of human T. cruzi infection was carried out in Brazil from 1975 to 1980 as a joint programme of the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and the Superintendência de Campanhas (SUCAM), Ministry of Health, of Brazil. Due to the marked heterogeneity of urban populations as a result of wide migratory movements in the country, and since triatomine transmission of the disease occurs mostly in rural areas, the survey was limited to rural populations. The survey was based on a large cluster sampling of complete households, from randomly selected localities comprising 10 to 500 houses, or up to 200 houses in the Amazon region. Random selection of localities and houses was made possible by a detailed mapping of every locality in the country, performed and continuously updated by SUCAM. In the selected houses duplicate samples on filter paper were collected from every resident aged 1 year or older. Samples were tested in one of 14 laboratories scattered throughout the country by the indirect anti-IgG immunofluorescence test, with reagents produced and standardized by a central laboratory located at the Instituto de Medicina Tropical de São Paulo. Continuous quality control was performed at this laboratory, which tested duplicates of 10% to 15% of all samples examined by the collaborating laboratories. Data regarding the number of sera collected, patients' age, sex, place of residence, place of birth and test result were computerized at the Department of Preventive Medicine, Medical School, University of São Paulo, São Paulo, Brazil. Serologic prevalence indices were estimated for each municipality and mapped according to States and Territories in Brazil. Since data were already available for the State of São Paulo and the Federal District, these units were not included in the survey.
Abstract:
This paper presents a genetic algorithm for the Resource Constrained Project Scheduling Problem (RCPSP). The chromosome representation of the problem is based on random keys. The schedule is constructed using a heuristic priority rule in which the priorities of the activities are defined by the genetic algorithm. The heuristic generates parameterized active schedules. The approach was tested on a set of standard problems taken from the literature and compared with other approaches. The computational results validate the effectiveness of the proposed algorithm.
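A minimal sketch of how a random-key chromosome can be decoded into a precedence-feasible activity sequence; resource handling and the parameterized active-schedule heuristic are omitted, so this shows only the decoding idea, not the paper's full procedure:

```python
# Random-key decoding sketch: each gene in [0, 1] is the priority of one
# activity; among eligible activities, the highest key is scheduled first.
import random

def decode(keys, predecessors):
    """Return a precedence-feasible activity sequence ordered by keys."""
    done, sequence = set(), []
    while len(sequence) < len(keys):
        eligible = [a for a in range(len(keys))
                    if a not in done and predecessors[a] <= done]
        nxt = max(eligible, key=lambda a: keys[a])  # highest priority first
        sequence.append(nxt)
        done.add(nxt)
    return sequence

predecessors = {0: set(), 1: {0}, 2: {0}, 3: {1, 2}}  # toy project
chromosome = [random.random() for _ in predecessors]
print(decode(chromosome, predecessors))
```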
Abstract:
IEEE International Symposium on Circuits and Systems, MAY 25-28, 2003, Bangkok, Thailand. (ISI Web of Science)
Abstract:
A random, double-blind, parallel-group clinical trial program was carried out to compare praziquantel, a recently developed anthelmintic drug, with oxamniquine, an already established agent for treating schistosomiasis mansoni. Both drugs were administered orally as a single dose of, on average, 55 mg/kg body weight for praziquantel and 16 mg/kg for oxamniquine. The diagnosis and the parasitological follow-up, lasting a minimum of six months, were based on stool examinations according to the Kato/Katz technique. A patient was considered cured if all results were negative and if he had undergone at least three post-treatment controls, each one comprising three stool examinations. The finding of a single S. mansoni egg in any stool examination indicated a therapeutic failure. A total of 267 cases were treated with praziquantel and 272 with oxamniquine. The two groups were homogeneous with regard to patients' age, clinical form of the disease, risk of reinfection and worm burden, relevant factors in the therapeutic response. The incidence and severity of untoward effects were similar in both groups, but abdominal distress and diarrhoea were more frequently reported under praziquantel and dizziness under oxamniquine (p < 0.05). In the former group a marked urticariform reaction was observed, whereas in the latter one patient presented a convulsion. The laboratory work-up failed to disclose any significant alteration, although the AST, ALT and γ-GT mean values revealed a tendency to increase on the 7th day after oxamniquine intake. The overall parasitological cure rates were 75.5% (139/184) with praziquantel and 69.8% (134/192) with oxamniquine (p > 0.05). Among the non-cured patients, a reduction of 88.6% and 74.6% in the mean number of eggs/g of feces was seen following treatment with praziquantel and oxamniquine, respectively (p < 0.05). In conclusion, in spite of their different chemical, pharmacological and toxicological profiles and mechanisms of action (praziquantel had, moreover, already proved to be 100% active against S. mansoni strains resistant to oxamniquine), both drugs showed comparable tolerance and therapeutic efficacy.
Abstract:
The present generation of eLearning platforms values standards for the interchange of learning objects. Nevertheless, for specialized domains these standards are insufficient to fully describe all the assets, especially when they are used as input for other eLearning services. To address this issue we extended an existing learning objects standard to meet the particular requirements of a specialized domain, namely the automatic evaluation of programming problems. The focus of this paper is the definition of programming problems as learning objects. We introduce a new schema to represent metadata related to automatic evaluation that cannot be conveniently represented using existing standards, such as: the type of automatic evaluation; the requirements of the evaluation engine; or the roles of different assets - test cases, program solutions, etc. This new schema is being used in an interoperable repository of learning objects, called crimsonHex.
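As a purely hypothetical illustration of the kind of metadata such a schema captures, the structure below uses invented field names and values; it is not the actual crimsonHex schema:

```python
# Hypothetical evaluation metadata for a programming problem; every
# field name and value here is invented for illustration only.
problem_metadata = {
    "evaluation": {
        "type": "automatic",                       # type of automatic evaluation
        "engine": {"name": "example-engine",       # assumed engine identifier
                   "min_version": "1.0"},          # requirements of the engine
    },
    "assets": [                                    # roles of the different assets
        {"file": "tests/t01.in",  "role": "test-case-input"},
        {"file": "tests/t01.out", "role": "test-case-output"},
        {"file": "solution.c",    "role": "program-solution"},
    ],
}
```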
Abstract:
The main purpose of this work was the development of procedures for the simulation of atmospheric flows over complex terrain, using OpenFOAM. To this end, tools and procedures were developed apart from this code for the preprocessing and data extraction, which were thereafter applied in the simulation of a real case. For the generation of the computational domain, a systematic method able to translate the terrain elevation model to a native OpenFOAM format (blockMeshDict) was developed. The outcome was a structured mesh, in which the user has the ability to define the number of control volumes and their dimensions. With this procedure, the difficulties of case setup and the high computational effort reported in the literature in association with the use of snappyHexMesh, the OpenFOAM resource explored until then for the accomplishment of this task, were considered to be overcome. The developed procedures for the generation of boundary conditions allowed for the automatic creation of idealized inlet vertical profiles, the definition of wall-function boundary conditions and the calculation of internal field first guesses for the iterative solution process, having as input experimental data supplied by the user. The applicability of the generated boundary conditions was limited to the simulation of turbulent, steady-state, incompressible and neutrally stratified atmospheric flows, always resorting to RaNS (Reynolds-averaged Navier-Stokes) models. For the modelling of terrain roughness, the developed procedure allowed the user to define idealized conditions, such as a uniform aerodynamic roughness length or a value varying with characteristic topographic parameters, or to use real site data, and it was complemented by the development of techniques for the visual inspection of the generated roughness maps. The absence of a forest canopy model limited the applicability of this procedure to low aerodynamic roughness lengths. The developed tools and procedures were then applied in the simulation of a neutrally stratified atmospheric flow over the Askervein hill. In the performed simulations the sensitivity of the solution to different convection schemes, mesh dimensions, ground roughness and formulations of the k-ε and k-ω models was evaluated. When compared to experimental data, calculated values showed a good agreement of speed-up at the hill top and lee side, with a relative error of less than 10% at a height of 10 m above ground level. Turbulent kinetic energy was considered to be well simulated on the windward side and hill top, and grossly predicted on the lee side, where a zone of flow separation was also identified. Despite the need for more work to evaluate the importance of the downstream recirculation zone in the quality of the gathered results, the agreement between the calculated and experimental values and the OpenFOAM sensitivity to the tested parameters were considered to be generally in line with the simulations presented in the reviewed bibliographic sources.
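A minimal sketch of the mesh-generation idea, assuming a synthetic elevation function and a single hex block; a real blockMeshDict also needs its FoamFile header and boundary patches, and this is not the thesis tool itself:

```python
# Illustrative sketch: emit blockMeshDict-style vertices for one
# structured hex block whose bottom face follows a terrain elevation
# function and whose top face is flat; elevation model is synthetic.
def terrain(x, y):
    """Synthetic hill standing in for a real terrain elevation model."""
    return 50.0 * max(0.0, 1.0 - ((x - 500) ** 2 + (y - 500) ** 2) / 250_000)

def corner_vertices(x0, x1, y0, y1, z_top):
    pts = []
    for z_of in (terrain, lambda x, y: z_top):  # bottom ring, then top ring
        for (x, y) in ((x0, y0), (x1, y0), (x1, y1), (x0, y1)):
            pts.append((x, y, z_of(x, y)))
    return pts

lines = ["vertices", "("]
for (x, y, z) in corner_vertices(0, 1000, 0, 1000, 500):
    lines.append(f"    ({x} {y} {z})")
lines += [");", "",
          "blocks", "(",
          "    hex (0 1 2 3 4 5 6 7) (40 40 30) simpleGrading (1 1 5)",
          ");"]
print("\n".join(lines))   # cell counts and grading are user-defined
```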
Abstract:
Empowered by virtualisation technology, cloud infrastructures enable the construction of flexible and elastic computing environments, providing an opportunity for energy and resource cost optimisation while enhancing system availability and achieving high performance. A crucial requirement for effective consolidation is the ability to efficiently utilise system resources for high-availability computing and energy-efficiency optimisation to reduce operational costs and carbon footprints in the environment. Additionally, failures in highly networked computing systems can negatively impact system performance substantially, prohibiting the system from achieving its initial objectives. In this paper, we propose algorithms to dynamically construct and readjust virtual clusters to enable the execution of users' jobs. Allied with an energy optimising mechanism to detect and mitigate energy inefficiencies, our decision-making algorithms leverage virtualisation tools to provide proactive fault-tolerance and energy-efficiency to virtual clusters. We conducted simulations by injecting random synthetic jobs and jobs using the latest version of the Google cloud tracelogs. The results indicate that our strategy improves the work per Joule ratio by approximately 12.9% and the working efficiency by almost 15.9% compared with other state-of-the-art algorithms.
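A toy sketch of the work-per-Joule metric and a simple consolidation test; all numbers and the threshold rule below are invented for illustration, not taken from the paper:

```python
# Toy illustration of the work-per-Joule metric; values are invented.
def work_per_joule(completed_jobs, energy_joules):
    return completed_jobs / energy_joules

def should_consolidate(host_utilization, threshold=0.3):
    """Flag under-utilized hosts whose VMs could migrate elsewhere,
    letting the idle host power down (a simple energy heuristic)."""
    return host_utilization < threshold

baseline = work_per_joule(1200, 9.5e6)   # jobs done per Joule, baseline
ours     = work_per_joule(1200, 8.4e6)   # same work, less energy
print(f"improvement: {100 * (ours / baseline - 1):.1f}%")   # ~13%
```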
Abstract:
Final Master's Project submitted to obtain the degree of Master in Mechanical Engineering / Energy
Abstract:
Volatile organic compounds are a common source of groundwater contamination that can be easily removed by air stripping in columns with random packing, using a counter-current flow between the phases. This work proposes a new methodology for column design, for any type of packing and contaminant, which avoids the necessity of an arbitrarily chosen diameter. It also avoids the use of the usual graphical Eckert correlations for pressure drop. The hydraulic features are chosen beforehand as a design criterion. The design procedure was translated into a convenient algorithm in the C++ language. A column was built in order to test the design and the theoretical steady-state and dynamic behaviour. The experiments were conducted using a solution of chloroform in distilled water. The results allowed for a correction in the theoretical global mass transfer coefficient previously estimated by the Onda correlations, which depend on several parameters that are not easy to control in experiments. To best describe the column behaviour in stationary and dynamic conditions, an original mathematical model was developed. It consists of a system of two nonlinear partial differential equations (distributed parameters). Nevertheless, when flows are steady, the system becomes linear, although it has no evident analytical solution. In steady state the resulting ODE can be solved by analytical methods, and in the dynamic state the discretization of the PDE by finite differences allows this difficulty to be overcome. To estimate the contaminant concentrations in both phases along the column, a numerical algorithm was used. The high number of resulting algebraic equations and the impossibility of generating a recursive procedure did not allow the construction of a generalized programme, but an iterative procedure developed in a spreadsheet allowed for the simulation. The solution is stable only for similar discretization values; if different values of the time/space discretization parameters are used, the solution easily becomes unstable. The system dynamic behaviour was simulated for the common liquid-phase perturbations: step, impulse, rectangular pulse and sinusoidal. The final results do not exhibit strange or unpredictable behaviours.
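A hedged sketch of the finite-difference idea described above, assuming a simplified two-equation counter-current transport model with illustrative parameters (phase-holdup factors are absorbed into the transfer coefficient), not the paper's fitted model:

```python
# Explicit upwind finite differences for two coupled counter-current
# transport equations with an interphase mass-transfer term; parameter
# values are illustrative, not the paper's.
import numpy as np

n, L = 100, 2.0                  # grid points, column height [m]
dz = L / (n - 1)
uL, uG = 0.01, 0.20              # liquid (down) and gas (up) speeds [m/s]
KLa, H = 0.02, 0.15              # transfer coefficient [1/s], Henry constant
dt = 0.4 * dz / max(uL, uG)      # CFL-limited time step for stability
cL, cG = np.zeros(n), np.zeros(n)
cL_in = 1.0                      # liquid feed concentration at the top

for _ in range(20000):
    driving = KLa * (cL - cG / H)            # liquid-to-gas transfer
    dcL = np.zeros(n); dcG = np.zeros(n)
    # upwind derivatives: liquid moves toward z=0, gas toward z=L
    dcL[:-1] = -uL * (cL[:-1] - cL[1:]) / dz - driving[:-1]
    dcG[1:]  = -uG * (cG[1:] - cG[:-1]) / dz + driving[1:]
    cL += dt * dcL; cG += dt * dcG
    cL[-1] = cL_in                           # liquid enters at the top
    cG[0] = 0.0                              # clean air enters at the bottom

print(f"treated-water concentration: {cL[0]:.4f}")
```

Consistent with the abstract's remark, the explicit scheme is only stable when the time step respects the space step (the CFL condition enforced above); mismatched discretization values quickly blow up.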
Abstract:
Recent changes in the operation and planning of power systems have been motivated by the introduction of Distributed Generation (DG) and Demand Response (DR) in the competitive electricity markets' environment, with deep concerns about efficiency. In this context, grid operators, market operators, utilities and consumers must adopt strategies and methods to take full advantage of demand response and distributed generation. This requires that all the involved players consider all the market opportunities, such as the energy and reserve components of electricity markets. The present paper proposes a methodology which considers the joint dispatch of demand response and distributed generation in the context of a distribution network operated by a virtual power player. The resources' participation can be performed in both energy and reserve contexts. This methodology accounts for the probability of actually using the reserve and for the distribution network constraints. Its application is illustrated using a 32-bus distribution network with 66 DG units and 218 consumers classified into 6 types.
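A toy linear-programming sketch of a joint energy/reserve dispatch in the spirit of the methodology; capacities, prices and the reserve-use probability are invented, and the network constraints are omitted for brevity:

```python
# Toy joint energy/reserve dispatch: reserve cost is weighted by the
# probability of the reserve actually being used; all data invented.
from scipy.optimize import linprog

demand, reserve_req, p_use = 10.0, 2.0, 0.1   # MWh, MWh, use probability
c_dg, c_dr = 40.0, 55.0                        # energy prices [EUR/MWh]
cr_dg, cr_dr = 30.0, 35.0                      # reserve prices [EUR/MWh]
cap_dg, cap_dr = 8.0, 6.0                      # DG and DR capacities

# variables: [e_dg, e_dr, r_dg, r_dr]
c = [c_dg, c_dr, p_use * cr_dg, p_use * cr_dr]
A_eq, b_eq = [[1, 1, 0, 0]], [demand]          # energy balance
A_ub = [[0, 0, -1, -1],                        # reserve requirement met
        [1, 0, 1, 0],                          # DG energy + reserve <= cap
        [0, 1, 0, 1]]                          # DR energy + reserve <= cap
b_ub = [-reserve_req, cap_dg, cap_dr]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
print(res.x, res.fun)   # dispatch and expected cost
```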