948 results for fixed-point arithmetic
Abstract:
Economic dispatch (ED) problems often exhibit non-linear, non-convex characteristics due to valve-point effects. Further, various constraints and factors, such as prohibited operating zones, ramp-rate limits and security constraints imposed by the generating units, and power loss in transmission, make it even more challenging to obtain the global optimum using conventional mathematical methods. Meta-heuristic approaches are capable of solving non-linear, non-continuous and non-convex problems effectively, as they impose no requirements on the optimization problem. However, most methods reported so far focus mainly on a specific type of ED problem, such as static or dynamic ED. This paper proposes a hybrid harmony search with an arithmetic crossover operation, named ACHS, for solving five different types of ED problems: static ED with valve-point effects, ED with prohibited operating zones, ED considering multiple fuel cells, combined heat and power ED, and dynamic ED. In the proposed ACHS, the global best information and arithmetic crossover are used to update each newly generated solution and speed up convergence, which strengthens the algorithm's exploitation capability. To balance exploitation and exploration, an opposition-based learning (OBL) strategy is employed to enhance the diversity of solutions. Four commonly used crossover operators are also investigated, and the arithmetic crossover proves more efficient than the others when incorporated into HS. To study its scalability comprehensively, ACHS is first tested on a group of benchmark functions with 100 dimensions and compared with several state-of-the-art methods. It is then used to solve seven different ED cases and compared with the results reported in the literature. All the results confirm the superiority of ACHS across these optimization problems.
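The two operators named in this abstract are simple to state. Below is a minimal sketch of arithmetic crossover toward the global best and of opposition-based learning; the function names, the blend coefficient, and the bound handling are illustrative assumptions, not details taken from the paper.

```python
import random

def arithmetic_crossover(candidate, global_best, low, high):
    """Arithmetic crossover: a convex blend of a candidate with the global
    best, child_i = a*candidate_i + (1 - a)*best_i (blend factor assumed
    to be drawn uniformly per call)."""
    a = random.random()
    child = [a * c + (1.0 - a) * b for c, b in zip(candidate, global_best)]
    # keep the child inside the search box
    return [min(max(x, lo), hi) for x, lo, hi in zip(child, low, high)]

def opposite_solution(candidate, low, high):
    """Opposition-based learning (OBL): reflect each coordinate within its
    bounds, x_i -> low_i + high_i - x_i, to diversify the harmony memory."""
    return [lo + hi - x for x, lo, hi in zip(candidate, low, high)]
```

In an HS loop one would keep whichever of the candidate and its opposite evaluates better, which is the usual way OBL trades one extra function evaluation for diversity.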
Abstract:
This paper explores the theme of exhibiting architectural research through a particular example, the development of the Irish pavilion for the 14th architectural biennale, Venice 2014. Responding to Rem Koolhaas’s call to investigate the international absorption of modernity, the Irish pavilion became a research project that engaged with the development of the architectures of infrastructure in Ireland in the twentieth and twenty-first centuries. Central to this proposition was that infrastructure is simultaneously a technological and cultural construct, one that, for Ireland, occupied a critical position in the building of a new, independent post-colonial nation state after 1921.
Presupposing that infrastructure consists of both visible and invisible networks, the idea of a matrix became a central conceptual and visual tool in the curatorial and design process for the exhibition and pavilion. To begin with, this was a two-dimensional grid used to identify and order what came to be described as a series of ten ‘infrastructural episodes’. These were determined chronologically across the decades between 1914 and 2014, and their spatial manifestations articulated in terms of scale: micro, meso and macro. At this point ten academics were approached as researchers. Their purpose was twofold: to establish the broader narratives around which the infrastructures developed, and to scrutinise relevant archives for compelling visual material. Defining the meso scale as that of the building, the media unearthed were further filtered and edited according to a range of categories – filmic/image, territory, building detail, and model – which sought to communicate the relationship between the pieces of architecture and the larger systems to which they connect. New drawings realised by the design team further iterated these relationships, filling in gaps in the narrative by providing composite, strategic or detailed drawings.
Conceived as an open-ended and extendable matrix, the pavilion was influenced by a series of academic writings, curatorial practices, artworks and other installations, including Frederick Kiesler’s City in Space (1925), Edoardo Persico and Marcello Nizzoli’s Medaglia d’Oro room (1934), Sol LeWitt’s Incomplete Open Cubes (1974) and Rosalind Krauss’s seminal text ‘Grids’ (1979). A modular frame whose structural bays would each hold and present an ‘episode’, the pavilion became both a visual analogue of the unseen networks embodying infrastructural systems and a reflection on the predominance of framed structures within the buildings exhibited. Sharing the aspiration of adaptability of many of these schemes, its white-painted timber components are connected by easily dismantled steel fixings. These, and its modularity, allow the structure to be taken down and subsequently re-erected in different iterations. The pavilion itself is therefore imagined as essentially provisional and – as with infrastructure – as having no fixed form. Presenting archives and other material over time, the transparent nature of the space allowed these to overlap visually, conveying the nested nature of infrastructural production. Pursuing a means to evoke the qualities of infrastructural space while conveying a historical narrative, the exhibition’s termination in the present is designed to provoke in the visitor a perceptual extension of the matrix to engage with the future.
Abstract:
Data compression is the computing technique that aims to reduce the size of information in order to minimize the required storage space and to speed up data transmission over bandwidth-limited networks. Several compression techniques, such as LZ77 and its variants, suffer from a problem we call redundancy caused by the multiplicity of encodings. Multiplicity of encodings (ME) means that the source data can be encoded in different ways. In its simplest case, ME occurs when a compression technique has the option, during the encoding process, of coding a symbol in several different ways. The bit-recycling compression technique was introduced by D. Dubé and V. Beaudoin to minimize the redundancy caused by ME. Variants of bit recycling have been applied to LZ77, and the experimental results obtained lead to better compression (a reduction of about 9% in the size of files compressed by Gzip, obtained by exploiting ME). Dubé and Beaudoin pointed out that their technique may not perfectly minimize the redundancy caused by ME, because it is built on Huffman coding, which cannot handle codewords of fractional lengths; that is, it can only generate codewords of integral lengths. Moreover, Huffman-based bit recycling (HuBR) imposes additional constraints to avoid certain situations that degrade its performance. Unlike Huffman codes, arithmetic coding (AC) can handle codewords of fractional lengths. Furthermore, over recent decades arithmetic codes have attracted many researchers, since they are more powerful and more flexible than Huffman codes. Consequently, this work aims to adapt bit recycling to arithmetic codes in order to improve coding efficiency and flexibility. We addressed this problem through our four (published) contributions, which are presented in this thesis and can be summarized as follows. First, we propose a new technique for adapting Huffman-based bit recycling (HuBR) to arithmetic coding, named arithmetic-coding-based bit recycling (ACBR). It describes the framework and the principles of the adaptation of HuBR to ACBR. We also present the theoretical analysis needed to estimate the redundancy that can be removed by HuBR and ACBR in applications that suffer from ME. This analysis shows that ACBR achieves perfect recycling in all cases, whereas HuBR achieves such performance only in very specific cases. Second, the problem with the aforementioned ACBR technique is that it requires arbitrary-precision computation, which demands unlimited (or infinite) resources. To make it usable, we propose a new finite-precision version. The technique thus becomes efficient and applicable on computers with conventional fixed-size registers, and can easily be interfaced with applications that suffer from ME. Third, we propose the use of HuBR and ACBR as a means of reducing redundancy in order to obtain a variable-to-fixed binary code. We have proved, both theoretically and experimentally, that the two techniques yield a significant improvement (less redundancy). In this respect, ACBR outperforms HuBR and covers a wider class of binary sources that can benefit from a plurally parsable dictionary. We also show that ACBR is more flexible than HuBR in practice. Fourth, we use HuBR to reduce the redundancy of the balanced codes generated by Knuth's algorithm. To compare the performance of HuBR and ACBR, the corresponding theoretical results for both are presented. The results show that the two techniques achieve almost the same redundancy reduction on the balanced codes generated by Knuth's algorithm.
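The notion of multiplicity of encodings (ME) is easy to make concrete. The sketch below, which is ours and not from the thesis, counts the distinct LZ77-style parses of a short string; any count above one is exactly the redundancy that bit recycling tries to reclaim.

```python
from functools import lru_cache

def count_encodings(s: str) -> int:
    """Count distinct LZ77-style encodings of s: at each position the coder
    may emit the next character as a literal, or any (offset, length) match
    copying from the already-encoded output. Matches with the same content
    but different offsets count separately, since they produce different
    codes for the same data, which is precisely ME."""
    n = len(s)

    @lru_cache(maxsize=None)
    def ways(i: int) -> int:
        if i == n:
            return 1
        total = ways(i + 1)              # emit s[i] as a literal
        for start in range(i):           # every possible match source
            length = 0
            # overlap is allowed, as in LZ77: copied bytes exist by copy time
            while i + length < n and s[start + length] == s[i + length]:
                length += 1
                total += ways(i + length)
        return total

    return ways(0)

print(count_encodings("abab"))  # 5 distinct encodings for this 4-char string
```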
Abstract:
This thesis examines the performance of Canadian fixed-income mutual funds in the context of an unobservable market factor that affects mutual fund returns. We use various selection and timing models augmented with univariate and multivariate regime-switching structures. These models assume a joint distribution of an unobservable latent variable and fund returns. The fund sample comprises six Canadian value-weighted portfolios with different investment objectives from 1980 to 2011: Canadian fixed-income funds, Canadian inflation-protected fixed-income funds, Canadian long-term fixed-income funds, Canadian money market funds, Canadian short-term fixed-income funds and high-yield fixed-income funds. We find strong evidence that more than one state variable is necessary to explain the dynamics of the returns on Canadian fixed-income funds. For instance, Canadian fixed-income funds clearly show two regimes, with a turning point during the mid-eighties. This structural break corresponds to an increase in the Canadian bond index from its low values in the early 1980s to its current high values. Results for the other fixed-income funds show latent state variables that mimic the behaviour of general economic activity. Overall, we report that Canadian bond fund alphas are negative; in other words, fund managers do not add value through their selection abilities. We find evidence that Canadian fixed-income fund portfolio managers are successful market timers who shift portfolio weights between risky and riskless financial assets according to expected market conditions. Conversely, Canadian inflation-protected funds, Canadian long-term fixed-income funds and Canadian money market funds show no market-timing ability. We conclude that these managers generally do not achieve positive performance by actively managing their portfolios. We also report that the Canadian fixed-income fund portfolios perform asymmetrically under different economic regimes; in particular, these portfolio managers demonstrate poorer selection skills during recessions. Finally, we demonstrate that the multivariate regime-switching model is superior to univariate models, given the dynamic market conditions and the correlation between fund portfolios.
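As a rough illustration of the kind of model involved, the sketch below simulates fund returns under a two-state Markov regime-switching selection/timing structure. Every parameter value is invented for illustration, and the latent state is, as in the thesis, unobservable to the econometrician.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-regime return model (all parameters made up):
#   r_t = alpha[s_t] + beta[s_t] * market_t + eps_t,  s_t a 2-state Markov chain
P = np.array([[0.95, 0.05],          # transition matrix: rows = current state
              [0.10, 0.90]])
alpha = np.array([-0.0002, -0.0010])  # negative alphas, as the thesis reports
beta  = np.array([0.40, 0.90])        # regime-dependent market exposure
sigma = np.array([0.002, 0.006])      # regime-dependent idiosyncratic risk

T = 240                               # e.g. 20 years of monthly returns
market = rng.normal(0.004, 0.02, T)
s = 0
returns = np.empty(T)
for t in range(T):
    returns[t] = alpha[s] + beta[s] * market[t] + rng.normal(0, sigma[s])
    s = rng.choice(2, p=P[s])         # draw the next latent state
```

Estimation then inverts this setup: given only `returns` and `market`, the likelihood is maximized over the transition matrix and the regime-specific parameters.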
Abstract:
Implicit surfaces are useful in many areas of computer graphics. One of their main advantages is that they can easily be used as modelling primitives. Even so, they are not widely used because rendering them is quite time-consuming. When an accurate visualization is needed, the best option is ray tracing. However, small parts of the surfaces disappear during rendering. This is caused by the truncation inherent in the computer's floating-point representation: some bits are lost during the mathematical operations in the intersection algorithms. This thesis presents algorithms to solve these problems. The research is based on Modal Interval Analysis, which includes tools for solving problems with quantified uncertainty. The thesis also provides the mathematical foundations needed to develop these algorithms.
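The failure mode described, surface slivers lost to floating-point truncation, is what interval methods are designed to prevent. The sketch below uses plain interval arithmetic (not the Modal Interval Analysis the thesis develops) to isolate a ray/implicit-surface intersection without ever discarding a region that might contain a root; the sphere, ray, and tolerance are illustrative choices.

```python
# Plain interval arithmetic: evaluating f over an interval yields bounds
# that are never too tight, so a root cannot be lost to rounding.

class I:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        o = _iv(o); return I(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        o = _iv(o); return I(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        o = _iv(o)
        p = (self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi)
        return I(min(p), max(p))
    __radd__, __rmul__ = __add__, __mul__

def _iv(x):
    return x if isinstance(x, I) else I(x, x)

def first_hit(f, lo, hi, eps=1e-8):
    """Return a width-eps interval on which f cannot be certified nonzero
    (so it may contain the first root), or None. A branch is pruned only
    when the interval bounds prove that f cannot vanish on it."""
    box = f(I(lo, hi))
    if box.lo > 0 or box.hi < 0:
        return None
    if hi - lo < eps:
        return (lo, hi)
    mid = 0.5 * (lo + hi)
    return first_hit(f, lo, mid, eps) or first_hit(f, mid, hi, eps)

# Unit sphere along the ray (0,0,-3) + t*(0,0,1): f(t) = (t-3)^2 - 1.
f = lambda t: t*t - 6*t + 8
print(first_hit(f, 0.0, 10.0))  # brackets t = 2, the nearer intersection
```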
Abstract:
The conventional method for the assessment of acute dermal toxicity (OECD Test Guideline 402, 1987) uses death of animals as an endpoint to identify the median lethal dose (LD50). A new OECD Testing Guideline called the dermal fixed dose procedure (dermal FDP) is being prepared to provide an alternative to Test Guideline 402. In contrast to Test Guideline 402, the dermal FDP does not provide a point estimate of the LD50, but aims to identify the dose of the substance under investigation that causes clear signs of nonlethal toxicity. This is then used to assign classification according to the new Globally Harmonised System of Classification and Labelling scheme (GHS). The dermal FDP has been validated using statistical modelling rather than by in vivo testing. The statistical modelling approach enables calculation of the probability of each GHS classification, and of the expected numbers of deaths and animals used in the test, for imaginary substances with a range of LD50 values and dose-response curve slopes. This paper describes the dermal FDP and reports the results of the statistical evaluation. It is shown that the procedure will be completed with considerably less death and suffering than Test Guideline 402, and will classify substances either in the same or a more stringent GHS class than that assigned on the basis of the LD50 value.
Statistical evaluation of the fixed concentration procedure for acute inhalation toxicity assessment
Abstract:
The conventional method for the assessment of acute inhalation toxicity (OECD Test Guideline 403, 1981) uses death of animals as an endpoint to identify the median lethal concentration (LC50). A new OECD Testing Guideline called the Fixed Concentration Procedure (FCP) is being prepared to provide an alternative to Test Guideline 403. Unlike Test Guideline 403, the FCP does not provide a point estimate of the LC50, but aims to identify an airborne exposure level that causes clear signs of nonlethal toxicity. This is then used to assign classification according to the new Globally Harmonized System of Classification and Labelling scheme (GHS). The FCP has been validated using statistical simulation rather than by in vivo testing. The statistical simulation approach predicts the GHS classification outcome and the numbers of deaths and animals used in the test for imaginary substances with a range of LC50 values and dose-response curve slopes. This paper describes the FCP and reports the results of the statistical simulation study assessing its properties. It is shown that the procedure will be completed with considerably less death and suffering than Test Guideline 403, and will classify substances either in the same or a more stringent GHS class than that assigned on the basis of the LC50 value.
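A toy version of the simulation logic shared by this study and the dermal FDP paper above: assume a log-dose probit dose-response for a hypothetical substance, then compute the death probability at each fixed exposure level and the class a fixed-level procedure would assign. The 0.5 decision rule and the level values below are placeholders, not the guidelines' actual sighting-study schemes.

```python
import math

def p_death(dose, ld50, slope):
    """Probit-style dose-response: P(death) at a given dose for a substance
    characterized by its LD50/LC50 and slope (both hypothetical inputs)."""
    z = slope * (math.log10(dose) - math.log10(ld50))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def toy_classification(ld50, slope, fixed_levels=(5, 50, 300, 2000)):
    """Walk the fixed levels upward; classify at the first level where death
    is more likely than not. Returns the level bounding the toy GHS class."""
    for dose in fixed_levels:
        if p_death(dose, ld50, slope) > 0.5:
            return dose
    return None  # no class assigned over the tested range

print(toy_classification(ld50=120.0, slope=2.0))  # -> 300
```

Running such a rule over a grid of LD50 values and slopes, and tallying expected deaths per run, is the general shape of the validation exercise both papers report.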
Abstract:
The successful implementation of just-in-time (JIT) purchasing policy in many industries has prompted many companies that still use the economic order quantity (EOQ) purchasing policy to ponder whether they should switch to JIT purchasing. Despite existing studies that directly compare the costs of the EOQ and JIT purchasing systems, this decision is still difficult to make, especially when price discounts have to be considered. JIT purchasing may not always be successful, even for plants that have experienced, or can take advantage of, a reduction in physical space under JIT operations. Hence, the objective of this study is to expand a classical EOQ model with price discounts to derive the EOQ–JIT cost indifference point. The objective was tested and achieved through a survey and a case study conducted in the ready-mixed concrete industry in Singapore.
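For orientation, here is what a cost indifference point means in the classical setting, before the paper's price-discount extension: equate annual JIT cost to annual EOQ cost (purchase cost plus the combined ordering/holding cost at the optimal order quantity) and solve for demand. All symbols and numbers below are illustrative assumptions, not the paper's data.

```python
import math

def eoq_annual_cost(D, P, S, H):
    """Classical EOQ: purchase cost plus ordering/holding cost at the
    optimal order quantity Q* = sqrt(2DS/H), which equals sqrt(2DSH)."""
    return D * P + math.sqrt(2.0 * D * S * H)

def jit_annual_cost(D, P_jit):
    """JIT: no cycle inventory, but typically a higher unit price."""
    return D * P_jit

def indifference_demand(P_eoq, P_jit, S, H):
    """Annual demand at which the two policies cost the same; below it JIT
    is cheaper, above it EOQ is. Equating the costs gives
    D* = 2*S*H / (P_jit - P_eoq)**2."""
    return 2.0 * S * H / (P_jit - P_eoq) ** 2

# Hypothetical numbers: $50/order, $2/unit-year holding, a $0.05 JIT premium.
print(indifference_demand(P_eoq=10.00, P_jit=10.05, S=50.0, H=2.0))  # 80000.0
```

A price discount shifts P_eoq downward at higher volumes, which is why the paper's discount-aware indifference point needs more than this closed form.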
Abstract:
We give a comprehensive analysis of the Euler-Jacobi problem of motion in the field of two fixed centers with arbitrary relative strength and for positive values of the energy. These systems represent nontrivial examples of integrable dynamics and are analysed from the point of view of the energy-momentum mapping from the phase space to the space of the integration constants. In this setting, we describe the structure of the scattering trajectories in phase space and derive an explicit description of the bifurcation diagram, i.e., the set of critical values of the energy-momentum map.
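For reference, the system under study is a point mass attracted by two fixed Newtonian centers of (possibly different) strengths; in a standard normalization, and in our notation rather than the paper's, the Hamiltonian reads:

```latex
H = \tfrac{1}{2}\,\lVert \mathbf{p} \rVert^{2}
    \;-\; \frac{\mu_{1}}{\lVert \mathbf{q} - \mathbf{c}_{1} \rVert}
    \;-\; \frac{\mu_{2}}{\lVert \mathbf{q} - \mathbf{c}_{2} \rVert}
```

Separating the problem in elliptic coordinates yields a second conserved quantity G; the energy-momentum map of the abstract sends a phase-space point to its pair of integration constants (H, G), and its critical values form the bifurcation diagram.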
Abstract:
Introduction: Orthodontic tooth movement uses mechanical forces that result in inflammation during the first days. Myeloperoxidase (MPO) is an enzyme found in polymorphonuclear neutrophil (PMN) granules, and it is used to estimate the number of PMNs in tissues. So far, MPO has not been used to study the inflammatory alterations that follow the application of orthodontic tooth movement forces. The aim of this study was to determine MPO activity in the gingival crevicular fluid (GCF) and saliva (whole stimulated saliva) of orthodontic patients at different time points after fixed appliance activation. Methods: MPO was determined in the GCF, collected by means of periopaper, and in the saliva of 14 patients with fixed orthodontic appliances. GCF and saliva samples were collected at baseline and at 2 hours, 7 days, and 14 days after application of the orthodontic force. Results: Mean MPO activity was increased in both the GCF and the saliva of orthodontic patients at 2 hours after appliance activation (P<0.02 for all comparisons). PMN infiltration into the periodontal ligament in response to the orthodontic force probably accounts for the increased MPO level observed at this time point. Conclusions: MPO might be a good marker for assessing inflammation in orthodontic movement; it deserves further study in orthodontic therapy. (Am J Orthod Dentofacial Orthop 2010;138:613-6)
Abstract:
From a financial perspective, this dissertation analyzes the performance of the Brazilian mutual fund industry for an average retail client. The funds most representative for the local population, the open-end fixed-income funds, are selected and their performance measured in order to answer whether clients of this industry obtained a proper return on their investments between August 2010 and August 2013. A proper return is understood as the preservation of the purchasing power of the individual's savings, which is achieved when a mutual fund performs positively after discounting taxes, administrative fees and inflation. After answering this question, the dissertation explores a possible alternative: Tesouro Direto, an example of a financial approach that could foster disintermediation between savings and investments through electronic channels. New electronic platforms with a broader scope could be used to fund productive investments more efficiently by better remunerating Brazilian savings. Tesouro Direto may point towards a new paradigm.
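The dissertation's notion of a "proper return" reduces to a short computation. Below is a minimal one-period sketch with invented rates; note that the actual Brazilian income tax on fixed-income funds is regressive in the holding period, which this simplification ignores.

```python
def real_net_return(gross, admin_fee, tax_rate, inflation):
    """Nominal fund return -> real return in the investor's pocket:
    subtract the administrative fee, tax the remaining nominal gain,
    then deflate by inflation. A one-period simplification of the
    dissertation's criterion for preserving purchasing power."""
    after_fee = (1.0 + gross) * (1.0 - admin_fee) - 1.0
    after_tax = after_fee * (1.0 - tax_rate) if after_fee > 0 else after_fee
    return (1.0 + after_tax) / (1.0 + inflation) - 1.0

# Hypothetical retail scenario: 9% gross, 2% fee, 15% tax, 6% inflation.
print(f"{real_net_return(0.09, 0.02, 0.15, 0.06):.2%}")  # -0.19%: power lost
```

With these (invented but plausible) numbers the client ends the period with less purchasing power than at the start, which is exactly the outcome the dissertation sets out to test.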
Abstract:
The conventional Newton and fast decoupled power flow (FDPF) methods have been considered inadequate for obtaining the maximum loading point of power systems, owing to ill-conditioning problems at and near this critical point. It is well known that the P-V and Q-theta decoupling assumptions of the fast decoupled power flow formulation no longer hold in the vicinity of the critical point. Moreover, the Jacobian matrix of the Newton method becomes singular at this point. However, the maximum loading point can be computed efficiently through the parameterization techniques of continuation methods. In this paper it is shown that, by using either theta or V as the parameter, the new fast decoupled power flow versions (XB and BX) become adequate for the computation of the maximum loading point with only a few small modifications. The possible use of the reactive power injection at a selected PV bus (Q(PV)) as the continuation parameter (mu) for the computation of the maximum loading point is also shown. A trivial secant predictor, the modified zero-order polynomial, which uses the current solution and a fixed increment in the parameter (V, theta, or mu) as an estimate of the next solution, is used in the predictor step. These new versions are compared with each other with the purpose of pointing out their features, as well as the influence of reactive power and transformer tap limits. The results obtained with the new approach for the IEEE test systems (14, 30, 57 and 118 buses) are presented and discussed in the companion paper. They show that the characteristics of the conventional method are enhanced and the region of convergence around the singular solution is enlarged. In addition, it is shown that the parameters can be switched during the tracing process in order to determine efficiently all the points of the PV curve with few iterations.
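The predictor described above is deliberately simple. Here is a sketch of the surrounding predictor-corrector loop; the `corrector` callable stands in for the parameterized FDPF solve and is an assumption of this sketch, not the paper's code.

```python
import numpy as np

def trace_pv_curve(corrector, x0, lam0, step, n_points):
    """Continuation with the trivial 'modified zero-order polynomial'
    predictor: the next point is estimated as the CURRENT solution with
    only the continuation parameter advanced by a fixed increment, after
    which the (parameterized) power-flow corrector refines it."""
    x, lam = np.asarray(x0, dtype=float), float(lam0)
    curve = [(lam, x.copy())]
    for _ in range(n_points):
        lam_pred = lam + step        # advance the parameter (V, theta, or mu)
        x_pred = x.copy()            # zero-order: reuse the current state
        x, lam = corrector(x_pred, lam_pred)   # e.g. FDPF iterations
        curve.append((lam, np.asarray(x, dtype=float).copy()))
    return curve
```

Because the predictor is flat in the state, parameter switching (between theta, V, and mu, as the paper describes) is what keeps the corrector converging around the nose of the PV curve.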
Abstract:
The parameterized fast decoupled power flow (PFDPF), versions XB and BX, using either theta or V as the parameter, was proposed by the authors in Part I of this paper. The use of the reactive power injection at a selected PV bus (Q(PV)) as the continuation parameter for the computation of the maximum loading point (MLP) was also investigated. In this paper, the proposed versions, obtained with only small modifications of the conventional one, are used for the computation of the MLP of the IEEE test systems (14, 30, 57 and 118 buses). These new versions are compared with each other with the purpose of pointing out their features, as well as the influence of reactive power and transformer tap limits. The results obtained with the new approaches are presented and discussed. They show that the characteristics of the conventional FDPF method are enhanced and the region of convergence around the singular solution is enlarged. In addition, it is shown that the versions can be switched during the tracing process in order to determine efficiently all the points of the PV curve with few iterations. A trivial secant predictor, the modified zero-order polynomial, which uses the current solution and a fixed increment in the parameter (V, theta, or mu) as an estimate of the next solution, is used for the predictor step.
Abstract:
Objectives: The present study used strain gauge analysis to perform an in vitro evaluation of the effect of axial loading on three-element implant-supported partial fixed prostheses, varying the type of prosthetic cylinder and the loading point. Material and methods: Three internal-hexagon implants were embedded in a line in a polyurethane block. Microunit abutments were connected to the implants with a torque of 20 Ncm, and prefabricated Co-Cr cylinders and plastic prosthetic cylinders were screwed onto the abutments, which received standard patterns cast in Co-Cr alloy (n=5). Four strain gauges (SG) were bonded onto the surface of the block tangentially to the implants: SG 01 mesial to implant 1, SG 02 and SG 03 mesial and distal to implant 2, respectively, and SG 04 distal to implant 3. Each metallic structure was screwed onto the abutments with a 10 Ncm torque, and an axial load of 30 kg was applied at five predetermined points (A, B, C, D, E). The data obtained from the strain gauge analyses were analyzed statistically by RM ANOVA and Tukey's test, at a significance level of p<0.05. Results: There was a significant effect of loading point (p=0.0001), with point B generating the smallest microdeformation (239.49 με) and point D the largest (442.77 με). No significant difference was found for cylinder type (p=0.748). Conclusions: The type of cylinder did not affect the magnitude of microdeformation, but the location of the axial load did.
Abstract:
Purpose: To evaluate the flexural strength of two fixed dental prosthesis (FDP) designs simulating frameworks of adhesive fixed partial prostheses, reinforced or not with glass fiber. Materials and Methods: Forty specimens made with composite resin were divided into four groups according to framework design and the presence of fiber reinforcement: A1 - occlusal support; A2 - occlusal support + glass fiber; B1 - occlusal and proximal supports; B2 - occlusal and proximal supports + glass fiber. The specimens were subjected to the three-point bending test, and the data were submitted to two-way ANOVA and Tukey's test (5%). Results: Group A2 (97.9 +/- 38 N) was statistically significantly different from all other experimental groups, presenting a significantly lower mean flexural strength. Conclusion: The use of glass fibers did not improve the flexural strength of the composite resin, and designs with occlusal and proximal supports presented better results than the design with occlusal support only.
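Since the abstract reports strength as a failure load in newtons, a reader wanting a stress value can apply the standard three-point bending relation for a rectangular beam (our addition; the specimen span L, width b, and depth d are not given in the abstract):

```latex
\sigma_{f} = \frac{3 F L}{2 b d^{2}}
```

Here F is the failure load at the midpoint of the span; the relation holds for the simply supported, centrally loaded configuration that the three-point bending test uses.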