884 results for Computational Geometry and Object Modelling
Abstract:
Canada releases over 150 billion litres of untreated and undertreated wastewater into the water environment every year. To clean up urban wastewater, the new federal Wastewater Systems Effluent Regulations (WSER), which establish national baseline effluent quality standards achievable through secondary wastewater treatment, were enacted on July 18, 2012. With respect to wastewater from combined sewer overflows (CSOs), the Regulations require municipalities to report the annual quantity and frequency of effluent discharges. The City of Toronto currently has about 300 CSO locations within an area of approximately 16,550 hectares. The total sewer length of the CSO area is about 3,450 km, and the number of sewer manholes is about 51,100. System-wide monitoring of all CSO locations has never been undertaken because of cost and practicality. Instead, the City has relied on estimation methods and modelling approaches in the past, allowing funds that would otherwise be used for monitoring to be applied to reducing the impacts of the CSOs. To fulfill the WSER requirements, the City is now undertaking a study based on GIS-based hydrologic and hydraulic modelling. Results show the usefulness of this approach for 1) determining the flows contributing to the combined sewer system in the local and trunk sewers under dry weather flow, wet weather flow, and snowmelt conditions; 2) assessing the hydraulic grade line and surface water depth in all local and trunk sewers under heavy rain events; 3) analysing local and trunk sewer capacities for future growth; and 4) reporting the annual quantity and frequency of CSOs as required by the new Regulations. This modelling approach has also allowed funds to be applied toward reducing and ultimately eliminating the adverse impacts of CSOs rather than expending resources on unnecessary and costly monitoring.
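Reporting the annual quantity and frequency of CSOs from a modelled overflow time series reduces to counting discharge events and integrating their volumes. A minimal sketch of that post-processing step, assuming a hypothetical hourly overflow series at one CSO location (the data, time step, and event definition below are illustrative, not the City's actual model output):

```python
# Post-process a modelled CSO overflow time series into the two WSER
# reporting quantities: annual discharge volume and event frequency.
# All values here are hypothetical placeholders, not Toronto model output.

def cso_annual_summary(flows_m3s, dt_s=3600.0):
    """flows_m3s: hourly modelled overflow rates (m^3/s) for one year."""
    total_volume_m3 = 0.0
    events = 0
    in_event = False
    for q in flows_m3s:
        if q > 0.0:
            total_volume_m3 += q * dt_s   # integrate flow over the time step
            if not in_event:              # a new contiguous overflow period starts
                events += 1
                in_event = True
        else:
            in_event = False
    return total_volume_m3, events

# Example with a toy series containing two separate overflow events.
flows = [0.0, 0.0, 0.8, 1.2, 0.0, 0.0, 0.5, 0.3, 0.0]
volume, frequency = cso_annual_summary(flows)
print(f"volume = {volume:.0f} m3, events = {frequency}")
```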
Abstract:
This thesis provides three original contributions to the field of Decision Sciences. The first contribution explores the field of heuristics and biases. New variations of the Cognitive Reflection Test (CRT, a test that measures "the ability or disposition to resist reporting the response that first comes to mind") are provided. The original CRT (S. Frederick [2005] Journal of Economic Perspectives, v. 19:4, pp. 24-42) has items in which the response is immediate, and erroneous. It is shown that merely varying the numerical parameters of the problems produces large deviations in responses. Not only are the final results affected by the proposed variations, but so is processing fluency. It seems that the magnitude of the numbers serves as a cue to activate System 2-type reasoning. The second contribution explores Managerial Algorithmics Theory (M. Moldoveanu [2009] Strategic Management Journal, v. 30, pp. 737-763), an ambitious research program which states that managers display cognitive choices with a "preference towards solving problems of low computational complexity". An empirical test of this hypothesis is conducted, with results showing that this premise is not supported. A number of problems are designed with the intent of testing the predictions of managerial algorithmics against the predictions of cognitive psychology. The results demonstrate (once again) that framing effects profoundly affect choice, and (as an original insight) that managers are unable to distinguish between computational complexity problem classes. The third contribution explores a new approach to a computationally complex problem in marketing: the shelf space allocation problem (M-H Yang [2001] European Journal of Operational Research, v. 131, pp. 107-118). A new representation for a genetic algorithm is developed, and computational experiments demonstrate its feasibility as a practical solution method. These studies lie at the interface of psychology and economics (bounded rationality and the heuristics and biases programme), of psychology, strategy, and computational complexity, and of heuristics for computationally hard problems in management science.
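The shelf space allocation contribution hinges on encoding an allocation as a chromosome and letting a genetic algorithm search over it. A minimal sketch of that idea under simplified assumptions (facings per product as genes, a linear profit objective with a shelf-capacity penalty); the data, representation, and operators here are illustrative, not the thesis's actual ones:

```python
import random

# Toy shelf space allocation: choose facings per product to maximize profit
# subject to shelf capacity. Data and GA settings are hypothetical.
PROFIT = [4.0, 2.5, 6.0, 3.0, 1.5]   # profit per facing of each product
WIDTH  = [2, 1, 3, 2, 1]             # shelf width taken by one facing
CAPACITY = 20                        # total shelf width available
MAX_FACINGS = 6

def fitness(chrom):
    used = sum(w * n for w, n in zip(WIDTH, chrom))
    profit = sum(p * n for p, n in zip(PROFIT, chrom))
    # a heavy penalty for exceeding capacity keeps infeasible solutions out
    return profit - 100.0 * max(0, used - CAPACITY)

def random_chrom():
    return [random.randint(0, MAX_FACINGS) for _ in PROFIT]

def crossover(a, b):
    cut = random.randrange(1, len(a))      # one-point crossover
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.2):
    return [random.randint(0, MAX_FACINGS) if random.random() < rate else g
            for g in chrom]

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)

pop = [random_chrom() for _ in range(40)]
for _ in range(100):                       # generations
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(len(pop))]
best = max(pop, key=fitness)
print(best, fitness(best))
```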
5th BRICS Trade and Economic Research Network (TERN) meeting: the impact of mega agreements on BRICS
Abstract:
The BRICS TERN (BRICS Trade and Economics Research Network) is a group of independent research institutes established four years ago by five think tanks from Brazil, Russia, India, China, and South Africa. The main objective of the network is to study different aspects of trade and economic relations among these five countries. The purpose of the 5th BRICS TERN Meeting was to analyze and debate the effects of the mega agreement negotiations, mainly those initiated by the US and the EU and already under negotiation, on each of the BRICS' trade policies. Both mega agreements were examined: the Trans-Pacific Partnership (TPP) and the Transatlantic Trade and Investment Partnership (TTIP). The studies covered the main impacts on trade flows and on the international trade rules system from the perspective of each of the countries concerned. This workshop was an initiative of the Center for Global Trade and Investments (CGTI), a think tank on international trade hosted by the FGV Sao Paulo School of Economics. Its main objective is research on trade regulation, preferential trade agreements, trade and currency, and trade and global value chains, through legal analysis and economic modelling. One of its main current research topics is the potential economic and legal impacts of the mega agreements on Brazil and on WTO rules. The meeting was held on March 14, 2014, in Rio de Janeiro, at an opportune time for introducing these issues into the international agenda, in advance of the 6th BRICS Summit scheduled to be held in Brazil in July 2014.
Abstract:
Online geographic databases have been growing steadily, as they have become a crucial source of information for both social networks and safety-critical systems. Since the quality of such applications is largely related to the richness and completeness of their data, it becomes imperative to develop adaptable and persistent storage systems, able to make use of several sources of information as well as to respond to queries as quickly as possible. This work creates a shared and extensible geographic model, able to retrieve and store information from the major spatial sources available. A geographic-based system also has very high requirements in terms of scalability, computational power, and domain complexity, causing several difficulties for a traditional relational database as the number of results increases. NoSQL systems provide valuable advantages in this scenario, in particular graph databases, which are capable of modelling vast amounts of interconnected data while providing a very substantial performance increase for several spatial requests, such as finding shortest-path routes and performing relationship lookups under high concurrency. In this work, we analyze the current state of geographic information systems and develop a unified geographic model, named GeoPlace Explorer (GE). GE is able to import and store spatial data from several online sources at a symbolic level in both a relational and a graph database, and several stress tests were performed in order to identify the advantages and disadvantages of each database paradigm.
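The shortest-path workload that favours graph databases comes down to traversing a weighted graph of places and road segments. A minimal sketch of that traversal with Dijkstra's algorithm over a toy in-memory graph (node names and distances are hypothetical; a graph database would run an equivalent traversal natively):

```python
import heapq

# Toy road network: node -> list of (neighbour, distance in km).
# Names and weights are illustrative placeholders.
ROADS = {
    "station":  [("market", 1.2), ("park", 2.5)],
    "market":   [("station", 1.2), ("museum", 0.8)],
    "park":     [("station", 2.5), ("museum", 1.0)],
    "museum":   [("market", 0.8), ("park", 1.0), ("harbour", 3.1)],
    "harbour":  [("museum", 3.1)],
}

def shortest_path(graph, source, target):
    """Dijkstra's algorithm: returns (total distance, list of nodes)."""
    queue = [(0.0, source, [source])]
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == target:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (dist + weight, neighbour, path + [neighbour]))
    return float("inf"), []

print(shortest_path(ROADS, "station", "harbour"))
```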
Abstract:
Oil production and exploration techniques have evolved over the last decades in order to increase fluid flows and optimize how the required equipment is used. The basic operating principle of the Electric Submersible Pumping (ESP) lift method is the use of an electric downhole motor to drive a centrifugal pump and transport the fluids to the surface. Electric Submersible Pumping is an option that has been gaining ground among artificial lift methods due to its ability to handle large liquid flow rates in onshore and offshore environments. The performance of a well equipped with an ESP system is intrinsically related to the operation of the centrifugal pump, which has the function of converting motor power into head. In the present work, a computational model to analyze the three-dimensional flow in a centrifugal pump used in Electric Submersible Pumping has been developed. Using the commercial program ANSYS CFX, initially with water as the working fluid, the geometry and simulation parameters were defined in order to approximate the flow inside the channels of the pump impeller and diffuser. Three different geometry conditions were initially tested to determine which is most suitable for solving the problem. After choosing the most appropriate geometry, three mesh conditions were analyzed and the obtained values were compared to the experimental head characteristic curve provided by the manufacturer. The results approached the experimental curve, and the simulation time and model convergence were satisfactory considering that the studied problem involves numerical analysis. After the tests with water, oil was used in the simulations and the results were compared to a methodology used in the petroleum industry to correct for viscosity. In general, for the models with water and with oil, the single-phase results were coherent with the experimental curves and, through three-dimensional computational models, they provide a preliminary evaluation for the analysis of two-phase flow inside the channels of centrifugal pumps used in ESP systems.
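Comparing a mesh condition against the manufacturer's head curve amounts to interpolating the experimental curve at the simulated flow rates and checking the relative deviation. A minimal sketch of that comparison with hypothetical values (the flow rates and head numbers below are placeholders, not the pump's actual data):

```python
import numpy as np

# Hypothetical manufacturer head curve (flow in m3/h, head in m) and
# simulated operating points from one mesh condition; placeholder values.
exp_flow = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
exp_head = np.array([32.0, 30.5, 27.0, 21.5, 13.0])

sim_flow = np.array([10.0, 20.0, 30.0])
sim_head = np.array([29.8, 26.1, 22.3])

# Interpolate the experimental curve at the simulated flow rates and
# report the relative deviation of each simulated point.
exp_at_sim = np.interp(sim_flow, exp_flow, exp_head)
rel_error = (sim_head - exp_at_sim) / exp_at_sim

for q, e in zip(sim_flow, rel_error):
    print(f"Q = {q:5.1f} m3/h  deviation = {e * 100:+.1f} %")
```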
Abstract:
In industrial informatics, several attempts have been made to develop notations and semantics used to classify and describe different kinds of system behaviour, particularly in the modelling phase. Such attempts provide the infrastructure to solve real engineering problems and to construct practical systems that aim mainly at increasing the productivity, quality, and safety of the process. Despite the many studies that have attempted to develop friendly methods for industrial controller programming, controllers are still programmed by conventional trial-and-error methods and, in practice, there is little written documentation on these systems. The ideal solution would be a computational environment that allows industrial engineers to implement the system using a high-level language and that follows international standards. Accordingly, this work proposes a methodology for plant and control modelling of discrete event systems that include sequential, parallel, and timed operations, using a formalism based on Statecharts called Basic Statechart (BSC). The methodology also provides automatic procedures to validate and implement these systems. To validate the methodology, two case studies with typical examples from the manufacturing sector are presented. The first example shows a sequential control for a tagged machine, used to illustrate dependencies between the devices of the plant. In the second example, more than one strategy for controlling a manufacturing cell is discussed. The model with no control has 72 states (distinct configurations); the model with sequential control generated 20 different states, but they act in only 8 distinct configurations; and the model with parallel control generated 210 different states, acting in only 26 distinct configurations, and is therefore a less restrictive control strategy than the previous one. Lastly, an example is presented to highlight the modular characteristic of the methodology, which is very important for the maintenance of applications. In this example, the sensors for identifying pieces in the plant were removed, so changes in the control model are needed to transmit the information from the input buffer sensor to the other positions of the cell.
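The sequential-control idea behind the first case study can be pictured as a state machine that steps through the operations of one device while honouring dependencies on plant events. A minimal sketch of such a machine in plain Python, illustrative only and not the BSC formalism itself (states, events, and the transition table are hypothetical):

```python
# Minimal finite state machine for a sequential manufacturing control.
# States, events, and transitions are hypothetical, for illustration only;
# the BSC formalism in the text adds hierarchy, parallelism, and timing.
TRANSITIONS = {
    ("idle",       "piece_at_input"):  "loading",
    ("loading",    "piece_clamped"):   "processing",
    ("processing", "cycle_done"):      "unloading",
    ("unloading",  "piece_removed"):   "idle",
}

class SequentialControl:
    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        nxt = TRANSITIONS.get((self.state, event))
        if nxt is None:
            return False          # event ignored in the current state
        self.state = nxt
        return True

ctrl = SequentialControl()
for ev in ["piece_at_input", "piece_clamped", "cycle_done", "piece_removed"]:
    ctrl.handle(ev)
    print(ev, "->", ctrl.state)
```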
Abstract:
Frequency selective surfaces (FSS) are structures consisting of periodic arrays of conductive elements, called patches, which are usually very thin and printed on dielectric layers, or of apertures perforated in very thin metallic screens, for applications in the microwave and millimeter-wave bands. These structures are often used in aircraft, missiles, satellites, radomes, reflector antennas, high-gain antennas, and microwave ovens, for example. Their main purpose is to filter frequency bands, which can be transmitted or rejected depending on the requirements of the application. In turn, modern communication systems such as GSM (Global System for Mobile Communications), RFID (Radio Frequency Identification), Bluetooth, Wi-Fi, and WiMAX, whose services are highly demanded by society, have required the development of antennas whose main features are low cost, low profile, and reduced dimensions and weight. In this context, the microstrip antenna presents itself as an excellent choice for today's communication systems because, in addition to intrinsically meeting the requirements mentioned, planar structures are easy to manufacture and to integrate with other components in microwave circuits. Consequently, the analysis and synthesis of these devices, mainly because of the wide range of possible shapes, sizes, and operating frequencies of their elements, have been carried out with full-wave models such as the finite element method, the method of moments, and the finite-difference time-domain method. However, these methods, although accurate, demand great computational effort. In this context, computational intelligence (CI) has been used successfully in the design and optimization of planar microwave structures as a very appropriate auxiliary tool, given the complexity of the geometry of the antennas and FSS considered. Computational intelligence is inspired by natural phenomena such as learning, perception, and decision, using techniques such as artificial neural networks, fuzzy logic, fractal geometry, and evolutionary computation. This work studies the application of computational intelligence, using metaheuristics such as genetic algorithms and swarm intelligence, to the optimization of antennas and frequency selective surfaces. Genetic algorithms are computational search methods based on Darwin's theory of natural selection and on genetics, used to solve complex problems, e.g., problems in which the search space grows with the size of the problem. Particle swarm optimization is characterized by the use of collective intelligence and has been applied to optimization problems in many areas of research. The main objective of this work is the use of computational intelligence in the analysis and synthesis of antennas and FSS. The structures considered were a ring-type planar microstrip monopole and a cross-dipole FSS. Optimization algorithms were developed and results were obtained for the optimized geometries of the antennas and FSS considered. To validate the results, several prototypes were designed, constructed, and measured; the measured results showed excellent agreement with the simulated ones. Moreover, the results obtained in this study were compared to those simulated using commercial software, and excellent agreement was also observed.
Specifically, the efficiency of the CI techniques used was evidenced by simulated and measured results: the bandwidth of an antenna for wideband or UWB (Ultra Wideband) operation was optimized using a genetic algorithm, and the bandwidth of a pair of frequency selective surfaces was optimized, by specifying the length of the air gap between them, using a particle swarm optimization algorithm.
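The air-gap optimization described above is, structurally, a one-dimensional particle swarm search over a bandwidth objective. A minimal PSO sketch under stated assumptions: the bandwidth_mhz function below is a smooth stand-in for what would really be a full-wave (or measured) evaluation, and all parameter values are hypothetical:

```python
import random

# Stand-in objective: bandwidth (MHz) of the cascaded FSS pair as a function
# of the air-gap length g (mm). In practice this would call a full-wave
# solver or a trained surrogate; this analytic bump is purely illustrative.
def bandwidth_mhz(g):
    return 900.0 - 12.0 * (g - 6.5) ** 2

LOW, HIGH = 1.0, 12.0          # allowed air-gap range (mm), hypothetical
N, ITERS = 20, 60              # swarm size and iterations
W, C1, C2 = 0.7, 1.5, 1.5      # inertia and acceleration coefficients

pos = [random.uniform(LOW, HIGH) for _ in range(N)]
vel = [0.0] * N
pbest = pos[:]                                  # personal bests
gbest = max(pos, key=bandwidth_mhz)             # global best

for _ in range(ITERS):
    for i in range(N):
        r1, r2 = random.random(), random.random()
        vel[i] = (W * vel[i]
                  + C1 * r1 * (pbest[i] - pos[i])
                  + C2 * r2 * (gbest - pos[i]))
        pos[i] = min(HIGH, max(LOW, pos[i] + vel[i]))   # clamp to the range
        if bandwidth_mhz(pos[i]) > bandwidth_mhz(pbest[i]):
            pbest[i] = pos[i]
            if bandwidth_mhz(pos[i]) > bandwidth_mhz(gbest):
                gbest = pos[i]

print(f"best air gap = {gbest:.2f} mm, bandwidth = {bandwidth_mhz(gbest):.1f} MHz")
```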
Abstract:
This thesis describes design methodologies for frequency selective surfaces (FSSs) composed of periodic arrays of pre-fractal metallic patches on single-layer dielectrics (FR4, RT/duroid). The shapes of the Sierpinski island and T fractal geometries are exploited for the simple design of efficient band-stop spatial filters with applications in the microwave range. Initial results are discussed in terms of the electromagnetic effect of varying parameters such as fractal iteration number (or fractal level), fractal iteration factor, and FSS periodicity, depending on the pre-fractal element used (Sierpinski island or T fractal). The transmission properties of the proposed periodic arrays are investigated through simulations performed with the commercial software packages Ansoft Designer and Ansoft HFSS, which run full-wave methods. To validate the employed methodology, FSS prototypes are selected for fabrication and measurement. The obtained results point to interesting features for FSS spatial filters: compactness, with high values of frequency compression factor, as well as stable frequency responses under oblique incidence of plane waves. This thesis also approaches, as its main focus, the application of an alternative electromagnetic (EM) optimization technique for the analysis and synthesis of FSSs with fractal motifs. In application examples of this technique, Vicsek and Sierpinski pre-fractal elements are used in the optimal design of FSS structures. Based on computational intelligence tools, the proposed technique overcomes the high computational cost associated with full-wave parametric analyses. To this end, fast and accurate multilayer perceptron (MLP) neural network models are developed using different parameters as design input variables. These neural network models calculate the cost function in the iterations of population-based search algorithms. Continuous genetic algorithm (GA), particle swarm optimization (PSO), and the bees algorithm (BA) are used for the optimization of FSSs with specific resonant frequency and bandwidth. The performance of these algorithms is compared in terms of computational cost and numerical convergence. Consistent results are verified by the excellent agreement obtained between simulations and measurements for FSS prototypes built with a given fractal iteration.
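The surrogate-in-the-loop pattern described here is: train an MLP on a modest set of full-wave samples, then let the search algorithm query the MLP instead of the solver. A minimal sketch under stated assumptions, using scikit-learn's MLPRegressor and a synthetic stand-in for the full-wave resonant-frequency response (the geometry names, target value, and the random search standing in for GA/PSO/BA are all illustrative):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in for the full-wave solver: resonant frequency (GHz) as a function
# of patch length L (mm) and fractal iteration factor d. Purely illustrative.
def fullwave_resonance(L, d):
    return 30.0 / (L * (0.8 + 0.4 * d))

# 1) Sample the "solver" on a small design set and train the MLP surrogate.
X = rng.uniform([8.0, 0.3], [20.0, 0.9], size=(200, 2))
y = fullwave_resonance(X[:, 0], X[:, 1])
surrogate = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                         random_state=0).fit(X, y)

# 2) Use the surrogate as the cost function of a population-based search.
#    A plain random search stands in for the GA/PSO/BA of the thesis.
target_ghz = 2.4
candidates = rng.uniform([8.0, 0.3], [20.0, 0.9], size=(5000, 2))
cost = np.abs(surrogate.predict(candidates) - target_ghz)
best = candidates[np.argmin(cost)]

print(f"best design L = {best[0]:.2f} mm, d = {best[1]:.2f}, "
      f"surrogate f = {surrogate.predict(best.reshape(1, -1))[0]:.3f} GHz")
```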
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This work presents a study on the generation of digital masks for edge detection with previously known directions. This solution is important when the edge direction is available either from a direction histogram or from a prediction based on camera and object models. A modification of the non-maximum suppression thinning method is also presented, enabling the comparison of local maxima for arbitrary edge directions. Results with a synthetic image and with crops of CBERS satellite images are presented, including an example of application to road detection in which the directions are known in advance.
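A directional edge response and direction-aware non-maximum suppression can be sketched with plain NumPy: project the image gradient onto a known direction, then keep only the pixels that are local maxima along that direction. This is a generic illustration, not the masks proposed in the paper; the image and the chosen angle are hypothetical:

```python
import numpy as np

def directional_edges(img, theta_deg):
    """Edge strength along a known direction, with non-maximum suppression."""
    theta = np.deg2rad(theta_deg)
    # Simple central-difference gradients (stand-ins for the paper's masks).
    gy, gx = np.gradient(img.astype(float))
    # Project the gradient onto the known edge-normal direction.
    g = gx * np.cos(theta) + gy * np.sin(theta)
    strength = np.abs(g)

    # Non-maximum suppression: compare each pixel with its two neighbours
    # along the chosen direction (rounded to the nearest pixel offset).
    dy = int(round(np.sin(theta)))
    dx = int(round(np.cos(theta)))
    fwd = np.roll(strength, (-dy, -dx), axis=(0, 1))
    bwd = np.roll(strength, (dy, dx), axis=(0, 1))
    keep = (strength >= fwd) & (strength >= bwd)
    return strength * keep

# Toy image: a vertical step edge, so the edge normal is horizontal (0 deg).
img = np.zeros((8, 8))
img[:, 4:] = 1.0
print(directional_edges(img, 0).round(2))
```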
Abstract:
The study area is located in the NW portion of Ceará state, near the city of Santana do Acaraú. Geologically it lies along the Sobral-Pedro II lineament, which separates the Ceará Central and Noroeste do Ceará domains, both belonging to the Borborema Province. The object of study was a NE-trending, 30 km long siliciclastic body (sandstone and conglomerate) bounded by dextral transcurrent faults. The sediments are correlated with the Ipú Formation (Serra Grande Group) of the Parnaíba basin, whose age is thought to be Siluro-Devonian. Existing structural data show that bedding has higher but variable dips (70-45 degrees) near the border faults and much lower to subhorizontal dips toward the interior of the body. The brittle deformation has been related to a reactivation, at a lower crustal level, of the Sobral-Pedro II lineament (Destro, 1987, 1999; Galvão, 2002). The study presented here focused on applying geophysical methods (gravimetry and seismics) to determine the geometry of the sandstone/conglomerate body and, together with the structural data, to propose a model to explain its deformation. The residual anomaly maps indicate the presence of two main graben-like structures. The thickness of the sedimentary pile was estimated from 2D gravimetric models to be about 500-600 meters. The 3D gravimetric model highlighted the two maximum thickness regions, where a good correlation is observed between the isopach geometry and the centripetal strike/dip pattern displayed by the sediment bedding. Two main directions (N-S and E-W) of block movement are interpreted from the distribution pattern of the maximum thickness regions of the sedimentary rocks.
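A first-order thickness estimate of the kind produced by 2D gravimetric modelling can be illustrated with the infinite Bouguer slab approximation, h = Δg / (2πGΔρ). A minimal sketch with hypothetical values for the residual anomaly and the density contrast (neither is taken from the study):

```python
import math

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2

def slab_thickness(delta_g_mgal, delta_rho):
    """Infinite Bouguer slab: thickness (m) from a residual anomaly.

    delta_g_mgal: residual gravity anomaly magnitude in mGal
    delta_rho:    density contrast magnitude in kg/m^3
    """
    delta_g = delta_g_mgal * 1e-5          # 1 mGal = 1e-5 m/s^2
    return delta_g / (2.0 * math.pi * G * delta_rho)

# Hypothetical values: a 7.5 mGal residual low over sediments that are
# 300 kg/m^3 less dense than the crystalline basement.
print(f"{slab_thickness(7.5, 300.0):.0f} m")   # roughly 600 m
```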
Abstract:
The Baixa Grande Fault is located on the S-SW edge of the Potiguar Rift. It bounds the southern part of the Umbuzeiro Graben and the Apodi Graben. Although a number of studies have associated the complex deformation styles in the hanging wall of the Baixa Grande Fault with variations in geometry and displacement, none have applied modern computational techniques such as geometric and kinematic validation to address this problem. This work proposes a geometric analysis of the Baixa Grande Fault using seismic interpretation. The interpretation was made on 3D seismic data over the Baixa Grande Fault using the software OpendTect (dGB Earth Sciences). Direct structural modelling was also used: analog direct modelling known as Folding Vectors, and 2D and 3D direct computational modelling. The Folding Vectors modelling showed great similarity with the conventional structural seismic interpretation of the Baixa Grande Fault, and the conventional interpretation was thus validated geometrically. The 2D direct computational modelling was carried out on selected sections of the 3D data using the software Move (Midland Valley Ltd) and its horizon modelling tool. The modelling confirms the influence of fault geometry on the hanging wall: the ramp-flat-ramp geometry of the Baixa Grande Fault generates synforms over the concave segments of the fault and antiforms over the convex segments. In fault regions with no change in segment angle the beds are displaced without deformation, while rollover occurs over listric segments. In the direct 3D computational modelling, structural attributes were obtained as horizons in the hanging wall of the main fault after simulating several levels of deformation along the fault. The occurrence of structures indicating shortening in this modelling also indicates that the antiforms above the Baixa Grande Fault were influenced by fault geometry.
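How a ramp-flat-ramp fault folds its hanging wall can be illustrated with the textbook vertical-shear (Chevron) forward construction: each hanging-wall column drops by the difference in fault depth between its new and original positions, so folds appear only where the fault changes dip. This is a generic sketch, not the algorithm implemented in Move, and all geometry values are hypothetical:

```python
import numpy as np

# Vertical-shear (Chevron) forward model of hanging-wall deformation over a
# ramp-flat-ramp normal fault. Illustrative only; depths and angles are made up.
x = np.linspace(0.0, 10_000.0, 201)            # horizontal distance (m)

def fault_depth(x):
    """Ramp-flat-ramp fault: 30 deg ramp, flat, 30 deg ramp (depth in m)."""
    ramp = np.tan(np.deg2rad(30.0))
    return np.where(x < 3_000.0, 1_000.0 + ramp * x,
           np.where(x < 6_000.0, 1_000.0 + ramp * 3_000.0,
                    1_000.0 + ramp * 3_000.0 + ramp * (x - 6_000.0)))

heave = 800.0                                   # horizontal displacement (m)
horizon0 = np.full_like(x, 500.0)               # flat pre-faulting horizon depth

# Each hanging-wall column drops by the difference in fault depth between
# its new position and its original position (vertical simple shear).
drop = fault_depth(x) - fault_depth(x - heave)
horizon = horizon0 + drop                        # deformed hanging-wall horizon

# Folds appear where the drop changes across fault bends; elsewhere beds are
# translated without internal deformation.
print(f"maximum hanging-wall subsidence: {drop.max():.1f} m")
print(f"deformed horizon depth range: {horizon.min():.0f}-{horizon.max():.0f} m")
```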
Abstract:
This study aimed to develop a plate to treat fractures of the mandibular body in dogs and to validate the design using finite element analysis and biomechanical tests. Mandible prototypes were produced with 10 oblique ventrorostral fractures (favorable) and 10 oblique ventrocaudal fractures (unfavorable). Three groups were established for each fracture type. Osteosynthesis was performed with a pure titanium plate of double-arch geometry and locked monocortical screws of free angulation. The mechanical resistance of the prototype with the unfavorable fracture was lower than that of the favorable fracture. For both fractures, the deflection increased and the relative stiffness decreased in proportion to the decreasing number of screws. The finite element analysis validated the plate design, since the maximum stress concentration observed on the plate was lower than the limit stress admitted for the titanium. In conclusion, the double-arch geometry plate fixed with locked monocortical screws has sufficient resistance to stabilize oblique fractures without compromising mandibular dental or neurovascular structures. J Vet Dent 24(7): 212-221, 2010.
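The finite element validation criterion stated above reduces to checking that the peak stress predicted on the plate stays below the stress limit of the material, i.e. a safety factor greater than one. A minimal sketch of that check with hypothetical numbers (the stress value and the titanium limit below are placeholders, not the study's results):

```python
# Hypothetical FEA result and material limit, for illustration only.
max_plate_stress_mpa = 180.0      # peak stress from the FE model
titanium_limit_mpa = 275.0        # assumed stress limit for the plate material

safety_factor = titanium_limit_mpa / max_plate_stress_mpa
print(f"safety factor = {safety_factor:.2f} "
      f"({'acceptable' if safety_factor > 1.0 else 'overstressed'})")
```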
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)