926 results for Complex engineering problems


Relevance:

80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)


The induced polarization (IP) method, established as a tool for prospecting metallic sulfides and, indirectly, gold in polymetallic associations, has been finding growing use in other applications such as groundwater prospecting and investigations of interest to civil engineering and land use. This work first gives a general overview of the IP method: its applications, working methodology, field arrays, theoretical foundations, advantages and disadvantages, and so on. The main objective, however, is analog modeling in the laboratory, in order to simulate the conditions found in nature. To this end, simple geometric models (cylinders and plates) were built from cement, quartz sand and graphite. Since graphite exhibits an intense IP phenomenon, the graphite content of the models was varied; thus, for the different shapes and sizes of the models, the content of polarizable particles also varied. The measurements were made to verify the behavior of the IP response with respect to the shape, graphite content, depth and orientation of the model studied.


Determining the land-use capability of a watershed is very important for land-use planning, since inadequate, unplanned use of the land leads to low crop productivity. This work aimed to define homogeneous land-use capability classes for the Ribeirão Água Fria watershed in Bofete (SP), Brazil, to support the planning of soil conservation practices in this area. The watershed lies between latitudes 22° 58' 30'' and 23° 04' 30'' S and longitudes 48° 09' 30'' and 48° 18' 30'' W Gr., with an area of 9,180.12 hectares. The land-use capability map of the watershed was produced from the slope map obtained by Santos et al. (1999), the pedological map of the State of São Paulo (Oliveira et al., 1999), the judgment table of land-use capability classes (França, 1963), and the recommendations in the manual for utilitarian surveying of the physical environment and land classification in the use-capability system (Lepsch et al., 1983). The discrimination, mapping and quantification of the areas of the capability classes and subclasses with the IDRISI Geographic Information System yielded the following values: IIIe,s, 517.020 ha (5.63%); IIIs, 863.150 ha (9.40%); IVe, 846.730 ha (9.23%); VIe, 871.110 ha (9.49%); and VIIe, 6,082.115 ha (66.25%). The results showed that roughly two thirds of the watershed consists of subclass VIIe, that is, land suitable for moderate-use pasture or forest, since it presents complex erosion problems because of its slope. Through its modules, the IDRISI Geographic Information System made it possible to discriminate, map and quantify the areas of the land-use capability classes and subclasses of the watershed quickly and reliably.


Graduate Program in Electrical Engineering - FEIS


Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)


Artificial neural networks (ANNs) have been widely applied to the resolution of complex biological problems. An important feature of neural models is that their implementation is not precluded by the theoretical distribution shape of the data used. ANNs are frequently deemed to perform significantly better than linear or non-linear regression-based statistical methods, provided suitable sample sizes are available, especially in multidimensional and non-linear processes. The current work utilised three well-known neural network methods to evaluate whether these models could provide more accurate outcomes than a conventional regression method in predicting the pupal weight of Chrysomya megacephala, a species of blowfly (Diptera: Calliphoridae), using larval density (i.e. the initial number of larvae), the amount of available food and pupal size as input data. The neural networks yielded more accurate predictions than the statistical model (multiple regression). Among the three types of network utilised (multi-layer perceptron, radial basis function and generalised regression neural network), no considerable differences were detected. The superiority of these neural models over a classical statistical method is an important result, because more accurate models may clarify several intricate aspects of the nutritional ecology of blowflies.
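The contrast between a multiple regression and a small neural network on a nonlinear relationship can be sketched on synthetic data; the two input variables, the interaction target, the network size and the training hyperparameters below are illustrative assumptions, not the study's actual models or data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))     # two hypothetical input variables
y = X[:, 0] * X[:, 1]                     # nonlinear interaction a linear model cannot capture

# --- multiple linear regression via least squares ---
A = np.hstack([X, np.ones((200, 1))])     # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
lin_mse = np.mean((A @ coef - y) ** 2)

# --- tiny one-hidden-layer MLP trained with full-batch gradient descent ---
H = 8                                     # hidden units (arbitrary choice)
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.2
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    err = (h @ W2 + b2).ravel() - y       # prediction error
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)       # backprop through tanh
    W2 -= lr * (h.T @ err[:, None]) / len(y); b2 -= lr * err.mean()
    W1 -= lr * (X.T @ dh) / len(y);           b1 -= lr * dh.mean(axis=0)

mlp_mse = np.mean(((np.tanh(X @ W1 + b1) @ W2 + b2).ravel() - y) ** 2)
print(lin_mse, mlp_mse)                   # the MLP fits the interaction, the linear model cannot
```

The linear model's error stays near the variance of the target, since the interaction term is uncorrelated with either input alone, while the network can approximate the product surface.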


Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.
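A minimal sketch of the hierarchical idea is the two-level normal-normal model, where group means are themselves draws from a population distribution and partial pooling shrinks noisy group estimates toward the grand mean. The hyperparameters are assumed known here, a simplification the article does not make:

```python
import numpy as np

rng = np.random.default_rng(7)

# two-level model: theta_j ~ N(mu, tau^2) group effects,
# y_ij ~ N(theta_j, sigma^2) noisy observations within group j
mu, tau, sigma, J, n = 0.0, 0.5, 2.0, 20, 5
theta = rng.normal(mu, tau, J)                    # true (unobserved) group effects
y = rng.normal(theta[:, None], sigma, (J, n))     # n observations per group

ybar = y.mean(axis=1)                             # no pooling: raw group means
grand = ybar.mean()                               # complete pooling: grand mean
w = tau**2 / (tau**2 + sigma**2 / n)              # shrinkage weight from the hierarchy
shrunk = w * ybar + (1 - w) * grand               # partial pooling (hierarchical estimate)

raw_err = np.mean((ybar - theta) ** 2)
shrunk_err = np.mean((shrunk - theta) ** 2)
print(raw_err, shrunk_err)                        # shrinkage reduces estimation error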


The aim of this Doctoral Thesis is to develop genetic-algorithm-based optimization methods to find the best conceptual design architecture of an aero piston engine for given design specifications. Nowadays, the conceptual design of turbine airplanes starts from the aircraft specifications; then the turbofan or turboprop best suited to the specific application is chosen. In the field of aeronautical piston engines, which lay dormant for several decades as interest shifted towards turbine aircraft, new materials with improved performance and properties have opened new possibilities for development. Moreover, the engine's modularity, given by the cylinder unit, makes it possible to design a specific engine for a given application. In many real engineering problems the number of design variables can be very high, with several non-linearities needed to describe the behaviour of the phenomena. In this case the objective function has many local extrema, but the designer is usually interested in the global one. Stochastic and evolutionary optimization techniques, such as genetic algorithms, can offer reliable solutions to such design problems within acceptable computational time. The optimization algorithm developed here can be employed in the first phase of the preliminary design of an aeronautical piston engine. It is a mono-objective genetic algorithm which, starting from the given design specifications, finds the engine propulsive-system configuration of minimum mass that satisfies the geometrical, structural and performance constraints. The algorithm reads the project specifications as input data, namely the maximum crankshaft and propeller-shaft speeds and the maximum pressure in the combustion chamber. The bounds of the design variables, which describe the solution domain from the geometrical point of view, are also introduced.
In the Matlab® Optimization environment, the objective function to be minimized is defined as the sum of the masses of the engine's propulsive components. Each individual generated by the genetic algorithm is the assembly of the flywheel, the vibration damper, and as many pistons, connecting rods and cranks as there are cylinders. The fitness is evaluated for each individual of the population, and then the genetic operators are applied: reproduction, mutation, selection and crossover. In the reproduction step an elitist method is applied in order to protect the fittest individuals from disruption by mutation and recombination, letting them survive undamaged into the next generation. Finally, once the best individual is found, the optimal dimensions of the components are saved to an Excel® file, which is used to build an automatic 3D CAD model of each component of the propulsive system, giving a direct preview of the final product while still in the engine's preliminary design phase. To demonstrate the performance of the algorithm and validate the optimization method, an actual engine is taken as a case study: the 1900 JTD Fiat Avio, a four-cylinder, four-stroke Diesel engine. Many checks are performed on the mechanical components of the engine, in order to test their feasibility and to decide their survival through the generations. A system of inequalities describes the non-linear relations between the design variables and is used to check the components under static and dynamic load configurations. The geometrical bounds of the design variables are taken from data on actual engines and from similar design cases. Among the many simulations run to test the algorithm, twelve have been chosen as representative of the distribution of the individuals. Then, as an example, the corresponding 3D models of the crankshaft and the connecting rod have been built automatically for each simulation.
In spite of morphological differences among the components, the mass is almost the same. The results show a significant mass reduction (almost 20% for the crankshaft) compared with the original configuration, and the method proves acceptably robust. The algorithm developed here is thus a valid method for the preliminary design optimization of an aeronautical piston engine. In particular, the procedure is able to analyze quite a wide range of design solutions, rejecting those that cannot fulfil the feasibility specifications. This optimization algorithm could accelerate aeronautical piston engine development, speeding up the production rate and joining modern computational performance and technological awareness to long-standing traditional design experience.
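The structure of such a mono-objective genetic algorithm with elitism and constraint handling can be sketched on a toy problem; the two-variable "mass" objective, the bounds and the penalised constraint below are illustrative stand-ins, not the thesis's actual engine model or Matlab® implementation:

```python
import random

random.seed(1)

BOUNDS = [(0.1, 5.0), (0.1, 5.0)]         # hypothetical design-variable bounds

def mass(x):                              # toy objective: total mass of two parts
    return x[0] + x[1]

def fitness(x):                           # mass plus a penalty for violating x0*x1 >= 2
    return mass(x) + 1000.0 * max(0.0, 2.0 - x[0] * x[1])

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(60)]
for _ in range(200):
    pop.sort(key=fitness)
    children = pop[:2]                    # elitism: the two fittest survive unchanged
    while len(children) < 60:
        p1 = min(random.sample(pop, 3), key=fitness)      # tournament selection
        p2 = min(random.sample(pop, 3), key=fitness)
        a = random.random()                               # blend crossover
        child = [a * u + (1 - a) * v for u, v in zip(p1, p2)]
        child = [clamp(g + random.gauss(0, 0.1), lo, hi)  # Gaussian mutation
                 for g, (lo, hi) in zip(child, BOUNDS)]
        children.append(child)
    pop = children

best = min(pop, key=fitness)
print(best, mass(best))                   # the optimum of this toy problem is x0 = x1 = 2**0.5
```

Infeasible individuals are not discarded outright; the penalty term makes them uncompetitive, which mirrors the thesis's use of feasibility checks to decide survival through the generations.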


The first part of this work describes the application of MALDI-TOF mass spectrometry to approximately monodisperse and narrowly distributed compounds of polymeric character. For this purpose, less common compounds for whose analysis no standard methodology exists were selected: polyphenylene dendrimers, polybutadiene star polymers, propionylethylenimines and polyethylmethylsiloxanes. It is shown that other analytical methods (such as GPC or NMR) are often clearly inferior to MALDI-TOF mass spectrometry; in some cases their results are even refuted. In other cases, however, these methods can support or complement mass spectrometry. In the following part, broad distributions are simulated by mixing narrowly distributed polymer standards, in order to determine the reasons for the limited applicability of MALDI-TOF mass spectrometry to broad distributions. The problem of measuring broad distributions is often addressed by prior fractionation with GPC; this procedure, too, is examined and evaluated here using simulated broad distributions. In addition to qualitative investigations with MALDI-TOF mass spectrometry, this work assesses the extent to which the method can also be used for quantitative analysis. For this purpose, investigations of oligo(para-phenylene)s (OPPs) and of the cyclodehydrogenation products of very large dendrimers (C385, C474) are described. It turns out that, in addition to direct UV absorption (in the case of the OPPs), the increasing ionization probability and the decreasing desorption probability with progressing cyclodehydrogenation make quantification problematic.

A final chapter describes the attempt to generate fullerenes from polycyclic aromatic hydrocarbons (PAHs) by bombarding them with the UV beam of the MALDI-TOF mass spectrometer. The use of NaCl as a so-called inert matrix yields promising first results.


Combinatorial Optimization is becoming ever more crucial these days. From the natural sciences to economics, passing through the administration of urban centers and personnel management, methodologies and algorithms with a strong theoretical background and consolidated real-world effectiveness are more and more in demand, in order to find good solutions to complex strategic problems quickly. Resource optimization is nowadays fundamental ground for laying the foundations of successful projects. From the theoretical point of view, Combinatorial Optimization rests on stable and strong foundations that allow researchers to face ever more challenging problems. From the application point of view, however, the rate of theoretical development cannot keep up with that enjoyed by modern hardware technologies, especially in the processor industry. In this work we propose new parallel algorithms, designed to exploit the new parallel architectures available on the market. We found that by exposing the inherent parallelism of some resolution techniques (such as Dynamic Programming), the computational benefits are remarkable, lowering execution times by more than an order of magnitude and allowing us to address instances of previously intractable size. We approached four notable Combinatorial Optimization problems: the Packing Problem, the Vehicle Routing Problem, the Single Source Shortest Path Problem and a Network Design problem. For each of these problems we propose a collection of effective parallel solution algorithms, either solving the full problem (Guillotine Cuts and SSSPP) or enhancing a fundamental part of the solution method (VRP and ND). We support our claims with computational results for all problems, either on standard benchmarks from the literature or, when possible, on data from real-world applications, where speed-ups of one order of magnitude are usually attained, not uncommonly scaling up to factors of 40×.
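As a toy illustration of the kind of parallelism exploited (not the thesis's implementation), a Jacobi-style Bellman-Ford round for the Single Source Shortest Path Problem computes every vertex's new distance from the previous round's distances only, so the per-vertex work within a round is independent and can be mapped over a worker pool; the graph below is a made-up example:

```python
from concurrent.futures import ThreadPoolExecutor

INF = float("inf")
# toy directed graph: in_edges[v] lists the incoming edges (u, weight) of v
in_edges = {
    0: [],
    1: [(0, 4), (2, 1)],
    2: [(0, 1)],
    3: [(1, 1), (2, 7)],
}
n = len(in_edges)

def relax(v, dist):
    """New distance of v computed from the previous round -- independent per vertex."""
    best = dist[v]
    for u, w in in_edges[v]:
        if dist[u] + w < best:
            best = dist[u] + w
    return best

dist = [INF] * n
dist[0] = 0                                      # source vertex
with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(n - 1):                       # Bellman-Ford rounds, sequential
        dist = list(pool.map(relax, range(n), [dist] * n))

print(dist)                                      # -> [0, 2, 1, 3]
```

The rounds themselves stay sequential (each depends on the previous one); the speed-up comes from distributing the independent relaxations within a round, which is the general pattern for exposing parallelism inside Dynamic Programming stages.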


Some patients at university hospitals no longer need frequent medical treatment but do need complex professional nursing care. At the University Hospital (Inselspital) Bern, a Nursing Unit (NU) with six beds was run as a pilot project based on experiences with British Nursing Development Units. The care concept was developed specifically for the unit and was based on a definition of professional nursing, an evidence-based practice approach, resource-oriented self-management, and caring. Primary nursing was used, with the primary nurse responsible for coordinating and steering patient care. The project was evaluated prospectively. During the pilot phase, 37 patients were cared for on the NU. On average, 85% of the beds were occupied; patients were hospitalized for 21.5 days and had a mean age of 68.9 years. They were older than the University Hospital's average patient, and their cases were more complex than its average case. The nurses' experiences were mainly positive: their enhanced responsibility and the structured care process were seen as a challenge that allowed them to enlarge their abilities. With this project, the University Hospital built up innovative services for patients with complex nursing problems. The project showed that well-trained nurses can take on more responsibility for this patient group than in conventional care models.


Heuristic optimization algorithms are of great importance for solving various real-world problems. These algorithms have a wide range of applications, such as cost reduction, artificial intelligence, and medicine. By "cost" we mean, for instance, the value of a function of several independent variables. Often, when dealing with engineering problems, we want to minimize the value of a function in order to achieve an optimum, or to maximize another parameter which increases as the cost (the value of this function) decreases. Heuristic cost-reduction algorithms work by finding the values of the independent variables for which the value of the function (the "cost") is minimal. There is an abundance of heuristic cost-reduction algorithms to choose from. We start with a discussion of various optimization algorithms, such as memetic algorithms, force-directed placement, and evolution-based algorithms. Following this initial discussion, we take up the workings of three algorithms and implement them in MATLAB. The focus of this report is to provide detailed information on the workings of three different heuristic optimization algorithms, and to conclude with a comparative study of their performance when implemented in MATLAB. The three algorithms considered are the non-adaptive simulated annealing algorithm, the adaptive simulated annealing algorithm, and the random-restart hill-climbing algorithm. The algorithms are heuristic in nature; that is, the solution they reach may not be the best of all solutions, but they provide a means to reach a reasonably good solution quickly, without taking an indefinite amount of time.
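A minimal sketch of non-adaptive simulated annealing (in Python rather than the report's MATLAB) on a hypothetical one-dimensional multimodal cost function; the function, the geometric cooling schedule and all parameters are illustrative assumptions:

```python
import math
import random

random.seed(42)

def cost(x):
    # multimodal test function: global minimum cost(0) = 0, local minima near x = ±1, ±2, ...
    return x * x - 10 * math.cos(2 * math.pi * x) + 10

x = random.uniform(-5, 5)                 # random starting point
best_x, best_c = x, cost(x)
T = 5.0                                   # initial temperature
for _ in range(20000):
    cand = x + random.gauss(0, 0.5)       # random neighbour of the current point
    d = cost(cand) - cost(x)
    # accept downhill moves always, uphill moves with probability e^(-d/T)
    if d < 0 or random.random() < math.exp(-d / T):
        x = cand
        if cost(x) < best_c:              # track the best point ever visited
            best_x, best_c = x, cost(x)
    T *= 0.9995                           # non-adaptive geometric cooling schedule
print(best_x, best_c)
```

Early on, the high temperature lets the walk cross the barriers between local minima; as T decays, the algorithm degenerates into hill climbing within whichever basin it occupies, which is why the best-so-far point is recorded separately.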


This paper presents our research work in the domain of Collaborative Environments centred on Problem Based Learning (PBL) and taking advantage of existing Electronic Documents. We first present the modelling and engineering problems that we want to address; we then discuss the technological issues of such research, particularly the use of OpenUSS and of the Enterprise Java Open Source Architecture (EJOSA) to implement such collaborative PBL environments.


CONTEXT The necessity of specific intervention components for the successful treatment of patients with posttraumatic stress disorder is the subject of controversy. OBJECTIVE To investigate the complexity of clinical problems as a moderator of the relative effects of specific versus nonspecific psychological interventions. METHODS We included 18 randomized controlled trials directly comparing specific and nonspecific psychological interventions. We conducted moderator analyses with the complexity of clinical problems as the predictor. RESULTS Our results confirmed the moderate superiority of specific over nonspecific psychological interventions; however, the superiority was small in studies with complex clinical problems and large in studies with noncomplex clinical problems. CONCLUSIONS For patients with complex clinical problems, our results suggest that particular nonspecific psychological interventions may be offered as an alternative to specific psychological interventions. In contrast, for patients with noncomplex clinical problems, specific psychological interventions are the best treatment option.