927 results for "feasible path"
Abstract:
The response of the Gulf Stream (GS) system to atmospheric forcing is generally linked either to the basin-scale winds on the subtropical gyre or to the buoyancy forcing from the Labrador Sea. This study presents a multiscale synergistic perspective to describe the low-frequency response of the GS system. The authors identify dominant temporal variability in the North Atlantic Oscillation (NAO), in known indices of the GS path, and in the observed GS latitudes along its path derived from sea surface height (SSH) contours over the period 1993-2013. The analysis suggests that the signature of interannual variability changes along the stream's path from 75 degrees to 55 degrees W. From its separation at Cape Hatteras to the west of 65 degrees W, the variability of the GS is mainly in the near-decadal (7-10 years) band; this band is missing to the east of 60 degrees W, where an interannual (4-5 years) band peaks instead, a peak that is in turn absent to the west of 65 degrees W. The region between 65 degrees and 60 degrees W appears to be a transition region. A 2-3-year secondary peak is pervasive in all time series, including that for the NAO. This multiscale response of the GS system is supported by results from a basin-scale North Atlantic model. The near-decadal response can be attributed to similar forcing periods in the NAO signal; however, the interannual variability of 4-5 years in the eastern segment of the GS path is as yet unexplained. More numerical and observational studies are warranted to understand such causality.
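A band structure of this kind can be located with a basic spectral estimate. The following Python sketch is purely illustrative: the monthly GS latitude series is synthetic stand-in data, and the paper's actual spectral methodology is not reproduced here.

```python
# Hedged sketch: locating interannual spectral peaks in a Gulf Stream
# latitude index. `gs_lat` is hypothetical monthly data (1993-2013,
# ~252 samples), not the observed SSH-derived series.
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
t = np.arange(252) / 12.0                      # time in years, monthly sampling
# Synthetic stand-in: near-decadal (8 yr) + interannual (4.5 yr) + noise
gs_lat = (0.5 * np.sin(2 * np.pi * t / 8.0)
          + 0.3 * np.sin(2 * np.pi * t / 4.5)
          + 0.2 * rng.standard_normal(t.size))

# Welch periodogram with frequency in cycles per year (fs = 12 samples/yr)
freqs, psd = signal.welch(gs_lat, fs=12.0, nperseg=128)
for p in signal.find_peaks(psd)[0]:
    print(f"peak period ~ {1.0 / freqs[p]:.1f} yr")
```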
Abstract:
This research paper presents a five-step algorithm to generate tool paths for machining Free form / Irregular Contoured Surface(s) (FICS) by adopting the STEP-NC (AP-238) format. In the first step, a parametrized CAD model with FICS is created or imported in the UG-NX6.0 CAD package. The second step recognizes the features and calculates a Closeness Index (CI) by comparing them with B-Spline / Bezier surfaces. The third step utilizes the CI and extracts the necessary data to formulate the blending functions for the identified features. In the fourth step, Z-level 5-axis tool paths are generated using flat and ball end mill cutters. Finally, in the fifth step, the tool paths are integrated with the STEP-NC format and validated. All these steps are discussed and explained through a validated industrial component.
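For orientation, here is a minimal runnable sketch of the five-step flow in Python. Every function body is a trivial placeholder invented for illustration; the actual UG-NX6.0 feature recognition and STEP-NC integration are not public APIs.

```python
# Hedged skeleton of the five-step flow; all names and return values are
# illustrative placeholders, not the authors' implementation.
def import_cad_model(path):        return {"file": path, "features": ["FICS_1", "FICS_2"]}
def recognize_features(model):     return model["features"]
def closeness_index(features):     return {f: 0.92 for f in features}       # vs. B-Spline/Bezier
def blending_functions(ci):        return {f: ("blend", v) for f, v in ci.items()}
def z_level_paths(blend, cutters): return [(f, c) for f in blend for c in cutters]
def integrate_step_nc(paths):      return {"format": "AP-238", "toolpaths": paths}

model = import_cad_model("part.prt")                            # step 1: CAD model with FICS
ci = closeness_index(recognize_features(model))                 # step 2: features + Closeness Index
blend = blending_functions(ci)                                  # step 3: blending functions
paths = z_level_paths(blend, cutters=("flat_end", "ball_end"))  # step 4: Z-level 5-axis paths
program = integrate_step_nc(paths)                              # step 5: STEP-NC (AP-238) program
print(program["format"], len(program["toolpaths"]))
```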
Abstract:
This research paper presents work on feature recognition, tool path data generation and integration with STEP-NC (AP-238 format) for features having Free form / Irregular Contoured Surface(s) (FICS). Initially, the FICS features are modelled / imported in the UG CAD package and a closeness index is generated. This is done by comparing the FICS features with basic B-Spline / Bezier curves / surfaces. Blending functions are then calculated by adopting the convolution theorem. Based on the blending functions, contour offset tool paths are generated and simulated for a 5-axis milling environment. Finally, the tool path (CL) data is integrated with the STEP-NC (AP-238) format. The tool path algorithm and STEP-NC data are tested with various industrial parts through an automated UFUNC plugin.
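The convolution-theorem step can be illustrated in a few lines of NumPy. This is a hedged sketch with invented one-dimensional profiles standing in for the FICS feature and the B-Spline basis; the paper works with surfaces, and its exact blending formulation may differ.

```python
# Hedged illustration of forming a blending function via the convolution
# theorem; both profiles below are invented stand-ins.
import numpy as np

n = 256
x = np.linspace(0.0, 1.0, n)
basis = np.clip(1.0 - np.abs(4.0 * (x - 0.5)), 0.0, None) ** 2   # B-spline-like bump
feature = np.exp(-((x - 0.4) / 0.1) ** 2)                        # sampled feature section

# Convolution theorem (circular form): conv(f, g) = IFFT(FFT(f) * FFT(g))
blend = np.fft.ifft(np.fft.fft(feature) * np.fft.fft(basis)).real
blend /= blend.max()        # normalize blending weights to [0, 1]
print(blend[:5])
```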
Abstract:
This study investigates topology optimization of energy-absorbing structures in which material damage is accounted for in the optimization process. The optimization objective is to design the lightest structures that are able to absorb the required mechanical energy. A structural continuity constraint check is introduced that detects when no feasible load path remains in the finite element model, usually as a result of large-scale fracture. This ensures that designs do not fail when loaded under the conditions prescribed in the design requirements. The continuity constraint check is automated and requires no intervention from the analyst once the optimization process is initiated. Consequently, the optimization algorithm proceeds towards evolving an energy-absorbing structure with the minimum structural mass that is not susceptible to global structural failure. A method is also introduced to determine when the optimization process should halt. It identifies when the optimization has plateaued and is no longer likely to yield improved designs if continued for further iterations. This gives the designer a rational way to determine how long to run the optimization and avoid wasting computational resources on unnecessary iterations. A case study is presented to demonstrate the use of this method.
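One way to realize such a continuity check is as a graph-reachability test over the surviving elements. The sketch below is an illustrative simplification (the mesh, failure flags and function name are invented), not the paper's implementation.

```python
# Hedged sketch of a load-path continuity check: treat surviving finite
# elements as graph edges and test whether any path still connects the
# loaded nodes to the supports.
from collections import deque

def load_path_exists(elements, failed, load_nodes, support_nodes):
    """elements: list of (node_a, node_b) pairs; failed: set of element ids."""
    adj = {}
    for eid, (a, b) in enumerate(elements):
        if eid in failed:
            continue    # fractured elements carry no load
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen, queue = set(load_nodes), deque(load_nodes)
    while queue:
        node = queue.popleft()
        if node in support_nodes:
            return True          # a feasible load path remains
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False                 # large-scale fracture severed all paths

# Tiny example: a 4-node chain with the middle element failed
print(load_path_exists([(0, 1), (1, 2), (2, 3)], failed={1},
                       load_nodes=[0], support_nodes={3}))   # -> False
```

Treating elements as edges keeps the check linear in the number of nodes and elements, so running it automatically at every optimization iteration stays cheap.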
Abstract:
Otto-von-Guericke-Universität Magdeburg, Faculty of Mechanical Engineering, dissertation, 2016
Abstract:
The power of computer game technology is currently being harnessed to produce “serious games”. These “games” are targeted at the education and training marketplace, and employ various key game-engine components such as the graphics and physics engines to produce realistic “digital-world” simulations of the real “physical world”. Many approaches are driven by the technology and often lack a consideration of a firm pedagogical underpinning. The authors believe that an analysis and deployment of both the technological and pedagogical dimensions should occur together, with the pedagogical dimension providing the lead. This chapter explores the relationship between these two dimensions and how “pedagogy may inform the use of technology”, that is, how various learning theories may be mapped onto the affordances of computer game engines. Autonomous and collaborative learning approaches are discussed. The design of a serious game is broken down into spatial and temporal elements. The spatial dimension is related to theories of knowledge structures, especially “concept maps”. The temporal dimension is related to “experiential learning”, especially the approach of Kolb. The multi-player aspect of serious games is related to theories of “collaborative learning”, which is broken down into a discussion of “discourse” versus “dialogue”. Several general guiding principles are explored, such as the use of “metaphor” (including metaphors of space, embodiment, systems thinking, the internet and emergence). The topological design of a serious game is also highlighted. The discussion of pedagogy is related to various serious games we have recently produced and researched, and is presented in the hope of informing the “serious game community”.
Abstract:
The work of cataloging and digitizing the Historical Archive of the Prelature of Humahuaca presents us with a documentary mass that has gone almost unused for historical research. For organizational reasons, consultation of this documentary heritage by researchers was restricted. The development of the “Documenta” project will allow the contents of the archive to be known, bringing these documents within reach for consultation and scientific production.
Abstract:
Review of: Maurer, Markus: Skill Formation Regimes in South Asia. A Comparative Study on the Path-Dependent Development of Technical and Vocational Education and Training for the Garment Industry (Komparatistische Bibliothek; vol. 21), Frankfurt am Main: Peter Lang 2011 (449 pp.)
Abstract:
The recently reported Monte Carlo Random Path Sampling method (RPS) is here improved and its application is expanded to the study of the 2D and 3D Ising and discrete Heisenberg models. The methodology was implemented to allow use in both CPU-based high-performance computing infrastructures (C/MPI) and GPU-based (CUDA) parallel computation, with significant computational performance gains. Convergence is discussed, both in terms of free energy and of the magnetization dependence on field/temperature. From the calculated magnetization-energy joint density of states, fast calculations of field- and temperature-dependent thermodynamic properties are performed, including the effects of anisotropy on coercivity, and the magnetocaloric effect. The emergence of first-order magneto-volume transitions in the compressible Ising model is interpreted using the Landau theory of phase transitions. Using metallic gadolinium as a real-world example, the possibility of using RPS as a tool for computational magnetic materials design is discussed. Experimental magnetic and structural properties of a gadolinium single crystal are compared to RPS-based calculations using microscopic parameters obtained from Density Functional Theory.
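As a toy illustration of the kind of (magnetization, energy) bookkeeping behind a joint density of states, the following sketch walks a random spin-flip path through a small 2D Ising lattice and histograms the visited (M, E) pairs. It deliberately omits the reweighting that makes RPS a proper density-of-states method, so it is a hedged sketch, not the authors' algorithm.

```python
# Minimal random-path walk over 2D Ising configurations, recording
# (magnetization, energy) visits; illustrative only.
import numpy as np

rng = np.random.default_rng(1)
L = 16
spins = np.ones((L, L), dtype=int)            # start fully magnetized

def total_energy(s):
    # nearest-neighbour Ising energy with periodic boundaries, J = 1;
    # each bond is counted exactly once via the two rolls
    return -np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1)))

hist = {}                                      # (M, E) visit counts
for _ in range(5000):
    i, j = rng.integers(L, size=2)
    spins[i, j] *= -1                          # one step along a random path
    key = (int(spins.sum()), int(total_energy(spins)))
    hist[key] = hist.get(key, 0) + 1
```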
Abstract:
Seaports play a critical role as gateways and facilitators of economic interchange and logistics processes and thus have become crucial nodes in globalised production networks and mobility systems. Both the physical port infrastructure and its operational superstructure have undergone intensive evolution processes in an effort to adapt to changing economic environments, technological advances, maritime industry expectations and institutional reforms. The results, in terms of infrastructure, operator models and the role of an individual port within the port system, vary by region, institutional and economic context. While ports have undoubtedly developed in scale to respond to the changing volumes and structures in geographies of trade (Wilmsmeier, 2015), the development of hinterland access infrastructure, regulatory systems and institutional structures has in many instances lagged behind. The resulting bottlenecks reflect deficits in the interplay between the economic system and the factors defining port development (e.g. transport demand, the structure of trade, transport services, institutional capacities, etc.; cf. Cullinane and Wilmsmeier, 2011). There is a wide range of case study approaches and analyses of individual ports, but analyses from a port system perspective are less common, and those that exist are seldom critical of the dominant discourse assuming the efficiency of market competition (cf. Debrie et al., 2013). This special section aims to capture the spectrum of approaches in current geography research on port system evolution. Thus, the papers range from the traditional spatial approach (Rodrigue and Ashar, this volume) to network analysis (Mohamed-Chérif and Ducruet, this volume) to institutional discussions (Vonck and Notteboom, this volume; Wilmsmeier and Monios, this volume). The selection of papers allows an opening of discussion and reflection on current research, necessary critical analysis of the influences on port system evolution and, most importantly, future directions. The remainder of this editorial aims to reflect on these challenges and identify the potential for future research.
Abstract:
A natural way to generalize tensor network variational classes to quantum field systems is via a continuous tensor contraction. This approach is first illustrated for the class of quantum field states known as continuous matrix-product states (cMPS). As a simple example of the path-integral representation, we show that the state of a dynamically evolving quantum field admits a natural representation as a cMPS. A completeness argument is also provided that shows that all states in Fock space admit a cMPS representation when the number of variational parameters tends to infinity. Beyond this, we obtain a well-behaved field limit of projected entangled-pair states (PEPS) in two dimensions that provide an abstract class of quantum field states with natural symmetries. We demonstrate how symmetries of the physical field state are encoded within the dynamics of an auxiliary field system of one dimension less. In particular, the imposition of Euclidean symmetries on the physical system requires that the auxiliary system involved in the class' definition must be Lorentz-invariant. The physical field states automatically inherit entropy area laws from the PEPS class, and are fully described by the dissipative dynamics of a lower dimensional virtual field system. Our results lie at the intersection of many-body physics, quantum field theory and quantum information theory, and facilitate future exchanges of ideas and insights between these disciplines.
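For reference, the standard single-species cMPS form from the literature, which the continuous tensor contraction described above recovers, is shown below; the notation is spelled out in the comments.

```latex
% Standard single-species cMPS form: Q(x) and R(x) are D x D matrices
% acting on the auxiliary space, \hat{\psi}^{\dagger}(x) creates a field
% excitation, |\Omega\rangle is the Fock vacuum, and \mathcal{P} denotes
% path ordering.
|\chi\rangle = \operatorname{Tr}_{\mathrm{aux}}\!\left[
    \mathcal{P}\exp\!\left( \int_{0}^{L} \mathrm{d}x \,
    \bigl( Q(x)\otimes\mathbb{1} + R(x)\otimes\hat{\psi}^{\dagger}(x) \bigr)
    \right) \right] |\Omega\rangle
```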
Abstract:
The opportunity to produce microalgal biomass has attracted interest because of the many uses this biomass can have, whether in bioenergy production, as a food source, or as a product of carbon dioxide biofixation. In general, large-scale production of cyanobacteria and microalgae is monitored through offline physico-chemical analyses. In this context, the objective of this work was to monitor cell concentration in a raceway photobioreactor for microalgal biomass production using digital data acquisition and process control techniques, through inline acquisition of illuminance, biomass concentration, temperature and pH data. To this end, it was necessary to build a software-based sensor capable of determining microalgal biomass concentration from optical measurements of the intensity of scattered monochromatic radiation, and to develop a mathematical model of microalgal biomass production on the microcontroller, using a natural computing algorithm to fit the model. An autonomous system for recording cultivation data was designed, built and tested during outdoor pilot-scale cultivations of Spirulina sp. LEB 18. A biomass concentration sensor based on measuring transmitted radiation was tested. In a second stage, an optical sensor of Spirulina sp. LEB 18 biomass concentration was conceived, built and tested, based on measuring the intensity of radiation scattered by the cyanobacterium suspension, in a laboratory experiment under controlled conditions of illumination, temperature and biomass suspension flow. From the light-scattering measurements, a neuro-fuzzy inference system was built, which serves as a software sensor of biomass concentration in cultivation. Finally, from the biomass concentrations over time, the use of the Arduino platform for empirical modelling of the growth kinetics with the Verhulst equation was explored. Measurements from the optical sensor based on the intensity of monochromatic radiation transmitted through the suspension, used under outdoor conditions, showed low correlation between biomass concentration and radiation, even for concentrations below 0.6 g/L. When optical scattering by the culture suspension was investigated, monochromatic radiation at 530 nm at angles of 45° and 90° showed a linearly increasing behaviour with concentration, with a coefficient of determination of 0.95 in both cases. It was possible to build a software-based biomass concentration sensor using the combined information of radiation intensity scattered at 45° and 135°, with a coefficient of determination of 0.99. It is feasible to simultaneously perform inline determination of Spirulina cultivation process variables and empirical kinetic modelling of the micro-organism's growth through the Verhulst equation on an Arduino microcontroller.
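A minimal sketch of the Verhulst (logistic) fit described above, written in Python with SciPy rather than on the Arduino itself; the biomass data points are invented placeholders, and an on-board implementation would need a lightweight fitting routine instead of curve_fit.

```python
# Hedged sketch: fitting the Verhulst equation
# X(t) = Xmax / (1 + (Xmax/X0 - 1) * exp(-mu t)) to biomass data.
import numpy as np
from scipy.optimize import curve_fit

def verhulst(t, x0, xmax, mu):
    return xmax / (1.0 + (xmax / x0 - 1.0) * np.exp(-mu * t))

t_days = np.array([0, 2, 4, 6, 8, 10, 12])                       # cultivation time
biomass = np.array([0.05, 0.09, 0.17, 0.29, 0.42, 0.52, 0.57])   # g/L, illustrative

(x0, xmax, mu), _ = curve_fit(verhulst, t_days, biomass, p0=(0.05, 0.6, 0.4))
print(f"X0={x0:.3f} g/L, Xmax={xmax:.3f} g/L, mu={mu:.3f} 1/day")
```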
Abstract:
Reliability and dependability modeling can be employed during many stages of analysis of a computing system to gain insights into its critical behaviors. To provide useful results, realistic models of systems are often necessarily large and complex. Numerical analysis of these models presents a formidable challenge because the sizes of their state-space descriptions grow exponentially in proportion to the sizes of the models. On the other hand, simulation of the models requires analysis of many trajectories in order to compute statistically correct solutions. This dissertation presents a novel framework for performing both numerical analysis and simulation. The new numerical approach computes bounds on the solutions of transient measures in large continuous-time Markov chains (CTMCs). It extends existing path-based and uniformization-based methods by identifying sets of paths that are equivalent with respect to a reward measure and related to one another via a simple structural relationship. This relationship makes it possible for the approach to explore multiple paths at the same time, thus significantly increasing the number of paths that can be explored in a given amount of time. Furthermore, the use of a structured representation for the state space and the direct computation of the desired reward measure (without ever storing the solution vector) allow it to analyze very large models using a very small amount of storage. Often, path-based techniques must compute many paths to obtain tight bounds. In addition to presenting the basic path-based approach, we also present algorithms for computing more paths and tighter bounds quickly. One resulting approach is based on the concept of path composition, whereby precomputed subpaths are composed to compute whole paths efficiently. Another approach is based on selecting important paths (among a set of many paths) for evaluation. Many path-based techniques suffer from having to evaluate many (unimportant) paths. Evaluating the important ones helps to compute tight bounds efficiently and quickly.
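The uniformization step that these path-based methods extend can be sketched in a few lines. The 3-state generator below is an invented example, and the dissertation's actual contributions (path grouping, structured state spaces, bound computation) are not reproduced here.

```python
# Hedged sketch of plain uniformization for a transient CTMC distribution:
# pi(t) = sum_k e^{-Lambda t} (Lambda t)^k / k! * pi(0) P^k,
# with P = I + Q / Lambda.
import numpy as np

Q = np.array([[-0.5, 0.4, 0.1],      # infinitesimal generator (rows sum to 0)
              [0.2, -0.3, 0.1],
              [0.0, 0.6, -0.6]])
t = 2.0
lam = np.max(-np.diag(Q))            # uniformization rate Lambda
P = np.eye(3) + Q / lam              # embedded DTMC transition matrix

pi = np.array([1.0, 0.0, 0.0])       # initial distribution
result = np.zeros(3)
term = np.exp(-lam * t)              # Poisson weight for k = 0
for k in range(200):                 # truncated Poisson sum
    result += term * pi
    pi = pi @ P
    term *= lam * t / (k + 1)
print(result)                        # transient distribution at time t
```

Each k-th term corresponds to paths of length k through the embedded DTMC; the path-based refinement groups such paths by reward equivalence instead of summing over all of them.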
Abstract:
Quantum mechanics, optics and indeed any wave theory exhibit the phenomenon of interference. In this thesis we present two problems investigating interference due to indistinguishable alternatives, and a mostly unrelated investigation into the free-space propagation speed of light pulses in particular spatial modes. In chapter 1 we introduce the basic properties of the electromagnetic field needed for the subsequent chapters. In chapter 2 we review the properties of interference using the beam splitter and the Mach-Zehnder interferometer. In particular we review what happens when one of the paths of the interferometer is marked in some way, so that a particle having traversed it carries information as to which path it went down (to be followed up in chapter 3), and we review Hong-Ou-Mandel interference at a beam splitter (to be followed up in chapter 5). In chapter 3 we present the first of the interference problems. This consists of a nested Mach-Zehnder interferometer in which each of the free-space propagation segments is weakly marked by mirrors vibrating at different frequencies [1]. The original experiment drew the conclusion that the photons followed disconnected paths. We partition the description of the light in the interferometer according to the number of paths about which it contains which-way information, and reinterpret the results reported in [1] in terms of the interference of paths spatially connected from source to detector. In chapter 4 we briefly review optical angular momentum, entanglement and spontaneous parametric down-conversion. These concepts feed into chapter 5, in which we present the second of the interference problems, namely Hong-Ou-Mandel interference with particles possessing two degrees of freedom. We analyse the problem in terms of exchange symmetry for both boson and fermion pairs and show that the particle statistics at a beam splitter can be controlled for suitably chosen states. We propose an experimental test of these ideas using orbital angular momentum entangled photons. In chapter 6 we look at the effect that the transverse spatial structure of the mode in which a pulse of light is excited has on its group velocity. We show that the resulting group velocity is slower than the speed of light in vacuum for plane waves, and that this reduction in the group velocity is related to the spread in the wave vectors required to create the transverse spatial structure. We present experimental results of the measurement of this slowing using Hong-Ou-Mandel interference.
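The chapter-6 result can be summarized by a standard paraxial estimate: a spread in transverse wave vectors lowers the on-axis group velocity below c. This is a textbook approximation consistent with the abstract's claim, not the thesis's full derivation.

```latex
% k is the total wavenumber, k_perp the transverse component; averaging
% over the mode's transverse wave-vector spread gives the slowing.
k_z = \sqrt{k^2 - k_\perp^2} \approx k \left( 1 - \frac{k_\perp^2}{2k^2} \right)
\quad \Longrightarrow \quad
v_g \approx c \left( 1 - \frac{\langle k_\perp^2 \rangle}{2 k^2} \right) < c
```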