993 results for parameter-space graph
Abstract:
The objective of this work was to optimize the parameter setup for GTAW of aluminum using an AC rectangular wave output and continuous feeding. A series of welds was carried out in an industrial joint, with variation of the negative and positive current amplitudes, the negative and positive duration times, the travel speed and the feeding speed. Another series was carried out to investigate the isolated effects of the negative duration time and the travel speed. Bead geometry aspects were assessed, such as reinforcement, penetration, incomplete fusion and joint wall bridging. The results showed that the currents at both polarities are remarkably more significant than the respective duration times. It was also shown that there is a direct relationship between welding speed and feeding speed, and this relationship must be followed to obtain sound beads. A very short positive duration time is enough to achieve arc stability, and the effect of the negative duration time on geometry appears only when it is longer than 5 ms. The possibility of optimizing the parameter selection, despite the high inter-correlation amongst the parameters, was demonstrated through a computer program. An approach to reduce the number of variables in this process is also presented.
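The direct proportionality between travel speed and feeding speed follows from mass conservation: the wire volume fed per second must match the bead volume deposited per second. A minimal sketch of this relation (the function name and all numbers are hypothetical, not taken from the study):

```python
import math

def required_feed_speed(travel_speed_mm_s, bead_area_mm2, wire_diameter_mm):
    """Wire feed speed that deposits a given bead cross-section.

    Mass conservation: feed_speed * A_wire = travel_speed * A_bead,
    so feed speed and travel speed are directly proportional.
    """
    a_wire = math.pi * (wire_diameter_mm / 2.0) ** 2
    return travel_speed_mm_s * bead_area_mm2 / a_wire

# Doubling the travel speed doubles the required feed speed.
f1 = required_feed_speed(5.0, 10.0, 1.2)
f2 = required_feed_speed(10.0, 10.0, 1.2)
```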
Abstract:
Some properties of generalized canonical systems - special dynamical systems described by a Hamiltonian function linear in the adjoint variables - are applied in determining the solution of the two-dimensional coast-arc problem in an inverse-square gravity field. A complete closed-form solution for Lagrangian multipliers - adjoint variables - is obtained by means of such properties for elliptic, circular, parabolic and hyperbolic motions. Classic orbital elements are taken as constants of integration of this solution in the case of elliptic, parabolic and hyperbolic motions. For circular motion, a set of nonsingular orbital elements is introduced as constants of integration in order to eliminate the singularity of the solution.
Abstract:
Thermal louvers, using movable or rotating shutters over a radiating surface, have gained wide acceptance as highly efficient devices for controlling the temperature of a spacecraft. This paper presents a detailed analysis of the performance of a rectangular thermal louver with movable blades. The radiative capacity of the louver, determined by its effective emittance, is calculated for different values of the blade opening angle. Experimental results obtained with a prototype of a spacecraft thermal louver show good agreement with the theoretical values.
Abstract:
State-of-the-art predictions of atmospheric states rely on large-scale numerical models of chaotic systems. This dissertation studies numerical methods for state and parameter estimation in such systems. The motivation comes from weather and climate models, and a methodological perspective is adopted. The dissertation comprises three sections: state estimation, parameter estimation and chemical data assimilation with real atmospheric satellite data. In the state estimation part of this dissertation, a new filtering technique, based on a combination of ensemble and variational Kalman filtering approaches, is presented, tested and discussed. This new filter is developed for large-scale Kalman filtering applications. In the parameter estimation part, three different techniques for parameter estimation in chaotic systems are considered. The methods are studied using the parameterized Lorenz 95 system, which is a benchmark model for data assimilation. In addition, a dilemma related to the uniqueness of weather and climate model closure parameters is discussed. In the data-oriented part of this dissertation, data from the Global Ozone Monitoring by Occultation of Stars (GOMOS) satellite instrument are considered, and an alternative algorithm to retrieve atmospheric parameters from the measurements is presented. The validation study presents the first global comparisons between two unique satellite-borne datasets of vertical profiles of nitrogen trioxide (NO3), retrieved using the GOMOS and Stratospheric Aerosol and Gas Experiment III (SAGE III) satellite instruments. The GOMOS NO3 observations are also considered in a chemical state estimation study in order to retrieve stratospheric temperature profiles. The main result of this dissertation is the use of Kalman filtering outputs for likelihood calculations. The concept has previously been used together with stochastic differential equations and in time series analysis.
In this work, the concept is applied to chaotic dynamical systems and used together with Markov chain Monte Carlo (MCMC) methods for statistical analysis. In particular, this methodology is advocated for use in numerical weather prediction (NWP) and climate model applications. In addition, the concept is shown to be useful in estimating filter-specific parameters related, e.g., to the model error covariance matrix.
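The idea of evaluating a likelihood from Kalman filtering outputs can be sketched for a scalar linear-Gaussian model: the filter's innovations and their variances yield the marginal likelihood of the observations, which an MCMC sampler can then use to explore the parameters. This toy model (all names and parameter values are illustrative) stands in for the large-scale ensemble/variational filters treated in the dissertation:

```python
import numpy as np

def kalman_loglik(y, a, q, r, m0=0.0, p0=1.0):
    """Log-likelihood of observations y under x_k = a*x_{k-1} + N(0, q),
    y_k = x_k + N(0, r), accumulated from the filter innovations.
    Such a likelihood can be handed to an MCMC sampler to estimate (a, q, r).
    """
    m, p, ll = m0, p0, 0.0
    for yk in y:
        # predict
        m, p = a * m, a * a * p + q
        # innovation and its variance
        v, s = yk - m, p + r
        ll += -0.5 * (np.log(2 * np.pi * s) + v * v / s)
        # update
        k = p / s
        m, p = m + k * v, (1 - k) * p
    return ll

rng = np.random.default_rng(0)
y = rng.normal(size=50)                     # synthetic data for illustration
ll = kalman_loglik(y, a=0.9, q=0.5, r=1.0)
```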
Abstract:
Multiprocessor system-on-chip (MPSoC) designs utilize the available technology and communication architectures to meet the requirements of upcoming applications. In an MPSoC, the communication platform is both the key enabler and the key differentiator for realizing efficient MPSoCs. It provides product differentiation to meet a diverse, multi-dimensional set of design constraints, including performance, power, energy, reconfigurability, scalability, cost, reliability and time-to-market. The communication resources of a single interconnection platform cannot be fully utilized by all kinds of applications; for example, providing high communication bandwidth to computation-intensive but not data-intensive applications is often unfeasible in practical implementations. This thesis aims to perform architecture-level design space exploration towards efficient and scalable resource utilization for MPSoC communication architectures. In order to meet the performance requirements within the design constraints, careful selection of the MPSoC communication platform and resource-aware partitioning and mapping of the application play an important role. To enhance the utilization of communication resources, a variety of techniques can be used, such as resource sharing, multicast to avoid re-transmission of identical data, and adaptive routing. For implementation, these techniques should be customized according to the platform architecture. To address the resource utilization of MPSoC communication platforms, a variety of architectures with different design parameters and performance levels, namely the Segmented bus (SegBus), Network-on-Chip (NoC) and Three-Dimensional NoC (3D-NoC), are selected. Average packet latency and power consumption are the evaluation parameters for the proposed techniques. In conventional computing architectures, a fault on a component makes the connected fault-free components inoperative.
A resource-sharing approach can utilize the fault-free components to retain system performance by reducing the impact of faults. Design space exploration also helps narrow down the selection of an MPSoC architecture that can meet the performance requirements within the design constraints.
Abstract:
The power rating of wind turbines is constantly increasing; however, keeping the voltage rating at the low-voltage level results in high kilo-ampere currents. An alternative for increasing the power levels without raising the voltage level is provided by multiphase machines. Multiphase machines are used, for instance, in ship propulsion systems, aerospace applications, electric vehicles, and other high-power applications, including wind energy conversion systems. A machine model in an appropriate reference frame is required in order to design an efficient control for the electric drive. Modeling of multiphase machines poses a challenge because of the mutual couplings between the phases. Mutual couplings degrade the drive performance unless they are properly considered. In certain multiphase machines there is also a problem of high current harmonics, which are easily generated because of the small current path impedance of the harmonic components. However, multiphase machines provide special characteristics compared with their three-phase counterparts: multiphase machines have a better fault tolerance, and are thus more robust. In addition, the controlled power can be divided among more inverter legs by increasing the number of phases. Moreover, the torque pulsation can be decreased and the harmonic frequency of the torque ripple increased by an appropriate multiphase configuration. By increasing the number of phases it is also possible to obtain more torque per RMS ampere for the same volume, and thus increase the power density. In this doctoral thesis, a decoupled d–q model of double-star permanent-magnet (PM) synchronous machines is derived based on inductance matrix diagonalization. The double-star machine is a special type of multiphase machine. Its armature consists of two three-phase winding sets, which are commonly displaced by 30 electrical degrees. In this study, the displacement angle between the sets is considered a parameter.
The diagonalization of the inductance matrix results in a simplified model structure, in which the mutual couplings between the reference frames are eliminated. Moreover, the current harmonics are mapped into a reference frame, in which they can be easily controlled. The work also presents methods to determine the machine inductances by a finite-element analysis and by voltage-source inverters on-site. The derived model is validated by experimental results obtained with an example double-star interior PM (IPM) synchronous machine having the sets displaced by 30 electrical degrees. The derived transformation, and consequently, the decoupled d–q machine model, are shown to model the behavior of an actual machine with an acceptable accuracy. Thus, the proposed model is suitable to be used for the model-based control design of electric drives consisting of double-star IPM synchronous machines.
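The role of inductance matrix diagonalization can be illustrated with a toy symmetric matrix: an orthogonal eigenvector transformation removes the mutual couplings, which is the effect the derived d–q transformation achieves for the double-star machine. The matrix below is invented for illustration, not taken from the machine in the thesis:

```python
import numpy as np

# Toy symmetric inductance matrix for a six-phase (double-star) winding:
# self-inductance L_s on the diagonal, mutual coupling M off-diagonal.
L_s, M = 10.0, 2.0
L = np.full((6, 6), M) + (L_s - M) * np.eye(6)

# Orthogonal transformation that diagonalizes the inductance matrix;
# in the transformed frame the mutual couplings vanish.
eigvals, T = np.linalg.eigh(L)
L_diag = T.T @ L @ T

off_diag = L_diag - np.diag(np.diag(L_diag))
```

In the transformed reference frame the off-diagonal (mutual) terms are zero to machine precision, so each axis can be controlled independently.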
Abstract:
Today’s electrical machine technology allows increasing the wind turbine output power by an order of magnitude compared with the technology that existed only ten years ago. However, it is sometimes argued that high-power direct-drive wind turbine generators will prove to be of limited practical importance because of their relatively large size and weight. The limited space for the generator in a wind turbine application, together with the growing use of wind energy, poses a challenge for design engineers trying to increase torque without making the generator larger. When it comes to high torque density, the limiting factor in every electrical machine is heat: if the electrical machine parts exceed their maximum allowable continuous operating temperature, even for a short time, they can suffer permanent damage. Therefore, a highly efficient thermal design or cooling method is needed. One of the promising solutions for enhancing the heat transfer performance of high-power, low-speed electrical machines is direct cooling of the windings. This doctoral dissertation proposes a rotor-surface-magnet synchronous generator with a fractional-slot non-overlapping stator winding made of hollow conductors, through which liquid coolant can be passed directly while current is applied, in order to increase the convective heat transfer capabilities and reduce the generator mass. The dissertation focuses on the electromagnetic design of a liquid-cooled direct-drive permanent-magnet synchronous generator (LC DD-PMSG) for a direct-drive wind turbine application. The analytical calculation of the magnetic field distribution is carried out with the aim of quickly and accurately predicting the main dimensions of the machine, especially the thickness of the permanent magnets, and the generator electromagnetic parameters, as well as supporting the design optimization. The focus is on a generator design with a fractional-slot non-overlapping winding placed into open stator slots.
This is an a priori selection to guarantee easy manufacturing of the liquid-cooled winding. A thermal analysis of the LC DD-PMSG based on a lumped-parameter thermal model is performed to evaluate the generator thermal performance. The thermal model was adapted to take into account the uneven copper loss distribution resulting from the skin effect, as well as the effect of temperature on the copper winding resistance and on the thermophysical properties of the coolant. The developed lumped-parameter thermal model and the analytical calculation of the magnetic field distribution can both be integrated with the presented algorithm to optimize an LC DD-PMSG design. Based on an instrumented small prototype with liquid-cooled tooth-coils, the following targets have been achieved: experimental determination of the performance of the direct liquid cooling of the stator winding and validation of the temperatures predicted by the analytical thermal model; proof of the feasibility of manufacturing the liquid-cooled tooth-coil winding; and demonstration of the objectives of the project to potential customers.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically up to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field. Digital filters are typically described with boxes and arrows, also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications include network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an as-small-as-possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect the scheduling of the application while omitting everything else, in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
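The firing rule described above — a node may execute only when every input queue holds enough tokens — can be sketched in a few lines. This is a minimal illustration of the dataflow principle, not of RVC-CAL's actual actor semantics:

```python
from collections import deque

class Actor:
    """A dataflow node: it fires when every input queue holds a token,
    consuming inputs and producing outputs on downstream queues."""
    def __init__(self, func, n_inputs):
        self.func = func
        self.inputs = [deque() for _ in range(n_inputs)]
        self.outputs = []  # input queues of downstream actors

    def can_fire(self):
        return all(len(q) > 0 for q in self.inputs)

    def fire(self):
        tokens = [q.popleft() for q in self.inputs]
        result = self.func(*tokens)
        for q in self.outputs:
            q.append(result)

# Two tokens feed an adder; the adder fires only once both are present.
adder = Actor(lambda a, b: a + b, n_inputs=2)
sink = Actor(lambda x: x, n_inputs=1)
adder.outputs.append(sink.inputs[0])

adder.inputs[0].append(3)
assert not adder.can_fire()   # only one input available: no firing yet
adder.inputs[1].append(4)
if adder.can_fire():
    adder.fire()              # consumes 3 and 4, emits 7 downstream
```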
Abstract:
Biofuels for transport are a renewable source of energy that were once heralded as a solution to multiple problems associated with poor urban air quality, the overproduction of agricultural commodities, the energy security of the European Union (EU) and climate change. It was only after the Union had implemented an incentivizing framework of legal and political instruments for the production, trade and consumption of biofuels that the problems of weakening food security, environmental degradation and increasing greenhouse gases through land-use changes began to unfold. In other words, the difference between the political aims for which biofuels are promoted and their consequences has grown – which is also recognized by EU policy-makers. Therefore, the global networks of producing, trading and consuming biofuels may face a complete restructuring if the European Commission accomplishes its pursuit to sideline crop-based biofuels after 2020. My aim with this dissertation is not only to trace the manifold evolution of the instruments used by the Union to govern biofuels but also to reveal how this evolution has influenced the dynamics of biofuel development. Therefore, I study the ways the EU’s legal and political instruments of steering biofuels are co-constitutive with the globalized spaces of biofuel development. My analytical strategy can be outlined through three concepts. I use the term ‘assemblage’ to approach the operations of the loose entity of actors and non-human elements that are the constituents of multi-scalar and -sectorial biofuel development. ‘Topology’ refers to the spatiality of this European biofuel assemblage and its parts, whose evolving relations are treated as the active constituents of space, instead of simply being located in space. I apply the concept of ‘nomosphere’ to characterize the framework of policies, laws and other instruments that the EU applies and construes while attempting to govern biofuels.
Even though both the materials and methods vary in the independent articles, these three concepts characterize my analytical strategy, which allows me to study law, policy and space in association with each other. The results of my examinations underscore how the EU’s instruments of governance constitute and stabilize the spaces of production and, on the other hand, how topological ruptures in biofuel development have enforced the need to reform policies. This analysis maps the vast scope of actors that are influenced by the mechanisms of EU biofuel governance and, what is more, shows how they actively engage in the Union’s institutional policy formulation. By examining the consequences of fast biofuel development that are spatially dislocated from the established spaces of producing, trading and consuming biofuels, such as indirect land-use changes, I unfold the processes not tackled by the instruments of the EU. Indeed, it is these spatially dislocated processes that have pushed the Commission towards construing a new type of biofuel governance: transferring the instruments of climate change mitigation to land-use policies. Although efficient in mitigating these dislocated consequences, these instruments have also created a peculiar ontological scaffolding for governing biofuels. According to this mode of governance, the spatiality of biofuel development appears to be already determined, and the agency that could dampen the negative consequences originating from land-use practices is treated as irrelevant.
Abstract:
Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust-region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically by using Gaussian kernels. This allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge-finding methods are adapted to two different applications. The first one is the extraction of curvilinear structures from noisy data mixed with background clutter. The second one is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include the identification of faults from seismic data and the identification of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and the reconstruction of periodic patterns from noisy time series data are also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into the Euclidean space.
The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but it also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated when the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
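Projecting a point onto a feature of a Gaussian kernel density can be illustrated in one dimension, where the ridge reduces to a mode. The sketch below uses the classical mean-shift fixed-point iteration as a simpler stand-in for the trust-region Newton method developed in the thesis, with invented data:

```python
import numpy as np

def kde_mode(x0, data, h, n_iter=200):
    """Project a starting point onto a local maximum of a Gaussian kernel
    density estimate via mean-shift iteration: repeatedly replace x with
    the kernel-weighted mean of the data until it settles on a mode."""
    x = x0
    for _ in range(n_iter):
        w = np.exp(-0.5 * ((x - data) / h) ** 2)
        x = np.sum(w * data) / np.sum(w)
    return x

data = np.array([-0.5, 0.0, 0.5])   # toy sample, symmetric about 0
mode = kde_mode(0.3, data, h=0.5)   # converges to the density mode at 0
```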
Abstract:
Heat transfer effectiveness in nuclear rod bundles is of great importance to nuclear reactor safety and economics. An important design parameter is the Critical Heat Flux (CHF), which limits the heat transferred from the fuel to the coolant. The CHF is determined by flow behaviour, especially the turbulence created inside the fuel rod bundle. Adiabatic experiments can be used to characterize the flow behaviour separately from the heat transfer phenomena present in diabatic flow. To enhance the turbulence, mixing vanes are attached to the spacer grids, which hold the rods in place. The vanes either make the flow swirl around a single sub-channel or induce cross-mixing between adjacent sub-channels. In adiabatic two-phase conditions, an important phenomenon that can be investigated is the effect of the spacer on canceling the lift force, which drives small bubbles towards the rod surfaces, leading to a decreased CHF in diabatic conditions and thus limiting the reactor power. Computational Fluid Dynamics (CFD) can be used to simulate the flow numerically and to test how different spacer configurations affect the flow. Experimental data are needed to validate and verify the CFD models used. The modeling of turbulence, in particular, is challenging even for single-phase flow inside the complex sub-channel geometry. In two-phase flow, other factors such as bubble dynamics further complicate the modeling. To investigate the spacer grid effect on two-phase flow, and to provide further experimental data for CFD validation, a series of experiments was run on an adiabatic sub-channel flow loop using a duct-type spacer grid with different configurations. Utilizing wire-mesh sensor technology, the facility gives high-resolution experimental data in both time and space. The experimental results indicate that the duct-type spacer grid is less effective in canceling the lift force effect than the egg-crate type spacer tested earlier.
Abstract:
We present a critical analysis of the generalized use of the "impact factor". By means of the Kruskal-Wallis test, it was shown that it is not possible to compare distinct disciplines using the impact factor without adjustment. After assigning the median journal the value of one (1.000), the adjusted impact factor for each journal was calculated by the rule of three (simple proportion). The adjusted values were homogeneous, thus permitting comparison among distinct disciplines.
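The median adjustment described above amounts to a simple proportion: each journal's impact factor is divided by its discipline's median, so the median journal scores 1.000. A sketch with invented figures:

```python
import statistics

def adjust_impact_factors(impact_factors):
    """Rescale a discipline's journal impact factors so that the median
    journal gets the value 1.000 (the rule-of-three normalization).
    The example figures below are invented for illustration."""
    med = statistics.median(impact_factors)
    return [round(f / med, 3) for f in impact_factors]

medicine = [1.2, 2.4, 4.8]     # hypothetical high-impact-factor discipline
mathematics = [0.3, 0.6, 1.2]  # hypothetical low-impact-factor discipline

adj_med = adjust_impact_factors(medicine)
adj_math = adjust_impact_factors(mathematics)
# After adjustment, the two disciplines land on a common scale.
```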
Abstract:
The objective of the present study was to determine the levels of amino acids in maternal plasma, the placental intervillous space and the fetal umbilical vein in order to identify the similarities and differences in amino acid levels in these compartments of 15 term newborns from normal pregnancies and deliveries. All amino acids, except tryptophan, were present at concentrations at least 186% higher in the intervillous space than in maternal venous blood, with the difference being statistically significant. This result contradicted the initial hypothesis of the study that the plasma amino acid levels in the placental intervillous space should be similar to those of maternal plasma. When the maternal venous compartment was compared with the umbilical vein, we observed values 103% higher on the fetal side, which is compatible with the currently accepted mechanisms of active amino acid transport. Amino acid levels in the placental intervillous space were similar to the values of the umbilical vein except for proline, glycine and aspartic acid, whose levels were significantly higher than the fetal umbilical vein levels (on average 107% higher). The elevated levels in the intervillous space are compatible with syncytiotrophoblast activity, which maintains high concentrations of free amino acids inside the syncytiotrophoblast cells, permitting asymmetric efflux or active transport from the trophoblast cells to the blood in the intervillous space. The plasma amino acid levels in the umbilical vein of term newborns can probably be used as a standard of local normality for clinical studies of amino acid profiles.
Abstract:
The growing population on earth, along with diminishing fossil deposits and the climate change debate, calls for a better utilization of renewable, bio-based materials. In a biorefinery perspective, renewable biomass is converted into many different products such as fuels, chemicals, and materials, quite similarly to the petroleum refinery industry. Since forests cover about one third of the land surface on earth, ligno-cellulosic biomass is the most abundant renewable resource available. The natural first step in a biorefinery is the separation and isolation of the different compounds the biomass is comprised of. The major components in wood are cellulose, hemicellulose, and lignin, all of which can be made into various end-products. Today, focus normally lies on utilizing only one component, e.g., the cellulose in the Kraft pulping process. It would be highly desirable to utilize all the different compounds, from both an economical and an environmental point of view. The separation process should therefore be optimized. Hemicelluloses can partly be extracted with hot water prior to pulping. Depending on the severity of the extraction, the hemicelluloses are degraded to various degrees. In order to be able to choose from a variety of different end-products, the hemicelluloses should be as intact as possible after the extraction. The main focus of this work has been on preserving the hemicellulose molar mass throughout the extraction at a high yield by actively controlling the extraction pH at the high temperatures used. Since it has not been possible to measure pH during an extraction due to the high temperatures, the extraction pH has remained a “black box”. Therefore, a high-temperature in-line pH measuring system was developed, validated, and tested for hot-water wood extractions. One crucial step in the measurements is calibration; therefore, extensive effort was put into developing a reliable calibration procedure.
Initial extractions with wood showed that the actual extraction pH was ~0.35 pH units higher than previously believed. The measuring system was also equipped with a controller connected to a pump. With this addition it was possible to control the extraction to any desired pH set point. When the pH dropped below the set point, the controller started pumping in alkali, thereby maintaining the desired set point very accurately. Analyses of the extracted hemicelluloses showed that fewer hemicelluloses were extracted at higher pH, but with a higher molar mass. Monomer formation could, at a certain pH level, be completely inhibited. Increasing the temperature while maintaining a specific pH set point would speed up the extraction without degrading the molar mass of the hemicelluloses, thereby intensifying the extraction. The diffusion of the dissolved hemicelluloses out of the wood particle is a major part of the extraction process. Therefore, a particle size study ranging from 0.5 mm wood particles to industrial-size wood chips was conducted to investigate the internal mass transfer of the hemicelluloses. Unsurprisingly, it showed that hemicelluloses were extracted faster from smaller wood particles than from larger ones, although particle size did not seem to have a substantial effect on the average molar mass of the extracted hemicelluloses. However, smaller particle sizes require more energy to manufacture and thus increase the economic cost. Since bark comprises 10–15 % of a tree, it is important to also consider it in a biorefinery concept. Spruce inner and outer bark were hot-water extracted separately to investigate the possibility of isolating the bark hemicelluloses. It was shown that the bark hemicelluloses consisted mostly of pectic material and differed considerably from the wood hemicelluloses. The bark hemicelluloses, or pectins, could be extracted at lower temperatures than the wood hemicelluloses.
A chemical characterization, done separately on inner and outer bark, showed that inner bark contained over 10 % stilbene glucosides that could be extracted already at 100 °C with aqueous acetone.
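The alkali-dosing control described earlier is, in essence, an on/off feedback loop: whenever the measured pH drifts below the set point, the controller pumps in alkali. A toy simulation of that loop (the function name and all rates are invented, and real extraction chemistry is far more complex):

```python
def simulate_ph_control(setpoint, n_steps, acid_rate=0.02, dose_effect=0.05,
                        ph0=5.0):
    """Toy on/off pH control of a hot-water extraction: dissolving wood
    acids pull the pH down each step; when the measured pH falls below
    the set point, the controller adds one pump stroke of alkali."""
    ph, doses = ph0, 0
    history = []
    for _ in range(n_steps):
        ph -= acid_rate              # acids released by the extraction
        if ph < setpoint:            # controller reacts to the measurement
            ph += dose_effect        # one pump stroke of alkali
            doses += 1
        history.append(ph)
    return history, doses

history, doses = simulate_ph_control(setpoint=4.5, n_steps=500)
# After an initial transient, the pH oscillates tightly around the set point.
```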
Abstract:
Successful management of rivers requires an understanding of the fluvial processes that govern them. This, in turn, cannot be achieved without a means of quantifying their geomorphology and hydrology and the spatio-temporal interactions between them, that is, their hydromorphology. For a long time, it has been laborious and time-consuming to measure river topography, especially in the submerged part of the channel. The measurement of the flow field has been challenging as well, and hence such measurements have long been sparse in natural environments. Technological advancements in the field of remote sensing in recent years have opened up new possibilities for capturing synoptic information on river environments. This thesis presents new developments in fluvial remote sensing of both topography and water flow. A set of close-range remote sensing methods is employed to eventually construct a high-resolution unified empirical hydromorphological model, that is, river channel and floodplain topography and the three-dimensional areal flow field. Empirical as well as hydraulic theory-based optical remote sensing methods are tested and evaluated using normal colour aerial photographs and sonar calibration and reference measurements on a rocky-bed sub-Arctic river. The empirical optical bathymetry model is developed further by the introduction of a deep-water radiance parameter estimation algorithm that extends the field of application of the model to shallow streams. The effect of this parameter on the model is also assessed in a study of a sandy-bed sub-Arctic river using close-range high-resolution aerial photography, presenting one of the first examples of fluvial bathymetry modelling from unmanned aerial vehicles (UAV). Further close-range remote sensing methods are added to complete the topography, integrating the river bed with the floodplain to create a seamless high-resolution topography.
Boat-, cart-, and backpack-based mobile laser scanning (MLS) is used to measure the topography of the dry part of the channel at a high resolution and accuracy. Multitemporal MLS is evaluated along with UAV-based photogrammetry against terrestrial laser scanning reference data and merged with UAV-based bathymetry to create a two-year series of seamless digital terrain models. These allow the evaluation of the methodology for conducting high-resolution change analysis of the entire channel. The remote-sensing-based model of hydromorphology is completed by a new methodology for mapping the flow field in 3D. An acoustic Doppler current profiler (ADCP) is deployed on a remote-controlled boat with a survey-grade global navigation satellite system (GNSS) receiver, allowing the positioning of the areally sampled 3D flow vectors in 3D space as a point cloud; the interpolation of this point cloud into a 3D matrix allows a quantitative volumetric flow analysis. Multitemporal areal 3D flow field data show the evolution of the flow field during a snow-melt flood event. The combination of the underwater and dry topography with the flow field yields a complete model of river hydromorphology at the reach scale.
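The step from a positioned point cloud of flow vectors to a 3D matrix can be sketched with simple cell averaging; this is a minimal stand-in for the interpolation used in the thesis, with invented sample data and a hypothetical function name:

```python
import numpy as np

def grid_flow_vectors(points, vectors, shape, bounds):
    """Bin irregularly sampled 3D flow vectors (e.g. GNSS-positioned ADCP
    samples) into a regular 3D matrix by averaging the vectors that fall
    into each cell."""
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    idx = ((points - lo) / (hi - lo) * shape).astype(int)
    idx = np.clip(idx, 0, np.array(shape) - 1)
    grid = np.zeros(shape + (3,))    # mean flow vector per cell
    count = np.zeros(shape)          # samples per cell
    for (i, j, k), v in zip(idx, vectors):
        grid[i, j, k] += v
        count[i, j, k] += 1
    nonzero = count > 0
    grid[nonzero] /= count[nonzero][:, None]
    return grid, count

rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, size=(200, 3))        # sample positions, m
vecs = np.tile([0.5, 0.0, 0.0], (200, 1))      # uniform 0.5 m/s flow
grid, count = grid_flow_vectors(pts, vecs, (5, 5, 2),
                                ([0, 0, 0], [10, 10, 10]))
```

With a uniform input flow, every occupied cell of the resulting matrix holds the same mean vector, which makes the averaging easy to verify before applying the routine to real, spatially varying data.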