998 results for Discrete part
Abstract:
The dataset is based on samples collected in the spring of 2002 in the western Black Sea off the Bulgarian coast. The whole dataset is composed of 76 samples (from 27 stations of the National Monitoring Grid) with data on mesozooplankton species composition, abundance and biomass. Zooplankton sampling was performed from the bottom up to the surface at depths depending on water column stratification and the thermocline depth. Samples were collected with a vertical closing Juday net (diameter 36 cm, mesh size 150 µm), towed in discrete layers from the surface down to the bottom, and preserved in a buffered 4% formaldehyde-seawater solution. Sampling volume was estimated by multiplying the net mouth area by the wire length. Mesozooplankton abundance: the collected material was analysed using the method of Dimov (1959). Samples were brought to a volume of 25-30 ml, depending on zooplankton density, and mixed intensively until all organisms were distributed randomly in the sample volume. A 5 ml aliquot was then taken and poured into a rectangular counting chamber for taxonomic identification and counting. Large (> 1 mm body length) and scarce species were counted in the whole sample. Counting of organisms was carried out in the Dimov chamber under a stereomicroscope to the lowest taxon possible. Taxonomic identification was done at the Institute of Oceanology by Kremena Stefanova using the relevant taxonomic literature (Mordukhay-Boltovskoy, F.D. (Ed.), 1968, 1969, 1972).
Taxon-specific abundance: the same subsampling and counting procedure was applied. Copepods and Cladocerans were identified and enumerated to species; the other mesozooplankters were identified and enumerated at a higher taxonomic level (commonly referred to as mesozooplankton groups).
Abstract:
The dataset is based on samples collected in the autumn of 2001 in the western Black Sea off the Bulgarian coast. The whole dataset is composed of 42 samples (from 19 stations of the National Monitoring Grid) with data on mesozooplankton species composition, abundance and biomass. Samples were collected in the layers 0-10, 0-20, 0-50, 10-25, 25-50 and 50-100 m, and from the bottom up to the surface at depths depending on water column stratification and the thermocline depth. Zooplankton samples were collected with a vertical closing Juday net (diameter 36 cm, mesh size 150 µm), towed in discrete layers from the surface down to the bottom, and preserved in a buffered 4% formaldehyde-seawater solution. Sampling volume was estimated by multiplying the net mouth area by the wire length. Mesozooplankton abundance: the collected material was analysed using the method of Dimov (1959). Samples were brought to a volume of 25-30 ml, depending on zooplankton density, and mixed intensively until all organisms were distributed randomly in the sample volume. A 5 ml aliquot was then taken and poured into a rectangular counting chamber for taxonomic identification and counting. Large (> 1 mm body length) and scarce species were counted in the whole sample. Counting and measuring of organisms were carried out in the Dimov chamber under a stereomicroscope to the lowest taxon possible. Taxonomic identification was done at the Institute of Oceanology by Kremena Stefanova using the relevant taxonomic literature (Mordukhay-Boltovskoy, F.D. (Ed.), 1968, 1969, 1972).
Taxon-specific abundance: the same subsampling and counting procedure was applied. Copepods and Cladocerans were identified and enumerated to species; the other mesozooplankters were identified and enumerated at a higher taxonomic level (commonly referred to as mesozooplankton groups).
Abstract:
The track of the cruise and the locations of the different stations cover a large range of water masses, many of which take part in the exchange across the Greenland-Scotland Ridge and are of importance for the biogeochemical fluxes in the region. These water masses have very different origins, which can be observed in the concentrations of the different biogeochemical parameters. The concentrations result from the combination of the physical and biogeochemical environment in each formation region and from the processes acting on the water masses as they are transported away from the formation areas. The aim of the biogeochemistry measurements was to achieve a better understanding of the strength and variability of the biological carbon pump in the North Atlantic and Nordic Seas.
Abstract:
The "CoMSBlack92" dataset is based on samples collected in the summer of 1992 along the Bulgarian coast, including coastal and open-sea areas. The whole dataset is composed of 79 samples (28 stations) with data on zooplankton species composition, abundance and biomass. Zooplankton sampling was performed from the bottom up to the surface at standard depths depending on water column stratification and the thermocline depth. Samples were collected with a vertical closing Juday net (diameter 36 cm, mesh size 150 µm), towed in discrete layers from the surface down to the bottom, and preserved in a buffered 4% formaldehyde-seawater solution. Sampling volume was estimated by multiplying the net mouth area by the wire length. The collected material was analysed using the method of Dimov (1959). Samples were brought to a volume of 25-30 ml, depending on zooplankton density, and mixed intensively until all organisms were distributed randomly in the sample volume. A 5 ml aliquot was then taken and poured into a rectangular counting chamber for taxonomic identification and counting. Large (> 1 mm body length) and scarce species were counted in the whole sample. Counting and measuring of organisms were carried out in the Dimov chamber under a stereomicroscope to the lowest taxon possible. Taxonomic identification was done at the Institute of Oceanology by Asen Konsulov using the relevant taxonomic literature (Mordukhay-Boltovskoy, F.D. (Ed.), 1968, 1969, 1972). Biomass was estimated as wet weight following Petipa (1959), based on species-specific wet weights (standard average weight of each species, in mg/m³). Wet weight values were transformed to dry weight using the equation DW = 0.16·WW, as suggested by Vinogradov & Shushkina (1987).
Copepods and Cladocerans were identified and enumerated to species; the other mesozooplankters were identified and enumerated at a higher taxonomic level (commonly referred to as mesozooplankton groups).
Abstract:
The "Hydroblack91" dataset is based on samples collected in the summer of 1991 and covers part of the north-western Black Sea off the Romanian coast and the western Black Sea (Bulgarian coast), between 43°30'-42°10' N and 28°40'-31°45' E. Mesozooplankton sampling was undertaken at 20 stations. The whole dataset is composed of 72 samples with data on zooplankton species composition, abundance and biomass. Samples were collected in the discrete layers 0-10, 0-20, 0-50, 10-25, 25-50 and 50-100 m, and from the bottom up to the surface at depths depending on water column stratification and the thermocline depth. Zooplankton samples were collected with a vertical closing Juday net (diameter 36 cm, mesh size 150 µm), towed in discrete layers from the surface down to the bottom, and preserved in a buffered 4% formaldehyde-seawater solution. Sampling volume was estimated by multiplying the net mouth area by the wire length. Mesozooplankton abundance: the collected material was analysed using the method of Dimov (1959). Samples were brought to a volume of 25-30 ml, depending on zooplankton density, and mixed intensively until all organisms were distributed randomly in the sample volume. A 5 ml aliquot was then taken and poured into a rectangular counting chamber for taxonomic identification and counting. Large (> 1 mm body length) and scarce species were counted in the whole sample. Counting and measuring of organisms were carried out in the Dimov chamber under a stereomicroscope to the lowest taxon possible. Taxonomic identification was done at the Institute of Oceanology by Asen Konsulov using the relevant taxonomic literature (Mordukhay-Boltovskoy, F.D. (Ed.), 1968, 1969, 1972). Biomass was estimated as wet weight following Petipa (1959), based on species-specific wet weights. Wet weight values were transformed to dry weight using the equation DW = 0.16·WW, as suggested by Vinogradov & Shushkina (1987).
Taxon-specific abundance: the same subsampling and counting procedure was applied. Copepods and Cladocerans were identified and enumerated to species; the other mesozooplankters were identified and enumerated at a higher taxonomic level (commonly referred to as mesozooplankton groups). Biomass was estimated as wet weight following Petipa (1959), using the standard average weight of each species in mg/m³; wet weight was converted to dry weight by DW = 0.16·WW (Vinogradov & Shushkina, 1987).
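The volume, abundance and biomass arithmetic repeated in these dataset descriptions (mouth area times wire length, aliquot scaling, DW = 0.16·WW) can be sketched in a few lines. The function names and the example numbers below are illustrative, not taken from the datasets:

```python
import math

def filtered_volume_m3(net_diameter_m, wire_length_m):
    # Sampling volume = net mouth area * length of wire paid out
    mouth_area = math.pi * (net_diameter_m / 2.0) ** 2
    return mouth_area * wire_length_m

def abundance_ind_m3(count, subsample_ml, sample_ml, volume_m3):
    # Scale the aliquot count up to the whole sample, then normalise
    # by the volume of water filtered by the net
    return count * (sample_ml / subsample_ml) / volume_m3

def dry_weight(wet_weight):
    # DW = 0.16 * WW (Vinogradov & Shushkina, 1987)
    return 0.16 * wet_weight

# Juday net (diameter 36 cm) hauled over 50 m of wire: ~5.09 m**3
v = filtered_volume_m3(0.36, 50.0)
# 120 individuals counted in a 5 ml aliquot of a 25 ml sample
n = abundance_ind_m3(120, 5.0, 25.0, v)
# A wet-weight biomass of 850 mg/m**3 converts to 136 mg/m**3 dry weight
dw = dry_weight(850.0)
```

Note that estimating the filtered volume from the wire length assumes a strictly vertical haul with 100% filtration efficiency, as the descriptions above imply.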
Abstract:
A real-time, large-scale, part-to-part video matching algorithm, based on the cross correlation of intensity-of-motion curves, is proposed with a view to originality recognition, video database cleansing, copyright enforcement, video tagging and video result re-ranking. Moreover, it is suggested how the most representative hashes and distance functions (strada, discrete cosine transform, Marr-Hildreth and radial) should be integrated in order for the matching algorithm to be invariant against blur, compression and rotation distortions: blur with (R, σ) ∈ [1, 20] × [1, 8], compression from 512×512 down to 32×32 pixels², and rotation from 10° to 180°. The DCT hash is invariant against blur and compression down to 64×64 pixels². Nevertheless, although its performance against rotation is the best, with a success rate of up to 70%, it should be combined with the Marr-Hildreth distance function: the image selected by the DCT hash should then be at a distance lower than 1.15 times the Marr-Hildreth minimum distance.
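As a rough illustration of the kind of DCT hash referred to above (a generic perceptual-hash sketch, not the thesis's exact implementation): reduce the frame to a 32×32 grayscale array, take its 2-D DCT, keep the low-frequency 8×8 block, and threshold it at its median to obtain a 64-bit signature; candidate matches are then compared by Hamming distance:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows are basis vectors)
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0, :] /= np.sqrt(2.0)
    return c

def dct_hash(img):
    # img: 32x32 grayscale array -> 64-bit perceptual hash
    c = dct_matrix(img.shape[0])
    coeffs = c @ img @ c.T          # separable 2-D DCT-II
    low = coeffs[:8, :8].ravel()    # keep the low-frequency 8x8 block
    return (low > np.median(low)).astype(np.uint8)

def hamming(h1, h2):
    # Number of differing hash bits
    return int(np.sum(h1 != h2))
```

Because only the lowest spatial frequencies survive, the signature changes little under blur and downscaling, which is the invariance property the abstract measures.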
Abstract:
One of the common failure modes of reinforced concrete (RC) beams strengthened in flexure with a bonded fibre-reinforced polymer (FRP) is intermediate crack (IC) debonding, which originates at a critical section in the vicinity of flexural cracks and propagates towards a plate end. Despite considerable research in recent years, few reliable and simplified IC debonding strength models have been developed. This paper first presents a one-dimensional model based on the discrete crack approach for concrete and the spectral element method for the numerical simulation of the IC debonding process. The progressive formation of flexural cracks and the subsequent concrete-FRP interfacial debonding are formulated through the introduction of a new element able to represent both phenomena simultaneously without perturbing the numerical procedure. Furthermore, with the proposed model the high-frequency dynamic response of these kinds of structures can also be obtained in a very simple and inexpensive way, which makes the procedure useful as a tool for the diagnosis and detection of debonding at an initial stage by monitoring changes in local dynamic characteristics.
Abstract:
Stirred mills are increasingly used for fine and ultra-fine grinding. This technology is still poorly understood when used in the mineral processing context, which makes process optimisation of such devices problematic. 3D DEM simulations of the flow of grinding media in pilot-scale tower mills and pin mills are carried out in order to investigate the relative performance of these stirred mills. In the first part of this paper, media flow patterns and energy absorption rates and distributions were analysed to provide a good understanding of the media flow and the collisional environment in these mills. In this second part we analyse steady-state coherent flow structures, liner stress and wear by impact and abrasion. We also examine mixing and transport efficiency. Together these provide a comprehensive understanding of all the key processes operating in these mills and a clear understanding of the relative performance issues. (C) 2006 Elsevier Ltd. All rights reserved.
Abstract:
Stirred mills are increasingly used for fine and ultra-fine grinding. This technology is still poorly understood when used in the mineral processing context, which makes process optimisation of such devices problematic. 3D DEM simulations of the flow of grinding media in pilot-scale tower mills and pin mills are carried out in order to investigate the relative performance of these stirred mills. Media flow patterns and energy absorption rates and distributions are analysed here. In the second part of this paper, coherent flow structures, equipment wear, and mixing and transport efficiency are analysed. (C) 2006 Published by Elsevier Ltd.
Abstract:
The fluid–particle interaction inside a 150 g/h fluidised bed reactor is modelled. The biomass particle is injected into the fluidised bed, and the heat, momentum and mass transport from the fluidising gas and fluidised sand are modelled. The Eulerian approach is used to model the bubbling behaviour of the sand, which is treated as a continuum. Heat transfer from the bubbling bed to the discrete biomass particle, as well as the biomass reaction kinetics, are modelled according to the literature. The particle motion inside the reactor is computed using drag laws that depend on the local volume fraction of each phase. FLUENT 6.2 has been used as the modelling framework for the simulations, with the whole pyrolysis model incorporated in the form of a user-defined function (UDF). The study completes the fast pyrolysis modelling in bubbling fluidised bed reactors.
Abstract:
In this paper, we propose a new edge-based matching kernel for graphs by using discrete-time quantum walks. To this end, we commence by transforming a graph into a directed line graph. The reasons for using the line graph structure are twofold. First, for a graph, its directed line graph is a dual representation, and each vertex of the line graph represents a corresponding edge in the original graph. Second, we show that the discrete-time quantum walk can be seen as a walk on the line graph and the state space of the walk is the vertex set of the line graph, i.e., the state space of the walk is the edge set of the original graph. As a result, the directed line graph provides an elegant way of developing a new edge-based matching kernel based on discrete-time quantum walks. For a pair of graphs, we compute the h-layer depth-based representation for each vertex of their directed line graphs by computing entropic signatures (computed from discrete-time quantum walks on the line graphs) on the family of h-layer expansion subgraphs rooted at the vertex, i.e., we compute the depth-based representations for edges of the original graphs through their directed line graphs. Based on the new representations, we define an edge-based matching method for the pair of graphs by aligning the h-layer depth-based representations computed through the directed line graphs. The new edge-based matching kernel is thus computed by counting the number of matched vertices identified by the matching method on the directed line graphs. Experiments on standard graph datasets demonstrate the effectiveness of our new kernel.
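The directed line-graph construction the kernel relies on is simple to state in code. A minimal sketch (an undirected graph would first be expanded into pairs of opposite arcs):

```python
def directed_line_graph(edges):
    # Vertices of the line graph are the edges (u, v) of the original
    # graph; an arc joins (u, v) to (v, w) whenever a walk can step
    # from edge (u, v) onto edge (v, w) through the shared vertex v.
    nodes = list(edges)
    arcs = [((u, v), (x, w))
            for (u, v) in edges
            for (x, w) in edges
            if v == x]
    return nodes, arcs

# Directed triangle 1 -> 2 -> 3 -> 1: its line graph is again a 3-cycle
nodes, arcs = directed_line_graph([(1, 2), (2, 3), (3, 1)])
```

This makes the duality concrete: the line graph has exactly one vertex per original edge, so a walk over its vertices is a walk over the original graph's edges, which is the state space the abstract describes.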
Abstract:
People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. 
For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which has implications for traffic simulation as well. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
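The basic building block behind all of the route choice models discussed is the multinomial logit choice probability; the dynamic models compose such probabilities link by link via dynamic programming. A minimal sketch of the static case (the utilities below are illustrative):

```python
import math

def logit_probabilities(utilities):
    # Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j).
    # Subtracting the maximum utility keeps exp() numerically stable.
    m = max(utilities)
    weights = [math.exp(v - m) for v in utilities]
    total = sum(weights)
    return [w / total for w in weights]

# Three routes with deterministic utilities, e.g. minus travel time:
# the shortest route gets the highest choice probability.
p = logit_probabilities([-20.0, -22.0, -25.0])
```

The simple logit assumes independent alternatives; the MEV and mixed logit extensions in the thesis exist precisely because overlapping routes violate that assumption.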
Abstract:
This paper presents an integrated model for an offshore wind turbine that takes into consideration the contribution of marine waves and wind speed perturbations to the power quality of the current injected into the electric grid. The paper deals with the simulation of a floating offshore wind turbine equipped with a permanent magnet synchronous generator and a two-level converter connected to an onshore electric grid. Discrete mass modeling is assessed in order to reveal, by computing the total harmonic distortion, how the perturbations of the captured energy are attenuated at the grid injection point. Two torque actions are considered for the three-mass modeling: the aerodynamic action on the flexible part and on the rigid part of the blades. In addition, a torque due to the influence of marine waves in deep water is considered. Proportional-integral fractional-order control supports the control strategy. A comparison between the drive train models is presented.
Abstract:
This paper presents an integrated model for an offshore wind energy system that takes into consideration the contribution of marine waves and wind speed perturbations to the power quality of the current injected into the electric grid. The paper deals with the simulation of a floating offshore wind turbine equipped with a PMSG and a two-level converter connected to an onshore electric grid. Discrete mass modeling is assessed in order to reveal, by computing the THD, how the perturbations of the captured energy are attenuated at the grid injection point. Two torque actions are considered for the three-mass modeling: the aerodynamic action on the flexible part and on the rigid part of the blades. In addition, a torque due to the influence of marine waves in deep water is considered. PI fractional-order control supports the control strategy. A comparison between the drive train models is presented.
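The total harmonic distortion used above as the power-quality metric is straightforward to compute once the harmonic amplitudes of the injected current are known; a minimal sketch with illustrative values:

```python
import math

def thd(amplitudes):
    # amplitudes = [fundamental, 2nd harmonic, 3rd harmonic, ...].
    # THD = sqrt(sum of squared harmonic amplitudes) / fundamental.
    fundamental, *harmonics = amplitudes
    return math.sqrt(sum(a * a for a in harmonics)) / fundamental

# Injected current with 2nd and 3rd harmonics at 5% and 3% of the
# fundamental: THD is about 5.8%.
ratio = thd([1.0, 0.05, 0.03])
```

In a simulation such as the one described, the amplitudes would come from an FFT of the simulated grid current; a lower THD under wave and wind perturbations indicates better attenuation at the injection point.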