964 results for FLOW MODELS


Relevance:

70.00%

Publisher:

Abstract:

Clinical oncologists and cancer researchers benefit from information on the vascularization or non-vascularization of solid tumors because of blood flow's influence on three popular treatment types: hyperthermia therapy, radiotherapy, and chemotherapy. The objective of this research is the development of a clinically useful tumor blood flow measurement technique. The technique is sensitive, has good spatial resolution, is non-invasive, and presents no risk to the patient beyond that of the usual treatment (measurements are made only after normal patient treatment).

Tumor blood flow was determined by measuring the washout of positron-emitting isotopes created during neutron therapy treatment. To do this, several technical and scientific questions were addressed first: (1) What isotopes are created in tumor tissue when it is irradiated in a neutron therapy beam, and how much of each isotope is expected? (2) What are the chemical states of the isotopes that are potentially useful for blood flow measurements, and will those chemical states allow these or other isotopes to be washed out of the tumor? (3) How should isotope washout by blood flow be modeled in order to make the most effective use of the data? These questions were answered through both theoretical calculation and measurement.

The first question was answered through the measurement of macroscopic cross sections for the predominant nuclear reactions in the body. These results correlate well with an independent mathematical prediction of tissue activation and with measurements of mouse spleen neutron activation. The second question was addressed by performing cell suspension and protein precipitation techniques on neutron-activated mouse spleens. The third and final question was answered by using first physical principles to develop a model mimicking the blood flow system and measurement technique.

In a final set of experiments, the above were applied to flow models and animals. The ultimate aim of this project is to apply its methodology to neutron therapy patients.
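The washout modelling in question (3) can be sketched as a single well-mixed compartment in which physical decay and blood-flow washout act as parallel first-order removal processes. This is a minimal illustration, not the dissertation's model; the function names and the 80 s clearance half-time are invented, while the ~122 s half-life is the known physical value for O-15:

```python
import math

def activity(t, a0, lam, k_flow):
    """Remaining activity in a perfused compartment: radioactive decay
    (lam) and blood-flow washout (k_flow) are parallel first-order
    removal processes, so the rates simply add."""
    return a0 * math.exp(-(lam + k_flow) * t)

def flow_rate_from_halftimes(t_half_obs, t_half_decay):
    """Infer the washout rate constant by subtracting the known physical
    decay rate from the observed clearance rate."""
    k_obs = math.log(2) / t_half_obs
    lam = math.log(2) / t_half_decay
    return k_obs - lam

# Hypothetical example: O-15 physical half-life ~122 s; suppose the
# measured clearance half-time in tumour tissue is 80 s.
k_flow = flow_rate_from_halftimes(80.0, 122.0)
```

The perfusion-related rate is then the excess of observed clearance over pure decay; a faster observed clearance implies higher tumor blood flow.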

Relevance:

70.00%

Publisher:

Abstract:

As part of their development, the predictions of numerical wind flow models must be compared with measurements in order to estimate the uncertainty related to their use. The most rigorous such comparison is performed under blind conditions. The following paper includes a detailed description of three different wind flow models, all based on a Reynolds-averaged Navier-Stokes approach with two-equation k-ε closure, that were tested as part of the Bolund blind comparison (itself based on the Bolund experiment, which measured the wind around a small coastal island). The models are evaluated in terms of predicted normalized wind speed and turbulent kinetic energy at 2 m and 5 m above ground level for a westerly wind direction. Results show that all models predict the mean velocity reasonably well; however, accurate prediction of the turbulent kinetic energy remains a challenge.
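For reference, the standard two-equation k-ε closure named above computes an eddy viscosity from the turbulent kinetic energy k and its dissipation rate ε, each obeying a modelled transport equation. The constants quoted are the standard ones; the individual models in the comparison may use modified values:

```latex
\nu_t = C_\mu \frac{k^2}{\varepsilon}

\frac{\partial k}{\partial t} + U_j \frac{\partial k}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right]
  + P_k - \varepsilon

\frac{\partial \varepsilon}{\partial t} + U_j \frac{\partial \varepsilon}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right]
  + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\frac{\varepsilon^2}{k}

C_\mu = 0.09, \quad C_{1\varepsilon} = 1.44, \quad C_{2\varepsilon} = 1.92, \quad
\sigma_k = 1.0, \quad \sigma_\varepsilon = 1.3
```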

Relevance:

70.00%

Publisher:

Abstract:

A critical assessment is presented of the existing fluid flow models used for dense medium cyclones (DMCs) and hydrocyclones. As the present discussion indicates, the understanding of dense medium cyclone flow is still far from complete. However, its similarity to the hydrocyclone provides a basis for improved understanding of fluid flow in DMCs. The complexity of fluid flow in DMCs arises from the presence of the medium as well as the dominance of turbulence, particle size, and density effects on separation. Both theoretical and experimental analyses are discussed with respect to two-phase motion and solid-phase flow in hydrocyclones and DMCs. A detailed discussion is presented of the empirical, semi-empirical, and numerical models available in the literature, based on both the vorticity-stream function approach and the Navier-Stokes equations in their primitive variables in cylindrical coordinates. The existing equations describing turbulence and multiphase flows in cyclones are also critically reviewed.

Relevance:

70.00%

Publisher:

Abstract:

The paper presents a new network-flow interpretation of Łukasiewicz's logic based on models with increased effectiveness. The obtained results show that the presented network-flow models may, in principle, work for multivalued logics with more than three states of the variables, i.e., with a finite set of states in the interval from 0 to 1. The described models give the opportunity to formulate various logical functions. If the results from a given model, contained in the obtained values of the arc flow functions, are used as input data for other models, then it is possible to successfully interpret other sophisticated logical structures in Łukasiewicz's logic. The obtained models allow Łukasiewicz's logic to be studied with the specific, effective methods of network-flow programming. In particular, the specific peculiarities of, and results pertaining to, the 'traffic capacity of the network arcs' function can be exploited. Based on the introduced network-flow approach it is possible to interpret other multivalued logics as well, such as those of E. Post, L. Brauer, and Kolmogorov.
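The paper's network-flow construction is not reproduced here, but the Łukasiewicz connectives that such models would encode are standard and easy to state. A minimal sketch over the unit interval, restricted to {0, 1/2, 1} for the three-valued case:

```python
def l_and(a, b):
    """Łukasiewicz strong conjunction (t-norm): bounded difference."""
    return max(0.0, a + b - 1.0)

def l_or(a, b):
    """Łukasiewicz strong disjunction (t-conorm): bounded sum."""
    return min(1.0, a + b)

def l_implies(a, b):
    """Łukasiewicz implication."""
    return min(1.0, 1.0 - a + b)

def l_not(a):
    """Łukasiewicz negation."""
    return 1.0 - a

# Three-valued case: truth values restricted to {0, 1/2, 1}.
# These connectives are closed over that set, as the paper's finite-state
# flow models require.
vals = [0.0, 0.5, 1.0]
```

For a logic with n states the value set becomes {0, 1/(n-1), ..., 1}, which is the "finite set of states in the interval from 0 to 1" mentioned above.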

Relevance:

60.00%

Publisher:

Abstract:

The objective of this paper is to develop and validate a mechanistic model for the degradation of phenol by the Fenton process. Experiments were performed in semi-batch operation, in which phenol, catechol and hydroquinone concentrations were measured. Using the methodology described in Pontes and Pinto [R.F.F. Pontes, J.M. Pinto, Analysis of integrated kinetic and flow models for anaerobic digesters, Chemical Engineering Journal 122 (1-2) (2006) 65-80], a stoichiometric model was first developed, with 53 reactions and 26 compounds, followed by the corresponding kinetic model. Sensitivity analysis was performed to determine the most influential kinetic parameters of the model, which were estimated from the experimental results obtained. The adjusted model was used to analyze the impact of the initial concentration and flow rate of reactants on the efficiency of the Fenton process in degrading phenol. Moreover, the model was applied to evaluate the cost of treating wastewater contaminated with phenol in order to meet environmental standards. (C) 2009 Elsevier B.V. All rights reserved.
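The paper's mechanistic model couples 53 reactions; purely as an illustration of how such a kinetic model is integrated in time, a lumped pseudo-first-order surrogate for phenol decay can be stepped forward the same way. The rate constant and step size below are made-up placeholders, not estimated parameters:

```python
def phenol_profile(c0, k, dt, steps):
    """Forward-Euler integration of a lumped pseudo-first-order rate,
    d[phenol]/dt = -k*[phenol]. A stand-in for the full 53-reaction
    kinetic model; k is a hypothetical lumped constant."""
    c = c0
    out = [c]
    for _ in range(steps):
        c += -k * c * dt   # explicit Euler step
        out.append(c)
    return out

# Normalized initial concentration, illustrative rate constant
profile = phenol_profile(c0=1.0, k=0.05, dt=0.1, steps=100)
```

A real calibration would fit k (or the individual rate constants) to the measured phenol, catechol, and hydroquinone profiles, as the sensitivity analysis in the paper does.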

Relevance:

60.00%

Publisher:

Abstract:

Multiphase flow models are widely used in several areas of environmental research, such as fluidized beds, gas dispersion in liquids and various other processes involving more than one physico-chemical property of the medium. Accordingly, a multiphase model was developed and adapted to study bottom sediment transport driven by gravity waves. In this work, a multiphase coupling was built between an Eulerian non-linear Boussinesq-type wave model, based on the numerical formulation found in Wei et al. (1995), and a Lagrangian particle model founded on the Newtonian principle of motion with a hard-sphere collision scheme. The wave model was tested with respect to its generating source, represented by a Gaussian function, a piston paddle and a flap paddle, and with respect to its interaction with depth, through non-linearity and dispersive properties. In the source tests, the Gaussian source, following Wei et al. (1999), showed better consistency and stability in wave generation when compared with linear theory for a kh . The non-linearity of the wave model, of 2nd order in dispersion, gave satisfactory results when compared against an experiment of waves over a trapezoidal obstacle, where the deformation of the wave over the submerged structure agrees with the experimental data found in the literature. The granular model was then also tested in two experiments. The first simulates a dam break in a tank containing water; in the second, the dam break is simulated with a rigid obstacle added at the centre of the tank. In these experiments, the collision algorithm was effective in handling particle-particle and particle-wall interactions, revealing physical processes that are difficult to simulate with regular-mesh models.

For the coupling of the wave and sediment models, the algorithm was tested against literature data on bed morphology. The results were compared with analytical data and with numerical models, and proved satisfactory with respect to the erosion and deposition points and the change in shape of the sandbar.
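The hard-sphere collision scheme used for the Lagrangian particles can be illustrated in one dimension for two equal-mass spheres: momentum is conserved while the relative velocity is reversed and scaled by a restitution coefficient e. This is a toy sketch, not the thesis's algorithm; the value of e is an arbitrary example:

```python
def hard_sphere_collision(v1, v2, e=0.9):
    """Post-collision velocities of two equal-mass hard spheres (1-D).

    Conservation of momentum:  v1' + v2' = v1 + v2
    Restitution condition:     v1' - v2' = -e * (v1 - v2)
    """
    s, d = v1 + v2, v1 - v2
    return (s - e * d) / 2.0, (s + e * d) / 2.0

# Head-on elastic collision (e = 1): the spheres exchange velocities
w1, w2 = hard_sphere_collision(1.0, -1.0, e=1.0)
```

In a full event-driven scheme the same update is applied along the line of centres of each colliding pair, for both particle-particle and particle-wall contacts.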

Relevance:

60.00%

Publisher:

Abstract:

Managing the physical and compute infrastructure of a large data center is an embodiment of a Cyber-Physical System (CPS). The physical parameters of the data center (such as power, temperature, pressure, humidity) are tightly coupled with computations, even more so in upcoming data centers, where the location of workloads can vary substantially due, for example, to workloads being moved in a cloud infrastructure hosted in the data center. In this paper, we describe a data collection and distribution architecture that enables gathering physical parameters of a large data center at a very high temporal and spatial resolution of the sensor measurements. We think this is an important characteristic to enable more accurate heat-flow models of the data center and, with them, find opportunities to optimize energy consumption. Having a high-resolution picture of the data center conditions also enables minimizing local hotspots, performing more accurate predictive maintenance (pending failures in cooling and other infrastructure equipment can be more promptly detected), and more accurate billing. We detail this architecture and define the structure of the underlying messaging system that is used to collect and distribute the data. Finally, we show the results of a preliminary study of a typical data center radio environment.

Relevance:

60.00%

Publisher:

Abstract:

The knowledge of the anisotropic properties beneath the Iberian Peninsula and Northern Morocco has been dramatically improved since late 2007 with the analysis of the data provided by the dense TopoIberia broadband seismic network, the increasing number of permanent stations operating in Morocco, Portugal and Spain, and the contribution of smaller-scale/higher-resolution experiments. Results from the first two TopoIberia deployments have evidenced a spectacular rotation of the fast polarization direction (FPD) along the Gibraltar Arc, interpreted as evidence of mantle flow deflected around the high-velocity slab beneath the Alboran Sea, and a rather uniform N100°E FPD beneath the central Iberian Variscan Massif, consistent with global mantle flow models taking into account contributions of surface plate motion, density variations and net lithosphere rotation. The results from the last IberArray deployment presented here, covering the northern part of the Iberian Peninsula, also show a rather uniform FPD orientation close to N100°E, thus confirming the previous interpretation globally relating the anisotropic parameters to the LPO of mantle minerals generated by mantle flow at asthenospheric depths. However, the degree of anisotropy varies significantly, from delay time values of around 0.5 s beneath NW Iberia to values reaching 2.0 s in its NE corner. The anisotropic parameters retrieved from single events providing high-quality data also show significant differences for stations located in the Variscan units of NW Iberia, suggesting that the region includes multiple anisotropic layers or complex anisotropy systems. These results allow us to complete the map of the anisotropic properties of the westernmost Mediterranean region, which can now be considered one of the best-constrained regions worldwide, with more than 300 sites investigated over an area extending from the Bay of Biscay to the Sahara platform. (C) 2015 Elsevier B.V. All rights reserved.

Relevance:

60.00%

Publisher:

Abstract:

Nowadays, data centers are large energy consumers, and this consumption is expected to increase further in the coming years given the growth of cloud services. A large portion of this power consumption is due to the control of physical parameters of the data center (such as temperature and humidity). However, these physical parameters are tightly coupled with computations, even more so in upcoming data centers, where the location of workloads can vary substantially due, for example, to workloads being moved in the cloud infrastructure hosted in the data center. Therefore, managing the physical and compute infrastructure of a large data center is an embodiment of a Cyber-Physical System (CPS). In this paper, we describe a data collection and distribution architecture that enables gathering physical parameters of a large data center at a very high temporal and spatial resolution of the sensor measurements. We think this is an important characteristic to enable more accurate heat-flow models of the data center and, with them, find opportunities to optimize energy consumption. Having a high-resolution picture of the data center conditions also enables minimizing local hot-spots, performing more accurate predictive maintenance (failures in all infrastructure equipment can be more promptly detected), and more accurate billing. We detail this architecture and define the structure of the underlying messaging system that is used to collect and distribute the data. Finally, we show the results of a preliminary study of a typical data center radio environment.
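The papers define their own messaging structure, which is not reproduced here. Purely as an assumption-laden sketch of what a self-describing sensor reading in such a collection layer might look like (every field name below is invented):

```python
import json
import time

def make_reading(sensor_id, kind, value, unit):
    """Serialize one sensor sample as a flat, self-describing JSON
    message, so the collection layer stays agnostic to sensor type.
    Field names are illustrative, not the papers' schema."""
    return json.dumps({
        "sensor": sensor_id,   # e.g. a rack/position identifier
        "kind": kind,          # "temperature", "humidity", "pressure", ...
        "value": value,
        "unit": unit,
        "ts": time.time(),     # collection timestamp (epoch seconds)
    })

msg = make_reading("rack12/inlet", "temperature", 24.5, "C")
decoded = json.loads(msg)
```

A flat schema like this lets the distribution layer fan readings out to multiple consumers (heat-flow modelling, predictive maintenance, billing) without per-sensor parsing logic.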

Relevance:

60.00%

Publisher:

Abstract:

The research performed a sustainability assessment of supply chains of the anchoveta (Engraulis ringens) in Peru. The corresponding fishery lands 6.5 million t per year, of which <2% is rendered into products for direct human consumption (DHC) and 98% is reduced into feed ingredients (fishmeal and fish oil, FMFO) for export. Several industries compete for the anchoveta resource, generating local and global impacts. The need to understand these dynamics, towards sustainability-improving management and policy recommendations, motivated the development of a sustainability assessment framework: 1) characterisation and modelling of the systems under study (with Life Cycle Assessment and other tools), including local aquaculture; 2) calculation of sustainability indicators (i.e. energy efficiency, nutritional value, socio-economic performance); and 3) sustainability comparison of supply chains, with definition and comparison of alternative exploitation scenarios. Future exploitation scenarios were defined by combining an ecosystem model and a material flow model: continuation of the status quo (Scenario 1), a shift towards an increased proportion of DHC production (Scenario 2), and a radical reduction of the anchoveta harvest in order for other fish stocks to recover and be exploited for DHC (Scenario 3). Scenario 2 was identified as the most sustainable. Management and policy recommendations include improving controls for compliance with management measures, sanitary conditions for DHC, and landing infrastructure for small- and medium-scale (SMS) fisheries; developing a national refrigerated distribution chain; and assigning flexible tolerances for discards from different DHC processes.

Relevance:

60.00%

Publisher:

Abstract:

Quantifying the spatial configuration of hydraulic conductivity (K) in heterogeneous geological environments is essential for accurate predictions of contaminant transport, but is difficult because of the inherent limitations in resolution and coverage associated with traditional hydrological measurements. To address this issue, we consider crosshole and surface-based electrical resistivity geophysical measurements, collected in time during a saline tracer experiment. We use a Bayesian Markov-chain-Monte-Carlo (McMC) methodology to jointly invert the dynamic resistivity data, together with borehole tracer concentration data, to generate multiple posterior realizations of K that are consistent with all available information. We do this within a coupled inversion framework, whereby the geophysical and hydrological forward models are linked through an uncertain relationship between electrical resistivity and concentration. To minimize computational expense, a facies-based subsurface parameterization is developed. The Bayesian-McMC methodology allows us to explore the potential benefits of including the geophysical data into the inverse problem by examining their effect on our ability to identify fast flowpaths in the subsurface, and their impact on hydrological prediction uncertainty. Using a complex, geostatistically generated, two-dimensional numerical example representative of a fluvial environment, we demonstrate that flow model calibration is improved and prediction error is decreased when the electrical resistivity data are included. The worth of the geophysical data is found to be greatest for long spatial correlation lengths of subsurface heterogeneity with respect to wellbore separation, where flow and transport are largely controlled by highly connected flowpaths.
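The Bayesian McMC machinery referred to above can be illustrated, far more simply than the paper's facies-based coupled inversion, by a toy one-parameter random-walk Metropolis sampler. The Gaussian log-likelihood target below is invented for the sketch, not the study's posterior:

```python
import math
import random

def metropolis(loglike, x0, n, step=0.3, seed=1):
    """Random-walk Metropolis: propose a Gaussian perturbation, accept
    with probability min(1, exp(loglike(x') - loglike(x)))."""
    random.seed(seed)
    x, lx = x0, loglike(x0)
    chain = []
    for _ in range(n):
        xp = x + random.gauss(0.0, step)
        lp = loglike(xp)
        if lp >= lx or random.random() < math.exp(lp - lx):
            x, lx = xp, lp
        chain.append(x)
    return chain

# Toy target: log10(K) ~ Normal(mean=-4, sd=0.5)
chain = metropolis(lambda x: -0.5 * ((x + 4.0) / 0.5) ** 2, x0=0.0, n=5000)
```

The real problem replaces the scalar x with a facies-based parameterization of the K field, and the log-likelihood with the misfit of the coupled hydrological and geophysical forward models.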

Relevance:

60.00%

Publisher:

Abstract:

Sustainable resource use is one of the most important environmental issues of our times. It is closely related to discussions on the 'peaking' of various natural resources serving as energy sources, agricultural nutrients, or metals indispensable in high-technology applications. Although the peaking theory remains controversial, it is commonly recognized that a more sustainable use of resources would alleviate negative environmental impacts related to resource use. In this thesis, sustainable resource use is analysed from a practical standpoint, through several different case studies. Four of these case studies relate to resource metabolism in the Canton of Geneva in Switzerland: the aim was to model the evolution of chosen resource stocks and flows in the coming decades. The studied resources were copper (a bulk metal), phosphorus (a vital agricultural nutrient), and wood (a renewable resource). In addition, the case of lithium (a critical metal) was analysed briefly in a qualitative manner and in an electric mobility perspective. In addition to the Geneva case studies, this thesis includes a case study on the sustainability of space life support systems. Space life support systems are systems whose aim is to provide the crew of a spacecraft with the necessary metabolic consumables over the course of a mission. Sustainability was again analysed from a resource use perspective. In this case study, the functioning of two different types of life support systems, ARES and BIORAT, were evaluated and compared; these systems represent, respectively, physico-chemical and biological life support systems. Space life support systems could in fact be used as a kind of 'laboratory of sustainability' given that they represent closed and relatively simple systems compared to complex and open terrestrial systems such as the Canton of Geneva. 
The chosen analysis method in the Geneva case studies was dynamic material flow analysis: dynamic material flow models were constructed for copper, phosphorus, and wood. Besides a baseline scenario, various alternative scenarios (notably involving increased recycling) were also examined. In the case of space life support systems, the methodology of material flow analysis was also employed, but as the available data on the dynamic behaviour of the systems were insufficient, only static simulations could be performed. The results of the case studies in the Canton of Geneva show the following: were resource use to follow population growth, resource consumption would be multiplied by nearly 1.2 by 2030 and by 1.5 by 2080. A complete transition to electric mobility would be expected to increase copper consumption per capita only slightly (+5%), while the lithium demand in cars would increase 350-fold. Phosphorus imports could be decreased by recycling sewage sludge or human urine; however, the health and environmental impacts of these options have yet to be studied. Increasing wood production in the Canton would not significantly decrease the dependence on wood imports, as the Canton's production represents only 5% of total consumption. In the comparison of the space life support systems ARES and BIORAT, BIORAT outperforms ARES in resource use but not in energy use. However, as the systems are dimensioned very differently, it remains questionable whether they can be compared outright. In conclusion, the use of dynamic material flow analysis can provide useful information for policy makers and strategic decision-making; however, uncertainty in reference data greatly influences the precision of the results. Space life support systems constitute an extreme case of resource-using systems; nevertheless, it is not clear how their example could be of immediate use to terrestrial systems.
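At its core, the dynamic material flow analysis used in the Geneva case studies is stock-and-flow bookkeeping over time. A minimal sketch with invented numbers (the loss rate, inflows, and recycling share below are placeholders, not the thesis data):

```python
def simulate_stock(stock0, inflows, recycling_rate=0.0, loss_rate=0.02):
    """Minimal dynamic material-flow model: each year the in-use stock
    gains the inflow, loses a fixed fraction, and recovers a share of
    those losses through recycling. All parameters are illustrative."""
    stock = stock0
    history = [stock]
    for inflow in inflows:
        losses = loss_rate * stock
        stock += inflow + recycling_rate * losses - losses
        history.append(stock)
    return history

# Baseline vs. an increased-recycling scenario over 50 years
base = simulate_stock(1000.0, [20.0] * 50)
recyc = simulate_stock(1000.0, [20.0] * 50, recycling_rate=0.5)
```

Scenario comparison then amounts to running the same bookkeeping with different inflow trajectories or recycling shares, as done for copper, phosphorus, and wood in the thesis.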

Relevance:

60.00%

Publisher:

Abstract:

This work concerns the experimental study of rapid granular shear flows in an annular Couette geometry. The flow is induced by continuous driving of the horizontal plate at the top of the granular bed in an annulus. The compressive pressure, driving torque, instantaneous bed height, and rotational speed of the shearing plate are measured. Moreover, local stress fluctuations are measured in a medium made of steel spheres 2 and 3 mm in diameter. Both monodisperse and bidisperse packings are investigated to reveal the influence of size diversity on the intermittent features of granular materials. Experiments are conducted in an annulus that can contain up to 15 kg of spherical steel balls. The shearing of the granular medium takes place via the rotation of the upper plate, which compresses the material loaded inside the annulus. Fluctuations of the compressive force are measured locally at the bottom of the annulus using a piezoelectric sensor. Rapid shear flow experiments are pursued at different compressive forces and shear rates, and the sensitivity of the fluctuations is then investigated by different means across monodisperse and bidisperse packings. Another important feature of rapid granular shear flows is the formation of ordered structures upon shearing. Obtaining stable flows requires a certain range for the amount of granular material (of uniform size distribution) loaded in the system; this is studied more deeply in this thesis. The results of the current work bring new insights into deformation dynamics and intermittency in rapid granular shear flows. The experimental apparatus is modified in comparison to earlier investigations. The measurements produce data for various quantities continuously sampled from the start of shearing to the end. Static failure and dynamic shearing of a granular medium are investigated. The results of this work reveal important features of failure dynamics and structure formation in the system.

Furthermore, computer simulations are performed in a 2D annulus to examine the nature of kinetic energy dissipation. It is found that turbulent flow models can statistically represent rapid granular flows with high accuracy. In addition to academic outcomes and scientific publications, our results have a number of technological applications associated with grinding, mining, and massive grain storage.

Relevance:

60.00%

Publisher:

Abstract:

The thesis combines the valuation and behavioral economics literatures, which is not common in Finnish management accounting research. Furthermore, valuation is studied in a biotechnology context, and such studies are rather rare as well. The thesis studies valuation in the Finnish biotechnology industry. The concepts of behavioral finance are employed in the empirical part of the study to explore decision-makers' behavior in valuation processes. The main interest of this study is to explore how the subjectivity of a decision-maker affects valuation in the biotechnology industry. Valuation is studied from two perspectives. First, what is the best valuation model for biotechnology companies suggested by the valuation literature? Second, how is valuation done in practice in the biotechnology industry, and how does the decision-makers' subjectivity affect it? The literature review aims at identifying the best valuation model. Real options analysis was found to be the most suitable valuation model for biotechnology companies, especially in the early stages of product development. The real options model's ability to take the value of the inherent options into account results in the theoretically most correct valuations. Its only disadvantage is its complexity when compared to other models, such as discounted cash flow models. The empirical part of the study consists of a case study that examines the valuation practices of Finnish biotechnology companies. Regarding the valuation models used in practice, it was found that the companies were using rather simple valuation models, for two reasons: first, the interviewees did not believe in the valuation models, and second, they were familiar neither with the most sophisticated models nor with all the theoretical aspects of the models they were using. The material for the study was collected through theme interviews with four CEOs of highly successful Finnish biotechnology companies.

Strong signs of the decision-makers' subjectivity in valuation were observed. Most obvious were signs of framing. Furthermore, herding, excessive optimism, and overconfidence were present. All the behavioral concepts observed most likely have a severe effect on valuation. As a result, the valuation can easily become overly optimistic, which leads to overvalued investments and to the continuation of already unprofitable projects. Framing had the strongest evidence: if the product being valued is framed successfully, the risk of overvaluation is high, as a strong belief can justify almost any value.
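The contrast between discounted cash flow models and real options can be made concrete with a toy comparison. Every number below is illustrative, and the binomial lattice is one standard way to value an option to invest, not the thesis's calculation:

```python
def dcf_value(cash_flows, rate):
    """Present value of a series of yearly cash flows."""
    return sum(cf / (1.0 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

def binomial_option(s, k, up, down, r, steps):
    """Value of the right (not the obligation) to invest k in a project
    whose value s moves up or down each period, via backward induction
    on a binomial lattice with risk-neutral probability q."""
    q = (1.0 + r - down) / (up - down)
    # Terminal payoffs: exercise only when the project is worth more than k
    values = [max(s * up**j * down**(steps - j) - k, 0.0)
              for j in range(steps + 1)]
    for n in range(steps, 0, -1):
        values = [(q * values[j + 1] + (1 - q) * values[j]) / (1.0 + r)
                  for j in range(n)]
    return values[0]

# Static NPV of a project: five yearly cash flows of 30 against a cost of 100
npv = dcf_value([30.0] * 5, 0.12) - 100.0
# Option value of being able to wait and invest only if things go well
opt = binomial_option(s=108.1, k=100.0, up=1.3, down=0.8, r=0.05, steps=5)
```

The option value exceeds the static NPV because the downside is truncated at zero, which is exactly the flexibility in early-stage product development that DCF ignores.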

Relevance:

60.00%

Publisher:

Abstract:

Two ongoing projects at ESSC that involve the development of new techniques for extracting information from airborne LiDAR data and combining this information with environmental models will be discussed. The first project, in conjunction with Bristol University, aims to improve 2-D river flood flow models by using remote sensing to provide distributed data for model calibration and validation. Airborne LiDAR can provide such models with a dense and accurate floodplain topography together with vegetation heights for parameterisation of model friction. The vegetation height data can be used to specify a friction factor at each node of a model's finite element mesh. A LiDAR range image segmenter has been developed which converts a LiDAR image into separate raster maps of surface topography and vegetation height for use in the model. Satellite and airborne SAR data have been used to measure flood extent remotely in order to validate the modelled flood extent. Methods have also been developed for improving the models by decomposing the model's finite element mesh to reflect floodplain features such as hedges and trees having different frictional properties from their surroundings. Originally developed for rural floodplains, the segmenter is currently being extended to provide DEMs and friction parameter maps for urban floods by fusing the LiDAR data with digital map data. The second project is concerned with the extraction of tidal channel networks from LiDAR. These networks are important features of the inter-tidal zone and play a key role in tidal propagation and in the evolution of salt-marshes and tidal flats. The study of their morphology is currently an active area of research, and a number of theories related to networks have been developed which require validation using dense and extensive observations of network forms and cross-sections.
The conventional method of measuring networks is cumbersome and subjective, involving manual digitisation of aerial photographs in conjunction with field measurement of channel depths and widths for selected parts of the network. A semi-automatic technique has been developed to extract networks from LiDAR data of the inter-tidal zone. A multi-level knowledge-based approach has been implemented, whereby low level algorithms first extract channel fragments based mainly on image properties then a high level processing stage improves the network using domain knowledge. The approach adopted at low level uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels. The higher level processing includes a channel repair mechanism.
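As a toy analogue of the low-level stage, the pairing of anti-parallel edges can be illustrated on a 1-D elevation cross-section: a falling edge (one bank) followed by a rising edge deep enough to count marks a channel candidate. The threshold and data below are invented, and the real technique works on 2-D LiDAR imagery at multiple scales:

```python
def channel_edges(profile, min_depth=0.2):
    """Scan a 1-D elevation cross-section for a falling edge followed by
    a rising edge and report the (start, end) index extent of each
    channel candidate deeper than min_depth."""
    channels = []
    start = None
    for i in range(1, len(profile)):
        if profile[i] < profile[i - 1] and start is None:
            start = i - 1                       # falling edge: channel bank
        elif profile[i] > profile[i - 1] and start is not None:
            depth = profile[start] - profile[i - 1]
            if depth >= min_depth:              # deep enough to be a channel
                channels.append((start, i - 1))
            start = None
    return channels

# Flat marsh surface with one incised channel
section = [1.0, 1.0, 0.5, 0.2, 0.2, 0.6, 1.0, 1.0]
```

The higher-level stage described above would then link such fragments across neighbouring sections into a connected network and repair gaps using domain knowledge.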