952 results for Full-scale Physical Modelling
Abstract:
A mathematical model is developed for gas-solids flows in circulating fluidized beds. An Eulerian formulation is followed, based on the two-fluid model approach in which both the fluid and the particulate phases are treated as a continuum. The physical modelling is discussed, including the formulation of boundary conditions and the description of the numerical methodology. Results of numerical simulation are presented and discussed. The model is validated through comparison with experiment, and simulations are performed to investigate the effects of the solids viscosity on the flow hydrodynamics.
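For reference, the generic balance equations of a two-fluid Eulerian formulation can be sketched as below; the interphase drag coefficient \(\beta\) and the phase stress tensors \(\boldsymbol{\tau}_k\) require closure models, and this generic form is an assumption here rather than the paper's exact formulation (\(k = g\) for gas, \(k = s\) for solids, with \(\alpha_g + \alpha_s = 1\)):

```latex
% Phase continuity:
\frac{\partial}{\partial t}(\alpha_k \rho_k)
  + \nabla \cdot (\alpha_k \rho_k \mathbf{v}_k) = 0

% Phase momentum, with interphase drag coupling \beta:
\frac{\partial}{\partial t}(\alpha_k \rho_k \mathbf{v}_k)
  + \nabla \cdot (\alpha_k \rho_k \mathbf{v}_k \mathbf{v}_k)
  = -\alpha_k \nabla p
    + \nabla \cdot (\alpha_k \boldsymbol{\tau}_k)
    + \alpha_k \rho_k \mathbf{g}
    \pm \beta \, (\mathbf{v}_g - \mathbf{v}_s)
```

The solids viscosity studied in the paper enters through the closure chosen for \(\boldsymbol{\tau}_s\).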
Abstract:
In the field of molecular biology, scientists for decades adopted a reductionist perspective in their inquiries, being predominantly concerned with the intricate mechanistic details of subcellular regulatory systems. Integrative thinking had nevertheless been applied on a smaller scale in molecular biology, to understand the underlying processes of cellular behaviour, for at least half a century. It was not until the genomic revolution at the end of the previous century that model building was required to account for systemic properties of cellular activity. Our system-level understanding of cellular function is to this day hindered by drastic limitations in our capability to predict cellular behaviour from system dynamics and system structure. To this end, systems biology aims for a system-level understanding of functional intra- and inter-cellular activity. Modern biology produces a volume of data whose comprehension we cannot even attempt without computational support. Computational modelling hence bridges modern biology to computer science, providing a number of capabilities that prove invaluable in the analysis of complex biological systems, such as a rigorous characterization of the system structure, simulation techniques, perturbation analysis, etc. Computational biomodels have grown considerably in size in recent years, with major contributions made towards the simulation and analysis of large-scale models, starting with signalling pathways and culminating in whole-cell models, tissue-level models, organ models and full-scale patient models. The simulation and analysis of models of such complexity very often requires, in fact, the integration of various sub-models, entwined at different levels of resolution and whose organization spans several levels of hierarchy. This thesis revolves around the concept of quantitative model refinement in relation to the process of model building in computational systems biology.
The thesis proposes a sound computational framework for the stepwise augmentation of a biomodel. One starts with an abstract, high-level representation of a biological phenomenon, which is materialised into an initial model and validated against a set of existing data. The model is subsequently refined to include more details regarding its species and/or reactions. The framework is employed in the development of two models, one for the heat shock response in eukaryotes and one for the ErbB signalling pathway. The thesis spans several formalisms used in computational systems biology that are inherently quantitative: reaction-network models, rule-based models and Petri net models, as well as a recent, intrinsically qualitative formalism: reaction systems. The choice of modelling formalism is, however, determined by the nature of the question the modeller aims to answer. Quantitative model refinement turns out to be not only essential in the model development cycle, but also beneficial for the compilation of large-scale models, whose development requires the integration of several sub-models across various levels of resolution and underlying formal representations.
Abstract:
Evidence suggests that children with developmental coordination disorder (DCD) have lower levels of cardiorespiratory fitness (CRF) than children without the condition. However, previous studies were restricted to field-based methods of predicting VO2 peak in the determination of CRF. Such field tests have been criticised over their ability to provide a valid prediction of VO2 peak and their vulnerability to psychological factors in children with DCD, such as low perceived adequacy toward physical activity. Moreover, the contribution of physical activity to the variance in VO2 peak between the two groups is unknown. The purpose of our study was to determine the mediating role of physical activity and perceived adequacy toward physical activity on VO2 peak in children with significant motor impairments. This prospective case-control design involved 122 children (aged 12-13 years): 61 with significant motor impairments and 61 healthy controls matched on age, gender and school location. Participants had previously been assessed for motor proficiency and classified as probable DCD (p-DCD) or healthy control using the Movement ABC test. VO2 peak was measured by a progressive exercise test on a cycle ergometer. Perceived adequacy was measured using a 7-item subscale from the Children's Self-perception of Adequacy and Predilection for Physical Activity scale. Physical activity was monitored for seven days with the Actical® accelerometer. Children with p-DCD had significantly lower VO2 peak (48.76±7.2 ml/ffm/min; p≤0.05) than controls (53.12±8.2 ml/ffm/min), even after correcting for fat-free mass. Regression analysis demonstrated that perceived adequacy and physical activity were significant mediators of the relationship between p-DCD and VO2 peak. In conclusion, using a stringent laboratory assessment, the results of the current study verify the findings of earlier studies, adding low CRF to the list of health consequences associated with DCD.
It seems that when testing for CRF in this population, there is a need to consider the psychological barriers associated with their condition. Moreover, strategies to increase physical activity in children with DCD may result in improvement in their CRF.
Abstract:
Experimental wind tunnel and smoke visualisation testing and CFD modelling were conducted to investigate the effect of air flow control mechanisms and heat sources inside rooms on the performance of wind catchers/towers. For this purpose, a full-scale wind catcher was connected to a test room and positioned centrally in an open boundary wind tunnel. Pressure coefficients (Cp) around the wind catcher and the air flow into the test room were established. The performance of the wind catcher depends greatly on the wind speed and direction. The incorporation of dampers and an egg crate grille at ceiling level reduces and regulates the air flow rate, with an average pressure loss coefficient of 0.01. The operation of the wind catcher in the presence of heat sources will potentially lower the internal temperatures in line with the external temperatures.
Abstract:
Wind catcher systems have been employed in buildings in the Middle East for many centuries and they are known by different names in different parts of the region. Recently there has been an increase in the application of this approach for natural ventilation and passive cooling in the UK and other countries. This paper presents the results of experimental wind tunnel and smoke visualisation testing, combined with CFD modelling, to investigate the performance of the wind catcher. For this purpose, a full-scale commercial system was connected to a test room and positioned centrally in an open boundary wind tunnel. Because much ventilation design involves the use of computational fluid dynamics, the measured performance of the system was also compared against the results of CFD analysis. Configurations included both a heated and unheated space to determine the impact of internal heat sources on airflow rate. Good comparisons between measurement and CFD analysis were obtained. Measurements showed that sufficient air change could be achieved to meet both air quality needs and passive cooling.
Abstract:
In the 1990s the Message Passing Interface Forum defined MPI bindings for Fortran, C, and C++. With the success of MPI, these relatively conservative languages have continued to dominate in the parallel computing community. There are compelling arguments in favour of more modern languages like Java, including portability, better runtime error checking, modularity, and multi-threading. But these arguments have not converted many HPC programmers, perhaps due to the scarcity of full-scale scientific Java codes and the lack of evidence for performance competitive with C or Fortran. This paper tries to redress this situation by porting two scientific applications to Java. Both applications are parallelized using our thread-safe Java messaging system—MPJ Express. The first application is the Gadget-2 code, a massively parallel structure formation code for cosmological simulations. The second application uses the finite-difference time-domain (FDTD) method for simulations in the area of computational electromagnetics. We evaluate and compare the performance of the Java and C versions of these two scientific applications, and demonstrate that the Java codes can achieve performance comparable with legacy applications written in conventional HPC languages. Copyright © 2009 John Wiley & Sons, Ltd.
Abstract:
Following a malicious or accidental atmospheric release in an outdoor environment, it is essential for first responders to ensure safety by identifying areas where human life may be in danger. For this to happen quickly, reliable information is needed on the source strength and location, and the type of chemical agent released. We present here an inverse modelling technique that estimates the source strength and location of such a release, together with the uncertainty in those estimates, using a limited number of concentration measurements from a network of chemical sensors, considering a single, steady, ground-level source. The technique is evaluated using data from a set of dispersion experiments conducted in a meteorological wind tunnel, where simultaneous measurements of concentration time series were obtained in the plume from a ground-level point-source emission of a passive tracer. In particular, we analyze the sensitivity to the number of sensors deployed and their arrangement, and to sampling and model errors. We find that the inverse algorithm can generate acceptable estimates of the source characteristics with as few as four sensors, provided these are well placed and the sampling error is controlled. Configurations with at least three sensors in a profile across the plume were found to be superior to the other arrangements examined. Analysis of the influence of sampling error due to the use of short averaging times showed that the uncertainty in the source estimates grew as the sampling time decreased. This demonstrated that averaging times greater than about 5 min (full-scale time) lead to acceptable accuracy.
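A minimal sketch of this kind of source inversion, assuming a deliberately simplified Gaussian-plume forward model, a grid search for the source location and linear least squares for the strength; the paper's actual dispersion model, sensor layout and uncertainty treatment are not reproduced here, and all numbers are illustrative:

```python
import numpy as np

def plume(x, y, q, xs, ys, u=2.0, k=0.1):
    """Very simplified Gaussian-plume forward model for a ground-level point
    source of strength q at (xs, ys); u is wind speed along x, k sets the
    plume spread. Illustrative only -- not the paper's dispersion model."""
    dx = x - xs
    conc = np.zeros_like(dx, dtype=float)
    down = dx > 0                              # concentration only downwind
    sig = k * dx[down]                         # spread grows with distance
    conc[down] = q / (np.pi * u * sig**2) * \
        np.exp(-((y - ys)[down]**2) / (2 * sig**2))
    return conc

# Synthetic "measurements" from four sensors, three in a crosswind profile,
# for a true source at (0, 0) with strength q = 5
sx = np.array([50.0, 50.0, 50.0, 100.0])
sy = np.array([-10.0, 0.0, 10.0, 0.0])
obs = plume(sx, sy, 5.0, 0.0, 0.0)

# Grid search over candidate locations; for each location the strength
# follows by linear least squares, since concentration is linear in q.
best = None
for xs in np.linspace(-20, 20, 41):
    for ys in np.linspace(-20, 20, 41):
        g = plume(sx, sy, 1.0, xs, ys)         # unit-strength response
        if g @ g == 0:
            continue
        q = (g @ obs) / (g @ g)                # least-squares strength
        r = np.sum((obs - q * g)**2)           # residual misfit
        if best is None or r < best[0]:
            best = (r, xs, ys, q)

_, xs_hat, ys_hat, q_hat = best
print(f"estimated source: x={xs_hat:.1f}, y={ys_hat:.1f}, q={q_hat:.2f}")
```

With noise-free synthetic data the grid search recovers the true source exactly; in practice the residual surface would also be used to quantify the uncertainty in the estimates.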
Abstract:
The large scale urban consumption of energy (LUCY) model simulates all components of anthropogenic heat flux (QF) from the global to the individual city scale at 2.5 × 2.5 arc-minute resolution. This includes a database of different working patterns and public holidays, vehicle use and energy consumption in each country. The databases can be edited to include specific diurnal and seasonal vehicle and energy consumption patterns, local holidays and flows of people within a city. If better information about individual cities becomes available within this (open-source) database, the accuracy of this model can only improve, providing the community with data from the scale of global climate modelling down to the individual city scale in the future. The results show that QF varied widely through the year, through the day, between countries and between urban areas. An assessment of the estimated heat emissions revealed that they are reasonably close to those produced by a global model and a number of small-scale city models, so results from LUCY can be used with a degree of confidence. From LUCY, the global mean urban QF has a diurnal range of 0.7–3.6 W m−2, and is greater on weekdays than at weekends. The heat release from buildings is the largest contributor (89–96%) to heat emissions globally. Differences between months are greatest in the middle of the day (up to 1 W m−2 at 1 pm). December to February, the coldest months in the Northern Hemisphere, have the highest heat emissions. July and August are at the higher end. The least QF is emitted in May. The highest individual grid-cell heat fluxes in urban areas were located in New York (577), Paris (261.5), Tokyo (178), San Francisco (173.6), Vancouver (119) and London (106.7), values in W m−2. Copyright © 2010 Royal Meteorological Society
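The accounting described above can be sketched as a profile-weighted sum of building, vehicle and metabolic components; the profiles and magnitudes below are invented placeholders for illustration, not LUCY's database values:

```python
import math

HOURS = range(24)

def diurnal_weight(hour, peak=13, width=5.0):
    """Smooth daytime weighting peaking in the early afternoon (illustrative)."""
    return math.exp(-((hour - peak) / width) ** 2)

def qf_profile(q_building=2.5, q_vehicle=0.4, q_metabolism=0.1, weekday=True):
    """Hourly QF (W m^-2) as building + vehicle + metabolic contributions."""
    traffic_scale = 1.0 if weekday else 0.6    # weekends carry less traffic
    profile = []
    for h in HOURS:
        w = diurnal_weight(h)
        qf = q_building * (0.5 + 0.5 * w) \
           + q_vehicle * traffic_scale * w \
           + q_metabolism
        profile.append(qf)
    return profile

weekday = qf_profile(weekday=True)
weekend = qf_profile(weekday=False)
print(f"weekday QF range: {min(weekday):.2f}-{max(weekday):.2f} W m^-2")
```

Editing the per-country databases mentioned in the abstract amounts to replacing these placeholder profiles and magnitudes with city-specific ones.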
Abstract:
Biological nitrogen removal is an important task in wastewater treatment. However, the actual removal of total nitrogen (TN) in the wastewater treatment plant (WWTP) is often unsatisfactory due to several causes, one of which is the insufficient availability of carbon source. One possible approach to improve nitrogen removal is therefore the addition of an external carbon source, the amount of which is directly related to the operating cost of a WWTP. It is thus necessary to determine accurately the amount of external carbon source to add according to the demand, which depends on the influent wastewater quality. This study focused on the real-time control of external carbon source addition based on on-line monitoring of influent wastewater quality. The relationship between the influent wastewater quality (specifically the concentrations of COD and ammonia) and the demand for carbon source was investigated through experiments on a pilot-scale A/O reactor (1 m3) at the Nanjing WWTP, China. The minimum doses of carbon source addition under different influent wastewater qualities were determined to ensure that the effluent quality meets the discharge standard. The obtained relationship is expected to be applied in full-scale WWTPs.
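The dosing logic can be sketched as follows; the COD-per-nitrogen ratio and the assumed fraction of influent COD available for denitrification are generic rule-of-thumb values (practical ratios are commonly quoted around 4-6 g COD per g NO3-N), not the relationship fitted in this pilot study:

```python
def external_carbon_dose(n_to_denitrify_mg_l, influent_cod_mg_l,
                         cod_available_frac=0.5, cod_per_n=5.0):
    """Rule-of-thumb external carbon demand (as mg/L COD equivalents).

    n_to_denitrify_mg_l : nitrogen to be removed by denitrification (mg N/L)
    influent_cod_mg_l   : measured influent COD (mg/L)
    cod_available_frac  : assumed fraction of influent COD usable as
                          carbon source (hypothetical placeholder)
    cod_per_n           : assumed g COD consumed per g NO3-N removed
    """
    demand = cod_per_n * n_to_denitrify_mg_l       # total carbon demand
    available = cod_available_frac * influent_cod_mg_l
    return max(0.0, demand - available)            # dose only the shortfall

# Carbon-rich influent: no external dose needed
print(external_carbon_dose(10.0, 200.0))
# Carbon-poor influent: dose makes up the shortfall
print(external_carbon_dose(10.0, 40.0))
```

In a real-time control scheme, the on-line COD and ammonia measurements would feed such a relationship continuously, with the coefficients calibrated from pilot data rather than assumed.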
Abstract:
This study aims to assess the potential for industrial reuse of textile wastewater, after physical and chemical pretreatment, in denim washing wet-processing operations in an industrial textile laundry, with no need for complementary treatment or dilution. The methodology and evaluation of the proposed tests were based on the production techniques used in the company and adapted for the experiments. The characterization of the treated effluent for 16 selected parameters, and the development of monitoring able to qualify the treated effluent for final disposal in accordance with current legislation, were essential prerequisites for the reuse tests. The parameters color, turbidity, SS and pH proved satisfactory as control variables and have simple determination methods. The denim quality variables considered were color, odor, appearance and soft handle. Testing began on a pilot scale, following complexity factors attributed to the processes, on denim fabric and jeans; these tests demonstrated the possibility of reuse, since there was no interference with the processes or with the quality of the tested product. Industrial-scale tests began with a control step that confirmed the efficiency of the methodology applied to identify the possibility of reuse, through tests preceding each recipe to be processed. In total, 556 replicates were performed at production scale for 47 different denim-washing recipes. The percentage of water reuse was 100% for all processes and repetitions performed after the initial adjustment testing phase. All the jeans were rated at the highest quality level by internal control and were marketed and accepted by contractors. The full-scale use of treated wastewater, supported by the monitoring, evaluation and control methodology suggested in this study, proved valid in textile production, with no negative impact on the quality of the jeans produced under the conditions presented. 
It is believed that this methodology can be extrapolated to other laundries, with the modifications necessary for each company, to determine the possibility of reuse in denim washing wet processing.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
To study electromagnetic geophysical prospecting problems through analogue modelling, full-scale field conditions are reproduced in the laboratory at reduced scale in accordance with similitude theory. Thus, to investigate problems in the VLF, AFMAG and MT techniques, it is frequently necessary to create a uniform field in the experimental set-up. The physical systems for generating uniform fields studied here are the circular coil, the Helmholtz coil, the solenoid, a single current sheet, and two parallel current sheets. Maps of the percentage deviation of the field are presented for all the systems studied. A comparative study of these systems shows that the solenoid is the most efficient way to create a uniform field, followed by the Helmholtz coil system. However, the field created inside a solenoid lies in an enclosed space, where it is difficult to place models and exchange them between experiments. The use of Helmholtz coils is therefore recommended for creating a uniform field: this system provides a uniform field with sufficient open space, which facilitates the experiment.
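As a worked example of why the Helmholtz pair gives a usefully uniform field, the on-axis field of two coaxial loops separated by one coil radius follows directly from the single-loop formula; the coil radius, current and sample points below are illustrative values, not those of the paper's experimental set-up:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (T*m/A)

def loop_axial_field(I, R, z):
    """On-axis field of a single circular loop of radius R, current I,
    at axial distance z from the loop plane (Biot-Savart result)."""
    return MU0 * I * R**2 / (2 * (R**2 + z**2)**1.5)

def helmholtz_field(I, R, z):
    """Two coaxial loops separated by R (Helmholtz spacing), centred at z=0."""
    return loop_axial_field(I, R, z - R / 2) + loop_axial_field(I, R, z + R / 2)

# Percentage deviation from the central field along the axis
I, R = 1.0, 0.5  # 1 A, 0.5 m radius (illustrative)
B0 = helmholtz_field(I, R, 0.0)
for z in (0.0, 0.05, 0.10, 0.15):
    dev = 100 * (helmholtz_field(I, R, z) - B0) / B0
    print(f"z = {z:.2f} m: deviation = {dev:+.4f} %")
```

At the centre the field reduces to the familiar (4/5)^(3/2) mu0 I / R per turn pair, and the first and second axial derivatives cancel by construction, which is why the deviation stays small over an open working volume.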
Abstract:
Field experiments have demonstrated that piles driven into sand can respond to axial cyclic loading in Stable, Unstable or Meta-Stable ways, depending on the combinations of mean and cyclic loads and the number of cycles. An understanding of the three styles of response is provided by experiments involving a highly instrumented model displacement pile and an array of soil stress sensors installed in fine sand in a pressurised calibration chamber. The different patterns of effective stress developing on and around the shaft are reported, along with the results of static load tests that track the effects on shaft capacity. The interpretation links these observations to the sand's stress-strain behaviour. The interface-shear characteristics, the kinematic yielding, the local densification, the growth of a fractured interface-shear zone and the restrained dilatancy at the pile-soil interface are all found to be important. The model tests are shown to be compatible with the full-scale behaviour and to provide key information for improving the modelling and the design rules. (C) 2012 The Japanese Geotechnical Society. Production and hosting by Elsevier B.V. All rights reserved.
Abstract:
The use of the core-annular flow pattern, where a thin fluid surrounds a very viscous one, has been suggested as an attractive artificial-lift method for heavy oils in the current Brazilian ultra-deepwater production scenario. This paper reports the pressure drop measurements and the core-annular flow observed in a 2 7/8-inch, 300-meter-deep pilot-scale well conveying a mixture of heavy crude oil (2000 mPa.s and 950 kg/m3 at 35 °C) and water at several combinations of the individual flow rates. The two-phase pressure drop data are compared with those of single-phase oil flow to assess the gains due to water injection. Another issue is the handling of the core-annular flow once it has been established. High-frequency pressure-gradient signals were collected, and a treatment based on the Gabor transform together with neural networks is proposed as a promising solution for monitoring and control. The preliminary results are encouraging. The pilot-scale tests, including long-term experiments, were conducted in order to investigate the applicability of using water to transport heavy oils in actual wells. This represents an important step towards the full-scale application of the proposed artificial-lift technology. The registered improvements in terms of oil production rate and pressure drop reduction are remarkable.
Abstract:
The thesis analyses the main sources of aircraft noise and the state of the art from the regulatory, technological and procedural points of view. The state of the art in aircraft classification is also analysed, and a new performance index is proposed as an alternative to the one indicated by the certification methodology (AC36-ICAO). With the aim of reducing the acoustic impact of aircraft during landing, the INM program is used to analyse the benefits of 3° CDA procedures compared with traditional procedures and, subsequently, of CDA procedures at larger angles, in terms of the reduction in length and area of the SEL85, SEL80 and SEL75 noise contours.