588 results for CHANDRASEKHAR MASS MODELS
at Queensland University of Technology - ePrints Archive
Abstract:
This paper addresses the problem of joint identification of infinite-frequency added mass and fluid-memory models of marine structures from finite-frequency data. This problem is relevant for cases where the code used to compute the hydrodynamic coefficients of the marine structure does not give the infinite-frequency added mass. This case is typical of codes based on 2D potential theory, since most 3D-potential-theory codes solve the boundary-value problem associated with the infinite frequency. The method proposed in this paper is a simpler alternative to other methods previously presented in the literature. Its advantage is that the same identification procedure can be used to identify the fluid-memory models with or without access to the infinite-frequency added mass coefficient, thereby providing an extension that puts the two identification problems into the same framework. The method also exploits constraints related to the relative degree and low-frequency asymptotic values of the hydrodynamic coefficients, derived from the physics of the problem, which are used as prior information to refine the obtained models.
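Illustrative sketch (not the authors' algorithm): a linearised least-squares (Levy-type) fit of a low-order rational transfer function to a synthetic fluid-memory frequency response of the kind assembled from tabulated added-mass and damping data. The data, model orders and function names below are placeholders.

```python
import numpy as np

def fit_rational(w, H, n_num, n_den):
    """Linearised least-squares fit of H(jw) ~ P(jw)/Q(jw) with Q(0) = 1."""
    s = 1j * w
    cols = [s**k for k in range(n_num + 1)]              # numerator terms P(s)
    cols += [-H * s**k for k in range(1, n_den + 1)]     # -H(s) * (Q(s) - 1) terms
    M = np.column_stack(cols)
    A_ls = np.vstack([M.real, M.imag])                   # stack real/imag so unknowns stay real
    b_ls = np.concatenate([H.real, H.imag])
    theta, *_ = np.linalg.lstsq(A_ls, b_ls, rcond=None)
    p = theta[:n_num + 1]                                 # ascending powers of s
    q = np.concatenate([[1.0], theta[n_num + 1:]])
    return p, q

# Synthetic response standing in for K(jw) = B(w) + jw*(A(w) - A_inf), the
# fluid-memory response formed from tabulated added mass A(w) and damping B(w).
w = np.linspace(0.1, 3.0, 60)
K_data = (2.0 * 1j * w) / ((1j * w)**2 + 0.8 * 1j * w + 1.5)
p, q = fit_rational(w, K_data, n_num=1, n_den=2)
print("numerator coefficients:", np.round(p, 3))
print("denominator coefficients:", np.round(q, 3))
```

In practice the response would be built from the computed hydrodynamic coefficients, and the relative-degree and low-frequency constraints mentioned in the abstract would be imposed through the chosen model orders and fixed coefficients.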
Abstract:
The book is a joint effort of eight academics and journalists, Europe specialists from six countries (Australia, Germany, Poland, Slovenia, the United Kingdom and the United States). They give sometimes divergent views on the future of the so-called “European Project”, for building a common European economy and society, but agree that cultural changes, especially changes experienced through mass media, are rapidly taking place. One of the central interests of the book is the operation of the large media centre located at the European Commission in Brussels – the world’s largest gallery of permanently accredited correspondents. Jacket notes: The Lisbon Treaty of December 2009 is the latest success of the European Union’s drive to restructure and expand; yet questions persist about how democratic this new Europe might be. Will Brussels’ promotion of the “European idea” produce a common European culture and society? The authors consider it might, as a culture of everyday shared experience, though old ways are cherished, citizens forever thinking twice about committing to an uncertain future. The book focuses on mass media, as a prime agent of change, sometimes used deliberately to promote a “European project”; sometimes acting more naturally as a medium for new agendas. It looks at proposed media models for Europe, ranging from not very successful pan-European television, to the potentials of media systems based on national markets, and new media based on digital formats. It also studies the Brussels media service, the centre operated by the European Commission, which is the world’s largest concentration of journalists; and ways that dominant national media may come to serve the interests of communities now extending across frontiers. Europe and the Media notes change especially as encountered by new EU member countries of central and eastern Europe.
Abstract:
With the advances in computer hardware and software development techniques over the past 25 years, digital computer simulation of train movement and traction systems has been widely adopted as a standard computer-aided engineering tool [1] during the design and development stages of existing and new railway systems. Simulators of different approaches and scales are used extensively to investigate various kinds of system studies. Simulation has proven to be the cheapest means of carrying out performance prediction and system behaviour characterisation. When computers were first used to study railway systems, they were mainly employed to perform repetitive but time-consuming computational tasks, such as matrix manipulations for power network solutions and exhaustive searches for optimal braking trajectories. With only simple high-level programming languages available at the time, full advantage of the computing hardware could not be taken. Hence, structured simulations of the whole railway system were not very common; most applications focused on isolated parts of the railway system, and it is more appropriate to regard them as mechanised calculations rather than simulations. A railway system, however, consists of a number of subsystems, such as train movement, power supply and traction drives, which inevitably contain many complexities and diversities. These subsystems interact frequently with each other while the trains are moving, and they have their own special features in different railway systems. To further complicate the simulation requirements, constraints such as track geometry, speed restrictions and friction have to be considered, not to mention possible non-linearities and uncertainties in the system. In order to provide a comprehensive and accurate account of system behaviour through simulation, a large amount of data has to be organised systematically to ensure easy access and efficient representation, and the interactions and relationships among the subsystems should be defined explicitly. These requirements call for sophisticated and effective simulation models for each component of the system. The software development techniques available nowadays allow the evolution of such simulation models. Not only can the applicability of the simulators be greatly enhanced by advanced software design, but maintainability and modularity for easy understanding and further development, and portability across hardware platforms, are also improved. The objective of this paper is to review the development of a number of approaches to simulation models. Attention is given, in particular, to models for train movement, power supply systems and traction drives. These models have been used successfully to resolve various ‘what-if’ issues in a wide range of applications, such as speed profiles, energy consumption and run times.
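Illustrative sketch of the train-movement component such simulators contain: explicit time-stepping of Newton's second law with an assumed tractive-effort curve, Davis-type running resistance, a constant gradient and a speed limit. All coefficients are invented for illustration, not taken from the paper.

```python
def simulate_run(mass_t=400.0, dt=0.5, t_end=300.0, grade=0.005, v_limit=25.0):
    """Return a list of (time [s], position [m], speed [m/s]) for one powered run."""
    m = mass_t * 1000.0                              # train mass [kg]
    t, x, v = 0.0, 0.0, 0.0
    log = []
    while t < t_end:
        F_max = min(250e3, 4.5e6 / max(v, 1.0))      # adhesion- then power-limited effort [N]
        F_tract = F_max if v < v_limit else 0.0      # coast once the speed limit is reached
        R_run = 5e3 + 120.0 * v + 8.0 * v**2         # Davis-type running resistance [N]
        F_grade = m * 9.81 * grade                   # gradient force [N], grade as a fraction
        v = max(v + dt * (F_tract - R_run - F_grade) / m, 0.0)
        x += v * dt
        t += dt
        log.append((t, x, v))
    return log

t, x, v = simulate_run()[-1]
print(f"after {t:.0f} s: {x/1000:.2f} km travelled, speed {v*3.6:.1f} km/h")
```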
Abstract:
A time series method for the determination of combustion chamber resonant frequencies is outlined. The technique employs Markov-chain Monte Carlo (MCMC) to infer the parameters of a chosen model of the data. The development of the model is included and the resonant frequency is characterised as a function of time. Potential applications for cycle-by-cycle analysis are discussed, and the bulk temperature of the gas and the trapped mass in the combustion chamber are evaluated as a function of time from the resonant frequency information.
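Illustrative sketch of the inference step (the published model and priors are not reproduced here): a random-walk Metropolis sampler recovering the frequency of a synthetic decaying sinusoid, initialised from the spectral peak. The decay time and noise level are assumed known for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.01, 500)                     # 10 ms analysis window
f_true, tau, sigma = 6500.0, 0.004, 0.1             # synthetic resonance and noise level
y = np.exp(-t / tau) * np.sin(2 * np.pi * f_true * t) + sigma * rng.standard_normal(t.size)

def log_post(theta):
    f, a = theta
    if not (3000.0 < f < 10000.0 and 0.0 < a < 10.0):        # flat priors
        return -np.inf
    model = a * np.exp(-t / tau) * np.sin(2 * np.pi * f * t)
    return -0.5 * np.sum((y - model) ** 2) / sigma**2        # Gaussian likelihood

freqs = np.fft.rfftfreq(t.size, t[1] - t[0])
f0 = freqs[np.argmax(np.abs(np.fft.rfft(y)))]                # start near the spectral peak
theta = np.array([f0, 0.5])
lp = log_post(theta)
f_samples = []
for _ in range(20000):                                       # random-walk Metropolis
    prop = theta + rng.normal(0.0, [5.0, 0.05])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    f_samples.append(theta[0])
f_samples = np.array(f_samples[5000:])                       # discard burn-in
print(f"resonant frequency: {f_samples.mean():.0f} +/- {f_samples.std():.0f} Hz")
```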
Abstract:
Groundwater is a major resource on Bribie Island and its sustainable management is essential to maintain the natural and modified ecosystems, as well as the human population and the integrity of the island as a sand mass. An effective numerical model is essential to enable predictions and to test various water use and rainfall/climate scenarios. Such a numerical model must, however, be based on a representative conceptual hydrogeological model to allow incorporation of realistic controls and processes. Here we discuss the various hydrogeological models and parameters, and the hydrological properties of the materials forming the island. We discuss the hydrological processes and how they can be incorporated into these models in an integrated manner. Processes include recharge, discharge to wetlands and along the coastline, abstraction, evapotranspiration and potential seawater intrusion. The types and distributions of groundwater bores and monitoring are considered, as are scenarios for groundwater supply abstraction. Different types of numerical models and their applicability are also considered.
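Illustrative sketch of the simplest numerical model such a conceptualisation might feed: a one-dimensional unconfined (Dupuit/Boussinesq) cross-section with uniform recharge, fixed sea-level boundaries and a single abstraction flux, solved by explicit finite differences. Geometry, hydraulic conductivity, recharge and pumping values are placeholders, not Bribie Island data.

```python
import numpy as np

L, n = 2000.0, 41                  # island cross-section width [m], grid nodes
dx = L / (n - 1)
K, Sy = 10.0, 0.2                  # hydraulic conductivity [m/d], specific yield [-]
base = 20.0                        # saturated thickness at the coast (sea level) [m]
recharge = 0.0015                  # net recharge [m/d]
pump = np.zeros(n); pump[n // 2] = 0.004   # abstraction as an equivalent flux [m/d]

h = np.full(n, base)               # saturated thickness above the aquifer base [m]
dt = 0.25                          # time step [d]
for _ in range(40000):             # ~27 years of simulated time
    hm = 0.5 * (h[1:] + h[:-1])                    # saturated thickness at cell faces
    q = -K * hm * np.diff(h) / dx                  # Darcy flux per unit width [m^2/d]
    h[1:-1] += dt * (-np.diff(q) / dx + recharge - pump[1:-1]) / Sy
    h[0] = h[-1] = base                            # coastal boundaries fixed at sea level
print(f"water-table mound near the island centre: {h.max() - base:.2f} m above sea level")
```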
Abstract:
Even though titanium dioxide photocatalysis has been promoted as a leading green technology for water purification, many issues have hindered its application on a large commercial scale. For the materials scientist, the main issues have centred on the synthesis of more efficient materials and the investigation of degradation mechanisms; for the engineer, the main issues have been the development of appropriate models and the evaluation of intrinsic kinetic parameters that allow the scale-up or re-design of efficient large-scale photocatalytic reactors. In order to obtain intrinsic kinetic parameters, the reaction must be analysed and modelled considering the influence of the radiation field, pollutant concentrations and fluid dynamics. In this way, the obtained kinetic parameters are independent of reactor size and configuration and can subsequently be used for scale-up purposes or for the development of entirely new reactor designs. This work investigates the intrinsic kinetics of phenol degradation over a titania film, chosen for the practicality of a fixed-film configuration over a slurry. A flat plate reactor was designed so that the reaction parameters, including UV irradiance, flow rate, pollutant concentration and temperature, could be controlled. Particular attention was paid to the investigation of the radiation field over the reactive surface and to the issue of mass-transfer-limited reactions. The ability of different emission models to describe the radiation field was investigated and compared with actinometric measurements; the RAD-LSI model was found to give the best predictions over the conditions tested. Mass transfer often limits fixed-film reactors. The influence of this phenomenon was investigated with specifically planned sets of benzoic acid experiments and with the adoption of the stagnant film model. The phenol mass-transfer coefficient in the system was calculated to be k_m,phenol = 8.5815 x 10^-7 Re^0.65 (m s^-1). The data obtained from a wide range of experimental conditions, together with an appropriate model of the system, enabled the determination of intrinsic kinetic parameters. The experiments were performed at four irradiance levels (70.7, 57.9, 37.1 and 20.4 W m^-2), combined with three initial phenol concentrations (20, 40 and 80 ppm), to give a wide range of final pollutant conversions (from 22% to 85%). The simple model adopted was able to fit this wide range of conditions with only four kinetic parameters: two reaction rate constants (one for phenol and one for the family of intermediates) and their corresponding adsorption constants. The intrinsic kinetic parameter values were determined as k_ph = 0.5226 mmol m^-1 s^-1 W^-1, k_I = 0.120 mmol m^-1 s^-1 W^-1, K_ph = 8.5 x 10^-4 m^3 mmol^-1 and K_I = 2.2 x 10^-3 m^3 mmol^-1. The flat plate reactor allowed the reaction to be investigated under two different light configurations, liquid-side and substrate-side illumination. The latter is of particular interest for real-world applications, where light absorption due to turbidity and pollutants in the water stream to be treated could represent a significant issue. The two light configurations allowed the investigation of the effects of film thickness and the determination of the optimal catalyst thickness.
The experimental investigation confirmed the predictions of a porous medium model developed to investigate the influence of diffusion, advection and photocatalytic phenomena inside the porous titania film, with the optimal thickness identified as 5 µm. The model used the intrinsic kinetic parameters obtained from the flat plate reactor to predict the influence of thickness and transport phenomena on the final observed phenol conversion without using any correction factor; the excellent match between predictions and experimental results provided further proof of the quality of the parameters obtained with the proposed method.
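Illustrative use of the reported constants: the sketch below evaluates an assumed Langmuir-Hinshelwood-type rate expression (the abstract lists the fitted parameters but not the exact functional form) together with the reported mass-transfer correlation. The rate form, the interpretation of the units and the Reynolds number used are assumptions.

```python
k_ph, k_I = 0.5226, 0.120          # reaction rate constants [mmol m^-1 s^-1 W^-1]
K_ph, K_I = 8.5e-4, 2.2e-3         # adsorption constants [m^3 mmol^-1]

def phenol_rate(I, C_ph, C_I=0.0):
    """Assumed L-H form: rate [mmol m^-3 s^-1] at irradiance I [W m^-2] and
    concentrations [mmol m^-3] of phenol and lumped intermediates."""
    return k_ph * I * K_ph * C_ph / (1.0 + K_ph * C_ph + K_I * C_I)

def k_m_phenol(Re):
    """Reported mass-transfer correlation, k_m [m s^-1] vs. Reynolds number."""
    return 8.5815e-7 * Re**0.65

C_ph0 = 20.0 * 1000.0 / 94.11      # 20 ppm phenol expressed in mmol m^-3 (M = 94.11 g/mol)
print(f"assumed initial rate at 70.7 W m^-2: {phenol_rate(70.7, C_ph0):.2f} mmol m^-3 s^-1")
print(f"k_m at Re = 500 (illustrative): {k_m_phenol(500):.2e} m s^-1")
```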
Abstract:
Fruit drying is a process of removing moisture to preserve fruit by preventing microbial spoilage. It increases shelf life, reduces weight and volume (thereby minimising packing, storage and transportation costs) and enables storage of food under ambient conditions. However, it is a complex process involving coupled heat and mass transfer together with changes in physical properties and shrinkage of the material. Against this background, the aim of this paper is to develop a mathematical model to simulate coupled heat and mass transfer during convective drying of fruit. The model can be used to predict the temperature and moisture distribution inside the fruit during drying. Two models were developed, considering shrinkage-dependent and temperature-dependent moisture diffusivity, and the results were compared. The governing equations of heat and mass transfer were solved and a parametric study was carried out with Comsol Multiphysics 4.3. The predicted results were validated against experimental data.
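Illustrative sketch of a coupled formulation of this kind: a 1D slab solved with explicit finite differences, a temperature-dependent (Arrhenius-type) moisture diffusivity and convective boundary conditions, with shrinkage and evaporative cooling neglected for brevity. Property values are placeholders; the paper's own model was solved in Comsol Multiphysics.

```python
import numpy as np

Lz, n = 0.01, 51                        # slab half-thickness [m], grid nodes
dz = Lz / (n - 1)
rho, cp, k = 950.0, 3600.0, 0.5         # density, heat capacity, conductivity (tissue-like)
h_c, h_m = 25.0, 1e-7                   # convective heat / moisture transfer coefficients
T_air, M_eq = 60.0, 0.05                # drying-air temperature [C], equilibrium moisture [kg/kg db]
alpha = k / (rho * cp)                  # thermal diffusivity [m^2/s]

def D_eff(T):                           # Arrhenius-type, temperature-dependent diffusivity
    return 3e-10 * np.exp(2000.0 / 298.15 - 2000.0 / (T + 273.15))

T = np.full(n, 25.0)                    # temperature [C]
M = np.full(n, 5.0)                     # moisture content [kg water / kg dry solid]
dt = 0.05
for _ in range(int(3600 / dt)):         # one hour of drying
    D = D_eff(T)
    T[1:-1] += dt * alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
    M[1:-1] += dt * D[1:-1] * (M[2:] - 2 * M[1:-1] + M[:-2]) / dz**2
    T[0], M[0] = T[1], M[1]             # symmetry plane at the slab centre
    # half-cell balances at the exposed surface (moisture-content driving force)
    T[-1] += 2 * dt * (alpha * (T[-2] - T[-1]) / dz**2 + h_c * (T_air - T[-1]) / (rho * cp * dz))
    M[-1] += 2 * dt * (D[-1] * (M[-2] - M[-1]) / dz**2 + h_m * (M_eq - M[-1]) / dz)
print(f"after 1 h: surface temperature {T[-1]:.1f} C, centre moisture {M[0]:.2f} kg/kg db")
```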
Abstract:
Lean body mass (LBM) and muscle mass remain difficult to quantify in large epidemiological studies owing to the non-availability of inexpensive methods. We therefore developed anthropometric prediction equations to estimate LBM and appendicular lean soft tissue (ALST) using dual-energy X-ray absorptiometry (DXA) as the reference method. Healthy volunteers (n = 2220; 36% female; age 18-79 y) representing a wide range of body mass index (14-44 kg/m²) participated in this study. Their LBM, including ALST, was assessed by DXA along with anthropometric measurements. The sample was divided into prediction (60%) and validation (40%) sets. In the prediction set, a number of prediction models were constructed using the DXA-measured LBM and ALST estimates as dependent variables and combinations of anthropometric indices as independent variables. These equations were cross-validated in the validation set. Simple equations using age, height and weight explained > 90% of the variation in LBM and ALST in both men and women. Additional variables (hip and limb circumferences and the sum of SFTs) increased the explained variation by 5-8% in the fully adjusted models predicting LBM and ALST. More complex equations using all of the above anthropometric variables predicted the DXA-measured LBM and ALST accurately, as indicated by a low standard error of the estimate (LBM: 1.47 kg and 1.63 kg for men and women, respectively) as well as good agreement in Bland-Altman analyses. These equations could be a valuable tool in large epidemiological studies assessing these body compartments in Indians and other population groups with similar body composition.
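Illustrative sketch of the modelling workflow on synthetic data (the published equations and the DXA cohort are not reproduced): an ordinary least-squares fit of LBM on age, height, weight and sex in a 60% prediction set, evaluated on the remaining 40% via R² and the standard error of the estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2220
age = rng.uniform(18, 79, n)
height = rng.normal(165, 9, n)            # cm
weight = rng.normal(65, 12, n)            # kg
sex = rng.integers(0, 2, n)               # 0 = female, 1 = male
# synthetic "DXA" lean body mass, standing in for the measured outcome
lbm = 0.25 * weight + 0.2 * height + 5.0 * sex - 0.03 * age + rng.normal(0, 1.5, n)

X = np.column_stack([np.ones(n), age, height, weight, sex])
idx = rng.permutation(n)
train, valid = idx[: int(0.6 * n)], idx[int(0.6 * n):]    # 60/40 prediction/validation split

beta, *_ = np.linalg.lstsq(X[train], lbm[train], rcond=None)
resid = lbm[valid] - X[valid] @ beta
r2 = 1 - resid.var() / lbm[valid].var()
see = resid.std(ddof=X.shape[1])                          # standard error of the estimate
print(f"cross-validation R^2 = {r2:.3f}, SEE = {see:.2f} kg")
```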
Abstract:
Earthwork planning is considered in this article, and a generic block partitioning and modelling approach has been devised to provide strategic plans at various levels of detail. Conceptually this approach is more accurate and comprehensive than others, for instance those that are section based. In response to environmental concerns, the metric for decision making was fuel consumption and emissions; haulage distance and gradient are also included, as they are important components of these metrics. Advantageously, the fuel consumption metric is generic, captures the physical difficulty of travelling over inclines of different gradients, and is consistent across all hauling vehicles. For validation, the proposed models and techniques were applied to a real-world road project. The numerical investigations demonstrated that the models can be solved with relatively little CPU time. The proposed block models also result in solutions of superior quality, i.e. they have reduced fuel consumption and cost. Furthermore, the plans differ considerably from those based solely upon a distance-based metric, demonstrating a need for industry to reflect upon its current practices.
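Illustrative sketch of the underlying idea: cut and fill blocks are treated as sources and sinks, each haul is scored with a simple fuel metric that grows with distance and adverse gradient, and the resulting transportation problem is solved as a linear program. Block data and fuel coefficients are invented, not the paper's model.

```python
import numpy as np
from scipy.optimize import linprog

cuts  = {"A": (0.0, 5.0, 800.0), "B": (300.0, 12.0, 500.0)}    # (chainage m, elevation m, volume m^3)
fills = {"C": (600.0, 8.0, 700.0), "D": (900.0, 15.0, 600.0)}

def fuel_per_m3(src, dst):
    """Illustrative fuel metric [L/m^3]: linear in haul distance, penalised uphill."""
    dist = abs(dst[0] - src[0])
    grade = (dst[1] - src[1]) / max(dist, 1.0)
    return 0.002 * dist * (1.0 + max(grade, 0.0) * 20.0)

c = np.array([fuel_per_m3(s, d) for s in cuts.values() for d in fills.values()])
n_c, n_f = len(cuts), len(fills)
A_eq, b_eq = [], []
for i, (_, _, vol) in enumerate(cuts.values()):          # each cut block fully hauled away
    row = np.zeros(n_c * n_f); row[i * n_f:(i + 1) * n_f] = 1
    A_eq.append(row); b_eq.append(vol)
for j, (_, _, vol) in enumerate(fills.values()):         # each fill block fully supplied
    row = np.zeros(n_c * n_f); row[j::n_f] = 1
    A_eq.append(row); b_eq.append(vol)

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
print("total fuel [L]:", round(res.fun, 1))
print("haul volumes [m^3]:", np.round(res.x, 1))
```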
Abstract:
Samples of Forsythia suspensa from raw (Laoqiao) and ripe (Qingqiao) fruit were analyzed with the use of HPLC-DAD and ESI-MS techniques. Seventeen peaks were detected and, of these, twelve were identified; most were related to the glucopyranoside molecular fragment. Samples collected from three geographical areas (Shanxi, Henan and Shandong Provinces) were discriminated with the use of hierarchical clustering analysis (HCA), discriminant analysis (DA) and principal component analysis (PCA) models, but only PCA was able to provide further information about the relationships between objects and loadings; eight peaks were related to the provinces of sample origin. The supervised classification models (K-nearest neighbour (KNN), least squares support vector machines (LS-SVM) and counter-propagation artificial neural network (CP-ANN)) all indicated successful classification, but KNN produced a 100% classification rate. Thus, the fruit were discriminated on the basis of their places of origin.
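Illustrative sketch of the chemometric steps on synthetic peak-area data (three provinces, seventeen peaks): autoscaling, a PCA score projection and cross-validated KNN classification. It mirrors the workflow described, not the authors' dataset or preprocessing.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n_per, n_peaks = 20, 17
means = rng.uniform(1, 10, size=(3, n_peaks))          # province-specific peak-area profiles
X = np.vstack([rng.normal(m, 0.8, size=(n_per, n_peaks)) for m in means])
y = np.repeat(["Shanxi", "Henan", "Shandong"], n_per)

Xs = StandardScaler().fit_transform(X)                 # autoscale peak areas
scores = PCA(n_components=2).fit_transform(Xs)         # unsupervised view of the groupings
print("first two PC scores, one sample per province:\n", scores[[0, n_per, 2 * n_per]])

acc = cross_val_score(KNeighborsClassifier(n_neighbors=3), Xs, y, cv=5)
print(f"KNN cross-validated accuracy: {acc.mean():.2f}")
```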
Abstract:
1. Description of the Work: The Fleet Store was devised as a creative output to establish an exhibition linked to a fashion business model in which emerging designers were encouraged to research new and innovative strategies for creating design-driven and commercial collections for a public consumer. The project was devised to break down the perception among emerging fashion designers that designing commercial collections linked to a sustainable business model is a boring and unnecessary process. The focus was to demystify the business of fashion and to link its importance to a design-driven, public outcome that is more familiar to fashion designers. The criterion for participation was that all designers had to be registered as a business with the Australian Taxation Office. Designers were chosen from the Creative Enterprise Australia Fashion Business Incubator, the QUT fashion graduate alumni, and current QUT fashion design and double degree (fashion and business) students with existing businesses. The project evolved from a series of collaborative workshops in which designers were introduced to new and innovative creative industries’ business models and the processes, costings and timings involved in creating a niche, sustainable business for a public exhibition of design-driven commercial collections. All designers initiated their own business infrastructure but were then introduced to the concept of collaboration for successful and profitable exhibition and business outcomes. Collaborative strategies such as crowd funding, crowd sourcing, peer-to-peer mentoring and manufacturing were all researched, and strategies for the establishment of the retail exhibition were devised in a collaborative environment. All participants also took on roles outside their ‘designer’ background to create a retail exhibition that was creative but also had critical mass and aesthetic appeal for the consumer. The Fleet Store ‘popped up’ for two weeks (10 days) in a heritage-listed building in an inner-city location. Passers-by were important, but the main consumer base was built through interest and investment generated by crowd sourcing, crowd funding, ethical marketing, corporate social responsibility projects, and collaborative public relations and social media strategies. The research has furthered discussion on innovative strategies for emerging fashion designers to initiate and maintain sustainable businesses, and suggests that collaboration combined with a design-driven and business focus can create a sustainable and economically viable retail exhibition. 2. Research Statement. Research Background: The research field involved developing a new ethical, design-driven, collaborative and sustainable model for fashion design practice and management. The research asked whether a public, design-driven, collaborative retail exhibition could create a platform for promoting creative, innovative and sustainable business models for emerging fashion designers. The methodology was primarily practice-led, as all participants were designers in their own right and the project manager acted as a mentor and curator to guide the process and analyse the potential of the research question. The Fleet Store offers new knowledge in design practice and management through the creation of a model in which design outcomes and business models are inextricably linked to the success of the creative output.
Key innovations include extending the commercialisation of emerging fashion businesses by creating a curated retail gallery for collaborative and sustainable strategies to support niche fashion designer labels. This has contributed to a broader conversation on how to nurture and sustain competitive Australian fashion designers and labels. Research Contribution and Significance: The Fleet Store has contributed to a growing body of research into innovative and sustainable business models for niche fashion and creative industries’ practitioners. All participants have maintained their business infrastructure and many are currently growing their businesses using the strategies tested for the Fleet Store. The exhibition space was visited by over 1,000 people and sales of $27,000 were made over the 10 days of opening (follow-up sales of $3,000 have also been reported). Three of the designers were ‘discovered’ through the exhibition and have received substantial orders from high-profile national buyers and retailers for next-season delivery. Several participants have since collaborated to create other pop-up retail environments and are now mentoring other emerging designers on the significance of a collaborative retail exhibition for consolidating niche business models for emerging fashion designers.
Abstract:
This paper investigates the reasons why some technologies, defying general expectations and the established models of technological change, may not disappear from the market after having been displaced from their once-dominant status. Our point of departure is that the established models of technological change are not suited to explaining this, as they focus predominantly on technological dominance, giving attention to the technologies that display the highest performance levels and gain the greatest market share; and yet technological landscapes are rife with technological designs that do not fulfil these conditions. Using the LP record as an empirical case, we propose that the central mechanism at play in the continuing market presence of once-dominant technologies is the recasting of their technological features from the functional-utilitarian to the aesthetic realm, with an additional element concerning communal interaction among users. The findings that emerge from our quantitative textual analysis of over 200,000 posts on a prominent online LP-related discussion forum (between 2002 and 2010) also suggest that post-dominance technology adopters and users appear to share many key characteristics with the earliest adopters of new technologies, rather than with the late-stage adopters who precede them.
Abstract:
Objectives: To examine the association of maternal pregravid body mass index (BMI) with offspring all-cause hospitalisations in the first 5 years of life. Methods: Prospective birth cohort study. From 2006 to 2011, 2779 pregnant women (2807 children) were enrolled in the Environments for Healthy Living: Griffith birth cohort study in South-East Queensland, Australia. Hospital delivery records and a self-report baseline survey of maternal, household and demographic factors during pregnancy were linked to the Queensland Hospital Admitted Patients Data Collection from 1 November 2006 to 30 June 2012 for child admissions. Maternal pregravid BMI was classified as underweight (<18.5 kg m−2), normal weight (18.5–24.9 kg m−2), overweight (25.0–29.9 kg m−2) or obese (≥30.0 kg m−2). Main outcomes were the total number of child hospital admissions and ICD-10-AM diagnostic groupings in the first 5 years of life. Negative binomial regression models were calculated, adjusting for follow-up duration, demographic and health factors. The cohort comprised 8397.9 person-years (PYs) of follow-up. Results: Children of mothers classified as obese had an increased rate of all-cause hospital admissions in the first 5 years of life compared with children of mothers with a normal BMI (adjusted rate ratio (RR) = 1.48, 95% confidence interval 1.10–1.98). Conditions of the nervous system, infections, metabolic conditions, perinatal conditions, injuries and respiratory conditions were in excess, in both absolute and relative terms, for children of obese mothers, with RRs ranging from 1.3 to 4.0 (PYs adjusted). Children of mothers who were underweight were 1.8 times more likely to sustain an injury or poisoning than children of normal-weight mothers (PYs adjusted). Conclusion: The results suggest that if the intergenerational impact of maternal obesity (and, similarly, issues related to underweight) could be addressed, a significant reduction in child health-care use, costs and public health burden would be likely.
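Illustrative sketch of the analytic approach on invented data: a negative binomial model of admission counts with person-years of follow-up as the exposure and maternal BMI category as the predictor, reported as rate ratios against the normal-BMI group. Variable names, rates and the dispersion setting are assumptions, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2807
bmi_cat = rng.choice(["underweight", "normal", "overweight", "obese"], n,
                     p=[0.05, 0.55, 0.25, 0.15])
pyears = rng.uniform(1.0, 5.0, n)                      # person-years of follow-up
base_rate = {"underweight": 0.45, "normal": 0.35, "overweight": 0.40, "obese": 0.52}
admissions = rng.poisson([base_rate[c] * t for c, t in zip(bmi_cat, pyears)])

# design matrix: intercept plus indicators for the non-reference BMI categories
X = sm.add_constant(np.column_stack([(bmi_cat == c).astype(float)
                                     for c in ("underweight", "overweight", "obese")]))
model = sm.GLM(admissions, X, family=sm.families.NegativeBinomial(),
               exposure=pyears).fit()
rr = np.exp(model.params[1:])                           # rate ratios vs. normal BMI
print("RR (underweight, overweight, obese):", np.round(rr, 2))
```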
Abstract:
Background: Undernutrition and physical inactivity are both associated with lower bone mass. Objective: This study aimed to investigate the combined effects of early-life undernutrition and urbanized lifestyles in later life on bone mass accrual in young adults from a rural community in India that is undergoing rapid socioeconomic development. Design: This was a prospective cohort study of participants of the Hyderabad Nutrition Trial (1987–1990), which offered balanced protein-calorie supplementation to pregnant women and preschool children younger than 6 y in the intervention villages. The 2009–2010 follow-up study collected data on current anthropometric measures, bone mineral density (BMD) measured by dual-energy X-ray absorptiometry, blood samples, diet, physical activity, and living standards of the trial participants (n = 1446, aged 18–23 y). Results: Participants were generally lean and had low BMD [mean hip BMD: 0.83 (women), 0.95 (men) g/cm²; lumbar spine: 0.86 (women), 0.93 (men) g/cm²]. In models adjusted for current risk factors, no strong evidence of a positive association was found between BMD and early-life supplementation. On the other hand, current lean mass and weight-bearing physical activity were positively associated with BMD. No strong evidence of an association was found between BMD and current serum 25-hydroxyvitamin D or dietary intake of calcium, protein, or calories. Conclusions: Current lean mass and weight-bearing physical activity were more important determinants of bone mass than was early-life undernutrition in this population. In transitional rural communities from low-income countries, promotion of physical activity may help to mitigate any potential adverse effects of early nutritional disadvantage.
Abstract:
Summary: High bone mineral density on routine dual-energy X-ray absorptiometry (DXA) may indicate an underlying skeletal dysplasia. Two hundred fifty-eight individuals with unexplained high bone mass (HBM), 236 relatives (41% with HBM) and 58 spouses were studied. Cases were unable to float when swimming and had mandible enlargement, extra bone, broad frames, larger shoe sizes and increased body mass index (BMI). HBM cases may harbour an underlying genetic disorder. Introduction: High bone mineral density is a sporadic incidental finding on routine DXA scanning of apparently asymptomatic individuals. Such individuals may have an underlying skeletal dysplasia, as seen in LRP5 mutations. We aimed to characterize unexplained HBM and determine the potential for an underlying skeletal dysplasia. Methods: Two hundred fifty-eight individuals with unexplained HBM (defined as L1 Z-score ≥ +3.2 plus total hip Z-score ≥ +1.2, or total hip Z-score ≥ +3.2) were recruited from 15 UK centres by screening 335,115 DXA scans; unexplained HBM affected 0.181% of the scans. Next, 236 relatives were recruited, of whom 94 (41%) had HBM (defined as L1 Z-score + total hip Z-score ≥ +3.2). Fifty-eight spouses were also recruited and, together with the unaffected relatives, served as controls. Phenotypes of cases and controls, obtained from clinical assessment, were compared using random-effects linear and logistic regression models, clustered by family and adjusted for confounders including age and sex. Results: Individuals with unexplained HBM had an excess of sinking when swimming (7.11 [3.65, 13.84], p < 0.001; adjusted odds ratio with 95% confidence interval shown), mandible enlargement (4.16 [2.34, 7.39], p < 0.001), extra bone at tendon/ligament insertions (2.07 [1.13, 3.78], p = 0.018) and broad frame (3.55 [2.12, 5.95], p < 0.001). HBM cases also had a larger shoe size (mean difference 0.4 [0.1, 0.7] UK sizes, p = 0.009) and increased BMI (mean difference 2.2 [1.3, 3.1] kg/m², p < 0.001). Conclusion: Individuals with unexplained HBM have an excess of clinical characteristics associated with skeletal dysplasia and their relatives are commonly affected, suggesting that many may harbour an underlying genetic disorder affecting bone mass.
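Illustrative sketch of one such comparison on invented data: a random-effects (mixed) linear model of BMI on HBM case status, clustered by family and adjusted for age and sex. The data-generating values are placeholders; only the published +2.2 kg/m² difference is borrowed from the abstract to motivate the example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_fam, per_fam = 120, 4
N = n_fam * per_fam
fam = np.repeat(np.arange(n_fam), per_fam)             # family (cluster) identifier
hbm = rng.integers(0, 2, N)                            # 1 = HBM case, 0 = control
age = rng.uniform(30, 75, N)
sex = rng.integers(0, 2, N)
fam_effect = rng.normal(0, 1.5, n_fam)[fam]            # shared family-level variation
bmi = 26 + 2.2 * hbm + 0.03 * age + 0.5 * sex + fam_effect + rng.normal(0, 2.5, N)

df = pd.DataFrame(dict(bmi=bmi, hbm=hbm, age=age, sex=sex, fam=fam))
m = smf.mixedlm("bmi ~ hbm + age + sex", df, groups=df["fam"]).fit()
print("adjusted BMI difference for HBM cases:", round(m.params["hbm"], 2), "kg/m^2")
```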