466 results for Conjugate gradient methods
Abstract:
A three-dimensional conjugate heat transfer simulation of a standard parabolic trough thermal collector receiver is performed numerically in order to visualize and analyze the surface thermal characteristics. The computational model is developed in the Ansys Fluent environment based on some simplifying assumptions. Three test conditions are selected from the existing literature to verify the numerical model directly, and reasonably good agreement between the model and the test results confirms the reliability of the simulation. The solar radiation flux profile around the tube is also approximated from the literature. An in-house macro is written to read the input solar flux as a heat flux wall boundary condition for the tube wall. The numerical results show that there is an abrupt variation in the resultant heat flux along the circumference of the receiver; consequently, the temperature varies over the tube surface. The lower half of the horizontal receiver receives the maximum solar flux and therefore experiences the maximum temperature rise, whereas the temperature over the upper part is almost level. Explanations and suggestions are offered for this particular type of conjugate thermal system. The knowledge gained from this study will be used to extend the analysis and to design an efficient concentrator photovoltaic collector in the near future.
Abstract:
For the timber industry, the ability to simulate the drying of wood is invaluable for manufacturing high quality wood products. Mathematically, however, modelling the drying of a wet porous material, such as wood, is a difficult task due to its heterogeneous and anisotropic nature, and the complex geometry of the underlying pore structure. The well-developed macroscopic modelling approach involves writing down classical conservation equations at a length scale where physical quantities (e.g., porosity) can be interpreted as averaged values over a small volume (typically containing hundreds or thousands of pores). This averaging procedure produces balance equations that resemble those of a continuum, with the exception that effective coefficients appear in their definitions. Exponential integrators are numerical schemes for initial value problems involving a system of ordinary differential equations. These methods differ from popular Newton-Krylov implicit methods (i.e., those based on the backward differentiation formulae (BDF)) in that they do not require the solution of a system of nonlinear equations at each time step; instead, they require the computation of matrix-vector products involving the exponential of the Jacobian matrix. Although originally appearing in the 1960s, exponential integrators have recently experienced a resurgence of interest due to a greater undertaking of research in Krylov subspace methods for matrix function approximation. One of the simplest examples of an exponential integrator is the exponential Euler method (EEM), which requires, at each time step, approximation of φ(A)b, where φ(z) = (e^z - 1)/z, A ∈ R^(n×n) and b ∈ R^n. For drying in porous media, the most comprehensive macroscopic formulation is TransPore [Perre and Turner, Chem. Eng. J., 86: 117-131, 2002], which features three coupled, nonlinear partial differential equations. The focus of the first part of this thesis is the use of the exponential Euler method (EEM) for performing the time integration of the macroscopic set of equations featured in TransPore. In particular, a new variable-stepsize algorithm for EEM is presented within a Krylov subspace framework, which allows control of the error during the integration process. The performance of the new algorithm highlights the great potential of exponential integrators not only for drying applications but across all disciplines of transport phenomena. For example, when applied to well-known benchmark problems involving single-phase liquid flow in heterogeneous soils, the proposed algorithm requires half as many function evaluations as an equivalent (sophisticated) Newton-Krylov BDF implementation. Furthermore, for all drying configurations tested, the new algorithm always produces, in less computational time, a solution of higher accuracy than the existing backward Euler module featured in TransPore. Some new results relating to Krylov subspace approximation of φ(A)b are also developed in this thesis. Most notably, an alternative derivation of the approximation error estimate of Hochbruck, Lubich and Selhofer [SIAM J. Sci. Comput., 19(5): 1552-1574, 1998] is provided, which reveals why it performs well in the error control procedure. Two of the main drawbacks of the macroscopic approach outlined above are that the effective coefficients must be supplied to the model, and that it fails for some drying configurations where typical dual-scale mechanisms occur.
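To make the time stepping concrete, below is a minimal sketch of the exponential Euler method in Python. For brevity it evaluates φ(A)b exactly through an augmented-matrix identity rather than the Krylov subspace approximation developed in the thesis, and the semi-discrete heat equation used as a demonstration is an illustrative assumption, not one of the TransPore drying configurations.

```python
# Minimal sketch of the exponential Euler method (EEM):
#   u_{n+1} = u_n + h * phi(h*J_n) F(u_n),  phi(z) = (e^z - 1)/z.
# The dense expm call stands in for the Krylov approximation of phi(A)b.
import numpy as np
from scipy.linalg import expm

def phi_times(A, b):
    """Compute phi(A) @ b via the identity
    expm([[A, b], [0, 0]]) = [[e^A, phi(A) b], [0, 1]]."""
    n = A.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A
    M[:n, n] = b
    return expm(M)[:n, n]

def eem(F, jac, u0, h, steps):
    """Fixed-stepsize exponential Euler for u' = F(u)."""
    u = u0.copy()
    for _ in range(steps):
        u = u + h * phi_times(h * jac(u), F(u))
    return u

# Hypothetical demo: semi-discrete heat equation u' = A u (Dirichlet ends).
n = 50
A = ((np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) * (n + 1) ** 2)
u0 = np.sin(np.pi * np.linspace(0, 1, n + 2))[1:-1]
u = eem(lambda v: A @ v, lambda v: A, u0, h=0.01, steps=10)
print(np.linalg.norm(u - expm(0.1 * A) @ u0))  # EEM is exact for linear F
```

For a linear problem the step reduces to u_{n+1} = e^{hA} u_n, so the printed error is round-off only; the variable-stepsize machinery of the thesis matters precisely when F is nonlinear and the Jacobian changes between steps.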
In the second part of this thesis, a new dual-scale approach for simulating wood drying is proposed that couples the porous medium (macroscale) with the underlying pore structure (microscale). The proposed model is applied to the convective drying of softwood at low temperatures and is valid in the so-called hygroscopic range, where hygroscopically held liquid water is present in the solid phase and water exists only as vapour in the pores. Coupling between scales is achieved by imposing the macroscopic gradient on the microscopic field using suitably defined periodic boundary conditions, which allows the macroscopic flux to be defined as an average of the microscopic flux over the unit cell. This formulation provides a first step for moving from the macroscopic formulation featured in TransPore to a comprehensive dual-scale formulation capable of addressing any drying configuration. Simulation results reported for a sample of spruce highlight the potential and flexibility of the new dual-scale approach. In particular, for a given unit cell configuration it is not necessary to supply the effective coefficients prior to each simulation.
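As a toy illustration of the cell-averaging idea (a sketch only, and far simpler than the coupled drying model), the following computes an effective coefficient for steady 1D diffusion across a periodic unit cell under an imposed macroscopic gradient; the two-phase conductivity field is hypothetical.

```python
# Sketch: effective coefficient from a periodic 1D unit cell.
# At steady state the microscopic flux q = -k(x) (G + w'(x)) is uniform,
# and periodicity of the fluctuation w forces q = -G / mean(1/k),
# i.e. the cell-averaged flux yields k_eff = harmonic mean of k.
import numpy as np

def effective_coefficient(k, G=1.0):
    q = -G / np.mean(1.0 / k)  # uniform microscopic flux under gradient G
    return -q / G              # k_eff recovered from <q> = -k_eff * G

# Hypothetical two-phase cell: a resistive inclusion in a conductive matrix.
x = np.linspace(0.0, 1.0, 1000, endpoint=False)
k = np.where((x > 0.4) & (x < 0.6), 0.01, 1.0)
print(effective_coefficient(k), np.mean(k))  # harmonic mean << arithmetic mean
```

The point mirrors the abstract: once the unit cell is specified, the effective coefficient falls out of the averaged microscopic solution instead of having to be supplied in advance.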
Abstract:
Compression ignition (CI) engine design is subject to many constraints, which presents a multi-criteria optimisation problem that the engine researcher must solve. In particular, the modern CI engine must not only be efficient, but must also deliver low gaseous, particulate and life cycle greenhouse gas emissions so that its impact on urban air quality, human health, and global warming is minimised. Consequently, this study undertakes a multi-criteria analysis which seeks to identify alternative fuels, injection technologies and combustion strategies that could potentially satisfy these CI engine design constraints. Three datasets are analysed with the Preference Ranking Organization Method for Enrichment Evaluations and Geometrical Analysis for Interactive Aid (PROMETHEE-GAIA) algorithm to explore the impact of (1) an ethanol fumigation system, (2) alternative fuels (20% biodiesel and synthetic diesel) and alternative injection technologies (mechanical direct injection and common rail injection), and (3) various biodiesel fuels made from three feedstocks (soy, tallow, and canola) tested at several blend percentages (20-100%) on the resulting emissions and efficiency profile of the various test engines. The results show that moderate ethanol substitutions (~20% by energy) at moderate load, high percentage soy blends (60-100%), and alternative fuels (biodiesel and synthetic diesel) provide an efficiency and emissions profile that yields the most “preferred” solutions to this multi-criteria engine design problem. Further research is, however, required to reduce Reactive Oxygen Species (ROS) emissions with alternative fuels, and to deliver technologies that do not significantly reduce the median diameter of particle emissions.
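For readers unfamiliar with the ranking step, here is a minimal PROMETHEE II-style sketch in Python. The engine configurations, criteria, and equal weights are hypothetical placeholders, the “usual” preference function stands in for whatever preference functions the study employed, and the GAIA principal-component plane is omitted.

```python
# Sketch of PROMETHEE II net outranking flows (usual preference function).
import numpy as np

def promethee_net_flows(X, weights, maximize):
    """X: alternatives x criteria table; weights sum to 1;
    maximize[j] is False for criteria to be minimised (e.g. emissions)."""
    n = X.shape[0]
    Y = np.where(maximize, X, -X)           # flip minimised criteria
    pi = np.zeros((n, n))                   # aggregated preference of a over b
    for a in range(n):
        for b in range(n):
            if a != b:
                pi[a, b] = np.sum(weights * (Y[a] > Y[b]))
    phi_plus = pi.sum(axis=1) / (n - 1)     # positive (outgoing) flow
    phi_minus = pi.sum(axis=0) / (n - 1)    # negative (incoming) flow
    return phi_plus - phi_minus             # net flow; rank descending

# Hypothetical alternatives scored on efficiency (maximise) and two
# emissions criteria (minimise), with equal weights.
X = np.array([[0.42, 3.1, 0.8],
              [0.40, 2.5, 0.6],
              [0.38, 2.0, 0.9]])
w = np.full(3, 1 / 3)
print(promethee_net_flows(X, w, maximize=np.array([True, False, False])))
```

The alternative with the highest net flow is the most “preferred” in the sense used in the abstract above.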
Abstract:
This paper describes an approach to investigating the adoption of Web 2.0 in the classroom using a mixed methods study. By using a combination of qualitative and quantitative data collection and analysis techniques, we attempt to synergize the results and provide a more valid understanding of Web 2.0 adoption for learning by both teachers and students. This approach is expected to yield a more holistic view of the adoption issues associated with the e-learning 2.0 concept in current higher education than the single-method studies conducted previously. This paper also presents some early findings on e-learning 2.0 adoption using this research method.
Abstract:
Purpose. The purpose of this article was to present methods capable of estimating the size and shape of the human eye lens without resorting to phakometry or magnetic resonance imaging (MRI). Methods. Previously published biometry and phakometry data of 66 emmetropic eyes of 66 subjects (age range [18, 63] years; spherical equivalent range [−0.75, +0.75] D) were used to define multiple linear regressions for the radii of curvature and thickness of the lens, from which the lens refractive index could be derived. MRI biometry was also available for a subset of 30 subjects, from which regressions could be determined for the vertex radii of curvature, conic constants, equatorial diameter, volume, and surface area. All regressions were compared with the phakometry and MRI data; the radii of curvature regressions were also compared with a method proposed by Bennett and Royston et al. Results. The regressions were in good agreement with the original measurements. This was especially the case for the regressions of lens thickness, volume, and surface area, which each had an R² > 0.6. The regression for the posterior radius of curvature had an R² < 0.2, making this regression unreliable. For all other regressions we found 0.25 < R² < 0.6. The Bennett-Royston method also produced a good estimation of the radii of curvature, provided its parameters were adjusted appropriately. Conclusions. The regressions presented in this article offer a valuable alternative in case no measured lens biometry values are available; however, care must be taken with possible outliers.
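As a generic sketch of the regression machinery referred to above (the predictors, coefficients, and noise level below are invented for illustration, not the published values), ordinary least squares with an R² report looks like this in Python:

```python
# Sketch: multiple linear regression with an intercept, plus R^2.
import numpy as np

def fit_ols(X, y):
    A = np.column_stack([np.ones(len(y)), X])   # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1.0 - (resid @ resid) / np.sum((y - y.mean()) ** 2)
    return coef, r2

# Hypothetical data mimicking the study design: 66 subjects, predictors
# age (years) and spherical equivalent (D); all numbers illustrative only.
rng = np.random.default_rng(0)
age = rng.uniform(18, 63, 66)
se = rng.uniform(-0.75, 0.75, 66)
thickness = 3.0 + 0.02 * age + 0.05 * se + rng.normal(0, 0.1, 66)
coef, r2 = fit_ols(np.column_stack([age, se]), thickness)
print(coef, r2)   # an R^2 > 0.6 would count as reliable by the rule above
```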
Abstract:
Metrics such as passengers per square metre have been developed to define optimal or crowded rail passenger densities. Whilst such metrics are important for operational procedures, service evaluation and reporting, they fail to fully capture and convey the ways in which passengers experience crowded situations. This paper reports findings from a two-year study of rail passenger crowding in five Australian capital cities, which involved a novel mixed methodology including ethnography, focus groups and an online stated preference choice experiment. The resulting data address the following four fundamental research questions: 1) to what extent are Australian rail passengers concerned by crowding, 2) what conditions exacerbate feelings of crowdedness, 3) what conditions mitigate feelings of crowdedness, and 4) how can we usefully understand passengers’ experiences of crowdedness? The paper concludes with some observations on the significance and implications of these findings for customer service provision. The findings outlined in this paper demonstrate that the experience of crowdedness (including its tolerance) cannot be understood in isolation from other customer service issues such as interior design, quality of environment, safety and public health concerns. It is hypothesised that tolerance of crowding will increase alongside improvements to overall customer service. This was the first comprehensive study of crowding in the Australian rail industry.
Abstract:
This study uses borehole geophysical log data of sonic velocity and electrical resistivity to estimate permeability in sandstones in the northern Galilee Basin, Queensland. The prior estimates of permeability are calculated from deterministic log-log linear empirical correlations between electrical resistivity and measured permeability; both negative and positive relationships occur, and both are influenced by the clay content. The prior estimates of permeability are updated in a Bayesian framework for three boreholes, using both the cokriging (CK) method and a normal linear regression (NLR) approach to infer the likelihood function. The results show that the mean permeability estimated from the CK-based Bayesian method is in better agreement with the measured permeability when a fairly apparent linear relationship exists between the logarithm of permeability and sonic velocity. In contrast, the NLR-based Bayesian approach gives better estimates of permeability for boreholes where no such linear relationship exists.
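The NLR-based update can be pictured, in a deliberately simplified Gaussian form, as combining two estimates of log-permeability by their precisions: a prior from the resistivity correlation and a likelihood from the regression on sonic velocity. The sketch below makes that simplification explicit; all coefficients and variances are hypothetical, and the cokriging (CK) variant is not shown.

```python
# Sketch: conjugate Gaussian update of log10-permeability.
# Prior : log10(k) ~ N(mu_prior, var_prior), from the resistivity correlation.
# NLR   : log10(k) ~ N(a + b*v, var_nlr), regression on sonic velocity v.
import numpy as np

def bayes_update_log_perm(mu_prior, var_prior, v, a, b, var_nlr):
    mu_like = a + b * v                                  # NLR prediction
    var_post = var_prior * var_nlr / (var_prior + var_nlr)
    mu_post = var_post * (mu_prior / var_prior + mu_like / var_nlr)
    return mu_post, var_post

# Hypothetical numbers: prior log10(k) = -1.0 +/- 0.7 (k in mD);
# regression a = 6.0, b = -0.002 at v = 3500 m/s with sigma = 0.5.
print(bayes_update_log_perm(-1.0, 0.49, 3500.0, 6.0, -0.002, 0.25))
```

In this toy form, a weak velocity-permeability trend (large var_nlr) simply pulls the posterior back towards the resistivity-based prior; the study's comparison of CK against NLR concerns how best to infer that likelihood in practice.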
Abstract:
An increasing body of research is highlighting the involvement of illicit drugs in many road fatalities. Deterrence theory has been a core conceptual framework underpinning traffic enforcement, as well as interventions designed to reduce road fatalities. Essentially, the effectiveness of deterrence-based approaches is predicated on perceptions of the certainty, severity, and swiftness of apprehension. However, much less is known about how awareness of legal sanctions affects the effectiveness of deterrence mechanisms, and whether promoting such detection methods can increase the deterrent effect. Nevertheless, the implicit assumption is that individuals aware of the legal sanctions will be more deterred. This study explores how awareness of the testing method affects the effectiveness of deterrence-based interventions and intentions to drug drive again in the future. In total, 161 participants who reported drug driving in the previous six months took part in the study. The results show that awareness of testing had a small effect on increasing perceptions of the certainty of apprehension and the severity of punishment. However, awareness was not a significant predictor of intentions to drug drive again in the future. Importantly, higher levels of drug use were a significant predictor of intentions to drug drive in the future. Whilst awareness does have a small effect on deterrence variables, the influence of level of drug use appears to reduce any deterrent effect.
Abstract:
Qualitative research methods are widely accepted in Information Systems, and multiple approaches have been successfully used in IS qualitative studies over the years. These approaches include narrative analysis, discourse analysis, grounded theory, case study, ethnography and phenomenological analysis. Guided by critical, interpretive and positivist epistemologies (Myers 1997), qualitative methods are continuously growing in importance in our research community. In this special issue, we adopt Van Maanen's (1979: 520) definition of qualitative research as an umbrella term covering an “array of interpretive techniques that can describe, decode, translate, and otherwise come to terms with the meaning, not the frequency, of certain more or less naturally occurring phenomena in the social world”. In the call for papers, we stated that the aim of the special issue was to provide a forum within which we can present and debate the significant number of issues, results and questions arising from the pluralistic approach to qualitative research in Information Systems. We recognise both the potential and the challenges that qualitative approaches offer for accessing the different layers and dimensions of a complex and constructed social reality (Orlikowski, 1993). The special issue is also a response to the need to showcase the current state of the art in IS qualitative research and to highlight advances and issues encountered in the process of continuous learning, which includes questions about its ontology, epistemological tenets, theoretical contributions and practical applications.
Abstract:
This paper explores what we are calling “Guerrilla Research Tactics” (GRT): research methods that exploit emerging mobile and cloud-based digital technologies. We examine some case studies in the use of this technology to generate research data directly from the physical fabric and the people of the city. We argue that GRT is a novel way of engaging public participation in urban, place-based research because it facilitates the co-creation of knowledge, with city inhabitants, ‘on the fly’. This paper discusses the potential of these new research techniques and what they have to offer researchers operating in the creative disciplines and beyond. This work builds on and extends Gauntlett’s “new creative methods” (2007) and contributes to the existing body of literature addressing creative and interactive approaches to data collection.
Abstract:
Introduction: Unaccustomed eccentric exercise often results in muscle damage and neutrophil activation. We examined changes in plasma cytokines, stress hormones, creatine kinase activity and myoglobin concentration, neutrophil surface receptor expression, degranulation, and the capacity of neutrophils to generate reactive oxygen species in response to in vitro stimulation after downhill running. Methods: Ten well-trained male runners ran downhill on a treadmill at a gradient of -10% for 45 min at 60% V̇O2max. Blood was sampled immediately before (PRE) and after (POST) exercise, and at 1 h (1 h POST) and 24 h (24 h POST) after exercise. Results: At POST, there were significant increases (P < 0.01) in neutrophil count (32%), plasma interleukin (IL)-6 concentration (460%), myoglobin (Mb) concentration (1100%), and creatine kinase (CK) activity (40%). At 1 h POST, there were further increases above preexercise values in neutrophil count (85%), plasma Mb concentration (1800%), and CK activity (56%), and plasma IL-6 concentration remained above preexercise values (410%) (P < 0.01). At 24 h POST, neutrophil counts and plasma IL-6 levels had returned to baseline, whereas plasma Mb concentration (100%) and CK activity (420%) remained elevated above preexercise values (P < 0.01). There were no significant changes in neutrophil receptor expression, degranulation, respiratory burst activity, or plasma IL-8 and granulocyte colony-stimulating factor concentrations at any time after exercise. Neutrophil count correlated with plasma Mb concentration at POST (r = 0.64, P < 0.05), and with plasma CK activity at POST (r = 0.83, P < 0.01) and 1 h POST (r = 0.78, P < 0.01). Conclusion: Neutrophil activation remains unchanged after downhill running in well-trained runners, despite increases in plasma markers of muscle damage.
Abstract:
Now, as in earlier periods of acute change in the media environment, new disciplinary articulations are producing new methods for media and communication research. At the same time, established media and communication studies methods are being recombined, reconfigured, and remediated alongside their objects of study. This special issue of JOBEM seeks to explore the conceptual, political, and practical aspects of emerging methods for digital media research. It does so at the conjuncture of a number of important contemporary trends: the rise of a “third wave” of the Digital Humanities and the “computational turn” (Berry, 2011) associated with natively digital objects and the methods for studying them; the apparently ubiquitous Big Data paradigm, with its various manifestations across academia, business, and government, that brings with it a rapidly increasing interest in social media communication and online “behavior” from the “hard” sciences; along with the multisited, embodied, and emplaced nature of everyday digital media practice.
Abstract:
Fractional mathematical models represent a new approach to modelling complex spatial problems in which there is heterogeneity at many spatial and temporal scales. In this paper, a two-dimensional fractional FitzHugh-Nagumo monodomain model with zero Dirichlet boundary conditions is considered. The model consists of a coupled space fractional diffusion equation (SFDE) and an ordinary differential equation. For the SFDE, we first consider the numerical solution of the Riesz fractional nonlinear reaction-diffusion model and compare it with the solution of a fractional-in-space nonlinear reaction-diffusion model. We present two novel numerical methods for the two-dimensional fractional FitzHugh-Nagumo monodomain model, using the shifted Grünwald-Letnikov method and the matrix transform method, respectively. Finally, some numerical examples are given to demonstrate the consistency of our computational solution methodologies. The numerical results confirm the effectiveness of the methods.
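A one-dimensional sketch of the shifted Grünwald-Letnikov discretisation may help fix ideas (the paper itself works in two dimensions and couples in the recovery-variable ODE, both omitted here). It assembles a dense matrix for the Riesz fractional derivative on a uniform grid with zero Dirichlet boundaries and advances a cubic FitzHugh-Nagumo-type reaction with forward Euler; α = 1.8, the grid size, the diffusivity K, and the threshold 0.1 are illustrative assumptions.

```python
# Sketch: shifted Gruenwald-Letnikov matrix for the 1D Riesz derivative.
import numpy as np

def gl_weights(alpha, m):
    """g_k = (-1)^k C(alpha, k), via g_k = g_{k-1} * (k - 1 - alpha) / k."""
    g = np.empty(m)
    g[0] = 1.0
    for k in range(1, m):
        g[k] = g[k - 1] * (k - 1 - alpha) / k
    return g

def riesz_matrix(alpha, n, h):
    """Riesz derivative on n interior nodes, zero Dirichlet boundaries,
    shifted (by one) Gruenwald-Letnikov formula, for 1 < alpha < 2."""
    g = gl_weights(alpha, n + 1)
    L = np.zeros((n, n))                  # left Riemann-Liouville part
    for i in range(n):
        for j in range(min(i + 2, n)):
            L[i, j] = g[i - j + 1]
    c = -1.0 / (2.0 * np.cos(np.pi * alpha / 2.0))
    return c * (L + L.T) / h ** alpha     # right part is the transpose

# Explicit time stepping of u_t = K * D_alpha u + u(1 - u)(u - 0.1);
# dt is chosen well inside the forward-Euler stability limit.
alpha, n = 1.8, 100
h = 1.0 / (n + 1)
A = riesz_matrix(alpha, n, h)
u = np.exp(-200.0 * (np.linspace(h, 1 - h, n) - 0.5) ** 2)  # initial pulse
dt, K = 1e-5, 1e-2
for _ in range(1000):
    u = u + dt * (K * (A @ u) + u * (1 - u) * (u - 0.1))
```

Setting α = 2 recovers the usual three-point Laplacian, which is a quick sanity check on the weights; for 1 < α < 2 the matrix is dense, reflecting the nonlocality of the Riesz operator.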