14 results for Halley’s and Euler-Chebyshev’s Methods
in Greenwich Academic Literature Archive - UK
Abstract:
A novel three-dimensional finite volume (FV) procedure is described in detail for the analysis of geometrically nonlinear problems. The FV procedure is compared with the conventional finite element (FE) Galerkin approach. FV can be considered a particular case of the weighted residual method with a unit weighting function, whereas the FE Galerkin method uses the shape functions as weighting functions. A Fortran code has been developed based on the finite volume cell-vertex formulation, and the formulation is tested on a number of geometrically nonlinear problems. In comparison with FE, the results reveal that FV can match the FE results, although it requires a higher mesh density to do so.
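To make the weighting-function distinction concrete, here is a minimal sketch in standard weighted-residual notation (the symbols are illustrative, not taken from the paper):

```latex
% Weighted residual statement: the residual R(u) of the governing
% equations is forced to zero against each weighting function W_i.
\int_{\Omega} W_i \, R(u) \, \mathrm{d}\Omega = 0
% FE Galerkin: the weights are the element shape functions,
W_i = N_i ,
% FV (cell-vertex): a unit weight over each control volume V_i,
W_i = \begin{cases} 1 & \text{in } V_i \\ 0 & \text{elsewhere.} \end{cases}
```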
Abstract:
Computational results for the microwave heating of a porous material are presented in this paper. Combined finite difference time domain and finite volume methods were used to solve equations that describe the electromagnetic field and heat and mass transfer in porous media. The coupling between the two schemes is through a change in dielectric properties, which were assumed to depend on both temperature and moisture content. The model was able to reflect the evolution of the temperature and moisture fields as the moisture in the porous medium evaporates. Moisture movement results from internal pressure gradients produced by the internal heating and phase change.
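A minimal sketch of the coupling loop this abstract describes, assuming hypothetical `fdtd_solve` and `fv_solve` routines and an invented dielectric model (none of these names or coefficients come from the paper):

```python
import numpy as np

def dielectric_properties(T, M):
    """Hypothetical dielectric model: permittivity and loss factor are
    assumed to depend on temperature T and moisture content M; the
    coefficients are illustrative only."""
    eps_r = 2.0 + 0.01 * T + 5.0 * M
    eps_loss = 0.1 + 0.002 * T + 1.5 * M
    return eps_r, eps_loss

def coupled_step(T, M, fdtd_solve, fv_solve):
    """One coupling cycle: the electromagnetic (FDTD) solve yields a
    volumetric heat source for the heat/mass transfer (FV) solve, whose
    updated temperature and moisture fields in turn change the
    dielectric properties seen by the next FDTD solve."""
    eps_r, eps_loss = dielectric_properties(T, M)
    E = fdtd_solve(eps_r, eps_loss)          # electric field amplitude
    q = 0.5 * eps_loss * np.abs(E) ** 2      # dissipated power density, up to constants
    return fv_solve(T, M, q)                 # returns updated (T, M)
```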
Abstract:
This paper looks at the application of some assessment methods in practice, with a view to enhancing students’ learning in mathematics and statistics. It explores the effective application of assessment methods, and highlights the issues and problems related to some of the common methods of assessing mathematical and statistical learning, together with ways of avoiding them. Some observations made by the author on good assessment practice, and on useful approaches employed at his institution in designing and applying assessment methods, are discussed. Successful strategies for implementing assessment methods at different levels are described.
Abstract:
Background: This epidemiological study was carried out to establish the magnitude of the changing incidence of gastric and oesophageal cancer. Methods: Time-trend analyses of subsite-specific cancers of the oesophagus and stomach were performed using data from the Thames Cancer Registry database (1960-1996) for the South Thames Region. The changes in sex ratio and peak age of incidence are reported. Results: In the upper two-thirds of the oesophagus there was no significant change in the incidence rate, but the lower third of the oesophagus showed a marked rise for both sexes (average annual change + 0.05 for men, + 0.009 for women). For the gastric cardia, the incidence in males increased (average annual change + 0.025), while in females it remained unchanged. Cancers of the oesophagogastric junction showed a clear increase for both sexes (average annual change + 0.07 for men, + 0.009 for women). There were changes in the sex ratio and peak age of incidence for all subsite cancers for both sexes. Conclusion: Over a 37-year period the incidence of cancer of the oesophagogastric junction increased threefold, while the incidence of cancers of the other subsites of the stomach decreased. Further studies are needed to investigate the aetiology of these changes.
Abstract:
There have been few genuine success stories about industrial use of formal methods. Perhaps the best known and most celebrated is the use of Z by IBM (in collaboration with Oxford University's Programming Research Group) during the development of CICS/ESA (version 3.1). This work was rewarded with the prestigious Queen's Award for Technological Achievement in 1992 and is especially notable for two reasons: 1) because it is a commercial, rather than safety- or security-critical, system and 2) because the claims made about the effectiveness of Z are quantitative as well as qualitative. The most widely publicized claims are: less than half the normal number of customer-reported errors and a 9% saving in the total development costs of the release. This paper provides an independent assessment of the effectiveness of using Z on CICS, based on the set of public domain documents. Using this evidence, we believe that the case study was important and valuable, but that the quantitative claims have not been substantiated. The intellectual arguments and rationale for formal methods are attractive, but their widespread commercial use is ultimately dependent upon more convincing quantitative demonstrations of effectiveness. Despite the pioneering efforts of IBM and PRG, there is still a need for rigorous, measurement-based case studies to assess when and how the methods are most effective. We describe how future similar case studies could be improved so that the results are more rigorous and conclusive.
Abstract:
In this paper, we study a problem of scheduling and batching on two machines in flow-shop and open-shop environments. Each machine processes operations in batches, and the processing time of a batch is the sum of the processing times of the operations in that batch. A setup time, which depends only on the machine, is required before a batch is processed on a machine, and all jobs in a batch remain at the machine until the entire batch is processed. The aim is to make batching and sequencing decisions, which specify a partition of the jobs into batches on each machine and a processing order of the batches on each machine, respectively, so that the makespan is minimized. The flow-shop problem is shown to be strongly NP-hard. We demonstrate that there is an optimal solution with the same batches on the two machines; we refer to these as consistent batches. A heuristic is developed that selects the best schedule among several with one, two, or three consistent batches, and is shown to have a worst-case performance ratio of 4/3. For the open shop, we show that the problem is NP-hard in the ordinary sense. By proving the existence of an optimal solution with one, two, or three consistent batches, a close relationship is established with the problem of scheduling two or three identical parallel machines to minimize the makespan. This allows a pseudo-polynomial algorithm to be derived and various heuristic methods to be suggested.
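As a hedged illustration of the two-machine flow-shop part, the sketch below evaluates the makespan of a schedule with consistent batches and enumerates all one-, two- and three-batch partitions of a fixed job sequence; the paper's actual heuristic selects its candidate schedules more carefully, so this enumeration is a simplification, and the non-anticipatory setup assumption is ours:

```python
from itertools import combinations

def makespan(batches, s1, s2, p):
    """Two-machine flow-shop makespan under batch availability: each
    batch incurs a machine setup (s1 or s2, assumed to start only once
    the batch is at the machine) plus the sum of its jobs' processing
    times, and moves to machine 2 only as a complete batch."""
    c1 = c2 = 0.0
    for batch in batches:
        work = sum(p[j] for j in batch)
        c1 += s1 + work                  # machine 1 finishes this batch
        c2 = max(c2, c1) + s2 + work     # machine 2 waits for the whole batch
    return c2

def best_consistent_schedule(jobs, s1, s2, p, max_batches=3):
    """Best schedule among all partitions of the given job order into
    1, 2 or 3 consecutive consistent batches (illustrative enumeration,
    simpler than the paper's heuristic)."""
    n = len(jobs)
    best_value, best_batches = float("inf"), None
    for k in range(1, max_batches + 1):
        for cuts in combinations(range(1, n), k - 1):
            bounds = (0,) + cuts + (n,)
            batches = [jobs[a:b] for a, b in zip(bounds, bounds[1:])]
            value = makespan(batches, s1, s2, p)
            if value < best_value:
                best_value, best_batches = value, batches
    return best_value, best_batches
```

For example, `best_consistent_schedule([0, 1, 2, 3], s1=2.0, s2=3.0, p=[5, 4, 3, 6])` compares the single-batch schedule against every two- and three-batch split of that job order.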
Abstract:
The dynamic process of melting different materials in a cold crucible is studied experimentally, in parallel with numerical modelling work. The numerical simulation uses a variety of complementary models: finite volume, integral equation and pseudo-spectral methods, combined to achieve an accurate description of the dynamic melting process. Results show the temperature history of the melting process, with a comparison of the experimental and computed heat losses in the various parts of the equipment. Visual observations of the free surface are compared with the numerically predicted surface shapes.
Abstract:
Fluid structure interaction, as applied to flexible structures, has wide application in diverse areas such as flutter in aircraft, flow in elastic pipes and blood vessels, and extrusion of metals through dies. However, a comprehensive computational model of these multi-physics phenomena is a considerable challenge. Until recently, work in this area focused on one phenomenon and represented the behaviour of the other more simply, even to the extent, in metal forming for example, that the deformation of the die was totally ignored. More recently, strategies for solving the full coupling between the fluid and solid mechanics behaviour have been developed. Conventionally, the computational modelling of fluid structure interaction is problematic, since computational fluid dynamics (CFD) is solved using finite volume (FV) methods while computational structural mechanics (CSM) is based entirely on finite element (FE) methods. In the past, the concurrent but rather disparate development paths of the finite element and finite volume methods have resulted in numerical software tools for CFD and CSM that are different in almost every respect. Hence, progress in modelling the emerging multi-physics problem of fluid structure interaction in a consistent manner is frustrated. Unless the fluid-structure coupling is one way, very weak, or both, transferring and filtering data from one mesh and solution procedure to another may lead to significant problems in computational convergence. Using a novel three-phase technique, the full interaction between the fluid and the dynamic structural response is represented. The procedure is demonstrated on some challenging applications in complex three-dimensional geometries involving aircraft flutter, metal forming and blood flow in arteries.
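For context, a generic partitioned coupling iteration is sketched below; this is a common textbook scheme with placeholder solver objects, not the three-phase technique the abstract refers to:

```python
def partitioned_fsi_step(fluid, structure, interface, tol=1e-6, max_iters=50):
    """One time step of a generic partitioned fluid-structure coupling
    iteration with under-relaxation. `fluid`, `structure` and
    `interface` are placeholder solver objects (hypothetical API)."""
    disp = interface.current_displacement()
    for _ in range(max_iters):
        traction = fluid.solve(disp)               # CFD on the deformed boundary
        new_disp = structure.solve(traction)       # CSM under the fluid loads
        residual = interface.norm(new_disp, disp)  # change between iterates
        disp = interface.relax(disp, new_disp)     # under-relaxed update
        if residual < tol:                         # coupling converged
            break
    return disp
```

The data transfer and filtering problems mentioned above arise inside `fluid.solve` and `structure.solve`, where interface fields must be mapped between non-matching FV and FE meshes.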
Abstract:
In the flip-chip assembly process, no-flow underfill materials have a particular advantage over traditional underfill: the application and curing of the former can be undertaken before and during the reflow process. This advantage can be exploited to increase flip-chip manufacturing throughput. However, adopting a no-flow underfill process may introduce reliability issues such as underfill entrapment, delamination at the interfaces between the underfill and other materials, and lower solder joint fatigue life. This paper presents an analysis of the assembly and reliability of flip chips with no-flow underfill. The methodology adopted in the work is a combination of experimental and computer-modeling methods. Two types of no-flow underfill materials have been used for the flip chips. The samples have been inspected with X-ray and scanning acoustic microscope inspection systems to find voids and other defects. Eleven samples of each type of underfill material have been subjected to a thermal shock test, and the number of cycles to failure for these flip chips has been determined. In the computer-modeling part of the work, a comprehensive parametric study has provided details on the relationship between the material properties and reliability, and on how underfill entrapment may affect the thermal–mechanical fatigue life of flip chips with no-flow underfill.
Abstract:
Computational results for the intensive microwave heating of porous materials are presented in this work. A multi-phase porous media model has been developed to predict the heating mechanism. Combined finite difference time-domain and finite volume methods were used to solve equations that describe the electromagnetic field and heat and mass transfer in porous media. The coupling between the two schemes is through a change in dielectric properties, which were assumed to depend on both temperature and moisture content. The model was able to reflect the evolution of both the temperature and moisture fields, as well as energy penetration, as the moisture in the porous medium evaporates. Moisture movement results from internal pressure gradients produced by the internal heating and phase change.
Abstract:
Purpose. (1) To investigate the effects of emotional arousal and weapon presence on the completeness and accuracy of police officers' memories; and (2) to better simulate the experience of witnessing a shooting and providing testimony. Methods. A firearms training simulator was used to present 70 experienced police officers with either a shooting or a domestic dispute scenario containing no weapons. Arousal was measured using both self-report and physiological indices. Recall for event details was tested after a 10-minute delay using a structured interview. Identification accuracy was assessed with a photographic line-up. Results. Self-report measures confirmed that the shooting induced greater arousal than did the other scenario. Overall, officers' memories for the event were less complete, but more accurate, when they had witnessed the shooting. The recall and line-up data did not support a weapon focus effect. Conclusions. Police officers' recall performance can be affected both qualitatively and quantitatively by witnessing an arousing event such as a shooting.
Abstract:
There are two main approaches to the representation of temporal information in Computer Science: modal logic approaches (including tense logics and hybrid temporal logics) and predicate logic approaches (including temporal argument methods and reified temporal logics). On the one hand, while tense logics, hybrid temporal logics and temporal argument methods enjoy formal theoretical foundations, their expressiveness has been criticised as not powerful enough for representing general temporal knowledge; on the other hand, although current reified temporal logics provide greater expressive power, most of them lack complete and sound axiomatic theories. In this paper, we propose a new reified temporal logic with a clear syntax and semantics, in terms of a sound and complete axiomatic formalism, which retains all the expressive power of the temporal reification approach.
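To illustrate the distinction (with invented predicates, not the paper's formalism): the temporal argument method simply adds a time parameter to each predicate, whereas reification turns the atemporal proposition into a first-class term that a meta-predicate such as HOLDS relates to a time or interval.

```python
from dataclasses import dataclass

# Temporal argument method: time is just an extra argument
# of each predicate, e.g. loaded(gun, t).
def loaded(obj, t):
    ...

# Reified approach: the atemporal proposition becomes a first-class
# term, and a meta-predicate such as HOLDS relates it to an interval.
@dataclass(frozen=True)
class Prop:
    name: str
    args: tuple

@dataclass(frozen=True)
class Holds:            # HOLDS(p, i): proposition p holds over interval i
    prop: Prop
    interval: tuple     # (start, end)

fact = Holds(Prop("loaded", ("gun",)), (1, 5))
```

Because the proposition is itself a term in the reified reading, one can quantify over propositions and state general axioms about HOLDS, which is the source of the extra expressive power the abstract mentions.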
Abstract:
This paper presents an approach for detecting local damage in large-scale frame structures by utilizing regularization methods for ill-posed problems. A direct relationship between the change in stiffness caused by local damage and the measured modal data for the damaged structure is developed, based on the perturbation method for structural dynamic systems. Thus, the measured incomplete modal data can be adopted directly in damage identification without requiring model reduction techniques, and common regularization methods can be effectively employed to solve the developed equations. Damage indicators are chosen to reflect both the location and the severity of local damage in individual components of frame structures, such as in brace members and at beam-column joints. The Truncated Singular Value Decomposition solution, incorporating the Generalized Cross Validation method, is introduced to evaluate the damage indicators for cases in which realistic errors exist in the modal data measurements. Results for a 16-story building model structure show that structural damage can be correctly identified at a detailed level using only limited, noisy measured modal data for the damaged structure.
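A minimal numpy sketch of Truncated SVD with GCV-based rank selection, using the standard textbook formulas (the paper's exact damage-indicator equations are not reproduced here):

```python
import numpy as np

def tsvd_gcv(A, b):
    """Truncated SVD solution of the ill-posed system A x = b, with the
    truncation level k chosen by Generalized Cross Validation:
        GCV(k) = ||A x_k - b||^2 / (m - k)^2.
    These are the standard textbook forms, not the paper's exact
    damage-identification equations."""
    m = A.shape[0]
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b                                # coefficients u_i^T b
    best_k, best_gcv, best_x = 0, np.inf, None
    for k in range(1, min(len(s), m - 1) + 1):
        x_k = Vt[:k].T @ (beta[:k] / s[:k])       # keep the k largest singular values
        resid = np.linalg.norm(A @ x_k - b) ** 2  # squared residual norm
        gcv = resid / (m - k) ** 2                # GCV score for this truncation
        if gcv < best_gcv:
            best_k, best_gcv, best_x = k, gcv, x_k
    return best_x, best_k
```

Here A would map the damage indicators to the measured modal changes, and truncation suppresses the small singular values that amplify measurement noise.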
Abstract:
Collaborative approaches in leadership and management are increasingly acknowledged to play a key role in successful institutions in the learning and skills sector (LSS) (Ofsted, 2004). Such approaches may be important in bridging the potential 'distance' (psychological, cultural, interactional and geographical) (Collinson, 2005) that may exist between 'leaders' and 'followers', fostering more democratic communal solidarity. This paper reports on a 2006-07 research project funded by the Centre for Excellence in Leadership (CEL) that aimed to collect and analyse data on 'collaborative leadership' (CL) in the learning and skills sector. The project investigated collaborative leadership and its potential for benefiting staff through trust and knowledge-sharing in communities of practice (CoPs), and forms part of longer-term educational research investigating leadership in a collaborative inquiry process (Jameson et al., 2006). The research examined the potential for CL to benefit institutions, analysing respondents' understanding of, and resistance to, collaborative practices. Quantitative and qualitative data from senior managers and lecturers were analysed electronically using SPSS and Tropes Zoom. The project aimed to recommend systems and practices for more inclusive, diverse leadership (Lumby et al., 2005). Collaborative leadership has increasingly gained international prominence as emphasis has shifted towards team leadership, beyond zero-sum 'leadership'/'followership' polarities and into more mature conceptions of shared leadership, within which synergistic leadership spaces can be mediated. The relevance of collaboration within the LSS has been highlighted following a spate of recent government-driven policy developments in FE. The promotion of CL addresses concerns about the apparent 'remoteness' of some senior managers, and the 'neo-management' control of professionals, which can increase 'distance' between leaders and 'followers' and may de-professionalise staff in an already disempowered sector. Positive benefit from 'collaborative advantage' tends to be assumed in idealistic interpretations of CL, but potential 'collaborative inertia' may be problematic in a sector characterised by rapid top-down policy changes and continuous external audit and surveillance. Constant pressure for achievement against goals leaves little time for democratic group negotiations, despite the desire of leaders to create a more collaborative ethos. Yet prior models of intentional communities of practice potentially offer promise for CL practice to improve group performance despite multiple constraints. The CAMEL CoP model (JISC infoNet, 2006) was linked to the project, providing one practical way of implementing CL within situated professional networks. The project found that a good understanding of CL was demonstrated by most respondents, who thought it could enable staff to share power and work in partnership, building trust and conjoining skills, abilities and experience to achieve common goals for the good of the sector. However, although most respondents expressed agreement with the concept and ideals of CL, many thought it was currently an idealistically democratic, unachievable pipe dream in the LSS. Many respondents expressed concerns about the 'audit culture' and authoritarian management structures in FE.
While there was a strong desire to see greater levels of implementation of CL, and 'collaborative advantage' from the 'knowledge-sharing benefit potential' of team leadership, respondents also warned strongly against the pitfalls of 'collaborative inertia'. A 'distance' between senior leadership views and those of staff lower down the hierarchy regarding aspects of leadership performance in the sector was reported. Finally, the project found that more research is needed to investigate CL and to develop innovative methods of practical implementation within autonomous communities of professional practice.