967 results for "Average method"
Abstract:
A vertex-centred finite volume method (FVM) for the Cahn-Hilliard (CH) and recently proposed Cahn-Hilliard-reaction (CHR) equations is presented. Information at control volume faces is computed using a high-order least-squares approach based on Taylor series approximations. This least-squares problem explicitly includes the variational boundary condition (VBC), ensuring that the discrete equations satisfy all of the boundary conditions. We use this approach to solve the CH and CHR equations in one and two dimensions and show that our scheme satisfies the VBC to at least second order. For the CH equation we show evidence of conservative, gradient-stable solutions; for the CHR equation, however, strict gradient stability is more challenging to achieve.
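The least-squares face reconstruction can be illustrated with a minimal 1-D sketch. This is illustrative only: it omits the variational boundary condition and the full CH discretisation, and the stencil, function and names are invented for the example. A Taylor polynomial about the face location is fitted to nearby nodal values, so the leading coefficients approximate the face value and face gradient:

```python
import math
import numpy as np

def face_reconstruction(x_nodes, u_nodes, x_face, order=3):
    """Least-squares fit of a Taylor polynomial about x_face:
    u(x) ~ sum_k c_k (x - x_face)^k / k!, so c[0] ~ u(x_face), c[1] ~ u'(x_face)."""
    dx = x_nodes - x_face
    A = np.column_stack([dx**k / math.factorial(k) for k in range(order + 1)])
    c, *_ = np.linalg.lstsq(A, u_nodes, rcond=None)
    return c[0], c[1]   # reconstructed face value and face gradient

# Nodes straddling a control-volume face at x = 0, sampling u(x) = sin(x)
x = np.array([-0.2, -0.1, 0.1, 0.2, 0.3])
u = np.sin(x)
val, grad = face_reconstruction(x, u, 0.0)   # expect val ~ 0, grad ~ 1
```

In the paper's scheme the analogous local least-squares system additionally carries the VBC as explicit rows, which is what enforces the boundary conditions discretely.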
Abstract:
Phenomenology is a term that has been described as a philosophy, a research paradigm, and a methodology, and has been equated with qualitative research. In this paper, we first clarify phenomenology by tracing its movement both as a philosophy and as a research method. Next, we make a case for the use of phenomenology in empirical investigations of management phenomena. The paper discusses a selection of central concepts pertaining to phenomenology as a scientific research method, including description, phenomenological reduction and free imaginative variation. In particular, the paper elucidates the efficacy of Giorgi's descriptive phenomenological research praxis as a qualitative research method and shows how it can be applied to create a deeper and richer understanding of management practice.
Abstract:
Acoustic sensors provide an effective means of monitoring biodiversity at large spatial and temporal scales. They can continuously and passively record large volumes of data over extended periods; however, these data must be analysed to detect the presence of vocal species. Automated analysis of acoustic data for large numbers of species is complex and can be subject to high levels of false positive and false negative results. Manual analysis by experienced users can produce accurate results; however, the time and effort required to process even small volumes of data can make manual analysis prohibitive. Our research examined the use of sampling methods to reduce the cost of analysing large volumes of acoustic sensor data, while retaining high levels of species detection accuracy. Utilising five days of manually analysed acoustic sensor data from four sites, we examined a range of sampling rates and methods, including random, stratified and biologically informed. Our findings indicate that randomly selecting 120 one-minute samples from the three hours immediately following dawn provided the most effective sampling method. This method detected, on average, 62% of total species after 120 one-minute samples were analysed, compared to 34% of total species from traditional point counts. Our results demonstrate that targeted sampling methods can provide an effective means for analysing large volumes of acoustic sensor data efficiently and accurately.
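The sampling idea can be sketched with a toy simulation. The detection data here are entirely synthetic, and the species model and function names are invented for illustration; only the mechanic — analyse a random subset of one-minute recordings and count the fraction of the species pool recovered — mirrors the study:

```python
import random

def species_detected(minutes, sample_size, rng):
    """Fraction of the total species pool found when only a random subset
    of the one-minute recordings is manually analysed."""
    total = set().union(*minutes)
    found = set().union(*rng.sample(minutes, sample_size))
    return len(found) / len(total)

rng = random.Random(42)
# Synthetic dawn chorus: 180 one-minute recordings, each detecting 1-5 of
# up to 40 species, with low IDs (common callers) far more likely than high ones.
minutes = [{min(int(rng.expovariate(0.15)), 39) for _ in range(rng.randint(1, 5))}
           for _ in range(180)]
frac = species_detected(minutes, 120, rng)   # species recovered from a 120-minute sample
```

Running such a simulation across many random draws and sample sizes gives a species-accumulation curve, which is how one would compare sampling rates and methods.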
Abstract:
This paper describes a method for measuring the creative potential of computer games. The research approach applies a behavioral and verbal protocol to analyze the factors that influence the creative processes used by people as they play computer games from the puzzle genre. Creative potential is measured by examining task motivation and domain-relevant and creativity-relevant skills. This paper focuses on the reliability of the factors used for measurement, determining those factors that are more strongly related to creativity. The findings show that creative potential may be determined by examining the relationship between skills required and the effect of intrinsic motivation within game play activities.
Abstract:
Today’s highly competitive market pushes the manufacturing industry to improve its production systems and achieve the shortest possible cycle time. One of the most common problems in manufacturing systems is the assembly line balancing problem, which involves assigning tasks to workstations with optimum line efficiency. The line balancing technique used here, “COMSOAL”, is an abbreviation of “Computer Method for Sequencing Operations for Assembly Lines”. Arcus initially developed the COMSOAL technique in 1966 [1], and it has mainly been applied to solve assembly line balancing problems [6]. The most common purposes of COMSOAL are to minimise idle time, optimise production line efficiency, and minimise the number of workstations. This project will therefore implement COMSOAL to balance an assembly line in the motorcycle industry. The COMSOAL solution will be compared with a previous solution developed with the Multi‐Started Neighborhood Search Heuristic (MSNSH) on five aspects: cycle time, total idle time, line efficiency, average daily productivity rate, and workload balance. The journal article “Optimising and simulating the assembly line balancing problem in a motorcycle manufacturing company: a case study” will be used as the case study for this project [5].
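COMSOAL's core idea — generate many random feasible task sequences, packing tasks into workstations greedily, and keep the solution using the fewest stations — can be sketched as follows. The task data are a toy example, not the motorcycle-industry case data:

```python
import random

def comsoal(durations, precedence, cycle_time, n_trials=500, seed=1):
    """COMSOAL: sample random feasible sequences; keep the fewest-station assignment."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        remaining, done = set(durations), set()
        stations, current, load = [], [], 0.0
        while remaining:
            # candidates: all predecessors finished AND task fits in current station
            fit = [t for t in remaining
                   if precedence.get(t, set()) <= done
                   and load + durations[t] <= cycle_time]
            if not fit:                      # open a new workstation
                stations.append(current)
                current, load = [], 0.0
                continue
            t = rng.choice(fit)
            current.append(t)
            load += durations[t]
            done.add(t)
            remaining.remove(t)
        stations.append(current)
        if best is None or len(stations) < len(best):
            best = stations
    return best

# Toy line: task durations and precedence constraints (task -> set of predecessors)
durations = {"A": 4, "B": 3, "C": 5, "D": 2, "E": 4}
precedence = {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}, "E": {"D"}}
stations = comsoal(durations, precedence, cycle_time=7)
```

With 18 time units of work and a cycle time of 7, at least three stations are needed, and the random search finds a three-station assignment; idle time and line efficiency then follow directly from the station loads.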
Abstract:
This paper studies time integration methods for large stiff systems of ordinary differential equations (ODEs) of the form u'(t) = g(u(t)). For such problems, implicit methods generally outperform explicit methods, since the time step is usually less restricted by stability constraints. Recently, however, explicit so-called exponential integrators have become popular for stiff problems due to their favourable stability properties. These methods use matrix-vector products involving exponential-like functions of the Jacobian matrix, which can be approximated using Krylov subspace methods that require only matrix-vector products with the Jacobian. In this paper, we implement exponential integrators of second, third and fourth order and demonstrate that they are competitive with well-established approaches based on the backward differentiation formulas and a preconditioned Newton-Krylov solution strategy.
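As a minimal illustration of the exponential-integrator idea, here is a scalar exponential Euler sketch — not the second- to fourth-order Krylov-based methods implemented in the paper, for which the φ-function of a Jacobian matrix would be approximated in a Krylov subspace rather than evaluated exactly:

```python
import math

def phi1(z):
    """phi_1(z) = (e^z - 1)/z, the first exponential-integrator function."""
    if abs(z) < 1e-8:
        return 1.0 + z / 2.0          # series fallback near z = 0
    return (math.exp(z) - 1.0) / z

def exponential_euler(f, jac, u0, t_end, n_steps):
    """One step: u_{n+1} = u_n + h * phi1(h*J) * f(u_n), with J the (scalar) Jacobian."""
    h = t_end / n_steps
    u = u0
    for _ in range(n_steps):
        u = u + h * phi1(h * jac(u)) * f(u)
    return u

# Stiff scalar test: u' = -1000*(u - 1), u(0) = 0, exact u(t) = 1 - exp(-1000 t).
# With h = 0.01 explicit Euler would diverge (|1 + h*lambda| = 9), while
# exponential Euler is exact for this affine problem.
lam = -1000.0
u = exponential_euler(lambda u: lam * (u - 1.0), lambda u: lam, 0.0, 0.1, 10)
```

The favourable stability the abstract mentions is visible here: the step size is chosen for accuracy, not for the stability constraint that throttles explicit methods on stiff problems.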
Abstract:
3D models of long bones are being utilised in a number of fields, including orthopaedic implant design. Accurate reconstruction of 3D models is of utmost importance for designing implants that achieve good alignment between two bone fragments. For this purpose, CT scanners are typically employed to acquire accurate bone data, exposing the individual to a high dose of ionising radiation. Magnetic resonance imaging (MRI) has been shown to be a potential alternative to computed tomography (CT) for scanning volunteers for 3D reconstruction of long bones, essentially avoiding the high radiation dose from CT. In MR imaging of long bones, artefacts due to random movements of the skeletal system create challenges, as they introduce inaccuracies into 3D models reconstructed from data sets containing such artefacts. One defect observed during an initial study is a lateral shift artefact in the reconstructed 3D models. This artefact is believed to result from volunteers moving the leg between two successive scanning stages (the lower limb has to be scanned in at least five stages due to the limited scanning length of the scanner). As this artefact creates inaccuracies in implants designed using these models, it needs to be corrected before the 3D models are applied to implant design. This study therefore aimed to correct the lateral shift artefact using 3D modelling techniques. The femora of five ovine hind limbs were scanned with a 3T MRI scanner using a 3D VIBE based protocol. The scanning was conducted in two halves, while maintaining a good overlap between them. A lateral shift was generated by moving the limb several millimetres between the two scanning stages. The 3D models were reconstructed using a multi-threshold segmentation method.
The correction of the artefact was achieved by aligning the two halves using the robust iterative closest point (ICP) algorithm, with the help of the overlapping region between the two. The models with the corrected artefact were compared with the reference model generated by CT scanning of the same sample. The results indicate that the correction of the artefact was achieved with an average deviation of 0.32 ± 0.02 mm between the corrected model and the reference model. In comparison, the model obtained from a single MRI scan generated an average error of 0.25 ± 0.02 mm when compared with the reference model. An average deviation of 0.34 ± 0.04 mm was seen when the models generated after the table was moved were compared to the reference models; thus, the movement of the table is also a contributing factor to the motion artefacts.
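The alignment step inside each ICP iteration is a best-fit rigid transform between matched point sets, commonly computed via SVD (the Kabsch solution). This toy 2-D version is illustrative only, not the robust ICP variant used in the study, which would also iterate the correspondence search over the overlapping region:

```python
import numpy as np

def best_fit_rigid(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q
    (Kabsch/SVD), the inner alignment step of an ICP iteration."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known 2-D rotation + translation from matched points
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([1.0, -2.0])
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 3.0]])
Q = P @ R_true.T + t_true
R, t = best_fit_rigid(P, Q)
```

In the full ICP algorithm this solve alternates with a nearest-neighbour correspondence step until the alignment of the two scan halves converges.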
Abstract:
This paper presents a maintenance optimisation method for a multi-state series-parallel system considering economic dependence and state-dependent inspection intervals. The objective function considered in the paper is the average revenue per unit time calculated based on the semi-regenerative theory and the universal generating function (UGF). A new algorithm using the stochastic ordering is also developed in this paper to reduce the search space of maintenance strategies and to enhance the efficiency of optimisation algorithms. A numerical simulation is presented in the study to evaluate the efficiency of the proposed maintenance strategy and optimisation algorithms. The simulation result reveals that maintenance strategies with opportunistic maintenance and state-dependent inspection intervals are more cost-effective when the influence of economic dependence and inspection cost is significant. The study further demonstrates that the optimisation algorithm proposed in this paper has higher computational efficiency than the commonly employed heuristic algorithms.
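The universal generating function (UGF) composition for multi-state series-parallel systems can be sketched as follows. The component data are a toy example, not the paper's model: each component is a distribution over performance levels, and components combine through a structure function (capacities add in parallel; flow is limited by the minimum in series):

```python
from itertools import product

def combine(u1, u2, op):
    """UGF composition: merge two state/probability distributions through a
    structure function `op` (sum for parallel capacity, min for series flow)."""
    out = {}
    for (g1, p1), (g2, p2) in product(u1.items(), u2.items()):
        g = op(g1, g2)
        out[g] = out.get(g, 0.0) + p1 * p2
    return out

# Two identical pumps in parallel, in series with a valve.  capacity -> probability
pump = {0: 0.1, 5: 0.9}
valve = {0: 0.05, 8: 0.95}
pumps = combine(pump, pump, lambda a, b: a + b)   # parallel pair: capacities add
system = combine(pumps, valve, min)               # series: min capacity governs
```

The resulting system-level distribution over performance states is what the paper's objective function (average revenue per unit time) is evaluated against for each candidate maintenance strategy.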
Abstract:
A high-throughput method of isolating and cloning geminivirus genomes from dried plant material, combining an Extract-n-Amp™-based DNA isolation technique with rolling circle amplification (RCA) of viral DNA, is presented. Using this method an attempt was made to isolate and clone full geminivirus genomes/genome components from 102 plant samples, including dried leaves stored at room temperature for between 6 months and 10 years, with an average hands-on time to RCA-ready DNA of 15 min per 20 samples. While storage of dried leaves for up to 6 months did not appreciably decrease cloning success rates relative to those achieved with fresh samples, the efficiency of the method decreased with increasing storage time. However, it was still possible to clone virus genomes from 47% of 10-year-old samples. To illustrate the utility of this simple method for high-throughput geminivirus diversity studies, six Maize streak virus genomes, an Abutilon mosaic virus DNA-B component and the DNA-A component of a previously unidentified New World begomovirus species were fully sequenced. Genomic clones of the 69 other viruses were verified as such by end sequencing. This method should be extremely useful for the study of any circular DNA plant viruses with genome component lengths smaller than the maximum size amplifiable by RCA. © 2008 Elsevier B.V. All rights reserved.
Abstract:
Client owners usually need an estimate or forecast of their likely building costs in advance of detailed design in order to confirm the financial feasibility of their projects. Because of their timing in the project life cycle, these early-stage forecasts are characterized by the minimal amount of information available concerning the new (target) project, to the point that often only its size and type are known. One approach is to use the mean contract sum of a sample, or base group, of previous projects of a similar type and size to the project for which the estimate is needed. Bernoulli’s law of large numbers implies that this base group should be as large as possible. However, increasing the size of the base group inevitably involves including projects that are less and less similar to the target project. Deciding on the optimal number of base group projects is known as the homogeneity or pooling problem. A method of solving the homogeneity problem is described, involving the use of closed-form equations to compare three different sampling arrangements of previous projects for their simulated forecasting ability by a cross-validation method, in which a series of targets is extracted, with replacement, from the groups and compared with the mean value of the projects in the base groups. The procedure is then demonstrated with 450 Hong Kong projects (with different project types: Residential, Commercial centre, Car parking, Social community centre, School, Office, Hotel, Industrial, University and Hospital) clustered into base groups according to their type and size.
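The cross-validation idea — score a candidate base-group size by how well the group mean forecasts held-out targets — can be sketched as follows. This uses toy cost data and a simple nearest-in-order grouping, not the paper's closed-form equations or sampling arrangements:

```python
def loo_cv_error(costs, k):
    """Leave-one-out CV: forecast each project's cost as the mean of the k
    projects nearest to it in the size-ordered list; return mean squared error."""
    n = len(costs)
    total = 0.0
    for i in range(n):
        neighbours = sorted((j for j in range(n) if j != i),
                            key=lambda j: (abs(j - i), j))
        group = neighbours[:k]
        pred = sum(costs[j] for j in group) / k
        total += (pred - costs[i]) ** 2
    return total / n

# Contract sums ordered by project size; choose the base-group size with the
# smallest cross-validated forecasting error
costs = [100, 110, 120, 200, 210, 220, 400, 420, 440]
best_k = min(range(1, len(costs)), key=lambda k: loo_cv_error(costs, k))
```

With clustered costs like these, small base groups forecast better than pooling everything: enlarging the group past the cluster of genuinely similar projects drags in dissimilar ones and raises the error, which is exactly the homogeneity trade-off the paper addresses.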
Duration-dependent response of mixed-method pre-cooling for intermittent-sprint exercise in the heat
Abstract:
This study examined the effects of pre-cooling duration on performance and neuromuscular function during self-paced intermittent-sprint shuttle running in the heat. Eight male team-sport athletes completed two 35-min bouts of intermittent-sprint shuttle running separated by a 15-min recovery on three separate occasions (33°C, 34% relative humidity). Mixed-method pre-cooling was applied for 20 min (COOL20) or 10 min (COOL10), or withheld (CONT), and was reapplied for 5 min mid-exercise. Performance was assessed via sprint times, percentage decline and shuttle-running distance covered. Maximal voluntary contractions (MVC), voluntary activation (VA) and evoked twitch properties were recorded pre- and post-intervention and mid- and post-exercise. Core temperature (Tc), skin temperature, heart rate, capillary blood metabolites, sweat losses, perceptual exertion and thermal stress were monitored throughout. Venous blood draws pre- and post-exercise were analyzed for muscle damage and inflammation markers. Shuttle-running distances covered increased 5.2 ± 3.3% following COOL20 (P < 0.05), with no differences observed between COOL10 and CONT (P > 0.05). COOL20 aided the maintenance of mid- and post-exercise MVC (P < 0.05; d > 0.80), despite no conditional differences in VA (P > 0.05). Pre-exercise Tc was reduced by 0.15 ± 0.13°C with COOL20 (P < 0.05; d > 1.10), and remained lower throughout both COOL20 and COOL10 compared to CONT (P < 0.05; d > 0.80). Pre-cooling reduced sweat losses by 0.4 ± 0.3 kg (P < 0.02; d > 1.15), with COOL20 0.2 ± 0.4 kg less than COOL10 (P = 0.19; d = 1.01). Increased pre-cooling duration lowered physiological demands during exercise heat stress and facilitated the maintenance of self-paced intermittent-sprint performance in the heat. Importantly, the dose-response interaction between pre-cooling and sustained neuromuscular responses may explain the improved exercise performance in hot conditions.
Abstract:
This study examined physiological and performance effects of pre-cooling on medium-fast bowling in the heat. Ten medium-fast bowlers completed two randomised trials involving either cooling (mixed-methods) or control (no cooling) interventions before a 6-over bowling spell in 31.9±2.1°C and 63.5±9.3% relative humidity. Measures included bowling performance (ball speed, accuracy and run-up speeds), physical characteristics (global positioning system monitoring and counter-movement jump height), physiological (heart rate, core temperature, skin temperature and sweat loss), biochemical (serum concentrations of damage, stress and inflammation) and perceptual variables (perceived exertion and thermal sensation). Mean ball speed (114.5±7.1 vs. 114.1±7.2 km · h−1; P = 0.63; d = 0.09), accuracy (43.1±10.6 vs. 44.2±12.5 AU; P = 0.76; d = 0.14) and total run-up speed (19.1±4.1 vs. 19.3±3.8 km · h−1; P = 0.66; d = 0.06) did not differ between pre-cooling and control respectively; however, 20-m sprint speed between overs was 5.9±7.3% greater at Over 4 after pre-cooling (P = 0.03; d = 0.75). Pre-cooling reduced skin temperature after the intervention period (P = 0.006; d = 2.28), core temperature and pre-over heart rates throughout (P = 0.01−0.04; d = 0.96−1.74) and sweat loss by 0.4±0.3 kg (P = 0.01; d = 0.34). Mean rating of perceived exertion and thermal sensation were lower during pre-cooling trials (P = 0.004−0.03; d = 0.77−3.13). Despite no observed improvement in bowling performance, pre-cooling maintained between-over sprint speeds and blunted physiological and perceptual demands to ease the thermoregulatory demands of medium-fast bowling in hot conditions.
Abstract:
This investigation examined physiological and performance effects of cooling on recovery of medium-fast bowlers in the heat. Eight medium-fast bowlers completed two randomised trials, involving two sessions completed on consecutive days (Session 1: 10-overs and Session 2: 4-overs) in 31 ± 3°C and 55 ± 17% relative humidity. Recovery interventions were administered for 20 min (mixed-method cooling vs. control) after Session 1. Measures included bowling performance (ball speed, accuracy, run-up speeds), physical demands (global positioning system, counter-movement jump), physiological (heart rate, core temperature, skin temperature, sweat loss), biochemical (creatine kinase, C-reactive protein) and perceptual variables (perceived exertion, thermal sensation, muscle soreness). Mean ball speed was higher after cooling in Session 2 (118.9 ± 8.1 vs. 115.5 ± 8.6 km · h−1; P = 0.001; d = 0.67), reducing declines in ball speed between sessions (0.24 vs. −3.18 km · h−1; P = 0.03; d = 1.80). Large effects indicated higher accuracy in Session 2 after cooling (46.0 ± 11.2 vs. 39.4 ± 8.6 arbitrary units [AU]; P = 0.13; d = 0.93) without affecting total run-up speed (19.0 ± 3.1 vs. 19.0 ± 2.5 km · h−1; P = 0.97; d = 0.01). Cooling reduced core temperature, skin temperature and thermal sensation throughout the intervention (P = 0.001–0.05; d = 1.31–5.78) and attenuated creatine kinase (P = 0.04; d = 0.56) and muscle soreness at 24-h (P = 0.03; d = 2.05). Accordingly, mixed-method cooling can reduce thermal strain after a 10-over spell and improve markers of muscular damage and discomfort alongside maintained medium-fast bowling performance on consecutive days in hot conditions.
Abstract:
The average structure (C1̄) of a volcanic plagioclase megacryst with composition Ano, from the Hogarth Ranges, Australia, has been determined using three-dimensional, single-crystal neutron and X-ray diffraction data. Least-squares refinements, incorporating anisotropic thermal motion of all atoms and an extinction correction, resulted in weighted R factors (based on intensities) of 0.076 and 0.056, respectively, for the neutron and X-ray data. Very weak e reflections could be detected in long-exposure X-ray and electron diffraction photographs of this crystal, but the refined average structure is believed to be unaffected by the presence of such a weak superstructure. The ratio of the scattering power of Na to that of Ca is different for X-ray and neutron radiation, and this radiation dependence of scattering power has been used to determine the distribution of Na and Ca over a split-atom M site (two sites designated M' and M") in this plagioclase. Relative peak-height ratios M'/M", revealed in difference Fourier sections calculated from neutron and X-ray data, formed the basis for the cation-distribution analysis. As neutron and X-ray data sets were directly compared in this analysis, it was important that systematic bias between refined neutron and X-ray positional parameters could be demonstrated to be absent. In summary, with an M-site model constrained only by the electron-microprobe-determined bulk composition of the crystal, the following values were obtained for the M-site occupancies: Na(M') = 0.29(7), Na(M") = 0.23(7), Ca(M') = 0.15(4), and Ca(M") = 0.33(4). These results indicate that the restrictive assumptions about M sites on which previous plagioclase refinements have been based are not applicable to this composition, and possibly not to the entire compositional range. T-site ordering determined from (T-O) bond-length variation (t1o = 0.51(1), t1m = t2o = t2m = 0.32(1)) is weak, as might be expected from the volcanic origin of this megacryst.
Abstract:
Despite the prominent use of the Suchey-Brooks (S-B) method of age estimation in forensic anthropological practice, it is subject to intrinsic limitations, with reports of differential inter-population error rates between geographical locations. This study assessed the accuracy of the S-B method in a contemporary adult population in Queensland, Australia and provides robust age parameters calibrated for this population. Three-dimensional surface reconstructions were generated from computed tomography scans of the pubic symphysis of male and female Caucasian individuals aged 15–70 years (n = 195) in Amira® and Rapidform®. Error was analyzed on the basis of bias, inaccuracy and percentage correct classification for left and right symphyseal surfaces. Application of transition analysis and Chi-square statistics demonstrated 63.9% and 69.7% correct age classification associated with the left symphyseal surface of Australian males and females, respectively, using the S-B method. Using Bayesian statistics, probability density distributions for each S-B phase were calculated, providing refined age parameters for this population. Mean inaccuracies of 6.77 (±2.76) and 8.28 (±4.41) years were reported for the left surfaces of males and females, respectively, with positive biases for younger individuals (<55 years) and negative biases in older individuals. Significant sexual dimorphism in the application of the S-B method was observed, and asymmetry in phase classification of the pubic symphysis was a frequent phenomenon. These results suggest that the S-B method should be applied with caution in medico-legal death investigations of Queensland skeletal remains and warrant further investigation of reliable age estimation techniques.
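The Bayesian step — turning a phase classification into a probability distribution over ages — can be sketched with discrete Bayes over the age range. The phase parameters below are hypothetical, chosen only to illustrate the mechanics; they are not the calibrated Queensland values derived in the study:

```python
import math

def posterior_age(phase, phase_params, ages=range(15, 71)):
    """Discrete Bayes: P(age | phase) proportional to P(phase | age) * P(age),
    with a uniform age prior and a Gaussian phase likelihood."""
    mu, sd = phase_params[phase]
    like = {a: math.exp(-0.5 * ((a - mu) / sd) ** 2) for a in ages}
    z = sum(like.values())
    return {a: p / z for a, p in like.items()}

# Hypothetical (mean age, spread) per S-B phase -- illustrative values only,
# NOT the calibrated parameters reported for this population.
params = {1: (19, 3), 2: (24, 4), 3: (30, 6), 4: (40, 9), 5: (50, 10), 6: (60, 11)}
post = posterior_age(4, params)
mean_age = sum(a * p for a, p in post.items())
```

From such a posterior one can report a point estimate and a credible interval for each phase, which is the form the refined age parameters take.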