Abstract:
The study presents a multi-layer genetic algorithm (GA) approach using correlation-based methods to facilitate damage determination for through-truss bridge structures. To begin, the structure’s damage-suspicious elements are divided into several groups. In the first GA layer, the damage is initially optimised for all groups using a correlation objective function. In the second layer, the groups are combined into larger groups and the optimisation restarts from the normalised result of the first layer. The identification process then repeats until the final layer is reached, where a single group includes all structural elements and only minor optimisations are required to fine-tune the final result. Several damage scenarios on a complicated through-truss bridge example are used to demonstrate the proposed approach’s effectiveness. Structural modal strain energy is employed as the variable vector in the correlation function for damage determination. Simulations and comparison with traditional single-layer optimisation show that the proposed approach is efficient and feasible for complicated truss bridge structures when measurement noise is taken into account.
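The layered grouping strategy can be sketched with a toy problem. Everything below is illustrative: the nine-element damage vector, the linear stand-in for the structural model, and the GA settings are assumptions, and Pearson correlation stands in for the paper's correlation objective function.

```python
import random

random.seed(0)

# Hypothetical "measured" modal-strain-energy change, produced by an assumed
# true damage pattern over nine damage-suspicious elements.
TRUE_DAMAGE = [0.0, 0.3, 0.0, 0.0, 0.5, 0.0, 0.0, 0.2, 0.0]

def strain_energy(damage):
    # Placeholder structural model: element strain-energy change vs. damage.
    return [d * (1.0 + 0.1 * i) for i, d in enumerate(damage)]

MEASURED = strain_energy(TRUE_DAMAGE)

def corr(a, b):
    # Pearson correlation, standing in for the correlation objective function.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def fitness(candidate):
    return corr(strain_energy(candidate), MEASURED)

def ga(free_idx, start, gens=60, pop=30):
    # Evolve only the damage values at free_idx; all other elements stay fixed.
    def rand_ind():
        ind = start[:]
        for i in free_idx:
            ind[i] = random.random()
        return ind

    population = [rand_ind() for _ in range(pop)] + [start[:]]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop // 2]
        children = []
        while len(elite) + len(children) < pop:
            p1, p2 = random.sample(elite, 2)
            child = p1[:]
            for i in free_idx:
                child[i] = random.choice((p1[i], p2[i]))
                if random.random() < 0.2:  # Gaussian mutation, clipped to [0, 1]
                    child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        population = elite + children
    return max(population, key=fitness)

def multilayer(n=9, group_size=3):
    best = [0.0] * n
    groups = [list(range(i, min(i + group_size, n)))
              for i in range(0, n, group_size)]
    while True:
        for g in groups:          # first layer: one GA run per group
            best = ga(g, best)
        if len(groups) == 1:      # final layer: one group holds every element
            return best
        # merge adjacent groups and restart from the current layer's result
        groups = [sum(groups[i:i + 2], []) for i in range(0, len(groups), 2)]
```

Each layer warm-starts from the previous layer's estimate, so the final single-group GA only needs to fine-tune the result.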
Abstract:
Optimising the container transfer schedule at multimodal terminals is known to be NP-hard, which implies that finding the best solution becomes computationally infeasible as problem sizes increase. Genetic Algorithm (GA) techniques are used to reduce container handling/transfer times and ships' time at the port by speeding up handling operations. The GA is chosen because relatively good, near-optimal solutions have been reported in reasonable time even with the simplest GA implementations. Also discussed is the application of the model to assess the consequences of increased scheduled throughput time, as well as different strategies such as alternative plant layouts, storage policies and numbers of yard machines. A real data set is used for the solution, and a subsequent sensitivity analysis is applied to the alternative plant layouts, storage policies and numbers of yard machines.
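A minimal sketch of a GA for container transfer scheduling, assuming invented handling and yard-travel times and a simple permutation encoding with order crossover; the study's actual model (yard machines, storage policies, plant layout) is far richer than this.

```python
import random

random.seed(1)

# Hypothetical data: handling time per container and yard-travel time between
# consecutive jobs. All values are illustrative assumptions.
N = 12
HANDLE = [random.randint(2, 10) for _ in range(N)]
TRAVEL = [[random.randint(1, 6) for _ in range(N)] for _ in range(N)]

def schedule_cost(order):
    # Total transfer time: handling plus travel between consecutive jobs.
    cost = HANDLE[order[0]]
    for a, b in zip(order, order[1:]):
        cost += TRAVEL[a][b] + HANDLE[b]
    return cost

def order_crossover(p1, p2):
    # Classic OX: copy a slice of p1, fill remaining slots in p2's order.
    i, j = sorted(random.sample(range(N), 2))
    child = [None] * N
    child[i:j] = p1[i:j]
    fill = [c for c in p2 if c not in child[i:j]]
    for k in range(N):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def ga_schedule(pop=60, gens=150):
    population = [random.sample(range(N), N) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=schedule_cost)
        parents = population[: pop // 2]   # truncation selection
        children = []
        while len(parents) + len(children) < pop:
            child = order_crossover(*random.sample(parents, 2))
            if random.random() < 0.3:      # swap mutation keeps a permutation
                a, b = random.sample(range(N), 2)
                child[a], child[b] = child[b], child[a]
            children.append(child)
        population = parents + children
    return min(population, key=schedule_cost)
```

The same skeleton supports sensitivity analysis: rerunning `ga_schedule` with alternative `TRAVEL` matrices models different plant layouts or storage policies.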
Abstract:
Organizations adopt a Supply Chain Management System (SCMS) expecting benefits to the organization and its functions. However, organizations are facing mounting challenges to realizing benefits through SCMS. Studies suggest a growing dissatisfaction among client organizations due to an increasing gap between expectations and realization of SCMS benefits. Further, reflecting Enterprise System studies such as Seddon et al. (2010), SCMS benefits are also expected to flow to the organization throughout its lifecycle rather than being realized all at once. This research therefore proposes to develop a lifecycle-wide understanding of SCMS benefits and their realization in order to derive a benefit expectation management framework to attain the full potential of an SCMS. The primary research question of this study is: How can client organizations better manage their benefit expectations of SCM systems? The specific research goals of the current study include: (1) to better understand the misalignment of received and expected benefits of SCM systems; (2) to identify the key factors influencing SCM system expectations and to develop a framework to manage SCMS benefits; (3) to explore how organizational satisfaction is influenced by the lack of SCMS benefit confirmation; and (4) to explore how to improve the realization of SCM system benefits. Expectation-Confirmation Theory (ECT) provides the theoretical underpinning for this study. ECT has been widely used in the consumer behavior literature to study customer satisfaction, post-purchase behavior and service marketing in general. Recently, ECT has been extended into Information Systems (IS) research focusing on individual user satisfaction and IS continuance. However, only a handful of studies have employed ECT to study organizational satisfaction with large-scale IS.
The current study will enrich the research stream by extending ECT into organizational-level analysis and verifying the preliminary findings of relevant works by Staples et al. (2002), Nevo and Chan (2007) and Nevo and Wade (2007). Moreover, this study goes further by operationalizing the constructs of ECT in the context of SCMS. The empirical findings of the study commence with a content analysis, through which 41 vendor reports and academic reports are analyzed, yielding sixty expected benefits of SCMS. Then, the expected benefits are compared with the benefits realized at a case organization in the Fast Moving Consumer Goods industry sector that had implemented an SAP Supply Chain Management System seven years earlier. The study develops an SCMS Benefit Expectation Management (SCMS-BEM) Framework. The comparison of benefit expectations and confirmations highlights that, while certain benefits are realized earlier in the lifecycle, other benefits could take almost a decade to realize. Further analysis and discussion of how the developed SCMS-BEM Framework influences ECT when applied to SCMS were also conducted. It is recommended that when establishing their expectations of the SCMS, clients should remember that confirmation of these expectations will have a long lifecycle, as shown in the different time periods in the SCMS-BEM Framework. Moreover, the SCMS-BEM Framework will allow organizations to maintain high levels of satisfaction through carefully managing and confirming expectations based on the lifecycle phase. In addition, the study reveals that different stakeholder groups have different expectations of the same SCMS. The perspective of multiple stakeholders has significant implications for the application of ECT in the SCMS context. When forming expectations of the SCMS, the collection of organizational benefits of SCMS should represent the perceptions of all stakeholder groups.
The same mechanism should be employed in the measurement of received SCMS benefits. Moreover, for SCMS, there exists interdependence of satisfaction among the various stakeholders. The satisfaction of decision-makers or authorized staff is driven not only by their own expectation confirmation level but also by the confirmation level of other stakeholders' expectations in the organization. Satisfaction from any one particular stakeholder group cannot reflect the true satisfaction of the client organization. Furthermore, it is inferred from the SCMS-BEM Framework that organizations should place emphasis on the viewpoints of operational and management staff when evaluating the benefits of SCMS in the short and middle term. At the same time, organizations should pay more attention to the perspectives of strategic staff when evaluating the performance of the SCMS in the long term.
Abstract:
The conversion of tamarind seeds into pyrolytic oil in a fixed-bed fire-tube heating reactor is investigated in this study. The major components of the system were the fixed-bed fire-tube heating reactor, a liquid condenser and collectors. Raw and crushed tamarind seed in particle form was pyrolyzed in an electrically heated fixed-bed reactor 10 cm in diameter and 27 cm high. The products are oil, char and gases. The parameters varied were reactor bed temperature, running time, gas flow rate and feed particle size, and they were found to influence the product yields significantly. The maximum liquid yield was 45 wt% at 400°C for a feed size of 1.07 cm³ at a gas flow rate of 6 liters/min with a running time of 30 minutes. The pyrolysis oil obtained at these optimum process conditions was analyzed for physical and chemical properties for use as an alternative fuel.
Abstract:
The standard Exeter stem has a length of 150mm with offsets of 37.5mm to 56mm. Shorter stems of lengths 95mm, 115mm and 125mm with offsets of 35.5mm or less are available for patients with smaller femurs. Concern has been raised regarding the behaviour of the smaller implants. This paper analysed data from the Australian Orthopaedic Association National Joint Replacement Registry comparing the survivorship of stems with offsets of 35.5mm or less against the standard stems with offsets of 37.5mm or greater. At seven years there was no significant difference in the Cumulative Percent Revision Rate of the short stems (3.4%, 95% CI 2.4-4.8%) compared with the standard length stems (3.5%, 95% CI 3.3-3.8%), despite their use in a greater proportion of potentially more difficult developmental dysplasia of the hip cases.
Abstract:
Multiple choice (MC) examinations are frequently used for the summative assessment of large classes because of their ease of marking and their perceived objectivity. However, traditional MC formats usually lead to a surface approach to learning, and do not allow students to demonstrate the depth of their knowledge or understanding. For these reasons, we have trialled the incorporation of short answer (SA) questions into the final examination of two first year chemistry units, alongside MC questions. Students’ overall marks were expected to improve, because they were able to obtain partial marks for the SA questions. Although large differences in some individual students’ performance in the two sections of their examinations were observed, most students received a similar percentage mark for their MC as for their SA sections and the overall mean scores were unchanged. In-depth analysis of all responses to a specific question, which was used previously as a MC question and in a subsequent semester in SA format, indicates that the SA format can have weaknesses due to marking inconsistencies that are absent for MC questions. However, inclusion of SA questions improved student scores on the MC section in one examination, indicating that their inclusion may lead to different study habits and deeper learning. We conclude that questions asked in SA format must be carefully chosen in order to optimise the use of marking resources, both financial and human, and questions asked in MC format should be very carefully checked by people trained in writing MC questions. These results, in conjunction with an analysis of the different examination formats used in first year chemistry units, have shaped a recommendation on how to reliably and cost-effectively assess first year chemistry, while encouraging higher order learning outcomes.
Abstract:
Proving security of cryptographic schemes, which normally are short algorithms, has been known to be time-consuming and easy to get wrong. Using computers to analyse their security can help to solve the problem. This thesis focuses on methods of using computers to verify security of such schemes in cryptographic models. The contributions of this thesis to automated security proofs of cryptographic schemes can be divided into two groups: indirect and direct techniques. Regarding indirect ones, we propose a technique to verify the security of public-key-based key exchange protocols. Security of such protocols can already be proved automatically using an existing tool, but only in a non-cryptographic model. We show that under some conditions, security in that non-cryptographic model implies security in a common cryptographic one, the Bellare-Rogaway model [11]. The implication enables one to use that existing tool, which was designed to work with a different type of model, in order to achieve security proofs of public-key-based key exchange protocols in a cryptographic model. For direct techniques, we have two contributions. The first is a tool to verify Diffie-Hellman-based key exchange protocols. In that work, we design a simple programming language for specifying Diffie-Hellman-based key exchange algorithms. The language has a semantics based on a cryptographic model, the Bellare-Rogaway model [11]. From the semantics, we build a Hoare-style logic which allows us to reason about the security of a key exchange algorithm, specified as a pair of initiator and responder programs. The other contribution in the direct line is on automated proofs for computational indistinguishability. Unlike the two other contributions, this one does not treat a fixed class of protocols. We construct a generic formalism which allows one to model the security problem of a variety of classes of cryptographic schemes as the indistinguishability between two pieces of information.
We also design and implement an algorithm for solving indistinguishability problems. Compared to the two other works, this one covers significantly more types of schemes, but consequently, it can verify only weaker forms of security.
Abstract:
Purpose: The aim of this cross-over study was to investigate the changes in corneal thickness, anterior and posterior corneal topography, corneal refractive power and ocular wavefront aberrations, following the short-term use of rigid contact lenses. Method: Fourteen participants wore 4 different types of contact lenses (RGP lenses of 9.5 mm and 10.5 mm diameter, and for comparison a PMMA lens of 9.5 mm diameter and a soft silicone hydrogel lens) on 4 different days for a period of 8 h on each day. Measures were collected before and after contact lens wear and additionally on a baseline day. Results: Anterior corneal curvature generally showed a flattening with both of the RGP lenses and a steepening with the PMMA lens. A significant negative correlation was found between the change in corneal swelling and central and peripheral posterior corneal curvature (all p ≤ 0.001). RGP contact lenses caused a significant decrease in corneal refractive power (hyperopic shift) of approximately 0.5 D. The PMMA contact lenses caused the greatest corneal swelling in both the central (27.92 ± 15.49 μm, p < 0.001) and peripheral (17.78 ± 12.11 μm, p = 0.001) corneal regions, a significant flattening of the posterior cornea and an increase in ocular aberrations (all p ≤ 0.05). Conclusion: The corneal swelling associated with RGP lenses was relatively minor, but there was slight central corneal flattening and a clinically significant hyperopic change in corneal refractive power after the first day of lens wear. The PMMA contact lenses resulted in significant corneal swelling and reduced optical performance of the cornea.
Abstract:
The rank transform is a non-parametric transform which has been applied to the stereo matching problem. The advantages of this transform include its invariance to radiometric distortion and its amenability to hardware implementation. This paper describes the derivation of the rank constraint for matching using the rank transform. Previous work has shown that this constraint is capable of resolving ambiguous matches, thereby improving match reliability, and a new matching algorithm incorporating this constraint was also proposed. This paper extends that previous work by proposing a matching algorithm which uses a match surface in which the match score is computed for every possible template and match window combination. The principal advantage of this algorithm is that the use of the match surface enforces the left-right consistency and uniqueness constraints, thus improving the algorithm's ability to remove invalid matches. Experimental results for a number of test stereo pairs show that the new algorithm is capable of identifying and removing a large number of incorrect matches, particularly in the case of occlusions.
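The rank transform and the match-surface idea can be illustrated on a synthetic scanline. The window sizes, the toy images and the scoring (sum of absolute rank differences) are assumptions; only the structure follows the description above: score every (pixel, disparity) pair, then keep only matches that win the left-right/uniqueness check.

```python
import random

random.seed(2)

def rank_transform(img, r=1):
    # Rank transform: each pixel becomes the number of neighbours in the
    # (2r+1)x(2r+1) window that are darker than the centre pixel.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            out[y][x] = sum(img[y + dy][x + dx] < img[y][x]
                            for dy in range(-r, r + 1)
                            for dx in range(-r, r + 1)
                            if (dy, dx) != (0, 0))
    return out

def sad(a, b, y, xa, xb, win):
    # Sum of absolute differences between two windows of rank values.
    return sum(abs(a[y + dy][xa + dx] - b[y + dy][xb + dx])
               for dy in range(-win, win + 1)
               for dx in range(-win, win + 1))

def scanline_disparities(left_rt, right_rt, y, max_d, win=1):
    w = len(left_rt[0])
    xs = range(win + max_d, w - win)
    # Match surface: a score for every (pixel, disparity) combination.
    surf = {(x, d): sad(left_rt, right_rt, y, x, x - d, win)
            for x in xs for d in range(max_d + 1)}
    best = {x: min(range(max_d + 1), key=lambda d: surf[(x, d)]) for x in xs}
    # Uniqueness / left-right consistency: each right pixel may support only
    # the left pixel that matches it with the lowest score on the surface.
    consistent = {}
    for x, d in best.items():
        rivals = [xl for xl, dl in best.items() if xl - dl == x - d]
        if min(rivals, key=lambda xl: surf[(xl, best[xl])]) == x:
            consistent[x] = d
    return consistent

# Demo: a random left image and a right image shifted by a known disparity.
h, w, true_d = 10, 20, 2
left = [[random.randrange(256) for _ in range(w)] for _ in range(h)]
right = [[left[y][min(x + true_d, w - 1)] for x in range(w)] for y in range(h)]
disp = scanline_disparities(rank_transform(left), rank_transform(right),
                            y=5, max_d=3)
```

On this noise-free pair, almost every surviving pixel recovers the true disparity of 2; invalid candidates are pruned by the consistency check rather than by a threshold.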
Abstract:
This study examined physiological and performance effects of pre-cooling on medium-fast bowling in the heat. Ten medium-fast bowlers completed two randomised trials involving either cooling (mixed-methods) or control (no cooling) interventions before a 6-over bowling spell in 31.9±2.1°C and 63.5±9.3% relative humidity. Measures included bowling performance (ball speed, accuracy and run-up speeds), physical characteristics (global positioning system monitoring and counter-movement jump height), physiological (heart rate, core temperature, skin temperature and sweat loss), biochemical (serum markers of damage, stress and inflammation) and perceptual variables (perceived exertion and thermal sensation). Mean ball speed (114.5±7.1 vs. 114.1±7.2 km · h−1; P = 0.63; d = 0.09), accuracy (43.1±10.6 vs. 44.2±12.5 AU; P = 0.76; d = 0.14) and total run-up speed (19.1±4.1 vs. 19.3±3.8 km · h−1; P = 0.66; d = 0.06) did not differ between pre-cooling and control respectively; however 20-m sprint speed between overs was 5.9±7.3% greater at Over 4 after pre-cooling (P = 0.03; d = 0.75). Pre-cooling reduced skin temperature after the intervention period (P = 0.006; d = 2.28), core temperature and pre-over heart rates throughout (P = 0.01−0.04; d = 0.96−1.74) and sweat loss by 0.4±0.3 kg (P = 0.01; d = 0.34). Mean rating of perceived exertion and thermal sensation were lower during pre-cooling trials (P = 0.004−0.03; d = 0.77−3.13). Despite no observed improvement in bowling performance, pre-cooling maintained between-over sprint speeds and blunted physiological and perceptual demands to ease the thermoregulatory demands of medium-fast bowling in hot conditions.
Abstract:
This investigation examined physiological and performance effects of cooling on recovery of medium-fast bowlers in the heat. Eight medium-fast bowlers completed two randomised trials involving two sessions completed on consecutive days (Session 1: 10-overs and Session 2: 4-overs) in 31 ± 3°C and 55 ± 17% relative humidity. Recovery interventions were administered for 20 min (mixed-method cooling vs. control) after Session 1. Measures included bowling performance (ball speed, accuracy, run-up speeds), physical demands (global positioning system, counter-movement jump), physiological (heart rate, core temperature, skin temperature, sweat loss), biochemical (creatine kinase, C-reactive protein) and perceptual variables (perceived exertion, thermal sensation, muscle soreness). Mean ball speed was higher after cooling in Session 2 (118.9 ± 8.1 vs. 115.5 ± 8.6 km · h−1; P = 0.001; d = 0.67), reducing declines in ball speed between sessions (0.24 vs. −3.18 km · h−1; P = 0.03; d = 1.80). Large effects indicated higher accuracy in Session 2 after cooling (46.0 ± 11.2 vs. 39.4 ± 8.6 arbitrary units [AU]; P = 0.13; d = 0.93) without affecting total run-up speed (19.0 ± 3.1 vs. 19.0 ± 2.5 km · h−1; P = 0.97; d = 0.01). Cooling reduced core temperature, skin temperature and thermal sensation throughout the intervention (P = 0.001–0.05; d = 1.31–5.78) and attenuated creatine kinase (P = 0.04; d = 0.56) and muscle soreness at 24-h (P = 0.03; d = 2.05). Accordingly, mixed-method cooling can reduce thermal strain after a 10-over spell and improve markers of muscular damage and discomfort alongside maintained medium-fast bowling performance on consecutive days in hot conditions.
Abstract:
The mining environment, being complex, irregular and time varying, presents a challenging prospect for stereo vision. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This paper assesses the suitability of a number of matching techniques for use in a stereo vision sensor for close-range scenes consisting primarily of rocks. These include traditional area-based matching metrics, and non-parametric transforms, in particular, the rank and census transforms. Experimental results show that the rank and census transforms exhibit a number of clear advantages over area-based matching metrics, including their low computational complexity, and robustness to certain types of distortion.
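A sketch of the census transform and the property that motivates it for mining scenes: any monotone intensity change (for example a gain/offset from uneven lighting) leaves the signatures unchanged, so matching by Hamming distance is robust to radiometric distortion. The window size and test image below are illustrative assumptions.

```python
def census_transform(img, r=1):
    # Census transform: a bit mask of "neighbour darker than centre" tests
    # over the (2r+1)x(2r+1) window; windows are later compared with the
    # Hamming distance between bit masks.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            bits = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if (dy, dx) != (0, 0):
                        bits = (bits << 1) | (img[y + dy][x + dx] < img[y][x])
            out[y][x] = bits
    return out

def hamming(a, b):
    # Dissimilarity between two census signatures: number of differing bits.
    return bin(a ^ b).count("1")

# Radiometric robustness: a gain/offset change preserves pixel orderings,
# so the census image is identical for both exposures.
img = [[(3 * x + 5 * y) % 17 for x in range(8)] for y in range(6)]
brighter = [[2 * v + 10 for v in row] for row in img]
```

The transform's low computational cost comes from the same structure: only comparisons and bit operations per pixel, no multiplications.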
Abstract:
A fundamental problem faced by stereo matching algorithms is the matching or correspondence problem, and a wide range of algorithms have been proposed for it. For any matching algorithm, it would be useful to be able to compute a measure of the probability of correctness, or reliability, of a match. This paper focuses in particular on one class of matching algorithms, those based on the rank transform. The interest in these algorithms for stereo matching stems from their invariance to radiometric distortion and their amenability to fast hardware implementation. This work differs from previous work in that it derives, from first principles, an expression for the probability of a correct match. The method is based on an enumeration of all possible symbols for matching. The theoretical results for disparity error prediction obtained using this method were found to agree well with experimental results. However, disadvantages of the technique are that it is not easily applicable to real images, and that it is too computationally expensive for practical window sizes. Nevertheless, the exercise provides an interesting and novel analysis of match reliability.
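The flavour of the enumeration argument can be shown with a deliberately tiny symbol space; the sizes below are assumptions for illustration, not the paper's. The idea is to count, over every possible template/candidate window pair, how often a wrong candidate ties the perfect match score, which also makes the combinatorial cost for realistic window sizes obvious.

```python
from itertools import product

# Each match window is WIN symbols; each symbol is a rank-transform value for
# a 3-pixel 1-D neighbourhood, so it ranges over 0..3. Illustrative sizes only.
SYMBOLS = range(4)
WIN = 3

def tie_probability():
    # Enumerate every (template, candidate) window pair and count candidates
    # that tie the perfect SAD score of 0, i.e. exact duplicates that make
    # the match ambiguous. Cost is |SYMBOLS|^(2*WIN): exponential in window
    # size, which is why the full technique is expensive in practice.
    windows = list(product(SYMBOLS, repeat=WIN))
    ties = sum(1 for t in windows for c in windows if t == c)
    return ties / len(windows) ** 2

def p_correct(n_distractors):
    # Probability that no distractor window ties the true match, assuming
    # independent, uniformly distributed symbols.
    return (1 - tie_probability()) ** n_distractors
```

With 4 symbols and 3-symbol windows, a uniformly random distractor ties with probability 1/64, and `p_correct` falls as the disparity search range (the number of distractors) grows.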
Abstract:
Deciding the appropriate population size and number of islands for distributed island-model genetic algorithms is often critical to the algorithm's success. This paper outlines a method that automatically searches for good combinations of island population sizes and the number of islands. The method is based on a race between competing parameter sets, and collaborative seeding of new parameter sets. This method is applicable to any problem, and makes distributed genetic algorithms easier to use by reducing the number of user-set parameters. The experimental results show that the proposed method robustly and reliably finds population and island settings that are comparable to those found with traditional trial-and-error approaches.
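A compressed sketch of the racing idea, assuming OneMax as the test problem and invented perturbation rules for the collaborative seeding; note this sketch does not equalise the evaluation budget across parameter sets, which a fair race would.

```python
import random

random.seed(3)

def onemax(bits):
    return sum(bits)

def island_ga(n_islands, pop_size, n_bits=40, gens=30, migrate_every=10):
    # Minimal island model: independent populations with periodic ring
    # migration of each island's best individual.
    islands = [[[random.randint(0, 1) for _ in range(n_bits)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for g in range(1, gens + 1):
        for k, isl in enumerate(islands):
            isl.sort(key=onemax, reverse=True)
            keep = isl[: max(2, pop_size // 2)]
            children = []
            while len(keep) + len(children) < pop_size:
                p1, p2 = random.sample(keep, 2)
                cut = random.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]        # one-point crossover
                if random.random() < 0.2:          # bit-flip mutation
                    child[random.randrange(n_bits)] ^= 1
                children.append(child)
            islands[k] = keep + children
        if g % migrate_every == 0:
            bests = [max(isl, key=onemax) for isl in islands]
            for k in range(n_islands):
                islands[k][-1] = bests[(k - 1) % n_islands][:]
    return max(onemax(ind) for isl in islands for ind in isl)

def race(candidates, layers=3, runs=2):
    # Race the (n_islands, pop_size) parameter sets: keep the better half,
    # then seed replacements by blending and perturbing pairs of winners
    # (the "collaborative seeding" step, with invented perturbation rules).
    for _ in range(layers):
        scored = sorted(candidates, reverse=True,
                        key=lambda c: sum(island_ga(*c) for _ in range(runs)))
        winners = scored[: max(1, len(scored) // 2)]
        seeded = []
        while len(winners) + len(seeded) < len(candidates):
            a, b = random.choice(winners), random.choice(winners)
            seeded.append(
                (max(1, (a[0] + b[0]) // 2 + random.choice((-1, 0, 1))),
                 max(2, (a[1] + b[1]) // 2 + random.choice((-2, 0, 2)))))
        candidates = winners + seeded
    return candidates[0]
```

With a handful of initial guesses, `race([(1, 12), (2, 6), (3, 4), (6, 2)])` returns the surviving (number of islands, island population size) pair, replacing the usual trial-and-error tuning.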