825 results for DELAY
Abstract:
In public places, crowd size may be an indicator of congestion, delay, instability, or of abnormal events, such as a fight, riot or emergency. Crowd related information can also provide important business intelligence such as the distribution of people throughout spaces, throughput rates, and local densities. A major drawback of many crowd counting approaches is their reliance on large numbers of holistic features, their need for hundreds or thousands of training frames per camera, and the requirement that each camera be trained separately. This makes deployment in large multi-camera environments such as shopping centres very costly and difficult. In this chapter, we present a novel scene-invariant crowd counting algorithm that uses local features to monitor crowd size. The use of local features allows the proposed algorithm to calculate local occupancy statistics, scale to conditions which are unseen in the training data, and be trained on significantly less data. Scene invariance is achieved through the use of camera calibration, allowing the system to be trained on one or more viewpoints and then deployed on any number of new cameras for testing without further training. A pre-trained system could then be used as a ‘turn-key’ solution for crowd counting across a wide range of environments, eliminating many of the costly barriers to deployment which currently exist.
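The scene-invariance idea can be sketched as follows. This is a minimal illustration only: the blob-area feature, the per-pixel calibration term and the regression coefficient below are hypothetical stand-ins, as the chapter's actual local-feature set and regression model are richer than this.

```python
# Sketch: camera calibration maps image-space features to ground-plane units,
# so a count model trained on one viewpoint can transfer to another.

def normalised_feature(pixel_area, m2_per_px):
    # Convert a blob's pixel area into a calibrated, scene-invariant area (m^2).
    return pixel_area * m2_per_px

# Hypothetical regression coefficient: people per square metre of blob area,
# assumed to have been learned from the training viewpoints.
PEOPLE_PER_M2 = 0.45

def estimate_count(blobs):
    """blobs: list of (pixel_area, m2_per_px at the blob's image position)."""
    return sum(normalised_feature(a, s) * PEOPLE_PER_M2 for a, s in blobs)

# Two blobs seen by a new, unseen camera; only its calibration is needed.
crowd = estimate_count([(5000, 0.001), (3200, 0.002)])
```

Because the regression operates on calibrated (metric) quantities rather than raw pixels, the same coefficients apply to any camera for which a calibration is available, which is what removes the per-camera training cost.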
Abstract:
In 1990 the Dispute Resolution Centres Act, 1990 (Qld) (the Act) was passed by the Queensland Parliament. In the second reading speech for the Dispute Resolution Centres Bill in May 1990 the Hon Dean Wells stated that the proposed legislation would make mediation services available “in a non-coercive, voluntary forum where, with the help of trained mediators, the disputants will be assisted towards their own solutions to their disputes, thereby ensuring that the result is acceptable to the parties” (Hansard, 1990, 1718). It was recognised at that time that a method was necessary for resolving disputes for which “the conventional court system is not always equipped to provide lasting resolution” (Hansard, 1990, 1717). In particular, the lasting resolution of “disputes between people in continuing relationships” was seen as made possible through the new legislation; for example, “domestic disputes, disputes between employees, and neighbourhood disputes relating to such issues as overhanging tree branches, dividing fences, barking dogs, smoke, noise and other nuisances are occurring continually in the community” (Hansard, 1990, 1717). The key features of the proposed form of mediation in the Act were articulated as follows: “attendance of both parties at mediation sessions is voluntary; a party may withdraw at any time; mediation sessions will be conducted with as little formality and technicality as possible; the rules of evidence will not apply; any agreement reached is not enforceable in any court; although it could be made so if the parties chose to proceed that way; and the provisions of the Act do not affect any rights or remedies that a party to a dispute has apart from the Act” (Hansard, 1990, 1718).
Since the introduction of the Act, the Alternative Dispute Resolution Branch of the Queensland Department of Justice and Attorney General has offered mediation services through, first, the Community Justice Program (CJP), and then the Dispute Resolution Centres (DRCs), for a range of family, neighbourhood, workplace and community disputes. These services have mirrored those available through similar government agencies in other states, such as the Community Justice Centres of NSW and the Victorian Dispute Resolution Centres. Since 1990, mediation has become one of the fastest growing forms of alternative dispute resolution (ADR). Sourdin has commented that “In addition to the growth in court-based and community-based dispute resolution schemes, ADR has been institutionalised and has grown within Australia and overseas” (2005, 14). In Australia, in particular, the development of ADR service provision “has been assisted by the creation and growth of professional organisations such as the Leading Edge Alternative Dispute Resolvers (LEADR), the Australian Commercial Dispute Centres (ACDC), Australian Disputes Resolution Association (ADRA), Conflict Resolution Network, and the Institute of Arbitrators and Mediators Australia (IAMA)” (Sourdin, 2005, 14). The increased emphasis on the use of ADR within education contexts (particularly secondary and tertiary contexts) has “also led to an increasing acceptance and understanding of (ADR) processes” (Sourdin, 2005, 14). Proponents of the mediation process, in particular, argue that much of its success derives from the inherent flexibility and creativity of the agreements reached through the mediation process, and that it is a relatively low cost option in many cases (Menkel-Meadow, 1997, 417).
It is also accepted that one of the main reasons for the success of mediation is the high level of participation by the parties involved, which creates a sense of ownership of, and commitment to, the terms of the agreement (Boulle, 2005, 65). These characteristics are associated with some of the core values of mediation, particularly as practised in community-based models such as those found at the DRCs. These core values include voluntary participation, party self-determination and party empowerment (Boulle, 2005, 65). For this reason, mediation is argued to be an effective approach to resolving disputes, one that creates a lasting resolution of the issues. Evaluation of the mediation process, particularly in the context of the growth of ADR, has been an important aspect of the development of the process (Sourdin, 2008). Writing in 2005, for example, Boulle states that “although there is a constant refrain for more research into mediation practice, there has been a not insignificant amount of mediation measurement, both in Australia and overseas” (Boulle, 2005, 575). The positive claims of mediation have been supported to a significant degree by evaluations of the efficiency and effectiveness of the process. A common indicator of the effectiveness of mediation is the settlement rate achieved. High settlement rates for mediated disputes have been found in Australia (Altobelli, 2003) and internationally (Alexander, 2003). Boulle notes that mediation agreement rates claimed by service providers range from 55% to 92% (Boulle, 2005, 590). The annual reports of the Alternative Dispute Resolution Branch of the Queensland Department of Justice and Attorney-General considered prior to the commencement of this study generally indicated an approximate settlement rate of 86% for the Queensland Dispute Resolution Centres. More recently, the 2008-2009 annual report states that of the 2291 civil disputes mediated in 2007-2008, 86% reached an agreement.
Further, of the 2693 civil disputes mediated in 2008-2009, 73% reached an agreement. These results are noted in the report as indicating “the effectiveness of mediation in resolving disputes” and as reflecting “the high level of agreement achieved for voluntary mediations” (Annual Report, 2008-2009, online). Whilst the settlement rates for the DRCs are strong, parties are rarely contacted for long-term follow-up to assess whether agreements reached during mediation lasted to the satisfaction of each party. It has certainly been the case that the Dispute Resolution Centres of Queensland have not been resourced to conduct long-term follow-up assessments of mediation agreements. As Wade notes, “it is very difficult to compare ‘success’ rates” and whilst “politicians want the comparison studies (they) usually do not want the delay and expense of accurate studies” (1998, 114). To date, therefore, it is fair to say that the efficiency of the mediation process has been evaluated, but not necessarily its effectiveness. Rather, the practice at the Queensland DRCs has been to evaluate the quality of mediation service provision and of the practice of the mediation process. This has occurred, for example, through follow-up surveys of parties' satisfaction with the mediation service. In most other respects it is fair to say that the Centres have relied on the high settlement rates of the mediation process as a sign of the effectiveness of mediation (Annual Reports 1991-2010). A review of the mediation literature conducted for the purpose of this thesis has also indicated that there is little evaluative literature providing an in-depth analysis and assessment of the longevity of mediated agreements.
Instead evaluative studies of mediation tend to assess how mediation is conducted, or compare mediation with other conflict resolution options, or assess the agreement rate of mediations, including parties' levels of satisfaction with the service provision of the dispute resolution service provider (Boulle, 2005, Chapter 16).
Abstract:
Train delay is one of the most important indexes for evaluating the service quality of a railway. Because of movement interactions among trains, a delayed train may conflict with trains scheduled on other lines in a junction area. A train that loses such a conflict may be forced to stop or slow down by restrictive signals, which leads to a loss of run-time and may enlarge the delay further. This paper proposes a time-saving train control method to recover delays as soon as possible. In the proposed method, golden section search is adopted to identify the optimal train speed at the expected time at which the restrictive signal aspect upgrades, which enables the train to depart from the conflicting area as soon as possible. A heuristic method is then developed to attain an advisory train speed profile assisting drivers in train control. A simulation study indicates that, in case of disturbances at railway junctions, the proposed method enables the train to recover delays sooner than the traditional maximum traction strategy and the green wave strategy.
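Golden section search itself is a standard derivative-free method for minimising a unimodal function over an interval, which is how it can pick an optimal speed here. A minimal sketch follows; the `clearing_time` objective is a hypothetical stand-in, not the paper's actual junction-clearing criterion.

```python
import math

def golden_section_search(f, lo, hi, tol=1e-6):
    """Minimise a unimodal function f on [lo, hi] by golden section search."""
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    a, b = lo, hi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):
            # Minimum lies in [a, d]; shrink from the right.
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            # Minimum lies in [c, b]; shrink from the left.
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Hypothetical objective: time needed to clear the junction as a function of
# the train's speed (m/s) at the moment the restrictive signal upgrades.
clearing_time = lambda v: (v - 18.0) ** 2 + 42.0
v_opt = golden_section_search(clearing_time, 0.0, 40.0)  # converges near 18.0
```

Each iteration shrinks the search interval by the constant factor 1/phi while reusing one interior point, so the optimal speed is bracketed to the chosen tolerance with few objective evaluations, which suits an online advisory-speed computation.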
Abstract:
Networked control systems (NCSs) offer many advantages over conventional control; however, they also present challenging problems such as network-induced delay and packet losses. This paper proposes a predictive compensation approach for simultaneous network-induced delays and packet losses. Unlike the majority of existing NCS control methods, the proposed approach addresses co-design of both network and controller. It also relaxes the requirements for precise process models and full understanding of NCS network dynamics. For a series of possible sensor-to-actuator delays, the controller computes a series of corresponding redundant control values. Then, it sends out those control values in a single packet to the actuator. Upon receiving the control packet, the actuator measures the actual sensor-to-actuator delay and computes the control signals from the control packet. When packet dropout occurs, the actuator utilizes past control packets to generate an appropriate control signal. The effectiveness of the approach is demonstrated through examples.
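The redundant-control-packet idea described above can be sketched as follows. The scalar dynamics, the gain, and the candidate delay set are all assumptions for illustration; the paper's controller and prediction model are not reproduced here.

```python
# Sketch: the controller precomputes a control value for each candidate
# sensor-to-actuator delay (in sampling periods) and sends them all in one packet.

CANDIDATE_DELAYS = [0, 1, 2, 3]  # assumed set of possible delays (steps)

def make_control_packet(x, gain=0.8):
    """Hypothetical predictor: for delay k, predict the state k steps ahead
    under the closed loop x_{t+1} = (1 - gain) * x_t, then set u = -gain * x_pred."""
    packet = {}
    x_pred = x
    for k in CANDIDATE_DELAYS:
        packet[k] = -gain * x_pred
        x_pred *= (1 - gain)
    return packet

class Actuator:
    def __init__(self):
        self.last_packet = None

    def actuate(self, packet, measured_delay):
        if packet is not None:
            self.last_packet = packet      # fresh packet received
        if self.last_packet is None:
            return 0.0                     # nothing received yet
        # On dropout, fall back to the most recent packet; clamp the delay
        # index so a longer-than-planned delay still selects a value.
        k = min(measured_delay, max(self.last_packet))
        return self.last_packet[k]

act = Actuator()
pkt = make_control_packet(10.0)
u = act.actuate(pkt, measured_delay=2)         # normal operation
u_drop = act.actuate(None, measured_delay=1)   # packet dropped: reuse old packet
```

The design choice this illustrates is that the extra bandwidth of one slightly larger packet buys delay compensation at the actuator side, without the actuator needing a process model of its own.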
Abstract:
The increasing stock of aging office buildings will see significant growth in retrofitting projects in Australian capital cities. Stakeholders of refitting works will also need to take on the sustainability challenge and realize tangible outcomes through project delivery. Traditionally, decision making for aged buildings, when facing the alternatives, is typically economically driven and on an ad hoc basis. This leads to the tendency either to delay refitting for as long as possible, thus causing building conditions to deteriorate, or simply to demolish and rebuild with an unjust financial burden. The technologies involved are often limited to typical strip-clean and repartition with dry walls and office cubicles. Changing business operational patterns, the efficiency of office space, and the demand for an improved workplace environment will need more innovative and intelligent approaches to refurbishing office buildings. For example, such projects may need to respond to political, social, environmental and financial implications. There is a need for total consideration of building structural assessment; modelling of operating and maintenance costs; new architectural and engineering designs that maximise the utility of the existing structure and the resulting productivity improvement; and specific construction management procedures, including procurement methods, work flow and scheduling, and occupational health and safety. Recycling potential and conformance to codes may be other major issues. This paper introduces examples of Australian research projects which provide a more holistic approach to decision making for refurbishing office space, using appropriate building technologies and products, assessment of residual service life, floor space optimisation and project procurement in order to bring about sustainable outcomes.
The paper also discusses a specific case study on critical factors that influence key building components for these projects and issues for integrated decision support when dealing with the refurbishment, and indeed the “re-life”, of office buildings.
Abstract:
Core(polyvinyl neodecanoate-ethylene glycol dimethacrylate)-shell(polyvinyl alcohol) (core(P(VND-EGDMA))-shell(PVA)) microspheres were developed by seeded polymerization using conventional free radical and RAFT/MADIX-mediated polymerization. Poly(vinyl pivalate) (PVPi) was grafted onto microspheres prepared via suspension polymerization of vinyl neodecanoate and ethylene glycol dimethacrylate. The amount of grafted polymer was found to be independent of the technique used, with conventional free radical polymerization and MADIX polymerization resulting in similar shell thicknesses. The two systems (grafting via free radical polymerization and grafting via the MADIX process) were, however, found to follow slightly different kinetics. While free radical polymerization resulted in a weight gain linear with the monomer consumption in solution, growth in the MADIX-controlled system experienced a delay. The core-shell microspheres were obtained by hydrolysis of the poly(vinyl pivalate) surface-grafted brushes to form poly(vinyl alcohol). During hydrolysis the microspheres lost a significant amount of weight, consistent with the hydrolysis of 40–70% of all VPi units. Drug loading was found to be independent of the shell layer thickness, suggesting that drug loading is governed by the amount of bulk material. The shell layer does not appear to represent an obstacle to drug ingress. Cell testing using the colorectal cancer cell line HT-29 confirmed the biocompatibility of the empty microspheres, whereas the clofazimine-loaded particles led to 50% cell death, confirming the release of the drug.
Abstract:
Objectives: To investigate if low-dose lithium may counteract the microstructural and metabolic brain changes proposed to occur in individuals at ultra-high risk (UHR) for psychosis. Methods: Hippocampal T2 relaxation time (HT2RT) and proton magnetic resonance spectroscopy (1H-MRS) measurements were performed prior to initiation and following three months of treatment in 11 UHR patients receiving low-dose lithium and 10 UHR patients receiving treatment as usual (TAU). HT2RT and 1H-MRS percentage change scores between scans were compared using one-way ANOVA and correlated with behavioural change scores. Results: Low-dose lithium significantly reduced HT2RT compared to TAU (p=0.018). No significant group by time effects were seen for any brain metabolites as measured with 1H-MRS, although myo-inositol, creatine, choline-containing compounds and NAA increased in the group receiving low-dose lithium and decreased or remained unchanged in subjects receiving TAU. Conclusions: This pilot study suggests that low-dose lithium may protect the microstructure of the hippocampus in UHR states, as reflected by significantly decreased HT2RT. Larger-scale replication studies in UHR states using T2 relaxation time as a proxy for emerging brain pathology seem a feasible means to test neuroprotective strategies such as low-dose lithium as potential treatments to delay or even prevent the progression to full-blown disorder.
Abstract:
Deterministic transit capacity analysis applies to planning, design and operational management of urban transit systems. The Transit Capacity and Quality of Service Manual (1) and Vuchic (2, 3) enable transit performance to be quantified and assessed using transit capacity and productive capacity. This paper further defines important productive performance measures of an individual transit service and transit line. Transit work (p-km) captures the transit task performed over distance. Passenger transmission (p-km/h) captures the passenger task delivered by a service at speed. Transit productiveness (p-km/h) captures transit work performed over time. These measures are useful to operators in understanding their services' or systems' capabilities and passenger quality of service. This paper accounts for variability in demand utilized by passengers along a line, and for high passenger load conditions where passenger pass-up delay occurs. A hypothetical case study of an individual bus service's operation demonstrates the usefulness of passenger transmission in comparing existing and growth scenarios. A hypothetical case study of a bus line's operation during a peak hour window demonstrates the theory's usefulness in examining the contribution of individual services to line productive performance. Scenarios may be assessed using this theory to benchmark or compare lines and segments, to assess conditions, or to consider improvements.
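A small numerical sketch of these measures may help. The definitions below are assumed simplifications (sum of segment loads times segment lengths for transit work; work over journey time for productiveness; average load times an assumed in-motion operating speed for passenger transmission), and the segment data are hypothetical, not from the paper's case studies.

```python
# segments: (passengers_on_board, length_km) for each inter-stop segment
# of a single hypothetical bus service.
segments = [(20, 1.0), (35, 1.5), (30, 2.0), (15, 0.5)]
journey_time_h = 0.5          # end-to-end journey time, including dwell
operating_speed_kmh = 25.0    # assumed in-motion speed, excluding dwell/layover

transit_work = sum(p * d for p, d in segments)    # p-km: task over distance
line_length = sum(d for _, d in segments)         # km
productiveness = transit_work / journey_time_h    # p-km/h: work over time
avg_load = transit_work / line_length             # average passengers on board
passenger_transmission = avg_load * operating_speed_kmh  # p-km/h: task at speed
```

With these numbers the service performs 140 p-km of transit work; productiveness (280 p-km/h) reflects the whole journey including stops, while passenger transmission (700 p-km/h) reflects the rate at which the passenger task is delivered while the vehicle is moving, which is why the two p-km/h measures differ.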
Abstract:
Current research in secure messaging for Vehicular Ad hoc Networks (VANETs) appears to focus on employing a digital certificate-based Public Key Cryptosystem (PKC) to support security. The security overhead of such a scheme, however, creates a transmission delay and introduces a time-consuming verification process to VANET communications. This paper proposes a non-certificate-based public key management scheme for VANETs. A comprehensive evaluation of the performance and scalability of the proposed public key management regime is presented, and it is compared to a certificate-based PKC through a number of quantified analyses and simulations. Not only does this paper demonstrate that the proposal can maintain security, but it also asserts that it can improve overall performance and scalability at a lower cost, compared to the certificate-based PKC. It is believed that the proposed scheme will add a new dimension to the key management and verification services for VANETs.
Abstract:
Despite significant research into the development of efficient algorithms for three-carrier ambiguity resolution, the full performance potential of the additional frequency signals cannot be demonstrated effectively without actual triple-frequency data. In addition, all the proposed algorithms have shown difficulties in reliably resolving the medium-lane and narrow-lane ambiguities in various long-range scenarios. In this contribution, we investigate the effects of various distance-dependent biases, identifying the tropospheric delay as the key limitation for long-range three-carrier ambiguity resolution. In order to achieve reliable ambiguity resolution in regional networks with inter-station distances of hundreds of kilometers, a new geometry-free and ionosphere-free model is proposed to fix the integer ambiguities of the medium-lane or narrow-lane observables over just several minutes without distance constraint. Finally, a semi-simulation method is introduced to generate the third frequency signal from dual-frequency GPS data and experimentally demonstrate the research findings of this paper.
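The "lane" terminology refers to integer linear combinations of the carrier phases, whose effective wavelengths determine how easily the integer ambiguities can be fixed. A short sketch computing the standard combination wavelengths from the published GPS L1/L2/L5 frequencies follows; the paper's specific geometry-free, ionosphere-free model is not reproduced here.

```python
# Wavelengths of linear carrier-phase combinations for GPS triple-frequency data.
C = 299_792_458.0                              # speed of light, m/s
F1, F2, F5 = 1575.42e6, 1227.60e6, 1176.45e6   # GPS L1/L2/L5 carriers, Hz

def combo_wavelength(i, j, k):
    """Effective wavelength of the combination i*L1 + j*L2 + k*L5
    (i, j, k integer coefficients)."""
    return C / (i * F1 + j * F2 + k * F5)

lam_wl  = combo_wavelength(1, -1, 0)   # wide-lane, roughly 0.86 m
lam_ewl = combo_wavelength(0, 1, -1)   # extra-wide-lane, roughly 5.86 m
lam_nl  = combo_wavelength(1, 1, 0)    # narrow-lane, roughly 0.11 m
```

The ordering of these wavelengths is what drives cascaded ambiguity resolution: the extra-wide-lane, with its metre-level wavelength, is easy to fix even with large residual biases, while the narrow-lane's centimetre-level wavelength is the one that distance-dependent biases such as the tropospheric delay make hard to resolve over long baselines.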
Abstract:
Early-stage treatments for osteoarthritis are attracting considerable interest as a means to delay, or avoid altogether, the pain and lack of mobility associated with late-stage disease, and the considerable burden that it places on the community. With the development of these treatments comes a need to assess the tissue to which they are applied, both in trialling of new treatments and as an aid to clinical decision making. Here, we measure a range of mechanical indentation, ultrasound and near-infrared spectroscopy parameters in normal and osteoarthritic bovine joints in vitro to describe the role of different physical phenomena in disease progression, using this as a basis to investigate the potential value of the techniques as clinical tools. Based on 72 samples, we found that mechanical and ultrasound parameters showed differences between fibrillated tissue, macroscopically normal tissue in osteoarthritic joints, and normal tissue, yet were unable to differentiate degradation beyond what was visible to the naked eye. Near-infrared spectroscopy showed a clear progression of degradation across the visibly normal osteoarthritic joint surface and, as such, was the only technique considered useful for clinical application.
Abstract:
It is frequently reported that the actual weight loss achieved through exercise interventions is less than theoretically expected. Amongst other compensatory adjustments that accompany exercise training (e.g., increases in resting metabolic rate and energy intake), a possible cause of the less than expected weight loss is a failure to produce a marked increase in total daily energy expenditure due to a compensatory reduction in non-exercise activity thermogenesis (NEAT). Therefore, there is a need to understand how behaviour is modified in response to exercise interventions. The proposed benefits of exercise training are numerous, including changes to fat oxidation. Given that a diminished capacity to oxidise fat could be a factor in the aetiology of obesity, an exercise training intensity that optimises fat oxidation in overweight/obese individuals would improve impaired fat oxidation, and potentially reduce health risks that are associated with obesity. To improve our understanding of the effectiveness of exercise for weight management, it is important to ensure exercise intensity is appropriately prescribed, and to identify and monitor potential compensatory behavioural changes consequent to exercise training. In line with the gaps in the literature, three studies were performed. The aim of Study 1 was to determine the effect of acute bouts of moderate- and high-intensity walking exercise on NEAT in overweight and obese men. Sixteen participants performed a single bout of either moderate-intensity walking exercise (MIE) or high-intensity walking exercise (HIE) on two separate occasions. The MIE consisted of walking for 60-min on a motorised treadmill at 6 km.h-1. The 60-min HIE session consisted of walking in 5-min intervals at 6 km.h-1 and 10% grade followed by 5-min at 0% grade. NEAT was assessed by accelerometer three days before, on the day of, and three days after the exercise sessions. 
There was no significant difference in NEAT vector magnitude (counts.min-1) between the pre-exercise period (days 1-3) and the exercise day (day 4) for either protocol. In addition, there was no change in NEAT during the three days following the MIE session; however, NEAT increased by 16% on day 7 (post-exercise) compared with the exercise day (P = 0.32). During the post-exercise period following the HIE session, NEAT was increased by 25% on day 7 compared with the exercise day (P = 0.08), and by 30-33% compared with the pre-exercise period (day 1, day 2 and day 3; P = 0.03, 0.03 and 0.02, respectively). To conclude, a single bout of either MIE or HIE did not alter NEAT on the exercise day or on the first two days following the exercise session. However, extending the monitoring of NEAT allowed the detection of a 48-hour delay in increased NEAT after performing HIE. A longer-term intervention is needed to determine the effect of accumulated exercise sessions over a week on NEAT. In Study 2, there were two primary aims. The first aim was to test the reliability of a discontinuous incremental exercise protocol (DISCON-FATmax) to identify the workload at which fat oxidation is maximised (FATmax). Ten overweight and obese sedentary males (mean BMI of 29.5 ± 4.5 kg/m2 and mean age of 28.0 ± 5.3 y) participated in this study and performed two identical DISCON-FATmax tests one week apart. Each test consisted of alternate 4-min exercise and 2-min rest intervals on a cycle ergometer. The starting workload of 28 W was increased every 4-min using 14 W increments followed by 2-min rest intervals. When the respiratory exchange ratio was consistently >1.0, the workload was increased by 14 W every 2-min until volitional exhaustion. Fat oxidation was measured by indirect calorimetry. 
The mean FATmax, V̇O2peak, %V̇O2peak and %Wmax at which FATmax occurred during the two tests were 0.23 ± 0.09 and 0.18 ± 0.08 (g.min-1); 29.7 ± 7.8 and 28.3 ± 7.5 (ml.kg-1.min-1); 42.3 ± 7.2 and 42.6 ± 10.2 (%V̇O2max); and 36.4 ± 8.5 and 35.4 ± 10.9 (%), respectively. A paired-samples t-test revealed a significant difference in FATmax (g.min-1) between the tests (t = 2.65, P = 0.03). The mean difference in FATmax was 0.05 (g.min-1), with the 95% confidence interval ranging from 0.01 to 0.18. A paired-samples t-test, however, revealed no significant difference in the workloads (i.e. W) between the tests, t(9) = 0.70, P = 0.4. The intra-class correlation coefficient for FATmax (g.min-1) between the tests was 0.84 (95% confidence interval: 0.36-0.96, P < 0.01). However, Bland-Altman analysis revealed a large disagreement in FATmax (g.min-1) related to W between the two tests: 11 ± 14 (W) (4.1 ± 5.3 %V̇O2peak). These data demonstrate two important phenomena associated with exercise-induced substrate oxidation: firstly, that maximal fat oxidation derived from a discontinuous FATmax protocol differed statistically between repeated tests, and secondly, that there was large variability in the workload corresponding with FATmax. The second aim of Study 2 was to test the validity of the DISCON-FATmax protocol by comparing maximal fat oxidation (g.min-1) determined by DISCON-FATmax with fat oxidation (g.min-1) during a continuous exercise protocol using a constant load (CONEX). Ten overweight and obese sedentary males (BMI = 29.5 ± 4.5 kg/m2; age = 28.0 ± 4.5 y) with a V̇O2max of 29.1 ± 7.5 ml.kg-1.min-1 performed a DISCON-FATmax test consisting of alternate 4-min exercise and 2-min rest intervals on a cycle ergometer. The 1-h CONEX protocol used the workload from the DISCON-FATmax to determine FATmax. 
The mean FATmax, V̇O2max, %V̇O2max and workload at which FATmax occurred during the DISCON-FATmax were 0.23 ± 0.09 (g.min-1); 29.1 ± 7.5 (ml.kg-1.min-1); 43.8 ± 7.3 (%V̇O2max); and 58.8 ± 19.6 (W), respectively. The mean fat oxidation during the 1-h CONEX protocol was 0.19 ± 0.07 (g.min-1). A paired-samples t-test revealed no significant difference in fat oxidation (g.min-1) between DISCON-FATmax and CONEX, t(9) = 1.85, P = 0.097 (two-tailed). There was also no significant correlation in fat oxidation between the DISCON-FATmax and CONEX (R = 0.51, P = 0.14). Bland-Altman analysis revealed a large disagreement in fat oxidation between the DISCON-FATmax and CONEX; the upper limit of agreement was 0.13 (g.min-1) and the lower limit of agreement was −0.03 (g.min-1). These data suggest that the CONEX and DISCON-FATmax protocols did not elicit different rates of fat oxidation (g.min-1). However, the individual variability in fat oxidation was large, particularly in the DISCON-FATmax test. Further research is needed to ascertain the validity of graded exercise tests for predicting fat oxidation during constant-load exercise sessions. The aim of Study 3 was to compare the impact of two different intensities of four weeks of exercise training on fat oxidation, NEAT, and appetite in overweight and obese men. Using a cross-over design, 11 participants (BMI = 29 ± 4 kg/m2; age = 27 ± 4 y) took part in a training study and were randomly assigned initially to either [1] a low-intensity (45% V̇O2max) exercise protocol (LIT) or [2] a high-intensity interval exercise protocol (alternate 30 s at 90% V̇O2max followed by 30 s rest) (HIIT), each of 40-min duration, three times a week. Participants completed four weeks of supervised training and, between cross-over arms, had a two-week washout period. At baseline and at the end of each exercise intervention, V̇O2max, fat oxidation, and NEAT were measured. Fat oxidation was determined during a standard 30-min continuous exercise bout at 45% V̇O2max. 
During the steady-state exercise, expired gases were measured intermittently for 5-min periods and HR was monitored continuously. In each training period, NEAT was measured for seven consecutive days using an accelerometer (RT3) the week before, at week 3 and the week after training. Subjective appetite sensations and food preferences were measured immediately before and after the first exercise session every week for four weeks during both LIT and HIIT. The mean fat oxidation rate during the standard continuous exercise bout at baseline for both LIT and HIIT was 0.14 ± 0.08 (g.min-1). After four weeks of exercise training, the mean fat oxidation was 0.178 ± 0.04 and 0.183 ± 0.04 g.min-1 for LIT and HIIT, respectively. The mean NEAT (counts.min-1) was 45 ± 18 at baseline, 55 ± 22 and 44 ± 16 during training, and 51 ± 14 and 50 ± 21 after training for LIT and HIIT, respectively. There was no significant difference in fat oxidation between LIT and HIIT. Moreover, although not statistically significant, there was some evidence to suggest that LIT and HIIT tend to increase fat oxidation during exercise at 45% V̇O2max (P = 0.14 and 0.08, respectively). The order of training treatment did not significantly influence changes in fat oxidation, NEAT, and appetite. NEAT (counts.min-1) was not significantly different in the week following training for either LIT or HIIT. Although not statistically significant (P = 0.08), NEAT was 20% lower during week 3 of exercise training in HIIT compared with LIT. Examination of appetite sensations revealed differences in the intensity of hunger, with higher ratings after LIT compared with HIIT. No differences were found in preferences for high-fat sweet foods between LIT and HIIT. 
In conclusion, the results of this thesis suggest that while fat oxidation during steady-state exercise was not affected by the level of exercise intensity, there is some evidence to suggest that intense exercise could have a suppressive effect on NEAT.
Abstract:
A Multimodal Seaport Container Terminal (MSCT) is a complex system which requires careful planning and control in order to operate efficiently. It consists of a number of subsystems that require optimisation of the operations within them, as well as synchronisation of machines and containers between the various subsystems. Inefficiency in the terminal can delay ships from their scheduled timetables, as well as cause delays in delivering containers to their inland destinations, both of which can be very costly to their operators. The purpose of this PhD thesis is to use Operations Research methodologies to optimise and synchronise these subsystems as an integrated application. An initial model is developed for the overall MSCT; however, due to the large number of assumptions that had to be made, as well as other issues, it is found to be too inaccurate and infeasible for practical use. Instead, a method of developing models for each subsystem is proposed, such that the models can then be integrated with each other. Mathematical models are developed for the Storage Area System (SAS) and Intra-terminal Transportation System (ITTS). The SAS deals with the movement and assignment of containers to stacks within the storage area, both when they arrive and when they are rehandled to retrieve containers below them. The ITTS deals with scheduling the movement of containers and machines between the storage areas and other sections of the terminal, such as the berth and road/rail terminals. Various constructive heuristics are explored and compared for these models to produce good initial solutions for large-sized problems, which are otherwise impractical to compute by exact methods. These initial solutions are further improved through the use of an innovative hyper-heuristic algorithm that integrates the SAS and ITTS solutions together and optimises them through meta-heuristic techniques. 
The method by which the two models can interact with each other as an integrated system will be discussed, as well as how this method can be extended to the other subsystems of the MSCT.
Abstract:
This paper presents the benefits and issues related to travel time prediction on urban networks. Travel time information quantifies congestion and is perhaps the most important network performance measure. Travel time prediction has been an active area of research for the last five decades. Activities related to ITS have increased researchers' attention to better and more accurate real-time prediction of travel time. The majority of the literature on travel time prediction is applicable to freeways where, under non-incident conditions, traffic flow is not affected by external factors such as traffic control signals and opposing traffic flows. In urban environments the problem is more complicated, due to conflict areas (intersections), mid-link sources and sinks, etc., and needs to be addressed.