954 results for SIZE CONTROL


Relevance:

20.00%

Publisher:

Abstract:

There have been notable advances in learning to control complex robotic systems using methods such as Locally Weighted Regression (LWR). In this paper we explore some potential limits of LWR for robotic applications, particularly investigating its application to systems with a long horizon of temporal dependence. We define the horizon of temporal dependence as the delay from a control input to a desired change in output. LWR alone cannot be used in a temporally dependent system to find meaningful control values from only the current state variables and output, as the relationship between the input and the current state is under-constrained. By introducing a receding horizon of the future output states of the system, we show that sufficient constraint is applied to learn good solutions through LWR. The new method, Receding Horizon Locally Weighted Regression (RH-LWR), is demonstrated through one-shot learning on a real Series Elastic Actuator controlling a pendulum.
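The regression step underlying LWR can be sketched as a generic locally weighted linear fit with a Gaussian kernel. This is an illustrative sketch only (function names, bandwidth and toy data are assumptions), not the paper's RH-LWR implementation:

```python
import numpy as np

def lwr_predict(X, y, x_query, bandwidth=0.5):
    """Locally weighted linear regression: fit a weighted least-squares
    model centred on x_query and evaluate it there.
    Illustrative sketch only -- not the paper's RH-LWR method."""
    # Augment inputs with a bias column.
    Xa = np.column_stack([np.ones(len(X)), X])
    xq = np.concatenate([[1.0], np.atleast_1d(x_query)])
    # Gaussian kernel weights centred on the query point.
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    # Solve the weighted normal equations.
    W = np.diag(w)
    beta = np.linalg.solve(Xa.T @ W @ Xa, Xa.T @ W @ y)
    return xq @ beta

# Noisy samples of y = sin(x); the local fit tracks the curve near the query.
rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, size=(200, 1))
y = np.sin(X[:, 0]) + 0.01 * rng.normal(size=200)
pred = lwr_predict(X, y, np.array([np.pi / 2]), bandwidth=0.3)
```

Because each query point gets its own local fit, the model adapts to local structure without a global parametric form, which is what makes LWR attractive for learned robot control.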

Relevance:

20.00%

Publisher:

Abstract:

A bioassay technique, based on surface-enhanced Raman scattering (SERS) tagged gold nanoparticles encapsulated with a biotin-functionalised polymer, has been demonstrated through the spectroscopic detection of a streptavidin binding event. A methodical series of steps preceded these results: synthesis of nanoparticles that were found to give a reproducible SERS signal; design and synthesis of polymers with RAFT-functional end groups able to encapsulate the gold nanoparticle. The polymer also enabled the attachment of a biotin molecule functionalised so that it could be attached to the hybrid nanoparticle through a modular process. Finally, a positive bioassay was demonstrated for this model construct using streptavidin/biotin binding. The synthesis of silver and gold nanoparticles was performed using tri-sodium citrate as the reducing agent. The shape of the silver nanoparticles was quite difficult to control. Gold nanoparticles could be prepared in more regular (spherical) shapes and therefore gave a more consistent and reproducible SERS signal. The synthesis of gold nanoparticles with a diameter of 30 nm was the most reproducible, and these were also stable over the longest periods of time. From the SERS results the optimal size of gold nanoparticles was found to be approximately 30 nm. Obtaining a consistent SERS signal with nanoparticles smaller than this was particularly difficult. Nanoparticles more than 50 nm in diameter were too large to remain suspended for longer than a day or two and formed a precipitate, rendering the solutions useless for our desired application. Gold nanoparticles dispersed in water could be stabilised by the addition of as-synthesised polymers dissolved in a water-miscible solvent. Polymer-stabilised AuNPs could not be formed from polymers synthesised by conventional free radical polymerisation, i.e. polymers that did not possess a sulphur-containing end-group. This indicated that the sulphur-containing functionality present within the polymers was essential for the self-assembly process to occur. Polymer stabilisation of the gold colloid was evidenced by a range of techniques including visible spectroscopy, transmission electron microscopy, Fourier transform infrared spectroscopy, thermogravimetric analysis and Raman spectroscopy. After treatment of the hybrid nanoparticles with a series of SERS tags, focussing on 2-quinolinethiol, the SERS signals were found to have comparable intensity to those of the citrate-stabilised gold nanoparticles. This finding illustrates that the stabilisation process does not interfere with the ability of gold nanoparticles to act as substrates for the SERS effect. Incorporation of a biotin moiety into the hybrid nanoparticles was achieved through a 'click' reaction between an alkyne-functionalised polymer and an azido-functionalised biotin analogue. This functionalised biotin was prepared through a 4-step synthesis from biotin. Upon exposure of the surface-bound streptavidin to biotin-functionalised polymer hybrid gold nanoparticles, then washing, a SERS signal was obtained from the 2-quinolinethiol attached to the gold nanoparticles (positive assay). After exposure to functionalised polymer hybrid gold nanoparticles without biotin present, then washing, a SERS signal was not obtained, as the nanoparticles did not bind to the streptavidin (negative assay). These results illustrate the applicability of SERS-active functional-polymer encapsulated gold nanoparticles for bioassay applications.

Relevance:

20.00%

Publisher:

Abstract:

Signal Processing (SP) is a subject of central importance in engineering and the applied sciences. Signals are information-bearing functions, and SP deals with the analysis and processing of signals (by dedicated systems) to extract or modify information. Signal processing is necessary because signals normally contain information that is not readily usable or understandable, or which might be disturbed by unwanted sources such as noise. Although many signals are non-electrical, it is common to convert them into electrical signals for processing. Most natural signals (such as acoustic and biomedical signals) are continuous functions of time, and these signals are referred to as analog signals. Prior to the onset of digital computers, Analog Signal Processing (ASP) and analog systems were the only tools for dealing with analog signals. Although ASP and analog systems are still widely used, Digital Signal Processing (DSP) and digital systems are attracting more attention, due in large part to the significant advantages of digital systems over their analog counterparts. These advantages include superiority in performance, speed, reliability, efficiency of storage, size and cost. In addition, DSP can solve problems that cannot be solved using ASP, such as the spectral analysis of multicomponent signals, adaptive filtering, and operations at very low frequencies. Following the developments in engineering which occurred in the 1980s and 1990s, DSP became one of the world's fastest growing industries. Since that time DSP has not only had an impact on traditional areas of electrical engineering, but has had far-reaching effects on other domains that deal with information, such as economics, meteorology, seismology, bioengineering, oceanology, communications, astronomy, radar engineering, control engineering and various other applications. This book is based on the Lecture Notes of Associate Professor Zahir M. Hussain at RMIT University (Melbourne, 2001-2009), the research of Dr. Amin Z. Sadik (at QUT & RMIT, 2005-2008), and the notes of Professor Peter O'Shea at Queensland University of Technology. Part I of the book addresses the representation of analog and digital signals and systems in the time domain and in the frequency domain. The core topics covered are convolution, transforms (Fourier, Laplace, Z, Discrete-time Fourier, and Discrete Fourier), filters, and random signal analysis. There is also a treatment of some important applications of DSP, including signal detection in noise, radar range estimation, banking and financial applications, and audio effects production. Design and implementation of digital systems (such as integrators, differentiators, resonators and oscillators) are also considered, along with the design of conventional digital filters. Part I is suitable for an elementary course in DSP. Part II (which is suitable for an advanced signal processing course) considers selected signal processing systems and techniques. Core topics covered are the Hilbert transformer, binary signal transmission, phase-locked loops, sigma-delta modulation, noise shaping, quantization, adaptive filters, and non-stationary signal analysis. Part III presents some selected advanced DSP topics. We hope that this book will contribute to the advancement of engineering education and that it will serve as a general reference book on digital signal processing.
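As a flavour of the Part I material, the Discrete Fourier Transform can be written directly in a few lines and checked against NumPy's FFT. This is an illustrative sketch, not code from the book:

```python
import numpy as np

def dft(x):
    """Direct O(N^2) discrete Fourier transform, one of the Part I core
    topics; illustrative only, not taken from the book."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # DFT matrix
    return W @ x

x = np.array([1.0, 2.0, 3.0, 4.0])
X = dft(x)
ok = np.allclose(X, np.fft.fft(x))  # matches NumPy's FFT implementation
```

The direct matrix form makes the transform's definition explicit; in practice the FFT computes the same result in O(N log N).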

Relevance:

20.00%

Publisher:

Abstract:

While recent research has provided valuable information as to the composition of laser printer particles, their formation mechanisms, and explained why some printers are emitters whilst others are low emitters, fundamental questions relating to the potential exposure of office workers remained unanswered. In particular: (i) what impact does the operation of laser printers have on the background particle number concentration (PNC) of an office environment over the duration of a typical working day? (ii) what is the airborne particle exposure to office workers in the vicinity of laser printers? (iii) what influence does the office ventilation have upon the transport and concentration of particles? (iv) is there a need to control the generation and/or transport of particles arising from the operation of laser printers within an office environment? (v) what instrumentation and methodology is relevant for characterising such particles within an office location? We present experimental evidence on printer temporal and spatial PNC during the operation of 107 laser printers within open plan offices of five buildings. We show for the first time that the eight-hour time-weighted average printer particle exposure is significantly less than the eight-hour time-weighted local background particle exposure, but that peak printer particle exposure can be more than two orders of magnitude higher than local background particle exposure. The particle size range is predominantly ultrafine (< 100 nm diameter). In addition we have established that office workers are constantly exposed to non-printer derived particle concentrations, with up to an order of magnitude difference in such exposure amongst offices, and propose that such exposure be controlled along with exposure to printer-derived particles. We also propose, for the first time, that peak particle reference values be calculated for each office area, analogous to the criteria used in Australia and elsewhere for evaluating exposure excursions above occupational hazardous chemical exposure standards. A universal peak particle reference value of 2.0 × 10^4 particles cm^-3 has been proposed.
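The distinction drawn above between time-weighted average and peak exposure can be illustrated with a toy calculation. All concentrations and durations below are invented for illustration; they are not the study's measurements:

```python
def time_weighted_average(concentrations, durations_h, period_h=8.0):
    """Eight-hour time-weighted average exposure: concentration segments
    weighted by their durations over the working period.
    Illustrative values only -- not data from the study."""
    exposure = sum(c * t for c, t in zip(concentrations, durations_h))
    return exposure / period_h

# Background all day except a short printing burst two orders of magnitude
# higher: the burst dominates the peak but barely moves the 8-h TWA.
twa = time_weighted_average([2e3, 2e5, 2e3], [4.0, 0.1, 3.9])  # particles cm^-3
```

This is why a TWA criterion alone can miss short, intense printer emission events, motivating the separate peak reference value proposed above.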

Relevance:

20.00%

Publisher:

Abstract:

In summary, these results imply that the relationship of adiponectin with lipoproteins is more complex than previously predicted using other methods of lipoprotein fractionation. Adiponectin showed a higher correlation with large lipoprotein particle size, independent of the apolipoprotein content. Given the small population studied, we could not assess the influence of mild risk factors for venous thrombosis, such as obesity, on the analysis of the results. Thus, we can only state that adiponectin levels appear not to be a strong risk factor for VTE. It is possible that adiponectin deficiency may contribute indirectly to the etiology of VTE by enhancing the inflammatory state. © 2006 International Society on Thrombosis and Haemostasis.

Relevance:

20.00%

Publisher:

Abstract:

The emergence of Twenty20 cricket at the elite level has been marketed on the excitement of the big hitter, where it seems that winning is a result of the muscular batter hitting boundaries at will. This version of the game has captured the imagination of many young players who all want to score runs with “big hits”. However, in junior cricket, boundary hitting is often more difficult due to size limitations of children and games played on outfields where the ball does not travel quickly. As a result, winning is often achieved via a less spectacular route – by scoring more singles than your opponents. However, most standard coaching texts only describe how to play boundary scoring shots (e.g. the drives, pulls, cuts and sweeps) and defensive shots to protect the wicket. Learning to bat appears to have been reduced to extremes of force production, i.e. maximal force production to hit boundaries or minimal force production to stop the ball from hitting the wicket. Initially, this is not a problem because the typical innings of a young player (<12 years) would be based on the concept of “block” or “bash” – they “block” the good balls and “bash” the short balls. This approach works because there are many opportunities to hit boundaries off the numerous inaccurate deliveries of novice bowlers. Most runs are scored behind the wicket by using the pace of the bowler’s delivery to re-direct the ball, because the intrinsic dynamics (i.e. lack of strength) of most children means that they can only create sufficient power by playing shots where the whole body can contribute to force production. This method works well until the novice player comes up against more accurate bowling when they find they have no way of scoring runs. Once batters begin to face “good” bowlers, batters have to learn to score runs via singles. In cricket coaching manuals (e.g. 
ECB, n.d.), running between the wickets is treated as a separate task to batting, and the “basics” of running, such as how to “back up”, carry the bat, call, turn and slide the bat into the crease, are “drilled” into players. This task decomposition strategy focussing on techniques is a common approach to skill acquisition in many highly traditional sports, typified in cricket by activities where players hit balls off tees and receive “throw-downs” from coaches. However, the relative usefulness of these approaches in the acquisition of sporting skills is increasingly being questioned (Pinder, Renshaw & Davids, 2009). We will discuss why this is the case in the next section.

Relevance:

20.00%

Publisher:

Abstract:

Venous leg ulceration is a serious condition affecting 1–3% of the population. Decline in the function of the calf muscle pump is correlated with venous ulceration. Many previous studies have reported an improvement in the function of the calf muscle pump, endurance of the calf muscle and increased range of ankle motion after structured exercise programs. However, there is a paucity of published research assessing whether these improvements result in an improvement in the healing rates of venous ulcers. The primary purpose of this pilot study was to establish the feasibility of a home-based progressive resistance exercise program and examine whether there was any clinical significance or trend toward healing. The secondary aims were to examine the benefit of a home-based progressive resistance exercise program on calf muscle pump function and physical parameters. The methodology used was a randomised controlled trial in which eleven participants were randomised into an intervention (n = 6) or control group (n = 5). Participants who were randomised to receive a 12-week home-based progressive resistance exercise program were instructed through weekly face-to-face consultations during their wound clinic appointment by the author. Control group participants received standard wound care and compression therapy. Changes in ulcer parameters were measured fortnightly at the clinic (number healed at 12 weeks, percentage change in area and pressure ulcer healing score). An air plethysmography test was performed at baseline and following the 12 weeks of training to determine changes in calf muscle pump function. Functional measures included maximum number of heel raises (endurance), maximal isometric plantar flexion (strength) and range of ankle motion (ROAM); these tests were conducted at baseline, week 6 and week 12. The sample for the study was drawn from the Princess Alexandra Hospital in Brisbane, Australia.
Participants with venous leg ulceration who met the inclusion criteria were recruited. The participants were screened via duplex scanning and ankle brachial pressure index (ABPI) to ensure they did not have any arterial complications. Participants were excluded if there was evidence of cellulitis. Demographic data were obtained from each participant, and details regarding medical history, quality of life and geriatric depression scores were collected at baseline. Both the intervention and control group were required to complete a weekly exercise diary to monitor activity levels between groups. To test for the effect of the intervention over time, a repeated measures analysis of variance was conducted on the major outcome variables. Group (intervention versus control) was the between-subject factor and time (baseline, week 6, week 12) was the within-subject or repeated measures factor. Due to the small sample size, further tests were conducted to check the assumptions of the statistical test to be used. Mauchly's test showed that the sphericity assumption of repeated measures ANOVA was met. Further tests of homogeneity of variance also confirmed that this assumption was met. Data analysis was conducted using the software package SPSS for Windows Release 17.0. The pilot study proved feasible, with all of the intervention (n=6) participants continuing with the resistance program for the 12-week duration and no deleterious effects noted. Clinical significance was observed in the intervention group, with a 32% greater change in ulcer size (p = 0.26) than the control group, and a 10% (p = 0.74) greater difference in the number healed compared to the control group. Statistical significance was observed for the ejection fraction (p = 0.05), residual volume fraction (p = 0.04) and ROAM (p = 0.01), which all improved significantly in the intervention group over time.
These results are encouraging; nevertheless, further investigations seem warranted to examine the effect exercise has on the healing rates of venous leg ulcers, with multiple study sites, a larger sample size and a longer follow-up period.

Relevance:

20.00%

Publisher:

Abstract:

The gastrointestinal tract plays an important role in the improved appetite control and weight loss in response to bariatric surgery. Other strategies which similarly alter gastrointestinal responses to food intake could contribute to successful weight management. The aim of this review is to discuss the effects of surgical, pharmacological and behavioural weight loss interventions on gastrointestinal targets of appetite control, including gastric emptying. Gastrointestinal peptides are also discussed because of their integrative relationship in appetite control. This review shows that different strategies exert diverse effects and there is no consensus on the optimal strategy for manipulating gastric emptying to improve appetite control. Emerging evidence from surgical procedures (e.g., sleeve gastrectomy and Roux-en-Y gastric bypass) suggests a faster emptying rate and earlier delivery of nutrients to the distal small intestine may improve appetite control. Energy restriction slows gastric emptying, while the effect of exercise-induced weight loss on gastric emptying remains to be established. The limited evidence suggests that chronic exercise is associated with faster gastric emptying, which we hypothesise will impact on appetite control and energy balance. Understanding how behavioural weight loss interventions (e.g., diet and exercise) alter gastrointestinal targets of appetite control may be important to improve their success in weight management.

Relevance:

20.00%

Publisher:

Abstract:

In this paper, a comprehensive planning methodology is proposed that can minimize the line loss, maximize the reliability and improve the voltage profile in a distribution network. The injected active and reactive power of Distributed Generators (DG) and the installed capacitor sizes at different buses and for different load levels are optimally controlled. The tap setting of the HV/MV transformer, along with line and transformer upgrading, is also included in the objective function. A hybrid optimization method, called Hybrid Discrete Particle Swarm Optimization (HDPSO), is introduced to solve this nonlinear and discrete optimization problem. The proposed HDPSO approach is a developed version of DPSO in which the diversity of the optimizing variables is increased using genetic algorithm operators to avoid trapping in local minima. The objective function is composed of the investment cost of DGs, capacitors, distribution lines and the HV/MV transformer, the line loss, and the reliability. All of these elements are converted into dollar terms, so a single-objective optimization method is sufficient. The bus voltage and line current constraints are satisfied during the optimization procedure. The IEEE 18-bus test system is modified and employed to evaluate the proposed algorithm. The results illustrate the unavoidable need for optimal control of the DG active and reactive power and capacitors in distribution networks.
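For readers unfamiliar with the base technique, a minimal continuous-variable PSO loop looks like the following. This is a generic textbook sketch on a toy objective; the paper's HDPSO additionally handles discrete variables and injects diversity with genetic-algorithm operators:

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=200, seed=1):
    """Minimal particle swarm optimization sketch (continuous variables,
    not the paper's hybrid discrete variant). The inertia/cognitive/social
    coefficients are common illustrative choices."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n_particles, dim))  # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()
    pbest_f = np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Pull each particle toward its personal best and the global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Toy quadratic "cost" standing in for the planning objective.
best_x, best_f = pso_minimize(lambda p: float(np.sum((p - 2.0) ** 2)), dim=3)
```

In the paper's setting the decision vector would hold discrete quantities such as capacitor sizes and tap settings, which is what motivates the discrete and hybrid extensions.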

Relevance:

20.00%

Publisher:

Abstract:

The aim of the study is to establish optimum building aspect ratios and south window sizes of residential buildings from a thermal performance point of view. The effects of six different building aspect ratios and eight different south window sizes for each aspect ratio are analysed for apartments located at intermediate floors of buildings, with the aid of the computer-based thermal analysis program SUNCODE-PC, in five cities of Turkey: Erzurum, Ankara, Diyarbakir, Izmir, and Antalya. The results are evaluated in terms of annual energy consumption and the optimum values are derived. A comparison of optimum values and total energy consumption rates is made among the analysed cities.

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes a comprehensive approach to the planning of distribution networks and the control of microgrids. Firstly, a Modified Discrete Particle Swarm Optimization (MDPSO) method is used to optimally plan a distribution system upgrade over a 20-year planning period. The optimization is conducted at different load levels according to the anticipated load duration curve and integrated over the system lifetime in order to minimize its total lifetime cost. Since the optimal solution contains Distributed Generators (DGs) to maximize reliability, the DGs must be able to operate in islanded mode, and this leads to the concept of microgrids. Thus the second part of the paper reviews some of the challenges of microgrid control in the presence of both inertial (rotating, directly connected) and non-inertial (converter-interfaced) DGs. More specifically, enhanced control strategies based on frequency droop are proposed for DGs to improve smooth synchronization and real power sharing while minimizing transient oscillations in the microgrid. Simulation studies are presented to show the effectiveness of the control.
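The frequency-droop idea referred to above can be stated compactly: each DG lowers its frequency set-point linearly as its real power output rises, so units sharing a microgrid settle at a common frequency with droop-proportional power sharing. A minimal sketch, with an assumed nominal frequency and droop slope (not values from the paper):

```python
def droop_frequency(p_out, p_ref, f_nom=50.0, m=0.01):
    """Conventional P-f droop: frequency falls linearly as real power
    output rises above its reference. The slope m (Hz per kW here) and
    50 Hz nominal are illustrative assumptions."""
    return f_nom - m * (p_out - p_ref)

# Two DGs: the unit loaded above its reference settles at a lower
# frequency; in steady state, equal frequencies across the microgrid
# imply power sharing in proportion to the droop slopes.
f1 = droop_frequency(p_out=120.0, p_ref=100.0)  # loaded 20 kW above reference
f2 = droop_frequency(p_out=100.0, p_ref=100.0)  # at its reference
```

The enhanced strategies in the paper build on this static law to smooth synchronization transients, which the simple relation above does not capture.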

Relevance:

20.00%

Publisher:

Abstract:

A forced landing is an unscheduled event in flight requiring an emergency landing, and is most commonly attributed to engine failure, failure of avionics or adverse weather. Since the ability to conduct a successful forced landing is the primary indicator of safety in the aviation industry, automating this capability for unmanned aerial vehicles (UAVs) will help facilitate their integration into, and subsequent routine operations over, civilian airspace. Currently, there is no commercial system available to perform this task; however, a team at the Australian Research Centre for Aerospace Automation (ARCAA) is working towards developing such an automated forced landing system. This system, codenamed Flight Guardian, will operate onboard the aircraft and use machine vision for site identification, artificial intelligence for data assessment and evaluation, and path planning, guidance and control techniques to actualize the landing. This thesis focuses on research specific to the third category, and presents the design, testing and evaluation of a Trajectory Generation and Guidance System (TGGS) that navigates the aircraft to land at a chosen site following an engine failure. Firstly, two algorithms are developed that adapt manned aircraft forced landing techniques to suit the UAV planning problem. Algorithm 1 allows the UAV to select a route (from a library) based on a fixed glide range and the ambient wind conditions, while Algorithm 2 uses a series of adjustable waypoints to cater for changing winds. A comparison of both algorithms in over 200 simulated forced landings found that using Algorithm 2, twice as many landings were within the designated area, with an average lateral miss distance of 200 m at the aimpoint. These results present a baseline for further refinements to the planning algorithms.
A significant contribution is seen in the design of the 3-D Dubins Curves planning algorithm, which extends the elementary concepts underlying 2-D Dubins paths to account for powerless flight in three dimensions. This has also resulted in the development of new methods in testing for path traversability, in losing excess altitude, and in the actual path formation to ensure aircraft stability. Simulations using this algorithm have demonstrated lateral and vertical miss distances of under 20 m at the approach point, in wind speeds of up to 9 m/s. This is greater than a tenfold improvement on Algorithm 2 and emulates the performance of manned, powered aircraft. The lateral guidance algorithm originally developed by Park, Deyst, and How (2007) is enhanced to include wind information in the guidance logic. A simple assumption is also made that reduces the complexity of the algorithm in following a circular path, yet without sacrificing performance. Finally, a specific method of supplying the correct turning direction is also used. Simulations have shown that this new algorithm, named the Enhanced Nonlinear Guidance (ENG) algorithm, performs much better in changing winds, with cross-track errors at the approach point within 2 m, compared to over 10 m using Park's algorithm. A fourth contribution is made in designing the Flight Path Following Guidance (FPFG) algorithm, which uses path angle calculations and the MacCready theory to determine the optimal speed to fly in winds. This algorithm also uses proportional-integral-derivative (PID) gain schedules to finely tune the tracking accuracies, and has demonstrated in simulation vertical miss distances of under 2 m in changing winds. A fifth contribution is made in designing the Modified Proportional Navigation (MPN) algorithm, which uses principles from proportional navigation and the ENG algorithm, as well as methods of its own, to calculate the required pitch to fly.
This algorithm is robust to wind changes, and is easily adaptable to any aircraft type. Tracking accuracies obtained with this algorithm are also comparable to those obtained using the FPFG algorithm. For all three preceding guidance algorithms, a novel method utilising the geometric and time relationship between aircraft and path is also employed to ensure that the aircraft is still able to track the desired path to completion in strong winds, while remaining stabilised. Finally, a derived contribution is made in modifying the 3-D Dubins Curves algorithm to suit helicopter flight dynamics. This modification allows a helicopter to autonomously track both stationary and moving targets in flight, and is highly advantageous for applications such as traffic surveillance, police pursuit, security or payload delivery. Each of these achievements serves to enhance the on-board autonomy and safety of a UAV, which in turn will help facilitate the integration of UAVs into civilian airspace for a wider appreciation of the good that they can provide. The automated UAV forced landing planning and guidance strategies presented in this thesis will allow the progression of this technology from the design and developmental stages, through to a prototype system that can demonstrate its effectiveness to the UAV research and operations community.
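The base lateral guidance law of Park, Deyst and How (2007), which the ENG algorithm builds on, commands a lateral acceleration toward a reference point a fixed distance ahead on the path. A sketch of that published base law (the thesis's wind enhancements and turning-direction logic are not reproduced here):

```python
import math

def l1_lateral_accel(v, l1, eta):
    """Nonlinear path-following guidance of Park, Deyst and How (2007):
    lateral acceleration command toward a reference point at distance l1
    ahead on the path, where eta is the angle between the velocity vector
    and the line to that point. Base law only -- not the wind-enhanced
    ENG variant developed in the thesis."""
    return 2.0 * v ** 2 / l1 * math.sin(eta)

# Small angle: the command is approximately 2*v^2*eta/l1, steering gently
# back toward the path; zero angle gives zero command.
a_small = l1_lateral_accel(v=20.0, l1=50.0, eta=0.05)
a_zero = l1_lateral_accel(v=20.0, l1=50.0, eta=0.0)
```

Because the reference point moves along the path with the aircraft, the law behaves like proportional-derivative control on cross-track error for small deviations while remaining well defined on curved paths.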

Relevance:

20.00%

Publisher:

Abstract:

Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A^3 √((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
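Restated in display form (a paraphrase of the bound as described in the abstract; the exact constants and logarithmic factors are in the paper, and \(\widehat{\mathrm{err}}\) denotes the training-set error estimate mentioned above):

```latex
\Pr[\text{misclassification}]
  \;\le\;
  \widehat{\mathrm{err}}
  \;+\;
  A^{3}\sqrt{\frac{\log n}{m}}
  \qquad \text{(up to } \log A \text{ and } \log m \text{ factors)},
```

where \(A\) bounds the sum of weight magnitudes per unit, \(n\) is the input dimension and \(m\) is the number of training patterns. The key point is that the capacity term depends on \(A\), not on the raw parameter count.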

Relevance:

20.00%

Publisher:

Abstract:

This study investigates the application of two advanced optimization methods to the shape design of active flow control (AFC) devices and compares their optimization efficiency in terms of computational cost and design quality. The first optimization method uses a hierarchical asynchronous parallel multi-objective evolutionary algorithm and the second uses a hybridized evolutionary algorithm with Nash-Game strategies (Hybrid-Game). Both optimization methods are based on a canonical evolution strategy and incorporate the concepts of parallel computing and asynchronous evaluation. One type of AFC device, the shock control bump (SCB), is considered and applied to a natural laminar flow (NLF) aerofoil. The SCB is used to decelerate supersonic flow on the suction/pressure side of a transonic aerofoil, which delays shock occurrence. Such an active flow technique reduces total drag at transonic speeds, which is of special interest for commercial aircraft. Numerical results show that the Hybrid-Game helps an EA accelerate the optimization process. From the practical point of view, applying a SCB on both the suction and pressure sides significantly reduces transonic total drag and improves the lift-to-drag (L/D) value when compared to the baseline design.
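A canonical evolution strategy of the kind both optimizers build on can be sketched as follows. This is a toy (mu, lambda)-ES on a quadratic stand-in for the CFD objective; all parameter values are illustrative assumptions, and the paper's parallel, asynchronous and Nash-Game machinery sits on top of a loop like this:

```python
import numpy as np

def evolution_strategy(cost, dim, mu=5, lam=20, sigma=0.3, gens=300, seed=2):
    """Canonical (mu, lambda) evolution strategy sketch: each generation,
    lambda Gaussian-mutated offspring are sampled around the mean of the
    mu best parents. Toy stand-in for the paper's optimizers, not their
    implementation."""
    rng = np.random.default_rng(seed)
    mean = rng.uniform(-2, 2, size=dim)
    for _ in range(gens):
        offspring = mean + sigma * rng.normal(size=(lam, dim))
        fitness = np.array([cost(o) for o in offspring])
        parents = offspring[np.argsort(fitness)[:mu]]  # truncation selection
        mean = parents.mean(axis=0)
        sigma *= 0.99  # simple annealing of the mutation step size
    return mean, cost(mean)

# Toy quadratic bowl standing in for the drag objective.
best, best_f = evolution_strategy(lambda x: float(np.sum(x ** 2)), dim=4)
```

Because each offspring evaluation is independent, the lambda cost evaluations parallelize naturally, which is what the hierarchical asynchronous framework exploits when each evaluation is an expensive CFD run.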

Relevance:

20.00%

Publisher:

Abstract:

This paper considers an aircraft collision avoidance design problem that also incorporates design of the aircraft's return-to-course flight. This control design problem is formulated as a non-linear optimal-stopping control problem, a formulation that does not require prior knowledge of the time taken to perform the avoidance and return-to-course manoeuvre. A dynamic programming solution to the avoidance and return-to-course problem is presented, before a Markov chain numerical approximation technique is described. Simulation results are presented that illustrate the proposed collision avoidance and return-to-course flight approach.
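The optimal-stopping formulation can be illustrated on a toy finite-state problem: value iteration computes, for each state, the larger of the immediate stopping reward and the discounted expected value of continuing. All numbers below are invented for illustration; the paper works with a Markov chain numerical approximation of the continuous aircraft dynamics:

```python
import numpy as np

# Toy optimal-stopping problem on a 3-state Markov chain: in each state,
# either stop (collect terminal reward g) or continue (discounted expected
# value under transition matrix P). Values are illustrative assumptions,
# not the paper's avoidance/return-to-course model.
P = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.3, 0.7]])
g = np.array([1.0, 0.0, 5.0])   # terminal reward for stopping in each state
beta = 0.9                      # discount factor

V = np.zeros(3)
for _ in range(500):            # value iteration to the fixed point
    V = np.maximum(g, beta * P @ V)

stop = V <= g + 1e-9            # stop wherever stopping attains the value
```

The resulting stopping set plays the role of the manoeuvre-termination decision: no stopping time is fixed in advance, it emerges from the value function, which is the appeal of the formulation noted above.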