947 results for ACCURATE DOCKING
Abstract:
With the advances in computer hardware and software development techniques in the past 25 years, digital computer simulation of train movement and traction systems has been widely adopted as a standard computer-aided engineering tool [1] during the design and development stages of existing and new railway systems. Simulators of different approaches and scales are used extensively to investigate various kinds of system studies. Simulation is now proven to be the cheapest means to carry out performance prediction and system behaviour characterisation. When computers were first used to study railway systems, they were mainly employed to perform repetitive but time-consuming computational tasks, such as matrix manipulations for power network solutions and exhaustive searches for optimal braking trajectories. With only simple high-level programming languages available at the time, full advantage of the computing hardware could not be taken. Hence, structured simulations of the whole railway system were not very common. Most applications focused on isolated parts of the railway system, and it is more appropriate to regard them as mechanised calculations rather than simulations. However, a railway system consists of a number of subsystems, such as train movement, power supply and traction drives, which inevitably contain many complexities and diversities. These subsystems interact frequently with each other while the trains are moving, and they take on special features in different railway systems. To further complicate the simulation requirements, constraints like track geometry, speed restrictions and friction have to be considered, not to mention possible non-linearities and uncertainties in the system. In order to provide a comprehensive and accurate account of system behaviour through simulation, a large amount of data has to be organised systematically to ensure easy access and efficient representation, and the interactions and relationships among the subsystems should be defined explicitly. These requirements call for sophisticated and effective simulation models for each component of the system. The software development techniques available nowadays allow the evolution of such simulation models. Not only can the applicability of the simulators be greatly enhanced by advanced software design; maintainability and modularity for easy understanding and further development, and portability across hardware platforms, are also encouraged. The objective of this paper is to review the development of a number of approaches to simulation models. Attention is given, in particular, to models for train movement, power supply systems and traction drives. These models have been used successfully to resolve various 'what-if' issues effectively in a wide range of applications, such as speed profiles, energy consumption and run times.
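As a minimal illustration of the train-movement component of such simulators, the sketch below steps a point-mass train forward in time from Newton's second law, with power- and adhesion-limited tractive effort and a Davis-type quadratic resistance. All vehicle parameters are hypothetical placeholders, not values from any model reviewed in the paper.

    # Minimal point-mass train movement simulation (hypothetical parameters).
    # Tractive effort is capped by power and adhesion; resistance follows a
    # quadratic Davis-type formula R = a + b*v + c*v**2.

    def simulate_run(distance_m, mass_kg=400e3, power_w=3.0e6,
                     f_max_n=300e3, a=2500.0, b=30.0, c=6.0, dt=0.5):
        t, v, x, energy_j = 0.0, 0.0, 0.0, 0.0
        while x < distance_m:
            f_tractive = min(f_max_n, power_w / max(v, 0.1))  # power-limited effort
            resistance = a + b * v + c * v * v                # Davis resistance
            accel = (f_tractive - resistance) / mass_kg
            v = max(v + accel * dt, 0.0)
            x += v * dt
            energy_j += f_tractive * v * dt                   # traction energy only
            t += dt
        return t, v, energy_j

    t, v, e = simulate_run(2000.0)
    print(f"run time {t:.0f} s, final speed {v*3.6:.0f} km/h, energy {e/3.6e6:.1f} kWh")

Integrating a loop of this kind along a route, segment by segment, is what yields the speed profiles, run times and energy consumption figures mentioned above.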
Abstract:
Many industrial processes and systems can be modelled mathematically by a set of Partial Differential Equations (PDEs). Finding a solution to such a PDE model is essential for system design, simulation and process control purposes. However, major difficulties appear when solving PDEs with singularity. Traditional numerical methods, such as finite difference, finite element and polynomial-based orthogonal collocation, not only have limitations in fully capturing the process dynamics but also demand enormous computing power, due to the large number of elements or mesh points needed to accommodate sharp variations. To tackle this challenging problem, wavelet-based approaches and high-resolution methods have recently been developed, with successful applications to a fixed-bed adsorption column model. Our investigation has shown that recent advances in wavelet-based approaches and high-resolution methods have the potential to be adopted for solving more complicated dynamic system models. This chapter highlights the successful applications of these new methods in solving complex models of simulated-moving-bed (SMB) chromatographic processes. An SMB process is a distributed parameter system and can be mathematically described by a set of partial/ordinary differential equations and algebraic equations. These equations are highly coupled, exhibit wave propagation with steep fronts, and require significant numerical effort to solve. To demonstrate the numerical computing power of the wavelet-based approaches and high-resolution methods, a single-column chromatographic process modelled by a Transport-Dispersive-Equilibrium linear model is investigated first. Numerical solutions from the upwind-1 finite difference, wavelet-collocation and high-resolution methods are evaluated by quantitative comparison with the analytical solution for a range of Peclet numbers. After that, the advantages of the wavelet-based approaches and high-resolution methods are further demonstrated through applications to a dynamic SMB model for an enantiomer separation process. This research has revealed that for a PDE system with a low Peclet number, all existing numerical methods work well, but the upwind finite difference method consumes the most time for the same degree of accuracy of the numerical solution. The high-resolution method provides an accurate numerical solution for a PDE system with a medium Peclet number. The wavelet collocation method is capable of capturing steep changes in the solution, and thus can be used for solving PDE models with high singularity. For the complex SMB system models under consideration, both the wavelet-based approaches and high-resolution methods are good candidates in terms of computational demand and prediction accuracy on the steep front. The high-resolution methods showed better stability in reaching steady state in the specific case studied in this chapter.
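As a concrete reference point for the simplest of the methods compared above, the sketch below applies the upwind-1 finite difference scheme to a generic advection-dispersion equation dc/dt = -u dc/dx + D d2c/dx2 with a step injection at the inlet; the column parameters and Peclet number are illustrative only, not taken from the chapter.

    import numpy as np

    # Upwind-1 finite difference for dc/dt = -u*dc/dx + D*d2c/dx2 on x in [0, L].
    # Illustrative parameters only; the Peclet number Pe = u*L/D controls front steepness.
    L, nx, u, Pe = 1.0, 200, 1.0, 100.0
    D = u * L / Pe
    dx = L / nx
    dt = 0.4 * min(dx / u, dx**2 / (2 * D))     # stability-limited explicit time step

    c = np.zeros(nx + 1)                        # initial condition: empty column
    c_in = 1.0                                  # step injection at the inlet
    for _ in range(int(0.5 / dt)):              # integrate to t = 0.5
        advection = -u * (c[1:-1] - c[:-2]) / dx             # first-order upwind
        dispersion = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        c[1:-1] += dt * (advection + dispersion)
        c[0] = c_in                             # Dirichlet inlet
        c[-1] = c[-2]                           # zero-gradient outlet

    print("mid-column concentration at t = 0.5:", round(c[nx // 2], 3))

At high Peclet numbers the first-order upwind term introduces numerical diffusion that smears the steep front, which is exactly the deficiency the wavelet-collocation and high-resolution methods address.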
Abstract:
A composite line source emission (CLSE) model was developed to specifically quantify exposure levels and describe the spatial variability of vehicle emissions in traffic-interrupted microenvironments. This model took into account the complexity of vehicle movements in the queue, as well as the different emission rates relevant to various driving conditions (cruise, decelerate, idle and accelerate), and it utilised multiple representative segments to capture the emission distribution of real vehicle flow accurately. Hence, the model was able to quickly quantify the time spent in each segment within the considered zone, as well as the composition and position of the requisite segments based on the vehicle fleet information, which not only helped to quantify the enhanced emissions at critical locations, but also helped to define the emission source distribution of the disrupted steady flow for further dispersion modelling. The model was then applied to estimate particle number emissions at a bi-directional bus station used by diesel and compressed natural gas fuelled buses. It was found that the acceleration distance was of critical importance when estimating particle number emissions, since the highest emissions occurred in sections where most of the buses were accelerating, and no significant increases were observed at locations where they idled. It was also shown that emissions at the front end of the platform were 43 times greater than at the rear of the platform. Although the CLSE model is intended to be applied in traffic management and transport analysis systems for the evaluation of exposure, as well as the simulation of vehicle emissions in traffic-interrupted microenvironments, the bus station model can also be used as an input of initial source definitions in future dispersion models.
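The segment bookkeeping behind the CLSE approach can be pictured as follows: each segment of the queueing zone is assigned an emission rate equal to the sum, over driving modes, of the time a vehicle spends in that mode within the segment multiplied by a mode-specific emission factor. The sketch below illustrates this aggregation with invented emission factors and segment times, not values from the study.

    # Hypothetical composite-line-source bookkeeping: per-segment particle number
    # emissions as the sum over driving modes of (time in mode) x (emission rate).
    # All numbers are illustrative placeholders, not values from the study.

    mode_rates = {            # particles per second per vehicle, by driving mode
        "accelerate": 5.0e12,
        "decelerate": 1.0e12,
        "idle":       0.5e12,
        "cruise":     2.0e12,
    }

    # seconds spent per vehicle in each mode within each segment of the platform
    segments = [
        {"accelerate": 6.0, "idle": 0.0, "cruise": 1.0, "decelerate": 0.0},  # front
        {"accelerate": 1.0, "idle": 20.0, "cruise": 0.0, "decelerate": 3.0}, # middle
        {"accelerate": 0.0, "idle": 5.0, "cruise": 0.0, "decelerate": 4.0},  # rear
    ]

    vehicles_per_hour = 60
    for i, seg in enumerate(segments):
        per_vehicle = sum(mode_rates[m] * t for m, t in seg.items())
        print(f"segment {i}: {per_vehicle * vehicles_per_hour:.2e} particles/hour")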
Abstract:
We describe the introduction, service growth, benefits and holistic support approach of a centrally supported, university-wide online survey tool for researchers at QUT. The online survey service employs the Key Survey software, and has grown into a significant service for QUT researchers since being introduced in 2009. Key benefits of the approach include the ability of QUT to handle important issues relating to data such as security, privacy, integrity, and archiving and disposal. The service also incorporates a workflow process that enhances the institution's ability to ensure survey quality control through controlled approval and pilot testing before any survey is widely released. An important issue is that a tool like this can make it very easy to do very poor research very quickly while creating lots of data, owing to the absence of a rigorous methodology designed to reduce errors and collect accurate, comprehensive, timely data. With this in mind, a holistic approach to service provision and support has been taken, which has included the introduction of an integrated system of seminars, tools and workshops to get researchers thinking about the quality of their research while becoming operational quickly. The system of seminars, workshops, checks and approvals we have put in place at QUT is designed to ensure better quality outcomes for QUT's research and for the individual researchers concerned.
Abstract:
Purpose – The aims of this paper are to demonstrate the application of Sen's theory of well-being, the capability approach; to conceptualise the state of transportation disadvantage; and to underpin a theoretically sound indicator selection process. Design/methodology/approach – This paper reviews and examines various measurement approaches to transportation disadvantage in order to select indicators and develop an innovative framework of urban transportation disadvantage. Originality/value – The paper provides further understanding of the state of transportation disadvantage from the capability approach perspective. In addition, building on this understanding, a validated and systematic framework is developed to select relevant indicators. Practical implications – The multi-indicator approach has a high tendency to double count transportation disadvantage, to inflate the estimated size of the transportation disadvantaged (TDA) population, and to account for each indicator only in terms of its individual effects. Instead, indicators that are identified based on a transportation disadvantage scenario will yield more accurate results. Keywords – transport disadvantage, the capability approach, accessibility, measuring urban transportation disadvantage, indicator selection. Paper type – Academic Research Paper
Abstract:
The technologies employed for the preparation of conventional tissue engineering scaffolds restrict the materials choice and the extent to which the architecture can be designed. Here we show the versatility of stereolithography with respect to materials and freedom of design. Porous scaffolds are designed with computer software and built with either a poly(d,l-lactide)-based resin or a poly(d,l-lactide-co-ε-caprolactone)-based resin. Characterisation of the scaffolds by micro-computed tomography shows excellent reproduction of the designs. The mechanical properties are evaluated in compression, and show good agreement with finite element predictions. The mechanical properties of scaffolds can be controlled by the combination of material and scaffold pore architecture. The presented technology and materials enable an accurate preparation of tissue engineering scaffolds with a large freedom of design, and properties ranging from rigid and strong to highly flexible and elastic.
Abstract:
Increased industrialisation has brought to the forefront the susceptibility of concrete columns in both buildings and bridges to vehicle impacts. Accurate vulnerability assessments are crucial in the design process due to the possibly catastrophic nature of the failures that such impacts can cause. This chapter reports on research undertaken to investigate the impact capacity of columns of low- to medium-rise buildings designed according to the Australian standards. Numerical simulation techniques were used in the process, and validation was carried out using experimental results published in the literature. The investigation thus far has confirmed the vulnerability of typical columns of five-storey buildings located in urban areas to medium-velocity car impacts; hence these columns need to be re-designed or retrofitted. In addition, the accuracy of the simplified method presented in EN 1991-1-7 to quantify impact damage was scrutinised. A simplified concept to assess the damage due to all collision modes was introduced. The research information will be extended to generate a common database to assess the vulnerability of columns in urban areas against the new generation of vehicles.
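For orientation, the simplified hard-impact model in EN 1991-1-7 (Annex C) estimates an equivalent static force from the impact velocity, the impacting mass and an equivalent elastic stiffness; in the usual notation (symbols and recommended design values should be checked against the standard itself),

    F = v_r \sqrt{k \, m}

where v_r is the vehicle velocity at impact, k the equivalent elastic stiffness of the impact and m the vehicle mass. As an illustrative check, a 1500 kg car at 10 m/s with the commonly quoted car stiffness of about 300 kN/m gives F of roughly 10 x sqrt(300 000 x 1500), i.e. about 212 kN.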
Abstract:
Recent years have seen an increased uptake of business process management technology in industry. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business process model repositories. For example, in some cases new process models may be derived from existing models, so finding these models and adapting them may be more effective and less error-prone than developing them from scratch. Since process model repositories may be large, query evaluation may be time-consuming. Hence, we investigate the use of indexes to speed up this evaluation process. To make our approach more widely applicable, we consider the semantic similarity between labels. Experiments are conducted to demonstrate that our approach is efficient.
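The abstract does not describe the index structure itself, so the following is a purely hypothetical sketch of the general idea: an inverted index from normalised label tokens to process models, with simple token overlap standing in for the semantic label similarity used when evaluating a query.

    from collections import defaultdict

    # Hypothetical label-based inverted index over a process model repository.
    # Real systems would use richer semantic similarity (e.g. synonym sets);
    # token overlap stands in for that here.

    def tokens(label):
        return set(label.lower().split())

    class LabelIndex:
        def __init__(self):
            self.postings = defaultdict(set)   # token -> model ids

        def add(self, model_id, task_labels):
            for label in task_labels:
                for tok in tokens(label):
                    self.postings[tok].add(model_id)

        def candidates(self, query_label, min_overlap=1):
            counts = defaultdict(int)
            for tok in tokens(query_label):
                for mid in self.postings.get(tok, ()):
                    counts[mid] += 1
            return [m for m, c in counts.items() if c >= min_overlap]

    idx = LabelIndex()
    idx.add("order-to-cash", ["check customer credit", "ship goods", "send invoice"])
    idx.add("procure-to-pay", ["approve purchase order", "receive goods"])
    print(idx.candidates("verify customer credit"))   # ['order-to-cash']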
Abstract:
The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three dimensional optical medium; either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two dimensional pages of data are recorded into a photorefractive crystal, as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal, lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could find application in image scrambling or cryptography for optical information storage. A two dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
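The fringe-counting idea behind the non-contact temperature measurement can be sketched as follows; the calibration constant relating one intensity oscillation to a temperature change depends on the crystal and the geometry, and is shown here only as a placeholder.

    import numpy as np

    # Count intensity oscillations of a probe beam through a birefringent crystal
    # during heating; each full oscillation corresponds to a fixed temperature
    # change dt_per_fringe (placeholder value, to be calibrated for the crystal).

    def temperature_change(intensity, dt_per_fringe=1.0):
        signal = intensity - intensity.mean()
        # a zero crossing of the mean-subtracted signal occurs twice per oscillation
        crossings = np.sum(np.diff(np.signbit(signal).astype(int)) != 0)
        return 0.5 * crossings * dt_per_fringe

    # synthetic example: 8 full oscillations recorded during a heating ramp
    t = np.linspace(0, 1, 2000)
    trace = 0.5 + 0.5 * np.cos(2 * np.pi * 8 * t)
    print(temperature_change(trace))   # ~8.0 (times dt_per_fringe degrees)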
Abstract:
OBJECTIVE: The accurate quantification of human diabetic neuropathy is important to define at-risk patients, anticipate deterioration, and assess new therapies. ---------- RESEARCH DESIGN AND METHODS: A total of 101 diabetic patients and 17 age-matched control subjects underwent neurological evaluation, neurophysiology tests, quantitative sensory testing, and evaluation of corneal sensation and corneal nerve morphology using corneal confocal microscopy (CCM). ---------- RESULTS: Corneal sensation decreased significantly (P = 0.0001) with increasing neuropathic severity and correlated with the neuropathy disability score (NDS) (r = 0.441, P < 0.0001). Corneal nerve fiber density (NFD) (P < 0.0001), nerve fiber length (NFL), (P < 0.0001), and nerve branch density (NBD) (P < 0.0001) decreased significantly with increasing neuropathic severity and correlated with NDS (NFD r = −0.475, P < 0.0001; NBD r = −0.511, P < 0.0001; and NFL r = −0.581, P < 0.0001). NBD and NFL demonstrated a significant and progressive reduction with worsening heat pain thresholds (P = 0.01). Receiver operating characteristic curve analysis for the diagnosis of neuropathy (NDS >3) defined an NFD of <27.8/mm2 with a sensitivity of 0.82 (95% CI 0.68–0.92) and specificity of 0.52 (0.40–0.64) and for detecting patients at risk of foot ulceration (NDS >6) defined a NFD cutoff of <20.8/mm2 with a sensitivity of 0.71 (0.42–0.92) and specificity of 0.64 (0.54–0.74). ---------- CONCLUSIONS: CCM is a noninvasive clinical technique that may be used to detect early nerve damage and stratify diabetic patients with increasing neuropathic severity. Established diabetic neuropathy leads to pain and foot ulceration. Detecting neuropathy early may allow intervention with treatments to slow or reverse this condition (1). Recent studies suggested that small unmyelinated C-fibers are damaged early in diabetic neuropathy (2–4) but can only be detected using invasive procedures such as sural nerve biopsy (4,5) or skin-punch biopsy (6–8). Our studies have shown that corneal confocal microscopy (CCM) can identify early small nerve fiber damage and accurately quantify the severity of diabetic neuropathy (9–11). We have also shown that CCM relates to intraepidermal nerve fiber loss (12) and a reduction in corneal sensitivity (13) and detects early nerve fiber regeneration after pancreas transplantation (14). Recently we have also shown that CCM detects nerve fiber damage in patients with Fabry disease (15) and idiopathic small fiber neuropathy (16) when results of electrophysiology tests and quantitative sensory testing (QST) are normal. In this study we assessed corneal sensitivity and corneal nerve morphology using CCM in diabetic patients stratified for the severity of diabetic neuropathy using neurological evaluation, electrophysiology tests, and QST. This enabled us to compare CCM and corneal esthesiometry with established tests of diabetic neuropathy and define their sensitivity and specificity to detect diabetic patients with early neuropathy and those at risk of foot ulceration.
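As a reminder of how such cutoffs translate into the reported sensitivity and specificity (for a test in which a lower nerve fiber density indicates disease), the sketch below computes both quantities for an NFD threshold; the data are invented for illustration and are not the study's measurements.

    # Sensitivity/specificity of an NFD cutoff, with invented illustrative data
    # (lower corneal nerve fiber density indicates neuropathy).

    def sens_spec(nfd_values, has_neuropathy, cutoff):
        tp = sum(v < cutoff for v, d in zip(nfd_values, has_neuropathy) if d)
        fn = sum(v >= cutoff for v, d in zip(nfd_values, has_neuropathy) if d)
        tn = sum(v >= cutoff for v, d in zip(nfd_values, has_neuropathy) if not d)
        fp = sum(v < cutoff for v, d in zip(nfd_values, has_neuropathy) if not d)
        return tp / (tp + fn), tn / (tn + fp)

    nfd = [15, 22, 26, 31, 35, 40, 24, 29, 33, 38]        # fibers/mm^2 (made up)
    disease = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0]              # 1 = neuropathy (NDS > 3)
    print(sens_spec(nfd, disease, cutoff=27.8))           # (0.8, 1.0) for this toy data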
Abstract:
Comorbid depression and anxiety in late life present challenges for geriatric mental health care providers. These challenges include identifying the often complex diagnostic presentations both clinically and in a research context. This potent comorbidity can be conceived as double jeopardy in older adults, further diminishing their quality of life. Geriatric health care providers need to understand psychiatric comorbidity of this type for accurate diagnosis and early referral to specialists, and to coordinate interdisciplinary care. Researchers in the field also need to recognize potential multiple impacts of comorbidities with respect to assessment and treatment domains. This article describes the prevalence of late-life depression and anxiety disorders and reviews studies on this comorbidity in older adults. Risk factors and protective factors for anxiety and depression in later life are reviewed, and information is provided about comparative symptoms, the selection of assessment tools, and challenges to the provision of interdisciplinary, evidence-based care.
Abstract:
Shrinking product lifecycles, tough international competition, swiftly changing technologies, ever-increasing customer quality expectations and demand for high-variety options are some of the forces that drive the next generation of development processes. To overcome these challenges, the design cost and development time of a product have to be reduced and its quality improved. Design reuse is considered one of the lean strategies for winning the race in this competitive environment. Design reuse can reduce product development time, product development cost and the number of defects, which will ultimately influence product performance in cost, time and quality. However, it has been found that little or no work has been carried out to quantify the effectiveness of design reuse on product development performance measures such as design cost, development time and quality. Therefore, in this study we propose a systematic design-reuse-based product design framework and develop a design leanness index (DLI) as a measure of the effectiveness of design reuse. The DLI is a representative measure of reuse effectiveness in cost, development time and quality. Through this index, a clear relationship between the reuse measure and product development performance metrics is established. Finally, a cost-based model is developed to maximise the design leanness index for a product within a given set of constraints, achieving leanness in the design process.
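The abstract does not give the functional form of the DLI, so the following is a purely hypothetical illustration of how such an index could aggregate reuse effectiveness in cost, development time and quality into a single 0-1 score.

    # Hypothetical design leanness index (DLI): weighted aggregate of the fractional
    # savings that design reuse achieves in cost, development time and defect count.
    # The functional form and weights are illustrative, not those of the paper.

    def dli(cost_new, cost_reuse, time_new, time_reuse, defects_new, defects_reuse,
            w_cost=0.4, w_time=0.3, w_quality=0.3):
        saving_cost = 1 - cost_reuse / cost_new
        saving_time = 1 - time_reuse / time_new
        saving_quality = 1 - defects_reuse / defects_new
        return w_cost * saving_cost + w_time * saving_time + w_quality * saving_quality

    # design with 60% of the cost, 70% of the time and 50% of the defects of a
    # from-scratch design
    print(round(dli(100, 60, 52, 36.4, 20, 10), 3))   # 0.4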
Abstract:
The paper details the results of the first phase of ongoing research into the sociocultural factors that influence the supervision of higher degree research (HDR) engineering students in the Faculty of Built Environment and Engineering (BEE) and the Faculty of Science and Technology (FaST) at Queensland University of Technology. A quantitative analysis was performed on the results from an online survey that was administered to 179 engineering students. The study reveals that cultural barriers impact their progression and developing confidence in their research programs. We argue that in order to assist international and non-English speaking background (NESB) research students to overcome such culturally embedded challenges in engineering research, it is important for supervisors to understand this cohort's unique pedagogical needs and develop intercultural sensitivity in their pedagogical practice in postgraduate research supervision. To facilitate this, the governing body (Office of Research) can play a vital role not only in creating the required support structures but also in their uniform implementation across the board.
Abstract:
Advances in data mining have provided techniques for automatically discovering underlying knowledge and extracting useful information from large volumes of data. Data mining offers tools for the quick discovery of relationships, patterns and knowledge in large, complex databases. The application of data mining to manufacturing is relatively limited, mainly because of the complexity of manufacturing data. The growing self organizing map (GSOM) algorithm has proven to be an efficient algorithm for analyzing unsupervised DNA data. However, it produced unsatisfactory clustering when used on some large manufacturing data. In this paper a data mining methodology is proposed using a GSOM tool developed with a modified GSOM algorithm. The proposed method is used to generate clusters for good and faulty products from a manufacturing dataset. The clustering quality (CQ) measure proposed in the paper is used to evaluate the performance of the cluster maps. The paper also proposes automatic identification of variables to find the most probable causative factor(s) that discriminate between good and faulty products by quickly examining the historical manufacturing data. The proposed method enables manufacturers to smooth the production flow and improve the quality of their products. Simulation results on small and large manufacturing data show the effectiveness of the proposed method.
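The clustering quality (CQ) measure is defined in the paper itself; as an illustrative stand-in, the sketch below scores a cluster map by how cleanly each cluster separates good from faulty products (a simple purity measure over hypothetical data).

    from collections import Counter

    # Purity-style clustering quality for a map of clusters over labelled products.
    # This is an illustrative stand-in for the paper's CQ measure, not its definition.

    def cluster_purity(assignments, labels):
        clusters = {}
        for cid, lab in zip(assignments, labels):
            clusters.setdefault(cid, []).append(lab)
        majority = sum(Counter(labs).most_common(1)[0][1] for labs in clusters.values())
        return majority / len(labels)

    assignments = [0, 0, 0, 1, 1, 2, 2, 2, 2]             # cluster id per product
    labels      = ["good", "good", "faulty", "good", "good",
                   "faulty", "faulty", "faulty", "good"]
    print(round(cluster_purity(assignments, labels), 2))  # 0.78 for this toy data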
Abstract:
Tungro is one of the most destructive viral diseases of rice in South and Southeast Asia. It is associated with two viruses: rice tungro bacilliform virus (RTBV) and rice tungro spherical virus (RTSV) (Hibino et al 1978). Both viruses are transmitted by the green leafhopper (GLH) Nephotettix virescens (Ling 1979). However, prior acquisition of RTSV is required for the transmission of RTBV alone (Hibino 1983). Plants infected with both viruses show severe stunting and yellowing. Those infected with RTBV alone show mild stunting but no leaf discoloration, whereas those infected with RTSV alone do not show any apparent symptoms (Hibino et al 1978). Since the late 1960s, tungro has been managed mainly through varietal resistance (Khush 1989). The instability of resistant varieties in the field (Dahal et al 1990) led to a re-examination of the nature of the incorporated sources of resistance and to the adoption of more precise and more accurate screening methods.