957 results for Speed up


Relevance:

60.00%

Publisher:

Abstract:

This paper presents the theoretical and experimental development of a mathematical model for use in multi-variable control system design of an active suspension for a sport utility vehicle (SUV), in this case a light pickup truck. A complete seven-degree-of-freedom model is quickly and successfully identified, with very satisfactory results in simulations and in real experiments conducted with the pickup truck. The novelty of the proposed methodology is the use of commercial software in the early stages of the identification to speed up the process and to minimize the need for a large number of costly experiments. The paper also makes major contributions to the identification of uncertainties in vehicle suspension models and to the development of identification methods using sequential quadratic programming, where an innovation in the calculation of the objective function is proposed and implemented. Results from simulations and from practical experiments with the real SUV are presented, analysed, and compared, showing the potential of the method.
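As a rough illustration of this kind of identification, the following minimal Python sketch fits two parameters of a hypothetical quarter-car model with SLSQP (a sequential quadratic programming method); the model, the parameter values, and the plain least-squares objective are assumptions, not the paper's seven-degree-of-freedom model or its modified objective function.

    # Hedged sketch: SQP-based parameter identification on a hypothetical
    # quarter-car model; not the paper's 7-DOF model or objective function.
    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import minimize

    def quarter_car(x, t, k, c, m=250.0):
        # x = [displacement, velocity]; free response of a damped oscillator
        z, zdot = x
        return [zdot, -(k * z + c * zdot) / m]

    t = np.linspace(0.0, 2.0, 200)
    measured = odeint(quarter_car, [0.05, 0.0], t, args=(16000.0, 1000.0))[:, 0]

    def objective(p):
        # sum of squared errors between "measured" and simulated displacement
        sim = odeint(quarter_car, [0.05, 0.0], t, args=(p[0], p[1]))[:, 0]
        return np.sum((measured - sim) ** 2)

    res = minimize(objective, x0=[10000.0, 500.0], method="SLSQP",
                   bounds=[(1000.0, 50000.0), (100.0, 5000.0)])
    print(res.x)  # should recover approximately (16000, 1000)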

Relevance:

60.00%

Publisher:

Abstract:

The new technologies for Knowledge Discovery from Databases (KDD) and data mining promise to bring new insights into the voluminous and growing amount of biological data. KDD technology is complementary to laboratory experimentation and helps speed up biological research. This article contains an introduction to KDD, a review of data mining tools, and their biological applications. We discuss the domain concepts related to biological data and databases, as well as current KDD and data mining developments in biology.
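For readers unfamiliar with KDD terminology, the following minimal Python sketch shows the shape of such a pipeline (a preprocessing step followed by a mining step) on synthetic data; the scikit-learn tools and the fabricated "expression profiles" are illustrative assumptions, not tools reviewed in the article.

    # Hedged sketch: a two-step KDD-style pipeline on synthetic data.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # 60 hypothetical expression profiles drawn from three latent groups
    data = np.vstack([rng.normal(mu, 0.3, size=(20, 5)) for mu in (0.0, 1.0, 2.0)])

    X = StandardScaler().fit_transform(data)                  # preprocessing
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)   # mining step
    print(labels)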

Relevance:

60.00%

Publisher:

Abstract:

The simultaneous design of the steady-state and dynamic performance of a process can satisfy much more demanding dynamic performance criteria than shaping the dynamics only through the subsequent connection of a control system. A method for designing process dynamics based on the eigenvalues of a linearised system has been developed. The eigenvalues are associated with system states using the unit perturbation spectral resolution (UPSR), characterising the dynamics of each state. The design method uses a homotopy approach to determine a final design which satisfies both steady-state and dynamic performance criteria. A highly interacting single-stage forced-circulation evaporator system, including control loops, was designed by this method with the goal of reducing the time taken for the liquid composition to reach steady state. Initially the system was successfully redesigned to speed up the eigenvalue associated with the liquid composition state, but this did not result in an improved startup performance. Further analysis showed that the integral action of the composition controller was the source of the limiting eigenvalue. Design changes made to speed up this eigenvalue did result in an improved startup performance. The proposed approach provides a structured way to address the design-control interface, giving significant insight into the dynamic behaviour of the system, such that a systematic design or redesign of an existing system can be undertaken with confidence.
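As a toy illustration of reading dynamics from a linearised model's eigenvalues, the Python sketch below associates each eigenvalue of a hypothetical two-state system with its dominant state via eigenvector magnitudes, a crude stand-in for the UPSR used in the paper.

    # Hedged sketch: eigenvalues of a hypothetical linearised model dx/dt = A x.
    import numpy as np

    A = np.array([[-0.2, 0.05],
                  [0.1, -2.0]])       # assumed state matrix

    eigvals, eigvecs = np.linalg.eig(A)
    for lam, vec in zip(eigvals, eigvecs.T):
        state = int(np.argmax(np.abs(vec)))   # crude state association
        print(f"eigenvalue {lam:.3f} -> mainly state {state}, "
              f"time constant {-1.0 / lam.real:.2f}")

A slow eigenvalue (real part close to zero) flags the state whose settling limits startup, which is the quantity the redesign in the abstract targets.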

Relevance:

60.00%

Publisher:

Abstract:

This paper discusses an object-oriented neural network model that was developed for predicting short-term traffic conditions on a section of the Pacific Highway between Brisbane and the Gold Coast in Queensland, Australia. The feasibility of this approach is demonstrated through a time-lag recurrent network (TLRN) which was developed for predicting speed data up to 15 minutes into the future. The results obtained indicate that the TLRN is capable of predicting speed up to 5 minutes into the future with a high degree of accuracy (90-94%). Similar models, which were developed for predicting freeway travel times on the same facility, were successful in predicting travel times up to 15 minutes into the future with a similar degree of accuracy (93-95%). These results represent substantial improvements on conventional model performance and clearly demonstrate the feasibility of using the object-oriented approach for short-term traffic prediction.
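The sketch below is a deliberately simple stand-in for the TLRN: a linear autoregression on lagged speed observations in Python, predicting one step ahead on synthetic data. The lag count, horizon, and signal are assumptions chosen only to show the data layout of short-term prediction.

    # Hedged sketch: lagged-feature speed prediction (not a TLRN).
    import numpy as np

    rng = np.random.default_rng(1)
    speed = 80 + 10 * np.sin(np.linspace(0, 20, 300)) + rng.normal(0, 1, 300)

    lags, horizon = 4, 1          # predict one step (e.g., 5 min) ahead
    X = np.column_stack([speed[i:len(speed) - lags - horizon + 1 + i]
                         for i in range(lags)])
    y = speed[lags + horizon - 1:]
    A = np.column_stack([X, np.ones(len(X))])   # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(f"MAE: {np.mean(np.abs(A @ coef - y)):.2f} km/h")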

Relevance:

60.00%

Publisher:

Abstract:

A scheme is presented to incorporate a mixed potential integral equation (MPIE), using Michalski's formulation C, with the method of moments (MoM) for analyzing the scattering of a plane wave from conducting planar objects buried in a dielectric half-space. The robust complex image method with a two-level approximation is used for the calculation of the Green's functions for the half-space. To further speed up the computation, an interpolation technique for filling the matrix is employed. While the induced current distributions on the object's surface are obtained in the frequency domain, the corresponding time domain responses are calculated via the inverse fast Fourier transform (FFT). The complex natural resonances of targets are then extracted from the late-time response using the generalized pencil-of-function (GPOF) method. We investigate the pole trajectories as we vary the distance between strips and the depth and orientation of single buried strips. The variation from the pole position of a single strip in a homogeneous dielectric medium was only a few percent for most of these parameter variations.
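To make the last step concrete, the Python sketch below extracts complex natural resonances from a synthetic late-time signal with a basic matrix-pencil estimate, in the spirit of (but not identical to) GPOF; the two damped exponentials and the pencil parameter are assumptions, not a computed scattering response.

    # Hedged sketch: matrix-pencil pole extraction on a synthetic signal.
    import numpy as np

    dt = 0.1
    n = np.arange(60)
    # two assumed damped resonances s1 = -0.3 + 2j, s2 = -0.1 + 5j
    y = np.exp((-0.3 + 2.0j) * dt * n) + 0.5 * np.exp((-0.1 + 5.0j) * dt * n)

    L = 20                                                     # pencil parameter
    Y = np.array([y[i:i + L + 1] for i in range(len(y) - L)])  # Hankel rows
    Y0, Y1 = Y[:, :-1], Y[:, 1:]
    z = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
    z = z[np.abs(z) > 1e-6]          # discard numerically zero eigenvalues
    print(np.log(z) / dt)            # recovers roughly s1 and s2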

Relevance:

60.00%

Publisher:

Abstract:

We conducted a review to establish the range and scope of current telemedicine guidelines and standards. Published guidelines were identified by searching the Medline and Telemedicine Information Exchange (TIE) databases, and by performing a Google search using the term 'telemedicine guidelines'. Three types of guidelines were identified, namely clinical, operational and technical. Clinical guidelines included those for teleradiology, telepsychiatry, home telenursing, minor injuries telemedicine, surgical telemedicine, teledermatology and telepathology. Operational guidelines included those for email communication, Internet access and videoconferencing. Technical guidelines included those from the American Telemedicine Association and the US Office for the Advancement of Telehealth. The main standards relevant to telemedicine include those of the International Telecommunication Union and the DICOM standard. The scarcity of guidelines and standards suggests that telemedicine is not yet close to routine use. If an international telemedicine organization were to take responsibility for defining guidelines, under the direction of clinicians with appropriate telemedicine experience, this might speed up their development.

Relevance:

60.00%

Publisher:

Abstract:

This article presents Monte Carlo techniques for estimating network reliability. For highly reliable networks, techniques based on graph evolution models provide very good performance. However, they are known to have significant simulation cost. An existing hybrid scheme (based on partitioning the time space) is available to speed up the simulations; however, there are difficulties with optimizing the important parameter associated with this scheme. To overcome these difficulties, a new hybrid scheme (based on partitioning the edge set) is proposed in this article. The proposed scheme shows orders of magnitude improvement of performance over the existing techniques in certain classes of network. It also provides reliability bounds with little overhead.
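As background for the crude (non-hybrid) baseline that such schemes accelerate, the Python sketch below estimates two-terminal reliability by sampling edge states and testing connectivity with a union-find; the bridge network and edge reliability are illustrative assumptions.

    # Hedged sketch: crude Monte Carlo two-terminal network reliability.
    import random

    def find(parent, x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def connected(n, edges, s, t):
        parent = list(range(n))
        for u, v in edges:
            ru, rv = find(parent, u), find(parent, v)
            if ru != rv:
                parent[ru] = rv
        return find(parent, s) == find(parent, t)

    def reliability(n, edges, p_up, s, t, trials=20000):
        hits = sum(connected(n, [e for e in edges if random.random() < p_up], s, t)
                   for _ in range(trials))
        return hits / trials

    bridge = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]   # classic bridge network
    print(reliability(4, bridge, p_up=0.9, s=0, t=3))   # close to 0.978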

Relevance:

60.00%

Publisher:

Abstract:

The integration and composition of software systems requires a good architectural design phase to speed up communications between (remote) components. However, during the implementation phase, the code to coordinate such components often ends up mixed into the main business code. This leads to maintenance problems, raising the need for, on the one hand, separating the coordination code from the business code, and, on the other hand, providing mechanisms for the analysis and comprehension of the architectural decisions once made. In this context our aim is to develop a domain-specific language, CoordL, to describe typical coordination patterns. From our point of view, coordination patterns are abstractions, in graph form, over the composition of coordination statements from the system code. These patterns would allow us to identify, by means of pattern-based graph search strategies, the code responsible for the coordination of the several components in a system. Recovering and separating these architectural decisions for a better comprehension of the software is the main purpose of this pattern language.
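The Python sketch below illustrates the idea of pattern-based graph search, using networkx subgraph isomorphism as a stand-in for CoordL's matching machinery; the toy "coordination graph" and pattern are assumptions for illustration only.

    # Hedged sketch: matching a coordination pattern in a toy system graph.
    import networkx as nx
    from networkx.algorithms import isomorphism

    # hypothetical coordination graph recovered from system code
    system = nx.DiGraph([("producer", "queue"),
                         ("queue", "consumer"),
                         ("consumer", "logger")])
    # pattern: a component feeding another through an intermediary
    pattern = nx.DiGraph([("p", "mid"), ("mid", "c")])

    matcher = isomorphism.DiGraphMatcher(system, pattern)
    for mapping in matcher.subgraph_isomorphisms_iter():
        print(mapping)   # maps system nodes onto pattern roles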

Relevance:

60.00%

Publisher:

Abstract:

Minimally invasive cardiovascular interventions guided by multiple imaging modalities are rapidly gaining clinical acceptance for the treatment of several cardiovascular diseases. These images are typically fused with richly detailed pre-operative scans through registration techniques, enhancing the intra-operative clinical data and easing the image-guided procedures. Nonetheless, rigid models have been used to align the different modalities, not taking into account the anatomical variations of the cardiac muscle throughout the cardiac cycle. In the current study, we present a novel strategy to compensate the beat-to-beat physiological adaptation of the myocardium. To this end, we intend to prove that a complete myocardial motion field can be quickly recovered from the displacement field at the myocardial boundaries, therefore being an efficient strategy to locally deform the cardiac muscle. We address this hypothesis by comparing three different strategies to recover a dense myocardial motion field from a sparse one, namely, a diffusion-based approach, thin-plate splines, and multiquadric radial basis functions. Two experimental setups were used to validate the proposed strategy. First, an in silico validation was carried out on synthetic motion fields obtained from two realistic simulated ultrasound sequences. Then, 45 mid-ventricular 2D sequences of cine magnetic resonance imaging were processed to further evaluate the different approaches. The results showed that accurate boundary tracking combined with dense myocardial recovery via interpolation/diffusion is a potentially viable solution to speed up dense myocardial motion field estimation and, consequently, to deform/compensate the myocardial wall throughout the cardiac cycle.
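The Python sketch below mirrors the third of those strategies, densifying a sparse boundary displacement field with SciPy's multiquadric RBF interpolator; the circular contour and uniform radial displacement are synthetic assumptions, not myocardial data.

    # Hedged sketch: dense motion field from sparse boundary displacements.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    theta = np.linspace(0, 2 * np.pi, 24, endpoint=False)
    pts = np.column_stack([np.cos(theta), np.sin(theta)])  # boundary points
    disp = 0.1 * pts                                       # radial expansion

    rbf = RBFInterpolator(pts, disp, kernel="multiquadric", epsilon=1.0)

    g = np.linspace(-1, 1, 21)
    grid = np.array([(x, y) for x in g for y in g if x * x + y * y <= 1.0])
    dense = rbf(grid)          # interpolated displacement at interior points
    print(dense.shape)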

Relevance:

60.00%

Publisher:

Abstract:

Lossless compression algorithms of the Lempel-Ziv (LZ) family are widely used nowadays. Regarding time and memory requirements, LZ encoding is much more demanding than decoding. In order to speed up the encoding process, efficient data structures, like suffix trees, have been used. In this paper, we explore the use of suffix arrays to hold the dictionary of the LZ encoder, and propose an algorithm to search over it. We show that the resulting encoder attains roughly the same compression ratios as those based on suffix trees. However, the amount of memory required by the suffix array is fixed, and much lower than the variable amount of memory used by encoders based on suffix trees (which depends on the text to encode). We conclude that suffix arrays, when compared to suffix trees in terms of the trade-off among time, memory, and compression ratio, may be preferable in scenarios (e.g., embedded systems) where memory is at a premium and high speed is not critical.
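The Python sketch below shows the core lookup such an encoder needs: finding the longest previous match via the nearest suffix-array neighbours of the current position. The naive quadratic construction is for brevity only and is not the paper's algorithm.

    # Hedged sketch: LZ-style longest-match search over a suffix array.
    def suffix_array(s):
        return sorted(range(len(s)), key=lambda i: s[i:])  # naive build

    def common_prefix(s, i, j):
        k = 0
        while i + k < len(s) and j + k < len(s) and s[i + k] == s[j + k]:
            k += 1
        return k

    def longest_match(s, sa, rank, pos):
        # the nearest SA neighbour starting before `pos` shares the longest
        # prefix with the lookahead on that side; take the best of both sides
        best = (0, -1)
        for step in (-1, 1):
            r = rank[pos] + step
            while 0 <= r < len(sa):
                if sa[r] < pos:
                    best = max(best, (common_prefix(s, sa[r], pos), sa[r]))
                    break
                r += step
        return best  # (match length, source position)

    text = "abracadabra"
    sa = suffix_array(text)
    rank = {start: r for r, start in enumerate(sa)}
    print(longest_match(text, sa, rank, 7))  # (4, 0): "abra" recurs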

Relevance:

60.00%

Publisher:

Abstract:

A study was carried out on the effect of fluid type on heat transfer. The aim was to determine the influence of the xanthan gum (XG) solution concentration, the Reynolds number, the Weissenberg number, the temperature, and the flow time on the heat transfer coefficient, jH. The heat transfer study was performed in a double-pipe concentric-tube heat exchanger, while the rheological study was carried out in a rheometer. In the rheological characterisation of the XG solutions, the viscosity increases with solution concentration and decreases with increasing shear rate and with increasing temperature, for both solutions. The data show that the intensity of the shear-thinning behaviour increases with polymer concentration, with the values represented by the Sisko model. The degradation of the 0.20% xanthan gum solution at 25 °C under flow is very fast: the tests show a viscosity reduction of 9.4% to 22.9% for flow times of 12 to 47 hours, respectively. In turbulent flow in a duct of constant circular cross-section, the results show a total drag reduction of 18 to 33%. For the 0.10% XG solution, the heat transferred increases by 115% and by 130% when the temperature rises from 25 °C to 36 °C. Water presents heat transfer values higher, by about 170%, than those of the 0.1% XG solution. The empirical Colburn correlation factor (jH) used in this work presents values in accordance with the relation of Cho and Hartnett (1985): jH < 2/f. When the hot fluid flow rate increases, the jH factor decreases. With flow time, the heat transfer coefficient decreases by about 70% after 47 hours. Finally, the heat transfer factor decreases with increasing hot fluid temperature for both xanthan gum concentrations; for the 0.10% and 0.20% XG solutions this decrease ranged from 38 to 15% and from 34 to 3%, respectively.
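For reference, the Sisko model mentioned above combines an infinite-shear Newtonian term with a power-law term; the Python sketch below evaluates it with hypothetical coefficients, not the values identified in this work.

    # Hedged sketch: Sisko viscosity model, eta = a + b * gamma**(n - 1).
    import numpy as np

    def sisko(gamma_dot, a, b, n):
        """Apparent viscosity (Pa.s) versus shear rate (1/s)."""
        return a + b * gamma_dot ** (n - 1.0)

    gamma_dot = np.logspace(-1, 3, 5)               # shear rates, 1/s
    eta = sisko(gamma_dot, a=0.003, b=0.5, n=0.4)   # assumed parameters
    for g, e in zip(gamma_dot, eta):
        print(f"{g:8.1f} 1/s -> {e:.4f} Pa.s")      # shear-thinning trend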

Relevance:

60.00%

Publisher:

Abstract:

In this paper a new simulation environment for a virtual laboratory for educational purposes is presented. The Logisim platform was adopted as the base digital simulation tool, since it has a modular implementation in Java. All the hardware devices used in the laboratory course were designed as components accessible by the simulation tool and integrated as a library. Moreover, this new library allows the user to access an external interface. This work was motivated by the need to achieve better learning times in co-design projects, based on hardware and software implementations, and to reduce laboratory time, decreasing the operational costs of engineering teaching. Furthermore, the use of virtual laboratories in educational environments allows students to perform functional tests before they go to a real laboratory, and these functional tests speed up learning when a problem-based learning methodology is adopted.

Relevance:

60.00%

Publisher:

Abstract:

The main purpose of this work was the development of procedures for the simulation of atmospheric flows over complex terrain, using OpenFOAM. To this end, tools and procedures were developed apart from this code for the preprocessing and data extraction, which were thereafter applied in the simulation of a real case. For the generation of the computational domain, a systematic method able to translate the terrain elevation model to a native OpenFOAM format (blockMeshDict) was developed. The outcome was a structured mesh in which the user has the ability to define the number of control volumes and their dimensions. With this procedure, the difficulties of case set-up and the high computational effort reported in the literature in association with the use of snappyHexMesh, the OpenFOAM resource explored until then for this task, were considered to be overcome. Developed procedures for the generation of boundary conditions allowed the automatic creation of idealized inlet vertical profiles, the definition of wall-function boundary conditions, and the calculation of internal field first guesses for the iterative solution process, having as input experimental data supplied by the user. The applicability of the generated boundary conditions was limited to the simulation of turbulent, steady-state, incompressible, and neutrally stratified atmospheric flows, always recurring to RaNS (Reynolds-averaged Navier-Stokes) models. For the modelling of terrain roughness, the developed procedure allowed the user to define idealized conditions, such as a uniform aerodynamic roughness length or a value varying as a function of characteristic topography values, or to use real site data, and it was complemented by the development of techniques for the visual inspection of the generated roughness maps. The absence of a forest canopy model limited the applicability of this procedure to low aerodynamic roughness lengths. The developed tools and procedures were then applied in the simulation of a neutrally stratified atmospheric flow over the Askervein hill. In the performed simulations, the sensitivity of the solution to different convection schemes, mesh dimensions, ground roughness, and formulations of the k-ε and k-ω models was evaluated. When compared to experimental data, the calculated values showed good agreement of speed-up at the hill top and on the lee side, with a relative error of less than 10% at a height of 10 m above ground level. Turbulent kinetic energy was considered to be well simulated on the windward side and at the hill top, and grossly predicted on the lee side, where a zone of flow separation was also identified. Despite the need for more work to evaluate the importance of the downstream recirculation zone in the quality of the gathered results, the agreement between the calculated and experimental values and the OpenFOAM sensitivity to the tested parameters were considered to be generally in line with the simulations presented in the reviewed bibliographic sources.
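As an aside on the idealized inlet profiles mentioned above, the Python sketch below evaluates the standard neutral log-law profiles often prescribed for RaNS atmospheric simulations (Richards-Hoxey form); the friction velocity and roughness length are illustrative, not the Askervein set-up.

    # Hedged sketch: neutral atmospheric inlet profiles (log law).
    import numpy as np

    kappa, Cmu = 0.41, 0.09
    u_star, z0 = 0.5, 0.03            # assumed friction velocity, roughness

    z = np.linspace(1.0, 100.0, 5)    # heights above ground (m)
    U = (u_star / kappa) * np.log((z + z0) / z0)   # mean wind speed
    k = u_star**2 / np.sqrt(Cmu)                   # turbulent kinetic energy
    eps = u_star**3 / (kappa * (z + z0))           # dissipation rate

    for zi, Ui, ei in zip(z, U, eps):
        print(f"z={zi:6.1f} m  U={Ui:5.2f} m/s  k={k:.3f}  eps={ei:.6f}")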

Relevance:

60.00%

Publisher:

Abstract:

Fragmentation on dynamically reconfigurable FPGAs is a major obstacle to the efficient management of the logic space in reconfigurable systems. When resource allocation decisions have to be made at run-time, a rearrangement may be necessary to release enough contiguous resources to implement incoming functions. The feasibility of run-time relocation depends on the processing time required to set up rearrangements. Moreover, the performance of the relocated functions should not be affected by this process, or otherwise the whole system performance, and even its operation, may be at risk. Relocation should take into account not only specific functional issues but also the FPGA architecture, since these two aspects are normally intertwined. A simple and fast method to assess the performance degradation of a function during relocation and to speed up the defragmentation process, based on previous function labelling and on the application of the Euclidean distance concept, is proposed in this paper.
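The Python sketch below shows the distance ranking such a method implies: candidate relocation sites ordered by the Euclidean distance a function would travel across the logic space; the coordinates are hypothetical, and the paper's labelling scheme is not reproduced.

    # Hedged sketch: rank candidate FPGA sites by Euclidean relocation cost.
    import math

    def relocation_cost(current, candidate):
        # straight-line distance between placement origins (column, row)
        return math.hypot(candidate[0] - current[0], candidate[1] - current[1])

    current = (4, 10)                         # current placement origin
    free_sites = [(6, 10), (20, 3), (4, 14)]  # hypothetical free regions
    ranked = sorted(free_sites, key=lambda s: relocation_cost(current, s))
    print(ranked)   # nearest free region first: (6, 10)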