976 results for Computational geometry
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The assessment of historical structures is a significant need for future generations, as historical monuments represent a community's identity and hold important cultural value for society. Most historical structures were built of masonry, one of the oldest and most common construction materials in the building sector since ancient times. Masonry is also considered a complex material: it is a composite of brick units and mortar, which affects the structural performance of the building through mechanical behaviour that varies with the geometry and quality of its components.
Abstract:
Single-screw extrusion is one of the most widely used processing methods in the plastics industry, which was the third largest manufacturing industry in the United States in 2007 [5]. In order to optimize the single-screw extrusion process, tremendous effort has been devoted over the last fifty years to developing accurate models, especially for polymer melting in screw extruders. This has led to a good qualitative understanding of the melting process; however, quantitative predictions of melting from the various models often show large errors when compared with experimental data. Thus, even today, the process parameters and the geometry of the extruder channel for single-screw extrusion are determined by trial and error. Since new polymers are developed frequently, finding the optimum parameters to extrude them by trial and error is costly and time-consuming. In order to reduce the time and experimental work required to optimize the process parameters and the channel geometry for a given polymer, the main goal of this research was to perform a coordinated experimental and numerical investigation of melting in screw extrusion. In this work, a full three-dimensional finite element simulation of the two-phase flow in the melting and metering zones of a single-screw extruder was performed by solving the conservation equations for mass, momentum, and energy. The only previous attempt at such a three-dimensional simulation of melting in a screw extruder was made more than twenty years ago; that work had only limited success because of the computing power and mathematical algorithms available at the time. The dramatic improvement in computational power and mathematical knowledge now makes it possible to run full 3-D simulations of two-phase flow in single-screw extruders on a desktop PC. In order to verify the numerical predictions from these simulations, a detailed experimental study was performed, comprising Maddock screw-freezing experiments, Screw Simulator experiments, and material characterization experiments. The Maddock screw-freezing experiments were performed to visualize the melting profile along the single-screw extruder channel for different screw geometry configurations; these melting profiles were compared with the simulation results. The Screw Simulator experiments were performed to collect shear stress and melting flux data for various polymers. Cone-and-plate viscometer experiments were performed to obtain the shear viscosity data needed in the simulations. An optimization code was developed to optimize two screw geometry parameters, namely the screw lead (pitch) and the depth of the metering section, such that the output rate of the extruder was maximized without exceeding the maximum temperature specified at the extruder exit. This optimization code used a mesh partitioning technique to obtain the flow domain, and the simulations in this flow domain were performed using the code developed to simulate the two-phase flow in single-screw extruders.
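The conservation equations referred to above can be stated in a generic form. The following is a sketch of the incompressible conservation laws for mass, momentum, and energy commonly solved in such melt-flow simulations, assuming a generalized Newtonian (shear-rate- and temperature-dependent) viscosity; the thesis' exact constitutive model and two-phase coupling terms are not reproduced here.

```latex
% Generic conservation equations for an incompressible, generalized
% Newtonian polymer melt (a sketch; not the thesis' exact formulation).
\begin{align}
  \nabla \cdot \mathbf{u} &= 0, \\
  \rho \left( \frac{\partial \mathbf{u}}{\partial t}
      + \mathbf{u} \cdot \nabla \mathbf{u} \right)
      &= -\nabla p + \nabla \cdot \left[ \eta(\dot{\gamma}, T)
         \left( \nabla \mathbf{u} + \nabla \mathbf{u}^{\mathsf{T}} \right) \right], \\
  \rho c_p \left( \frac{\partial T}{\partial t}
      + \mathbf{u} \cdot \nabla T \right)
      &= \nabla \cdot (k \nabla T) + \eta(\dot{\gamma}, T)\, \dot{\gamma}^{2},
\end{align}
```

where the last term in the energy equation is the viscous dissipation that drives much of the melting in the extruder channel.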
Abstract:
This paper presents a hand biometric system for contact-less, platform-free scenarios, proposing innovative methods for feature extraction, template creation, and template matching. The evaluation of the proposed method considers three publicly available contact-less hand databases and compares its performance with two competitive pattern recognition techniques from the literature, namely Support Vector Machines (SVM) and k-Nearest Neighbours (k-NN). The results highlight that the proposed method outperforms existing approaches in terms of computational cost, accuracy in human identification, number of extracted features, and number of samples required for template creation. The proposed method is thus a suitable solution for human identification in contact-less scenarios based on hand biometrics, and is feasible for devices with limited hardware, such as mobile devices.
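The two baseline classifiers named above (SVM and k-NN) can be reproduced on any fixed-length hand-geometry feature vectors with a few lines of scikit-learn. This is a minimal sketch of the baseline comparison only, not the paper's proposed method; the feature and label arrays are placeholders.

```python
# Minimal SVM / k-NN baseline comparison sketch (not the paper's method).
# X is an (n_samples, n_features) array of hand-geometry feature vectors,
# y the corresponding subject identities; both are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))          # placeholder feature vectors
y = np.repeat(np.arange(20), 10)        # placeholder subject labels

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
svm = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)

print("k-NN accuracy:", knn.score(X_te, y_te))
print("SVM accuracy: ", svm.score(X_te, y_te))
```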
Abstract:
This thesis proposes a hand biometric system oriented to unconstrained, contact-less scenarios, together with a stress detection method able to elucidate to what extent an individual is under stress based on physiological signals. Concerning the biometric system, this thesis contributes the design and implementation of a hand-based biometric system in which the acquisition is carried out without contact and the template is created using information from a single individual only. In addition, this thesis proposes a segmentation algorithm based on multiscale aggregation in order to tackle the problems that hand acquisition poses in real, unconstrained environments. Feature extraction and matching are also specific contributions of this thesis, providing schemes that carry out both tasks with low computational cost and high recognition accuracy. Finally, the system is evaluated according to the international standard ISO/IEC 19795, considering six public databases. In relation to the stress detection method, this thesis proposes a system based on two physiological signals, namely heart rate and galvanic skin response, together with an innovative stress template that gathers the behaviour of both signals under stressing and non-stressing situations. The system uses fuzzy logic to decide the level of stress of an individual. Overall, the system detects stress accurately and in real time, providing an adequate solution for current biometric systems, where a stress detection method can be applied directly to avoid situations in which individuals are forced to provide their biometric data. Finally, this thesis includes a user acceptability study, in which the acceptance of the proposed biometric technique is assessed by a total of 250 users, as well as a prototype implemented on a mobile device and its evaluation.
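The stress detector described above combines heart rate and galvanic skin response through fuzzy logic. The following is a minimal, self-contained sketch of that style of inference; the membership breakpoints, rule base, and thresholds are illustrative assumptions, not the thesis' calibrated stress template.

```python
# Minimal fuzzy-inference sketch for a two-signal stress score.
# Membership breakpoints and rules are illustrative assumptions only.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return float(np.clip(min((x - a) / (b - a + 1e-9),
                             (c - x) / (c - b + 1e-9)), 0.0, 1.0))

def stress_level(heart_rate_bpm, gsr_microsiemens):
    # Fuzzify both physiological signals (low / high).
    hr_high  = tri(heart_rate_bpm, 70, 110, 150)
    hr_low   = tri(heart_rate_bpm, 40, 60, 80)
    gsr_high = tri(gsr_microsiemens, 2.0, 8.0, 14.0)
    gsr_low  = tri(gsr_microsiemens, 0.0, 1.0, 3.0)

    # Rule base: both signals high -> stressed, both low -> relaxed.
    stressed = min(hr_high, gsr_high)
    relaxed  = min(hr_low, gsr_low)

    # Defuzzify to a score in [0, 1] by a weighted average of rule strengths.
    if stressed + relaxed == 0:
        return 0.5
    return stressed / (stressed + relaxed)

print(stress_level(95, 6.5))   # leans towards "stressed"
print(stress_level(58, 0.8))   # leans towards "relaxed"
```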
Abstract:
The motivation of this work is the development of an automatic optimization tool for improving the aerodynamic performance of shapes of interest to the aeronautics industry. It covers several essential aspects, from the use of Non-Uniform Rational B-Splines (NURBS), the calculation of gradients with the continuous adjoint formulation, the use of volumetric b-splines as design parameters, and the treatment of the mesh at intersections, to the adaptation of Computational Fluid Dynamics (CFD) algorithms to highly parallel hardware architectures, such as graphics cards, in order to speed up the optimization process. The adjoint methodology has made gradient-based methods a promising approach for improving the aerodynamic efficiency of aircraft. The adjoint formulation allows the gradients of a cost function, such as drag or lift, to be computed with respect to all design variables at a computational cost roughly equivalent to one CFD simulation. However, practical problems have delayed its full adoption by industry; these can be summarized as integrability, computational performance, and robustness of the adjoint solution. This work addresses these issues and analyses them in well-known test cases. In summary, the contributions of this thesis are:
• The use of NURBS as design variables in an automatic optimization loop, applied to improving the aerodynamic performance of aircraft wings in the transonic regime.
• The development of point-inversion algorithms to compute the NURBS parametric coordinates from spatial coordinates, in order to link the mesh vertices to the NURBS.
• The use and validation of the adjoint formulation to compute gradients from the surface sensitivities of the adjoint solution, compared against finite differences.
• A strategy for using the CAD geometry, in the form of NURBS patches, to handle intersections such as the wing-fuselage junction and the associated mesh adaptation.
• The development of the DOMINO NURBS library, offered to the community as free, open source code, since few viable NURBS libraries are available.
• The implementation of a transonic CFD solver on a graphics card, to assess how a conventional unstructured-grid code can be adapted to highly parallel architectures.
• Finally, a methodology based on Green's functions is proposed as an efficient way to parallelize numerical simulations.
This work has been supported by the activities carried out at the Fluid Dynamics branch of the National Institute for Aerospace Technology (INTA) through the nationally funded research projects DOMINO, SIMUMAT, and CORESIMULAERO, in line with the activities of the Methods and Tools and Flight Physics departments at Airbus Spain and with the Group for Aeronautical Research and Technology in Europe (GARTEUR), action group AG/52.
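One of the contributions listed above is a point-inversion algorithm that recovers parametric coordinates from spatial coordinates so that mesh vertices can be tied to the spline geometry. The standard approach is a Newton projection onto the curve or surface; the sketch below illustrates it on a plain B-spline curve using SciPy, not the DOMINO NURBS library, so the curve, tolerances, and starting parameter are illustrative assumptions.

```python
# Newton point inversion on a B-spline curve: given a spatial point P,
# find the parameter u minimising ||C(u) - P||.  A sketch with SciPy's
# BSpline, not the DOMINO NURBS implementation from the thesis.
import numpy as np
from scipy.interpolate import BSpline

k = 3                                             # cubic curve
t = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1.0])     # clamped knot vector
c = np.array([[0, 0], [1, 2], [2, -1], [3, 3], [4, 0]], dtype=float)
curve = BSpline(t, c, k)
d1, d2 = curve.derivative(1), curve.derivative(2)

def point_inversion(P, u0=0.5, tol=1e-10, max_iter=50):
    u = u0
    for _ in range(max_iter):
        r = curve(u) - P                          # residual C(u) - P
        cu, cuu = d1(u), d2(u)
        f = np.dot(r, cu)                         # d/du of 0.5*||r||^2
        fp = np.dot(cu, cu) + np.dot(r, cuu)
        if abs(fp) < 1e-14:
            break
        u_new = float(np.clip(u - f / fp, t[k], t[-k - 1]))
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    return u

P = curve(0.37)                                   # a point known to lie on the curve
print(point_inversion(P))                         # should recover u close to 0.37
```

The same Newton step generalises to surfaces by solving a 2x2 system in the two parametric coordinates.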
Abstract:
Quantum computers hold great promise for solving interesting computational problems, but it remains a challenge to find efficient quantum circuits that can perform these complicated tasks. Here we show that finding optimal quantum circuits is essentially equivalent to finding the shortest path between two points in a certain curved geometry. By recasting the problem of finding quantum circuits as a geometric problem, we open up the possibility of using the mathematical techniques of Riemannian geometry to suggest new quantum algorithms or to prove limitations on the power of quantum computers.
Abstract:
This thesis presents an effective methodology for generating simulations that can be used to increase understanding of viscous fluid processing equipment and aid in its development, design, and optimisation. The Hampden RAPRA Torque Rheometer internal batch twin-rotor mixer has been simulated with a view to establishing model accuracy, limitations, practicalities, and uses. As this research progressed, via several 'snap-shot' analyses of different rotor configurations using the commercial code Polyflow, it became evident that the model was of some worth and that its predictions were in good agreement with the validation experiments; however, several major restrictions were identified. These included poor element form, the high man-hour requirement for constructing each geometry, and the absence of a transient term in these models. All, or at least some, of these limitations apply to the numerous attempts by other researchers to model internal mixers, and it was clear that there was no generally accepted methodology providing a practical, adequately validated three-dimensional model. This research, unlike others, presents a full, complex three-dimensional, transient, non-isothermal, generalised non-Newtonian simulation with wall slip which overcomes these limitations using unmatched gridding and sliding-mesh technology adapted from CFX codes. This method yields good element form and, since only one geometry has to be constructed to represent the entire rotor cycle, is extremely beneficial for detailed flow-field analysis when used in conjunction with user-defined programmes and automatic geometry parameterisation (AGP), and it improves accuracy when investigating equipment design and operating conditions. Model validation has been identified as an area neglected by other researchers in this field, especially for time-dependent geometries, and has been rigorously pursued here in terms of qualitative and quantitative velocity-vector analysis of the isothermal, full-fill mixing of generalised non-Newtonian fluids, as well as torque comparison, with a relatively high degree of success. This indicates that CFD models of this type can be accurate and perhaps have not been validated to this extent previously because of the inherent difficulties arising from most real processes.
Abstract:
Copper(II) complexes of the hexadentate ethylenediaminetetracarboxylic acid type ligands H4eda3p and H4eddadp (H4eda3p = ethylenediamine-N-acetic-N,N',N'-tri-3-propionic acid; H4eddadp = ethylenediamine-N,N'-diacetic-N,N'-di-3-propionic acid) have been prepared. An octahedral trans(O) geometry (two propionate ligands coordinated in axial positions) has been established crystallographically for the Ba[Cu(eda3p)]·8H2O compound, while Ba[Cu(eddadp)]·8H2O is proposed to adopt a trans(O) geometry (two axial acetates) on the basis of density functional theory calculations and comparisons of IR and UV-vis spectral data. Experimental and computed structural data correlating similar copper(II) chelate complexes have been used to better understand the isomerism and the departure from regular octahedral geometry within the series. The in-plane O-Cu-N chelate angles show the smallest deviation from the ideal octahedral value of 90°, and hence the lowest strain, for the eddadp complex with two equatorial β-propionate rings. A linear dependence between tetragonality and the number of five-membered rings has been established. A natural bonding orbital analysis of the series of complexes is also presented.
Abstract:
Novel molecular complexity measures are designed based on quantum molecular kinematics. The Hamiltonian matrix, constructed in a quasi-topological approximation, describes the temporal evolution of the modelled electronic system and determines the time derivatives of the dynamic quantities. This makes it possible to define average quantum kinematic characteristics closely related to the curvatures of the electron paths, in particular the torsion, which reflects the chirality of the dynamic system. Special attention has been given to the computational scheme for this chirality measure. Calculations on realistic molecular systems demonstrate reasonable behaviour of the proposed molecular complexity indices.
Abstract:
Crotonaldehyde (2-butenal) adsorption on gold sub-nanometre particles, and the influence of co-adsorbed oxygen, has been systematically investigated by computational methods. Using density functional theory, the adsorption energetics of crotonaldehyde on bare and oxidised gold clusters (Au, d = 0.8 nm) were determined as a function of oxygen coverage and coordination geometry. At low oxygen coverage, sites are available for which crotonaldehyde adsorption is enhanced relative to bare Au clusters by 10 kJ mol⁻¹. At higher oxygen coverage, crotonaldehyde is forced to adsorb in close proximity to oxygen, weakening adsorption by up to 60 kJ mol⁻¹ relative to bare Au. Bonding geometries, density-of-states plots, and Bader analysis are used to elucidate crotonaldehyde bonding to gold nanoparticles in terms of partial electron transfer from Au to crotonaldehyde, and show that donation from crotonaldehyde to gold also becomes significant following metal oxidation. At high oxygen coverage, all molecular adsorption sites have a neighbouring, destabilising oxygen adatom, so that despite enhanced donation, crotonaldehyde adsorption is always weakened by steric interactions. For a larger cluster (Au, d = 1.1 nm), crotonaldehyde adsorption is destabilised in this way even at low oxygen coverage. These findings provide a quantitative framework to underpin the experimentally observed influence of oxygen on the selective oxidation of crotyl alcohol to crotonaldehyde over gold and gold-palladium alloys.
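The adsorption energetics quoted above follow the usual DFT convention; for reference, a generic definition of the adsorption energy of crotonaldehyde on a (possibly pre-oxidised) gold cluster is sketched below. The reference states and sign convention are the standard textbook choice, not taken from the paper.

```latex
% Generic DFT adsorption-energy definition (standard convention;
% the paper's exact reference states are not reproduced here).
\begin{equation}
  E_{\mathrm{ads}} =
  E\!\left(\mathrm{C_4H_6O}/\mathrm{Au\text{-}O}_x\right)
  - E\!\left(\mathrm{Au\text{-}O}_x\right)
  - E\!\left(\mathrm{C_4H_6O}\right),
\end{equation}
```

so that a more negative adsorption energy denotes stronger binding; the 10 and 60 kJ mol⁻¹ shifts quoted above are differences between such energies at different oxygen coverages.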
Abstract:
Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging on an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.
A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
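The two error regimes described above depend only on the principal angles between a pair of subspaces. The sketch below computes those angles with NumPy/SciPy and evaluates both quantities (the product of sines and the sum of squared sines) for two random subspaces; it illustrates the geometric quantities only, not the classification bounds or the TRAIT transform.

```python
# Principal angles between two subspaces and the two quantities the
# misclassification analysis depends on (illustrative sketch only).
import numpy as np
from scipy.linalg import orth

rng = np.random.default_rng(1)
A = orth(rng.normal(size=(50, 5)))     # orthonormal basis of subspace 1
B = orth(rng.normal(size=(50, 5)))     # orthonormal basis of subspace 2

# Singular values of A^T B are the cosines of the principal angles.
cosines = np.clip(np.linalg.svd(A.T @ B, compute_uv=False), -1.0, 1.0)
angles = np.arccos(cosines)
sines = np.sin(angles)

print("principal angles (rad):", np.round(angles, 3))
print("product of sines      :", np.prod(sines))       # small-mismatch regime
print("sum of squared sines  :", np.sum(sines ** 2))   # larger-mismatch regime
```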
The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer from degraded performance when generalizing to unseen (test) data.
From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides a way of regularizing the learning machine so as to reduce the generalization error and therefore mitigate overfitting. Two overfitting-prevention approaches are proposed: one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets are presented, showing a clear advantage of the proposed approaches when the training set is small.
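A common way to express this kind of manifold regularization is a smoothness penalty built from the graph Laplacian of a k-nearest-neighbour adjacency matrix; the NumPy sketch below shows that generic penalty, with placeholder data and features, and is not the exact alignment criterion used in the dissertation.

```python
# Generic graph-Laplacian smoothness penalty for learned features
# (a sketch of manifold regularization, not the dissertation's criterion).
import numpy as np

def knn_adjacency(X, k=5):
    """Symmetric 0/1 adjacency of the k-nearest-neighbour graph on rows of X."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)               # exclude self-neighbours
    idx = np.argsort(d2, axis=1)[:, :k]
    W = np.zeros_like(d2)
    rows = np.repeat(np.arange(X.shape[0]), k)
    W[rows, idx.ravel()] = 1.0
    return np.maximum(W, W.T)                  # symmetrise

def laplacian_penalty(F, W):
    """sum_ij W_ij ||f_i - f_j||^2 = 2 * trace(F^T L F), with L = D - W."""
    L = np.diag(W.sum(axis=1)) - W
    return 2.0 * np.trace(F.T @ L @ F)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                 # data points (placeholder)
F = rng.normal(size=(100, 16))                 # learned features (placeholder)
W = knn_adjacency(X, k=5)
print("manifold smoothness penalty:", laplacian_penalty(F, W))
```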
Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be tracked efficiently as well, even with corrupted and noisy data. The more local neighborhoods are used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed in which the local approximating subspaces are organized in a tree structure; splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
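A minimal illustration of the subspace-tracking idea above (without the multiscale tree or the splitting/merging machinery) is an incremental gradient update of an orthonormal basis from streaming data, with the reconstruction residual serving as the anomaly statistic. The dimensions, step size, and synthetic stream below are illustrative assumptions, not the dissertation's algorithm.

```python
# Streaming subspace tracking with a residual-based anomaly score
# (a sketch of the idea only; no multiscale tree or split/merge logic).
import numpy as np

def track_subspace(stream, dim=3, step=0.05):
    """Track a slowly varying 'dim'-dimensional subspace from streaming vectors."""
    rng = np.random.default_rng(0)
    U, _ = np.linalg.qr(rng.normal(size=(stream.shape[1], dim)))
    residuals = []
    for x in stream:
        proj = U @ (U.T @ x)              # projection onto current subspace
        r = x - proj                      # residual = deviation from the model
        residuals.append(np.linalg.norm(r))
        U += step * np.outer(r, U.T @ x)  # gradient step towards the new datum
        U, _ = np.linalg.qr(U)            # re-orthonormalise
    return U, np.array(residuals)

# Synthetic stream: samples from a 3-D subspace of R^20,
# with an abrupt model change injected at t = 300.
rng = np.random.default_rng(1)
basis = np.linalg.qr(rng.normal(size=(20, 3)))[0]
stream = np.array([basis @ rng.normal(size=3) for _ in range(600)])
stream[300:] += 0.8 * rng.normal(size=(300, 20))
U, res = track_subspace(stream)
print("mean residual before change:", res[:300].mean())
print("mean residual after change :", res[300:].mean())
```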
Abstract:
This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Surging development in robotics and computer vision has pushed the need for algorithms that infer, understand, and utilize information about the position and orientation of the sensor platforms when observing and/or interacting with their environment.
The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.
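For reference, the Bingham distribution that the mirrored normal-Bingham construction generalises has the following standard, antipodally symmetric density on the unit hypersphere; this is the textbook form, not the thesis' mirrored variant.

```latex
% Standard Bingham density on the unit sphere S^{d-1} (the mirrored
% normal-Bingham distribution of the thesis generalises this form).
\begin{equation}
  p(\mathbf{x} \mid \mathbf{M}, \mathbf{Z})
  = \frac{1}{F(\mathbf{Z})}
    \exp\!\left( \mathbf{x}^{\mathsf{T}} \mathbf{M} \mathbf{Z}
                 \mathbf{M}^{\mathsf{T}} \mathbf{x} \right),
  \qquad \mathbf{x} \in S^{d-1},
\end{equation}
```

where M is orthogonal, Z is diagonal, and F(Z) is the normalising constant; the antipodal symmetry p(x) = p(-x) is what makes this family a natural model for unit-quaternion orientations.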
Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.
Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
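For a planar ground surface, the inverse perspective mapping referred to above is simply a homography determined by the camera pose. The sketch below applies such a warp to an image with OpenCV; the thesis applies the equivalent transformation to HOG-like features rather than pixels, and the file name and point correspondences here are placeholders.

```python
# Inverse perspective mapping of an image via a ground-plane homography
# (sketch only; the thesis warps HOG-like features rather than pixels).
import cv2
import numpy as np

image = cv2.imread("road_scene.png")          # placeholder path, assumed to exist

# Four image points on the ground plane and their targets in the
# bird's-eye view (illustrative coordinates, not calibrated values).
src = np.float32([[420, 320], [860, 320], [1180, 700], [100, 700]])
dst = np.float32([[300, 0], [980, 0], [980, 720], [300, 720]])

H = cv2.getPerspectiveTransform(src, dst)     # 3x3 homography
birds_eye = cv2.warpPerspective(image, H, (1280, 720))

cv2.imwrite("birds_eye.png", birds_eye)
```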
The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.
Abstract:
Thin film adhesion often determines microelectronic device reliability, and it is therefore essential to have experimental techniques that characterize it accurately and efficiently. Laser-induced delamination is a novel technique that uses laser-generated stress waves to load thin films at high strain rates and extract the fracture toughness of the film/substrate interface. The effectiveness of the technique in measuring the interface properties of metallic films has been documented in previous studies. The objective of the current effort is to model the effect of residual stresses on the dynamic delamination of thin films. Residual stresses can be high enough to affect the crack advance and the mode mixity of the delamination event, and must therefore be adequately modeled to make accurate and repeatable predictions of fracture toughness. The equivalent axial force and bending moment generated by the residual stresses are included in a dynamic, nonlinear finite element model of the delaminating film, and the impact of residual stresses on the final extent of the interfacial crack, the relative contribution of shear failure, and the deformed shape of the delaminated film is studied in detail. Another objective of the study is to develop techniques to address issues related to the testing of polymeric films. These types of films adhere well to silicon, and the resulting crack advance is often much smaller than for metallic films, making the extraction of the interface fracture toughness more difficult. The use of an inertial layer, which enhances the amount of kinetic energy trapped in the film and thus the crack advance, is examined; it is determined that the inertial layer does improve the crack advance, although in a relatively limited fashion. The high interface toughness of polymer films often causes the film to fail cohesively when the crack front leaves the weakly bonded region and enters the strong interface. The use of a tapered pre-crack region that provides a more gradual transition to the strong interface is examined; the tapered triangular pre-crack geometry is found to be effective in reducing the induced stresses, making it an attractive option. We conclude by studying the impact of modifying the pre-crack geometry to enable the testing of multiple polymer films.
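The equivalent loads mentioned above follow from integrating the residual stress through the film thickness. For a uniform residual stress this reduces to the simple textbook expressions below, which are an idealisation assumed here for illustration rather than the exact loads used in the finite element model.

```latex
% Equivalent axial force and bending moment (per unit width) of a
% uniform residual stress \sigma_r in a film of thickness h, taken
% about the film/substrate interface (textbook idealisation).
\begin{equation}
  N_r = \int_{0}^{h} \sigma_r \, \mathrm{d}z = \sigma_r h,
  \qquad
  M_r = \int_{0}^{h} \sigma_r \, z \, \mathrm{d}z = \frac{\sigma_r h^{2}}{2}.
\end{equation}
```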
Abstract:
In the standard Vehicle Routing Problem (VRP), we route a fleet of vehicles to deliver the demands of all customers such that the total distance traveled by the fleet is minimized. In this dissertation, we study variants of the VRP that minimize the completion time, i.e., we minimize the distance of the longest route. We call it the min-max objective function. In applications such as disaster relief efforts and military operations, the objective is often to finish the delivery or the task as soon as possible, not to plan routes with the minimum total distance. Even in commercial package delivery nowadays, companies are investing in new technologies to speed up delivery instead of focusing merely on the min-sum objective. In this dissertation, we compare the min-max and the standard (min-sum) objective functions in a worst-case analysis to show that the optimal solution with respect to one objective function can be very poor with respect to the other. The results motivate the design of algorithms specifically for the min-max objective. We study variants of min-max VRPs including one problem from the literature (the min-max Multi-Depot VRP) and two new problems (the min-max Split Delivery Multi-Depot VRP with Minimum Service Requirement and the min-max Close-Enough VRP). We develop heuristics to solve these three problems. We compare the results produced by our heuristics to the best-known solutions in the literature and find that our algorithms are effective. In the case where benchmark instances are not available, we generate instances whose near-optimal solutions can be estimated based on geometry. We formulate the Vehicle Routing Problem with Drones and carry out a theoretical analysis to show the maximum benefit from using drones in addition to trucks to reduce delivery time. The speed-up ratio depends on the number of drones loaded onto one truck and the speed of the drone relative to the speed of the truck.
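The distinction between the two objective functions discussed above is easy to make concrete: for a fixed assignment of routes to vehicles, the min-sum objective is the total of the route lengths, while the min-max objective is the length of the longest route. The toy computation below, with made-up coordinates and no routing heuristic, shows how the same set of routes is scored under each objective.

```python
# Min-sum vs. min-max route objectives on a toy instance
# (illustrative coordinates only; no VRP heuristic is implemented here).
import numpy as np

def route_length(depot, stops):
    """Length of depot -> stops -> depot for one vehicle."""
    points = np.vstack([depot, stops, depot])
    return float(np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1)))

depot = np.array([0.0, 0.0])
routes = [
    np.array([[1.0, 1.0], [2.0, 1.0], [2.0, -1.0]]),   # vehicle 1
    np.array([[-8.0, 0.5], [-9.0, -0.5]]),             # vehicle 2 (long haul)
]

lengths = [route_length(depot, r) for r in routes]
print("route lengths :", [round(v, 2) for v in lengths])
print("min-sum value :", round(sum(lengths), 2))   # total distance travelled
print("min-max value :", round(max(lengths), 2))   # completion-time proxy
```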