8 results for best estimate method

at Universidad de Alicante


Relevance:

80.00%

Publisher:

Abstract:

Light traps have been used widely to sample insect abundance and diversity, but their performance for sampling scarab beetles in tropical forests, as a function of light source type and sampling hours throughout the night, has not been evaluated. The efficiency of mercury-vapour lamps, cool white light and ultraviolet light sources in attracting Dynastinae, Melolonthinae and Rutelinae scarab beetles, and the most suitable period of the night for sampling, were tested in different forest areas of Costa Rica. Our results showed that light source wavelength and sampling hours influenced scarab beetle catches. No significant differences were observed in trap performance between the ultraviolet light and mercury-vapour traps, whereas both methods yielded significantly higher species richness and abundance than cool white light traps. Species composition also varied between methods. Large differences appeared between catches across the sampling period, with the first five hours of the night being more effective than the last five. Because of their high efficiency and logistical advantages, we recommend ultraviolet light traps deployed during the first hours of the night as the best sampling method for biodiversity studies of these scarab beetles in tropical forests.

Relevance:

40.00%

Publisher:

Abstract:

Frequently, population ecology of marine organisms uses a descriptive approach in which sizes and densities are plotted over time. This approach has limited usefulness for designing management strategies or modelling different scenarios. Population projection matrix models are among the most widely used tools in ecology. Unfortunately, for the majority of pelagic marine organisms it is difficult to mark individuals and follow them over time to determine their vital rates and build a population projection matrix model. Nevertheless, it is possible to obtain time-series data on size structure and the density of each size class, and from these to determine the matrix parameters. This approach is known as the "demographic inverse problem"; it is based on quadratic programming methods, but it has rarely been applied to aquatic organisms. We used unpublished field data on a population of the cubomedusa Carybdea marsupialis to construct a population projection matrix model and compare two management strategies for reducing the population to its pre-2008 levels, before significant interactions with bathers occurred: direct removal of medusae, and reduction of their prey. Our results showed that removal of jellyfish from all size classes was more effective than removing only juveniles or adults. When reducing prey, the C. marsupialis population declined most when prey depletion affected the prey of all medusa sizes. Our model fits the field data well and may serve to design an efficient management strategy or to build hypothetical scenarios such as removal of individuals or prey reduction. This method is applicable to other marine or terrestrial species for which density and population structure over time are available.
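The inverse approach described above can be sketched in a few lines. The three-stage structure, matrix values and plot counts below are invented for illustration (they are not the paper's C. marsupialis estimates); `nnls` solves a non-negative least-squares problem, a simple instance of the quadratic programming formulation the abstract mentions.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical 3-stage projection matrix (juvenile, subadult, adult);
# the values are illustrative only.
A_true = np.array([[0.0, 0.5, 1.2],   # fecundities
                   [0.4, 0.0, 0.0],   # juvenile -> subadult transition
                   [0.0, 0.6, 0.8]])  # maturation and adult survival

# "Field data": censuses at two consecutive times from three plots with
# different initial size structures (so the system is well conditioned).
inits = np.array([[100.0, 10.0, 5.0],
                  [10.0, 80.0, 5.0],
                  [5.0, 10.0, 60.0]])
nexts = inits @ A_true.T               # n(t+1) = A n(t) for each plot

# Inverse problem: recover each row a_i of A by non-negative least squares,
# min ||X a_i - y_i||^2  s.t.  a_i >= 0  (a simple quadratic program).
A_est = np.vstack([nnls(inits, nexts[:, i])[0] for i in range(3)])
print(np.round(A_est, 3))
```

With noiseless censuses and linearly independent initial structures, the recovered matrix matches the true one; real survey data would add noise and call for regularization.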

Relevance:

30.00%

Publisher:

Abstract:

In this paper we present different error measurements with the aim of evaluating the quality of the approximations generated by the GNG3D method for mesh simplification. The first phase of this method consists of executing the GNG3D algorithm, described in the paper. The primary goal of this phase is to obtain a simplified set of vertices representing the best approximation of the original 3D object. In the reconstruction phase we use the information provided by the optimization algorithm to reconstruct the faces, thus obtaining the optimized mesh. The implementation of three error functions, named Eavg, Emax and Esur, permits us to control the error of the simplified model, as shown in the examples studied.
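To illustrate the kind of error measurement discussed, the sketch below computes vertex-based average and maximum errors between an original vertex set and a simplified one. These are simplified stand-ins for Eavg and Emax (the surface-based Esur would also require the reconstructed faces), and the vertex sets are random data, not GNG3D output.

```python
import numpy as np

def simplification_errors(original, simplified):
    """Average and maximum vertex-to-nearest-vertex error: simplified
    stand-ins for Eavg and Emax (the surface-based Esur would also need
    the reconstructed faces, omitted here)."""
    d = np.linalg.norm(original[:, None, :] - simplified[None, :, :], axis=2)
    nearest = d.min(axis=1)     # each original vertex to its closest survivor
    return nearest.mean(), nearest.max()

rng = np.random.default_rng(0)
orig_verts = rng.random((200, 3))   # random stand-in for a mesh's vertices
simp_verts = orig_verts[::10]       # crude "simplified" subset
e_avg, e_max = simplification_errors(orig_verts, simp_verts)
```

Both measures are zero when the simplified set contains every original vertex, and Emax never falls below Eavg, which makes them easy to sanity-check.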

Relevance:

30.00%

Publisher:

Abstract:

Characterization of sound absorbing materials is essential to predict their acoustic behaviour. The most commonly used models for doing so take the flow resistivity, porosity and average fibre diameter as parameters to determine the acoustic impedance and sound absorption coefficient. Besides direct experimental techniques, numerical approaches are an alternative for estimating a material's parameters. In this work an inverse numerical method for obtaining some parameters of a fibrous material is presented. Starting from measurements of the normal-incidence sound absorption coefficient and the model proposed by Voronina, the application of basic minimization techniques allows one to obtain the porosity, average fibre diameter and density of a sound absorbing material. The numerical results agree fairly well with the experimental data.
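A minimal sketch of the inverse-fitting idea, assuming a made-up two-parameter absorption curve in place of Voronina's actual model (whose equations are not reproduced here): synthetic "measured" absorption data are fitted by basic minimization to recover porosity-like and fibre-diameter-like parameters.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for an absorption model alpha(f; porosity, d):
# these are NOT Voronina's actual equations, just a smooth two-parameter
# curve used to illustrate the inverse-fitting procedure.
def alpha_model(f, porosity, diameter_um):
    x = f * diameter_um * 1e-3
    return porosity * x / (1.0 + x)

freqs = np.linspace(100.0, 5000.0, 50)               # Hz
alpha_meas = alpha_model(freqs, 0.95, 40.0)          # synthetic "measurement"

def cost(params):
    # Sum of squared deviations between model and measured absorption.
    return np.sum((alpha_model(freqs, *params) - alpha_meas) ** 2)

res = minimize(cost, x0=[0.5, 10.0], bounds=[(0.0, 1.0), (1.0, 200.0)])
porosity_est, diameter_est = res.x
```

With real measurements the cost would not reach zero, and bounds like the ones above keep the optimizer in a physically meaningful region.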

Relevance:

30.00%

Publisher:

Abstract:

Several recent works deal with 3D data in mobile robotic problems, e.g. mapping. The data come from various kinds of sensors (time-of-flight cameras, Kinect or 3D lasers) that provide huge amounts of unorganized 3D data. In this paper we detail an efficient approach to building complete 3D models using a soft computing method, the Growing Neural Gas (GNG). Because neural models deal easily with noise, imprecision, uncertainty and partial data, the GNG provides better results than other approaches. The GNG obtained is then applied to a sequence of captures. We present a comprehensive study of the GNG parameters to ensure the best result at the lowest time cost. From the GNG structure we propose to calculate planar patches, thus obtaining a fast method to compute the movement performed by a mobile robot by means of a 3D model registration algorithm. Final 3D mapping results are also shown.
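A heavily reduced sketch of the adaptation step at the heart of GNG on a 3D cloud: each sample pulls its best-matching node (and, more weakly, the runner-up) toward it. A full GNG also inserts nodes over time and maintains an aged edge graph; both are omitted here, and all parameter values are illustrative, not the paper's.

```python
import numpy as np

def fit_nodes(points, n_nodes=20, epochs=5, lr_w=0.2, lr_n=0.006, seed=0):
    """GNG-style adaptation only: move the winner node (strongly) and the
    second winner (weakly) toward each sample.  Node insertion and the
    edge graph of a full GNG are omitted in this sketch."""
    rng = np.random.default_rng(seed)
    nodes = points[rng.choice(len(points), n_nodes, replace=False)].copy()
    for _ in range(epochs):
        for p in points[rng.permutation(len(points))]:
            d = np.linalg.norm(nodes - p, axis=1)
            s1, s2 = np.argsort(d)[:2]        # winner and second winner
            nodes[s1] += lr_w * (p - nodes[s1])
            nodes[s2] += lr_n * (p - nodes[s2])
    return nodes

# A synthetic 3D "scan" standing in for real sensor data.
cloud = np.random.default_rng(1).random((500, 3))
nodes = fit_nodes(cloud)
```

The resulting nodes spread over the cloud, which is the property the paper exploits when extracting planar patches from the GNG structure.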

Relevance:

30.00%

Publisher:

Abstract:

Thermal degradation of PLA is a complex process, since it comprises many simultaneous reactions. Analytical techniques such as differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA) yield useful information, but a more sensitive analytical technique is necessary to identify and quantify the PLA degradation products. In this work the thermal degradation of PLA at high temperatures was studied using a pyrolyzer coupled to a gas chromatograph with mass spectrometric detection (Py-GC/MS). The pyrolysis conditions (temperature and time) were optimized to obtain an adequate chromatographic separation of the compounds formed during heating. The best resolution of the chromatographic peaks was obtained by pyrolyzing the material from room temperature to 600 °C in 0.5 s. These conditions allowed the major compounds produced during PLA thermal degradation in an inert atmosphere to be identified and quantified. These operating parameters were selected by means of sequential pyrolysis combined with the fitting of mathematical models. The application of this strategy demonstrated that PLA degrades at high temperatures following non-linear behaviour. Both logistic and Boltzmann models give good fits to the experimental results, although the Boltzmann model provided the better estimate of the time at which 50% of the PLA was degraded. In conclusion, the Boltzmann model can be used as a tool for simulating PLA thermal degradation.
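The Boltzmann fit for the 50%-degradation time can be sketched as follows. The data here are entirely synthetic and stand in for the sequential-pyrolysis measurements; in the Boltzmann sigmoid, the parameter t0 is the half-transition time.

```python
import numpy as np
from scipy.optimize import curve_fit

# Boltzmann sigmoid: the response falls from A1 to A2, and t0 is the
# half-transition time, i.e. the time at which 50% has degraded.
def boltzmann(t, A1, A2, t0, dt):
    return A2 + (A1 - A2) / (1.0 + np.exp((t - t0) / dt))

# Entirely synthetic "fraction remaining" data with invented parameters.
t = np.linspace(0.0, 10.0, 30)
y = boltzmann(t, 1.0, 0.0, 4.0, 0.8)
y += np.random.default_rng(1).normal(0.0, 0.01, t.size)

popt, _ = curve_fit(boltzmann, t, y, p0=[1.0, 0.0, 5.0, 1.0])
t50 = popt[2]    # estimated 50%-degradation time
```

Reading t50 directly from a fitted parameter, rather than interpolating the raw curve, is what makes the Boltzmann form convenient for this purpose.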

Relevance:

30.00%

Publisher:

Abstract:

The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for the volumetric reconstruction of tomography data, robotics for reconstructing surfaces or scenes from range sensor information, industrial systems for the quality control of manufactured objects, and even biology for studying the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, must be processed. Many variants have been proposed in the literature that aim to improve performance, either by reducing the number of points or the required iterations, or by reducing the cost of the most expensive phase: the closest-neighbour search. Despite decreasing the complexity, some of these variants have a negative impact on the final registration precision or on the convergence domain, limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of the computationally demanding problems described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with a lower computational cost than the Euclidean distance, which is the de facto standard in implementations of the algorithm in the literature.
In that analysis, the behaviour of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method and to determine which metric offers the best results. Given that the distance calculation represents a significant part of all the computations performed by the algorithm, any reduction in the cost of that operation can be expected to have a significant positive effect on the overall performance of the method. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and experimentally validated as comparable to the Euclidean distance, using a heterogeneous set of objects, scenarios and initial situations.
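One ICP iteration with a reduced-cost correspondence metric can be sketched as below: the nearest-neighbour search uses the Manhattan (L1) distance, one example of a metric cheaper than the Euclidean one, while the rigid transform is the standard SVD (Kabsch) solution. The point clouds and the small initial misalignment are invented.

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration with L1 correspondences."""
    # L1 nearest neighbours: no squares or square roots needed.
    d = np.abs(src[:, None, :] - dst[None, :, :]).sum(axis=2)
    matched = dst[d.argmin(axis=1)]
    # Best rigid transform aligning src to its matched points (Kabsch).
    mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return src @ R.T + t

rng = np.random.default_rng(0)
dst = rng.random((100, 3))
theta = 0.05                         # small, invented misalignment
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + 0.02              # rotated and translated copy
for _ in range(10):
    src = icp_step(src, dst)
```

Only the correspondence search changes between metrics; the transform estimation is metric-independent, which is what makes such substitutions attractive.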

Relevance:

30.00%

Publisher:

Abstract:

Background and objective: In this paper we have tested the suitability of different artificial intelligence-based algorithms for decision support when classifying the risk of congenital heart surgery. Classifying these surgical risks provides considerable benefits, such as the a priori estimation of surgical outcomes depending on the type of disease, the type of repair, and other elements that influence the final result. This preventive estimation may help avoid future complications, or even death. Methods: We evaluated four machine learning algorithms: multilayer perceptron, self-organizing map, radial basis function networks and decision trees. The implemented architectures aim to classify among three levels of surgical risk: low complexity, medium complexity and high complexity. Results: The accuracies achieved range between 80% and 99%, with the multilayer perceptron offering the highest hit ratio. Conclusions: According to the results, it is feasible to develop a clinical decision support system using the evaluated algorithms. Such a system would help cardiology specialists, paediatricians and surgeons to forecast the level of risk associated with congenital heart disease surgery.
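To illustrate the three-class risk-classification setup, the sketch below trains a plain softmax classifier on synthetic data. This is deliberately much simpler than the MLP, SOM, RBF and decision-tree models evaluated in the paper, and the features and class structure are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
# Invented "surgical descriptor" features for three risk classes
# (low / medium / high complexity); purely illustrative data.
X = np.vstack([rng.normal(c, 0.5, (60, 4)) for c in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 60)

# Minimal softmax classifier trained by gradient descent, a far simpler
# stand-in than the models compared in the study.
W, b = np.zeros((4, 3)), np.zeros(3)
onehot = np.eye(3)[y]
for _ in range(500):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / len(X)
    W -= 0.1 * X.T @ grad
    b -= 0.1 * grad.sum(axis=0)

acc = (np.argmax(X @ W + b, axis=1) == y).mean()
```

A real clinical evaluation would of course use held-out patients and calibrated probabilities rather than training accuracy on synthetic data.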