957 results for region-based algorithms
Abstract:
Tropical Applications of Meteorology Using Satellite and Ground-Based Observations (TAMSAT) rainfall estimates are used extensively across Africa for operational rainfall monitoring and food security applications; thus, regional evaluations of TAMSAT are essential to ensure its reliability. This study assesses the performance of TAMSAT rainfall estimates, along with the African Rainfall Climatology (ARC), version 2; the Tropical Rainfall Measuring Mission (TRMM) 3B42 product; and the Climate Prediction Center morphing technique (CMORPH), against a dense rain gauge network over a mountainous region of Ethiopia. Overall, TAMSAT exhibits good skill in detecting rainy events but underestimates rainfall amount, while ARC underestimates both rainfall amount and rainy event frequency. Meanwhile, TRMM consistently performs best in detecting rainy events and capturing the mean rainfall and seasonal variability, while CMORPH tends to overdetect rainy events. Moreover, the mean difference in daily rainfall between the products and rain gauges shows increasing underestimation with increasing elevation. However, the distribution in satellite–gauge differences demonstrates that although 75% of retrievals underestimate rainfall, up to 25% overestimate rainfall over all elevations. Case studies using high-resolution simulations suggest underestimation in the satellite algorithms is likely due to shallow convection with warm cloud-top temperatures in addition to beam-filling effects in microwave-based retrievals from localized convective cells. The overestimation by IR-based algorithms is attributed to nonraining cirrus with cold cloud-top temperatures. These results stress the importance of understanding regional precipitation systems causing uncertainties in satellite rainfall estimates with a view toward using this knowledge to improve rainfall algorithms.
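The rainy-event detection skill referred to above is typically summarized with contingency-table scores. A minimal sketch of two of them, probability of detection (POD) and false alarm ratio (FAR); the 1 mm/day threshold and the daily series are invented for the example, not TAMSAT or gauge data:

```python
# Contingency-table skill scores for daily rainy-event detection,
# of the kind used when validating satellite estimates against gauges.
# Threshold and sample series are assumptions for illustration.

def detection_scores(sat, gauge, threshold=1.0):
    """Return (POD, FAR) for rainy-day detection at >= threshold mm/day."""
    hits = sum(1 for s, g in zip(sat, gauge) if s >= threshold and g >= threshold)
    misses = sum(1 for s, g in zip(sat, gauge) if s < threshold and g >= threshold)
    false_alarms = sum(1 for s, g in zip(sat, gauge) if s >= threshold and g < threshold)
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return pod, far

sat   = [0.0, 3.2, 0.5, 7.1, 0.0, 2.0, 0.0]   # invented satellite estimates (mm/day)
gauge = [0.0, 4.0, 1.5, 6.0, 0.0, 0.2, 1.1]   # invented gauge observations (mm/day)
pod, far = detection_scores(sat, gauge)
```

A POD below 1 with a low FAR would correspond to the "good detection but underestimation" behaviour described for TAMSAT.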
Abstract:
We investigate several two-dimensional guillotine cutting stock problems and their variants in which orthogonal rotations are allowed. We first present two dynamic programming based algorithms for the Rectangular Knapsack (RK) problem and its variants in which the patterns must be staged. The first algorithm solves the recurrence formula proposed by Beasley; the second algorithm - for staged patterns - also uses a recurrence formula. We show that if the items are not so small compared to the dimensions of the bin, then these algorithms require polynomial time. Using these algorithms we solved all instances of the RK problem found at the OR-LIBRARY, including one for which no optimal solution was known. We also consider the Two-dimensional Cutting Stock problem. We present a column generation based algorithm for this problem that uses the first algorithm mentioned above to generate the columns. We propose two strategies to tackle the residual instances. We also investigate a variant of this problem where the bins have different sizes. Finally, we study the Two-dimensional Strip Packing problem. We also present a column generation based algorithm for this problem that uses the second algorithm mentioned above, where staged patterns are imposed. In this case we solve instances for two-, three- and four-staged patterns. We report on some computational experiments with the various algorithms we propose in this paper. The results indicate that these algorithms seem suitable for solving real-world instances. We give a detailed description (a pseudo-code) of all the algorithms presented here, so that the reader may easily implement them. (c) 2007 Elsevier B.V. All rights reserved.
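The guillotine recurrence that the first algorithm solves can be sketched as follows: the best value for a sub-rectangle is either a single item placed in it or the sum of the two pieces produced by a vertical or horizontal guillotine cut. This is only a toy, unstaged version in the spirit of Beasley's formula (item sizes, values, and the bin are invented), not the paper's pseudo-code:

```python
from functools import lru_cache

# Sketch of the unstaged guillotine Rectangular Knapsack recurrence:
# best(w, h) = max(best single item fitting in w x h,
#                  best vertical cut, best horizontal cut).
# Items and bin dimensions are made up for the example; no rotation.

def guillotine_knapsack(W, H, items):
    """items: list of (width, height, value); returns max value in a W x H bin."""
    @lru_cache(maxsize=None)
    def best(w, h):
        # Best single item that fits in the sub-rectangle.
        v = max((val for iw, ih, val in items if iw <= w and ih <= h), default=0)
        # Every vertical guillotine cut (symmetry makes w//2 enough)...
        for x in range(1, w // 2 + 1):
            v = max(v, best(x, h) + best(w - x, h))
        # ...and every horizontal one.
        for y in range(1, h // 2 + 1):
            v = max(v, best(w, y) + best(w, h - y))
        return v
    return best(W, H)

# Optimal here is three horizontal 4x1 strips (value 3 * 4 = 12).
value = guillotine_knapsack(4, 3, [(2, 3, 5), (4, 1, 4)])
```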
Abstract:
This document was prepared by the Economic Commission for Latin America and the Caribbean (ECLAC) and offers a description of the main trends in the development of official statistics in Latin America and the Caribbean and the principal challenges in that regard. The first chapter provides an analysis of the state of development of statistical production in the region, based on statistical information for 41 Latin American and Caribbean countries and eight specific areas. The institutional organization of national statistical systems in the region is also described. Chapter II examines the history and current status of mechanisms for regional and subregional coordination and of the Statistical Conference of the Americas of ECLAC. Chapter III describes the main challenges for official statistics in the countries of the region and the strategies that the Statistical Conference of the Americas and ECLAC propose to implement in order to address them.
Abstract:
Solution of structural reliability problems by the First Order method requires optimization algorithms to find the smallest distance between a limit state function and the origin of standard Gaussian space. The Hasofer-Lind-Rackwitz-Fiessler (HLRF) algorithm, developed specifically for this purpose, has been shown to be efficient but not robust, as it fails to converge for a significant number of problems. On the other hand, recent developments in general (augmented Lagrangian) optimization techniques have not been tested in application to structural reliability problems. In the present article, three new optimization algorithms for structural reliability analysis are presented. One algorithm is based on the HLRF, but uses a new differentiable merit function with Wolfe conditions to select step length in line search. It is shown in the article that, under certain assumptions, the proposed algorithm generates a sequence that converges to the local minimizer of the problem. Two new augmented Lagrangian methods are also presented, which use quadratic penalties to solve nonlinear problems with equality constraints. Performance and robustness of the new algorithms are compared to the classic augmented Lagrangian method, to HLRF and to the improved HLRF (iHLRF) algorithms, in the solution of 25 benchmark problems from the literature. The new proposed HLRF algorithm is shown to be more robust than HLRF or iHLRF, and as efficient as the iHLRF algorithm. The two augmented Lagrangian methods proposed herein are shown to be more robust and more efficient than the classical augmented Lagrangian method.
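For reference, the basic HLRF iteration the article builds on can be sketched in a few lines: it repeatedly projects the current point onto the linearized limit state surface, converging to the design point (the point on g(u) = 0 closest to the origin of standard normal space). The limit state function below is a common textbook benchmark with reliability index beta = 2.5, not one of the article's 25 problems:

```python
import math

# Classic HLRF iteration for the First Order Reliability Method (FORM).
# The benchmark limit state and the finite-difference gradient are
# assumptions for this sketch, not the article's implementation.

def grad(g, u, eps=1e-6):
    """Forward-difference gradient of g at u."""
    return [(g([x + eps if i == j else x for j, x in enumerate(u)]) - g(u)) / eps
            for i in range(len(u))]

def hlrf(g, u0, tol=1e-6, max_iter=100):
    u = list(u0)
    for _ in range(max_iter):
        gu, dg = g(u), grad(g, u)
        norm2 = sum(d * d for d in dg)
        # HLRF update: project u onto the linearized surface g(u) = 0.
        u_new = [(sum(d * x for d, x in zip(dg, u)) - gu) / norm2 * d for d in dg]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            return u_new
        u = u_new
    return u

g = lambda u: 0.1 * (u[0] - u[1]) ** 2 - (u[0] + u[1]) / math.sqrt(2) + 2.5
u_star = hlrf(g, [0.0, 0.0])
beta = math.sqrt(sum(x * x for x in u_star))   # reliability index
```

The non-robustness discussed in the abstract shows up when this bare iteration is applied to strongly nonlinear limit states; the article's merit-function line search is precisely what this sketch omits.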
Abstract:
The task considered in this paper is performance evaluation of region segmentation algorithms in the ground-truth-based paradigm. Given a machine segmentation and a ground-truth segmentation, performance measures are needed. We propose to consider the image segmentation problem as one of data clustering and, as a consequence, to use measures for comparing clusterings developed in statistics and machine learning. By doing so, we obtain a variety of performance measures which have not been used before in image processing. In particular, some of these measures have the highly desired property of being a metric. Experimental results are reported on both synthetic and real data to validate the measures and compare them with others.
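One of the clustering-comparison measures this viewpoint makes available is the Rand index: the fraction of pixel pairs on which two segmentations agree, i.e. the pair falls in the same region in both or in different regions in both. A toy sketch on flattened label images (the two labelings are invented):

```python
from itertools import combinations

# Rand index between a machine segmentation and a ground-truth
# segmentation, both given as flat per-pixel region labels.

def rand_index(a, b):
    """Fraction of pixel pairs on which labelings a and b agree."""
    agree = sum(1 for i, j in combinations(range(len(a)), 2)
                if (a[i] == a[j]) == (b[i] == b[j]))
    return agree / (len(a) * (len(a) - 1) / 2)

machine      = [0, 0, 1, 1, 2, 2]   # invented 6-pixel "image"
ground_truth = [0, 0, 1, 1, 1, 2]
ri = rand_index(machine, ground_truth)
```

Label values do not need to match between the two segmentations; only the grouping of pixels matters, which is what makes clustering measures suitable here.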
Abstract:
Heuristic optimization algorithms are of great importance for reaching solutions to various real world problems. These algorithms have a wide range of applications such as cost reduction, artificial intelligence, and medicine. By the term cost, we mean, for instance, the value of a function of several independent variables. Often, when dealing with engineering problems, we want to minimize the value of a function in order to achieve an optimum, or to maximize another parameter which increases with a decrease in the cost (the value of this function). Heuristic cost reduction algorithms work by finding the optimum values of the independent variables for which the value of the function (the “cost”) is the minimum. There is an abundance of heuristic cost reduction algorithms to choose from. We will start with a discussion of various optimization algorithms such as Memetic algorithms, force-directed placement, and evolution-based algorithms. Following this initial discussion, we will take up the working of three algorithms and implement them in MATLAB. The focus of this report is to provide detailed information on the working of three different heuristic optimization algorithms, and to conclude with a comparative study of their performance when implemented in MATLAB. The three algorithms we take into consideration are the non-adaptive simulated annealing algorithm, the adaptive simulated annealing algorithm, and the random restart hill climbing algorithm. The algorithms are heuristic in nature; that is, the solution they reach may not be the best of all solutions, but they provide a means to obtain a reasonably good solution quickly, without taking an indefinite time.
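The report implements the algorithms in MATLAB; purely as an illustration, the accept/reject mechanics of non-adaptive simulated annealing can be sketched in Python as follows (the cost function, step size, and geometric cooling schedule are assumptions for the example):

```python
import math, random

# Non-adaptive simulated annealing on a one-variable cost function.
# Improvements are always accepted; worse moves are accepted with
# probability exp(-delta / t), which shrinks as the temperature cools.

def simulated_annealing(cost, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000):
    random.seed(0)                 # fixed seed so the example is reproducible
    x, best = x0, x0
    t = t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        delta = cost(cand) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand               # accept the candidate move
        if cost(x) < cost(best):
            best = x               # remember the best point seen so far
        t *= cooling               # geometric (non-adaptive) cooling
    return best

# Invented cost function with its minimum at x = 3.
best = simulated_annealing(lambda x: (x - 3.0) ** 2 + 1.0, x0=-10.0)
```

At high temperature the walk explores freely; as t decays the acceptance test becomes effectively greedy, which is exactly the behaviour the adaptive variant tunes on the fly.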
Abstract:
Particulate matter concentration and water temperature at the 5 m depth level are compared in the Canary upwelling region to the east of Cape Blanc. It was found that accumulation of particulate matter was associated with hydrofrontal zones. Particle size distributions for particulate matter obtained using the Coulter counter follow a hyperbolic (Junge-type) law with two values of the size parameter, which changes at particle diameters of 5-6 microns. Average values of the size parameter in the upwelling region are significantly lower than in the open ocean. The specific surface of particulate matter, which is associated with reactivity, differs significantly on the two sides of the upwelling front and increases beyond the upwelling.
Abstract:
In the present global era, in which firms choose the location of their plants beyond national borders, location characteristics are important for attracting multinational enterprises (MNEs). Better access to countries with large markets is clearly attractive for MNEs. For example, special tariff treatments such as the Generalized System of Preferences (GSP) are beneficial for MNEs whose home country does not have such treatments. Not only such country characteristics but also region characteristics (i.e. province-level or city-level ones) matter, particularly when location characteristics differ widely between a nation's regions. The existence of industrial concentration, that is, agglomeration, is a typical regional characteristic. It is with consideration of these country-level and region-level characteristics that MNEs decide their location abroad. A large number of academic studies have investigated in what kinds of countries MNEs locate, i.e. location choice analysis. Employing the usual new economic geography model (i.e. constant elasticity of substitution (CES) utility function, Dixit-Stiglitz monopolistic competition, and iceberg trade costs), the literature derives the profit function, whose coefficients are estimated using maximum likelihood procedures. Recent studies are as follows: Head, Ries, and Swenson (1999) for Japanese MNEs in the US; Belderbos and Carree (2002) for Japanese MNEs in China; Head and Mayer (2004) for Japanese MNEs in Europe; Disdier and Mayer (2004) for French MNEs in Europe; Castellani and Zanfei (2004) for large MNEs worldwide; Mayer, Mejean, and Nefussi (2007) for French MNEs worldwide; Crozet, Mayer, and Mucchielli (2004) for MNEs in France; and Basile, Castellani, and Zanfei (2008) for MNEs in Europe. At the present time, three main topics can be found in this literature. The first introduces various location elements as independent variables.
The above-mentioned new economic geography model usually yields the profit function, which is a function of market size, productive factor prices, price of intermediate goods, and trade costs. As a proxy for the price of intermediate goods, the measure of agglomeration is often used, particularly the number of manufacturing firms. Some studies employ more disaggregated numbers of manufacturing firms, such as the number of manufacturing firms with the same nationality as the firms choosing the location (e.g., Head et al., 1999; Crozet et al., 2004) or the number of firms belonging to the same firm group (e.g., Belderbos and Carree, 2002). As part of trade costs, some investment climate measures have been examined: free trade zones in the US (Head et al., 1999), special economic zones and opening coastal cities in China (Belderbos and Carree, 2002), and Objective 1 structural funds and cohesion funds in Europe (Basile et al., 2008). Second, the validity of proxy variables for location elements is further examined. Head and Mayer (2004) examine the validity of market potential on location choice. They propose the use of two measures: the Harris market potential index (Harris, 1954) and the Krugman-type index used in Redding and Venables (2004). The Harris-type index is simply the sum of distance-weighted real GDP. They employ the Krugman-type market potential index, which is directly derived from the new economic geography model, as it takes into account the extent of competition (i.e. price index) and is constructed using estimators of importing country dummy variables in the well-known gravity equation, as in Redding and Venables (2004). They find that "theory does not pay", in the sense that the Harris market potential outperforms Krugman's market potential in both the magnitude of its coefficient and the fit of the model to be estimated. The third topic explores the substitution of location by examining inclusive values in the nested-logit model. 
For example, using firm-level data on French investments both in France and abroad over the 1992-2002 period, Mayer et al. (2007) investigate the determinants of location choice and assess empirically whether the domestic economy has been losing attractiveness over the recent period or not. The estimated coefficient for inclusive value is strongly significant and near unity, indicating that the national economy is not different from the rest of the world in terms of substitution patterns. Similarly, Disdier and Mayer (2004) investigate whether French MNEs consider Western and Eastern Europe as two distinct groups of potential host countries by examining the coefficient for the inclusive value in nested-logit estimation. They confirm the relevance of an East-West structure in the country location decision and furthermore show that this relevance decreases over time. The purpose of this paper is to investigate the location choice of Japanese MNEs in Thailand, Cambodia, Laos, Myanmar, and Vietnam, and is closely related to the third topic mentioned above. By examining region-level location choice with the nested-logit model, I investigate the relative importance of not only country characteristics but also region characteristics. Such investigation is invaluable particularly in the case of location choice in those five countries: industrialization remains immature in those countries which have not yet succeeded in attracting enough MNEs, and as a result, it is expected that there are not yet crucial regional variations for MNEs within such a nation, meaning the country characteristics are still relatively important to attract MNEs. To illustrate, in the case of Cambodia and Laos, one of the crucial elements for Japanese MNEs would be that LDC preferential tariff schemes are available for exports from Cambodia and Laos. 
On the other hand, in the case of Thailand and Vietnam, which have accepted a relatively large number of MNEs and thus raised the extent of regional inequality, regional characteristics such as the existence of agglomeration would become important elements in location choice. Our sample countries seem, therefore, to offer rich variations for analyzing the relative importance between country characteristics and region characteristics. Our empirical strategy has a further advantage. As in the third topic in the location choice literature, the use of the nested-logit model enables us to examine substitution patterns between country-based and region-based location decisions by MNEs in the concerned countries. For example, it is possible to investigate empirically whether Japanese multinational firms consider Thailand/Vietnam and the other three countries as two distinct groups of potential host countries, by examining the inclusive value parameters in nested-logit estimation. In particular, our sample countries all experienced dramatic changes in, for example, economic growth or trade cost reduction during the sample period. Thus, we expect to find dramatic dynamics in such substitution patterns. Our rigorous analysis of the relative importance between country characteristics and region characteristics is invaluable from the viewpoint of policy implications. First, while the former characteristics should be improved mainly by central government in each country, there is sometimes room for the improvement of the latter characteristics by even local governments or smaller institutions such as private agencies. Consequently, it becomes important for these smaller institutions to know just how crucial the improvement of region characteristics is for attracting foreign companies. Second, as economies grow, country characteristics become similar among countries. For example, the LDC preferential tariff schemes are available only when a country is less developed.
Therefore, it is important particularly for the least developed countries to know what kinds of regional characteristics become important following economic growth; in other words, after their country characteristics become similar to those of the more developed countries. I also incorporate one important characteristic of MNEs, namely, productivity. The well-known Helpman-Melitz-Yeaple model indicates that only firms with higher productivity can afford overseas entry (Helpman et al., 2004). Beyond this argument, there may be some differences in MNEs' productivity among our sample countries and regions. Such differences are important from the viewpoint of "spillover effects" from MNEs, which are one of the most important results for host countries in accepting their entry. The spillover effects are that the presence of inward foreign direct investment (FDI) raises domestic firms' productivity through various channels such as imitation. Such positive effects might be larger in areas with more productive MNEs. Therefore, it becomes important for host countries to know how productive the firms likely to invest in them are. The rest of this paper is organized as follows. Section 2 takes a brief look at the worldwide distribution of Japanese overseas affiliates. Section 3 provides an empirical model to examine their location choice, and lastly, we discuss future work to estimate our model.
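The Harris (1954) market potential index discussed above is simply the sum of distance-weighted GDP over all regions. A toy computation (the region names, GDP figures, distances, and the own-region internal distance are all invented):

```python
# Harris-type market potential: for each region, sum GDP of every region
# weighted by inverse distance, including the region itself at an
# assumed internal distance of 1. All numbers are invented.

gdp = {"A": 300.0, "B": 120.0, "C": 60.0}
dist = {("A", "A"): 1.0, ("A", "B"): 4.0, ("A", "C"): 8.0,
        ("B", "B"): 1.0, ("B", "C"): 5.0, ("C", "C"): 1.0}

def distance(i, j):
    """Symmetric lookup in the sparse distance table."""
    return dist.get((i, j), dist.get((j, i)))

def harris_potential(region):
    return sum(gdp[j] / distance(region, j) for j in gdp)

potential = {r: harris_potential(r) for r in gdp}
```

The Krugman-type index that "does not pay" in Head and Mayer (2004) replaces the inverse-distance weights with terms derived from gravity-equation estimates; the Harris version above is the simpler benchmark that outperformed it.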
Abstract:
This thesis deals with the problem of efficiently tracking 3D objects in sequences of images. We tackle the efficient 3D tracking problem by using direct image registration. This problem is posed as an iterative optimization procedure that minimizes a brightness error norm. We review the most popular iterative methods for image registration in the literature, turning our attention to those algorithms that use efficient optimization techniques. Two forms of efficient registration algorithms are investigated. The first type comprises the additive registration algorithms: these algorithms incrementally compute the motion parameters by linearly approximating the brightness error function. We centre our attention on Hager and Belhumeur’s factorization-based algorithm for image registration. We propose a fundamental requirement that factorization-based algorithms must satisfy to guarantee good convergence, and introduce a systematic procedure that automatically computes the factorization. Finally, we also introduce two warp functions to register rigid and nonrigid 3D targets that satisfy the requirement. The second type comprises the compositional registration algorithms, where the brightness error function is written using function composition. We study the current approaches to compositional image alignment, and we emphasize the importance of the Inverse Compositional method, which is known to be the most efficient image registration algorithm. We introduce a new algorithm, the Efficient Forward Compositional image registration: this algorithm avoids the need to invert the warping function, and provides a new interpretation of the working mechanisms of inverse compositional alignment. By using this information, we propose two fundamental requirements that guarantee the convergence of compositional image registration methods. Finally, we support our claims with extensive experimental testing on synthetic and real-world data.
We propose a distinction between image registration and tracking when using efficient algorithms. We show that, depending on whether the fundamental requirements hold, some efficient algorithms are eligible for image registration but not for tracking.
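The additive registration idea studied in the thesis can be illustrated in one dimension: the brightness error between a template and a shifted signal is linearized around the current motion estimate, and the single translation parameter is updated with a Gauss-Newton step. The signal and shift below are invented; this sketches the general additive scheme, not the factorization-based algorithm itself:

```python
# 1-D additive image registration: estimate the translation p that
# aligns a signal with a template by iteratively linearizing the
# brightness error. Signal, shift, and step count are invented.

def sample(signal, x):
    """Linear interpolation with edge clamping."""
    x = min(max(x, 0.0), len(signal) - 1.0)
    i = int(x)
    f = x - i
    return signal[i] if i + 1 >= len(signal) else (1 - f) * signal[i] + f * signal[i + 1]

def register_translation(template, signal, p0=0.0, iters=100):
    p = p0
    xs = range(len(template))
    for _ in range(iters):
        # Brightness error and signal gradient at the warped positions.
        err  = [sample(signal, x + p) - template[x] for x in xs]
        grad = [sample(signal, x + p + 0.5) - sample(signal, x + p - 0.5) for x in xs]
        den = sum(g * g for g in grad)
        if den == 0:
            break
        p -= sum(g * e for g, e in zip(grad, err)) / den   # Gauss-Newton step
    return p

signal = [0.0, 0.0, 1.0, 4.0, 9.0, 4.0, 1.0, 0.0, 0.0, 0.0]   # invented bump
true_shift = 1.5
template = [sample(signal, x + true_shift) for x in range(len(signal))]
p_est = register_translation(template, signal)
```

The compositional family reformulates the same minimization through warp composition, which is what allows the inverse compositional trick to precompute the gradient term.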
Abstract:
In this paper, we apply a hierarchical tracking strategy for planar objects (or objects that can be assumed to be planar) that is based on direct methods for vision-based applications on-board UAVs. The use of this tracking strategy allows the tasks to be achieved at real-time frame rates and overcomes problems posed by the challenging conditions of the tasks: e.g. constant vibrations, fast 3D changes, or limited capacity on-board. The vast majority of approaches make use of feature-based methods to track objects. Nonetheless, in this paper we show that although some of these feature-based solutions are faster, direct methods can be more robust under fast 3D motions (fast changes in position), some changes in appearance, constant vibrations (without requiring any specific hardware or software for video stabilization), and situations in which part of the object to track is outside the field of view of the camera. The performance of the proposed tracking strategy on-board UAVs is evaluated with images from real flight tests using manually-generated ground truth information, accurate position estimation using a Vicon system, and also with simulated data from a simulation environment. Results show that the hierarchical tracking strategy performs better than well-known feature-based algorithms and well-known configurations of direct methods, and that its performance is robust enough for vision-in-the-loop tasks, e.g. for vision-based landing tasks.
Abstract:
Algorithms based on feedback shift registers (FSR) have been used as pseudorandom stream generators in resource-constrained applications such as remote keyless entry systems. The primary channel is the one used to transmit information. The emergence of side-channel attacks (SCA), which exploit information leaked unintentionally through side channels such as power consumption, electromagnetic emissions, or timing, poses a serious threat to these applications, since the devices are physically accessible to an attacker. The objective of this thesis is to provide a set of countermeasures that can be applied automatically and that use resources already available, avoiding a substantial cost increase and extending the useful life of applications that may already be deployed. We exploit the parallelism present in FSR algorithms, since there is only a 1-bit difference between the states of consecutive rounds. We make contributions at three levels: at the system level, using a reconfigurable coprocessor; through the compiler; and at the bit level, taking advantage of the resources available in the processor. We propose a framework that allows us to evaluate implementations of an algorithm, including the effects introduced by the compiler, under the assumption of an expert attacker. On the attack side, we have proposed a new differential attack that is better suited to the conditions of software FSR implementations, in which power consumption is very similar between rounds. SORU2 is a reconfigurable vector coprocessor proposed to reduce energy consumption in loop-based applications with parallelism. We also propose the use of SORU2 to execute FSR-based algorithms securely.
Being reconfigurable, it involves no resource overhead, since it is not dedicated exclusively to the encryption algorithm. We propose a configuration that executes multiple similar encryption algorithms simultaneously, with different implementations and keys. Starting from an unprotected implementation, which we show to be completely vulnerable to SCA, we obtain an implementation that is secure against the attacks we carried out. At the compiler level, we propose a mechanism to evaluate the effects of compiler optimization sequences on an implementation. The number of possible compiler optimization sequences is extremely high. The proposed framework includes an algorithm for selecting the optimization sequences to consider. Because compiler optimizations transform implementations, different implementations can be generated automatically and combined to increase resistance against SCA. We propose two mechanisms for applying these countermeasures; they increase the security of the original implementation but cannot be considered secure. Finally, we have proposed bit-level parallel execution of the algorithm on a processor. We use the algebraic normal form of the algorithm, which parallelizes automatically. The resulting implementation of the evaluated algorithm improves performance and avoids leaking information through data-dependent execution. However, it is more vulnerable to differential attacks than the original implementation. We propose a modification of the algorithm to obtain a secure implementation by randomly discarding part of the algorithm's executions. This implementation introduces no performance overhead compared to the original implementations.
In short, we have proposed several original mechanisms at different levels to introduce randomness into implementations of FSR algorithms without substantially increasing the required resources. ABSTRACT Feedback Shift Registers (FSR) have been traditionally used to implement pseudorandom sequence generators. These generators are used in stream ciphers in systems with tight resource constraints, such as Remote Keyless Entry. When electronic devices communicate, the primary channel is the one used to transmit the information. Side-Channel Attacks (SCA) use additional information leaking from the actual implementation, including power consumption, electromagnetic emissions or timing information. SCA are a serious threat to FSR-based applications, as an attacker usually has physical access to the devices. The main objective of this Ph.D. thesis is to provide a set of countermeasures that can be applied automatically using the available resources, avoiding a significant cost overhead and extending the useful life of deployed systems. Where possible, we take advantage of the inherent parallelism of FSR-based algorithms, as the state of an FSR differs from the previous one only in 1 bit. We have contributed at three different levels: architecture (using a reconfigurable co-processor), compiler optimizations, and the bit level, making the most of the resources available in the processor. We have developed a framework to evaluate implementations of an algorithm including the effects introduced by the compiler. We consider the presence of an expert attacker with great knowledge of the application and the device. Regarding SCA, we have presented a new differential SCA that performs better than traditional SCA on software FSR-based algorithms, where the leaked values are similar between rounds. SORU2 is a reconfigurable vector co-processor.
It has been developed to reduce energy consumption in loop-based applications with parallelism. In addition, we propose its use for secure implementations of FSR-based algorithms. The cost overhead is discarded as the co-processor is not exclusively dedicated to the encryption algorithm. We present a co-processor configuration that executes multiple simultaneous encryptions, using different implementations and keys. From a basic implementation, which is proved to be vulnerable to SCA, we obtain an implementation on which the SCA applied were unsuccessful. At compiler level, we use the framework to evaluate the effect of sequences of compiler optimization passes on a software implementation. There are many optimization passes available, and since the optimization sequences are combinations of these passes, the number of sequences is extremely high. The framework includes an algorithm for the selection of interesting sequences that require detailed evaluation. As existing compiler optimizations transform the software implementation, using different optimization sequences we can automatically generate different implementations. We propose to randomly switch between the generated implementations to increase the resistance against SCA. We propose two countermeasures. The results show that, although they increase the resistance against SCA, the resulting implementations are not secure. At bit level, we propose to exploit the bit-level parallelism of FSR-based implementations using a pseudo-bitslice implementation on a wireless node processor. The bitslice implementation is automatically obtained from the Algebraic Normal Form of the algorithm. The results show a performance improvement, avoiding timing information leakage, but increasing the vulnerability against differential SCA. We provide a secure version of the algorithm by randomly discarding part of the data obtained. The overhead in performance is negligible when compared to the original implementations.
To summarize, we have proposed a set of original countermeasures at different levels that introduce randomness in FSR-based algorithms avoiding a heavy overhead on the resources required.
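For readers unfamiliar with FSRs, a minimal Fibonacci linear feedback shift register illustrates the structure the thesis exploits: consecutive states differ only in the single bit shifted in, which underlies both the parallelism used by the countermeasures and the round-to-round similarity of power consumption that the differential SCA targets. The 4-bit taps below are a textbook maximal-length choice (period 15), not one of the thesis's target ciphers:

```python
# Minimal Fibonacci LFSR: output the low bit, XOR the tap bits to get
# the feedback bit, shift right, and insert the feedback at the top.
# Taps (0, 1) on a 4-bit register give the full period of 2**4 - 1 = 15.

def lfsr_stream(state, taps, nbits, count):
    """Return `count` output bits from a Fibonacci LFSR."""
    out = []
    for _ in range(count):
        out.append(state & 1)                       # output the low bit
        fb = 0
        for t in taps:                              # XOR of the tap positions
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))  # shift in the feedback bit
    return out

bits = lfsr_stream(state=0b1001, taps=(0, 1), nbits=4, count=15)
```

A maximal-length register visits all 15 nonzero states before repeating, producing 8 ones and 7 zeros per period; real stream ciphers use much longer, nonlinearly filtered registers.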
Abstract:
Background and objective: In this paper, we have tested the suitability of using different artificial intelligence-based algorithms for decision support when classifying the risk of congenital heart surgery. In this sense, classification of those surgical risks provides enormous benefits, such as the a priori estimation of surgical outcomes depending on the type of disease, the type of repair, and other elements that influence the final result. This preventive estimation may help to avoid future complications, or even death. Methods: We have evaluated four machine learning algorithms to achieve our objective: multilayer perceptron, self-organizing map, radial basis function networks and decision trees. The architectures implemented have the aim of classifying among three types of surgical risk: low complexity, medium complexity and high complexity. Results: Accuracy outcomes achieved range between 80% and 99%, with the multilayer perceptron offering the highest hit ratio. Conclusions: According to the results, it is feasible to develop a clinical decision support system using the evaluated algorithms. Such a system would help cardiology specialists, paediatricians and surgeons to forecast the level of risk related to congenital heart disease surgery.
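As an architectural illustration of the best-performing model, here is the forward pass of a tiny multilayer perceptron with one hidden layer and a three-way softmax over {low, medium, high} risk. The weights are hand-picked so that a single invented "severity" feature is split into three bands; they are not learned from the paper's clinical data:

```python
import math

# Forward pass of a 2-input, 2-hidden-unit, 3-output MLP with tanh
# hidden activations and a softmax output. Weights are hand-set for
# illustration only.

def mlp_forward(x, W1, b1, W2, b2):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    logits = [sum(w * h for w, h in zip(row, hidden)) + b
              for row, b in zip(W2, b2)]
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]     # numerically stable softmax
    s = sum(exps)
    return [e / s for e in exps]

# Two hidden units act as soft thresholds on the first ("severity") feature.
W1, b1 = [[4.0, 0.0], [4.0, 0.0]], [-2.0, -6.0]
W2 = [[-4.0, -4.0], [4.0, -4.0], [4.0, 4.0]]
b2 = [0.0, 0.0, 0.0]

risk_labels = ["low", "medium", "high"]
probs = mlp_forward([0.2, 0.0], W1, b1, W2, b2)
predicted = risk_labels[max(range(3), key=probs.__getitem__)]
```

Training would adjust W1, b1, W2, b2 by backpropagation on labelled surgical cases; the paper's reported 80-99% accuracies come from that supervised step, which this sketch omits.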
Abstract:
We have been investigating the cryptographical properties of infinite families of simple graphs of large girth with a special colouring of vertices during the last 10 years. Such families can be used for the development of cryptographical algorithms (in symmetric or public key modes) and turbocodes in error correction theory. Only a few families of simple graphs of large unbounded girth and arbitrarily large degree are known. The paper is devoted to the more general theory of directed graphs of large girth and their cryptographical applications. It contains new explicit algebraic constructions of infinite families of such graphs. We show that they can be used for the implementation of secure and very fast symmetric encryption algorithms. The symbolic computation technique allows us to create a public key mode for the encryption scheme based on algebraic graphs.
Abstract:
As is well known, Hessian-based adaptive filters (such as the recursive-least squares algorithm (RLS) for supervised adaptive filtering, or the Shalvi-Weinstein algorithm (SWA) for blind equalization) converge much faster than gradient-based algorithms [such as the least-mean-squares algorithm (LMS) or the constant-modulus algorithm (CMA)]. However, when the problem is tracking a time-variant filter, the issue is not so clear-cut: there are environments for which each family presents better performance. Given this, we propose the use of a convex combination of algorithms of different families to obtain an algorithm with superior tracking capability. We show the potential of this combination and provide a unified theoretical model for the steady-state excess mean-square error for convex combinations of gradient- and Hessian-based algorithms, assuming a random-walk model for the parameter variations. The proposed model is valid for algorithms of the same or different families, and for supervised (LMS and RLS) or blind (CMA and SWA) algorithms.
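The combination mechanics can be sketched with two gradient-based (LMS) components of different step sizes; the paper's point is to mix a gradient-based and a Hessian-based filter, but the mixing rule is the same: y = lam*y1 + (1 - lam)*y2 with lam = sigmoid(a), and a adapted from the combined error. The scalar unknown system and the noise level below are invented:

```python
import math, random

# Convex combination of two LMS filters (fast and slow step size)
# identifying an invented scalar system. Both components adapt
# independently; the mixing state `a` follows whichever is better.

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

random.seed(1)
w_true = 0.8                        # unknown scalar system to identify
w1, w2, a = 0.0, 0.0, 0.0           # fast LMS, slow LMS, mixing state
mu1, mu2, mu_a = 0.3, 0.02, 1.0
errors = []
for n in range(2000):
    x = random.gauss(0.0, 1.0)
    d = w_true * x + random.gauss(0.0, 0.01)      # noisy desired signal
    y1, y2 = w1 * x, w2 * x
    lam = sigmoid(a)
    y = lam * y1 + (1.0 - lam) * y2               # combined output
    e, e1, e2 = d - y, d - y1, d - y2
    w1 += mu1 * e1 * x                            # independent LMS updates
    w2 += mu2 * e2 * x
    a += mu_a * e * (y1 - y2) * lam * (1.0 - lam) # gradient step on the mixer
    errors.append(e * e)
final_mse = sum(errors[-200:]) / 200
```

In a tracking scenario with a random-walk parameter, lam would drift toward whichever family (gradient- or Hessian-based) currently tracks better, which is the behaviour the paper models in steady state.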
Abstract:
One of the current frontiers in the clinical management of Pectus Excavatum (PE) patients is the prediction of the surgical outcome prior to the intervention. This can be done through computerized simulation of the Nuss procedure, which requires an anatomically correct representation of the costal cartilage. To this end, we take advantage of the costal cartilage tubular structure to detect it through multi-scale vesselness filtering. This information is then used in an interactive 2D initialization procedure which uses anatomical maximum intensity projections of 3D vesselness feature images to efficiently initialize the 3D segmentation process. We identify the cartilage tissue centerlines in these projected 2D images using a livewire approach. We finally refine the 3D cartilage surface through region-based sparse field level-sets. We have tested the proposed algorithm on 6 non-contrast CT datasets from PE patients. A good segmentation performance was found against reference manual contouring, with an average Dice coefficient of 0.75 ± 0.04 and an average mean surface distance of 1.69 ± 0.30 mm. The proposed method requires roughly 1 minute for the interactive initialization step, which can positively contribute to an extended use of this tool in clinical practice, since current manual delineation of the costal cartilage can take up to an hour.
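The Dice coefficient used to report overlap above, in minimal form for binary masks stored as flat 0/1 lists (the two tiny masks are invented, not the paper's CT segmentations):

```python
# Dice coefficient between two binary masks: twice the intersection
# divided by the sum of the mask sizes. 1.0 means perfect overlap.

def dice(a, b):
    inter = sum(x * y for x, y in zip(a, b))
    return 2.0 * inter / (sum(a) + sum(b))

auto   = [0, 1, 1, 1, 0, 0, 1, 0]   # invented automatic segmentation
manual = [0, 1, 1, 0, 0, 1, 1, 0]   # invented manual contour
score = dice(auto, manual)
```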