940 results for Distance-based techniques


Relevance: 90.00%

Abstract:

Ecologists usually estimate means, but devote much less attention to variation. The study of variation is key to understanding natural systems and to making predictions about them. In community ecology, most studies focus on local species diversity (alpha diversity), but only in recent decades have ecologists devoted proper attention to variation in community composition among sites (beta diversity). This is in spite of the fact that the first attempts to estimate beta diversity date back to the pioneering work by Koch and Whittaker in the 1950s. Progress in the last decade has been made in the development both of methods and of hypotheses about the origin and maintenance of variation in community composition. For instance, methods are available to partition the total diversity of a region (gamma diversity) into a local component (alpha) and several beta diversities, each corresponding to one scale in a hierarchy. The popularization of the so-called raw-data approach (based on partial constrained ordination techniques) and the distance-based approach (based on correlation of dissimilarity/distance matrices) has allowed many ecologists to address current hypotheses about beta diversity patterns. Overall, these hypotheses are based on niche and neutral theory, accounting for the relative roles of environmental and spatial processes (or a combination of them) in shaping metacommunities. Recent studies have addressed these issues on a variety of spatial and temporal scales, habitats and taxonomic groups. Moreover, life-history and functional traits of species, such as dispersal ability and rarity, have begun to be considered in studies of beta diversity. In this article we briefly review some of these new tools and approaches developed in recent years, and illustrate them using case studies in aquatic ecosystems.
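
A minimal numeric sketch of the diversity-partitioning idea described above (not taken from the article; the data and function name are hypothetical): the gamma diversity of a region is decomposed into a mean local (alpha) component and additive and multiplicative (Whittaker) beta components from a sites-by-species abundance matrix. The distance-based approach mentioned in the abstract would instead correlate dissimilarity matrices (e.g. Bray-Curtis dissimilarity versus environmental or geographic distance).

```python
import numpy as np

def partition_richness(community):
    """community: 2-D array, rows = sites, columns = species (abundances)."""
    presence = community > 0
    alpha = presence.sum(axis=1).mean()      # mean local species richness
    gamma = presence.any(axis=0).sum()       # regional species richness
    beta_additive = gamma - alpha            # additive beta diversity
    beta_whittaker = gamma / alpha           # multiplicative (Whittaker) beta
    return alpha, beta_additive, beta_whittaker, gamma

# Hypothetical sites-by-species abundance matrix: 4 sites, 6 species.
sites = np.array([
    [3, 0, 1, 0, 0, 2],
    [0, 5, 1, 0, 0, 0],
    [0, 0, 2, 4, 0, 0],
    [1, 0, 0, 0, 6, 0],
])
alpha, beta_add, beta_w, gamma = partition_richness(sites)
print(f"alpha={alpha:.2f}  beta_add={beta_add:.2f}  beta_Whittaker={beta_w:.2f}  gamma={gamma}")
```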

Relevance: 90.00%

Abstract:

The automatic identification of Parkinson's disease (PD) has been actively pursued in several works in the literature. In this paper, we deal with this problem by applying evolutionary-based techniques to find the subset of features that maximizes the accuracy of the Optimum-Path Forest (OPF) classifier. This classifier was chosen for its fast training phase, since each candidate solution being optimized is evaluated through the OPF accuracy. We also report results that improve on those recently obtained in the context of PD automatic identification. © 2011 IEEE.
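
A hedged sketch of the wrapper idea described above. The OPF classifier is not part of standard Python libraries, so a k-nearest-neighbour classifier from scikit-learn stands in for it, and the dataset and genetic-algorithm settings are illustrative rather than those of the paper; the point is only that each candidate feature subset is scored by the wrapped classifier's cross-validated accuracy.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)   # stand-in dataset, not the PD data
n_features = X.shape[1]

def fitness(mask):
    """Cross-validated accuracy of the wrapped classifier on the selected features."""
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)   # stand-in for the OPF classifier
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# Simple generational GA over binary feature masks (illustrative settings).
pop = rng.integers(0, 2, size=(20, n_features)).astype(bool)
for _ in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]               # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(0, 10, size=2)]
        cut = int(rng.integers(1, n_features))                 # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        children.append(child ^ (rng.random(n_features) < 0.05))  # bit-flip mutation
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
print("cross-validated accuracy:", round(fitness(best), 4))
```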

Relevance: 90.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 90.00%

Abstract:

Although minutia-based fingerprint matching techniques are effective on good-quality images captured by optical sensors, they often perform poorly on low-quality images or on images captured by small solid-state sensors. Solid-state fingerprint sensors are increasingly being deployed in a wide range of user-authentication applications, so new fingerprint-matching techniques that exploit other features are needed to deal with images captured by such sensors. This paper presents a new fingerprint matching technique based on fingerprint ridge features. The technique was assessed on the MSU-VERIDICOM database, which consists of fingerprint impressions obtained from 160 users (4 impressions per finger) using a solid-state sensor. Combining the ridge-based matching scores computed by the proposed technique with minutia-based matching scores reduces the false non-match rate by approximately 1.7% at a false match rate of 0.1%. © 2005 IEEE.
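
An illustrative sketch of score-level fusion between two fingerprint matchers. The abstract does not specify the fusion rule, so a sum rule over min-max normalised scores is assumed here, and the score distributions are synthetic stand-ins for ridge-based and minutia-based matcher outputs.

```python
import numpy as np

rng = np.random.default_rng(1)

def minmax(s):
    s = np.asarray(s, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

# Hypothetical raw scores from the two matchers for genuine and impostor pairs.
ridge   = np.concatenate([rng.normal(0.70, 0.1, 500), rng.normal(0.30, 0.1, 5000)])
minutia = np.concatenate([rng.normal(0.75, 0.1, 500), rng.normal(0.35, 0.1, 5000)])
is_genuine = np.concatenate([np.ones(500, bool), np.zeros(5000, bool)])

# Sum-rule fusion of min-max normalised scores (equal weights, illustrative).
fused = 0.5 * minmax(ridge) + 0.5 * minmax(minutia)

# Operating threshold chosen so that the false match rate is about 0.1%.
threshold = np.quantile(fused[~is_genuine], 1 - 0.001)
fnmr = np.mean(fused[is_genuine] < threshold)
print(f"threshold={threshold:.3f}  FNMR at FMR=0.1%: {fnmr:.2%}")
```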

Relevance: 90.00%

Abstract:

Although the automatic identification of nontechnical losses has been studied extensively, the problem of selecting the most representative features, both to boost identification accuracy and to characterize possible illegal consumers, has received little attention in this context. In this paper, we focus on this problem by reviewing three evolutionary-based techniques for feature selection, one of which is introduced in this context for the first time. The results demonstrate that selecting the most representative features can considerably improve the accuracy of classifying possible frauds in datasets composed of industrial and commercial profiles.
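
For contrast with the genetic-algorithm sketch given earlier, a second evolutionary wrapper is sketched here as binary particle swarm optimisation; the classifier, the synthetic dataset and the swarm parameters are illustrative stand-ins, not the techniques or consumer-profile data of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for the industrial/commercial consumer profiles.
X, y = make_classification(n_samples=400, n_features=20, n_informative=6, random_state=0)
D = X.shape[1]

def fitness(mask):
    sel = mask.astype(bool)
    if not sel.any():
        return 0.0
    return cross_val_score(LogisticRegression(max_iter=1000), X[:, sel], y, cv=3).mean()

# Binary PSO: particles are 0/1 feature masks, velocities pass through a sigmoid.
n_particles, w, c1, c2 = 15, 0.7, 1.5, 1.5
pos = rng.integers(0, 2, size=(n_particles, D))
vel = rng.normal(0.0, 0.1, size=(n_particles, D))
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(20):
    r1, r2 = rng.random((n_particles, D)), rng.random((n_particles, D))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = (rng.random((n_particles, D)) < 1.0 / (1.0 + np.exp(-vel))).astype(int)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved] = pos[improved]
    pbest_fit[improved] = fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", np.flatnonzero(gbest))
print("best cross-validated accuracy:", round(pbest_fit.max(), 4))
```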

Relevance: 90.00%

Abstract:

The last couple of decades have seen a reappraisal of spatial design-based techniques. Usually, the information on the spatial location of the individuals of a population has been used to develop efficient sampling designs. This thesis offers a new technique for inference on both individual values and the global population value, one that employs the spatial information available before sampling at the estimation stage by rewriting a deterministic interpolator within a design-based framework. The resulting point estimator of individual values is treated both for finite spatial populations and for continuous spatial domains, while the theory on the estimator of the global population value covers the finite-population case only. A fairly broad simulation study compares the point estimator with the simple random sampling without replacement estimator in predictive form and with kriging, the benchmark technique for inference on spatial data. The Monte Carlo experiment is carried out on populations generated according to different superpopulation methods in order to control different aspects of the spatial structure. The simulation outcomes show that the proposed point estimator behaves almost like the kriging predictor regardless of the parameters adopted for generating the populations, especially for low sampling fractions. Moreover, the use of the spatial information substantially improves design-based spatial inference on individual values.
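
A small sketch of the underlying idea of predicting individual values at unsampled locations of a finite spatial population with a deterministic interpolator. Inverse-distance weighting is used here as the interpolator (an assumption; the thesis's estimator is a design-based rewriting of a deterministic interpolator and may differ), and it is compared with the naive SRSWOR predictor, which ignores location.

```python
import numpy as np

rng = np.random.default_rng(42)

# Finite spatial population with a smooth spatial trend plus noise (hypothetical).
N = 400
coords = rng.uniform(0, 10, size=(N, 2))
values = np.sin(coords[:, 0]) + 0.5 * coords[:, 1] + rng.normal(0, 0.2, N)

# Simple random sample without replacement (low sampling fraction).
n = 40
sample_idx = rng.choice(N, size=n, replace=False)
rest_idx = np.setdiff1d(np.arange(N), sample_idx)

def idw_predict(targets, sites, obs, power=2.0):
    """Inverse-distance-weighted prediction of obs at target locations."""
    d = np.linalg.norm(targets[:, None, :] - sites[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w * obs).sum(axis=1) / w.sum(axis=1)

idw = idw_predict(coords[rest_idx], coords[sample_idx], values[sample_idx])
srs = np.full(rest_idx.size, values[sample_idx].mean())   # ignores spatial information

true = values[rest_idx]
print("IDW RMSE:", np.sqrt(np.mean((idw - true) ** 2)))
print("SRS RMSE:", np.sqrt(np.mean((srs - true) ** 2)))
```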

Relevance: 90.00%

Abstract:

Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is, by and large, an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one rather than as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense; this in turn depends on the trajectory of the same parameters over previous time instants. In this work, dynamic constraint models have been proposed to translate commanded air-handling parameters into those actually achieved. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control, whose parameters have been extracted from real transient data. The model has been shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on the air-handling parameters predicted by the dynamic constraint model do not differ significantly from the corresponding emissions based on measured air-handling parameters.
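
A minimal sketch of a dynamic constraint model of the kind described: the commanded air-handling parameter is translated into the achieved one by a second-order system under PD feedback, integrated with a simple forward-Euler scheme. The gains, time step and boost-pressure example are illustrative, not the parameters identified from transient data in the paper.

```python
import numpy as np

def achieved_trajectory(command, dt=0.01, kp=4.0, kd=1.2):
    """Translate commanded values into achieved ones.

    The achieved value x follows x'' = kp*(cmd - x) - kd*x', i.e. a plant under
    PD feedback, which is a linear second-order system (wn = sqrt(kp),
    zeta = kd / (2*sqrt(kp))). Integrated with forward Euler.
    """
    x, v = float(command[0]), 0.0
    achieved = np.empty(len(command))
    for k, cmd in enumerate(command):
        a = kp * (cmd - x) - kd * v   # PD action as acceleration
        v += a * dt
        x += v * dt
        achieved[k] = x
    return achieved

# Hypothetical commanded boost-pressure step during a transient (bar).
t = np.arange(0.0, 5.0, 0.01)
commanded = np.where(t < 1.0, 1.2, 1.8)
achieved = achieved_trajectory(commanded)
print("overshoot above the commanded step:", round(achieved.max() - 1.8, 3), "bar")
```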

Relevance: 90.00%

Abstract:

There are conflicting results with regard to the use of catheter-based techniques for continuous paravertebral block. Local anaesthetic spread within the paravertebral space is limited and the clinical effect is often variable. Discrepancies between needle tip position and final catheter position can also be problematic. The aim of this proof-of-concept study was to assess the reliability of placing a newly developed coiled catheter in human cadavers. Sixty Tuohy needles and coiled catheters were placed under ultrasound guidance, three on each side of the thoracic vertebral column in 10 human cadavers. Computed tomography was used to assess needle tip and catheter tip locations. No catheter was misplaced into the epidural, pleural or prevertebral spaces. The mean (SD) distance between catheter tips and needle tips was 8.2 (4.9) mm. The median (IQR [range]) caudo-cephalad spread of contrast dye injectate through a subset of 20 catheters was 4 (4-5[3-8]) thoracic segments. All catheters were removed without incident. Precise paravertebral catheter placement can be achieved using ultrasound-guided placement of a coiled catheter.

Relevance: 90.00%

Abstract:

Minimally invasive vertebral augmentation-based techniques have been used for the treatment of spinal fractures (osteoporotic and malignant) for approximately 25 years. In this review, we give an overview of the current spectrum of percutaneous augmentation techniques, their safety aspects and their indications. Crucial factors for success are careful patient selection, proper technique and choice of the ideal cement augmentation option. Most compression fractures follow a favourable natural course, with reduction of pain and recovery of mobility after a few days to several weeks, whereas other patients experience progressive collapse and persisting pain. In this situation, percutaneous cement augmentation is an effective treatment option with regard to pain and disability reduction, improvement of quality of life, and ambulatory and pulmonary function.

Relevance: 90.00%

Abstract:

The past decade has seen the rise of high-resolution datasets. One of the main surprises of analysing such data has been the discovery of large genetic, phenotypic and behavioural variation and heterogeneous metabolic rates among individuals within natural populations. A parallel discovery from theory and experiments has shown a strong temporal convergence between evolutionary and ecological dynamics, but a general framework for analysing, from individual-level processes, the convergence between ecological and evolutionary dynamics and its implications for patterns of biodiversity in food webs has been lacking. Here, as a first approximation to take into account intraspecific variability and the convergence between ecological and evolutionary dynamics in large food webs, we develop a model from population genomics and microevolutionary processes that uses sexual reproduction, genetic-distance-based speciation and trophic interactions. We confront the model with the prey consumption per individual predator, species-level connectance and prey–predator diversity in several environmental situations, using a large food web with approximately 25,000 sampled prey and predator individuals. We show higher-than-expected diversity of abundant species in heterogeneous environmental conditions and strong deviations from the observed distribution of individual prey consumption (i.e. individual connectivity per predator) in all environmental conditions. The observed large variance in individual prey consumption, regardless of the environmental variability, collapsed species-level connectance after small increases in sampling effort. These results suggest that (1) intraspecific variance in prey–predator interactions has a strong effect on the macroscopic properties of food webs, and (2) intraspecific variance is a potential driver regulating the speed of the convergence between ecological and evolutionary dynamics in species-rich food webs. They also suggest that genetic–ecological drift driven by sexual reproduction, equal feeding rates among predator individuals, mutations and genetic-distance-based speciation can be used as a neutral food-web dynamics test to detect the ecological and microevolutionary processes underlying the observed patterns of individual- and species-based food webs at local and macroecological scales.
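
A hedged sketch of how genetic-distance-based speciation is commonly operationalised (an assumption about implementation; the article's model may differ in detail): individuals whose pairwise genetic distance falls below a mating threshold are linked, and species are identified as the connected components of the resulting compatibility graph.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(3)
n_individuals, genome_length, threshold = 200, 64, 10

# Two diverging clusters of binary genomes (hypothetical population).
ancestor_a = rng.integers(0, 2, genome_length)
ancestor_b = ancestor_a ^ (rng.random(genome_length) < 0.4)   # strongly diverged lineage
genomes = np.vstack([
    ancestor_a ^ (rng.random((100, genome_length)) < 0.03),   # mutated copies of lineage A
    ancestor_b ^ (rng.random((100, genome_length)) < 0.03),   # mutated copies of lineage B
]).astype(int)

# Pairwise Hamming distances and mating-compatibility graph.
hamming = (genomes[:, None, :] != genomes[None, :, :]).sum(axis=2)
compatible = csr_matrix(hamming < threshold)

n_species, labels = connected_components(compatible, directed=False)
print("number of species:", n_species)
print("individuals per species:", np.bincount(labels))
```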

Relevance: 90.00%

Abstract:

Aberrations of the acoustic wave front, caused by spatial variations of the speed of sound, are a major limiting factor for the diagnostic power of medical ultrasound imaging. If not accounted for, aberrations result in low resolution and increased side-lobe level, overall reducing contrast in deep-tissue imaging. Various techniques have been proposed for quantifying aberrations by analysing the arrival time of coherent echoes from so-called guide stars or beacons. In situations where a guide star is missing, aperture-based techniques may give ambiguous results. Moreover, they are conceptually focused on aberrators that can be approximated as a phase screen in front of the probe. We propose a novel technique in which the effect of aberration is detected in the reconstructed image rather than in the aperture data. The varying local echo phase when changing the transmit beam steering angle directly reflects the varying arrival time of the transmit wave front. This allows the angle-dependent aberration delay to be sensed in a spatially resolved way, and thus enables aberration correction for a spatially distributed volume aberrator. In phantoms containing a cylindrical aberrator, we achieved location-independent diffraction-limited resolution as well as accurate display of echo location, based on a spatially resolved reconstruction of the speed of sound. First successful volunteer results confirm the clinical potential of the proposed technique.
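
A simplified illustration of the stated principle (not the authors' full reconstruction pipeline): for two complex beamformed images acquired at adjacent transmit steering angles, the local phase difference divided by 2*pi*fc approximates the per-pixel change in transmit arrival time, i.e. the angle-dependent aberration delay. The centre frequency, grid and delay map below are hypothetical.

```python
import numpy as np

fc = 5e6                          # centre frequency [Hz] (illustrative)
shape = (128, 128)                # axial x lateral pixel grid
rng = np.random.default_rng(7)

# Hypothetical true aberration delay map (tens of nanoseconds) and synthetic
# complex echo data for two steering angles differing only by that delay.
true_delay = 40e-9 * np.exp(-((np.indices(shape)[1] - 64) / 30.0) ** 2)
speckle = rng.normal(size=shape) + 1j * rng.normal(size=shape)
img_angle0 = speckle
img_angle1 = speckle * np.exp(-2j * np.pi * fc * true_delay)

# Estimate the delay map from the local phase difference between the two angles.
phase_diff = np.angle(img_angle1 * np.conj(img_angle0))
estimated_delay = -phase_diff / (2 * np.pi * fc)

print("max abs error [ns]:", 1e9 * np.max(np.abs(estimated_delay - true_delay)))
```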

Relevance: 90.00%

Abstract:

In computer science, different types of reusable components for building software applications have been proposed as a direct consequence of the emergence of new programming paradigms. The success of these components for building applications depends on factors such as the flexibility with which they can be combined or the ease with which they can be selected in centralised or distributed environments such as the internet. In this article, we propose a general type of reusable component, called a primitive of representation, inspired by a knowledge-based approach that can promote reusability. The proposal can be understood as a generalisation of existing partial solutions, applicable to both software and knowledge engineering for the development of hybrid applications that integrate conventional and knowledge-based techniques. The article presents the structure and use of the component and describes our recent experience in the development of real-world applications based on this approach.

Relevance: 90.00%

Abstract:

The purpose of this research is to develop a methodological approach for assessing the potential economic and transportation impacts of transport policies. Transportation departments and other related government bodies are interested in such analyses, which are often misrepresented because of insufficient data or the lack of suitable methodologies. This research aims to fill that gap with a comprehensive analysis of the available techniques that match this purpose, identifying the differences that arise when they are applied to the valuation of user benefits or to other impacts, such as social effects. As a result, this research presents an integrated approach that combines a random utility-based multiregional input-output model (RUBMRIO) with a road transport network model. The model accounts for freight transport with greater detail and realism because its commodity-based structure traces the linkages of inter-industry purchases and sales that use freight services within a given country; for this reason, the integrated model is applicable to a variety of transport policies.

The approach is applied to study the regional macroeconomic effects of implementing two different policies in the freight transport system of Spain: a distance-based charge per vehicle-kilometre (€/km) for heavy goods vehicles (HGVs), and the introduction of longer and heavier vehicles (LHVs) in the Spanish road network. The methodological approach has been evaluated case by case on a selected network of highways linking the capitals of the Spanish regions. It also incorporates an economic dimension through a multiregional input-output table (MRIO) and uses the existing traffic-count database for model validation. The integrated approach replicates the observed conditions of trade among regions using the road freight transport system and, by comparison with the policy scenarios, determines the contributions to distributional and generative changes. The analysis therefore estimates the economic impacts in any given region through changes in gross domestic product (GDP) and employment, and identifies the changes in the transportation system across all paths of the network through measures of effectiveness (MOEs).

The results provide substantive evidence that the assessment of transport policies requires establishing a link between the economic structure of the regions and the transportation services. For most regions of the country, changes in GDP and employment are noticeable as trade is encouraged or inhibited. The approach shows how traffic is diverted under both policies and also details the pollutant emissions in the two scenarios. Furthermore, pricing or regulatory policies for road freight transport directed at producers and consumers in the regions will promote different regional transformations across the country, leading to different conclusions. Finally, this integrated approach could be useful for assessing other policies and other countries worldwide.
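
A toy sketch of how a distance-based charge enters the transport side of such an integrated approach (the three-node network, costs and charge level are hypothetical; this is neither the Spanish network nor the RUBMRIO model): the charge in euros per vehicle-kilometre is added to the generalised cost of each road link, and the resulting change in minimum-cost inter-regional paths is what would feed back into inter-regional trade costs.

```python
import networkx as nx

COST_PER_KM = 0.12      # base operating cost for an HGV [EUR/km] (illustrative)
CHARGE_PER_KM = 0.05    # distance-based charge scenario [EUR/km] (illustrative)

links = [("A", "B", 350.0), ("B", "C", 420.0), ("A", "C", 800.0)]  # link lengths in km

def build_network(charge_per_km):
    g = nx.Graph()
    for origin, dest, km in links:
        g.add_edge(origin, dest, cost=km * (COST_PER_KM + charge_per_km))
    return g

base = build_network(0.0)
charged = build_network(CHARGE_PER_KM)

for o, d in [("A", "B"), ("A", "C"), ("B", "C")]:
    c0 = nx.shortest_path_length(base, o, d, weight="cost")
    c1 = nx.shortest_path_length(charged, o, d, weight="cost")
    print(f"{o}->{d}: {c0:7.2f} EUR -> {c1:7.2f} EUR  (+{100 * (c1 / c0 - 1):.1f}%)")
```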

Relevance: 90.00%

Abstract:

Systems used to localize targets such as goods, individuals or animals commonly rely on fully operational means to meet the demands of the final application. However, what would happen if some of those means were powered up randomly by harvesting systems? And what if the devices not randomly powered had their duty cycles restricted? Under what conditions would such an operation be tolerable in localization services? What if the references provided by nodes in a tracking problem were distorted? Moreover, there is an underlying topic common to these questions regarding the transfer of conceptual models to reality in field tests: what challenges are faced when deploying a localization network that integrates energy-harvesting modules? The application scenario of the system studied is a traditional herding environment of semi-domesticated reindeer (Rangifer tarandus tarandus) in northern Scandinavia. In these conditions, information on the approximate locations of reindeer is as important as environmental preservation. Herders also need cost-effective devices capable of operating unattended in sometimes extreme weather conditions. The analyses developed are valuable not only for the specific application environment presented, but also as an approach to the performance of navigation systems in the absence of reasonably accurate references such as those of the Global Positioning System (GPS). A number of energy-harvesting solutions, such as thermal and radio-frequency harvesting, do not commonly provide power beyond one milliwatt; when they do, battery buffers may be needed (as happens with solar energy), which may raise costs and make systems more dependent on environmental temperatures. In general, given our problem, a harvesting system is needed that is capable of providing energy bursts of at least some milliwatts. Many works on localization assume that devices can determine unknown locations through range-based techniques or fingerprinting, capabilities which cannot be assumed in the approach considered here. The system presented is akin to range-free techniques, but goes to the extent of considering very low node densities: most range-free techniques are therefore not applicable. Animal localization, in particular, is usually supported by accurate devices such as GPS collars, which deplete their batteries in a few days at most; such short-lived solutions are not particularly desirable in the framework considered. In tracking, the challenge most often addressed is attaining high precision levels with complex, reliable hardware and thorough processing techniques. One of the challenges in this Thesis is the use of equipment with only part of its facilities in permanent operation, which may yield high input noise levels in the form of distorted reference points. The solution presented integrates a kinetic harvesting module in some nodes, which are expected to be the majority in the network; these modules can provide power bursts of some milliwatts, sufficient to meet node energy demands. The use of harvesting modules under these conditions makes the system less dependent on environmental temperatures, since no batteries are used in nodes with harvesters, and it may also be an advantage in economic terms. There is a second kind of node: battery-powered (without kinetic energy harvesters) and therefore dependent on temperature and battery replacements.
In addition, the operation of these nodes is constrained by duty cycles in order to extend node lifetime and, consequently, their autonomy. There is, in turn, a third type of node (hotspots), which can be static or mobile; they are also battery-powered and are used to retrieve information from the network so that it can be presented to users. The system's operational chain starts with the kinetically powered nodes broadcasting their own identifiers. If an identifier is received at a battery-powered node, the latter stores it in its records. Later, when the recording node meets a hotspot, its full record of detections is transferred to the hotspot. Every detection record comprises, at least, a node identifier and the position read by the battery-operated node from its GPS module prior to the detection. The characteristics of the system give this operation certain particularities, which are also studied. First, identifier transmissions are random, as they depend on movements at the kinetic modules (reindeer movements in our application); not every movement suffices, since it must overcome a certain energy threshold. Second, identifier transmissions may not be heard unless there is a battery-powered node in the surroundings. Third, battery-powered nodes do not poll their GPS module continuously, so localization errors rise even more; recall that this behaviour is tied to the aforementioned power-saving policies that extend node lifetime. Last, some time elapses between the instant a random identifier transmission is detected and the moment the user becomes aware of the detection: it takes some time to find a hotspot. Tracking is posed as a problem with a single kinetically powered target and a population of battery-operated nodes at higher densities than in the localization problem. Since the latter provide their approximate positions as reference locations, the study again focuses on assessing the impact of such distorted references on performance. Unlike in localization, distance-estimation capabilities based on signal parameters are assumed in this problem. Three variants of the Kalman filter family are applied in this context: the regular Kalman filter, the alpha-beta filter and the unscented Kalman filter. The study comprises both field tests and simulations. Field tests were used mainly to assess the challenges related to power supply and operation in extreme conditions, as well as to model the nodes and some aspects of their operation in the application scenario; these models form the basis of the simulations developed later. Overall system performance is analysed according to three metrics: number of detections per kinetic node, accuracy and latency. The links between these metrics and the operational conditions are discussed and characterized statistically, and this statistical characterization is then used to forecast performance figures for specific operational parameters. In tracking, also studied via simulations, nonlinear relationships are found between accuracy and the duty cycles and cluster sizes of the battery-operated nodes. The solution presented may be more complex in terms of network structure than existing solutions based on GPS collars; however, its main gain lies in taking advantage of users' error tolerance to reduce costs and to become more environmentally friendly by reducing the number of batteries that can be lost.
Whether or not it is applicable ultimately depends on the conditions and requirements imposed by users' needs and operational environments, which, as explained, is one of the topics of this Thesis.
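
A minimal sketch of the simplest of the three Kalman-family variants mentioned, an alpha-beta filter, tracking a target in 2-D from heavily distorted position fixes. The trajectory, noise level and gains are illustrative, and the setup is simplified to direct noisy position observations rather than the signal-parameter distance estimates used in the Thesis.

```python
import numpy as np

rng = np.random.default_rng(11)
dt, alpha, beta = 1.0, 0.5, 0.1       # illustrative filter gains

# Hypothetical slowly drifting target and heavily distorted position fixes.
steps = 120
true_pos = np.cumsum(rng.normal([0.8, 0.3], 0.05, size=(steps, 2)), axis=0)
measured = true_pos + rng.normal(0.0, 15.0, size=(steps, 2))

est_pos = measured[0].copy()
est_vel = np.zeros(2)
estimates = []
for z in measured:
    pred = est_pos + est_vel * dt               # predict with current velocity estimate
    residual = z - pred
    est_pos = pred + alpha * residual           # position correction
    est_vel = est_vel + (beta / dt) * residual  # velocity correction
    estimates.append(est_pos.copy())

rmse_raw = np.sqrt(np.mean((measured - true_pos) ** 2))
rmse_filtered = np.sqrt(np.mean((np.array(estimates) - true_pos) ** 2))
print(f"raw fix RMSE: {rmse_raw:.1f}   filtered RMSE: {rmse_filtered:.1f}")
```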

Relevance: 90.00%

Abstract:

An integrated approach composed of a random utility-based multiregional input-output model and a road transport network model was developed to evaluate the application of a fee to heavy goods vehicles (HGVs) in Spain. For this purpose, a distance-based charge scenario (in euros per vehicle-kilometre) for HGVs was evaluated on a selected motorway network in Spain. Although the aim of this charging policy was to increase the efficiency of transport, the approach clearly identified direct and indirect impacts on the regional economy. Estimates of the magnitude and extent of the indirect effects on aggregated macroeconomic indicators (employment and gross domestic product) are provided. The macroeconomic effects of the charging policy were found to be positive for some regions and negative for others.