120 results for Quadratic Volterra Filters
Abstract:
In this paper, mixed spectral-structural kernel machines are proposed for the classification of very-high resolution images. The simultaneous use of multispectral and structural features (computed using morphological filters) allows a significant increase in classification accuracy of remote sensing images. Subsequently, weighted summation kernel support vector machines are proposed and applied in order to take into account the multiscale nature of the scene considered. Such classifiers use the Mercer property of kernel matrices to compute a new kernel matrix accounting simultaneously for two scale parameters. Tests on a Zurich QuickBird image show the relevance of the proposed method: using the mixed spectral-structural features, the classification accuracy increases by about 5%, achieving a Kappa index of 0.97. The proposed multikernel approach provides an overall accuracy of 98.90%, with a corresponding Kappa index of 0.985.
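The weighted-summation construction rests on the closure of Mercer kernels under non-negative combination: if two matrices are valid kernels, any convex combination of them is too, and can be fed directly to a kernel SVM. A minimal numpy sketch, with function names, the RBF choice, and the split into spectral and structural feature blocks all assumed for illustration (the paper's exact kernels and parameters are not specified here):

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    # Gaussian RBF kernel matrix between the row vectors of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def weighted_sum_kernel(X_spec, Y_spec, X_struct, Y_struct, mu, g_spec, g_struct):
    # A convex combination of two Mercer kernels is itself a Mercer kernel,
    # so the summed matrix can be passed to any standard kernel SVM as a
    # precomputed kernel; mu in [0, 1] weights spectral vs. structural terms.
    return (mu * rbf_kernel(X_spec, Y_spec, g_spec)
            + (1.0 - mu) * rbf_kernel(X_struct, Y_struct, g_struct))
```

The resulting matrix stays symmetric and positive semi-definite, which is exactly what the Mercer property guarantees.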
Abstract:
In this paper the problem of intensity inhomogeneity at high magnetic field on magnetic resonance images is addressed. Specifically, rat brain images at 9.4 T acquired with a surface coil are bias corrected. We propose a low-pass frequency model that takes into account not only background-object contours but also other important contours inside the image. Two pre-processing filters are proposed: first, to create a volume of interest without contours, and second, to extrapolate the image values of such masked area to the whole image. Results are assessed quantitatively and visually in comparison to a standard low-pass filter approach and, as expected, show better accuracy in enhancing image intensity.
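The pipeline described, masking contours, filling the masked area, taking the low-pass component as the bias field, and dividing it out, can be sketched as follows. This is an illustrative stand-in only: the mean-fill below replaces the paper's extrapolation filter, and all names and the FFT-based filter are my assumptions:

```python
import numpy as np

def lowpass(img, sigma):
    # Frequency-domain Gaussian low-pass filter (assumes periodic boundaries).
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    H = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fy ** 2 + fx ** 2))
    return np.fft.ifft2(np.fft.fft2(img) * H).real

def correct_bias(img, mask, sigma=8.0, eps=1e-6):
    # Replace masked-out (contour) pixels by the mean of the retained ones
    # -- a crude stand-in for the paper's extrapolation step -- then treat
    # the low-pass component as the bias field and divide it out.
    filled = np.where(mask, img, img[mask].mean())
    bias = lowpass(filled, sigma)
    return img / np.maximum(bias, eps)
```

On a smoothly varying intensity field, dividing by the low-pass estimate flattens the image, which is the intended correction.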
Abstract:
This paper presents the general regression neural networks (GRNN) as a nonlinear regression method for the interpolation of monthly wind speeds in complex Alpine orography. GRNN is trained using data coming from Swiss meteorological networks to learn the statistical relationship between topographic features and wind speed. The terrain convexity, slope and exposure are considered by extracting features from the digital elevation model at different spatial scales using specialised convolution filters. A database of gridded monthly wind speeds is then constructed by applying GRNN in prediction mode during the period 1968-2008. This study demonstrates that using topographic features as inputs in GRNN significantly reduces cross-validation errors with respect to low-dimensional models integrating only geographical coordinates and terrain height for the interpolation of wind speed. The spatial predictability of wind speed is found to be lower in summer than in winter due to more complex and weaker wind-topography relationships. The relevance of these relationships is studied using an adaptive version of the GRNN algorithm, which selects the useful terrain features by eliminating the noisy ones. This research provides a framework for extending the low-dimensional interpolation models to high-dimensional spaces by integrating additional features accounting for the topographic conditions at multiple spatial scales. Copyright (c) 2012 Royal Meteorological Society.
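At its core, GRNN (Specht, 1991) is Nadaraya-Watson kernel regression: each prediction is a Gaussian-weighted average of all training targets. A minimal sketch of the prediction step, where the single bandwidth `sigma` is a placeholder for whatever smoothing parameters the study actually tuned:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    # Squared distances from every query point to every training point.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    # Gaussian kernel weights; nearby training samples dominate.
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    # Weighted average of the training targets (Nadaraya-Watson estimator).
    return (w @ y_train) / w.sum(axis=1)
```

In the interpolation setting above, `X_train` would hold coordinates plus multi-scale terrain features, and `y_train` the observed monthly wind speeds.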
Abstract:
Combinatorial optimization involves finding an optimal solution in a finite set of options; many everyday life problems are of this kind. However, the number of options grows exponentially with the size of the problem, such that an exhaustive search for the best solution is practically infeasible beyond a certain problem size. When efficient algorithms are not available, a practical approach to obtain an approximate solution to the problem at hand is to start with an educated guess and gradually refine it until we have a good-enough solution. Roughly speaking, this is how local search heuristics work. These stochastic algorithms navigate the problem search space by iteratively turning the current solution into new candidate solutions, guiding the search towards better solutions. The search performance, therefore, depends on structural aspects of the search space, which in turn depend on the move operator being used to modify solutions. A common way to characterize the search space of a problem is through the study of its fitness landscape, a mathematical object comprising the space of all possible solutions, their value with respect to the optimization objective, and a relationship of neighborhood defined by the move operator. The landscape metaphor is used to explain the search dynamics as a sort of potential function. The concept is indeed similar to that of potential energy surfaces in physical chemistry. Borrowing ideas from that field, we propose to extend to combinatorial landscapes the notion of the inherent network formed by energy minima in energy landscapes. In our case, energy minima are the local optima of the combinatorial problem, and we explore several definitions for the network edges. At first, we perform an exhaustive sampling of local optima basins of attraction, and define weighted transitions between basins by accounting for all the possible ways of crossing the basins frontier via one random move.
Then, we reduce the computational burden by only counting the chances of escaping a given basin via random kick moves that start at the local optimum. Finally, we approximate network edges from the search trajectory of simple search heuristics, mining the frequency and inter-arrival time with which the heuristic visits local optima. Through these methodologies, we build a weighted directed graph that provides a synthetic view of the whole landscape, and that we can characterize using the tools of complex networks science. We argue that the network characterization can advance our understanding of the structural and dynamical properties of hard combinatorial landscapes. We apply our approach to prototypical problems such as the Quadratic Assignment Problem, the NK model of rugged landscapes, and the Permutation Flow-shop Scheduling Problem. We show that some network metrics can differentiate problem classes, correlate with problem non-linearity, and predict problem hardness as measured from the performance of trajectory-based local search heuristics.
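The kick-move approximation of network edges can be sketched generically. This is an illustrative reconstruction, not the thesis code: `fitness`, `neighbors` and `kick` are problem-specific callables supplied by the caller, and edge weights here are raw escape counts:

```python
import random
from collections import defaultdict

def hill_climb(x, fitness, neighbors):
    # Best-improvement local search: move to the best neighbour until no
    # neighbour improves on the current solution (a local optimum).
    while True:
        best = max(neighbors(x), key=fitness)
        if fitness(best) <= fitness(x):
            return x
        x = best

def escape_edges(fitness, neighbors, kick, starts, kicks=50, seed=0):
    # Approximate local-optima-network edges: from each sampled optimum,
    # apply a random kick, re-optimise, and count which basin we land in.
    random.seed(seed)
    optima = {hill_climb(s, fitness, neighbors) for s in starts}
    edges = defaultdict(int)
    for o in optima:
        for _ in range(kicks):
            edges[(o, hill_climb(kick(o), fitness, neighbors))] += 1
    return optima, edges
```

The resulting `(optimum, destination) -> count` map is a weighted directed graph over local optima, ready for analysis with standard network tools.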
Abstract:
Astrocytes emerge as key players in motor neuron degeneration in Amyotrophic Lateral Sclerosis (ALS). Whether astrocytes cause direct damage by releasing toxic factors or contribute indirectly through the loss of physiological functions is unclear. Here we identify in the hSOD1(G93A) transgenic mouse model of ALS a degenerative process of the astrocytes, restricted to those directly surrounding spinal motor neurons. This phenomenon manifests with an early onset and becomes significant concomitant with the loss of motor cells and the appearance of clinical symptoms. Contrary to wild-type astrocytes, mutant hSOD1-expressing astrocytes are highly vulnerable to glutamate and undergo cell death mediated by the metabotropic type-5 receptor (mGluR5). Blocking mGluR5 in vivo slows down astrocytic degeneration, delays the onset of the disease and slightly extends survival in hSOD1(G93A) transgenic mice. We propose that excitotoxicity in ALS affects both motor neurons and astrocytes, favouring their local interactive degeneration. This new mechanistic hypothesis has implications for therapeutic interventions. Cell Death and Differentiation, advance online publication, 11 July 2008; doi:10.1038/cdd.2008.99.
Abstract:
The identification of the presence of active signaling between astrocytes and neurons in a process termed gliotransmission has caused a paradigm shift in our thinking about brain function. However, we are still in the early days of the conceptualization of how astrocytes influence synapses, neurons, networks, and ultimately behavior. In this Perspective, our goal is to identify emerging principles governing gliotransmission and consider the specific properties of this process that endow the astrocyte with unique functions in brain signal integration. We develop and present hypotheses aimed at reconciling confounding reports and define open questions to provide a conceptual framework for future studies. We propose that astrocytes mainly signal through high-affinity slowly desensitizing receptors to modulate neurons and perform integration in spatiotemporal domains complementary to those of neurons.
Abstract:
Patients undergoing spinal surgery are at risk of developing thromboembolic complications even though lower incidences have been reported as compared to joint arthroplasty surgery. Deep vein thrombosis (DVT) has been studied extensively in the context of spinal surgery but symptomatic pulmonary embolism (PE) has engaged less attention. We prospectively followed a consecutive cohort of 270 patients undergoing spinal surgery at a single institution. Of these, only 26 underwent simple discectomy, while the largest proportion (226) underwent fusion. All patients received both low molecular weight heparin (LMWH) initiated after surgery and compressive stockings. PE was diagnosed with spiral chest CT. Six patients developed symptomatic PE, five during their hospital stay. In three of the six patients the embolic event occurred during the first 3 postoperative days. They were managed by the temporary insertion of an inferior vena cava (IVC) filter, thus allowing for a delay in full-dose anticoagulation until removal of the filter. None of the PE patients suffered any bleeding complication as a result of the introduction of full anticoagulation. Two patients suffered postoperative haematomas, without development of neurological symptoms or signs, requiring emergency evacuation. The overall incidence of PE was 2.2%, rising to 2.5% after exclusion of microdiscectomy cases. The incidence of PE was highest in anterior or combined thoracolumbar/lumbar procedures (4.2%). There is a large variation in the reported incidence of PE in the spinal literature. Results from the only study found in the literature specifically monitoring PE suggest an incidence of PE as high as 2.5%. Our study shows a similar incidence despite the use of LMWH. In the absence of randomized controlled trials (RCT) it is uncertain if this type of prophylaxis lowers the incidence of PE. However, other studies show that the morbidity of LMWH is very low.
Since PE can be a life-threatening complication, LMWH may be a worthwhile option to consider for prophylaxis. RCTs are necessary in assessing the efficacy of DVT and PE prophylaxis in spinal patients.
Abstract:
We propose a finite element approximation of a system of partial differential equations describing the coupling between the propagation of electrical potential and large deformations of the cardiac tissue. The underlying mathematical model is based on the active strain assumption, in which it is assumed that a multiplicative decomposition of the deformation tensor into a passive and active part holds, the latter carrying the information of the electrical potential propagation and anisotropy of the cardiac tissue into the equations of either incompressible or compressible nonlinear elasticity, governing the mechanical response of the biological material. In addition, by changing from an Eulerian to a Lagrangian configuration, the bidomain or monodomain equations modeling the evolution of the electrical propagation exhibit a nonlinear diffusion term. Piecewise quadratic finite elements are employed to approximate the displacement field, whereas the pressure, electrical potentials and ionic variables are approximated by piecewise linear elements. Various numerical tests performed with a parallel finite element code illustrate that the proposed model can capture some important features of the electromechanical coupling, and show that our numerical scheme is efficient and accurate.
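In symbols, the active strain assumption is commonly written as a multiplicative split of the deformation gradient; the notation below follows standard continuum-mechanics conventions and is an assumed rendering, not quoted from the paper:

```latex
F = F_E \, F_A, \qquad W_{\mathrm{passive}} = W(F_E) = W(F F_A^{-1}),
```

where $F_A$ carries the electrically induced activation and the passive strain energy is evaluated on the elastic part $F_E$ only. Pulling the conductivity tensor $D$ back to the Lagrangian (reference) configuration via the Piola transform,

```latex
D \;\longmapsto\; J \, F^{-1} D F^{-T}, \qquad J = \det F,
```

makes the diffusion coefficient of the bidomain/monodomain equations depend on the deformation, which is the nonlinear diffusion term mentioned above.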
Abstract:
Abstract The main objective of this work is to show how the choice of the temporal dimension and of the spatial structure of the population influences an artificial evolutionary process. In the field of Artificial Evolution we can observe a common trend in synchronously evolving panmictic populations, i.e., populations in which any individual can be recombined with any other individual. Already in the '90s, the works of Spiessens and Manderick, Sarma and De Jong, and Gorges-Schleuter have pointed out that, if a population is structured according to a mono- or bi-dimensional regular lattice, the evolutionary process shows a different dynamic with respect to the panmictic case. In particular, Sarma and De Jong have studied the selection pressure (i.e., the diffusion of a best individual when the only selection operator is active) induced by a regular bi-dimensional structure of the population, proposing a logistic modeling of the selection pressure curves. This model supposes that the diffusion of a best individual in a population follows an exponential law. We show that such a model is inadequate to describe the process, since the growth speed must be quadratic or sub-quadratic in the case of a bi-dimensional regular lattice. New linear and sub-quadratic models are proposed for modeling the selection pressure curves in, respectively, mono- and bi-dimensional regular structures. These models are extended to describe the process when asynchronous evolutions are employed. Different dynamics of the populations imply different search strategies of the resulting algorithm, when the evolutionary process is used to solve optimisation problems. A benchmark of both discrete and continuous test problems is used to study the search characteristics of the different topologies and updates of the populations.
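Why takeover cannot be exponential on a regular lattice can be seen with an idealized, deterministic sketch (a deliberate simplification of the thesis' stochastic selection schemes, with all names hypothetical): on a ring, the best individual can only reach its two neighbours per step, so the count of its copies grows linearly, while in the panmictic case takeover is essentially immediate.

```python
import numpy as np

def takeover_curve(n, steps, ring=True):
    # Diffusion of a single best individual (value 1) in a population of
    # zeros when each cell synchronously adopts the best value in its
    # neighbourhood: the two adjacent cells on a 1-D ring, or the whole
    # population in the (degenerate) panmictic case.
    pop = np.zeros(n, dtype=int)
    pop[n // 2] = 1
    counts = [int(pop.sum())]
    for _ in range(steps):
        if ring:
            pop = np.maximum(pop, np.maximum(np.roll(pop, 1), np.roll(pop, -1)))
        else:
            pop = np.ones_like(pop) if pop.any() else pop
        counts.append(int(pop.sum()))
    return counts
```

The ring curve grows by exactly two copies per step (linear); on a bi-dimensional lattice the same construction would grow with the area of a diamond, i.e. quadratically, matching the models proposed above.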
In the last decade, the pioneering studies of Watts and Strogatz have shown that most real networks, both in the biological and sociological worlds as well as in man-made structures, have mathematical properties that set them apart from regular and random structures. In particular, they introduced the concept of small-world graphs, and they showed that this new family of structures has interesting computing capabilities. Populations structured according to these new topologies are proposed, and their evolutionary dynamics are studied and modeled. We also propose asynchronous evolutions for these structures, and the resulting evolutionary behaviors are investigated. Many man-made networks have grown, and are still growing, incrementally, and explanations have been proposed for their actual shape, such as Albert and Barabasi's preferential attachment growth rule. However, many actual networks seem to have undergone some kind of Darwinian variation and selection. Thus, how these networks might have come to be selected is an interesting yet unanswered question. In the last part of this work, we show how a simple evolutionary algorithm can enable the emergence of these kinds of structures for two prototypical problems of the automata networks world, the majority classification and the synchronisation problems.
Abstract:
Aim: Modelling species at the assemblage level is required to make effective forecast of global change impacts on diversity and ecosystem functioning. Community predictions may be achieved using macroecological properties of communities (MEM), or by stacking of individual species distribution models (S-SDMs). To obtain more realistic predictions of species assemblages, the SESAM framework suggests applying successive filters to the initial species source pool, by combining different modelling approaches and rules. Here we provide a first test of this framework in mountain grassland communities. Location: The western Swiss Alps. Methods: Two implementations of the SESAM framework were tested: a "Probability ranking" rule based on species richness predictions and rough probabilities from SDMs, and a "Trait range" rule that uses the predicted upper and lower bound of community-level distribution of three different functional traits (vegetative height, specific leaf area and seed mass) to constrain a pool of environmentally filtered species from binary SDM predictions. Results: We showed that all independent constraints contributed, as expected, to reducing species richness overprediction. Only the "Probability ranking" rule slightly but significantly improved predictions of community composition. Main conclusion: We tested various ways to implement the SESAM framework by integrating macroecological constraints into S-SDM predictions, and report one that is able to improve compositional predictions. We discuss possible improvements, such as further improving the causality and precision of environmental predictors, using other assembly rules and testing other types of ecological or functional constraints.
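The "Probability ranking" rule can be sketched in a few lines: at each site, keep exactly the predicted number of species, chosen in decreasing order of SDM probability. Names are hypothetical, and the richness vector could come, for example, from summing the SDM probabilities per site:

```python
import numpy as np

def probability_ranking(sdm_probs, richness):
    # sdm_probs: sites x species matrix of SDM occurrence probabilities.
    # richness:  predicted species richness per site (one value per row).
    sdm_probs = np.asarray(sdm_probs, dtype=float)
    pred = np.zeros(sdm_probs.shape, dtype=int)
    for i, (p, r) in enumerate(zip(sdm_probs, richness)):
        # Mark present the top-r species by probability at this site.
        pred[i, np.argsort(p)[::-1][:int(round(r))]] = 1
    return pred
```

By construction, the predicted assemblage at each site contains exactly the predicted number of species, which is how the rule caps richness overprediction.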
Abstract:
BACKGROUND: The structure and organisation of ecological interactions within an ecosystem is modified by the evolution and coevolution of the individual species it contains. Understanding how historical conditions have shaped this architecture is vital for understanding system responses to change at scales from the microbial upwards. However, in the absence of a group selection process, the collective behaviours and ecosystem functions exhibited by the whole community cannot be organised or adapted in a Darwinian sense. A long-standing open question thus persists: Are there alternative organising principles that enable us to understand and predict how the coevolution of the component species creates and maintains complex collective behaviours exhibited by the ecosystem as a whole? RESULTS: Here we answer this question by incorporating principles from connectionist learning, a previously unrelated discipline already using well-developed theories on how emergent behaviours arise in simple networks. Specifically, we show conditions where natural selection on ecological interactions is functionally equivalent to a simple type of connectionist learning, 'unsupervised learning', well-known in neural-network models of cognitive systems to produce many non-trivial collective behaviours. Accordingly, we find that a community can self-organise in a well-defined and non-trivial sense without selection at the community level; its organisation can be conditioned by past experience in the same sense as connectionist learning models habituate to stimuli. This conditioning drives the community to form a distributed ecological memory of multiple past states, causing the community to: a) converge to these states from any random initial composition; b) accurately restore historical compositions from small fragments; c) recover a state composition following disturbance; and d) correctly classify ambiguous initial compositions according to their similarity to learned compositions.
We examine how the formation of alternative stable states alters the community's response to changing environmental forcing, and we identify conditions under which the ecosystem exhibits hysteresis with potential for catastrophic regime shifts. CONCLUSIONS: This work highlights the potential of connectionist theory to expand our understanding of evo-eco dynamics and collective ecological behaviours. Within this framework we find that, despite not being a Darwinian unit, ecological communities can behave like connectionist learning systems, creating internal conditions that habituate to past environmental conditions and actively recall those conditions. REVIEWERS: This article was reviewed by Prof. Ricard V Solé, Universitat Pompeu Fabra, Barcelona and Prof. Rob Knight, University of Colorado, Boulder.
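The memory behaviours listed in (a)-(d) are exactly those of a Hopfield-style associative memory trained with a Hebbian (unsupervised) rule. The sketch below is an illustrative analogy only, not the authors' actual model: species presence/absence plays the role of network state, and pairwise interaction strengths play the role of connection weights:

```python
import numpy as np

def hebbian_store(patterns):
    # Hebbian rule: strengthen interactions between species that co-occur
    # in the stored compositions, weaken those that do not.
    P = np.asarray(patterns, dtype=float) * 2 - 1   # {0,1} -> {-1,+1}
    W = P.T @ P / len(P)
    np.fill_diagonal(W, 0.0)                        # no self-interaction
    return W

def recall(W, state, steps=10):
    # Iterate the ecosystem dynamics; the state falls into the nearest
    # stored composition (an attractor of the interaction network).
    s = np.asarray(state, dtype=float) * 2 - 1
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return ((s + 1) // 2).astype(int)
```

Restoring a historical composition from a corrupted fragment, behaviour (b) above, then amounts to running `recall` on the fragment.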
Abstract:
The oxidative potential (OP) of particulate matter has been proposed as a toxicologically relevant metric. This concept is already frequently used for hazard characterization of ambient particles but it is still seldom applied in the occupational field. The objective of this study was to assess the OP in two different types of workplaces and to investigate the relationship between the OP and the physicochemical characteristics of the collected particles. At a toll station, at the entrance of a tunnel ('Tunnel' site), and at three different mechanical yards ('Depot' sites), we assessed particle mass (PM4 and PM2.5 and size distribution), number and surface area, organic and elemental carbon, polycyclic aromatic hydrocarbon (PAH), and four quinones as well as iron and copper concentration. The OP was determined directly on filters without extraction by using the dithiothreitol assay (DTT assay-OP(DTT)). The averaged mass concentration of respirable particles (PM4) at the Tunnel site was about twice the one at the Depot sites (173±103 and 90±36 µg m(-3), respectively), whereas the OP(DTT) was practically identical for all the sites (10.6±7.2 pmol DTT min(-1) μg(-1) at the Tunnel site; 10.4±4.6 pmol DTT min(-1) μg(-1) at the Depot sites). The OP(DTT) of PM4 was mostly present on the smallest PM2.5 fraction (OP(DTT) PM2.5: 10.2±8.1 pmol DTT min(-1) μg(-1); OP(DTT) PM4: 10.5±5.8 pmol DTT min(-1) μg(-1) for all sites), suggesting the presence of redox inactive components in the PM2.5-4 fraction. Although the reactivity was similar at the Tunnel and Depot sites irrespective of the metric chosen (OP(DTT) µg(-1) or OP(DTT) m(-3)), the chemicals associated with OP(DTT) were different between the two types of workplaces. The organic carbon, quinones, and/or metal content (Fe, Cu) were strongly associated with the DTT reactivity at the Tunnel site whereas only Fe and PAH were associated (positively and negatively, respectively) with this reactivity at the Depot sites. 
These results demonstrate the feasibility of measuring the OP(DTT) in occupational environments and suggest that the particulate OP(DTT) is integrative of different physicochemical properties. This parameter could be a potentially useful exposure proxy for investigating particle exposure-related oxidative stress and its consequences. Further research is needed mostly to demonstrate the association of OP(DTT) with relevant oxidative endpoints in humans exposed to particles.
Abstract:
Aim The aim of this study was to test different modelling approaches, including a new framework, for predicting the spatial distribution of richness and composition of two insect groups. Location The western Swiss Alps. Methods We compared two community modelling approaches: the classical method of stacking binary predictions obtained from individual species distribution models (binary stacked species distribution models, bS-SDMs), and various implementations of a recent framework (spatially explicit species assemblage modelling, SESAM) based on four steps that integrate the different drivers of the assembly process in a unique modelling procedure. We used: (1) five methods to create bS-SDM predictions; (2) two approaches for predicting species richness, by summing individual SDM probabilities or by modelling the number of species (i.e. richness) directly; and (3) five different biotic rules based either on ranking probabilities from SDMs or on community co-occurrence patterns. Combining these various options resulted in 47 implementations for each taxon. Results Species richness of the two taxonomic groups was predicted with good accuracy overall, and in most cases bS-SDM did not produce a biased prediction exceeding the actual number of species in each unit. In the prediction of community composition bS-SDM often also yielded the best evaluation score. In the case of poor performance of bS-SDM (i.e. when bS-SDM overestimated the prediction of richness) the SESAM framework improved predictions of species composition. Main conclusions Our results differed from previous findings using community-level models. First, we show that overprediction of richness by bS-SDM is not a general rule, thus highlighting the relevance of producing good individual SDMs to capture the ecological filters that are important for the assembly process.
Second, we confirm the potential of SESAM when richness is overpredicted by bS-SDM; limiting the number of species for each unit and applying biotic rules (here using the ranking of SDM probabilities) can improve predictions of species composition.
Abstract:
The occurrence of cognitive disturbances upon CNS inflammation or infection has been correlated with increased levels of the cytokine tumor necrosis factor-α (TNFα). To date, however, no specific mechanism via which this cytokine could alter cognitive circuits has been demonstrated. Here, we show that local increase of TNFα in the hippocampal dentate gyrus activates astrocyte TNF receptor type 1 (TNFR1), which in turn triggers an astrocyte-neuron signaling cascade that results in persistent functional modification of hippocampal excitatory synapses. Astrocytic TNFR1 signaling is necessary for the hippocampal synaptic alteration and contextual learning-memory impairment observed in experimental autoimmune encephalitis (EAE), an animal model of multiple sclerosis (MS). This process may contribute to the pathogenesis of cognitive disturbances in MS, as well as in other CNS conditions accompanied by inflammatory states or infections.
Abstract:
Following their detection and seizure by police and border guard authorities, false identity and travel documents are usually scanned, producing digital images. This research investigates the potential of these images to classify false identity documents, highlight links between documents produced by the same modus operandi or the same source, and thus support forensic intelligence efforts. Inspired by previous research work about digital images of Ecstasy tablets, a systematic and complete method has been developed to acquire, collect, process and compare images of false identity documents. This first part of the article highlights the critical steps of the method and the development of a prototype that processes regions of interest extracted from images. Acquisition conditions have been fine-tuned in order to optimise reproducibility and comparability of images. Different filters and comparison metrics have been evaluated and the performance of the method has been assessed using two calibration and validation sets of documents, made up of 101 Italian driving licenses and 96 Portuguese passports seized in Switzerland, among which some were known to come from common sources. Results indicate that the use of Hue and Edge filters or their combination to extract profiles from images, and then the comparison of profiles with a Canberra distance-based metric, provides the most accurate classification of documents. The method also appears to be quick, efficient and inexpensive. It can be easily operated from remote locations and shared amongst different organisations, which makes it very convenient for future operational applications. The method could serve as a first fast triage method that may help target more resource-intensive profiling methods (based on a visual, physical or chemical examination of documents for instance).
Its contribution to forensic intelligence and its application to several sets of false identity documents seized by police and border guards will be developed in a forthcoming article (part II).
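The Canberra distance used to compare the image profiles is simple to compute; a small sketch of just the metric (the extraction of profiles via the Hue and Edge filters is omitted, and the zero-handling convention below is an assumption):

```python
import numpy as np

def canberra(p, q):
    # Canberra distance between two profiles: sum of |p_i - q_i| / (|p_i| + |q_i|),
    # skipping terms where both entries are zero to avoid 0/0.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    num = np.abs(p - q)
    den = np.abs(p) + np.abs(q)
    mask = den > 0
    return float((num[mask] / den[mask]).sum())
```

Because each term is normalised by the magnitude of the entries, the metric weights small profile components as heavily as large ones, which can help when discriminating documents whose profiles differ mainly in minor features.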