975 results for temporal process


Relevance:

30.00%

Publisher:

Abstract:

The most biologically inspired artificial neurons are those of the third generation, termed spiking neurons, as individual pulses or spikes are the means by which stimuli are communicated. In essence, a spike is a short-term change in electrical potential and is the basis of communication between biological neurons. Unlike previous generations of artificial neurons, spiking neurons operate in the temporal domain and exploit time as a resource in their computation. In 1952, Alan Lloyd Hodgkin and Andrew Huxley produced the first model of a spiking neuron; their model describes the complex electro-chemical process that enables spikes to propagate through, and hence be communicated by, spiking neurons. Since then, improvements in experimental procedures in neurobiology, particularly with in vivo experiments, have provided an increasingly detailed understanding of biological neurons. For example, it is now well understood that the propagation of spikes between neurons requires neurotransmitter, which is typically in limited supply; when the supply is exhausted, neurons become unresponsive. The morphology of neurons and the number of receptor sites, among many other factors, mean that neurons consume the supply of neurotransmitter at different rates. This in turn produces variations over time in the responsiveness of neurons, yielding various computational capabilities. Such improvements in the understanding of the biological neuron have culminated in a wide range of neuron models, ranging from the computationally efficient to the biologically realistic. These models enable the modelling of neural circuits found in the brain. In recent years, much of the focus in neuron modelling has moved to the study of the connectivity of spiking neural networks. Spiking neural networks provide a vehicle for understanding, from a computational perspective, aspects of the brain's neural circuitry. This understanding can then be used to tackle some of the historically intractable issues with artificial neurons, such as scalability and the lack of variable binding. Current knowledge of the feed-forward, lateral, and recurrent connectivity of spiking neurons, and of the interplay between excitatory and inhibitory neurons, is beginning to shed light on these issues through improved understanding of the temporal processing capabilities and synchronous behaviour of biological neurons. This research topic aims to bring together current research tackling these phenomena.
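
The abstract spans models from the computationally efficient to the biologically realistic Hodgkin-Huxley formulation. As a minimal sketch of the efficient end of that spectrum (the standard leaky integrate-and-fire simplification, not the Hodgkin-Huxley model the abstract names), the following shows how a spiking neuron computes in the temporal domain: membrane potential integrates input current and a spike is emitted on a threshold crossing.

```python
import numpy as np

def lif_spike_train(current, dt=1e-4, tau=0.02, v_rest=-0.065,
                    v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Leaky integrate-and-fire neuron: returns spike times (s).

    current : array of input current (A), one sample per time step.
    tau     : membrane time constant (s); r_m : membrane resistance (ohm).
    """
    v = v_rest
    spikes = []
    for i, i_in in enumerate(current):
        # Euler step of dv/dt = (-(v - v_rest) + r_m * i_in) / tau
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(i * dt)
            v = v_reset            # membrane potential resets after a spike
    return spikes

# A constant 2 nA input drives regular spiking over 1 s of simulated time.
print(lif_spike_train(np.full(10000, 2e-9))[:5])
```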

Relevance:

30.00%

Publisher:

Abstract:

Code review is an essential process whatever the maturity of a project; it seeks to assess the contribution made by the code that developers submit. In principle, code review improves the quality of code changes (patches) before they are committed to the project's master repository. In practice, carrying out this process does not rule out the possibility that some bugs go unnoticed. In this document, we present an empirical study investigating code review in a large open-source project. We investigate the relationships between reviewers' inspections and the personal and temporal factors that could affect the quality of such inspections. First, we report a quantitative study in which we use the SZZ algorithm to detect bug-inducing code changes, which we then linked with the code review information extracted from the issue tracking system. We found that the reasons why reviewers miss certain bugs correlate with both their personal characteristics and the technical properties of the patches under review. Next, we report a qualitative study in which we invited Mozilla developers to give their opinions on the attributes of a well-conducted code review. The results of our survey suggest that developers consider technical aspects (patch size, number of chunks and of modules) as well as personal characteristics (experience and review queue) to be factors that strongly influence the quality of code reviews.
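
A minimal sketch of the SZZ heuristic the study applies, assuming a local git checkout; the function name `bug_inducing_candidates` is illustrative. Real SZZ implementations add filtering for whitespace-only and comment-only changes and handle renames; this sketch only blames the lines a bug-fixing commit removed.

```python
import re
import subprocess

def git(*args, repo="."):
    """Run a git command in `repo` and return its stdout."""
    return subprocess.run(["git", "-C", repo, *args],
                          capture_output=True, text=True, check=True).stdout

def bug_inducing_candidates(fix_sha, repo="."):
    """SZZ-style heuristic: blame the lines a bug-fixing commit deleted
    or changed; the commits that last touched those lines are candidate
    bug-inducing changes."""
    candidates = set()
    path, old_line = None, None
    # --unified=0 keeps hunks minimal; we track old-side line numbers.
    for line in git("show", "--unified=0", fix_sha, repo=repo).splitlines():
        if line.startswith("--- a/"):
            path = line[len("--- a/"):]
        elif line.startswith("@@"):
            # hunk header: @@ -old_start[,count] +new_start[,count] @@
            old_line = int(re.match(r"@@ -(\d+)", line).group(1))
        elif old_line is not None and path is not None \
                and line.startswith("-") and not line.startswith("---"):
            # a removed line: ask git blame which commit last touched it
            blame = git("blame", "--porcelain",
                        "-L", f"{old_line},{old_line}",
                        f"{fix_sha}^", "--", path, repo=repo)
            candidates.add(blame.split()[0])  # first token is the SHA
            old_line += 1
    return candidates
```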

Relevance:

30.00%

Publisher:

Abstract:

A YBa2Cu3O7 target was laser ablated, and the time-of-flight (TOF) distributions of Y, Y+, and YO in the resultant plasma were investigated as functions of distance from the target and laser energy density using emission spectroscopy. Up to a short distance from the target (~1.5 cm), the TOF distributions show twin peaks for Y and YO, while only a single-peak distribution is observed for Y+. At greater distances (>1.5 cm) all of them exhibit single-peak distributions. The twin peaks are assigned to species generated directly in the vicinity of the target surface and to species generated by collisional/recombination processes.

Relevance:

30.00%

Publisher:

Abstract:

Research on autonomous intelligent systems has focused on how robots can robustly carry out missions in uncertain and harsh environments with very little or no human intervention. Robotic execution languages such as RAPs, ESL, and TDL improve robustness by managing functionally redundant procedures for achieving goals. The model-based programming approach extends this by guaranteeing correctness of execution through pre-planning of non-deterministic timed threads of activities. Executing model-based programs effectively on distributed autonomous platforms requires distributing this pre-planning process. This thesis presents a distributed planner for model-based programs whose planning and execution are distributed among agents with widely varying levels of processor power and memory resources. We make two key contributions. First, we reformulate a model-based program, which describes cooperative activities, into a hierarchical dynamic simple temporal network. This enables efficient distributed coordination of robots and supports deployment on heterogeneous robots. Second, we introduce a distributed temporal planner, called DTP, which solves hierarchical dynamic simple temporal networks with the assistance of the distributed Bellman-Ford shortest path algorithm. The implementation of DTP has been demonstrated successfully on a wide range of randomly generated examples and on a pursuer-evader challenge problem in simulation.
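
The thesis's planner solves simple temporal networks with a distributed Bellman-Ford algorithm. As a minimal, centralized sketch (the standard distance-graph encoding, not the thesis's distributed DTP), a simple temporal network is consistent exactly when its distance graph has no negative cycle, which Bellman-Ford detects:

```python
def stn_consistent(num_events, constraints):
    """Check consistency of a simple temporal network (STN).

    constraints: list of (i, j, lo, hi) meaning lo <= t_j - t_i <= hi.
    Distance-graph encoding: edge i->j with weight hi and edge j->i with
    weight -lo; the STN is consistent iff this graph has no negative
    cycle, which Bellman-Ford detects.
    """
    edges = []
    for i, j, lo, hi in constraints:
        edges.append((i, j, hi))
        edges.append((j, i, -lo))
    dist = [0.0] * num_events          # zero init reaches every component
    for _ in range(num_events - 1):    # relax all edges |V|-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # one more pass: any further relaxation implies a negative cycle
    return all(dist[u] + w >= dist[v] for u, v, w in edges)

# Event 1 occurs 10-20 s after event 0, event 2 occurs 30-40 s after
# event 1, yet at most 35 s after event 0 -> inconsistent (prints False).
print(stn_consistent(3, [(0, 1, 10, 20), (1, 2, 30, 40), (0, 2, 0, 35)]))
```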

Relevance:

30.00%

Publisher:

Abstract:

Despite the many models developed for phosphorus concentration prediction at differing spatial and temporal scales, there has been little effort to quantify uncertainty in their predictions. Quantification of model prediction uncertainty is desirable for informed decision-making in river-systems management. An uncertainty analysis of the process-based Integrated Catchment model of Phosphorus (INCA-P), within the generalised likelihood uncertainty estimation (GLUE) framework, is presented. The framework is applied to the Lugg catchment (1,077 km²), a River Wye tributary on the England-Wales border. Daily discharge and monthly phosphorus (total reactive and total), for a limited number of reaches, are used to initially assess the uncertainty and sensitivity of 44 model parameters identified as being most important for discharge and phosphorus predictions. This study demonstrates that parameter homogeneity assumptions (spatial heterogeneity is treated through land-use-type fractional areas) can achieve higher model fits than a previous, expertly calibrated parameter set. The model is capable of reproducing the hydrology, but a threshold Nash-Sutcliffe coefficient of determination (E, or R²) of 0.3 is not achieved when simulating observed total phosphorus (TP) data in the upland reaches or total reactive phosphorus (TRP) in any reach. Despite this, the model reproduces the general dynamics of TP and TRP in the point-source-dominated lower reaches. This paper discusses why this application of INCA-P fails to find any parameter sets that simultaneously describe all observed data acceptably. The discussion focuses on the uncertainty of readily available input data, and on whether such process-based models should be used when there are not sufficient data to support their many parameters.
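
GLUE is a Monte Carlo procedure: parameter sets are sampled, each is scored with a likelihood measure such as the Nash-Sutcliffe efficiency, sets below a behavioural threshold are rejected, and the survivors' likelihood-weighted predictions form the uncertainty bounds. A minimal sketch, assuming a generic `model(params)` callable rather than INCA-P itself; the 0.3 threshold mirrors the E threshold quoted in the abstract.

```python
import numpy as np

def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect; <= 0 is no better than
    predicting the observed mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue(model, bounds, obs, n_samples=10000, threshold=0.3):
    """GLUE: sample parameters uniformly, keep 'behavioural' sets
    (efficiency >= threshold), and form likelihood-weighted prediction
    bounds from the survivors.

    model  : callable mapping a parameter vector to a simulated series
    bounds : sequence of (low, high) sampling ranges, one per parameter
    """
    rng = np.random.default_rng(0)
    bounds = np.asarray(bounds, dtype=float)
    sims, weights = [], []
    for _ in range(n_samples):
        params = rng.uniform(bounds[:, 0], bounds[:, 1])
        sim = model(params)
        e = nash_sutcliffe(sim, obs)
        if e >= threshold:               # behavioural parameter set
            sims.append(sim)
            weights.append(e)
    if not sims:                         # no behavioural sets found, as
        return None                      # reported for TP/TRP reaches
    sims = np.array(sims)
    weights = np.array(weights) / np.sum(weights)
    lower, upper = [], []
    for t in range(sims.shape[1]):       # weighted 5-95% bounds per step
        order = np.argsort(sims[:, t])
        cum = np.cumsum(weights[order])
        lower.append(sims[order, t][np.searchsorted(cum, 0.05)])
        upper.append(sims[order, t][np.searchsorted(cum, 0.95)])
    return np.array(lower), np.array(upper)
```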

Relevance:

30.00%

Publisher:

Abstract:

Background: Inadvertent drilling on the ossicular chain is one of the causes of sensorineural hearing loss (HL) that may follow tympanomastoid surgery. A high-frequency HL is most frequently observed. It is speculated that the HL results from vibration of the ossicular chain resembling acoustic noise trauma. It is generally considered that a large cutting burr is more likely to cause damage than a small diamond burr. Aim: The aim was to investigate the equivalent noise level, and its frequency characteristics, generated by drilling onto the short process of the incus in fresh human temporal bones. Methods and Materials: Five fresh cadaveric temporal bones were used. Stapes displacement was measured using laser Doppler vibrometry during short drilling episodes. Diamond and cutting burrs of different diameters were used. The effect of the drilling on stapes footplate displacement was compared with that generated by an acoustic signal, and the equivalent noise level (dB sound pressure level equivalent [SPL eq]) was thus calculated. Results: The equivalent noise levels generated ranged from 93 to 125 dB SPL eq. For a 1-mm cutting burr, the highest equivalent noise level was 108 dB SPL eq, whereas a 2.3-mm cutting burr produced a maximal level of 125 dB SPL eq. Diamond burrs generated less noise than their cutting counterparts, with a 2.3-mm diamond burr producing a highest equivalent noise level of 102 dB SPL eq. The energy of the noise increased at the higher end of the frequency spectrum, with a 2.3-mm cutting burr producing a noise level of 105 dB SPL eq at 1 kHz and 125 dB SPL eq at 8 kHz. In contrast, the same-sized diamond burr produced 96 dB SPL eq at 1 kHz and 99 dB at 8 kHz. Conclusion: This study suggests that drilling on the ossicular chain can produce vibratory force analogous to noise levels known to produce acoustic trauma. For the same type of burr, the larger the diameter, the greater the vibratory force, and for the same size of burr, the cutting burr creates more vibratory force than the diamond burr. The cutting burr produces greater high-frequency than low-frequency vibratory energy.
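
The equivalent-level calculation implied by the method can be sketched simply: if a calibration tone of known SPL produces a reference stapes displacement, then, under a linearity assumption, the drilling vibration's equivalent level follows from the displacement ratio. This is an illustrative reconstruction, not the authors' stated formula, and all names and values are hypothetical.

```python
import math

def equivalent_spl(d_drill, d_ref, ref_spl):
    """Equivalent noise level (dB SPL eq) of drilling vibration.

    d_drill : stapes displacement during drilling (m)
    d_ref   : stapes displacement for the calibration tone (m)
    ref_spl : SPL of the calibration tone (dB)
    Assumes displacement scales linearly with sound pressure.
    """
    return ref_spl + 20.0 * math.log10(d_drill / d_ref)

# E.g. drilling displacement 10x that of a 90 dB calibration tone:
print(equivalent_spl(1e-8, 1e-9, 90.0))  # 110.0 dB SPL eq
```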

Relevance:

30.00%

Publisher:

Abstract:

Concentrations of dissolved organic carbon (DOC) have increased in many, but not all, surface waters across acid-impacted areas of Europe and North America over the last two decades. Over the last eight years several hypotheses have been put forward to explain these increases, but none is yet universally accepted. Research in this area appears to have reached a stalemate between those favouring declining atmospheric deposition, climate change, or land management as the key driver of long-term DOC trends. While it is clear that many of these factors influence DOC dynamics in soil and stream waters, their effects vary over different temporal and spatial scales. We argue that regional differences in acid deposition loading may account for the apparent discrepancies between studies. DOC has shown strong monotonic increases in areas that have experienced strong downward trends in pollutant sulphur and/or sea-salt deposition. Elsewhere, climatic factors that strongly influence seasonality have also dominated inter-annual variability, and here long-term monotonic DOC trends are often difficult to detect. Furthermore, in areas receiving similar acid loadings, different catchment characteristics could have affected the site-specific sensitivity to changes in acidity and therefore the magnitude of DOC release in response to changes in sulphur deposition. We suggest that confusion over these temporal and spatial scales of investigation has contributed unnecessarily to the disagreement over the main regional driver(s) of DOC trends, and that the data behind the majority of these studies are more compatible than is often conveyed.

Relevance:

30.00%

Publisher:

Abstract:

Drought characterisation is an intrinsically spatio-temporal problem. A limitation of previous approaches to characterisation is that they discard much of the spatio-temporal information by reducing events to a lower-order subspace. To address this, an explicit 3-dimensional (longitude, latitude, time) structure-based method is described, in which drought events are defined by a spatially and temporally coherent set of points displaying standardised precipitation below a given threshold. Geometric methods can then be used to measure similarity between individual drought structures. Groupings of these similarities provide an alternative to traditional methods for extracting recurrent space-time signals from geophysical data. The explicit consideration of structure encourages the construction of summary statistics that relate to the event geometry; example measures considered are the event volume, centroid, and aspect ratio. The utility of a 3-dimensional approach is demonstrated by application to the analysis of European droughts (15°W to 35°E, 35°N to 70°N) for the period 1901-2006. Large-scale structure is found to be abundant, with 75 events identified lasting for more than 3 months and spanning at least 0.5 × 10⁶ km². Near-complete dissimilarity is seen between the individual drought structures, and little or no regularity is found in the time evolution of even the most spatially similar drought events. The spatial distribution of the event centroids and the time evolution of the geographic cross-sectional areas strongly suggest that large-area, sustained droughts result from the combination of multiple small-area (∼10⁶ km²), short-duration (∼3 months) events. The small events are not found to occur independently in space. This leads to the hypothesis that local water feedbacks play an important role in the aggregation process.
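
The structure-based definition (contiguous below-threshold points in longitude, latitude, and time) maps naturally onto 3-D connected-component labelling. A minimal sketch, assuming a gridded standardised precipitation index array `spi` rather than the paper's actual data:

```python
import numpy as np
from scipy import ndimage

def drought_events(spi, threshold=-1.0):
    """Label 3-D drought structures in an SPI array of shape
    (time, lat, lon): each event is a connected set of grid cells
    with SPI below the threshold."""
    below = spi < threshold
    # default structure = 6-connectivity: cells touch along one axis
    labels, n = ndimage.label(below)
    events = []
    for lab in range(1, n + 1):
        t, y, x = np.nonzero(labels == lab)
        events.append({
            "duration_steps": int(t.max() - t.min() + 1),
            "cell_count": int(t.size),                 # event 'volume'
            "centroid": (t.mean(), y.mean(), x.mean()),
        })
    return labels, events

# Toy field: 12 months on a 5x5 grid with two separate dry structures.
spi = np.zeros((12, 5, 5))
spi[2:6, 1:3, 1:3] = -2.0   # one 4-month coherent event
spi[9, 4, 4] = -1.5         # a second, single-cell event
labels, events = drought_events(spi)
print(len(events))           # -> 2
```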

Relevance:

30.00%

Publisher:

Abstract:

We present a statistical analysis of the time evolution of ground magnetic fluctuations in three (12–48 s, 24–96 s and 48–192 s) period bands during nightside auroral activations. We use an independently derived auroral activation list composed of both substorms and pseudo-breakups to provide an estimate of the activation times of nightside aurora during periods with comprehensive ground magnetometer coverage. One hundred eighty-one events in total are studied to demonstrate the statistical nature of the time evolution of magnetic wave power during the ∼30 min surrounding auroral activations. We find that the magnetic wave power is approximately constant before an auroral activation, starts to grow up to 90 s prior to the optical onset time, maximizes a few minutes after the auroral activation, then decays slightly to a new, and higher, constant level. Importantly, magnetic ULF wave power always remains elevated after an auroral activation, whether it is a substorm or a pseudo-breakup. We subsequently divide the auroral activation list into events that formed part of ongoing auroral activity and events that had little preceding geomagnetic activity. We find that the evolution of wave power in the ∼10–200 s period band essentially behaves in the same manner through auroral onset, regardless of event type. The absolute power across ULF wave bands, however, displays a power law-like dependency throughout a 30 min period centered on auroral onset time. We also find evidence of a secondary maximum in wave power at high latitudes ∼10 min following isolated substorm activations. Most significantly, we demonstrate that magnetic wave power levels persist after auroral activations for ∼10 min, which is consistent with recent findings of wave-driven auroral precipitation during substorms. This suggests that magnetic wave power and auroral particle precipitation are intimately linked and key components of the substorm onset process.
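
A sketch of the band-integrated wave power computation, assuming a 1-s-cadence ground magnetometer series; the period bands are those named in the abstract, converted to frequency bounds, with power integrated from a Welch spectral estimate. The data and cadence here are illustrative.

```python
import numpy as np
from scipy import signal, integrate

BANDS_S = [(12, 48), (24, 96), (48, 192)]   # period bands from the abstract

def band_powers(b_field, fs=1.0):
    """Integrate the PSD of a ground magnetometer series over ULF
    period bands; fs is the sample rate in Hz (1-s cadence assumed)."""
    freqs, psd = signal.welch(b_field, fs=fs, nperseg=512)
    powers = []
    for t_min, t_max in BANDS_S:
        lo, hi = 1.0 / t_max, 1.0 / t_min    # period -> frequency bounds
        mask = (freqs >= lo) & (freqs <= hi)
        powers.append(integrate.trapezoid(psd[mask], freqs[mask]))
    return powers

# Toy series: noise, then a 60 s wave switching on at 'onset'.
t = np.arange(3600.0)
b = np.random.default_rng(2).normal(scale=0.1, size=t.size)
b[1800:] += 0.5 * np.sin(2 * np.pi * t[1800:] / 60.0)
print(band_powers(b[:1800]))   # pre-onset power in each band
print(band_powers(b[1800:]))   # post-onset: the 24-96 s band rises
```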

Relevance:

30.00%

Publisher:

Abstract:

Objective. Therapeutic alliance, modality, and ability to engage with the process of therapy have been the main focus of research into what makes psychotherapy successful. Individuals with complex trauma histories or schizophrenia are suggested to be more difficult to engage and may be less likely to benefit from therapy. This study aimed to track the in-session ‘process’ of working alliance and emotional processing of trauma memories for individuals with schizophrenia. Design. The study utilized session recordings from the treatment arm of an open randomized clinical trial investigating trauma-focused cognitive behavioural therapy (TF-CBT) for individuals with schizophrenia (N = 26). Method. Observer measures of working alliance, emotional processing, and affect arousal were rated at early and late phases of therapy. Correlation analysis was undertaken for process measures. Temporal analysis of expressed emotions was also reported. Results. Working alliance was established and maintained throughout the therapy; however, agreement on goals reduced at the late phase. The participants appeared to be able to engage in emotional processing, but not to the required level for successful cognitive restructuring. Conclusion. This study undertook novel exploration of process variables not usually explored in CBT. It is also the first study of process for TF-CBT with individuals with schizophrenia. This complex clinical sample showed no difficulty in engagement; however, they may not be able to fully undertake the cognitive–emotional demands of this type of therapy. Clinical and research implications and potential limitations of these methods are considered.

Relevance:

30.00%

Publisher:

Abstract:

Accurate assessment of the fate of salts, nutrients, and pollutants in natural, heterogeneous soils requires a proper quantification of both spatial and temporal solute spreading during solute movement. The number of experiments with multisampler devices that measure solute leaching as a function of space and time is increasing. The breakthrough curve (BTC) can characterize the temporal aspect of solute leaching, and recently the spatial solute distribution curve (SSDC) was introduced to describe the spatial solute distribution. We combined and extended both concepts to develop a tool for the comprehensive analysis of the full spatio-temporal behavior of solute leaching. The sampling locations are ranked in order of descending amount of total leaching (defined as the cumulative leaching from an individual compartment at the end of the experiment), thus collapsing both spatial axes of the sampling plane into one. The leaching process can then be described by a curved surface that is a function of the single spatial coordinate and time. This leaching surface is scaled to integrate to unity; termed S, it can efficiently represent data from multisampler solute transport experiments or simulation results from multidimensional solute transport models. The mathematical relationships between the scaled leaching surface S, the BTC, and the SSDC are established. Any desired characteristic of the leaching process can be derived from S. The analysis was applied to a chloride leaching experiment on a lysimeter with 300 drainage compartments of 25 cm² each. The sandy soil monolith in the lysimeter exhibited fingered flow in the water-repellent top layer. The observed S demonstrated the absence of a sharp separation between fingers and dry areas, owing to diverging flow in the wettable soil below the fingers. Times to peak, maximum solute fluxes, and total leaching varied more in high-leaching than in low-leaching compartments. This suggests a stochastic-convective transport process in the high-flow streamtubes, while convection-dispersion is predominant in the low-flow areas. S can be viewed as a bivariate probability density function. Its marginal distributions are the BTC of all sampling locations combined, and the SSDC of cumulative solute leaching at the end of the experiment. The observed S cannot be represented by assuming complete independence between its marginal distributions, indicating that S contains information about the leaching process that cannot be derived from the combination of the BTC and the SSDC.
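
A sketch of the construction described above: rank compartments by total leaching, scale the resulting surface to integrate to unity, and recover the BTC and SSDC as its marginals. The array shapes and the synthetic flux data are illustrative assumptions, not the paper's dataset.

```python
import numpy as np

def leaching_surface(flux, dt=1.0):
    """Build the scaled leaching surface S from multisampler flux data.

    flux : (n_locations, n_times) array of solute flux per compartment.
    Locations are ranked by total leaching (descending), collapsing the
    two spatial axes of the sampling plane into one ranked coordinate.
    """
    totals = flux.sum(axis=1) * dt            # total leaching per location
    order = np.argsort(totals)[::-1]          # rank: high leachers first
    ranked = flux[order]
    s = ranked / (ranked.sum() * dt)          # scale S to integrate to 1
    btc = s.sum(axis=0)                       # marginal over space -> BTC
    ssdc = s.sum(axis=1) * dt                 # marginal over time -> SSDC
    return s, btc, ssdc

# Toy data: 300 compartments with Gaussian breakthrough at varying times
# and strongly unequal totals (a few fast 'fingers').
rng = np.random.default_rng(3)
t = np.arange(100.0)
peaks = rng.uniform(10, 90, size=(300, 1))
flux = np.exp(-0.5 * ((t - peaks) / 5.0) ** 2)
flux *= rng.lognormal(sigma=1.0, size=(300, 1))
s, btc, ssdc = leaching_surface(flux)
print(np.isclose(s.sum(), 1.0), btc.shape, ssdc.shape)
```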

Relevance:

30.00%

Publisher:

Abstract:

Although considerable efforts have been made to develop and validate etiological models of male sexual offending, no theory is available to guide research or practice with female sexual offenders (FSOs). In this study, the authors developed a descriptive, offense process model of female sexual offending. Systematic qualitative analyses (i.e., grounded theory) of 22 FSOs' offense interviews were used to develop a temporal model documenting the contributory roles of cognitive, behavioral, affective, and contextual factors in female sexual abuse. The model highlights notable similarities and divergences between male and female sexual offenders' vulnerability factors and offense styles. In particular, the model incorporates male co-offender and group co-offender influences and describes how these interact with vulnerability factors to generate female sexual offending. The gender-specific research and clinical implications of the model are discussed.

Relevance:

30.00%

Publisher:

Abstract:

Existing techniques for shot partitioning either process each shot boundary independently or proceed sequentially. The sequential process assumes the last shot boundary was correctly detected and utilizes the shot-length distribution to adapt the threshold for detecting the next boundary. These techniques are only locally optimal and suffer from the strong assumption that the last boundary was detected correctly. Addressing these fundamental issues, in this paper we aim to find the globally optimal shot partition by utilizing Bayesian principles to model the probability of a particular video partition being the shot partition. A computationally efficient algorithm based on Dynamic Programming is then formulated. The experimental results on a large movie set show that our algorithm performs consistently better than the best adaptive-thresholding technique commonly used for the task.
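
A sketch of the global optimization: score each candidate shot by a shot-length log-prior plus a boundary-evidence term, then use dynamic programming to pick the partition maximizing the total log-probability. The scoring functions here (`boundary_logp`, the exponential length prior) are illustrative stand-ins for the paper's Bayesian model.

```python
import math

def best_partition(n_frames, boundary_logp, mean_len=120.0, max_len=1000):
    """Globally optimal shot partition via dynamic programming.

    boundary_logp(f): log-evidence for a shot boundary at frame f (a
    stand-in for a visual-dissimilarity likelihood).
    Shot lengths get an exponential log-prior with mean mean_len.
    """
    def length_logp(length):
        return -length / mean_len - math.log(mean_len)

    best = [float("-inf")] * (n_frames + 1)  # best[f]: score of frames [0, f)
    back = [0] * (n_frames + 1)
    best[0] = 0.0
    for end in range(1, n_frames + 1):
        bonus = boundary_logp(end) if end < n_frames else 0.0
        for start in range(max(0, end - max_len), end):
            score = best[start] + length_logp(end - start) + bonus
            if score > best[end]:
                best[end], back[end] = score, start
    cuts, f = [], n_frames                   # walk back pointers
    while f > 0:
        f = back[f]
        cuts.append(f)
    return sorted(c for c in cuts if c > 0)  # interior shot boundaries

# Strong boundary evidence at frames 300 and 700, weak elsewhere:
evidence = lambda f: 5.0 if f in (300, 700) else -5.0
print(best_partition(1000, evidence))        # -> [300, 700]
```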

Relevance:

30.00%

Publisher:

Abstract:

Data acquired from multiple sensors can be fused at a variety of levels: the raw data level, the feature level, or the decision level. An additional dimension to the fusion process is temporal fusion, which is the fusion of data or information acquired from multiple sensors of different types over a period of time. We propose a technique that can perform such temporal fusion. The core of the system is the fusion processor, which uses Dynamic Time Warping (DTW) to perform temporal fusion. We evaluate the performance of the fusion system on two real-world datasets: 1) accelerometer data acquired from performing two hand gestures and 2) NOKIA's benchmark dataset for context recognition. The results of the first experiment show that the system can perform temporal fusion on both raw data and features derived from the raw data. The system can also recognize the same class of multisensor temporal sequences even though they have different lengths, e.g., the same human gestures can be performed at different speeds. In addition, the fusion processor can infer decisions from the temporal sequences quickly and accurately. The results of the second experiment show that the system can perform fusion on temporal sequences that have large dimensions and are a mix of discrete and continuous variables. The proposed fusion system achieved good classification rates efficiently in both experiments.
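
A minimal sketch of the dynamic time warping core such a fusion processor relies on: DTW aligns two sequences of different lengths by minimizing cumulative distance over a warping path, which is why the same gesture performed at different speeds can still match. This is the textbook DTW recurrence, not the paper's full fusion system.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of feature
    vectors, shapes (n, d) and (m, d); lengths may differ."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local distance
            # extend the cheapest of match / insertion / deletion
            cost[i, j] = d + min(cost[i - 1, j - 1],
                                 cost[i - 1, j],
                                 cost[i, j - 1])
    return cost[n, m]

# The same 'gesture' at two speeds matches more closely than noise.
slow = np.sin(2 * np.pi * np.linspace(0, 1, 50)).reshape(-1, 1)
fast = np.sin(2 * np.pi * np.linspace(0, 1, 30)).reshape(-1, 1)
noise = np.random.default_rng(4).normal(size=(50, 1))
print(dtw_distance(slow, fast) < dtw_distance(slow, noise))  # True
```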