928 results for Input Distance Function
Abstract:
A new structure of Radial Basis Function (RBF) neural network, called the Dual-orthogonal RBF Network (DRBF), is introduced for nonlinear time series prediction. The hidden nodes of a conventional RBF network compute the Euclidean distance between the network input vector and the centres, and the node responses are radially symmetrical. However, in time series prediction, where the system input vectors are lagged system outputs that are usually highly correlated, the Euclidean distance measure may not be appropriate. The DRBF network modifies the distance metric by introducing a classification function based on the estimation data set. Training the DRBF network consists of two stages: learning the classification-related basis functions and the important input nodes, followed by selecting the regressors and learning the weights of the hidden nodes. In both stages a forward Orthogonal Least Squares (OLS) selection procedure is applied, initially to select the important input nodes and then to select the important centres. Simulation results of single-step and multi-step-ahead predictions over a test data set demonstrate the effectiveness of the new approach.
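The forward OLS selection used in both training stages can be sketched as a greedy error-reduction procedure. The following is a minimal NumPy illustration of the generic algorithm (the function name is hypothetical, and this is not the authors' DRBF implementation): at each step, pick the candidate regressor whose error-reduction ratio on the target is largest, then orthogonalise the remaining candidates against it.

```python
import numpy as np

def forward_ols_select(candidates, y, n_select):
    """Greedy forward OLS selection: repeatedly pick the candidate column
    with the largest error-reduction ratio on y, then orthogonalise the
    remaining candidates against it (classical Gram-Schmidt deflation)."""
    P = candidates.astype(float).copy()
    selected = []
    for _ in range(n_select):
        norms = np.einsum("ij,ij->j", P, P)            # squared column norms
        norms = np.where(norms == 0, np.finfo(float).eps, norms)
        err = (P.T @ y) ** 2 / (norms * (y @ y))       # error-reduction ratios
        for k in selected:                             # never re-pick a column
            err[k] = -np.inf
        k = int(np.argmax(err))
        selected.append(k)
        proj = (P.T @ P[:, k]) / norms[k]              # projection coefficients
        P = P - np.outer(P[:, k], proj)                # deflate remaining columns
    return selected
```

In the DRBF setting the candidate columns would be input nodes in the first stage and RBF centres in the second; here they are just generic regressors.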
Abstract:
The influence of curing tip distance and storage time on the kinetics of water diffusion (water sorption (WSP), solubility (WSB), and net water uptake) and the color stability of a composite was evaluated. Composite samples were polymerized at different distances (5, 10, and 15 mm) and compared to a control group (0 mm). After desiccation, the specimens were stored in distilled water and water diffusion was evaluated over a 120-day period. Net water uptake was calculated as the sum of WSP and WSB. Color stability after immersion in grape juice was compared to distilled water. Data were submitted to three-way ANOVA and Tukey's test (α = 5%). Higher distances caused higher net water uptake (p < 0.05). Immersion in the juice caused significantly higher color change as a function of curing tip distance and time (p < 0.05). The photoactivation distance and storage time increased both the color alteration and the net water uptake of the resin composite tested.
Abstract:
In most cases, the average travel distance in a low-level picker-to-part order picking system can be estimated by analytical methods. Often a uniform distribution of the access frequency over all bin locations in the storage system is assumed. This only applies if bin locations are assigned randomly. If the access frequency of the articles is considered in the bin location assignment in order to reduce the picker's average total travel distance, the access frequency over the bin locations of one aisle can be approximated by an exponential density function or a similar density function. All known calculation methods assume that the average number of order lines per order is greater than the number of aisles in the storage system. For small orders this assumption is often invalid. This paper presents a new approach for calculating the average total travel distance that allows the average number of order lines per order to be lower than the total number of aisles in the storage system and the access frequency over the bin locations of an aisle to be approximated by any density function.
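The effect of the access-frequency density on travel distance can be illustrated numerically. The sketch below is a Monte Carlo estimate, not the paper's analytical method, and all names are hypothetical: it assumes a single aisle in which the picker walks to the farthest requested bin and back, with bin positions of the order lines drawn from an arbitrary density.

```python
import numpy as np

def mean_travel_distance(sample_bin_position, lines_per_order,
                         n_orders=100_000, rng=None):
    """Monte Carlo estimate of the mean round-trip distance in one aisle:
    the picker walks to the farthest requested bin and back.
    `sample_bin_position(rng, size)` draws bin positions from the chosen
    access-frequency density."""
    rng = rng or np.random.default_rng(0)
    pos = sample_bin_position(rng, size=(n_orders, lines_per_order))
    return 2.0 * pos.max(axis=1).mean()

# Hypothetical 50 m aisle: uniform access vs. fast movers near the front.
uniform = lambda rng, size: rng.uniform(0.0, 50.0, size)
skewed = lambda rng, size: np.minimum(rng.exponential(10.0, size), 50.0)
```

Comparing the two densities shows that concentrating the access frequency near the aisle front (the exponential case) shortens the expected trip relative to random assignment.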
Abstract:
DCE-MRI is an important technique in the study of small animal cancer models because its sensitivity to vascular changes opens the possibility of quantitative assessment of early therapeutic response. However, extraction of physiologically descriptive parameters from DCE-MRI data relies upon measurement of the vascular input function (VIF), which represents the contrast agent concentration time course in the blood plasma. This is difficult in small animal models due to artifacts associated with partial volume, inflow enhancement, and the limited temporal resolution achievable with MR imaging. In this work, the development of a suite of techniques for high temporal resolution, artifact resistant measurement of the VIF in mice is described. One obstacle in VIF measurement is inflow enhancement, which decreases the sensitivity of the MR signal to the presence of contrast agent. Because the traditional techniques used to suppress inflow enhancement degrade the achievable spatiotemporal resolution of the pulse sequence, improvements can be achieved by reducing the time required for the suppression. Thus, a novel RF pulse which provides spatial presaturation contemporaneously with the RF excitation was implemented and evaluated. This maximizes the achievable temporal resolution by removing the additional RF and gradient pulses typically required for suppression of inflow enhancement. A second challenge is achieving the temporal resolution required for accurate characterization of the VIF, which exceeds what can be achieved with conventional imaging techniques while maintaining adequate spatial resolution and tumor coverage. Thus, an anatomically constrained reconstruction strategy was developed that allows for sampling of the VIF at extremely high acceleration factors, permitting capture of the initial pass of the contrast agent in mice. Simulation, phantom, and in vivo validation of all components were performed. 
Finally, the two components were used to perform VIF measurement in the murine heart. An in vivo study of the VIF reproducibility was performed, and an improvement in the measured injection-to-injection variation was observed. This will lead to improvements in the reliability of quantitative DCE-MRI measurements and increase their sensitivity.
Abstract:
Context. The ESA Rosetta spacecraft, currently orbiting comet 67P/Churyumov-Gerasimenko, has already provided in situ measurements of the dust grain properties from several instruments, particularly OSIRIS and GIADA. We propose adding value to those measurements by combining them with ground-based observations of the dust tail to monitor the overall, time-dependent dust-production rate and size distribution. Aims. To constrain the dust grain properties, we take Rosetta OSIRIS and GIADA results into account, and combine OSIRIS data during the approach phase (from late April to early June 2014) with a large data set of ground-based images that were acquired with the ESO Very Large Telescope (VLT) from February to November 2014. Methods. A Monte Carlo dust tail code, which has already been used to characterise the dust environments of several comets and active asteroids, has been applied to retrieve the dust parameters. Key properties of the grains (density, velocity, and size distribution) were obtained from Rosetta observations; these parameters were used as input to the code to considerably reduce the number of free parameters. In this way, the overall dust mass-loss rate and its dependence on the heliocentric distance could be obtained accurately. Results. The dust parameters derived from the inner coma measurements by OSIRIS and GIADA and from distant imaging using VLT data are consistent, except for the power index of the size-distribution function, which is alpha = -3, instead of alpha = -2, for grains smaller than 1 mm. This is possibly linked to the presence of fluffy aggregates in the coma. The onset of cometary activity occurs at approximately 4.3 AU, with a dust production rate of 0.5 kg/s, increasing up to 15 kg/s at 2.9 AU. This implies a dust-to-gas mass ratio varying between 3.8 and 6.5 for the best-fit model when combined with water-production rates from the MIRO experiment.
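The effect of the power index alpha on the grain-size distribution can be illustrated with inverse-CDF sampling of a power law. This is a generic hedged sketch, not the authors' Monte Carlo dust tail code; the function name and size bounds are hypothetical.

```python
import numpy as np

def sample_grain_radii(alpha, r_min, r_max, n, rng=None):
    """Draw grain radii from a power-law size distribution n(r) ∝ r**alpha
    on [r_min, r_max] by inverse-CDF sampling (valid for alpha != -1)."""
    rng = rng or np.random.default_rng(0)
    u = rng.random(n)
    a1 = alpha + 1.0
    # Invert the normalised CDF of r**alpha on [r_min, r_max].
    return (u * (r_max**a1 - r_min**a1) + r_min**a1) ** (1.0 / a1)
```

A steeper index (alpha = -3 rather than -2) shifts the sampled population toward smaller grains, which is the sense in which the VLT-derived distribution differs from the inner-coma one.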
Abstract:
Improving the representation of the hydrological cycle in Atmospheric General Circulation Models (AGCMs) is one of the main challenges in modeling the Earth's climate system. One way to evaluate model performance is to simulate the transport of water isotopes. Among those available, tritium (HTO) is an extremely valuable tracer, because its content in the different reservoirs involved in the water cycle (stratosphere, troposphere, ocean) varies by orders of magnitude. Previous work incorporated natural tritium into LMDZ-iso, a version of the LMDZ general circulation model enhanced by water isotope diagnostics. Here, for the first time, the anthropogenic tritium injected by each of the atmospheric nuclear-bomb tests between 1945 and 1980 was estimated and then implemented in the model; this creates an opportunity to evaluate certain aspects of LMDZ over several decades by following the bomb-tritium transient signal through the hydrological cycle. Simulations of tritium in water vapor and precipitation for the period 1950-2008, with both natural and anthropogenic components, are presented in this study. LMDZ-iso satisfactorily reproduces the general shape of the temporal evolution of tritium. However, LMDZ-iso simulates too high a bomb-tritium peak followed by too strong a decrease of tritium in precipitation. The overly diffusive vertical advection in AGCMs crucially affects the residence time of tritium in the stratosphere. This insight into model performance demonstrates that the implementation of tritium in an AGCM provides a new and valuable test of the modeled atmospheric transport, complementing water stable isotope modeling.
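The decades-long transient signal exists because tritium is radioactive with a half-life of about 12.32 years, a known physical constant. A minimal sketch of the decay law underlying the bomb-tritium transient (the function name is hypothetical, and any full AGCM tracer also undergoes transport and rain-out, which this ignores):

```python
import math

TRITIUM_HALF_LIFE_YR = 12.32  # tritium half-life in years

def tritium_remaining(initial_amount, years):
    """Inventory left after `years` of radioactive decay:
    N(t) = N0 * exp(-ln(2) * t / t_half)."""
    return initial_amount * math.exp(-math.log(2.0) * years / TRITIUM_HALF_LIFE_YR)
```

For example, a bomb-test injection in 1963 retains roughly 8% of its tritium by 2008, which is why the 1950-2008 simulation window captures the full rise and decay of the peak.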
Abstract:
The one-class support vector machine is an unsupervised algorithm that can learn a decision function from data of a single class for anomaly detection. Given training data from a single class, it can determine whether a new sample is similar to the training set. In this thesis, we study keystroke-dynamics pattern recognition with the one-class support vector machine for authenticating students in a remote summative assessment system at Université Laval. Since every student at Université Laval has a short, unique identifier used for all secure access to computing resources, we chose this character string as the medium for capturing users' keystroke dynamics and built our own database. After training a model for each student on their keystroke-dynamics data, we want to be able to identify the student and eventually detect impostors. Three classification methods were tested and discussed, which allowed us to observe the weaknesses of each method in this system. Evaluating the recognition rates highlighted their dependence on the number of signatures as well as on the number of characters used to build the signatures. Finally, we showed that correlations exist between the recognition rate and the dispersion of the feature distributions of the keystroke-dynamics signatures.
Abstract:
A Positive Buck-Boost (PBB) converter is a known DC-DC converter that can operate in step-up and step-down modes. Unlike Buck, Boost, and Inverting Buck-Boost converters, the inductor current of a PBB can be controlled independently of its voltage conversion ratio. In other words, the inductor of a PBB can be utilised as an energy storage unit in addition to its main function of energy transfer. In this paper, the capability of the PBB to store energy is exploited to achieve robustness against input voltage fluctuations and output current changes. The control strategy has been developed to keep accuracy, affordability, and simplicity acceptable. To improve the efficiency of the system, a Smart Load Controller (SLC) is suggested: with the SLC, extra current is stored only when sudden load changes occur; otherwise little extra current is stored.
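The independence of inductor current and conversion ratio can be seen from the ideal continuous-conduction-mode relations of a two-switch non-inverting (positive) buck-boost. This is a textbook-level sketch under ideal-component assumptions, not the paper's converter or control strategy:

```python
def pbb_steady_state(v_in, d_buck, d_boost, i_out):
    """Ideal CCM steady-state relations for a positive (non-inverting)
    buck-boost with independent buck and boost duty cycles:
        v_out = v_in * d_buck / (1 - d_boost)
        i_L   = i_out / (1 - d_boost)
    The same conversion ratio is reachable with different (d_buck, d_boost)
    pairs, each giving a different average inductor current."""
    v_out = v_in * d_buck / (1.0 - d_boost)
    i_inductor = i_out / (1.0 - d_boost)
    return v_out, i_inductor
```

For instance, a unity conversion ratio can be obtained with (d_buck=1.0, d_boost=0.0), storing minimal current, or with (d_buck=0.5, d_boost=0.5), which doubles the average inductor current — the degree of freedom the abstract describes as energy storage in the inductor.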
Abstract:
Two different methods to measure binocular longitudinal corneal apex movements were synchronously applied. High-speed videokeratoscopy at a sampling frequency of 15 Hz and a custom-designed ultrasound distance sensor at 100 Hz were used for the left and the right eye, respectively. Four healthy subjects participated in the study. Simultaneously, the cardiac electrical cycle (ECG) was registered for each subject at 100 Hz. Each measurement took 20 s. Subjects were asked to suppress blinking during the measurements. A rigid headrest and a bite-bar were used to minimize undesirable head movements. Time, frequency, and time-frequency representations of the acquired signals were obtained to establish their temporal and spectral contents. Coherence analysis was used to estimate the correlation between the measured signals. The results showed a close correlation between the corneal apex movements and the cardiopulmonary system. Unraveling these relationships could lead to a better understanding of the interactions between ocular biomechanics and vision. The advantages and disadvantages of the two methods in the context of measuring longitudinal movements of the corneal apex are outlined.
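The coherence analysis mentioned above can be sketched with SciPy on synthetic signals. The signal model here is an assumption for illustration (a 1.2 Hz "cardiac" sine leaking into a noisy "apex movement" trace); it is not the study's data, though the 100 Hz rate and 20 s duration match the abstract.

```python
import numpy as np
from scipy.signal import coherence

fs = 100.0                      # Hz, matching the ultrasound/ECG sampling rate
t = np.arange(0, 20, 1 / fs)    # 20 s recording
rng = np.random.default_rng(1)

heart = np.sin(2 * np.pi * 1.2 * t)                       # ~72 bpm rhythm
cornea = 0.5 * heart + 0.5 * rng.standard_normal(t.size)  # apex movement
unrelated = rng.standard_normal(t.size)                   # control signal

# Magnitude-squared coherence, estimated with Welch-averaged segments.
f, c_related = coherence(cornea, heart, fs=fs, nperseg=256)
_, c_unrelated = coherence(unrelated, heart, fs=fs, nperseg=256)

k = int(np.argmin(np.abs(f - 1.2)))  # frequency bin nearest the cardiac rate
```

High coherence at the cardiac frequency (and low coherence for the control signal) is the kind of evidence the study uses to link apex movement to the cardiopulmonary system.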
Abstract:
In this paper, we present a ∑GIi/D/1/∞ queue with heterogeneous input/output slot times. This queueing model can be regarded as an extension of the ordinary GI/D/1/∞ model. For this ∑GIi/D/1/∞ queue, we assume that several input streams arrive at the system according to different slot times; in other words, there are different slot times for the different input/output processes in the queueing model. The queueing model can therefore be used for an ATM multiplexer with heterogeneous input/output link capacities. Several cases of the queueing model are discussed to reflect different relationships among the input/output link capacities of an ATM multiplexer. In the queueing analysis, two approaches, the Markov model and the probability generating function technique, are adopted to derive the queue length distributions observed at different epochs. This model is particularly useful in the performance analysis of ATM multiplexers with heterogeneous input/output link capacities.
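The heterogeneous-slot setting can be made concrete with a small simulation: each input stream delivers a cell with some probability at its own slot boundary, while the server removes one cell per output slot. This is a hedged sketch of the model's dynamics, not the paper's Markov or generating-function analysis, and all parameter values are invented.

```python
import numpy as np

def simulate_queue(arrival_slots, service_slot, p_arrivals,
                   n_ticks=50_000, rng=None):
    """Slotted multiplexer simulation: stream i delivers a cell with
    probability p_arrivals[i] every arrival_slots[i] ticks; one cell
    departs every service_slot ticks. Returns queue lengths sampled
    at departure epochs."""
    rng = rng or np.random.default_rng(0)
    q, lengths = 0, []
    for tick in range(1, n_ticks + 1):
        for slot, p in zip(arrival_slots, p_arrivals):
            if tick % slot == 0 and rng.random() < p:
                q += 1                       # cell arrives from stream i
        if tick % service_slot == 0:
            lengths.append(q)                # observe at the departure epoch
            if q:
                q -= 1                       # one cell departs
    return np.array(lengths)
```

Different `arrival_slots` relative to `service_slot` correspond to the different input/output link-capacity relationships the paper enumerates; the queue stays bounded only while the total offered load is below the service rate.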
Abstract:
Objective: To investigate how age-related declines in vision (particularly contrast sensitivity), simulated using cataract-goggles and low-contrast stimuli, influence the accuracy and speed of cognitive test performance in older adults. An additional aim was to investigate whether declines in vision affect secondary memory more than primary memory. Method: Using a fully within-subjects design, 50 older drivers aged 66-87 years completed two tests of cognitive performance - letter matching (perceptual speed) and symbol recall (short-term memory) - under different viewing conditions that degraded visual input (low-contrast stimuli, cataract-goggles, and low-contrast stimuli combined with cataract-goggles, compared with normal viewing). Presentation time was also manipulated for letter matching. Visual function, as measured using standard charts, was taken into account in the statistical analyses. Results: Accuracy and speed on the cognitive tasks were significantly impaired when visual input was degraded. Furthermore, cognitive performance was positively associated with contrast sensitivity. Presentation time did not influence cognitive performance, and visual degradation did not differentially influence primary and secondary memory. Conclusion: Age-related declines in visual function can impact the accuracy and speed of cognitive performance, and therefore the cognitive abilities of older adults may be underestimated in neuropsychological testing. It is thus critical that visual function be assessed prior to testing, and that stimuli be adapted to older adults' sensory capabilities (e.g., by maximising stimulus contrast).
Abstract:
Most research on numerical development in children is behavioural, focusing on accuracy and response time in different problem formats. However, Temple and Posner (1998) used ERPs and the numerical distance task with 5-year-olds to show that the development of numerical representations is difficult to disentangle from the development of the executive components of response organization and execution. Here we use the numerical Stroop paradigm (NSP) and ERPs to study possible executive interference in numerical processing tasks in 6–8-year-old children. In the NSP, the numerical magnitude of the digits is task-relevant and the physical size of the digits is task-irrelevant. We show that younger children are highly susceptible to interference from irrelevant physical information such as digit size, but that access to the numerical representation is almost as fast in young children as in adults. We argue that the developmental trajectories for executive function and numerical processing may act together to determine numerical development in young children.
Abstract:
Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localization methods have been proposed, few attempts have been made to explore structural damage with frequency response functions (FRFs). This paper illustrates the damage identification and condition assessment of a beam structure using a new FRF-based damage index and Artificial Neural Networks (ANNs). In practice, using all available FRF data as input to artificial neural networks makes training and convergence impossible. Therefore, a data reduction technique, Principal Component Analysis (PCA), is introduced in the algorithm. In the proposed procedure, a large set of FRFs is divided into sub-sets in order to find the damage indices for different frequency points of different damage scenarios. The basic idea of the method is to establish features of the damaged structure using FRFs from different measurement points of different sub-sets of the intact structure. Using these features, damage indices for different damage cases of the structure are identified after reconstructing the available FRF data with PCA. The obtained damage indices, corresponding to different damage locations and severities, are introduced as input variables to the developed artificial neural networks. Finally, the effectiveness of the proposed method is illustrated and validated using the finite element model of a beam structure. The results show that the PCA-based damage index is suitable and effective for structural damage detection and condition assessment of building structures.
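The PCA reduction-and-reconstruction step that precedes the ANN can be sketched with a plain SVD. This is a generic illustration on synthetic data, not the paper's damage index or network; the function name and data shapes are hypothetical.

```python
import numpy as np

def pca_reduce(frfs, n_components):
    """Reduce a set of FRF magnitude vectors (one per row) to their first
    n_components principal components via SVD, returning both the scores
    (compact ANN input features) and the reconstructed FRFs."""
    mean = frfs.mean(axis=0)
    X = frfs - mean                                # centre the data
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_components].T               # low-dimensional features
    reconstructed = scores @ Vt[:n_components] + mean
    return scores, reconstructed
```

Because measured FRFs across a structure are highly redundant, a handful of components typically captures most of the variance, which is what makes the ANN training tractable.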