994 results for Processing times


Relevance:

30.00%

Publisher:

Abstract:

Laser Shock Processing (LSP) is developing as a key technology for improving the surface mechanical and corrosion resistance properties of metals, thanks to its ability to introduce intense compressive residual stress fields into high-elastic-limit materials by means of a laser-driven shock wave, generated with laser intensities exceeding the 10⁹ W/cm² threshold, pulse energies on the order of 1 J, and interaction times of several ns. However, because the physics of shock wave formation in the plasma following laser-matter interaction in the solid state is relatively difficult to describe, only limited knowledge is available on the way to a full comprehension and predictive assessment of the characteristic physical processes and material transformations, with specific consideration of real material properties. In the present paper, an account of the physical issues dominating the development of LSP processes from a moderately-high-intensity laser-matter interaction point of view is presented, along with the theoretical and computational methods developed by the authors for their predictive assessment and new experimental validation results obtained at laboratory scale.
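
As a rough illustration of the intensity regime involved, the mean intensity follows from I = E/(τ·A). A minimal sketch, with an assumed (illustrative) focal spot size not taken from the paper:

```python
# Back-of-the-envelope check of the LSP intensity threshold.
# The spot diameter below is illustrative, not a value from the paper.
import math

def mean_intensity(pulse_energy_j: float, pulse_duration_s: float,
                   spot_diameter_m: float) -> float:
    """Mean intensity I = E / (tau * A) over a circular focal spot, in W/m^2."""
    area = math.pi * (spot_diameter_m / 2) ** 2
    return pulse_energy_j / (pulse_duration_s * area)

# 1 J delivered in 10 ns onto a 1.5 mm spot (illustrative values)
i_w_m2 = mean_intensity(1.0, 10e-9, 1.5e-3)
print(f"{i_w_m2 / 1e4:.2e} W/cm^2")  # ~5.7e9 W/cm^2, above the 1e9 threshold
```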

Relevance:

30.00%

Publisher:

Abstract:

In this paper, an architecture based on a scalable and flexible set of Evolvable Processing arrays is presented. FPGA-native Dynamic Partial Reconfiguration (DPR) is used for evolution, which is done intrinsically, letting the system adapt autonomously to variable run-time conditions, including the presence of transient and permanent faults. The architecture supports different modes of operation, namely independent, parallel, cascaded and bypass modes, which can be used both during evolution and during normal operation. The evolvability of the architecture is combined with fault-tolerance techniques to enhance the platform with self-healing features, making it suitable for applications that require both high adaptability and reliability. Experimental results show that such a system may benefit from accelerated evolution times, increased performance and improved dependability, mainly by increasing tolerance to transient and permanent faults, as well as by providing some fault-identification capability. The evolvable HW array shown is tailored for window-based image processing applications.
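
The abstract does not spell out the evolution algorithm, but evolvable-hardware platforms of this kind typically run an intrinsic (1+λ) evolutionary strategy. A minimal software sketch, in which reconfiguration and on-chip fitness measurement are replaced by hypothetical stand-ins:

```python
# Minimal sketch of an intrinsic (1+lambda) evolution loop of the kind an
# evolvable-hardware platform might run. evaluate() is a hypothetical
# stand-in for DPR-based reconfiguration plus on-chip fitness measurement;
# it is not part of the paper's actual system.
import random

GENOME_LEN = 64        # abstract configuration bits for one processing element
LAMBDA = 4             # offspring per generation
MUTATION_RATE = 0.02

def mutate(genome):
    return [b ^ (random.random() < MUTATION_RATE) for b in genome]

def evaluate(genome) -> float:
    # Placeholder: in a real system this would load the candidate via
    # partial reconfiguration and measure image-filter quality in hardware.
    return -sum(genome)  # dummy fitness

def evolve(generations: int):
    parent = [random.randint(0, 1) for _ in range(GENOME_LEN)]
    best = evaluate(parent)
    for _ in range(generations):
        offspring = [mutate(parent) for _ in range(LAMBDA)]
        for child in offspring:
            f = evaluate(child)
            if f >= best:   # accept equal fitness to drift across plateaus
                parent, best = child, f
    return parent, best
```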

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an approach to create what we have called a Unified Sentiment Lexicon (USL). The approach aims at aligning, unifying, and expanding the set of sentiment lexicons available on the web in order to increase their robustness of coverage. One problem in the automatic unification of scores from different sentiment lexicons is that many lexical entries have a positive, negative, or neutral classification {P, Z, N} that depends on the unit of measurement used in the annotation methodology of the source lexicon. Our USL approach computes the unified strength of polarity of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between -1 and 1, where 1 indicates that the entries are perfectly correlated, 0 indicates no correlation, and -1 means they are perfectly inversely correlated; the UnifiedMetrics procedure is implemented for both CPU and GPU. Another problem is the high processing time required to compute all the lexical entries in the unification task: the USL approach therefore assigns a subset of lexical entries to each of the 1344 GPU cores and uses parallel processing to unify 155,802 lexical entries. The analysis shows that the resulting USL has 95,430 lexical entries, of which 35,201 are considered positive, 22,029 negative, and 38,200 neutral. Finally, the runtime was 10 minutes for the 95,430 lexical entries, a threefold reduction in the computing time of the UnifiedMetrics procedure.
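
A minimal sketch of the Pearson correlation computation underlying the unification step; the polarity scores below are illustrative, not taken from the source lexicons:

```python
# Pearson correlation between the polarity scores that two source lexicons
# assign to the same shared entries (values here are made up).
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

lexicon_a = [0.9, -0.7, 0.1, 0.8, -0.5]   # scores in hypothetical lexicon A
lexicon_b = [0.8, -0.6, 0.0, 0.9, -0.4]   # scores in hypothetical lexicon B
print(pearson(lexicon_a, lexicon_b))       # close to +1: strongly correlated
```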

Relevance:

30.00%

Publisher:

Abstract:

In this work we have carried out diagnostics of laser-produced plasmas (LPP), by means of emission spectroscopy, during Laser Shock Processing (LSP), which has been proposed as an alternative technology competitive with classical surface treatments. The ionic species present in the plasma, together with the electron density and temperature, provide significant indicators of the degree of surface effect on the treated material. In order to analyze these indicators, we performed optical emission spectroscopy studies of the laser-generated plasmas in different situations. A Q-switched Nd:YAG laser (λ = 1.06 μm, 10 ns pulse duration, 10 Hz repetition rate) was focused onto an aluminum sample (Al2024) in air and/or under LSP conditions (water flow). The pulse energy was set at 2.5 J per pulse. The electron density was measured in every case using the Stark broadening of the H Balmer α line (656.27 nm). In air, this measurement was cross-checked against the value obtained from the 281.62 nm line of Al II. Special attention has been paid to the self-absorption of the spectral lines used. The measurements were taken at different delay times after the laser pulse (1–8 μs), with a time window of 1 μs. Under LSP conditions the electron density obtained was between 10¹⁷ cm⁻³ for the shortest delays (4–6 μs) and 10¹⁶ cm⁻³ for the longest delays (7–8 μs).
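
For illustration, the electron density can be estimated from the Stark FWHM of Hα with an empirical relation such as that of Gigosos et al. (2003); the abstract does not state which tabulation the authors actually used, so treating it as representative is an assumption:

```python
# Hedged sketch: electron density from the Stark FWHM of the H Balmer alpha
# line, using the empirical relation FWHM[nm] = 1.098 * (n_e / 1e23 m^-3)^0.67965
# (Gigosos et al. 2003). This may not be the tabulation used in the paper.

def electron_density_from_halpha(fwhm_nm: float) -> float:
    """Return n_e in cm^-3 from the Stark FWHM (nm) of H-alpha."""
    n_e_m3 = 1e23 * (fwhm_nm / 1.098) ** (1 / 0.67965)
    return n_e_m3 * 1e-6  # convert m^-3 to cm^-3

print(f"{electron_density_from_halpha(1.2):.1e} cm^-3")  # ~1e17 for ~1 nm widths
```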

Relevance:

30.00%

Publisher:

Abstract:

The Gaia-ESO Survey is a large public spectroscopic survey that aims to derive radial velocities and fundamental parameters of about 10⁵ Milky Way stars in the field and in clusters. Observations are carried out with the multi-object optical spectrograph FLAMES, using simultaneously the medium-resolution (R ~ 20 000) GIRAFFE spectrograph and the high-resolution (R ~ 47 000) UVES spectrograph. In this paper we describe the methods and the software used for the data reduction, the derivation of the radial velocities, and the quality control of the FLAMES-UVES spectra. Data reduction has been performed using a workflow specifically developed for this project. This workflow runs the ESO public pipeline, optimizing the data reduction for the Gaia-ESO Survey; automatically performs sky subtraction, barycentric correction, and normalisation; and calculates radial velocities and a first guess of the rotational velocities. Quality control is performed using the output parameters of the ESO pipeline, by visual inspection of the spectra, and by analysis of their signal-to-noise ratio. Using the observations of the first 18 months, specifically targets observed multiple times at different epochs, stars observed with both GIRAFFE and UVES, and observations of radial-velocity standards, we estimated the precision and the accuracy of the radial velocities. The statistical error on the radial velocities is σ ~ 0.4 km s⁻¹ and is mainly due to uncertainties in the zero point of the wavelength calibration. However, we found a systematic bias with respect to the GIRAFFE spectra (~0.9 km s⁻¹) and to the radial velocities of the standard stars (~0.5 km s⁻¹) retrieved from the literature. This bias will be corrected in future data releases, once a common zero point for all the set-ups and instruments used in the survey is established.
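
The barycentric correction step mentioned above is a standard operation; a hedged sketch using astropy (with made-up target coordinates, observing time and measured velocity, not the survey's own wrapper around the ESO pipeline):

```python
# Illustrative barycentric radial-velocity correction with astropy.
# Target, epoch and measured RV are invented; FLAMES is at the VLT (Paranal).
from astropy.coordinates import SkyCoord, EarthLocation
from astropy.time import Time
import astropy.units as u

paranal = EarthLocation.of_site("paranal")
target = SkyCoord(ra=245.0 * u.deg, dec=-26.5 * u.deg)       # hypothetical star
t_obs = Time("2013-04-15T03:20:00", scale="utc")

v_bary = target.radial_velocity_correction(
    kind="barycentric", obstime=t_obs, location=paranal)
# First-order correction: add the barycentric term to the measured RV.
v_corrected = (-7.3 * u.km / u.s) + v_bary
print(v_bary.to(u.km / u.s), v_corrected.to(u.km / u.s))
```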

Relevance:

30.00%

Publisher:

Abstract:

Multiresolution Triangular Mesh (MTM) models are widely used to improve the performance of large-terrain visualization by replacing the original model with a simplified one. MTM models, which consist of both original and simplified data, are commonly stored in spatial database systems due to their size. The relatively slow access speed of disks makes data retrieval the bottleneck of such terrain visualization systems. Existing spatial access methods proposed to address this problem rely on main-memory MTM models, which leads to significant overhead during query processing. In this paper, we approach the problem from a new perspective and propose a novel MTM called direct mesh that is designed specifically for secondary storage. It natively supports available indexing methods and requires no modification to the MTM structure. Experimental results, based on two real-world data sets, show an average performance improvement of 5–10 times over existing methods.
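
The abstract does not detail the direct-mesh layout; as a generic illustration only (not the proposed structure), the retrieval pattern of a multiresolution mesh can be sketched as triangles keyed by (level-of-detail, spatial cell), so that a visualization query fetches one resolution for one region:

```python
# Toy illustration of multiresolution mesh retrieval (NOT the paper's
# "direct mesh"): triangles indexed by (lod, grid cell) of their centroid.
from collections import defaultdict

class ToyMTMStore:
    def __init__(self, cell_size: float):
        self.cell = cell_size
        self.index = defaultdict(list)    # (lod, ix, iy) -> [triangle, ...]

    def insert(self, lod: int, triangle):
        cx = sum(p[0] for p in triangle) / 3
        cy = sum(p[1] for p in triangle) / 3
        key = (lod, int(cx // self.cell), int(cy // self.cell))
        self.index[key].append(triangle)

    def query(self, lod: int, x0, y0, x1, y1):
        # In a disk-based system, each touched cell would be one page read.
        for ix in range(int(x0 // self.cell), int(x1 // self.cell) + 1):
            for iy in range(int(y0 // self.cell), int(y1 // self.cell) + 1):
                yield from self.index.get((lod, ix, iy), [])
```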

Relevance:

30.00%

Publisher:

Abstract:

Quantile computation has many applications, including data mining and financial data analysis. It has been shown that an ε-approximate summary can be maintained so that, given a quantile query q(φ, ε), the data item at rank ⌈φN⌉ may be approximately obtained within rank error precision εN over all N data items in a data stream or in a sliding window. However, scalable online processing of massive continuous quantile queries with different φ and ε poses a new challenge, because the summary is continuously updated with new arrivals of data items. In this paper, we first aim to dramatically reduce the number of distinct query results by grouping a set of different queries into a cluster, so that they can be processed virtually as a single query while the precision requirements of users are retained. Second, we aim to minimize the total query processing cost. Efficient algorithms are developed to minimize the total number of times clusters are reprocessed and to produce the minimum number of clusters, respectively. The techniques are extended to maintain near-optimal clustering when queries are registered and removed in an arbitrary fashion against whole data streams or sliding windows. In addition to theoretical analysis, our performance study indicates that the proposed techniques are indeed scalable with respect to the number of input queries as well as the number of items and the item arrival rate in a data stream.
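
A naive illustration of the clustering idea (not the paper's algorithms): queries whose precision intervals [φ−ε, φ+ε] still share a common point can all be served by one representative rank fraction:

```python
# Greedy merging of quantile queries (phi, eps): a cluster is kept as long
# as the intersection of its members' [phi-eps, phi+eps] intervals is
# non-empty; any point in that intersection answers every member.
def cluster_queries(queries):
    clusters = []                      # each: {"members", "lo", "hi"}
    for phi, eps in sorted(queries):
        lo, hi = phi - eps, phi + eps
        if clusters and lo <= clusters[-1]["hi"]:
            c = clusters[-1]
            c["members"].append((phi, eps))
            c["lo"], c["hi"] = max(c["lo"], lo), min(c["hi"], hi)
        else:
            clusters.append({"members": [(phi, eps)], "lo": lo, "hi": hi})
    return [(c["members"], (c["lo"] + c["hi"]) / 2) for c in clusters]

# Three queries collapse into two clusters, hence two summary look-ups:
print(cluster_queries([(0.50, 0.02), (0.51, 0.02), (0.90, 0.01)]))
```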

Relevance:

30.00%

Publisher:

Abstract:

The current study examined the contribution of phonological processing abilities and ADHD-like behaviours to first-grade word reading ability. A total of 136 children were tested at the beginning (T1) and end (T2) of first grade. At both times, teachers rated children on hyperactive, inattentive, and oppositional behaviour. Children were given tests of letter knowledge at T1 and tests of word reading, phonological sensitivity, phonological memory, rapid automatised naming, and vocabulary at T1 and T2. Regression analyses revealed that, of the behavioural measures, inattention made the strongest contribution to T2 reading, even after controlling for the effects of T1 reading, hyperactivity, and oppositional behaviour. Hyperactivity did not explain variance in T2 reading once the effect of inattention was controlled. Inattention predicted 4.7% independent variance in T2 word reading ability, even after the effects of T1 reading, vocabulary, and phonological processing were controlled. Although phonological processing predicted 9.3% independent variance in T2 word reading, even after the effects of reading, vocabulary, and inattention were controlled, the effects of phonological processing may have been partly mediated by inattention. This research indicates that inattention contributes to the prediction of early reading development in unselected populations, and that this influence is independent of other key cognitive predictors of reading ability.

Relevance:

30.00%

Publisher:

Abstract:

The physical implementation of quantum information processing is one of the major challenges of current research. In the last few years, several theoretical proposals and experimental demonstrations on a small number of qubits have been carried out, but a quantum computing architecture that is straightforwardly scalable, universal, and realizable with state-of-the-art technology is still lacking. In particular, a major ultimate objective is the construction of quantum simulators, yielding massively increased computational power in simulating quantum systems. Here we investigate promising routes towards the actual realization of a quantum computer, based on spin systems. The first employs molecular nanomagnets with a doublet ground state to encode each qubit and exploits the wide chemical tunability of these systems to obtain the proper topology of inter-qubit interactions. Indeed, recent advances in coordination chemistry allow us to arrange these qubits in chains, with tailored interactions mediated by magnetic linkers. These act as switches of the effective qubit-qubit coupling, thus enabling the implementation of one- and two-qubit gates. Molecular qubits can be controlled either by uniform magnetic pulses or by local electric fields. We introduce two different schemes for quantum information processing, with either global or local control of the inter-qubit interaction, and demonstrate the high performance of these platforms by simulating the system time evolution with state-of-the-art parameters. The second architecture we propose is based on a hybrid spin-photon qubit encoding, which exploits the best characteristics of photons, whose mobility is used to efficiently establish long-range entanglement, and of spin systems, which ensure long coherence times. The setup consists of spin ensembles coherently coupled to single photons within superconducting coplanar waveguide resonators. The tunability of the resonators' frequency is exploited as the only manipulation tool to implement a universal set of quantum gates, by bringing the photons into and out of resonance with the spin transition. The time evolution of the system subject to the pulse sequences used to implement complex quantum algorithms has been simulated by numerically integrating the master equation for the system density matrix, thus including the harmful effects of decoherence. Finally, a scheme to overcome the leakage of information due to inhomogeneous broadening of the spin ensemble is pointed out. Both proposed setups are based on state-of-the-art technological achievements. By extensive numerical experiments we show that their performance is remarkably good, even for the implementation of the long gate sequences used to simulate interesting physical models. The systems examined here are therefore promising building blocks of future scalable architectures and can be used for proof-of-principle experiments in quantum information processing and quantum simulation.
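
A hedged sketch of the kind of master-equation (Lindblad) simulation described, using QuTiP on a single dephasing qubit with illustrative parameters; the actual work simulates full gate sequences on multi-spin and spin-photon systems:

```python
# Lindblad master-equation time evolution for one qubit with pure dephasing.
# All parameters are illustrative, not the paper's state-of-the-art values.
import numpy as np
import qutip as qt

omega = 2 * np.pi * 1.0                       # qubit splitting (arbitrary units)
t2 = 50.0                                     # dephasing time (illustrative)

H = 0.5 * omega * qt.sigmaz()                 # free Hamiltonian
c_ops = [np.sqrt(1.0 / t2) * qt.sigmaz()]     # pure-dephasing collapse operator
rho0 = qt.ket2dm((qt.basis(2, 0) + qt.basis(2, 1)).unit())  # |+> state

tlist = np.linspace(0.0, 100.0, 400)
result = qt.mesolve(H, rho0, tlist, c_ops, e_ops=[qt.sigmax()])
print(result.expect[0][:5])   # <sigma_x> oscillates and decays with decoherence
```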

Relevance:

30.00%

Publisher:

Abstract:

This thesis represents a significant part of the research activity conducted during the PhD program in Information Technologies, supported by Selta S.p.A., Cadeo, Italy, and focused on the analysis and design of a Power Line Communications (PLC) system. In recent times, PLC technologies have been considered for integration in Smart Grid architectures, as they exploit the existing power line infrastructure for information transmission on low-, medium- and high-voltage lines. The characterization of a reliable PLC system is a current object of research, as is the design of modems for communication over power lines. In this thesis, the focus is on the analysis of a full-duplex PLC modem for communication over high-voltage lines and, in particular, on the design of the echo canceller device and of innovative channel coding schemes.
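
The thesis's echo-canceller design is not described in this abstract; as a generic illustration, a full-duplex modem's echo canceller is often an adaptive FIR filter that subtracts the estimated echo of the local transmit signal from the received signal, e.g. the normalised LMS (NLMS) sketch below with illustrative tap count and step size:

```python
# NLMS adaptive echo canceller sketch; parameters are illustrative only.
import numpy as np

def nlms_echo_canceller(tx, rx, taps=64, mu=0.5, eps=1e-8):
    """Return the residual e[n] = rx[n] - w.x[n] after adaptive cancellation."""
    w = np.zeros(taps)                      # adaptive FIR estimate of echo path
    out = np.zeros_like(rx)
    for n in range(taps - 1, len(rx)):
        x = tx[n - taps + 1 : n + 1][::-1]  # tx[n], tx[n-1], ..., tx[n-taps+1]
        e = rx[n] - w @ x                   # residual after echo estimate
        w += (mu / (x @ x + eps)) * e * x   # normalised LMS update
        out[n] = e
    return out

rng = np.random.default_rng(0)
tx = rng.standard_normal(5000)              # local transmit signal
h_echo = 0.2 * rng.standard_normal(8)       # unknown causal echo path
rx = np.convolve(tx, h_echo)[: len(tx)] + 0.1 * rng.standard_normal(5000)
residual = nlms_echo_canceller(tx, rx)
print(np.var(residual[-1000:]))             # converges toward far-end power (~0.01)
```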

Relevance:

30.00%

Publisher:

Abstract:

Adults show great variation in their auditory skills, such as being able to discriminate between foreign speech sounds. Previous research has demonstrated that structural features of auditory cortex can predict auditory abilities; here we are interested in the maturation of 2-Hz frequency-modulation (FM) detection, a task thought to tap into mechanisms underlying language abilities. We hypothesized that an individual's FM threshold would correlate with gray-matter density in left Heschl's gyrus, and that this function-structure relationship would change through adolescence. To test this hypothesis, we collected anatomical magnetic resonance imaging data from participants who were tested and scanned at three time points: at 10, 11.5 and 13 years of age. Participants judged which of two tones contained FM; the modulation depth was adjusted using an adaptive staircase procedure, and the threshold was calculated as the geometric mean of the last eight reversals. Using voxel-based morphometry, we found that FM threshold was significantly correlated with gray-matter density in left Heschl's gyrus at the age of 10 years, but that this correlation weakened with age. While there were no sex differences at Times 1 and 2, at Time 3 there was a relationship between FM threshold and gray-matter density in left Heschl's gyrus in boys but not in girls. Taken together, our results confirm that the structure of the auditory cortex can predict temporal processing abilities, namely that gray-matter density in left Heschl's gyrus can predict the 2-Hz FM detection threshold. This ability depends on the processing of sounds changing over time, a skill believed necessary for speech processing. We tested this assumption and found that FM threshold significantly correlated with spelling abilities at Time 1, but only in boys. This correlation decreased at Time 2, and at Time 3 we found a significant correlation between reading and FM threshold, but again only in boys. We examined the sex differences in both the imaging and behavioral data taking pubertal stage into account, and found that the correlation between FM threshold and spelling was strongest pre-pubertally, while the correlation between FM threshold and gray-matter density in left Heschl's gyrus was strongest mid-pubertally.
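
A minimal sketch of the threshold computation described (geometric mean of the last eight staircase reversals); the reversal values are invented for illustration:

```python
# Threshold from an adaptive staircase: geometric mean of the final reversals.
import math

def staircase_threshold(reversals, n_last=8):
    last = reversals[-n_last:]
    return math.exp(sum(math.log(v) for v in last) / len(last))

# Hypothetical FM-depth values at which the track reversed direction:
reversal_depths = [8.0, 4.0, 6.0, 3.0, 4.5, 2.5, 3.5, 2.0, 3.0, 2.2]
print(f"FM depth threshold = {staircase_threshold(reversal_depths):.2f}")
```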

Relevance:

30.00%

Publisher:

Abstract:

Although advertising is pervasive in our daily lives, it is not always effective, owing to poor conditions or poor contexts of reception. Indeed, the communication process may be jeopardized at its very last stage by the quality of advertising exposure. However critical it may be, ad exposure quality has received little attention from researchers or practitioners. In this paper, we investigate how tiredness combined with ad complexity influences the way consumers extract and process ad elements. Investigating tiredness is useful because it is a common daily state experienced by everyone at various moments of the day, and although it may drastically alter ad reception, it has not yet been studied in advertising. To this end, we observe the eye-movement patterns of consumers viewing simple or complex advertisements while tired or not. Surprisingly, we find that tired subjects viewing complex ads do not adopt an effort-lessening visual strategy; rather, they use a resource-demanding one. We suggest that the Sustained Attention strategy observed is a kind of adaptive strategy for dealing with an anticipated lack of resources.

Relevance:

30.00%

Publisher:

Abstract:

In this study, the impact of senescence and harvest time of Miscanthus on the quality of fast pyrolysis liquid (bio-oil) was investigated. Bio-oil was produced using a 1 kg h⁻¹ fast pyrolysis reactor to obtain a quantity of bio-oil comparable with existing industrial reactors. Bio-oil stability was measured through changes in viscosity, water content, pH and heating value under specific conditions. Plant developmental characteristics were significantly different (P = 0.05) between all harvest points. The stage of crop senescence was correlated with nutrient remobilisation (N, P, K; r = 0.9043, 0.9920 and 0.9977, respectively) and affected bio-oil quality. Harvest time and senescence impacted bio-oil quality and stability. For fast pyrolysis processing of Miscanthus, the harvest window can be extended while still maintaining bio-oil quality, but this may impact mineral depletion in, and the long-term sustainability of, the crop unless these minerals can be recycled.

Relevance:

30.00%

Publisher:

Abstract:

We report the performance of a group of adult dyslexics and matched controls in an array-matching task in which two strings of either consonants or symbols are presented side by side and must be judged as same or different. The arrays may differ in either the order or the identity of two adjacent characters. This task does not require naming – which has been argued to be the cause of dyslexics' difficulty in processing visual arrays – but instead has a strong serial component, as demonstrated by the fact that, in both groups, reaction times (RTs) increase monotonically with the position of a mismatch. The dyslexics are clearly impaired in all conditions, and performance in the identity conditions predicts performance across orthographic tasks even after age, performance IQ and phonology are partialled out. Moreover, the shapes of the serial position curves are revealing of the underlying impairment. In the dyslexics, RTs increase with position at the same rate as in the controls (the lines are parallel), ruling out reduced processing speed or difficulties in shifting attention. Instead, error rates show a catastrophic increase for positions that are either searched later or more subject to interference. These results are consistent with a reduction in the attentional capacity needed, in a serial task, to bind together identity and positional information. This capacity is best seen as a reduction in the number of spotlights into which attention can be split to process information at different locations, rather than as a more generic reduction of resources, which would also affect processing the details of single objects.