965 results for parallel optical data storage


Relevance:

40.00%

Publisher:

Abstract:

It is generally assumed that the variability of neuronal morphology has an important effect on both the connectivity and the activity of the nervous system, but this effect has not been thoroughly investigated. Neuroanatomical archives represent a crucial tool to explore structure-function relationships in the brain. We are developing computational tools to describe, generate, store and render large sets of three-dimensional neuronal structures in a format that is compact, quantitative, accurate and readily accessible to the neuroscientist. Single-cell neuroanatomy can be characterized quantitatively at several levels. In computer-aided neuronal tracing files, a dendritic tree is described as a series of cylinders, each represented by diameter, spatial coordinates and connectivity to other cylinders in the tree. This 'Cartesian' description constitutes a completely accurate mapping of dendritic morphology, but it conveys little intuitive information to the neuroscientist. In contrast, a classical neuroanatomical analysis characterizes neuronal dendrites on the basis of the statistical distributions of morphological parameters, e.g. maximum branching order or bifurcation asymmetry. This description is intuitively more accessible, but it only yields information on the collective anatomy of a group of dendrites, i.e. it is not complete enough to provide a precise 'blueprint' of the original data. We are adopting a third, intermediate level of description, which consists of the algorithmic generation of neuronal structures within a certain morphological class based on a set of 'fundamental', measured parameters. This description is as intuitive as a classical neuroanatomical analysis (parameters have an intuitive interpretation) and as complete as a Cartesian file (the algorithms generate and display complete neurons). The advantages of the algorithmic description of neuronal structure are immense. If an algorithm can measure the values of a handful of parameters from an experimental database and generate virtual neurons whose anatomy is statistically indistinguishable from that of their real counterparts, a great deal of data compression and amplification can be achieved. Data compression results from the quantitative and complete description of thousands of neurons with a handful of statistical distributions of parameters. Data amplification is possible because, from a set of experimental neurons, many more virtual analogues can be generated. This approach could allow one, in principle, to create and store a neuroanatomical database containing data for an entire human brain on a personal computer. We are using two programs, L-NEURON and ARBORVITAE, to investigate systematically the potential of several different algorithms for the generation of virtual neurons. Using these programs, we have generated anatomically plausible virtual neurons for several morphological classes, including guinea pig cerebellar Purkinje cells and cat spinal cord motor neurons. These virtual neurons are stored in an online electronic archive of dendritic morphology. This process highlights the potential and the limitations of the 'computational neuroanatomy' strategy for neuroscience databases.
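
As a rough illustration of this generative level of description (not the actual L-NEURON or ARBORVITAE algorithms), the following Python sketch grows a virtual dendritic tree by sampling a handful of morphological parameters from assumed statistical distributions; all parameter names and values are hypothetical placeholders.

    # Sketch: grow a binary dendritic tree from sampled parameters.
    # Distributions and thresholds are illustrative, not measured values.
    import random

    def grow_dendrite(diameter, order=0, max_order=6):
        """Recursively grow a dendritic subtree; returns a nested dict."""
        # Termination: very thin branches or deep branching orders stop.
        if diameter < 0.3 or order >= max_order:
            return {"diameter": diameter, "children": []}
        length = random.gauss(40.0, 10.0)        # segment length (um), assumed
        if random.random() < 0.6:                # bifurcation probability, assumed
            ratio = random.uniform(0.5, 0.8)     # daughter diameter split
            children = [grow_dendrite(diameter * ratio, order + 1, max_order),
                        grow_dendrite(diameter * (1 - ratio), order + 1, max_order)]
        else:
            children = []                        # branch terminates
        return {"diameter": diameter, "length": length, "children": children}

    virtual_neuron = [grow_dendrite(2.0) for _ in range(5)]  # five stem dendrites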

Relevance:

40.00%

Publisher:

Abstract:

Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data for future advertisement campaigns; and finance experts are interested in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.

Relevance:

40.00%

Publisher:

Abstract:

Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in parallel data mining algorithms and, in particular, in the k-means algorithm for cluster analysis. In the straightforward parallel formulation of the k-means algorithm, data and computation loads are uniformly distributed over the processing nodes. This approach has excellent load balancing characteristics that may suggest it could scale up to large and extreme-scale parallel computing systems. However, at each iteration step the algorithm requires a global reduction operation, which hinders the scalability of the approach. This work studies a different parallel formulation of the algorithm in which the requirement of global communication is removed, while maintaining the same deterministic nature as the centralised algorithm. The proposed approach exploits a non-uniform data distribution which can either be found in real-world distributed applications or be induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
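
For reference, here is a minimal sketch of the straightforward parallel k-means formulation described above, with the per-iteration global reduction that limits scalability made explicit; it assumes mpi4py and NumPy and is not the paper's communication-free formulation.

    # Straightforward parallel k-means: uniform data split, global Allreduce
    # each iteration (the communication bottleneck the paper removes).
    # Run with e.g.: mpiexec -n 4 python kmeans_mpi.py
    import numpy as np
    from mpi4py import MPI

    def parallel_kmeans(local_X, k, iters=20):
        comm = MPI.COMM_WORLD
        d = local_X.shape[1]
        # All ranks start from the same centroids, broadcast from rank 0.
        centroids = comm.bcast(local_X[:k].copy() if comm.rank == 0 else None, root=0)
        for _ in range(iters):
            # Local assignment of points to nearest centroid.
            dists = ((local_X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
            labels = dists.argmin(axis=1)
            # Local partial sums and counts per cluster.
            sums = np.zeros((k, d)); counts = np.zeros(k)
            for j in range(k):
                members = local_X[labels == j]
                sums[j] = members.sum(axis=0); counts[j] = len(members)
            # Global reduction: every node synchronises every iteration.
            comm.Allreduce(MPI.IN_PLACE, sums)
            comm.Allreduce(MPI.IN_PLACE, counts)
            centroids = sums / np.maximum(counts, 1)[:, None]
        return centroids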

Relevance:

40.00%

Publisher:

Abstract:

Exascale systems are the next frontier in high-performance computing and are expected to deliver a performance of the order of 10^18 operations per second using massive multicore processors. Very large- and extreme-scale parallel systems pose critical algorithmic challenges, especially related to concurrency, locality and the need to avoid global communication patterns. This work investigates a novel protocol for dynamic group communication that can be used to remove the global communication requirement and to reduce the communication cost in parallel formulations of iterative data mining algorithms. The protocol is used to provide a communication-efficient parallel formulation of the k-means algorithm for cluster analysis. The approach is based on a collective communication operation for dynamic groups of processes and exploits non-uniform data distributions. Non-uniform data distributions can be either found in real-world distributed applications or induced by means of multidimensional binary search trees. The analysis of the proposed dynamic group communication protocol has shown that it does not introduce significant communication overhead. The parallel clustering algorithm has also been extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
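
As an illustration of one ingredient of this approach (the induced non-uniform distribution, not the dynamic group communication protocol itself), the sketch below uses median splits of a multidimensional binary search tree to give each process a spatially compact block of the data; all names are illustrative.

    # Induce a non-uniform, spatially compact data distribution with
    # k-d-tree-style median splits: one block per processing element.
    import numpy as np

    def kd_partition(X, n_parts, depth=0):
        """Recursively split X into n_parts spatially compact blocks."""
        if n_parts == 1:
            return [X]
        axis = depth % X.shape[1]              # cycle through dimensions
        order = np.argsort(X[:, axis])
        mid = len(X) // 2                      # median split on this axis
        left, right = X[order[:mid]], X[order[mid:]]
        return (kd_partition(left, n_parts // 2, depth + 1) +
                kd_partition(right, n_parts - n_parts // 2, depth + 1))

    parts = kd_partition(np.random.rand(10000, 3), 64)  # one block per process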

Relevance:

40.00%

Publisher:

Abstract:

Advances in hardware technologies allow data to be captured and processed in real time, and the resulting high-throughput data streams require novel data mining approaches. The research area of Data Stream Mining (DSM) develops data mining algorithms that allow us to analyse these continuous streams of data in real time. The creation and real-time adaptation of classification models from data streams is one of the most challenging DSM tasks. Current classifiers for streaming data address this problem by using incremental learning algorithms. However, even though these algorithms are fast, they are challenged by high-velocity data streams, where data instances arrive at a fast rate. This is problematic if the application requires little or no delay between changes in the patterns of the stream and the absorption of these patterns by the classifier. Problems of scalability to Big Data of traditional data mining algorithms for static (non-streaming) datasets have been addressed through the development of parallel classifiers. However, there is very little work on the parallelisation of data stream classification techniques. In this paper we investigate K-Nearest Neighbours (KNN) as the basis for a real-time adaptive and parallel methodology for scalable data stream classification tasks.
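
As a rough sketch of the starting point, the snippet below implements KNN over a bounded sliding window, the kind of adaptive stream classifier such a methodology builds on; in a parallel version the window or the distance computations would be partitioned across workers. Class and parameter names are illustrative, not the paper's implementation.

    # KNN over a sliding window: bounded memory, old instances expire,
    # so the model adapts to changes in the stream's patterns.
    from collections import deque, Counter
    import math

    class SlidingWindowKNN:
        def __init__(self, k=5, window=1000):
            self.k = k
            self.window = deque(maxlen=window)   # oldest instances fall out

        def learn(self, x, y):
            self.window.append((x, y))           # incremental, constant time

        def predict(self, x):
            neighbours = sorted(self.window,
                                key=lambda p: math.dist(x, p[0]))[:self.k]
            return Counter(y for _, y in neighbours).most_common(1)[0][0]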

Relevance:

40.00%

Publisher:

Abstract:

Background: There is a metabolic pathway by which mammals can convert the omega-3 (n-3) essential fatty acid α-linolenic acid (ALA) into longer-chain n-3 polyunsaturated fatty acids (LC n-3 PUFA) including eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA). As far as we know there are currently no studies that have specifically examined sex differences in the LC n-3 PUFA response to increased dietary ALA intake in humans, although acute studies with isotope-labelled ALA identified that women have a significantly greater capacity to synthesise EPA and DHA from ALA compared to men. Findings: Available data from a placebo-controlled, randomised study were re-examined to identify whether there are sex differences in the LC n-3 PUFA response to increased dietary ALA intake in humans. There was a significant difference between sexes in the response to increased dietary ALA, with women having a significantly greater increase in the EPA content of plasma phospholipids (mean +2.0% of total fatty acids) after six months of an ALA-rich diet compared to men (mean +0.7%, P = 0.039). Age and BMI were identified as predictors of response to dietary ALA among women. Conclusions: Women show a greater increase in circulating EPA than men during increased dietary ALA consumption. Further understanding of individual variation in the response to dietary ALA could inform nutrition advice, with recommendations being specifically tailored according to habitual diet, sex, age and BMI.

Relevance:

40.00%

Publisher:

Abstract:

This paper reviews the literature concerning the practice of using Online Analytical Processing (OLAP) systems to recall information stored by Online Transactional Processing (OLTP) systems. The review provides a basis for discussion of the need for information recalled through OLAP systems to maintain the contexts of the transactions whose data were captured by the respective OLTP system. The paper observes an industry trend in which OLTP systems process information into data that are then stored in databases without the business rules that were used to capture them. This necessitates a practice whereby sets of business rules are used to extract, cleanse, transform and load data from disparate OLTP systems into OLAP databases to support the requirements for complex reporting and analytics. These sets of business rules are usually not the same as the business rules used to capture the data in the particular OLTP systems. The paper argues that differences between the business rules used to interpret the same data sets risk gaps in semantics between information captured by OLTP systems and information recalled through OLAP systems. Literature concerning the modelling of business transaction information as facts with context, as part of the modelling of information systems, was reviewed to identify design trends that contribute to the design quality of OLTP and OLAP systems. The paper then argues that the quality of OLTP and OLAP system design depends critically on the capture of facts with associated context, the encoding of facts with context into data with business rules, the storage and sourcing of data with business rules, the decoding of data with business rules back into facts with context, and the recall of facts with associated contexts. The paper proposes UBIRQ, a design model to aid the co-design of data and business-rule storage for OLTP and OLAP purposes. The proposed design model provides the opportunity to implement and use multi-purpose databases and business-rule stores for OLTP and OLAP systems. Such implementations would enable OLTP systems to record and store data together with the executions of business rules, allowing both OLTP and OLAP systems to query data with the business rules used to capture them, thereby ensuring that information recalled via OLAP systems preserves the contexts of transactions as per the data captured by the respective OLTP system.
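
A toy sketch of the core idea (not the UBIRQ model itself): each fact is stored with a reference to the business rule under which it was captured, so analytical recall can return the fact together with its capture context. All table and column names below are hypothetical.

    # Facts stored with a reference to their capture rule, so a recall
    # query returns the fact together with its capture context.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE business_rule (rule_id INTEGER PRIMARY KEY, definition TEXT);
    CREATE TABLE sale_fact (fact_id INTEGER PRIMARY KEY, amount REAL,
                            rule_id INTEGER REFERENCES business_rule(rule_id));
    """)
    db.execute("INSERT INTO business_rule VALUES (1, 'net of tax, EUR')")
    db.execute("INSERT INTO sale_fact VALUES (100, 250.0, 1)")

    # OLAP-style recall that preserves the capture context:
    for row in db.execute("""SELECT f.amount, r.definition
                             FROM sale_fact f
                             JOIN business_rule r USING (rule_id)"""):
        print(row)   # (250.0, 'net of tax, EUR')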

Relevance:

40.00%

Publisher:

Abstract:

In this paper we describe the development of a program that aims at the optimal integration of observed data in an oceanographic model describ

Relevance:

40.00%

Publisher:

Abstract:

We report results on the electronic, vibrational, and optical properties of SnO₂ obtained using first-principles calculations performed within density functional theory. All the calculated phonon frequencies, the real and imaginary parts of the complex dielectric function, the energy-loss spectrum, the refractive index, and the extinction and absorption coefficients show good agreement with experimental results. Based on our calculations, the SnO₂ electron and hole effective masses were found to be strongly anisotropic. The lattice contribution to the low-frequency region of the SnO₂ dielectric function arising from optical phonons was also determined, yielding the lattice values ε₁⊥(0) = 14.6 and ε₁∥(0) = 10.7 for directions perpendicular and parallel to the tetragonal c-axis, respectively. This is in excellent agreement with the available experimental data. After adding the electronic contribution to the lattice contribution, a total average value of ε₁(0) = 18.2 is predicted for the static permittivity of SnO₂.

Relevance:

40.00%

Publisher:

Abstract:

Red, blue and green emitting rare earth compounds (RE³⁺ = Eu³⁺, Gd³⁺ and Tb³⁺) containing the benzenetricarboxylate (BTC) ligands [hemimellitic (EMA), trimellitic (TLA) and trimesic (TMA)] were synthesized and characterized by elemental analysis, complexometric titration, X-ray diffraction, thermogravimetric analysis and infrared spectroscopy. The complexes presented the following formulae: [RE(EMA)(H₂O)₂], [RE(TLA)(H₂O)₄] and [RE(TMA)(H₂O)₆], except for the Tb-TMA compound, which was obtained only in anhydrous form. Phosphorescence data for the Gd³⁺-(BTC) complexes showed that the triplet states (T) of the BTC³⁻ anions have higher energy than the main emitting states of Eu³⁺ (⁵D₀) and Tb³⁺ (⁵D₄), indicating that the BTC ligands can act as intramolecular energy donors for these metal ions. The high values of the experimental intensity parameters (Ω₂) of the Eu³⁺-(BTC) complexes indicate that the europium ion is in a highly polarizable chemical environment. Based on the luminescence spectra, the energy transfer from the T state of the BTC ligands to the excited ⁵D₀ and ⁵D₄ levels of the Eu³⁺ and Tb³⁺ ions is discussed. The emission quantum efficiencies (η) of the ⁵D₀ emitting level of the Eu³⁺ ion have also been determined. In the case of the Tb³⁺ ion, the photoluminescence data show the high emission intensity of the characteristic ⁵D₄ → ⁷F_J (J = 0-6) transitions, indicating that the BTC ligands are good sensitizers. The RE³⁺-(BTC) complexes act as efficient light conversion molecular devices (LCMDs) and can be used as tricolor luminescent materials.

Relevance:

40.00%

Publisher:

Abstract:

In boreal forest regions, a great portion of forest tree seedlings are stored indoors in late autumn to protect them from outdoor winter damage. For seedlings to survive storage, it is crucial that they store well and can cope with the dark, cold storage environment. The aim of this study was to search for genes that can determine the vitality status of Norway spruce (Picea abies (L.) Karst.) seedlings during frozen storage. Furthermore, the sensitivity of the ColdNSure™ test, a gene activity test that predicts storability, was assessed. The storability of seedlings was tested biweekly by evaluating damage with the gene activity test and with the electrolyte leakage test after freezing seedlings to −25 °C (the SELdiff-25 method). In parallel, seedlings were stored frozen at −3 °C. According to both methods, seedlings were considered storable from week 41. This also corresponded to the post-storage results determined at the end of the storage period. In order to identify vitality indicators, Next Generation Sequencing (NGS) was performed on bud samples collected during storage. Comparing physiological post-storage data to gene analysis data revealed numerous vitality-related genes. To validate the results, a second trial was performed. In this trial, gene activity was better at predicting seedling storability than the conventional freezing test, which indicates a high sensitivity of this molecular assay. For multiple indicators a clear switch between damaged and vital seedlings was observed. A collection of indicators will be used in the future development of a commercial vitality test.


Relevance:

40.00%

Publisher:

Abstract:

This work presents the design, simulation, and analysis of two optical interconnection networks for a Dataflow parallel computer architecture. To verify the performance of the optical interconnection networks in the Dataflow architecture, we analyzed the load balancing among the processors during the execution of parallel programs. Load balancing is a very important parameter because it is directly associated with the degree of dataflow parallelism. This article shows that optical interconnection networks designed with simple optical devices can efficiently meet the dataflow architecture's requirements for a high-performance communication system.
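
As a simple illustration of the load-balancing parameter discussed above (not the authors' metric), one can compare each processor's load against the mean; the load figures below are made up.

    # Max/mean load ratio as a simple imbalance measure: 1.0 = balanced.
    def imbalance(loads):
        mean = sum(loads) / len(loads)
        return max(loads) / mean if mean else float("inf")

    print(imbalance([980, 1010, 1005, 1002]))  # ~1.01: well balanced
    print(imbalance([1800, 700, 750, 747]))    # ~1.80: one hot processor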

Relevance:

40.00%

Publisher:

Abstract:

Changes in oceanic heat storage (HS) can reveal important evidence of climate variability related to ocean heat fluxes. Specifically, long-term variations in HS are a powerful indicator of climate change, as HS represents the balance between the net surface energy flux and the poleward heat transported by the ocean currents. HS is estimated from the sea surface height anomaly measured by the TOPEX/Poseidon and Jason-1 altimeters from 1993 to 2006. To characterize and validate the altimeter-based HS in the Atlantic, we used data from the Pilot Research Moored Array in the Tropical Atlantic (PIRATA). Correlations and rms differences are used as statistical figures of merit to compare the HS estimates. The correlations range from 0.50 to 0.87 at the buoys located at the equator and in the southern part of the array. In that region the rms differences range between 0.40 and 0.51 × 10⁹ J m⁻². These results are encouraging and indicate that the altimeter has the precision necessary to capture the interannual trends in HS in the Atlantic. Albeit relatively small, salinity changes can also have an effect on the sea surface height anomaly. To account for this effect, NCEP/GODAS reanalysis data are used to estimate the haline contraction. To understand which dynamical processes are involved in the HS variability, the total signal is decomposed into a nonpropagating basin-scale and seasonal component (HSₗ), planetary waves, mesoscale eddies, and a small-scale residual. In general, HSₗ is the dominant signal in the tropical region. Results show a warming trend in HSₗ over the past 13 years almost all over the Atlantic basin, with the most prominent slopes found at high latitudes. Positive interannual trends are found in the halosteric component at high latitudes of the South Atlantic and near the Labrador Sea. This could be an indication that the salinity anomaly increased in the upper layers during this period. The dynamics of the South Atlantic subtropical gyre could also be subject to low-frequency changes caused by a trend in the halosteric component on each side of the South Atlantic Current.
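
A hedged sketch of the standard altimetric estimate this kind of study relies on: if height anomalies are assumed mainly thermosteric, the heat storage anomaly follows from the sea surface height anomaly via a constant thermal expansion coefficient. The constants below are typical upper-ocean values, not those used in the paper.

    # HS' ~= (rho * cp / alpha) * ssha, under a thermosteric assumption
    # with a constant thermal expansion coefficient alpha.
    import numpy as np

    RHO = 1025.0     # seawater density, kg m^-3 (assumed)
    CP = 3990.0      # specific heat of seawater, J kg^-1 K^-1 (assumed)
    ALPHA = 2.5e-4   # thermal expansion coefficient, K^-1 (assumed constant)

    def heat_storage_anomaly(ssha_m):
        """Convert sea surface height anomaly (m) to heat storage (J m^-2)."""
        return (RHO * CP / ALPHA) * np.asarray(ssha_m)

    print(heat_storage_anomaly(0.05))   # ~0.8e9 J m^-2 for a 5 cm anomaly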

Relevance:

40.00%

Publisher:

Abstract:

Data visualization techniques are powerful tools for handling and analysing multivariate systems. One such technique, parallel coordinates, was used to support the diagnosis of an event, detected by a neural network-based monitoring system, in a boiler at a Brazilian Kraft pulp mill. Its appeal is the possibility of visualizing several variables simultaneously. The diagnostic procedure was carried out step by step, moving through exploratory, explanatory, confirmatory, and communicative goals. This tool allowed the boiler dynamics to be visualized more easily than with the commonly used univariate trend plots. In addition, it facilitated the analysis of other aspects, namely the relationships among process variables, distinct modes of operation, and discrepant data. The analysis revealed, firstly, that the period involving the detected event was associated with a transition between two distinct normal modes of operation and, secondly, the presence of unusual changes in process variables at that time.
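
As a minimal example of the technique (with invented variable names, not the mill's data), pandas can draw a parallel-coordinates view in a few lines: each vertical axis is one process variable and each line one observation, so several boiler variables can be inspected simultaneously.

    # Parallel-coordinates plot with pandas' built-in helper.
    import pandas as pd
    import matplotlib.pyplot as plt
    from pandas.plotting import parallel_coordinates

    df = pd.DataFrame({
        "steam_flow":    [80, 82, 81, 60, 58],
        "drum_pressure": [61, 62, 61, 55, 54],
        "O2_pct":        [3.1, 3.0, 3.2, 4.8, 5.0],
        "mode":          ["normal"] * 3 + ["transition"] * 2,  # class column
    })
    parallel_coordinates(df, class_column="mode", colormap="coolwarm")
    plt.show()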