744 results for Cloud Computing, actors, platforms, Pattern, Orleans


Relevance: 30.00%

Publisher:

Abstract:

This paper describes a special-purpose neural computing system for face identification. The system architecture and hardware implementation are introduced in detail, and an embedded algorithm based on biomimetic pattern recognition is described. Across a total of 1,200 face-identification tests, the false rejection rate was 3.7% and the false acceptance rate was 0.7%.

Relevance: 30.00%

Publisher:

Abstract:

A new model of pattern recognition principles, Biomimetic Pattern Recognition, based on "matter cognition" instead of "matter classification", has been proposed. As an important means of realizing Biomimetic Pattern Recognition, the mathematical modelling and analysis of ANNs have achieved a breakthrough: a novel all-purpose mathematical model has been advanced that can simulate many kinds of neuron architecture, including RBF and BP models, and this model has also been realized in hardware. At the same time, the high-dimensional space geometry method, a new means of analyzing ANNs, has been investigated.

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND:
Tissue MicroArrays (TMAs) are a valuable platform for tissue-based translational research and the discovery of tissue biomarkers. Digitised TMA slides, or TMA virtual slides, are ultra-large digital images that can contain several hundred samples. Processing such slides is time-consuming, bottlenecking a potentially high-throughput platform.
METHODS:
A High Performance Computing (HPC) platform for the rapid analysis of TMA virtual slides is presented in this study. Using an HP high-performance cluster and a centralised dynamic load-balancing approach, the simultaneous analysis of multiple tissue cores was established. This was evaluated on Non-Small Cell Lung Cancer TMAs with a complex analysis of tissue pattern and immunohistochemical positivity.
RESULTS:
The automated processing of a single TMA virtual slide containing 230 patient samples can be sped up by a factor of roughly 22, bringing the analysis time down to one minute. Over 90 TMAs can also be analysed simultaneously, greatly accelerating multiplex biomarker experiments.
CONCLUSIONS:
The methodologies developed in this paper provide, for the first time, a genuine high-throughput analysis platform for TMA biomarker discovery that will significantly enhance the reliability and speed of biomarker research. This will have widespread implications for translational tissue-based research.
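
The centralised dynamic load balancing described under METHODS follows a familiar pattern: a coordinator hands each idle worker the next pending tissue core. A minimal sketch, assuming each core image can be analysed independently; the names analyse_core and core_ids are illustrative, not the platform's actual API:

```python
# Minimal sketch of centralised dynamic load balancing over tissue cores.
from multiprocessing import Pool

def analyse_core(core_id):
    """Placeholder for the per-core analysis (tissue pattern and
    immunohistochemical positivity scoring)."""
    result = f"analysed core {core_id}"  # stand-in for the real pipeline
    return core_id, result

if __name__ == "__main__":
    core_ids = range(230)  # e.g. one TMA virtual slide with 230 samples
    with Pool() as pool:
        # imap_unordered hands each idle worker the next pending core,
        # which is the dynamic (pull-based) load-balancing behaviour.
        for core_id, result in pool.imap_unordered(analyse_core, core_ids):
            print(core_id, result)
```

Faster workers naturally pull more cores under this scheme, which is what keeps a heterogeneous cluster busy.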

Relevance: 30.00%

Publisher:

Abstract:

With the availability of a wide range of cloud Virtual Machines (VMs), it is difficult to determine which VMs will maximise the performance of an application. Benchmarking is commonly used to this end to capture the performance of VMs. Most cloud benchmarking techniques are heavyweight: time-consuming processes that have to benchmark the entire VM in order to obtain accurate benchmark data. Such benchmarks cannot be used in real time on the cloud and incur extra costs even before an application is deployed.

In this paper, we present lightweight cloud benchmarking techniques that execute quickly and can be used in near real time on the cloud. The exploration of lightweight benchmarking techniques is facilitated by the development of DocLite (Docker Container-based Lightweight Benchmarking). DocLite is built on Docker container technology, which allows a user-defined portion of the VM (such as a memory size and a number of CPU cores) to be benchmarked. DocLite operates in two modes. In the first mode, containers are used to benchmark a small portion of the VM and generate performance ranks. In the second mode, historic benchmark data is used along with the first mode, as a hybrid, to generate VM ranks. The generated ranks are evaluated against three scientific high-performance computing applications. The proposed techniques are up to 91 times faster than a heavyweight technique that benchmarks the entire VM. The first mode generates ranks with over 90% and 86% accuracy for the sequential and parallel execution of an application, respectively. The hybrid mode improves the correlation slightly, but the first mode is sufficient for benchmarking cloud VMs.
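
The first mode can be sketched directly with Docker's resource flags; the image name, benchmark command and score parsing below are placeholders rather than DocLite's actual packaging:

```python
# Minimal sketch of benchmarking a user-defined slice of a VM inside a
# Docker container, in the spirit of DocLite's first mode.
import subprocess

def run_container_benchmark(mem_gb: int, cpu_cores: int) -> float:
    """Run the benchmark in a container limited to a user-defined
    portion of the VM and return its score (parsed from stdout)."""
    out = subprocess.run(
        ["docker", "run", "--rm",
         f"--memory={mem_gb}g",   # cap the memory the benchmark sees
         f"--cpus={cpu_cores}",   # cap the CPU cores it may use
         "benchmark-image",       # hypothetical image with the benchmark
         "run-benchmark"],        # hypothetical benchmark entry point
        capture_output=True, text=True, check=True).stdout
    return float(out.strip())

if __name__ == "__main__":
    # Executed on each candidate VM; a central node collects the scores
    # and sorts them to produce the performance ranks.
    print(run_container_benchmark(mem_gb=1, cpu_cores=2))
```

Because only a small, user-defined slice of the VM is exercised, the benchmark completes far faster than a whole-VM run, which is where the reported speed-up comes from.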

Relevance: 30.00%

Publisher:

Abstract:

Existing benchmarking methods are time-consuming processes, as they typically benchmark the entire Virtual Machine (VM) in order to generate accurate performance data, making them less suitable for real-time analytics. The research in this paper aims to surmount this challenge by presenting DocLite, a Docker Container-based Lightweight benchmarking tool. DocLite explores lightweight cloud benchmarking methods for rapidly executing benchmarks in near real time. DocLite is built on Docker container technology, which allows a user-defined memory size and number of CPU cores of the VM to be benchmarked. The tool incorporates two benchmarking methods: the first, referred to as the native method, employs containers to benchmark a small portion of the VM and generate performance ranks; the second uses historic benchmark data along with the native method, as a hybrid, to generate VM ranks. The proposed methods are evaluated on three use cases and are observed to be up to 91 times faster than benchmarking the entire VM. In both methods, small containers provide the same quality of rankings as a large container. The native method generates ranks with over 90% and 86% accuracy for the sequential and parallel execution of an application, compared against benchmarking the whole VM. The hybrid method did not improve the quality of the rankings significantly.
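
The hybrid method amounts to blending freshly measured native scores with historic benchmark data for the same VM types. A minimal sketch, in which the weighting and the example scores are assumptions, not values from the paper:

```python
# Minimal sketch of the hybrid ranking method: blend native (container)
# scores with historic whole-VM benchmark data per VM type.
def hybrid_score(native: dict, historic: dict, w: float = 0.5) -> dict:
    """Weighted blend of native and historic scores; falls back to the
    native score when no historic data exists for a VM type."""
    return {vm: w * native[vm] + (1 - w) * historic.get(vm, native[vm])
            for vm in native}

native = {"m3.large": 0.71, "c4.xlarge": 0.88}    # hypothetical fresh scores
historic = {"m3.large": 0.66, "c4.xlarge": 0.90}  # hypothetical past data
scores = hybrid_score(native, historic)
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # VM types, best first
```

Given the finding that the hybrid did not significantly improve rankings, the native scores alone (w = 1) appear to be a reasonable default.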

Relevance: 30.00%

Publisher:

Abstract:

Scheduling jobs with deadlines, each of which defines the latest time by which a job must be completed, can be challenging on the cloud due to the costs incurred and unpredictable performance. The problem is further complicated when there is not enough information to schedule a job so that its deadline is satisfied and its cost is minimised. In this paper, we present an approach to scheduling jobs with deadlines on the cloud whose performance is unknown before execution. By performing a sampling phase to collect the necessary information about these jobs, our approach delivers scheduling decisions within 10% of the cost and a 16% violation rate when compared to the ideal setting, which has complete knowledge of each job from the beginning. Our proposed algorithm also outperforms existing approaches that use a fixed amount of resources, reducing the violation cost by at least a factor of two.
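
The sampling phase can be illustrated with a minimal sketch: run a few tasks of the job, estimate the per-task runtime, and size the resource pool so the remainder fits within the deadline. All names and numbers are illustrative; the paper's actual algorithm is more involved:

```python
# Minimal sketch of sampling-based scheduling for a deadline-constrained job.
import math
import time

def sample_runtime(run_task, tasks, k: int = 3) -> float:
    """Average wall-clock runtime of k sampled tasks."""
    start = time.perf_counter()
    for task in tasks[:k]:
        run_task(task)
    return (time.perf_counter() - start) / k

def vms_needed(n_remaining: int, per_task: float, time_left: float) -> int:
    """Smallest VM count so the remaining work finishes in time
    (assumes tasks divide evenly across identical VMs)."""
    return max(1, math.ceil(n_remaining * per_task / time_left))
```

The trade-off is that the sampled tasks consume some of the budget and the deadline, which is why the resulting schedules sit within 10% of the cost of the ideal, fully informed setting rather than matching it.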

Relevance: 30.00%

Publisher:

Abstract:

The absolute necessity of obtaining 3D information about structured and unknown environments in autonomous navigation considerably reduces the set of sensors that can be used. Knowing, at every instant, the position of the mobile robot with respect to the scene is indispensable, and this information must be obtained in the least possible computing time. Stereo vision is an attractive and widely used method, but it is rather limited for building fast 3D surface maps due to the correspondence problem. The spatial and temporal correspondence between images can be alleviated using a method based on structured light: the relationship can be found directly by codifying the projected light, so that each imaged region of the projected pattern carries the information needed to solve the correspondence problem. We present the most significant coded structured light techniques used in recent years.
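
Among these codifications, temporal binary (Gray-coded) stripe patterns are a classic example: each projected pattern contributes one bit per pixel, and the decoded stripe index solves the correspondence directly. A minimal decoding sketch, assuming already normalised captures and omitting thresholding and calibration details:

```python
# Minimal sketch of decoding Gray-coded structured light patterns.
import numpy as np

def decode_gray(images: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """images: (n_patterns, H, W) normalised captures, MSB pattern first.
    Returns the per-pixel projector stripe index."""
    bits = (images > threshold).astype(np.uint32)  # one bit plane per pattern
    # Gray -> binary: b[0] = g[0]; b[i] = b[i-1] XOR g[i]
    binary = np.zeros_like(bits)
    binary[0] = bits[0]
    for i in range(1, bits.shape[0]):
        binary[i] = binary[i - 1] ^ bits[i]
    # Pack the bit planes (MSB first) into an integer stripe index.
    index = np.zeros(bits.shape[1:], dtype=np.uint32)
    for plane in binary:
        index = (index << 1) | plane
    return index
```

Gray codes are preferred over plain binary because adjacent stripes differ in a single bit, so a threshold error at a stripe boundary displaces the index by at most one.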

Relevance: 30.00%

Publisher:

Abstract:

Purpose: The purpose of this paper is to address pattern formation, a classic problem identified by researchers in the area of swarm robotic systems; the work is also motivated by the need for mathematical foundations in swarm systems.
Design/methodology/approach: The work covers the inspirations, applications, definitions, challenges and classifications of pattern formation in swarm systems based on recent literature, and proposes a mathematical model for swarm pattern formation and transformation.
Findings: A swarm pattern formation model based on mathematical foundations and macroscopic primitives is proposed. A formal definition of swarm pattern transformation and four special cases of transformation are introduced. Two general methods for transforming patterns are investigated and compared. The validity of the proposed models and the feasibility of the investigated methods are confirmed in the Traer Physics and Processing environment.
Originality/value: This paper helps in understanding the limitations of existing research on pattern formation and the lack of mathematical foundations for swarm systems. The model and transformation methods introduce two key concepts: macroscopic primitives and a mathematical model. The exercise of implementing the proposed models on a physics simulator is novel.
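
One way to make pattern transformation concrete at the macroscopic level is to match agents to the points of the goal pattern and move them along the matching. The following is a plausible sketch of one transformation step, not necessarily either of the two methods compared in the paper:

```python
# Minimal sketch of a swarm pattern-transformation step: optimally
# assign agents to goal-pattern points, then step toward the targets.
import numpy as np
from scipy.optimize import linear_sum_assignment

def transform_step(agents: np.ndarray, goal: np.ndarray,
                   speed: float = 0.1) -> np.ndarray:
    """agents, goal: (n, 2) position arrays. Returns updated positions."""
    # Pairwise distances form the assignment cost matrix.
    cost = np.linalg.norm(agents[:, None, :] - goal[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)  # minimise total travel
    direction = goal[cols] - agents[rows]
    dist = np.linalg.norm(direction, axis=1, keepdims=True)
    # Move each agent at most `speed` toward its target (snap if closer).
    step = direction * np.minimum(speed / np.maximum(dist, 1e-9), 1.0)
    updated = agents.copy()
    updated[rows] += step
    return updated
```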

Relevance: 30.00%

Publisher:

Abstract:

This article presents and assesses an algorithm that constructs 3D distributions of cloud from passive satellite imagery and collocated 2D nadir profiles of cloud properties inferred synergistically from lidar, cloud radar and imager data. It effectively widens the active–passive retrieved cross-section (RXS) of cloud properties, thereby enabling computation of radiative fluxes and radiances that can be compared with measured values in an attempt to perform radiative closure experiments that aim to assess the RXS. For this introductory study, A-Train data were used to verify the scene-construction algorithm and only 1D radiative transfer calculations were performed. The construction algorithm fills off-RXS recipient pixels by computing sums of squared differences (a cost function F) between their spectral radiances and those of potential donor pixels/columns on the RXS. Of the RXS pixels with F lower than a certain value, the one with the smallest Euclidean distance to the recipient pixel is designated as the donor, and its retrieved cloud properties and other attributes, such as 1D radiative heating rates, are consigned to the recipient. It is shown that both the RXS itself and Moderate Resolution Imaging Spectroradiometer (MODIS) imagery can be reconstructed extremely well using just visible and thermal infrared channels. Suitable donors usually lie within 10 km of the recipient. RXSs and their associated radiative heating profiles are reconstructed best for extensive planar clouds and less reliably for broken convective clouds. Domain-average 1D broadband radiative fluxes at the top of the atmosphere (TOA) for (21 km)² domains constructed from MODIS, CloudSat and Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) data agree well with coincident values derived from Clouds and the Earth's Radiant Energy System (CERES) radiances: differences between modelled and measured reflected shortwave fluxes are within ±10 W m⁻² for ~35% of the several hundred domains constructed for eight orbits; correspondingly, for outgoing longwave radiation, ~65% are within ±10 W m⁻².
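
The donor-selection rule lends itself to a compact sketch: among the RXS pixels whose spectral cost F falls below a threshold, pick the one geometrically closest to the recipient. Array shapes and the threshold below are illustrative:

```python
# Minimal sketch of donor-pixel selection for scene construction.
import numpy as np

def select_donor(recipient_rad, rxs_rad, recipient_xy, rxs_xy, f_max):
    """recipient_rad: (n_channels,) spectral radiances of the recipient.
    rxs_rad: (n_rxs, n_channels) radiances of candidate donors on the RXS.
    recipient_xy: (2,) and rxs_xy: (n_rxs, 2) pixel coordinates.
    Returns the index of the chosen donor, or None if none qualifies."""
    F = np.sum((rxs_rad - recipient_rad) ** 2, axis=1)  # cost function F
    ok = np.flatnonzero(F < f_max)                      # acceptable donors
    if ok.size == 0:
        return None
    d = np.linalg.norm(rxs_xy[ok] - recipient_xy, axis=1)
    return int(ok[np.argmin(d)])  # nearest acceptable donor wins
```

The recipient then inherits the donor's retrieved cloud properties and attributes such as 1D radiative heating rates.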

Relevance: 30.00%

Publisher:

Abstract:

An El Niño-like steady response is found in a greenhouse warming simulation, arising from coupled ocean-atmosphere dynamical feedbacks similar to those producing present-day El Niños. There is a strong negative cloud-radiation feedback on the sea surface temperature (SST) anomaly associated with this enhanced eastern equatorial Pacific warm pattern. However, this negative feedback is overwhelmed by the positive dynamical feedbacks and cannot diminish the sensitivity of the tropical SST to enhanced greenhouse gas concentrations. The enhanced eastern-Pacific warming in the coupled ocean-atmosphere system suggests that coupled dynamics can strengthen this sensitivity.

Relevance: 30.00%

Publisher:

Abstract:

The simulated annealing approach to crystal structure determination from powder diffraction data, as implemented in the DASH program, is readily amenable to parallelization at the level of individual runs. Very large increases in execution speed can be achieved by distributing individual DASH runs over a network of computers. The CDASH program delivers this by using scalable on-demand computing clusters built on the Amazon Elastic Compute Cloud service. By way of example, a 360-vCPU cluster returned the crystal structure of racemic ornidazole (Z′ = 3, 30 degrees of freedom) circa 40 times faster than a typical modern quad-core desktop CPU. Whilst used here specifically for DASH, this approach is of general applicability to other packages that are amenable to coarse-grained parallelism strategies.
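
Because the individual runs are independent, the parallelism is coarse-grained: farm the runs out to workers and keep the best solution. A minimal local sketch, with dash_run standing in for launching one DASH run (not its real API); on the cloud, each worker would instead be a cluster node:

```python
# Minimal sketch of coarse-grained parallelism over independent
# simulated-annealing runs.
from concurrent.futures import ProcessPoolExecutor
import random

def dash_run(seed: int) -> float:
    """Placeholder for one simulated-annealing structure-solution run;
    returns its best figure of merit (lower is better)."""
    random.seed(seed)
    return random.random()  # stand-in for the real chi-squared value

if __name__ == "__main__":
    n_runs = 360  # e.g. one run per vCPU in the cluster
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(dash_run, range(n_runs)))
    print("best figure of merit:", min(results))
```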

Relevance: 30.00%

Publisher:

Abstract:

Cloud streets are a common feature of the Amazon Basin. They form from the combination of the vertical trade-wind stress and moist convection. Here, satellite imagery, data collected during the COBRA-PARÁ (Caxiuanã Observations in the Biosphere, River and Atmosphere of Pará) field campaign, and high-resolution modeling are used to understand the streets' formation and behavior. The observations show that the streets have an aspect ratio of about 3.5 and reach their maximum activity around 15:00 UTC, when the wind shear is weaker and the convective boundary layer reaches its maximum height. The simulations reveal that the onset of the cloud streets is caused by the local circulations and convection produced at the interfaces between the forest and the rivers of the Amazon. The satellite data and modeling show that the large rivers anchor the cloud streets, producing a quasi-stationary horizontal pattern. The streets are associated with horizontal roll vortices parallel to the mean flow that organize the turbulence, causing advection of latent heat flux towards the upward branches. The streets contain multiple warm plumes that promote a connection between the rolls. These spatial patterns provide fundamental insights into the interpretation of the exchanges between the Amazon surface and the atmosphere, with important consequences for understanding climate change.

Relevance: 30.00%

Publisher:

Abstract:

Burst firing is ubiquitous in nervous systems and has been intensively studied in central pattern generators (CPGs). Previous works have described subtle intraburst spike patterns (IBSPs) that, despite being traditionally neglected for their lack of relation to CPG motor function, were shown to be cell-type specific and sensitive to CPG connectivity. Here we address this matter by investigating how a bursting motor neuron expresses information about other neurons in the network. We performed experiments on the crustacean stomatogastric pyloric CPG, both in control conditions and interacting in real time with computer model neurons. The sensitivity of postsynaptic to presynaptic IBSPs was inferred by computing their average mutual information along each neuron's burst. We found that details of input patterns are nonlinearly and inhomogeneously coded through a single synapse into the fine IBSP structure of the postsynaptic neuron's following burst. In this way, motor neurons are able to use different time scales to convey two types of information simultaneously: muscle contraction (related to the bursting rhythm) and the behavior of other CPG neurons (at a much shorter timescale, using IBSPs as information carriers). Moreover, the analysis revealed that the coding mechanism described takes part in a previously unsuspected information pathway from a CPG motor neuron to a nerve that projects to sensory brain areas, thus providing evidence of a general physiological role for information coding through IBSPs in the regulation of neuronal firing patterns in remote circuits by the CNS.
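
The central quantity here is the average mutual information between discretised pre- and postsynaptic IBSPs. A minimal plug-in estimator from a joint histogram, with binning choices that are illustrative rather than the paper's:

```python
# Minimal sketch of estimating mutual information between paired
# sequences of discretised intraburst spike-pattern values.
import numpy as np

def mutual_information(x: np.ndarray, y: np.ndarray, bins: int = 8) -> float:
    """MI (in bits) between paired samples x (presynaptic) and y
    (postsynaptic), estimated from their joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()            # joint distribution
    px = pxy.sum(axis=1, keepdims=True)  # marginal of x
    py = pxy.sum(axis=0, keepdims=True)  # marginal of y
    nz = pxy > 0                         # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

Such plug-in estimates are biased for small sample counts, so in practice a bias correction or a shuffling control is usually applied alongside.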

Relevance: 30.00%

Publisher:

Abstract:

The ever-increasing spurt in digital crimes such as image manipulation, image tampering, signature forgery, image forgery, illegal transactions, etc. has intensified the demand to combat these forms of criminal activity. In this direction, biometrics, the computer-based validation of a person's identity, is becoming more and more essential, particularly for high-security systems. The essence of biometrics is the measurement of a person's physiological or behavioral characteristics, which enables authentication of a person's identity. Biometric-based authentication is also becoming increasingly important in computer-based applications because the amount of sensitive data stored in such systems is growing. The new demands on biometric systems are robustness, high recognition rates, the capability to handle imprecision and uncertainties of a non-statistical kind, and great flexibility. It is exactly here that soft computing techniques come into play. The main aim of this write-up is to present a pragmatic view of the applications of soft computing techniques in biometrics and to analyze their impact. It is found that soft computing has already made inroads, in terms of individual methods or in combination. Applications of varieties of neural networks top the list, followed by fuzzy logic and evolutionary algorithms. In a nutshell, soft computing paradigms are used for biometric tasks such as feature extraction, dimensionality reduction, pattern identification, pattern mapping and the like.