994 results for Computer Experiments


Relevance:

20.00%

Publisher:

Abstract:

The article first reviews the state of the art of bandwidth management in educational environments, presenting previously proposed solutions and experiences according to several earlier classifications. The presented proposal is then validated through simulation experiments and tests in real environments, verifying its correct behaviour and demonstrating its usefulness for managing the bandwidth of the centres.

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this paper is to propose a Neural-Q_learning approach designed for the online learning of simple and reactive robot behaviors. In this approach, the Q_function is generalized by a multi-layer neural network, allowing the use of continuous states and actions. The algorithm uses a database of the most recent learning samples to accelerate and guarantee convergence. Each Neural-Q_learning function represents an independent, reactive and adaptive behavior that maps sensory states to robot control actions. A group of these behaviors constitutes a reactive control scheme designed to fulfill simple missions. The paper centers on the description of the Neural-Q_learning-based behaviors, showing their performance with an underwater robot in a target-following task. Real experiments demonstrate the convergence and stability of the learning system, pointing out its suitability for online robot learning. Advantages and limitations are discussed.
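The learning loop described above, a Q-function approximator trained from a database of recent samples, can be sketched as follows. This is a hypothetical, minimal illustration: the paper's multi-layer network is replaced by a tiny linear model, and the one-dimensional "target following" environment, action set and reward are invented for the example.

```python
import random
from collections import deque

random.seed(0)

ACTIONS = [-1.0, 0.0, 1.0]          # discretised control commands (invented)
ALPHA, GAMMA, EPS = 0.05, 0.9, 0.1  # learning rate, discount, exploration rate

# Stand-in for the multi-layer network: Q(s, a) = w0*s + w1*a + w2*s*a + w3
w = [0.0, 0.0, 0.0, 0.0]

def q(s, a):
    return w[0]*s + w[1]*a + w[2]*s*a + w[3]

def best_action(s):
    return max(ACTIONS, key=lambda a: q(s, a))

def update(s, a, r, s2):
    # One gradient step towards the temporal-difference target.
    target = r + GAMMA * q(s2, best_action(s2))
    err = target - q(s, a)
    for i, g in enumerate([s, a, s*a, 1.0]):
        w[i] += ALPHA * err * g

replay = deque(maxlen=200)  # database of the most recent learning samples

# Toy "target following": the state is the signed distance to the target,
# the action moves the robot, and the reward favours being close.
s = 5.0
for step in range(2000):
    a = random.choice(ACTIONS) if random.random() < EPS else best_action(s)
    s2 = max(-10.0, min(10.0, s + a))
    r = -abs(s2)
    replay.append((s, a, r, s2))
    for sample in random.sample(list(replay), min(4, len(replay))):
        update(*sample)
    s = s2 if abs(s2) > 0.1 else random.uniform(-5.0, 5.0)

print(best_action(5.0), best_action(-5.0))
```

Replaying a small random batch of stored samples at each step, rather than only the latest transition, is what the database buys: updates are decorrelated, which is the stabilising effect the abstract refers to.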

Relevance:

20.00%

Publisher:

Abstract:

A major obstacle to processing images of the ocean floor comes from the absorption and scattering effects of light in the aquatic environment. Due to the absorption of natural light, underwater vehicles often require artificial light sources attached to them to provide adequate illumination. Unfortunately, these flashlights tend to illuminate the scene in a nonuniform fashion and, as the vehicle moves, induce shadows in the scene. For this reason, the first step towards applying standard computer vision techniques to underwater imaging requires dealing with these lighting problems. This paper analyses and compares existing methodologies for dealing with low-contrast, nonuniform illumination in underwater image sequences. The reviewed techniques include: (i) study of the illumination-reflectance model, (ii) local histogram equalization, (iii) homomorphic filtering, and (iv) subtraction of the illumination field. Several experiments on real data have been conducted to compare the different approaches.
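Technique (iv), subtraction of the illumination field, can be illustrated with a small sketch: the slowly varying illumination is estimated with a large mean filter and subtracted from the image, flattening the nonuniform lighting. The image, blur radius and offset below are invented for the example, not taken from the paper.

```python
def box_blur(img, radius):
    """Mean filter with clamped borders: a crude illumination estimate."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = n = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += img[yy][xx]
                    n += 1
            out[y][x] = acc / n
    return out

def subtract_illumination(img, radius=4, offset=128.0):
    # Estimate the smooth illumination field, subtract it, re-centre the result.
    light = box_blur(img, radius)
    return [[img[y][x] - light[y][x] + offset
             for x in range(len(img[0]))] for y in range(len(img))]

# Toy image: a uniform checkerboard texture under a strong
# left-to-right illumination gradient (8 grey levels per column).
H, W = 16, 16
img = [[50.0 + 8.0 * x + (10.0 if (x + y) % 2 else 0.0) for x in range(W)]
       for y in range(H)]

flat = subtract_illumination(img)
# After correction, the left/right brightness gap should be far smaller.
left = sum(flat[y][1] for y in range(H)) / H
right = sum(flat[y][W - 2] for y in range(H)) / H
print(abs(left - right))
```

In the raw toy image the left and right columns differ by roughly 100 grey levels; after subtracting the estimated field, only the texture remains and the gap shrinks to a small border-effect residue.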

Relevance:

20.00%

Publisher:

Abstract:

Due to the high cost of a large ATM network working at full strength on which to apply our ideas about network management, i.e., dynamic virtual path (VP) management and fault restoration, we developed a distributed simulation platform for performing our experiments. This platform also had to be capable of other sorts of tests, such as connection admission control (CAC) algorithms, routing algorithms, and accounting and charging methods. The platform was conceived as a very simple, event-oriented and scalable simulation. The main goal was the simulation of a working ATM backbone network with a potentially large number of nodes (hundreds). As research into control algorithms and low-level, or rather cell-level, methods was beyond the scope of this study, the simulation took place at the connection level, i.e., there was no real traffic of cells. The simulated network behaved like a real network, accepting and rejecting connections, and could be managed either by standard SNMP tools or by experimental tools using the node API.
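A connection-level, event-oriented simulation of this kind can be sketched in a few lines. This is an illustrative toy, not the authors' platform: a single link, Poisson-style arrivals with invented rates, and a trivial CAC rule that admits a connection only if its bandwidth fits in the remaining capacity. No cells are simulated, only connection arrivals and departures.

```python
import heapq
import random

random.seed(1)

CAPACITY = 100.0          # link capacity, arbitrary bandwidth units (invented)
events = []               # priority queue of (time, kind, bandwidth)
used = 0.0
accepted = rejected = 0

# Generate connection requests with random bandwidths and inter-arrival times.
t = 0.0
for _ in range(500):
    t += random.expovariate(5.0)  # mean inter-arrival 0.2 time units
    heapq.heappush(events, (t, 'arrival', random.uniform(1.0, 20.0)))

while events:
    now, kind, bw = heapq.heappop(events)
    if kind == 'arrival':
        if used + bw <= CAPACITY:  # CAC: admit only if capacity remains
            used += bw
            accepted += 1
            # Schedule the departure that frees this bandwidth.
            heapq.heappush(events,
                           (now + random.expovariate(0.5), 'departure', bw))
        else:
            rejected += 1
    else:
        used -= bw                 # departure releases the bandwidth

print(accepted, rejected, round(used, 6))
```

Because only connection-level events are processed, hundreds of nodes and thousands of connections remain cheap to simulate, which is the scalability argument made in the abstract.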

Relevance:

20.00%

Publisher:

Abstract:

Most network operators have considered reducing Label Switched Router (LSR) label spaces (i.e. the number of labels that can be used) as a means of simplifying management of the underlying Virtual Private Networks (VPNs) and, hence, reducing operational expenditure (OPEX). This letter discusses the problem of reducing the label spaces in Multiprotocol Label Switched (MPLS) networks using label merging, better known as MultiPoint-to-Point (MP2P) connections. Because of their origins in IP, MP2P connections have been considered to have tree shapes with Label Switched Paths (LSPs) as branches. Due to this fact, previous works by many authors affirm that the problem of minimizing the label space using MP2P in MPLS, the Merging Problem, cannot be solved optimally with a polynomial algorithm (NP-complete), since it involves a hard decision problem. However, in this letter the Merging Problem is analyzed from the perspective of MPLS, and it is deduced that tree shapes in MP2P connections are irrelevant. By overriding this tree-shape consideration, it is possible to perform label merging in polynomial time. Based on how MPLS signaling works, this letter proposes an algorithm to compute the minimum number of labels using label merging: the Full Label Merging algorithm. In conclusion, we reclassify the Merging Problem as polynomial-solvable instead of NP-complete. In addition, simulation experiments confirm that, without the tree-branch selection problem, even more labels can be saved.
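The saving that label merging buys can be illustrated with a toy count (this is not the paper's Full Label Merging algorithm). Without merging, each LSP consumes one label at every node it traverses; with MP2P merging, LSPs that leave a node towards the same next hop and egress can share a single label. The example LSPs below are invented.

```python
# Hypothetical LSPs given as node paths; the last node is the egress.
lsps = [
    ['A', 'B', 'C', 'E'],
    ['D', 'B', 'C', 'E'],
    ['A', 'B', 'C', 'F'],
    ['G', 'C', 'E'],
]

def labels_without_merging(paths):
    # One label per hop of each individual LSP.
    return sum(len(p) - 1 for p in paths)

def labels_with_merging(paths):
    # One label per distinct (node, next_hop, egress) triple:
    # LSPs sharing all three merge into one MP2P label.
    triples = set()
    for p in paths:
        egress = p[-1]
        for node, nxt in zip(p, p[1:]):
            triples.add((node, nxt, egress))
    return len(triples)

print(labels_without_merging(lsps), labels_with_merging(lsps))
```

Here the four LSPs need 11 labels unmerged but only 8 once hops towards the same egress are shared; note that no tree structure is ever computed, which echoes the letter's observation that tree shapes are irrelevant to the merging itself.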

Relevance:

20.00%

Publisher:

Abstract:

In this paper, different recovery methods applied at different network layers and time scales are used in order to enhance network reliability. Each layer deploys its own fault management methods. However, current recovery methods are applied to only one specific layer. New protection schemes, based on the proposed partial disjoint path algorithm, are defined in order to avoid protection duplication in a multi-layer scenario. The new protection schemes also encompass shared segment backup computation and shared risk link group identification. A complete set of experiments proves the efficiency of the proposed methods compared with previous ones, in terms of the resources used to protect the network, the failure recovery time and the request rejection ratio.
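A basic ingredient of such protection schemes is computing a backup path that shares no links with the primary. The sketch below shows the simplest fully link-disjoint variant on an invented five-node topology (it is not the paper's partial disjoint path algorithm): find a primary path with BFS, ban its links, and search again.

```python
from collections import deque

# Hypothetical undirected topology as an adjacency list.
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'E'],
    'C': ['A', 'D'],
    'D': ['C', 'E'],
    'E': ['B', 'D'],
}

def shortest_path(adj, src, dst, banned=frozenset()):
    """BFS shortest path that avoids the undirected links in `banned`."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and frozenset((u, v)) not in banned:
                prev[v] = u
                q.append(v)
    return None  # no path avoiding the banned links

primary = shortest_path(graph, 'A', 'E')
used = {frozenset(e) for e in zip(primary, primary[1:])}
backup = shortest_path(graph, 'A', 'E', banned=used)
print(primary, backup)
```

When no fully link-disjoint backup exists, for instance a destination attached by a single link, this naive scheme returns `None`; allowing the backup to share that unavoidable segment is precisely the motivation for partial disjoint path computation.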

Relevance:

20.00%

Publisher:

Abstract:

This paper focuses on QoS routing with protection in an MPLS network over an optical layer. In this multi-layer scenario, each layer deploys its own fault management methods. A partially protected optical layer is proposed, with the rest of the network protected at the MPLS layer. New protection schemes that avoid protection duplication are proposed. Moreover, this paper also introduces a new traffic classification based on the level of reliability. The failure impact is evaluated in terms of recovery time depending on the traffic class. The proposed schemes also include a novel variation of minimum interference routing and shared segment backup computation. A complete set of experiments proves that the proposed schemes are more efficient than previous ones, in terms of the resources used to protect the network, the failure impact and the request rejection ratio.

Relevance:

20.00%

Publisher:

Abstract:

A recent study defines a new network plane: the knowledge plane. Incorporating the knowledge plane over the network makes more accurate information about current and future network states available. In this paper, the introduction and management of network reliability information in the knowledge plane is proposed in order to improve quality of service with protection routing algorithms in GMPLS over WDM networks. Different experiments prove the efficiency and scalability of the proposed scheme in terms of the percentage of resources used to protect the network.

Relevance:

20.00%

Publisher:

Abstract:

Recently, morphometric measurements of the ascending aorta have been performed with ECG-gated multidetector computerized tomography (MDCT) to support the development of future novel transcatheter therapies (TCT); nevertheless, the variability of such measurements remains unknown. Thirty patients referred for ECG-gated CT thoracic angiography were evaluated. Continuous reformations of the ascending aorta, perpendicular to the centerline, were obtained automatically with a commercially available computer-aided diagnosis (CAD) system. Measurements of the maximal diameter were then made with the CAD system and manually by two observers (separately), and were repeated one month later. The Bland-Altman method, Spearman coefficients, and the Wilcoxon signed-rank test were used to evaluate the variability, the correlation, and the differences between observers. The interobserver variability for maximal diameter between the two observers was up to 1.2 mm, with limits of agreement [-1.5, +0.9] mm, whereas the intraobserver limits were [-1.2, +1.0] mm for the first observer and [-0.8, +0.8] mm for the second. The intraobserver CAD variability was 0.8 mm. The correlation between the observers and the CAD system was good (0.980-0.986); however, significant differences do exist (P<0.001). The maximum variability observed was 1.2 mm and should be considered in reports of measurements of the ascending aorta. The CAD system is as reproducible as an experienced reader.
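The Bland-Altman limits of agreement quoted above come from a simple computation: the mean and standard deviation of the paired differences, with 95% limits at bias plus or minus 1.96 standard deviations. The sketch below uses synthetic diameter pairs, not the study's data.

```python
import math

# Hypothetical paired maximal-diameter measurements (mm) by two observers.
obs1 = [34.1, 36.5, 31.2, 40.3, 38.7, 33.0, 35.9, 37.4]
obs2 = [34.6, 36.1, 31.9, 40.0, 39.3, 33.5, 35.5, 38.0]

diffs = [a - b for a, b in zip(obs1, obs2)]
n = len(diffs)
bias = sum(diffs) / n                                   # mean difference
sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))

# 95% limits of agreement: bias +/- 1.96 * SD of the differences.
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
print(round(bias, 3), round(loa[0], 2), round(loa[1], 2))
```

If roughly 95% of the paired differences fall inside `loa`, the two observers (or an observer and the CAD system) can be used interchangeably for clinical purposes at that tolerance.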

Relevance:

20.00%

Publisher:

Abstract:

The international Functional Annotation Of the Mammalian Genomes 4 (FANTOM4) research collaboration set out to better understand the transcriptional network that regulates macrophage differentiation and to uncover novel components of the transcriptome, employing a series of high-throughput experiments. The primary and distinctive technique is cap analysis of gene expression (CAGE): sequencing mRNA 5'-ends with a second-generation sequencer to quantify promoter activities even in the absence of gene annotation. Additional genome-wide experiments complement the setup, including short RNA sequencing, microarray gene expression profiling of large-scale perturbation experiments, and ChIP-chip for epigenetic marks and transcription factors. All the experiments are performed in a differentiation time course of the THP-1 human leukemic cell line. Furthermore, we performed a large-scale mammalian two-hybrid (M2H) assay between transcription factors and monitored their expression profiles across human and mouse tissues with qRT-PCR to address combinatorial effects of regulation by transcription factors. These interdependent data have been analyzed individually and in combination with each other and are published in related but distinct papers. We provide all the data, together with systematic annotation, in an integrated view as a resource for the scientific community (http://fantom.gsc.riken.jp/4/). Additionally, we assembled a rich set of derived analysis results, including published predicted and validated regulatory interactions. Here we introduce the resource and its update after the initial release.

Relevance:

20.00%

Publisher:

Abstract:

The EVS4CSCL project starts in the context of a Computer Supported Collaborative Learning (CSCL) environment. Previous UOC projects created a generic CSCL platform (CLPL) to facilitate the development of CSCL applications. A discussion forum (DF) was the first application developed over the framework. This discussion forum differed from other products on the marketplace because of its focus on the learning process. The DF carried out the specification and elaboration phases of the discussion learning process, but the consensus phase was lacking. In a learning environment, consensus is not something to be achieved but something to be tested. Such tests are commonly done with Electronic Voting System (EVS) tools, but a consensus test is not an assessment test: we are not evaluating our students by their answers but by their discussion activity. Our educational EVS could be used as a discussion catalyst, proposing a discussion about the results after an initial query, or it could be used after a discussion period to show how the discussion changed the students' minds (consensus). It could also be used by the teacher as a quick way to find out where students need reinforcement. That is important in a distance-learning environment, where there is no direct contact between teacher and student and it is difficult to detect learning gaps. In an educational environment, assessment is a must, and the EVS will provide direct assessment through peer usefulness evaluation and teacher marks on every query created, as well as indirect assessment from statistics on user activity.
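The pre/post-discussion consensus test described above can be sketched as comparing the vote distributions of two voting rounds. This is a hypothetical illustration, not the EVS4CSCL implementation: consensus is measured here simply as the fraction of voters backing the most popular option, and the votes are invented.

```python
from collections import Counter

def consensus(votes):
    """Fraction of voters backing the most popular option."""
    counts = Counter(votes)
    return max(counts.values()) / len(votes)

before = ['A', 'B', 'B', 'C', 'A', 'C', 'B', 'A']   # votes on the initial query
after = ['B', 'B', 'B', 'C', 'B', 'B', 'B', 'A']    # votes after the discussion

shift = consensus(after) - consensus(before)
print(round(consensus(before), 3), round(consensus(after), 3), round(shift, 3))
```

A positive shift indicates that the discussion moved the group towards agreement; a near-zero or negative shift would signal to the teacher that the topic may need reinforcement, which is the kind of indirect signal the abstract describes.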

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND Functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide clinicians in the diagnosis of Alzheimer's Disease (AD). However, the subjectivity involved in their evaluation has favoured the development of Computer Aided Diagnosis (CAD) systems. METHODS A novel combination of feature extraction techniques is proposed to improve the diagnosis of AD. Firstly, Regions of Interest (ROIs) are selected by means of a t-test carried out on 3D Normalised Mean Square Error (NMSE) features restricted to lie within a predefined brain activation mask. In order to address the small sample-size problem, the dimension of the feature space was further reduced by: Large Margin Nearest Neighbours using a rectangular matrix (LMNN-RECT), Principal Component Analysis (PCA) or Partial Least Squares (PLS) (the two latter also analysed with an LMNN transformation). Regarding the classifiers, kernel Support Vector Machines (SVMs) and LMNN using Euclidean, Mahalanobis and Energy-based metrics were compared. RESULTS Several experiments were conducted in order to evaluate the proposed LMNN-based feature extraction algorithms and their benefits as: i) a linear transformation of the PLS- or PCA-reduced data, ii) a feature reduction technique, and iii) a classifier (with Euclidean, Mahalanobis or Energy-based methodology). The system was evaluated by means of k-fold cross-validation, yielding accuracy, sensitivity and specificity values of 92.78%, 91.07% and 95.12% (for SPECT) and 90.67%, 88% and 93.33% (for PET), respectively, when an NMSE-PLS-LMNN feature extraction method was used in combination with an SVM classifier, thus outperforming recently reported baseline methods. CONCLUSIONS All the proposed methods turned out to be valid solutions for the presented problem. One of the advances is the robustness of the LMNN algorithm, which not only provides a higher separation rate between the classes but also (in combination with NMSE and PLS) makes this rate more stable. Their generalization ability is another advance, since the experiments were performed on two image modalities (SPECT and PET).
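The k-fold cross-validation bookkeeping behind accuracy, sensitivity and specificity can be sketched as follows. This is a toy sketch with synthetic one-dimensional scores and a trivial threshold classifier, not the paper's NMSE-PLS-LMNN/SVM pipeline; its purpose is only to show how the three figures are accumulated over the folds.

```python
import random

random.seed(3)

# Synthetic scores: class 1 (patients) tends to score higher than class 0.
data = [(random.gauss(1.0, 0.5), 1) for _ in range(40)] + \
       [(random.gauss(-1.0, 0.5), 0) for _ in range(40)]
random.shuffle(data)

def evaluate_fold(train, test):
    # "Train" the toy classifier: threshold halfway between the class means.
    m1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
    m0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
    thr = (m0 + m1) / 2
    tp = fp = tn = fn = 0
    for x, y in test:
        pred = 1 if x > thr else 0
        if pred == 1 and y == 1:
            tp += 1
        elif pred == 1 and y == 0:
            fp += 1
        elif pred == 0 and y == 0:
            tn += 1
        else:
            fn += 1
    return tp, fp, tn, fn

K = 5
tp = fp = tn = fn = 0
for k in range(K):
    test = data[k::K]  # every K-th sample forms the held-out fold
    train = [d for i, d in enumerate(data) if i % K != k]
    a, b, c, d = evaluate_fold(train, test)
    tp, fp, tn, fn = tp + a, fp + b, tn + c, fn + d

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)       # true positive rate over all folds
specificity = tn / (tn + fp)       # true negative rate over all folds
print(accuracy, sensitivity, specificity)
```

Pooling the confusion counts across folds, as done here, is one common convention; averaging the per-fold rates is another, and the two can differ slightly when folds are unbalanced.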

Relevance:

20.00%

Publisher:

Abstract:

Malposition of the acetabular component during hip arthroplasty increases the occurrence of impingement, reduces the range of motion, and increases the risk of dislocation and long-term wear. To prevent malpositioned hip implants, an increasing number of computer-assisted orthopaedic systems have been described, but their accuracy is not well established. The purpose of this study was to determine the reproducibility and accuracy of conventional versus computer-assisted techniques for positioning the acetabular component in total hip arthroplasty. Using a lateral approach, 150 cups were placed by 10 surgeons in 10 identical plastic pelvis models (freehand, with a mechanical guide, or using computer assistance). Conditions for cup implantation were made to mimic the operating room situation. Preoperative planning was done from a computed tomography scan. The accuracy of cup abduction and anteversion was assessed with an electromagnetic system. Freehand placement revealed a mean accuracy of cup anteversion and abduction of 10 degrees and 3.5 degrees, respectively (maximum error, 35 degrees). With the cup positioner, these angles measured 8 degrees and 4 degrees, respectively (maximum error, 29.8 degrees), and with computer assistance, 1.5 degrees and 2.5 degrees, respectively (maximum error, 8 degrees). Computer-assisted cup placement was an accurate and reproducible technique for total hip arthroplasty, more accurate than traditional methods of cup positioning.
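The accuracy figures reported above reduce to simple bookkeeping over measured angles: mean absolute deviation from the planned target and the worst-case deviation. The sketch below uses invented planned angles and measurements, not the study's data.

```python
# Hypothetical planned cup orientation (degrees).
planned_anteversion, planned_abduction = 15.0, 45.0

# Hypothetical measured (anteversion, abduction) for five cup placements.
measurements = [
    (18.0, 47.0), (11.5, 44.0), (22.0, 46.5), (14.0, 43.0), (9.0, 48.0),
]

ante_err = [abs(a - planned_anteversion) for a, _ in measurements]
abd_err = [abs(b - planned_abduction) for _, b in measurements]

mean_ante = sum(ante_err) / len(ante_err)   # mean anteversion accuracy
mean_abd = sum(abd_err) / len(abd_err)      # mean abduction accuracy
max_err = max(ante_err + abd_err)           # maximum error over both angles
print(mean_ante, mean_abd, max_err)
```

As in the study, the mean deviation summarizes typical placement accuracy while the maximum error captures the outliers that matter clinically for dislocation risk.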