820 results for fog computing


Relevance: 20.00%

Abstract:

Spinal image analysis and computer-assisted intervention have emerged as new and independent research areas, due to the importance of treating spinal diseases, the increasing availability of spinal imaging, and advances in analytics and navigation tools. Among these, multi-modality spinal image analysis and spinal navigation tools have emerged as two key topics in this new area. We believe that further focused research in these two areas will lead to a much more efficient and accelerated research path, avoiding detours encountered in other application areas such as brain and heart imaging.

Relevance: 20.00%

Abstract:

Percentile shares provide an intuitive and easy-to-understand way of analyzing income or wealth distributions. A celebrated example is the top income shares featured in the work of Thomas Piketty and colleagues. Moreover, series of percentile shares, defined as differences between Lorenz ordinates, can be used to visualize whole distributions or changes in distributions. In this talk, I present a new command called pshare that computes and graphs percentile shares (or changes in percentile shares) from individual-level data. The command also provides confidence intervals and supports survey estimation.
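
As a minimal illustration of the quantity involved (not of the pshare command itself, whose Stata syntax is not reproduced here), the following Python sketch computes percentile shares as differences between Lorenz ordinates from individual-level data; the income distribution used in the example is synthetic.

import numpy as np

def percentile_shares(income, n_groups=5):
    """Percentile shares as differences between Lorenz ordinates:
    L(p) is the cumulative share of total income held by the poorest
    fraction p of the population, and the share of group k is
    L(p_k) - L(p_{k-1})."""
    x = np.sort(np.asarray(income, dtype=float))
    cum = np.cumsum(x) / x.sum()                 # Lorenz ordinates at each observation
    p = np.linspace(0.0, 1.0, n_groups + 1)[1:]  # group boundaries
    idx = np.ceil(p * len(x)).astype(int) - 1
    lorenz = np.concatenate(([0.0], cum[idx]))
    return np.diff(lorenz)                       # one share per group

# Example: quintile shares of a synthetic log-normal income distribution.
rng = np.random.default_rng(0)
print(percentile_shares(rng.lognormal(mean=10.0, sigma=1.0, size=10_000)))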

Relevance: 20.00%

Abstract:

Two studies among college students were conducted to evaluate appropriate measurement methods for etiological research on computing-related upper extremity musculoskeletal disorders (UEMSDs). A cross-sectional study among 100 graduate students evaluated the utility of symptom surveys (a VAS scale and a 5-point Likert scale) compared with two UEMSD clinical classification systems (the Gerr and Moore protocols). The two symptom measures were highly concordant (Lin's rho = 0.54; Spearman's r = 0.72); the two clinical protocols were moderately concordant (Cohen's kappa = 0.50). Sensitivity and specificity, summarized by Youden's J statistic, did not reveal much agreement between the symptom surveys and the clinical examinations, so it cannot be concluded that self-reported symptom surveys can be used as surrogates for clinical examinations. A pilot repeated-measures study conducted among 30 undergraduate students evaluated computing exposure measurement methods. Key findings were temporal variations in symptoms and increased odds of experiencing symptoms with every hour of computer use (adjusted OR = 1.1, p < .10) and with every stretch break taken (adjusted OR = 1.3, p < .10). When posture was measured using the Computer Use Checklist, a positive association with symptoms was observed (adjusted OR = 1.3, p < 0.10), while measuring posture using a modified Rapid Upper Limb Assessment produced unexpected and inconsistent associations. The findings were inconclusive in identifying an appropriate posture assessment or a superior conceptualization of computer use exposure. A cross-sectional study of 166 graduate students evaluated the comparability of graduate students with the undergraduate students to whom the College Computing & Health surveys had previously been administered. Fifty-five percent reported computing-related pain and functional limitations. In logistic regression analyses, years of computer use in graduate school and the number of years in school with weekly computer use of ≥ 10 hours were associated with pain within an hour of computing. The findings are consistent with the current literature on both undergraduate and graduate students.
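
For readers unfamiliar with the agreement statistics cited above, the following Python sketch computes sensitivity, specificity, Youden's J and Cohen's kappa for two binary classifications (for example, a symptom survey versus a clinical examination used as the reference standard); it operates on hypothetical data and is not the study's analysis code.

import numpy as np

def agreement_stats(survey_positive, exam_positive):
    """Sensitivity, specificity, Youden's J and Cohen's kappa for two
    binary classifications, treating the second as the reference."""
    s = np.asarray(survey_positive, dtype=bool)
    e = np.asarray(exam_positive, dtype=bool)
    tp, fn = np.sum(s & e), np.sum(~s & e)
    tn, fp = np.sum(~s & ~e), np.sum(s & ~e)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    youden_j = sensitivity + specificity - 1
    p_observed = (tp + tn) / len(s)
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / len(s) ** 2
    kappa = (p_observed - p_chance) / (1 - p_chance)   # chance-corrected agreement
    return sensitivity, specificity, youden_j, kappa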

Relevance: 20.00%

Abstract:

Fog deposition, precipitation, throughfall and stemflow were measured in a windward tropical montane cloud forest near Monteverde, Costa Rica, during a 65-day period in the dry season of 2003. Net fog deposition was measured directly using the eddy covariance (EC) method and amounted to 1.2 ± 0.1 mm/day (mean ± standard error). Fog water deposition was 5-9% of incident rainfall for the entire period, which is at the low end of previously reported values. Stable isotope compositions (δ18O and δ2H) were determined in a large number of samples of each water component. Mass-balance estimates of fog deposition were 1.0 ± 0.3 and 5.0 ± 2.7 mm/day (mean ± SE) when δ18O and δ2H were used as tracers, respectively. Comparisons between the direct fog deposition measurements and the results of the mass-balance model using δ18O as a tracer indicated that the latter may be a good tool for estimating fog deposition in the absence of direct measurements under many (but not all) conditions. At 506 mm, measured water inputs over the 65 days (fog plus rain) fell short by 46 mm of the canopy output of 552 mm (throughfall, stemflow and interception evaporation). This discrepancy is attributed to the underestimation of rainfall under conditions of high wind.
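
As background on how isotope tracers yield such estimates, a generic two-end-member mixing calculation is sketched below in Python; it is not necessarily the exact mass-balance formulation used in the study, and the example values are hypothetical.

def fog_fraction(delta_mixture, delta_rain, delta_fog):
    """Two-end-member isotope mixing: fraction of fog water in a mixed
    sample (e.g. throughfall), given the tracer value (d18O or d2H) of
    the mixture and of the two end members (rain and fog)."""
    return (delta_mixture - delta_rain) / (delta_fog - delta_rain)

# Hypothetical d18O values (permil): throughfall -3.0, rain -4.0, fog 0.0
print(fog_fraction(-3.0, -4.0, 0.0))   # -> 0.25, i.e. 25% fog water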

Relevance: 20.00%

Abstract:

We present a novel graphical user interface program, GrafLab (GRAvity Field LABoratory), for spherical harmonic synthesis (SHS), created in MATLAB®. The program allows the user to conveniently compute 38 different functionals of the geopotential up to ultra-high degrees and orders of the spherical harmonic expansion. For the most difficult part of the SHS, namely the evaluation of the fully normalized associated Legendre functions (fnALFs), we used three different approaches depending on the required maximum degree: (i) the standard forward column method (up to maximum degree 1800, in some cases up to degree 2190); (ii) the modified forward column method combined with Horner's scheme (up to maximum degree 2700); and (iii) extended-range arithmetic (up to an arbitrary maximum degree). For maximum degree 2190, the SHS with fnALFs evaluated using the extended-range arithmetic approach takes only approximately 2-3 times longer than its standard arithmetic counterpart, i.e. the standard forward column method. In GrafLab, the functionals of the geopotential can be evaluated on a regular grid or point-wise, and the input coordinates can either be read from a data file or entered manually. For computation on a regular grid we apply the lumped coefficients approach because of its significant time efficiency. Furthermore, if a full variance-covariance matrix of the spherical harmonic coefficients is available, the commission errors of the functionals can also be computed. When computing on a regular grid, the output functionals or their commission errors may be depicted on a map using an automatically selected cartographic projection.
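
As an illustration of the first of these approaches, the sketch below implements the standard forward column recursion for fully normalized associated Legendre functions in Python; it is a simplified reference implementation for moderate degrees, not GrafLab's MATLAB code, which additionally uses Horner's scheme and extended-range arithmetic for very high degrees.

import numpy as np

def fnalf(nmax, lat_rad):
    """Fully normalized associated Legendre functions P[n, m] up to
    degree and order nmax at a single latitude (radians), computed with
    the standard forward column recursion."""
    t, u = np.sin(lat_rad), np.cos(lat_rad)
    P = np.zeros((nmax + 1, nmax + 1))
    P[0, 0] = 1.0
    if nmax >= 1:
        P[1, 1] = np.sqrt(3.0) * u
    for m in range(2, nmax + 1):                       # sectorial terms
        P[m, m] = u * np.sqrt((2.0 * m + 1.0) / (2.0 * m)) * P[m - 1, m - 1]
    for m in range(0, nmax):                           # non-sectorial terms
        P[m + 1, m] = np.sqrt(2.0 * m + 3.0) * t * P[m, m]
        for n in range(m + 2, nmax + 1):
            a = np.sqrt((2.0 * n - 1.0) * (2.0 * n + 1.0) / ((n - m) * (n + m)))
            b = np.sqrt((2.0 * n + 1.0) * (n + m - 1.0) * (n - m - 1.0)
                        / ((n - m) * (n + m) * (2.0 * n - 3.0)))
            P[n, m] = a * t * P[n - 1, m] - b * P[n - 2, m]
    return P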

Relevance: 20.00%

Abstract:

Managing large medical image collections is an increasingly important and demanding issue in many hospitals and other medical settings. A huge amount of this information is generated daily, which requires robust and agile systems. In this paper we present a distributed multi-agent system capable of managing very large medical image datasets. In this approach, agents extract low-level information from images and store it in a data structure implemented in a relational database. The data structure can also store semantic information related to whole images and to particular regions. A distinctive aspect of our work is that a single image can be divided so that the resulting sub-images can be stored and managed separately by different agents, improving performance in data access and processing. The system also offers the possibility of applying region-based operations and filters to images, facilitating image classification. These operations can be performed directly on the data structures in the database.
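
The following Python sketch illustrates the core idea of storing separately managed sub-images in a relational structure; the table layout, column names and tile size are invented for the example and are not the schema described in the paper.

import sqlite3
import numpy as np

def store_tiles(image, image_id, tile=512, db_path="images.db"):
    """Split an image into sub-images and store one row per tile, so
    that different agents could later process or retrieve tiles
    independently."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS sub_image (
                       image_id TEXT, tile_row INTEGER, tile_col INTEGER,
                       mean_intensity REAL, pixels BLOB)""")
    h, w = image.shape[:2]
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            sub = image[r:r + tile, c:c + tile]
            con.execute("INSERT INTO sub_image VALUES (?, ?, ?, ?, ?)",
                        (image_id, r // tile, c // tile,
                         float(sub.mean()), sub.tobytes()))
    con.commit()
    con.close()

# Example with a synthetic 2048 x 2048 8-bit image.
store_tiles(np.random.default_rng(1).integers(0, 255, (2048, 2048), dtype=np.uint8),
            image_id="example-scan")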

Relevance: 20.00%

Abstract:

An immense quantity of images is generated daily in the field of biomedicine. Managing them requires robust and agile computer systems, which in turn need a large amount of computational resources. This article presents a cloud computing service capable of handling large collections of biomedical images. Thanks to this service, organizations and users can manage their biomedical images without needing to own large computing resources. The service uses a distributed multi-agent system in which images are processed and the regions they contain, together with their features, are extracted and stored in a data structure. A novel feature of the system is that a single image can be divided and the resulting sub-images stored separately by different agents. This feature helps improve the performance of the system when searching for and retrieving the stored images.

Relevance: 20.00%

Abstract:

The use of cloud computing is extending to all kinds of systems, including those that are part of critical infrastructures, and measuring their reliability is becoming more difficult. Computing is becoming the fifth utility, in part thanks to the use of cloud services. Cloud computing is now used by all types of systems and organizations, including critical infrastructure, creating hidden inter-dependencies on both public and private cloud models. This paper investigates the use of cloud computing by critical infrastructure systems and the reliability and continuity-of-service risks associated with its use by critical systems. Examples are presented of its use by different critical industries; although the use of cloud computing by such systems is not yet widespread, this paper describes the future risk it entails. The concepts of macro and micro dependability, and the model we introduce, are useful for defining inter-dependencies and for analyzing the resilience of systems that depend on other systems, specifically in the cloud model.

Relevance: 20.00%

Abstract:

In this paper we introduce a method to calculate the Hessenberg matrix of a sum of measures from the Hessenberg matrices of the component measures. Our method extends the spectral techniques used by G. Mantica to calculate the Jacobi matrix associated with a sum of measures from the Jacobi matrices of each of the measures. We apply this method to approximate the Hessenberg matrix associated with a self-similar measure and compare it with the result obtained by an earlier method for self-similar measures that uses a fixed-point theorem for moment matrices. Results are given for a series of classical examples of self-similar measures. Finally, we also apply the method introduced in this paper to some examples of sums of (not self-similar) measures, obtaining the exact values of the sections of the Hessenberg matrix.
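
For context (this is standard background on the objects involved, not the paper's construction): for a measure $\mu$ with orthonormal polynomials $p_n$, the Hessenberg matrix $D(\mu) = (d_{kn})$ represents multiplication by the variable in that basis, and for a measure supported on the real line it reduces to the tridiagonal Jacobi matrix of the three-term recurrence:

$$z\,p_n(z) = \sum_{k=0}^{n+1} d_{kn}\,p_k(z), \qquad x\,p_n(x) = b_{n+1}\,p_{n+1}(x) + a_n\,p_n(x) + b_n\,p_{n-1}(x).$$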

Relevance: 20.00%

Abstract:

The European Credit Transfer and Accumulation System (ECTS) is the credit system for higher education used in the European Higher Education Area (EHEA), which involves all the countries engaged in the Bologna Process. This paper describes a study, part of the project of the Bologna Experts Team-Spain, carried out with two aims: 1) designing procedures for the assessment of transferable competences; and 2) testing some basic psychometric properties that an assessment device with consequences for the subjects being evaluated needs to demonstrate. We focus on degrees in Computing. The sample of 20 students comprises first-year students from the Technical University of Madrid. In this paper, we report the results of the data analyses carried out so far on the reliability and validity of the task designed to measure problem solving.
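
As generic background on this kind of reliability analysis (the abstract does not state which statistics were used, so the choice of Cronbach's alpha here is purely illustrative), a minimal Python computation is:

import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an examinees-by-items score matrix."""
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]                                   # number of items
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances / total_variance)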

Relevance: 20.00%

Abstract:

This paper addresses the modelling and validation of an evolvable hardware architecture that can be mapped onto a 2D systolic structure implemented on commercial reconfigurable FPGAs. The adaptation capabilities of the architecture are exercised to validate its evolvability. The underlying proposal is the use of a library of reconfigurable components, characterised by their partial bitstreams, which are used by the evolutionary algorithm to find a solution to a given task. The evolution of image noise filters is selected as the proof-of-concept application. Results show that the computation speed of the resulting evolved circuit is higher than with the Virtual Reconfigurable Circuits approach, and this can be exploited in the evolution process by using dynamic reconfiguration.
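
A conceptual Python sketch of such an evolutionary loop is given below; the component library, fitness function and parameters are placeholders for illustration only, and the real system evolves partial bitstreams on the FPGA rather than Python objects.

import random

LIBRARY = ["median", "average", "identity", "maximum", "minimum"]   # placeholder components

def mutate(circuit, rate=0.1):
    """Replace each grid cell with a random library component with probability rate."""
    return [random.choice(LIBRARY) if random.random() < rate else c for c in circuit]

def evolve(fitness, grid_cells=16, generations=200, offspring=4):
    """(1 + lambda) evolution strategy over grids of components; fitness
    is assumed to return the filtering error of a candidate (lower is better)."""
    parent = [random.choice(LIBRARY) for _ in range(grid_cells)]
    best = fitness(parent)
    for _ in range(generations):
        for child in (mutate(parent) for _ in range(offspring)):
            error = fitness(child)
            if error <= best:
                parent, best = child, error
    return parent, best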

Relevance: 20.00%

Abstract:

We present a technique to estimate accurate speedups for parallel logic programs relatively independently of the characteristics of a given implementation or the underlying parallel hardware. The proposed technique is based on gathering accurate data describing one execution at run time, which is then fed to a simulator. Alternative schedulings are simulated and estimates computed for the corresponding speedups. A tool implementing these techniques is presented, and its predictions are compared to the performance of real systems, showing good correlation.
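
A much-simplified Python sketch of the scheduling-simulation idea follows: recorded task times are replayed on an idealized pool of processors with zero overheads to estimate the speedup. Dependencies between tasks, which the actual technique accounts for, are ignored here, and all names are illustrative.

import heapq

def estimate_speedup(task_times, n_workers):
    """Replay recorded task execution times on n_workers idealized
    processors (greedy longest-task-first, zero scheduling overhead)
    and return the estimated speedup over sequential execution."""
    workers = [0.0] * n_workers                  # finish time of each processor
    heapq.heapify(workers)
    for t in sorted(task_times, reverse=True):
        heapq.heappush(workers, heapq.heappop(workers) + t)
    return sum(task_times) / max(workers)

print(estimate_speedup([3.0, 2.0, 2.0, 1.5, 1.0, 0.5], n_workers=2))   # -> 2.0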

Relevance: 20.00%

Abstract:

Distributed parallel execution systems speed up applications by splitting tasks into processes whose execution is assigned to different receiving nodes in a high-bandwidth network. On the distributing side, a fundamental problem is grouping and scheduling such tasks so that each one involves sufficient computational cost compared to the task creation and communication costs and other practical overheads. On the receiving side, an important issue is to have some assurance of the correctness and characteristics of the code received, and also of the kind of load the particular task is going to pose, which can be specified by means of certificates. In this paper we present, in a tutorial way, a number of general solutions to these problems, and illustrate them through their implementation in the Ciao multi-paradigm language and program development environment. This system includes facilities for parallel and distributed execution, an assertion language for specifying complex program properties (including safety and resource-related properties), and compile-time and run-time tools for performing automated parallelization and resource control, as well as certification of programs with resource consumption assurances and efficient checking of such certificates.
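
The granularity problem on the distributing side can be illustrated with a small Python sketch of the kind of test involved; the cost model and all numbers are hypothetical and are not those used by the Ciao system.

def worth_spawning(estimated_cost_s, spawn_overhead_s, payload_bytes, bandwidth_bytes_per_s):
    """Spawn a task on a remote node only if its estimated computational
    cost exceeds the task-creation plus communication overheads."""
    communication_s = payload_bytes / bandwidth_bytes_per_s
    return estimated_cost_s > spawn_overhead_s + communication_s

# A 2 ms task shipping 4 kB over a 1 Gbit/s link is not worth spawning
# if task creation alone costs 5 ms.
print(worth_spawning(0.002, 0.005, 4096, 1e9 / 8))   # -> False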

Relevance: 20.00%

Abstract:

Performance studies of actual parallel systems usually tend to concentrate on the effectiveness of a given implementation. This is often done in the absolute, without quantitative reference to the potential parallelism contained in the programs from the point of view of the execution paradigm. We feel that studying the parallelism inherent to the programs is interesting, as it gives information about the best possible behavior of any implementation and thus allows the results obtained to be contrasted. We propose a method for obtaining ideal speedups for programs through a combination of sequential or parallel execution and simulation, together with the algorithms that allow the method to be implemented. Our approach is novel and, we argue, more accurate than previously proposed methods, in that a crucial part of the data, the execution times of tasks, is obtained from actual executions, while speedup is computed by simulation. This allows obtaining speedup (and other) data under controlled and ideal assumptions regarding issues such as the number of processors, the scheduling algorithm, overheads, etc. The results obtained can be used, for example, to evaluate the ideal parallelism that a program contains for a given model of execution and to compare such "perfect" parallelism to that obtained by a given implementation of that model. We also present a tool, IDRA, which implements the proposed method, and results obtained with IDRA for benchmark programs, which are then compared with those obtained in actual executions on real parallel systems.
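
To make the notion of ideal speedup concrete, the following Python sketch computes it for a recorded task graph under the idealized assumptions mentioned above (unbounded processors, zero overheads): total work divided by the critical-path length. The trace format and the example values are invented; IDRA itself works on traces produced by actual executions.

from functools import lru_cache

def ideal_speedup(task_times, dependencies):
    """task_times: {task: seconds}; dependencies: {task: [prerequisites]}.
    Ideal speedup = total work / critical-path length of the task graph."""
    @lru_cache(maxsize=None)
    def finish(task):
        prereqs = dependencies.get(task, [])
        return task_times[task] + max((finish(p) for p in prereqs), default=0.0)
    critical_path = max(finish(t) for t in task_times)
    return sum(task_times.values()) / critical_path

# Hypothetical trace: b and c can run in parallel after a; d joins them.
times = {"a": 1.0, "b": 2.0, "c": 3.0, "d": 1.0}
deps = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
print(ideal_speedup(times, deps))   # -> 1.4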