792 results for cloud-based computing
Abstract:
The evolution of commodity computing led to the possibility of efficient usage of interconnected machines to solve computationally intensive tasks, which were previously solvable only by using expensive supercomputers. This, however, required new methods for process scheduling and distribution that consider network latency, communication cost, heterogeneous environments and distributed computing constraints. An efficient distribution of processes over such environments requires an adequate scheduling strategy, as the cost of inefficient process allocation is unacceptably high. Therefore, knowledge and prediction of application behavior are essential to perform effective scheduling. In this paper, we overview the evolution of scheduling approaches, focusing on distributed environments. We also evaluate current approaches for process behavior extraction and prediction, aiming at selecting an adequate technique for online prediction of application execution. Based on this evaluation, we propose a novel model for application behavior prediction that considers the chaotic properties of such behavior and automatically detects critical execution points. The proposed model is applied and evaluated for process scheduling in cluster and grid computing environments. The obtained results demonstrate that prediction of process behavior is essential for efficient scheduling in large-scale, heterogeneous distributed environments, outperforming conventional scheduling policies by a factor of 10, and even more in some cases. Furthermore, the proposed approach proves to be efficient for online prediction due to its low computational cost and good precision.
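To make the scheduling role of behavior prediction concrete, the following minimal Python sketch uses classic exponential averaging to estimate a process's next CPU burst from its history. This is an illustrative low-cost predictor only, not the chaos-based model proposed in the paper, and the burst values are invented.

```python
# Illustrative predictor only: exponential averaging of observed CPU
# bursts (NOT the paper's chaos-based behavior model).
def predict_next_burst(history, alpha=0.5, initial=10.0):
    """Return the exponentially averaged estimate of the next burst:
    tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
    """
    tau = initial
    for t in history:
        tau = alpha * t + (1 - alpha) * tau
    return tau

bursts = [8.0, 12.0, 9.0, 11.0]        # invented burst lengths (ms)
print(predict_next_burst(bursts))      # estimate a scheduler could use
```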
Abstract:
This paper presents an automatic method to detect and classify weathered aggregates by assessing changes in color and texture. The method extracts aggregate features from images and automatically classifies them based on surface characteristics. The concept of entropy is used to extract features from the digital images. An analysis of the use of this concept is presented, and two classification approaches based on neural network architectures are proposed. The classification performance of the proposed approaches is compared to the results obtained by other algorithms commonly considered for classification purposes. The obtained results confirm that the presented method strongly supports the detection of weathered aggregates.
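As an illustration of the entropy concept used for feature extraction, the sketch below computes the Shannon entropy of a grayscale intensity histogram in plain Python; the pixel values are invented, and the paper's exact feature definition may differ.

```python
import math

def image_entropy(pixels):
    """Shannon entropy (bits) of a grayscale intensity histogram."""
    hist = {}
    for p in pixels:
        hist[p] = hist.get(p, 0) + 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist.values())

# Flat list of 8-bit intensities standing in for an aggregate-surface patch.
patch = [10, 10, 12, 200, 201, 199, 10, 12]
print(image_entropy(patch))
```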
Abstract:
Cloud computing means that computing power and IT resources in the form of servers, storage and local networks are made available via the internet, and the use of cloud computing has grown rapidly in recent years. The possibilities of cloud services have led to new challenges regarding trust in cloud services by opening up entirely new relationships between provider and customer. At present, trust in cloud services is a major problem for customers such as micro-enterprises and small and medium-sized enterprises. Our thesis aims to describe the trust and distrust of micro-enterprises and small and medium-sized enterprises towards cloud services, and to identify which criteria contribute to them. The results show that it is difficult to clearly define trust and distrust among companies, as their situations and operations differ. Thanks to our literature study combined with our interviews, we have formed a good picture of which criteria contribute to trust and distrust in cloud services. Based on our work, we have compiled a conceptual model that describes trust and distrust in cloud services.
Abstract:
The ever-increasing spurt in digital crimes such as image manipulation, image tampering, signature forgery, image forgery and illegal transactions has intensified the demand to combat these forms of criminal activity. In this direction, biometrics, the computer-based validation of a person's identity, is becoming more and more essential, particularly for high-security systems. The essence of biometrics is the measurement of a person's physiological or behavioral characteristics, which enables authentication of that person's identity. Biometric-based authentication is also becoming increasingly important in computer-based applications because the amount of sensitive data stored in such systems is growing. The new demands on biometric systems are robustness, high recognition rates, the capability to handle imprecision and uncertainties of a non-statistical kind, and great flexibility. It is exactly here that soft computing techniques come into play. The main aim of this write-up is to present a pragmatic view of applications of soft computing techniques in biometrics and to analyze their impact. It is found that soft computing has already made inroads, in terms of individual methods or in combination. Applications of varieties of neural networks top the list, followed by fuzzy logic and evolutionary algorithms. In a nutshell, soft computing paradigms are used for biometric tasks such as feature extraction, dimensionality reduction, pattern identification, pattern mapping and the like.
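As a toy illustration of one soft computing element in a biometric decision, the sketch below applies triangular fuzzy membership functions to a matcher score; the score, set boundaries and accept/reject rule are invented, not taken from any system surveyed.

```python
def triangular_membership(x, a, b, c):
    """Degree to which score x belongs to a fuzzy set peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

score = 0.72  # hypothetical matcher output in [0, 1]
genuine = triangular_membership(score, 0.5, 1.0, 1.5)    # "genuine" set
impostor = triangular_membership(score, -0.5, 0.0, 0.6)  # "impostor" set
print("accept" if genuine > impostor else "reject")
```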
Abstract:
Architectural description languages (ADLs) are used to specify a high-level, compositional view of a software application. ADLs usually come equipped with a rigorous state-transition style semantics, facilitating the specification and analysis of distributed and event-based systems. However, enterprise system architectures built upon newer middleware (implementations of Java's EJB specification, or Microsoft's COM+/.NET) require additional expressive power from an ADL. The TrustME ADL is designed to meet this need. In this paper, we describe several aspects of TrustME that facilitate the specification and analysis of middleware-based architectures for the enterprise.
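The following toy Python sketch illustrates a state-transition style semantics for a single component, of the kind an ADL might specify; the states, events and transition table are invented and do not reflect TrustME's actual notation.

```python
# Toy state-transition semantics for one architectural component
# (illustrative only; not TrustME's notation).
TRANSITIONS = {
    ("idle", "request"): "processing",
    ("processing", "reply"): "idle",
    ("processing", "fault"): "failed",
}

def run(trace, state="idle"):
    """Replay an event trace, failing on any undefined transition."""
    for event in trace:
        state = TRANSITIONS.get((state, event))
        if state is None:
            raise ValueError(f"illegal event {event!r}")
    return state

print(run(["request", "reply", "request", "fault"]))  # -> 'failed'
```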
Abstract:
HydroShare is an online, collaborative system being developed for the open sharing of hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access hydrologic data and models, retrieve them to their desktop, or perform analyses in a distributed computing environment that may include grid, cloud or high-performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about, and collaboration around, hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community, as well as models and workflows that require metadata on execution functionality. The HydroShare web interface and social media functions are being developed using the Drupal content management system. A geospatial visualization and analysis component enables searching, visualizing and analyzing geographic datasets. The integrated Rule-Oriented Data System (iRODS) is being used to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.
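A minimal sketch of the Resource concept is given below as a Python dataclass separating common system metadata from type-specific science metadata; the field names are illustrative assumptions, not HydroShare's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    # System metadata common to all resources (names are assumptions).
    identifier: str
    owner: str
    created: str
    resource_type: str = "GenericResource"
    # Science metadata specific to the resource type.
    science_metadata: dict = field(default_factory=dict)

ts = Resource("hs-0001", "alice", "2014-12-01",
              resource_type="TimeSeries",
              science_metadata={"variable": "discharge", "units": "m3/s"})
print(ts.resource_type, ts.science_metadata["variable"])
```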
Abstract:
Existing distributed hydrologic models are complex and computationally demanding for use as rapid-forecasting policy-decision tools, or even as classroom educational tools. In addition, platform dependence, specific input/output data structures and the lack of dynamic data interaction with pluggable software components inside existing proprietary frameworks restrict these models to specialized user groups. RWater is a web-based hydrologic analysis and modeling framework that utilizes the commonly used R software within the HUBzero cyberinfrastructure of Purdue University. RWater is designed as an integrated framework for distributed hydrologic simulation, along with subsequent parameter optimization and visualization schemes. RWater provides a platform-independent web-based interface, flexible data integration capacity, grid-based simulations and user extensibility. RWater uses RStudio to simulate hydrologic processes on raster-based data obtained through conventional GIS pre-processing. The program integrates the Shuffled Complex Evolution (SCE) algorithm for parameter optimization. Moreover, RWater enables users to produce different descriptive statistics and visualizations of the outputs at different temporal resolutions. The applicability of RWater will be demonstrated by application to two watersheds in Indiana for multiple rainfall events.
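To illustrate the calibrate-against-observations loop that such a framework automates, the Python sketch below fits one parameter of a toy runoff model by maximizing Nash-Sutcliffe efficiency; plain random search stands in for the SCE algorithm, and all data are invented.

```python
import random

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, a common hydrologic fit metric."""
    mean = sum(obs) / len(obs)
    return 1 - sum((o - s) ** 2 for o, s in zip(obs, sim)) / \
               sum((o - mean) ** 2 for o in obs)

def model(rain, k):
    """Toy linear runoff model standing in for a distributed model."""
    return [k * r for r in rain]

rain = [0.0, 5.0, 12.0, 3.0]   # invented rainfall series
obs = [0.0, 3.1, 7.4, 1.9]     # invented observed runoff
random.seed(1)
best_k, best_score = None, float("-inf")
for _ in range(1000):          # random search, not SCE itself
    k = random.uniform(0.0, 1.0)
    score = nse(obs, model(rain, k))
    if score > best_score:
        best_k, best_score = k, score
print(best_k, best_score)
```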
Abstract:
Ubiquitous computing raises new usability challenges that cut across design and development. We are particularly interested in environments enhanced with sensors, public displays and personal devices. How can prototypes be used to explore users' mobility and interaction, both explicit and implicit, to access services within these environments? Because of the potential cost of development and design failure, these systems must be explored using early assessment techniques and versions of the systems that would be disruptive if deployed in the target environment. These techniques are required to evaluate alternative solutions before making the decision to deploy the system on location. This is crucial for a successful development that anticipates potential user problems and reduces the cost of redesign. This thesis reports on the development of a framework for the rapid prototyping and analysis of ubiquitous computing environments that facilitates the evaluation of design alternatives. It describes APEX, a framework that brings together an existing 3D Application Server with a modelling tool. APEX-based prototypes enable users to navigate a virtual world simulation of the envisaged ubiquitous environment. By this means users can experience many of the features of the proposed design. Prototypes and their simulations are generated in the framework to help the developer understand how the user might experience the system. These are supported through three different layers: a simulation layer (using a 3D Application Server), a modelling layer (using a modelling tool) and a physical layer (using external devices and real users). APEX allows the developer to move between these layers to evaluate different features. It supports exploration of user experience through observation of how users might behave with the system, as well as enabling exhaustive analysis based on models. The models support checking of properties based on patterns. These patterns are based on ones that have been used successfully in interactive system analysis in other contexts. They help the analyst to generate and verify relevant properties. Where these properties fail, the scenarios suggested by the failure provide an important aid to redesign.
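As an illustration of pattern-based property checking over a prototype trace, the Python sketch below verifies a simple "response" pattern (every request is eventually answered); the event names are invented, and APEX's actual property language is not shown here.

```python
# Toy "response" property check: every 'request' event is eventually
# followed by a 'display' event (event names are invented).
def responds(trace, trigger="request", response="display"):
    pending = 0
    for event in trace:
        if event == trigger:
            pending += 1
        elif event == response and pending:
            pending -= 1
    return pending == 0

print(responds(["enter_room", "request", "display", "leave_room"]))  # True
print(responds(["request", "leave_room"]))                           # False
```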
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This work analyses the waveshapes of continuing currents and the parameters of M-components in positive cloud-to-ground (CG) flashes through high-speed GPS-synchronized videos. The dataset is composed only of long continuing currents (with duration longer than 40 ms) and was selected from more than 800 flashes recorded in Sao Jose dos Campos (45.864 degrees W, 23.215 degrees S) and Uruguaiana (29.806 degrees W, 57.005 degrees S), in Southeast and South Brazil, respectively, during the summers of 2003 to 2007. The videos are compared with data obtained by the Brazilian Lightning Location System (BrasilDAT) in order to determine the polarity of each flash and select only positive cases. There are only two studies of waveshapes of continuing currents in the literature. One is based on direct current measurements of triggered lightning, in which four different types of waveshapes were observed; the other is based on measurements of luminosity variations in high-speed videos of negative CG lightning, in which, besides the four types mentioned above, two additional types were observed. The present work is an extension of the latter, using the same method but now applied to obtain the waveshapes of positive CG lightning. As far as the authors know, this is the first report on M-components in positive continuing currents. We have also used the luminosity-versus-time graphs to observe their occurrence and measure some parameters (duration, elapsed time and time between two successive M-components), whose statistics are presented and compared in detail to the data for negative flashes. We have plotted a histogram of the M-component elapsed time over the total duration of the continuing current for positive flashes, which presented an exponential decay (correlation coefficient: 0.83), similar to what has been observed for negative flashes.
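For readers unfamiliar with the fitting step, the sketch below shows one common way to estimate the decay constant of an exponential histogram by log-linear least squares; the bin values are invented, not the paper's data.

```python
import math

# Log-linear fit of y = A * exp(-lam * t) to binned counts:
# regress ln(y) on t, so the slope estimates -lam.
t = [20, 60, 100, 140, 180]   # invented bin centres (ms)
y = [120, 70, 40, 22, 13]     # invented M-component counts per bin
ln_y = [math.log(v) for v in y]
n = len(t)
mt, my = sum(t) / n, sum(ln_y) / n
slope = sum((a - mt) * (b - my) for a, b in zip(t, ln_y)) / \
        sum((a - mt) ** 2 for a in t)
print("decay constant:", -slope, "1/ms")
```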
Abstract:
This work presents a methodology to analyze the first-swing transient stability of electric power systems using a neural network based on the adaptive resonance theory (ART) architecture, called the Euclidean ARTMAP neural network. ART architectures present plasticity and stability characteristics, which are very important for training and for executing the analysis quickly. The Euclidean ARTMAP version provides more accurate and faster solutions when compared to the fuzzy ARTMAP configuration. Three steps are necessary for the network to work: training, analysis and continuous training. The training step requires considerable processing effort, while the analysis is performed almost without computational effort. The proposed network allows addressing several topologies of the electric system at the same time; therefore, it is an alternative for real-time transient stability analysis of electric power systems. To illustrate the proposed neural network, an application is presented for a multi-machine electric power system composed of 10 synchronous machines, 45 buses and 73 transmission lines.
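A minimal sketch of the Euclidean matching idea is given below: an input is assigned to the nearest stored prototype if it passes a distance-based vigilance test, and otherwise founds a new category. This toy omits the supervised ARTMAP machinery, and all numbers are invented.

```python
import math

# Toy Euclidean ART-style category matching, not the full Euclidean
# ARTMAP training algorithm.
def classify(x, prototypes, vigilance=1.0):
    if prototypes:
        best = min(prototypes, key=lambda p: math.dist(x, p["w"]))
        if math.dist(x, best["w"]) <= vigilance:   # vigilance test
            return best["label"]
    prototypes.append({"w": x, "label": len(prototypes)})
    return prototypes[-1]["label"]

protos = []
print(classify([0.1, 0.2], protos))    # 0 (new category)
print(classify([0.15, 0.22], protos))  # 0 (within vigilance)
print(classify([5.0, 5.0], protos))    # 1 (new category)
```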
Abstract:
With the advance of the Cloud Computing paradigm, a single service offered by a cloud platform may not be enough to meet all application requirements. To fulfill such requirements, it may be necessary to use, instead of a single service, a composition of services that aggregates services provided by different cloud platforms. In order to generate aggregated value for the user, this composition of services provided by several Cloud Computing platforms requires a solution in terms of platform integration, which encompasses the manipulation of a wide number of non-interoperable APIs and protocols from different platform vendors. In this scenario, this work presents Cloud Integrator, a middleware platform for composing services provided by different Cloud Computing platforms. Besides providing an environment that facilitates the development and execution of applications that use such services, Cloud Integrator works as a mediator by providing mechanisms for building applications through the composition and selection of semantic Web services, taking into account metadata about the services, such as QoS (Quality of Service), prices, etc. Moreover, the proposed middleware platform provides an adaptation mechanism that can be triggered in case of failure or quality degradation of one or more services used by the running application, in order to ensure its quality and availability. In this work, through a case study consisting of an application that uses services provided by different cloud platforms, Cloud Integrator is evaluated in terms of the efficiency of the performed service composition, selection and adaptation processes, as well as the potential of using this middleware in heterogeneous computational cloud scenarios.
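As an illustration of QoS-aware service selection of the kind described, the sketch below scores candidate services by a weighted combination of availability, latency and price; the services, weights and scoring rule are invented, not Cloud Integrator's actual mechanism.

```python
# Invented candidate services exposing the same capability.
services = [
    {"name": "storageA", "availability": 0.999, "latency_ms": 80, "price": 0.10},
    {"name": "storageB", "availability": 0.995, "latency_ms": 40, "price": 0.07},
]

def score(s, w_avail=0.5, w_lat=0.3, w_price=0.2):
    # Higher availability is better; lower latency and price are better.
    return (w_avail * s["availability"]
            - w_lat * s["latency_ms"] / 100.0
            - w_price * s["price"])

best = max(services, key=score)
print("selected:", best["name"])
```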
Abstract:
Accurate long-term monitoring of total ozone is one of the most important requirements for identifying possible natural or anthropogenic changes in the composition of the stratosphere. For this purpose, the NDACC (Network for the Detection of Atmospheric Composition Change) UV-visible Working Group has made recommendations for improving and homogenizing the retrieval of total ozone columns from twilight zenith-sky visible spectrometers. These instruments, deployed all over the world at about 35 stations, allow measuring total ozone twice daily with limited sensitivity to stratospheric temperature and cloud cover. The NDACC recommendations address both the DOAS spectral parameters and the calculation of the air mass factors (AMF) needed for the conversion of O3 slant column densities into vertical column amounts. The most important improvement is the use of O3 AMF look-up tables calculated using the TOMS V8 (TV8) O3 profile climatology, which allows accounting for the dependence of the O3 AMF on the seasonal and latitudinal variations of the O3 vertical distribution. To investigate their impact on the retrieved ozone columns, the recommendations have been applied to measurements from the NDACC/SAOZ (Systeme d'Analyse par Observation Zenithale) network. The revised SAOZ ozone data from eight stations deployed at all latitudes have been compared to TOMS, GOME-GDP4, SCIAMACHY-TOSOMI, SCIAMACHY-OL3, OMI-TOMS and OMI-DOAS satellite overpass observations, as well as to those of collocated Dobson and Brewer instruments at Observatoire de Haute Provence (44 degrees N, 5.5 degrees E) and Sodankyla (67 degrees N, 27 degrees E), respectively. A significantly better agreement is obtained between SAOZ and correlative reference ground-based measurements after applying the new O3 AMFs. However, systematic seasonal differences between SAOZ and the satellite instruments remain. These are shown to originate mainly from (i) a possible problem in the satellite retrieval algorithms in dealing with the temperature dependence of the ozone cross-sections in the UV and with the solar zenith angle (SZA) dependence, (ii) zonal modulations and seasonal variations of tropospheric ozone columns not accounted for in the TV8 profile climatology, and (iii) uncertainty in the stratospheric ozone profiles at high latitude in winter in the TV8 climatology. For those measurements mostly sensitive to stratospheric temperature, like TOMS, OMI-TOMS, Dobson and Brewer, or to SZA, like SCIAMACHY-TOSOMI, the application of temperature and SZA corrections results in the almost complete removal of the seasonal difference with SAOZ, significantly improving the consistency between all ground-based and satellite total ozone observations.
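For context on the conversion step, the sketch below applies the basic DOAS relation VCD = SCD / AMF, interpolating the AMF from a look-up table indexed by solar zenith angle; the table values are invented, not the TV8-based tables.

```python
# Convert a slant column density to a vertical column using an AMF
# look-up table indexed by solar zenith angle (all values invented).
def interp(x, xs, ys):
    """Piecewise-linear interpolation of y(x) over sorted grid xs."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("SZA outside table")

sza_grid = [86.0, 88.0, 90.0, 91.0]   # solar zenith angles (degrees)
amf_grid = [12.0, 15.5, 17.8, 18.3]   # ozone air mass factors
scd = 7.1e19                          # slant column (molecules/cm^2)
amf = interp(89.0, sza_grid, amf_grid)
print("vertical column:", scd / amf)
```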