938 results for Optimistic data replication system


Relevance:

40.00%

Publisher:

Abstract:

Over 150 million cubic meters of sand-sized sediment has disappeared from the central region of the San Francisco Bay Coastal System during the last half century. This enormous loss may reflect numerous anthropogenic influences, such as watershed damming, bay-fill development, aggregate mining, and dredging. The reduction in Bay sediment also appears to be linked to a reduction in sediment supply and recent widespread erosion of adjacent beaches, wetlands, and submarine environments. A unique, multi-faceted provenance study was performed to definitively establish the primary sources, sinks, and transport pathways of beach-sized sand in the region, thereby identifying the activities and processes that directly limit supply to the outer coast. This integrative program is based on comprehensive surficial sediment sampling of the San Francisco Bay Coastal System, including the seabed, Bay floor, area beaches, adjacent rock units, and major drainages. Analyses of sample morphometrics and biological composition (e.g., Foraminifera) were then integrated with a suite of tracers, including 87Sr/86Sr and 143Nd/144Nd isotopes, rare earth elements, semi-quantitative X-ray diffraction mineralogy, and heavy minerals, and with process-based numerical modeling, in situ current measurements, and bedform asymmetry, to robustly determine the provenance of beach-sized sand in the region.

Relevance:

40.00%

Publisher:

Abstract:

Advances in communication, navigation, and imaging technologies are expected to fundamentally change the methods currently used to collect data. Electronic data interchange strategies will also minimize data handling and automatically update files at the point of capture. This report summarizes the outcome of using a multi-camera platform as a method to collect roadway inventory data. It defines basic system requirements as expressed by users who applied these techniques, and examines how the application of the technology met those needs. A sign inventory case study was used to determine the advantages of creating and maintaining such a database, which also provides the capability to monitor performance criteria for a Safety Management System. The project found that at least 75 percent of the data elements needed for a sign inventory can be gathered by viewing a high-resolution image.

Relevance:

40.00%

Publisher:

Abstract:

In today’s big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, the plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks, and many others. Interactive querying and large-scale analytics are being increasingly used to derive value from this big data. A large portion of this data is stored and processed in the Cloud due to the several advantages it provides, such as scalability, elasticity, availability, low cost of ownership, and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage, and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments.

In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built, and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime.

In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics.

Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud. The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, and link prediction. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
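
A minimal sketch may make the workload-aware placement idea behind SWORD concrete. This is not the system's actual algorithm; it is a greedy Python illustration of co-locating data items that frequently appear in the same transactions, so that fewer transactions span partitions. The transaction log, partition count, and capacity limit are hypothetical.

```python
# Greedy co-location sketch (illustrative, not SWORD): items that are often
# touched by the same transaction are placed on the same partition, reducing
# the number of distributed (multi-partition) transactions.
from collections import defaultdict
from itertools import combinations

def place(transactions, n_partitions, capacity):
    """Assign items to partitions, favoring co-access affinity.

    Assumes n_partitions * capacity covers all distinct items.
    """
    # Affinity = how often two items appear in the same transaction.
    affinity = defaultdict(int)
    for txn in transactions:
        for a, b in combinations(sorted(set(txn)), 2):
            affinity[(a, b)] += 1

    partition_of = {}
    load = [0] * n_partitions
    items = sorted({i for txn in transactions for i in txn})
    for item in items:
        # Score each partition by the affinity of `item` to items already there.
        scores = [0] * n_partitions
        for (a, b), w in affinity.items():
            other = b if a == item else a if b == item else None
            if other is not None and other in partition_of:
                scores[partition_of[other]] += w
        # Pick the highest-affinity partition that still has spare capacity.
        candidates = [p for p in range(n_partitions) if load[p] < capacity]
        best = max(candidates, key=lambda p: (scores[p], -load[p]))
        partition_of[item] = best
        load[best] += 1
    return partition_of

# Hypothetical workload: u1 and u2 co-occur twice, so they get co-located.
txns = [["u1", "u2"], ["u1", "u2"], ["u3", "u4"], ["u2", "u3"]]
print(place(txns, n_partitions=2, capacity=2))
```

In this toy run the two transactions over u1 and u2 end up single-partition, which is exactly the overhead such placement tries to avoid; the real system additionally uses replication and runtime techniques not shown here.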

Relevance:

40.00%

Publisher:

Abstract:

Background: Understanding transcriptional regulation by genome-wide microarray studies can contribute to unraveling complex relationships between genes. Attempts to standardize the annotation of microarray data include the Minimum Information About a Microarray Experiment (MIAME) recommendations, the MAGE-ML format for data interchange, and the use of controlled vocabularies or ontologies. Existing software systems for microarray data analysis implement these standards only partially and are often hard to use and extend. Integration of genomic annotation data and other sources of external knowledge using open standards is therefore a key requirement for future integrated analysis systems. Results: The EMMA 2 software has been designed to resolve shortcomings with respect to full MAGE-ML and ontology support, and it makes use of modern data integration techniques. We present a software system that features comprehensive data analysis functions for spotted arrays and for the most common synthesized oligo arrays, such as Agilent, Affymetrix, and NimbleGen. The system is based on the full MAGE object model. Analysis functionality is based on R and Bioconductor packages and can make use of a compute cluster for distributed services. Conclusion: Our model-driven approach of automatically implementing a full MAGE object model provides high flexibility and compatibility. Data integration via SOAP-based web services is advantageous in a distributed client-server environment, as the collaborative analysis of microarray data is gaining more and more relevance in international research consortia. The adequacy of the EMMA 2 software design and implementation has been proven by its application in many distributed functional genomics projects. Its scalability makes the current architecture suited for extensions towards future transcriptomics methods based on high-throughput sequencing approaches, which have much higher computational requirements than microarrays.
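
Since MAGE-ML is an XML interchange format, a client can inspect experiment annotation with any standard XML parser. The fragment below only illustrates that idea; the tag and attribute names are simplified placeholders, not the exact MAGE-ML schema.

```python
# Illustrative only: walking a simplified, MAGE-ML-like XML document with the
# standard library. Real MAGE-ML documents are far richer than this.
import xml.etree.ElementTree as ET

doc = """<MAGE-ML>
  <Experiment identifier="E1" name="wildtype-vs-mutant">
    <BioAssay identifier="A1" name="array-01"/>
    <BioAssay identifier="A2" name="array-02"/>
  </Experiment>
</MAGE-ML>"""

root = ET.fromstring(doc)
for exp in root.iter("Experiment"):
    print("experiment:", exp.get("name"))
    for assay in exp.iter("BioAssay"):
        # Each bioassay carries its own identifier and annotation.
        print("  bioassay:", assay.get("identifier"), assay.get("name"))
```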

Relevance:

40.00%

Publisher:

Abstract:

Ultrasound is the first-line examination for the identification and characterization of adnexal tumors. Several methods of differential diagnosis have been described, including the observer's subjective assessment, simple descriptive indices, and mathematically developed indices such as logistic regression models; subjective assessment by an experienced examiner remains the best method of discriminating between malignant and benign tumors. However, given the subjectivity inherent in this assessment, it became necessary to establish a standardized nomenclature and a classification that would facilitate the communication of results and the corresponding surveillance recommendations. The aim of this article is to summarize and compare different methods of assessment and classification of adnexal tumors, namely the models of the International Ovarian Tumor Analysis (IOTA) group and the Gynecologic Imaging Report and Data System (GI-RADS) classification, in terms of diagnostic performance and usefulness in clinical practice.
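
To illustrate what a "mathematically developed index" of this kind looks like: the IOTA LR2 model is a logistic regression over ultrasound features. The sketch below shows only the general form; the predictor names and coefficients are hypothetical placeholders, not the published LR2 values.

```python
# Hedged sketch of a logistic-regression risk index for adnexal tumors.
# Feature names and coefficients are invented for illustration.
import math

def malignancy_risk(features, coefficients, intercept):
    """Logistic model: risk = 1 / (1 + exp(-(intercept + sum(b_i * x_i))))."""
    z = intercept + sum(coefficients[name] * value
                        for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical example inputs (not published model values).
coeffs = {"age": 0.02, "solid_tumor": 1.1, "ascites": 0.9, "papillary_flow": 1.2}
case = {"age": 55, "solid_tumor": 1, "ascites": 0, "papillary_flow": 1}
print(f"estimated risk: {malignancy_risk(case, coeffs, intercept=-5.0):.2f}")
```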

Relevance:

40.00%

Publisher:

Abstract:

A significant amount of Expendable Bathythermograph (XBT) data has been collected in the Mediterranean Sea since 1999 in the framework of operational oceanography activities. The management and storage of such a volume of data pose significant challenges and opportunities. The SeaDataNet project, a pan-European infrastructure for marine data diffusion, provides a convenient way to avoid dispersion of these temperature vertical profiles and to facilitate access for a wider public. The XBT data flow is described, along with the recent improvements in the quality-check procedures and the consistency of the available historical data set. The main features of the SeaDataNet services and the advantages of using this system for long-term data archiving are presented. Finally, a focus on the Ligurian Sea is included in order to provide an example of the kind of information and final products, devoted to different users, that can easily be derived from the SeaDataNet web portal.

Relevance:

40.00%

Publisher:

Abstract:

In recent years, the 380V DC and 48V DC distribution systems have been extensively studied for the latest data centers. It is widely believed that the 380V DC system is a very promising candidate because of its lower cable cost compared to the 48V DC system. However, previous studies have not adequately addressed the low reliability of 380V DC systems caused by the large number of series-connected batteries. In this thesis, a quantitative comparison of the two systems is presented in terms of efficiency, reliability, and cost. A new multi-port DC UPS with both a high-voltage output and a low-voltage output is proposed. When utility AC is available, it delivers power to the load through its high-voltage output and charges the battery through its low-voltage output. When utility AC is off, it boosts the low battery voltage and delivers power to the load from the battery. Thus, the advantages of both systems are combined while their disadvantages are avoided. High efficiency is also achieved, as only one converter is working in either situation. Details of the design and analysis of the new UPS are presented.

For the main AC-DC part of the new UPS, a novel bridgeless three-level single-stage AC-DC converter is proposed. It eliminates the auxiliary circuit for balancing the capacitor voltages and the two bridge rectifier diodes of the previous topology. Zero-voltage switching, high power factor, and low component stresses are achieved with this topology. Compared to previous topologies, the proposed converter has lower cost, higher reliability, and higher efficiency. The steady-state operation of the converter is analyzed and a decoupled model is proposed for it. For the battery-side converter, part of the new UPS, a ZVS bidirectional DC-DC converter based on self-sustained oscillation control is proposed. Frequency control is used to ensure ZVS operation of all four switches, and phase-shift control is employed to regulate the converter output power. A detailed analysis of the steady-state operation and design of the converter is presented. Theoretical, simulation, and experimental results are presented to verify the effectiveness of the proposed concepts.
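
A back-of-the-envelope calculation clarifies the cable-cost argument: for the same delivered power, a 380V bus carries about one-eighth the current of a 48V bus, cutting I²R conduction loss in a given cable by a factor of roughly sixty. The numbers below are assumed for illustration, not taken from the thesis.

```python
# Cable conduction loss comparison (assumed numbers): P_loss = I^2 * R,
# with I = P / V, so a higher bus voltage sharply reduces loss per cable.

def cable_loss(power_w, bus_voltage_v, cable_resistance_ohm):
    current = power_w / bus_voltage_v
    return current ** 2 * cable_resistance_ohm

P = 10_000   # 10 kW rack feed (assumed)
R = 0.01     # 10 mOhm cable run (assumed)
for v in (380, 48):
    print(f"{v:>3} V bus: I = {P / v:6.1f} A, loss = {cable_loss(P, v, R):7.1f} W")
```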

Relevance:

40.00%

Publisher:

Abstract:

We present the extraction and processing of the IUE low-dispersion spectra within the framework of the ESA “IUE Newly Extracted Spectra” (INES) system. Weak points of SWET, the optimal-extraction implementation used to produce the NEWSIPS output products (extracted spectra), are discussed, and the procedures implemented in INES to solve these problems are outlined. The most relevant modifications are: 1) the use of a new noise model, 2) a more accurate representation of the spatial profile of the spectrum, and 3) a more reliable determination of the background. The INES extraction also includes a correction for the contamination by solar light in long-wavelength spectra. Examples showing the improvements obtained in INES with respect to SWET are described. Finally, the linearity and repeatability characteristics of INES data are evaluated and the validity of the errors provided by the extraction is discussed.
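
For readers unfamiliar with optimal extraction, the following sketch captures the underlying Horne-style weighting that implementations like SWET are variations of: each pixel in a cross-dispersion cut is weighted by the spatial profile over the noise variance, after background subtraction. This is a generic illustration with toy numbers, not the INES code.

```python
# Optimal extraction for one wavelength bin (generic sketch, toy data).
import numpy as np

def optimal_extract(column, profile, variance, background):
    """Horne-style weighted flux estimate and its variance."""
    data = column - background            # background-subtracted counts
    weights = profile / variance          # profile-over-variance weights
    flux = np.sum(weights * data) / np.sum(weights * profile)
    var_flux = 1.0 / np.sum(profile ** 2 / variance)
    return flux, var_flux

# Toy cross-dispersion column: a peaked spatial profile plus noise (assumed).
profile = np.array([0.05, 0.25, 0.40, 0.25, 0.05])
column = 100.0 * profile + 10.0 + np.array([1.0, -2.0, 0.5, 1.5, -0.5])
flux, var = optimal_extract(column, profile,
                            variance=np.full(5, 4.0), background=10.0)
print(f"flux = {flux:.1f} +/- {np.sqrt(var):.1f}")
```

This is why the three INES modifications matter: the noise model sets the variance weights, the spatial profile sets the per-pixel weights, and the background enters before any weighting is applied.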

Relevance:

40.00%

Publisher:

Abstract:

The research activities involved the application of Geomatic techniques in the Cultural Heritage field, following the development of two themes. Firstly, the application of high-precision surveying techniques for the restoration and interpretation of relevant monuments and archaeological finds. The main case regards the activities for the generation of a high-fidelity 3D model of the Fountain of Neptune in Bologna. In this work, aimed at the restoration of the artifact, both the geometric and radiometric aspects were crucial. The final product was the basis of a 3D information system, a shared tool where the different figures involved in the restoration activities contributed in a multidisciplinary approach. Secondly, the arrangement of 3D databases for a Building Information Modeling (BIM) approach, in a process which involves the generation and management of digital representations of the physical and functional characteristics of historical buildings, towards a so-called Historical Building Information Model (HBIM). A first application was conducted for the San Michele in Acerboli church in Santarcangelo di Romagna. The survey was performed by integrating classical and modern Geomatic techniques, and the point cloud representing the church was used for the development of an HBIM model, where the relevant information connected to the building could be stored and georeferenced. A second application regards the domus of Obellio Firmo in Pompeii, also surveyed by integrating classical and modern Geomatic techniques. A historical analysis permitted the definition of construction phases and the organization of a database of materials and constructive elements. The goal is to obtain a federated model able to manage the different aspects: documentary, analytical, and reconstructive.

Relevance:

40.00%

Publisher:

Abstract:

Intelligent systems are now inherent to society, supporting a synergistic human-machine collaboration. Beyond economic and climate factors, energy consumption is strongly affected by the performance of computing systems, and poor software functioning may invalidate any improvement attempt. In addition, data-driven machine learning algorithms are the basis of human-centered applications, and their interpretability is one of the most important features of computational systems. Software maintenance is a critical discipline for supporting automatic and life-long system operation. As most software registers its inner events by means of logs, log analysis is an approach to keeping systems operational. Logs are Big Data assembled in large-flow streams, and are unstructured, heterogeneous, imprecise, and uncertain. This thesis addresses fuzzy and neuro-granular methods to provide maintenance solutions applied to anomaly detection (AD) and log parsing (LP), dealing with data uncertainty and identifying ideal time periods for detailed software analyses. LP provides a deeper semantic interpretation of the anomalous occurrences. The solutions evolve over time and are general-purpose, being highly applicable, scalable, and maintainable. Granular classification models, namely the Fuzzy set-Based evolving Model (FBeM), the evolving Granular Neural Network (eGNN), and the evolving Gaussian Fuzzy Classifier (eGFC), are compared on the AD problem. The evolving Log Parsing (eLP) method is proposed to approach automatic parsing of system logs. All the methods perform recursive mechanisms to create, update, merge, and delete information granules according to the data behavior. For the first time in the evolving intelligent systems literature, the proposed method, eLP, is able to process streams of words and sentences. Regarding AD accuracy, FBeM achieved (85.64 ± 3.69)%; eGNN reached (96.17 ± 0.78)%; eGFC obtained (92.48 ± 1.21)%; and eLP reached (96.05 ± 1.04)%. Besides being competitive, eLP additionally generates a log grammar and presents a higher level of model interpretability.
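
The following Python sketch illustrates the shared mechanism of such evolving granular classifiers: interval granules are created and expanded as labeled samples stream in. It is a simplified illustration, not FBeM, eGNN, eGFC, or eLP themselves (those also merge and delete granules), and the expansion rule and parameters are assumed.

```python
# Toy evolving granular classifier: each granule is an axis-aligned interval
# box with a class label; learning expands a compatible same-label granule or
# creates a new one. All parameters are illustrative.
import numpy as np

class EvolvingGranularClassifier:
    def __init__(self, max_width=0.3):
        self.max_width = max_width   # cap on granule expansion (assumed rule)
        self.granules = []           # each: {"lo", "hi", "label"}

    def _fits(self, g, x):
        lo, hi = np.minimum(g["lo"], x), np.maximum(g["hi"], x)
        return np.all(hi - lo <= self.max_width)

    def learn(self, x, label):
        """Expand a compatible same-label granule, or create a new one."""
        for g in self.granules:
            if g["label"] == label and self._fits(g, x):
                g["lo"] = np.minimum(g["lo"], x)
                g["hi"] = np.maximum(g["hi"], x)
                return
        self.granules.append({"lo": x.copy(), "hi": x.copy(), "label": label})

    def predict(self, x):
        """Label of the granule whose center is nearest to x."""
        centers = [(np.linalg.norm(x - (g["lo"] + g["hi"]) / 2), g["label"])
                   for g in self.granules]
        return min(centers)[1] if centers else None

clf = EvolvingGranularClassifier()
for x, y in [([0.1, 0.1], "normal"), ([0.15, 0.2], "normal"),
             ([0.9, 0.8], "anomaly")]:
    clf.learn(np.array(x), y)
print(clf.predict(np.array([0.12, 0.15])), clf.predict(np.array([0.85, 0.9])))
```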

Relevance:

30.00%

Publisher:

Abstract:

The mesoporous SBA-15 silica, with uniform hexagonal pores, narrow pore size distribution, and tuneable pore diameter, was organofunctionalized with a glutaraldehyde-bridged silylating agent. The precursor and its derivative silicas were loaded with ibuprofen for controlled delivery in simulated biological fluids. The synthesized silicas were characterized by elemental analysis, infrared spectroscopy, ¹³C and ²⁹Si solid-state NMR spectroscopy, nitrogen adsorption, X-ray diffractometry, thermogravimetry, and scanning electron microscopy. Surface functionalization with the amine-containing bridged hydrophobic structure significantly decreased the surface area, from 802.4 to 63.0 m² g⁻¹, and the pore diameter, from 8.0 to 6.0 nm, which ultimately increased the drug-loading capacity from 18.0% up to 28.3% and gave a very slow release of ibuprofen over a period of 72.5 h. The in vitro drug release demonstrated that SBA-15 presented the fastest release, from 25% to 27%, while SBA-15GA gave nearly 10% of drug release in all fluids during 72.5 h. The Korsmeyer-Peppas model best fits the release data, with a Fickian diffusion mechanism and zero-order kinetics for the synthesized mesoporous silicas. Both pore size and hydrophobicity influenced the rate of the release process, indicating that the chemically modified silica can be suggested for designing formulations with slow, constant release over a defined period, to avoid repeated administration.
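
For reference, the Korsmeyer-Peppas model mentioned above relates the fraction of drug released at time t to a power law in t (standard form):

```latex
% Korsmeyer-Peppas power law: M_t / M_infinity is the fraction of drug
% released at time t, k is a kinetic constant, and the release exponent n
% indicates the transport mechanism (Fickian diffusion vs. anomalous
% transport), as fitted to the release data above.
\[
  \frac{M_t}{M_\infty} = k \, t^{\,n}
\]
```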

Relevance:

30.00%

Publisher:

Abstract:

To evaluate the antimicrobial efficacy of Clearfil SE Protect (CP) and Clearfil SE Bond (CB), after curing and rinsing, against five individual oral microorganisms as well as a mixed bacterial culture prepared from the selected test organisms. Bacterial suspensions were prepared from single species of Streptococcus mutans, Streptococcus sobrinus, Streptococcus gordonii, Actinomyces viscosus, and Lactobacillus lactis, as well as mixed bacterial suspensions from these organisms. Dentin bonding system discs (6 mm × 2 mm) were prepared, cured, washed, and placed in the bacterial suspensions of single or multiple species for 15, 30, and 60 min. MTT, Live/Dead bacterial viability (antibacterial effect), and XTT (metabolic activity) assays were used to test the two dentin bonding systems' antibacterial effects. All assays were done in triplicate and each experiment was repeated at least three times. Data were submitted to ANOVA and Scheffé's F-test (5%). More than 40% bacterial killing was seen within 15 min, and the killing progressed with increasing incubation time with CP discs. However, a longer (60 min) incubation period was required by CP to achieve a similar antimicrobial effect against the mixed bacterial suspension. CB had no significant effect on the viability or metabolic activity of the test microorganisms when compared to the control bacterial culture. CP was significantly effective in reducing the viability and metabolic activity of the test organisms. The results demonstrated the antimicrobial efficacy of CP on both single- and multi-species bacterial cultures. CP may be beneficial in reducing bacterial infections in cavity preparations in clinical dentistry.