960 results for Data Flows


Relevance: 20.00%

Abstract:

An investigation of the construction data management needs of the Florida Department of Transportation (FDOT) with regard to XML standards, including the development of a data dictionary and data mapping. A review of existing XML schemas indicated that specific XML schemas needed to be developed, and XML schemas were subsequently developed for all FDOT construction data management processes. Additionally, data entry, approval, and data retrieval applications were developed for payroll compliance reporting and pile quantity payment.
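
As a hedged illustration of what such schema-driven data management can look like in practice, the sketch below validates a toy payroll-compliance record against a miniature XML schema using Python's lxml. The schema, element names, and values are invented for illustration; they are not FDOT's actual schemas.

```python
# Hypothetical sketch: validating a payroll-compliance record against an XML
# schema, in the spirit of the data-management processes described above.
# Schema and element names are illustrative, not FDOT's actual schemas.
from lxml import etree

schema_doc = etree.XML(b"""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="PayrollRecord">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="EmployeeId" type="xs:string"/>
        <xs:element name="HoursWorked" type="xs:decimal"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
""")
schema = etree.XMLSchema(schema_doc)

record = etree.XML(b"""
<PayrollRecord>
  <EmployeeId>E-1042</EmployeeId>
  <HoursWorked>38.5</HoursWorked>
</PayrollRecord>
""")
print(schema.validate(record))  # True if the record conforms to the schema
```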

Relevance: 20.00%

Abstract:

We consider the following problem: users in a dynamic group store their encrypted documents on an untrusted server and wish to retrieve documents containing some keywords without any loss of data confidentiality. In this paper, we investigate common secure indices, which allow multiple users in a dynamic group to securely retrieve the encrypted documents shared among the group members without re-encrypting them. We give a formal definition of a common secure index for conjunctive keyword-based retrieval over encrypted data (CSI-CKR), define the security requirements for CSI-CKR, and construct a CSI-CKR based on dynamic accumulators, Paillier's cryptosystem and blind signatures. The security of the proposed scheme is proved under the strong RSA and co-DDH assumptions.
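
One of the building blocks named above, Paillier's cryptosystem, can be sketched compactly. The toy Python example below uses tiny hard-coded primes (wholly insecure) purely to show encryption, decryption, and the additive homomorphism; it is not the paper's CSI-CKR construction.

```python
# Toy sketch of Paillier's cryptosystem, one building block of the CSI-CKR
# construction above. Tiny hard-coded primes for illustration only -- a real
# deployment needs large random primes.
import math, random

p, q = 17, 19                      # toy primes
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)       # lambda = lcm(p-1, q-1)
g = n + 1                          # standard choice of generator

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:     # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # m = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) / n
    u = (pow(c, lam, n2) - 1) // n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (u * mu) % n

c1, c2 = encrypt(7), encrypt(5)
assert decrypt((c1 * c2) % n2) == 12   # Enc(7) * Enc(5) decrypts to 7 + 5
```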

Relevance: 20.00%

Abstract:

This article presents field applications and validations of the controlled Monte Carlo data generation scheme. This scheme was previously derived to help the Mahalanobis squared distance–based damage identification method cope with data-shortage problems, which often cause inadequate data multinormality and unreliable identification outcomes. Real-vibration datasets from two actual civil engineering structures with such data (and identification) problems are selected as the test objects and are shown to need enhancement to consolidate their condition. By utilizing the robust probability measures of the data condition indices in controlled Monte Carlo data generation, together with a statistical sensitivity analysis of the Mahalanobis squared distance computational system, well-conditioned synthetic data generated by an optimal controlled Monte Carlo data generation configuration can be evaluated without bias against data generated by other set-ups and against the original data. The analysis results reconfirm that controlled Monte Carlo data generation is able to overcome the shortage of observations, improve data multinormality and enhance the reliability of the Mahalanobis squared distance–based damage identification method, particularly with respect to false-positive errors. The results also highlight the dynamic structure of controlled Monte Carlo data generation, which makes the scheme adaptive to any type of input data with any (original) distributional condition.
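
A minimal sketch of the Mahalanobis squared distance index that the scheme supports is given below. The synthetic-data step here is plain multivariate-normal resampling, a stand-in for the paper's controlled generation; the data, dimensions, and shift are invented for illustration.

```python
# Minimal sketch of the Mahalanobis squared distance (MSD) damage index.
# The synthetic baseline is plain multivariate-normal resampling -- a
# stand-in for controlled Monte Carlo data generation, used only to show
# where the extra samples enter the computation.
import numpy as np

rng = np.random.default_rng(0)
baseline = rng.normal(size=(30, 8))               # 30 observations, 8 features
synthetic = rng.multivariate_normal(
    baseline.mean(axis=0), np.cov(baseline, rowvar=False), size=500)
training = np.vstack([baseline, synthetic])       # enlarged training set

mu = training.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(training, rowvar=False))

def msd(x):
    d = x - mu
    return d @ cov_inv @ d                        # Mahalanobis squared distance

healthy = rng.normal(size=8)
damaged = rng.normal(size=8) + 2.0                # shifted feature vector
print(msd(healthy), msd(damaged))                 # damaged case scores higher
```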

Relevance: 20.00%

Abstract:

A dynamic accumulator is an algorithm that merges a large set of elements into a constant-size value such that, for each accumulated element, there is a witness confirming that the element was included in the value, with the property that elements can be dynamically added to and deleted from the original set. Wang et al. presented a dynamic accumulator for batch updates at ICICS 2007; however, their construction suffers from two serious problems. We analyze these problems and propose a way to repair the scheme. We then use the accumulator to construct a new scheme for common secure indices with conjunctive keyword-based retrieval.
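
The definition above can be illustrated with a toy RSA accumulator in Python. The modulus, base, and prime-encoded elements below are invented for illustration; deletion and the batch updates discussed in the paper additionally require trapdoor information or update algorithms not shown here.

```python
# Toy RSA accumulator matching the definition above: elements (mapped to
# primes) are folded into one constant-size value, and a witness for x is
# the accumulation of all other elements. Tiny modulus, insecure.
N = 3233                     # toy RSA modulus (61 * 53)
g = 2                        # public base
primes = [3, 5, 7, 11]       # elements, represented as primes

def accumulate(elems):
    acc = g
    for e in elems:
        acc = pow(acc, e, N)
    return acc

acc = accumulate(primes)

# Witness for element 7: accumulate everything except 7.
wit = accumulate([e for e in primes if e != 7])
assert pow(wit, 7, N) == acc          # witness check: wit^x == acc (mod N)

# Dynamic addition: fold a new element in; old witnesses update the same way.
acc2 = pow(acc, 13, N)
wit2 = pow(wit, 13, N)
assert pow(wit2, 7, N) == acc2
```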

Relevance: 20.00%

Abstract:

A catchment-scale multivariate statistical analysis of hydrochemistry enabled assessment of interactions between alluvial groundwater and Cressbrook Creek, an intermittent drainage system in southeast Queensland, Australia. Hierarchical cluster analyses and principal component analysis were applied to time-series data to evaluate the hydrochemical evolution of groundwater during periods of extreme drought and severe flooding. A simple three-dimensional geological model was developed to conceptualise the catchment morphology and the stratigraphic framework of the alluvium. The alluvium forms a two-layer system with a basal coarse-grained layer overlain by a clay-rich low-permeability unit. In the upper and middle catchment, alluvial groundwater is chemically similar to streamwater, particularly near the creek (reflected by high HCO3/Cl and K/Na ratios and low salinities), indicating a high degree of connectivity. In the lower catchment, groundwater is more saline with lower HCO3/Cl and K/Na ratios, particularly during dry periods. Groundwater salinity substantially decreased following severe flooding in 2011, notably in the lower catchment, confirming that flooding is an important mechanism both for recharge and for maintaining groundwater quality. The integrated approach used in this study enabled effective interpretation of hydrological processes and can be applied to a variety of hydrological settings to synthesise and evaluate large hydrochemical datasets.
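
A minimal sketch of the statistical workflow (standardise, hierarchical clustering, PCA) is shown below using scipy and scikit-learn. The data matrix and analyte count are placeholders, not the Cressbrook Creek dataset.

```python
# Minimal sketch of the workflow: hierarchical clustering plus PCA applied
# to a hydrochemical matrix (samples x analytes). Random placeholder data;
# real analytes would be e.g. HCO3, Cl, K, Na concentrations.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 6))            # 40 samples, 6 analytes
Xs = StandardScaler().fit_transform(X)  # standardise before HCA/PCA

clusters = fcluster(linkage(Xs, method="ward"), t=3, criterion="maxclust")
scores = PCA(n_components=2).fit_transform(Xs)   # first two components
print(clusters[:10], scores[:3])
```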

Relevance: 20.00%

Abstract:

Wind speed measurement systems are sparse in the tropical regions of Australia. As a result, tropical cyclone wind speeds impacting communities are seldom measured and often only 'guesstimated' by analysing the extent of damage to structures. In an attempt to overcome this dearth of data, a re-locatable network of anemometers to be deployed prior to tropical cyclone landfall is currently being developed. This paper discusses the design criteria for the network's tripods and tie-down system, the proposed deployment of the anemometers, and the instrumentation and data logging. A preliminary assessment of the anemometer response indicates a system that reliably measures the spectral components of wind at frequencies up to approximately 1 Hz. This limitation highlights an important difference between the capabilities of modern instrumentation and those of the Dines anemometer (response time around 0.2 seconds) that was used to develop much of the design criteria within the Australian building code and wind loading standard.
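
The response limitation can be illustrated with a rough sketch: a synthetic 10 Hz wind record is low-pass filtered at about 1 Hz to mimic the slower sensor, and the power spectra show the high-frequency energy such a sensor would miss. The signal, filter order, and cut-off are assumptions for illustration only.

```python
# Sketch of the ~1 Hz response limitation discussed above: compare spectra
# of a synthetic wind record before and after a ~1 Hz low-pass that mimics
# the slower instrument. Signal and filter are illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 10.0                                  # sampling frequency, Hz
t = np.arange(0, 600, 1 / fs)
rng = np.random.default_rng(2)
wind = 20 + np.cumsum(rng.normal(0, 0.05, t.size))  # slowly wandering mean
wind += rng.normal(0, 1.0, t.size)                  # broadband turbulence

b, a = butter(4, 1.0 / (fs / 2))           # ~1 Hz low-pass: the slow sensor
slow = filtfilt(b, a, wind)

f, p_full = welch(wind, fs=fs, nperseg=1024)
_, p_slow = welch(slow, fs=fs, nperseg=1024)
print(p_full[f > 1].sum(), p_slow[f > 1].sum())     # energy above 1 Hz lost
```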

Relevance: 20.00%

Abstract:

The objective of this chapter is to provide an overview of the traffic data collection that can and should be used for the calibration and validation of traffic simulation models. There are big differences in the availability of data from different sources. Some types of data, such as loop detector data, are widely available and widely used. Others, such as travel time data from GPS probe vehicles, can be measured with additional effort. Still others, such as trajectory data, are available only in rare situations such as research projects.

Relevance: 20.00%

Abstract:

This project identified the lack of data analysis and travel time prediction on arterials as the main gap in the current literature. It first investigated the reliability of data gathered by Bluetooth technology, a new cost-effective method for data collection on arterial roads. Then, by considering the similarity among daily travel time patterns on different arterial routes, it created a SARIMA model to predict future travel time values. Based on this research outcome, the model can be applied to online short-term travel time prediction in the future.
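
A minimal sketch of fitting a seasonal ARIMA model to a daily-periodic travel time series with statsmodels is shown below. The synthetic series and the (1,0,1)x(1,1,1,24) order are placeholders; the project's actual data and model orders are not reproduced here.

```python
# Minimal SARIMA sketch: fit a seasonal ARIMA to a synthetic daily-periodic
# travel-time series and forecast the next few intervals. Series and model
# orders are placeholders, not the project's actual choices.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
period = 24                                     # hourly observations per day
days = 30
base = 300 + 60 * np.sin(2 * np.pi * np.arange(period) / period)
tt = np.tile(base, days) + rng.normal(0, 10, period * days)   # seconds

model = SARIMAX(tt, order=(1, 0, 1),
                seasonal_order=(1, 1, 1, period)).fit(disp=False)
print(model.forecast(steps=4))                  # next four hourly predictions
```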

Relevance: 20.00%

Abstract:

Most pan stages in Australian factories use only five or six batch pans for high grade massecuite production and operate these in a fairly rigid repeating production schedule. It is common for some of the pans to have a large dropping capacity, e.g. 150 to 240 t. Because of the relatively small number and large sizes of the pans, steam consumption varies widely through the schedule, often by ±30% about the mean value. Large fluctuations in steam consumption have implications for the steam generation/condensate management of the factory and for the evaporators when bleed vapour is used. The objectives of a project to develop a supervisory control system for a pan stage include (a) reducing the average steam consumption and (b) reducing the variation in steam consumption. The operation of each of the high grade pans within the schedule at Macknade Mill was analysed to determine the idle (or buffer) time, the time allocated to essential but unproductive operations (e.g. pan turnaround, charging, slow ramping up of steam rates on pan start), and the productive time, i.e. the time during boil-on of liquor and molasses feed. Empirical models were developed for each high grade pan on the stage to define the interdependence of the production rate and the evaporation rate for the different phases of each pan's cycle. The data were analysed in a spreadsheet model to try to reduce and smooth the total steam consumption. This paper reports on the methodology developed in the model and the results of the investigations for the pan stage at Macknade Mill. It was found that the operation of the schedule severely restricted the ability to reduce the average steam consumption and smooth the steam flows. While longer cycle times provide increased flexibility, the steam consumption profile changed only slightly. The ability to cut massecuite on the run among pans, or the use of a high grade seed vessel, would assist in reducing the average steam consumption and the magnitude of the variations in steam flow.
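
The smoothing idea can be sketched with a toy search over pan start-time offsets: each pan contributes a cyclic steam-demand profile, and staggering the starts reduces the variance of the total. The profile and numbers below are invented, not Macknade Mill data.

```python
# Illustrative sketch of schedule smoothing: each pan repeats a cyclic
# steam-demand profile, and shifting pan start times within the schedule
# changes the variance of the total demand. All numbers are invented.
import itertools
import numpy as np

cycle = np.array([0, 8, 8, 6, 5, 5, 4, 2, 0, 0, 0, 0], float)  # one pan, t/h
n_pans, T = 5, cycle.size

def total_profile(offsets):
    return sum(np.roll(cycle, k) for k in offsets)

# Fix the first pan at offset 0 and search offsets for the others.
best = min(itertools.product(range(T), repeat=n_pans - 1),
           key=lambda offs: total_profile((0,) + offs).std())
print(best, total_profile((0,) + best).std())  # staggered starts flatten total
```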

Relevance: 20.00%

Abstract:

This research proposes the development of interfaces to support collaborative, community-driven inquiry into data, which we refer to as Participatory Data Analytics. Since the investigation is led by local communities, it is not possible to anticipate which data will be relevant and what questions are going to be asked. Therefore, users have to be able to construct and tailor visualisations to their own needs. The poster presents early work towards defining a suitable compositional model, which will allow users to mix, match, and manipulate data sets to obtain visual representations with little-to-no programming knowledge. Following a user-centred design process, we are subsequently planning to identify appropriate interaction techniques and metaphors for generating such visual specifications on wall-sized, multi-touch displays.

Relevance: 20.00%

Abstract:

We consider the following problem: a user stores encrypted documents on an untrusted server and wishes to retrieve all documents containing some keywords without any loss of data confidentiality. Conjunctive keyword searches on encrypted data have been studied by numerous researchers over the past few years, and all existing schemes use keyword fields as compulsory information. This, however, is impractical for many applications. In this paper, we propose a scheme for keyword field-free conjunctive keyword searches on encrypted data, which affirmatively answers an open problem posed by Golle et al. at ACNS 2004. Furthermore, the proposed scheme is extended to the dynamic group setting. A security analysis of our constructions is given in the paper.
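
The interface of a keyword field-free conjunctive search can be illustrated with a toy index in which each document is tagged by keyed hashes of its keywords and a query must match all of them. This only illustrates the functionality; it is not the paper's scheme and offers none of its security guarantees (in particular, it leaks search patterns).

```python
# Toy sketch of keyword-field-free conjunctive search: documents are indexed
# by keyed hashes of their keywords (no keyword fields), and a query is the
# set of hashes for the keywords sought. Illustration only; NOT the paper's
# scheme, and it leaks access patterns to the server.
import hmac, hashlib

KEY = b"user-secret-key"

def tag(word):
    return hmac.new(KEY, word.encode(), hashlib.sha256).hexdigest()

index = {
    "doc1": {tag(w) for w in ["alpha", "beta", "gamma"]},
    "doc2": {tag(w) for w in ["alpha", "delta"]},
}

def search(*keywords):                       # conjunctive: all must match
    query = {tag(w) for w in keywords}
    return [d for d, tags in index.items() if query <= tags]

print(search("alpha", "beta"))               # ['doc1']
```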

Relevance: 20.00%

Abstract:

We present new evidence for sector collapses of the South Soufrière Hills (SSH) edifice, Montserrat, during the mid-Pleistocene. High-resolution geophysical data provide evidence for a sector collapse that produced an approximately 1 km³ submarine collapse deposit to the south of SSH. Sedimentological and geochemical analyses of submarine deposits sampled by sediment cores suggest that they were formed by large multi-stage flank failures of the subaerial SSH edifice into the sea. This work identifies two distinct geochemical suites within the SSH succession on the basis of trace-element and Pb-isotope compositions. Volcaniclastic turbidites in the cores preserve these chemically heterogeneous rock suites; however, the subaerial chemostratigraphy is reversed within the submarine sediment cores. Sedimentological analysis suggests that the edifice failures produced high-concentration turbidites and that the collapses occurred in multiple stages, with an interval of at least 2 ka between the first and second failures. Detailed field and petrographical observations, coupled with SEM image analysis, show that the SSH volcanic products preserve a complex record of magmatic activity: episodic explosive eruptions of andesitic pumice, probably triggered by mafic magmatic pulses, followed by eruptions of poorly vesiculated basaltic scoria and basaltic lava flows.

Relevance: 20.00%

Abstract:

Carcinoma ex pleomorphic adenoma (Ca ex PA) is a carcinoma arising from a primary or recurrent benign pleomorphic adenoma, and it often poses a diagnostic challenge to clinicians and pathologists. This study reviews the literature and highlights current clinical and molecular perspectives on this entity. The most common clinical presentation of Ca ex PA is a firm mass in the parotid gland. The proportions of the adenoma and carcinoma components determine the macroscopic features of this neoplasm. The entity is difficult to diagnose pre-operatively, and pathologic assessment is the gold standard for making the diagnosis. Treatment for Ca ex PA often involves an ablative surgical procedure, which may be followed by radiotherapy. Overall, patients with Ca ex PA have a poor prognosis, although accurate diagnosis and aggressive surgical management can increase survival rates. Molecular studies have revealed that the development of Ca ex PA follows a multi-step model of carcinogenesis, with progressive loss of heterozygosity at chromosomal arms 8q, then 12q, and finally 17p. Specific candidate genes in these regions are associated with particular stages in the progression of Ca ex PA. In addition, many genes that regulate tumour suppression, cell cycle control, growth factors and cell-cell adhesion play a role in the development and progression of Ca ex PA. It is hoped that these molecular data can provide clues for the diagnosis and management of the disease.

Relevance: 20.00%

Abstract:

The ability to build high-fidelity 3D representations of the environment from sensor data is critical for autonomous robots. Multi-sensor data fusion allows for more complete and accurate representations. Furthermore, using distinct sensing modalities (i.e. sensors using a different physical process and/or operating at different electromagnetic frequencies) usually leads to more reliable perception, especially in challenging environments, as modalities may complement each other. However, they may react differently to certain materials or environmental conditions, leading to catastrophic fusion. In this paper, we propose a new method to reliably fuse data from multiple sensing modalities, including in situations where they detect different targets. We first compute distinct continuous surface representations for each sensing modality, with uncertainty, using Gaussian Process Implicit Surfaces (GPIS). Second, we perform a local consistency test between these representations, to separate consistent data (i.e. data corresponding to the detection of the same target by the sensors) from inconsistent data. The consistent data can then be fused together, using another GPIS process, and the rest of the data can be combined as appropriate. The approach is first validated using synthetic data. We then demonstrate its benefit using a mobile robot, equipped with a laser scanner and a radar, which operates in an outdoor environment in the presence of large clouds of airborne dust and smoke.
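
A one-dimensional toy version of the consistency test is sketched below: two GP fits (standing in for the per-modality GPIS representations) are compared point-wise, points where the posteriors disagree beyond their combined uncertainty are excluded, and the remainder are fused in a third GP. The kernel, noise levels, and 3-sigma threshold are assumptions; real GPIS operates on implicit surfaces in 2-D/3-D.

```python
# Toy 1-D sketch of the consistency test between two per-modality GP fits:
# where the posteriors disagree beyond their combined uncertainty, the data
# are flagged inconsistent and excluded from fusion.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 40)[:, None]
laser = np.sin(x).ravel() + rng.normal(0, 0.05, 40)
radar = np.sin(x).ravel() + rng.normal(0, 0.05, 40)
radar[25:] += 1.5                        # radar sees a different target here

gp_l = GaussianProcessRegressor(RBF(1.0), alpha=0.05**2).fit(x, laser)
gp_r = GaussianProcessRegressor(RBF(1.0), alpha=0.05**2).fit(x, radar)

m_l, s_l = gp_l.predict(x, return_std=True)
m_r, s_r = gp_r.predict(x, return_std=True)
consistent = np.abs(m_l - m_r) < 3 * np.sqrt(s_l**2 + s_r**2)

# Fuse only the consistent observations in a third GP.
x_fused = np.vstack([x[consistent], x[consistent]])
y_fused = np.concatenate([laser[consistent], radar[consistent]])
gp_fused = GaussianProcessRegressor(RBF(1.0), alpha=0.05**2).fit(x_fused, y_fused)
print(consistent.sum(), "of", x.size, "points fused")
```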