975 results for multiple data


Relevance: 40.00%

Abstract:

Distributing multiple replicas across geographically dispersed clouds is a popular approach to reducing latency for users. It is important to ensure that each replica is available and preserves data integrity; that is, it remains identical to the original data, without corruption or tampering. Remote data possession checking is a valid method for verifying the replicas' availability and integrity. Since remotely checking the entire data set is time-consuming, owing to both the large data volume and the limited bandwidth, efficient data-possession-verifying methods generally sample and check a small hash (or random blocks) of the data to greatly reduce the I/O cost. Most recent research on data possession checking considers only a single replica. However, multiple-replica data possession checking is much more challenging, since it is difficult to optimize the remote communication cost among multiple geographically dispersed clouds. In this paper, we provide a novel, efficient Distributed Multiple Replicas Data Possession Checking (DMRDPC) scheme to tackle these new challenges. Our goal is to improve efficiency by finding an optimal spanning tree to define the partial order in which multiple replicas are scheduled for data possession checking. However, because bandwidths differ geographically across replica links and are asymmetric between any two replicas, we must solve the problem of Finding an Optimal Spanning Tree in a Complete Bidirectional Directed Graph, which we call the FOSTCBDG problem. In particular, we provide theory for solving the FOSTCBDG problem by counting all the paths available for virus attacks in a cloud network environment. We also help cloud users achieve efficient multiple-replica data possession checking through an approximate algorithm for the FOSTCBDG problem, whose effectiveness is demonstrated by an experimental study. © 2011 Elsevier Inc.
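
As a rough illustration of the kind of spanning structure involved, the sketch below builds a spanning arborescence over asymmetric link bandwidths with a simple Prim-style greedy heuristic. This is not the paper's DMRDPC or FOSTCBDG algorithm; the bandwidth matrix and the highest-bandwidth-first rule are assumptions made purely for illustration.

```python
# Hedged sketch: a greedy heuristic for a spanning arborescence over
# asymmetric (directed) bandwidths between replicas. Not the paper's
# algorithm; the bandwidth values below are invented.

def greedy_arborescence(bandwidth, root):
    """bandwidth[u][v] is the directed bandwidth from replica u to v."""
    nodes = set(bandwidth)
    tree = {}                      # child -> parent
    reached = {root}
    while reached != nodes:
        # Attach the unreached replica reachable over the best link.
        u, v = max(((u, v) for u in reached for v in nodes - reached),
                   key=lambda e: bandwidth[e[0]][e[1]])
        tree[v] = u
        reached.add(v)
    return tree

if __name__ == "__main__":
    # Asymmetric bandwidths (Mb/s) between three hypothetical cloud sites.
    bw = {
        "A": {"B": 40, "C": 10},
        "B": {"A": 35, "C": 90},
        "C": {"A": 15, "B": 80},
    }
    print(greedy_arborescence(bw, root="A"))   # {'B': 'A', 'C': 'B'}
```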

Relevance: 40.00%

Abstract:

OBJECTIVES: This study investigated the extent to which psychosocial job stressors had lasting effects on a scaled measure of mental health. We applied econometric approaches to a longitudinal cohort to: (1) control for unmeasured individual effects; (2) assess the role of prior (lagged) exposures to job stressors on mental health; and (3) assess the persistence of mental health.

METHODS: We used a panel study with 13 annual waves and applied fixed-effects, first-difference and fixed-effects Arellano-Bond models. The Short Form 36 (SF-36) Mental Health Component Summary score was the outcome variable and the key exposures included: job control, job demands, job insecurity and fairness of pay.

RESULTS: Results from the Arellano-Bond models suggest that greater fairness of pay (β-coefficient 0.34, 95% CI 0.23 to 0.45), job control (β-coefficient 0.15, 95% CI 0.10 to 0.20) and job security (β-coefficient 0.37, 95% CI 0.32 to 0.42) were contemporaneously associated with better mental health. Similar results were found for the fixed-effects and first-difference models. The Arellano-Bond model also showed persistent effects of individual mental health, whereby individuals' previous reports of mental health were related to their reports in subsequent waves. The estimated long-run impact of job demands on mental health increased after accounting for time-related dynamics, while the impacts were more modest for the other job stressor variables.

CONCLUSIONS: Our results showed that the majority of the effects of psychosocial job stressors on a scaled measure of mental health are contemporaneous, except for job demands, where accounting for the lagged dynamics was important.
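
The within (fixed-effects) transformation at the core of these panel models can be sketched directly: demean the outcome and the regressors within each individual, then run OLS on the demeaned data. This is not the study's Arellano-Bond GMM estimator, and the toy variables (mh, job_control) are invented for illustration.

```python
# Hedged sketch: a plain within (fixed-effects) estimator that removes
# unmeasured individual effects by demeaning within person. Illustrative
# only; not the Arellano-Bond estimator or the study's data.
import numpy as np

def within_estimator(y, X, ids):
    """Demean y and X within each individual, then run OLS."""
    y = np.asarray(y, float)
    X = np.asarray(X, float)
    ids = np.asarray(ids)
    y_d, X_d = y.copy(), X.copy()
    for i in np.unique(ids):
        m = ids == i
        y_d[m] -= y[m].mean()
        X_d[m] -= X[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(X_d, y_d, rcond=None)
    return beta

# Toy panel: 2 people x 3 waves, one job-stressor regressor.
ids = [1, 1, 1, 2, 2, 2]
job_control = [[1.0], [2.0], [3.0], [2.0], [2.5], [3.5]]
mh = [50.0, 52.0, 54.0, 40.0, 41.0, 44.0]
print(within_estimator(mh, job_control, ids))
```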

Relevance: 40.00%

Abstract:

To generate realistic predictions, species distribution models require the accurate coregistration of occurrence data with environmental variables. There is a common assumption that species occurrence data are accurately georeferenced; however, this is often not the case. This study investigates whether locational uncertainty and sample size affect the performance and interpretation of fine-scale species distribution models. This study evaluated the effects of locational uncertainty across multiple sample sizes by subsampling and spatially degrading occurrence data. Distribution models were constructed for kelp (Ecklonia radiata), across a large study site (680 km²) off the coast of southeastern Australia. Generalized additive models were used to predict distributions based on fine-resolution (2.5 m cell size) seafloor variables, generated from multibeam echosounder data sets, and occurrence data from underwater towed video. The effects of different levels of locational uncertainty in combination with sample size were evaluated by comparing model performance and predicted distributions. While locational uncertainty was observed to influence some measures of model performance, in general this was small and varied based on the accuracy metric used. However, simulated locational uncertainty caused changes in variable importance and predicted distributions at fine scales, potentially influencing model interpretation. This was most evident with small sample sizes. Results suggested that seemingly high-performing, fine-scale models can be generated from data containing locational uncertainty, although interpreting their predictions can be misleading if the predictions are interpreted at scales similar to the spatial errors. This study demonstrated the need to consider predictions across geographic space rather than performance alone. The findings are important for conservation managers as they highlight the inherent variation in predictions between equally performing distribution models, and the subsequent restrictions on ecological interpretations.
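
The spatial degradation step can be illustrated with a short sketch that subsamples the occurrence records and offsets each retained point by a random direction and distance. The error radius and sample size below are illustrative values, not the settings used in the study.

```python
# Hedged sketch: spatially degrading occurrence records to simulate
# locational uncertainty before refitting a distribution model.
import numpy as np

rng = np.random.default_rng(1)

def degrade(coords, error_radius_m, sample_size):
    """Subsample occurrence coordinates and add a random positional offset."""
    coords = np.asarray(coords, float)
    idx = rng.choice(len(coords), size=sample_size, replace=False)
    sub = coords[idx]
    # Random direction, distance up to the stated error radius.
    theta = rng.uniform(0, 2 * np.pi, size=sample_size)
    dist = rng.uniform(0, error_radius_m, size=sample_size)
    return sub + np.column_stack((dist * np.cos(theta), dist * np.sin(theta)))

occurrences = rng.uniform(0, 10_000, size=(500, 2))   # x, y in metres
degraded = degrade(occurrences, error_radius_m=50, sample_size=100)
print(degraded.shape)
```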

Relevance: 30.00%

Abstract:

With the advent of Service Oriented Architecture, Web services have gained tremendous popularity. Because a large number of Web services are available, finding an appropriate Web service that meets the user's requirements is a challenge. This warrants the need to establish an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods that improve the accuracy of Web service discovery and match the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user's interest. Considering the semantic relationships of the words used to describe services, together with their input and output parameters, can lead to more accurate Web service discovery. Appropriately linking individually matched services should then fully satisfy the requirements the user is looking for. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery. A novel three-phase Web service discovery methodology has been proposed. The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis on the content of the Web Service Description Language document, a support-based latent semantic kernel is constructed using an innovative concept of binning and merging on a large collection of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed from a large number of terms helps uncover hidden meanings of query terms that could not otherwise be found. Sometimes a single Web service is unable to fully satisfy the user's requirement. In such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase. Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In the link analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path at minimum traversal cost. The third phase, system integration, integrates the results from the preceding two phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, which is an integral part of the system integration phase, makes the final recommendations, including individual and composite Web services, to the user. In order to evaluate the performance of the proposed method, extensive experimentation has been performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with those of a standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery. The proposed method outperforms both the information-retrieval and machine-learning based methods. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 of the Web services found in phase I for linking. Empirical results also ascertain that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from both the semantic analysis (phase I) and the link analysis (phase II) in a systematic fashion. Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
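
The all-pairs shortest-path step of the link-analysis phase can be illustrated with a standard Floyd-Warshall sketch over a small service graph. The edge costs here are placeholder linkage costs; the thesis's actual cost function and graph construction are not specified in the abstract.

```python
# Hedged sketch: Floyd-Warshall all-pairs shortest paths over a graph whose
# nodes are candidate Web services; costs are illustrative stand-ins.
INF = float("inf")

def floyd_warshall(cost):
    """cost[u][v] is the direct linkage cost (INF if services cannot link)."""
    nodes = list(cost)
    dist = {u: dict(cost[u]) for u in nodes}
    for u in nodes:
        dist[u][u] = 0.0
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

services = {
    "S1": {"S2": 1.0, "S3": 4.0},
    "S2": {"S1": 1.0, "S3": 2.0},
    "S3": {"S1": 4.0, "S2": 2.0},
}
# Ensure every pair has an entry (INF when no direct link exists).
for u in services:
    for v in services:
        services[u].setdefault(v, INF)
print(floyd_warshall(services)["S1"]["S3"])   # 3.0, via S2
```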

Relevance: 30.00%

Abstract:

This paper uses dynamic computer simulation techniques to apply a vibration-based procedure for damage assessment in a multiple-girder composite bridge. In addition to changes in natural frequencies, this multi-criteria procedure incorporates two methods, namely the modal flexibility method and the modal strain energy method. Using numerically simulated modal data obtained through finite element analysis software, algorithms based on the change in modal flexibility and modal strain energy before and after damage are obtained and used as indices for assessing the structural health state. The feasibility and capability of the approach are demonstrated through numerical studies of the proposed structure with six damage scenarios. It is concluded that the modal strain energy method is well suited to application on multiple-girder composite bridges, as evidenced by the example treated in this paper.
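
The modal-flexibility part of such a procedure can be sketched from its usual definition: the flexibility matrix is approximated from the first few mass-normalised mode shapes as F ≈ sum_i phi_i phi_i^T / omega_i^2, and the largest column-wise change before and after damage serves as the indicator. The mode data below are made up, and this is a generic formulation rather than the exact algorithm used in the paper.

```python
# Hedged sketch: modal flexibility change as a damage index.
import numpy as np

def modal_flexibility(freqs_hz, mode_shapes):
    """mode_shapes: (n_modes, n_dof) mass-normalised; freqs_hz: (n_modes,)."""
    omega = 2 * np.pi * np.asarray(freqs_hz, float)
    phi = np.asarray(mode_shapes, float)
    return sum(np.outer(p, p) / w**2 for p, w in zip(phi, omega))

def flexibility_damage_index(f0, phi0, f1, phi1):
    """Largest column-wise change in flexibility, per degree of freedom."""
    dF = modal_flexibility(f1, phi1) - modal_flexibility(f0, phi0)
    return np.max(np.abs(dF), axis=0)

# Two modes, three measurement points (toy numbers).
f0, phi0 = [2.0, 5.5], [[0.3, 0.6, 0.3], [0.5, 0.0, -0.5]]
f1, phi1 = [1.9, 5.4], [[0.32, 0.62, 0.3], [0.5, 0.02, -0.5]]
print(flexibility_damage_index(f0, phi0, f1, phi1))
```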

Relevance: 30.00%

Abstract:

Qualitative research methods require transparency to ensure the ‘trustworthiness’ of the data analysis. The intricate processes of organizing, coding and analyzing the data are often rendered invisible in the presentation of the research findings, which requires a ‘leap of faith’ for the reader. Computer-assisted data analysis software can be used to make the research process more transparent without sacrificing rich, interpretive analysis by the researcher. This article describes in detail how one software package was used in a poststructural study to link and code multiple forms of data to four research questions for fine-grained analysis. This description will be useful for researchers seeking to use qualitative data analysis software as an analytic tool.

Relevance: 30.00%

Abstract:

The early stages of the building design process are when the most far-reaching decisions are made regarding the configuration of the proposed project. This paper examines methods of providing decision support to building designers across multiple disciplines during the early stage of design. The level of detail supported is the massing-study stage, where the basic envelope of the project is being defined. The block outlines of the building envelope are sliced into floors. Within a floor, the only spatial divisions supported are the “user” space and the building core. The building core includes vertical transportation systems, emergency egress and vertical duct runs. The current focus of the project described in the paper is multi-storey, mixed-use office/residential buildings with car parking. This is a common type of building in redevelopment projects within and adjacent to the central business districts of major Australian cities. The key design parameters for system selection across the major systems in multi-storey building projects (architectural, structural, HVAC, vertical transportation, electrical distribution, fire protection, hydraulics and cost) are examined. These have been identified through literature research and discussions with building designers from various disciplines. This information is being encoded in decision support tools. The decision support tools communicate through a shared database to ensure that the relevant information is shared across all of the disciplines. An internal data model has been developed to support the very early design phase and the high-level system descriptions required. A mapping to IFC 2x2 has also been defined to ensure that this early information is available at later stages of the design process.
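
A minimal sketch of how such an early-design data model might be represented is given below, with each floor split into a user space and a core. The class and field names are illustrative assumptions; they are not the project's internal data model or its IFC 2x2 mapping.

```python
# Hedged sketch: a possible very-early-design (massing study) data model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Core:
    """Vertical transport, egress and duct space on one floor."""
    area_m2: float
    lifts: int
    stairs: int

@dataclass
class Floor:
    level: int
    use: str                       # e.g. "office", "residential", "car park"
    gross_area_m2: float
    core: Core

    @property
    def user_area_m2(self) -> float:
        return self.gross_area_m2 - self.core.area_m2

@dataclass
class MassingStudy:
    name: str
    floors: List[Floor] = field(default_factory=list)

study = MassingStudy("Mixed-use tower", [
    Floor(level=i,
          use="car park" if i < 3 else "office",
          gross_area_m2=900.0,
          core=Core(area_m2=120.0, lifts=4, stairs=2))
    for i in range(10)
])
print(sum(f.user_area_m2 for f in study.floors))   # total user space
```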

Relevance: 30.00%

Abstract:

The over-representation of novice drivers in crashes is alarming. Driver training is one of the interventions aimed at mitigating the number of crashes that involve young drivers. To our knowledge, Advanced Driver Assistance Systems (ADAS) have never been comprehensively used in designing an intelligent driver training system. Currently, there is a need to develop and evaluate ADAS that could assess driving competencies. The aim is to develop an unsupervised system called the Intelligent Driver Training System (IDTS) that analyzes crash risks in a given driving situation. In order to design a comprehensive IDTS, data are collected from the Driver, Vehicle and Environment (DVE), then synchronized and analyzed. The first implementation phase of this intelligent driver training system deals with synchronizing the multiple variables acquired from the DVE. RTMaps is used to collect and synchronize data such as GPS, vehicle dynamics and driver head movement. After the data synchronization, maneuvers are segmented into right turns, left turns and overtakes. Each maneuver is composed of several individual tasks that must be performed sequentially. This paper focuses on turn maneuvers. Some of the tasks required in the analysis of a ‘turn’ maneuver are: detecting the start and end of the turn, detecting indicator status changes, checking whether the indicator was turned on within a safe distance, and checking lane keeping during the turn maneuver. This paper proposes a fusion and analysis of the heterogeneous data mainly involved in driving to determine the risk factor of particular maneuvers within the drive. It also explains the segmentation and risk analysis of the turn maneuver in a drive.
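
One of the listed turn checks, whether the indicator was switched on within a safe distance before the turn started, can be sketched as below. The sample record format and the 30 m threshold are assumptions for illustration, not the IDTS specification.

```python
# Hedged sketch: was the indicator on within a safe distance of the turn?
def indicator_on_within_distance(samples, turn_start_idx, safe_distance_m=30.0):
    """samples: time-ordered dicts with 'odometer_m' and 'indicator' (bool)."""
    start_odo = samples[turn_start_idx]["odometer_m"]
    for s in reversed(samples[:turn_start_idx + 1]):
        if start_odo - s["odometer_m"] > safe_distance_m:
            return False            # left the safe window without finding it on
        if s["indicator"]:
            return True             # switched on inside the window
    return False

drive = [
    {"odometer_m": 950.0, "indicator": False},
    {"odometer_m": 975.0, "indicator": True},
    {"odometer_m": 990.0, "indicator": True},
    {"odometer_m": 1000.0, "indicator": True},   # turn starts here
]
print(indicator_on_within_distance(drive, turn_start_idx=3))   # True
```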

Relevance: 30.00%

Abstract:

Benefit finding is a meaning-making construct that has been shown to be related to adjustment in people with MS and their carers. This study investigated the dimensions, stability and potency of benefit finding in predicting adjustment over a 12-month interval, using a newly developed Benefit Finding in Multiple Sclerosis Scale (BFiMSS). Usable data from 388 persons with MS and 232 carers were obtained from questionnaires completed at Time 1 and 12 months later (Time 2). Factor analysis of the BFiMSS revealed seven psychometrically sound factors: Compassion/Empathy, Spiritual Growth, Mindfulness, Family Relations Growth, Life Style Gains, Personal Growth, and New Opportunities. The BFiMSS total and factor scores showed satisfactory internal and retest reliability coefficients, and convergent, criterion and external validity. Results of regression analyses indicated that the Time 1 BFiMSS factors accounted for significant amounts of variance in each of the Time 2 adjustment outcomes (positive states of mind, positive affect, anxiety, depression) after controlling for Time 1 adjustment and relevant demographic and illness variables. The findings delineate the dimensional structure of benefit finding in MS, the differential links between benefit finding dimensions and adjustment, and the temporal unfolding of benefit finding in chronic illness.

Relevance: 30.00%

Abstract:

In public venues, crowd size is a key indicator of crowd safety and stability. Crowding levels can be detected using holistic image features; however, this requires a large amount of training data to capture the wide variations in crowd distribution. If a crowd counting algorithm is to be deployed across a large number of cameras, such a large and burdensome training requirement is far from ideal. In this paper we propose an approach that uses local features to count the number of people in each foreground blob segment, so that the total crowd estimate is the sum of the group sizes. This results in an approach that is scalable to crowd volumes not seen in the training data and can be trained on a very small data set. Because a local approach is used, the proposed algorithm can easily be applied to estimate crowd density throughout different regions of the scene and can be used in a multi-camera environment. A unique localised approach to ground-truth annotation, which reduces the required training data, is also presented, as a localised approach to crowd counting has different training requirements from a holistic one. Testing on a large pedestrian database compares the proposed technique to existing holistic techniques and demonstrates improved accuracy, as well as superior performance when test conditions are unseen in the training set or only a minimal training set is used.
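
The blob-level counting idea can be sketched as a regression from simple local features of each foreground segment to the annotated group size, with the crowd estimate being the sum of the per-blob predictions. The features (area, perimeter) and the linear least-squares model are stand-ins, not the paper's actual feature set or regressor.

```python
# Hedged sketch: per-blob group-size regression; total = sum of group sizes.
import numpy as np

def fit_blob_counter(blob_features, blob_counts):
    """Least-squares fit from per-blob features to annotated group sizes."""
    X = np.column_stack([np.ones(len(blob_features)), blob_features])
    w, *_ = np.linalg.lstsq(X, np.asarray(blob_counts, float), rcond=None)
    return w

def estimate_crowd(w, blob_features):
    X = np.column_stack([np.ones(len(blob_features)), blob_features])
    return float(np.sum(np.clip(X @ w, 0, None)))

# Toy training data: [area_px, perimeter_px] per blob, with annotated counts.
feats = np.array([[400, 90], [850, 130], [1300, 170], [300, 75]])
counts = [1, 2, 3, 1]
w = fit_blob_counter(feats, counts)
print(estimate_crowd(w, np.array([[420, 92], [1250, 165]])))   # roughly 1 + 3
```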

Relevance: 30.00%

Abstract:

Real-Time Kinematic (RTK) positioning is a technique used to provide precise positioning services at the centimetre accuracy level in the context of Global Navigation Satellite Systems (GNSS). While a Network-based RTK (NRTK) system involves multiple continuously operating reference stations (CORS), the simplest form of an NRTK system is single-base RTK. In Australia there are several NRTK services operating in different states and over 1000 single-base RTK systems to support precise positioning applications for surveying, mining, agriculture, and civil construction in regional areas. Additionally, future-generation GNSS constellations with multiple frequencies, including modernised GPS, Galileo, GLONASS, and Compass, have either been developed or will become fully operational in the next decade. A trend in the future development of RTK systems is to make use of the various isolated network and single-base RTK systems, together with multiple GNSS constellations, for extended service coverage and improved performance. Several computational challenges have been identified for future NRTK services, including:
• multiple GNSS constellations and multiple frequencies;
• large-scale, wide-area NRTK services with a network of networks;
• complex computation algorithms and processes;
• a greater part of the positioning process shifting from the user end to the network centre, with the ability to cope with hundreds of simultaneous user requests (reverse RTK).
Based on these four challenges, there are two major requirements for future NRTK data processing: expandable computing power and scalable data sharing/transfer capability. This research explores new approaches to address these future NRTK challenges and requirements using Grid Computing facilities, in particular for large data-processing burdens and complex computation algorithms. A Grid Computing based NRTK framework is proposed in this research; it is a layered framework consisting of: 1) a client layer in the form of a Grid portal; 2) a service layer; and 3) an execution layer. The user's request is passed through these layers and scheduled to different Grid nodes in the network infrastructure. A proof-of-concept demonstration of the proposed framework is performed in a five-node Grid environment at QUT and also on Grid Australia. The Networked Transport of RTCM via Internet Protocol (Ntrip) open-source software is adopted to download real-time RTCM data from multiple reference stations through the Internet, followed by job scheduling and simplified RTK computing. The system performance has been analysed and the results have preliminarily demonstrated the concepts and functionality of the new NRTK framework based on Grid Computing, while some aspects of system performance are yet to be improved in future work.
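
The scheduling step in the execution layer can be illustrated with a minimal sketch in which incoming (reverse-RTK) processing requests are assigned to whichever Grid node currently holds the fewest jobs. The node names and the least-loaded policy are assumptions made for illustration, not the scheduler actually used at QUT or on Grid Australia.

```python
# Hedged sketch: least-loaded assignment of processing requests to Grid nodes.
import heapq

class LeastLoadedScheduler:
    def __init__(self, node_names):
        # Heap of (jobs currently assigned, node name).
        self._heap = [(0, name) for name in node_names]
        heapq.heapify(self._heap)

    def assign(self, request_id):
        load, node = heapq.heappop(self._heap)
        heapq.heappush(self._heap, (load + 1, node))
        return node

scheduler = LeastLoadedScheduler(["node-1", "node-2", "node-3", "node-4", "node-5"])
for req in range(7):
    print(req, "->", scheduler.assign(req))
```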

Relevance: 30.00%

Abstract:

The paper analyses the expected value of OD volumes from probes under three error models: a fixed error, an error proportional to zone size, and an error inversely proportional to zone size. To add realism to the analysis, real trip ODs in the Tokyo Metropolitan Region are synthesised. The results show that for small zone coding with an average radius of 1.1 km and a fixed measurement error of 100 m, an accuracy of 70% can be expected. The equivalent accuracy for medium zone coding with an average radius of 5 km would translate into a fixed error of approximately 300 m. As expected, small zone coding is more sensitive than medium zone coding, as the chance of the probe error envelope falling into adjacent zones is higher. For the same error radii, an error proportional to zone size would deliver a higher level of accuracy. As over half (54.8%) of trip ends start or end in zones with an equivalent radius of ≤1.2 km and only 13% of trip ends occur in zones with an equivalent radius of ≥2.5 km, a measurement error that is proportional to zone size, such as that of a mobile phone, would deliver a higher level of accuracy. The synthesis of real ODs with different probe error characteristics has shown that an expected value of >85% is difficult to achieve for small zone coding with an average radius of 1.1 km. For most transport applications, an OD matrix at medium zone coding is sufficient for transport management. From this study it can be concluded that GPS, with an error range between 2 and 5 m, would at medium zone coding (average radius of 5 km) provide OD estimates greater than 90% of the expected value. However, for a typical mobile phone error range, the expected value at medium zone coding would be lower than 85%. This paper assumes the transmission of one origin and one destination position from the probe. However, if multiple positions within the origin and destination zones were transmitted, map matching to the transport network could be performed, which would greatly improve the accuracy of the probe data.
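
A Monte Carlo version of the accuracy question is easy to sketch: place true trip ends uniformly inside an idealised circular zone, add a fixed-magnitude error in a random direction, and count how often the measured point stays inside the zone. Because real zones are irregular and trip ends cluster near boundaries, this idealisation gives a higher figure than the 70% reported above; it is meant only to illustrate the mechanism, not to reproduce the paper's numbers.

```python
# Hedged sketch: probability a probe position with fixed error stays in its
# (idealised, circular) origin/destination zone.
import numpy as np

rng = np.random.default_rng(0)

def expected_accuracy(zone_radius_m, error_m, n=200_000):
    # True positions: uniform over the disc of the zone.
    r = zone_radius_m * np.sqrt(rng.uniform(size=n))
    a = rng.uniform(0, 2 * np.pi, size=n)
    x, y = r * np.cos(a), r * np.sin(a)
    # Probe error: fixed magnitude, random direction.
    b = rng.uniform(0, 2 * np.pi, size=n)
    xm, ym = x + error_m * np.cos(b), y + error_m * np.sin(b)
    return np.mean(np.hypot(xm, ym) <= zone_radius_m)

print(expected_accuracy(zone_radius_m=1100, error_m=100))
```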

Relevance: 30.00%

Abstract:

This paper presents a model to estimate travel time using cumulative plots. Three cases are considered: i) case-Det, with detector data only; ii) case-DetSig, with detector data and signal controller data; and iii) case-DetSigSFR, with detector data, signal controller data and saturation flow rate. The performance of the model for different detection intervals is evaluated. It is observed that the detection interval is not critical if signal timings are available. Comparable accuracy can be obtained from a larger detection interval with signal timings or from a shorter detection interval without signal timings. The performance for case-DetSig and case-DetSigSFR is consistent, with accuracy generally above 95%, whereas case-Det is highly sensitive to the signal phases in the detection interval and its performance is uncertain if the detection interval is an integral multiple of the signal cycle.
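
The underlying cumulative-plot idea can be sketched simply: the travel time of the N-th vehicle is the horizontal gap between the upstream and downstream cumulative count curves at count N. The sketch below ignores the signal-timing and saturation-flow refinements that distinguish case-DetSig and case-DetSigSFR, and the detector data are invented.

```python
# Hedged sketch: travel times from upstream/downstream cumulative counts.
import numpy as np

def travel_times(t_up, count_up, t_down, count_down):
    """Interpolate both cumulative curves and difference them at common counts."""
    n = np.arange(1, min(count_up[-1], count_down[-1]) + 1)
    arrive = np.interp(n, count_up, t_up)      # time n-th vehicle passes upstream
    depart = np.interp(n, count_down, t_down)  # time it passes downstream
    return depart - arrive

# Toy data: detector timestamps (s) at which cumulative counts are reached.
t_up, c_up = [0, 30, 60, 90, 120], [0, 10, 20, 30, 40]
t_down, c_down = [0, 55, 85, 115, 145], [0, 10, 20, 30, 40]
print(travel_times(t_up, c_up, t_down, c_down).mean())   # mean travel time (s)
```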

Relevance: 30.00%

Abstract:

Established Monte Carlo user codes BEAMnrc and DOSXYZnrc permit the accurate and straightforward simulation of radiotherapy experiments and treatments delivered from multiple beam angles. However, when an electronic portal imaging device (EPID) is included in these simulations, treatment delivery from non-zero beam angles becomes problematic. This study introduces CTCombine, a purpose-built code for rotating selected CT data volumes, converting CT numbers to mass densities, combining the results with model EPIDs and writing output in a form that can easily be read and used by the dose calculation code DOSXYZnrc. The geometric and dosimetric accuracy of CTCombine’s output has been assessed by simulating simple and complex treatments applied to a rotated planar phantom and a rotated humanoid phantom, and comparing the resulting virtual EPID images with images acquired through experimental measurements and independent simulations of equivalent phantoms. It is expected that CTCombine will be useful for Monte Carlo studies of EPID dosimetry as well as other EPID imaging applications.
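
The two core operations described, rotating a CT volume and converting CT numbers to mass densities, can be sketched with standard tools. The calibration points below are a generic illustrative ramp, not the conversion table CTCombine actually uses, and this is not CTCombine's code.

```python
# Hedged sketch: rotate a CT volume and map CT numbers to mass density.
import numpy as np
from scipy import ndimage

# Illustrative CT-number (HU) to mass-density (g/cm^3) calibration points.
HU_POINTS = [-1000.0, 0.0, 1000.0, 3000.0]
DENSITY_POINTS = [0.001, 1.0, 1.6, 2.8]

def hu_to_density(ct_volume):
    return np.interp(ct_volume, HU_POINTS, DENSITY_POINTS)

def rotate_volume(ct_volume, gantry_angle_deg):
    # Rotate in the axial plane; keep the original array shape.
    return ndimage.rotate(ct_volume, gantry_angle_deg, axes=(1, 2),
                          reshape=False, order=1, mode="nearest")

ct = np.full((10, 64, 64), -1000.0)     # toy "air" volume
ct[:, 20:44, 20:44] = 0.0               # a water block in the middle
density = hu_to_density(rotate_volume(ct, gantry_angle_deg=30.0))
print(density.shape, density.max())
```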

Relevance: 30.00%

Abstract:

Migraine is a painful disorder for which the etiology remains obscure. Diagnosis is largely based on International Headache Society criteria. However, no feature occurs in all patients who meet these criteria, and no single symptom is required for diagnosis. Consequently, this definition may not accurately reflect the phenotypic heterogeneity or genetic basis of the disorder. Such phenotypic uncertainty is typical for complex genetic disorders and has encouraged interest in multivariate statistical methods for classifying disease phenotypes. We applied three popular statistical phenotyping methods (latent class analysis, grade of membership, and "fuzzy" clustering using Fanny) to migraine symptom data, and compared the heritability and genome-wide linkage results obtained using each approach. Our results demonstrate that different methodologies produce different clustering structures and non-negligible differences in subsequent analyses. We therefore urge caution in the use of any single approach and suggest that multiple phenotyping methods be used.
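
The broader point, that different clustering methods can impose different structures on the same symptom data, can be illustrated with a small sketch comparing two off-the-shelf methods via the adjusted Rand index. K-means and hierarchical clustering stand in here for latent class analysis and grade of membership, which require dedicated packages, and the binary symptom matrix is randomly generated rather than real migraine data.

```python
# Hedged sketch: how much do two clustering methods agree on the same data?
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
# Toy binary symptom matrix: 300 respondents x 8 symptoms.
symptoms = rng.integers(0, 2, size=(300, 8)).astype(float)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(symptoms)
hc = AgglomerativeClustering(n_clusters=3).fit_predict(symptoms)

# Low agreement means downstream analyses (heritability, linkage) would be
# run on genuinely different phenotype definitions.
print("agreement (adjusted Rand):", adjusted_rand_score(km, hc))
```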