393 results for data transfer
Abstract:
This paper presents the time dispersion parameters obtained from a set of channel measurements conducted in various environments typical of multiuser Infostation application scenarios. The measurement procedure takes into account the practical scenarios typical of the positions and movements of users in the particular Infostation network. To indicate how much data users can download over a given time and at a given mobile speed, a data transfer analysis for multiband orthogonal frequency division multiplexing (MB-OFDM) is presented. As expected, the rough estimate of simultaneous data transfer in a multiuser Infostation scenario indicates that the percentage of download depends on the data size, the number and speed of users, and the elapsed time.
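As a rough illustration of the kind of estimate the abstract describes (not taken from the paper), the Python sketch below computes the fraction of a file a user could download during one pass through an Infostation coverage zone; the coverage radius, link rate, user speed and file size are hypothetical placeholders, and the link is assumed to be shared equally among simultaneous users.

```python
# Back-of-the-envelope download estimate for a drive/walk-through Infostation.
# All numeric values are hypothetical placeholders, not values from the paper.

def downloadable_fraction(file_size_mb, rate_mbps, coverage_m, speed_mps, n_users):
    """Fraction of a file that can be fetched during one pass through coverage."""
    contact_time_s = coverage_m / speed_mps            # time spent inside the coverage zone
    shared_rate_mbps = rate_mbps / n_users             # crude equal share among simultaneous users
    downloaded_mb = shared_rate_mbps * contact_time_s / 8.0   # Mbit/s * s -> MB
    return min(1.0, downloaded_mb / file_size_mb)

# Example: 700 MB file, nominal 480 Mbit/s MB-OFDM link, 100 m coverage,
# three pedestrian users moving at 1.5 m/s.
print(downloadable_fraction(700, 480, 100, 1.5, 3))
```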
Abstract:
This project report presents the results of a study on wireless communication data transfer rates for a mobile device running a custom-built construction defect reporting application. The study measured the time taken to transmit data about a construction defect, which included digital imagery and text, in order to assess the feasibility of transferring various types and sizes of data and the ICT-supported construction management applications that could be developed as a consequence. Data transfer rates over GPRS through the Telstra network and WiFi over a private network were compared. The transfer rate was calculated from the data size and transfer time to determine the actual transmission speeds at which information was being sent over each wireless mobile communication protocol. The report finds that transmission speeds vary considerably when using GPRS and can be significantly slower than what is advertised by mobile network providers. While WiFi is much faster than GPRS, its limited range restricts the protocol to residential-scale construction sites.
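The rate calculation described above reduces to dividing the transmitted data size by the elapsed transfer time. A minimal Python sketch with hypothetical measurements (the report's actual figures are not reproduced here):

```python
# Effective transfer rate from a measured payload size and elapsed time.
# The sample values are hypothetical, not measurements from the report.

def transfer_rate_kbps(size_bytes, elapsed_s):
    """Effective throughput in kbit/s."""
    return (size_bytes * 8) / 1000.0 / elapsed_s

report_bytes = 250_000                                  # one defect report (image + text), hypothetical
print(transfer_rate_kbps(report_bytes, elapsed_s=62.0)) # GPRS-like timing -> ~32 kbit/s
print(transfer_rate_kbps(report_bytes, elapsed_s=1.8))  # WiFi-like timing -> ~1100 kbit/s
```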
Abstract:
Increasingly large-scale applications are generating an unprecedented amount of data. However, the widening gap between computation and I/O capacity on High End Computing (HEC) machines creates a severe bottleneck for data analysis. Instead of moving data from its source to the output storage, in-situ analytics processes output data while simulations are running. However, in-situ data analysis incurs far more computing resource contention with the simulations, and such contention can severely degrade simulation performance on HEC machines. Since different data processing strategies have different impacts on performance and cost, there is a consequent need for flexibility in the location of data analytics. In this paper, we explore and analyze several potential data-analytics placement strategies along the I/O path. To find the best strategy for reducing data movement in a given situation, we propose a flexible data analytics (FlexAnalytics) framework. Based on this framework, a FlexAnalytics prototype system is developed for analytics placement. The FlexAnalytics system enhances the scalability and flexibility of the current I/O stack on HEC platforms and is useful for data pre-processing, runtime data analysis and visualization, as well as for large-scale data transfer. Two use cases – scientific data compression and remote visualization – have been applied in the study to verify the performance of FlexAnalytics. Experimental results demonstrate that the FlexAnalytics framework increases data transmission bandwidth and improves application end-to-end transfer performance.
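To illustrate why analytics placement along the I/O path matters, here is a toy cost model in Python comparing source-side data reduction against transferring raw output for remote analysis. The placement names, data sizes and rates are illustrative assumptions, not parameters of the FlexAnalytics system.

```python
# Toy cost model for choosing where to run an analytics/reduction step.
# Sizes are in GB and rates in GB/s; all values are illustrative only.

def end_to_end_time(raw_gb, reduced_gb, analyze_rate_gbs, wan_gbs, analyze_at_source):
    """Estimated time to run the analytics step and move the result across the wide-area link."""
    analyze_t = raw_gb / analyze_rate_gbs
    moved_gb = reduced_gb if analyze_at_source else raw_gb   # raw data crosses the link if analyzed remotely
    return analyze_t + moved_gb / wan_gbs

placements = {
    "reduce at source, then transfer": end_to_end_time(100, 20, 5.0, 1.0, True),
    "transfer raw, analyze remotely":  end_to_end_time(100, 20, 20.0, 1.0, False),
}
best = min(placements, key=placements.get)
print(best, placements)   # source-side reduction wins when the WAN link is the bottleneck
```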
Abstract:
Objective The review addresses two distinct sets of issues: 1. specific functionality, interface, and calculation problems that presumably can be fixed or improved; and 2. the more fundamental question of whether the system is close to being ready for ‘commercial prime time’ in the North American market. Findings Many of our comments relate to the first set of issues, especially sections B and C. Sections D and E deal with the second set. Overall, we feel that LCADesign represents a very impressive step forward in the ongoing quest to link CAD with LCA tools and, more importantly, to link the world of architectural practice with that of environmental research. From that perspective, it deserves continued financial support as a research project. However, if the decision is whether or not to continue the development program from a purely commercial perspective, we are less bullish. In terms of the North American market, there are no regulatory or other drivers to press design teams to use a tool of this nature. There is certainly interest in this area, but the tools must be very easy to use with little or no training. Understanding the results is as important in this regard as knowing how to apply the tool. Our comments are fairly negative when it comes to that aspect. Our opinion might change to some degree when the ‘fixes’ are made and the functionality improved. However, as discussed in more detail in the following sections, we feel that the multi-step process (CAD to IFC to LCADesign) could pose a serious problem in terms of market acceptance. The CAD to IFC part is impossible for us to judge with the information provided, and we can’t even begin to answer the question about the ease of using the software to import designs, but it appears cumbersome from what we do know. There does appear to be a developing North American market for 3D CAD, with a recent survey indicating that about 50% of firms use some form of 3D modeling for about 75% of their projects. However, this does not mean that full 3D CAD is always being used. Our information suggests that AutoDesk accounts for about 75 to 80% of the 3D CAD market, and they are very cautious about any links that do not serve a latent demand. Finally, other systems that link CAD to energy simulation are using XML data transfer protocols rather than IFC files, and it is our understanding that the market served by AutoDesk tends in that direction right now. This is a subject that is outside our area of expertise, so please take these comments as suggestions for more intensive market research rather than as definitive findings.
Abstract:
Harmful Algal Blooms (HABs) have become an important environmental concern along the western coast of the United States. Toxic and noxious blooms adversely impact the economies of coastal communities in the region, pose risks to human health, and cause mortality events that have resulted in the deaths of thousands of fish, marine mammals and seabirds. One goal of field-based research efforts on this topic is the development of predictive models of HABs that would enable rapid response, mitigation and ultimately prevention of these events. In turn, these objectives are predicated on understanding the environmental conditions that stimulate these transient phenomena. An embedded sensor network (Fig. 1), under development in the San Pedro Shelf region off the Southern California coast, provides tools for acquiring chemical, physical and biological data at high temporal and spatial resolution to help document the emergence and persistence of HAB events, support the design and testing of predictive models, and provide contextual information for experimental studies designed to reveal the environmental conditions promoting HABs. The sensor platforms contained within this network include pier-based sensor arrays, ocean moorings and HF radar stations, along with mobile sensor nodes in the form of surface and subsurface autonomous vehicles. Freewave™ radio modems facilitate network communication and form a minimally intrusive, wireless communication infrastructure throughout the Southern California coastal region, allowing rapid and cost-effective data transfer. An emerging focus of this project is the incorporation of a predictive ocean model that assimilates near-real-time, in situ data from deployed Autonomous Underwater Vehicles (AUVs) to increase the skill of both nowcasts and forecasts, thus providing insight into bloom initiation as well as the movement of blooms or other oceanic features of interest (e.g., thermoclines, fronts, river discharge). From these predictions, deployed mobile sensors can be tasked to track a designated feature. This focus has led to the creation of a technology chain in which algorithms are being implemented for innovative trajectory design for AUVs. Such intelligent mission planning is required to maneuver a vehicle to the precise depths and locations that are the sites of active blooms, or of physical/chemical features that might be sources of bloom initiation or persistence. The embedded network yields high-resolution temporal and spatial measurements of pertinent environmental parameters and the resulting biology (see Fig. 1). Supplementing this with ocean current information, remotely sensed imagery and meteorological data, we obtain a comprehensive foundation for developing a fundamental understanding of HAB events. This in turn directs labor-intensive and costly sampling efforts and analyses. Additionally, we provide coastal municipalities, managers and state agencies with detailed information to aid their efforts in providing responsible environmental stewardship of their coastal waters.
Abstract:
Autonomous Underwater Vehicles (AUVs) are revolutionizing oceanography through their versatility, autonomy and endurance. However, they are still an underutilized technology. For coastal operations, the ability to track a certain feature is of interest to ocean scientists. Adaptive and predictive path planning requires frequent communication with significant data transfer. Currently, most AUVs rely on satellite phones as their primary communication link, a protocol that is expensive and slow. To reduce communication costs and provide adequate data transfer rates, we present a hardware modification along with a software system that provides an alternative, robust, disruption-tolerant communications framework enabling cost-effective glider operation in coastal regions. The framework is specifically designed to address multi-sensor deployments. We provide a system overview and present testing and coverage data for the network. Additionally, we include an application of ocean-model-driven trajectory design, which can benefit from the use of this network and communication system. Simulation and implementation results are presented for single and multiple vehicle deployments. The presented combination of infrastructure, software development and deployment experience brings us closer to the goal of providing a reliable and cost-effective data transfer framework to enable real-time, optimal trajectory design, based on ocean model predictions, to gather in situ measurements of interesting and evolving ocean features and phenomena.
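The core idea of a disruption-tolerant framework is store-and-forward: messages are buffered on the vehicle and flushed opportunistically whenever a low-cost link is available. A minimal Python sketch of that idea, with illustrative names and stand-in link functions (not the authors' implementation):

```python
# Minimal store-and-forward queue: messages are buffered locally and flushed
# whenever a link contact is available; otherwise they simply wait.
# Class and parameter names are illustrative, not the authors' software.

from collections import deque

class DelayTolerantQueue:
    def __init__(self, send, link_is_up):
        self._pending = deque()
        self._send = send              # callable that pushes one message over the current link
        self._link_is_up = link_is_up  # callable returning True while a contact is available

    def enqueue(self, message):
        """Buffer a message until the next contact."""
        self._pending.append(message)

    def flush(self):
        """Send as many buffered messages as the current contact allows."""
        sent = 0
        while self._pending and self._link_is_up():
            self._send(self._pending.popleft())
            sent += 1
        return sent

# Usage with stand-in link functions:
q = DelayTolerantQueue(send=print, link_is_up=lambda: True)
q.enqueue({"glider": "unit-1", "depth_m": 35.2})
q.flush()
```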
Abstract:
We read the excellent review of telemonitoring in chronic heart failure (CHF) [1] with interest and commend the authors on the proposed classification of telemedical remote management systems according to the type of data transfer, decision ability and level of integration. However, several points require clarification in relation to our Cochrane review of telemonitoring and structured telephone support [2]. We included a study by Kielblock [3]. We corresponded directly with this study team specifically to find out whether or not this was a randomised study and were informed that it was a randomised trial, albeit by date of birth. We note in our review [2] that this randomisation method carries a high risk of bias. Post-hoc meta-analyses without these data demonstrate no substantial change to the effect estimates for all-cause mortality (original risk ratio (RR) 0.66 [95% CI 0.54, 0.81], p<0.0001; revised RR 0.72 [95% CI 0.57, 0.92], p=0.008), all-cause hospitalisation (original RR 0.91 [95% CI 0.84, 0.99], p=0.02; revised RR 0.92 [95% CI 0.84, 1.02], p=0.10) or CHF-related hospitalisation (original RR 0.79 [95% CI 0.67, 0.94], p=0.008; revised RR 0.75 [95% CI 0.60, 0.94], p=0.01). Secondly, we would classify the Tele-HF study [4, 5] as structured telephone support rather than telemonitoring. Again, inclusion of these data alters the point estimate but not the overall result of the meta-analyses [4]. Finally, our review [2] does not include invasive telemonitoring, as the search strategy was not designed to capture these studies. Therefore, direct comparison of our review findings with recent studies of these interventions is not recommended.
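For readers unfamiliar with how such sensitivity analyses are computed, the Python sketch below shows fixed-effect (inverse-variance) pooling of risk ratios on the log scale and how dropping one study shifts the pooled estimate. The study inputs are hypothetical and are not the review's data.

```python
# Fixed-effect (inverse-variance) pooling of risk ratios on the log scale,
# illustrating a leave-one-out sensitivity analysis. The study RRs and
# standard errors below are hypothetical, not the Cochrane review's data.

import math

def pooled_rr(studies):
    """studies: list of (rr, se_of_log_rr) tuples. Returns the fixed-effect pooled RR."""
    weights = [1.0 / se**2 for _, se in studies]
    log_pooled = sum(w * math.log(rr) for (rr, _), w in zip(studies, weights)) / sum(weights)
    return math.exp(log_pooled)

studies = [(0.60, 0.15), (0.75, 0.20), (0.85, 0.25)]   # hypothetical trials
print(pooled_rr(studies))       # pooled estimate with all studies
print(pooled_rr(studies[1:]))   # sensitivity analysis excluding the first study
```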
Abstract:
Purpose – The rapidly changing role of capital city airports has placed demands on surrounding infrastructure. The need for infrastructure management and coordination is increasing as airports and cities grow and share common infrastructure frameworks. The purpose of this paper is to document the changing context in Australia, where the privatisation of airports has stimulated considerable land development with resulting pressures on surrounding infrastructure provision. It aims to describe a tool that is being developed to support decision-making between various stakeholders in the airport region. The use of planning support systems improves both communication and data transfer between stakeholders and provides a foundation for complex decisions on infrastructure. Design/methodology/approach – The research uses a case study approach and focuses on Brisbane International Airport and Brisbane City Council. The research is primarily descriptive and provides an empirical assessment of the challenges of developing and implementing planning support systems as a tool for governance and decision-making. Findings – The research assesses the challenges in implementing a common data platform for stakeholders. Agency data platforms and models, traditional roles in infrastructure planning, and the integration of similar data platforms all act as barriers to sharing a common language. A decision support system has to be shared by all stakeholders on a common platform that is versatile enough to support scenarios and changing conditions. The use of iPads for scenario modelling gives stakeholders the opportunity to interact, compare scenarios and views, and react with the modellers to explore other options. Originality/value – The research confirms that planning support systems have to be accessible to, and interactive for, their users. The Airport City concept is a new and evolving focus for airport development and will place continuing pressure on infrastructure servicing. A coordinated and efficient approach to infrastructure decision-making is critical, and an interactive planning support system that can model infrastructure scenarios provides a sound tool for governance.
Abstract:
Many computationally intensive scientific applications involve repetitive floating point operations other than addition and multiplication, which may present a significant performance bottleneck due to the relatively large latency or low throughput involved in executing such arithmetic primitives on commodity processors. A promising alternative is to execute such primitives on Field Programmable Gate Array (FPGA) hardware acting as an application-specific custom co-processor in a high performance reconfigurable computing platform. The use of FPGAs can provide advantages such as fine-grain parallelism, but issues relating to code development in a hardware description language and efficient data transfer to and from the FPGA chip can present significant application development challenges. In this paper, we discuss our practical experiences in developing a selection of floating point hardware designs to be implemented using FPGAs. Our designs include some basic mathematical library functions which can be implemented for user-defined precisions suitable for novel applications requiring non-standard floating point representation. We discuss the details of our designs along with results from performance and accuracy analysis tests.
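As a software illustration of what user-defined precision means (the paper's designs are hardware implementations, so this is only an analogy), the sketch below rounds a double-precision value to a reduced-width mantissa, the kind of precision/area trade-off a custom FPGA floating point core exposes.

```python
# Round a value to a binary mantissa with a chosen number of fractional bits,
# mimicking a reduced-precision floating point representation. This is a
# software analogy only; the paper's designs are HDL hardware implementations.

import math

def round_to_precision(x, frac_bits):
    """Round x to `frac_bits` fractional mantissa bits (plus the leading bit)."""
    if x == 0.0:
        return 0.0
    exp = math.floor(math.log2(abs(x)))
    scale = 2.0 ** (frac_bits - exp)
    return round(x * scale) / scale

print(round_to_precision(math.pi, 10))   # ~3.1406 with an 11-bit significand
print(round_to_precision(math.pi, 23))   # close to IEEE single precision
```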
Abstract:
A simple and effective down-sampling algorithm, the Peak-Hold-Down-Sample (PHDS) algorithm, is developed in this paper to enable rapid and efficient data transfer in remote condition monitoring applications. The algorithm is particularly useful for high frequency Condition Monitoring (CM) techniques and for low speed machine applications, since the combination of high sampling frequency and low rotating speed generally leads to large, unwieldy data sizes. The effectiveness of the algorithm was evaluated and tested on four sets of data in the study. One set of data was extracted from the condition monitoring signal of a practical industrial application. Another set of data was acquired from a low speed machine test rig in the laboratory. The other two sets of data were computer-simulated bearing defect signals containing either a single or multiple bearing defects. The results show that the PHDS algorithm can substantially reduce the size of data while preserving the critical bearing defect information for all the data sets used in this work, even when a large down-sample ratio was used (i.e., 500 times down-sampled). In contrast, down-sampling with the existing, conventional technique in signal processing eliminates useful and critical information, such as bearing defect frequencies in a signal, when the same down-sample ratio is employed. Noise and artificial frequency components were also induced by the conventional down-sample technique, thus limiting its usefulness for machine condition monitoring applications.
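Reading the abstract's description, peak-hold down-sampling keeps the peak of each block of samples rather than simply decimating, which is why impulsive bearing-defect content can survive large reduction ratios. A minimal NumPy sketch of that interpretation (not the authors' code):

```python
# Peak-hold down-sampling: keep the peak absolute value in each block of
# `ratio` samples instead of simply decimating. This is one minimal reading
# of the PHDS idea described in the abstract, not the authors' implementation.

import numpy as np

def peak_hold_downsample(signal, ratio):
    """Return one peak-held (signed) sample per block of `ratio` input samples."""
    signal = np.asarray(signal)
    n_blocks = len(signal) // ratio
    blocks = signal[:n_blocks * ratio].reshape(n_blocks, ratio)
    peak_idx = np.argmax(np.abs(blocks), axis=1)
    return blocks[np.arange(n_blocks), peak_idx]

# 500x reduction of a synthetic 1 MHz-sampled vibration record
fs = 1_000_000
t = np.arange(fs) / fs
x = 0.1 * np.sin(2 * np.pi * 50 * t)
x[::8000] += 1.0                          # sparse impulsive "defect" spikes
y = peak_hold_downsample(x, 500)
print(len(x), len(y), y.max())            # spikes survive the 500x reduction
```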
Abstract:
This paper proposes a new iterative method to achieve an optimally fitting plate for preoperative planning purposes. The proposed method involves the integration of four commercially available software tools, Matlab, Rapidform2006, SolidWorks and ANSYS, each performing specific tasks to obtain a plate shape that fits an individual tibia optimally and is mechanically safe. A typical challenge when crossing multiple platforms is to ensure correct data transfer. We present an example of the implementation of the proposed method to demonstrate successful data transfer between the four platforms and the feasibility of the method.
Abstract:
In this paper we propose the hybrid use of illuminant invariant and RGB images to perform image classification of urban scenes despite challenging variation in lighting conditions. Coping with lighting change (and the shadows thereby invoked) is a non-negotiable requirement for long term autonomy using vision. One aspect of this is the ability to reliably classify scene components in the presence of marked and often sudden changes in lighting. This is the focus of this paper. Posed with the task of classifying all parts of a scene from a full colour image, we propose that lighting invariant transforms can reduce the variability of the scene, resulting in a more reliable classification. We leverage the ideas of “data transfer” for classification, beginning with full colour images for obtaining candidate scene-level matches using global image descriptors. This is commonly followed by superpixel-level matching with local features. However, we show that if the RGB images are subjected to an illuminant invariant transform before computing the superpixel-level features, classification is significantly more robust to scene illumination effects. The approach is evaluated using three datasets: the first is our own dataset and the second is the KITTI dataset, using manually generated ground truth for quantitative analysis. We qualitatively evaluate the method on a third, custom dataset over a 750 m trajectory.
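One common single-channel illuminant invariant transform is a weighted log-chromaticity combination of the R, G and B channels; the exact transform and mixing weight used in the paper may differ, so the Python sketch below is an illustrative stand-in only (the weight depends on the camera's spectral response).

```python
# A weighted log-chromaticity combination of R, G, B yielding a single
# illumination-invariant channel. The mixing weight `alpha` is camera
# dependent; the value here is an illustrative placeholder, and the paper's
# exact transform may differ.

import numpy as np

def illuminant_invariant(rgb, alpha=0.45, eps=1e-6):
    """rgb: float array in [0, 1] with shape (H, W, 3). Returns an (H, W) invariant image."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.5 + np.log(g + eps) - alpha * np.log(b + eps) - (1 - alpha) * np.log(r + eps)

# Example on a random stand-in frame; real use would load an RGB image first.
frame = np.random.rand(480, 640, 3)
invariant = illuminant_invariant(frame)
print(invariant.shape, float(invariant.min()), float(invariant.max()))
```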
Abstract:
Anatomically pre-contoured fracture fixation plates are a treatment option for bone fractures. A well-fitting plate can be used as a tool for anatomical reduction of the fractured bone. However, recent studies showed that some plates fit poorly for many patients due to considerable shape variations between bones of the same anatomical site. Therefore, the plates have to be manually fitted and deformed by surgeons to fit each patient optimally. The process is time- and labor-intensive, and could lead to adverse clinical outcomes such as wound infection or plate failure. This paper proposes a new iterative method to simulate the patient-specific deformation of an optimally fitting plate for pre-operative planning purposes. We further demonstrate the validation of the method through a case study. The proposed method involves the integration of four commercially available software tools, Matlab, Rapidform2006, SolidWorks, and ANSYS, each performing specific tasks to obtain a plate shape that fits an individual tibia optimally and is mechanically safe. A typical challenge when crossing multiple platforms is to ensure correct data transfer. We present an example of the implementation of the proposed method to demonstrate successful data transfer between the four platforms and the feasibility of the method.
Abstract:
Observing the working procedures of construction workers is an effective means of maintaining the safety performance of a construction project. However, it is difficult to achieve due to the high worker-to-safety-officer ratio. There is a pressing need for the development of a tool to assist in the real-time monitoring of workers in order to reduce the number of construction accidents. The development and application of a real-time locating system (RTLS) based on the Chirp Spread Spectrum (CSS) technique is described in this paper for tracking the real-time position of workers on construction sites. Experiments and tests were carried out both on- and off-site to verify the system's accuracy for static and dynamic targets, indicating an average error within one metre. Experiments were also carried out to verify the ability of the system to identify workers' unsafe behaviours. Wireless data transfer was used to simplify the deployment of the system. The system was deployed in a public residential construction project and proved to be quick and simple to use. The cost of the developed system is also reported to be reasonable (around 1,800 USD) in this study, much lower than that of other RTLSs. In addition, the CSS technique is shown to provide an economical solution with reasonable accuracy compared with other positioning systems, such as ultra-wideband. The study verifies the potential of the CSS technique to provide an effective and economical aid in the improvement of safety management in the construction industry.
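An RTLS of this kind typically derives positions from range measurements to fixed anchors. As a generic illustration (not necessarily the positioning method used by the deployed system), the following Python sketch recovers a 2-D position from noisy ranges by linear least squares.

```python
# Generic 2-D trilateration from ranges to fixed anchors via linear least
# squares. This is an illustrative sketch, not necessarily the positioning
# method used by the CSS-based RTLS described in the abstract.

import numpy as np

def trilaterate(anchors, ranges):
    """anchors: (N, 2) coordinates; ranges: (N,) measured distances, N >= 3."""
    # Subtracting the first anchor's circle equation linearises the system.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (ranges[0]**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])
true_pos = np.array([12.0, 7.5])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.5, size=4)
print(trilaterate(anchors, ranges))   # close to (12.0, 7.5) despite ~0.5 m range noise
```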
Abstract:
In public transport, seamless coordinated transfers strengthen the quality of service and attract ridership. The problem of transfer coordination is complicated by (1) the stochasticity of travel time variability and (2) the unavailability of passengers' transfer plans. However, the proliferation of Big Data technologies provides a tremendous opportunity to solve these problems. This dissertation enhances passenger transfer quality through offline and online transfer coordination. While offline transfer coordination exploits knowledge of travel time variability to coordinate transfers, online transfer coordination provides simultaneous vehicle arrivals at stops to facilitate transfers by employing knowledge of passenger behaviour.