97 results for soil data requirements


Relevance:

30.00%

Publisher:

Abstract:

As regional and continental carbon balances of terrestrial ecosystems become available, it becomes clear that soils are the largest source of uncertainty. Repeated inventories of soil organic carbon (SOC) organized in soil monitoring networks (SMN) are being implemented in a number of countries. This paper reviews the concepts and design of SMNs in ten countries, and discusses the contribution of such networks to reducing the uncertainty of soil carbon balances. Some SMNs are designed to estimate country-specific land use or management effects on SOC stocks, while others collect soil carbon and ancillary data to provide a nationally consistent assessment of soil carbon condition across the major land-use/soil type combinations. The former use a single sampling campaign of paired sites, while the latter use both systematic (usually grid-based) and stratified repeated sampling campaigns (5–10 year intervals) with densities of one site per 10–1,040 km². For paired sites, multiple samples are taken at each site to allow statistical analysis, while for single sites, composite samples are taken. In both cases, fixed depth increments together with samples for bulk density and stone content are recommended. Samples should be archived to allow re-measurement using updated techniques. Information on land management and, where possible, land use history should be systematically recorded for each site. A case study of the agricultural frontier in Brazil is presented, in which land use effect factors are calculated in order to quantify the CO2 fluxes from national land use/management conversion matrices. Process-based SOC models can be run for the individual points of the SMN, provided detailed land management records are available. Such studies are still rare, as most SMNs have been implemented recently or are in progress.
Examples from the USA and Belgium show that uncertainties in SOC change range from 1.6–6.5 Mg C ha−1 for the prediction of SOC stock changes at individual sites to 11.72 Mg C ha−1, or 34% of the median SOC change, for soil/land use/climate units. For national SOC monitoring, stratified sampling appears to be the most straightforward way of attributing SOC values to units with similar soil/land use/climate conditions (i.e. a spatially implicit upscaling approach). Keywords: Soil monitoring networks - Soil organic carbon - Modeling - Sampling design
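The fixed-depth sampling with bulk density and stone content recommended above implies a standard per-increment stock calculation. A minimal sketch (the function name and example values are illustrative, not from the paper):

```python
def soc_stock_mg_ha(c_conc_pct, bulk_density_g_cm3, depth_cm, stone_frac=0.0):
    """Soil organic carbon stock (Mg C/ha) for one depth increment.

    c_conc_pct         -- organic C concentration of the fine earth (% by mass)
    bulk_density_g_cm3 -- fine-earth bulk density (g/cm^3)
    depth_cm           -- thickness of the sampled increment (cm)
    stone_frac         -- volumetric stone fraction (0-1), assumed to hold no fine earth
    """
    # g C/cm^2 = (C fraction) * BD * depth; multiplying by 100 converts to Mg C/ha
    return c_conc_pct / 100.0 * bulk_density_g_cm3 * depth_cm * (1.0 - stone_frac) * 100.0

# e.g. 1.2 % C, BD 1.3 g/cm^3, a 0-30 cm increment, 5 % stones
stock = soc_stock_mg_ha(1.2, 1.3, 30, 0.05)  # about 44.5 Mg C/ha
```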

Relevance:

30.00%

Publisher:

Abstract:

Soil organic carbon (C) sequestration rates based on the Intergovernmental Panel on Climate Change (IPCC) methodology were combined with local economic data to simulate the economic potential for C sequestration in response to conservation tillage in the six agro-ecological zones within the Southern Region of the Australian grains industry. The net C sequestration rate over 20 years for the Southern Region (which includes discounting for associated greenhouse gases) is estimated to be 3.6 or 6.3 Mg C/ha after converting to minimum or no-tillage practices, respectively, with no-till practices estimated to return 75% more carbon on average than minimum tillage. The highest net gains in C per hectare are realised when converting from conventional to no-tillage practices in the high-activity clay soils of the High Rainfall and Wimmera agro-ecological zones. On the basis of the total area available for change, the Slopes agro-ecological zone offers the highest net returns, potentially sequestering an additional 7.1 Mt C under a no-tillage scenario over 20 years. The economic analysis was summarised as C supply curves for each of the six zones, expressing the total additional C accumulated over 20 years for a price per tonne of C sequestered ranging from zero to AU$200. For a price of $50/Mg C, a total of 427 000 Mg C would be sequestered over 20 years across the Southern Region, <5% of the simulated C sequestration potential of 9.1 Mt for the region. The Wimmera and Mid-North offer the largest gains in C under minimum tillage over 20 years of all zones for all C prices. For the no-tillage scenario, at a price of $50/Mg C, 1.74 Mt C would be sequestered over 20 years across the Southern Region, <10% of the simulated C sequestration potential of 18.6 Mt for the region over 20 years. The Slopes agro-ecological zone offers the best return in C over 20 years under no-tillage for all C prices. The Mallee offers the least return for both minimum and no-tillage scenarios.
At a price of $200/Mg C, the transition from conventional tillage to minimum or no-tillage practices will only realise 19% and 33%, respectively, of the total biogeochemical sequestration potential of crop and pasture systems of the Southern Region over a 20-year period.
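The supply-curve summary described above can be sketched as a step function: at a given price, only zones whose break-even cost is covered contribute their potential. The break-even costs below are invented for illustration; only the Slopes potential (7.1 Mt C) comes from the abstract:

```python
# Hypothetical break-even costs ($/Mg C) paired with 20-year potentials (Mt C).
# The 7.1 Mt C figure for Slopes is from the abstract; the rest are invented.
zones = {"Slopes": (40, 7.1), "Wimmera": (60, 3.0), "Mallee": (150, 1.2)}

def carbon_supplied(price):
    """Total C (Mt) supplied at a given price: a zone contributes its full
    potential only when the price at least covers its break-even cost."""
    return sum(pot for cost, pot in zones.values() if cost <= price)

# Supply curve sampled at $0-200 in $50 steps
curve = {p: carbon_supplied(p) for p in range(0, 201, 50)}
```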

Relevance:

30.00%

Publisher:

Abstract:

Cracks are a significant factor that can lead to rainfall-induced instability in soil slopes. Cracks at the soil surface decrease the shear strength and increase the hydraulic conductivity of a soil slope. Although previous research has shown the effect of surface cracks on soil stability, the influence of deep cracks remains unknown. The limited availability of deep-crack data, owing to the difficulty of effective investigation methods, could be one of the obstacles. Current electrical resistivity technology can be used to detect deep cracks in soil. This paper discusses deep cracks in unsaturated residual soil slopes in Indonesia investigated using the electrical resistivity method. Field investigations, comprising borehole and SPT tests, were carried out at multiple locations in the area where the electrical resistivity testing had been conducted. Subsequently, the results from the borehole and SPT tests were used to verify the results of the electrical resistivity test. This study demonstrates the benefits and limitations of electrical resistivity in detecting deep cracks in residual soil slopes.
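For context on the geophysics involved, apparent resistivity for the widely used Wenner electrode array follows a simple formula; a sketch (not necessarily the specific array configuration used in the paper):

```python
import math

def wenner_apparent_resistivity(spacing_m, voltage_v, current_a):
    """Apparent resistivity (ohm-m) for a Wenner array with equal electrode
    spacing a: rho_a = 2 * pi * a * V / I. Anomalously high resistivity at
    depth is one indicator of possible cracking or desiccation."""
    return 2.0 * math.pi * spacing_m * voltage_v / current_a

rho = wenner_apparent_resistivity(2.0, 0.5, 0.01)  # 2 m spacing, 0.5 V, 10 mA
```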

Relevance:

30.00%

Publisher:

Abstract:

Policies that encourage greenhouse-gas emitters to mitigate emissions through terrestrial carbon (C) offsets – C sequestration in soils or biomass – will promote practices that reduce erosion and build soil fertility, while fostering adaptation to climate change, agricultural development, and rehabilitation of degraded soils. However, none of these benefits will be possible until changes in C stocks can be documented accurately and cost-effectively. This is particularly challenging when dealing with changes in soil organic C (SOC) stocks. Precise methods for measuring C in soil samples are well established, but spatial variability in the factors that determine SOC stocks makes it difficult to document change. Widespread interest in the benefits of SOC sequestration has brought this issue to the fore in the development of US and international climate policy. Here, we review the challenges to documenting changes in SOC stocks, how policy decisions influence offset documentation requirements, and the benefits and drawbacks of different sampling strategies and extrapolation methods.
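One concrete face of the spatial-variability problem is the number of samples needed to detect a given SOC change. A back-of-envelope sketch using the standard normal approximation (the numbers are illustrative, not from the review):

```python
import math

def samples_needed(sd, detectable_diff, z=1.96):
    """Approximate sample count to detect a mean SOC change of
    `detectable_diff` (Mg C/ha) at ~95 % confidence, given a between-sample
    standard deviation `sd`: n = (z * s / d)^2, rounded up."""
    return math.ceil((z * sd / detectable_diff) ** 2)

# e.g. a field SD of 10 Mg C/ha, wanting to detect a 2 Mg C/ha change
n = samples_needed(10, 2)
```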

Relevance:

30.00%

Publisher:

Abstract:

The uncertainty associated with how projected climate change will affect global C cycling could have a large impact on predictions of soil C stocks. The purpose of our study was to determine how various soil decomposition and chemistry characteristics relate to soil organic matter (SOM) temperature sensitivity. We accomplished this objective using long-term soil incubations at three temperatures (15, 25, and 35°C) and pyrolysis molecular beam mass spectrometry (py-MBMS) on 12 soils from 6 sites along a mean annual temperature (MAT) gradient (2–25.6°C). The Q10 values calculated from the CO2 respired during a long-term incubation using the Q10-q method showed decomposition of the more resistant fraction to be more temperature sensitive, with a Q10-q of 1.95 ± 0.08 for the labile fraction and 3.33 ± 0.04 for the more resistant fraction. We compared the fit of the soil respiration data between a two-pool model (active and slow) with first-order kinetics and a three-pool model, and found that the two models statistically fit the data equally well. The three-pool model changed the size and rate constant of the more resistant pool. The size of the active pool in these soils, calculated using the two-pool model, increased with incubation temperature and ranged from 0.1 to 14.0% of initial soil organic C. Sites with an intermediate MAT and the lowest C/N ratio had the largest active pool. Pyrolysis molecular beam mass spectrometry showed declines in carbohydrates with conversion from grassland to wheat cultivation, and a greater amount of protected carbohydrates in allophanic soils, which may have led to the differences found among the total amount of CO2 respired, the size of the active pool, and the Q10-q values of the soils.
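The comparison above rests on first-order kinetics; the two quantities involved are compact enough to sketch (illustrative, not the authors' code):

```python
import math

def q10(k_cold, k_warm, t_cold, t_warm):
    """Temperature sensitivity from decomposition rate constants measured at
    two incubation temperatures: Q10 = (k_warm / k_cold) ** (10 / dT)."""
    return (k_warm / k_cold) ** (10.0 / (t_warm - t_cold))

def two_pool_c_remaining(t, f_active, k_active, k_slow):
    """Two-pool first-order model: fraction of initial soil C remaining at
    time t, with an active pool of size f_active decaying at rate k_active
    and a slow pool decaying at k_slow (same time units throughout)."""
    return f_active * math.exp(-k_active * t) + (1.0 - f_active) * math.exp(-k_slow * t)
```

A rate that doubles over a 10°C step gives Q10 = 2, the usual reference point.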

Relevance:

30.00%

Publisher:

Abstract:

Soluble organic matter derived from exotic Pinus vegetation forms stronger complexes with iron (Fe) than the soluble organic matter derived from most native Australian species. This has led to concern about the environmental impacts of the establishment of extensive exotic Pinus plantations in coastal southeast Queensland, Australia. It has been suggested that the Pinus plantations may enhance the solubility of Fe in soils by increasing the amount of organically complexed Fe. While this remains inconclusive, the environmental impacts of an increased flux of dissolved, organically complexed Fe from soils to the fluvial system and then to sensitive coastal ecosystems are potentially damaging. Previous work investigated a small number of samples, was largely laboratory based and had limited application to field conditions. These assessments lacked field-based studies, including comparison of the soil water chemistry of sites under Pinus vegetation and undisturbed native vegetation. In addition, the main controls on the distribution and mobilisation of Fe in soils of this subtropical coastal region have not been determined. This information is required in order to better understand the relative significance of any Pinus-enhanced solubility of Fe. The main aim of this thesis is to determine the controls on Fe distribution and mobilisation in soils and soil waters of a representative coastal catchment in southeast Queensland (Poona Creek catchment, Fraser Coast) and to test the effect of Pinus vegetation on the solubility and speciation of Fe. The thesis is structured around three individual papers. The first paper identifies the main processes responsible for the distribution and mobilisation of labile Fe in the study area and takes a catchment-scale approach.
Physicochemical attributes of 120 soil samples distributed throughout the catchment are analysed, and a new multivariate data analysis approach (Kohonen’s self organising maps) is used to identify the conditions associated with high labile Fe. The second paper establishes whether Fe nodules play a major role as an iron source in the catchment, by determining the genetic mechanism responsible for their formation. The nodules are a major pool of Fe in much of the region and previous studies have implied that they may be involved in redox-controlled mobilisation and redistribution of Fe. This is achieved by combining a detailed study of a ferric soil profile (morphology, mineralogy and micromorphology) with the distribution of Fe nodules on a catchment scale. The third component of the thesis tests whether the concentration and speciation of Fe in soil solutions from Pinus plantations differs significantly from native vegetation soil solutions. Microlysimeters are employed to collect unaltered, in situ soil water samples. The redox speciation of Fe is determined spectrophotometrically and the interaction between Fe and dissolved organic matter (DOM) is modelled with the Stockholm Humic Model. The thesis provides a better understanding of the controls on the distribution, concentration and speciation of Fe in the soils and soil waters of southeast Queensland. Reductive dissolution is the main mechanism by which mobilisation of Fe occurs in the study area. Labile Fe concentrations are low overall, particularly in the sandy soils of the coastal plain. However, high labile Fe is common in seasonally waterlogged and clay-rich soils which are exposed to fluctuating redox conditions and in organic-rich soils adjacent to streams. Clay-rich soils are most common in the upper parts of the catchment. Fe nodules were shown to have a negligible role in the redistribution of dissolved iron in the catchment. 
They are formed by the erosion, colluvial transport and chemical weathering of iron-rich sandstones. The ferric horizons, in which nodules are commonly concentrated, subsequently form through differential biological mixing of the soil. Whereas dissolution/ reprecipitation of the Fe cements is an important component of nodule formation, mobilised Fe reprecipitates locally. Dissolved Fe in the soil waters is almost entirely in the ferrous form. Vegetation type does not affect the concentration and speciation of Fe in soil waters, although Pinus DOM has greater acidic functional group site densities than DOM from native vegetation. Iron concentrations are highest in the high DOM soil waters collected from sandy podosols, where they are controlled by redox potential. Iron concentrations are low in soil solutions from clay and iron oxide rich soils, in spite of similar redox potentials. This is related to stronger sorption to the reactive clay and iron oxide mineral surfaces in these soils, which reduces the amount of DOM available for microbial metabolisation and reductive dissolution of Fe. Modelling suggests that Pinus DOM can significantly increase the amount of truly dissolved ferric iron remaining in solution in oxidising conditions. Thus, inputs of ferrous iron together with Pinus DOM to surface waters may reduce precipitation of hydrous ferric oxides and increase the flux of dissolved iron out of the catchment. Such inputs are most likely from the lower catchment, where podosols planted with Pinus are most widely distributed. Significant outcomes other than the main aims were also achieved. It is shown that mobilisation of Fe in podosols can occur as dissolved Fe(II) rather than as Fe(III)-organic complexes. This has implications for the large body of work which assumes that Fe(II) plays a minor role. 
Also, the first paper demonstrates that a data analysis approach based on Kohonen’s self organising maps can facilitate the interpretation of complex datasets and can help identify geochemical processes operating on a catchment scale.
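A Kohonen self-organising map of the kind used in the first paper can be implemented in a few lines. This sketch (not the thesis code) shows the core update rule: find the best-matching unit, then pull it and its grid neighbours toward the sample:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal self-organising map. data: (n_samples, n_features) array of
    standardised soil attributes. Returns the (h, w, n_features) codebook."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    yy, xx = np.mgrid[0:h, 0:w]                         # grid coordinates
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)               # decaying learning rate
        sigma = sigma0 * (1.0 - epoch / epochs) + 0.5   # shrinking neighbourhood
        for x in rng.permutation(data):
            d = np.linalg.norm(weights - x, axis=2)     # distance to every unit
            by, bx = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
            g = np.exp(-((yy - by) ** 2 + (xx - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)      # neighbourhood pull
    return weights
```

After training, mapping each sample to its best-matching unit groups sites with similar physicochemical attributes, which is what makes the map useful for spotting conditions associated with high labile Fe.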

Relevance:

30.00%

Publisher:

Abstract:

The health system is one sector dealing with a deluge of complex data. Many healthcare organisations struggle to utilise these volumes of health data effectively and efficiently, and many still run stand-alone systems that are not integrated for information management and decision-making. This shows there is a need for an effective system to capture, collate and distribute health data. Implementing the data warehouse concept in healthcare is therefore potentially one solution for integrating health data. Data warehousing has been used to support business intelligence and decision-making in many other sectors, such as engineering, defence and retail. The research problem addressed is: "how can data warehousing assist the decision-making process in healthcare?" To address this problem, the researcher narrowed the investigation to focus on a cardiac surgery unit, using the cardiac surgery unit at the Prince Charles Hospital (TPCH) as the case study. The cardiac surgery unit at TPCH uses a stand-alone database of patient clinical data, which supports clinical audit, service management and research functions. However, much of the time, interaction between the cardiac surgery unit information system and other units is minimal; there is only limited, basic two-way interaction with the other clinical and administrative databases at TPCH which support decision-making processes. The aims of this research are to investigate what decision-making issues are faced by healthcare professionals using the current information systems, and how decision-making might be improved within this healthcare setting by implementing an aligned data warehouse model or models.
As part of the research, the researcher will propose and develop a suitable data warehouse prototype based on the cardiac surgery unit's needs, integrating the Intensive Care Unit database, the Clinical Costing unit database (Transition II) and the Quality and Safety unit database [electronic discharge summary (e-DS)]. The goal is to improve the current decision-making processes. The main objectives of this research are to improve access to integrated clinical and financial data, potentially providing better information for decision-making. Based on the questionnaire responses and the literature, the results indicate a centralised data warehouse model for the cardiac surgery unit at this stage. A centralised data warehouse model addresses current needs and can also be upgraded to an enterprise-wide warehouse model or a federated data warehouse model, as discussed in many of the consulted publications. The data warehouse prototype was developed using SAS enterprise data integration studio 4.2, and the data were analysed using SAS enterprise edition 4.3. In the final stage, the data warehouse prototype was evaluated by collecting feedback from the end users. This was achieved by using output created from the data warehouse prototype as examples of the data desired and possible in a data warehouse environment. According to the feedback collected from the end users, implementation of a data warehouse was seen as a useful tool to inform management options, provide a more complete representation of factors related to a decision scenario and potentially reduce information product development time. However, many constraints existed in this research. These included technical issues, such as data incompatibilities and the integration of the cardiac surgery database and e-DS database servers; Queensland Health information restrictions (Queensland Health information-related policies, patient data confidentiality and ethics requirements); limited availability of support from IT technical staff; and time restrictions. These factors influenced the warehouse model development process, necessitating an incremental approach. This highlights the presence of many practical barriers to data warehousing and integration at the clinical service level. Limitations included the use of a small convenience sample of survey respondents and a single-site case report study design. As mentioned previously, the proposed data warehouse is a prototype and was developed using only four database repositories. Despite this constraint, the research demonstrates that by implementing a data warehouse at the service level, decision-making is supported and data quality issues related to access and availability can be reduced, providing many benefits. Output reports produced from the data warehouse prototype demonstrated usefulness for improving decision-making in the management of clinical services, and for quality and safety monitoring for better clinical care. In the future, however, the selected centralised model can be upgraded to an enterprise-wide architecture by integrating additional hospital units' databases.
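A centralised warehouse of this kind is conventionally organised as a star schema, with a fact table joining dimension tables so that clinical and financial questions can be answered together. A toy sketch in SQLite (all table and column names are invented, not from the prototype):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# One fact table referencing two dimensions -- the shape that lets integrated
# clinical and costing questions be answered with a single join.
cur.executescript("""
CREATE TABLE dim_patient   (patient_id INTEGER PRIMARY KEY, sex TEXT);
CREATE TABLE dim_procedure (procedure_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_surgery  (
    patient_id   INTEGER REFERENCES dim_patient,
    procedure_id INTEGER REFERENCES dim_procedure,
    icu_hours    REAL,
    cost         REAL);
INSERT INTO dim_patient VALUES (1, 'F'), (2, 'M');
INSERT INTO dim_procedure VALUES (10, 'CABG');
INSERT INTO fact_surgery VALUES (1, 10, 36.0, 42000.0), (2, 10, 20.0, 38000.0);
""")
avg_cost, = cur.execute(
    "SELECT AVG(cost) FROM fact_surgery "
    "JOIN dim_procedure USING (procedure_id) WHERE name = 'CABG'").fetchone()
```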

Relevance:

30.00%

Publisher:

Abstract:

The Queensland Building Services Authority (QBSA) regulates the construction industry in Queensland, Australia, with licensing requirements creating differential financial reporting obligations depending on firm size. Economic theories of regulation and behaviour provide a framework for investigating the effects of the financial constraints and financial reporting requirements imposed by QBSA licensing. Data are analysed for all small and medium construction entities operating in Queensland between 2001 and 2006. Findings suggest that construction licensees are categorizing themselves as smaller to avoid the more onerous and costly financial reporting of higher licensee categories; this is consistent with US findings on the 2002 Sarbanes-Oxley (SOX) regulation, which created incentives for small firms to stay small to avoid the costs of compliance with more onerous financial reporting requirements. Such behaviour can have the undesirable economic consequences of adversely affecting employment, investment, wealth creation and financial stability. Insights and implications from the analysed QBSA processes are important for future policy reform and design, and worth considering where similar regulatory approaches are planned.

Relevance:

30.00%

Publisher:

Abstract:

This poster describes projects funded by the Australian National Data Service (ANDS). The specific projects funded were: a) the Greenhouse Gas Emissions Project (N2O), with Prof. Peter Grace from QUT's Institute of Sustainable Resources; b) the Q150 Project for the management of multimedia data collected at festival events, with Prof. Phil Graham from QUT's Institute of Creative Industries; and c) bio-diversity environmental sensing, with Prof. Paul Roe from the QUT Microsoft eResearch Centre. For the purposes of these projects, the Eclipse Rich Client Platform (Eclipse RCP) was chosen as an appropriate software development framework within which to develop the respective software. The poster presents a brief overview of the requirements of the projects, an overview of the experiences of the project team in using Eclipse RCP, and a report on the advantages and disadvantages of using Eclipse and the team's perspective on Eclipse as an integrated tool for supporting future data management requirements.

Relevance:

30.00%

Publisher:

Abstract:

This thesis provides a query model suitable for context sensitive access to a wide range of distributed linked datasets which are available to scientists using the Internet. The model is designed based on scientific research standards which require scientists to provide replicable methods in their publications. Although there are query models available that provide limited replicability, they do not contextualise the process whereby different scientists select dataset locations based on their trust and physical location. In different contexts, scientists need to perform different data cleaning actions, independent of the overall query, and the model was designed to accommodate this function. The query model was implemented as a prototype web application and its features were verified through its use as the engine behind a major scientific data access site, Bio2RDF.org. The prototype showed that it was possible to have context sensitive behaviour for each of the three mirrors of Bio2RDF.org using a single set of configuration settings. The prototype provided executable query provenance that could be attached to scientific publications to fulfil replicability requirements. The model was designed to make it simple to independently interpret and execute the query provenance documents using context specific profiles, without modifying the original provenance documents. Experiments using the prototype as the data access tool in workflow management systems confirmed that the design of the model made it possible to replicate results in different contexts with minimal additions, and no deletions, to query provenance documents.
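The context-specific profiles described above can be pictured as a mapping layer between provenance documents and dataset locations; a hypothetical sketch (the names and URLs are invented, not Bio2RDF's actual configuration):

```python
# Each profile maps abstract dataset keys to the endpoint that context trusts.
PROFILES = {
    "mirror-us": {"geneid": "http://us.example.org/geneid"},
    "mirror-eu": {"geneid": "http://eu.example.org/geneid"},
}

def resolve(provenance_steps, profile_name):
    """Bind each provenance step's dataset key to a concrete location using
    the active profile, leaving the provenance document itself unmodified."""
    profile = PROFILES[profile_name]
    return [(step, profile[dataset]) for step, dataset in provenance_steps]

steps = [("fetch-gene", "geneid")]        # a replicable, location-free query plan
us_plan = resolve(steps, "mirror-us")
eu_plan = resolve(steps, "mirror-eu")     # same provenance, different context
```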

Relevance:

30.00%

Publisher:

Abstract:

The emergence of semantic technologies to deal with the underlying meaning of things, instead of a purely syntactical representation, has led to new developments in various fields, including business process modeling. Inspired by artificial intelligence research, technologies for semantic Web services have been proposed and extended to process modeling. However, the applicability of semantic Web services to semantic business processes is limited, because business processes encompass wider business requirements than Web services. In particular, processes are concerned with the composition of tasks, that is, the order in which activities are carried out, regardless of their implementation details; the resources assigned to carry out tasks, such as machinery, people, and goods; data exchange; and security and compliance concerns.

Relevance:

30.00%

Publisher:

Abstract:

There are several popular soil moisture measurement methods in use today, such as time domain reflectometry, electromagnetic (EM) wave, electrical and acoustic methods. Significant research has been dedicated to developing measurement methods based on these concepts, especially to achieve non-invasiveness. The EM wave method has the advantage of being non-invasive: it does not require probes to be inserted into or buried in the soil. However, some EM methods are too complex, expensive or non-portable for wireless sensor network applications; examples include satellite- or UAV (unmanned aerial vehicle)-based sensors. This research proposes a method for detecting changes in soil moisture using soil-reflected electromagnetic (SREM) waves from wireless sensor networks (WSNs). Studies have shown that different levels of soil moisture affect the soil's dielectric properties, such as relative permittivity and conductivity, and in turn change its reflection coefficients. The SREM wave method uses a transmitter adjacent to a WSN node whose sole purpose is to transmit wireless signals to be reflected by the soil. The strength of the reflected signal, which is determined by the soil's reflection coefficients, is used to differentiate levels of soil moisture. The novelty of this method comes from using WSN communication signals to estimate soil moisture without external sensors or invasive equipment. The method is non-invasive, low cost and simple to set up. Three locations in Brisbane, Australia, were chosen as experiment sites. The soil at these locations contains 10–20% clay according to the Australian Soil Resource Information System. Six approximate levels of soil moisture (8, 10, 13, 15, 18 and 20%) were measured at each location, with each measurement consisting of 200 data points.
In total, 3600 measurements were completed in this research, which is sufficient to achieve the research objective of assessing and proving the concept of the SREM wave method. The results were compared with reference data from a similar soil type to validate the concept. A fourth-degree polynomial analysis was used to generate an equation for estimating soil moisture from the received signal strength recorded using the SREM wave method.
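The final calibration step can be sketched with NumPy's polynomial fitting. The RSSI/moisture pairs below are invented for illustration, standing in for the 3600 field measurements:

```python
import numpy as np

# Hypothetical calibration pairs: received signal strength (dBm) against
# volumetric soil moisture (%). The real study fits data of this shape.
rssi = np.array([-62.0, -60.5, -58.0, -56.5, -54.0, -52.0])
moisture = np.array([8.0, 10.0, 13.0, 15.0, 18.0, 20.0])

x = rssi - rssi.mean()                 # centre the predictor for stability
coeffs = np.polyfit(x, moisture, 4)    # fourth-degree least-squares fit
estimate = np.polyval(coeffs, -57.0 - rssi.mean())  # moisture at -57 dBm
```

Centring the predictor before fitting keeps the degree-4 Vandermonde matrix well conditioned, which matters when the raw RSSI values sit far from zero.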

Relevance:

30.00%

Publisher:

Abstract:

Soil organic carbon sequestration rates over 20 years based on the Intergovernmental Panel on Climate Change (IPCC) methodology were combined with local economic data to determine the potential for soil C sequestration in wheat-based production systems on the Indo-Gangetic Plain (IGP). The C sequestration potential of rice–wheat systems of India on conversion to no-tillage is estimated to be 44.1 Mt C over 20 years. Implementing no-tillage practices in maize–wheat and cotton–wheat production systems would yield an additional 6.6 Mt C. This offset is equivalent to 9.6% of India's annual greenhouse gas emissions (519 Mt C) from all sectors (excluding land use change and forestry), or less than one percent per annum. The economic analysis was summarized as carbon supply curves expressing the total additional C accumulated over 20 years for a price per tonne of carbon sequestered ranging from zero to USD 200. At a carbon price of USD 25 Mg C−1, 3 Mt C (7% of the soil C sequestration potential) could be sequestered over 20 years through the implementation of no-till cropping practices in rice–wheat systems of the Indian States of the IGP, increasing to 7.3 Mt C (17% of the soil C sequestration potential) at USD 50 Mg C−1. Maximum levels of sequestration could be attained with carbon prices approaching USD 200 Mg C−1 for the States of Bihar and Punjab. At this carbon price, a total of 34.7 Mt C (79% of the estimated C sequestration potential) could be sequestered over 20 years across the rice–wheat region of India, with Uttar Pradesh contributing 13.9 Mt C.

Relevance:

30.00%

Publisher:

Abstract:

A substantial body of literature exists identifying factors contributing to under-performing Enterprise Resource Planning systems (ERPs), including poor communication, lack of executive support and user dissatisfaction (Calisir et al., 2009). Of particular interest is Momoh et al.'s (2010) recent review identifying poor data quality (DQ) as one of nine critical factors associated with ERP failure. DQ is central to ERP operating processes, ERP-facilitated decision-making and inter-organizational cooperation (Batini et al., 2009). Crucial in ERP contexts is that the integrated, automated, process-driven nature of ERP data flows can amplify DQ issues, compounding minor errors as they flow through the system (Haug et al., 2009; Xu et al., 2002). However, despite the growing appreciation of the importance of DQ in determining ERP success, research is lacking on the relationship between stakeholders' requirements and perceptions of ERP DQ, perceived data utility, and the impact of users' treatment of data on ERP outcomes.

Relevance:

30.00%

Publisher:

Abstract:

A significant issue encountered when fusing data received from multiple sensors is the accuracy of the timestamp associated with each piece of data. This is particularly important in applications such as Simultaneous Localisation and Mapping (SLAM), where vehicle velocity forms an important part of the mapping algorithms; on fast-moving vehicles, even millisecond inconsistencies in data timestamping can produce errors which need to be compensated for. The timestamping problem is compounded in a robot swarm environment due to the use of non-deterministic readily-available hardware (such as 802.11-based wireless) and inaccurate clock synchronisation protocols (such as the Network Time Protocol (NTP)). As a result, the synchronisation of the clocks between robots can be out by tens to hundreds of milliseconds, making correlation of data difficult and preventing the possibility of the units performing synchronised actions such as triggering cameras or intricate swarm manoeuvres. In this thesis, a complete data fusion unit is designed, implemented and tested. The unit, named BabelFuse, is able to accept sensor data from a number of low-speed communication buses (such as RS232, RS485 and CAN Bus) and also to timestamp events that occur on General Purpose Input/Output (GPIO) pins by referencing a submillisecond-accurate wirelessly-distributed "global" clock signal. In addition to its timestamping capabilities, it can also be used to trigger an attached camera at a predefined start time and frame rate. This functionality enables the creation of a wirelessly-synchronised distributed image acquisition system over a large geographic area; a real-world application of this functionality is a platform to facilitate wirelessly-distributed 3D stereoscopic vision. A 'best-practice' design methodology is adopted within the project to ensure the final system operates according to its requirements.
Initially, requirements are generated, from which a high-level architecture is distilled. This architecture is then converted into a hardware specification and low-level design, which is then manufactured. The manufactured hardware is verified to ensure it operates as designed, and firmware and Linux Operating System (OS) drivers are written to provide the features and connectivity required of the system. Finally, integration testing is performed to ensure the unit functions as per its requirements. The BabelFuse system comprises a single Grand Master unit, which is responsible for maintaining the absolute value of the "global" clock. Slave nodes then determine their local clock offset from that of the Grand Master via synchronisation events which occur multiple times per second. The mechanism used for synchronising the clocks between the boards wirelessly makes use of specific hardware and a firmware protocol based on elements of the IEEE 1588 Precision Time Protocol (PTP). With the key requirement of the system being submillisecond-accurate clock synchronisation (as a basis for timestamping and camera triggering), automated testing is carried out to monitor the offsets between each Slave and the Grand Master over time. A common strobe pulse is also sent to each unit for timestamping; the correlation between the timestamps of the different units is used to validate the clock offset results. Analysis of the automated test results shows that the BabelFuse units are almost three orders of magnitude more accurate than their requirement; the clocks of the Slave and Grand Master units do not differ by more than three microseconds over a running time of six hours, and the mean clock offset of Slaves to the Grand Master is less than one microsecond. The common strobe pulse used to verify the clock offset data yields a positive result, with a maximum variation between units of less than two microseconds and a mean value of less than one microsecond.
The camera triggering functionality is verified by connecting the trigger pulse output of each board to a four-channel digital oscilloscope and setting each unit to output a 100 Hz periodic pulse with a common start time. The resulting waveform shows a maximum variation between the rising edges of the pulses of approximately 39 µs, well below the 1 ms target.
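The synchronisation draws on IEEE 1588, and the offset/delay arithmetic at the heart of a PTP-style exchange is compact enough to sketch (illustrative, not the BabelFuse firmware):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Slave clock offset and one-way path delay from one PTP-style exchange:
      t1: master sends Sync        (master clock)
      t2: slave receives it        (slave clock)
      t3: slave sends Delay_Req    (slave clock)
      t4: master receives it       (master clock)
    Assumes a symmetric network path."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Slave clock 5 units ahead of the master, 1 unit of path delay each way:
off, d = ptp_offset_and_delay(0.0, 6.0, 10.0, 6.0)  # off = 5.0, d = 1.0
```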