42 results for Florida Bay Interagency Science Center
Abstract:
Ambiguity validation, an important step in integer ambiguity resolution, tests the correctness of the fixed integer ambiguities of phase measurements before they are used in the positioning computation. Most existing investigations of ambiguity validation focus on the test statistic; how to determine the threshold more reasonably is less well understood, although it is one of the most important topics in ambiguity validation. Currently, there are two threshold determination methods in the ambiguity validation procedure: the empirical approach and the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis. The fixed failure rate approach has a rigorous basis in probability theory, but it employs a more complicated procedure. This paper focuses on how to determine the threshold easily and reasonably. Both the FF-ratio test and the FF-difference test are investigated in this research, and extensive simulation results show that the FF-difference test can achieve comparable or even better performance than the well-known FF-ratio test. Another benefit of adopting the FF-difference test is that its threshold can be expressed as a function of the integer least-squares (ILS) success rate for a specified failure rate tolerance. Thus, a new threshold determination method, named the threshold function, is proposed for the FF-difference test. The threshold function method preserves the fixed failure rate characteristic and is also easy to apply. The performance of the threshold function is validated with simulated data. The validation results show that, with the threshold function method, the impact of the modelling error on the failure rate is less than 0.08%. Overall, the threshold function for the FF-difference test is a very promising threshold determination method, and it makes the FF-approach applicable to real-time GNSS positioning applications.
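As a rough illustration of the acceptance logic described above, the sketch below applies a difference test to the squared residual norms of the best and second-best integer candidates and accepts the fixed solution only if their difference exceeds a threshold driven by the ILS success rate. The function names, the form of the threshold function, and its coefficients are hypothetical placeholders, not the calibrated threshold function derived in the paper.

```python
def difference_threshold(ils_success_rate, failure_rate_tol=0.001):
    """Placeholder threshold function: maps the ILS success rate and a
    failure-rate tolerance to an acceptance threshold. The functional form
    and coefficients are illustrative only, not the paper's fitted function."""
    # A lower success rate or a tighter failure-rate tolerance demands a larger threshold.
    return max(0.0, 25.0 * (1.0 - ils_success_rate) / failure_rate_tol ** 0.1)

def accept_fixed_ambiguity(q_best, q_second, ils_success_rate):
    """FF-difference test: accept the fixed (integer) solution only if the
    second-best candidate is sufficiently worse than the best one."""
    return (q_second - q_best) >= difference_threshold(ils_success_rate)

# Example: squared norms of the ambiguity residuals for the two best candidates.
print(accept_fixed_ambiguity(q_best=2.1, q_second=9.7, ils_success_rate=0.999))
```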
Abstract:
Environment Bay of Plenty commissioned GNS Science to measure nitrogen and phosphorus concentrations in rainfall and in rainfall recharge to groundwater at the Kaharoa rainfall recharge site. The aim of this work is to determine nutrient concentrations in rainfall and in rainfall recharge to groundwater under pastoral land use.
Abstract:
Sediment samples were taken from six sampling sites in Bramble Bay, Queensland, Australia between February and November 2012. They were analysed for a range of heavy metals including Al, Fe, Mn, Ti, Ce, Th, U, V, Cr, Co, Ni, Cu, Zn, As, Cd, Sb, Te, Hg, Tl and Pb. Fraction analysis, enrichment factors and Principal Component Analysis-Absolute Principal Component Scores (PCA-APCS) were carried out in order to assess metal pollution, potential bioavailability and source apportionment. Cr and Ni exceeded the Australian Interim Sediment Quality Guidelines at some sampling sites, while Hg was found to be the most enriched metal. Fraction analysis identified increasing weak-acid-soluble Hg and Cd over the sampling period. Source apportionment via PCA-APCS identified four sources of metal pollution, namely marine sediments, shipping, antifouling coatings and a mixed source. These sources need to be considered in any metal pollution control measure within Bramble Bay.
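For reference, enrichment factors of the kind used in studies like this one are commonly computed by normalizing each metal to a conservative element such as Al and comparing against a background ratio. The sketch below illustrates that standard calculation; the numbers in the example are made-up placeholders, not measurements from the Bramble Bay study.

```python
def enrichment_factor(c_metal, c_al, bg_metal, bg_al):
    """Al-normalized enrichment factor:
    EF = (C_metal / C_Al)_sample / (C_metal / C_Al)_background.
    Values well above 1 suggest enrichment relative to background."""
    return (c_metal / c_al) / (bg_metal / bg_al)

# Illustrative concentrations only (mg/kg); not data from Bramble Bay.
print(enrichment_factor(c_metal=0.15, c_al=45000.0, bg_metal=0.05, bg_al=70000.0))
```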
Abstract:
The focus of this paper is on two World Heritage Areas: the Great Barrier Reef in Queensland, Australia and the Everglades in Florida. While both are World Heritage listed by UNESCO, the Everglades is on the "World Heritage in Danger" list and the Great Barrier Reef could be on this list within the next year if present pressures continue. This paper examines the planning approaches and governance structures used in these two areas (Queensland and Florida) to manage growth and development pressures. To make the analysis manageable, given the scale of these World Heritage areas, case studies at the local government level will be used: the Cairns Regional Council in Queensland and Monroe County in Florida. The case study analysis will involve three steps: (1) examining the various plans at the federal, state and local levels that impact upon environmental quality in the Great Barrier Reef and Everglades; (2) assessing the degree to which these plans have been implemented; and (3) determining whether (and how) the plans have improved environmental quality. In addition to the planning analysis, we will also examine the governance structures (Lebel et al. 2006) within which planning operates. In any comparative analysis, context is important (Hantrais 2009). Contextual differences between Queensland and Florida have previously been examined by Sipe et al. (2007) and will be used as the starting point for this analysis. Our operating hypothesis and preliminary analysis suggest that the planning approaches and governance structures used in Florida and Queensland are considerably different, but the environmental outcomes may be similar. This is based, in part, on Vella (2004), who conducted a comparative analysis of environmental practices in the sugar industry in Florida and Queensland. This research re-examines this hypothesis and broadens the focus beyond the sugar industry to growth and development more broadly.
Abstract:
Climate has been, throughout modern history, a primary attribute for attracting residents to the “Sunshine States” of Florida (USA) and Queensland (Australia). The first major group of settlers capitalized on the winter growing season to support a year-round agricultural economy. As these economies developed, the climate attracted the tourism and retirement industries. Yet as Florida and Queensland have blossomed under beneficial climates, the stresses acting on the natural environment are exacting a toll. Southeast Florida and eastern Queensland are among the most vulnerable coastal metropolitan areas in the world. In these places sea level rise is already measurable, with empirically observable impacts that will continue to increase regardless of any climate change mitigation. The cities of the subtropics share a series of paradoxes relating to climate, resources, environment, and culture. As the subtropical climate entices new residents and visitors, there are increasing costs associated with urban infrastructure and the ravages of violent weather. The carefree lifestyle of subtropical cities is increasingly dependent on scarce water and energy resources and the flow of tangible goods that support a trade economy. The natural environment is no longer exploitable, as the survival of the human environment is contingent upon the ability of natural ecosystems to absorb the impact of human actions. The quality of subtropical living is challenged by the mounting pressures of population growth and rapid urbanization, yet urban form and contemporary building design fail to take advantage of the subtropical zone’s natural attributes of abundant sunshine, cooling breezes and warm temperatures. Nevertheless, by building a global network of local knowledge, subtropical cities like Brisbane, the City of Gold Coast and Fort Lauderdale are confidently leading the way with innovative and inventive solutions for building resiliency and adaptation to climate change. The Centre for Subtropical Design at Queensland University of Technology organized the first international Subtropical Cities conference in Brisbane, Australia, where the “fault-lines” of subtropical cities at breaking points were revealed. The second conference, held in 2008, shed a more optimistic light with the theme "From fault-lines to sight-lines - subtropical urbanism in 20-20", highlighting the leadership exemplified in the vitality of small and large works from around the subtropical world. Beyond these isolated local actions, however, the need for more cooperation and collaboration was identified as the key to moving beyond the problems of the present and foreseeable future. The spirit of leadership and collaboration has taken on new force as two institutions from opposite sides of the globe joined together to host the 3rd international conference, Subtropical Cities 2011 - Subtropical Urbanism: Beyond Climate Change. The collaboration between Florida Atlantic University and the Queensland University of Technology to host this conference, for the first time in the United States, forges a new direction in international cooperative research to address urban design solutions that support sustainable behaviours, resiliency and adaptation to sea level rise, greenhouse gas (GHG) reduction, and climate change research in the areas of architecture and urban design, planning, and public policy.
With southeast Queensland and southern Florida as contributors to this global effort among subtropical urban regions that share similar challenges, opportunities, and vulnerabilities, our mutual aim is to advance the development and application of local knowledge to the global problems we share. The conference attracted over 150 participants from four continents. Presentations by authors were organized into three sub-themes: Cultural/Place Identity, Environment and Ecology, and Social Economics. Each of the 22 papers presented underwent a double-blind peer review by a panel of international experts from the disciplines and research areas represented. The Centre for Subtropical Design at the Queensland University of Technology is leading Australia in innovative environmental design with a multi-disciplinary focus on creating places that are ‘at home’ in the warm humid subtropics. The Broward Community Design Collaborative at Florida Atlantic University's College for Design and Social Inquiry has built an interdisciplinary collaboration that is unique in the United States among the units of Architecture, Urban and Regional Planning, Social Work, and Public Administration, together with the College of Engineering and Computer Science, the College of Science, and the Center for Environmental Studies, to engage in funded action research through design inquiry to solve the problems of development for urban resiliency and environmental sustainability. As we move beyond debates about climate change, which is now acting upon us, the subtropical urban regions of the world will continue to convene to demonstrate the power of local knowledge against global forces, thereby inspiring us as we work toward everyday engagement and action that can make our cities more livable, equitable, and green.
Abstract:
The importance of a thorough and systematic literature review has long been recognised across academic domains as critical to the foundation of new knowledge and theory evolution. Driven by an exponentially growing body of knowledge in the IS discipline, there has been a recent influx of guidance on how to conduct a literature review. As literature reviews emerge as a standalone research method in their own right, these method-focused guidelines are of increasing interest and are gaining acceptance at top-tier IS publication outlets. Nevertheless, the finer details that justify the selected content, and the effective presentation of supporting data, have not been widely discussed in these method papers to date. This paper addresses this gap by exploring the concept of ‘literature profiling’, arguing that it is a key aspect of a comprehensive literature review. The study establishes the importance of profiling for managing aspects such as quality assurance, transparency and the mitigation of selection bias, and then discusses how profiling can provide a valid basis for data analysis based on the attributes of the selected literature. In essence, this study has conducted an archival analysis of literature (predominantly from the IS domain) to present its main argument: the value of literature profiling, with supporting exemplary illustrations.
Abstract:
IT consumerization is both a major opportunity and a significant challenge for organizations. However, IS research has so far hardly discussed the implications for IT management. In this paper we address this topic by empirically identifying organizational themes for IT consumerization and conceptually exploring the direct and indirect effects on the business value of IT, IT capabilities, and the IT function. More specifically, based on two case studies, we identify eight organizational themes: consumer IT strategy, policy development and responsibilities, consideration of employees' private lives, user involvement in IT-related processes, individualization, updated IT infrastructure, end user support, and data and system security. The contributions of this paper are: (1) the identification of organizational themes for IT consumerization; (2) the proposed effects on the business value of IT, IT capabilities and the IT function; and (3) the combination of empirical insights into IT consumerization with managerial theories in the IS discipline.
Abstract:
The phase relations have been investigated experimentally at 200 and 500 MPa as a function of water activity for one of the least evolved rhyolite compositions (Indian Batt Rhyolite) and a more evolved composition (Cougar Point Tuff XV) from the 12·8-8·1 Ma Bruneau-Jarbidge eruptive center of the Yellowstone hotspot. Particular priority was given to accurate determination of the water content of the quenched glasses using infrared spectroscopic techniques. Comparison of the compositions of natural and experimentally synthesized phases confirms that high temperatures (>900°C) and extremely low melt water contents (<1·5 wt % H₂O) are required to reproduce the natural mineral assemblages. In melts containing 0·5-1·5 wt % H₂O, the liquidus phase is clinopyroxene (excluding Fe-Ti oxides, which are strongly dependent on fO₂), and the liquidus temperature of the more evolved Cougar Point Tuff sample (BJR; 940-1000°C) is at least 30°C lower than that of the Indian Batt Rhyolite lava sample (IBR2; 970-1030°C). For the composition BJR, comparison of the compositions of the natural and experimental glasses indicates a pre-eruptive temperature of at least 900°C. The compositions of clinopyroxene and pigeonite pairs can be reproduced only for water contents below 1·5 wt % H₂O at 900°C, or for lower water contents if the temperature is higher. For the composition IBR2, a minimum temperature of 920°C is necessary to reproduce the main phases at 200 and 500 MPa. At 200 MPa, the pre-eruptive water content of the melt is constrained to the range 0·7-1·3 wt % at 950°C and 0·3-1·0 wt % at 1000°C. At 500 MPa, the pre-eruptive temperatures are slightly higher (by 30-50°C) for the same ranges of water concentration. The experimental results are used to explore possible proxies to constrain the depth of magma storage. The crystallization sequence of tectosilicates is strongly dependent on pressure between 200 and 500 MPa. In addition, the normative Qtz-Ab-Or contents of glasses quenched from melts coexisting with quartz, sanidine and plagioclase depend on pressure and melt water content, assuming that the normative Qtz and Ab/Or contents of such melts are mainly dependent on pressure and water activity, respectively. The combination of results from the phase equilibria and from the composition of the glasses indicates that the depth of magma storage for the IBR2 and BJR compositions may be in the range 300-400 MPa (~13 km) and 200-300 MPa (~10 km), respectively.
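The storage-depth figures quoted above follow from a simple lithostatic conversion, depth ≈ P/(ρg). The short sketch below reproduces that arithmetic under an assumed average crustal density of about 2700 kg/m³, which is my assumption rather than a value stated in the abstract.

```python
def lithostatic_depth_km(pressure_mpa, crust_density=2700.0, g=9.81):
    """Convert a lithostatic pressure (MPa) to an approximate depth (km),
    assuming a uniform crustal density in kg/m^3."""
    return pressure_mpa * 1e6 / (crust_density * g) / 1000.0

# With these assumptions, 300-400 MPa corresponds to roughly 11-15 km (~13 km at
# the midpoint) and 200-300 MPa to roughly 7.5-11 km (close to the quoted ~10 km).
for p in (200, 250, 300, 350, 400):
    print(p, "MPa ->", round(lithostatic_depth_km(p), 1), "km")
```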
Abstract:
The Bruneau-Jarbidge eruptive center (BJEC) in the central Snake River Plain, Idaho, USA consists of the Cougar Point Tuff (CPT), a series of ten high-temperature (900-1000°C), voluminous ignimbrites produced over the explosive phase of volcanism (12.8-10.5 Ma), and more than a dozen equally high-temperature rhyolite lava flows produced during the effusive phase (10.5-8 Ma). Spot analyses by ion microprobe of oxygen isotope ratios in 210 zircons demonstrate that all of the eruptive units of the BJEC are characterized by zircon δ¹⁸O values ≤ 2.5‰, thus documenting the largest low-δ¹⁸O silicic volcanic province known on Earth (>10⁴ km³). There is no evidence for voluminous normal-δ¹⁸O magmatism at the BJEC preceding the generation of low-δ¹⁸O magmas, as there is at other volcanic centers that generate low-δ¹⁸O magmas, such as Heise and Yellowstone. At these younger volcanic centers of the hotspot track, such low-δ¹⁸O magmas represent ~45% and ~20%, respectively, of total eruptive volumes. Zircons in all BJEC tuffs and lavas studied (23 units) document strong δ¹⁸O depletion (median CPT δ¹⁸O_Zrc = 1.0‰, post-CPT lavas = 1.5‰), with the third member of the CPT recording an excursion to minimum δ¹⁸O values (δ¹⁸O_Zrc = -1.8‰) in a supereruption, >2‰ lower than other voluminous low-δ¹⁸O rhyolites known worldwide (δ¹⁸O_WR ≤ 0.9 vs. 3.4‰). Subsequent units of the CPT and lavas record a progressive recovery in δ¹⁸O_Zrc to ~2.5‰ over a ~4 m.y. interval (12 to 8 Ma). We present detailed evidence of unit-to-unit systematic patterns in O isotopic zoning in zircons (i.e., the direction and magnitude of Δcore-rim), the spectrum of δ¹⁸O in individual units, and zircon inheritance patterns established by re-analysis of spots for U-Th-Pb isotopes by LA-ICPMS and SHRIMP. In conjunction with mineral thermometry and magma compositions, these patterns are difficult to reconcile with the well-established model for "cannibalistic" low-δ¹⁸O magma genesis at Heise and Yellowstone. We present an alternative model for the central Snake River Plain using the modeling results of Leeman et al. (2008) for ¹⁸O depletion as a function of depth in a mid-upper crustal protolith that was hydrothermally altered by infiltrating meteoric waters prior to the onset of silicic magmatism. The model proposes that BJEC silicic magmas were generated in response to the propagation of a melting front, driven by the incremental growth of a vast underlying mafic sill complex, over a ~5 m.y. interval through a crustal volume in which a vertically asymmetric δ¹⁸O_WR gradient had previously developed, sharply inflected from ~-1 to 10‰ at mid-upper crustal depths. Within the context of the model, data from BJEC zircons are consistent with incremental melting and mixing events in the roof zones of magma reservoirs that accompany the surfaceward advance of the coupled mafic-silicic magmatic system.
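For readers unfamiliar with the notation, δ¹⁸O values such as those quoted above are conventionally defined as the per mil (‰) deviation of the ¹⁸O/¹⁶O ratio from a reference standard (typically VSMOW for oxygen). The definition below is general isotope-geochemistry background, not a formula taken from the paper.

```latex
\delta^{18}\mathrm{O} = \left( \frac{\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\mathrm{sample}}}{\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\mathrm{standard}}} - 1 \right) \times 1000
```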
Abstract:
The Bruneau–Jarbidge eruptive center of the central Snake River Plain in southern Idaho, USA, produced multiple rhyolite lava flows with volumes of <10 km³ to 200 km³ each from ~11.2 to 8.1 Ma, most of which followed its climactic phase of large-volume explosive volcanism, represented by the Cougar Point Tuff, from 12.7 to 10.5 Ma. These lavas represent the waning stages of silicic volcanism at a major eruptive center of the Yellowstone hotspot track. Here we provide pyroxene compositions and thermometry results from several lavas that demonstrate that the demise of the silicic volcanic system was characterized by sustained, high pre-eruptive magma temperatures (mostly ≥950 °C) prior to the onset of exclusively basaltic volcanism at the eruptive center. Pyroxenes display a variety of textures in single samples, including solitary euhedral crystals as well as glomerocrysts, crystal clots and annealed microgranular inclusions of pyroxene ± magnetite ± plagioclase. Pigeonite and augite crystals are unzoned, and there are no detectable differences in major and minor element compositions according to textural variety: mineral compositions in the microgranular inclusions and crystal clots are identical to those of phenocrysts in the host lavas. In contrast to members of the preceding Cougar Point Tuff that host polymodal glass and mineral populations, pyroxene compositions in each of the lavas are characterized by single rather than multiple discrete compositional modes. Collectively, the lavas reproduce and extend the range of Fe–Mg pyroxene compositional modes observed in the Cougar Point Tuff to more Mg-rich varieties. The compositionally homogeneous populations of pyroxene in each of the lavas, as well as the lack of core-to-rim zonation in individual crystals, suggest that each eruption was fed by a compositionally homogeneous magma reservoir, and similarities with the Cougar Point Tuff suggest consanguinity of such reservoirs with those that supplied the polymodal Cougar Point Tuff. Pyroxene thermometry results obtained using QUILF equilibria yield pre-eruptive magma temperatures of 905 to 980 °C, and individual modes consistently record higher Ca contents and higher temperatures than pyroxenes with equivalent Fe–Mg ratios in the preceding Cougar Point Tuff. As is the case with the Cougar Point Tuff, evidence for up-temperature zonation within single crystals that would be consistent with recycling of sub- or near-solidus material from antecedent magma reservoirs by rapid reheating is extremely rare. Also, the absence of intra-crystal zonation, particularly at crystal rims, is not easily reconciled with cannibalization of caldera fill that subsided into pre-eruptive reservoirs. The textural, compositional and thermometric results are instead consistent with minor re-equilibration to higher temperatures of the unerupted crystalline residue from the explosive phase of volcanism, or perhaps with newly generated magmas from source materials very similar to those of the Cougar Point Tuff. Collectively, the data suggest that most of the pyroxene compositional diversity represented by the tuffs and lavas was produced early in the history of the eruptive center and that compositions across this range were preserved or duplicated through much of its lifetime.
Mineral compositions and thermometry of the multiple lavas suggest that unerupted magmas residual to the explosive phase of volcanism may have been stored at sustained, high temperatures after that phase ended. If so, such persistent high temperatures and large eruptive magma volumes likewise require an abundant and persistent supply of basalt magmas to the lower and/or mid-crust, consistent with the tectonic setting of a continental hotspot.
Abstract:
This project was the first comprehensive assessment of heavy metals to be conducted in the sediments of Northern Moreton Bay since the 1970s and found that shipping and shipping related activities contributed significantly to the level of sediment contamination in the area. The study was also used to develop and test new methods of assessing heavy metal sediment quality.
Abstract:
Deriving an estimate of optimal fishing effort, or even an approximate estimate, is very valuable for managing fisheries with multiple target species. The most challenging task associated with this is allocating effort to individual species when only the total effort is recorded. Spatial information on the distribution of each species within a fishery can be used to justify the allocations, but often such information is not available. To determine the long-term overall effort required to achieve maximum sustainable yield (MSY) and maximum economic yield (MEY), we consider three methods for allocating effort: (i) optimal allocation, which optimally allocates effort among target species; (ii) fixed proportions, which chooses proportions based on past catch data; and (iii) economic allocation, which splits effort based on the expected catch value of each species. Determining the overall fishing effort required to achieve these management objectives is a maximization problem subject to constraints arising from economic and social considerations. We illustrated the approaches using a case study of the Moreton Bay Prawn Trawl Fishery in Queensland (Australia). The results were consistent across the three methods. Importantly, our analysis demonstrated that the optimal total effort was very sensitive to daily fishing costs: the effort ranged from 9500-11 500 to 6000-7000, 4000, and 2500 boat-days using daily cost estimates of $0, $500, $750, and $950, respectively. The zero daily cost corresponds to the MSY, while a daily cost of $750 most closely represents the actual present fishing cost. Given the recent debate on which costs should be factored into analyses for deriving MEY, our findings highlight the importance of including an appropriate cost function for practical management advice. The approaches developed here could be applied to other multispecies fisheries where only aggregated fishing effort data are recorded, as the literature on this type of modelling is sparse.
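As a simplified, single-species illustration of how MSY and MEY effort targets depend on fishing costs, the sketch below uses a classical Schaefer surplus-production model with closed-form effort solutions. Both the choice of the Schaefer model and the parameter values are illustrative assumptions of mine; they are not taken from the Moreton Bay analysis, which handles multiple species and aggregated recorded effort.

```python
def schaefer_effort_targets(r, K, q, price, daily_cost):
    """Closed-form effort targets for a Schaefer surplus-production model.

    Equilibrium yield:  Y(E)  = q*E*K*(1 - q*E/r)
    Profit:             pi(E) = price*Y(E) - daily_cost*E
    E_MSY maximizes equilibrium yield; E_MEY maximizes profit.
    """
    e_msy = r / (2.0 * q)
    e_mey = (r / (2.0 * q)) * (1.0 - daily_cost / (price * q * K))
    return e_msy, max(0.0, e_mey)

# Illustrative parameters only (not Moreton Bay estimates):
r, K, q, price = 1.2, 2.0e6, 6.0e-5, 15.0
for cost in (0.0, 500.0, 750.0, 950.0):
    e_msy, e_mey = schaefer_effort_targets(r, K, q, price, cost)
    print(f"daily cost ${cost:>6.0f}: E_MSY = {e_msy:.0f}, E_MEY = {e_mey:.0f} boat-days")
```

The qualitative behaviour mirrors the sensitivity reported above: the zero-cost optimum coincides with E_MSY, and the economically optimal effort shrinks as the daily cost rises.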