921 results for "Fate and transport ocean model"
Abstract:
X-ray computed tomography (CT) provides an insight into the progression of dissolution in the tests of planktonic foraminifera. Four species of foraminifera (G. ruber [white], G. sacculifer, N. dutertrei and P. obliquiloculata) from Pacific, Atlantic and Indian Ocean core-top samples were examined by CT and SEM. Inner chamber walls began to dissolve at Δ[CO3^2-] values of 12-14 µmol/kg. Close to the calcite saturation horizon, dissolution and precipitation of calcite may occur simultaneously. Inner calcite of G. sacculifer, N. dutertrei and P. obliquiloculata from such sites appeared altered or replaced, whereas outer crust calcite was dense with no pores. Unlike the other species, there was no distinction between inner and outer calcite in CT scans of G. ruber. Empty calcite crusts of N. dutertrei and P. obliquiloculata were most resistant to dissolution and were present in samples where Δ[CO3^2-] ≈ -20 µmol/kg. Five stages of preservation were identified in CT scans, and an empirical dissolution index, XDX, was established. XDX appears to be insensitive to initial test mass. Mass loss in response to dissolution was similar between species and sites, at ~0.4 µg/µmol/kg. We provide calibrations to estimate Δ[CO3^2-] and initial test mass from XDX.
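To make the quoted calibrations concrete, here is a minimal Python sketch of how an index like XDX could be turned into estimates of Δ[CO3^2-] and initial test mass. The linear form and the coefficients a and b are invented placeholders; only the ~0.4 µg/µmol/kg mass-loss rate comes from the abstract.

```python
# Hypothetical sketch of an XDX-based calibration. The linear form and the
# coefficients a, b are illustrative assumptions, not the paper's fitted values.

def delta_co3_from_xdx(xdx, a=-8.0, b=20.0):
    """Estimate Delta[CO3^2-] (umol/kg) from the dissolution index XDX,
    assuming a linear calibration with hypothetical coefficients a, b."""
    return a * xdx + b

def initial_test_mass(measured_mass_ug, xdx, loss_rate=0.4, a=-8.0, b=20.0):
    """Back-correct a measured test mass (ug) for dissolution, using the
    ~0.4 ug per umol/kg mass-loss rate quoted in the abstract. Mass loss
    only accrues below saturation (Delta[CO3^2-] < 0)."""
    delta = delta_co3_from_xdx(xdx, a, b)
    undersaturation = max(0.0, -delta)
    return measured_mass_ug + loss_rate * undersaturation

print(initial_test_mass(12.0, xdx=4))  # e.g. a heavily dissolved test
```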
Abstract:
We have determined the concentrations and isotopic composition of noble gases in old oceanic crust and oceanic sediments, and the isotopic composition of noble gases in emanations from subduction volcanoes. Comparison with the noble gas signature of the upper mantle and a simple model allow us to conclude that at least 98% of the noble gases and water in the subducted slab returns to the atmosphere through subduction volcanism before they can be admixed into the Earth's mantle. It seems that the upper mantle is inaccessible to atmospheric noble gases due to an efficient subduction barrier for volatiles.
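The ≥98% conclusion is, at heart, a flux balance. A toy sketch with placeholder numbers (the fluxes below are not the paper's data):

```python
# Illustrative flux balance behind the ">=98% returned" statement.
# The flux values are placeholders, not the paper's data.
subducted_flux = 1.0          # noble gas flux carried into the trench (arbitrary units)
mantle_admixed_flux = 0.02    # upper bound inferred from upper-mantle signatures

fraction_returned = 1.0 - mantle_admixed_flux / subducted_flux
print(f"fraction returned via arc volcanism: {fraction_returned:.0%}")  # -> 98%
```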
Abstract:
Investigating the variability of Agulhas leakage, the volume transport of water from the Indian Ocean to the South Atlantic Ocean, is highly relevant due to its potential contribution to the Atlantic Meridional Overturning Circulation as well as the global circulation of heat and salt, and hence global climate. Quantifying Agulhas leakage is challenging due to the non-linear nature of this process; current observations are insufficient to estimate its variability, and ocean models all have biases in this region, even at high resolution. An Eulerian threshold integration method is developed to examine the mechanisms of Agulhas leakage variability in six ocean model simulations of varying resolution. This intercomparison, based on the circulation and thermohaline structure at the Good Hope line, a transect to the southwest of the southern tip of Africa, is used to identify features that are robust regardless of the model used, and takes into account the thermohaline biases of each model. Compared against a passive tracer method, the approach captures 60% of the magnitude of Agulhas leakage and more than 80% of its temporal fluctuations, suggesting that it is appropriate for investigating the variability of Agulhas leakage. In all simulations but one, the major driver of variability is associated with mesoscale features passing through the section. High-resolution (<1/10 deg.) hindcast models agree on the temporal (2-4 cycles per year) and spatial (300-500 km) scales of these features, corresponding to observed Agulhas Rings. Coarser-resolution models (<1/4 deg.) reproduce similar time scales of Agulhas leakage variability in spite of their difficulties in representing the properties of Agulhas Rings. A coarse-resolution climate model (2 deg.) does not resolve the spatio-temporal mechanism of Agulhas leakage variability, and is hence expected to underestimate the contribution of the Agulhas Current System to climate variability.
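To illustrate the Eulerian threshold integration idea, here is a minimal numpy sketch: transport is integrated across a section only in cells whose water-mass properties fall inside "Agulhas" thresholds. The grid, velocities and T/S thresholds below are invented for illustration and are not the study's values.

```python
import numpy as np

# Minimal sketch of an Eulerian threshold integration across a section
# (e.g. the Good Hope line). Grid sizes, velocities and the T/S thresholds
# are invented for illustration; they are not the study's values.

nz, ny = 30, 100                      # depth x along-section cells
rng = np.random.default_rng(0)
v = rng.normal(0.0, 0.1, (nz, ny))    # cross-section velocity (m/s)
T = rng.uniform(2.0, 22.0, (nz, ny))  # temperature (deg C)
S = rng.uniform(34.0, 35.8, (nz, ny)) # salinity (psu)
dz = np.full((nz, 1), 50.0)           # cell thickness (m)
dy = np.full((1, ny), 5_000.0)        # cell width (m)

# Keep only cells whose properties mark warm, salty Indian Ocean water.
agulhas_mask = (T > 14.0) & (S > 35.3)
leakage_sv = np.sum(v * dz * dy * agulhas_mask) / 1e6  # Sverdrups
print(f"leakage through section: {leakage_sv:.2f} Sv")
```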
Abstract:
Although very little is known about the transport, fate and toxic effects of medical compounds in aquatic environments, the presence of these compounds in potable water sources can no longer be overlooked. One can argue that trace concentrations of drugs in water are a relatively minor problem; however, current and future demands on global potable freshwater supplies will probably lead to more frequent incidents of indirect and direct water reuse at the local, regional and cross-border levels. It is important to note that the solution to this emerging ecological issue does not rely on new and better wastewater treatment technologies, but on a new paradigm of responsibility and an understanding of the relations between anthropogenic actions and their ecological effects. The objective of this brief communication is to present the state of the art of research conducted in the last decade in Europe and the United States concerning the presence of pharmaceutical products in aquatic environments.
Abstract:
Principal Topic: Small and micro-enterprises are believed to play a significant part in economic growth and poverty alleviation in developing countries. However, a range of issues arise when looking at the support required for local enterprise development, the role of micro-finance and sustainability. This paper explores the issues associated with the establishment and resourcing of micro-enterprise development and proposes a model of sustainable support of enterprise development in very poor developing economies, particularly in Africa. The purpose of this paper is to identify and address the range of issues raised by the literature and empirical research in Africa regarding micro-finance and small business support, and to develop a model for sustainable support for enterprise development within a particular cultural and economic context. Micro-finance has become big business, with a range of models - from those that operate on a strictly business basis to those that come from a philanthropic base. The models used grow from a range of philosophical and cultural perspectives. Entrepreneurship training is provided around the world. Success is often measured by the number involved and the repayment rates - which are very high, largely because of the lending models used. This paper will explore the range of options available and propose a model that can be implemented and evaluated in rapidly changing developing economies. Methodology/Key Propositions: The research draws on entrepreneurial and micro-finance literature and empirical research undertaken in Mozambique, which lies along the Indian Ocean coast of Southern Africa. As a result of war and natural disasters over a prolonged period, there is little industry, primary industries are primitive and there is virtually no infrastructure. Mozambique is ranked as one of the poorest countries in the world. The conditions in Mozambique, though not identical, reflect conditions in many other parts of Africa. A number of key elements in the development of enterprises in poor countries are explored, including: the impact of micro-finance; sustainable models of micro-finance; education and training; capacity building; support mechanisms; the impact on poverty, families and the local economy; survival entrepreneurship versus growth entrepreneurship; and transitions to the formal sector. Results and Implications: The result of this study is the development of a model for providing intellectual and financial resources to micro-entrepreneurs in poor developing countries in a sustainable way. The model provides a base for ongoing research into the process of entrepreneurial growth in African developing economies. The research raises a number of issues regarding sustainability, including the nature of the donor/recipient relationship, access to affordable resources, the impact of individual entrepreneurial activity on the local economy, and the need for ongoing research to understand the whole process and its impact, intended and unintended.
Abstract:
Key topics: Since the birth of the Open Source movement in the mid-1980s, open source software has become more and more widespread. Amongst others, the Linux operating system, the Apache web server and the Firefox web browser have taken substantial market share from their proprietary competitors. Open source software is governed by particular types of licenses. Whereas proprietary licenses only allow use of the software in exchange for a fee, open source licenses grant users more rights, such as the free use, free copy, free modification and free distribution of the software, as well as free access to the source code. This new phenomenon has raised many managerial questions: organizational issues related to the system of governance that underlies such open source communities (Raymond, 1999a; Lerner and Tirole, 2002; Lee and Cole, 2003; Mockus et al., 2000; Tuomi, 2000; Demil and Lecocq, 2006; O'Mahony and Ferraro, 2007; Fleming and Waguespack, 2007), collaborative innovation issues (Von Hippel, 2003; Von Krogh et al., 2003; Von Hippel and Von Krogh, 2003; Dahlander, 2005; Osterloh, 2007; David, 2008), issues related to the nature as well as the motivations of developers (Lerner and Tirole, 2002; Hertel, 2003; Dahlander and McKelvey, 2005; Jeppesen and Frederiksen, 2006), public policy and innovation issues (Jullien and Zimmermann, 2005; Lee, 2006), technological competition issues related to standard battles between proprietary and open source software (Bonaccorsi and Rossi, 2003; Bonaccorsi et al., 2004; Economides and Katsamakas, 2005; Chen, 2007), and intellectual property rights and licensing issues (Laat, 2005; Lerner and Tirole, 2005; Gambardella, 2006; Determann et al., 2007). A major unresolved issue concerns open source business models and revenue capture, given that open source licenses imply no fee for users. On this topic, articles show that a commercial activity based on open source software is possible, as they describe different possible ways of doing business around open source (Raymond, 1999; Dahlander, 2004; Daffara, 2007; Bonaccorsi and Merito, 2007). These studies usually look at open source-based companies, which encompass a wide range of firms with different categories of activities: providers of packaged open source solutions, IT Services & Software Engineering firms, and open source software publishers. However, the business model implications are different for each of these categories: the activities of providers of packaged solutions and IT Services & Software Engineering firms are based on software developed outside their boundaries, whereas commercial software publishers sponsor the development of the open source software. This paper focuses on open source software publishers' business models, as this issue is even more crucial for this category of firms, which take the risk of investing in the development of the software. The literature so far identifies and depicts only two generic types of business models for open source software publishers: the "bundling" business model (Pal and Madanmohan, 2002; Dahlander, 2004) and the dual licensing business model (Välimäki, 2003; Comino and Manenti, 2007). Nevertheless, these business models are not applicable in all circumstances. Methodology: The objectives of this paper are: (1) to explore in which contexts the two generic business models described in the literature can be implemented successfully; and (2) to depict an additional business model for open source software publishers which can be used in a different context.
To do so, this paper draws upon an explorative case study of IdealX, a French open source security software publisher. The case study consists of a series of three interviews conducted between February 2005 and April 2006 with the co-founder and the business manager. It aims to depict the process of IdealX's search for an appropriate business model between its creation in 2000 and 2006. This software publisher tried both generic types of open source software publishers' business models before designing its own. Consequently, through IdealX's trials and errors, I investigate the conditions under which such generic business models can be effective. Moreover, this study describes the business model finally designed and adopted by IdealX: an additional open source software publisher's business model based on the principle of "mutualisation", which is applicable in a different context. Results and implications: Finally, this article contributes to ongoing empirical work within entrepreneurship and strategic management on open source software publishers' business models: it provides the characteristics of three generic business models (the bundling business model, the dual licensing business model and the mutualisation business model) as well as the conditions under which they can be successfully implemented (regarding the type of product developed and the competencies of the firm). This paper also goes beyond the traditional concept of business model used by scholars in the open source literature. In this article, a business model is not only considered as a way of generating income (the "revenue model" (Amit and Zott, 2001)), but rather as the necessary conjunction of value creation and value capture, in line with the recent literature on business models (Amit and Zott, 2001; Chesbrough and Rosenbloom, 2002; Teece, 2007). Consequently, this paper analyses business models from the point of view of these two components.
Abstract:
Motor vehicles are major emitters of gaseous and particulate pollution in urban areas, and exposure to particulate pollution can have serious health effects, ranging from respiratory and cardiovascular disease to mortality. Motor vehicle tailpipe particle emissions span a broad size range, from 0.003 to 10 µm, and are measured as different subsets of particle mass concentrations or particle number count. However, no comprehensive inventories currently exist in the international published literature covering this wide size range. This paper presents the first published comprehensive inventory of motor vehicle tailpipe particle emissions covering the full size range of particles emitted. The inventory was developed for urban South-East Queensland by combining techniques from two distinctly different disciplines: aerosol science and transport modelling. A comprehensive set of particle emission factors was combined with traffic modelling, and tailpipe particle emissions were quantified for particle number (ultrafine particles), PM1, PM2.5 and PM10 for light duty vehicles, heavy duty vehicles and buses. A second aim of the paper involved using the data derived in this inventory for scenario analyses, to model the particle emission implications of different proportions of passengers travelling in light duty vehicles and buses in the study region, and to derive an estimate of fleet particle emissions in 2026. It was found that heavy duty vehicles (HDVs) in the study region were major emitters of particulate matter pollution: although they contributed only around 6% of total regional vehicle kilometres travelled, they contributed more than 50% of the region's particle number (ultrafine particles) and PM1 emissions. With the freight task in the region predicted to double over the next 20 years, this suggests that HDVs need to be a major focus of mitigation efforts. HDVs dominated particle number (ultrafine particles) and PM1 emissions, while LDVs dominated PM2.5 and PM10 emissions. Buses contributed approximately 1-2% of regional particle emissions.
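The core of such an inventory is an emission-factor-times-activity calculation. The sketch below reproduces the structure, not the study's numbers: all emission factors and vehicle-kilometres-travelled (VKT) are invented, though chosen so that HDVs dominate particle number while holding a small share of travel, as the abstract reports.

```python
# Sketch of the emission-factor x activity calculation underlying a
# tailpipe particle inventory. Vehicle classes match the abstract; the
# emission factors and vehicle-kilometres-travelled (VKT) are invented.

# emission factors: particle number (#/km) and PM2.5 mass (g/km)
EF = {
    "LDV": {"PN": 2e13, "PM2.5": 0.01},
    "HDV": {"PN": 6e14, "PM2.5": 0.25},
    "bus": {"PN": 3e14, "PM2.5": 0.15},
}
VKT = {"LDV": 9.4e9, "HDV": 0.6e9, "bus": 0.1e9}  # annual km per class

for metric in ("PN", "PM2.5"):
    totals = {cls: EF[cls][metric] * VKT[cls] for cls in EF}
    grand = sum(totals.values())
    shares = {cls: f"{v / grand:.0%}" for cls, v in totals.items()}
    print(metric, shares)
```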
Abstract:
Introduction: During development and regeneration, odontogenesis and osteogenesis are initiated by a cascade of signals driven by several master regulatory genes. Methods: In this study, we investigated the differential expression of 84 stem cell-related genes in dental pulp cells (DPCs) and periodontal ligament cells (PDLCs) undergoing odontogenic/osteogenic differentiation. Results: Our results showed that, although there was considerable overlap, certain genes were more differentially expressed in PDLCs than in DPCs. CCND2, DLL1, and MME were the major upregulated genes in both PDLCs and DPCs, whereas KRT15 was the only gene significantly downregulated in PDLCs and DPCs in both odontogenic and osteogenic differentiation. Interestingly, a large number of regulatory genes in odontogenic and osteogenic differentiation interact or crosstalk via the Notch, Wnt, transforming growth factor β (TGF-β)/bone morphogenetic protein (BMP), and cadherin signalling pathways, such as the regulation of APC, DLL1, CCND2, BMP2, and CDH1. Using a rat dental pulp and periodontal defect model, the expression and distribution of both BMP2 and CDH1 were verified for their spatial localization in dental pulp and periodontal tissue regeneration. Conclusions: This study has generated an overview of stem cell-related gene expression in DPCs and PDLCs during odontogenic/osteogenic differentiation and revealed that these genes may interact through the Notch, Wnt, TGF-β/BMP, and cadherin signalling pathways to play a crucial role in determining the fate of dental-derived cells and dental tissue regeneration. These findings provide new insight into the molecular mechanisms of dental tissue mineralization and regeneration.
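As an aside on method, differential-expression calls of this kind typically come down to thresholding log2 fold changes. A schematic sketch (the values and the 2-fold cutoff are invented, not the study's data):

```python
import math

# Sketch of how up/down-regulation calls are made from a gene panel:
# classify each gene by its log2 fold change against a cutoff.
# Fold-change values and the 2-fold cutoff are invented for illustration.

fold_change = {"CCND2": 5.2, "DLL1": 3.8, "MME": 4.4, "KRT15": 0.21, "APC": 1.1}

for gene, fc in fold_change.items():
    log2fc = math.log2(fc)
    if log2fc >= 1.0:
        call = "upregulated"
    elif log2fc <= -1.0:
        call = "downregulated"
    else:
        call = "unchanged"
    print(f"{gene}: log2FC = {log2fc:+.2f} -> {call}")
```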
Abstract:
Despite more than three decades of research, there is a limited understanding of the transactional processes of appraisal, stress and coping. This has led to calls for more focused research on the entire process that underlies these variables; to date, there remains a paucity of such research. The present study examined Lazarus and Folkman's (1984) transactional model of stress and coping. One hundred and twenty-nine Australian participants in full-time employment (nurses and administration employees) were recruited: 49 males (age mean = 34, SD = 10.51) and 80 females (age mean = 36, SD = 10.31). The analysis of three path models indicated that, in addition to the original paths found in Lazarus and Folkman's transactional model (primary appraisal -> secondary appraisal -> stress -> coping), there were also direct links between primary appraisal and stress level at time one, and between stress level at time one and stress level at time two. This study provides additional insights into the transactional process, which will extend our understanding of how individuals appraise, cope with and experience occupational stress.
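A path model of this shape can be estimated as a chain of least-squares regressions. The sketch below simulates data and fits the reported chain, including the direct primary-appraisal-to-time-one-stress link (coping is omitted for brevity); all coefficients are arbitrary and nothing here reproduces the study's estimates.

```python
import numpy as np

# Sketch of estimating the path chain
#   primary appraisal -> secondary appraisal -> stress(T1) -> stress(T2)
# as a sequence of least-squares regressions, including the direct
# primary -> stress(T1) link reported above. Data are simulated.

rng = np.random.default_rng(1)
n = 129
primary = rng.normal(size=n)
secondary = 0.6 * primary + rng.normal(scale=0.8, size=n)
stress_t1 = 0.5 * secondary + 0.3 * primary + rng.normal(scale=0.7, size=n)
stress_t2 = 0.7 * stress_t1 + rng.normal(scale=0.6, size=n)

def ols(y, *predictors):
    """Fit y on an intercept plus predictors; return the slope estimates."""
    X = np.column_stack([np.ones_like(y), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

print("secondary ~ primary:          ", ols(secondary, primary))
print("stress_t1 ~ secondary+primary:", ols(stress_t1, secondary, primary))
print("stress_t2 ~ stress_t1:        ", ols(stress_t2, stress_t1))
```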
Abstract:
LUPTAI is a decision-aiding tool that enables local and state governments to optimise land use and transport integration. In contrast to mobility between land uses (typically via road), accessibility represents the opportunity and choice to reach common land use destinations by public transport and/or walking. LUPTAI uses a GIS-based methodology to quantify and map accessibility to common land use destinations by walking and/or public transport. The tool can be applied to small or large study areas, and to the current situation in a study area or to future scenarios (such as changes to public transport services, public transport corridors or stations, population density or land use). The tool has been piloted on the Gold Coast and the results are encouraging. This paper outlines the GIS-based methodology and the findings of this pilot study. The paper demonstrates the benefits and possible applications of LUPTAI to other urbanised local government areas in Queensland. It also discusses how this accessibility indexing approach could be developed into a decision-support tool to assist local and state government agencies in a range of transport and land-use planning activities.
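Accessibility indexing of this kind can be sketched schematically: score a location by the weighted set of destination types reachable within a travel-time budget. Every number below (speed, distances, weights, threshold) is invented for illustration and does not reflect LUPTAI's actual scoring scheme.

```python
# Schematic sketch of a GIS-style accessibility index in the spirit of
# LUPTAI: score a location by the land-use destinations reachable within
# a time budget. All distances, speeds, thresholds and weights are invented.

WALK_SPEED = 80.0   # metres per minute
TIME_BUDGET = 15.0  # minutes

# network distance (m) from one example location to the nearest destination of each type
nearest_m = {"shops": 600, "school": 1400, "health": 2500, "rail station": 900}
weights = {"shops": 1.0, "school": 1.0, "health": 1.5, "rail station": 2.0}

def accessibility_score(nearest_m, weights):
    score = 0.0
    for dest, dist in nearest_m.items():
        minutes = dist / WALK_SPEED
        if minutes <= TIME_BUDGET:  # destination is reachable within the budget
            score += weights[dest]
    return score

print(accessibility_score(nearest_m, weights))  # higher = better accessibility
```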
Abstract:
A mathematical model is developed to simulate the discharge of a LiFePO4 cathode. The model contains three size scales, matching experimental observations in the literature on the multi-scale nature of LiFePO4 material. A shrinking core is used on the smallest scale to represent the phase transition of LiFePO4 during discharge. The model is validated against existing experimental data, and the validated model is then used to investigate parameters that influence active material utilisation. Specifically, the size and composition of agglomerates of LiFePO4 crystals are discussed, and we investigate and quantify the relative effects that the ionic and electronic conductivities within the oxide have on oxide utilisation. We find that agglomerates of crystals can be tolerated under low discharge rates. The role of the electrolyte in limiting (cathodic) discharge is also discussed, and we show that electrolyte transport does limit performance at high discharge rates, confirming the conclusions of recent literature.
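The shrinking-core representation at the crystal scale follows a standard geometric relation between the local state of discharge and the radius of the untransformed core. A sketch under the usual spherical-particle assumption (the particle radius is a placeholder, not the paper's parameter):

```python
# Sketch of the standard shrinking-core picture used at the crystal scale:
# as a spherical LiFePO4 particle discharges, a lithiated shell grows inward
# and the untransformed core shrinks. Spherical geometry and the particle
# radius are assumptions for illustration, not the paper's parameters.

R = 50e-9  # particle radius (m)

def core_radius(sod, R=R):
    """Radius of the unlithiated core at local state of discharge sod in [0, 1].
    Volume balance: (r_c / R)^3 = 1 - sod."""
    return R * (1.0 - sod) ** (1.0 / 3.0)

for sod in (0.0, 0.5, 0.9, 1.0):
    print(f"SOD = {sod:.1f}: core radius = {core_radius(sod) * 1e9:5.1f} nm")
```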
Abstract:
Uninhabited aerial vehicles (UAVs) are a cutting-edge technology that is at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications as just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications and routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered to be one of the most important issues to be addressed, given its safety critical nature. The collision avoidance problem can be roughly organised into three areas: 1) Sense; 2) Detect; and 3) Avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation. This thesis tackles the detection aspect of collision avoidance, via the development of a target detection algorithm that is capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning. This translates to detecting potential threats whilst they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm. The main contributions of the thesis are: 1) the proposal of a dim target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts for target heading angle estimation. In this thesis we propose a computer-vision based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and the hardware's relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit the temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter has been examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy related concepts. The filter design process is posed as a mini-max optimisation problem based on a joint RER cost criterion.
We provide proof that this joint RER cost criterion provides a bound on the conditional mean estimate (CME) performance of our MHMM filter, and this in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than having to resort to time consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type. This suggests that the RER concepts established here may be generalised to provide a useful design criterion for multiple model filtering approaches outside the class of HMM filters. In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operation conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets out to distances ranging from about 400m to 900m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning ahead of impact that approaches the 12.5 second response time recommended for human pilots. Furthermore, readily available graphic processing unit (GPU) based hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found to be capable of achieving data processing rates sufficient for real-time operation. There is also scope for further improvement in performance through code optimisations. Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue that is currently one of the most significant obstacles preventing widespread civilian applications of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple HMM filtering approach and a novel RER-based multiple filter design process. The utility of our multiple HMM filtering approach and RER concepts, however, extends beyond the target detection problem. This is demonstrated by our application of HMM filters and RER concepts to a heading angle estimation problem.
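To give a flavour of the track-before-detect machinery, here is a toy HMM forward-filter recursion over a one-dimensional pixel grid: predict the target's pixel position with a transition matrix, then update with a per-pixel likelihood of the new frame. The grid size, transition model and likelihood below are invented and are far simpler than the thesis' MHMM design.

```python
import numpy as np

# Toy sketch of the HMM forward-filter recursion used in track-before-detect:
# the hidden state is the target's pixel position, the measurement is a noisy
# image frame. Grid size, transition model and likelihoods are invented;
# this is not the thesis' filter design.

N = 64                                  # 1-D pixel grid for simplicity
A = np.zeros((N, N))                    # transition matrix: drift by 0 or +1 pixel
for i in range(N):
    A[i, i] = 0.5
    A[(i + 1) % N, i] = 0.5

def likelihood(frame, snr=2.0):
    """Likelihood ratio of 'target at pixel k' vs noise-only, assuming
    unit-variance Gaussian noise and a single-pixel target of amplitude snr."""
    return np.exp(snr * frame - 0.5 * snr**2)

rng = np.random.default_rng(42)
true_pos, belief = 10, np.full(N, 1.0 / N)
for t in range(30):
    frame = rng.normal(size=N)
    frame[true_pos] += 2.0                     # dim target embedded in noise
    belief = likelihood(frame) * (A @ belief)  # predict, then measurement update
    belief /= belief.sum()
    true_pos = (true_pos + rng.integers(0, 2)) % N

print("MAP pixel:", belief.argmax(), "true pixel:", true_pos)
```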
Abstract:
The typical daily decision-making process of individuals regarding the use of the transport system mainly involves three types of decisions: mode choice, departure time choice and route choice. This paper focuses on the mode and departure time choice processes and studies different model specifications for a combined mode and departure time choice model. The paper compares different sets of explanatory variables as well as different model structures to capture the correlation among alternatives and the taste variations among commuters. The main hypothesis tested in this paper is that departure time alternatives are also correlated by the amount of delay. Correlation among different alternatives is confirmed by analysing different nesting structures as well as error component formulations. Random coefficient logit models confirm the presence of random taste heterogeneity across commuters. Mixed nested logit models are estimated to jointly account for the random taste heterogeneity and the correlation among different alternatives. Results indicate that accounting for random taste heterogeneity as well as inter-alternative correlation improves model performance.
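The nesting idea the paper tests can be made concrete with the standard nested logit probability formula, in which departure-time alternatives sharing unobserved delay are grouped into nests. The utilities, nest structure and nesting coefficient below are all invented for illustration:

```python
import numpy as np

# Sketch of nested logit choice probabilities for a combined
# mode / departure-time choice. Utilities and the nesting coefficient
# are invented; nests group alternatives sharing unobserved delay.

V = {  # systematic utility of each (mode, departure window) alternative
    ("car", "peak"): -1.0, ("car", "off-peak"): -1.4,
    ("transit", "peak"): -1.2, ("transit", "off-peak"): -1.1,
}
nests = {"peak": [("car", "peak"), ("transit", "peak")],
         "off-peak": [("car", "off-peak"), ("transit", "off-peak")]}
mu = 0.5  # nesting coefficient (0 < mu <= 1); mu = 1 collapses to plain logit

def nested_logit_probs(V, nests, mu):
    # inclusive value (logsum) of each nest
    logsum = {m: np.log(sum(np.exp(V[a] / mu) for a in alts))
              for m, alts in nests.items()}
    denom = sum(np.exp(mu * ls) for ls in logsum.values())
    probs = {}
    for m, alts in nests.items():
        p_nest = np.exp(mu * logsum[m]) / denom        # P(nest)
        within = sum(np.exp(V[a] / mu) for a in alts)
        for a in alts:
            probs[a] = p_nest * np.exp(V[a] / mu) / within  # P(nest) * P(a|nest)
    return probs

for alt, p in nested_logit_probs(V, nests, mu).items():
    print(alt, f"{p:.3f}")
```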