139 results for Discrete Time Branching Processes
Abstract:
The Galilee and Eromanga basins are sub-basins of the Great Artesian Basin (GAB). In this study, a multivariate statistical approach (hierarchical cluster analysis, principal component analysis and factor analysis) is carried out to identify hydrochemical patterns and assess the processes that control hydrochemical evolution within key aquifers of the GAB in these basins. The results of the hydrochemical assessment are integrated into a previously developed 3D geological model to support the analysis of spatial patterns of hydrochemistry, and to identify the hydrochemical and hydrological processes that control hydrochemical variability. In this area of the GAB, the hydrochemical evolution of groundwater is dominated by evapotranspiration near the recharge area, resulting in a dominance of Na–Cl water types. This is shown conceptually using two selected cross-sections which represent discrete groundwater flow paths from the recharge areas to the deeper parts of the basins. With increasing distance from the recharge area, a shift towards carbonate dominance (e.g. the Na–HCO3 water type) is observed. The assessment of hydrochemical changes along groundwater flow paths highlights how aquifers are separated in some areas, and how mixing between groundwater from different aquifers occurs elsewhere, controlled by geological structures, including between GAB aquifers and coal-bearing strata of the Galilee Basin. The results of this study suggest that distinct hydrochemical differences can be observed within the previously defined Early Cretaceous–Jurassic aquifer sequence of the GAB. A revision of the two previously recognised hydrochemical sequences is proposed, resulting in three hydrochemical sequences based on systematic differences in hydrochemistry, salinity and dominant hydrochemical processes. The integrated approach presented in this study, which combines complementary multivariate statistical techniques with a detailed assessment of the geological framework of these sedimentary basins, can be adopted in other complex multi-aquifer systems to assess hydrochemical evolution and its geological controls.
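To make the multivariate workflow above concrete, here is a minimal sketch of hierarchical cluster analysis followed by PCA on major-ion data. The sample table is synthetically generated, and the ion list, transforms and cluster count are illustrative assumptions, not the study's actual dataset or settings.

```python
# Minimal sketch of the multivariate workflow: hierarchical clustering + PCA
# on major-ion concentrations. Data and parameters are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
ions = ["Na", "K", "Ca", "Mg", "Cl", "HCO3", "SO4"]            # major ions (mg/L)
X = rng.lognormal(mean=3.0, sigma=1.0, size=(120, len(ions)))  # synthetic samples

Xz = StandardScaler().fit_transform(np.log10(X))   # log-transform and standardise

# Hierarchical cluster analysis (Ward linkage) to group samples into water types
Z = linkage(Xz, method="ward")
clusters = fcluster(Z, t=4, criterion="maxclust")  # e.g. 4 hydrochemical clusters

# PCA to identify the dominant sources of hydrochemical variability
pca = PCA(n_components=3).fit(Xz)
print("explained variance ratios:", pca.explained_variance_ratio_)
print("cluster sizes:", np.bincount(clusters)[1:])
```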
Abstract:
Electrical resistivity of soils and sediments is strongly influenced by the presence of interstitial water. Taking advantage of this dependency, electrical-resistivity imaging (ERI) can be effectively utilized to estimate subsurface soil-moisture distributions. The ability to obtain spatially extensive data combined with time-lapse measurements provides further opportunities to understand links between land use and climate processes. In natural settings, spatial and temporal changes in temperature and porewater salinity influence the relationship between soil moisture and electrical resistivity. Apart from environmental factors, technical, theoretical, and methodological ambiguities may also interfere with accurate estimation of soil moisture from ERI data. We have examined several of these complicating factors using data from a two-year study at a forest-grassland ecotone, a boundary between neighboring but different plant communities. At this site, temperature variability accounts for approximately 20%-45% of resistivity changes from cold winter to warm summer months. Temporal changes in groundwater conductivity (mean = 650 μS/cm, σ = 57.7) and a roughly 100-μS/cm spatial difference between the forest and grassland had only a minor influence on the moisture estimates. Significant seasonal fluctuations in temperature and precipitation had negligible influence on the basic measurement errors in the data sets. Extracting accurate temporal changes from ERI can be hindered by nonuniqueness of the inversion process and uncertainties related to time-lapse inversion schemes. The accuracy of soil moisture obtained from ERI depends on all of these factors, in addition to the empirical parameters that define the petrophysical soil-moisture/resistivity relationship. Many of the complicating factors and modifying variables that affect accurate quantification of soil-moisture changes with ERI can be accounted for using field and theoretical principles.
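As an illustration of the petrophysical step mentioned above, the sketch below converts bulk resistivity to volumetric moisture using a standard roughly-2%-per-°C temperature compensation followed by an Archie-type relationship. All parameter values (porosity, exponents, the pore-water conductivity of 0.065 S/m, i.e. 650 μS/cm) are assumptions for illustration, not the study's calibrated values.

```python
# Illustrative sketch (not the authors' code) of estimating soil moisture from
# ERI resistivity: temperature compensation + Archie-type petrophysics.
import numpy as np

def resistivity_to_moisture(rho_bulk, temp_c, sigma_w=0.065,
                            porosity=0.40, m=1.5, n=2.0):
    """Return volumetric water content from bulk resistivity (ohm-m)."""
    sigma_bulk = 1.0 / rho_bulk
    # Correct bulk conductivity to a 25 degC reference (~2% change per degC)
    sigma_25 = sigma_bulk / (1.0 + 0.02 * (temp_c - 25.0))
    # Archie's law: sigma = sigma_w * porosity**m * saturation**n
    saturation = (sigma_25 / (sigma_w * porosity**m)) ** (1.0 / n)
    return porosity * np.clip(saturation, 0.0, 1.0)

print(resistivity_to_moisture(rho_bulk=150.0, temp_c=10.0))
```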
Abstract:
Increasing numbers of preclinical and clinical studies are utilizing pDNA (plasmid DNA) as the vector. In addition, there has been a growing trend towards larger and larger doses of pDNA utilized in human trials. The growing demand for pDNA manufacture creates pressure to make more in less time. A key intervention has been the use of monoliths as stationary phases in liquid chromatography. Monolithic stationary phases offer fast separation of pDNA owing to their large pore size, making pDNA in the size range from 100 nm to over 300 nm easily accessible. However, the convective transport mechanism of monoliths does not guarantee plasmid purity. The recovery of pure pDNA hinges on a proper balance in the properties of the adsorbent phase, the mobile phase and the feedstock. The effects of pH and ionic strength of the binding buffer, temperature of the feedstock, active group density and pore size of the stationary phase were considered as avenues to improve the recovery and purity of pDNA using a methacrylate-based monolithic adsorbent and Escherichia coli DH5α-pUC19 clarified lysate as feedstock. pDNA recovery was found to be critically dependent on the pH and ionic strength of the mobile phase. A maximum of approx. 92% recovery was obtained under optimum conditions of pH and ionic strength. Increasing the feedstock temperature to 80°C increased the purity of pDNA owing to the extra thermal stability of pDNA relative to contaminants such as proteins. Results from toxicological studies of the plasmid samples using an endotoxin standard (E. coli O55:B5 lipopolysaccharide) show that the endotoxin level decreases with increasing salt concentration. These results indicate that large quantities of pure pDNA can be obtained with minimal extra effort simply by optimizing process parameters and conditions for pDNA purification.
Abstract:
Non-use values (i.e. economic values assigned by individuals to ecosystem goods and services unrelated to current or future uses) provide one of the most compelling incentives for the preservation of ecosystems and biodiversity. Assessing the non-use values of non-users is relatively straightforward using stated preference methods, but the standard approaches for estimating the non-use values of users (stated decomposition) have substantial shortcomings which undermine the robustness of their results. In this paper, we propose a pragmatic interpretation of non-use values to derive estimates that capture their main dimensions, based on the identification of a willingness to pay for ecosystem protection beyond one's expected life. We empirically test our approach using a choice experiment conducted on coral reef ecosystem protection in two coastal areas in New Caledonia with different institutional, cultural, environmental and socio-economic contexts. We compute individual willingness-to-pay estimates, and derive individual non-use value estimates using our interpretation. We find that, at a minimum, estimates of non-use values may comprise between 25% and 40% of the mean willingness to pay for ecosystem preservation, which is less than has been found in most studies.
Abstract:
Magnetic resonance is a well-established tool for structural characterisation of porous media. Features of pore-space morphology can be inferred from NMR diffusion-diffraction plots or the time-dependence of the apparent diffusion coefficient. Diffusion NMR signal attenuation can be computed from the restricted diffusion propagator, which describes the distribution of diffusing particles for a given starting position and diffusion time. We present two techniques for efficient evaluation of restricted diffusion propagators for use in NMR porous-media characterisation. The first is the Lattice Path Count (LPC). Its physical essence is that the restricted diffusion propagator connecting points A and B in time t is proportional to the number of distinct length-t paths from A to B. By using a discrete lattice, the number of such paths can be counted exactly. The second technique is the Markov transition matrix (MTM). The matrix represents the probabilities of jumps between every pair of lattice nodes within a single timestep. The propagator for an arbitrary diffusion time can be calculated as the appropriate matrix power. For periodic geometries, the transition matrix needs to be defined only for a single unit cell. This makes MTM ideally suited for periodic systems. Both LPC and MTM are closely related to existing computational techniques: LPC, to combinatorial techniques; and MTM, to the Fokker-Planck master equation. The relationship between LPC, MTM and other computational techniques is briefly discussed in the paper. Both LPC and MTM perform favourably compared to Monte Carlo sampling, yielding highly accurate and almost noiseless restricted diffusion propagators. Initial tests indicate that their computational performance is comparable to that of finite element methods. Both LPC and MTM can be applied to complicated pore-space geometries with no analytic solution. We discuss the new methods in the context of diffusion propagator calculation in porous materials and model biological tissues.
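A minimal sketch of the MTM idea, assuming a simple 1D lattice between two reflecting walls (an illustrative geometry, not one taken from the paper): nearest-neighbour jump probabilities form a row-stochastic matrix, and the restricted diffusion propagator after n timesteps is the n-th matrix power.

```python
# Markov transition matrix (MTM) sketch: propagator via matrix power.
import numpy as np

N = 50                      # lattice nodes between two reflecting walls
P = np.zeros((N, N))
for i in range(N):
    P[i, i] = 0.5           # probability of staying put
    if i > 0:
        P[i, i - 1] = 0.25  # jump left
    else:
        P[i, i] += 0.25     # reflecting wall: failed jump stays in place
    if i < N - 1:
        P[i, i + 1] = 0.25  # jump right
    else:
        P[i, i] += 0.25

n_steps = 200               # diffusion time in units of the lattice timestep
propagator = np.linalg.matrix_power(P, n_steps)
# Row i of 'propagator' is the distribution after n_steps for a walker starting at node i.
print(propagator[0].sum())  # probability is conserved (= 1.0)
```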
Abstract:
This article aims to fill the gap in second-order accurate, unconditionally stable schemes for the time-fractional subdiffusion equation. Two fully discrete schemes are first proposed for the time-fractional subdiffusion equation, with space discretized by finite elements and time discretized by fractional linear multistep methods. These two methods are unconditionally stable with a maximum global convergence order of $O(\tau+h^{r+1})$ in the $L^2$ norm, where $\tau$ and $h$ are the step sizes in time and space, respectively, and $r$ is the degree of the piecewise polynomial space. The average convergence rates of the two methods in time are also investigated, which shows that the average convergence rates of the two methods are $O(\tau^{1.5}+h^{r+1})$. Furthermore, two improved algorithms are constructed; they are also unconditionally stable and convergent of order $O(\tau^2+h^{r+1})$. Numerical examples are provided to verify the theoretical analysis. Comparisons between the present algorithms and existing ones are included, which show that our numerical algorithms exhibit better performance than the known ones.
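For orientation, one standard way to realise such schemes is Lubich-type convolution quadrature; the sketch below assumes a Riemann–Liouville formulation and BDF2-generated weights, which is a common second-order choice but not necessarily the exact construction used in the article:
$$\partial_t u = {}_0D_t^{1-\alpha}\,\Delta u + f, \qquad {}_0D_t^{\beta} v(t_n) \approx \tau^{-\beta}\sum_{j=0}^{n}\omega_j^{(\beta)}\, v(t_{n-j}), \qquad \sum_{j\ge 0}\omega_j^{(\beta)}\zeta^{\,j}=\Bigl(\tfrac{3}{2}-2\zeta+\tfrac{1}{2}\zeta^{2}\Bigr)^{\beta},$$
with $0<\alpha<1$, $\beta=1-\alpha$, and time step $\tau$.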
Abstract:
Automatic Vehicle Identification (AVI) systems are increasingly being used as a new source of travel information. Because in past decades these systems relied on expensive new technologies, only a few detectors were scattered across a network, making travel-time and average-speed estimation their main objectives. However, as their price dropped, the opportunity to build dense AVI networks arose, as in Brisbane, where more than 250 Bluetooth detectors are now installed. As a consequence, this technology represents an effective means to acquire accurate time-dependent origin-destination information. In order to obtain reliable estimations, however, a number of issues need to be addressed. Some of these problems stem from the structure of a network of isolated detectors itself, while others are inherent to Bluetooth technology (overlapping detection areas, missing detections, etc.). The aim of this paper is threefold. First, after having presented the level of detail that can be reached with a network of isolated detectors, we present how we modelled Brisbane's network, keeping only the information valuable for the retrieval of trip information. Second, we give an overview of the issues inherent to Bluetooth technology and propose a method for retrieving the itineraries of individual Bluetooth-equipped vehicles. Last, through a comparison with Brisbane Transport Strategic Model results, we highlight the opportunities and the limits of Bluetooth detector networks. We also give a comprehensive overview of the aforementioned issues and propose a methodology that can be followed in order to cleanse, correct and aggregate Bluetooth data. We postulate that the methods introduced in this paper are the first crucial steps that need to be followed in order to compute accurate origin-destination matrices in urban road networks.
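As a hedged illustration (not the paper's actual method) of one basic step in turning Bluetooth/AVI detections into trip itineraries, the sketch below groups detections by anonymised device ID, sorts them by time, and splits each sequence into separate trips wherever the gap between consecutive detections exceeds a threshold; the field names and the 30-minute threshold are assumptions.

```python
# Splitting raw detection records into trips per device (illustrative only).
from collections import defaultdict
from datetime import datetime, timedelta

detections = [  # (device_id, detector_id, timestamp) - hypothetical records
    ("dev1", "D12", datetime(2013, 5, 1, 8, 0)),
    ("dev1", "D18", datetime(2013, 5, 1, 8, 7)),
    ("dev1", "D05", datetime(2013, 5, 1, 17, 30)),  # later detection: a second trip
]

def split_into_trips(records, max_gap=timedelta(minutes=30)):
    """Return a dict: device_id -> list of trips (each trip is a list of detections)."""
    by_device = defaultdict(list)
    for dev, det, ts in records:
        by_device[dev].append((ts, det))
    trips = defaultdict(list)
    for dev, seq in by_device.items():
        seq.sort()
        current = [seq[0]]
        for prev, cur in zip(seq, seq[1:]):
            if cur[0] - prev[0] > max_gap:   # long gap => start a new trip
                trips[dev].append(current)
                current = []
            current.append(cur)
        trips[dev].append(current)
    return dict(trips)

print(split_into_trips(detections))
```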
Abstract:
The neural basis of Pavlovian fear conditioning is well understood and depends upon neural processes within the amygdala. Stress is known to play a role in the modulation of fear-related behavior, including Pavlovian fear conditioning. Chronic restraint stress has been shown to enhance fear conditioning to discrete and contextual stimuli; however, the time course and extent of restraint that are essential for this modulation of fear learning remain unclear. Thus, we tested the extent to which a single exposure to 1 hr of restraint would alter subsequent auditory fear conditioning in rats.
Abstract:
Passenger flow simulations are an important tool for designing and managing airports. This thesis examines different boarding strategies for the Boeing 777 and Airbus A380 aircraft in order to investigate their current performance and to determine minimum boarding times. The best-performing existing strategies were identified, and new, more efficient strategies are proposed. The methods presented reduce aircraft boarding times, which plays an important role in reducing the overall aircraft turn time for an airline.
Abstract:
In this paper we present a new method for performing Bayesian parameter inference and model choice for low-count time series models with intractable likelihoods. The method involves incorporating an alive particle filter within a sequential Monte Carlo (SMC) algorithm to create a novel pseudo-marginal algorithm, which we refer to as alive SMC^2. The advantages of this approach over competing approaches are that it is naturally adaptive, it does not involve the between-model proposals required in reversible jump Markov chain Monte Carlo, and it does not rely on potentially rough approximations. The algorithm is demonstrated on Markov process and integer autoregressive moving average models applied to real biological datasets of hospital-acquired pathogen incidence, animal health time series and the cumulative number of prion disease cases in mule deer.
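As a hedged sketch of the alive particle filter idea that sits inside such an algorithm: for a discrete-state model observed without error, particles are proposed until N+1 of them reproduce the observed count exactly, and the likelihood increment is estimated from the number of proposals needed (the standard negative-binomial-type estimator). The toy count model, parameter values and estimator details below are illustrative assumptions, not the models or implementation analysed in the paper.

```python
# Hedged sketch of an alive particle filter log-likelihood estimate for
# exactly observed low-count time series (toy birth-death count model).
import numpy as np

rng = np.random.default_rng(1)

def propagate(x, theta):
    """Toy transition: binomial survival plus Poisson arrivals."""
    return rng.binomial(x, theta["p_survive"]) + rng.poisson(theta["arrival_rate"])

def alive_filter_loglik(y, theta, N=200, max_trials=100_000):
    """Estimate the log-likelihood by keeping only particles that match y_t."""
    particles = np.full(N, y[0])          # condition on the first observation
    loglik = 0.0
    for obs in y[1:]:
        alive, trials = [], 0
        while len(alive) < N + 1:         # sample until N+1 'alive' particles
            trials += 1
            if trials > max_trials:       # give up: effectively zero likelihood
                return -np.inf
            x = propagate(rng.choice(particles), theta)
            if x == obs:                  # particle is 'alive' if it matches y_t
                alive.append(x)
        loglik += np.log(N / (trials - 1))  # negative-binomial-type estimator
        particles = np.array(alive[:N])
    return loglik

y = np.array([3, 2, 4, 3, 5, 4, 4, 6])    # hypothetical low-count series
print(alive_filter_loglik(y, {"p_survive": 0.7, "arrival_rate": 1.5}))
```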
Abstract:
Purpose of this paper: This research aims to examine the effects of inadequate documentation on the cost management and tendering processes in Managing Contractor contracts, using Fixed Lump Sum contracts as a benchmark. Design/methodology/approach: A questionnaire survey was conducted with industry practitioners to solicit their views on documentation quality issues associated with the construction industry. This was followed by a series of semi-structured interviews with the purpose of validating the survey findings. Findings and value: The results showed that documentation quality remains a significant issue, contributing to the industry's inefficiency and poor reputation. The level of satisfaction with individual attributes of documentation quality varies. Attributes that do appear to be affected by the choice of procurement method include coordination, buildability, efficiency, completeness and delivery time. Similarly, the use and effectiveness of risk mitigation techniques appears to vary between the methods, based on a number of factors such as documentation completeness, early involvement and fast tracking. Originality/value of paper: This research addresses a gap in the existing body of knowledge, where there have been limited studies on whether the choice of project procurement system influences documentation quality and the level of its impact. Conclusions: Ultimately, the research concludes that the entire project team, including the client and designers, should carefully consider the individual project's requirements and compare them with the trade-offs associated with documentation quality and the procurement method. While documentation quality is definitely an issue to be improved upon, by identifying the project's performance requirements a procurement method can be chosen to maximise the likelihood that those requirements will be met. This allows the aspects of documentation quality considered most important to the individual project to be managed appropriately.
Abstract:
Objective: To prospectively test two simplified peer review processes, estimate the agreement between the simplified and official processes, and compare the costs of peer review. Design, participants and setting: A prospective parallel study of Project Grant proposals submitted in 2013 to the National Health and Medical Research Council (NHMRC) of Australia. The official funding outcomes were compared with two simplified processes using proposals in Public Health and Basic Science. The two simplified processes were: panels of 7 reviewers who met face-to-face and reviewed only the nine-page research proposal and track record (simplified panel); and 2 reviewers who independently reviewed only the nine-page research proposal (journal panel). The official process used panels of 12 reviewers who met face-to-face and reviewed longer proposals of around 100 pages. We compared the funding outcomes of 72 proposals that were peer reviewed by the simplified and official processes. Main outcome measures: Agreement in funding outcomes; costs of peer review based on reviewers’ time and travel costs. Results: The agreement between the simplified and official panels (72%, 95% CI 61% to 82%), and the journal and official panels (74%, 62% to 83%), was just below the acceptable threshold of 75%. Using the simplified processes would save $A2.1–$A4.9 million per year in peer review costs. Conclusions: Using shorter applications and simpler peer review processes gave reasonable agreement with the more complex official process. Simplified processes save time and money that could be reallocated to actual research. Funding agencies should consider streamlining their application processes.
Abstract:
In the past few years, several business process compliance frameworks based on temporal logic have been proposed. In this paper we investigate whether the use of temporal logic is suitable for the task at hand: namely, to check whether the specifications of a business process are compatible with the formalisation of the norms regulating the business process. We provide an example, inspired by real-life norms, where the use of linear temporal logic produces a result that is not compatible with the legal understanding of the norms in the example.
Abstract:
This study tested the utility of a stress and coping model of employee adjustment to a merger. Two hundred and twenty employees completed both questionnaires (Time 1: 3 months after merger implementation; Time 2: 2 years later). Structural equation modeling analyses revealed that positive event characteristics predicted greater appraisals of self-efficacy and less stress at Time 1. Self-efficacy, in turn, predicted greater use of problem-focused coping at Time 2, whereas stress predicted a greater use of problem-focused and avoidance coping. Finally, problem-focused coping predicted higher levels of job satisfaction and identification with the merged organization (Time 2), whereas avoidance coping predicted lower identification.
Abstract:
In this work, we consider subordinated processes controlled by a family of subordinators which consist of a power function of a time variable and a negative power function of an α-stable random variable. The effect of the subordinator parameters on the subordinated process is discussed. By suitable variable substitutions and the Laplace transform technique, the corresponding fractional Fokker–Planck-type equations are derived. We also compute their mean square displacements in a force-free field. By choosing suitable ranges of parameters, the resulting subordinated processes may be subdiffusive, normally diffusive or superdiffusive.
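As a reading aid, the classification referred to in the last sentence follows the usual mean-square-displacement convention; the precise dependence of the exponent on the subordinator parameters is derived in the paper:
$$\langle x^{2}(t)\rangle \propto t^{\gamma}, \qquad \gamma<1 \;(\text{subdiffusion}), \quad \gamma=1 \;(\text{normal diffusion}), \quad \gamma>1 \;(\text{superdiffusion}).$$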