955 results for Spot
Abstract:
In this article, we analyse bifurcations from stationary stable spots to travelling spots in a planar three-component FitzHugh-Nagumo system that was proposed previously as a phenomenological model of gas-discharge systems. By combining formal analyses, center-manifold reductions, and detailed numerical continuation studies, we show that, in the parameter regime under consideration, the stationary spot destabilizes either through its zeroth Fourier mode in a Hopf bifurcation or through its first Fourier mode in a pitchfork or drift bifurcation, whilst the remaining Fourier modes appear to create only secondary bifurcations. Pitchfork bifurcations result in travelling spots, and we derive criteria for the criticality of these bifurcations. Our main finding is that supercritical drift bifurcations, leading to stable travelling spots, arise in this model, which does not seem possible for its two-component version.
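For reference, a commonly cited form of the planar three-component FitzHugh-Nagumo model from the gas-discharge literature is sketched below; the scalings and parameter names (ε, τ, θ, D, α, β, γ) are assumptions and may differ from the conventions used in the article.

    % A commonly cited three-component FitzHugh--Nagumo model in the plane;
    % scalings and parameter names are assumptions, not the article's notation.
    \begin{align*}
      U_t &= \varepsilon^{2}\,\Delta U + U - U^{3} - \varepsilon\,(\alpha V + \beta W + \gamma),\\
      \tau\, V_t &= \Delta V + U - V,\\
      \theta\, W_t &= D^{2}\,\Delta W + U - W,
    \end{align*}
    % where U is the activator and V, W are inhibitors with time constants
    % tau and theta; the stationary spot is a radially symmetric steady state.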
Abstract:
Hot spot identification (HSID) aims to identify potential sites—roadway segments, intersections, crosswalks, interchanges, ramps, etc.—with disproportionately high crash risk relative to similar sites. An inefficient HSID methodology might result in either identifying a safe site as high risk (false positive) or a high-risk site as safe (false negative), and consequently lead to the misuse of available public funds, poor investment decisions, and inefficient risk management practice. Current HSID methods suffer from issues such as underreporting of minor injury and property damage only (PDO) crashes, challenges in incorporating crash severity into the methodology, and selection of a proper safety performance function to model crash data that is often heavily skewed by a preponderance of zeros. Addressing these challenges, this paper proposes combining a PDO equivalency calculation with quantile regression to identify hot spots in a transportation network. In particular, issues related to underreporting and crash severity are tackled by incorporating equivalent PDO crashes, whilst concerns about the non-count nature of equivalent PDO crashes and the skewness of crash data are addressed by the non-parametric quantile regression technique. The proposed method identifies covariate effects on various quantiles of a population, rather than the population mean as in most current methods, which corresponds more closely with how black spots are identified in practice. The proposed methodology is illustrated using rural road segment data from Korea and compared against the traditional EB method with negative binomial regression. Application of a quantile regression model to equivalent PDO crashes enables identification of a set of high-risk sites that reflects the true safety cost to society, simultaneously reduces the influence of under-reported PDO and minor injury crashes, and overcomes the limitation of the traditional NB model in dealing with a preponderance of zeros and right-skewed data.
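As a rough illustration of the modelling step described above (not the authors' code), the sketch below fits a quantile regression of equivalent PDO crashes on segment covariates using statsmodels and flags segments exceeding their fitted upper quantile; the input file, column names (epdo, aadt, length) and the 90th-percentile threshold are assumptions.

    # Hedged sketch: quantile regression on equivalent PDO (EPDO) crashes for
    # hot spot identification. File, column names and quantile are assumptions.
    import pandas as pd
    import statsmodels.formula.api as smf

    segments = pd.read_csv("rural_segments.csv")   # hypothetical segment data

    # Fit the conditional 90th percentile of EPDO crashes given exposure covariates.
    fit90 = smf.quantreg("epdo ~ aadt + length", data=segments).fit(q=0.90)

    # Flag segments whose observed EPDO exceeds the fitted 90th percentile for
    # sites with similar covariates.
    segments["q90_pred"] = fit90.predict(segments)
    hot_spots = segments[segments["epdo"] > segments["q90_pred"]]
    print(fit90.summary())
    print(hot_spots[["epdo", "q90_pred"]].head())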
Abstract:
To obtain accurate Monte Carlo simulations of small radiation fields, it is important to model the initial source parameters (electron energy and spot size) accurately. However, recent studies have shown that small field dosimetry correction factors are insensitive to these parameters. The aim of this work is to extend this concept and test whether these parameters affect dose perturbations in general, which is important for detector design and for calculating perturbation correction factors. The EGSnrc C++ user code cavity was used for all simulations. Varying amounts of air between 0 and 2 mm were deliberately introduced upstream of a diode and the dose perturbation caused by the air was quantified. These simulations were then repeated using a range of initial electron energies (5.5 to 7.0 MeV) and electron spot sizes (0.7 to 2.2 FWHM). The resultant dose perturbations were large: for example, 2 mm of air caused a dose reduction of up to 31% when simulated with a 6 mm field size. However, these values did not vary by more than 2% when simulated across the full range of source parameters tested. If a detector is modified by the introduction of air, one can therefore be confident that the response of the detector will be the same across all similar linear accelerators, and Monte Carlo modelling of each individual machine is not required.
Abstract:
Used frequently in food contact materials, bisphenol A (BPA) has been studied extensively in recent years, and ubiquitous exposure in the general population has been demonstrated worldwide. Characterising within- and between-individual variability of BPA concentrations is important for characterising exposure in biomonitoring studies; this has been investigated previously in adults, but not in children. The aim of this study was to characterise the short-term variability of BPA in spot urine samples in young children. Children aged ≥2 to <4 years (n = 25) were recruited from an existing cohort in Queensland, Australia, and each donated four spot urine samples over a two-day period. Samples were analysed for total BPA using isotope dilution online solid phase extraction-liquid chromatography-tandem mass spectrometry, and concentrations ranged from 0.53 to 74.5 ng/ml, with geometric mean and standard deviation of 2.70 ng/ml and 2.94 ng/ml, respectively. Sex and time of sample collection were not significant predictors of BPA concentration. The between-individual variability was approximately equal to the within-individual variability (ICC = 0.51), and this ICC is somewhat higher than previously reported literature values. This may be the result of physiological or behavioural differences between children and adults, or of the relatively short exposure window assessed. Using a bootstrapping methodology, a single sample resulted in correct tertile classification approximately 70% of the time. This study suggests that single spot samples obtained from young children provide a reliable characterisation of absolute and relative exposure over the short time window studied, but this may not hold true over longer timeframes.
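A minimal sketch of the two reliability calculations described above (not the study's code): a one-way random-effects ICC from repeated spot samples, and a bootstrap estimate of how often a single sample places a child in the correct exposure tertile. The data layout (columns child, log_bpa), input file and number of resamples are assumptions.

    # Hedged sketch: one-way random-effects ICC and bootstrap tertile
    # classification for repeated spot-urine BPA measurements.
    import numpy as np
    import pandas as pd

    df = pd.read_csv("bpa_spot_samples.csv")   # hypothetical: columns child, log_bpa
    groups = df.groupby("child")["log_bpa"]
    n_child = groups.ngroups
    k = groups.size().mean()                    # average samples per child (~4)
    grand = df["log_bpa"].mean()

    # One-way ANOVA mean squares and the resulting ICC.
    ss_between = (groups.mean().sub(grand).pow(2) * groups.size()).sum()
    ss_within = groups.apply(lambda g: ((g - g.mean()) ** 2).sum()).sum()
    ms_between = ss_between / (n_child - 1)
    ms_within = ss_within / (len(df) - n_child)
    icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    print(f"ICC = {icc:.2f}")

    # Bootstrap: probability that one randomly drawn sample lands in the same
    # tertile as the child's mean concentration.
    rng = np.random.default_rng(0)
    true_tertile = pd.qcut(groups.mean(), 3, labels=False)
    hits = []
    for _ in range(2000):
        single = groups.apply(lambda g: rng.choice(g.values))
        hits.append((pd.qcut(single, 3, labels=False) == true_tertile).mean())
    print(f"Correct tertile classification ~ {np.mean(hits):.0%}")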
Abstract:
This article evaluates trends in the imagery built into GIS applications to supplement existing vector data on streets, boundaries, infrastructure and utilities. These include large-area digital orthophotos and Landsat and SPOT data. Future developments include 3 to 5 metre pixel resolutions from satellites and 1 to 2 metres from aircraft. GPS and improved image analysis techniques will also assist in improving resolution and accuracy.
Abstract:
An in vivo screen has been devised for NF-κB p50 activity in Escherichia coli, exploiting the ability of the mammalian transcription factor to emulate a prokaryotic repressor. Active intracellular p50 was shown to repress the expression of a green fluorescent protein reporter gene, allowing visual screening of colonies expressing active p50 on agar plates. A library of mutants was constructed in which the dimer-interface residues Y267, L269, A308 and V310 were simultaneously randomised, and twenty-five novel functional interfaces were selected that repressed the reporter gene to levels similar to the wild-type protein. The leucine-269/alanine-308 core was repeatedly, but not exclusively, selected from the library, whilst a diversity of predominantly non-polar residues was selected at positions 267 and 310. These results indicate that L269 and A308 may form a hot spot of interaction and give an insight into the processes of dimer selectivity and evolution within this family of transcription factors.
Abstract:
Introduction: The consistency of measuring small field output factors is greatly increased by reporting the measured dosimetric field size of each factor, as opposed to simply stating the nominal field size [1]; this, however, requires the measurement of cross-axis profiles in a water tank, which makes output factor measurements time consuming. This project determines at which field size the accuracy of output factors is no longer affected by the use of potentially inaccurate nominal field sizes, which we believe establishes a practical working definition of a ‘small’ field. The physical components of the radiation beam that contribute to the rapid change in output factor at small field sizes are examined in detail. The physical interaction that dominates the rapid dose reduction is quantified, leading to a theoretical definition of a ‘small’ field.

Methods: Current recommendations suggest that radiation collimation systems and isocentre-defining lasers should both be calibrated to permit a maximum positioning uncertainty of 1 mm [2]. The proposed practical definition for small field sizes is as follows: if the output factor changes by ±1.0% given a change in either field size or detector position of up to ±1 mm, then the field should be considered small. Monte Carlo modelling was used to simulate output factors of a 6 MV photon beam for square fields with side lengths from 4.0 to 20.0 mm in 1.0 mm increments. The dose was scored in a 0.5 mm wide, 2.0 mm deep cylindrical volume of water within a cubic water phantom, at a depth of 5 cm and an SSD of 95 cm. The maximum difference due to a collimator error of ±1 mm was found by comparing the output factors of adjacent field sizes. The output factor simulations were repeated 1 mm off-axis to quantify the effect of detector misalignment. Further simulations separated the total output factor into a collimator scatter factor and a phantom scatter factor. The collimator scatter factor was further separated into primary source occlusion effects and ‘traditional’ effects (a combination of flattening filter and jaw scatter, etc.). The phantom scatter was separated into photon scatter and electronic disequilibrium. Each of these factors was plotted as a function of field size in order to quantify how each contributed to the change in output factor at small field sizes.

Results: The use of our practical definition resulted in field sizes of 15 mm or less being characterised as ‘small’. The change in field size had a greater effect than detector misalignment. For field sizes of 12 mm or less, electronic disequilibrium was found to cause the largest change in dose to the central axis (d = 5 cm). Source occlusion also caused a large change in output factor for field sizes less than 8 mm.

Discussion and conclusions: The measurement of cross-axis profiles is only required for output factor measurements at field sizes of 15 mm or less (for a 6 MV beam on a Varian iX linear accelerator). This is expected to depend on linear accelerator spot size and photon energy. While some electronic disequilibrium was shown to occur at field sizes as large as 30 mm (the ‘traditional’ definition of a small field [3]), it was shown not to cause a greater change than photon scatter until the field size decreases to 12 mm, at which point it becomes by far the most dominant effect.
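As a rough illustration of how the proposed ±1 mm / ±1.0% criterion can be applied to tabulated output factors (not the authors' code), the sketch below compares adjacent 1 mm field sizes and reports the largest field size that still fails the tolerance; the output-factor values are a synthetic placeholder curve, not the simulated data.

    # Hedged sketch: applying the proposed practical small-field criterion.
    # A field is "small" if a +/-1 mm change in field size shifts the output
    # factor by more than 1.0%. The output factors below are placeholders.
    import numpy as np

    field_size_mm = np.arange(4.0, 21.0, 1.0)     # square side lengths, 1 mm steps
    # Placeholder output factors (illustrative smooth curve, NOT simulated data).
    output_factor = 0.95 - 0.35 * np.exp(-(field_size_mm - 3.0) / 4.0)

    # Percentage change between adjacent field sizes (a 1 mm collimator error).
    pct_change = 100.0 * np.abs(np.diff(output_factor)) / output_factor[:-1]

    # Largest field size at which a 1 mm error still exceeds the 1.0% tolerance.
    small = field_size_mm[:-1][pct_change > 1.0]
    print(f"Fields of {small.max():.0f} mm or less are 'small' under this criterion")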
Abstract:
Commodity price modeling is normally approached in terms of structural time-series models, in which the different components (states) have a financial interpretation. The parameters of these models can be estimated using maximum likelihood. This approach results in a non-linear parameter estimation problem, and thus a key issue is how to obtain reliable initial estimates. In this paper, we focus on the initial parameter estimation problem for the Schwartz-Smith two-factor model commonly used in asset valuation. We propose the use of a two-step method. The first step considers a univariate model based only on the spot price and uses a transfer function model to obtain initial estimates of the fundamental parameters. The second step uses the estimates obtained in the first step to initialize a re-parameterized state-space innovations-based estimator, which includes information related to futures prices. The second step refines the estimates obtained in the first step and also gives estimates of the remaining parameters in the model. This paper is part tutorial in nature and gives an introduction to aspects of commodity price modeling and the associated parameter estimation problem.
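For context, the standard Schwartz-Smith two-factor specification referred to above is sketched below in its usual notation (Schwartz & Smith, 2000); the paper's re-parameterization may differ.

    % Standard Schwartz--Smith two-factor model: chi_t is the short-term
    % mean-reverting deviation, xi_t the long-term equilibrium level.
    \begin{align*}
      \ln S_t &= \chi_t + \xi_t,\\
      d\chi_t &= -\kappa\,\chi_t\,dt + \sigma_\chi\,dZ_\chi,\\
      d\xi_t  &= \mu_\xi\,dt + \sigma_\xi\,dZ_\xi,
      \qquad dZ_\chi\,dZ_\xi = \rho_{\chi\xi}\,dt .
    \end{align*}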
Abstract:
Plasmonic gold nano-assemblies that self-assemble with the aid of linking molecules or polymers have the potential to yield controlled hierarchies of morphologies and consequently result in materials with tailored optical properties (e.g. localized surface plasmon resonances (LSPR)) and spectroscopic properties (e.g. surface-enhanced Raman scattering (SERS)). Molecular linkers that are structurally well defined are promising for forming hybrid nano-assemblies which are stable in aqueous solution and are increasingly finding application in nanomedicine. Despite much ongoing research in this field, the precise role of molecular linkers in governing the morphology and properties of the hybrid nano-assemblies remains unclear. Previously we demonstrated that branched linkers, such as hyperbranched polymers with specific anchoring end groups, can be successfully employed to form assemblies of gold nanoparticles (NPs) exhibiting near-infrared SPRs and intense SERS. We herein introduce a tailored polymer as a versatile molecular linker capable of manipulating nano-assembly morphology and hot-spot density. In addition, this report explores the role of the polymeric linker architecture, specifically the degree of branching of the tailored polymer, in determining the formation, morphology and properties of the hybrid nano-assemblies. The degree of branching of the linker polymer, in addition to the concentration and number of anchoring groups, is observed to strongly influence the self-assembly process. With increasing degree of branching, the assembly morphology shifts from 1D-like chains to 2D plates and finally to 3D-like globular structures. Insights have been gained into how the morphology influences the SERS performance of these nano-assemblies with respect to hot-spot density. These findings add to the understanding of how morphology is determined during nano-assembly formation and pave the way for the possible application of these nano-assemblies as SERS biosensors for medical diagnostics.
Abstract:
Web servers are accessible by anyone who can access the Internet. Although this universal accessibility is attractive for all kinds of Web-based applications, Web servers are exposed to attackers who may want to alter their contents. Alterations range from humorous additions or changes, which are typically easy to spot, to more sinister tampering, such as providing false or damaging information.
Abstract:
In the electricity market environment, load-serving entities (LSEs) inevitably face risks in purchasing electricity because a plethora of uncertainties is involved. To maximize profits and minimize risks, LSEs need to develop an optimal strategy for reasonably allocating the purchased electricity amount among different electricity markets, such as the spot market, bilateral contract market, and options market. Because risks originate from uncertainties, an approach is presented that addresses the risk evaluation problem by the combined use of the lower partial moment and information entropy (LPME). The lower partial moment is used to measure the amount and probability of the loss, whereas the information entropy is used to represent the uncertainty of the loss. Electricity purchasing is a repeated procedure; therefore, the model presented represents a dynamic strategy. Under the chance-constrained programming framework, the developed optimization model minimizes the risk of the electricity purchasing portfolio across the different markets, subject to the requirement that the actual profit of the LSE concerned is not less than the specified target at a required confidence level. The particle swarm optimization (PSO) algorithm is then employed to solve the optimization model. Finally, a numerical example is used to illustrate the basic features of the developed model and method.
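For reference, the two risk components named above are sketched below in their standard forms; the paper's exact LPME combination and weighting are not reproduced here. The lower partial moment of order n of the profit R about a target τ, and the Shannon entropy of the loss distribution, are:

    % Standard definitions of the two risk components; the specific LPME
    % combination used in the paper is not reproduced here.
    \begin{align*}
      \mathrm{LPM}_n(\tau; R) &= \mathbb{E}\!\left[\max(\tau - R,\,0)^{\,n}\right]
        = \sum_{i:\,r_i<\tau} p_i\,(\tau - r_i)^{n},\\
      H &= -\sum_{i} p_i \ln p_i ,
    \end{align*}
    % where r_i are scenario profits with probabilities p_i, tau is the target
    % profit, and n is the order of the lower partial moment.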
Abstract:
A microplasma generated between a stainless-steel capillary and a water surface in ambient air, with flowing argon as the working gas, appears as a bright spot at the tube orifice and expands to form a larger footprint on the water surface; the dimensions of the bell-shaped microplasma are all below 1 mm. The electron density of the microplasma is estimated to range from 5.32 × 10⁹ cm⁻³ to 2.02 × 10¹⁴ cm⁻³ over the different operating conditions, which is desirable for generating abundant reactive species. A computational technique is adopted to fit the experimental emission from the N2 second positive system with simulation results. It is concluded that the vibrational temperature (more than 2000 K) is more than twice the gas temperature (more than 800 K), which indicates the non-equilibrium state of the microplasma. Both temperatures showed a dependence on the discharge parameters (i.e., gas flow and discharge current). Such a plasma device could be arranged in arrays for applications utilizing plasma-induced liquid chemistry.
Abstract:
Aim: A new method of penumbral analysis is implemented which allows an unambiguous determination of field size, penumbra size and penumbra quality for small fields and other non-standard fields. Both source occlusion and lateral electronic disequilibrium affect the size and shape of cross-axis profile penumbrae; each is examined in detail.

Method: In the new method, the square of the derivative of the cross-axis profile is plotted. The resultant graph displays two peaks in place of the two penumbrae. This gives a clear visualisation of the quality of a field penumbra, as well as a mathematically consistent method of determining field size (the distance between the two peaks’ maxima) and penumbra (the full-width-tenth-maximum of each peak). Cross-axis profiles were simulated in a water phantom at a depth of 5 cm using Monte Carlo modelling, for field sizes between 5 and 30 mm. The field size and penumbra size of each field were calculated using the method above, as well as the traditional definitions set out in IEC976. The effects of source occlusion and lateral electronic disequilibrium on the penumbrae were isolated by repeating the simulations with an electron spot size of 0 mm and with electron transport removed, respectively.

Results: All field sizes calculated using the traditional and proposed methods agreed within 0.2 mm. The penumbra size measured using the proposed method was systematically 1.8 mm larger than with the traditional method at all field sizes. The size of the source had a larger effect on the size of the penumbra than did lateral electronic disequilibrium, particularly at very small field sizes.

Conclusion: Traditional methods of calculating field size and penumbra proved to be mathematically adequate for small fields. However, the field size definition proposed in this study would be more robust for other non-standard fields, such as flattening-filter-free beams. Source occlusion plays a bigger role than lateral electronic disequilibrium in determining small field penumbra size.
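A minimal sketch of the proposed analysis (not the authors' implementation): square the profile derivative, locate the two peaks, take the distance between the peak maxima as the field size and the full-width-tenth-maximum of each peak as the penumbra. The profile data, grid spacing and the simple peak-width routine are assumptions.

    # Hedged sketch of the derivative-squared penumbra analysis described above.
    # 'position_mm' and 'dose' are an assumed cross-axis profile.
    import numpy as np

    def analyse_profile(position_mm, dose):
        g = np.gradient(dose, position_mm) ** 2       # squared derivative of profile
        mid = len(g) // 2                             # assume profile centred on axis
        i_left = np.argmax(g[:mid])                   # maximum of the left peak
        i_right = mid + np.argmax(g[mid:])            # maximum of the right peak

        field_size = position_mm[i_right] - position_mm[i_left]

        def fwtm(idx):
            # Full-width-tenth-maximum of the peak containing index idx,
            # found by walking outward until the curve drops below 10% of the peak.
            tenth = 0.1 * g[idx]
            left, right = idx, idx
            while left > 0 and g[left - 1] >= tenth:
                left -= 1
            while right < len(g) - 1 and g[right + 1] >= tenth:
                right += 1
            return position_mm[right] - position_mm[left]

        return field_size, (fwtm(i_left), fwtm(i_right))

    # Example on a synthetic near-step profile (~10 mm wide with sigmoid edges).
    x = np.linspace(-20, 20, 801)
    d = 1.0 / (1.0 + np.exp(-(x + 5))) - 1.0 / (1.0 + np.exp(-(x - 5)))
    print(analyse_profile(x, d))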
Abstract:
Urban areas are growing unsustainably around the world; however, the growth patterns and their associated drivers vary between contexts. As a result, research has highlighted the need to adopt case-study-based approaches to stimulate the development of new theoretical understandings. Using land-cover data sets derived from Landsat images (30 m × 30 m), this research identifies both patterns and drivers of urban growth over a period (1991-2001) when a number of policy acts aimed at fostering smart growth were enacted in Brisbane, Australia. A linear multiple regression model was estimated, using the proportion of land within a suburb converted from non-built-up (1991) to built-up use (2001) as the dependent variable, to identify significant drivers of land-cover change. In addition, a hot spot analysis was conducted to identify any spatial biases in land-cover change. Results show that the built-up area increased by 1.34% every year. About 19.56% of the non-built-up land in 1991 was converted to built-up land by 2001. This conversion pattern was significantly biased towards the northernmost and southernmost suburbs of the city. As evident from the regression analysis, this is because these suburbs experienced a higher rate of population growth and had habitable greenfield sites available on relatively flat land. These findings suggest that the policy interventions undertaken during the period were not as effective in promoting sustainable changes in the environment as intended.
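A minimal sketch of the regression step described above (not the study's model): ordinary least squares on suburb-level land-conversion proportions using statsmodels. The input file and covariate names (pop_growth, greenfield_share, mean_slope) are assumptions, and the hot spot analysis itself (commonly a Getis-Ord Gi* statistic) is not reproduced here.

    # Hedged sketch: suburb-level drivers of land-cover conversion, 1991-2001.
    # File and column names are assumptions, not the study's variables.
    import pandas as pd
    import statsmodels.formula.api as smf

    suburbs = pd.read_csv("brisbane_suburbs.csv")   # hypothetical suburb data

    # Dependent variable: proportion of 1991 non-built-up land converted to
    # built-up use by 2001; covariates are illustrative drivers.
    ols = smf.ols("converted_share ~ pop_growth + greenfield_share + mean_slope",
                  data=suburbs).fit()
    print(ols.summary())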
Abstract:
In the OHS field, increasing use is being made of administrative penalties to enforce OHS legislation. Infringement notices (also known as penalty notices or on-the-spot fines) are used in several Australian jurisdictions, and there are plans to introduce them in others. Overseas jurisdictions with some form of OHS administrative penalty include the United States and some Canadian provinces, and a scheme has recently been enacted in New Zealand. This article reviews empirical evidence and legal arguments about the use of infringement notices for enforcing OHS legislation. Key factors influencing the impact of these notices are discussed, including the monetary amounts of penalties, the nature of offences, the criteria and processes for issuing notices, and other implementation issues. There is a need for further empirical studies to determine the characteristics of infringement notice schemes that are most effective in motivating preventive action.