Abstract:
Spatial navigation requires the processing of complex, disparate and often ambiguous sensory data. The neurocomputations underpinning this vital ability remain poorly understood. Controversy remains as to whether multimodal sensory information must be combined into a unified representation, consistent with Tolman's "cognitive map", or whether differential activation of independent navigation modules suffices to explain observed navigation behaviour. Here we demonstrate that key neural correlates of spatial navigation in darkness cannot be explained if the path integration system acts independently of boundary (landmark) information. In vivo recordings demonstrate that the rodent head direction (HD) system becomes unstable within three minutes without vision. In contrast, rodents maintain stable place fields and grid fields for over half an hour without vision. Using a simple HD error model, we show analytically that idiothetic path integration (iPI) alone cannot be used to maintain any stable place representation beyond two to three minutes. We then use a measure of place stability based on information-theoretic principles to prove that featureless boundaries alone cannot be used to improve localization above chance level. Having shown that neither iPI nor boundaries alone are sufficient, we then address the question of whether their combination is sufficient and - we conjecture - necessary to maintain place stability for prolonged periods without vision. We addressed this question in simulations and robot experiments using a navigation model comprising a particle filter and a boundary map. The model replicates published experimental results on place field and grid field stability without vision, and makes testable predictions, including place field splitting and grid field rescaling when the true arena geometry differs from the acquired boundary map. We discuss our findings in light of current theories of animal navigation and neuronal computation, and elaborate on their implications and significance for the design, analysis and interpretation of experiments.
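To make the combined model concrete, here is a minimal, hypothetical sketch of a single particle-filter localisation step that fuses noisy idiothetic path integration with range observations of featureless boundaries. The square-arena geometry, noise parameters and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

ARENA = 1.0          # assumed square arena of side 1 m
N_PARTICLES = 500
HD_NOISE = 0.05      # rad, per-step heading drift (assumed)
STEP_NOISE = 0.01    # m, per-step odometry noise (assumed)
SENSOR_NOISE = 0.02  # m, boundary-range noise (assumed)

def predict(particles, step, heading):
    """Propagate particles with noisy idiothetic path integration (iPI)."""
    hd = heading + rng.normal(0.0, HD_NOISE, len(particles))
    d = step + rng.normal(0.0, STEP_NOISE, len(particles))
    particles[:, 0] += d * np.cos(hd)
    particles[:, 1] += d * np.sin(hd)
    return np.clip(particles, 0.0, ARENA)

def boundary_ranges(xy):
    """Distances to the four featureless walls of the assumed square arena."""
    x, y = xy[..., 0], xy[..., 1]
    return np.stack([x, ARENA - x, y, ARENA - y], axis=-1)

def update(particles, observed):
    """Reweight and resample particles against observed wall distances."""
    err = boundary_ranges(particles) - observed
    logw = -0.5 * np.sum(err**2, axis=-1) / SENSOR_NOISE**2
    w = np.exp(logw - logw.max())      # subtract max for numerical stability
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# one simulated dark-navigation step
particles = rng.uniform(0.0, ARENA, size=(N_PARTICLES, 2))
true_pos = np.array([0.3, 0.7])
particles = predict(particles, step=0.02, heading=0.5)
obs = boundary_ranges(true_pos) + rng.normal(0.0, SENSOR_NOISE, 4)
particles = update(particles, obs)
print("estimated position:", particles.mean(axis=0))
```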
Abstract:
Background: Periurban agriculture refers to agricultural practice occurring in areas with mixed rural and urban features. It is responsible for 25% of the total gross value of economic production in Australia, despite comprising only 3% of the land used for agriculture. As populations grow and cities expand, they constantly absorb surrounding fringe areas, creating a new fringe further from the city and causing the periurban region to shift steadily outwards. Periurban regions are fundamental to the provision of fresh food to city populations, and residential (and industrial) expansion taking over agricultural land has been noted as a major worldwide concern. Another major concern around the increase in urbanisation and the resultant decrease in periurban agriculture is its potential effect on food security. Food security is the availability of, or access to, nutritionally adequate, culturally relevant and safe foods in culturally appropriate ways. Food insecurity therefore occurs when access to or availability of these foods is compromised. There is an important level of connectedness between food security and food production, and a decrease in periurban agriculture may have adverse effects on food security. A decrease in local, seasonal produce may reduce the availability of products and increase their cost, as food must travel greater distances, incurring extra costs at the consumer level. Currently, few Australian studies exist examining the change in periurban agriculture over time. Such information may prove useful for future health policy and interventions as well as infrastructure planning. The aim of this study is to investigate changes in periurban agriculture among the capital cities of Australia. Methods: We compared data pertaining to selected commodities from the Australian Bureau of Statistics 2000-01 and 2005-06 Agricultural Censuses. This survey is distributed online or via mail on a five-yearly basis to approximately 175,000 agricultural businesses to ascertain information on a range of factors, such as types of crops, livestock and land preparation practices. For the purpose of this study we compared the land being used for total crops, and for cereal, oilseed, legume, fruit and vegetable crops separately. Data were analysed using repeated-measures ANOVA in SPSS. Results: Overall, the total area available for crops in urbanised areas of Australia increased slightly, by 1.8%. However, Sydney, Melbourne, Adelaide and Perth experienced decreases in the area available for fruit crops by 11%, 5% and 4% respectively. Furthermore, Brisbane and Perth experienced decreases in land available for vegetable crops by 28% and 14% respectively. Finally, Sydney, Adelaide and Perth experienced decreases in land available for cereal crops by 10-79%. Conclusions: These findings suggest that population increases and the consequent urban sprawl may be resulting in a decrease in periurban agriculture, specifically for several core food groups including fruit, breads and grain-based foods. In doing so, access to or availability of these foods may be limited, and the cost of these foods is likely to increase, which may compromise food security for certain sub-groups of the population.
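For illustration only, the same kind of repeated-measures comparison could be set up in Python with statsmodels rather than SPSS; the data frame below, its column names and all values are hypothetical and are not the census data analysed in the study.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: each city measured at both census years (values invented).
data = pd.DataFrame({
    "city": ["Sydney", "Sydney", "Melbourne", "Melbourne", "Brisbane", "Brisbane",
             "Adelaide", "Adelaide", "Perth", "Perth"],
    "census_year": ["2000-01", "2005-06"] * 5,
    "crop_area_ha": [1200.0, 1070.0, 950.0, 900.0, 800.0, 760.0,
                     640.0, 615.0, 580.0, 555.0],
})

# each city is the repeated "subject", measured once at each census year
result = AnovaRM(data, depvar="crop_area_ha",
                 subject="city", within=["census_year"]).fit()
print(result.anova_table)
```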
Abstract:
Despite the compelling case for moving towards cloud computing, the upstream oil & gas industry faces several technical challenges—most notably, a pronounced emphasis on data security, a reliance on extremely large data sets, and significant legacy investments in information technology (IT) infrastructure—that make a full migration to the public cloud difficult at present. Private and hybrid cloud solutions have consequently emerged within the industry to yield as much benefit from cloud-based technologies as possible while working within these constraints. This paper argues, however, that the move to private and hybrid clouds will very likely prove only to be a temporary stepping stone in the industry’s technological evolution. By presenting evidence from other market sectors that have faced similar challenges in their journey to the cloud, we propose that enabling technologies and conditions will probably fall into place in a way that makes the public cloud a far more attractive option for the upstream oil & gas industry in the years ahead. The paper concludes with a discussion about the implications of this projected shift towards the public cloud, and calls for more of the industry’s services to be offered through cloud-based “apps.”
Abstract:
We consider the space fractional advection–dispersion equation, which is obtained from the classical advection–diffusion equation by replacing the spatial derivatives with a generalised derivative of fractional order. We derive a finite volume method that utilises fractionally-shifted Grünwald formulae for the discretisation of the fractional derivative, to numerically solve the equation on a finite domain with homogeneous Dirichlet boundary conditions. We prove that the method is stable and convergent when coupled with an implicit timestepping strategy. Results of numerical experiments are presented that support the theoretical analysis.
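For orientation, a standard form of the governing equation and of the shifted Grünwald approximation used to discretise the fractional derivative can be written as follows; this is a generic statement under assumed notation, not necessarily the exact formulation of the paper.

```latex
% Space fractional advection-dispersion equation on 0 < x < L, 1 < alpha <= 2,
% with homogeneous Dirichlet boundary conditions (generic form, notation assumed):
\frac{\partial u(x,t)}{\partial t}
  = -v\,\frac{\partial u(x,t)}{\partial x}
    + D\,\frac{\partial^{\alpha} u(x,t)}{\partial x^{\alpha}},
\qquad u(0,t) = u(L,t) = 0 .

% Shifted Grunwald approximation of the fractional derivative on a grid x_i = ih:
\frac{\partial^{\alpha} u}{\partial x^{\alpha}}(x_i)
  \approx \frac{1}{h^{\alpha}} \sum_{k=0}^{i+1} g_k^{(\alpha)}\, u(x_{i-k+1}),
\qquad g_k^{(\alpha)} = (-1)^k \binom{\alpha}{k},
\quad g_0^{(\alpha)} = 1,\;\;
g_k^{(\alpha)} = \Bigl(1 - \tfrac{\alpha+1}{k}\Bigr) g_{k-1}^{(\alpha)} .
```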
Abstract:
The feasibility of using an in-hardware implementation of a genetic algorithm (GA) to solve the computationally expensive travelling salesman problem (TSP) is explored, especially in regard to hardware resource requirements for problem and population sizes. We investigate via numerical experiments whether a small population size might prove sufficient to obtain reasonable quality solutions for the TSP, thereby permitting relatively resource efficient hardware implementation on field programmable gate arrays (FPGAs). Software experiments on two TSP benchmarks involving 48 and 532 cities were used to explore the extent to which population size can be reduced without compromising solution quality, and results show that a GA allowed to run for a large number of generations with a smaller population size can yield solutions of comparable quality to those obtained using a larger population. This finding is then used to investigate feasible problem sizes on a targeted Virtex-7 vx485T-2 FPGA platform via exploration of hardware resource requirements for memory and data flow operations.
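As a purely illustrative reference point, a small-population GA for the TSP of the kind studied in software can be sketched as follows (tournament selection, order crossover, swap mutation); the random instance, parameter values and code structure are assumptions and do not correspond to the benchmark data or the FPGA design discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)

N_CITIES = 48          # illustrative size, echoing the smaller benchmark
POP_SIZE = 16          # deliberately small population
GENERATIONS = 2000
cities = rng.random((N_CITIES, 2))   # random instance, not a published benchmark

def tour_length(tour):
    pts = cities[tour]
    return np.sum(np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1))

def order_crossover(p1, p2):
    """Order crossover (OX): copy a slice of p1, fill the rest in p2's order."""
    a, b = sorted(rng.choice(N_CITIES, 2, replace=False))
    child = -np.ones(N_CITIES, dtype=int)
    child[a:b] = p1[a:b]
    rest = [c for c in p2 if c not in child[a:b]]
    child[:a] = rest[:a]
    child[b:] = rest[a:]
    return child

def mutate(tour, rate=0.2):
    """Swap mutation: exchange two cities with a small probability."""
    if rng.random() < rate:
        i, j = rng.choice(N_CITIES, 2, replace=False)
        tour[i], tour[j] = tour[j], tour[i]
    return tour

pop = [rng.permutation(N_CITIES) for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    fitness = np.array([tour_length(t) for t in pop])
    new_pop = [pop[int(np.argmin(fitness))].copy()]       # elitism
    while len(new_pop) < POP_SIZE:
        i, j = rng.choice(POP_SIZE, 2, replace=False)      # binary tournament
        p1 = pop[i] if fitness[i] < fitness[j] else pop[j]
        i, j = rng.choice(POP_SIZE, 2, replace=False)
        p2 = pop[i] if fitness[i] < fitness[j] else pop[j]
        new_pop.append(mutate(order_crossover(p1, p2)))
    pop = new_pop

print("best tour length:", min(tour_length(t) for t in pop))
```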
Abstract:
Due to the increased complexity, scale, and functionality of information and telecommunication (IT) infrastructures, new exploits and vulnerabilities are discovered every day. These vulnerabilities are most often used by malicious actors to penetrate IT infrastructures, mainly to disrupt business or steal intellectual property. Recent incidents prove that it is no longer sufficient to perform manual security tests of the IT infrastructure based on sporadic security audits. Instead, networks should be continuously tested against possible attacks. In this paper we present current results and challenges towards realizing automated and scalable solutions to identify possible attack scenarios in an IT infrastructure. Specifically, we define an extensible framework which uses public vulnerability databases to identify probable multi-step attacks in an IT infrastructure, and provides recommendations in the form of patching strategies, topology changes, and configuration updates.
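A minimal, hypothetical illustration of the underlying idea of chaining vulnerability records into multi-step attack paths is given below; the reachability model, vulnerability identifiers and search strategy are invented for illustration and are not the framework presented in the paper.

```python
from collections import deque

# Hypothetical network model: which hosts can reach which, and which
# (invented) vulnerability records would let an attacker hop onto a host.
reachable = {
    "internet": ["web01"],
    "web01":    ["app01"],
    "app01":    ["db01"],
}
vulns = {                      # host -> (vulnerability id, privilege gained); illustrative only
    "web01": ("VULN-A", "user"),
    "app01": ("VULN-B", "user"),
    "db01":  ("VULN-C", "admin"),
}

def multi_step_attack(start, target):
    """Breadth-first search for a chain of exploitable hops from start to target."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        host, path = queue.popleft()
        if host == target:
            return path
        for nxt in reachable.get(host, []):
            if nxt in seen or nxt not in vulns:
                continue
            seen.add(nxt)
            queue.append((nxt, path + [(host, nxt, vulns[nxt][0])]))
    return None

print(multi_step_attack("internet", "db01"))
# e.g. [('internet', 'web01', 'VULN-A'), ('web01', 'app01', 'VULN-B'), ('app01', 'db01', 'VULN-C')]
```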
Abstract:
In offering a critical review of the problem we call "ADHD", this paper progresses in three stages. The first two parts juxtapose the dominant voices emanating from the literature in medicine and psychology, highlighting some interdependency between these otherwise competing interest groups. In part three, the nature of the relationship between these groups and the institution of the school is considered, as is the role that the school may play in the psycho-pathologisation of fidgety, distractible, active children who prove hard to teach. In so doing, the author provides an insight into why the problem we call "ADHD" has achieved celebrity status in Australia and what the effects of that may be for children who come to be described in these ways.
Abstract:
In this paper, a hybrid smoothed finite element method (H-SFEM) is developed for solid mechanics problems by combining techniques of the finite element method (FEM) and the node-based smoothed finite element method (NS-FEM) using a triangular mesh. A weighting parameter is introduced into H-SFEM, and the strain field is assumed to be the weighted average of the compatible strains from FEM and the smoothed strains from NS-FEM. We prove theoretically that the strain energy obtained from the H-SFEM solution lies between those from the compatible FEM solution and the NS-FEM solution, which guarantees the convergence of H-SFEM. Intensive numerical studies are conducted to verify these theoretical results and show that (1) the upper and lower bound solutions can always be obtained by adjusting the weighting parameter; and (2) there exists a preferable value of this parameter at which the H-SFEM can produce an ultra-accurate solution.
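In symbols, the weighted-average strain described above can be sketched as follows; the weighting symbol β and the notation are assumptions introduced for illustration rather than the paper's own definitions.

```latex
% Assumed strain field of H-SFEM (notation and the symbol \beta are illustrative):
\bar{\boldsymbol{\varepsilon}}_{\mathrm{H\text{-}SFEM}}
  = (1-\beta)\,\boldsymbol{\varepsilon}_{\mathrm{FEM}}
  + \beta\,\tilde{\boldsymbol{\varepsilon}}_{\mathrm{NS\text{-}FEM}},
\qquad 0 \le \beta \le 1 .
% \beta = 0 recovers the compatible FEM strain and \beta = 1 the node-based
% smoothed strain, so tuning \beta moves the strain energy between the two
% bounds noted in the abstract.
```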
Abstract:
The thermal decomposition and dehydroxylation process of the coal-bearing strata kaolinite–potassium acetate intercalation complex (CSKK) has been studied using X-ray diffraction (XRD), infrared spectroscopy (IR), thermal analysis, mass spectrometric analysis and infrared emission spectroscopy. The XRD results showed that potassium acetate (KAc) has been successfully intercalated into the coal-bearing strata kaolinite, with an obvious increase in the basal distance of the first basal peak, and a positive correlation was found between the concentration of the intercalation reagent KAc and the degree of intercalation. As the temperature of the system is raised, the formation of KHCO3, K2CO3 and KAlSiO4, derived from the thermal decomposition or phase transition of CSKK, is observed in sequence. The IR results showed that new bands appeared, and shifts in band positions and intensities were also found as the concentration of the intercalation agent was raised. The thermal analysis and mass spectrometric results revealed that CSKK is stable below 300 °C, and the thermal decomposition products (H2O and CO2) were further confirmed by mass spectrometric analysis. A comparison of the thermal analysis results of the original coal-bearing strata kaolinite and its intercalation complex yields the new finding that not only is a new mass loss peak observed at 285 °C, but the dehydroxylation and dehydration temperature of coal-bearing strata kaolinite is also decreased by about 100 °C. This is explained by the obvious increase in the interlayer space of the kaolinite after intercalation by KAc, which weakens the interlayer hydrogen bonds and makes dehydroxylation from the kaolinite surface easier. Furthermore, a possible structural model for CSKK has been proposed, with further analysis required in order to prove the most probable structures.
Abstract:
Plumbogummite PbAl3(PO4)2(OH,H2O)6 is a mineral of environmental significance and is a member of the alunite-jarosite supergroup. The molecular structure of the mineral has been investigated by Raman spectroscopy. The spectra of different plumbogummite specimens differ, although there are many common features. The Raman spectra show a spectral profile consisting of overlapping bands and shoulders. Raman bands and shoulders observed at 971, 980, 1002 and 1023 cm−1 (China sample) and 913, 981, 996 and 1026 cm−1 (Czech sample) are assigned to the ν1 symmetric stretching modes of the (PO4)3− units, those at 1002 and 1023 cm−1 (China) and 996 and 1026 cm−1 (Czech) to the ν1 symmetric stretching vibrations of the (O3POH)2− units, and those at 1057, 1106 and 1182 cm−1 (China) and 1102, 1104 and 1179 cm−1 (Czech) to the ν3 (PO4)3− and ν3 (PO3) antisymmetric stretching vibrations. Raman bands and shoulders at 634, 613 and 579 cm−1 (China) and 611 and 596 cm−1 (Czech) are attributed to the ν4 (δ) (PO4)3− bending vibrations, and those at 507, 494 and 464 cm−1 (China) and 505 and 464 cm−1 (Czech) to the ν2 (δ) (PO4)3− bending vibrations. The Raman spectrum of the OH stretching region is complex. Raman bands and shoulders are identified at 2824, 3121, 3249, 3372, 3479 and 3602 cm−1 for plumbogummite from China, and at 3077, 3227, 3362, 3480, 3518 and 3601 cm−1 for the Czech Republic sample. These bands are assigned to the ν OH stretching modes of water molecules and hydrogen ions. Approximate O–H⋯O hydrogen bond lengths inferred from the Raman spectra vary in the range >3.2–2.62 Å (China) and >3.2–2.67 Å (Czech). The minor presence of some carbonate ions in the plumbogummite (China sample) is connected with a distinctive increase in the intensity of the Raman band at 1106 cm−1, in which the ν1 (CO3)2− symmetric stretching vibration, overlapping with the phosphate stretching vibrations, may participate.
Abstract:
An analytical method for the detection of carbonaceous gases by a non-dispersive infrared (NDIR) sensor has been developed. The calibration plots of six carbonaceous gases, including CO2, CH4, CO, C2H2, C2H4 and C2H6, were obtained and the reproducibility was determined to verify the feasibility of this gas monitoring method. The results show that the squared correlation coefficients for the six gas measurements are greater than 0.999. The reproducibility is excellent, indicating that this analytical method is useful for determining the concentrations of carbonaceous gases.
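As a generic illustration of this kind of calibration check (not the instrument's actual data or software), a linear calibration and its squared correlation coefficient could be computed as follows; all concentrations and detector responses are invented.

```python
import numpy as np

# Hypothetical calibration data for one gas channel: known concentrations (ppm)
# and the corresponding NDIR detector responses (arbitrary units); values invented.
conc = np.array([0.0, 100.0, 200.0, 400.0, 800.0, 1600.0])
response = np.array([0.02, 0.98, 2.01, 4.03, 7.95, 16.10])

# least-squares linear calibration: response ~ slope * conc + intercept
slope, intercept = np.polyfit(conc, response, 1)
predicted = slope * conc + intercept

# squared correlation coefficient (R^2) of the calibration plot
ss_res = np.sum((response - predicted) ** 2)
ss_tot = np.sum((response - response.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"slope={slope:.4f}, intercept={intercept:.4f}, R^2={r_squared:.5f}")

# an unknown sample would then be quantified by inverting the calibration line
unknown_response = 3.1
print("estimated concentration:", (unknown_response - intercept) / slope)
```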
Abstract:
Small-angle and ultra-small-angle neutron scattering (SANS and USANS) measurements were performed on samples from the Triassic Montney tight gas reservoir in Western Canada in order to determine the applicability of these techniques for characterizing the full pore size spectrum and to gain insight into the nature of the pore structure and its control on permeability. The subject tight gas reservoir consists of a finely laminated siltstone sequence; extensive cementation and moderate clay content are the primary causes of low permeability. SANS/USANS experiments run at ambient pressure and temperature conditions on lithologically diverse sub-samples of three core plugs demonstrated that a broad pore size distribution could be interpreted from the data. Two interpretation methods were used to evaluate total porosity, pore size distribution and surface area, and the results were compared to independent estimates derived from helium porosimetry (connected porosity) and low-pressure N2 and CO2 adsorption (accessible surface area and pore size distribution). The pore structure of the three samples as interpreted from SANS/USANS is fairly uniform, with small differences in the small-pore range (<2000 Å), possibly related to differences in the degree of cementation and in mineralogy, in particular clay content. Total porosity interpreted from USANS/SANS is similar to (but systematically higher than) the helium porosities measured on the whole core plugs. Both methods were used to estimate the percentage of open porosity, expressed here as the ratio of connected porosity, as established from helium porosimetry, to total porosity, as estimated from SANS/USANS techniques. Open porosity appears to control permeability (determined using pressure- and pulse-decay techniques), with the highest permeability sample also having the highest percentage of open porosity. Surface area, as calculated from low-pressure N2 and CO2 adsorption, is significantly less than the surface area estimates from SANS/USANS, which is due in part to limited accessibility of the gases to all pores. The similarity between the N2- and CO2-accessible surface areas suggests an absence of microporosity in these samples, which is in agreement with the SANS analysis. A core gamma ray profile run on the same core from which the core plug samples were taken correlates with profile permeability measurements run on the slabbed core. This correlation is related to clay content, which possibly controls the percentage of open porosity. Continued study of these effects will prove useful in log-core calibration efforts for tight gas.
Abstract:
Graphene has promised many novel applications in nanoscale electronics and sustainable energy due to its distinctive electronic properties. Computational exploration of electronic functionality, and how it varies with architecture and doping, presently runs ahead of experimental synthesis, yet provides insights into the types of structures that may prove profitable for targeted experimental synthesis and characterization. We present here a summary of our understanding of the important aspects of dimension, band gap, defect, and interfacial engineering of graphene based on state-of-the-art ab initio approaches. Some of the most recent experimental achievements relevant to future theoretical exploration are also covered.
Abstract:
The main objective of this paper is to describe the development of a remote sensing airborne air sampling system for Unmanned Aerial Systems (UAS), providing the capability to detect particle and gas concentrations in real time over remote locations. The design of the air sampling methodology started by defining the system architecture and then selecting and integrating each subsystem. A multifunctional air sampling instrument, with the capability for simultaneous measurement of particle and gas concentrations, was modified and integrated with ARCAA's Flamingo UAS platform and communications protocols. As a result of the integration process, a system capable of both real-time geo-location monitoring and indexed-link sampling was obtained. Wind tunnel tests were conducted in order to evaluate the performance of the air sampling instrument in controlled non-stationary conditions at the typical operational velocities of the UAS platform. Once the fully operational remote air sampling system was obtained, the problem of mission design was analyzed through the simulation of different scenarios. Finally, flight tests of the complete air sampling system were conducted to check the dynamic characteristics of the UAS carrying the air sampling system and to prove its capability to perform an air sampling mission following a specific flight path.