432 results for "Index reduction techniques"
Abstract:
The opening phrase of the title is from Charles Darwin’s notebooks (Schweber 1977). It is a double reminder, firstly that mainstream evolutionary theory is not just about describing nature but is particularly looking for mechanisms or ‘causes’, and secondly, that there will usually be several causes affecting any particular outcome. The second part of the title reflects our concern at the almost universal rejection of the idea that biological mechanisms are sufficient for macroevolutionary changes, thus rejecting a cornerstone of Darwinian evolutionary theory. Our primary aim here is to consider ways of making it easier to develop and to test hypotheses about evolution. Formalizing hypotheses can help generate tests. In an absolute sense, some of the discussion by scientists about evolution is little better than the lack of reasoning used by those advocating intelligent design. Our discussion here is in a Popperian framework, where science is defined as that area of study in which it is possible, in principle, to find evidence against hypotheses – they are in principle falsifiable. However, with time, the boundaries of science keep expanding. In the past, some aspects of evolution lay outside the then-current boundaries of falsifiable science, but new techniques and ideas are steadily expanding those boundaries, and it is appropriate to re-examine some topics. Over the last few decades there appears to have been an increasingly strong tendency to look first (and only) for a physical cause. This decision is virtually never formally discussed; an assumption is simply made that some physical factor ‘drives’ evolution. It is necessary to examine our assumptions much more carefully: what is meant by physical factors ‘driving’ evolution, or by an ‘explosive radiation’? Our discussion focuses on two of the six mass extinctions, the fifth being the events in the Late Cretaceous, and the sixth starting at least 50,000 years ago (and ongoing).
Cretaceous/Tertiary boundary; the rise of birds and mammals. We have had a long-term interest (Cooper and Penny 1997) in designing tests to help evaluate whether the processes of microevolution are sufficient to explain macroevolution. The real challenge is to formulate hypotheses in a testable way. For example, the number of lineages of birds and mammals that survive from the Cretaceous to the present is one test. Our first estimate was 22 for birds, and current work is tending to increase this value. This still does not consider lineages that survived into the Tertiary and then went extinct later. Our initial suggestion was probably too narrow in that it lumped four models from Penny and Phillips (2004) into one. This reduction is too simplistic in that we need to know about survival, and about ecological and morphological divergences, during the Late Cretaceous, and whether crown groups of avian or mammalian orders may have existed back into the Cretaceous. More recently (Penny and Phillips 2004) we have formalized hypotheses about dinosaurs and pterosaurs, with the prediction that interactions between mammals (and ground-feeding birds) and dinosaurs would be most likely to affect the smallest dinosaurs, and similarly that interactions between birds and pterosaurs would particularly affect the smaller pterosaurs. There is now evidence for both classes of interactions, with the smallest dinosaurs and pterosaurs declining first, as predicted. Thus, testable models are now possible. Mass extinction number six: human impacts. On a broad scale, there is a good correlation between the time of human arrival and increased extinctions (Hurles et al. 2003; Martin 2005; Figure 1). However, it is necessary to distinguish different time scales (Penny 2005), and on a finer scale there are still large numbers of possibilities. In Hurles et al.
(2003) we mentioned habitat modification (including the use of fire) and introduced plants and animals (including kiore), in addition to direct predation (the ‘overkill’ hypothesis). We need also to consider the prey switching that occurs in early human societies, as evidenced by the results of Wragg (1995) on middens of different ages on Henderson Island in the Pitcairn group. In addition, the presence of human-wary or human-adapted animals will affect the distribution in the subfossil record. A better understanding of human impacts world-wide, in conjunction with pre-scientific knowledge, will make it easier to discuss the issues by removing ‘blame’. While spontaneous generation was accepted universally, there was the expectation that animals would simply continue to reappear. New Zealand is one of the very best locations in the world to study many of these issues. Apart from the marine fossil record, some human impact events are extremely recent and the remains less disrupted by time.
Abstract:
Fractional differential equations are becoming more widely accepted as a powerful tool for modelling anomalous diffusion, which is exhibited by various materials and processes. Recently, researchers have suggested that rather than using constant-order fractional operators, some processes are more accurately modelled using fractional orders that vary with time and/or space. In this paper we develop computationally efficient techniques for solving time-variable-order time-space fractional reaction-diffusion equations (TSFRDEs) using a finite difference scheme. We adopt the Coimbra variable-order time fractional operator and a variable-order fractional Laplacian operator in space, where both orders are functions of time. Because the fractional operator is nonlocal, it is challenging to deal efficiently with its long-range dependence when using classical numerical techniques to solve such equations. The novelty of our method is that the numerical solution of the time-variable-order TSFRDE is written in terms of a matrix function vector product at each time step. This product is approximated efficiently by the Lanczos method, a powerful iterative technique for approximating the action of a matrix function by projecting onto a Krylov subspace. Furthermore, an adaptive preconditioner is constructed that dramatically reduces the size of the required Krylov subspaces and hence the overall computational cost. Numerical examples, including the variable-order fractional Fisher equation, are presented to demonstrate the accuracy and efficiency of the approach.
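The Lanczos approximation of a matrix function vector product mentioned in this abstract can be illustrated in a few lines. The sketch below is not the authors' code; it is a minimal, self-contained illustration (assuming a symmetric matrix A, without the paper's adaptive preconditioner, and using f(x) = exp(-x) purely as an example function) of approximating f(A)b from an m-dimensional Krylov subspace.

```python
import numpy as np

def lanczos_fA_b(A, b, f, m):
    """Approximate f(A) @ b for symmetric A by projecting onto an
    m-dimensional Krylov subspace span{b, Ab, ..., A^(m-1) b}."""
    n = len(b)
    Q = np.zeros((n, m))          # orthonormal Lanczos vectors
    alpha = np.zeros(m)           # diagonal of the tridiagonal matrix T
    beta = np.zeros(max(m - 1, 1))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[: m - 1], 1) + np.diag(beta[: m - 1], -1)
    # f(T) via eigendecomposition of the small tridiagonal matrix
    evals, evecs = np.linalg.eigh(T)
    fT = evecs @ np.diag(f(evals)) @ evecs.T
    e1 = np.zeros(m)
    e1[0] = 1.0
    # f(A) b  ≈  ||b|| * Q * f(T) * e1
    return np.linalg.norm(b) * (Q @ (fT @ e1))
```

The cost is m matrix-vector products plus an m-by-m eigendecomposition, which is exactly why reducing m (e.g. by preconditioning, as in the paper) reduces the overall cost.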
Abstract:
Acoustic emission (AE) analysis is one of several diagnostic techniques available nowadays for structural health monitoring (SHM) of engineering structures. Some of its advantages over other techniques include high sensitivity to crack growth and the capability of monitoring a structure in real time. The phenomenon of rapid release of energy within a material by crack initiation or growth, in the form of stress waves, is known as acoustic emission. In the AE technique, these stress waves are recorded by means of suitable sensors placed on the surface of a structure. The recorded signals are subsequently analysed to gather information about the nature of the source. By enabling early detection of crack growth, the AE technique helps in planning timely retrofitting, other maintenance jobs, or even replacement of the structure if required. In spite of being a promising tool, some challenges still stand in the way of successful application of the AE technique. Large amounts of data are generated during AE testing, hence effective data analysis is necessary, especially for long-term monitoring uses. Appropriate analysis of AE data for quantification of damage level is an area that has received considerable attention. Various approaches available for damage quantification for severity assessment are discussed in this paper, with special focus on civil infrastructure such as bridges. One method, called improved b-value analysis, is used to analyse data collected from laboratory testing.
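The improved b-value (Ib-value) named above is commonly computed from the amplitude distribution of AE hits using thresholds based on the mean and standard deviation of the amplitudes. The paper's own implementation is not shown here; the following is a minimal sketch of that commonly used formulation (the mean-plus/minus-sigma thresholds and the alpha coefficients follow the usual convention, not necessarily the exact variant used in this paper).

```python
import numpy as np

def improved_b_value(amplitudes_db, alpha1=1.0, alpha2=1.0):
    """Ib-value from AE hit amplitudes (in dB).

    Ib = (log10 N1 - log10 N2) / ((alpha1 + alpha2) * sigma)
    where N1 = number of hits with amplitude >= mu - alpha1*sigma
          N2 = number of hits with amplitude >= mu + alpha2*sigma
    and mu, sigma are the mean and standard deviation of the amplitudes.
    A falling Ib-value is conventionally read as damage localisation.
    """
    a = np.asarray(amplitudes_db, dtype=float)
    mu, sigma = a.mean(), a.std()
    n1 = np.sum(a >= mu - alpha1 * sigma)  # hits above the lower cut-off
    n2 = np.sum(a >= mu + alpha2 * sigma)  # hits above the upper cut-off
    return (np.log10(n1) - np.log10(n2)) / ((alpha1 + alpha2) * sigma)
```

Because the thresholds adapt to the data, the Ib-value is less sensitive than the conventional b-value to the arbitrary choice of a fixed amplitude range.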
Abstract:
To support intelligent transportation system (ITS) road safety applications such as collision avoidance, lane departure warnings and lane keeping, a Global Navigation Satellite System (GNSS) based vehicle positioning system has to provide lane-level (0.5 to 1 m) or even in-lane-level (0.1 to 0.3 m) accurate and reliable positioning information to vehicle users. However, current vehicle navigation systems equipped with a single-frequency GPS receiver can only provide road-level accuracy of 5-10 meters. The positioning accuracy can be improved to sub-meter level or better with augmented GNSS techniques such as Real Time Kinematic (RTK) and Precise Point Positioning (PPP), which have traditionally been used in land surveying or in slowly moving environments. In these techniques, GNSS correction data generated from a local, regional or global network of GNSS ground stations are broadcast to users via various communication data links, mostly 3G cellular networks and communication satellites. This research aimed to investigate the performance of precise positioning systems operating in high-mobility environments. This involved evaluating the performance of both RTK and PPP techniques using: i) a state-of-the-art dual-frequency GPS receiver; and ii) a low-cost single-frequency GNSS receiver. Additionally, this research evaluated the effectiveness of several operational strategies in reducing the load on data communication networks due to correction data transmission, which may be problematic for future wide-area ITS service deployment. These strategies include the use of different data transmission protocols, different correction data format standards, and correction data transmission at less frequent intervals. A series of field experiments was designed and conducted for each research task. Firstly, the performance of the RTK and PPP techniques was evaluated in both static and kinematic (highway, at speeds exceeding 80 km/h) experiments.
RTK solutions achieved an RMS precision of 0.09 to 0.2 m in static tests and 0.2 to 0.3 m in kinematic tests, while PPP achieved 0.5 to 1.5 m in static and 1 to 1.8 m in kinematic tests using the RTKlib software. These RMS precision values could be further improved if better RTK and PPP algorithms were adopted. The test results also showed that RTK may be more suitable for lane-level-accuracy vehicle positioning. Professional-grade (dual-frequency) and mass-market-grade (single-frequency) GNSS receivers were tested for their RTK performance in static and kinematic modes. The analysis showed that mass-market-grade receivers provide good solution continuity, although their overall positioning accuracy is worse than that of professional-grade receivers. In an attempt to reduce the load on the data communication network, we first evaluated the use of different correction data format standards, namely the RTCM version 2.x and RTCM version 3.0 formats. A 24-hour transmission test was conducted to compare network throughput. The results showed that a 66% reduction in network throughput can be achieved by using the newer RTCM version 3.0 format compared to the older RTCM version 2.x format. Secondly, experiments were conducted to examine the use of two data transmission protocols, TCP and UDP, for correction data transmission through the Telstra 3G cellular network. The performance of each transmission method was analysed in terms of packet transmission latency, packet dropout, packet throughput, packet retransmission rate, etc. The overall network throughput and latency of UDP data transmission were 76.5% and 83.6% of those of TCP data transmission, while the overall accuracy of the positioning solutions remained at the same level. Additionally, due to the nature of UDP transmission, it was also found that 0.17% of UDP packets were lost during the kinematic tests, but this loss did not lead to a significant reduction in the quality of the positioning results.
The experimental results from the static and kinematic field tests also showed that mobile network communication may be blocked for a couple of seconds, but the positioning solutions can be kept at the required accuracy level by appropriate setting of the Age of Differential parameter. Finally, we investigated the effects of using less frequent correction data (transmitted at 1, 5, 10, 15, 20, 30 and 60 second intervals) on the precise positioning system. As the time interval increases, the percentage of ambiguity-fixed solutions gradually decreases, while the positioning error increases from 0.1 to 0.5 m. The results showed that the position accuracy could still be kept at the in-lane level (0.1 to 0.3 m) when using correction data transmitted at intervals of up to 20 seconds.
Abstract:
This paper presents an analysis of the stream cipher Mixer, a bit-based cipher with structural components similar to the well-known Grain cipher and the LILI family of keystream generators. Mixer uses a 128-bit key and a 64-bit IV to initialise a 217-bit internal state. The analysis is focused on the initialisation function of Mixer and shows that there exist multiple key-IV pairs which, after initialisation, produce the same initial state and consequently will generate the same keystream. Furthermore, if the number of iterations of the state update function performed during initialisation is increased, then the number of distinct initial states that can be obtained decreases. It is also shown that there exist some distinct initial states which produce the same keystream, resulting in a further reduction of the effective key space.
Abstract:
The possibility of a surface inner sphere electron transfer mechanism leading to the coating of gold, via the surface reduction of gold(I) chloride on metal and semi-metal oxide nanoparticles, was investigated. Silica and zinc oxide nanoparticles are known to have very different surface chemistry, potentially leading to a new class of gold-coated nanoparticles. Monodisperse silica nanoparticles were synthesised by the well-known Stöber protocol in conjunction with sonication. The nanoparticle size was regulated solely by varying the amount of ammonia solution added. The presence of surface hydroxyl groups was investigated by liquid proton NMR. The resultant nanoparticle size was directly measured by TEM. The synthesised silica nanoparticles were dispersed in acetonitrile (MeCN) and added to a bis-acetonitrile gold(I) co-ordination complex, [Au(MeCN)2]+, in MeCN. The silica hydroxyl groups were deprotonated in the presence of MeCN, generating a formal negative charge on the siloxy groups. This allowed the [Au(MeCN)2]+ complex to undergo ligand exchange with the silica nanoparticles, forming a surface co-ordination complex with reduction to gold(0) that proceeded by a surface inner sphere electron transfer mechanism. The residual [Au(MeCN)2]+ complex was allowed to react with water, disproportionating into gold(0) and gold(III), with the gold(0) adding to the reduced gold already bound on the silica surface. The so-formed metallic gold seed surface was found to be suitable for the conventional reduction of gold(III) to gold(0) by ascorbic acid. This process generated a thin and uniform gold coating on the silica nanoparticles. The process was then modified to produce uniformly gold-coated composite zinc oxide nanoparticles (Au@ZnO NPs) using surface co-ordination chemistry. AuCl dissolved in acetonitrile (MeCN) supplied chloride ions, which were adsorbed onto the ZnO NPs.
The co-ordinated gold(I) was reduced on the ZnO surface to gold(0) by the inner sphere electron transfer mechanism. Addition of water disproportionated the remaining gold(I) to gold(0) and gold(III). The gold(0) bonded to gold(0) on the NP surface, while the gold(III) was reduced to gold(0) by ascorbic acid (ASC), which completed the gold coating process. This gold coating process for Au@ZnO NPs was then modified to incorporate iodide instead of chloride. ZnO NPs were synthesised using sodium oxide, zinc iodide and potassium iodide in refluxing basic ethanol, with iodide controlling the presence of chemisorbed oxygen. These ZnO NPs were treated by the addition of gold(I) chloride dissolved in acetonitrile, leaving chloride anions co-ordinated on the ZnO NP surface. This allowed acetonitrile ligands in the added [Au(MeCN)2]+ complex to undergo surface exchange with the chloride adsorbed from the dissolved AuCl on the ZnO NP surface. The gold(I) was then reduced by the surface inner sphere electron transfer mechanism. The presence of the reduced gold on the ZnO NPs allowed adsorption of iodide to generate a uniform deposition of gold onto the ZnO NP surface without the use of additional reducing agents or heat.
Abstract:
Plant tissue culture is a technique that exploits the ability of many plant cells to revert to a meristematic state. Although originally developed for botanical research, plant tissue culture has now evolved into important commercial practices and has become a significant research tool in agriculture, horticulture and many other areas of plant sciences. Plant tissue culture is the sterile culture of plant cells, tissues, or organs under aseptic conditions, leading to cell multiplication or to the regeneration of organs and whole plants. The steps required to develop reliable systems for plant regeneration, and their application in plant biotechnology, are reviewed in countless books. Some of the major landmarks in the evolution of in vitro techniques are summarised in Table 5.1. In this chapter the current applications of this technology to agriculture, horticulture, forestry and plant breeding are briefly described, with specific examples from Australian plants where applicable.
Abstract:
Navigational collisions are one of the major safety concerns for many seaports. Continuing growth of shipping traffic, in both the number and the size of vessels, is likely to result in an increased number of traffic movements, which consequently could result in a higher risk of collisions in these restricted waters. This continually increasing safety concern warrants a comprehensive technique for modeling collision risk in port waters, particularly for modeling the probability of collision events and the associated consequences (i.e., injuries and fatalities). A number of techniques have been utilized for modeling the risk qualitatively, semi-quantitatively and quantitatively. These traditional techniques mostly rely on historical collision data, often in conjunction with expert judgment. However, they are hampered by several shortcomings: the randomness and rarity of collision occurrence, which leads to insufficient collision counts for sound statistical analysis; insufficiency in explaining collision causation; and a reactive approach to safety. A promising alternative approach that overcomes these shortcomings is the navigational traffic conflict technique (NTCT), which uses traffic conflicts as an alternative to collisions for modeling the probability of collision events quantitatively. This article explores the existing techniques for modeling collision risk in port waters. In particular, it identifies the advantages and limitations of the traditional techniques and highlights the potential of the NTCT to overcome those limitations. In view of the principles of the NTCT, a structured method for managing collision risk is proposed. This risk management method allows safety analysts to diagnose safety deficiencies in a proactive manner, and consequently has great potential for managing collision risk in a fast, reliable and efficient manner.
Abstract:
A number of mathematical models investigating certain aspects of the complicated process of wound healing have been reported in the literature in recent years. However, effective numerical methods, and supporting error analysis, for the fractional equations that describe the process of wound healing are still limited. In this paper, we consider numerical simulation of a fractional model based on coupled advection-diffusion equations for cell and chemical concentration in a polar coordinate system. The space fractional derivatives are defined in the left and right Riemann-Liouville sense, with the fractional orders in the advection and diffusion terms belonging to the intervals (0, 1) or (1, 2], respectively. Several numerical techniques are used. Firstly, the coupled advection-diffusion equations are decoupled to a single space-fractional advection-diffusion equation in a polar coordinate system. Secondly, we propose a new implicit difference method for simulating this equation, using the equivalence of the Riemann-Liouville and Grünwald-Letnikov fractional derivative definitions. Thirdly, its stability and convergence are discussed. Finally, some numerical results are given to demonstrate the theoretical analysis.
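The Riemann-Liouville/Grünwald-Letnikov equivalence invoked above is what makes finite difference discretisation practical: the Grünwald-Letnikov definition replaces the fractional integral with a weighted sum over grid values. The sketch below is not the paper's scheme (which is implicit and set in polar coordinates); it is a minimal illustration of the Grünwald-Letnikov weights and of the resulting first-order approximation of a left Riemann-Liouville derivative on a uniform grid.

```python
import math
import numpy as np

def gl_weights(alpha, n):
    # Grünwald-Letnikov coefficients g_k = (-1)^k * binom(alpha, k),
    # computed by the stable recurrence g_k = g_{k-1} * (k - 1 - alpha) / k.
    g = np.ones(n + 1)
    for k in range(1, n + 1):
        g[k] = g[k - 1] * (k - 1 - alpha) / k
    return g

def gl_left_derivative(f, alpha, x, h):
    # First-order Grünwald-Letnikov approximation of the left
    # Riemann-Liouville derivative of f at x (lower limit 0):
    #   D^alpha f(x) ≈ h^(-alpha) * sum_{k=0..n} g_k * f(x - k*h)
    n = int(round(x / h))
    g = gl_weights(alpha, n)
    vals = f(x - np.arange(n + 1) * h)
    return (g @ vals) / h ** alpha
```

Every point uses all grid values to its left, which is the "long range dependence" that makes naive fractional schemes expensive compared with local finite differences.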
Abstract:
This chapter reviews common barriers to community engagement for Latino youth and suggests ways to move beyond those barriers by empowering them to communicate their experiences, address the challenges they face, and develop recommendations for making their community more youth-friendly. As a case study, this chapter describes a program called Youth FACE IT (Youth Fostering Active Community Engagement for Integration and Transformation) in Boulder County, Colorado. The program enables Latino youth to engage in critical dialogue and participate in a community-based initiative. The chapter concludes by explaining specific strategies that planners can use to support active community engagement and develop a future generation of planners and engaged community members that reflects emerging demographics.
Abstract:
Nitrate reduction with nanoscale zero-valent iron (NZVI) has been reported as a potential technology for removing nitrate from contaminated water. In this paper, nitrate reduction was conducted with NZVI prepared by hydrogen reduction of natural goethite (NZVI-N, where -N denotes natural goethite) and of hydrothermal goethite (NZVI-H, where -H denotes hydrothermal goethite). In addition, the effects of reaction time, nitrate concentration and iron-to-nitrate ratio on the nitrate removal rate over NZVI-H and NZVI-N were investigated. To demonstrate their nitrate reduction capacities, NZVI-N and NZVI-H were compared with ordinary zero-valent iron (OZVI-N) in static experiments. Based on all the above investigations, a mechanism for nitrate reduction with NZVI-N is proposed. The results showed that reaction time, nitrate concentration and iron-to-nitrate ratio all played an important role in nitrate reduction by NZVI-N and NZVI-H. Compared with OZVI, NZVI-N and NZVI-H showed little dependence on pH, and NZVI-N offers higher stability for nitrate decomposition than NZVI-H because of the existence of Al-substitution. Furthermore, NZVI-N, prepared by hydrogen reduction of goethite, has higher activity for nitrate reduction; the products contain hydrogen, nitrogen, NH4+ and a little nitrite, but no NOx, while the NZVI-N itself is oxidized to Fe2+. This is a relatively easy and cost-effective method for nitrate removal, so nitrate reduction with NZVI-N has great potential application in the removal of nitrate from groundwater. © 2012 Elsevier B.V.
Abstract:
In this paper we consider the variable-order time fractional diffusion equation. We adopt the Coimbra variable-order (VO) time fractional operator, which defines a consistent method for VO differentiation of physical variables. The Coimbra variable-order fractional operator can also be viewed as a Caputo-type definition. Although this definition has fundamental characteristics that are desirable for physical modeling, numerical methods for fractional partial differential equations using this definition have not yet appeared in the literature. Here an approximate scheme is first proposed. The stability, convergence and solvability of this numerical scheme are discussed via the technique of Fourier analysis. Numerical examples are provided to show that the numerical method is computationally efficient. Crown Copyright © 2012 Published by Elsevier Inc. All rights reserved.
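A Caputo-type variable-order operator like the one adopted above is typically discretised with an L1-type quadrature in which the order is evaluated at the current time level. The sketch below is not the paper's scheme; it is a minimal illustration, under that assumption (order frozen at alpha(t_n) within each step, zero initial history), of how such a discrete operator is evaluated on a uniform time grid.

```python
import math
import numpy as np

def vo_caputo_l1(f_vals, alpha_fn, tau):
    """L1-type approximation of a variable-order Caputo-type derivative.

    f_vals   : samples f(t_0), ..., f(t_N) on the grid t_n = n * tau
    alpha_fn : callable giving the order alpha(t), with 0 < alpha(t) < 1
    Returns the approximate derivative at t_1, ..., t_N.
    """
    n_steps = len(f_vals) - 1
    out = np.zeros(n_steps)
    df = np.diff(f_vals)  # backward differences f(t_{j+1}) - f(t_j)
    for n in range(1, n_steps + 1):
        a = alpha_fn(n * tau)  # order evaluated at the current time level
        j = np.arange(n)
        # L1 weights: b_j = ((j+1)^(1-a) - j^(1-a)) * tau^(-a) / Gamma(2-a)
        w = ((j + 1) ** (1 - a) - j ** (1 - a)) * tau ** (-a) / math.gamma(2 - a)
        # weight j multiplies the difference ending j steps back in time
        out[n - 1] = np.dot(w, df[n - 1 - j])
    return out
```

Note that, as with the space-fractional case, every time step sums over the whole history, so the naive cost grows quadratically with the number of steps; efficient treatment of this history term is precisely the concern of the companion work on matrix-function methods.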
Abstract:
"There once was a man who aspired to be the author of the general theory of holes. When asked ‘What kind of hole—holes dug by children in the sand for amusement, holes dug by gardeners to plant lettuce seedlings, tank traps, holes made by road makers?’ he would reply indignantly that he wished for a general theory that would explain all of these. He rejected ab initio the—as he saw it—pathetically common-sense view that of the digging of different kinds of holes there are quite different kinds of explanations to be given; why then he would ask do we have the concept of a hole? Lacking the explanations to which he originally aspired, he then fell to discovering statistically significant correlations; he found for example that there is a correlation between the aggregate hole-digging achievement of a society as measured, or at least one day to be measured, by econometric techniques, and its degree of technological development. The United States surpasses both Paraguay and Upper Volta in hole-digging; there are more holes in Vietnam than there were. These observations, he would always insist, were neutral and value-free. This man’s achievement has passed totally unnoticed except by me. Had he however turned his talents to political science, had he concerned himself not with holes, but with modernization, urbanization or violence, I find it difficult to believe that he might not have achieved high office in the APSA." (MacIntyre 1971, 260)