899 results for INTERSECTION
Abstract:
Red light cameras were introduced in Victoria in August 1983, with the intention of reducing the number of accidents that result from motorists disobeying red traffic signals at signalised intersections. Accident data from 46 treated and 46 control sites from 1981 to 1986 were analysed. The analysis indicated that red light camera use resulted in a reduction in the incidence of right-angle accidents and in the number of accident casualties. Legislation was introduced in March 1986 to place the onus for red light camera offences onto the vehicle owner. This legislation was intended to improve Police efficiency and therefore increase the number of red light cameras in operation. Data supplied by the Police indicated that these aims were achieved, which should have beneficial road safety effects.
Abstract:
EXECUTIVE SUMMARY (excerpts) The red light camera (RLC) program commenced in July 1988, with five cameras operating at 15 sites in metropolitan Adelaide. This report deals with the first eighteen months of operation, to December 1989. A number of recommendations have been made… PROGRAM EVALUATION … In 1989 dollars, the program was estimated to have achieved an accident reduction benefit of $1.4m in the first 12 months of operation, which is almost twice the benefit expected using the assumptions made when selecting the sites. (There are 8 recommendations, mostly specific to the particular program characteristics)
Abstract:
Red light cameras were introduced in August 1983 to deter run-the-red offences and therefore to reduce the incidence of right-angle accidents at signalised intersections in Melbourne. This report was prepared after two years of operation of the program. It provides a detailed account of the technical aspects of the program, but does not provide any detailed, evaluative analyses of accident data.
Abstract:
The impression creep behaviour of zinc is studied in the range 300 to 500 K and the results are compared with the data from conventional creep tests. The steady-state impression velocity is found to exhibit the same stress and temperature dependence as in conventional tensile creep with the same power law stress exponent. Also studied is the effect of indenter size on the impression velocity. The thermal activation parameters for plastic flow at high temperatures derived from a number of testing techniques agree reasonably well. Grain boundary sliding is shown to be unimportant in controlling the rate of plastic flow at high temperatures. It is observed that the Cottrell-Stokes law is obeyed during high-temperature deformation of zinc. It is concluded that a mechanism such as forest intersection involving attractive trees controls the high-temperature flow rather than a diffusion mechanism.
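For context, the conventional power-law creep behaviour that the impression results are compared against can be written as below; the pre-factor A, stress exponent n and activation energy Q are generic symbols, not values quoted from the paper:

\[
  \dot{\varepsilon}_{ss} = A\,\sigma^{n}\exp\!\left(-\frac{Q}{RT}\right),
  \qquad
  v_{\mathrm{imp}} \propto \sigma^{n}\exp\!\left(-\frac{Q}{RT}\right),
\]

i.e. the abstract's claim is that the steady-state impression velocity exhibits the same stress exponent n and the same activation energy Q as the conventional tensile creep rate.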
Abstract:
We prove that any arithmetically Gorenstein curve on a smooth, general hypersurface of degree at least 6, is a complete intersection. This gives a characterisation of complete intersection curves on general type hypersurfaces in . We also verify that certain 1-cycles on a general quintic hypersurface are non-trivial elements of the Griffiths group.
Abstract:
Geometric constraints present in A2BO4 compounds with the tetragonal-T structure of K2NiF4 impose a strong pressure on the B–OII–B bonds and a stretching of the A–OI–A bonds in the basal planes if the tolerance factor is t ≅ RAO/(√2 RBO) < 1, where RAO and RBO are the sums of the A–O and B–O ionic radii. The tetragonal-T phase of La2NiO4 becomes monoclinic for Pr2NiO4, orthorhombic for La2CuO4, and tetragonal-T′ for Pr2CuO4. The atomic displacements in these distorted phases are discussed and rationalized in terms of the chemistry of the various compounds. The strong pressure on the B–OII–B bonds produces itinerant σ*(x²−y²) bands and a relative stabilization of localized d(z²) orbitals. Magnetic susceptibility and transport data reveal an intersection of the Fermi energy with the d(z²) levels for half the copper ions in La2CuO4; this intersection is responsible for an intrinsic localized moment associated with a configuration fluctuation; below 200 K the localized moment smoothly vanishes with decreasing temperature as the d(z²) level becomes filled. In La2NiO4, the localized moments for half-filled d(z²) orbitals induce strong correlations among the σ*(x²−y²) electrons above Td ≈ 200 K; at lower temperatures the σ*(x²−y²) electrons appear to contribute nothing to the magnetic susceptibility, which obeys a Curie-Weiss law giving a μeff corresponding to S = 1/2, but shows no magnetic order to lowest temperatures. These surprising results are verified by comparison with the mixed systems La2Ni1−xCuxO4 and La2−2xSr2xNi1−xTixO4. The onset of a charge-density wave below 200 K is proposed for both La2CuO4 and La2NiO4, but the atomic displacements would be short-range cooperative in mixed systems. The semiconductor-metallic transitions observed in several systems are found in many cases to obey the relation Ea ≈ kTmin, where ρ = ρ0 exp(−Ea/kT) and Tmin is the temperature of minimum resistivity ρ. This relation is interpreted in terms of a diffusive charge-carrier mobility with Ea ≈ ΔHm ≈ kT at T = Tmin.
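The geometric criterion quoted above can be written out explicitly in terms of the individual ionic radii r_A, r_B and r_O that enter the sums RAO and RBO defined in the abstract:

\[
  t \cong \frac{R_{AO}}{\sqrt{2}\,R_{BO}}
    = \frac{r_A + r_O}{\sqrt{2}\,(r_B + r_O)},
  \qquad
  t < 1 \;\Longrightarrow\; \text{compression of the B–OII–B bonds and tension on the A–OI–A bonds in the basal planes.}
\]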
Abstract:
Research on the achievement of rural and remote students in science and mathematics is located within a context of falling levels of participation in physical science and mathematics courses in Australian schools, and underrepresentation of rural students in higher education. International studies such as the Programme for International Student Assessment (PISA) have reported lower levels of mathematical and scientific literacy in Australian students from rural and remote schools (Thomson et al., 2011). The SiMERR national survey of science, mathematics and ICT education in rural and regional Australia (Lyons et al., 2006) identified factors affecting student achievement in rural and remote schools. Many of the issues faced by rural and remote students in their schools are likely to have implications for their university enrolments in science, technology, engineering and mathematics (STEM) courses. For example, rural and remote students are less likely to attend university in general than their city counterparts, and higher university attrition rates have been reported for remote students nationally. This paper examines the responses of a sample of rural/remote Australian first year STEM students at Australian universities to two questions: their intentions to complete the course, and whether, and if so why, they had ever considered withdrawing from their course. Results indicated that rural students who were still in their course by the end of first year were no more or less likely to consider withdrawing than were their peers from more populous centres. However, almost 20% of the rural cohort had considered withdrawing at some stage in their course, and their explanations provide insights into the reasoning of those who may not persist with their courses at university. These results, in the context of the greater attrition rate of remote students from university, point to the need to identify factors that positively impact on rural and remote students’ interest and achievement in science and mathematics. They also highlight a need for future research into the particular issues remote students may face in deciding whether or not to do science at the two key transition points of senior school and university/TAFE studies, and whether or not to persist in their tertiary studies. This paper is positioned at the intersection of two problems in Australian education. The first is a context of falling levels of participation in physical science and mathematics courses in Australian universities. The second is persistent inequitable access to, and retention in, tertiary education for students from rural and remote areas. Despite considerable research attention to both of these areas over recent years, these problems have thus far proved to be intractable. This paper therefore aims to briefly review the relevant Australian literature pertaining to these issues, that is, declining STEM enrolments, and the underrepresentation and retention of rural/remote students in higher education. Given the related problems in these two overlapping domains, we then explore the views of first year rural students enrolled in STEM courses, in relation to their intentions of withdrawing (or not) and the associated reasons for their views.
Abstract:
The topic of this dissertation lies in the intersection of harmonic analysis and fractal geometry. We particularly consider singular integrals in Euclidean spaces with respect to general measures, and we study how the geometric structure of the measures affects certain analytic properties of the operators. The thesis consists of three research articles and an overview. In the first article we construct singular integral operators on lower dimensional Sierpinski gaskets associated with homogeneous Calderón-Zygmund kernels. While these operators are bounded, their principal values fail to exist almost everywhere. Conformal iterated function systems generate a broad range of fractal sets. In the second article we prove that many of these limit sets are porous in a very strong sense, by showing that they contain holes spread in every direction. We then connect these results with singular integrals: we exploit the fractal structure of these limit sets in order to establish that singular integrals associated with very general kernels converge weakly. Boundedness questions constitute a central topic of investigation in the theory of singular integrals. In the third article we study singular integrals of different measures. We prove a very general boundedness result in the case where the two underlying measures are separated by a Lipschitz graph. As a consequence we show that a certain weak convergence holds for a large class of singular integrals.
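As background for the statement that the principal values fail to exist, recall the usual definition of the principal value of a singular integral with respect to a measure; the kernel K and measure μ below are generic stand-ins, not the specific objects constructed in the thesis:

\[
  \operatorname{p.v.}\, T_{\mu}f(x)
  = \lim_{\varepsilon \to 0} \int_{|x-y| > \varepsilon} K(x-y)\, f(y)\, d\mu(y),
\]

and the first article shows that for the operators built on lower dimensional Sierpinski gaskets this limit fails to exist μ-almost everywhere, even though the operators themselves are bounded.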
Abstract:
A composition operator is a linear operator between spaces of analytic or harmonic functions on the unit disk, which precomposes a function with a fixed self-map of the disk. A fundamental problem is to relate properties of a composition operator to the function-theoretic properties of the self-map. In recent decades these operators have been studied very actively in connection with various function spaces. The study of composition operators lies in the intersection of two central fields of mathematical analysis: function theory and operator theory. This thesis consists of four research articles and an overview. In the first three articles the weak compactness of composition operators is studied on certain vector-valued function spaces. A vector-valued function takes its values in some complex Banach space. In the first and third articles sufficient conditions are given for a composition operator to be weakly compact on different versions of vector-valued BMOA spaces. In the second article characterizations are given for the weak compactness of a composition operator on harmonic Hardy spaces and spaces of Cauchy transforms, provided the functions take values in a reflexive Banach space. Composition operators are also considered on certain weak versions of the above function spaces. In addition, the relationship of different vector-valued function spaces is analyzed. In the fourth article weighted composition operators are studied on the scalar-valued BMOA space and its subspace VMOA. A weighted composition operator is obtained by first applying a composition operator and then a pointwise multiplier. A complete characterization is given for the boundedness and compactness of a weighted composition operator on BMOA and VMOA. Moreover, the essential norm of a weighted composition operator on VMOA is estimated. These results generalize many previously known results about composition operators and pointwise multipliers on these spaces.
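In symbols, with φ a fixed self-map of the unit disk (the inducing map) and u a fixed function on the disk (the weight), the operators discussed above act as

\[
  (C_{\varphi}f)(z) = f(\varphi(z)),
  \qquad
  (uC_{\varphi}f)(z) = u(z)\,f(\varphi(z)),
  \qquad z \in \mathbb{D},
\]

the second being the weighted composition operator obtained by following the composition operator with the pointwise multiplier u.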
Composition operators, Aleksandrov measures and value distribution of analytic maps in the unit disc
Abstract:
A composition operator is a linear operator that precomposes any given function with another function, which is held fixed and called the symbol of the composition operator. This dissertation studies such operators and questions related to their theory in the case when the functions to be composed are analytic in the unit disc of the complex plane. Thus the subject of the dissertation lies at the intersection of analytic function theory and operator theory. The work contains three research articles. The first article is concerned with the value distribution of analytic functions. In the literature there are two different conditions which characterize when a composition operator is compact on the Hardy spaces of the unit disc. One condition is in terms of the classical Nevanlinna counting function, defined inside the disc, and the other condition involves a family of certain measures called the Aleksandrov (or Clark) measures and supported on the boundary of the disc. The article explains the connection between these two approaches from a function-theoretic point of view. It is shown that the Aleksandrov measures can be interpreted as kinds of boundary limits of the Nevanlinna counting function as one approaches the boundary from within the disc. The other two articles investigate the compactness properties of the difference of two composition operators, which is beneficial for understanding the structure of the set of all composition operators. The second article considers this question on the Hardy and related spaces of the disc, and employs Aleksandrov measures as its main tool. The results obtained generalize those existing for the case of a single composition operator. However, there are some peculiarities which do not occur in the theory of a single operator. The third article studies the compactness of the difference operator on the Bloch and Lipschitz spaces, improving and extending results given in the previous literature. Moreover, in this connection one obtains a general result which characterizes the compactness and weak compactness of the difference of two weighted composition operators on certain weighted Hardy-type spaces.
Abstract:
In this thesis we study a few games related to non-wellfounded and stationary sets. Games have turned out to be an important tool in mathematical logic, ranging from semantic games that define the truth of a sentence in a given logic to, for example, games on real numbers whose determinacy has important consequences for the consistency of certain large cardinal assumptions. The equality of non-wellfounded sets can be determined by a so-called bisimulation game, already used to identify processes in theoretical computer science and possible-world models in modal logic. Here we present a game to classify non-wellfounded sets according to their branching structure. We also study games on stationary sets, moving back to classical wellfounded set theory. In addition, we describe a way to approximate non-wellfounded sets with hereditarily finite wellfounded sets; the framework used to do this is domain theory. In the Banach-Mazur game, also called the ideal game, the players play a descending sequence of stationary sets and the second player tries to keep their intersection stationary. The game is connected to precipitousness of the corresponding ideal. In the pressing-down game the first player plays regressive functions defined on stationary sets and the second player responds with a stationary set on which the function is constant, trying to keep the intersection stationary. This game has applications in model theory to the determinacy of the Ehrenfeucht-Fraïssé game. We show that it is consistent that these games are not equivalent.
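Schematically, the two games described above can be written as follows (the precise conventions in the thesis may differ); both are games of length ω played with stationary subsets of a regular uncountable cardinal, and the second player wins a run if and only if the resulting intersection is stationary:

\[
\begin{aligned}
&\text{Banach--Mazur (ideal) game:} && \text{the players alternately choose stationary } S_0 \supseteq S_1 \supseteq S_2 \supseteq \cdots, \text{ and II wins iff } \textstyle\bigcap_{n<\omega} S_n \text{ is stationary;}\\
&\text{Pressing-down game:} && \text{I plays regressive functions } f_n, \text{ II replies with stationary } S_n \subseteq S_{n-1} \text{ on which } f_n \text{ is constant,}\\
&&& \text{and II wins iff } \textstyle\bigcap_{n<\omega} S_n \text{ is stationary.}
\end{aligned}
\]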
Abstract:
The number of international and local students whose first language is not English and who are studying in English-medium universities has increased significantly in the past decade. Many of these students aim to start working in the country in which they studied; however, some employers have suggested that graduates seeking employment have insufficient language skills. This study provides a detailed insight into the changing writing demands from the last year of university study to the first year in the workforce of engineering and accounting professionals (our two case study professions). It relates these to the demands of the writing component of IELTS, which is increasingly used for exit or professional entry testing, although not expressly designed for this purpose. Data include interviews with final year students, lecturers, employers and new graduates in their first few years in the workforce, as well as professional board members. Employers also reviewed final year assignments and IELTS writing samples at different levels. Most stakeholders agreed that graduates entering the workforce are underprepared for the writing demands of their professions. When compared with the university writing tasks, the workplace writing expected of new graduates was perceived as different in terms of genre, the tailoring of a text for a specific audience, and the processes of review and editing involved. Stakeholders expressed a range of views on the suitability of academic proficiency tests (such as IELTS) as university exit tests and for entry into the professions. With regard to IELTS, while some saw the relevance of the two writing tasks, particularly in relation to academic writing, others questioned the extent to which two timed tasks representing limited genres could elicit a representative sample of the professional writing required, particularly in the context of engineering. The findings are discussed in relation to different test purposes, the intersection between academic and specific purpose testing, and the role of domain experts in test validation.
Abstract:
This study examines philosophically the main theories and methodological assumptions of the field known as the cognitive science of religion (CSR). The study makes a philosophically informed reconstruction of the methodological principles of CSR, indicates problems with them, and examines possible solutions to these problems. The study focuses on several different CSR writers, namely, Scott Atran, Justin Barrett, Pascal Boyer and Dan Sperber. CSR theorising is done at the intersection of the cognitive sciences, anthropology and evolutionary psychology. This multidisciplinary nature makes CSR a fertile ground for philosophical considerations coming from philosophy of psychology, philosophy of mind and philosophy of science. The study begins by spelling out the methodological assumptions and auxiliary theories of CSR writers by situating these theories and assumptions in the nexus of existing approaches to religion. The distinctive feature of CSR is its emphasis on information processing: CSR writers claim that contemporary cognitive sciences can inform anthropological theorising about the human mind and offer tools for producing causal explanations. Further, they claim to explain the prevalence and persistence of religion by cognitive systems that undergird religious thinking. I also examine the core theoretical contributions of the field, focusing mainly on (1) the “minimal counter-intuitiveness hypothesis” and (2) the different ways in which supernatural agent representations activate our cognitive systems. Generally speaking, CSR writers argue for the naturalness of religion: religious ideas and practices are widespread and pervasive because human cognition operates in such a way that religious ideas are easy to acquire and transmit. The study raises two philosophical problems, namely, the “problem of scope” and the “problem of religious relevance”. The problem of scope is created by the insistence of several critics of CSR that CSR explanations are mostly irrelevant for explaining religion. Most CSR writers themselves hold that cognitive explanations can answer most of our questions about religion. I argue that the problem of scope is created by differences in explanation-begging questions: the former group is interested in explaining different things than the latter group. I propose that we should not stick too rigidly to one set of methodological assumptions, but rather acknowledge that different assumptions might help us to answer different questions about religion. Instead of adhering to some robust metaphysics, as some strongly naturalistic writers argue, we should adopt a pragmatic and explanatory pluralist approach which would allow different kinds of methodological presuppositions in the study of religion, provided that they attempt to answer different kinds of why-questions, since religion appears to be a multi-faceted phenomenon that spans a variety of special sciences. The problem of religious relevance is created by the insistence of some writers that CSR theories show religious beliefs to be false or irrational, whereas others invoke CSR theories to defend certain religious ideas. The problem is interesting because it reveals the more general philosophical assumptions of those who make such interpretations. CSR theories can be (and have been) interpreted in terms of three different philosophical frameworks: strict naturalism, broad naturalism and theism. I argue that CSR theories can be interpreted inside all three frameworks without doing violence to the theories and that these frameworks give different kinds of results regarding the religious relevance of CSR theories.
Abstract:
Motivated by a problem from fluid mechanics, we consider a generalization of the standard curve shortening flow problem for a closed embedded plane curve such that the area enclosed by the curve is forced to decrease at a prescribed rate. Using formal asymptotic and numerical techniques, we derive possible extinction shapes as the curve contracts to a point, dependent on the rate of decreasing area; we find there is a wider class of extinction shapes than for standard curve shortening, for which initially simple closed curves are always asymptotically circular. We also provide numerical evidence that self-intersection is possible for non-convex initial conditions, distinguishing between pinch-off and coalescence of the curve interior.
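One concrete way to realise the prescribed-rate generalisation (a plausible formulation consistent with the abstract, not necessarily the authors' exact one) is to note that under standard curve shortening, where the inward normal velocity equals the curvature, a simple closed curve loses enclosed area at the universal rate 2π; forcing the area to decrease at a prescribed rate a(t) can then be achieved by adding a spatially uniform correction to the normal velocity:

\[
  v_{n} = \kappa \;\Longrightarrow\; \frac{dA}{dt} = -\oint \kappa\, ds = -2\pi,
  \qquad
  v_{n} = \kappa + \lambda(t), \quad \lambda(t) = \frac{a(t) - 2\pi}{L(t)}
  \;\Longrightarrow\; \frac{dA}{dt} = -a(t),
\]

where L(t) is the length of the curve at time t and a(t) > 0 is the prescribed rate of area loss.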
Abstract:
A geodesic-based approach using Lamb waves is proposed to locate the acoustic emission (AE) source and damage in an isotropic metallic structure. In the case of the AE (passive) technique, the elastic waves take the shortest path from the source to the sensor array distributed in the structure. The geodesics are computed on the meshed surface of the structure using graph theory based on Dijkstra's algorithm. By virtually propagating the waves in reverse from these sensors along the geodesic paths and locating the first intersection point of these waves, one can obtain the AE source location. The same approach is extended to the detection of damage in a structure. The wave response matrices of the given sensor configuration are obtained experimentally for the healthy and the damaged structure. The healthy and damaged response matrices are compared, and their difference gives information about the reflection of waves from the damage. These waves are backpropagated from the sensors, and the above method is used to locate the damage by finding the point where the geodesics intersect. In this work, the geodesic approach is shown to be suitable for obtaining a practicable source location solution in a more general set-up, on any arbitrary surface containing finite discontinuities. Experiments were conducted on aluminum specimens of simple and complex geometry to validate this new method.
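To make the localisation procedure concrete, the following is a minimal sketch of the two ingredients named above: Dijkstra's algorithm for geodesic distances on a meshed surface (represented here as a weighted adjacency graph) and selection of the mesh node where the back-propagated wavefronts from all sensors meet. The function names, the graph representation, the mismatch criterion and the single-group-velocity assumption are illustrative choices, not the authors' implementation.

import heapq

def dijkstra(adjacency, source):
    """Geodesic (shortest-path) distance from `source` to every mesh node.
    `adjacency` maps node -> list of (neighbour, edge_length)."""
    dist = {node: float("inf") for node in adjacency}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in adjacency[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def locate_source(adjacency, arrival_times, wave_speed):
    """Back-propagate from each sensor along geodesics and return the node
    where the wavefronts intersect, i.e. where the quantities
    (geodesic distance - wave_speed * relative arrival time) agree best
    across all sensors. `arrival_times` maps sensor node -> arrival time."""
    fields = {s: dijkstra(adjacency, s) for s in arrival_times}
    t0 = min(arrival_times.values())
    best_node, best_spread = None, float("inf")
    for node in adjacency:
        residuals = [fields[s][node] - wave_speed * (arrival_times[s] - t0)
                     for s in arrival_times]
        spread = max(residuals) - min(residuals)
        if spread < best_spread:
            best_node, best_spread = node, spread
    return best_node

In a real application the adjacency graph would be built from the surface mesh of the structure (nodes at mesh vertices, edge weights equal to Euclidean edge lengths) and wave_speed taken as the group velocity of the relevant Lamb-wave mode; the damage-location step would use the same routine with the arrival information extracted from the difference of the healthy and damaged response matrices.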