913 results for point-to-point speed cameras
Abstract:
A portable, low-power, computerized electronic system has been developed, called the Medidor de Velocidad de Vehículos por Ultrasonidos de Alta Exactitud (VUAE), a high-accuracy ultrasonic vehicle speed meter. The high accuracy of the measurement obtained with the VUAE makes it suitable as a reference measurement of the speed of a vehicle travelling on a road; the VUAE can therefore be used as a reference against which the error of commercial kinemometers is estimated. The VUAE consists of n (n≥2) pairs of piezoelectric ultrasonic emitters and receivers, called E-Rult. The emitters of the n E-Rult pairs generate n ultrasonic barriers, and the piezoelectric receivers capture the echo signals when a vehicle crosses the barriers. These echoes are processed digitally to obtain representative signals. Then, using the cross-correlation technique, the time difference between the echoes captured at each barrier is estimated with high accuracy. From these time differences and the known distance between each of the n ultrasonic barriers, the speed of the vehicle can be estimated with high accuracy. The VUAE was compared against a reference speed-measurement system based on piezoelectric cables.
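The speed estimate described above reduces to locating the peak of the cross-correlation between the echo signals of two barriers and dividing the known barrier spacing by the corresponding time lag. The following Python sketch illustrates that idea on synthetic echoes; the sampling rate, pulse shape and barrier spacing are assumptions for illustration, not values from the VUAE.

```python
import numpy as np

def estimate_speed(echo_a, echo_b, fs, barrier_distance_m):
    """Estimate vehicle speed from echoes captured at two ultrasonic barriers.
    echo_a, echo_b: processed echo signals sampled at fs (Hz), same length.
    barrier_distance_m: known spacing between the two barriers (m)."""
    a = echo_a - echo_a.mean()
    b = echo_b - echo_b.mean()
    xcorr = np.correlate(b, a, mode="full")        # cross-correlation of the two echoes
    lag_samples = np.argmax(xcorr) - (len(a) - 1)  # lag of the correlation peak, in samples
    dt = lag_samples / fs                          # time between barrier crossings
    return barrier_distance_m / dt

# Synthetic check: the same 40 kHz echo delayed by 30 ms over a 1 m barrier spacing
fs = 100_000                                       # assumed 100 kHz sampling rate
t = np.arange(0, 0.05, 1 / fs)
echo_a = np.exp(-((t - 0.01) / 0.002) ** 2) * np.sin(2 * np.pi * 40_000 * t)
echo_b = np.roll(echo_a, int(0.03 * fs))
print(estimate_speed(echo_a, echo_b, fs, 1.0))     # ~33.3 m/s, i.e. about 120 km/h
```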
Abstract:
The design of retaining walls subjected to seismic loads has traditionally been carried out with methods based on pseudo-analytical procedures such as the Mononobe-Okabe method, which has on occasion led to unsafe designs and to the failure of many retaining walls under earthquake action. The recommendations derived from the Mononobe-Okabe theory have been included in numerous seismic design codes, and it is clear that they need to be revised. An important review of the design methods for anti-seismic structures, such as retaining walls located in earthquake-prone areas, is currently under way, driven by the introduction at the beginning of the 1990s of the Displacement Response Spectrum (DRS) and the Capacity Demand Diagram (CDD), which represent an important change in the way the Elastic Response Spectrum (ERS) is presented. On the other hand, under earthquake action the dynamic characteristics of a soil have traditionally been described by the shear-wave velocity that can develop at a site, together with the plasticity and damping characteristics of the soil. The principle of energy conservation explains why an upward-propagating seismic shear wave can be amplified when travelling from a medium with high shear-wave velocity (rock) into a medium with lower velocity (a soil deposit), as happened in the Mexico earthquake of 1985. This amplification is a function of the velocity gradient, or of the impedance contrast at the boundary between the two media. This paper proposes a method, based on Performance-Based Seismic Design, for the design of retaining walls in different soils subjected to earthquake action.
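As a numerical illustration of the impedance-contrast amplification mentioned above (not part of the proposed design method), the sketch below evaluates the standard displacement transmission coefficient for a vertically propagating shear wave crossing a rock/soil interface; the densities and shear-wave velocities are assumed values.

```python
def transmitted_amplitude_ratio(rho_rock, vs_rock, rho_soil, vs_soil):
    """Displacement-amplitude transmission coefficient for an upward SH wave crossing
    a welded rock/soil interface at normal incidence (lossless media):
        T = 2 * Z_rock / (Z_rock + Z_soil), with seismic impedance Z = rho * Vs.
    T > 1 whenever the wave enters the lower-impedance (softer) medium."""
    z_rock = rho_rock * vs_rock
    z_soil = rho_soil * vs_soil
    return 2.0 * z_rock / (z_rock + z_soil)

# Assumed values: rock (2400 kg/m3, Vs = 1500 m/s) over soft soil (1600 kg/m3, Vs = 150 m/s)
print(transmitted_amplitude_ratio(2400.0, 1500.0, 1600.0, 150.0))   # ~1.9x amplification
```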
Abstract:
In this paper a model of the measuring process of sonic anemometers (ultrasonic pulse based) is presented. The differential equations that describe the travel of ultrasonic pulses are solved in the general case of a non-steady, non-uniform atmospheric flow field. The concepts of instantaneous line average and travelling-pulse-referenced average are established and employed to explain and calculate the differences between the measured turbulent speed (travelling-pulse-referenced average) and the line-averaged one. The limit k₁l = 1 established by Kaimal in 1968 as the maximum value for which the influence of the sonic measuring process on the measurement of turbulent components can be neglected is reviewed here. Three particular measurement cases are analysed: a non-steady, uniform flow speed field; a steady, non-uniform flow speed field; and finally an atmospheric flow speed field. In the first case, for a harmonic time-dependent flow field, the Mach number, M (the ratio of flow speed to sound speed), and the time delay between pulses reveal themselves to be important parameters in the behaviour of sonic anemometers within the range of operation. The second case demonstrates how the spatial non-uniformity of the flow speed field leads to an influence of the finite transit time of the pulses (M≠0) even in the absence of non-steady behaviour of the wind speed. In the last case, a model of the influence of the sonic anemometer processes on the measurement of wind speed spectral characteristics is presented. The new solution is compared to the line-averaging models existing in the literature. Mach number and time delay significantly distort the measurement in the normal operational range. Classical line-averaging solutions are recovered when the Mach number and the time delay between pulses go to zero in the new proposed model. The results obtained from the mathematical model have been applied to the calculation of errors in different configurations of practical interest, such as an anemometer located on a meteorological mast and the transfer function of a sensor in an atmospheric wind. The expressions obtained can also be applied to determine the quality requirements of the flow in a wind tunnel used for ultrasonic anemometer calibration.
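For context, the idealized transit-time relation on which pulsed sonic anemometry rests (and which the non-steady, non-uniform model above generalizes) can be written in a few lines; the path length and wind value below are assumptions for illustration.

```python
def sonic_path_speed(t_forward, t_backward, path_length):
    """Along-path wind component from the forward and backward pulse transit times,
    assuming steady, uniform flow: v = (L / 2) * (1/t_forward - 1/t_backward).
    The speed of sound cancels out of this relation."""
    return 0.5 * path_length * (1.0 / t_forward - 1.0 / t_backward)

# Assumed example: L = 0.15 m acoustic path, c = 343 m/s, along-path wind v = 5 m/s
L, c, v = 0.15, 343.0, 5.0
t_fwd, t_bwd = L / (c + v), L / (c - v)     # the downwind pulse travels faster
print(sonic_path_speed(t_fwd, t_bwd, L))    # recovers 5.0 m/s
print(v / c)                                # Mach number M ~ 0.015, a key parameter above
```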
Abstract:
Researchers and industries throughout the world are firmly committed to making the machining process fast, precise and productive. Strong worldwide competition has created demand for economical machining processes with high chip-removal rates that produce parts of high quality. Among the new technologies that have begun to be employed, and one that is likely to become the path to competitiveness in a short time, is high speed machining (HSM). HSM technology is an essential component in the effort to maintain and increase the global competitiveness of companies. In recent years high speed machining has gained great importance, with increasing attention being given to the development and market availability of machine tools with very high spindle speeds (20,000-100,000 rpm). High speed machining is being used not only for aluminium and copper alloys, but also for difficult-to-machine materials such as hardened steels and nickel-based superalloys. For difficult-to-cut materials, however, few publications are available, especially on turning. Before HSM technology can be employed economically, all the components involved in the machining process, including the machine, the spindle, the tool and the personnel, must be attuned to the peculiarities of this new process. With regard to machine tools, this means that they have to satisfy particular safety requirements. The tools, through the optimization of their geometries, substrates and coatings, contribute to the success of the process. The present work studies the behaviour of various geometries of ceramic inserts (Al2O3 + SiCw and Al2O3 + TiC) and of PCBN inserts with two CBN concentrations in their standard form, as well as modifications to the cutting-edge geometry, in high speed turning of nickel-based superalloys (Inconel 718 and Waspaloy). The materials were heat treated to hardnesses of 44 and 40 HRC, respectively, and machined under dry cutting conditions and with the minimal quantity lubricant (MQL) technique in order to meet environmental requirements. Nickel-based superalloys are known as difficult-to-machine materials because of their high hardness, high mechanical strength at high temperature, affinity for reacting with tool materials and low thermal conductivity. Machining of superalloys adversely affects the integrity of the workpiece. For this reason, special care must be taken, by controlling the main machining parameters, to ensure tool life and the surface integrity of machined components. Experiments were carried out under various cutting conditions and tool geometries to evaluate cutting force, temperature, acoustic emission, surface integrity (surface roughness, residual stress, microhardness and microstructure) and wear mechanisms. Based on the results, the geometry with the best performance in the above parameters is recommended and the efficiency of the MQL technique is confirmed.
Among the tools and geometries tested, the best performance was obtained with the CC650 ceramic tool, followed by the CC670 ceramic tool, both with a round shape and geometry 2 (T-chamfer of 0.15 mm x 15° with a 0.03 mm edge radius).
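To relate the spindle speeds quoted above (20,000-100,000 rpm) to cutting speed, the usual relation vc = π·D·n can be used; the 50 mm diameter in this sketch is an assumed value, not one from the experiments.

```python
import math

def cutting_speed_m_per_min(diameter_mm, rpm):
    """Peripheral cutting speed vc = pi * D * n, with D converted to metres."""
    return math.pi * (diameter_mm / 1000.0) * rpm

# Assumed 50 mm diameter across the quoted spindle-speed range
for rpm in (20_000, 100_000):
    print(rpm, "rpm ->", round(cutting_speed_m_per_min(50, rpm)), "m/min")
# 20000 rpm -> ~3142 m/min, 100000 rpm -> ~15708 m/min
```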
Abstract:
Compilation tuning is the process of adjusting the values of a compiler's options to improve some features of the final application. In this paper, a strategy based on a genetic algorithm and a multi-objective scheme is proposed to deal with this task. Unlike previous works, we take advantage of domain knowledge to provide a problem-specific genetic operator that improves both the speed of convergence and the quality of the results. The strategy is evaluated by means of a case study aimed at improving the performance of the well-known Apache web server. Experimental results show that an overall improvement of 7.5% can be achieved. Furthermore, the adaptive approach markedly speeds up the convergence of the original strategy.
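A minimal sketch of the kind of genetic search over compiler options described above is given below. The flag pool, the single-objective stand-in fitness and the generic crossover/mutation operators are illustrative assumptions; the paper's strategy is multi-objective and uses a problem-specific operator, and a real run would score each genome by rebuilding and benchmarking the target application (e.g. Apache).

```python
import random

# Illustrative option pool; a real fitness function would compile the target with
# the selected flags and time a benchmark instead of using the stand-in below.
FLAGS = ["-O2", "-O3", "-funroll-loops", "-fomit-frame-pointer", "-ffast-math", "-flto"]
ASSUMED_GAIN = [0.04, 0.06, 0.02, 0.01, 0.03, 0.05]   # assumed per-flag speedups

def fitness(genome):
    """Stand-in for a measured runtime: lower is better."""
    gain = sum(g for g, on in zip(ASSUMED_GAIN, genome) if on)
    return 1.0 - gain + random.gauss(0.0, 0.005)      # noisy benchmark measurement

def evolve(pop_size=12, generations=20, mutation_rate=0.1):
    pop = [[random.randint(0, 1) for _ in FLAGS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                          # rank genomes by (noisy) fitness
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(FLAGS))      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate) for bit in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    best = min(pop, key=fitness)
    return [f for f, on in zip(FLAGS, best) if on]

print(evolve())    # e.g. ['-O3', '-ffast-math', '-flto']
```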
Abstract:
Initiated in May 2011, several months after the Fukushima nuclear disaster, Germany's energy transformation (Energiewende) has been presented as an irrevocable plan, and, due to the speed of change required, it represents a new quality in Germany's energy strategy. Its main objectives include: nuclear energy being phased out by 2022, the development of renewable energy sources, the expansion of transmission networks, the construction of new conventional power plants and an improvement in energy efficiency. The cornerstone of the strategy is the development of renewable energy. Under Germany's amended renewable energy law, the proportion of renewable energy in electricity generation is supposed to increase steadily from the current level of around 20% to approximately 38% in 2020. In 2030, renewable energy is expected to account for 50% of electricity generation. This is expected to increase to 65% in 2040 and to as much as 80% in 2050. The impact of the Energiewende is not limited to the sphere of energy supplies. In the medium and long term, it will change not only the way the German economy operates, but also the functioning of German society and the state. Facing difficulties with the expansion of transmission networks, the excessive cost of building wind farms, and problems with the stability of electricity supplies, especially during particularly cold winters, the federal government has so far tended to centralise power and limit the independence of the German federal states with regard to their respective energy policies, justifying this with the need for greater co-ordination. The Energiewende may also become the beginning of a "third industrial revolution", i.e. a transition to a green economy and a society based on sustainable development. This will require a new "social contract" that will redefine the relations between the state, society and the economy. Negotiating such a contract will be one of the greatest challenges for German policy in the coming years.
Abstract:
Particle flow patterns were investigated for wet granulation and dry powder mixing in ploughshare mixers using Positron Emission Particle Tracking (PEPT). In a 4-litre mixer, calcium carbonate with a mean size of 45 μm was granulated using a 50 wt.% solution of glycerol and water as binding fluid, and particle movement was followed using a 600-μm calcium hydroxy-phosphate tracer particle. In a 20-litre mixer, dry powder flow was studied using a 600-μm resin bead tracer particle to simulate the bulk polypropylene powder with mean size 600 μm. Important differences were seen between particle flow patterns for the wet and dry systems. Particle speed relative to blade speed was lower in the wet system than in the dry system, with the ratios of average particle speed to blade tip speed for all experiments in the range 0.01-0.15. In the axial plane, the same particle motion was observed around each blade; this provides a significant advance for modelling flow in ploughshare mixers. For the future, a detailed understanding of the local velocity, acceleration and density variations around a plough blade will reveal the effects of flow patterns in granulating systems on the resultant distribution of granular product attributes such as size, density and strength.
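To make the reported speed ratios concrete, the short sketch below converts an assumed blade diameter and shaft speed (not values from the study) into a blade tip speed and the corresponding range of mean particle speeds implied by the 0.01-0.15 ratio.

```python
import math

def blade_tip_speed(diameter_m, rpm):
    """Blade tip speed u = pi * D * N / 60, in m/s."""
    return math.pi * diameter_m * rpm / 60.0

u_tip = blade_tip_speed(0.3, 200)            # assumed 0.3 m blade sweep at 200 rpm
print(round(u_tip, 2), "m/s tip speed")
print(round(0.01 * u_tip, 3), "to", round(0.15 * u_tip, 3),
      "m/s mean particle speed implied by the 0.01-0.15 ratio")
```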
Abstract:
The n-tuple recognition method is briefly reviewed, summarizing the main theoretical results. Large-scale experiments carried out on Stat-Log project datasets confirm this method as a viable competitor to more popular methods due to its speed, simplicity, and accuracy on the majority of a wide variety of classification problems. A further investigation into the failure of the method on certain datasets finds the problem to be largely due to a mismatch between the scales which describe generalization and data sparseness.
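For readers unfamiliar with the method, a minimal WISARD-style n-tuple recognizer over binary feature vectors is sketched below; the tuple count, tuple size and toy data are assumptions for illustration, not the StatLog configuration.

```python
import random
from collections import defaultdict

class NTupleClassifier:
    """Minimal WISARD-style n-tuple recognizer over binary feature vectors.
    A sketch of the method reviewed above, not the implementation used in the experiments."""

    def __init__(self, n_bits, n_tuples=50, tuple_size=8, seed=0):
        rng = random.Random(seed)
        # Each tuple is a fixed random selection of bit positions.
        self.tuples = [rng.sample(range(n_bits), tuple_size) for _ in range(n_tuples)]
        self.memory = defaultdict(set)   # (class, tuple_index) -> set of seen bit patterns

    def _patterns(self, x):
        for i, positions in enumerate(self.tuples):
            yield i, tuple(x[p] for p in positions)

    def train(self, x, label):
        for i, pattern in self._patterns(x):
            self.memory[(label, i)].add(pattern)

    def classify(self, x, labels):
        # Score = number of tuples whose pattern was seen for that class during training.
        scores = {c: sum(pattern in self.memory[(c, i)]
                         for i, pattern in self._patterns(x)) for c in labels}
        return max(scores, key=scores.get)

# Toy usage: two 16-bit "classes"
clf = NTupleClassifier(n_bits=16)
clf.train([1] * 8 + [0] * 8, "A")
clf.train([0] * 8 + [1] * 8, "B")
print(clf.classify([1] * 7 + [0] * 9, ["A", "B"]))   # -> "A"
```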
Abstract:
A major application of computers has been to control physical processes, in which the computer is embedded within some large physical process and is required to control concurrent physical processes. The main difficulty with these systems is their event-driven characteristics, which complicate their modelling and analysis. Although a number of researchers in the process systems community have approached the problems of modelling and analysis of such systems, there is still a lack of standardised software development formalisms for the system (controller) development, particularly at the early stages of the system design cycle. This research forms part of a larger research programme concerned with the development of real-time process-control systems in which software is used to control concurrent physical processes. The general objective of the research in this thesis is to investigate the use of formal techniques in the analysis of such systems at the early stages of their development, with a particular bias towards application to high speed machinery. Specifically, the research aims to generate a standardised software development formalism for real-time process-control systems, particularly for software controller synthesis. In this research, a graphical modelling formalism called Sequential Function Chart (SFC), a variant of Grafcet, is examined. SFC, which is defined in the international standard IEC 1131 as a graphical description language, has been used widely in industry and has achieved an acceptable level of maturity and acceptance. A comparative study between SFC and Petri nets is presented in this thesis. To overcome identified inaccuracies in SFC, a formal definition of the firing rules for SFC is given. To provide a framework in which SFC models can be analysed formally, an extended time-related Petri net model for SFC is proposed and the transformation method is defined. The SFC notation lacks a systematic way of synthesising system models from real-world systems. Thus a standardised approach to the development of real-time process-control systems is required, such that the system (software) functional requirements can be identified, captured and analysed. A rule-based approach and a method called the system behaviour driven method (SBDM) are proposed as a development formalism for real-time process-control systems.
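As an illustration of the token-flow semantics that the Petri net transformation gives to SFC steps and transitions, here is a minimal place/transition net with the usual firing rule; it is a sketch only and omits the timed extension proposed in the thesis.

```python
class PetriNet:
    """Minimal place/transition net, sketching the token-flow semantics used when
    analysing SFC models (not the thesis's extended time-related model)."""

    def __init__(self, marking, transitions):
        self.marking = dict(marking)          # place -> token count
        self.transitions = transitions        # name -> (input places, output places)

    def enabled(self, t):
        inputs, _ = self.transitions[t]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"transition {t!r} is not enabled")
        inputs, outputs = self.transitions[t]
        for p in inputs:
            self.marking[p] -= 1              # consume one token per input place
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1   # produce one token per output place

# Two SFC steps in sequence: step1 --t1--> step2
net = PetriNet({"step1": 1, "step2": 0}, {"t1": (["step1"], ["step2"])})
net.fire("t1")
print(net.marking)    # {'step1': 0, 'step2': 1}
```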
Abstract:
When viewing a drifting plaid stimulus, perceived motion alternates over time between coherent pattern motion and a transparent impression of the two component gratings. It is known that changing the intrinsic attributes of such patterns (e.g. speed, orientation and spatial frequency of the components) can influence percept predominance. Here, we investigate the contribution of extrinsic factors to perception; specifically, contextual motion and eye movements. In the first experiment, the percept most similar to the speed and direction of surround motion increased in dominance, implying a tuned integration process. This shift primarily involved an increase in the dominance durations of the consistent percept. The second experiment measured eye movements under similar conditions. Saccades were not associated with perceptual transitions, though blink rate increased around the time of a switch. This indicates that saccades do not cause switches, yet saccades in a congruent direction might help to prolong a percept because (i) more saccades were directionally congruent with the currently reported percept than expected by chance, and (ii) when observers were asked to make deliberate eye movements along one motion axis, percept reports in that direction increased. Overall, we find evidence that perception of bistable motion can be modulated by information from spatially adjacent regions and by changes to the retinal image caused by blinks and saccades.
Abstract:
Masking, adaptation, and summation paradigms have been used to investigate the characteristics of early spatio-temporal vision. Each has been taken to provide evidence for (i) oriented and (ii) nonoriented spatial-filtering mechanisms. However, subsequent findings suggest that the evidence for nonoriented mechanisms has been misinterpreted: those experiments might have revealed the characteristics of suppression (e.g., gain control), not excitation, or merely the isotropic subunits of the oriented detecting mechanisms. To shed light on this, we used all three paradigms to focus on the ‘high-speed’ corner of spatio-temporal vision (low spatial frequency, high temporal frequency), where cross-oriented achromatic effects are greatest. We used flickering Gabor patches as targets and a 2IFC procedure for monocular, binocular, and dichoptic stimulus presentations. To account for our results, we devised a simple model involving an isotropic monocular filter stage feeding orientation-tuned binocular filters. Both filter stages are adaptable, and their outputs are available to the decision stage following nonlinear contrast transduction. However, the monocular isotropic filters (i) adapt only to high-speed stimuli (consistent with a magnocellular subcortical substrate) and (ii) benefit decision making only for high-speed stimuli (i.e., isotropic monocular outputs are available only for high-speed stimuli). According to this model, the visual processes revealed by masking, adaptation, and summation are related but not identical.
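The signal flow of the model described above can be caricatured in a few lines: an isotropic monocular stage feeds an orientation-tuned binocular stage, all outputs pass through a nonlinear contrast transducer, and the monocular outputs reach the decision stage only for high-speed stimuli. The transducer exponents, orientation tuning, gains, and the 15 Hz 'high-speed' criterion below are assumed placeholders, not fitted parameters from the study.

```python
import numpy as np

def transducer(r, p=2.4, q=2.0, z=1.0):
    """Nonlinear contrast transduction applied before the decision stage (assumed form)."""
    return r**p / (r**q + z)

def model_response(c_left, c_right, grating_ori_deg, temporal_freq_hz,
                   mono_gain=1.0, bino_gain=1.0, high_speed_hz=15.0):
    """Isotropic monocular stage -> orientation-tuned binocular stage -> decision (max).
    Monocular outputs are available to the decision stage only for high-speed stimuli."""
    mono_left = mono_gain * c_left            # isotropic: no orientation tuning
    mono_right = mono_gain * c_right
    tuning = np.cos(np.deg2rad(grating_ori_deg)) ** 2   # assumed binocular orientation tuning
    bino = bino_gain * tuning * (mono_left + mono_right)
    outputs = [transducer(bino)]
    if temporal_freq_hz >= high_speed_hz:     # isotropic monocular route: high speed only
        outputs += [transducer(mono_left), transducer(mono_right)]
    return max(outputs)                       # decision stage picks the strongest output

print(model_response(0.5, 0.5, grating_ori_deg=0, temporal_freq_hz=20))
```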
Abstract:
Greenhouse cultivation is an energy-intensive process, so it is worthwhile to introduce energy-saving measures and alternative energy sources. Here we show that there is scope for energy saving in fan-ventilated greenhouses. Measurements of electricity usage as a function of fan speed have been performed for two models of 1.25 m diameter greenhouse fans and compared to theoretical values. Reducing the speed can cut the energy usage per volume of air moved by more than 70%. To minimize the capital cost of low-speed operation, a cooled greenhouse has been built in which the fan speed responds to sunlight such that full speed is reached only around noon. The energy saving is about 40% compared to constant-speed operation. Direct operation of fans from solar-photovoltaic modules is also viable, as shown by experiments with a fan driven by a brushless DC motor. On comparing the Net Present Value costs of the different systems over a 10-year amortization period (with and without a carbon tax to represent environmental costs), we find that the sunlight-controlled system saves money under all assumptions about taxation and discount rates. The solar-powered system, however, is only profitable for very low discount rates, due to the high initial capital costs. Nonetheless, this system could be of interest for its reliability in developing countries where mains electricity is intermittent. We recommend that greenhouse fan manufacturers improve the availability of energy-saving designs such as those described here.
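The reported saving in energy per volume of air is consistent with the ideal fan affinity laws (airflow proportional to speed, shaft power proportional to speed cubed), under which energy per unit volume scales with the square of speed. The sketch below is that arithmetic only; the figures in the study itself come from measured fan performance.

```python
def energy_per_volume_ratio(speed_fraction):
    """Ideal fan affinity laws: Q ~ N and P ~ N**3, so energy per volume of air ~ N**2.
    Returns the fraction of full-speed energy-per-volume at a given speed fraction."""
    return speed_fraction ** 2

for f in (1.0, 0.7, 0.5):
    saving = 100 * (1 - energy_per_volume_ratio(f))
    print(f"{f:.1f} x full speed -> {saving:.0f}% less energy per volume of air")
# Running at half speed gives ~75% saving, consistent with the >70% reduction reported above
```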
Abstract:
The European Union, with its sophisticated institutional system, is the most important regional integration in the world. This tight form of economic integration converges to the level that Dani Rodrik calls hyperglobalization in his model, the political trilemma of globalisation. In this paper we develop this model and then apply it to the case of European integration. We argue that if we want to maintain the deep integration among EU member states, we have to transfer more and more functions of the nation states to the federal level. In the case of the EMU, this means that a federal fiscal policy is needed, which could lead to a multi-speed Europe, given the new member states' reluctance to give up their specific institutions.
Abstract:
The European Union is one of the most important integrations in the world economy. The tightness of the economic integration realised within it corresponds to the level that Dani Rodrik calls hyperglobalization in his model, the political trilemma of globalisation. In this model Rodrik assumes that of the three desired elements of world politics (deep economic integration, the nation state and democratic politics) only two can be chosen at a time. The trilemma rests on the institutional differences that stand in the way of globalisation, and it can be resolved in three ways: by giving up democracy we arrive at the golden straitjacket, in which market mechanisms take over the role of state economic policy; under global governance, sovereign nation states disappear from the international system; and under the Bretton Woods compromise, obstacles are placed in the way of globalisation in order to preserve democracy and the nation state. In this paper we develop the model and apply it to European integration, and more precisely to the Economic and Monetary Union. We argue that if we wish to maintain the deep integration among EU member states, economic governance must be strengthened at the level of the integration, which can only come at the expense of member-state sovereignty. For the EMU this means, above all, strengthening fiscal federalism, which, by increasing the costs of integration, may push towards the emergence of a multi-speed Europe, given the new member states' reluctance to give up their specific institutions.
Abstract:
Memory (cache, DRAM, and disk) is in charge of providing data and instructions to a computer's processor. In order to maximize performance, the speeds of the memory and the processor should be equal. However, using memory that always matches the speed of the processor is prohibitively expensive. Computer hardware designers have managed to drastically lower the cost of the system through the use of memory caches, by sacrificing some performance. A cache is a small piece of fast memory that stores popular data so it can be accessed faster. Modern computers have evolved into a hierarchy of caches, where a memory level is the cache for a larger and slower memory level immediately below it. Thus, by using caches, manufacturers are able to store terabytes of data at the cost of the cheapest memory while achieving speeds close to the speed of the fastest one.

The most important decision about managing a cache is what data to store in it. Failing to make good decisions can lead to performance overheads and over-provisioning. Surprisingly, caches choose data to store based on policies that have not changed in principle for decades. However, computing paradigms have changed radically, leading to two noticeably different trends. First, caches are now consolidated across hundreds or even thousands of processes. And second, caching is being employed at new levels of the storage hierarchy due to the availability of high-performance flash-based persistent media. This brings four problems. First, as the workloads sharing a cache increase, it is more likely that they contain duplicated data. Second, consolidation creates contention for caches, and if not managed carefully, it translates into wasted space and sub-optimal performance. Third, as contended caches are shared by more workloads, administrators need to carefully estimate specific per-workload requirements across the entire memory hierarchy in order to meet per-workload performance goals. And finally, current cache write policies are unable to simultaneously provide performance and consistency guarantees for the new levels of the storage hierarchy.

We addressed these problems by modeling their impact and by proposing solutions for each of them. First, we measured and modeled the amount of duplication at the buffer cache level and contention in real production systems. Second, we created a unified model of workload cache usage under contention, to be used by administrators for provisioning or by process schedulers to decide which processes to run together. Third, we proposed methods for removing cache duplication and for eliminating the space wasted because of contention. And finally, we proposed a technique to improve the consistency guarantees of write-back caches while preserving their performance benefits.
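As a concrete illustration of the write-policy trade-off discussed in the last problem above, the following sketch implements a tiny LRU cache with selectable write-through or write-back behaviour; it is a didactic toy, not the dissertation's proposed technique.

```python
from collections import OrderedDict

class Cache:
    """Minimal LRU cache with a selectable write policy: write-back batches writes for
    performance but leaves dirty data whose loss breaks consistency, while write-through
    persists every write immediately at the cost of extra traffic to the slower level."""

    def __init__(self, capacity, backing, policy="write-back"):
        self.capacity, self.backing, self.policy = capacity, backing, policy
        self.entries = OrderedDict()        # key -> (value, dirty)

    def read(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)   # hit: refresh recency
            return self.entries[key][0]
        value = self.backing[key]           # miss: fetch from the slower level
        self._insert(key, value, dirty=False)
        return value

    def write(self, key, value):
        if self.policy == "write-through":
            self.backing[key] = value       # consistent: persisted immediately
            self._insert(key, value, dirty=False)
        else:
            self._insert(key, value, dirty=True)   # fast: persisted only on eviction

    def _insert(self, key, value, dirty):
        self.entries[key] = (value, dirty)
        self.entries.move_to_end(key)
        while len(self.entries) > self.capacity:
            old_key, (old_value, old_dirty) = self.entries.popitem(last=False)
            if old_dirty:
                self.backing[old_key] = old_value   # write dirty data back on eviction

# Usage: a dict stands in for the slower memory level below the cache
disk = {"a": 1, "b": 2, "c": 3}
cache = Cache(capacity=2, backing=disk, policy="write-back")
cache.write("a", 10)
print(disk["a"])                    # still 1: the update is dirty in the cache only
cache.read("b"); cache.read("c")    # evictions flush dirty entries back
print(disk["a"])                    # now 10
```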