913 results for optimal trigger speed
Abstract:
The purpose of the work was to realize a high-speed digital data transfer system for RPC muon chambers in the CMS experiment at CERN's new LHC accelerator. This large-scale system took many years and many stages of prototyping to develop, and required the participation of tens of people. The system interfaces to Frontend Boards (FEB) at the 200,000-channel detector and to the trigger and readout electronics in the control room of the experiment. The distance between these two is about 80 metres, and the speed required for the optic links was pushing the limits of available technology when the project was started. Here, as in many other aspects of the design, it was assumed that the features of readily available commercial components would develop in the course of the design work, just as they did. By choosing a high speed it was possible to multiplex the data from some of the chambers into the same fibres to reduce the number of links needed. Further reduction was achieved by employing zero suppression and data compression, and a total of only 660 optical links were needed. Another requirement, which conflicted somewhat with choosing the components as late as possible, was that the design needed to be radiation tolerant to an ionizing dose of 100 Gy and to have a moderate tolerance to Single Event Effects (SEEs). This required some radiation test campaigns, and eventually led to ASICs being chosen for some of the critical parts. The system was made to be as reconfigurable as possible. The reconfiguration needs to be done from a distance, as the electronics is not accessible except for some short and rare service breaks once the accelerator starts running. Therefore reconfigurable logic is extensively used, and the firmware development for the FPGAs constituted a sizable part of the work. Some special techniques needed to be used there too, to achieve the required radiation tolerance.
The system has been demonstrated to work in several laboratory and beam tests, and now we are waiting to see it in action when the LHC starts running in autumn 2008.
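The zero-suppression and data-compression idea mentioned above can be sketched minimally as follows; the (index, value) pair encoding is illustrative only, not the actual CMS link format:

```python
def zero_suppress(frame):
    """Keep only the nonzero channels as (index, value) pairs.

    For sparse detector occupancy this shrinks the payload, which is the
    idea behind reducing the number of optical links needed."""
    return [(i, v) for i, v in enumerate(frame) if v]

def expand(pairs, length):
    """Inverse of zero_suppress: rebuild the full channel frame."""
    frame = [0] * length
    for i, v in pairs:
        frame[i] = v
    return frame
```

A frame with only a few hit channels compresses to a short list, which is what makes multiplexing several chambers onto one fibre feasible.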
Abstract:
The purpose of this study was to test the hypothesis that the potentiation of dynamic function was dependent upon both length change speed and direction. Mouse EDL was cycled in vitro (25 °C) about optimal length (Lo) with constant peak strain (± 2.5% Lo) at 1.5, 3.3 and 6.9 Hz before and after a conditioning stimulus. A single pulse was applied during shortening or lengthening and peak dynamic (concentric or eccentric) forces were assessed at Lo. Stimulation increased peak concentric force at all frequencies (range: 19 ± 1 to 30 ± 2%) but this increase was proportional to shortening speed, as were the related changes to concentric work/power (range: -15 ± 1 to 39 ± 1%). In contrast, stimulation did not increase eccentric force, work or power at any frequency. Thus, the results reveal a unique hysteresis-like effect for the potentiation of dynamic output wherein concentric and eccentric forces increase and decrease, respectively, with work cycle frequency.
Abstract:
One of the primary goals of the Center for Integrated Space Weather Modeling (CISM) effort is to assess and improve prediction of the solar wind conditions in near‐Earth space, arising from both quasi‐steady and transient structures. We compare 8 years of L1 in situ observations to predictions of the solar wind speed made by the Wang‐Sheeley‐Arge (WSA) empirical model. The mean‐square error (MSE) between the observed and model predictions is used to reach a number of useful conclusions: there is no systematic lag in the WSA predictions, the MSE is found to be highest at solar minimum and lowest during the rise to solar maximum, and the optimal lead time for 1 AU solar wind speed predictions is found to be 3 days. However, MSE is shown to frequently be an inadequate “figure of merit” for assessing solar wind speed predictions. A complementary, event‐based analysis technique is developed in which high‐speed enhancements (HSEs) are systematically selected and associated from observed and model time series. The WSA model is validated using comparisons of the number of hit, missed, and false HSEs, along with the timing and speed magnitude errors between the forecasted and observed events. Morphological differences between the different HSE populations are investigated to aid interpretation of the results and improvements to the model. Finally, by defining discrete events in the time series, model predictions from above and below the ecliptic plane can be used to estimate an uncertainty in the predicted HSE arrival times.
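The mean-square-error comparison and lag search described above can be sketched minimally as follows (function names and the lag convention are assumptions, not the CISM code):

```python
import numpy as np

def mse(observed, predicted):
    """Mean-square error between observed and predicted wind speed series."""
    return float(np.mean((np.asarray(observed) - np.asarray(predicted)) ** 2))

def best_lead_time(observed, predicted, max_lag=5):
    """Return the shift (in samples) that minimizes MSE between the two
    series; a nonzero result would indicate a systematic lead or lag."""
    scores = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            scores[lag] = mse(observed[lag:], predicted[:-lag])
        elif lag < 0:
            scores[lag] = mse(observed[:lag], predicted[-lag:])
        else:
            scores[lag] = mse(observed, predicted)
    return min(scores, key=scores.get)
```

The event-based validation in the abstract (hit/missed/false HSEs) would sit on top of this, comparing discrete events rather than pointwise errors.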
Abstract:
This paper considers the motion planning problem for oriented vehicles travelling at unit speed in a 3-D space. A Lie group formulation arises naturally, and the vehicles are modeled as kinematic control systems with drift defined on the orthonormal frame bundles of particular Riemannian manifolds, specifically, the 3-D space forms: Euclidean space E-3, the sphere S-3, and the hyperboloid H-3. The corresponding frame bundles are equal to the Euclidean group of motions SE(3), the rotation group SO(4), and the Lorentz group SO(1, 3). The maximum principle of optimal control shifts the emphasis for these systems to the associated Hamiltonian formalism. For an integrable case, the extremal curves are explicitly expressed in terms of elliptic functions. In this paper, a study of the singularities of the extremal curves is given, which correspond to critical points of these elliptic functions. The extremal curves are characterized as the intersections of invariant surfaces and are illustrated graphically at the singular points. It is then shown that the projections of the extremals onto the base space, called elastica, are, at these singular points, curves of constant curvature and torsion, which in turn implies that the oriented vehicles trace helices.
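The closing claim can be made concrete: a curve of constant curvature κ and constant torsion τ is a circular helix. For the standard parametrization with radius r and pitch parameter h,

```latex
\gamma(t) = (r\cos t,\; r\sin t,\; h\,t), \qquad
\kappa = \frac{r}{r^2 + h^2}, \qquad
\tau = \frac{h}{r^2 + h^2},
```

so prescribing the pair (κ, τ) fixes (r, h), and hence the helix traced by the vehicle.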
Abstract:
Planning of autonomous vehicles in the absence of speed lanes is a less-researched problem. However, it is an important step toward extending the possibility of autonomous vehicles to countries where speed lanes are not followed. The advantages of having nonlane-oriented traffic include larger traffic bandwidth and more overtaking, which are features that are highlighted when vehicles vary in terms of speed and size. In the most general case, the road would be filled with a complex grid of static obstacles and vehicles of varying speeds. The optimal travel plan consists of a set of maneuvers that enables a vehicle to avoid obstacles and to overtake vehicles in an optimal manner and, in turn, enable other vehicles to overtake. The desired characteristics of this planning scenario include near completeness and near optimality in real time with an unstructured environment, with vehicles essentially displaying a high degree of cooperation and enabling every possible (safe) overtaking procedure to be completed as soon as possible. Challenges addressed in this paper include a (fast) method for initial path generation using an elastic strip, (re-)defining the notion of completeness specific to the problem, and inducing the notion of cooperation in the elastic strip. Using this approach, vehicular behaviors of overtaking, cooperation, vehicle following, obstacle avoidance, etc., are demonstrated.
Abstract:
The current state of the art in the planning and coordination of autonomous vehicles is based upon the presence of speed lanes. In a traffic scenario where there is a large diversity between vehicles, the removal of speed lanes can generate a significantly higher traffic bandwidth. Vehicle navigation in such unorganized traffic is considered. An evolutionary trajectory planning technique has the advantages of making driving efficient and safe; however, it also has to surpass the hurdle of computational cost. In this paper, we propose a real-time genetic algorithm with Bezier curves for trajectory planning. The main contribution is the integration of vehicle following and overtaking behaviour for general traffic as heuristics for the coordination between vehicles. The resultant coordination strategy is fast and near-optimal. As the vehicles move, uncertainties may arise which are constantly adapted to, and may even lead to either the cancellation of an overtaking procedure or the initiation of one. Higher-level planning is performed by Dijkstra's algorithm, which indicates the route to be followed by the vehicle in a road network. Re-planning is carried out when a road blockage or obstacle is detected. Experimental results confirm the success of the algorithm subject to optimal high- and low-level planning, re-planning and overtaking.
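The Bezier-curve building block of such a planner can be sketched as follows (a generic De Casteljau evaluator, assumed here rather than taken from the paper):

```python
def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] using
    De Casteljau's algorithm (numerically stable repeated interpolation)."""
    pts = [tuple(float(c) for c in p) for p in control_points]
    while len(pts) > 1:
        # Linearly interpolate between each adjacent pair of points.
        pts = [tuple((1.0 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts[:-1], pts[1:])]
    return pts[0]
```

A genetic algorithm along the lines of the abstract would encode the control points as the genome and score each candidate curve on criteria such as length, obstacle clearance and curvature.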
Abstract:
Inverse methods are widely used in various fields of atmospheric science. However, such methods are not commonly used within the boundary-layer community, where robust observations of surface fluxes are a particular concern. We present a new technique for deriving surface sensible heat fluxes from boundary-layer turbulence observations using an inverse method. Doppler lidar observations of vertical velocity variance are combined with two well-known mixed-layer scaling forward models for a convective boundary layer (CBL). The inverse method is validated using large-eddy simulations of a CBL with increasing wind speed. The majority of the estimated heat fluxes agree within error with the prescribed heat flux, across all wind speeds tested. The method is then applied to Doppler lidar data from the Chilbolton Observatory, UK. Heat fluxes are compared with those from a mast-mounted sonic anemometer. Errors in estimated heat fluxes are on average 18 %, an improvement on previous techniques. However, a significant negative bias is observed (on average −63%) that is more pronounced in the morning. Results are improved for the fully-developed CBL later in the day, which suggests that the bias is largely related to the choice of forward model, which is kept deliberately simple for this study. Overall, the inverse method provided reasonable flux estimates for the simple case of a CBL. Results shown here demonstrate that this method has promise in utilizing ground-based remote sensing to derive surface fluxes. Extension of the method is relatively straightforward, and could include more complex forward models, or other measurements.
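A minimal sketch of the inverse step, assuming a Lenschow-type mixed-layer variance profile as the forward model (the profile constants, reference temperature and least-squares closure below are illustrative, not necessarily the forward models used in the study):

```python
import numpy as np

G = 9.81        # gravity, m s^-2
THETA0 = 300.0  # reference potential temperature, K (assumed)

def lenschow_variance(z_zi, w_star):
    """Forward model (assumed Lenschow-type form):
    sigma_w^2 / w_*^2 = 1.8 (z/zi)^(2/3) (1 - 0.8 z/zi)^2."""
    return 1.8 * z_zi ** (2.0 / 3.0) * (1.0 - 0.8 * z_zi) ** 2 * w_star ** 2

def invert_heat_flux(z_zi, variance_obs, zi):
    """Least-squares fit of w_*^2 to an observed variance profile, then
    conversion to a kinematic surface heat flux via
    w_*^3 = (g / theta0) * flux * zi."""
    shape = 1.8 * z_zi ** (2.0 / 3.0) * (1.0 - 0.8 * z_zi) ** 2
    w_star_sq = float(np.sum(shape * variance_obs) / np.sum(shape ** 2))
    w_star = np.sqrt(max(w_star_sq, 0.0))
    return THETA0 * w_star ** 3 / (G * zi)  # K m s^-1
```

With noise-free synthetic data this closure recovers the prescribed flux exactly; with real lidar profiles the residuals give an error estimate.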
Abstract:
The aim of this study was 1) to validate the 0.5 body-mass exponent for maximal oxygen uptake (VO2max) as the optimal predictor of performance in a 15 km classical-technique skiing competition among elite male cross-country skiers and 2) to evaluate the influence of distance covered on the body-mass exponent for VO2max among elite male skiers. Twenty-four elite male skiers (age: 21.4 ± 3.3 years [mean ± standard deviation]) completed an incremental treadmill roller-skiing test to determine their VO2max. Performance data were collected from a 15 km classical-technique cross-country skiing competition performed on a 5 km course. Power-function modeling (i.e., an allometric scaling approach) was used to establish the optimal body-mass exponent for VO2max to predict the skiing performance. The optimal power-function models were found to be race speed = 8.83 · (VO2max · m^-0.53)^0.66 and lap speed = 5.89 · (VO2max · m^-(0.49 + 0.018·lap))^0.43 · e^(0.010·age), which explained 69% and 81% of the variance in skiing speed, respectively. All the variables contributed to the models. Based on the validation results, it may be recommended that VO2max divided by the square root of body mass (mL·min^-1·kg^-0.5) should be used when elite male skiers' performance capability in 15 km classical-technique races is evaluated. Moreover, the body-mass exponent for VO2max was demonstrated to be influenced by the distance covered, indicating that heavier skiers have a more pronounced positive pacing profile (i.e., race speed gradually decreasing throughout the race) compared to that of lighter skiers.
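The first fitted model can be evaluated directly; the units are assumptions here (VO2max in L·min^-1, mass in kg, speed in m·s^-1), as the abstract does not state them:

```python
def predicted_race_speed(vo2max, mass):
    """Predicted 15 km race speed from the reported power-function model:
    race_speed = 8.83 * (VO2max * mass**-0.53) ** 0.66.
    Units assumed: VO2max in L/min, mass in kg, speed in m/s."""
    return 8.83 * (vo2max * mass ** -0.53) ** 0.66
```

The model behaves as the abstract describes: speed rises with VO2max and, for fixed VO2max, falls with body mass, reflecting the sub-unity mass exponent.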
Abstract:
This paper examines the output losses caused by disinflation and the role of credibility in a model where pricing rules are optimal and individual prices are rigid. Individual nominal rigidity is modeled as resulting from menu costs. The interaction between optimal pricing rules and credibility is essential in determining the inflationary inertia. A continued period of high inflation generates an asymmetric distribution of price deviations, with more prices that are substantially lower than their desired levels than prices that are substantially higher than the optimal ones. When disinflation is not credible, inflationary inertia is engendered by this asymmetry: idiosyncratic shocks trigger more upward than downward adjustments. A perfectly credible disinflation causes an immediate change of pricing rules which, by rendering the price deviation distribution less asymmetric, practically annihilates inflationary inertia. An implication of our model is that stabilization may be successful even when credibility is low, provided that it is preceded by a mechanism of price alignment. We also develop an analytical framework for analyzing imperfect credibility cases.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
This paper presents for the first time how to easily incorporate FACTS devices in an optimal active power flow model such that an efficient interior-point method may be applied. The optimal active power flow model is based on a network flow approach instead of the traditional nodal formulation, which allows the use of an efficient predictor-corrector interior-point method sped up by sparsity exploitation. The mathematical equivalence between the network flow and the nodal models is addressed, as well as the computational advantages of the former considering the solution by interior-point methods. The adequacy of the network flow model for representing FACTS devices is presented and illustrated on a small 5-bus system. The model was implemented using Matlab and its performance was evaluated with the 3,397-bus and 4,075-branch Brazilian power system, which shows the robustness and efficiency of the proposed formulation. The numerical results also indicate an efficient tool for optimal active power flow that is suitable for incorporating FACTS devices.
Abstract:
A new approach called the Modified Barrier Lagrangian Function (MBLF) to solve the Optimal Reactive Power Flow problem is presented. In this approach, the inequality constraints are treated by the Modified Barrier Function (MBF) method, which has a finite convergence property; i.e., the optimal solution in the MBF method can actually be on the boundary of the feasible set. Hence, the inequality constraints can be precisely equal to zero. Another property of the MBF method is that the barrier parameter does not need to be driven to zero to attain the solution. Therefore, the conditioning of the involved Hessian matrix is greatly enhanced. In order to show this, a comparative analysis of the numeric conditioning of the Hessian matrix of the MBLF approach, by decomposition in singular values, is carried out. The feasibility of the proposed approach is also demonstrated with comparative tests against the Interior Point Method (IPM) using various IEEE test systems and two networks derived from the Brazilian generation/transmission system. The results show that the MBLF method is computationally more attractive than the IPM in terms of speed, number of iterations and numerical conditioning. (C) 2011 Elsevier B.V. All rights reserved.
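The finite-convergence property can be illustrated on a one-dimensional toy problem using Polyak's modified barrier with multiplier updates (the problem and solver below are illustrative, not the MBLF reactive-power formulation):

```python
def mbf_solve(mu=0.5, lam=1.0, iters=20):
    """Minimize f(x) = x^2 subject to x >= 1 with a modified barrier:
    F(x) = x^2 - mu*lam*log(1 + (x - 1)/mu).

    Unlike the classical log barrier, F is finite at the constraint
    boundary, so the iterates can reach x = 1 without driving mu to
    zero; only the multiplier lam is updated between outer iterations."""
    x = 2.0
    for _ in range(iters):
        # F is convex on its domain x > 1 - mu, so bisection on
        # F'(x) = 2x - lam*mu/(mu + x - 1) finds the inner minimizer.
        lo, hi = 1.0 - mu + 1e-12, 10.0
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if 2.0 * mid - lam * mu / (mu + mid - 1.0) > 0.0:
                hi = mid
            else:
                lo = mid
        x = 0.5 * (lo + hi)
        lam = lam / (1.0 + (x - 1.0) / mu)  # multiplier update, mu fixed
    return x, lam
```

For this problem the KKT point is x = 1 with multiplier 2, and the iterates approach it with mu held fixed at 0.5, which is exactly the property the abstract highlights.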
Abstract:
Resuscitation from hemorrhagic shock relies on fluid retransfusion. However, the optimal properties of the fluid have not been established. The aim of the present study was to test the influence of the concentration of hydroxyethyl starch (HES) solution on plasma viscosity and colloid osmotic pressure (COP), systemic and microcirculatory recovery, and oxygen delivery and consumption after resuscitation, which were assessed in the hamster window chamber preparation by intravital microscopy. Awake hamsters were subjected to 50% hemorrhage and were resuscitated with 25% of the estimated blood volume with 5%, 10%, or 20% HES solution. The increase in concentration led to an increase in COP (from 20 to 70 and 194 mmHg) and viscosity (from 1.7 to 3.8 and 14.4 cP). Cardiac index and microcirculatory and metabolic recovery were improved with HES 10% and 20% when compared with 5% HES. Oxygen delivery and consumption in the dorsal skinfold chamber was more than doubled with HES 10% and 20% when compared with HES 5%. This was attributed to the beneficial effect of restored or increased plasma COP and plasma viscosity as obtained with HES 10% and 20%, leading to improved microcirculatory blood flow values early in the resuscitation period. The increase in COP led to an increase in blood volume as shown by a reduction in hematocrit. Mean arterial pressure was significantly improved in animals receiving 10% and 20% solutions. In conclusion, the present results show that the increase in the concentration of HES, leading to hyperoncotic and hyperviscous solutions, is beneficial for resuscitation from hemorrhagic shock because normalization of COP and viscosity led to a rapid recovery of microcirculatory parameters.
Abstract:
Two important and upcoming technologies, microgrids and electricity generation from wind resources, are increasingly being combined. Various control strategies can be implemented, and droop control provides a simple option without requiring communication between microgrid components. Eliminating the single source of potential failure around the communication system is especially important in remote, islanded microgrids, which are considered in this work. However, traditional droop control does not allow the microgrid to utilize much of the power available from the wind. This dissertation presents a novel droop control strategy, which implements a droop surface in higher dimension than the traditional strategy. The droop control relationship then depends on two variables: the dc microgrid bus voltage, and the wind speed at the current time. An approach for optimizing this droop control surface in order to meet a given objective, for example utilizing all of the power available from a wind resource, is proposed and demonstrated. Various cases are used to test the proposed optimal high dimension droop control method, and demonstrate its function. First, the use of linear multidimensional droop control without optimization is demonstrated through simulation. Next, an optimal high dimension droop control surface is implemented with a simple dc microgrid containing two sources and one load. Various cases for changing load and wind speed are investigated using simulation and hardware-in-the-loop techniques. Optimal multidimensional droop control is demonstrated with a wind resource in a full dc microgrid example, containing an energy storage device as well as multiple sources and loads. Finally, the optimal high dimension droop control method is applied with a solar resource, and using a load model developed for a military patrol base application. The operation of the proposed control is again investigated using simulation and hardware-in-the-loop techniques.
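The higher-dimension droop idea can be sketched as a power command that depends on both the dc bus voltage and the current wind speed (all names and constants below are hypothetical, not the optimized surface from the dissertation):

```python
import numpy as np

def droop_surface(v_bus, wind_speed,
                  v_nom=380.0, droop_gain=25.0,
                  rho=1.225, area=40.0, cp=0.4):
    """Hypothetical two-variable droop law: the power command combines a
    conventional voltage-droop term with the power currently available
    from the wind, and is clipped to that available power."""
    p_available = 0.5 * rho * area * cp * wind_speed ** 3  # wind power, W
    p_droop = droop_gain * (v_nom - v_bus)                 # droop term, W
    return float(np.clip(p_available + p_droop, 0.0, p_available))
```

Because the command tracks `p_available`, the source can contribute the full wind power at nominal bus voltage, which is what a traditional single-variable droop curve cannot do; the dissertation's optimization step would shape this surface against a chosen objective.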