937 results for hardware redundancy
Abstract:
Real-time monitoring allows the determination of the line state and the calculation of the actual rating value. Real-time monitoring systems measure sag, conductor tension, conductor temperature, or weather-related quantities. In this paper, a new ampacity monitoring system for overhead lines, based on conductor tension, ambient temperature, solar radiation, and current intensity, is presented. The measurements are transmitted via GPRS to a control center, where a software program calculates the ampacity value. The system takes into account the creep deformation experienced by the conductors during their lifetime and calibrates the tension-temperature reference and the maximum allowable temperature in order to obtain the ampacity. The system includes both the hardware implementation and the remote control software.
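To make the rating calculation concrete, here is a minimal Python sketch that solves a steady-state heat balance for current, in the spirit of standard rating methods such as IEEE 738. The convection and radiation terms, coefficients, and parameter names are crude illustrative assumptions, not the algorithm of the paper:

import math

def ampacity(t_cond, t_amb, solar_wpm2, r_ohm_per_m,
             wind_mps=0.6, diam_m=0.0281, emissivity=0.5, absorptivity=0.5):
    # Steady state: I^2 * R(Tc) + q_solar = q_convection + q_radiation.
    q_conv = 18.0 * math.sqrt(wind_mps * diam_m) * (t_cond - t_amb)  # crude
    sigma = 5.67e-8                                  # Stefan-Boltzmann
    q_rad = sigma * emissivity * math.pi * diam_m * (
        (t_cond + 273.15) ** 4 - (t_amb + 273.15) ** 4)
    q_sun = absorptivity * diam_m * solar_wpm2
    return math.sqrt(max(q_conv + q_rad - q_sun, 0.0) / r_ohm_per_m)

print(ampacity(t_cond=75.0, t_amb=25.0, solar_wpm2=900.0, r_ohm_per_m=9e-5))

The monitored quantities map onto the arguments: the maximum allowable conductor temperature (calibrated via the tension-temperature reference), ambient temperature, solar radiation, and conductor resistance.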
Abstract:
Duration (in hours): 31 to 40 hours. Intended audience: Student
Abstract:
Since May 2012, Paul Cocker, Operation Executive, and the Senior Management Team of Alliance Learning have introduced an online learner management system for every learner, requiring significant investment in systems and hardware, acceptance by staff, and, above all, time and commitment from the management team. The organisation has taken the radical step of overcoming one of the major barriers to achieving its goal by dedicating three two-week periods during which the business closed for staff CPD training. A total of 500 man-hours was invested to implement the online system. This is an excellent model of how to make major changes of this kind effective in the shortest time.
Abstract:
Duration (in hours): More than 50 hours. Intended audience: Student and Teacher
Abstract:
The 2009/28/EC Directive requires Member States of the European Union to adopt a National Action Plan for Renewable Energy. In this context, the Basque Energy Board, EVE, is committed to research activities such as the Mutriku Oscillating Water Column (OWC) plant. This experimental facility consists of a turbine located in a pneumatic energy collection chamber and a doubly fed induction generator that converts the energy extracted by the turbine into a form that can be returned to the network. Turbo-generator control requires precise knowledge of the system parameters, and of the rotor angular velocity in particular. Removing the rotor speed sensor therefore simplifies the hardware, which is always desirable in rough working conditions. In this particular case, a Luenberger-based observer is considered, and the effectiveness of the proposed control is shown by numerical simulations. Compared with the results obtained using a traditional speed sensor, the proposed solution provides better performance: it increases power extraction and allows more reliable and robust operation of the plant, which is all the more relevant in an environment as hostile as the ocean.
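As an illustration of the estimation principle, the following Python sketch implements a generic Luenberger observer for a toy linear plant. The matrices, gain, and input are invented for illustration and are not the doubly fed induction generator model of the paper:

import numpy as np

# Toy plant x' = A x + B u, y = C x; the observer reconstructs the full state
# (e.g., a speed that is not measured) from the output y alone.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[8.0], [14.0]])    # gain chosen so that (A - L C) is stable

x = np.array([[1.0], [0.0]])     # true state
xh = np.zeros((2, 1))            # observer estimate
dt, u = 1e-3, np.array([[1.0]])
for _ in range(5000):
    y = C @ x
    x = x + dt * (A @ x + B @ u)
    # model copy plus a correction proportional to the output error
    xh = xh + dt * (A @ xh + B @ u + L @ (y - C @ xh))

print(x.ravel(), xh.ravel())     # the estimate converges to the true state

The design freedom is entirely in the gain L: placing the eigenvalues of (A - L C) well to the left of the plant poles makes the estimate converge faster than the plant dynamics.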
Abstract:
The Alliance for Coastal Technologies (ACT) convened a workshop, sponsored by the Hawaii-Pacific and Alaska Regional Partners, entitled Underwater Passive Acoustic Monitoring for Remote Regions, at the Hawaii Institute of Marine Biology from February 7-9, 2007. The workshop was designed to summarize existing passive acoustic technologies and their uses, as well as to make strategic recommendations for future development and collaborative programs that use passive acoustic tools for scientific investigation and resource management. The workshop was attended by 29 people representing three sectors: research scientists, resource managers, and technology developers. The majority of passive acoustic tools are being developed by individual scientists for specific applications, and few tools are available commercially. Most scientists are developing hydrophone-based systems to listen for species-specific information on fish or cetaceans; a few scientists are listening for biological indicators of ecosystem health. Resource managers are interested in passive acoustics primarily for vessel detection in remote protected areas and secondarily to obtain biological and ecological information. The military has been monitoring with hydrophones for decades; however, data and signal-processing software have not been readily available to the scientific community, and future collaboration is greatly needed. The challenges that impede future development of passive acoustics are surmountable with greater collaboration. Hardware exists and is accessible; the limits are in the software and in the interpretation of sounds and their correlation with ecological events. Collaboration with the military and the private companies it contracts will assist scientists and managers with obtaining and developing software and data analysis tools. Collaborative proposals among scientists to secure larger pools of funding for exploratory acoustic science will further develop the ability to correlate noise with ecological activities. The existing technologies and data analysis are adequate to meet resource managers' needs for vessel detection. However, collaboration is needed among resource managers to prepare large-scale programs that include centralized processing, in an effort to address the lack of local capacity within management agencies to analyze and interpret the data. Workshop participants suggested that ACT might facilitate such collaborations through its website and by providing recommendations to key agencies and programs, such as DOD, NOAA, and IOOS. There is a need to standardize data formats and archive acoustic environmental data at the national and international levels. Specifically, there is a need for local training and primers for public education, as well as for pilot demonstration projects, perhaps in conjunction with National Marine Sanctuaries. Passive acoustic technologies should be implemented immediately to address vessel monitoring needs. Ecological and health monitoring applications should be developed as vessel monitoring programs provide additional data and opportunities for more exploratory research. Passive acoustic monitoring should also be correlated with water quality monitoring to ease integration into long-term monitoring programs, such as the ocean observing systems. [PDF contains 52 pages]
Abstract:
The ACT workshop "Enabling Sensor Interoperability" addressed the need for protocols at the hardware, firmware, and higher levels in order to attain instrument interoperability within and between ocean observing systems. For the purpose of the workshop, participants spoke in terms of "instruments" rather than "sensors," defining an instrument as a device that contains one or more sensors or actuators and can convert signals from analog to digital. An increase in the abundance, variety, and complexity of instruments and observing systems suggests that effective standards would greatly improve "plug-and-work" capabilities. However, there are few standards or standards bodies that currently address instrument interoperability and configuration. Instrument interoperability issues span the length and breadth of these systems, from the measurement to the end user, including middleware services. There are three major components of instrument interoperability: the physical, communication, and application/control layers. Participants identified the essential issues, current obstacles, and enabling technologies and standards, and then proposed a series of short- and long-term solutions. The top three recommended actions, deemed achievable within 6 months of the release of this report, are: A list of recommendations for enabling instrument interoperability should be put together and distributed to instrument developers. A recommendation for funding sources to achieve instrument interoperability should be drafted. Funding should be provided (for example, through NOPP or an IOOS request for proposals) to develop and demonstrate instrument interoperability technologies involving instrument manufacturers, observing system operators, and cyberinfrastructure groups. Program managers should be identified and made to understand that milestones for achieving instrument interoperability include a) selection of a methodology for uniquely identifying an instrument, b) development of a common protocol for automatic instrument discovery, c) agreement on uniform methods for measurements, d) enablement of end-user-controlled power cycling, and e) implementation of a registry component for IDs and attributes. The top three recommended actions, deemed achievable within 5 years of the release of this report, are: An ocean observing interoperability standards body should be established that addresses standards for a) metadata, b) commands, c) protocols, d) processes, e) exclusivity, and f) naming authorities. [PDF contains 48 pages]
Abstract:
The Alliance for Coastal Technologies (ACT) Workshop on Optical Remote Sensing of Coastal Habitats was convened January 9-11, 2006, at Moss Landing Marine Laboratories in Moss Landing, California, sponsored by the ACT West Coast regional partnership composed of the Moss Landing Marine Laboratories (MLML) and the Monterey Bay Aquarium Research Institute (MBARI). The "Optical Remote Sensing of Coastal Habitats" (ORS) Workshop completes ACT's Remote Sensing Technology series by building upon the success of ACT's West Coast Regional Partner Workshop "Acoustic Remote Sensing Technologies for Coastal Imaging and Resource Assessment" (ACT 04-07). Drs. Paul Bissett of the Florida Environmental Research Institute (FERI) and Scott McClean of Satlantic, Inc. were the ORS workshop co-chairs. Invited participants were selected to provide a uniform representation of academic researchers, private-sector product developers, and existing and potential data product users from the resource management community, to enable the development of broad consensus opinions on the role of ORS technologies in coastal resource assessment and management. The workshop was organized to examine the current state of multi- and hyper-spectral imaging technologies, with the intent to assess the current limits on their routine application for habitat classification and resource monitoring of coastal watersheds, nearshore shallow-water environments, and adjacent optically deep waters. Breakout discussions focused on the capabilities, advantages, and limitations of the different technologies (e.g., spectral and spatial resolution), as well as practical issues related to instrument and platform availability, reliability, hardware, software, and the technical skill levels required to exploit the data products generated by these instruments. Specifically, the participants were charged to address the following: (1) Identify the types of ORS data products currently used for coastal resource assessment and how they can assist coastal managers in fulfilling their regulatory and management responsibilities; (2) Identify barriers and challenges to the application of ORS technologies in management and research activities; (3) Recommend a series of community actions to overcome identified barriers and challenges. Plenary presentations by Drs. Curtiss O. Davis (Oregon State University) and Stephan Lataille (ITRES Research, Ltd.) provided background summaries on the varieties of ORS technologies available, deployment platform options, and tradeoffs for the application of ORS data products, with specific applications to the assessment of coastal zone water quality and habitat characterization. Dr. Jim Aiken (CASIX) described how multiscale ground-truth measurements are essential for developing robust assessments of modeled biogeochemical interpretations derived from optically based earth observation data sets. While continuing improvements in sensor spectral resolution, signal-to-noise ratio, and dynamic range, coupled with sensor-integrated GPS, improved processing algorithms for georectification, and atmospheric correction, have made ORS data products invaluable synoptic tools for oceanographic research, their adoption as management tools has lagged.
Seth Blitch (Apalachicola National Estuarine Research Reserve) described the obvious need for, yet the substantial challenges hindering, the adoption of advanced spectroscopic imaging data products to supplement the currently dominant digital ortho-quad imagery used by the resource management community, especially where regulatory issues are involved. (PDF contains 32 pages)
Abstract:
Singular Value Decomposition (SVD) is a key linear algebraic operation in many scientific and engineering applications. In particular, many computational intelligence systems rely on machine learning methods involving high-dimensionality datasets that must be processed quickly for real-time adaptability. In this paper we describe a practical FPGA (Field Programmable Gate Array) implementation of an SVD processor for accelerating the solution of large LSE problems. The design approach has been comprehensive, from algorithmic refinement to numerical analysis to customization for an efficient hardware realization. The processing scheme rests on an adaptive vector rotation evaluator for error regularization that enhances convergence speed with no penalty on solution accuracy. The proposed architecture, which follows a data-transfer scheme, is scalable and based on the interconnection of simple rotation units, which allows for a trade-off between occupied area and processing acceleration in the final implementation. This permits the SVD processor to be implemented on both low-cost and high-end FPGAs, according to the final application requirements.
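A common rotation-based formulation that matches the "interconnection of simple rotation units" idea is the one-sided Jacobi SVD, sketched below in NumPy as an illustrative software reference (not the paper's FPGA architecture or its adaptive rotation evaluator):

import numpy as np

def jacobi_svd(A, tol=1e-10, max_sweeps=30):
    # One-sided Jacobi: rotate column pairs of U until all columns are
    # mutually orthogonal; the column norms are then the singular values.
    U = A.astype(float).copy()
    n = U.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]
                off = max(off, abs(gamma) / np.sqrt(alpha * beta))
                if abs(gamma) < tol:
                    continue
                zeta = (beta - alpha) / (2.0 * gamma)
                sign = 1.0 if zeta >= 0 else -1.0
                t = sign / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                R = np.array([[c, s], [-s, c]])
                U[:, [p, q]] = U[:, [p, q]] @ R
                V[:, [p, q]] = V[:, [p, q]] @ R
        if off < tol:
            break
    sigma = np.linalg.norm(U, axis=0)
    return U / sigma, sigma, V

A = np.random.rand(6, 4)
Uo, s, V = jacobi_svd(A)
print(np.allclose((Uo * s) @ V.T, A))   # reconstruction check

Each inner iteration touches only two columns, which is why Jacobi-style schemes parallelize well onto arrays of simple hardware rotation units.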
Abstract:
In this project, a system to detect and control traffic using Arduino has been designed and developed. The system is divided into three parts. First, a software simulator was designed and developed to manage traffic from a computer. The simulator is written in Java and is able to control four different types of crossroads, offering the user several options for each of them. Second, on the hardware side, an Arduino board was used to build a scale model of one of the crossroads controlled by the application. This Arduino receives and processes the messages sent from the computer and then drives the traffic lights of the scale model so that they match those shown in the simulator. Finally, to let the system detect traffic, a traffic sensor was designed and developed using another Arduino. The simulator on the computer and the Arduino controlling the scale-model hardware communicate and share information over their serial ports. Once each part of the system was fully developed, several tests were performed to validate the correctness of both the software and the hardware.
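The serial exchange can be sketched on the host side as follows. The real simulator is written in Java; this Python (pyserial) fragment is only an illustration, and the message format, port name, and baud rate are invented for the sketch:

import serial  # pyserial

# Hypothetical line-oriented protocol: "<crossroad>:<light>:<R|A|G>\n" per
# traffic-light change, and one line per vehicle from the sensor Arduino.
def send_light_state(port, crossroad, light, state):
    port.write(f"{crossroad}:{light}:{state}\n".encode("ascii"))

def read_sensor(port):
    line = port.readline().decode("ascii").strip()
    return line or None   # None on read timeout

if __name__ == "__main__":
    with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as arduino:
        send_light_state(arduino, crossroad=1, light=2, state="G")
        print(read_sensor(arduino))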
Abstract:
Abstract to Part I
The inverse problem of seismic wave attenuation is solved by an iterative back-projection method. The seismic wave quality factor, Q, can be estimated approximately by inverting S-to-P amplitude ratios. The effects of various uncertainties in the method are tested, and attenuation tomography is shown to be useful in solving for the spatial variations in attenuation structure and in estimating the effective seismic quality factor of attenuating anomalies.
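A standard parameterization of such amplitude inversions (a common textbook form, not necessarily the exact one used in this thesis) writes the observed spectral amplitude as a source term scaled by geometric spreading and an exponential attenuation operator, where the path-integrated attenuation t* is the quantity distributed along ray paths by the back projection:

A(f) = A_0(f)\, G\, e^{-\pi f t^{*}}, \qquad t^{*} = \int_{\mathrm{ray}} \frac{ds}{v(s)\, Q(s)}

Taking S-to-P amplitude ratios cancels the source and site terms common to both phases, leaving the ratio sensitive mainly to Q_β along the S-wave path.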
Back-projection attenuation tomography is applied to two cases in southern California: the Imperial Valley and the Coso-Indian Wells region. In the Coso-Indian Wells region, a highly attenuating body (S-wave quality factor Q_β ≈ 30) coincides with a slow P-wave anomaly mapped by Walck and Clayton (1987). This coincidence suggests the presence of a magmatic or hydrothermal body 3 to 5 km deep in the Indian Wells region. In the Imperial Valley, slow P-wave travel-time anomalies and highly attenuating S-wave anomalies were found in the Brawley seismic zone at a depth of 8 to 12 km. The effective S-wave quality factor is very low (Q_β ≈ 20), and the P-wave velocity is 10% slower than in the surrounding areas. These results suggest either magmatic or hydrothermal intrusions, or fractures at depth, possibly related to active shear in the Brawley seismic zone.
No-block inversion is a generalized tomographic method utilizing the continuous form of an inverse problem. The inverse problem of attenuation can be posed in a continuous form, and the no-block inversion technique is applied to the same data set used in the back-projection tomography. A relatively small data set with little redundancy enables us to apply both techniques to a similar degree of resolution. The results obtained by the two methods are very similar. By applying the two methods to the same data set, formal errors and resolution can be computed directly for the final model, and the objectivity of the final result can be enhanced.
Both methods of attenuation tomography are applied to a data set of local earthquakes in Kilauea, Hawaii, to solve for the attenuation structure under Kilauea and the East Rift Zone. The shallow Kilauea magma chamber, the East Rift Zone, and the Mauna Loa magma chamber are delineated as attenuating anomalies. Detailed inversion reveals shallow secondary magma reservoirs at Mauna Ulu and Puu Oo, the present sites of volcanic eruptions. The Hilina Fault zone is highly attenuating, dominating the attenuating anomalies at shallow depths. The magma conduit system along the summit and the East Rift Zone of Kilauea shows up as a continuous supply channel extending down to a depth of approximately 6 km. The Southwest Rift Zone, on the other hand, is not delineated by attenuating anomalies, except at a depth of 8-12 km, where an attenuating anomaly is imaged west of Puu Kou. The Mauna Loa chamber is seated at a deeper level (about 6-10 km) than the Kilauea magma chamber. Resolution in the Mauna Loa area is not as good as in the Kilauea area, and there is a trade-off between the depth extent of the magma chamber imaged under Mauna Loa and the error due to poor ray coverage. The Kilauea magma chamber, on the other hand, is well resolved, according to a resolution test done at the location of the magma chamber.
Abstract to Part II
Long-period seismograms recorded at Pasadena from earthquakes occurring along a profile to the Imperial Valley are studied in terms of source phenomena (e.g., source mechanisms and depths) versus path effects. Some of the events have known source parameters, determined by teleseismic or near-field studies, and are used as master events in a forward modeling exercise to derive the Green's functions (SH displacements at Pasadena due to a pure strike-slip or dip-slip mechanism) that describe the propagation effects along the profile. Both the timing and the waveforms of the records are matched by synthetics calculated from 2-dimensional velocity models. The best 2-dimensional section begins at the Imperial Valley with a thin crust containing the basin structure and thickens towards Pasadena. The detailed nature of the transition zone at the base of the crust controls the early-arriving shorter periods (strong motions), while the edge of the basin controls the scattered longer-period surface waves. From the waveform characteristics alone, shallow events in the basin are easily distinguished from deep events, and the amount of strike-slip versus dip-slip motion is also easily determined. Events rupturing the sediments, such as the 1979 Imperial Valley earthquake, can be recognized easily by a late-arriving scattered Love wave that has been delayed by the very slow path across the shallow valley structure.
Abstract:
Using neuromorphic analog VLSI techniques to model large neural systems has several advantages over software techniques. Because they are built from the kind of massively parallel analog circuit arrays that are ubiquitous in neural systems, analog VLSI models are extremely fast, particularly when local interactions are important in the computation. While analog VLSI circuits are not as flexible as software methods, the constraints posed by this approach are often very similar to the constraints faced by biological systems. As a result, these constraints can offer many insights into the solutions found by evolution. This dissertation describes a hardware modeling effort to mimic the primate oculomotor system, which requires both fast sensory processing and fast motor control. A one-dimensional hardware model of the primate eye has been built that simulates the physical dynamics of the biological system. It is driven by analog VLSI circuits mimicking the brainstem and cortical circuits that control eye movements. In this framework, a visually triggered saccadic system is demonstrated which generates averaging saccades. In addition, an auditory localization system, based on the neural circuits of the barn owl, is used to trigger saccades to acoustic targets in parallel with visual targets. Two different types of learning are also demonstrated on the saccadic system using floating-gate technology, allowing the non-volatile storage of analog parameters directly on the chip. Finally, a model of visual attention is used to select and track moving targets against textured backgrounds, driving both saccadic and smooth-pursuit eye movements to maintain the image of the target in the center of the field of view. This system represents one of the few efforts in this field to integrate both neuromorphic sensory processing and motor control in a closed-loop fashion.
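A rough behavioral analogue of such an eye plant fits in a few lines of Python. The sketch below drives an overdamped plant (two cascaded first-order lags) with a classic pulse-step saccadic command; the structure and all parameter values are illustrative assumptions, not the hardware model itself:

def saccade(theta_target, dt=1e-3, T=0.3, tau1=0.15, tau2=0.012):
    # Pulse-step command driving two cascaded first-order lags (the "plant").
    theta, x = 0.0, 0.0
    for k in range(int(T / dt)):
        pulse = 25.0 * (theta_target - theta) if k * dt < 0.05 else 0.0
        u = theta_target + pulse          # the step holds the eye on target
        x += dt * (u - x) / tau1
        theta += dt * (x - theta) / tau2
    return theta

print(saccade(10.0))   # the eye settles near the 10-degree target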
Abstract:
This thesis discusses various methods for learning and optimization in adaptive systems. Overall, it emphasizes the relationship between optimization, learning, and adaptive systems; and it illustrates the influence of underlying hardware upon the construction of efficient algorithms for learning and optimization. Chapter 1 provides a summary and an overview.
Chapter 2 discusses a method for using feed-forward neural networks to filter the noise out of noise-corrupted signals. The networks use back-propagation learning, but they use it in a way that qualifies as unsupervised learning. The networks adapt based only on the raw input data; there are no external teachers providing information on correct operation during training. The chapter contains an analysis of the learning and develops a simple expression that, based only on the geometry of the network, predicts performance.
Chapter 3 explains a simple model of the piriform cortex, an area in the brain involved in the processing of olfactory information. The model was used to explore the possible effect of acetylcholine on learning and on odor classification. According to the model, the piriform cortex can classify odors better when acetylcholine is present during learning but not present during recall. This is interesting since it suggests that learning and recall might be separate neurochemical modes (corresponding to whether or not acetylcholine is present). When acetylcholine is turned off at all times, even during learning, the model exhibits behavior somewhat similar to Alzheimer's disease, a disease associated with the degeneration of cells that distribute acetylcholine.
Chapters 4, 5, and 6 discuss algorithms appropriate for adaptive systems implemented entirely in analog hardware. The algorithms inject noise into the systems and correlate the noise with the outputs of the systems. This allows them to estimate gradients and to implement noisy versions of gradient descent, without having to calculate gradients explicitly. The methods require only noise generators, adders, multipliers, integrators, and differentiators; and the number of devices needed scales linearly with the number of adjustable parameters in the adaptive systems. With the exception of one global signal, the algorithms require only local information exchange.
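The core of the noise-injection idea can be sketched in Python: perturb the parameters, measure the change in cost, and move along the correlation of the perturbation with that change, so no explicit gradient is ever computed. The function names and constants are illustrative, and the thesis realizes this with analog components rather than software:

import numpy as np

def noisy_descent(cost, w, sigma=1e-2, lr=0.1, steps=2000):
    # E[noise * delta] is proportional to sigma^2 * grad(cost), so the
    # correlation of injected noise with the cost change estimates the
    # gradient without computing it explicitly.
    for _ in range(steps):
        noise = sigma * np.random.randn(*w.shape)
        delta = cost(w + noise) - cost(w)
        w = w - lr * (delta / sigma**2) * noise
    return w

quadratic = lambda w: float(np.sum((w - 3.0) ** 2))
print(noisy_descent(quadratic, np.zeros(4)))   # approaches [3, 3, 3, 3]

Note how the per-step cost scales with the number of parameters only through the noise vector itself, which matches the linear device-count scaling described above.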
Abstract:
Rhythmic motor behaviors in all animals appear to be under the control of "central pattern generator" circuits, neural circuits which can produce output patterns appropriate for behavior even when isolated from their normal peripheral inputs. Insects have been a useful model system in which to study the control of legged terrestrial locomotion. Much is known about walking in insects at the behavioral level, but to date there has been no clear demonstration that a central pattern generator for walking exists. The focus of this thesis is to explore the central neural basis for locomotion in the locust, Schistocerca americana.
Rhythmic motor patterns could be evoked in leg motor neurons of isolated thoracic ganglia of locusts by the muscarinic agonist pilocarpine. These motor patterns would be appropriate for the movement of single legs during walking. Rhythmic patterns could be evoked in all three thoracic ganglia, but the segmental rhythms differed in their sensitivities to pilocarpine, their frequencies, and the phase relationships of motor neuron antagonists. These different patterns could be generated by a simple adaptable model circuit, which was both simulated and implemented in VLSI hardware. The intersegmental coordination of leg motor rhythms was then examined in preparations of isolated chains of thoracic ganglia. Correlations between motor patterns in different thoracic ganglia indicated that central coupling between segmental pattern generators is likely to contribute to the coordination of the legs during walking.
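A minimal software analogue of such an adaptable circuit is a half-center oscillator: two units with mutual inhibition and slow adaptation that become active in antiphase, like antagonist motor pools. The sketch below is a Matsuoka-style rate model with illustrative parameters, not the VLSI circuit used in the thesis:

import numpy as np

def half_center(T=2000, dt=1.0, tau=5.0, tau_a=60.0, drive=1.0,
                w_inh=2.5, b=2.5):
    v = np.array([0.1, 0.0])   # membrane-like activations (asymmetric start)
    a = np.zeros(2)            # slow adaptation variables
    out = []
    for _ in range(int(T)):
        r = np.maximum(v, 0.0)            # firing rates
        inh = w_inh * r[::-1]             # each unit inhibits the other
        v += dt / tau * (-v + drive - inh - b * a)
        a += dt / tau_a * (r - a)         # adaptation lets the loser recover
        out.append(r.copy())
    return np.array(out)

rates = half_center()
print(rates[::200])   # coarse samples of the two rates, alternating over time

Changing the drive, inhibition, and adaptation constants shifts the frequency and phase relationships, which is the kind of flexibility the segmental rhythms above would require.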
The work described here clearly demonstrates that segmental pattern generators for walking exist in insects. The pattern generators produce motor outputs which are likely to contribute to the coordination of the joints of a limb, as well as the coordination of different limbs. These studies lay the groundwork for further studies to determine the relative contributions of central and sensory neural mechanisms to terrestrial walking.
Abstract:
The scalability of CMOS technology has driven computation into a diverse range of applications across the power consumption, performance, and size spectra. Communication is a necessary adjunct to computation, and whether this means pushing data from node to node in a high-performance computing cluster or from the receiver of a wireless link to a neural stimulator in a biomedical implant, interconnect can take up a significant portion of the overall system power budget. Although a single interconnect methodology cannot address such a broad range of systems efficiently, there are a number of key design concepts that enable good interconnect design in the age of highly scaled CMOS: an emphasis on highly digital approaches to solving ‘analog’ problems, hardware sharing between links as well as between different functions (such as equalization and synchronization) in the same link, and adaptive hardware that changes its operating parameters to mitigate not only variation in the fabrication of the link, but also link conditions that change over time. These concepts are demonstrated through two design examples at the extremes of the power and performance spectra.
A novel all-digital clock and data recovery technique for high-performance, high-density interconnect has been developed. Two independently adjustable clock phases are generated from a delay line calibrated to 2 UI. One clock phase is placed in the middle of the eye to recover the data, while the other is swept across the delay line. The samples produced by the two clocks are compared to generate eye information, which is used to determine the best phase for data recovery. The functions of the two clocks are swapped after the data phase is updated; this ping-pong action allows an infinite delay range without the use of a PLL or DLL. The scheme's generalized sampling and retiming architecture is used in a sharing technique that saves power and area in high-density interconnect. The eye information generated is also useful for tuning an adaptive equalizer, circumventing the need for dedicated adaptation hardware.
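The phase-picking logic can be illustrated with a small Python behavioral model: sweep the scan clock across the delay line, count disagreements with the mid-eye data clock, and recenter on the widest error-free region. The phase count, repetition count, and toy channel below are illustrative assumptions, not the chip's implementation:

import random

def scan_eye(sample, n_phases=32, reps=200):
    # Sweep the scan clock; count disagreements with the mid-eye data clock.
    data_phase = n_phases // 2
    errors = [sum(sample(ph) != sample(data_phase) for _ in range(reps))
              for ph in range(n_phases)]
    # Recenter on the middle of the widest error-free run of phases.
    best, run, start = (data_phase, 0), 0, 0
    for ph, e in enumerate(errors + [reps]):   # sentinel closes the last run
        if e == 0:
            run += 1
        else:
            if run > best[1]:
                best = (start + run // 2, run)
            run, start = 0, ph + 1
    return best[0]   # the two clocks would now swap roles ("ping-pong")

# Toy channel: the eye is open between phases 10 and 22, noisy elsewhere.
sample = lambda ph: 1 if 10 <= ph < 22 else random.randint(0, 1)
print(scan_eye(sample))   # ~16, the centre of the open eye

The same error profile that locates the eye edges is what an adaptive equalizer can consume, which is why no dedicated adaptation hardware is needed.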
On the other side of the performance/power spectrum, a capacitive proximity interconnect has been developed to support 3D integration of biomedical implants. In order to integrate more functionality while staying within size limits, implant electronics can be embedded onto a foldable parylene (‘origami’) substrate. Many of the ICs in an origami implant will be placed face-to-face with each other, so wireless proximity interconnect can be used to increase communication density while decreasing implant size, as well as to facilitate a modular approach to implant design, in which pre-fabricated parylene-and-IC modules are assembled on demand to make custom implants. Such an interconnect needs to be able to sense and adapt to changes in alignment. The proposed array uses a TDC-like structure to realize both communication and alignment sensing within the same set of plates, increasing communication density and eliminating the need to infer link quality from a separate alignment block. In order to distinguish the communication plates from the nearby ground plane, a stimulus is applied to the transmitter plate, which is rectified at the receiver to bias a delay-generation block. This delay is in turn converted into a digital word using a TDC, providing alignment information.
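A behavioral sketch of that read-out chain might look as follows; the linear coupling model and every device value are invented for illustration and are not taken from the thesis:

def alignment_word(overlap, c_full=50e-15, i_per_f=1e9,
                   c_load=10e-15, v_swing=0.5, tdc_lsb=20e-12, bits=6):
    # Plate overlap -> coupling capacitance -> rectified bias current ->
    # delay-cell ramp time -> TDC code (saturating at full scale).
    c_coupling = overlap * c_full
    i_bias = i_per_f * c_coupling
    delay = c_load * v_swing / i_bias
    return min(int(delay / tdc_lsb), 2 ** bits - 1)

for overlap in (1.0, 0.5, 0.25):
    print(overlap, alignment_word(overlap))   # worse alignment -> larger code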