818 results for Hiking -- Tools and equipment


Relevance:

100.00%

Publisher:

Abstract:

We have recently exchanged and integrated into a single database tag detections for conch, teleost and elasmobranch fish from four separately maintained arrays in the U.S. Virgin Islands: the NMFS queen conch array (St. John nearshore); NOAA's Biogeography Branch array (St. John nearshore and midshelf reef); the UVI shelf edge arrays (Marine Conservation District, Grammanik and other shelf edge sites); and the NOAA NMFS Apex Predator COASTSPAN array (St. John nearshore). The integrated database has over 7.5 million hits. Data are shared only with the consent of partners and with full acknowledgements; thus, the summary of integrated data here uses data from the NOAA and UVI arrays under a cooperative agreement. The benefits of combining and sharing data have included increasing the total area of detection, yielding an understanding of broader-scale connectivity than would have been possible with a single array. Partnering has also been cost-effective through shared field work, staff time and equipment, and exchanges of knowledge and experience across the network. Use of multiple arrays has also helped in optimizing the design of arrays when additional receivers are deployed. The combined arrays have made the USVI network one of the most extensive acoustic arrays in the world, with a total of 150+ receivers available, although not necessarily all deployed at all times. Currently, two UVI graduate student projects are using acoustic array data.
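The integration step described above can be sketched in a few lines. This is a hedged illustration only: the tuple layout (tag_id, receiver_id, timestamp) and the array names are illustrative assumptions, not the partners' actual schema.

```python
# Hedged sketch: pool per-array detection tables into one database-style
# table, tag each row with its source array, and drop duplicate detections.
def integrate_arrays(arrays):
    """arrays: {array_name: [(tag_id, receiver_id, timestamp), ...]}"""
    seen, merged = set(), []
    for name, rows in arrays.items():
        for row in rows:
            if row not in seen:              # skip duplicate detections
                seen.add(row)
                merged.append(row + (name,))  # keep provenance per row
    return merged

# Illustrative data, not real detections:
nmfs = [(101, "R1", "2021-06-01T10:00"), (101, "R2", "2021-06-02T09:00")]
uvi = [(101, "R2", "2021-06-02T09:00"), (101, "R7", "2021-06-03T14:00")]
combined = integrate_arrays({"NMFS": nmfs, "UVI": uvi})
print(len(combined))  # 3 unique detections across both arrays
```

Keeping the source-array column per row is what allows detections from one partner's receivers to extend the detection area of another's tagged animals.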

Relevance:

100.00%

Publisher:

Abstract:

During the last few years great changes have taken place in the fishing industry, as a result of which world fish production has increased enormously. From an insignificant trade employing tools and methods of a primitive nature, fishing in many countries has become an important industry utilizing complex modern vessels equipped with electronic equipment and operating on the high seas with highly mechanized fishing gear.

Relevance:

100.00%

Publisher:

Abstract:

Several small scleractinian coral colonies were collected from a remote reef and transferred to the Louisiana Universities Marine Consortium (LUMCON) for in vitro reproductive and larval studies. The species used here were Porites astreoides and Diploria strigosa; colony size was ~20 cm in diameter. Colonies were brought to the surface by liftbag and stored in modified ice coolers, then transported from Freeport, TX to Cocodrie, LA by truck over nearly 15 hours, after which field conditions were simulated in waiting aquaria. This document describes the techniques and equipment used, how to outfit such aquaria, proper handling techniques for coral colonies, and several eventualities that the mariculturist should be prepared for in undertaking this endeavor. It will hopefully prevent many mistakes from being made.

Relevance:

100.00%

Publisher:

Abstract:

This communication describes the design aspects and functions of the individual pieces of equipment of a pilot plant for the production of fish ensilage based on a lactic acid fermentation process. Details of the equipment, the process flow sheet and the equipment layout of the pilot plant are given. An attempt has been made to estimate the cost of production of liquid ensilage and solid feed mix.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the development of a new building physics and energy supply systems simulation platform. It has been adapted from both existing commercial models and empirical works, but is designed to provide expedient, exhaustive simulation of all salient types of energy- and carbon-reducing retrofit options. These options may include any combination of behavioural measures, building fabric and equipment upgrades, improved HVAC control strategies, or novel low-carbon energy supply technologies. We provide a methodological description of the proposed model, followed by two illustrative case studies in which the tool is used to investigate retrofit options for a mixed-use office building and a primary school in the UK. It is not the intention of this paper, nor would it be feasible, to provide a complete engineering decomposition of the proposed model describing all calculation processes in detail. Instead, this paper concentrates on presenting the particular engineering aspects of the model which steer away from conventional practice. © 2011 Elsevier Ltd.

Relevance:

100.00%

Publisher:

Abstract:

Two main perspectives have been developed within the Multidisciplinary Design Optimization (MDO) literature for classifying and comparing MDO architectures: a numerical point of view and a formulation/data flow point of view. Although significant work has been done in this area, these perspectives have not provided much in the way of a priori information or predictive power about architecture performance. In this report, we outline a new perspective, called the geometric perspective, which we believe will be able to provide such predictive power. Using tools from differential geometry, we take several prominent architectures and describe mathematically how each constructs the space through which it moves. We then consider how each architecture moves through the space it has constructed. Taken together, these investigations show how each architecture relates to the original feasible design manifold, how the architectures relate to each other, and how each architecture deals with the design coupling inherent to the original system. This in turn lays the groundwork for further theoretical comparisons between and analyses of MDO architectures and their behaviour using tools and techniques derived from differential geometry. © 2012 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

In this paper we discuss key implementation challenges of a systems approach that combines System Dynamics, Scenario Planning and Qualitative Data Analysis methods in tackling a complex problem. We present the methods and the underlying framework. We then detail the main difficulties encountered in designing and planning the Scenario Planning workshop and how they were overcome, such as finding and involving the stakeholders and customising the process to fit within timing constraints. After presenting the results from this application, we argue that consultants or system analysts need to engage with stakeholders as process facilitators, not as system experts, in order to gain commitment and trust and to improve information sharing. They also need to be ready to adapt their tools and processes, as well as their own thinking, for more effective complex problem solving.

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this article is to examine the methods and equipment for abating waste gases and water produced during the manufacture of semiconductor materials and devices. Three separation methods and associated equipment are used to control three different groups of electronic wastes. The first group includes arsine and phosphine emitted during the processes of semiconductor materials manufacture. The abatement procedure for this group of pollutants consists of adding iodates, cupric and manganese salts to a multiple shower tower (MST) structure. The second group includes pollutants containing arsenic, phosphorus, HF, HCl, NO2, and SO3 emitted during the manufacture of semiconductor materials and devices. The abatement procedure involves mixing oxidants and bases in an oval column with a separator in the middle. The third group consists of the ions of As, P and heavy metals contained in the waste water. The abatement procedure includes adding CaCO3 and ferric salts in a compact flocculation-sedimentation device. Test results showed that all waste gases and water treated by the abatement procedures presented in this article passed the discharge standards set by the State Environmental Protection Administration of China.

Relevance:

100.00%

Publisher:

Abstract:

This research aims to design a differential pumping system that not only achieves the pressure transition with a large throughput, but also remains clean, without oil backstreaming. In this paper, the pressures in the differential stages are calculated; the design of the differential pumping system and the choice of equipment are introduced; tests of the Molecular/Booster Pump (MBP), a new kind of molecular-drag pump with large throughput and clean vacuum, are described; and the system's experimental results and analysis are presented.
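The staged pressure calculation mentioned above can be sketched with the standard steady-state estimate: if each stage is pumped at speed S and connected to the previous stage through an aperture of conductance C, the gas load into stage i+1 is roughly Q ≈ C·p_i, so p_{i+1} ≈ C·p_i / S. The numbers below are illustrative, not values from the paper.

```python
# Hedged sketch: per-stage pressure estimate for a differential pumping
# system, assuming molecular flow and C << S so back-flow is negligible.
def stage_pressures(p0, conductance, speed, stages):
    """p0 in mbar; conductance and speed in L/s; returns pressures
    for the source plus each downstream stage."""
    pressures = [p0]
    for _ in range(stages):
        # throughput into the next stage Q = C * p_i; stage pressure Q / S
        pressures.append(conductance * pressures[-1] / speed)
    return pressures

# e.g. 1e-2 mbar source, 1 L/s aperture, 1000 L/s pumping speed per stage:
print(stage_pressures(1e-2, 1.0, 1000.0, 3))
# each stage gains a factor of C/S = 1e-3: roughly 1e-5, 1e-8, 1e-11 mbar
```

The per-stage attenuation factor C/S is why a few stages suffice to bridge many orders of magnitude in pressure.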

Relevance:

100.00%

Publisher:

Abstract:

Automated assembly of mechanical devices is studied by researching methods of operating assembly equipment in a variable manner; that is, systems which may be configured to perform many different assembly operations are studied. The general parts-assembly operation involves the removal of alignment errors within some tolerance and without damaging the parts. Two methods for eliminating alignment errors are discussed: a priori suppression, and measurement and removal. Both methods are studied, with the more novel measurement-and-removal technique examined in greater detail. During the study of this technique, a fast and accurate six degree-of-freedom position sensor based on a light-stripe vision technique was developed. Specifications for the sensor were derived from an assembly-system error analysis. Studies on extracting accurate information from the sensor by optimally reducing redundant information, filtering quantization noise, and applying careful calibration procedures were performed. Prototype assembly systems for both error-elimination techniques were implemented and used to assemble several products. The assembly system based on the a priori suppression technique uses a number of mechanical assembly tools and software systems which extend the capabilities of industrial robots. The need for the tools was determined through an assembly task analysis of several consumer and automotive products. The assembly system based on the measurement-and-removal technique used the six degree-of-freedom position sensor to measure part misalignments. Robot commands for aligning the parts were automatically calculated based on the sensor data and executed.
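The measurement-and-removal step above reduces, in its simplest form, to inverting a measured rigid transform. The sketch below is a hedged planar reduction: the sensor reports a pose (x, y, theta) relative to the goal, and the robot command is the inverse transform. The real system works in six degrees of freedom; the function names and data are illustrative.

```python
import math

def correction(x, y, theta):
    """Rigid planar transform that maps the measured pose back to the goal."""
    c, s = math.cos(-theta), math.sin(-theta)
    # rotate by -theta about the origin, then cancel the rotated translation
    return (-(c * x - s * y), -(s * x + c * y), -theta)

def apply(pose, move):
    """Compose a pose with a commanded (dx, dy, dtheta) motion."""
    x, y, th = pose
    dx, dy, dth = move
    c, s = math.cos(dth), math.sin(dth)
    return (c * x - s * y + dx, s * x + c * y + dy, th + dth)

pose = (0.5, -0.2, 0.1)                       # measured part misalignment
print(apply(pose, correction(*pose)))         # approximately (0.0, 0.0, 0.0)
```

In 6-DOF the same idea uses a 4x4 homogeneous transform and its inverse, computed from the light-stripe sensor's pose estimate.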

Relevance:

100.00%

Publisher:

Abstract:

Malicious software (malware) has significantly increased in number and effectiveness during the past years. Until 2006, such software was mostly used to disrupt network infrastructures or to show off coders' skills. Nowadays, malware constitutes a very important source of economic profit and is very difficult to detect. Thousands of novel variants are released every day, and modern obfuscation techniques are used to ensure that signature-based anti-malware systems are not able to detect such threats. This tendency has also appeared on mobile devices, with Android being the most targeted platform. To counteract this phenomenon, the scientific community has developed many approaches that attempt to increase the resilience of anti-malware systems. Most of these approaches rely on machine learning and have become very popular in commercial applications as well. However, attackers are now knowledgeable about these systems and have started preparing their countermeasures. This has led to an arms race between attackers and developers: novel systems are progressively built to tackle attacks that become more and more sophisticated. For this reason, developers increasingly need to anticipate the attackers' moves. This means that defense systems should be built proactively, i.e., by introducing security design principles into their development. The main goal of this work is to show that such a proactive approach can be employed on a number of case studies. To do so, I adopted a global methodology that can be divided into two steps: first, understanding the vulnerabilities of current state-of-the-art systems (anticipating the attackers' moves); then, developing novel systems that are robust to these attacks, or suggesting research guidelines with which current systems can be improved. This work presents two main case studies, concerning the detection of PDF and Android malware.
The idea is to show that a proactive approach can be applied in both the x86 and mobile worlds. The contributions provided in these two case studies are manifold. With respect to PDF files, I first develop novel attacks that can empirically and optimally evade current state-of-the-art detectors. Then, I propose possible solutions with which it is possible to increase the robustness of such detectors against known and novel attacks. With respect to the Android case study, I first show how current signature-based tools and academically developed systems are weak against empirical obfuscation attacks, which can be easily employed without particular knowledge of the targeted systems. Then, I examine a possible strategy to build a machine learning detector that is robust against both empirical obfuscation and optimal attacks. Finally, I show how proactive approaches can also be employed to develop systems that are not aimed at detecting malware, such as mobile fingerprinting systems. In particular, I propose a methodology to build a powerful mobile fingerprinting system, and examine possible attacks with which users might be able to evade it, thus preserving their privacy. To provide the aforementioned contributions, I co-developed (in cooperation with researchers at PRALab and Ruhr-Universität Bochum) various systems: a library to perform optimal attacks against machine learning systems (AdversariaLib), a framework for automatically obfuscating Android applications, a system for the robust detection of JavaScript malware inside PDF files (LuxOR), a robust machine learning system for the detection of Android malware, and a system to fingerprint mobile devices. I also contributed to developing Android PRAGuard, a dataset containing a large number of empirical obfuscation attacks against the Android platform. Finally, I entirely developed Slayer NEO, an evolution of a previous system for the detection of PDF malware.
The results attained by using the aforementioned tools show that it is possible to proactively build systems that predict possible evasion attacks. This suggests that a proactive approach is crucial to build systems that provide concrete security against general and evasion attacks.
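The evasion setting described above can be illustrated with a toy model: a linear malware scorer over binary features, attacked by greedily flipping the features that most reduce its score (the general shape of a mimicry/feature-addition attack). The weights, threshold and budget are illustrative assumptions, not the systems developed in the thesis.

```python
# Hedged sketch: greedy feature-addition evasion of a linear detector.
def score(weights, features):
    """Linear detection score; above the threshold means 'malicious'."""
    return sum(w * f for w, f in zip(weights, features))

def greedy_evade(weights, features, threshold, budget):
    """Flip up to `budget` absent features whose weights are negative
    (adding benign-looking content is usually feasible; removing
    malicious functionality is not, so only 0 -> 1 flips are allowed)."""
    x = list(features)
    candidates = sorted((i for i, w in enumerate(weights) if w < 0 and x[i] == 0),
                        key=lambda i: weights[i])   # most negative first
    for i in candidates[:budget]:
        x[i] = 1
        if score(weights, x) < threshold:
            break
    return x

w = [2.0, 1.5, -1.0, -2.5]      # positive weights = malicious indicators
mal = [1, 1, 0, 0]              # scores 3.5, above the threshold of 1.0
adv = greedy_evade(w, mal, 1.0, 2)
print(score(w, adv))            # 3.5 - 2.5 - 1.0 = 0.0, now evades
```

A proactively designed detector would anticipate exactly this move, e.g. by bounding the influence of any small set of features on the score.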

Relevance:

100.00%

Publisher:

Abstract:

As an animator and practice-based researcher with a background in games development, I am interested in technological change in the video game medium, with a focus on the tools and technologies that drive game character animation and interactive story. In particular, I am concerned with the issue of 'user agency', or the ability of the end user to affect story development: a key quality of the gaming experience and essential to the aesthetics of gaming, which is defined in large measure by its interactive elements. In this paper I consider the unique qualities of the video game as an artistic medium and the impact that these qualities have on the production of animated virtual character performances. I discuss the somewhat oppositional nature of animated character performances found in games from recent years, which range from inactive to active, in other words from low to high agency. Where procedural techniques (based on coded rules of movement) are used to model dynamic character performances, the user has the ability to interactively affect characters in real time within the larger sphere of the game. This game play creates a high degree of user agency. However, it lacks the aesthetic nuances of the more crafted sections of games: the short cut-scenes, or narrative interludes, where entire acted performances are mapped onto game characters (often via performance capture) and constructed into relatively cinematic representations. While visually spectacular, cut-scenes involve minimal interactivity, so user agency is low. Contemporary games typically float between these two distinct methods of animation, from a focus on user agency and dynamically responsive animation to a focus on animated character performance in sections where the user is a passive participant. We tend to think of the majority of action in games as taking place via playable figures: an avatar or central character that represents a player.
However, there is another realm of characters that also partake in actions ranging from significant to incidental: non-playable characters, or NPCs, which populate the action sequences where game play takes place as well as the cut-scenes that unfold without much or any interaction on the part of the player. NPCs are the equivalent of supporting roles, bit characters, or extras in the world of cinema. Minor NPCs may simply be background characters or enemies to defeat, but many NPCs are crucial to the overall game story. It is my argument that, thus far, no game has successfully utilized the full potential of these characters to contribute toward the development of interactive, high-performance action. In particular, a type of NPC that I have identified as 'pivotal', those constituting the supporting cast of a video game, are essential to the telling of a game story, particularly in genres that focus on story and characters: adventure games, action games, and role-playing games. A game story can be defined as the entirety of the narrative, told through non-interactive cut-scenes as well as interactive sections of play, and the development of more complex stories in games clearly impacts the animation of NPCs. I argue that NPCs in games must be capable of acting with emotion throughout a game: in the cut-scenes, which are tightly controlled, but also in sections of game play, where player agency can potentially alter the story in real time. When the animated performance of NPCs and user agency are not continuous throughout the game, the implication is that game stories may be primarily told through short movies within games, making it more difficult to define video game animation as a distinct artistic medium.

Relevance:

100.00%

Publisher:

Abstract:

Modal matching is a new method for establishing correspondences and computing canonical descriptions. The method is based on the idea of describing objects in terms of generalized symmetries, as defined by each object's eigenmodes. The resulting modal description is used for object recognition and categorization, where shape similarities are expressed as the amounts of modal deformation energy needed to align the two objects. In general, modes provide a global-to-local ordering of shape deformation and thus allow for selecting which types of deformations are used in object alignment and comparison. In contrast to previous techniques, which required correspondence to be computed with an initial or prototype shape, modal matching utilizes a new type of finite element formulation that allows for an object's eigenmodes to be computed directly from available image information. This improved formulation provides greater generality and accuracy, and is applicable to data of any dimensionality. Correspondence results with 2-D contour and point feature data are shown, and recognition experiments with 2-D images of hand tools and airplanes are described.
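The eigenmode idea behind modal matching can be shown on a toy scale: a shape's stiffness matrix K is diagonalized, its eigenvectors are the modes (low-order modes describe global deformation, high-order modes local detail), and similarity is scored as the deformation energy needed along those modes. The 2x2 matrix below is an illustrative stand-in for the finite-element K that the method builds directly from image data.

```python
import math

def eig2(a, b, d):
    """Eigenvalues and unit eigenvectors of the symmetric 2x2
    matrix [[a, b], [b, d]], low-frequency mode first."""
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr / 4 - det)
    lams = (tr / 2 - disc, tr / 2 + disc)
    def vec(lam):
        v = (b, lam - a) if b else (1.0, 0.0)
        n = math.hypot(*v)
        return (v[0] / n, v[1] / n)
    return tuple((lam, vec(lam)) for lam in lams)

# Deformation energy of a displacement u along a mode with stiffness lam
# is E = 0.5 * lam * (u . phi)**2, so aligning along global (low-lam)
# modes is cheap and alignment cost grows for local (high-lam) modes.
(lam1, phi1), (lam2, phi2) = eig2(2.0, 1.0, 2.0)
print(lam1, lam2)   # 1.0 and 3.0 for this toy stiffness matrix
```

The global-to-local ordering of the eigenvalues is what lets the method choose which deformation types participate in object alignment and comparison.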

Relevance:

100.00%

Publisher:

Abstract:

With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often expressed as the product of power dissipation and the input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods of optimising either area or timing, while for power consumption optimisation one often employs heuristics which are characteristic of a specific design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: how can we build a design flow which incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate academic tools and methodologies into this flow. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and which then allows optimisation algorithms to be applied. In particular, we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or the delay of the implementation. Finally, the third question which this thesis attempts to answer is: is there a systematic approach to multi-objective optimisation of delay and power? A delay-driven power and power-driven delay optimisation is proposed in order to obtain balanced delay and power values.
This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under the zero-delay and a non-zero-delay model. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or the longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for the global optimisation problem of locating a good approximation to the global optimum of a given function in a large search space. We used SA to probabilistically decide between moving from one optimised solution to another, such that the dynamic power is optimised under given delay constraints and the delay is optimised under given power constraints. A good approximation to the global optimum solution under the energy constraint is obtained. Uniform Cost Search (UCS) is a tree search algorithm used for traversing or searching a weighted tree, tree structure, or graph. We have used the Uniform Cost Search algorithm to search within the AIG network for a specific AIG node order for the application of the reordering rules.
After the reordering rules are applied, the AIG network is mapped to an AIG netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to an AIG netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC. A reduction of 23% in power and 15% in delay with minimal overhead is achieved, compared to the best known ABC results. Our approach has also been implemented on a number of processors with combinational and sequential components, and significant savings are achieved.
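The simulated-annealing step described above can be sketched generically: accept a neighbouring solution if it lowers the cost, and accept a worse one with probability exp(-dE/T) so the search can escape local minima while the temperature cools. The cost function below is a deliberately simple stand-in (a "power" term plus a delay-style penalty), not the thesis's AIG-based estimator.

```python
import math, random

def anneal(cost, neighbour, x0, t0=1.0, cooling=0.95, steps=500, seed=0):
    """Generic simulated annealing: returns the best solution found."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    x, t, best = x0, t0, x0
    for _ in range(steps):
        y = neighbour(x, rng)
        d = cost(y) - cost(x)
        # always accept improvements; accept worse moves with prob exp(-d/t)
        if d < 0 or rng.random() < math.exp(-d / t):
            x = y
            if cost(x) < cost(best):
                best = x
        t *= cooling                   # geometric cooling schedule
    return best

# Toy objective: minimise "power" (x - 3)**2 plus a penalty modelling a
# conflicting delay-style constraint.
cost = lambda x: (x - 3) ** 2 + 0.1 * abs(x)
step = lambda x, rng: x + rng.uniform(-1, 1)
x = anneal(cost, step, x0=10.0)
print(cost(x) < cost(10.0))           # the annealed solution is better
```

In the thesis's setting the "neighbour" move would be an AIG node reordering and the cost a switching-power or longest-path-delay estimate under the given constraint.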

Relevance:

100.00%

Publisher:

Abstract:

The thesis initially gives an overview of the wave industry and the current state of some of the leading technologies, as well as the energy storage systems that are inherently part of the power take-off mechanism. The benefits of electrical energy storage systems for wave energy converters are then outlined, as well as the key parameters required of them. The options for storage systems are investigated and the reasons for examining supercapacitors and lithium-ion batteries in more detail are shown. The thesis then focusses its analysis on a particular type of offshore wave energy converter, the backward bent duct buoy employing a Wells turbine. Variable speed strategies from the research literature which make use of the energy stored in the turbine inertia are examined for this system, and based on this analysis an appropriate scheme is selected. A supercapacitor power smoothing approach is presented in conjunction with the variable speed strategy. As long component lifetime is a requirement for offshore wave energy converters, a computer-controlled test rig has been built to validate supercapacitor lifetimes against manufacturer's specifications. The test rig is also utilised to determine the effect of temperature on supercapacitors and to determine application lifetime. Cycle testing is carried out on individual supercapacitors at room temperature, and also at rated temperature utilising a thermal chamber and equipment programmed through the general-purpose interface bus (GPIB) via MATLAB. Application testing is carried out using time-compressed, scaled-power profiles from the model to allow a comparison of lifetime degradation. Further applications of supercapacitors in offshore wave energy converters are then explored. These include start-up of the non-self-starting Wells turbine, and low-voltage ride-through examined to the limits specified in the Irish grid code for wind turbines.
These applications are investigated with a more complete model of the system that includes a detailed back-to-back converter coupling a permanent magnet synchronous generator to the grid. Supercapacitors have been utilised in combination with battery systems for many applications to aid with peak power requirements, and have been shown to improve the performance of these energy storage systems. The design, implementation, and construction of the coupling of a 5 kW h lithium-ion battery to a microgrid are described. The high-voltage battery had a continuous power rating of 10 kW and was designed for the future EV market, with a controller area network interface. This build gives a general insight into some of the engineering, planning, safety, and cost requirements of implementing a high-power energy storage system near or on an offshore device for interface to a microgrid or grid.
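The power-smoothing idea above can be sketched simply: the grid is asked for a moving average of the turbine's pulsating output, and the supercapacitor bank absorbs or supplies the difference, subject to its energy limits. The window length, capacity and power profile below are illustrative assumptions, not the thesis's model.

```python
# Hedged sketch: moving-average power smoothing with a bounded
# supercapacitor energy buffer.
def smooth(power_in, window, cap_energy, dt=1.0):
    """power_in: sampled turbine output; returns the smoothed grid power.
    The supercapacitor state of charge is clamped to [0, cap_energy]."""
    grid_out, stored, buf = [], cap_energy / 2, []   # start half charged
    for p in power_in:
        buf.append(p)
        if len(buf) > window:
            buf.pop(0)
        target = sum(buf) / len(buf)       # grid sees the moving average
        delta = (p - target) * dt          # surplus charges the cap
        stored = min(max(stored + delta, 0.0), cap_energy)
        grid_out.append(target)
    return grid_out

wave = [0, 8, 0, 8, 0, 8]                  # pulsating input power (kW)
print(smooth(wave, window=2, cap_energy=20.0))
# -> [0.0, 4.0, 4.0, 4.0, 4.0, 4.0]: the 0/8 kW pulsation is flattened
```

Sizing the buffer then reduces to ensuring the clamped state of charge never saturates over the worst-case sea-state profile, which is where the lifetime and temperature testing described above comes in.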