342 results for Special purpose vehicles.
Abstract:
With the continued development of renewable energy generation technologies and increasing pressure to combat the effects of global warming, plug-in hybrid electric vehicles (PHEVs) have received worldwide attention, finding applications in North America and Europe. When a large number of PHEVs are introduced into a power system, there will be extensive impacts on power system planning and operation, as well as on electricity market development. It is therefore necessary to properly control PHEV charging and discharging behaviors. Given this background, a new unit commitment (UC) model, together with a solution method that accounts for optimal PHEV charging and discharging controls, is presented in this paper. A 10-unit, 24-hour UC problem is employed to demonstrate the feasibility and efficiency of the developed method, and the impacts of the wide application of PHEVs on the operating costs and emissions of the power system are studied. Case studies are also carried out to investigate the impacts of different PHEV penetration levels and different PHEV charging modes on the results of the UC problem. A 100-unit system is employed for further analysis of the impacts of PHEVs on the UC problem in a larger system. Simulation results demonstrate that optimized PHEV charging and discharging modes help smooth the load curve and enhance the ability of the power system to accommodate more PHEVs. Furthermore, optimal Vehicle-to-Grid (V2G) discharging control provides economical and efficient backup and spinning reserves for the secure and economic operation of the power system.
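The abstract does not give the model's equations, but the load-smoothing idea it describes can be illustrated with a minimal valley-filling sketch: charging energy is allocated to the hours with the lowest base load. All numbers here are invented for illustration and are not from the paper.

```python
# Valley-filling sketch: allocate PHEV charging energy one step at a
# time to whichever hour currently has the lowest total load.
def valley_fill(base_load, phev_energy, step=1.0):
    load = list(base_load)
    remaining = phev_energy
    while remaining > 0:
        h = min(range(len(load)), key=lambda i: load[i])  # lowest-load hour
        load[h] += step
        remaining -= step
    return load

base = [50, 45, 40, 42, 60, 80, 90, 85]       # illustrative hourly base load (MW)
smoothed = valley_fill(base, phev_energy=40.0)  # 40 MWh of fleet charging
print(smoothed)
```

The peak-to-valley spread of the resulting profile is smaller than that of the base load, which is the effect the paper attributes to optimized charging control.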
Abstract:
Ocean processes are complex and have high variability in both time and space. Thus, ocean scientists must collect data over long time periods to obtain a synoptic view of ocean processes and resolve their spatiotemporal variability. One way to perform these persistent observations is to utilise an autonomous vehicle that can remain on deployment for long time periods. However, such vehicles are generally underactuated and slow moving. A challenge for persistent monitoring with these vehicles is dealing with currents while executing a prescribed path or mission. Here we present a path planning method for persistent monitoring that exploits ocean currents to increase navigational accuracy and reduce energy consumption.
Abstract:
Purpose Managers generally have discretion in determining how components of earnings are presented in financial statements in distinguishing between ‘normal’ earnings and items classified as unusual, special, significant, exceptional or abnormal. Prior research has found that such intra-period classificatory choice is used as a form of earnings management. Prior to 2001, Australian accounting standards mandated that unusually large items of revenue and expense be classified as ‘abnormal items’ for financial reporting, but this classification was removed from accounting standards from 2001. This move by the regulators was partly in response to concerns that the abnormal classification was being used opportunistically to manage reported pre-abnormal earnings. This study extends the earnings management literature by examining the reporting of abnormal items for evidence of intra-period classificatory earnings management in the unique Australian setting. Design/methodology/approach This study investigates associations between reporting of abnormal items and incentives in the form of analyst following and the earnings benchmarks of analysts’ forecasts, earnings levels, and earnings changes, for a sample of Australian top-500 firms for the seven-year period from 1994 to 2000. Findings The findings suggest there are systematic differences between firms reporting abnormal items and those with no abnormal items. Results show evidence that, on average, firms shifted expense items from pre-abnormal earnings to bottom line net income through reclassification as abnormal losses. Originality/value These findings suggest that the standard setters were justified in removing the ‘abnormal’ classification from the accounting standard. However, it cannot be assumed that all firms acted opportunistically in the classification of items as abnormal. 
With the removal of the standardised classification of items outside normal operations as ‘abnormal’, firms lost the opportunity to use such disclosures as a signalling device, with the consequential effect of limiting the scope of effectively communicating information about the nature of items presented in financial reports.
Abstract:
The world is facing problems due to the effects of increased atmospheric pollution, climate change and global warming. Innovative technologies to identify, quantify and assess the flux exchanges of pollutant gases between the Earth's surface and atmosphere are required. This paper proposes the development of a gas sensor system for a small UAV to monitor pollutant gases, collect data and geo-locate where each sample was taken. The prototype has two principal systems: a light portable gas sensor and an optional electric–solar powered UAV. The prototype will be suitable to operate in the lower troposphere (100-500 m), collect samples, and time-stamp and geo-locate each sample. One of the limitations of a small UAV is the limited power available; therefore, a small, low-power-consumption payload is designed and built for this research. The specific gases targeted in this research are NO2, mostly produced by traffic, and NH3 from farming, at concentrations above 0.05 ppm and 35 ppm respectively, which are harmful to human health. The developed prototype will be a useful tool for scientists to analyse the behaviour and trends of pollutant gases and to produce more realistic models of them.
Abstract:
Vertical vegetation is vegetation growing on, or adjacent to, the unused sunlit exterior surfaces of buildings in cities. Vertical vegetation can improve the energy efficiency of the building on which it is installed, mainly by insulating, shading and transpiring moisture from foliage and substrate. Several design parameters may affect the extent of the vertical vegetation's improvement of energy performance, for example the choice of vegetation, growing medium geometry, and north/south aspect. The purpose of this study is to quantitatively map out the contribution of several parameters to energy savings in a subtropical setting. The method is thermal simulation based on EnergyPlus, configured to reflect the special characteristics of vertical vegetation. Thermal simulation results show that yearly cooling energy savings can reach 25% with realistic design choices in subtropical environments, while heating energy savings are negligible. The most important parameter is the aspect of the walls covered by vegetation: vertical vegetation covering walls facing north (south for the northern hemisphere) will result in the highest energy savings. In plant selection, the most significant parameter is Leaf Area Index (LAI). Plants with larger LAI, preferably LAI > 4, contribute to greater savings, whereas vertical vegetation with LAI < 2 can actually consume energy. The choice of growing medium and its thickness influence both heating and cooling energy consumption; changing the growing medium thickness from 6 cm to 8 cm causes a dramatic increase in energy savings, from 2% to 18%. For cooling, it is best to use a growing medium with high water retention, due to the importance of evapotranspiration for cooling. Similarly, for increased savings in cooling energy, sufficient irrigation is required; insufficient irrigation results in the vertical vegetation requiring more energy to cool the building.
To conclude, the choice of design parameters for vertical vegetation is crucial in making sure that it contributes to energy savings rather than energy consumption. Optimal design decisions can create a dramatic sustainability enhancement for the built environment in subtropical climates.
Abstract:
Purpose: This study provides insight into the histories and current statuses of queer community archives in California and explores what the archives profession can learn from the queer community archives and archivists. Through the construction of histories of three community archives (GLBT Historical Society; Lavender Library, Archives, and Cultural Exchange of Sacramento, Inc.; and ONE National Gay & Lesbian Archives), the study discovered why these independent, community-based archives were created, the issues that influenced their evolution, and the similarities and differences among them. Additionally, it compared the community archives to institutional archives which collect queer materials to explore the similarities and differences among the archives and determine possible implications for the archives profession. Significance: The study contributes to the literature in several significant ways: it is the first in-depth comparative history of the queer community archives; it adds to the cross-disciplinary research in archives and history; it contributes to the current debates on the nature of the archives and the role of the professional archivist; and it has implications for changing archival practice. Methodology: This study used social constructionism for epistemological positioning and new social history theory for its theoretical framework. Information was gathered through seven oral history interviews with community archivists and volunteers and from materials in the archives’ collections. This evidence was used to construct the histories of the archives and determine their current statuses. The institutional archives used in the comparisons are the University of California, Berkeley’s Bancroft Library; the University of California, Santa Cruz’s Special Collections and University Archives; and the San Francisco Public Library’s James C. Hormel Gay and Lesbian Center.
The collection policies, finding aids, and archival collections related to the queer communities at the institutional and community archives were compared to determine commonalities and differences among the archives. Findings: The findings revealed striking similarities in the histories of the community archives and important implications for the archives’ survival and their relevancy to the archives profession. Each archives was started by an individual or small group collecting materials to preserve history that would otherwise have been lost as institutional archives were not collecting queer materials. These private collections grew and became the basis for the community archives. The community archives differ in their staffing models, circulation policies, and descriptive practices. The community archives have grown to incorporate more public programming functions than most institutional archives. While in the past, the community archives had little connection to institutional archives, today they have varying degrees of partnerships. However, the historical lack of collecting queer materials by institutional archives makes some members of the communities reluctant to donate materials to institutional archives or collaborate with them. All three queer community archives are currently managed by professionally trained and educated archivists and face financial issues impacting their continued survival. The similarities and differences between the community and institutional archives include differences in collection policies, language differences in the finding aids, and differing levels of relationships between the archives. However, they share similar sensitivity in the use of language in describing the queer communities and overlap in the types of materials collected. 
Implications: This study supports previous research on community archives showing that communities take the preservation of history into their own hands when ignored by mainstream archives (Flinn, 2007; Flinn & Stevens, 2009; Nestle, 1990). Based on the study’s findings, institutional archivists could learn from their community archivist counterparts better ways to become involved in and relevant to the communities whose records they possess. This study also expands the understanding of history of the queer communities to include in-depth research into the archives which preserve and make available material for constructing history. Furthermore, this study supports reflective practice for archivists, especially in terms of descriptions used in finding aids. It also supports changes in graduate education for archives students to enable archivists in the United States to be more fully cognizant of community archives and able to engage in collaborative, international projects. Through this more activist role of the archivists, partnerships between the community and institutional archives would be built to establish more collaborative, respectful relationships with the communities in this post-custodial age of the archives (Stevens, Flinn, & Shepherd, 2010). Including community archives in discussions of archival practice and theory is one way of ensuring archives represent and serve a diversity of voices.
Abstract:
The development of any new profession is dependent on the development of a special body of knowledge which is the domain of the profession, and key to this is the conduct of research. In 2007, as part of the settlement of an Enterprise Bargaining Agreement and following sustained lobbying by Emergency Physicians, the Queensland Government agreed to establish an Emergency Medicine Research Fund to foster the development of research activities in Emergency Medicine in Queensland. That fund is now managed by the Queensland Emergency Medicine Research Foundation. The aims of this article are to describe the strategic approaches taken by the Foundation and its first three years of experience, to describe the application of research funds, and to foreshadow an evaluative framework for determining the strategic value of this community investment. The Foundation has developed a range of personnel and project support funding programs, and competition for funding has increased. Ongoing evaluation will seek to determine the effectiveness of this funding strategy in improving research performance and the clinical and organisational outcomes that may derive from the initiative.
Abstract:
Recent efforts in mission planning for underwater vehicles have utilised predictive models to aid navigation, optimise path planning and drive opportunistic sampling. Although these models provide information at unprecedented resolutions and have proven to increase accuracy and effectiveness in multiple campaigns, most are deterministic in nature. Thus, predictions cannot be incorporated into probabilistic planning frameworks, nor do they provide any metric of the variance or confidence of the output variables. In this paper, we provide an initial investigation into determining the confidence of ocean model predictions based on the results of multiple field deployments of two autonomous underwater vehicles. For multiple missions conducted over a two-month period in 2011, we compare actual vehicle executions to simulations of the same missions through the Regional Ocean Modeling System in an ocean region off the coast of southern California. This comparison provides a qualitative analysis of the current velocity predictions for areas within the selected deployment region. Ultimately, we present a spatial heat-map of the correlation between the ocean model predictions and the actual mission executions. Knowing where the model provides unreliable predictions can be incorporated into planners to increase the utility and application of the deterministic estimations.
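The paper's code is not given here, but the core of such a spatial heat-map can be sketched by binning paired (predicted, observed) current values by grid cell and computing a per-cell Pearson correlation. The cell layout, function name and all data below are invented for illustration.

```python
import numpy as np

# Sketch: per-cell Pearson correlation between model-predicted and
# vehicle-observed current speeds. Cells with fewer than two samples
# (or zero variance) are left as NaN in the heat-map.
def correlation_heatmap(cells, predicted, observed, shape):
    heat = np.full(shape, np.nan)
    for cell in set(cells):
        idx = [i for i, c in enumerate(cells) if c == cell]
        if len(idx) >= 2:
            p = np.array([predicted[i] for i in idx])
            o = np.array([observed[i] for i in idx])
            if p.std() > 0 and o.std() > 0:
                heat[cell] = np.corrcoef(p, o)[0, 1]
    return heat

# Illustrative samples: cell (0, 0) agrees well, cell (1, 1) poorly.
cells = [(0, 0), (0, 0), (0, 0), (1, 1), (1, 1), (1, 1)]
pred = [0.10, 0.20, 0.30, 0.15, 0.25, 0.35]
obs  = [0.12, 0.19, 0.31, 0.40, 0.10, 0.22]
heat = correlation_heatmap(cells, pred, obs, shape=(2, 2))
```

A planner could then treat low-correlation cells as regions where the deterministic model should not be trusted.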
Abstract:
Exploiting wind energy is one possible way to extend flight duration for Unmanned Aerial Vehicles. Wind energy can also be used to minimise energy consumption along a planned path. In this paper, we consider uncertain time-varying wind fields and plan a path through them. A Gaussian distribution is used to model the uncertainty in the time-varying wind fields, and a Markov Decision Process is used to plan a path based upon that uncertainty. Simulation results are presented that compare the energy consumption and travel time of the direct line of flight between the start and target points with those of our planned path. The result is a robust path through the most-visited cells obtained while sampling the Gaussian distribution of the wind field in each cell.
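The abstract does not specify the planner's details; as a minimal sketch of planning on expected wind cost, here is value iteration on a small grid where each cell's traversal cost is the mean of an assumed Gaussian wind-cost model (with deterministic motion, the MDP reduces to backward dynamic programming). The grid, costs, and goal placement are all invented.

```python
# Hedged sketch: compute the minimum expected energy-to-go on a grid
# where leaving cell (r, c) costs mean_cost[r][c] (the Gaussian mean of
# the wind-energy cost) and the vehicle may move east or south toward
# the goal at the bottom-right corner.
def plan(mean_cost, rows, cols):
    V = [[0.0] * cols for _ in range(rows)]  # V[r][c]: expected cost-to-go
    for r in range(rows - 1, -1, -1):
        for c in range(cols - 1, -1, -1):
            if r == rows - 1 and c == cols - 1:
                continue  # goal cell: zero cost-to-go
            options = []
            if c + 1 < cols:
                options.append(V[r][c + 1])  # move east
            if r + 1 < rows:
                options.append(V[r + 1][c])  # move south
            V[r][c] = mean_cost[r][c] + min(options)
    return V

# Mean wind-energy cost of leaving each cell (illustrative values only).
mean_cost = [[1.0, 5.0, 1.0],
             [1.0, 1.0, 1.0],
             [5.0, 5.0, 1.0]]
V = plan(mean_cost, 3, 3)
print(V[0][0])  # expected energy of the best path from the start cell
```

Sampling the Gaussian per cell and replanning, as the paper describes, would then favour cells that are cheap across many samples rather than cheap only on average.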
The use of virtual prototyping to rehearse the sequence of construction work involving mobile cranes
Abstract:
Purpose – Rehearsing practical site operations is without doubt one of the most effective methods for minimising planning mistakes, because of the learning that takes place during the rehearsal activity. However, real rehearsal is not a practical solution for on-site construction activities, as it not only involves a considerable amount of cost but can also have adverse environmental implications. One approach to overcoming this is the use of virtual rehearsals. The purpose of this paper is to investigate an approach to simulating the motion of cranes in order to test the feasibility of associated construction sequencing and generate construction schedules for review and visualisation. Design/methodology/approach – The paper describes a system involving two technologies, virtual prototyping (VP) and four-dimensional (4D) simulation, to assist construction planners in testing the sequence of construction activities when mobile cranes are involved. The system consists of five modules, comprising input, database, equipment, process and output, and is capable of detecting potential collisions. A real-world trial is described in which the system was tested and validated. Findings – Feedback from the planners involved in the trial indicated that they found the system to be useful in its present form and that they would welcome its further development into a fully automated platform for validating construction sequencing decisions. Research limitations/implications – The tool has the potential to provide a cost-effective means of improving construction planning. However, it is at present limited to the specific case of mobile crane movement. Originality/value – This paper presents a large-scale, real-life case of applying VP technology in planning construction processes and activities.
Abstract:
Considerable attention has been given to the development of renewable energy due to the imminent depletion of fossil fuels and environmental concerns over global warming. It is therefore necessary to identify all available alternative sources of energy to meet the increasing energy demand of Bangladesh. Among the alternative sources of energy available in Bangladesh, bio-oil is recognised as a promising alternative energy source. At present, bio-oil is used only in vehicles and power plants after some upgrading; it is not used for domestic purposes such as cooking and lighting because of its high density and viscosity. A gravity stove is designed to use this dense, viscous bio-oil for cooking. The efficiency of the gravity stove with dense, viscous bio-oil (karanj) is 11.81%, compared with 17.80% for a kerosene stove, and the discharge of karanj oil through the gravity stove is sufficient for continuous burning.
Abstract:
The automotive industry has been the focus of digital human modeling (DHM) research and application for many years. In the highly competitive marketplace for personal transportation, the desire to improve the customer’s experience has driven extensive research in both the physical and cognitive interaction between the vehicle and its occupants. Human models provide vehicle designers with tools to view and analyze product interactions before the first prototypes are built, potentially improving the design while reducing cost and development time. The focus of DHM research and applications began with prediction and representation of static postures for purposes of driver workstation layout, including assessments of seat adjustment ranges and exterior vision. Now DHMs are used for seat design and assessment of driver reach and ingress/egress. DHMs and related simulation tools are expanding into the cognitive domain, with computational models of perception and motion, and into the dynamic domain with models of physical responses to ride and vibration. Moreover, DHMs are now widely used to analyze the ergonomics of vehicle assembly tasks. In this case, the analysis aims to determine whether workers can be expected to complete the tasks safely and with good quality. This preface provides a review of the literature to provide context for the nine new papers presented in this special issue.
Abstract:
A significant issue encountered when fusing data received from multiple sensors is the accuracy of the timestamp associated with each piece of data. This is particularly important in applications such as Simultaneous Localisation and Mapping (SLAM), where vehicle velocity forms an important part of the mapping algorithms; on fast-moving vehicles, even millisecond inconsistencies in data timestamping can produce errors which need to be compensated for. The timestamping problem is compounded in a robot swarm environment due to the use of non-deterministic readily-available hardware (such as 802.11-based wireless) and inaccurate clock synchronisation protocols (such as the Network Time Protocol (NTP)). As a result, the synchronisation of the clocks between robots can be out by tens to hundreds of milliseconds, making correlation of data difficult and preventing the units from performing synchronised actions such as triggering cameras or intricate swarm manoeuvres. In this thesis, a complete data fusion unit is designed, implemented and tested. The unit, named BabelFuse, is able to accept sensor data from a number of low-speed communication buses (such as RS232, RS485 and CAN Bus) and also to timestamp events that occur on General Purpose Input/Output (GPIO) pins, referencing a submillisecond-accurate wirelessly-distributed "global" clock signal. In addition to its timestamping capabilities, it can also be used to trigger an attached camera at a predefined start time and frame rate. This functionality enables the creation of a wirelessly-synchronised distributed image acquisition system over a large geographic area; a real-world application of this functionality is a platform to facilitate wirelessly-distributed 3D stereoscopic vision. A ‘best-practice’ design methodology is adopted within the project to ensure the final system operates according to its requirements.
Initially, requirements are generated, from which a high-level architecture is distilled. This architecture is then converted into a hardware specification and low-level design, which is then manufactured. The manufactured hardware is verified to ensure it operates as designed, and firmware and Linux Operating System (OS) drivers are written to provide the features and connectivity required of the system. Finally, integration testing is performed to ensure the unit functions as per its requirements. The BabelFuse system comprises a single Grand Master unit, which is responsible for maintaining the absolute value of the "global" clock. Slave nodes then determine their local clock offset from that of the Grand Master via synchronisation events which occur multiple times per second. The mechanism used for wirelessly synchronising the clocks between the boards makes use of specific hardware and a firmware protocol based on elements of the IEEE-1588 Precision Time Protocol (PTP). With the key requirement of the system being submillisecond-accurate clock synchronisation (as a basis for timestamping and camera triggering), automated testing is carried out to monitor the offsets between each Slave and the Grand Master over time. A common strobe pulse is also sent to each unit for timestamping; the correlation between the timestamps of the different units is used to validate the clock offset results. Analysis of the automated test results shows that the BabelFuse units are almost three orders of magnitude more accurate than their requirement; the clocks of the Slave and Grand Master units do not differ by more than three microseconds over a running time of six hours, and the mean clock offset of Slaves to the Grand Master is less than one microsecond. The common strobe pulse used to verify the clock offset data yields a positive result, with a maximum variation between units of less than two microseconds and a mean value of less than one microsecond.
The camera triggering functionality is verified by connecting the trigger pulse output of each board to a four-channel digital oscilloscope and setting each unit to output a 100 Hz periodic pulse with a common start time. The resulting waveform shows a maximum variation between the rising edges of the pulses of approximately 39 µs, well below the target of 1 ms.
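The thesis's verification approach, timestamping a common strobe pulse on every unit and comparing the results against the Grand Master, can be sketched as a simple offset calculation. The timestamp values below are invented for illustration and are not the thesis's measurements.

```python
# Sketch: compute clock-offset statistics from strobe timestamps logged
# by the Grand Master and one Slave for the same physical pulses.
# All values are in microseconds and are illustrative only.
master = [1_000_000.0, 2_000_000.0, 3_000_000.0]
slave  = [1_000_000.8, 2_000_001.4, 2_999_999.6]

offsets = [abs(s - m) for s, m in zip(slave, master)]
max_offset = max(offsets)
mean_offset = sum(offsets) / len(offsets)
print(max_offset, mean_offset)  # compare against the sync requirement
```

In the actual system, an automated test would run this comparison over hours of logged pulses and flag any offset exceeding the submillisecond requirement.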