978 results for DISTRIBUTED STRAIN
Abstract:
In rural low-voltage networks, distribution lines are usually highly resistive. When many distributed generators are connected to such lines, power sharing among them is difficult under conventional droop control, as the real and reactive power are strongly coupled. A high droop gain can alleviate this problem but may drive the system to instability. To overcome this, two droop control methods are proposed for accurate load sharing with a frequency droop controller. The first method assumes no communication among the distributed generators and regulates the output voltage and frequency to ensure acceptable load sharing. For this purpose, the droop equations are modified with a transformation matrix based on the line R/X ratio. The second proposed method, which requires only minimal low-bandwidth communication, modifies the reference frequency of the distributed generators based on the active and reactive power flow in the lines connected to the points of common coupling. The performance of these two proposed controllers is compared, through time-domain simulation of a test system, with that of a controller that relies on an expensive high-bandwidth communication system. The magnitudes of the power-sharing errors of the three droop control schemes are evaluated and tabulated.
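For orientation, a minimal sketch of the kind of R/X-based decoupling the abstract refers to, written here with conventional P-f and Q-V droop and one transformation matrix that is common in the literature; the paper's exact equations are not reproduced:

\begin{align}
  \begin{bmatrix} P' \\ Q' \end{bmatrix}
  &= \frac{1}{\sqrt{R^{2}+X^{2}}}
     \begin{bmatrix} X & -R \\ R & X \end{bmatrix}
     \begin{bmatrix} P \\ Q \end{bmatrix}, \\
  f &= f^{*} - k_{p}\,\bigl(P' - P'^{*}\bigr), \qquad
  V  = V^{*} - k_{q}\,\bigl(Q' - Q'^{*}\bigr),
\end{align}

where R and X are the line resistance and reactance; the droop laws then act on the transformed powers P' and Q', which remain decoupled even when the line is highly resistive.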
Abstract:
In this paper we present a novel distributed coding protocol for multi-user cooperative networks. The proposed protocol exploits existing orthogonal space-time block codes to achieve higher diversity gain by repeating the code across time and space (the available relay nodes). The achievable diversity gain depends on the number of relay nodes that can fully decode the signal from the source. These relay nodes then form a space-time code to cooperatively relay to the destination over a number of time slots. However, the improved diversity gain is achieved at the expense of transmission rate. The design principles of the proposed distributed space-time code and the issues related to the transmission rate-diversity trade-off are discussed in detail. We show that the proposed distributed space-time coding protocol outperforms existing distributed codes with a variable transmission rate.
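Purely as an illustration (the paper's specific code construction is not reproduced here), the Alamouti code is one orthogonal space-time block code that decoding relays could repeat across relays and time slots; with k information symbols sent over T slots the rate is R = k/T, so adding relay time slots raises the achievable diversity order at the cost of rate:

\[
  G_{2} =
  \begin{bmatrix}
    s_{1} & s_{2} \\
    -s_{2}^{*} & s_{1}^{*}
  \end{bmatrix},
  \qquad
  R = \frac{k}{T}.
\]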
Abstract:
The culture of mashups which is examined by the contributions collected in this volume is a symptom of a wider paradigm shift in our engagement with information – a term which should be understood here in its broadest sense, ranging from factual material to creative works. It is a shift which has been a long time coming and has had many precedents, from the collage art of the Dadaists in the 1920s to the music mixtapes of the 70s and 80s, and finally to the explosion of mashup‐style practices that was enabled by modern computing technologies.
Abstract:
This paper explores design thinking from the perspective of designing new forms of interaction to engage people in community change initiatives. A case study of an agile ridesharing system is presented. We describe the fundamental premise of the design approach taken: deploying simple interactive prototypes for use by communities in order to test the design hypothesis, evolve the design in use and grow the community of participants. Real-time use data and feedback from participants influence our understanding of the design approach and feed into the gradual evolution of the prototype while it continues to be used. We then reflect upon this form of evolutionary distributed design thinking. In contrast to the conventional IT wisdom of building systems to automate ride matching and fare calculation using structured forms, our initial phase of design revealed a preference for informal messaging and negotiation, and caution in the sharing of specific location information.
Abstract:
This study examines the relationships between job demands (in the form of role stressors and emotional management) and employee burnout amongst high-contact service employees. Employees in customer-facing roles are frequently required to manage overwhelming, conflicting or ambiguous demands, which they may feel ill-equipped to handle. Simultaneously, they must manage the emotions they display towards customers, suppressing some and expressing others, be they genuine or contrived. If the in-role effort required of employees exceeds their capacity to cope, burnout may result. Burnout, in turn, can have serious detrimental consequences for the psychological well-being of employees. We find that both emotional management and role stressors affect burnout. We also confirm that burnout predicts psychological strain. In line with the Job Demands-Resources model, we examine the mitigating effect of perceived support on these relationships but do not find a significant effect.
Abstract:
The complete nucleotide sequence of rice tungro spherical virus (RTSV) strain Vt6, originally from Mindanao, the Philippines, and more virulent to resistant rice cultivars, was determined and compared with the published sequence of the Philippine-type strain A (RTSV-A-Shen). RTSV-A has been reported to be unable to infect the resistant rice cultivar TKM 6 (10). RTSV-Vt6 and RTSV-A-Shen share 90% and 95% homology at the nucleotide and amino-acid levels, respectively. The N-terminal leader sequence of RTSV-Vt6 contains a 39-amino-acid region (positions 65 to 103) that is completely different from that of RTSV-A-Shen; the difference results from frameshifts caused by nucleotide insertions and deletions. To confirm the amino-acid sequence differences in the leader polypeptide, the same region was cloned and sequenced from a newly obtained variant of RTSV type 6 collected in the fields of IRRI and from seven field isolates from Mindanao, the Philippines. Since all the sequences of the target region are identical to that of the Vt6 leader polypeptide, the sequence difference in the leader region does not appear to correlate with the virulence of Vt6.
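As an aside, a small Python sketch (with hypothetical placeholder sequences, not RTSV data) of how pairwise identity figures such as the 90% nucleotide and 95% amino-acid values quoted above are typically computed from aligned sequences; it is not the authors' analysis pipeline:

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity between two pre-aligned, equal-length sequences (gaps as '-')."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    # compare only positions where neither sequence has a gap
    compared = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    matches = sum(1 for a, b in compared if a == b)
    return 100.0 * matches / len(compared)

if __name__ == "__main__":
    nt_a = "ATGGCGTTAACC"   # hypothetical aligned nucleotide sequences
    nt_b = "ATGGCGTTGACC"
    print(f"nucleotide identity: {percent_identity(nt_a, nt_b):.1f}%")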
Abstract:
We consider the problem of object tracking in a wireless multimedia sensor network (we focus mainly on the camera component in this work). The vast majority of current object tracking techniques, whether centralised or distributed, assume unlimited energy, meaning that these techniques do not translate well to the constraints of low-power distributed systems. In this paper we develop and analyse a highly scalable, distributed strategy for object tracking in wireless camera networks with limited resources. In the proposed system, cameras transmit descriptions of objects to a subset of neighbours, determined using a predictive forwarding strategy. The received descriptions are then matched against locally generated descriptions at the next camera on the object's path using a probability-maximisation process. We show, via simulation, that our predictive forwarding and probabilistic matching strategy can significantly reduce the number of object misses, ID switches and ID losses; it can also reduce the number of required transmissions by up to 67% compared with a simple broadcast scenario. We show that our system performs well under realistic assumptions about matching objects' appearance using colour.
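The following Python sketch illustrates, under simplifying assumptions, the two ideas described above: predictive forwarding (send an object's descriptor only to the neighbour whose field of view the object is predicted to enter) and probabilistic matching (associate a received descriptor with the locally generated descriptor of maximum likelihood). The camera layout, descriptors and Gaussian likelihood are made-up assumptions, not the authors' implementation:

import math

def predict_next_camera(position, velocity, neighbour_positions, horizon=1.0):
    """Pick the neighbour closest to the object's linearly extrapolated position."""
    px = position[0] + velocity[0] * horizon
    py = position[1] + velocity[1] * horizon
    return min(neighbour_positions,
               key=lambda cam: math.dist((px, py), neighbour_positions[cam]))

def match_descriptor(received, local_descriptors, sigma=0.1):
    """Return the local object ID with maximum Gaussian likelihood of matching."""
    def likelihood(desc):
        d2 = sum((r - l) ** 2 for r, l in zip(received, desc))
        return math.exp(-d2 / (2 * sigma ** 2))
    return max(local_descriptors, key=lambda oid: likelihood(local_descriptors[oid]))

if __name__ == "__main__":
    neighbours = {"cam_B": (5.0, 0.0), "cam_C": (0.0, 5.0)}     # hypothetical layout
    target = predict_next_camera((1.0, 1.0), (1.0, 0.2), neighbours)
    local = {"obj_1": [0.8, 0.1, 0.1], "obj_2": [0.2, 0.3, 0.5]}  # colour histograms
    print(target, match_descriptor([0.75, 0.15, 0.10], local))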
Abstract:
We describe a novel two-stage approach to object localization and tracking using a network of wireless cameras and a mobile robot. In the first stage, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this information, along with the image-plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to track objects. We present results with a nine-node indoor camera network to demonstrate that this approach is feasible and offers an acceptable level of accuracy in terms of object locations.
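A hedged sketch of the calibration step described above: if the scene is assumed planar, each camera can estimate a homography from its image plane to the global (ground-plane) frame from correspondences between the robot's image-plane locations and the global positions it broadcasts. The standard direct linear transform (DLT) is used here, with made-up point values; the paper's exact mapping method may differ:

import numpy as np

def fit_homography(image_pts, world_pts):
    """Estimate H mapping image coordinates to world coordinates from >= 4 correspondences."""
    A = []
    for (u, v), (x, y) in zip(image_pts, world_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)          # null-space vector gives H up to scale

def image_to_world(H, u, v):
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

if __name__ == "__main__":
    image_pts = [(100, 200), (400, 210), (390, 420), (110, 400)]   # hypothetical pixels
    world_pts = [(0.0, 0.0), (3.0, 0.0), (3.0, 2.0), (0.0, 2.0)]   # broadcast positions (m)
    H = fit_homography(image_pts, world_pts)
    print(image_to_world(H, 250, 300))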
Abstract:
A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication and control into a unified framework at different levels of machine/factory operations and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require data to be accurate, timely and lossless; however, because of random transmission delays and packet losses, the performance of a control system may be badly degraded and the system may even be rendered unstable. The main challenge of NCS design is to maintain and improve stable control performance, which requires communication and control methodologies to be designed together. In recent decades, Ethernet and 802.11 networks have been introduced into control networks, and have even replaced traditional fieldbus products in some real-time control applications, because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks are not designed for distributed control applications, two aspects of NCS research need to be addressed to make these networks suitable for control systems in industrial environments. From the networking perspective, communication protocols need to be designed to satisfy NCS communication requirements such as real-time communication and high-precision clock consistency. From the control perspective, methods to compensate for network-induced delays and packet losses are important for NCS design. To make Ethernet-based and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. Firstly, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Secondly, a high-precision relative clock synchronisation protocol is designed and implemented. Thirdly, transmission delays in 802.11 networks for soft real-time control applications are modelled using a Markov chain model in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern; this model accurately captures the trade-off between real-time performance and throughput. Furthermore, a cross-layer optimisation scheme, featuring application-layer flow rate adaptation, is designed to achieve a trade-off between real-time and throughput performance in a typical NCS scenario with a wireless local area network. Fourthly, as a co-design approach for both the network and the controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, in which simultaneous end-to-end delays and packet losses during packet transmissions from sensors to actuators are tackled. The effectiveness of the proposed predictive compensation approach is demonstrated using our hybrid NCS simulation environment.
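As an illustration of the co-design idea (not the thesis' algorithm), the Python sketch below shows one simple form of predictive compensation: when sensor-to-actuator packets are delayed or lost, the actuator side propagates the last received state through a discrete plant model for the missing steps and applies the feedback law to the predicted state. The plant matrices, gain and loss pattern are assumptions made for the example:

import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed discrete plant: x_{k+1} = A x_k + B u_k
B = np.array([[0.005], [0.1]])
K = np.array([[0.8, 1.2]])               # assumed stabilising state-feedback gain

def compensated_control(last_state, last_control, steps_missing):
    """Predict the state across 'steps_missing' lost/late samples, then apply u = -K x."""
    x = last_state
    for _ in range(steps_missing):
        x = A @ x + B @ last_control     # open-loop prediction over the communication gap
    return -K @ x, x

if __name__ == "__main__":
    x_last = np.array([[1.0], [0.0]])    # last state received before the gap
    u_last = np.array([[0.0]])           # last control actually applied
    u, x_pred = compensated_control(x_last, u_last, steps_missing=3)
    print(u.ravel(), x_pred.ravel())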
Abstract:
Distributed pipeline asset systems are crucial to society. The deterioration of these assets and the optimal allocation of a limited budget for their maintenance are crucial challenges for water utility managers. Decision makers should be assisted with optimal solutions for selecting the best maintenance plan with respect to available resources and management strategies. Much research effort has been dedicated to the development of optimal strategies for the maintenance of water pipes, but most of these strategies schedule individual pipes. Optimal group scheduling of replacement jobs for groups of pipes or other linear assets has so far received little attention in the literature. In common practice, replacement planners manually select two or three pipes, using ambiguous criteria, to group into one replacement job. This is obviously not the best grouping and may not be cost effective, especially when the total cost can run to many millions of dollars. In this paper, an optimal group scheduling scheme with three decision criteria for distributed pipeline asset maintenance decisions is proposed, and a Maintenance Grouping Optimization (MGO) model with multiple criteria is developed. An immediate challenge of such modelling is the scalability of the vast combinatorial solution space. To address this issue, a modified genetic algorithm is developed together with a Judgment Matrix, which corresponds to the various combinations of pipe replacement schedules. An industrial case study based on a section of a real water distribution network was conducted to test the new model. The results show that the new schedule generated a significant cost reduction compared with the schedule without grouping of pipes.
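A minimal Python sketch of grouping pipe replacements with a genetic algorithm, in the spirit of the approach described above; the chromosome encoding (a group label per pipe), the cost model (one mobilisation cost per group plus a spread penalty within each group) and all numbers are illustrative assumptions, not the MGO model or the Judgment Matrix from the paper:

import random

PIPE_POSITIONS = [0.0, 0.1, 0.15, 2.0, 2.05, 5.0]   # hypothetical pipe positions (km)
MOBILISATION_COST = 10.0
DISTANCE_PENALTY = 4.0

def cost(chromosome):
    """Total cost: one mobilisation per distinct group plus a spread penalty within groups."""
    groups = {}
    for pipe, label in enumerate(chromosome):
        groups.setdefault(label, []).append(PIPE_POSITIONS[pipe])
    total = MOBILISATION_COST * len(groups)
    for members in groups.values():
        total += DISTANCE_PENALTY * (max(members) - min(members))
    return total

def evolve(pop_size=40, generations=200, n_groups=3):
    n = len(PIPE_POSITIONS)
    pop = [[random.randrange(n_groups) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < 0.2:             # mutation: reassign one pipe's group
                child[random.randrange(n)] = random.randrange(n_groups)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

if __name__ == "__main__":
    best = evolve()
    print(best, cost(best))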
Abstract:
The broad research questions of the book are: How can successful, interdisciplinary collaboration contribute to research innovation through Practice-led research? What contributes to the design, production and curation of successful new media art? What are the implications of exhibiting it across dual sites for artists, curators and participant audiences? Is it possible to create an 'intimate transaction' between people who are separated by vast distances but joined by interfaces and distributed networks? Centred on a new media work of the same name by the Transmute Collective (led by Keith Armstrong), this book provides insights from multidisciplinary perspectives. Visual, sound and performance artists, furniture designers, spatial architects, technology systems designers, and curators who collaborated in the production of Intimate Transactions discuss their design philosophies, working processes and resolution of this major new media work. Analytical and philosophical essays by international writers complement these writings on production. They consider how new media art, like Intimate Transactions, challenges traditional understandings of art, curatorial installation and exhibition experience because of the need to take into account interaction, the reconfiguration of space, co-presence, performativity and inter-site collaboration.
Abstract:
This paper presents a study into the behaviour of extruded polystyrene foam at low strain rates. The foam is being studied in order to assess its potential for use as part of an innovative new design of portable road safety barrier that aims to consume less water and reduce rates of serious injury. The foam was tested at a range of low strain rates, with the stress and strain behaviour of the foam specimens being recorded. The energy absorption capabilities of the foam were assessed, as well as the response of the foam to multiple loadings. The experimental data were then used to create a material model of the foam for use in the explicit finite element solver LS-DYNA. Simulations carried out using this material model showed excellent correlation between the numerical model and the experimental data.
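A small illustrative calculation, with made-up numbers rather than the paper's data: the energy absorbed per unit volume by a foam specimen is the area under its compressive stress-strain curve, which can be estimated from test data by trapezoidal integration:

import numpy as np

strain = np.array([0.00, 0.05, 0.10, 0.20, 0.40, 0.60])        # engineering strain (-)
stress_mpa = np.array([0.00, 0.30, 0.35, 0.38, 0.45, 0.90])    # compressive stress (MPa)

# 1 MPa integrated over unit strain corresponds to 1 MJ/m^3 of absorbed energy
energy_density = np.trapz(stress_mpa, strain)
print(f"absorbed energy density: {energy_density:.3f} MJ/m^3")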
Abstract:
This study investigated the grain size dependence of the mechanical properties and deformation mechanisms of microcrystalline (mc) and nanocrystalline (nc: grain size below 100 nm) Mg-5wt% Al alloys. The Hall-Petch relationship was investigated by both instrumented indentation tests and compression tests, and the results from the two test types match well. The breakdown of the Hall-Petch relationship and the elevated strain rate sensitivity (SRS) of the present Mg-5wt% Al alloys when the grain size was reduced below 58 nm indicate a more significant role of grain-boundary (GB) mediated mechanisms in the plastic deformation process. However, the relatively small SRS values, compared with those expected for GB sliding and Coble creep, suggest that plastic deformation in the current study is still dominated by dislocation-mediated mechanisms.
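For reference, the two standard relations behind the quantities discussed above: the Hall-Petch relation between yield strength and grain size d (whose breakdown at the finest grain sizes is reported here), and the strain rate sensitivity m defined from the flow stress and strain rate:

\[
  \sigma_y = \sigma_0 + k_{\mathrm{HP}}\, d^{-1/2},
  \qquad
  m = \left. \frac{\partial \ln \sigma}{\partial \ln \dot{\varepsilon}} \right|_{\varepsilon,\,T}.
\]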
Abstract:
Background: In order to provide insights into the complex biochemical processes inside a cell, modelling approaches must find a balance between achieving an adequate representation of the physical phenomena and keeping the associated computational cost within reasonable limits. This issue is particularly pressing when spatial inhomogeneities have a significant effect on the system's behaviour. In such cases, a spatially resolved stochastic method can better portray the biological reality, but the corresponding computer simulations can in turn be prohibitively expensive. Results: We present a method that incorporates spatial information by means of tailored, probability-distributed time delays. These distributions can be obtained directly from a single in silico experiment or a suitable set of in vitro experiments and are subsequently fed into a delay stochastic simulation algorithm (DSSA), achieving a good compromise between computational cost and a much more accurate representation of spatial processes such as molecular diffusion and translocation between cell compartments. Additionally, we present a novel alternative approach based on delay differential equations (DDE) that can be used in scenarios of high molecular concentrations and low noise propagation. Conclusions: Our proposed methodologies accurately capture and incorporate certain spatial processes into temporal stochastic and deterministic simulations, increasing their accuracy at low computational cost. This is of particular importance given that the time spans of cellular processes are generally larger (possibly by several orders of magnitude) than those achievable by current spatially resolved stochastic simulators. Hence, our methodology allows users to explore cellular scenarios under the effects of diffusion and stochasticity over time spans that were, until now, simply unfeasible. Our methodologies are supported by theoretical considerations on the different modelling regimes, i.e. spatial vs. delay-temporal, as indicated by the corresponding Master Equations and presented elsewhere.
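A minimal Python sketch of a delay stochastic simulation algorithm of the kind described above: an ordinary Gillespie step for firing reactions, plus a queue of pending delayed updates whose completion times are drawn from a probability-distributed delay (a gamma distribution here, standing in for a spatial transport process). The toy two-compartment network, rates and delay parameters are illustrative assumptions, not the authors' model:

import heapq
import math
import random

def dssa(t_end=50.0, k_produce=1.0, k_transport=0.5, delay_shape=3.0, delay_scale=1.0):
    t, nucleus, cytoplasm = 0.0, 0, 0      # species counts in the two compartments
    pending = []                           # completion times of delayed translocations
    while t < t_end:
        propensities = [k_produce, k_transport * nucleus]
        a0 = sum(propensities)
        t_next = t + (random.expovariate(a0) if a0 > 0 else math.inf)
        if pending and pending[0] <= min(t_next, t_end):
            t = heapq.heappop(pending)     # a delayed translocation completes first
            cytoplasm += 1
            continue
        if t_next > t_end:
            break
        t = t_next
        if random.random() * a0 < propensities[0]:
            nucleus += 1                   # production in the nucleus (instantaneous)
        else:
            nucleus -= 1                   # start translocation; arrival is delayed
            heapq.heappush(pending, t + random.gammavariate(delay_shape, delay_scale))
    return nucleus, cytoplasm

if __name__ == "__main__":
    random.seed(1)
    print(dssa())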