425 results for distributed generation
Abstract:
We consider the problem of object tracking in a wireless multimedia sensor network (we focus mainly on the camera component in this work). The vast majority of current object tracking techniques, whether centralised or distributed, assume unlimited energy, meaning these techniques do not translate well to the constraints of low-power distributed systems. In this paper we develop and analyse a highly scalable, distributed strategy for object tracking in wireless camera networks with limited resources. In the proposed system, cameras transmit descriptions of objects to a subset of neighbours, determined using a predictive forwarding strategy. The received descriptions are then matched against locally generated descriptions at the next camera on the object's path using a probability maximisation process. We show, via simulation, that our predictive forwarding and probabilistic matching strategy can significantly reduce the number of object misses, ID switches and ID losses; it can also reduce the number of required transmissions over a simple broadcast scenario by up to 67%. We show that our system performs well under realistic assumptions about matching objects' appearance using colour.
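For concreteness, the matching step might look like the following minimal sketch, assuming each object description is an L1-normalised colour histogram compared with a Bhattacharyya coefficient; the representation, the scoring function and the 0.8 acceptance threshold are illustrative assumptions rather than the paper's exact probability-maximisation formulation.

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Similarity between two L1-normalised colour histograms (1.0 = identical)."""
    return float(np.sum(np.sqrt(h1 * h2)))

def match_description(received, local_detections, threshold=0.8):
    """Assign a received object description to the local detection that
    maximises appearance similarity, mimicking a probability-maximisation
    match; below the threshold the object is treated as unmatched."""
    best_id, best_score = None, threshold
    for obj_id, hist in local_detections.items():
        score = bhattacharyya(received, hist)
        if score > best_score:
            best_id, best_score = obj_id, score
    return best_id
```

A camera receiving a description from an upstream neighbour would run it against its own current detections; a result of None would be treated as an unmatched (new or missed) object.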
Abstract:
Voluminous (≥3.9 × 10⁵ km³), prolonged (∼18 Myr) explosive silicic volcanism makes the mid-Tertiary Sierra Madre Occidental province of Mexico one of the largest intact silicic volcanic provinces known. Previous models have proposed an assimilation–fractional crystallization origin for the rhyolites involving closed-system fractional crystallization from crustally contaminated andesitic parental magmas, with <20% crustal contributions. The lack of isotopic variation among the lower crustal xenoliths inferred to represent the crustal contaminants and coeval Sierra Madre Occidental rhyolite and basaltic andesite to andesite volcanic rocks has constrained interpretations for larger crustal contributions. Here, we use zircon age populations as probes to assess crustal involvement in Sierra Madre Occidental silicic magmatism. Laser ablation-inductively coupled plasma-mass spectrometry analyses of zircons from rhyolitic ignimbrites from the northeastern and southwestern sectors of the province yield U–Pb ages that show significant age discrepancies of 1–4 Myr compared with previously determined K/Ar and ⁴⁰Ar/³⁹Ar ages from the same ignimbrites; the age differences are greater than the errors attributable to analytical uncertainty. Zircon xenocrysts with new overgrowths in the Late Eocene to earliest Oligocene rhyolite ignimbrites from the northeastern sector provide direct evidence for some involvement of Proterozoic crustal materials, and, potentially more importantly, the derivation of zircon from Mesozoic and Eocene age, isotopically primitive, subduction-related igneous basement. The youngest rhyolitic ignimbrites from the southwestern sector show even stronger evidence for inheritance in the age spectra, but lack old inherited zircon (i.e. Eocene or older). Instead, these Early Miocene ignimbrites are dominated by antecrystic zircons, representing >33 to ∼100% of the dated population; most antecrysts range in age between ∼20 and 32 Ma. A sub-population of the antecrystic zircons is chemically distinct in terms of their high U (>1000 ppm to 1.3 wt %) and heavy REE contents; these are not present in the Oligocene ignimbrites in the northeastern sector of the Sierra Madre Occidental. The combination of antecryst zircon U–Pb ages and chemistry suggests that much of the zircon in the youngest rhyolites was derived by remelting of partially molten to solidified igneous rocks formed during preceding phases of Sierra Madre Occidental volcanism. Strong Zr undersaturation, and estimations for very rapid dissolution rates of entrained zircons, preclude coeval mafic magmas being parental to the rhyolite magmas by a process of lower crustal assimilation followed by closed-system crystal fractionation as interpreted in previous studies of the Sierra Madre Occidental rhyolites. Mafic magmas were more probably important in providing a long-lived heat and material flux into the crust, resulting in the remelting and recycling of older crust and newly formed igneous materials related to Sierra Madre Occidental magmatism.
Abstract:
SAP and its research partners have been developing a language for describing details of Services from various viewpoints called the Unified Service Description Language (USDL). At the time of writing, version 3.0 describes technical implementation aspects of services, as well as stakeholders, pricing, lifecycle, and availability. Work is also underway to address other business and legal aspects of services. This language is designed to be used in service portfolio management, with a repository of service descriptions being available to various stakeholders in an organisation to allow for service prioritisation, development, deployment and lifecycle management. The structure of the USDL metadata is specified using an object-oriented metamodel that conforms to UML, MOF and EMF Ecore. As such it is amenable to code generation for implementations of repositories that store service description instances. Although Web services toolkits can be used to make these programming language objects available as a set of Web services, the practicalities of writing distributed clients against over one hundred class definitions, containing several hundred attributes, will make for very large WSDL interfaces and highly inefficient “chatty” implementations. This paper gives the high-level design for a completely model-generated repository for any version of USDL (or any other data-only metamodel), which uses the Eclipse Modelling Framework’s Java code generation, along with several open source plugins, to create a robust, transactional repository running in a Java application with a relational datastore. However, the repository exposes a generated WSDL interface at a coarse granularity, suitable for distributed client code and user-interface creation. It uses heuristics to drive code generation to bridge between the Web service and EMF granularities.
Abstract:
hSSB1 is a newly discovered single-stranded DNA (ssDNA)-binding protein that is essential for efficient DNA double-strand break signalling through ATM. However, the mechanism by which hSSB1 functions to allow efficient signalling is unknown. Here, we show that hSSB1 is recruited rapidly to sites of double-strand DNA breaks (DSBs) in all interphase cells (G1, S and G2) independently of CtIP, MDC1 and the MRN complex (Rad50, Mre11, NBS1). However, expansion of hSSB1 from the DSB site requires the function of MRN. Strikingly, silencing of hSSB1 prevents foci formation as well as recruitment of MRN to sites of DSBs, and leads to a subsequent defect in resection of DSBs, as evidenced by defective RPA and ssDNA generation. Our data suggest that hSSB1 functions upstream of MRN to promote its recruitment at DSBs and is required for efficient resection of DSBs. These findings, together with previous work, establish essential roles of hSSB1 in controlling ATM activation and activity, and subsequent DSB resection and homologous recombination (HR).
Abstract:
DNA double-strand break (DSB) repair via the homologous recombination (HR) pathway is a multi-stage process, which results in repair of the DSB without loss of genetic information or fidelity. One essential step in this process is the generation of extended single-stranded DNA (ssDNA) regions at the break site. This ssDNA serves to induce cell cycle checkpoints and is required for Rad51-mediated strand invasion of the sister chromatid. Here, we show that human Exonuclease 1 (Exo1) is required for the normal repair of DSBs by HR. Cells depleted of Exo1 show chromosomal instability and hypersensitivity to ionising radiation (IR) exposure. We find that Exo1 accumulates rapidly at DSBs and is required for the recruitment of RPA and Rad51 to sites of DSBs, suggesting a role for Exo1 in ssDNA generation. Interestingly, the phosphorylation of Exo1 by ATM appears to regulate the activity of Exo1 following resection, allowing optimal Rad51 loading and the completion of HR repair. These data establish a role for Exo1 in the resection of DSBs in human cells, highlighting the critical requirement of Exo1 for DSB repair via HR and thus the maintenance of genomic stability.
Abstract:
This work reviews the rationale and processes for raising revenue and allocating funds to perform information intensive activities that are pertinent to the work of democratic government. ‘Government of the people, by the people, for the people’ expresses an idea that democratic government has no higher authority than the people who agree to be bound by its rules. Democracy depends on continually learning how to develop understandings and agreements that can sustain voting majorities on which democratic law making and collective action depends. The objective expressed in constitutional terms is to deliver ‘peace, order and good government’. Meeting this objective requires a collective intellectual authority that can understand what is possible; and a collective moral authority to understand what ought to happen in practice. Facts of life determine that a society needs to retain its collective competence despite a continual turnover of its membership as people die but life goes on. Retaining this ‘collective competence’ in matters of self-government depends on each new generation:
• acquiring a collective knowledge of how to produce goods and services needed to sustain a society and its capacity for self-government;
• learning how to defend society diplomatically and militarily in relation to external forces to prevent overthrow of its self-governing capacity; and
• learning how to defend society against divisive internal forces to preserve the authority of representative legislatures, allow peaceful dispute resolution and maintain social cohesion.
Abstract:
We describe a novel two-stage approach to object localization and tracking using a network of wireless cameras and a mobile robot. In the first stage, the robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this information, along with the image-plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to track objects. We present results with a nine-node indoor camera network to demonstrate that this approach is feasible and offers an acceptable level of accuracy in terms of object locations.
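The abstract does not spell out the image-plane-to-world mapping, but a standard choice under a planar-ground assumption is a homography fitted by the direct linear transformation (DLT) from the robot's broadcast positions and its observed image locations; the sketch below is a generic illustration, not necessarily the paper's calibration procedure.

```python
import numpy as np

def fit_homography(img_pts, world_pts):
    """Least-squares DLT estimate of the 3x3 homography H with
    world ~ H @ [u, v, 1], from >= 4 corresponding (image, world) pairs."""
    A = []
    for (u, v), (x, y) in zip(img_pts, world_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)  # right singular vector of smallest singular value

def image_to_world(H, u, v):
    """Map an image-plane point to ground-plane coordinates."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]
```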
Abstract:
This chapter explores a research project involving teachers working with some of the most disadvantaged young people in South Australia: children growing up in poverty, in families struggling with homelessness and ill-health, in the outer southern suburbs. Additionally, there were particular children who were struggling with intellectual, emotional and social difficulties extreme enough for them not to be included in a mainstream class. The research project made two crucial, interrelated moves to support teachers to tackle this tough work. First, the project had an explicit social justice agenda. We were not simply researching literacy outcomes, but literacy pedagogies for the students teachers were most worried about. And we wanted to understand how the material conditions of students’ everyday lifeworlds impacted on the working conditions of teachers’ schoolworlds. We sought to open up a discursive space where teachers could talk about poverty, violence, racism and classism in ways that would take them beyond despair and into new imaginings and positive action. Second, the project was designed to start from the urgent questions of early career teachers and to draw on the accumulated practice wisdom of their chosen mentors. Hence we designed not only a teacher-researcher community, but cross-generational networks. Our aim was to build the capacities of both generations to address long-standing educational problems in new ways that drew overtly on their different and complementary resources.
Abstract:
Trust can be used for neighbour formation to generate automated recommendations. User-assigned explicit rating data can be used for this purpose. However, explicit rating data are not always available. In this paper we present a new method of generating a trust network based on users' interest similarity. To identify interest similarity, we use users' personalised tag information. This trust network can be used to find the neighbours for making automated recommendations. Our experimental results show that the precision of the proposed method outperforms the traditional collaborative filtering approach.
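The abstract does not specify the similarity measure, so the following is a minimal sketch assuming tag-frequency profiles compared by cosine similarity, with an illustrative trust threshold of 0.3; both choices are assumptions, not necessarily the paper's method.

```python
from collections import Counter
from math import sqrt

def tag_similarity(tags_a, tags_b):
    """Cosine similarity between two users' tag-frequency profiles."""
    a, b = Counter(tags_a), Counter(tags_b)
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def build_trust_network(user_tags, min_sim=0.3):
    """Connect users whose tag profiles are similar enough; the resulting
    neighbourhoods replace rating overlap as the basis for recommendation."""
    users = list(user_tags)
    edges = {}
    for i, u in enumerate(users):
        for v in users[i + 1:]:
            s = tag_similarity(user_tags[u], user_tags[v])
            if s >= min_sim:
                edges[(u, v)] = s
    return edges
```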
Abstract:
Australia’s efforts to transition to a low-emissions economy have stagnated following the successive defeats of the Carbon Pollution Reduction Scheme. This failure should not, however, be regarded as the end of Australia’s efforts to make this transition. In fact, the opportunity now exists for Australia to refine its existing arrangements to enable this transition to occur more effectively. The starting point for this analysis is the legal arrangements applying to the electricity generation sector, which is the largest sectoral emitter of anthropogenic greenhouse gas emissions in Australia. Without an effective strategy to mitigate this sector’s contribution to anthropogenic climate change, it is unlikely that Australia will be able to transition towards a low-emissions economy. It is on this basis that this article assesses the dominant national legal arrangement – the Renewable Energy Target – underpinning the electricity generation sector’s efforts to become a low-emissions sector.
Abstract:
Smart matrices are required in bone tissue-engineered grafts that provide an optimal environment for cells and retain osteo-inductive factors for sustained biological activity. We hypothesized that a slow-degrading, heparin-incorporated hyaluronan (HA) hydrogel can preserve BMP-2, while an arterio–venous (A–V) loop can support axial vascularization to provide nutrition for a bioartificial bone graft. HA was evaluated for osteoblast growth and BMP-2 release. Porous PLDLLA–TCP–PCL scaffolds were produced by rapid prototyping technology and applied in vivo along with HA-hydrogel loaded with either primary osteoblasts or BMP-2. A microsurgically created A–V loop was placed around the scaffold, encased in an isolation chamber, in Lewis rats. HA-hydrogel supported growth of osteoblasts over 8 weeks and allowed sustained release of BMP-2 over 35 days. The A–V loop provided an angiogenic stimulus with the formation of vascularized tissue in the scaffolds. Bone-specific genes were detected by real-time RT-PCR after 8 weeks. However, no significant amount of bone was observed histologically. The heterotopic isolation chamber in combination with absent biomechanical stimulation might explain the insufficient bone formation despite adequate expression of bone-related genes. Optimization of the interplay of osteogenic cells and osteo-inductive factors might eventually generate sufficient amounts of axially vascularized bone grafts for reconstructive surgery.
Abstract:
A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication and control into a unified framework at different levels of machine/factory operations and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators, and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require data to be accurate, timely and lossless. However, because of random transmission delays and packet losses, the control performance of a control system may be severely degraded, and the control system rendered unstable. The main challenge of NCS design is to both maintain and improve the stable control performance of an NCS. To achieve this, communication and control methodologies have to be designed together. In recent decades, Ethernet and 802.11 networks have been introduced into control networks and have even replaced traditional fieldbus products in some real-time control applications, because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks are not designed for distributed control applications, two aspects of NCS research need to be addressed to make these communication networks suitable for control systems in industrial environments. From the perspective of networking, communication protocols need to be designed to satisfy the communication requirements of NCSs, such as real-time communication and high-precision clock consistency requirements. From the perspective of control, methods to compensate for network-induced delays and packet losses are important for NCS design. To make Ethernet-based and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. Firstly, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Secondly, a high-precision relative clock synchronisation protocol is designed and implemented. Thirdly, transmission delays in 802.11 networks for soft real-time control applications are modelled by use of a Markov chain model in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern. By using a Markov chain model, we can accurately model the tradeoff between real-time performance and throughput performance. Furthermore, a cross-layer optimisation scheme, featuring application-layer flow rate adaptation, is designed to achieve the tradeoff between certain real-time and throughput performance characteristics in a typical NCS scenario with a wireless local area network. Fourthly, as a co-design approach for both the network and the controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, in which simultaneous end-to-end delays and packet losses during packet transmissions from sensors to actuators are tackled. The effectiveness of the proposed predictive compensation approach is demonstrated using our hybrid NCS simulation environment.
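The thesis' compensator is not detailed in the abstract; a common networked-control pattern, sketched here purely as an illustration, has each controller packet carry a horizon of predicted future inputs so the actuator can bridge lost or late packets.

```python
from collections import namedtuple

# A controller packet carries predicted inputs for steps k0, k0+1, ...
Packet = namedtuple("Packet", ["k0", "u_seq"])

def actuator_input(buffer, packet, k):
    """Return the control input to apply at step k. New packets are buffered;
    if the packet for step k is lost or late, fall back to the newest earlier
    prediction horizon that still covers step k."""
    if packet is not None:
        buffer[packet.k0] = packet.u_seq
    covering = [k0 for k0, seq in buffer.items() if k0 <= k < k0 + len(seq)]
    if not covering:
        return 0.0  # no usable prediction: apply a safe default input
    k0 = max(covering)  # the most recent horizon reflects the freshest state
    return buffer[k0][k - k0]
```

On the controller side, u_seq would be computed by rolling a plant model forward from the latest state estimate, as in model predictive control.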
Abstract:
Distributed pipeline asset systems are crucial to society. The deterioration of these assets and the optimal allocation of a limited budget for their maintenance present crucial challenges for water utility managers. Decision makers should be assisted with optimal solutions to select the best maintenance plan given available resources and management strategies. Much research effort has been dedicated to the development of optimal strategies for the maintenance of water pipes. Most of these maintenance strategies are intended for scheduling individual water pipes. Optimal group scheduling of replacement jobs for groups of pipes or other linear assets has so far not received much attention in the literature. It is common practice for replacement planners to select two or three pipes manually, with ambiguous criteria, to group into one replacement job. This is obviously not the best solution for job grouping and may not be cost effective, especially when the total cost can run to many millions of dollars. In this paper, an optimal group scheduling scheme with three decision criteria for distributed pipeline asset maintenance decisions is proposed. A Maintenance Grouping Optimization (MGO) model with multiple criteria is developed. An immediate challenge of such modelling is to deal with the scalability of the vast combinatorial solution space. To address this issue, a modified genetic algorithm is developed together with a Judgment Matrix, which corresponds to the various combinations of pipe replacement schedules. An industrial case study based on a section of a real water distribution network was conducted to test the new model. The results of the case study show that the new schedule generated a significant cost reduction compared with a schedule that did not group pipes.
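The MGO model's three criteria and its Judgment Matrix are not given in the abstract, so the following toy sketch shows only the core grouping idea: a genetic search over assignments of pipes to replacement jobs, where each job incurs one shared mobilisation (setup) cost. The setup cost, group count and GA parameters are all illustrative assumptions.

```python
import random

def group_cost(groups, pipe_cost, setup_cost=1000.0):
    """Total cost of a grouping: each group pays one shared setup
    (mobilisation) cost plus the per-pipe replacement costs."""
    return sum(setup_cost + sum(pipe_cost[p] for p in g) for g in groups.values())

def mutate(assign, n_groups):
    """Reassign one randomly chosen pipe to a random group."""
    child = assign[:]
    child[random.randrange(len(child))] = random.randrange(n_groups)
    return child

def ga_group(pipe_cost, n_groups=3, pop=30, gens=200):
    """Toy genetic search over assignments of pipes to replacement jobs."""
    n = len(pipe_cost)

    def cost(assign):
        groups = {}
        for p, g in enumerate(assign):
            groups.setdefault(g, []).append(p)
        return group_cost(groups, pipe_cost)

    population = [[random.randrange(n_groups) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=cost)                 # elitist selection
        population = population[:pop // 2]
        population += [mutate(random.choice(population), n_groups)
                       for _ in range(pop - len(population))]
    return min(population, key=cost)
```

For example, ga_group([5200.0, 7100.0, 4300.0, 6050.0], n_groups=2) returns a group label per pipe; fewer, larger groups win whenever the shared setup saving outweighs any scheduling penalty (omitted here).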
Abstract:
The broad research questions of the book are: How can successful, interdisciplinary collaboration contribute to research innovation through Practice-led research? What contributes to the design, production and curation of successful new media art? What are the implications of exhibiting it across dual sites for artists, curators and participant audiences? Is it possible to create an 'intimate transaction' between people who are separated by vast distances but joined by interfaces and distributed networks? Centred on a new media work of the same name by the Transmute Collective (led by Keith Armstrong), this book provides insights from multidisciplinary perspectives. Visual, sound and performance artists, furniture designers, spatial architects, technology systems designers, and curators who collaborated in the production of Intimate Transactions discuss their design philosophies, working processes and resolution of this major new media work. Analytical and philosophical essays by international writers complement these writings on production. They consider how new media art, like Intimate Transactions, challenges traditional understandings of art, curatorial installation and exhibition experience because of the need to take into account interaction, the reconfiguration of space, co-presence, performativity and inter-site collaboration.
Abstract:
Background
In order to provide insights into the complex biochemical processes inside a cell, modelling approaches must find a balance between achieving an adequate representation of the physical phenomena and keeping the associated computational cost within reasonable limits. This issue is particularly pressing when spatial inhomogeneities have a significant effect on the system's behaviour. In such cases, a spatially-resolved stochastic method can better portray the biological reality, but the corresponding computer simulations can in turn be prohibitively expensive.
Results
We present a method that incorporates spatial information by means of tailored, probability-distributed time-delays. These distributions can be obtained directly from a single in silico experiment or a suitable set of in vitro experiments, and are subsequently fed into a delay stochastic simulation algorithm (DSSA), achieving a good compromise between computational cost and a much more accurate representation of spatial processes such as molecular diffusion and translocation between cell compartments. Additionally, we present a novel alternative approach based on delay differential equations (DDE) that can be used in scenarios of high molecular concentrations and low noise propagation.
Conclusions
Our proposed methodologies accurately capture and incorporate certain spatial processes into temporal stochastic and deterministic simulations, increasing their accuracy at low computational cost. This is of particular importance given that the time spans of cellular processes are generally larger (possibly by several orders of magnitude) than those achievable by current spatially-resolved stochastic simulators. Hence, our methodology allows users to explore cellular scenarios under the effects of diffusion and stochasticity over time spans that were, until now, simply unfeasible. Our methodologies are supported by theoretical considerations on the different modelling regimes, i.e. spatial vs. delay-temporal, as indicated by the corresponding Master Equations and presented elsewhere.
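For concreteness, here is a minimal sketch of a delay stochastic simulation of the kind described: reactions initiate with standard Gillespie propensities, but each stoichiometric update is applied only after a delay drawn from a supplied distribution standing in for spatial transport. This simplified version applies the whole update at completion and ignores the consuming/non-consuming reaction distinction that a full DSSA makes; the gamma-distributed delay in the usage comment is an illustrative assumption.

```python
import heapq
import random

def delay_ssa(x, rates, stoich, sample_delay, t_end):
    """Delay-SSA sketch. x: dict species -> count; rates[j](x): propensity of
    reaction j; stoich[j]: dict of species changes; sample_delay[j](): draws a
    completion delay for reaction j (e.g. a fitted diffusion-time law)."""
    t, pending = 0.0, []  # pending: heap of (completion_time, reaction_index)
    while t < t_end:
        a = [rate(x) for rate in rates]
        a0 = sum(a)
        t_init = t + random.expovariate(a0) if a0 > 0 else float("inf")
        if pending and pending[0][0] <= min(t_init, t_end):
            t, j = heapq.heappop(pending)      # a delayed reaction completes
            for species, change in stoich[j].items():
                x[species] += change
        elif t_init > t_end:
            break                              # nothing more can happen in time
        else:
            t = t_init                         # a new reaction initiates now
            j = random.choices(range(len(rates)), weights=a)[0]
            heapq.heappush(pending, (t + sample_delay[j](), j))
    return x

# Usage: one species translocating into the nucleus with a gamma-distributed
# transport delay (all numbers illustrative):
# delay_ssa({"cyt": 100, "nuc": 0},
#           [lambda x: 0.1 * x["cyt"]],
#           [{"cyt": -1, "nuc": +1}],
#           [lambda: random.gammavariate(2.0, 1.5)],
#           t_end=50.0)
```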