893 results for Scalable Nanofabrication


Relevance:

10.00%

Publisher:

Abstract:

Research based upon microneedle (MN) arrays has intensified recently. While the initial focus was on biomolecules, the field has expanded to include delivery of conventional small-molecule drugs whose water solubility currently precludes transdermal administration. Much success has been achieved, with peptides, proteins, vaccines, antibodies and even particulates delivered by MN in therapeutic/prophylactic doses. Recent innovations have focused on enhanced formulation design, scalable manufacture and extension of exploitation to minimally invasive patient monitoring, ocular delivery and enhanced administration of cosmeceuticals. Only two MN-based drug/vaccine delivery products are currently marketed, partially due to limitations with older MN designs based upon silicon and metal. Even the more promising polymeric MN have raised a number of regulatory and manufacturability queries that the field must address. MN arrays have tremendous potential to yield real benefits for patients and industry and, through diligence, innovation and collaboration, this will begin to be realised over the next 3-5 years.

Relevance:

10.00%

Publisher:

Abstract:

Demand for intelligent surveillance in public transport systems is growing due to the increased threats of terrorist attack, vandalism and litigation. The aim of intelligent surveillance is timely reaction to information received from various monitoring devices, especially CCTV systems. However, video analytic algorithms can only provide static assertions, whereas in reality many related events happen in sequence and hence should be modeled sequentially. Moreover, analytic algorithms are error-prone, so an important question is how to correct sequential analytic results when new evidence (external information or a later sensing discovery) arrives. In this paper, we introduce a high-level sequential observation modeling framework that supports revision and update on new evidence. The framework adapts the situation calculus to deal with uncertainty in analytic results. Its output can serve as a foundation for event composition. We demonstrate the significance and usefulness of our framework with a case study of a bus surveillance project.
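
To illustrate the revision/update distinction at the heart of the framework, the toy Python sketch below (not the paper's situation-calculus formalism; all names are invented for illustration) keeps a timeline of uncertain assertions: update appends a newly sensed event, while revise corrects an earlier assertion when stronger evidence about a past state arrives.

```python
from dataclasses import dataclass

@dataclass
class Assertion:
    time: int          # frame/round at which the event was asserted
    event: str         # e.g. "person_boards_bus"
    confidence: float  # analytic confidence in [0, 1]

class ObservationTimeline:
    """Toy sequential observation model: `update` appends new sensing
    results; `revise` corrects past assertions given stronger evidence."""

    def __init__(self):
        self.history: list[Assertion] = []

    def update(self, event: str, confidence: float) -> None:
        # The world has moved on: record a new observation at the next slot.
        self.history.append(Assertion(len(self.history), event, confidence))

    def revise(self, time: int, event: str, confidence: float) -> None:
        # New evidence about a *past* state: overwrite only if stronger.
        if confidence > self.history[time].confidence:
            self.history[time] = Assertion(time, event, confidence)

tl = ObservationTimeline()
tl.update("person_boards_bus", 0.6)
tl.update("bag_left_on_seat", 0.4)
tl.revise(1, "bag_removed_by_owner", 0.8)  # later discovery corrects slot 1
print(tl.history)
```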

Relevance:

10.00%

Publisher:

Abstract:

Demand Response (DR) algorithms manipulate the energy consumption schedules of controllable loads so as to satisfy grid objectives. Implementing DR with a centralised agent raises scalability concerns, as well as issues of data privacy and robustness to communication failures, so a scalable decentralised algorithm is desirable. In this paper, a hierarchical DR scheme is proposed for Peak Minimisation (PM) based on Dantzig-Wolfe Decomposition (DWD). In addition, a Time Weighted Maximisation option is included in the cost function, which improves the Quality of Service for devices seeking to receive their desired energy sooner rather than later. The paper also demonstrates how the DWD algorithm can be implemented more efficiently by calculating upper and lower cost bounds after each DWD iteration.
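
The bound calculation that makes the iteration efficient can be sketched in a few lines. The following is a minimal, self-contained illustration of Dantzig-Wolfe column generation for peak minimisation, not the paper's hierarchical scheme: the restricted master LP value gives an upper bound, and adding the best (nonpositive) reduced cost from each device subproblem gives a Lagrangian lower bound, so iteration can stop as soon as the gap closes. The device data and enumerated schedules are invented for illustration, and SciPy's HiGHS-based linprog is assumed.

```python
import numpy as np
from scipy.optimize import linprog

T = 4                    # time slots
durations = [2, 2, 3]    # made-up devices: power 1 for `dur` contiguous slots

def candidate_schedules(dur):
    """Enumerate the device's feasible load profiles (the 'subproblem'
    feasible set); real subproblems would be solved, not enumerated."""
    return [np.pad(np.ones(dur), (s, T - dur - s)) for s in range(T - dur + 1)]

cols = [[candidate_schedules(d)[0]] for d in durations]  # initial columns

for it in range(50):
    n = sum(len(c) for c in cols)
    c = np.zeros(1 + n); c[0] = 1.0                  # variables: [peak p, lambdas]
    A_ub = np.zeros((T, 1 + n)); A_ub[:, 0] = -1.0   # sum of loads - p <= 0
    A_eq = np.zeros((len(durations), 1 + n))         # convexity: lambdas sum to 1
    j = 1
    for i, dev_cols in enumerate(cols):
        for s in dev_cols:
            A_ub[:, j] = s; A_eq[i, j] = 1.0; j += 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(T),
                  A_eq=A_eq, b_eq=np.ones(len(durations)), method="highs")
    mu, sigma = res.ineqlin.marginals, res.eqlin.marginals
    upper, lower, done = res.fun, res.fun, True      # RMP value: upper bound
    for i, d in enumerate(durations):
        best = min(candidate_schedules(d), key=lambda s: -(mu @ s) - sigma[i])
        rc = -(mu @ best) - sigma[i]                 # pricing: best reduced cost
        lower += min(rc, 0.0)                        # Lagrangian lower bound
        if rc < -1e-9:
            cols[i].append(best); done = False
    print(f"iter {it}: bounds [{lower:.3f}, {upper:.3f}]")
    if done or upper - lower < 1e-9:
        break
```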

Relevance:

10.00%

Publisher:

Abstract:

Motivated by the need to design efficient and robust fully-distributed computation in highly dynamic networks such as Peer-to-Peer (P2P) networks, we study distributed protocols for constructing and maintaining dynamic network topologies with good expansion properties. Our goal is to maintain a sparse (bounded-degree) expander topology despite heavy churn (i.e., nodes joining and leaving the network continuously over time). We assume that the churn is controlled by an adversary that has complete knowledge and control of which nodes join and leave and at what time, and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is a randomized distributed protocol that guarantees, with high probability, the maintenance of a constant-degree graph with high expansion even under continuous high adversarial churn. Our protocol can tolerate a churn rate of up to O(n/polylog(n)) per round (where n is the stable network size). The protocol is efficient, lightweight, and scalable, incurring only O(polylog(n)) overhead for topology maintenance: only polylogarithmic (in n) bits need to be processed and sent by each node per round, and each node's computation cost per round is also polylogarithmic. The protocol is a fundamental ingredient for designing efficient fully-distributed algorithms for core distributed computing problems such as agreement, leader election, search, and storage in highly dynamic P2P networks, and it enables fast and scalable algorithms for these problems that can tolerate a large amount of churn.

Relevance:

10.00%

Publisher:

Abstract:

We study the fundamental Byzantine leader election problem in dynamic networks where the topology can change from round to round and nodes can also experience heavy churn (i.e., nodes can join and leave the network continuously over time). We assume the full information model, in which the Byzantine nodes have complete knowledge of the entire state of the network at every round (including the random choices made by all the nodes), have unbounded computational power and can deviate arbitrarily from the protocol. The churn is controlled by an adversary that has complete knowledge and control over which nodes join and leave and at what times, may rewire the topology in every round, and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is an O(log^3 n) round algorithm that achieves Byzantine leader election in the presence of up to O(n^(1/2 − ε)) Byzantine nodes (for a small constant ε > 0) and a churn of up to O(√n / polylog(n)) nodes per round (where n is the stable network size). The algorithm elects a leader with probability at least 1 − n^(−Ω(1)) and guarantees that it is an honest node with probability at least 1 − n^(−Ω(1)); assuming the algorithm succeeds, the leader's identity will be known to a 1 − o(1) fraction of the honest nodes. Our algorithm is fully-distributed, lightweight, and simple to implement. It is also scalable, as it runs in polylogarithmic (in n) time and requires nodes to send and receive messages of only polylogarithmic size per round. To the best of our knowledge, our algorithm is the first scalable solution for Byzantine leader election in a dynamic network with a high rate of churn; our protocol can also be used to solve Byzantine agreement in a straightforward way. We also show how to implement an (almost-everywhere) public coin with constant bias in a dynamic network with Byzantine nodes, and provide a mechanism for enabling honest nodes to store information reliably in the network, which might be of independent interest.

Relevance:

10.00%

Publisher:

Abstract:

There is a perception amongst some of those learning computer programming that the principles of object-oriented programming (where behaviour is often encapsulated across multiple class files) can be difficult to grasp, especially when taught through a traditional, didactic ‘talk-and-chalk’ method or in a lecture-based environment.
We propose a non-traditional teaching method, developed for a government-funded teacher training project delivered by Queen’s University, which we call bigCode. In this scenario, learners are provided with many printed, poster-sized fragments of code (in this case either Java or C#). The learners sit on the floor in groups and assemble these fragments into the many classes which make up an object-oriented program.
Early trials indicate that bigCode is an effective method for teaching object-orientation. The requirement to physically organise the code fragments closely imitates the thought processes of a good software developer when developing object-oriented code.
Furthermore, in addition to teaching the principles involved in object-orientation, bigCode is an extremely useful technique for teaching learners the organisation and structure of individual classes in Java or C# (as well as the organisation of procedural code). The mechanics of organising fragments of code into complete, correct computer programs give users first-hand practice of this important skill, and as a result they find it much easier to develop well-structured code on a computer.
Yet, open questions remain. Is bigCode successful only because we have unknowingly predominantly targeted kinesthetic learners? Is bigCode also an effective teaching approach for other forms of learners, such as visual learners? How scalable is bigCode: in its current form can it be used with large class sizes, or outside the classroom?

Relevance:

10.00%

Publisher:

Abstract:

Possibilistic answer set programming (PASP) unites answer set programming (ASP) and possibilistic logic (PL) by associating certainty values with rules. The resulting framework makes it possible to combine non-monotonic reasoning and reasoning under uncertainty in a single setting. While PASP has been well studied for possibilistic definite and possibilistic normal programs, we argue that the current semantics of possibilistic disjunctive programs are not entirely satisfactory. The problem is twofold. First, the treatment of negation-as-failure in existing approaches follows an all-or-nothing scheme that is hard to match with the graded notion of proof underlying PASP. Second, we advocate that the notion of disjunction can be interpreted in several ways. In particular, in addition to the view of ordinary ASP, where disjunctions are used to induce a non-deterministic choice, the possibilistic setting naturally leads to a more epistemic view of disjunction. In this paper, we propose a semantics for possibilistic disjunctive programs that accommodates both views on disjunction. Extending our earlier work, we interpret such programs as sets of constraints on possibility distributions, whose least specific solutions correspond to answer sets.
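
As a flavour of the constraint-based reading (a hedged sketch of the general idea, not the paper's exact definitions), a rule annotated with certainty c can be seen as bounding the necessity measure N induced by a possibility distribution; the epistemic and choice views then differ in whether the disjunctive head is constrained as a whole or per disjunct:

```latex
% Illustrative constraint reading of a possibilistic disjunctive rule
%   c : a \vee b \leftarrow \ell_1, \dots, \ell_m
% Epistemic view: the disjunction is constrained as a whole:
\[
  N(a \lor b) \;\geq\; \min\bigl(c,\, N(\ell_1), \dots, N(\ell_m)\bigr)
\]
% Choice view: some individual disjunct must carry the support,
% e.g. N(a) or N(b) meets the same bound. Least specific possibility
% distributions satisfying all rule constraints then play the role
% of answer sets.
```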

Relevance:

10.00%

Publisher:

Abstract:

Current data-intensive image processing applications push traditional embedded architectures to their limits. FPGA-based hardware acceleration is a potential solution, but the programmability gap and the time-consuming HDL design flow are significant obstacles. The proposed research approach, an FPGA-based programmable hardware acceleration platform built from a large number of Streaming Image processing Processors (SIPPro), potentially addresses these issues. SIPPro is a pipelined, in-order soft-core processor architecture with specific optimisations for image processing applications. Each SIPPro core uses 1 DSP48, 2 Block RAMs and 370 slice registers, making the processor as compact as possible whilst maintaining flexibility and programmability. The result is an area-efficient, scalable, high-performance soft-core architecture capable of delivering 530 MIPS per core on a Xilinx Zynq SoC (ZC7Z020-3). To evaluate the feasibility of the proposed architecture, a Traffic Sign Recognition (TSR) algorithm was prototyped on a Zedboard, with the color and morphology operations accelerated using multiple SIPPro cores. Simulation and experimental results demonstrate that the platform achieves speedups of 15 and 33 times for color filtering and morphology operations respectively, with significantly reduced design effort and time.
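
As a point of reference for what the accelerator computes, here is a minimal NumPy sketch of 3×3 binary erosion, one of the morphology operations mentioned above (a software illustration only, unrelated to the SIPPro implementation):

```python
import numpy as np

def erode3x3(img: np.ndarray) -> np.ndarray:
    """3x3 binary erosion: a pixel survives only if its whole
    3x3 neighbourhood is set. `img` is a 0/1 uint8 array."""
    padded = np.pad(img, 1, mode="constant")  # zero border
    out = np.ones_like(img)
    for dy in range(3):
        for dx in range(3):
            out &= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

mask = (np.random.rand(8, 8) > 0.3).astype(np.uint8)
print(erode3x3(mask))
```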

Relevance:

10.00%

Publisher:

Abstract:

There has been much interest in the belief–desire–intention (BDI) agent-based model for developing scalable intelligent systems, e.g. using the AgentSpeak framework. However, reasoning from sensor information in these large-scale systems remains a significant challenge. For example, agents may be faced with information from heterogeneous sources which is uncertain and incomplete, while the sources themselves may be unreliable or conflicting. In order to derive meaningful conclusions, it is important that such information be correctly modelled and combined. In this paper, we choose to model uncertain sensor information in Dempster–Shafer (DS) theory. Unfortunately, as in other uncertainty theories, simple combination strategies in DS theory are often too restrictive (losing valuable information) or too permissive (resulting in ignorance). For this reason, we investigate how a context-dependent strategy originally defined for possibility theory can be adapted to DS theory. In particular, we use the notion of largely partially maximal consistent subsets (LPMCSes) to characterise the context for when to use Dempster’s original rule of combination and for when to resort to an alternative. To guide this process, we identify existing measures of similarity and conflict for finding LPMCSes along with quality of information heuristics to ensure that LPMCSes are formed around high-quality information. We then propose an intelligent sensor model for integrating this information into the AgentSpeak framework which is responsible for applying evidence propagation to construct compatible information, for performing context-dependent combination and for deriving beliefs for revising an agent’s belief base. Finally, we present a power grid scenario inspired by a real-world case study to demonstrate our work.
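
For context, Dempster's original rule of combination — the default that the context-dependent strategy either applies or replaces within an LPMCS — is compact to state in code. A minimal sketch, with mass functions keyed by frozensets of hypotheses and example numbers that are purely illustrative:

```python
def dempster_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule: combine two mass functions whose focal elements
    are frozensets; the conflict mass K is renormalised away."""
    combined, conflict = {}, 0.0
    for a, x in m1.items():
        for b, y in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + x * y
            else:
                conflict += x * y            # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# two sensors reporting on fault hypotheses {F1, F2, F3} (made-up numbers)
m1 = {frozenset({"F1"}): 0.6, frozenset({"F1", "F2", "F3"}): 0.4}
m2 = {frozenset({"F2"}): 0.3, frozenset({"F1", "F2"}): 0.7}
print(dempster_combine(m1, m2))
```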

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a tensegrity-based co-operative control algorithm for an aircraft formation. The 6 degrees-of-freedom model of the well-known Aerosonde unmanned aerial vehicle (UAV) is integrated with the model of the tensegrity structure, and a decentralised control scheme is proposed. The strategy is shown to scale to 2n UAVs and is able to maintain a firm geometry whilst allowing flexible shape transformations. Simulation results demonstrate the effectiveness and stability of the proposed tensegrity-based formation control algorithm in 3D.
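
As a loose illustration of the idea, not the paper's Aerosonde-based scheme, the point-mass sketch below treats the virtual tensegrity members as spring-like links: stretched links pull their endpoints together (cables) and compressed ones push them apart (struts), driving four "UAVs" into, and holding, a rigid 3D geometry. All gains and positions are invented.

```python
import numpy as np

N = 4
rng = np.random.default_rng(1)
pos = rng.uniform(-5, 5, (N, 3))          # start scattered in 3D
target = np.array([[0, 0, 0], [4, 0, 0], [0, 4, 0], [0, 0, 4]], float)
links = [(i, j) for i in range(N) for j in range(i + 1, N)]
rest = {e: np.linalg.norm(target[e[0]] - target[e[1]]) for e in links}

for step in range(2000):
    force = np.zeros_like(pos)
    for i, j in links:
        d = pos[j] - pos[i]
        dist = np.linalg.norm(d)
        # positive when the link is stretched (cable pulls endpoints
        # together), negative when compressed (strut pushes them apart)
        f = 0.5 * (dist - rest[(i, j)]) * d / max(dist, 1e-9)
        force[i] += f
        force[j] -= f
    pos += 0.05 * force                    # simple gradient step

for i, j in links:                         # achieved vs desired spacing
    print(i, j, round(np.linalg.norm(pos[i] - pos[j]), 3), rest[(i, j)])
```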