Abstract:
The parasitical relationship between the grand piano and the myriad objects used in its preparation, as pioneered by John Cage in the late 1940s, is discussed here from the perspective of free improvisation practice. Preparation can be defined as the use of a "non-instrument" object (screws, bolts, rubbers, etc.) to alter or modify the behaviour of an instrument or part of an instrument. Although also present in instrumental practices based on the electric guitar or the drum kit, the piano provides a privileged space of exploration given its large-scale resonant body. It also highlights the transgressive aspect of preparation (the piano to be prepared often belongs to a venue rather than to the pianist herself, highlighting relationships of trust, care and respect). Since 2007 I have used a guitar-object (a small wooden board with strings and pickups) connected to a small amplifier to prepare the grand piano in my free improvisation practice. This paper addresses the different relationships afforded by this type of preparation, which is characterised by the fact that the object used for preparation is itself an instrument (albeit a simplified one), and that the preparation is ephemeral and intrinsic to the performance. The paper also reflects on the process of designing an interface from and for a particular practice, in collaboration with a guitar luthier.
Automated image analysis for experimental investigations of salt water intrusion in coastal aquifers
Abstract:
A novel methodology has been developed to quantify important saltwater intrusion parameters in a sandbox-style experiment using image analysis. Existing methods found in the literature are based mainly on visual observations, which are subjective, labour-intensive and limit the temporal and spatial resolutions that can be analysed. A robust error analysis was undertaken to determine the optimum methodology for converting image light intensity to concentration. Results showed that defining the relationship on a pixel-wise basis provided the most accurate image-to-concentration conversion and allowed quantification of the width of the mixing zone between the saltwater and freshwater. A high image sampling rate was used to investigate the transient dynamics of saltwater intrusion, which rendered analysis by visual observation unsuitable. This paper presents the methodologies developed to minimise human input and promote autonomy, provide high-resolution image-to-concentration conversion, and allow the quantification of intrusion parameters under transient conditions.
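As a rough illustration of the pixel-wise conversion step described above, the sketch below fits an independent polynomial intensity-to-concentration curve at every pixel from calibration frames captured at known uniform concentrations; the array names and polynomial degree are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of pixel-wise intensity-to-concentration calibration
# (illustrative only, not the paper's code).
import numpy as np

def fit_pixelwise_calibration(cal_images, cal_concs, degree=2):
    """Fit an independent polynomial intensity->concentration curve per pixel.

    cal_images : (k, h, w) calibration frames at known concentrations
    cal_concs  : (k,) concentrations of those frames (needs k > degree)
    Returns a (degree + 1, h, w) array of polynomial coefficients.
    """
    k, h, w = cal_images.shape
    intensities = cal_images.reshape(k, -1)
    coeffs = np.empty((degree + 1, h * w))
    for p in range(h * w):  # one independent least-squares fit per pixel
        coeffs[:, p] = np.polyfit(intensities[:, p], cal_concs, degree)
    return coeffs.reshape(degree + 1, h, w)

def intensity_to_concentration(frame, coeffs):
    """Apply the per-pixel calibration to one experimental frame."""
    conc = np.zeros(frame.shape, dtype=float)
    for d, c in enumerate(coeffs[::-1]):  # coefficients, lowest order first
        conc += c * frame.astype(float) ** d
    return conc
```

From the resulting concentration fields, the mixing-zone width can then be extracted, for example as the distance between chosen low and high isohalines along a transect.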
Abstract:
Two sets of issues in the area of law and religion have generated a large share of attention and controversy across a wide range of countries and jurisdictions in recent years. The first set of issues relates to the autonomy of churches and other religiously affiliated entities such as schools and social service organisations in their hiring and personnel decisions, involving the question of how far, if at all, such entities should be free from the influence and oversight of the state. The second set of issues involves the presence of religious symbols in the public sphere, such as in state schools or on public lands, involving the question of how far the state should be free from the influence of religion. Although these issues – freedom of religion from the state, and freedom of the state from religion – could be viewed as opposite sides of the same coin, they are almost always treated as separate lines of inquiry, and the implications of each for the other have not been the subject of much scrutiny. In this Introduction, we consider whether insights might be drawn from thinking about these issues both from a comparative law perspective and also from considering these two lines of cases together.
Abstract:
This paper presents a novel method of audio-visual fusion for person identification where both the speech and facial modalities may be corrupted, and there is a lack of prior knowledge about the corruption. Furthermore, we assume there is a limited amount of training data for each modality (e.g., a short training speech segment and a single training facial image for each person). A new representation and a modified cosine similarity are introduced for combining and comparing bimodal features with limited training data as well as vastly differing data rates and feature sizes. Optimal feature selection and multicondition training are used to reduce the mismatch between training and testing, thereby making the system robust to unknown bimodal corruption. Experiments have been carried out on a bimodal data set created from the SPIDRE and AR databases with variable noise corruption of speech and occlusion in the face images. The new method has demonstrated improved recognition accuracy.
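A minimal sketch of one way to combine bimodal features of very different sizes with a cosine-style score is given below; the per-modality length normalisation and fixed weight are illustrative assumptions, simpler than the modified cosine similarity the paper introduces.

```python
# Illustrative bimodal feature fusion and scoring (not the paper's method).
import numpy as np

def fuse(speech_feat, face_feat, w_speech=0.5):
    """Normalise each modality to unit length so neither dominates, then
    concatenate with a modality weight."""
    s = speech_feat / (np.linalg.norm(speech_feat) + 1e-12)
    f = face_feat / (np.linalg.norm(face_feat) + 1e-12)
    return np.concatenate([w_speech * s, (1.0 - w_speech) * f])

def cosine_score(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def identify(probe, gallery):
    """Return the enrolled identity whose fused template best matches;
    gallery maps person IDs to fused templates."""
    return max(gallery, key=lambda pid: cosine_score(probe, gallery[pid]))
```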
Abstract:
In this paper, a multiloop robust control strategy based on H∞ control and a partial least squares (PLS) model (H∞_PLS) is proposed for multivariable chemical processes. It is developed especially for multivariable systems in ill-conditioned plants and for non-square systems. The advantage of PLS is that it extracts the strongest relationship between the input and output variables in the reduced space of the latent-variable model rather than in the original space of the high-dimensional variables. Without conventional decouplers, the dynamic PLS framework automatically decomposes the MIMO process into multiple single-loop systems in the PLS subspace, so the controller design can be simplified. Since plant/model mismatch is almost inevitable in practical applications, the controllers are designed in the PLS latent subspace based on the H∞ mixed-sensitivity problem in order to enhance the robustness of the control system. The feasibility and effectiveness of the proposed approach are illustrated by simulation results for a distillation column and a mixing-tank process. Comparisons between H∞_PLS control and conventional individual control (either H∞ control or PLS control alone) are also made.
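The latent-space decomposition that makes the single-loop design possible can be sketched as follows, using scikit-learn's PLSRegression on synthetic input/output data; the gain matrix is invented, and the per-loop H∞ mixed-sensitivity synthesis is only indicated in comments.

```python
# Sketch of the PLS decomposition step only (synthetic data, invented gains).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
U = rng.normal(size=(500, 3))                    # manipulated variables
G = np.array([[1.0, 0.2, 0.0],                   # hypothetical steady-state gains
              [0.8, 1.1, 0.1],
              [0.0, 0.3, 0.9]])
Y = U @ G.T + 0.05 * rng.normal(size=(500, 3))   # controlled variables

pls = PLSRegression(n_components=2).fit(U, Y)
T = pls.transform(U)     # latent inputs: (500, 2), i.e. two decoupled loops
# For each latent loop, a SISO controller would be designed from the
# H-infinity mixed-sensitivity problem (weighting S and KS), then mapped
# back to the physical variables through the PLS loading matrices.
print(T.shape)
```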
Abstract:
Chronic lymphocytic leukemia (CLL) follows a variable clinical course which is difficult to predict at diagnosis. We assessed somatic hypermutation (SHM) status, CD38 and ZAP-70 expression in 87 patients (49 male, 38 female) with stage A CLL and a known cytogenetic profile to compare their roles in predicting disease progression, assessed by the treatment-free interval (TFI) from diagnosis. Sixty (69%) patients were SHM+, 24 (28%) were CD38+ and ten (12%) were ZAP-70+. The median TFI for: (i) SHM+ versus SHM- patients was 124 versus 26 months; hazard ratio (HR) = 3.6 [95% confidence interval (CI) = 1.8-7.3; P = 0.001]; (ii) CD38- versus CD38+ patients was 120 versus 34 months; HR = 2.4 (95% CI = 1.4-5.3; P = 0.02); and (iii) ZAP-70- versus ZAP-70+ patients was 120 versus 16 months; HR = 3.4 (95% CI = 1.4-8.7; P = 0.01). SHM status and CD38 retained prognostic significance on multivariate analysis, whereas ZAP-70 did not. We conclude that ZAP-70 analysis does not provide additional prognostic information in this group of patients.
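For readers unfamiliar with the statistics, hazard ratios like those above are typically produced by a Cox proportional-hazards fit on the treatment-free interval; the toy sketch below (hypothetical data and column names, using the lifelines package) shows the shape of such an analysis, not the study's actual computation.

```python
# Toy Cox proportional-hazards sketch (synthetic data, illustrative only).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "tfi_months": [124, 26, 120, 34, 16, 98, 60, 45],
    "treated":    [1, 1, 0, 1, 1, 0, 1, 1],   # 1 = event (treatment started)
    "shm_pos":    [1, 0, 1, 0, 0, 1, 1, 0],
    "cd38_pos":   [0, 1, 0, 1, 1, 0, 1, 0],
})

cph = CoxPHFitter().fit(df, duration_col="tfi_months", event_col="treated")
cph.print_summary()   # the exp(coef) column gives the hazard ratio per marker
```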
Abstract:
Motivated by the need to design efficient and robust fully-distributed computation in highly dynamic networks such as Peer-to-Peer (P2P) networks, we study distributed protocols for constructing and maintaining dynamic network topologies with good expansion properties. Our goal is to maintain a sparse (bounded-degree) expander topology despite heavy {\em churn} (i.e., nodes joining and leaving the network continuously over time). We assume that the churn is controlled by an adversary that has complete knowledge and control of which nodes join and leave and at what time, and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is a randomized distributed protocol that guarantees, with high probability, the maintenance of a {\em constant}-degree graph with {\em high expansion} even under {\em continuous high adversarial} churn. Our protocol can tolerate a churn rate of up to $O(n/\mathrm{polylog}(n))$ per round (where $n$ is the stable network size). Our protocol is efficient, lightweight, and scalable, and it incurs only $O(\mathrm{polylog}(n))$ overhead for topology maintenance: only polylogarithmically many (in $n$) bits need to be processed and sent by each node per round, and any node's computation cost per round is also polylogarithmic. The given protocol is a fundamental ingredient needed for the design of efficient fully-distributed algorithms for fundamental distributed computing problems such as agreement, leader election, search, and storage in highly dynamic P2P networks, and it enables fast and scalable algorithms for these problems that can tolerate a large amount of churn.
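As a toy illustration of the target topology (not the paper's protocol), the snippet below measures the expansion of a random d-regular graph via its spectral gap, then performs a crude churn step in which a joining node inherits a leaving node's edge slots, keeping the graph d-regular; networkx and all parameter values are illustrative assumptions.

```python
# Toy expander-under-churn illustration (not the paper's protocol).
import networkx as nx
import numpy as np

def spectral_gap(G, d):
    """d minus the second-largest absolute adjacency eigenvalue;
    a large gap indicates good expansion."""
    lam = np.sort(np.abs(np.linalg.eigvalsh(nx.to_numpy_array(G))))[::-1]
    return d - lam[1]

n, d = 200, 8
G = nx.random_regular_graph(d, n, seed=1)
print("spectral gap:", round(spectral_gap(G, d), 3))

def churn(G, leaving, joining):
    """Replace one node: the newcomer takes over the leaver's neighbours,
    preserving the constant degree bound."""
    nbrs = list(G.neighbors(leaving))
    G.remove_node(leaving)
    G.add_edges_from((joining, v) for v in nbrs)
    return G

G = churn(G, leaving=0, joining=n)   # node n is a fresh label
print("still d-regular:", all(deg == d for _, deg in G.degree()))
```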
Abstract:
Physically Unclonable Functions (PUFs) exploit inherent manufacturing variations and present a promising solution for hardware security. They can be used for key storage, authentication and ID generation. Low-power cryptographic design is also very important for security applications. However, the digital PUF designs reported to date, such as Arbiter PUFs and RO PUFs, are not very efficient: they are difficult to implement on Field Programmable Gate Arrays (FPGAs) or consume many FPGA hardware resources. In previous work, a new and efficient PUF identification generator was presented for FPGAs, designed to fit in a single slice per response bit by using a 1-bit PUF identification generator cell formed as a hard macro. In this work, we propose an ultra-compact PUF identification generator design, implemented on ten low-cost Xilinx Spartan-6 LX9 MicroBoards. The resource utilization is only 2.23%, which, to the best of the authors' knowledge, makes it the most compact and robust FPGA-based PUF identification generator design reported to date. This PUF identification generator delivers stable uniqueness of around 50% and good reliability between 85% and 100%.
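The uniqueness and reliability figures quoted above follow the standard PUF quality metrics: the mean pairwise fractional Hamming distance between devices, and one minus the mean intra-device Hamming distance across repeated measurements. A small sketch with synthetic response bits (the ID length, device count and bit-error rate are assumptions) is given below.

```python
# Standard PUF uniqueness/reliability metrics on synthetic IDs.
import numpy as np
from itertools import combinations

def uniqueness(responses):
    """Mean pairwise fractional Hamming distance across devices (ideal 50%)."""
    hds = [np.mean(a != b) for a, b in combinations(responses, 2)]
    return 100.0 * float(np.mean(hds))

def reliability(reference, remeasurements):
    """100% minus the mean intra-device Hamming distance (ideal 100%)."""
    hds = [np.mean(reference != r) for r in remeasurements]
    return 100.0 * (1.0 - float(np.mean(hds)))

rng = np.random.default_rng(0)
ids = [rng.integers(0, 2, 128) for _ in range(10)]    # 10 boards, 128-bit IDs
noisy = [np.where(rng.random(128) < 0.03, 1 - ids[0], ids[0])
         for _ in range(5)]                           # ~3% bit-error remeasures
print(f"uniqueness  = {uniqueness(ids):.1f}%")        # near 50% for random IDs
print(f"reliability = {reliability(ids[0], noisy):.1f}%")
```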
Abstract:
Curriculum for Excellence (CfE), Scotland's 3-18 curriculum, has been described as ‘the most significant curricular change in Scotland for a generation’ (McAra, Broadley and McLauchlan, 2013:223). The purpose of the curriculum is ‘encapsulated’ in four capacities, in order that learners become i) successful learners, ii) confident individuals, iii) responsible citizens, and iv) effective contributors. With particular reference to these capacities, we explore the principle of autonomy as it pertains to both individual and collective flourishing, seeking to disarm commonplace criticisms of autonomy by arguing that it might be put to work in CfE as a potentially multi-dimensional, context-sensitive concept that is relational as well as individual. We conclude that the four capacities lend themselves to reconsideration and re-mapping in pursuit of autonomy and flourishing premised on the principles of liberal personhood.
Abstract:
An efficient and robust case-sorting algorithm based on the Extended Equal Area Criterion (EEAC) is proposed in this paper for power system transient stability assessment (TSA). The time-varying degree of the equivalent image system can be deduced by comparing the analysis results of Static EEAC (SEEAC), which neglects all time-varying factors, and Dynamic EEAC (DEEAC), which partially considers them. Case-sorting rules are then set according to transient stability severity, combining the time-varying degree with fault messages. A case-sorting algorithm is designed with "OR" logic among the multiple rules, by which each case is identified as one of five categories: stable, suspected stable, marginal, suspected unstable or unstable. The performance of this algorithm is verified on 1652 contingency cases from 9 real Chinese provincial power systems under various operating conditions. It is shown that desirable classification accuracy can be achieved for all the contingency cases at the cost of very little extra computational burden, and that only 9.81% of the cases need further detailed calculation under rigorous on-line TSA conditions.
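A hypothetical sketch of the "OR"-logic rule combination is shown below: any triggered rule may escalate a case's severity, and the most severe triggered rule decides the category. The thresholds and feature names are invented for illustration and are not the paper's values.

```python
# Invented "OR"-logic rule combination for five-way case sorting.
CATEGORIES = ["stable", "suspected stable", "marginal",
              "suspected unstable", "unstable"]

def sort_case(time_varying_degree, stability_margin):
    severity = 0                         # default: stable
    if stability_margin < 50:
        severity = max(severity, 1)      # suspected stable
    if time_varying_degree > 0.3:
        severity = max(severity, 2)      # marginal
    if stability_margin < 10:
        severity = max(severity, 3)      # suspected unstable
    if stability_margin < 0:
        severity = max(severity, 4)      # unstable
    return CATEGORIES[severity]

print(sort_case(time_varying_degree=0.4, stability_margin=30))  # "marginal"
```

Presumably only the cases landing in the uncertain middle categories would then proceed to detailed calculation, which is how the reported 9.81% figure keeps the on-line burden low.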
Abstract:
The scale of the Software-Defined Network (SDN) controller design problem has become apparent with the expansion of SDN deployments. Initial SDN deployments were small-scale, single-controller environments for research and use-case testing. Today, enterprise deployments requiring multiple controllers are gathering momentum, e.g. Google's backbone network, Microsoft's public cloud, and NTT's edge gateway. Third-party applications are also becoming available, e.g. the HP SDN App Store. The increase in components and interfaces in the evolved SDN implementation increases the security challenges of SDN controller design. In this work, the requirements of a secure, robust, and resilient SDN controller are identified, state-of-the-art open-source SDN controllers are analyzed with respect to the security of their design, and recommendations for security improvements are provided. This contribution highlights the gap between the potential security solutions for SDN controllers and the actual security level of current controller designs.
Abstract:
Electrochemical water splitting for hydrogen generation has attracted increasing attention due to energy and environmental issues. It remains a major challenge to design an efficient, robust and inexpensive electrocatalyst that achieves preferable catalytic performance. Herein, a novel three-dimensional (3D) electrocatalyst was prepared by decorating nanostructured, biological-material-derived carbon nanofibers with in situ generated cobalt-based nanospheres (denoted CNF@Co) through a facile approach. The interconnected porous 3D networks of the resulting CNF@Co catalyst provide abundant channels and interfaces, which remarkably favor both mass transfer and oxygen evolution. The as-prepared CNF@Co shows excellent electrocatalytic activity towards the oxygen evolution reaction (OER), with an onset potential of about 0.445 V vs. Ag/AgCl. It needs an overpotential of only 314 mV to achieve a current density of 10 mA/cm² in 1.0 M KOH. Furthermore, the CNF@Co catalyst exhibits excellent stability towards water oxidation, even outperforming commercial IrO₂ and RuO₂ catalysts.
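For context, potentials measured against Ag/AgCl can be placed on the reversible hydrogen electrode (RHE) scale, and OER overpotentials obtained, via the standard conversion below (assuming the usual 0.197 V offset of a saturated Ag/AgCl reference and pH 14 for 1.0 M KOH):

```latex
E_{\mathrm{RHE}} = E_{\mathrm{Ag/AgCl}} + 0.197\,\mathrm{V}
                 + 0.059\,\mathrm{V}\times\mathrm{pH},
\qquad
\eta_{\mathrm{OER}} = E_{\mathrm{RHE}} - 1.23\,\mathrm{V}
```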
Abstract:
In this paper, we introduce a novel approach to face recognition which simultaneously tackles three combined challenges: 1) uneven illumination; 2) partial occlusion; and 3) limited training data. The new approach performs lighting normalization, occlusion de-emphasis and finally face recognition, based on finding the largest matching area (LMA) at each point on the face, as opposed to traditional fixed-size local-area-based approaches. Robustness is achieved with novel approaches for feature extraction, LMA-based face image comparison and unseen-data modeling. For face identification on the Extended Yale B and AR face databases, our method, using only a single training image per person, outperforms other single-training-image methods and matches or exceeds methods which require multiple training images. On the Labeled Faces in the Wild face verification database, our method outperforms comparable unsupervised methods. We also show that the new method performs competitively even when the training images are corrupted.
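The core LMA idea can be caricatured as follows: at each sampled point, grow a local window while the probe and gallery patches remain correlated, and let the largest such area drive the local score. The code below is a simplified, hypothetical sketch, not the authors' feature pipeline; the threshold and window limit are invented.

```python
# Simplified largest-matching-area (LMA) sketch on grayscale images.
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return float((a * b).sum() / denom)

def largest_matching_area(probe, gallery, y, x, thresh=0.8, max_half=16):
    """Largest window half-size at (y, x) whose patches still match."""
    best = 0
    h_img, w_img = probe.shape
    for h in range(1, max_half + 1):
        if y - h < 0 or x - h < 0 or y + h + 1 > h_img or x + h + 1 > w_img:
            break                                 # window ran off the image
        p = probe[y - h:y + h + 1, x - h:x + h + 1]
        g = gallery[y - h:y + h + 1, x - h:x + h + 1]
        if ncc(p, g) < thresh:
            break
        best = h
    return best
```

A whole-face score could then aggregate the largest areas over a grid of points, so that occluded regions, where matching areas stay small, contribute little to the comparison.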