31 results for 3-LEVEL SYSTEMS
Abstract:
We extend the concept that life is an informational phenomenon at every level of organisation, from molecules to the global ecological system. According to this thesis: (a) living is information processing, in which memory is maintained by both molecular states and ecological states as well as the more obvious nucleic acid coding; (b) this information processing has one overall function: to perpetuate itself; and (c) the processing method is filtration (cognition) of, and synthesis of, information at lower levels to appear at higher levels in complex systems (emergence). We show how information patterns are united by the creation of mutual context, generating persistent consequences, to result in 'functional information'. This constructive process forms arbitrarily large complexes of information, the combined effects of which include the functions of life. Molecules and simple organisms have already been measured in terms of functional information content; we show how quantification may be extended to each level of organisation up to the ecological. In terms of a computer analogy, life is both the data and the program, and its biochemical structure is the way the information is embodied. This idea supports the seamless integration of life at all scales with the physical universe. The innovation reported here is essentially to integrate these ideas, basing information on the 'general definition' of information rather than simply the statistics of information, thereby explaining how functional information operates throughout life. © 2013 Springer Science+Business Media Dordrecht.
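The 'functional information' this abstract refers to is commonly quantified (following Hazen and colleagues) as I(E) = -log2 F(E), where F(E) is the fraction of possible configurations achieving a degree of function E or better. A minimal sketch with made-up activity data, not values from the paper:

```python
import math

def functional_information(activities, threshold):
    """I(E) = -log2 F(E): F(E) is the fraction of sampled
    configurations whose measured activity meets the threshold E."""
    achieving = sum(1 for a in activities if a >= threshold)
    if achieving == 0:
        raise ValueError("no configuration achieves the threshold")
    return -math.log2(achieving / len(activities))

# Hypothetical assay: 2 of 16 sequences reach the activity threshold,
# so the function encodes -log2(2/16) = 3 bits of functional information.
activities = [0.1] * 14 + [0.9, 0.95]
print(functional_information(activities, 0.5))  # 3.0
```

Extending this molecular-level statistic up to ecological levels of organisation is, per the abstract, the paper's own contribution; the formula here is only the established baseline.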
Abstract:
It is often believed that both ionic liquids and surfactants generally behave as non-specific denaturants of proteins. In this paper, it is shown that amphiphilic ionic liquids bearing a long alkyl chain and a target molecule, where the target molecule is appended via a carboxylic ester functionality, can represent super-substrates that enable the catalytic activity of an enzyme, even at high concentrations in solution. Menthol has been chosen as the target molecule for slow and controlled fragrance delivery, and it was found that the rate of the menthol release can be controlled by the chemical structure of the ionic liquid. At a more fundamental level, this study offers an insight into the complex hydrophobic, electrostatic, and hydrogen bond interactions between the enzyme and substrate.
Abstract:
A novel cost-effective and low-latency wormhole router for packet-switched NoC designs, tailored for FPGA, is presented. It has been designed to be scalable at the system level to fully exploit the characteristics and constraints of FPGA-based systems, rather than custom ASIC technology. A key feature is that it achieves a low packet propagation latency of only two cycles per hop, including both router pipeline delay and link traversal delay - a significant enhancement over existing FPGA designs - whilst being very competitive in terms of performance and hardware complexity. It can also be configured in various network topologies including 1-D, 2-D, and 3-D. Detailed design-space exploration has been carried out for a range of scaling parameters, with the results of various design trade-offs being presented and discussed. By taking advantage of abundant built-in reconfigurable logic and routing resources, we have been able to create a new scalable on-chip FPGA-based router that exhibits high dimensionality and connectivity. The architecture proposed can be easily migrated across many FPGA families to provide flexible, robust and cost-effective NoC solutions suitable for the implementation of high-performance FPGA computing systems. © 2011 IEEE.
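The two-cycles-per-hop figure fits the standard first-order wormhole latency model, in which the head flit pays the per-hop cost and the remaining flits pipeline behind it. A generic sketch of that textbook model (not measurements from this router):

```python
def packet_latency_cycles(hops, flits, cycles_per_hop=2):
    """First-order wormhole-switching latency: the head flit pays
    `cycles_per_hop` at every hop (router pipeline + link traversal),
    and each remaining flit adds one cycle of pipelined transfer."""
    return hops * cycles_per_hop + (flits - 1)

# An 8-flit packet crossing 4 hops at 2 cycles per hop:
print(packet_latency_cycles(hops=4, flits=8))  # 15
```

The model ignores contention; under load, blocking at intermediate routers adds queueing delay on top of this floor.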
Abstract:
This paper describes the ParaPhrase project, a new 3-year targeted research project funded under EU Framework 7 Objective 3.4 (Computer Systems), starting in October 2011. ParaPhrase aims to follow a new approach to introducing parallelism using advanced refactoring techniques coupled with high-level parallel design patterns. The refactoring approach will use these design patterns to restructure programs defined as networks of software components into other forms that are more suited to parallel execution. The programmer will be aided by high-level cost information that will be integrated into the refactoring tools. The implementation of these patterns will then use a well-understood algorithmic skeleton approach to achieve good parallelism. A key ParaPhrase design goal is that parallel components are intended to match heterogeneous architectures, defined in terms of CPU/GPU combinations, for example. In order to achieve this, the ParaPhrase approach will map components at link time to the available hardware, and will then re-map them during program execution, taking account of multiple applications, changes in hardware resource availability, the desire to reduce communication costs, etc. In this way, we aim to develop a new approach to programming that will be able to produce software that can adapt to dynamic changes in the system environment. Moreover, by using a strong component basis for parallelism, we can achieve potentially significant gains in terms of reducing sharing at a high level of abstraction, and so in reducing or even eliminating the costs that are usually associated with cache management, locking, and synchronisation. © 2013 Springer-Verlag Berlin Heidelberg.
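The "well-understood algorithmic skeleton approach" can be illustrated with the simplest such skeleton, a task farm. This Python sketch is an illustration only (ParaPhrase itself targets component networks on heterogeneous CPU/GPU hardware, not Python thread pools):

```python
from concurrent.futures import ThreadPoolExecutor

def farm(worker, tasks, nworkers=4):
    """Task-farm skeleton: apply `worker` to every task in parallel,
    returning results in input order. The caller supplies only the
    sequential worker; the skeleton owns all coordination."""
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        return list(pool.map(worker, tasks))

def square(x):
    return x * x

print(farm(square, [1, 2, 3, 4]))  # [1, 4, 9, 16]
```

Refactoring, in ParaPhrase's sense, means rewriting component networks into calls to patterns of this shape, guided by the cost information mentioned above.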
Abstract:
This paper introduces hybrid address spaces as a fundamental design methodology for implementing scalable runtime systems on many-core architectures without hardware support for cache coherence. We use hybrid address spaces for an implementation of MapReduce, a programming model for large-scale data processing, and the implementation of a remote memory access (RMA) model. Both implementations are available on the Intel SCC and are portable to similar architectures. We present the design and implementation of HyMR, a MapReduce runtime system whereby different stages and the synchronization operations between them alternate between a distributed memory address space and a shared memory address space, to improve performance and scalability. We compare HyMR to a reference implementation and find that HyMR improves performance by a factor of 1.71× over a set of representative MapReduce benchmarks. We also compare HyMR with Phoenix++, a state-of-the-art implementation for systems with hardware-managed cache coherence, in terms of scalability and sustained-to-peak data processing bandwidth, where HyMR demonstrates improvements of a factor of 3.1× and 3.2× respectively. We further evaluate our hybrid remote memory access (HyRMA) programming model and assess its performance to be superior to that of message passing.
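The MapReduce programming model that HyMR implements can be shown in miniature. This sequential sketch captures only the model's map/shuffle/reduce contract, not HyMR's hybrid address-space runtime:

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Minimal MapReduce: map each record to (key, value) pairs,
    group values by key (the shuffle), then reduce each group."""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# Word count, the canonical example:
docs = ["a b a", "b c"]
counts = map_reduce(docs,
                    mapper=lambda doc: [(word, 1) for word in doc.split()],
                    reducer=lambda key, values: sum(values))
print(counts)  # {'a': 2, 'b': 2, 'c': 1}
```

In HyMR, the shuffle and the synchronization between stages are where execution alternates between the distributed and shared memory address spaces.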
Abstract:
The aim of this paper is to demonstrate the applicability and effectiveness of a computationally demanding stereo matching algorithm on different low-cost and low-complexity embedded devices, focusing on the analysis of timing and image quality performance. Various optimizations have been implemented to allow its deployment on specific hardware architectures while decreasing memory and processing time requirements: (1) reduction of color channel information and resolution for input images; (2) low-level software optimizations such as parallel computation, replacement of function calls, or loop unrolling; (3) reduction of redundant data structures and internal data representation. The feasibility of a stereovision system on a low-cost platform is evaluated using standard datasets and images taken from infra-red (IR) cameras. Analysis of the resulting disparity map accuracy with respect to a full-size dataset is performed, as well as testing of suboptimal solutions.
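The class of algorithm being optimised can be illustrated with a deliberately naive sum-of-absolute-differences (SAD) matcher over one grayscale scanline. Real deployments use 2-D windows plus the optimisations listed above; this is a didactic baseline with made-up pixel data:

```python
def scanline_disparity(left, right, max_disp, window=1):
    """Naive SAD block matching on one grayscale scanline.
    For each pixel x in `left`, pick the shift d in [0, max_disp]
    minimising the absolute-difference cost against right[x - d]
    over a (2*window + 1)-pixel neighbourhood."""
    n = len(left)
    disparities = []
    for x in range(n):
        best_d, best_cost = 0, float("inf")
        for d in range(min(max_disp, x) + 1):
            cost = 0
            for w in range(-window, window + 1):
                xl, xr = x + w, x - d + w
                if 0 <= xl < n and 0 <= xr < n:
                    cost += abs(left[xl] - right[xr])
            if cost < best_cost:
                best_d, best_cost = d, cost
        disparities.append(best_d)
    return disparities

# A ramp shifted by 2 pixels: interior pixels recover disparity 2.
left = [10, 20, 30, 40, 50, 60, 70, 80]
right = [30, 40, 50, 60, 70, 80, 0, 0]
print(scanline_disparity(left, right, max_disp=4))
```

The triple loop makes the cost of the full-resolution, full-color version obvious, and hence why the paper's reductions in resolution, channels, and redundant data structures matter on embedded targets.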
Abstract:
In this paper, we propose a system-level design approach considering voltage over-scaling (VOS) that achieves error resiliency using unequal error protection of different computation elements, while incurring minor quality degradation. Depending on user specifications and the severity of process variations/channel noise, the degree of VOS in each block of the system is adaptively tuned to ensure minimum system power while providing "just-the-right" amount of quality and robustness. This is achieved by taking into consideration block-level interactions and ensuring that under any change of operating conditions, only the "less-crucial" computations, which contribute less to block/system output quality, are affected. The proposed approach applies unequal error protection to various blocks of a system (logic and memory) and spans multiple layers of the design hierarchy (algorithm, architecture, and circuit). The design methodology, when applied to a multimedia subsystem, shows large power benefits (up to 69% improvement in power consumption) at reasonable image quality while tolerating errors introduced due to VOS, process variations, and channel noise.
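The "less-crucial computations" idea can be made concrete with a toy adder in which only the low-order bits are exposed to VOS-induced errors while the high-order (crucial) bits stay protected. All widths and error rates here are illustrative assumptions, not the paper's design:

```python
import random

def vos_add(a, b, width=8, crucial_bits=4, err_rate=0.3, rng=None):
    """Toy unequal-error-protection model: the top `crucial_bits` of a
    `width`-bit sum are computed reliably; each remaining low-order bit
    may flip with probability `err_rate` under aggressive VOS."""
    rng = rng or random.Random(0)
    exact = (a + b) % (1 << width)
    noisy = exact
    for bit in range(width - crucial_bits):  # only less-crucial bits
        if rng.random() < err_rate:
            noisy ^= 1 << bit
    return noisy

# Because only the 4 unprotected bits can flip, the numerical error is
# bounded by 2**4 = 16 regardless of operating conditions.
print(abs(vos_add(200, 100) - (300 % 256)) < 16)  # True
```

The bounded-error property is the point: quality degrades gracefully under over-scaling instead of failing in the significant bits.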
Abstract:
Recent work has noted an increase in the number of parties at the national level in both proportional and majoritarian electoral systems. While the conventional wisdom maintains that the incentives provided by the electoral system will prevent the number of parties at the district level from exceeding two in majoritarian systems, the evidence presented here demonstrates otherwise. I argue that this has occurred because the number of cleavages articulated by parties has increased as several third parties have begun articulating cleavages that are not well represented by the two larger parties.
Abstract:
At its core, Duverger’s Law—holding that the number of viable parties in first-past-the-post systems should not exceed two—applies primarily at the district level. While the number of parties nationally may exceed two, district-level party system fragmentation should not. Given that a growing body of research shows that district-level party system fragmentation can indeed exceed two in first-past-the-post systems, I explore whether the major alternative explanation for party system fragmentation—the social cleavage approach—can explain such violations of Duverger’s Law. Testing this argument in several West European elections prior to the adoption of proportional representation, I find evidence favouring a social cleavage explanation: with the expansion of the class cleavage, the average district-level party system eventually came to violate the two-party predictions associated with Duverger’s Law. This suggests that sufficient social cleavage diversity may produce multiparty systems in other first-past-the-post systems.
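"Number of parties" in this literature is standardly measured with the Laakso-Taagepera effective number of parties, N = 1 / Σ p_i², where p_i are vote or seat shares; the abstract does not name its measure, so this choice is an assumption for illustration:

```python
def effective_number_of_parties(shares):
    """Laakso-Taagepera index N = 1 / sum(p_i ** 2), computed from raw
    vote or seat shares (normalised here, so they need not sum to 1)."""
    total = sum(shares)
    return 1.0 / sum((s / total) ** 2 for s in shares)

# Two even parties give exactly 2; a viable third party pushes a
# district past the two-party prediction of Duverger's Law:
print(effective_number_of_parties([0.5, 0.5]))         # 2.0
print(effective_number_of_parties([0.40, 0.35, 0.25])) # ~2.9
```

On this measure, a district-level value persistently above 2 in a first-past-the-post system is exactly the kind of violation the abstract tests against the social cleavage explanation.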
Abstract:
Relative sea-level rise has been a major factor driving the evolution of reef systems during the Holocene. Most models of reef evolution suggest that reefs preferentially grow vertically during rising sea level, then laterally from windward to leeward once the reef flat reaches sea level. Continuous lagoonal sedimentation ("bucket fill") and sand apron progradation eventually lead to reef systems with totally filled lagoons. Lagoonal infilling of One Tree Reef (southern Great Barrier Reef) through sand apron accretion was examined in the context of late Holocene relative sea-level change. This analysis was conducted using sedimentological and digital terrain data supported by 50 radiocarbon ages from fossil microatolls, buried patch reefs, foraminifera and shells in sediment cores, and recalibrated previously published radiocarbon ages. This data set challenges the conceptual model of geologically continuous sediment infill during the Holocene through sand apron accretion. Rapid sand apron accretion occurred between 6000 and 3000 calibrated years before present (cal. yr B.P.), followed by only small amounts of sedimentation between 3000 cal. yr B.P. and present, with no significant sand apron accretion in the past 2 k.y. This hiatus in sediment infill coincides with a sea-level fall of ~1-1.3 m during the late Holocene (ca. 2000 cal. yr B.P.), which would have caused the turn-off of highly productive live coral growth on the reef flats currently dominated by less productive rubble and algal flats, resulting in a reduced sediment input to back-reef environments and the cessation of sand apron accretion. Given that relative sea-level variations of ~1 m were common throughout the Holocene, we suggest that this mode of sand apron development and carbonate production is applicable to most reef systems.
Abstract:
Power, and consequently energy, has recently attained first-class system resource status, on par with conventional metrics such as CPU time. To reduce energy consumption, many hardware- and OS-level solutions have been investigated. However, application-level information - which can provide the system with valuable insights unattainable otherwise - has only been considered in a handful of cases. We introduce OpenMPE, an extension to OpenMP designed for power management. OpenMP is the de facto standard for programming parallel shared memory systems, but does not yet provide any support for power control. Our extension exposes (i) per-region multi-objective optimization hints and (ii) application-level adaptation parameters, in order to create energy-saving opportunities for the whole system stack. We have implemented OpenMPE support in a compiler and runtime system, and empirically evaluated its performance on two architectures, mobile and desktop. Our results demonstrate the effectiveness of OpenMPE, with geometric mean energy savings of 15% across 9 use cases while maintaining full quality of service.
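The headline 15% is a geometric mean over per-use-case results; the sketch below shows how such a summary is computed from energy ratios (energy with the optimisation divided by energy without). The three ratios are invented for illustration, not the paper's data:

```python
import math

def geomean_energy_savings(ratios):
    """Given per-use-case energy ratios (with-optimisation energy over
    baseline energy), return savings = 1 - geometric mean of the ratios."""
    log_sum = sum(math.log(r) for r in ratios)
    return 1.0 - math.exp(log_sum / len(ratios))

# Three hypothetical use cases saving 15%, 20% and 10% respectively:
print(round(geomean_energy_savings([0.85, 0.80, 0.90]), 3))  # 0.151
```

The geometric mean is the usual choice for ratio metrics because it is symmetric under inversion of the baseline, unlike the arithmetic mean.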
Abstract:
Aim: The aim of the study is to evaluate factors that enable or constrain the implementation and service delivery of early warning systems and acute care training in practice.
Background: To date there is limited evidence to support the effectiveness of acute care initiatives (early warning systems, acute care training, outreach) in reducing the number of adverse events (cardiac arrest, death, unanticipated Intensive Care admission) through increased recognition and management of deteriorating ward-based patients in hospital [1-3]. The reasons posited are that previous research primarily focused on measuring patient outcomes following the implementation of an intervention or programme without considering the social factors (the organisation, the people, external influences) which may have affected the process of implementation and hence the measured end-points. Further research which considers these social processes is required in order to understand why a programme works, or does not work, in particular circumstances [4].
Method: The design is a multiple case study approach of four general wards in two acute hospitals where Early Warning Systems (EWS) and the Acute Life-threatening Events Recognition and Treatment (ALERT) course have been implemented. Various methods are being used to collect data about individual capacities, interpersonal relationships, and institutional balance and infrastructures in order to understand the intended and unintended process outcomes of implementing EWS and ALERT in practice. This information will be gathered from individual and focus group interviews with key participants (ALERT facilitators, nursing and medical ALERT instructors, ward managers, doctors, ward nurses and health care assistants from each hospital); non-participant observation of ward organisation and structure; audit of patients' EWS charts; and audit of the medical notes of patients who deteriorated during the study period to ascertain whether ALERT principles were followed.
Discussion & progress to date: This study commenced in January 2007. Ethical approval has been granted and data collection is ongoing, with interviews being conducted with key stakeholders. The findings from this study will provide evidence for policy-makers to make informed decisions regarding the direction of strategic and service planning of acute care services to improve the level of care provided to acutely ill patients in hospital.
References
1. Esmonde L, McDonnell A, Ball C, Waskett C, Morgan R, Rashidain A, et al. Investigating the effectiveness of Critical Care Outreach Services: A systematic review. Intensive Care Medicine 2006; 32: 1713-1721.
2. McGaughey J, Alderdice F, Fowler R, Kapila A, Mayhew A, Moutray M. Outreach and Early Warning Systems for the prevention of Intensive Care admission and death of critically ill patients on general hospital wards. Cochrane Database of Systematic Reviews 2007, Issue 3. www.thecochranelibrary.com
3. Winters BD, Pham JC, Hunt EA, Guallar E, Berenholtz S, Pronovost PJ. Rapid Response Systems: A systematic review. Critical Care Medicine 2007; 35(5): 1238-43.
4. Pawson R, Tilley N. Realistic Evaluation. London: Sage; 1997.
Abstract:
The Grand Chamber of the European Court of Human Rights recently delivered an important judgment on Article 3 ECHR in the case of Bouyid v Belgium. In Bouyid, the Grand Chamber was called upon to consider whether slaps inflicted on a minor and an adult in police custody were in breach of Article 3 ECHR, which provides that 'No one shall be subjected to torture or to inhuman or degrading treatment or punishment'. Overruling the Chamber judgment in the case, the Grand Chamber ruled by 14 votes to 3 that there had been a substantive violation of Article 3 in that the applicants had been subjected to degrading treatment by members of the Belgian police; it also found a breach of the investigative duty under Article 3. In this comment, I focus on the fundamental basis of disagreement between the majority of the Grand Chamber and those who found themselves in dissent, on the question of whether there had been a substantive breach of Article 3. The crux of the disagreement lay in the understanding and application of the test of 'minimum level of severity', which the ECtHR has established as decisive of whether a particular form of ill-treatment crosses the Article 3 threshold, seen also in light of Article 3's absolute character, which makes it non-displaceable, that is, immune to the trade-offs applicable in relation to qualified rights such as privacy and freedom of expression. I consider the way the majority of the Grand Chamber unpacked and applied the concept of dignity, or 'human dignity', towards finding a substantive breach of Article 3, and briefly distil some of the principles underpinning the understanding of human dignity emerging in the Court's analysis.
Abstract:
Administrative systems such as health care registration are of increasing importance in providing information for statistical, research, and policy purposes. There is thus a pressing need to understand better the detailed relationship between population characteristics as recorded in such systems and conventional censuses. This paper explores these issues using the unique Northern Ireland Longitudinal Study (NILS). It takes the 2001 Census enumeration as a benchmark and analyses the social, demographic and spatial patterns of mismatch with the health register at individual level. Descriptive comparison is followed by multivariate and multilevel analyses which show that approximately 25% of individuals are reported to be in different addresses and that age, rurality, education, and housing type are all important factors. This level of mismatch appears to be maintained over time, as earlier migrants who update their address details are replaced by others who have not yet done so. In some cases, apparent mismatches seem likely to reflect complex multi-address living arrangements rather than data error.