889 results for Barker frog
Abstract:
Little is known about late Holocene environmental change in Cyrenaica. The late Holocene sequence in the Haua Fteah, the key regional site, is highly discontinuous and characterised by stable-burning deposits. Here, the geoarchaeology of the late Holocene cave fill of a small cave, CP1565, located close to the Haua Fteah, is described. The well-stratified sequence, dating from the fourth century AD to the present day, provides a glimpse of life at the bottom of the settlement hierarchy and of changing environments over the last 1600 years, with degraded vegetation and aridity during the ‘Little Ice Age’.
Abstract:
Amphibian skin, and particularly that of South and Central American phyllomedusine frogs, has been described as "a huge factory and store house of a variety of active peptides". The 40-amino-acid amphibian CRF-like peptide, sauvagine, is a prototype member of a unique family of these Phyllomedusa skin peptides. In this study, we describe for the first time the structure of a mature novel peptide from the skin secretion of the South American orange-legged leaf frog, Phyllomedusa hypochondrialis, which belongs to the amphibian CRF/sauvagine family. A partial N-terminal amino acid sequence was obtained by automated Edman degradation, with the following structure: pGlu-GPPISIDLNMELLRNMIEI-. The biosynthetic precursor of this novel sauvagine-like peptide consisted of 85 amino acid residues and was deduced from a cDNA library constructed from the same skin secretion. Compared with the prototype sauvagine from the frog Phyllomedusa sauvagei, this novel peptide exerted similar contractile effects on isolated guinea-pig colon and rat urinary bladder smooth muscle preparations.
Abstract:
OBJECTIVE: To evaluate the effect of altering a single component of a rehabilitation programme (e.g. adding bilateral practice alone) on functional recovery after stroke, defined using a measure of activity.
DATA SOURCES: A search was conducted of Medline/Pubmed, CINAHL and Web of Science.
REVIEW METHODS: Two reviewers independently assessed eligibility. Randomized controlled trials were included if all participants received the same base intervention, and the experimental group experienced alteration of a single component of the training programme. This could be manipulation of an intrinsic component of training (e.g. intensity) or the addition of a discretionary component (e.g. augmented feedback). One reviewer extracted the data and another independently checked a subsample (20%). Quality was appraised according to the PEDro scale.
RESULTS: Thirty-six studies (n = 1724 participants) were included. These evaluated nine training components: mechanical degrees of freedom, intensity of practice, load, practice schedule, augmented feedback, bilateral movements, constraint of the unimpaired limb, mental practice and mirrored-visual feedback. Manipulation of the mechanical degrees of freedom of the trunk during reaching and the addition of mental practice during upper limb training were the only single components found to independently enhance recovery of function after stroke.
CONCLUSION: This review provides limited evidence to support the supposition that altering a single component of a rehabilitation programme realises greater functional recovery for stroke survivors. Further investigations are required to determine the most effective single components of rehabilitation programmes, and the combinations that may enhance functional recovery.
Abstract:
We describe seven polymorphic dinucleotide microsatellite loci isolated from bank voles (Clethrionomys glareolus, Rodentia: Muridae) collected from the Wirral Peninsula, United Kingdom. The microsatellites were isolated as part of a long-term study on the wider effects of host-pathogen interactions of an endemic viral disease. These microsatellites showed between five and 13 alleles per locus in these populations. Observed and expected heterozygosities ranged from 0.275 to 0.777 and from 0.487 to 0.794, respectively. These markers will allow us to investigate the structure of this bank vole population. © 2005 Blackwell Publishing Ltd.
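The expected heterozygosity figures above are conventionally derived from allele frequencies as one minus the sum of squared frequencies. A minimal sketch of that calculation, using made-up allele frequencies rather than data from the study:

```python
def expected_heterozygosity(freqs):
    """Expected heterozygosity H_e = 1 - sum(p_i^2) for allele frequencies p_i."""
    if abs(sum(freqs) - 1.0) > 1e-9:
        raise ValueError("allele frequencies must sum to 1")
    return 1.0 - sum(p * p for p in freqs)

# Hypothetical locus with four alleles at illustrative frequencies
print(round(expected_heterozygosity([0.4, 0.3, 0.2, 0.1]), 3))  # 0.7
```

A locus with more, evenly distributed alleles yields a higher value, which is why loci with up to 13 alleles can approach expected heterozygosities near 0.8.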
Abstract:
How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. It is addressed here by proposing a six-step benchmarking methodology in which a user provides a set of four weights indicating how important each of the following attribute groups, namely memory, processor, computation and storage, is to the application to be executed on the cloud. The weights, along with cloud benchmarking data, are used to generate a ranking of VMs that can maximise the performance of the application. The rankings are validated through an empirical analysis using two case study applications, a financial risk application and a molecular dynamics simulation, both representative of workloads that can benefit from execution on the cloud. Both case studies validate the feasibility of the methodology and highlight that maximum performance can be achieved on the cloud by selecting the top-ranked VMs produced by the methodology.
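The ranking step described above reduces to a weighted sum over normalised benchmark scores. A minimal sketch under assumed inputs (the VM names, scores and weights below are illustrative, not the paper's benchmark data):

```python
# Hypothetical normalised benchmark scores (0-1) per VM for the four groups.
benchmarks = {
    "m3.large":  {"memory": 0.8, "processor": 0.6, "computation": 0.7, "storage": 0.5},
    "c3.xlarge": {"memory": 0.6, "processor": 0.9, "computation": 0.9, "storage": 0.4},
    "i2.xlarge": {"memory": 0.7, "processor": 0.5, "computation": 0.6, "storage": 0.9},
}

def rank_vms(weights, benchmarks):
    """Score each VM as the weighted sum of its group benchmarks, sorted descending."""
    scores = {vm: sum(weights[g] * s[g] for g in weights) for vm, s in benchmarks.items()}
    return sorted(scores, key=scores.get, reverse=True)

# A compute-heavy application: processor and computation matter most.
weights = {"memory": 1, "processor": 4, "computation": 4, "storage": 1}
print(rank_vms(weights, benchmarks))  # c3.xlarge ranked first
```

Changing the weights to favour storage would instead push the storage-optimised VM to the top, which is the behaviour the methodology relies on.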
Abstract:
With the availability of a wide range of cloud Virtual Machines (VMs), it is difficult to determine which VMs will maximise the performance of an application. Benchmarking is commonly used to this end for capturing the performance of VMs. Most cloud benchmarking techniques are heavyweight: time-consuming processes that have to benchmark an entire VM in order to obtain accurate benchmark data. Such benchmarks cannot be used in real time on the cloud and incur extra costs even before an application is deployed.
In this paper, we present lightweight cloud benchmarking techniques that execute quickly and can be used in near real time on the cloud. The exploration of lightweight benchmarking techniques is facilitated by the development of DocLite - Docker Container-based Lightweight Benchmarking. DocLite is built on Docker container technology, which allows a user-defined portion of the VM (such as memory size and number of CPU cores) to be benchmarked. DocLite operates in two modes: in the first, containers are used to benchmark a small portion of the VM to generate performance ranks; in the second, historic benchmark data is used alongside the first mode, as a hybrid, to generate VM ranks. The generated ranks are evaluated against three scientific high-performance computing applications. The proposed techniques are up to 91 times faster than a heavyweight technique that benchmarks the entire VM. The first mode alone generates ranks with over 90% and 86% accuracy for sequential and parallel execution of an application, respectively. The hybrid mode improves the correlation slightly, but the first mode is sufficient for benchmarking cloud VMs.
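The hybrid mode can be pictured as blending a fresh, container-based score with historic whole-VM data before ranking. A minimal sketch, where the blend weight and all scores are illustrative assumptions rather than DocLite's actual implementation:

```python
def hybrid_scores(container_scores, historic_scores, alpha=0.7):
    """Blend lightweight container benchmark scores with historic whole-VM data;
    alpha weights the fresh container measurements."""
    return {vm: alpha * container_scores[vm] + (1 - alpha) * historic_scores[vm]
            for vm in container_scores}

container = {"vm_a": 0.80, "vm_b": 0.60}   # from benchmarking a small slice of each VM
historic  = {"vm_a": 0.70, "vm_b": 0.75}   # whole-VM benchmarks collected earlier

blended = hybrid_scores(container, historic)
ranking = sorted(blended, key=blended.get, reverse=True)
print(ranking)  # vm_a stays ahead: the fresh measurement dominates the blend
```

With a low alpha the historic data would dominate instead, which is the tuning knob such a hybrid exposes.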
Abstract:
Bag of Distributed Tasks (BoDT) applications can benefit from decentralised execution on the cloud. However, there is a trade-off between the performance that can be achieved by employing a large number of cloud VMs for the tasks and the monetary constraints that are often placed by a user. The research reported in this paper investigates this trade-off so that an optimal plan for deploying BoDT applications on the cloud can be generated. A heuristic algorithm, which considers the user's preferences for performance and cost, is proposed and implemented. The feasibility of the algorithm is demonstrated by generating execution plans for a sample application. The key result is that the algorithm generates optimal execution plans for the application over 91% of the time.
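A heuristic of this kind can be sketched by scoring each candidate VM count on normalised makespan and cost terms, weighted by the user's preferences. The task numbers, hourly billing model and scoring function below are illustrative assumptions, not the paper's algorithm:

```python
def choose_vm_count(task_secs, n_tasks, price_per_hour, w_perf, w_cost, max_vms=64):
    """Pick the VM count that best balances makespan against hourly-billed cost.
    Both terms are normalised by their worst case so the weights are comparable."""
    def ceil_div(a, b):
        return -(-a // b)
    ns = range(1, max_vms + 1)
    makespan = {n: task_secs * ceil_div(n_tasks, n) for n in ns}          # seconds
    cost = {n: n * price_per_hour * ceil_div(makespan[n], 3600) for n in ns}
    worst_t, worst_c = max(makespan.values()), max(cost.values())
    def score(n):
        return w_perf * makespan[n] / worst_t + w_cost * cost[n] / worst_c
    return min(ns, key=score)

# Illustrative workload: 1000 tasks of 60 s each at $0.10 per VM-hour.
cheap = choose_vm_count(60, 1000, 0.10, w_perf=0.2, w_cost=0.8)
fast = choose_vm_count(60, 1000, 0.10, w_perf=0.9, w_cost=0.1)
print(cheap, fast)  # the cost-sensitive plan uses fewer VMs than the performance-sensitive one
```

Sweeping the weights between these extremes traces out the performance/cost trade-off curve that the deployment plan is chosen from.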
Abstract:
When orchestrating Web service workflows, the geographical placement of the orchestration engine(s) can greatly affect workflow performance. Data may have to be transferred across long geographical distances, which in turn increases execution time and degrades the overall performance of a workflow. In this paper, we present a framework that, given a DAG-based workflow specification, computes the optimal Amazon EC2 cloud regions in which to deploy the orchestration engines and execute a workflow. The framework incorporates a constraint model that solves the workflow deployment problem and is generated using an automated constraint modelling system. The feasibility of the framework is evaluated by executing different sample workflows representative of scientific workloads. The experimental results indicate that the framework reduces workflow execution time, providing a speedup of 1.3x-2.5x over centralised approaches.
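The placement decision can be illustrated with a brute-force stand-in for the paper's constraint model: pick the region that minimises total data transfer time given where the workflow's data lives. Region names, transfer rates and data sizes below are assumed for illustration:

```python
def best_region(data_sources, regions, transfer):
    """Place the orchestration engine in the region minimising total transfer time.
    transfer[(src, dst)] is the assumed cost in seconds per GB moved."""
    def total_time(region):
        return sum(gb * transfer[(src, region)] for src, gb in data_sources)
    return min(regions, key=total_time)

regions = ["us-east-1", "eu-west-1"]
# Illustrative per-GB transfer times, not measured EC2 figures.
transfer = {("us-east-1", "us-east-1"): 1, ("us-east-1", "eu-west-1"): 8,
            ("eu-west-1", "eu-west-1"): 1, ("eu-west-1", "us-east-1"): 8}
# Most of the workflow's data (10 GB) sits in us-east-1, so the engine goes there.
data_sources = [("us-east-1", 10), ("eu-west-1", 2)]
print(best_region(data_sources, regions, transfer))  # us-east-1
```

A real constraint model additionally handles the DAG structure and multiple engines, but the objective, minimising cross-region data movement, is the same.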