841 results for Robotic benchmarks
Abstract:
Back-in-time debuggers are extremely useful tools for identifying the causes of bugs, as they allow us to inspect the past states of objects no longer present in the current execution stack. Unfortunately, the "omniscient" approaches that try to remember all previous states are impractical because they either consume too much space or are far too slow. Several approaches rely on heuristics to limit these penalties, but they ultimately end up throwing out too much relevant information. In this paper we propose a practical approach to back-in-time debugging that keeps track of only the relevant past data. In contrast to other approaches, we store object history information together with the regular objects in the application memory. Although seemingly counter-intuitive, this approach has the effect that past data that is not reachable from current application objects (and hence no longer relevant) is automatically garbage collected. We describe the technical details of our approach and present benchmarks demonstrating that memory consumption stays within practical bounds. Furthermore, since our approach works at the virtual machine level, the performance penalty is significantly lower than with other approaches.
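The central mechanism, storing past states as ordinary heap objects reachable only through the live object, can be pictured with a toy model. The sketch below is illustrative Python with invented names, not the authors' implementation (which lives inside the Smalltalk virtual machine); it shows why history that is no longer reachable is reclaimed by the normal garbage collector:

```python
import gc

class HistoricalObject:
    """Toy model: every update preserves the prior state as a regular
    heap object linked from the current one."""

    def __init__(self, value):
        self.value = value
        self.previous = None  # chain of past states

    def update(self, new_value):
        # Record the old state before overwriting it. History lives in
        # ordinary application memory, reachable only via this object.
        old = HistoricalObject(self.value)
        old.previous = self.previous
        self.previous = old
        self.value = new_value

    def history(self):
        state, past = self, []
        while state is not None:
            past.append(state.value)
            state = state.previous
        return past

obj = HistoricalObject(1)
obj.update(2)
obj.update(3)
print(obj.history())  # [3, 2, 1]: past states reachable from the live object

del obj       # once the current object is unreachable, its entire
gc.collect()  # history chain becomes garbage and is collected too
```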
Abstract:
We report on our experiences with the Spy project, including implementation details and benchmark results. Spy is a re-implementation of the Squeak (i.e., Smalltalk-80) VM using the PyPy toolchain. The PyPy project allows code written in RPython, a subset of Python, to be translated to a multitude of different backends and architectures. During translation, many aspects of the implementation can be tuned independently, such as the garbage collection algorithm or the threading implementation. In this way, a whole host of interpreters can be derived from one abstract interpreter definition. Spy aims to bring these benefits to Squeak, allowing for greater portability and, eventually, improved performance. The current Spy codebase is able to run a small set of benchmarks on which it performs better than many similar Smalltalk VMs, though still slower than Squeak itself. Spy was built from scratch over the course of a week during a joint Squeak-PyPy Sprint in Bern last autumn.
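The "one abstract interpreter definition" approach can be pictured with a much-simplified sketch. The toy stack machine below is a hypothetical plain-Python illustration of a bytecode-dispatch loop; the real Spy interpreter implements the full Squeak bytecode set, and choices such as the GC or threading strategy are made at translation time rather than in this source:

```python
# Opcodes for a toy stack machine (hypothetical; Squeak's real
# bytecode set is far richer).
PUSH, ADD, PRINT, HALT = 0, 1, 2, 3

def interpret(program):
    """One interpreter definition: a plain bytecode-dispatch loop.
    A translation toolchain like PyPy's can turn a loop written in a
    restricted style like this into many concrete VMs."""
    pc = 0
    stack = []
    while True:
        opcode = program[pc]
        pc += 1
        if opcode == PUSH:
            stack.append(program[pc])
            pc += 1
        elif opcode == ADD:
            b = stack.pop()
            a = stack.pop()
            stack.append(a + b)
        elif opcode == PRINT:
            print(stack[-1])
        elif opcode == HALT:
            return

interpret([PUSH, 2, PUSH, 3, ADD, PRINT, HALT])  # prints 5
```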
Abstract:
Concurrency control is mostly based on locks and is therefore notoriously difficult to use. Even though some programming languages provide high-level constructs, these add complexity and potentially hard-to-detect bugs to the application. Transactional memory is an attractive mechanism that does not have the drawbacks of locks; however, the underlying implementation is often difficult to integrate into an existing language. In this paper we show how we have introduced transactional semantics into Smalltalk by using the reflective facilities of the language. Our approach is based on method annotations, incremental parse tree transformations, and an optimistic commit protocol. The implementation does not depend on modifications to the virtual machine and can therefore be changed at the language level. We report on a practical case study, benchmarks, and ongoing and future work.
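The optimistic commit protocol at the heart of such a design can be sketched as follows. This is a single-threaded Python illustration with invented names, offered as a rough analogue only; the paper's system instead rewrites the parse trees of annotated Smalltalk methods and commits at the language level:

```python
class Conflict(Exception):
    """Raised when commit-time validation fails; the caller retries."""

class TVar:
    """A transactional cell with a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0

class Transaction:
    """Optimistic protocol: reads are recorded, writes are buffered,
    and commit validates that no read value changed underneath us."""
    def __init__(self):
        self.read_set = {}    # TVar -> version seen at first read
        self.write_set = {}   # TVar -> buffered new value

    def read(self, var):
        if var in self.write_set:
            return self.write_set[var]
        self.read_set.setdefault(var, var.version)
        return var.value

    def write(self, var, value):
        self.write_set[var] = value

    def commit(self):
        # Validate: abort if any variable we read was committed since.
        for var, seen in self.read_set.items():
            if var.version != seen:
                raise Conflict("retry the transaction")
        # Publish buffered writes (atomic in this single-threaded sketch).
        for var, value in self.write_set.items():
            var.value = value
            var.version += 1

balance = TVar(100)
tx = Transaction()
tx.write(balance, tx.read(balance) - 30)
tx.commit()
print(balance.value)  # 70
```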
Abstract:
Several commentators have expressed disappointment with New Labour's apparent adherence to the policy frameworks of the previous Conservative administrations. The employment orientation of its welfare programmes, the contradictory nature of the social exclusion initiatives, and the continuing obsession with public sector marketisation, inspections, audits, standards and so on have all come under critical scrutiny (cf. Blyth 2001; Jordan 2001; Orme 2001). This paper suggests that in order to understand the socio-economic and political contexts affecting social work we need to examine the relationship between New Labour's modernisation project and its insertion within an architecture of global governance. In particular, membership of the European Union (EU), International Monetary Fund (IMF) and World Trade Organisation (WTO) sets the parameters for domestic policy in important ways. Whilst much has been written about the economic dimensions of 'globalisation' in relation to social work, rather less has been noted about the ways in which domestic policy agendas are driven by multilateral governance objectives. This policy dimension is important in trying to respond to the various changes affecting social work as a professional activity. What is possible, what is encouraged, and how things might be done are tightly bounded by the policy frameworks governing practice and affected by those governing the lives of service users. It is unhelpful to see policy formulation in purely national terms, as the UK is inserted into a network governance structure, a regulatory framework in which decisions are made by many countries, organisations and agencies. Together, they are producing a 'new legal regime', characterised by a marked neo-liberal policy agenda. This paper aims to demonstrate the relationship of New Labour's modernisation programme to these new forms of legality by examining two main policy areas and the welfare implications in which they are enmeshed. The first is privatisation, and the second is social policy in the European Union. Examining these areas allows a demonstration of how much of the New Labour programme can be understood as a local implementation of a transnational strategy, how parts of that strategy produce much of the social exclusion it purports to address, and how social welfare, and particularly social work, are noticeable by their absence from the policy discourses of the strategy. The paper details how the privatisation programme is considered a crucial vehicle for the further development of a transnational political economy, in which capital accumulation has been redefined as 'welfare'. In this development, frameworks, codes and standards are central, and the final section of the paper examines how the modernisation strategy of the European Union depends upon a social policy marked by an employment orientation and risk rationality, aimed at reconfiguring citizen identities. The strategy is governed through an 'open mode of coordination', in which codes, standards, benchmarks and so on play an important role. The paper considers the modernisation strategy, and the new legality within which it is embedded, as dependent upon social policy as a technology of liberal governance, one demonstrating a new rationality in comparison to that governing post-Second World War welfare, and one which aims to reconfigure institutional infrastructure and citizen identity.
Abstract:
Optimizing intralogistics processes requires a holistic process representation that takes material flow, information flow, and the resources employed into account. In this paper, several frequently used process-mapping methods are compared and evaluated in this respect. Their strengths and weaknesses are summarized in the form of a benchmark, which serves as the basis for a new method developed within the IGF research project 16187 N/1.
Abstract:
Second Life (SL) is an ideal platform for language learning. It is a Multi-User Virtual Environment in which users can have a variety of learning experiences in life-like settings. Numerous attempts have been made to use SL as a platform for language teaching, and its potential as a means to promote conversational interaction has been reported. However, the research so far has largely focused on simply using SL, without further augmentation, for communication between learners or between teachers and learners in a school-like environment. Conversely, not enough attention has been paid to its controllability, which builds on the functions embedded in SL. This study, based on the latest theories of second language acquisition, especially Task-Based Language Teaching and the Interaction Hypothesis, proposes the design and implementation of an automatized interactive task space (AITS) in which robotic agents work as interlocutors for learners. This paper presents a design that incorporates these SLA theories into SL and the implementation method used to construct the AITS, exploiting the controllability of SL. It also presents the results of an evaluation experiment conducted on the constructed AITS.
Abstract:
Current advanced cloud infrastructure management solutions allow scheduling actions that dynamically change the number of running virtual machines (VMs). This approach, however, does not guarantee that the scheduled number of VMs will properly handle the actual user-generated workload, especially if user utilization patterns change. We propose using a dynamically generated scaling model for the VMs containing the services of distributed applications, able to react to variations in the number of application users. We answer the following question: how can we decide dynamically how many services of each type are needed in order to handle a larger workload within the same time constraints? We describe a mechanism for dynamically composing the SLAs that control the scaling of distributed services by combining data analysis mechanisms with application benchmarking over multiple VM configurations. By processing the data sets generated by multiple application benchmarks, we discover a set of service monitoring metrics able to predict critical Service Level Agreement (SLA) parameters. By combining this set of predictor metrics with a heuristic for selecting the appropriate scaling-out paths for the services of distributed applications, we show how SLA scaling rules can be inferred and then used for controlling the runtime scale-in and scale-out of distributed services. We validate our architecture and models by performing scaling experiments with a distributed application representative of the enterprise class of information systems. We show how dynamically generated SLAs can be successfully used to control the management of distributed service scaling.
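As a rough illustration of how a scaling rule might be inferred from benchmark data, consider the sketch below. The metric names, the linear model, the numbers, and the safety margin are all illustrative assumptions, not the mechanism or data from the paper:

```python
import numpy as np

# Benchmark observations for one service type (hypothetical data):
# a monitored predictor metric (CPU load %) against the SLA-relevant
# response time in ms, collected across benchmark runs.
cpu_load = np.array([22., 41., 58., 79., 95.])
resp_ms  = np.array([120., 150., 210., 340., 620.])

SLA_RESPONSE_MS = 300.0  # assumed SLA bound on response time

# Fit a simple model response_time ~ cpu_load from the benchmark runs;
# the predictor metric is chosen because it tracks the SLA parameter.
slope, intercept = np.polyfit(cpu_load, resp_ms, 1)

# Invert the model to find the metric level at which the SLA would be
# violated, and back off by a safety margin before scaling out.
cpu_threshold = (SLA_RESPONSE_MS - intercept) / slope
scale_out_at = 0.9 * cpu_threshold
print(f"inferred rule: scale out when cpu_load > {scale_out_at:.1f}%")

def scaling_decision(current_cpu_load):
    """Runtime use of the inferred SLA scaling rule."""
    return "scale-out" if current_cpu_load > scale_out_at else "hold"
```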
Abstract:
Robotic exoskeletons can be used to study and treat patients with neurological impairments. They can guide and support the human limb over a large range of motion, which requires that the movement trajectory of the exoskeleton coincide with that of the human arm. This is straightforward to achieve for rather simple joints like the elbow, but very challenging for complex joints like the human shoulder, which comprises several bones and can exhibit movement with multiple rotational and translational degrees of freedom. Thus, several research groups have developed different shoulder actuation mechanisms. However, there are no experimental studies that directly compare the comfort of two different shoulder actuation mechanisms. In this study, the comfort and naturalness of the new shoulder actuation mechanism of the ARMin III exoskeleton are compared to a ball-and-socket-type shoulder actuation. The study was conducted in 20 healthy subjects using questionnaires and 3D motion recordings to assess comfort and naturalness. The results indicate that the new shoulder actuation is slightly better than a ball-and-socket-type actuation. However, the differences are small, and under the tested conditions the comfort and naturalness of the two shoulder actuations do not differ substantially.
Abstract:
HYPOTHESIS A previously developed image-guided robot system can safely drill a tunnel from the lateral mastoid surface, through the facial recess, to the middle ear, as a viable alternative to conventional mastoidectomy for cochlear electrode insertion. BACKGROUND Direct cochlear access (DCA) provides a minimally invasive tunnel from the lateral surface of the mastoid through the facial recess to the middle ear for cochlear electrode insertion. A safe and effective tunnel drilled through the narrow facial recess requires a highly accurate image-guided surgical system. Previous attempts have relied on patient-specific templates and robotic systems to guide drilling tools. In this study, we report on improvements made to an image-guided surgical robot system developed specifically for this purpose and the resulting accuracy achieved in vitro. MATERIALS AND METHODS The proposed image-guided robotic DCA procedure was carried out bilaterally on 4 whole-head cadaver specimens. Specimens were implanted with titanium fiducial markers and imaged with cone-beam CT. A preoperative plan was created using a custom software package in which relevant anatomical structures of the facial recess were segmented and a drill trajectory targeting the round window was defined. Patient-to-image registration was performed with the custom robot system to reference the preoperative plan, and the DCA tunnel was drilled in 3 stages with progressively longer drill bits. The position of the drilled tunnel was defined as a line fitted to a point cloud of the segmented tunnel using principal component analysis (PCA function in MATLAB). The accuracy of the DCA was then assessed by coregistering preoperative and postoperative image data and measuring the deviation of the drilled tunnel from the plan. The final step of electrode insertion was also performed through the DCA tunnel after manual removal of the promontory through the external auditory canal. RESULTS Drilling error was defined as the lateral deviation of the tool in the plane perpendicular to the drill axis (excluding depth error). Errors of 0.08 ± 0.05 mm and 0.15 ± 0.08 mm were measured on the lateral mastoid surface and at the target on the round window, respectively (n = 8). Full electrode insertion was possible in 7 cases. In 1 case, the electrode was partially inserted, with 1 contact pair external to the cochlea. CONCLUSION The purpose-built robot system was able to perform a safe and reliable DCA for cochlear implantation. The workflow implemented in this study mimics the envisioned clinical procedure, showing the feasibility of future clinical implementation.
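The stated accuracy computation, fitting a line to the segmented tunnel's point cloud with PCA and measuring lateral deviation in the plane perpendicular to the drill axis, can be reproduced in outline. The sketch below is an illustrative NumPy reimplementation with made-up coordinates, not the authors' MATLAB code:

```python
import numpy as np

def fit_line_pca(points):
    """Fit a 3D line to an (N, 3) point cloud: returns a point on the
    line (the centroid) and its unit direction (first principal axis)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]

def lateral_deviation(planned_point, planned_dir, target):
    """Deviation of `target` from the planned trajectory, measured in
    the plane perpendicular to the drill axis (depth error excluded)."""
    planned_dir = planned_dir / np.linalg.norm(planned_dir)
    offset = target - planned_point
    # Remove the along-axis (depth) component, keep the lateral part.
    lateral = offset - np.dot(offset, planned_dir) * planned_dir
    return np.linalg.norm(lateral)

# Hypothetical segmented-tunnel voxel centers (mm) from postoperative CT:
tunnel = np.array([[0.0, 0.1, 0.0], [0.1, 0.0, 5.0],
                   [0.0, 0.1, 10.0], [0.1, 0.1, 15.0]])
point, direction = fit_line_pca(tunnel)

# Evaluate the drilled axis against the planned trajectory at target depth.
planned_point = np.zeros(3)
planned_dir = np.array([0.0, 0.0, 1.0])
target_depth_point = np.array([0.0, 0.0, 20.0])
drilled_at_target = point + direction * np.dot(target_depth_point - point,
                                               direction)
print(f"error at target: "
      f"{lateral_deviation(planned_point, planned_dir, drilled_at_target):.3f} mm")
```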
Abstract:
BACKGROUND: Arm hemiparesis secondary to stroke is common and disabling. We aimed to assess whether robotic training of an affected arm with ARMin, an exoskeleton robot that allows task-specific training in three dimensions, reduces motor impairment more effectively than conventional therapy does. METHODS: In a prospective, multicentre, parallel-group randomised trial, we enrolled patients from four centres in Switzerland who had had motor impairment for more than 6 months and moderate-to-severe arm paresis after a cerebrovascular accident, and who met our eligibility criteria. Eligible patients were randomly assigned (1:1) to receive robotic or conventional therapy using a centre-stratified randomisation procedure. For both groups, therapy was given for at least 45 min three times a week for 8 weeks (24 sessions in total). The primary outcome was the change in score on the arm (upper extremity) section of the Fugl-Meyer assessment (FMA-UE). Assessors tested patients immediately before therapy, after 4 weeks of therapy, at the end of therapy, and 16 weeks and 34 weeks after the start of therapy. Assessors were masked to treatment allocation, but patients, therapists, and data analysts were unmasked. Analyses were by modified intention to treat. This study is registered with ClinicalTrials.gov, number NCT00719433. FINDINGS: Between May 4, 2009, and Sept 3, 2012, 143 individuals were tested for eligibility, of whom 77 were eligible and agreed to participate. 38 patients assigned to robotic therapy and 35 assigned to conventional therapy were included in the analyses. Patients assigned to robotic therapy had significantly greater improvements in motor function in the affected arm over the course of the study, as measured by FMA-UE, than did those assigned to conventional therapy (F=4.1, p=0.041; mean difference in score 0.78 points, 95% CI 0.03-1.53). No serious adverse events related to the study occurred. INTERPRETATION: Neurorehabilitation therapy including task-oriented training with an exoskeleton robot can enhance improvement of motor function in a chronically impaired paretic arm after stroke more effectively than conventional therapy. However, the absolute difference between the effects of robotic and conventional therapy in our study was small and of weak significance, which leaves the clinical relevance in question.
Abstract:
High-throughput assays, such as the yeast two-hybrid system, have generated a huge amount of protein-protein interaction (PPI) data in the past decade. This tremendously increases the need for reliable methods to systematically and automatically suggest protein functions and the relationships between them. With the available PPI data, it is now possible to study functions and relationships in the context of a large-scale network. To date, several network-based schemes have been proposed to annotate protein functions effectively on a large scale. However, due to the noise inherent in high-throughput data generation, new methods and algorithms are needed to increase the reliability of functional annotations. Previous work on a yeast PPI network (Samanta and Liang, 2003) has shown that local connection topology, particularly for two proteins sharing an unusually large number of neighbors, can predict functional associations between proteins and hence suggest their functions. One advantage of that work is that the algorithm is not sensitive to noise (false positives) in high-throughput PPI data. In this study, we improved their prediction scheme by developing a new algorithm and new methods, which we applied to a human PPI network to make a genome-wide functional inference. We used the new algorithm to measure and reduce the influence of hub proteins on the detection of functionally associated proteins. We used the annotations of the Gene Ontology (GO) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) as independent and unbiased benchmarks to evaluate our algorithms and methods within the human PPI network. We showed that, compared with the previous work of Samanta and Liang, the algorithm and methods developed in this study improved the overall quality of functional inferences for human proteins. By applying the algorithms to the human PPI network, we obtained 4,233 significant functional associations among 1,754 proteins. Further comparison of their KEGG and GO annotations allowed us to assign 466 KEGG pathway annotations to 274 proteins and 123 GO annotations to 114 proteins, with estimated false discovery rates of <21% for KEGG and <30% for GO. We clustered 1,729 proteins by their functional associations and performed pathway analysis to identify several subclusters highly enriched in certain signaling pathways. In particular, we performed a detailed analysis of a subcluster enriched in the transforming growth factor β signaling pathway (P < 10^-50), which is important in cell proliferation and tumorigenesis. Analysis of another four subclusters also suggested potential new players in six signaling pathways worthy of further experimental investigation. Our study gives clear insight into the common-neighbor-based prediction scheme and provides a reliable method for large-scale functional annotation in this post-genomic era.
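The common-neighbor idea underlying this family of prediction schemes can be sketched briefly. The hypergeometric score below is the standard formulation of "more shared interaction partners than expected by chance"; it is an illustrative stand-in with toy data, and it omits the paper's hub-influence correction:

```python
from scipy.stats import hypergeom

def shared_neighbor_pvalue(neighbors_a, neighbors_b, n_proteins):
    """P-value that two proteins share at least this many interaction
    partners by chance, given the network size (hypergeometric tail)."""
    shared = len(neighbors_a & neighbors_b)
    # Population: all proteins; successes: neighbors of a; draws: neighbors of b.
    return hypergeom.sf(shared - 1, n_proteins,
                        len(neighbors_a), len(neighbors_b))

# Tiny hypothetical network: protein -> set of interaction partners.
net = {
    "P1": {"A", "B", "C", "D"},
    "P2": {"A", "B", "C", "E"},
}
n = 1000  # assumed total number of proteins in the network
p = shared_neighbor_pvalue(net["P1"], net["P2"], n)
print(f"P1-P2 shared-neighbor p-value: {p:.2e}")
# A small p-value suggests a functional association, so annotations can
# be transferred between the pair; ranking all pairs and thresholding at
# a target false discovery rate yields the significant associations.
```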
Abstract:
As the family preservation and support movement evolves rapidly, this article provides an overview of the past, present and future of this approach to policy and services. Building on several decades of practice experience and research, and now with federal funding, program designers are searching for ways to implement system-wide change with an array of services delivered from a family-focused, strengths-based perspective. Critical issues facing the movement are discussed, and a set of benchmarks for judging future success is presented.
Abstract:
In order to fully describe the construct of empowerment and to determine possible measures for this construct in racially and ethnically diverse neighborhoods, a qualitative study based on Grounded Theory was conducted at both the individual and collective levels. Participants included 49 grassroots experts on community empowerment, who were interviewed through semi-structured interviews and focus groups. The researcher also conducted field observations as part of the research protocol. The results of the study identified benchmarks of individual and collective empowerment and hundreds of possible markers of collective empowerment applicable in diverse communities. Results also indicated that community involvement is essential to the selection and implementation of proper measures. Additional findings were that the construct of empowerment involves specific principles of empowering relationships and particular motivational factors. All of these findings led to a two-dimensional model of empowerment based on the concepts of relationships among members of a collective body and the collective body's desire for socio-political change. These results suggest that the design, implementation, and evaluation of programs that foster empowerment must be based on collaborative ventures between the population being served and program staff, because of the interactive, synergistic nature of the construct. In addition, empowering programs should embrace specific principles and processes of individual and collective empowerment in order to maximize their effectiveness and efficiency. Finally, the results suggest that collaboratively choosing markers to measure the processes and outcomes of empowerment in the main systems and populations living in today's multifaceted communities is a useful mechanism for determining change.
Abstract:
The European Union's (EU) trade policy has a strong influence on economic development and the human rights situation in the EU's partner countries, particularly in developing countries. The present study was commissioned by the German Federal Ministry for Economic Cooperation and Development (BMZ) as a contribution to further developing appropriate methodologies for assessing human rights risks in development-related policies, an objective set in the BMZ's 2011 strategy on human rights. The study offers guidance for stakeholders seeking to improve their knowledge of how to assess, both ex ante and ex post, the impact of Economic Partnership Agreements on poverty reduction and the right to food in ACP countries. Currently, human rights impacts are not yet systematically addressed in the trade sustainability impact assessments (trade SIAs) that the European Commission conducts when negotiating trade agreements. Nor do these assessments focus specifically on disadvantaged groups or include other benchmarks relevant to human rights impact assessments (HRIAs). The EU itself has identified a need for action in this regard. In June 2012 it presented an Action Plan on Human Rights and Democracy that calls for the inclusion of human rights in all impact assessments and, in this context, explicitly refers to trade agreements. Since then, the EU has begun to adapt its SIA methodology slightly and is working to define more adequate, human rights-consistent procedures. It is hoped that readers of this study will find inspiration to contribute to this process and to help improve the human rights consistency of future trade options.
Abstract:
We track dated firn horizons within 400 MHz short-pulse radar profiles to find the continuous extent over which they can be used as historical benchmarks for studying past accumulation rates in West Antarctica. The 30-40 cm pulse resolution is comparable to the accumulation rates of most areas. We tracked a particular set of horizons that varied from 30 to 90 m in depth over a distance of 600 km. The main limitations to continuity are fading at depth, pinching associated with accumulation rate differences between hills and valleys, and artificial fading caused by stacking along dips. The latter two may be overcome over multi-kilometer distances by matching the relative amplitude and spacing of several close horizons, along with their pulse forms and phases. Modeling of reflections from thin layers suggests that the -37 to -50 dB range of reflectivity and the pulse waveforms we observed are caused by the numerous thin ice layers observed in core stratigraphy. Constructive interference between reflections from these close, high-density layers can explain the maintenance of reflective strength throughout the depth of the firn despite the effects of compaction. The continuity suggests that these layers formed throughout West Antarctica, and possibly into East Antarctica as well.
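The thin-layer interference argument can be illustrated with a standard two-interface (Airy) reflectivity calculation. The permittivities, layer thickness, and normal-incidence geometry below are assumed values for firn and a refrozen ice layer, not parameters taken from the paper; under these assumptions a millimeter-scale ice layer falls in the observed reflectivity range:

```python
import numpy as np

C = 299_792_458.0  # speed of light in vacuum, m/s

def fresnel(eps1, eps2):
    """Normal-incidence amplitude reflection coefficient between media."""
    n1, n2 = np.sqrt(eps1), np.sqrt(eps2)
    return (n1 - n2) / (n1 + n2)

def thin_layer_reflectivity_db(eps_firn, eps_layer, thickness_m, freq_hz):
    """Power reflectivity (dB) of a thin ice layer embedded in firn:
    echoes from the top and bottom interfaces interfere according to
    the two-way phase delay across the layer (Airy formula)."""
    r12 = fresnel(eps_firn, eps_layer)   # firn -> ice layer
    r23 = fresnel(eps_layer, eps_firn)   # ice layer -> firn (= -r12)
    phase = 2 * np.pi * freq_hz * np.sqrt(eps_layer) * (2 * thickness_m) / C
    r = (r12 + r23 * np.exp(1j * phase)) / (1 + r12 * r23 * np.exp(1j * phase))
    return 20 * np.log10(abs(r))

# A ~5 mm ice layer (eps ~ 3.15) in firn (eps ~ 2.2) probed at 400 MHz
# comes out near the top of the observed -37 to -50 dB range:
print(f"{thin_layer_reflectivity_db(2.2, 3.15, 0.005, 400e6):.1f} dB")
# Several such closely spaced layers interfering constructively can keep
# horizons visible at depth despite compaction weakening each contrast.
```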