976 results for Technical Report, Computer Science
Abstract:
Object-oriented meta-languages such as MOF or EMOF are often used to specify domain-specific languages. However, these meta-languages lack the ability to describe behavior or operational semantics. Several approaches have used a subset of Java mixed with OCL as an executable meta-language. In this paper, we report our experience of using Smalltalk as an executable and integrated meta-language. We validated this approach by incrementally building Moose, a meta-described reengineering environment, over the last decade. The reflective capabilities of Smalltalk support a uniform way of letting base developers focus on their tasks while at the same time allowing them to meta-describe their domain models. The advantage of this approach is that the developer uses the same tools and environment.
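The meta-description idea can be illustrated outside Smalltalk as well. The sketch below is a hypothetical Python analogue, not the paper's code: the language's own reflection derives a meta-description from an ordinary domain class, so the developer stays in one environment for both the base and the meta level. `Person` and `meta_describe` are illustrative names.

```python
import dataclasses

@dataclasses.dataclass
class Person:
    """An ordinary domain class; the developer writes only this."""
    name: str
    age: int

def meta_describe(cls):
    """Derive a meta-description of a domain class via reflection."""
    return {
        f.name: (f.type if isinstance(f.type, str) else f.type.__name__)
        for f in dataclasses.fields(cls)
    }

description = meta_describe(Person)  # {'name': 'str', 'age': 'int'}
```

In Smalltalk the same effect is obtained through the class and method objects themselves, which is what gives the paper's approach its uniformity.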
Abstract:
OBJECT: The aim of our study was to demonstrate the image quality of the new device using human cadavers, extending the horizon of available imaging modalities in forensic medicine. MATERIALS AND METHODS: Six human cadavers were examined, yielding C-arm data sets of the head, neck, thorax, abdomen and pelvis. High-resolution mode was performed with 500 fluoroscopy shots during a 190-degree orbital movement at a constant tube voltage of 100 kV and a current of 4.6 mA. Based on these data sets, three-dimensional reconstructions were subsequently generated. RESULTS: The reconstructed data sets revealed high-resolution images of all skeletal structures in near-CT quality. The same image quality was available in all reconstruction planes. Artefacts caused by restorative dental materials were less accentuated in CBCT data sets. The system configuration was not powerful enough to generate sufficient images of intracranial structures. CONCLUSION: After these encouraging preliminary results, the forensic indications suitable for imaging with a 3D C-arm have to be defined. The visualization of locally limited regions of interest, such as the cervical spine or the facial skeleton, seems promising.
Abstract:
With today's prevalence of Internet-connected systems storing sensitive data and the omnipresent threat of technically skilled malicious users, computer security remains a critically important field. Given today's multitude of vulnerable systems and security threats, it is vital that computer science students be taught techniques for programming secure systems, especially since many of them will work on systems with sensitive data after graduation. Teaching computer science students the proper design, implementation, and maintenance of secure systems is a challenging task that calls for novel pedagogical tools. This report describes the implementation of a compiler that converts mandatory access control specifications written in the Domain-Type Enforcement Language into policies for the Java Security Manager, primarily for pedagogical purposes. The implementation of the Java Security Manager was explored in depth, and various techniques to work around its inherent limitations were explored and partially implemented, although some of these workarounds do not appear in the current version of the compiler because they would have compromised cross-platform compatibility. The current version of the compiler and the implementation details of the Java Security Manager are discussed in depth.
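The core translation step can be sketched in a few lines. The following Python fragment is a hypothetical illustration of the domain-type-enforcement idea, not the actual DTEL syntax or the compiler's output: subjects run in domains, files are labelled with types, and a rule base grants a domain specific access modes on specific types.

```python
# Rule base: (domain, access mode) -> set of types it may touch.
# Domain and type names are invented for illustration.
RULES = {
    ("student_d", "read"): {"grades_t"},
    ("teacher_d", "read"): {"grades_t"},
    ("teacher_d", "write"): {"grades_t"},
}

# File labelling: path -> type.
FILE_TYPES = {"/var/school/grades.db": "grades_t"}

def check(domain: str, access: str, path: str) -> bool:
    """Grant the access only if a rule maps the domain to the file's type."""
    return FILE_TYPES.get(path) in RULES.get((domain, access), set())

assert check("student_d", "read", "/var/school/grades.db")
assert not check("student_d", "write", "/var/school/grades.db")
```

A compiler in the spirit of the report would emit equivalent checks behind the Java Security Manager's `checkRead`/`checkWrite` hooks.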
Abstract:
Back-in-time debuggers are extremely useful tools for identifying the causes of bugs, as they allow us to inspect the past states of objects that are no longer present in the current execution stack. Unfortunately, the "omniscient" approaches that try to remember all previous states are impractical because they either consume too much space or are far too slow. Several approaches rely on heuristics to limit these penalties, but they ultimately end up throwing away too much relevant information. In this paper we propose a practical approach to back-in-time debugging that keeps track of only the relevant past data. In contrast to other approaches, we keep object history information together with the regular objects in the application memory. Although seemingly counter-intuitive, this approach has the effect that past data that is not reachable from current application objects (and hence no longer relevant) is automatically garbage collected. We describe the technical details of our approach and present benchmarks demonstrating that memory consumption stays within practical bounds. Furthermore, since our approach works at the virtual machine level, the performance penalty is significantly lower than with other approaches.
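The garbage-collection effect can be mimicked at the application level. The paper's implementation works inside the virtual machine; this Python sketch, with invented names, only illustrates the idea that history stored alongside an object dies with the object:

```python
import gc
import weakref

class Tracked:
    """An object that keeps its own past states next to its current state."""
    def __init__(self, value):
        self.value = value
        self.history = []          # past states live alongside the object

    def set(self, value):
        self.history.append(self.value)
        self.value = value

obj = Tracked(1)
obj.set(2)
obj.set(3)
assert obj.history == [1, 2]       # past states reachable while obj is

ref = weakref.ref(obj)
del obj                            # the object becomes unreachable...
gc.collect()
assert ref() is None               # ...and its history is collected with it
```

No explicit pruning heuristic is needed: reachability from live application objects is exactly what decides which history survives.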
Abstract:
We report on our experiences with the Spy project, including implementation details and benchmark results. Spy is a re-implementation of the Squeak (i.e., Smalltalk-80) VM using the PyPy toolchain. The PyPy project allows code written in RPython, a subset of Python, to be translated to a multitude of different backends and architectures. During the translation, many aspects of the implementation can be independently tuned, such as the garbage collection algorithm or the threading implementation. In this way, a whole host of interpreters can be derived from one abstract interpreter definition. Spy aims to bring these benefits to Squeak, allowing for greater portability and, eventually, improved performance. The current Spy codebase is able to run a small set of benchmarks, on which it demonstrates performance superior to many similar Smalltalk VMs, though it still runs slower than Squeak itself. Spy was built from scratch over the course of a week during a joint Squeak-PyPy sprint in Bern last autumn.
Abstract:
AIM: To investigate the acute effects of stochastic resonance whole body vibration (SR-WBV) training to identify possible explanations for preventive effects against musculoskeletal disorders. METHODS: Twenty-three healthy, female students participated in this quasi-experimental pilot study. Acute physiological and psychological effects of SR-WBV training were examined using electromyography of the descending trapezius (TD) muscle, heart rate variability (HRV), different skin parameters (temperature, redness and blood flow) and self-report questionnaires. All subjects conducted a sham SR-WBV training at a low intensity (2 Hz with noise level 0) and a verum SR-WBV training at a higher intensity (6 Hz with noise level 4). They were tested before, during and after the training. Conclusions were drawn on the basis of analysis of variance. RESULTS: Twenty-three healthy, female students participated in this study (age = 22.4 ± 2.1 years; body mass index = 21.6 ± 2.2 kg/m2). Muscular activity of the TD and energy expenditure rose during verum SR-WBV compared to baseline and sham SR-WBV (all P < 0.05). Muscular relaxation after verum SR-WBV was higher than at baseline and after sham SR-WBV (all P < 0.05). During verum SR-WBV the levels of HRV were similar to those observed during sham SR-WBV. The same applies to most of the skin characteristics, while microcirculation of the skin of the middle back was higher during verum compared to sham SR-WBV (P < 0.001). Skin redness showed significant changes over the three measurement points only in the middle back area (P = 0.022). There was a significant rise from baseline to verum SR-WBV (0.86 ± 0.25 perfusion units; P = 0.008). The self-reported chronic pain grade indicators of pain, stiffness, well-being, and muscle relaxation showed a mixed pattern across conditions. Muscle and joint stiffness (P = 0.018) and muscular relaxation changed significantly from baseline across the SR-WBV conditions (P < 0.001).
Moreover, muscle relaxation after verum SR-WBV was higher than after sham SR-WBV (P < 0.05). CONCLUSION: Verum SR-WBV stimulated musculoskeletal activity in young healthy individuals while cardiovascular activation was low. Training of musculoskeletal capacity and immediate increase in musculoskeletal relaxation are potential mediators of pain reduction in preventive trials.
Abstract:
OBJECTIVES There is increasing evidence that cognitive failure may be used to screen for drivers at risk. Until now, most studies have relied on driving learners. This exploratory pilot study examines self-reported cognitive failure in driving beginners and errors during real driving as observed by driving instructors. METHODS Forty-two driving learners of 14 driving instructors filled out a work-related cognitive failure questionnaire. The driving instructors then observed driving errors during the next driving lesson. In a multiple linear regression analysis, driving errors were regressed on cognitive failure, with the number of driving lessons controlled as an estimator of driving experience. RESULTS Higher cognitive failure predicted more driving errors (p < .01) when age, gender and driving experience were controlled in the analysis. CONCLUSIONS Cognitive failure was significantly associated with observed driving errors. Systematic research on cognitive failure in driving beginners is recommended.
Abstract:
The ever-increasing popularity of apps stems from their ability to provide highly customized services to the user. The flip side is that, in order to provide such services, apps need access to very sensitive private information about the user. This leads to malicious apps that collect personal user information in the background and exploit it in various ways. Studies have shown that current app vetting processes, which are mainly restricted to install-time verification mechanisms, are incapable of detecting and preventing such attacks. We argue that the missing fundamental aspect here is a comprehensive and usable mobile privacy solution: one that not only protects the user's location information but also other equally sensitive user data such as the user's contacts and documents, and that is usable by the average user who does not understand or care about low-level technical details. To bridge this gap, we propose privacy metrics that quantify low-level app accesses in terms of privacy impact and transform them into high-level, user-understandable ratings. We also provide the design and architecture of our Privacy Panel app, which presents the computed ratings in a graphical, user-friendly format and allows the user to define policies based on them. Finally, experimental results are given to validate the scalability of the proposed solution.
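To make the metric idea concrete, here is a hypothetical sketch of such a mapping in Python. The resource weights, the 0-to-5 scale, and the function name are invented for illustration and are not the paper's actual metrics.

```python
# Illustrative per-resource privacy-impact weights (not from the paper).
WEIGHTS = {"location": 3.0, "contacts": 2.5, "documents": 2.0, "network": 0.5}

def privacy_rating(access_counts: dict) -> float:
    """Map low-level access counts to a 0 (invasive) .. 5 (private) score."""
    impact = sum(WEIGHTS.get(res, 1.0) * n for res, n in access_counts.items())
    return max(0.0, 5.0 - impact / 10.0)

# An app that frequently reads location and contacts scores near 0.
rating = privacy_rating({"location": 12, "contacts": 4})
```

A user then compares single numbers per app instead of raw access logs, which is the kind of high-level rating the Privacy Panel app would display.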
Abstract:
The human face is a vital component of our identity, and many people undergo medical aesthetic procedures in order to achieve an ideal or desired look. Communication between physician and patient is fundamental to understanding the patient's wishes and achieving the desired results. To date, most plastic surgeons rely on either "free-hand" 2D drawings on picture printouts or computerized picture morphing. Alternatively, hardware-dependent solutions allow facial shapes to be created and planned in 3D, but they are usually expensive or complex to handle. To offer a simple and hardware-independent solution, we propose a web-based application that uses three standard 2D pictures to create a 3D representation of the patient's face, on which facial aesthetic procedures such as filling, skin clearing or rejuvenation, and rhinoplasty are planned in 3D. The proposed application couples a set of well-established methods in a novel manner to optimize 3D reconstructions for clinical use. Face reconstructions performed with the application were evaluated by two plastic surgeons and also compared to ground truth data. Results showed the application can provide accurate 3D face representations for clinical use (within an average error of 2 mm) in less than 5 minutes.
Abstract:
The Business and Information Technologies (BIT) project uses empirical methods to reveal new insights into how modern IT impacts organizational structures and business practices. Due to its international scope, it allows for inter-country comparison of empirical results. Germany, represented by the European School of Management and Technology (ESMT) and the Institute of Information Systems at Humboldt-Universität zu Berlin, joined the BIT project in 2006. This report presents the results of the first survey conducted in Germany during November and December 2006. The key results are as follows:
• The most widely adopted technologies and systems in Germany are websites, wireless hardware and software, groupware/productivity tools, and enterprise resource planning (ERP) systems. The biggest potential for growth exists for collaboration and portal tools, content management systems, business process modelling, and business intelligence applications. A number of technological solutions have not yet been adopted by many organizations but also bear some potential, in particular identity management solutions, Radio Frequency Identification (RFID), biometrics, and third-party authentication and verification.
• IT security remains at the top of the agenda for most enterprises: security budgets have increased over the last three years.
• The workplace and work requirements are changing. IT is used to monitor employees' performance in Germany, though less heavily than in the United States (Karmarkar and Mangal, 2007). The demand for IT skills is increasing at all corporate levels. Executives are asking for more and better-structured information, which in turn triggers the appearance of new decision-making tools and online technologies on the market.
• The internal reorganization of companies in Germany is underway: organizations are becoming flatter, even though the trend is not as pronounced as in the United States (Karmarkar and Mangal, 2007), and the geographical scope of their operations is increasing. Modern IT plays an important role in enabling this development; for example, telecommuting, teleconferencing, and other web-based collaboration formats are becoming increasingly popular in the corporate context.
• The degree to which outsourcing is being pursued is quite limited, with little change expected. IT services, payroll, and market research are the most widely outsourced business functions. This corresponds to the results from other countries.
• Up to now, the adoption of e-business technologies has had a rather limited effect on marketing functions. Companies tend to extract synergies from traditional printed media and online advertising.
• The adoption of e-business has not yet had a major impact on marketing capabilities and strategy. Traditional methods of customer segmentation still dominate. The corporate identity of most organizations does not change significantly when going online.
• Online sales channels are mainly viewed as a complement to traditional distribution means.
• Technology adoption has caused production and organizational costs to decrease. However, the costs of technology acquisition and maintenance, as well as consultancy and internal communication costs, have increased.
Abstract:
Limited in motivation and cognitive ability to process the increasing amount of information in their Newsfeed, users apply heuristic processing to form their attitudes. Rather than extensively analysing the content, they increasingly rely on heuristic cues, such as the number of comments and likes as well as the level of relationship with the "poster", to process the incoming information. In this paper we explore what impact these heuristic cues have on users' affective and cognitive attitudes towards the posts in their Newsfeed. We conduct a survey based on a Facebook application that allows users to evaluate Newsfeed posts in real time. Applying two distinct panel-regression methods, we report robust results indicating a certain relationship primacy effect when users process information: only if the level of relationship with the "poster" is low is the impact of comments and likes on the attitude considered, whereby likes trigger positive evaluations, whereas comments trigger negative ones.
Abstract:
RESTful services have gained a lot of attention recently, even in the enterprise world, which is traditionally more web-service centric. Data-centric RESTful services, previously known mainly from web environments, have established themselves as a second paradigm complementing functional WSDL-based SOA. In the Internet of Things, and in particular when talking about sensor motes, the Constrained Application Protocol (CoAP) is currently in the focus of both research and industry. In the enterprise world, a protocol called OData (Open Data Protocol) is becoming the future RESTful data access standard. To integrate sensor motes seamlessly into enterprise networks, an embedded OData implementation on top of CoAP is desirable, as it does not require an intermediary gateway device. In this paper we introduce and evaluate such an embedded OData implementation. We evaluate the OData protocol in terms of performance and energy consumption, considering different data encodings, and compare it to a pure CoAP implementation. We were able to demonstrate that the additional resources needed for an OData/JSON implementation are reasonable when aiming for enterprise interoperability, where OData is suggested to solve both the semantic and technical interoperability problems we have today when connecting systems.
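As a rough illustration of the payload side, the sketch below builds an OData-style JSON entity such as a sensor mote might serve over CoAP. The field names only loosely follow OData's JSON conventions, and the service root, entity set, and sensor values are all invented.

```python
import json

def odata_entry(service_root: str, entity_set: str, key: str, props: dict) -> str:
    """Serialize one entity of `entity_set` as an OData-style JSON document."""
    entry = {
        # Metadata link telling clients how to interpret the entity.
        "odata.metadata": f"{service_root}/$metadata#{entity_set}/@Element",
        "ID": key,
        **props,
    }
    return json.dumps(entry)

# A hypothetical temperature reading served by a mote.
payload = odata_entry("coap://mote1/odata", "Sensors", "temp0",
                      {"Temperature": 21.5, "Unit": "Celsius"})
```

The extra bytes relative to a bare CoAP payload come from exactly this self-describing envelope, which is the overhead the paper's evaluation quantifies.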