879 results for Response time (computer systems)
Abstract:
In electronic commerce, systems development is based on two fundamental types of models: business models and process models. A business model is concerned with value exchanges among business partners, while a process model focuses on operational and procedural aspects of business communication. Thus, a business model defines the what in an e-commerce system, while a process model defines the how. Business process design can be facilitated and improved by a method for systematically moving from a business model to a process model. Such a method would provide support for traceability, evaluation of design alternatives, and a seamless transition from analysis to realization. This work proposes a unified framework that can be used as a basis for analyzing, interpreting and understanding the concepts associated with the different stages of e-commerce system development. In this thesis, we illustrate how UN/CEFACT’s recommended metamodels for business and process design can be analyzed, extended and then integrated into final solutions based on the proposed unified framework. As an application of the framework, we also demonstrate how process-modeling tasks can be facilitated in e-commerce system design. The proposed methodology is called BP3, which stands for Business Process Patterns Perspective. The BP3 methodology uses a question-answer interface to capture business requirements from designers. It is based on pre-defined process patterns, and the final solution is generated by applying the captured requirements, through a set of production rules, to complete the inter-process communication among these patterns.
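As a rough sketch of how such a question-answer interface might drive production rules over pre-defined process patterns, consider the following minimal Python illustration; the questions, pattern names and rule structure are assumptions made for exposition, not the actual BP3 rule base.

# Hypothetical sketch: a question-answer interface captures business
# requirements, and production rules map the answers onto pre-defined
# process patterns. All names and rules here are illustrative.

ANSWERS = {"payment_timing": "before delivery"}  # canned designer input

def ask(question, key):
    """Capture a business requirement from the designer."""
    return ANSWERS[key]

# Pre-defined process patterns, keyed by name.
PATTERNS = {
    "prepaid_delivery": ["request", "pay", "deliver"],
    "postpaid_delivery": ["request", "deliver", "invoice", "pay"],
}

# Production rules: condition over captured answers -> pattern to use.
RULES = [
    (lambda a: a["payment_timing"] == "before delivery", "prepaid_delivery"),
    (lambda a: a["payment_timing"] == "after delivery", "postpaid_delivery"),
]

def generate_process():
    answers = {"payment_timing": ask("When is payment made?", "payment_timing")}
    for condition, pattern in RULES:
        if condition(answers):
            return PATTERNS[pattern]
    raise ValueError("no pattern matches the captured requirements")

print(generate_process())  # ['request', 'pay', 'deliver']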
Abstract:
The term "Brain Imaging" identi�es a set of techniques to analyze the structure and/or functional behavior of the brain in normal and/or pathological situations. These techniques are largely used in the study of brain activity. In addition to clinical usage, analysis of brain activity is gaining popularity in others recent �fields, i.e. Brain Computer Interfaces (BCI) and the study of cognitive processes. In this context, usage of classical solutions (e.g. f MRI, PET-CT) could be unfeasible, due to their low temporal resolution, high cost and limited portability. For these reasons alternative low cost techniques are object of research, typically based on simple recording hardware and on intensive data elaboration process. Typical examples are ElectroEncephaloGraphy (EEG) and Electrical Impedance Tomography (EIT), where electric potential at the patient's scalp is recorded by high impedance electrodes. In EEG potentials are directly generated from neuronal activity, while in EIT by the injection of small currents at the scalp. To retrieve meaningful insights on brain activity from measurements, EIT and EEG relies on detailed knowledge of the underlying electrical properties of the body. This is obtained from numerical models of the electric �field distribution therein. The inhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a tradeo�ff between physical accuracy and technical feasibility, which currently severely limits the capabilities of these techniques. Moreover elaboration of data recorded requires usage of regularization techniques computationally intensive, which influences the application with heavy temporal constraints (such as BCI). This work focuses on the parallel implementation of a work-flow for EEG and EIT data processing. The resulting software is accelerated using multi-core GPUs, in order to provide solution in reasonable times and address requirements of real-time BCI systems, without over-simplifying the complexity and accuracy of the head models.
Abstract:
The combination of magnetic nanoparticles (NP) with temperature-sensitive polymers leads to new composite materials with interesting properties that can be exploited in many ways. Possible fields of application include magnetic separation, the selective release of drugs, and the construction of sensors and actuators. Hydrogels, for example, can serve as the polymer component. Since swelling is a diffusion-controlled process, the speed at which the degree of swelling responds to external stimuli can be increased by reducing the hydrogel volume. In this work, a hydrogel crosslinkable by ultraviolet light was prepared from N-isopropylacrylamide, methacrylic acid and the crosslinker 4-benzoylphenyl methacrylate (PNIPAAm hydrogel) and combined with magnetic nanoparticles of magnetite (Fe3O4). The temperature and pH dependence of the degree of swelling was investigated with a view to use as nanomechanical cantilever sensors (NCS). In addition, the material was characterized by surface plasmon resonance and optical waveguide mode spectroscopy (SPR/OWS). The values obtained for the pKa and the lower critical solution temperature (LCST) agreed with known literature values. It was shown that stronger crosslinking leads to a lower LCST. The NCS results also indicated a skin effect during the heating of more highly crosslinked polymers. The magnetite nanoparticles were synthesized from iron(II) acetylacetonate via a high-temperature reaction. By varying the reaction temperature, the size of the nanoparticles could be tuned between 3.5 and 20 nm with a size distribution of 0.5-2.5 nm. Suitable surface functionalization stabilized the particles in water, following two strategies: functionalization with a silica shell, and the use of citric acid as a surfactant. Water stability is desirable above all for biological applications. The magnetic particles were characterized by transmission electron microscopy (TEM) and a superconducting quantum interference device (SQUID); a size dependence of the magnetic properties and superparamagnetic behavior were observed. The heat generation of the magnetic nanoparticles in an AC magnetic field was also investigated. The two components were combined into a ferrogel by mixing benzophenone-functionalized magnetic nanoparticles with the polymer. Thin films were produced by spin-coating and examined with regard to their behavior in a magnetic field, where slight plastic behavior was observed. The experimental results were then compared with theoretically calculated expected values and related to the differing values reported for three-dimensional ferrogels.
Abstract:
The dissertation titled "Driver Safety in Far-side and Far-oblique Crashes" presents a novel approach to assessing vehicle cockpit safety by integrating Human Factors and Applied Mechanics. The methodology is aimed at improving safety in compact mobile workspaces such as patrol vehicle cockpits. A statistical analysis of the State of Michigan's traffic crash data, assessing the factors that affect the risk of severe driver injuries, showed that the risk was greater for unrestrained drivers (OR=3.38, p<0.0001) and for front and far-side crashes without seatbelts (OR=8.0 and 23.0 respectively, p<0.005). The statistics also showed that near-side and far-side crashes pose a similar threat to driver injury severity. A Human Factors survey was conducted to assess Human-Machine/Human-Computer Interaction aspects in patrol vehicle cockpits. Results showed that tasks requiring manual operation, especially laptop usage, demand more attention and potentially cause more distraction. A vehicle survey conducted to evaluate ergonomics-related issues revealed that some of the equipment was in airbag deployment zones. In addition, experiments were conducted to assess the effects on driver distraction of changing the position of in-car accessories. A driving simulator study was conducted to mimic HMI/HCI in a patrol vehicle cockpit (20 subjects, average driving experience = 5.35 years, s.d. = 1.8). The mounting locations of manual tasks did not result in a significant change in response times. Visual displays resulted in response times of less than 1.5 s. The manual task was equally distracting regardless of mounting position (average response time 15 s). Average speeds and lane deviations did not show any significant effects. Data from 13 full-scale sled tests conducted to simulate far-side impacts at 70 PDOF and 40 PDOF were used to analyze head injuries and HIC/AIS values. Accelerations generated by the vehicle deceleration alone were high enough to cause AIS 3 - AIS 6 injuries. Pretensioners could mitigate injuries only in 40 PDOF (oblique) impacts and were ineffective in 70 PDOF impacts. Seat belts were ineffective in protecting the driver's head from injuries: the head would contact the laptop during a far-oblique (40 PDOF) crash and the far-side door during an angle-type (70 PDOF) crash. Finite element analysis of the head-laptop impact interaction showed that the contact velocity was the most crucial factor in causing a severe (and potentially fatal) head injury. The results indicate that no equipment should be mounted in the driver's trajectory envelope, leaving a very narrow band of space in patrol vehicles where manual-task equipment can be installed both safely and ergonomically. In the case of contact, the material stiffness and damping properties play a very significant role in determining the injury outcome. Future work may improve the interior's material properties to better absorb and dissipate the kinetic energy of the head. The design of seat belts and pretensioners is another essential aspect for further improvement.
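For reference, the Head Injury Criterion mentioned above is a windowed functional of the resultant head acceleration; below is a minimal sketch of an HIC15 computation on a synthetic pulse (the trace, sampling and window length are illustrative assumptions, not the sled-test data).

import numpy as np

def hic(accel_g, dt, max_window=0.015):
    """Head Injury Criterion over an acceleration trace.

    accel_g: resultant head acceleration samples in g's
    dt: sample spacing in seconds
    max_window: maximum (t2 - t1) window; 15 ms gives HIC15
    """
    # Cumulative integral of a(t) dt, for fast window averages.
    cum = np.concatenate(([0.0], np.cumsum(accel_g) * dt))
    n = len(accel_g)
    max_steps = int(round(max_window / dt))
    best = 0.0
    for i in range(n):
        for j in range(i + 1, min(i + max_steps, n) + 1):
            t = (j - i) * dt
            avg = (cum[j] - cum[i]) / t
            best = max(best, t * avg ** 2.5)
    return best

# Synthetic half-sine pulse: ~60 g peak over 20 ms (illustrative only).
dt = 1e-4
t = np.arange(0, 0.02, dt)
pulse = 60 * np.sin(np.pi * t / 0.02)
print("HIC15 ~", round(hic(pulse, dt), 1))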
Abstract:
File system security is fundamental to the security of UNIX and Linux systems, since in these systems almost everything takes the form of a file. To protect system files and other sensitive user files from unauthorized access, different organizations choose and deploy particular security schemes in their computer systems. A file system security model provides a formal description of a protection system. Each security model is associated with specified security policies which focus on one or more of the security principles: confidentiality, integrity and availability. A security policy is not only about “who” can access an object, but also about “how” a subject can access an object. To enforce the security policies, each access request is checked against the specified policies to decide whether it is allowed or rejected. The current protection schemes in UNIX/Linux systems focus on access control. Besides the basic access control scheme of the system itself, which includes permission bits, the setuid/seteuid mechanisms and the root account, other protection models, such as Capabilities, Domain Type Enforcement (DTE) and Role-Based Access Control (RBAC), are supported and used in certain organizations. These models protect the confidentiality of the data directly; the integrity of the data is protected indirectly, by only allowing trusted users to operate on the objects. Their access control decisions depend either on the identity of the user or on the attributes of the process the user can execute, together with the attributes of the objects. Adoption of these sophisticated models has been slow, likely due to the enormous complexity of specifying controls over a large file system and the need for system administrators to learn a new paradigm for file protection. We propose a new security model: the file system firewall. It adapts the familiar network firewall model, used to control the data that flows between networked computers, to file system protection. This model can base access control decisions on any system-generated attributes of the access requests, e.g. the time of day. Access control decisions are not tied to a single entity, such as the account in traditional discretionary access control or the domain in DTE; in the file system firewall, access decisions are made on situations involving multiple entities. A situation is programmable with predicates on the attributes of the subject, the object and the system, and the file system firewall specifies the appropriate actions for these situations. We implemented a prototype of the file system firewall on SUSE Linux. Preliminary performance tests on the prototype indicate that the runtime overhead is acceptable. We compared the file system firewall with Type Enforcement (TE) in SELinux to show that the firewall model can accommodate many other access control models. Finally, we show the ease of use of the firewall model: when the firewall is restricted to a specified part of the system, all other resources are unaffected, which enables a relatively smooth adoption. This fact, and the model's familiarity to system administrators, will facilitate adoption and correct use. A user study we conducted on traditional UNIX access control, SELinux and the file system firewall confirmed this: beginner users found the firewall easier to use and faster to learn than the traditional UNIX access control scheme and SELinux.
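A schematic sketch of such situation-based decisions appears below; the rule syntax, attribute names and default policy are assumptions made for illustration, not the prototype's actual interface.

from datetime import datetime, time

# Hypothetical sketch of situation-based access decisions: each rule
# pairs a predicate over (subject, object, system) attributes with an
# action, in the spirit of a firewall rule table. Names and fields are
# illustrative assumptions, not the SUSE Linux prototype's interface.

RULES = [
    # Deny writes to /etc outside business hours, regardless of user.
    (lambda s, o, sys: o["path"].startswith("/etc")
        and s["op"] == "write"
        and not time(9) <= sys["now"].time() <= time(17),
     "deny"),
    # Allow everything else (default-allow, for the sketch only).
    (lambda s, o, sys: True, "allow"),
]

def decide(subject, obj, system):
    """First matching rule wins, as in a typical firewall."""
    for predicate, action in RULES:
        if predicate(subject, obj, system):
            return action
    return "deny"

print(decide({"uid": 1000, "op": "write"},
             {"path": "/etc/passwd"},
             {"now": datetime(2024, 1, 1, 22, 0)}))  # deny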
Abstract:
Background Patients' health-related quality of life (HRQoL) has rarely been systematically monitored in general practice. Electronic tools and practice training might facilitate the routine application of HRQoL questionnaires. Thorough piloting of innovative procedures is strongly recommended before the conduction of large-scale studies. Therefore, we aimed to assess i) the feasibility and acceptance of HRQoL assessment using tablet computers in general practice, ii) the perceived practical utility of HRQoL results and iii) possible barriers hindering wider application of this approach. Methods Two HRQoL questionnaires (St. George's Respiratory Questionnaire SGRQ and EORTC QLQ-C30) were electronically presented on portable tablet computers. Wireless network (WLAN) integration into the practice computer systems of 14 German general practices with varying infrastructure allowed automatic data exchange and the generation of a printout or a PDF file. General practitioners (GPs) and practice assistants were trained in a 1-hour course, after which they could invite patients with chronic diseases to fill in the electronic questionnaire during their waiting time. We surveyed patients, practice assistants and GPs regarding their acceptance of this tool in semi-structured telephone interviews. The number of assessments, HRQoL results and interview responses were analysed using quantitative and qualitative methods. Results Over the course of 1 year, 523 patients filled in the electronic questionnaires (1–5 times; 664 total assessments). On average, results showed specific HRQoL impairments, e.g. with respect to fatigue, pain and sleep disturbances. The number of electronic assessments varied substantially between practices. A total of 280 patients, 27 practice assistants and 17 GPs participated in the telephone interviews. Almost all GPs (16/17 = 94%; 95% CI = 73–99%), most practice assistants (19/27 = 70%; 95% CI = 50–86%) and the majority of patients (240/280 = 86%; 95% CI = 82–91%) indicated that they would welcome the use of electronic HRQoL questionnaires in the future. GPs mentioned the availability of local health services (e.g. supportive care, physiotherapy) (mean: 9.4 ± 1.0 SD; scale: 1–10), sufficient extra time (8.9 ± 1.5) and easy interpretation of HRQoL results (8.6 ± 1.6) as the most important prerequisites for their use. They believed HRQoL assessment facilitated both communication and follow-up of patients' conditions. Practice assistants emphasised that this process demonstrated an extra commitment to patient-centred care; patients viewed it as a tool that contributed to the physicians' understanding of their personal condition and circumstances. Conclusion This pilot study indicates that electronic HRQoL assessment is technically feasible in general practices. It can provide clinically significant information, which can be used either in the consultation for routine care or for research purposes. While GPs, practice assistants and patients were generally positive about the electronic procedure, several barriers (e.g. practices' lack of time and routine in HRQoL assessment) need to be overcome to enable broader application of electronic questionnaires in everyday medical practice.
Abstract:
People often use tools to search for information. In order to improve the quality of an information search, it is important to understand how internal information, which is stored in the user's mind, and external information, represented by the interface of tools, interact with each other. How information is distributed between internal and external representations significantly affects information search performance. However, few studies have examined the relationship between types of interface and types of search task in the context of information search. For a distributed information search task, how data are distributed, represented, and formatted significantly affects user search performance in terms of response time and accuracy. Guided by UFuRT (User, Function, Representation, Task), a human-centered process, I propose a search model and a task taxonomy. The model defines its relationship with other existing information models, and the taxonomy clarifies the legitimate operations for each type of search task over relational data. Based on the model and taxonomy, I have also developed prototype interfaces for the search tasks of relational data, which were used in the experiments. The experiments described in this study are of a within-subject design with a sample of 24 participants recruited from the graduate schools located in the Texas Medical Center. Participants performed one-dimensional nominal search tasks over nominal, ordinal, and ratio displays, and one-dimensional nominal, ordinal, interval, and ratio search tasks over table and graph displays. Participants also performed the same task and display combinations for two-dimensional searches. Distributed cognition theory was adopted as a theoretical framework for analyzing and predicting search performance over relational data. It was shown that the representation dimensions and data scales, as well as the search task types, are the main factors determining search efficiency and effectiveness. In particular, the more external representations used, the better the search task performance, and the results suggest that ideal search performance occurs when the question type and the corresponding data scale representation match. The implications of the study lie in contributing to the effective design of search interfaces for relational data, especially laboratory results, which are often used in healthcare activities.
Abstract:
We tested the predictions of Attentional Control Theory (ACT) by examining how anxiety affects visual search strategies, performance efficiency, and performance effectiveness using a dynamic, temporally constrained anticipation task. Higher and lower skilled players viewed soccer situations under two task constraints (near vs. far situation) and were tested under high (HA) and low (LA) anxiety conditions. Response accuracy (effectiveness) as well as response time, perceived mental effort, and eye movements (all efficiency) were recorded. A significant increase in anxiety was evidenced by higher state anxiety ratings on the MRF-L scale. Increased anxiety led to decreased performance efficiency: response times and mental effort increased for both skill groups, whereas response accuracy did not differ. Anxiety also influenced search strategies, with higher skilled players showing a decrease in the number of fixation locations for far situations under the HA compared with the LA condition, relative to lower skilled players. The findings provide support for ACT, with anxiety impairing processing efficiency and, potentially, top-down attentional control across different task constraints.
Abstract:
Technological advances in recent decades have transformed the external resources of Vocational Counseling, Occupational Information and the assessment of clients. Most computer systems follow a behaviorist-cognitive approach. However, the use of vocational counseling software is not exclusive to one conceptual approach. Computers are introduced in education from primary school onwards, and counselors and other educators are expected to use these systems. The attitude of counselors ranges from enthusiastic acceptance to complete refusal. Many counselors fear that computers will replace them; an underlying theory holds that counseling is based on the counselor-client interaction, and that a computer-client interaction cannot be considered vocational counseling. Counseling has five basic aims: prevention, assistance, education and development, service of diverse groups, and research. The most relevant trends in computer-based counseling are: computer-based tests and questionnaires, adaptive development, computerized information, vocational counseling systems, and research. The basic aims and the potential role of computers in achieving them are discussed. Today's vocational counselors can use computer technology to link the past of our profession to its promising future. In view of these premises, we have developed two computer systems that assist the vocational counseling process: "Professional Interests Questionnaire, Computer Version" and "Computer-based System of Vocational Counseling".
Abstract:
The present dataset contains source data for Figure 5b from Schilling et al., 2009. Cell fate decisions are regulated by the coordinated activation of signalling pathways such as the extracellular signal-regulated kinase (ERK) cascade, but the contributions of individual kinase isoforms are mostly unknown. The authors combined quantitative data from erythropoietin-induced pathway activation in primary erythroid progenitor (colony-forming unit erythroid stage, CFU-E) cells with mathematical modelling in order to predict, and experimentally confirm, a distributive ERK phosphorylation mechanism in CFU-E cells. The authors found evidence that double-phosphorylated ERK1 attenuates proliferation beyond a certain activation level, whereas activated ERK2 enhances proliferation with saturation kinetics. They show integrated responses of double-phosphorylated ERK1 and ERK2, calculated for different Epo concentrations, for the original model as well as for models with elevated ERK1 or ERK2 levels.
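To illustrate what a distributive mechanism means kinetically (the kinase releases the substrate between the two phosphorylation steps, so full activation requires two separate encounters), here is a generic mass-action sketch; the rate constants and concentrations are arbitrary, and this is not the authors' calibrated CFU-E model.

import numpy as np
from scipy.integrate import solve_ivp

# Generic distributive dual phosphorylation: a kinase K phosphorylates
# substrate S in two separate encounters (S -> Sp -> Spp), releasing
# the substrate in between, so Sp transiently accumulates. Rates k1,
# k2 are arbitrary illustrative values, not the paper's parameters.

k1, k2 = 1.0, 0.5   # effective rate constants for each step

def rhs(t, y):
    S, Sp, Spp, K = y
    v1 = k1 * K * S    # first phosphorylation
    v2 = k2 * K * Sp   # second, requiring a new kinase encounter
    return [-v1, v1 - v2, v2, 0.0]

sol = solve_ivp(rhs, (0, 10), [1.0, 0.0, 0.0, 0.1])
S, Sp, Spp, _ = sol.y
print("final mono-/double-phosphorylated fractions:",
      round(Sp[-1], 3), round(Spp[-1], 3))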
Abstract:
Infrastructure as a Service clouds are a flexible and fast way to obtain (virtual) resources as demand varies. Grids, on the other hand, are middleware platforms able to combine resources from different administrative domains for task execution. Clouds can be used by grids as providers of devices such as virtual machines, so that grids use only the resources they need. But this requires grids to be able to decide when to allocate and release those resources. Here we introduce, and analyze through simulations, an economic mechanism (a) to set resource prices and (b) to decide when to scale resources depending on users’ demand. The system places a strong emphasis on fairness, so that no user hinders the execution of other users’ tasks by acquiring too many resources. Our simulator is based on the well-known GridSim software for grid simulation, which we extended to simulate infrastructure clouds. The results show how the proposed system successfully adapts the amount of allocated resources to the demand, while at the same time ensuring that resources are fairly shared among users.
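A toy sketch of such a demand-driven pricing and scaling loop is shown below; the price rule, thresholds and bookkeeping are assumptions for illustration, not the paper's actual mechanism.

# Toy demand-driven pricing/scaling loop in the spirit described:
# price rises with utilization, discouraging any single user from
# hoarding resources, and capacity scales with persistent demand.
# The price rule and thresholds are illustrative assumptions.

BASE_PRICE = 1.0

def price(used, capacity):
    """Price per VM grows as spare capacity shrinks."""
    return BASE_PRICE * (1 + 2 * used / capacity)

def step(pending_tasks, used, capacity, budgets):
    used = max(0, used - 1)  # one running task finishes per step (toy)
    # Allocate while users can afford the current price.
    for user, budget in sorted(budgets.items(), key=lambda kv: -kv[1]):
        p = price(used, capacity)
        if pending_tasks and used < capacity and budget >= p:
            budgets[user] -= p
            pending_tasks -= 1
            used += 1
    # Scale out if demand still exceeds capacity, in if mostly idle.
    if pending_tasks and used == capacity:
        capacity += 1
    elif used < capacity // 2:
        capacity = max(1, capacity - 1)
    return pending_tasks, used, capacity

tasks, used, cap = 10, 0, 4
budgets = {"alice": 20.0, "bob": 5.0}
for _ in range(8):
    tasks, used, cap = step(tasks, used, cap, budgets)
print(tasks, used, cap, budgets)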
Abstract:
The term "Logic Programming" refers to a variety of computer languages and execution models which are based on the traditional concept of Symbolic Logic. The expressive power of these languages offers promise to be of great assistance in facing the programming challenges of present and future symbolic processing applications in Artificial Intelligence, Knowledge-based systems, and many other areas of computing. The sequential execution speed of logic programs has been greatly improved since the advent of the first interpreters. However, higher inference speeds are still required in order to meet the demands of applications such as those contemplated for next generation computer systems. The execution of logic programs in parallel is currently considered a promising strategy for attaining such inference speeds. Logic Programming in turn appears as a suitable programming paradigm for parallel architectures because of the many opportunities for parallel execution present in the implementation of logic programs. This dissertation presents an efficient parallel execution model for logic programs. The model is described from the source language level down to an "Abstract Machine" level suitable for direct implementation on existing parallel systems or for the design of special purpose parallel architectures. Few assumptions are made at the source language level and therefore the techniques developed and the general Abstract Machine design are applicable to a variety of logic (and also functional) languages. These techniques offer efficient solutions to several areas of parallel Logic Programming implementation previously considered problematic or a source of considerable overhead, such as the detection and handling of variable binding conflicts in AND-Parallelism, the specification of control and management of the execution tree, the treatment of distributed backtracking, and goal scheduling and memory management issues, etc. A parallel Abstract Machine design is offered, specifying data areas, operation, and a suitable instruction set. This design is based on extending to a parallel environment the techniques introduced by the Warren Abstract Machine, which have already made very fast and space efficient sequential systems a reality. Therefore, the model herein presented is capable of retaining sequential execution speed similar to that of high performance sequential systems, while extracting additional gains in speed by efficiently implementing parallel execution. These claims are supported by simulations of the Abstract Machine on sample programs.
Abstract:
Future high-quality consumer electronics will contain a number of applications running in a highly dynamic environment, and their execution will need to be efficiently arbitrated by the underlying platform software. The multimedia applications that currently execute in similar contexts face frequent run-time variations in their resource demands, originating in the greedy nature of multimedia processing itself. Changes in resource demands are triggered by numerous causes (e.g. a switch in the input media compression format). Such situations require real-time adaptation mechanisms to adjust system operation to the new requirements, and this must be done seamlessly to preserve the user experience. One solution for efficiently managing application execution is to apply quality-of-service resource management techniques, based on assigning resource contracts to applications and enforcing them. Most resource management solutions provide temporal isolation by enforcing resource assignments and avoiding any resource overruns. However, this clearly limits cost-effective resource usage. This paper presents a simple priority assignment scheme based on uniform priority bands that allows greedy multimedia tasks to incur safe overruns, which increase resource usage without threatening the timely execution of non-overrunning tasks. Experimental results show that the proposed priority assignment scheme, in combination with a resource accounting mechanism, preserves timely multimedia execution and delivery, achieves more cost-effective processor usage, and guarantees the execution isolation of non-overrunning tasks.
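A schematic sketch of the uniform priority band idea is given below; the band layout and demotion policy are assumptions drawn from the description above, not the paper's exact scheme. A task runs at its normal-band priority while within its resource contract and drops into a uniformly lower overrun band once its budget is spent, so overruns consume only slack.

from dataclasses import dataclass

# Schematic sketch of priority bands with safe overruns: a task that
# exhausts its budgeted contract is demoted from the normal band into
# a uniformly lower overrun band, so it can keep running on slack but
# can never preempt tasks still within their budgets.

NORMAL_BAND = 100   # priorities 100..199: tasks within budget
OVERRUN_BAND = 0    # priorities 0..99: tasks that exhausted budget

@dataclass
class Task:
    name: str
    base_priority: int   # 0..99 position within its band
    budget_ms: int       # contracted resource budget per period
    used_ms: int = 0     # resource accounting for this period

    def effective_priority(self):
        band = NORMAL_BAND if self.used_ms < self.budget_ms else OVERRUN_BAND
        return band + self.base_priority

def pick_next(tasks):
    """Dispatch the ready task with the highest effective priority."""
    return max(tasks, key=Task.effective_priority)

decoder = Task("decoder", base_priority=50, budget_ms=8, used_ms=10)  # overran
ui = Task("ui", base_priority=10, budget_ms=4, used_ms=2)             # in budget
print(pick_next([decoder, ui]).name)  # ui: the overrunning decoder is demoted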