887 results for Computer Algebra Systems (CAS)


Relevance: 30.00%

Abstract:

This new and general method, here called overflow current switching, allows a fast, continuous, and smooth transition between scales in wide-range current measurement systems such as electrometers. This is achieved, in a hydraulic analogy, by diverting only the overflow current, such that no slow element is forced to change its state during the switching. As a result, this approach practically eliminates the long dead time in low-current (picoampere) switching. The current is measured by a composition of n adjacent linear scales, like a segmented ruler, covering a wide range much as a logarithmic scale does. A linear wide-range system based on this technique assures fast and continuous measurement over the entire range, without blind regions during transitions, while still holding suitable accuracy for many applications. A full mathematical development of the method is given. Several realistic computer simulations demonstrated the viability of the technique.
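
As a rough illustration of the segmented-ruler idea (not the authors' circuit), the sketch below maps a measured current onto a composition of adjacent linear scales; the scale count, base full-scale value, and ratio are invented parameters:

```python
# Illustrative sketch: reading a current on a composition of n adjacent
# linear scales, like a segmented ruler. Parameters are assumptions.

def read_segmented(current_a, n_scales=8, base_full_scale=1e-12, ratio=10.0):
    """Pick the lowest scale that can hold the current; return (index, fraction of span)."""
    for i in range(n_scales):
        full_scale = base_full_scale * ratio ** i
        if current_a <= full_scale or i == n_scales - 1:
            return i, current_a / full_scale

# Example: 3.7 pA lands on scale 1 (10 pA full scale) at 37 % of its span.
print(read_segmented(3.7e-12))
```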

Relevance: 30.00%

Abstract:

Breakthrough advances in microprocessor technology and efficient power management have altered the course of processor development with the emergence of multi-core processor technology, which delivers a higher level of processing capability. Many-core technology has boosted the computing power provided by clusters of workstations or SMPs, offering large computational power at an affordable cost using solely commodity components. Different implementations of message-passing libraries and system software (including operating systems) are installed in such cluster and multi-cluster computing systems. To guarantee the correct execution of a message-passing parallel application in a computing environment other than the one for which it was originally developed, a review of the application code is needed. In this paper, a hybrid communication interfacing strategy is proposed to execute a parallel application on a group of computing nodes belonging to different clusters or multi-clusters (computing systems that may be running different operating systems and MPI implementations), interconnected with public or private IP addresses, and responding interchangeably to user execution requests. Experimental results demonstrate the feasibility and effectiveness of the proposed strategy through the execution of benchmark parallel applications.
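
The abstract does not detail the interfacing mechanism itself; the toy sketch below only illustrates the general idea of bridging message traffic between clusters through publicly reachable gateways. Host names and ports are hypothetical, and the paper's strategy interposes between MPI implementations rather than relaying raw sockets:

```python
# Toy sketch of bridging traffic between two clusters through gateways
# with public addresses; hosts and ports below are hypothetical.
import socket
import threading

def relay(listen_port, target_host, target_port):
    """Accept one connection on listen_port and pipe its bytes to target."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", listen_port))
    srv.listen(1)
    src, _ = srv.accept()
    dst = socket.create_connection((target_host, target_port))
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)  # forward the message payload unchanged
    src.close(); dst.close(); srv.close()

# Cluster A's gateway forwards node traffic to cluster B's gateway:
# threading.Thread(target=relay, args=(5000, "gateway-b.example.org", 5000)).start()
```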

Relevance: 30.00%

Abstract:

In this article we propose an efficient and accurate method for fault location in underground distribution systems by means of an Optimum-Path Forest (OPF) classifier. We applied the time-domain reflectometry method for signal acquisition, and the acquired signals were then analyzed by OPF and several other well-known pattern recognition techniques. The results indicated that OPF and support vector machines outperformed artificial neural networks and a Bayesian classifier, but OPF was much more efficient than all other classifiers in training, and the second fastest in classification.
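
OPF is not part of mainstream libraries such as scikit-learn, so the sketch below reproduces only the comparison protocol with the baseline classifiers on synthetic stand-in features; the data, labels, and feature dimensions are invented:

```python
# Sketch of the classifier comparison on synthetic stand-in data;
# OPF itself would require a dedicated implementation and is omitted.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 32))    # stand-in for features of TDR signals
y = rng.integers(0, 3, size=600)  # stand-in fault-location classes
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for clf in (SVC(), MLPClassifier(max_iter=500), GaussianNB()):
    clf.fit(Xtr, ytr)
    print(type(clf).__name__, clf.score(Xte, yte))
```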

Relevance: 30.00%

Abstract:

Current scientific applications produce large amounts of data, and the processing, handling and analysis of such data require large-scale computing infrastructures such as clusters and grids. In this area, studies aim at improving the performance of data-intensive applications by optimizing data accesses. To achieve this goal, distributed storage systems have adopted techniques of data replication, migration, distribution, and access parallelism. However, the main drawback of those studies is that they do not take application behavior into account when performing data access optimization. This limitation motivated this paper, which applies strategies to support the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish this goal, the approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. Based on these properties, the approach selects modeling techniques to represent the series and perform predictions, which are later used to optimize data access operations. This new approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that this new approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
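
A minimal sketch of the prediction step, assuming a least-squares autoregressive model as a stand-in for the paper's property-driven model selection; the series values are invented:

```python
# Treat data accesses as a time series, fit a cheap model online, and
# predict the next access volume so data can be staged in advance.
import numpy as np

def ar_fit_predict(series, order=2):
    """Fit an AR(order) model by least squares and predict the next value."""
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    y = series[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return series[-order:] @ coef

history = np.array([10., 12., 15., 14., 18., 21., 20., 24.])  # invented volumes
print("predicted next access volume:", ar_fit_predict(history))
```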

Relevance: 30.00%

Abstract:

The complexity of power systems has increased in recent years due to the operation of existing transmission lines closer to their limits, using flexible AC transmission system (FACTS) devices, and due to the increased penetration of new types of generators that have more intermittent characteristics and lower inertial response, such as wind generators. This changing nature of a power system has considerable effect on its dynamic behavior, resulting in power swings, dynamic interactions between different power system devices, and less synchronized coupling. This paper presents analyses of this changing nature of power systems and their dynamic behavior in order to identify critical issues that limit the large-scale integration of wind generators and FACTS devices. In addition, this paper addresses some general concerns about high compensation levels in different grid topologies. The studies in this paper are conducted on the New England and New York power system model under both small and large disturbances. From the analyses, it can be concluded that high compensation can reduce the security limits under certain operating conditions, and that the modes related to operating slip and shaft stiffness are critical, as they may limit the large-scale integration of wind generation.
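
As a generic illustration of the kind of small-signal analysis such studies rely on (not the New England/New York model itself), the sketch below linearizes a single-machine swing equation and checks modal damping from eigenvalues; M, D and Ks are illustrative per-unit values:

```python
# Linearized swing equation  M*delta'' + D*delta' + Ks*delta = 0  in
# state-space form; eigenvalues with positive real part flag an unstable mode.
import numpy as np

M, D, Ks = 6.0, 0.8, 1.5           # inertia, damping, synchronizing torque (assumed)
A = np.array([[0.0, 1.0],
              [-Ks / M, -D / M]])  # states: [rotor angle dev., speed dev.]
modes = np.linalg.eigvals(A)
print("modes:", modes)
print("stable:", all(m.real < 0 for m in modes))
```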

Relevance: 30.00%

Abstract:

Background: Recent advances in medical and biological technology have stimulated the development of new testing systems that provide huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. Results: This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent and validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control-flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we join two important aspects: 1) process scalability, achieved through the relational database implementation, and 2) correctness of processes, ensured by process algebra. Furthermore, the software allows end users to define genetic tests without requiring any knowledge of business process notation or process algebra. Conclusions: This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have demonstrated the feasibility and shown the usability benefits of a rigorous approach that is able to specify, validate, and perform genetic testing through easy end-user interfaces.
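
A toy sketch of ACP-style control flow (sequential composition and a guarded stand-in for alternative composition) over hypothetical laboratory tasks; the real system additionally persists such plans in a relational database, which is not shown:

```python
# Tiny interpreter for process plans built from ACP-like operators;
# task names are invented for illustration.

def seq(*procs):                  # p . q : run in order
    def run(log):
        for p in procs:
            p(log)
    return run

def alt(cond, p, q):              # guarded stand-in for p + q : choose one branch
    return lambda log: (p if cond(log) else q)(log)

def task(name):                   # atomic laboratory action
    def run(log):
        log.append(name)
    return run

plan = seq(task("extract_dna"),
           alt(lambda log: "extract_dna" in log,
               task("sequence_exon"), task("mlpa_test")),
           task("report_result"))
trace = []
plan(trace)
print(trace)   # ['extract_dna', 'sequence_exon', 'report_result']
```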

Relevance: 30.00%

Abstract:

In electronic commerce, systems development is based on two fundamental types of models: business models and process models. A business model is concerned with value exchanges among business partners, while a process model focuses on operational and procedural aspects of business communication. Thus, a business model defines the what in an e-commerce system, while a process model defines the how. Business process design can be facilitated and improved by a method for systematically moving from a business model to a process model. Such a method would provide support for traceability, evaluation of design alternatives, and a seamless transition from analysis to realization. This work proposes a unified framework that can be used as a basis to analyze, interpret and understand the different concepts associated with the different stages of e-commerce system development. In this thesis, we illustrate how UN/CEFACT's recommended metamodels for business and process design can be analyzed, extended and then integrated into final solutions based on the proposed unified framework. As an application of the framework, we also demonstrate how process-modeling tasks can be facilitated in e-commerce system design. The proposed methodology, called BP3 (for Business Process Patterns Perspective), uses a question-answer interface to capture business requirements from designers. It is based on pre-defined process patterns, and the final solution is generated by applying the captured business requirements, by means of a set of production rules, to complete the inter-process communication among these patterns.
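
A hypothetical sketch of the question-answer idea: designer answers drive production rules that select pre-defined process patterns. The answers, rule conditions, and pattern names are all invented for illustration, not taken from BP3:

```python
# Minimal production-rule selection of process patterns from Q&A answers.
answers = {"payment_upfront": True, "goods_physical": False}   # designer's answers

rules = [
    (lambda a: a["payment_upfront"], "prepayment-pattern"),
    (lambda a: not a["payment_upfront"], "invoice-pattern"),
    (lambda a: a["goods_physical"], "shipping-pattern"),
    (lambda a: not a["goods_physical"], "e-delivery-pattern"),
]

process = [pattern for cond, pattern in rules if cond(answers)]
print(process)   # ['prepayment-pattern', 'e-delivery-pattern']
```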

Relevance: 30.00%

Abstract:

The main problem with cone-beam computed tomography (CT) systems for industrial applications employing 450 kV X-ray tubes is the large amount of scattered radiation added to the primary radiation (signal). This stray radiation leads to a significant degradation of image quality. A better understanding of the scattering, and methods to reduce its effects, are therefore necessary to improve image quality. Several studies have been carried out in the medical field at lower energies, whereas studies in industrial CT, especially for energies up to 450 kV, are lacking. Moreover, the studies reported in the literature do not consider the scattered radiation generated by the CT system structure and the walls of the X-ray room (environmental scatter). In order to investigate the scattering in CT projections, a GEANT4-based Monte Carlo (MC) model was developed. The model, which has been validated against experimental data, has enabled the calculation of the scattering including the environmental scatter, the optimization of an anti-scatter grid suitable for the CT system, and the optimization of the hardware components of the CT system. The investigation of multiple scattering in the CT projections showed that its contribution is 2.3 times that of the primary radiation for certain objects. The results on environmental scatter showed that it is the major component of the scattering for aluminum box objects with a front size of 70 × 70 mm², and that it strongly depends on the thickness of the object and therefore on the projection. For that reason, its correction is one of the key factors for achieving high-quality images. The anti-scatter grid optimized by means of the developed MC model was found to reduce the scatter-to-primary ratio in the reconstructed images by 20%. The object and environmental scatter calculated by means of the simulation were used to improve the scatter correction algorithm, which Empa was able to patent. The results showed that the cupping effect in the corrected image is strongly reduced. The developed CT simulation is a powerful tool to optimize the design of the CT system and to evaluate the contribution of the scattered radiation to the image. Moreover, it has offered a basis for a new scatter correction approach with which it has been possible to achieve images with the same spatial resolution as state-of-the-art, well-collimated fan-beam CT, with a factor-of-10 gain in reconstruction time. This result has a high economic impact in non-destructive testing and evaluation, and in reverse engineering.
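
A simplified sketch of projection-domain scatter correction, the basic operation such algorithms build on: subtract an estimated scatter field before the log step of CT preprocessing. The intensities and the constant scatter estimate are stand-ins for the Monte Carlo results, not Empa's patented algorithm:

```python
# Subtracting an estimated scatter field before log normalization.
import numpy as np

I0 = 1.0e5                                    # unattenuated intensity (assumed)
primary = I0 * np.exp(-np.linspace(0.5, 3.0, 8))
scatter = 0.4 * primary.mean() * np.ones(8)   # stand-in for the MC scatter estimate
measured = primary + scatter

spr = scatter / primary                       # scatter-to-primary ratio
mu_naive = -np.log(measured / I0)             # uncorrected, cupping-prone projection
mu_corr = -np.log((measured - scatter) / I0)  # scatter-corrected projection
print("max SPR:", spr.max(), "max bias removed:", (mu_corr - mu_naive).max())
```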

Relevance: 30.00%

Abstract:

Sustainable computer systems require some flexibility to adapt to unpredictable environmental changes. A solution lies in autonomous software agents, which can adapt autonomously to their environments. Though autonomy allows agents to decide which behavior to adopt, a disadvantage is a lack of control and, as a side effect, even untrustworthiness: we want to keep some control over such autonomous agents. How can autonomous agents be controlled while respecting their autonomy? A solution is to regulate agents' behavior by norms. The normative paradigm makes it possible to control autonomous agents while respecting their autonomy, limiting untrustworthiness and increasing system compliance. It can also facilitate the design of the system, for example by regulating the coordination among agents. However, an autonomous agent will follow norms or violate them depending on the conditions. Under what conditions is a norm binding upon an agent? While autonomy is regarded as the driving force behind the normative paradigm, cognitive agents provide a basis for modeling the bindingness of norms. In order to cope with the complexity of modeling cognitive agents and normative bindingness, we adopt an intentional stance. Since agents are embedded in a dynamic environment, events may not occur at the same instant. Accordingly, our cognitive model is extended to account for temporal aspects. Special attention is given to the temporal peculiarities of the legal domain such as, among others, the time in force and the time in efficacy of provisions. Some types of normative modifications are also discussed in the framework. Notably, our temporal account of legal reasoning is integrated with our commonsense temporal account of cognition. As our intention is to build sustainable reasoning systems running in unpredictable environments, we adopt a declarative representation of knowledge. A declarative representation of norms makes it easier to update their system representation, thus facilitating system maintenance, and improves system transparency, thus easing system governance. Since agents are bounded and embedded in unpredictable environments, and since conflicts may appear among mental states and norms, agent reasoning has to be defeasible, i.e. new pieces of information can invalidate formerly derivable conclusions. In this dissertation, our model is formalized in a non-monotonic logic, namely a temporal modal defeasible logic, in order to account for the interactions between normative systems and software cognitive agents.
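
A minimal sketch of defeasible inference (facts, defeasible rules, and a superiority relation resolving conflicts), with invented rule names and literals; the temporal and modal machinery of the dissertation's logic is deliberately omitted:

```python
# Toy defeasible reasoner: a rule's conclusion holds if its body holds and
# no superior conflicting rule also fires. Contents are illustrative only.
facts = {"autonomous_agent", "norm_in_force"}
rules = {
    "r1": ({"norm_in_force"}, "obligation_binding"),
    "r2": ({"norm_suspended"}, "-obligation_binding"),   # would defeat r1
}
superiority = {("r2", "r1")}                             # r2 beats r1 on conflict

def fires(body):
    return body <= facts

conclusions = set()
for name, (body, head) in rules.items():
    if fires(body):
        beaten = any(fires(b) and h == "-" + head and (n, name) in superiority
                     for n, (b, h) in rules.items())
        if not beaten:
            conclusions.add(head)
print(conclusions)   # {'obligation_binding'}: r2's body does not hold here
```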

Relevance: 30.00%

Abstract:

Many different complex systems depend on a large number n of mutually independent random Boolean variables. The most useful representation for these systems, usually called complex stochastic Boolean systems (CSBSs), is the intrinsic order graph. This is a directed graph on 2^n vertices, corresponding to the 2^n binary n-tuples (u_1, ..., u_n) ∈ {0,1}^n of 0s and 1s. In this paper, different duality properties of the intrinsic order graph are rigorously analyzed in detail. The results can be applied to many CSBSs arising from any scientific, technical or social area…
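
To fix the setting, the sketch below enumerates the 2^n binary n-tuples and one natural candidate duality map, componentwise complementation; how such maps act on the intrinsic order graph is exactly what the paper analyzes, so the code only sets up the objects:

```python
# The vertex set of the intrinsic order graph and the complement map
# u -> (1-u_1, ..., 1-u_n), a candidate duality on {0,1}^n.
from itertools import product

n = 3
vertices = list(product((0, 1), repeat=n))           # all 2**n binary n-tuples
dual = {u: tuple(1 - b for b in u) for u in vertices}
print(len(vertices), "vertices;", (0,) * n, "maps to", dual[(0,) * n])
```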

Relevance: 30.00%

Abstract:

A complex stochastic Boolean system (CSBS) is a system depending on an arbitrary number n of stochastic Boolean variables. The analysis of CSBSs is mainly based on the intrinsic order: a partial order relation defined on the set {0,1}^n of binary n-tuples. The usual graphical representation of a CSBS is the intrinsic order graph: the Hasse diagram of the intrinsic order. In this paper, some new properties of the intrinsic order graph are studied. In particular, the set and number of its edges, the degree and neighbors of each vertex, as well as characteristic properties such as the symmetry and fractal structure of this graph, are analyzed…
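
The abstract does not restate the exact intrinsic order criterion, so the sketch below only approximates the order numerically, testing Pr(u) ≥ Pr(v) over a grid of monotone parameter vectors, and then extracts Hasse-diagram (covering) edges; the grid resolution and n are arbitrary, and the result is a heuristic approximation, not the authors' exact criterion:

```python
# Approximate intrinsic order: u >= v iff Pr(u) >= Pr(v) for every tested
# parameter vector 0 < p1 <= ... <= pn <= 0.5; then keep covering edges.
from itertools import product

n, grid = 3, [0.1, 0.2, 0.3, 0.4, 0.5]
ps = [p for p in product(grid, repeat=n) if all(a <= b for a, b in zip(p, p[1:]))]
V = list(product((0, 1), repeat=n))

def pr(u, p):
    """Probability of n-tuple u under independent Bernoulli parameters p."""
    out = 1.0
    for ui, pi in zip(u, p):
        out *= pi if ui == 1 else 1 - pi
    return out

geq = {(u, v) for u in V for v in V
       if all(pr(u, p) >= pr(v, p) for p in ps)}
covers = [(u, v) for (u, v) in geq if u != v and not any(
          (u, w) in geq and (w, v) in geq for w in V if w not in (u, v))]
print(len(covers), "covering edges in the approximate intrinsic order graph")
```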

Relevance: 30.00%

Abstract:

In cases of severe knee osteoarthritis causing pain, deformity, and loss of stability and mobility, clinicians consider the substitution of the articular surfaces by means of joint prostheses. The objectives pursued by this surgery are complete pain elimination, restoration of normal physiological mobility and joint stability, and correction of all deformities and, thus, of limping. Knee surgical navigation systems have been developed in computer-aided surgery to improve the final outcome of total knee arthroplasty. These systems provide the surgeon with quantitative, real-time information about each surgical action, such as bone cut execution and prosthesis component alignment, by means of tracking tools rigidly fixed onto the femur and the tibia. Nevertheless, a margin of error remains, due to incorrect surgical procedures and to the still limited amount of kinematic information provided by current systems. In particular, patello-femoral joint kinematics is not considered in knee surgical navigation. It is also unclear, and thus a source of misunderstanding, what the most appropriate methodology is for studying patellar motion. In addition, the knee ligamentous apparatus is only superficially considered in navigated total knee arthroplasty, without taking into account how its physiological behavior is altered by this surgery. The aim of the present research work was to provide new functional and biomechanical assessments for the improvement of surgical navigation systems for joint replacement in the human lower limb. This was realized mainly through the identification and development of new techniques that allow a thorough comprehension of the functioning of the knee joint, with particular attention to the patello-femoral joint and to the main knee soft tissues. A knee surgical navigation system with active markers was used in all the research activities presented in this work. Preliminary tests were performed to assess the system accuracy and the robustness of a number of navigation procedures. Four studies were performed in-vivo on patients requiring total knee arthroplasty, randomly assigned to traditional or navigated procedures, in order to check the real efficacy of the latter with respect to the former. To cope with the assessment of patello-femoral joint kinematics in intact and replaced knees, twenty in-vitro tests were performed using a prototype tracking tool for the patella as well. In addition to standard anatomical and articular recommendations, original proposals for defining a patellar anatomically-based reference frame and for studying patello-femoral joint kinematics were reported and used in these tests. These definitions were applied to two further in-vitro tests in which, for the first time, the implantation of the patellar component was also fully navigated. Furthermore, an original technique to analyze the main knee soft tissues by means of anatomically-based fiber mappings was reported and used in the same tests. The preliminary instrumental tests revealed a system accuracy within the millimeter and a good inter- and intra-observer repeatability in defining all anatomical reference frames. In the in-vivo studies, the general alignments of the femoral and tibial prosthesis components and of the lower limb mechanical axis, as measured on radiographs, were more satisfactory, i.e. within ±3°, in those patients in whom total knee arthroplasty was performed with navigated procedures. As for the in-vitro tests, consistent patello-femoral joint kinematic patterns were observed across specimens throughout the knee flexion arc. In general, the physiological patellar motion of the intact knee was not restored after the implant. This restoration was successfully achieved in the two further tests in which all component implants, including the patellar insert, were fully navigated, i.e. with intra-operative assessment of patellar component positioning together with general tibio-femoral and patello-femoral joint assessment. The tests assessing the behavior of the main knee ligaments revealed their complexity and the different functional roles played by the several sub-bundles composing each ligament. Also in this case, total knee arthroplasty altered the physiological behavior of these knee soft tissues. These results show in-vitro the relevance and feasibility of applying new techniques for accurate knee soft tissue monitoring, patellar tracking assessment, and navigated patellar resurfacing intra-operatively in the context of the most modern operative techniques. This research work contributes to the still controversial knowledge of normal and replaced knee kinematics by testing the reported new methodologies. The consistency of these results provides fundamental information for the comprehension and improvement of knee orthopedic treatments. In the future, the reported new techniques can be safely applied in-vivo and also adopted in other joint replacements.
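
A core computation behind such tracking tools, shown as an illustrative sketch rather than the specific navigation system used here: recovering the rigid transform of a marker cluster via the Kabsch algorithm. The tool geometry and pose below are invented:

```python
# Best-fit rigid transform (rotation R, translation t) of a tracking tool
# from its marker positions, via SVD (Kabsch algorithm).
import numpy as np

def rigid_transform(P, Q):
    """Find R, t with Q ~ R @ P + t, for 3xN arrays of marker coordinates."""
    Pc, Qc = P - P.mean(1, keepdims=True), Q - Q.mean(1, keepdims=True)
    U, _, Vt = np.linalg.svd(Qc @ Pc.T)
    d = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1, 1, d]) @ Vt          # guard against reflections
    t = Q.mean(1) - R @ P.mean(1)
    return R, t

# Invented 4-marker tool geometry (mm) and a known test pose.
P = np.array([[0, 40, 0, 0], [0, 0, 40, 0], [0, 0, 0, 40]], float)
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = R_true @ P + np.array([[5], [2], [1]])   # markers as seen by the camera
R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true), t)
```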

Relevance: 30.00%

Abstract:

The aim of this PhD thesis is to investigate the orientational and dynamical properties of liquid crystalline systems, at the molecular level and using atomistic computer simulations, to reach a better understanding of material behavior from a microscopic point of view. In perspective, this should allow us to clarify the relation between microscopic and macroscopic properties, with the objective of predicting or confirming experimental results on these systems. In this context, we developed four different lines of work in the thesis. The first concerns the orientational order and alignment mechanism of rigid solutes of small dimensions dissolved in a nematic phase formed by the liquid crystal 4-pentyl-4'-cyanobiphenyl (5CB). The orientational distributions of the solutes were obtained with Molecular Dynamics (MD) simulations and compared with experimental data reported in the literature. We also verified the agreement between order parameters and dipolar coupling values measured in NMR experiments. The MD-determined effective orientational potentials were compared with the predictions of the Maier-Saupe and surface tensor models. The second line concerns the development of a correct parametrization able to reproduce the phase transition properties of a prototype of the oligothiophene semiconductor family: sexithiophene (T6). T6 forms two widely studied crystalline polymorphs and possesses liquid crystalline phases that are still not well characterized. From simulations we detected a phase transition from crystal to liquid crystal at about 580 K, in agreement with available experiments, and in particular we found two LC phases, smectic and nematic. The crystal-smectic transition is associated with a relevant density variation and with strong conformational changes of T6: the molecules in the liquid crystal phase easily assume a bent shape, deviating from the planar structure typical of the crystal. The third line explores a new approach for calculating the viscosity of a nematic through a virtual experiment resembling the classical falling-sphere experiment. The falling sphere is replaced by a hydrogenated silicon nanoparticle of spherical shape suspended in 5CB, and gravity is replaced by a constant force applied to the nanoparticle in a selected direction. Once the nanoparticle reaches a constant velocity, the viscosity of the medium can be evaluated using Stokes' law. With this method we successfully reproduced the experimental viscosities and viscosity anisotropy of the solvent 5CB. The last line deals with the study of order induction on nematic molecules by a hydrogenated silicon surface. Gaining predictive power over the anchoring behavior of liquid crystals at surfaces would be a very desirable capability, as many device-related properties depend on molecular organization close to surfaces. Here we studied, by means of atomistic MD simulations, the flat interface between a hydrogenated (001) silicon surface and a sample of 5CB molecules. We found planar anchoring of the first layers of 5CB, where surface interactions dominate over the mesogen intermolecular interactions. We also analyzed the 5CB-vacuum interface, finding a homeotropic orientation of the nematic at this interface.
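
The falling-sphere step reduces to Stokes' law, eta = F / (6 * pi * R * v); the sketch below simply evaluates it, with placeholder values for the force, radius, and terminal velocity rather than the simulation's numbers:

```python
# Viscosity from Stokes' law once the dragged nanoparticle reaches
# terminal velocity. All values are placeholders.
import math

F = 2.0e-12   # constant force applied to the nanoparticle (N), placeholder
R = 2.5e-9    # nanoparticle radius (m), placeholder
v = 4.0e-3    # measured terminal velocity (m/s), placeholder
eta = F / (6 * math.pi * R * v)
print(f"estimated viscosity: {eta:.4f} Pa*s")
```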

Relevance: 30.00%

Abstract:

This thesis deals with the influence of chain branching of different topologies on the static properties of polymers. These investigations are carried out by means of Monte Carlo and molecular dynamics simulations. First, some theoretical concepts and models are introduced that allow polymer chains to be described on mesoscopic length scales. Important observables suited to the quantitative characterization of branching structures in polymers are introduced and explained. The optimization techniques employed in the implementation of the computer program are also discussed. In addition to linear polymer chains, different topologies are investigated as a function of solvent quality: star polymers with a variable number of arms, the crossover from star polymers to linear polymers, chains with a variable number of side chains, regular dendrimers, and hyperbranched structures. First, a thorough analysis of the simulation model is carried out on very long linear single chains. The scaling properties of the linear chains are investigated over the entire solvent range, from good solvent down to largely collapsed chains in poor solvent. An important result of this work is the confirmation of the corrections to the scaling behavior of the hydrodynamic radius Rh. This result was made possible by the large chain lengths chosen and the high quality of the data obtained in this work, in particular for the linear chains, and it contradicts many previous simulation studies and experimental works. These corrections to scaling were demonstrated not only for linear chains but also for star polymers with different numbers of arms. For linear chains, the influence of polydispersity is investigated. It is shown that an unambiguous mapping of length scales between the simulation model and experiment is not possible, since the dimensionless quantity used for this purpose depends too weakly on the degree of polymerization of the chains. A comparison of simulation data with industrial low-density polyethylene (LDPE) shows that LDPE consists of strongly branched chains. For regular dendrimers, a pronounced back-folding of the arms into the inner core region was demonstrated.
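
Two of the observables such analyses rely on, shown for a single chain of bead coordinates: the radius of gyration Rg and the Kirkwood estimate of the hydrodynamic radius Rh. The random-walk chain below is a stand-in for actual simulation output:

```python
# Rg and the Kirkwood approximation of Rh from bead coordinates:
# 1/Rh = (1/N**2) * sum over i != j of 1/r_ij.
import numpy as np

def rg(coords):
    """Radius of gyration of one chain (N x 3 array of bead positions)."""
    c = coords - coords.mean(0)
    return np.sqrt((c ** 2).sum(1).mean())

def rh_kirkwood(coords):
    """Kirkwood hydrodynamic radius from all pairwise bead distances."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    inv = 1.0 / d[np.triu_indices(n, k=1)]
    return n * n / (2.0 * inv.sum())

chain = np.cumsum(np.random.default_rng(1).normal(size=(200, 3)), axis=0)
print("Rg:", rg(chain), "Rh:", rh_kirkwood(chain))
```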

Relevance: 30.00%

Abstract:

Broad consensus has been reached within the Education and Cognitive Psychology research communities on the need to center the learning process on experimentation and the concrete application of knowledge, rather than on a bare transfer of notions. Several advantages arise from this educational approach, ranging from the reinforcement of student learning, to the increased opportunity for a student to gain greater insight into the studied topics, up to the possibility for learners to acquire practical skills and long-lasting proficiency. This is especially true in Engineering education, where integrating conceptual knowledge and practical skills assumes strategic importance. In this scenario, learners are called to play a primary role. They are actively involved in the construction of their own knowledge, instead of passively receiving it. As a result, traditional, teacher-centered learning environments should be replaced by novel learner-centered solutions. Information and Communication Technologies enable the development of innovative solutions that provide suitable answers to the need for experimentation supports in educational contexts. Virtual Laboratories, Adaptive Web-Based Educational Systems and Computer-Supported Collaborative Learning environments can significantly foster different learner-centered instructional strategies, offering the opportunity to enhance personalization, individualization and cooperation. More specifically, they allow students to explore different kinds of materials, to access and compare several information sources, to face real or realistic problems, and to work on authentic and multi-faceted case studies. In addition, they encourage cooperation among peers and provide support through coached and scaffolded activities aimed at fostering reflection and meta-cognitive reasoning. This dissertation guides readers through this research field, presenting both the theoretical and applicative results of a research effort aimed at designing an open, flexible, learner-centered virtual lab for supporting students in learning Information Security.