12 results for Computer Structure

at CiencIPCA - Instituto Politécnico do Cávado e do Ave, Portugal


Relevance:

30.00%

Publisher:

Abstract:

We have employed molecular dynamics simulations to study the behavior of virtual polymeric materials under an applied uniaxial tensile load. Through computer simulations, one can obtain experimentally inaccessible information about phenomena taking place at the molecular and microscopic levels. Not only can the global material response be monitored and characterized over time, but the response of individual macromolecular chains can also be followed if desired. The computer-generated materials were created by emulating step-wise polymerization, resulting in self-avoiding chains in 3D with a controlled degree of orientation along a given axis. These materials represent a simplified model of the lamellar structure of semi-crystalline polymers, consisting of an amorphous region surrounded by two crystalline lamellar regions. For the simulations, a series of materials were created, varying i) the lamella thickness, ii) the amorphous region thickness, iii) the preferential chain orientation, and iv) the degree of packing of the amorphous region. Simulation results indicate that the lamella thickness has the strongest influence on the mechanical properties of the lamella-amorphous structure, in agreement with experimental data. The other morphological parameters also affect the mechanical response, but to a smaller degree. This research follows previous simulation work on crack formation and propagation phenomena, deformation mechanisms at the nanoscale, and the influence of loading conditions on the material response. Computer simulations can improve the fundamental understanding of the phenomena responsible for the behavior of polymeric materials, and will eventually lead to the design of knowledge-based materials with improved properties.
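
The abstract includes no code, but the chain-generation step it describes (growing self-avoiding chains with a preferred orientation) can be illustrated with a minimal sketch. The function below is a hypothetical, simplified illustration of that idea on a cubic lattice, not the authors' actual material-generation procedure; the lattice, the restart strategy, and the `axis_bias` parameter are assumptions.

```python
import random

def self_avoiding_chain(n_segments, axis_bias=0.5, max_tries=1000):
    """Grow one self-avoiding chain on a cubic lattice.

    axis_bias in [0, 1) adds extra weight to +z/-z steps, mimicking a
    controlled degree of chain orientation along one axis (assumed knob)."""
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    weights = [1, 1, 1, 1, 1 + 4 * axis_bias, 1 + 4 * axis_bias]
    for _ in range(max_tries):
        chain = [(0, 0, 0)]
        occupied = {chain[0]}
        while len(chain) < n_segments:
            dx, dy, dz = random.choices(steps, weights=weights)[0]
            x, y, z = chain[-1]
            nxt = (x + dx, y + dy, z + dz)
            if nxt in occupied:
                break                      # collision: abandon this walk and restart
            chain.append(nxt)
            occupied.add(nxt)
        else:
            return chain                   # walk completed without self-intersection
    raise RuntimeError("could not grow a self-avoiding chain of this length")
```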

Relevance:

20.00%

Publisher:

Abstract:

Experimental scratch resistance testing provides two numbers: the penetration depth Rp and the healing depth Rh. In molecular dynamics computer simulations, we create a material consisting of N statistical chain segments by polymerization; a reinforcing phase can be included. We then simulate the movement of an indenter and the response of the segments during X time steps. At each time step, each segment has three Cartesian coordinates of position and three of momentum. We describe methods for visualizing results based on a record of 6NX coordinates. We obtain a continuous dependence on time t of the positions of each of the segments on the path of the indenter. Scratch resistance at a given location can be connected to the spatial structures of individual polymeric chains.
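
As a rough illustration of working with such a record, the snippet below assumes a hypothetical trajectory file laid out as X time steps × N segments × 6 values (three position and three momentum coordinates, as described above) and plots the path of one segment. The file name, array layout, and plotted axes are assumptions for the sketch, not the authors' actual visualization method.

```python
import numpy as np
import matplotlib.pyplot as plt

X, N = 500, 2000                                      # assumed counts of time steps and segments
traj = np.load("scratch_run.npy").reshape(X, N, 6)    # hypothetical record of 6NX coordinates

positions = traj[:, :, :3]                            # keep only the Cartesian positions
segment_id = 42                                       # follow a single chain segment over time
path = positions[:, segment_id, :]

plt.plot(path[:, 0], path[:, 2])                      # e.g. scratch direction vs. depth
plt.xlabel("x (scratch direction)")
plt.ylabel("z (depth)")
plt.show()
```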

Relevance:

20.00%

Publisher:

Abstract:

Indentation tests are used to determine the hardness of a material, e.g., Rockwell, Vickers, or Knoop. The indentation process is observed empirically in the laboratory during these tests; the mechanics of indentation is insufficiently understood. We have performed the first molecular dynamics computer simulations of the indentation resistance of polymers with a chain structure similar to that of high density polyethylene (HDPE). A coarse-grained model of HDPE is used to simulate how the interconnected segments respond to an external force imposed by an indenter. Results include the time-dependent measurement of penetration depth, recovery depth, and recovery percentage, with respect to indenter force, indenter size, and indentation time. The simulations provide results that are inaccessible experimentally, including the continuous evolution of the pertinent tribological parameters during the entire indentation process.
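
The abstract reports penetration depth, recovery depth, and recovery percentage; one plausible relation between them, assuming the recovery percentage is the recovered fraction of the maximum penetration, is sketched below. The definition is an assumption for illustration, not taken from the paper.

```python
def recovery_percentage(penetration_depth, residual_depth):
    """Fraction of the penetration depth recovered after the indenter
    is removed, expressed as a percentage (assumed definition)."""
    recovered = penetration_depth - residual_depth
    return 100.0 * recovered / penetration_depth

# e.g. an indenter pushed 8.0 units deep leaving a 2.0-unit permanent dent
print(recovery_percentage(8.0, 2.0))   # 75.0
```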

Relevance:

20.00%

Publisher:

Abstract:

The radial undistortion model proposed by Fitzgibbon and the radial fundamental matrix were early steps to extend classical epipolar geometry to distorted cameras. Minimal solvers have since been proposed to find relative pose and radial distortion given point correspondences between images. However, a major drawback of all these approaches is that they require the distortion center to be known exactly. In this paper we show how the distortion center can be absorbed into a new radial fundamental matrix. This new formulation is much more practical in reality, as it also accommodates digital zoom, cropped images, and camera-lens systems where the distortion center does not exactly coincide with the image center. In particular, we start from the setting where only one of the two images contains radial distortion, analyze the structure of the resulting radial fundamental matrix, and show that the technique also generalizes to other linear multi-view relationships such as the trifocal tensor and the homography. For the new radial fundamental matrix we propose different estimation algorithms from 9, 10, and 11 points. We show how to extract the epipoles and prove the practical applicability on several epipolar geometry image pairs with strong distortion that - to the best of our knowledge - no other existing algorithm can handle properly.
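
For intuition, a common way to estimate a one-sided radial fundamental matrix linearly is a DLT-style solve over lifted coordinates, where the distorted point (u, v) is lifted to (u, v, 1, u²+v²) and the matrix becomes 3×4, so 11 correspondences suffice up to scale. The sketch below illustrates that generic technique under this assumed formulation; it is not the paper's 9- or 10-point solver, and the exact parameterization used by the authors may differ.

```python
import numpy as np

def lift(p):
    """Lift a distorted image point (u, v) to (u, v, 1, u^2 + v^2)."""
    u, v = p
    return np.array([u, v, 1.0, u * u + v * v])

def radial_fundamental_linear(pts_undistorted, pts_distorted):
    """DLT-style estimate of a 3x4 one-sided radial fundamental matrix
    from >= 11 correspondences, assuming x_u^T * F * lift(x_d) = 0."""
    rows = []
    for (x, y), q in zip(pts_undistorted, pts_distorted):
        xh = np.array([x, y, 1.0])            # homogeneous undistorted point
        rows.append(np.kron(xh, lift(q)))     # one linear equation in the 12 entries of F
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 4)               # null vector of the design matrix, up to scale
```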

Relevance:

20.00%

Publisher:

Abstract:

Over the last decade, software architecture has emerged as a critical issue in Software Engineering. This encompassed a shift from traditional programming towards software development based on the deployment and assembly of independent components. The specification of both the overall system structure and the interaction patterns between components became a major concern for the working developer. Although a number of formalisms are available to express behaviour and to supply the calculational power needed to reason about designs, the task of deriving architectural designs on top of popular component platforms has remained largely informal. This paper introduces a systematic approach to derive, from CCS behavioural specifications, the corresponding architectural skeletons in the Microsoft .Net framework, in the form of executable C# and Cω code. The prototyping process is fully supported by a specific tool developed in Haskell.

Relevance:

20.00%

Publisher:

Abstract:

Program slicing is a well known family of techniques used to identify code fragments that depend on, or are depended upon by, specific program entities. They are particularly useful in the areas of reverse engineering, program understanding, testing, and software maintenance. Most slicing methods, usually oriented towards the imperative or object-oriented paradigms, are based on some sort of graph structure representing program dependencies. Slicing techniques amount, therefore, to (sophisticated) graph traversal algorithms. This paper proposes a completely different approach to the slicing problem for functional programs. Instead of extracting program information to build an underlying dependency structure, we resort to standard program calculation strategies, based on the so-called Bird-Meertens formalism. The slicing criterion is specified either as a projection or as a hiding function which, once composed with the original program, leads to the identification of the intended slice. Going through a number of examples, the paper suggests this approach may be an interesting, even if not completely general, alternative to slicing functional programs.
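
To make the projection-based criterion concrete, here is a tiny illustration in Python (the paper itself works with functional programs and Bird-Meertens-style calculation, not Python): the slice is obtained by composing a projection with the program and then simplifying the composition so that only the computation feeding the projected component survives. All names below are hypothetical.

```python
# Original program: returns a pair (mean of the inputs, formatted report).
def process(xs):
    total = sum(xs)
    mean = total / len(xs)
    report = "n=%d, mean=%.2f" % (len(xs), mean)
    return (mean, report)

# Slicing criterion expressed as a projection onto the first component.
fst = lambda pair: pair[0]

# Naive slice: the projection composed with the original program.
slice_naive = lambda xs: fst(process(xs))

# "Calculated" slice: after fusing fst with process, only the code needed
# to produce the first component remains; the report computation is gone.
def process_mean_only(xs):
    return sum(xs) / len(xs)
```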

Relevance:

20.00%

Publisher:

Abstract:

A large and growing number of software systems rely on non-trivial coordination logic to make use of third-party services or components. It is therefore of the utmost importance to understand and rigorously capture this continuously growing coordination layer, as this will ease not only the verification of such systems with respect to their original specifications, but also maintenance, further development, testing, deployment, and integration. This paper introduces a method based on several program analysis techniques (namely, dependence graphs, program slicing, and graph pattern analysis) to extract coordination logic from the source code of legacy systems. The process is driven by a series of pre-defined coordination patterns and captured by a special-purpose graph structure from which coordination specifications can be generated in a number of different formalisms.
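
As a loose illustration of the graph-based ingredients mentioned above (a dependence graph plus a coordination pattern used as a slicing seed), the sketch below marks calls to external services in a toy dependence graph and collects their backward slice with networkx. The node names and the pattern are hypothetical; this is not the paper's actual extraction method.

```python
import networkx as nx

# Toy program dependence graph: an edge a -> b means "b depends on a".
pdg = nx.DiGraph()
pdg.add_edges_from([
    ("read_config", "build_request"),
    ("build_request", "call_payment_service"),   # external service invocation
    ("read_config", "log_startup"),
    ("call_payment_service", "handle_response"),
])

# Hypothetical coordination pattern: statements invoking external services,
# together with everything they depend on (a backward slice).
seeds = [n for n in pdg.nodes if n.startswith("call_")]
coordination = set(seeds)
for s in seeds:
    coordination |= nx.ancestors(pdg, s)         # all nodes with a path into the seed
print(sorted(coordination))
```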

Relevance:

20.00%

Publisher:

Abstract:

Current software development often relies on non-trivial coordination logic for combining autonomous services, possibly running on different platforms. As a rule, however, such a coordination layer is strongly woven into the application at the source code level. Its precise identification therefore becomes a major methodological (and technical) problem and a challenge to any program understanding or refactoring process. The approach introduced in this paper resorts to slicing techniques to extract coordination data from source code. Such data are captured in a specific dependency graph structure from which a coordination model can be recovered, either in the form of an Orc specification or as a collection of code fragments corresponding to typical coordination patterns identified in the system. Tool support is also discussed.

Relevance:

20.00%

Publisher:

Abstract:

In Portugal, as in much of Europe's legal systems, «legal persons» may also be held criminally liable for cybercrimes, such as the following offences: «false information»; «damage to other programs or computer data»; «computer-software sabotage»; «illegitimate access»; «unlawful interception»; and «illegitimate reproduction of a protected program». In Portugal, however, there are many exceptions to the criminal liability of «legal persons»: some «legal persons» cannot be held liable for cybercrime, because the legislature does not allow it. These «legal persons» are, for example, the following «public entities»: legal persons governed by public law, which include public business entities; public-service concessionaires, regardless of ownership; and other legal persons exercising public powers. In other words, and again by way of example, a Portuguese public university or a private concessionaire of a public service in Portugal cannot commit (in Portugal) any of the cybercrimes listed. Fair? Unfair. All laws should provide that all legal persons can commit cybercrimes. PS: abstract of the article in English.

Relevance:

20.00%

Publisher:

Abstract:

Program slicing is a well known family of techniques used to identify code fragments that depend on, or are depended upon by, specific program entities. They are particularly useful in the areas of reverse engineering, program understanding, testing, and software maintenance. Most slicing methods, usually targeting either the imperative or the object-oriented paradigms, are based on some sort of graph structure representing program dependencies. Slicing techniques amount, therefore, to (sophisticated) graph traversal algorithms. This paper proposes a completely different approach to the slicing problem for functional programs. Instead of extracting program information to build an underlying dependency structure, we resort to standard program calculation strategies, based on the so-called Bird-Meertens formalism. The slicing criterion is specified either as a projection or as a hiding function which, once composed with the original program, leads to the identification of the intended slice. Going through a number of examples, the paper suggests this approach may be an interesting, even if not completely general, alternative to slicing functional programs.

Relevance:

20.00%

Publisher:

Abstract:

Nowadays, despite improvements in usability and intuitiveness, users still have to adapt to the systems they are offered in order to satisfy their needs. For instance, they must learn how to achieve tasks, how to interact with the system, and how to fulfil the system's specifications. This paper proposes an approach to improve this situation by enabling graphical user interface redefinition through virtualization and computer vision, with the aim of increasing the system's usability. To achieve this goal, the approach is based on enriched task models, virtualization, and picture-driven computing.

Relevance:

20.00%

Publisher:

Abstract:

Dental implant recognition in patients without available records is a time-consuming and far from straightforward task. The traditional method is a completely user-dependent process, in which the expert compares a 2D X-ray image of the dental implant against a generic database. Given the high number of implants available and the similarity between them, automatic or semi-automatic frameworks to aid implant model detection are essential. In this study, a novel computer-aided framework for dental implant recognition is proposed. The method relies on image processing concepts, namely: (i) a segmentation strategy for semi-automatic implant delineation; and (ii) a machine learning approach for implant model recognition. Although the segmentation technique is the main focus of the current study, preliminary details of the machine learning approach are also reported. Two different scenarios are used to validate the framework: (1) comparison of the semi-automatic contours against manual implant contours in 125 X-ray images; and (2) classification of 11 known implants using a large reference database of 601 implants. In experiment 1, a Dice metric of 0.97±0.01, a mean absolute distance of 2.24±0.85 pixels, and a Hausdorff distance of 11.12±6 pixels were obtained. In experiment 2, 91% of the implants were successfully recognized while reducing the reference database to 5% of its original size. Overall, the segmentation technique achieved accurate implant contours. Although the preliminary classification results prove the concept of the current work, more features and an extended database should be used in future work.
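
For reference, the Dice metric reported in experiment 1 measures the overlap between two binary segmentations; a minimal implementation is sketched below. The metric itself is standard, while the mask format is an assumption for the sketch.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity between two binary masks (1 = implant pixel)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    overlap = np.logical_and(a, b).sum()
    return 2.0 * overlap / (a.sum() + b.sum())
```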