919 results for GRAPHICAL LASSO
Abstract:
In a world increasingly driven by software, effective approaches that encourage software reuse become necessary. Reuse must be aligned with a set of practices, procedures, and methodologies that produce a stable, high-quality product, and these concerns give rise to new styles and approaches in software engineering. This thesis addresses concepts related to model-driven development and model-driven architecture. The model-driven approach automates significant parts of development, supported by the models produced in the specification phase. Defining terms such as model, architecture, and platform makes the focus clearer, because for MDA and MDD it is important to separate technical from business concerns. The main processes are covered so as to highlight the artifacts built at each stage of model-driven development. The stages (CIM, PIM, PSM, and ISM) are detailed with the purpose of each phase, so that by the end of each stage progressively more specialized artifacts are produced. The models are handled from different modeling perspectives, abstracting concepts and building up the set of details that portrays a specific scenario. This portrayal can be a graphical or textual representation, although in most cases a modeling language such as UML is chosen. To provide a practical view, this dissertation presents tools that improve the construction of models and the code generation that assists development, keeping the documentation aligned with the system. Finally, the dissertation presents a case study that revisits the theoretical aspects discussed throughout the text; it is thus expected that model-driven architecture and development can clarify important features to consider in software engineering.
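To make the PIM/PSM distinction concrete, a minimal sketch follows; the class names and the choice of JPA as the target platform are illustrative assumptions, not artifacts from the thesis. The same business concept is first expressed platform-independently and then specialized for one persistence platform.

```java
// Platform-Independent Model (PIM): pure business concept,
// with no technology choices (illustrative example only).
class CustomerPim {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// Platform-Specific Model (PSM): the same concept specialized for a
// persistence platform (here JPA, as one possible target).
@javax.persistence.Entity
class CustomerPsm {
    @javax.persistence.Id
    @javax.persistence.GeneratedValue
    private Long id;
    private String name;
    // getters and setters omitted for brevity
}
```

In an MDA tool chain, the second class would typically be generated from the first by a model-to-model transformation rather than written by hand.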
Abstract:
One problem that occurs frequently at port terminals is poor planning of container loading and unloading, caused by the lack of an efficient method for these operations. The main goal of this work is to implement a method that finds the best way to perform container loading and unloading at each port, bringing substantial savings to the terminals, since the number of container moves is directly proportional to cost. The program builds on the idea that containers are placed in vertical stacks, accessible only from the top, so the ship was modeled as a matrix; to fill it, two rules were created for loading and two for unloading. The best sequence of rules is found with Beam Search, an implicit-enumeration method that expands only the most promising solutions of the generated search tree. The program, developed in the Java language, provides the best way to perform loading and unloading at each port and shows, through a graphical interface, the state in which the ship leaves each port.
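A minimal sketch of the Beam Search skeleton described above, assuming a generic search state with an accumulated cost (number of moves) and a successor function given by the loading/unloading rules; all names are hypothetical, and the actual rules of the thesis are abstracted behind an interface.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** One stowage configuration of the ship (the matrix of container stacks). */
interface State {
    boolean isGoal();
    int cost();                 // accumulated number of container moves
    List<State> successors();   // apply the loading/unloading rules
}

/** Beam Search: keep only the `beamWidth` cheapest states per level. */
final class BeamSearch {
    static State search(State start, int beamWidth) {
        List<State> beam = List.of(start);
        while (!beam.isEmpty()) {
            List<State> next = new ArrayList<>();
            for (State s : beam) {
                if (s.isGoal()) return s;
                next.addAll(s.successors());
            }
            next.sort(Comparator.comparingInt(State::cost));
            beam = next.subList(0, Math.min(beamWidth, next.size()));
        }
        return null; // no feasible plan found
    }
}
```

Because the beam discards all but the best states at each level, the method trades guaranteed optimality for a search cost that grows only linearly with depth.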
Abstract:
This monograph provides an exposition and theoretical examination of Ciência da Carne (“Science of the Flesh”), a series of woodcut prints produced after research into the artistic aspects of human anatomy carried out during the Graduate Course in Visual Arts at UNESP’s Art Institute. Traditional procedures for the naturalistic representation of the human figure often adopt the scientific basis of anatomy as a means of interpreting the surface contours of the body from the inside out. The historical connection between anatomy and art, however, is not merely accidental: it is integral to the development of both disciplines, which are deeply related in the human impulse toward self-discovery and the reinvention of one's own likeness. The series of artworks collected in Ciência da Carne explores, through the particular graphical language of woodcut printing, abstract arrangements of isolated anatomical elements, removed at once from the context of traditional figurative representation and from the didactic goals of medical illustration.
Abstract:
In universities, before the start of each school year, classes are distributed among the available teachers. This requires considering each teacher's maximum weekly workload and preferences for each course, preventing a teacher from giving lessons in two separate locations at the same time, and avoiding some teachers becoming overloaded while others remain largely idle. Performed manually, this process is time consuming, prone to error, and does not allow other combinations of teacher-to-class assignments to be explored. This work aims to develop a decision-support tool for the problem of assigning teachers to classes in a college. The project encompasses a computer program built with object-oriented concepts and a combinatorial tree-search algorithm called Beam Search. The programming language used is Java, and the program has a graphical interface for entering and manipulating the problem data. Once the schedule data of classes and teachers is loaded, the tool supports various simulations and manual adjustments to reach the final result. The method schedules classes efficiently, considering the speed of task execution and the fact that it generates only feasible results.
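The abstract names the constraints but not their encoding. The sketch below shows one plausible feasibility test for a partial assignment, of the kind a Beam Search node would run before being kept in the beam; the record fields and the one-hour-slot assumption are hypothetical.

```java
import java.util.List;

record ClassSlot(String course, int weekday, int hour, String campus) {}

record Teacher(String name, int maxWeeklyHours) {
    /** A new slot is feasible if it neither exceeds the weekly load
     *  (assuming one-hour slots, an illustrative simplification) nor
     *  clashes in time with slots already assigned to this teacher. */
    boolean canTake(ClassSlot slot, List<ClassSlot> assigned) {
        if (assigned.size() + 1 > maxWeeklyHours) return false;
        for (ClassSlot s : assigned) {
            boolean sameTime = s.weekday() == slot.weekday()
                            && s.hour() == slot.hour();
            if (sameTime) return false; // also rules out two campuses at once
        }
        return true;
    }
}
```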
Abstract:
We live in a society where use of the Internet has become central to everyday life. Relationships now often happen through technological devices instead of face-to-face contact, for instance in Internet forums where people discuss online. Analyzing such discussions globally is a major challenge, however, because of the large amount of data. This work investigates the use of visual representations to support exploratory analysis of the content of messages from discussion forums, considering both theme and chronology. The target forums belong to the educational area, where analysis is usually done manually, i.e., by direct reading, message by message. The perceptual and cognitive properties of the human visual system give a person the capacity to carry out high-level information-extraction tasks from a graphical or visual representation of data. This work is therefore based on Visual Analytics, an area that aims to create techniques that amplify these human abilities. We used software that creates a visualization of forum data and supports analysis of forum content. During the work, however, we identified the need to create a new tool to clean the data, which contained a great deal of unnecessary information. After cleaning the data, we created a new visualization and performed an analysis in search of new knowledge, and in the end we compared the new visualization with the manual analysis that had been made. The results made the potential of visualization evident: it provides better correlation between pieces of information, enables the acquisition of knowledge not identified in the initial analysis, and makes better use of the forum content.
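The cleaning step is described only at a high level. As an illustration of the kind of filter involved, the sketch below removes quoted replies and signature blocks from a raw message before visualization; the regular expressions are assumptions, not the rules actually used in the work.

```java
import java.util.regex.Pattern;

final class MessageCleaner {
    // Hypothetical noise patterns: quoted reply lines and signature blocks.
    private static final Pattern QUOTED = Pattern.compile("(?m)^>.*$");
    private static final Pattern SIGNATURE = Pattern.compile("(?s)--\\s*\\n.*$");

    static String clean(String rawMessage) {
        String text = QUOTED.matcher(rawMessage).replaceAll("");
        text = SIGNATURE.matcher(text).replaceAll("");
        return text.replaceAll("\\s+", " ").trim(); // collapse whitespace
    }

    public static void main(String[] args) {
        String raw = "> previous message\nI agree with the proposal.\n-- \nJohn";
        System.out.println(clean(raw)); // prints: I agree with the proposal.
    }
}
```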
Abstract:
Communities are present in physical, chemical, and biological systems, and their identification is fundamental to understanding the behavior of these systems. Recently, the available data on complex networks has grown exponentially, demanding more computational power. The Graphics Processing Unit (GPU) is a cost-effective alternative suited to this purpose. We investigate its convenience for network science by proposing a GPU-based implementation of Newman's community detection algorithm. We show that the processing time of matrix multiplication grows more slowly on GPUs than on CPUs as the matrix size increases, demonstrating that GPU processing power is a viable solution for community-detection simulations that demand high computational power. Our implementation was tested on an integrated biological network for the bacterium Escherichia coli.
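The linear-algebra core that dominates the cost of Newman's spectral method is dense matrix multiplication, the operation whose CPU-versus-GPU scaling the work compares. As a CPU-side baseline only (the thesis code runs on the GPU and is not reproduced here), a naive benchmark might look like this:

```java
final class MatMulBenchmark {
    /** Naive O(n^3) dense multiply: the kernel whose runtime scaling
     *  with n is compared between CPU and GPU in the work. */
    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length;
        double[][] c = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int k = 0; k < n; k++)      // k-middle loop order is cache-friendlier
                for (int j = 0; j < n; j++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    public static void main(String[] args) {
        for (int n : new int[] {128, 256, 512}) {
            double[][] a = new double[n][n], b = new double[n][n];
            long t0 = System.nanoTime();
            multiply(a, b);
            System.out.printf("n=%d: %.1f ms%n", n, (System.nanoTime() - t0) / 1e6);
        }
    }
}
```

Plotting these times against the corresponding GPU times as n grows is the comparison the abstract summarizes.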
Abstract:
Learning to read and write at an early stage is the process of transferring the sound form of spoken language to the graphical form of writing: in an alphabetical writing system such as ours, letters are graphical representations at the level of the phoneme. For this representation to occur, the individual must already be able, in some way, to perceive and manipulate the different sound segments of the word. This capacity of perception directed at the segments of the word is called Phonological Awareness. The objective of this study was thus to verify the performance of 1st- to 4th-grade schoolchildren, with and without learning difficulties, in Phonological Awareness tests. Eighty schoolchildren of both genders participated in the study; children with a mean age of 9 years and 3 months were submitted to the Phonological Awareness Protocol (CIELO, 2002), of which only the phonological tasks were used. The data received a quantitative-qualitative treatment, from whose results inferences were drawn. Statistically significant results occurred in the tasks of Nominal Realism, Syllable Detection, Phoneme Detection, Phonemic Synthesis, and Phonemic Reversal. Based on the results, we observed that children without learning difficulties performed better on all the tasks mentioned above.
Abstract:
This work, entitled Websislapam: People Rating System Based on Web Technologies, allows the creation of questionnaires and the organization of the entities and people who participate in evaluations. Entities collect data from people with the help of resources that reduce typing mistakes. Websislapam maintains a database and provides graphical reports that support the analysis of those evaluated. It was developed with Web technologies such as PHP, JavaScript, and CSS, under the object-oriented programming paradigm, using the MySQL DBMS. As a theoretical basis, research was carried out in the areas of database systems, Web technologies, and Web engineering, covering the evaluation process, Web-based systems and applications, Web engineering, and database systems. The technologies applied in the implementation of Websislapam are described, and a separate chapter presents the main features and artifacts used in its development. A case study demonstrates the practical use of the system.
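The abstract does not detail the data model. Purely as an illustration of the entities it names (questionnaires, people, evaluations), a minimal sketch follows; all names are hypothetical, and the validated constructor stands in for the "resources that reduce typing mistakes".

```java
import java.util.List;

// Hypothetical data model for the entities the abstract names.
record Question(String text, List<String> options) {}
record Questionnaire(String title, List<Question> questions) {}
record Person(String name) {}
record Evaluation(Questionnaire questionnaire, Person subject,
                  List<Integer> answers) {   // index of the chosen option
    /** Validated construction rejects malformed data at entry time. */
    Evaluation {
        if (answers.size() != questionnaire.questions().size())
            throw new IllegalArgumentException("one answer per question");
    }
}
```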
Abstract:
The aim of this work was the development of a computer code for simulating and analyzing atomic spectra from databases built from the literature. Four routines were created that can be useful for spectroscopic studies of atomic processes in laser isotope separation. The first routine, Possible Transitions, checks the possible electronic transitions from an energy level of the atom present in the database, considering the selection rules for an electric dipole transition. The second routine, Transition Locator, checks the possible electronic transitions within a user-specified spectral region. The Spectra Simulator routine creates simulated spectra with Lorentzian line shapes using the graphical application gnuplot, and finally the Electronic Temperature routine determines the electronic excitation temperature of the atom through the Boltzmann plot method. To test the reliability of the program, experimental emission spectra were obtained from a hollow cathode discharge of dysprosium with argon as buffer gas. The hollow cathode discharge was subjected to different operating currents and inert-gas pressures. The spectra obtained were processed with the routines developed (Transition Locator and Spectra Simulator), and the electronic excitation temperatures of the dysprosium atoms under the different discharge conditions were calculated with the Electronic Temperature routine. The results showed that the electronic excitation temperature of the neutral dysprosium atoms in the hollow cathode discharge increases with the current applied to the cathode and also with the buffer-gas pressure. The coefficients of determination, R², obtained by the Electronic Temperature routine from the linear fit of the Boltzmann plot method were greater...
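The Boltzmann plot method referred to above fits ln(Iλ/(gA)) against the upper-level energy E; the slope of the line is -1/(k_B T). A minimal sketch of that calculation is shown below; it is not the thesis routine, and the argument layout is an assumption.

```java
final class BoltzmannPlot {
    /** Electronic excitation temperature from the Boltzmann plot:
     *  y = ln(I * lambda / (g * A)) versus upper-level energy E [J];
     *  a least-squares line y = slope * E + c gives T = -1 / (kB * slope). */
    static double temperature(double[] energyJ, double[] intensity,
                              double[] lambdaM, double[] g, double[] a) {
        final double kB = 1.380649e-23; // Boltzmann constant, J/K
        int n = energyJ.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            double y = Math.log(intensity[i] * lambdaM[i] / (g[i] * a[i]));
            sx += energyJ[i]; sy += y;
            sxx += energyJ[i] * energyJ[i]; sxy += energyJ[i] * y;
        }
        double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        return -1.0 / (kB * slope);
    }
}
```

The R² of this same linear fit is the reliability figure the abstract reports.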
Abstract:
This work presents the development of a graphical interface for a lock-in amplifier used in physiological studies of the motility of the gastrointestinal tract in rats and in signal processing. With simple, low-cost instrumentation, the resources offered by the virtual interface of the LabVIEW software allow the creation of commands similar to those of the actual instrument; communication through a standard serial port transmits data between the PC and the peripheral device, meeting the specific needs of the amplifier. Created for the Stanford Research Systems SR830 lock-in amplifier, the remote manipulation gives the user easier access to the configuration and calibration process, and, since only software needs to be installed, it eliminates the need to purchase new devices to upgrade the system. The commands created perform the six basic adjustments used in the routine of the Biomagnetism Laboratory: amplitude, frequency, time constant, low-pass filter slope, sensitivity, and offset.
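As an illustration of the remote commands involved, the sketch below formats the six adjustments listed above as SR830-style command strings and writes them to an already-open serial output stream. The mnemonics (FREQ, SLVL, OFLT, OFSL, SENS, OEXP) follow the SR830 remote-programming set as I understand it and should be confirmed against the instrument manual; opening the serial port itself (e.g., with a serial I/O library) is left out.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

/** Writes SR830-style remote commands to an already-open serial stream.
 *  Mnemonics assumed from the SR830 manual; verify before use. */
final class Sr830Remote {
    private final OutputStream port;
    Sr830Remote(OutputStream port) { this.port = port; }

    private void send(String cmd) throws IOException {
        port.write((cmd + "\r").getBytes(StandardCharsets.US_ASCII));
        port.flush();
    }

    void setAmplitude(double volts) throws IOException { send("SLVL " + volts); }
    void setFrequency(double hz)    throws IOException { send("FREQ " + hz); }
    void setTimeConstant(int idx)   throws IOException { send("OFLT " + idx); }
    void setFilterSlope(int idx)    throws IOException { send("OFSL " + idx); }
    void setSensitivity(int idx)    throws IOException { send("SENS " + idx); }
    void setOffset(int ch, double pct, int expand) throws IOException {
        send("OEXP " + ch + "," + pct + "," + expand); // channel, offset %, expand
    }
}
```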
Abstract:
This work seeks to demonstrate the advantages of automating functional software tests with the Sikuli tool, which uses image recognition to find the graphical elements of a system, together with a custom library whose methods automate the summarization of the results obtained from the tests and their evidence.
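A minimal sketch of the kind of test this enables, assuming the SikuliX Java API and hypothetical screenshot files; the custom summarization library mentioned in the abstract is not reproduced here.

```java
import org.sikuli.script.FindFailed;
import org.sikuli.script.Screen;

/** Functional test driven by image recognition (SikuliX Java API).
 *  The .png files are hypothetical screenshots of the system under test. */
public class LoginTest {
    public static void main(String[] args) {
        Screen screen = new Screen();
        try {
            screen.click("username_field.png");
            screen.type("test.user");
            screen.click("password_field.png");
            screen.type("secret");
            screen.click("login_button.png");
            // The test passes if the home screen appears within 10 s.
            screen.wait("home_screen.png", 10);
            System.out.println("PASS: login flow");
        } catch (FindFailed e) {
            System.out.println("FAIL: element not found - " + e.getMessage());
        }
    }
}
```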
Abstract:
Suitable computational tools allow building applications that link information to its physical location and represent it in visual, interactive schemes, effectively reaching the power of visual communication. This leads the user to synthesize information in a simple and efficient way. Such applications fall under the definition of Geographic Information Systems (GIS). GIS comprise many concepts and tools whose main purpose is collecting, storing, viewing, and processing spatial data, obtaining the information needed for decision making. Within this context, this work presents the conception and implementation of a control system for urban forestry through the integration of free and open-source software. The conception arose from the needs of an environmental project developed by the Agriculture House of the city of Regente Feijão, whose main objectives are the cataloging and management of the urban afforestation of the municipality. Given this diversity of concepts, the challenge in building the system is the integration of the platforms involved in all stages: collecting and storing data, including maps and other spatial information; operating on the stored information; and obtaining results and their graphical visualization. After implementation, the system gave its users better perception in information analysis and an easier decision-making process.
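The abstract does not name the storage layer. Assuming a PostGIS-style spatial table, one common free and open-source choice, the sketch below shows how a cataloged tree could be inserted with plain JDBC; the table and column names are hypothetical.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

/** Inserts one cataloged tree into a hypothetical PostGIS table
 *  trees(species text, geom geometry(Point, 4326)). */
final class TreeCatalog {
    static void insert(Connection db, String species,
                       double lon, double lat) throws SQLException {
        String sql = "INSERT INTO trees (species, geom) "
                   + "VALUES (?, ST_SetSRID(ST_MakePoint(?, ?), 4326))";
        try (PreparedStatement ps = db.prepareStatement(sql)) {
            ps.setString(1, species);
            ps.setDouble(2, lon);   // PostGIS points are (lon, lat)
            ps.setDouble(3, lat);
            ps.executeUpdate();
        }
    }
}
```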
Abstract:
The trade fair industry has great relevance in the national and international economic environment and is constantly expanding. The general objective of this work is to analyze the main linguistic variants found in the trade fair terminological set. Our research is based on the theories of Cabré (1993, 1999), Barros (2004), Krieger & Finatto (2004), Alves (2007), Barbosa (2009), Dubuc (1985), Berber Sardinha (2004), Babini (2006), and Faulstich (1998, 2001). For this work we built two corpora of specialized texts, one in English and another in Portuguese, and then collected terms using corpus-processing software. These terms were organized into two notional systems, one in English and one in Portuguese, and we analyzed the main types of variants found in our terminological set. Among them, the most productive were the lexical variants, followed by the graphical, morphological, and syntactic variants.
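As an illustration of how graphical variants can be grouped during corpus processing, a sketch with an invented normalization (case folding plus accent stripping) follows; it is not the procedure used in the dissertation, and the sample terms are made up.

```java
import java.text.Normalizer;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

final class VariantGrouper {
    /** Case-fold and strip diacritics so graphical variants share a key. */
    static String key(String term) {
        String s = Normalizer.normalize(term.toLowerCase(), Normalizer.Form.NFD);
        return s.replaceAll("\\p{M}", "");
    }

    public static void main(String[] args) {
        List<String> terms = List.of("estande", "Estande", "stand", "Stand");
        Map<String, List<String>> groups =
            terms.stream().collect(Collectors.groupingBy(VariantGrouper::key));
        System.out.println(groups); // e.g. {estande=[estande, Estande], stand=[stand, Stand]}
    }
}
```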
Abstract:
This paper deals with the hypersegmentation of words, characterized by the unconventional use of a graphical boundary (a white space or a hyphen) within the limits of a word, as in "em bora" (“although”, with a white space) and "chama-da" (“called”, with a hyphen). In a previous study of these data, we identified motivations arising not only from their literate nature but also from morphosyntactic and prosodic information, and we showed, based on the analysis of texts produced by students attending the last four years of elementary school at a public school in São Paulo, that recurrent linguistic features mark these registrations of word boundaries. In this paper, we advance on that study by selecting data whose characteristics do not match the recurrent ones. We argue that the unconventional presence of a boundary within the limits of the written word may be interpreted as representing prosodic configurations (of an intonational and rhythmic nature) that contribute to the construction of meaning relations in the text.
Abstract:
A monitoring network for the atmospheric electric field covering the Vale do Paraíba region was implemented, with sensors located at sites of different altitude and geographic topology. The present work reports a study carried out on those sensors to verify whether a correction factor must be applied to the measured local electric-field intensity because of local environmental effects. Measurements ran continuously, 24 hours per day, with data recorded in registers in each device, accumulating information over a period of four months. The electric-field values of each sensor were compared with those of the reference sensor located in the city of São José dos Campos over the same period. In a graphical analysis of local field intensity against the reference, the data were fitted to a straight line by the least-squares method; variations of up to 95% were observed in the field values of some sensors. Another method was also applied, comparing the mean values of the electric field as a function of time; the variation in some sensors reached up to 133%. We conclude that the variations are due to local atmospheric conditions and that no correction factor is required for the electric-field sensors.
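A minimal sketch of the graphical comparison described above: fitting local sensor readings against the reference by least squares and reading the slope as the relative scale between the two sites. Whether the reported percentages are exactly the slope's deviation from unity or another statistic is not stated in the abstract; this is one plausible reading, and the input arrays are placeholders.

```java
final class FieldSensorComparison {
    /** Fits local = a * reference + b by least squares and returns the
     *  percentage deviation of the slope a from 1 (a perfectly matched
     *  sensor pair would give a = 1, i.e., 0% variation). */
    static double slopeVariationPercent(double[] reference, double[] local) {
        int n = reference.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += reference[i];  sy += local[i];
            sxx += reference[i] * reference[i];
            sxy += reference[i] * local[i];
        }
        double a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        return Math.abs(a - 1.0) * 100.0;
    }
}
```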