Abstract:
We seek to analyse whether the principle of equality can be violated as a consequence of a legislative omission resulting from the failure to adequately implement an article of a substantive code. Can such an omission amount to unequal treatment of certain subjects in comparison with others?
Abstract:
Nowadays, the vulgarization of information and communication technologies has reached a level at which most people spend a great deal of time using software for everyday tasks, ranging from games and ordinary time and weather utilities to more sophisticated ones, such as retail or banking applications. This new way of life is supported by the Internet and by specific applications that have changed the image people had of using information and communication technologies. All over the world, these technologies have also been brought into the first cycle of studies of educational systems, on the grounds that they encourage children's development. Taking this into consideration, we designed and developed a visual explorer system for relational databases that can be used by everyone, from “7 to 77”, in an intuitive and easy way, producing immediate results – a new database querying experience. In this paper we present the main characteristics and features of this visual database explorer, showing how it works and how it can be used to execute the most common data manipulation operations over a database.
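As a rough illustration of the idea, and assuming nothing about the authors' actual implementation, a visual explorer must ultimately translate the user's point-and-click selections into SQL. A minimal sketch of that translation step, with invented table and column names, could look like:

```python
# Hypothetical sketch (not the paper's implementation): turn a visual
# selection of a table, some columns, and filter widgets into a
# parameterized SELECT statement.

def build_query(table, columns, filters=None):
    """Compose a SELECT statement from elements picked in a visual explorer."""
    cols = ", ".join(columns) if columns else "*"
    sql = f"SELECT {cols} FROM {table}"
    if filters:  # each filter is (column, operator, value)
        conds = " AND ".join(f"{col} {op} ?" for col, op, _ in filters)
        sql += f" WHERE {conds}"
        params = [value for _, _, value in filters]
    else:
        params = []
    return sql, params

# A user clicking the "customers" table, picking two columns and one filter:
sql, params = build_query("customers", ["name", "city"], [("age", ">", 30)])
# sql == "SELECT name, city FROM customers WHERE age > ?", params == [30]
```

Keeping the values in a separate parameter list, rather than splicing them into the string, is what lets such a tool hand the query safely to any database driver.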
Abstract:
Usually, data warehousing populating processes are data-oriented workflows composed of dozens of granular tasks that are responsible for the integration of data coming from different data sources. Specific subsets of these tasks can be grouped into a collection, together with their relationships, in order to form higher-level constructs. Increasing task granularity allows for the generalization of processes, simplifying their views and providing ways to carry expertise over to new applications. Well-proven practices can be used to describe general solutions that rely on basic skeletons configured and instantiated according to a set of specific integration requirements. Patterns can be applied to ETL processes aiming not only to simplify a possible conceptual representation but also to reduce the gap that often exists between the two design perspectives. In this paper, we demonstrate the feasibility and effectiveness of an ETL pattern-based approach using task clustering, analyzing a real-world ETL scenario through the definition of two commonly used clusters of tasks: a data lookup cluster and a data conciliation and integration cluster.
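A minimal sketch of what a data lookup cluster typically encapsulates, with all names and the miss-handling policy invented for illustration rather than taken from the paper, could look like:

```python
# Illustrative sketch: a "data lookup" cluster hides the granular tasks
# (read lookup table, match key, handle misses) behind a single
# higher-level construct.

def lookup_cluster(rows, lookup, key, target, default=None):
    """Replace an operational key with the value found in a lookup table.

    Rows whose key is missing from the lookup receive `default`, mimicking
    the miss-handling task such a cluster usually bundles in.
    """
    out = []
    for row in rows:
        enriched = dict(row)                         # keep the source row intact
        enriched[target] = lookup.get(row[key], default)
        out.append(enriched)
    return out

customers = [{"cust_id": "C1"}, {"cust_id": "C9"}]
surrogate_keys = {"C1": 101}
result = lookup_cluster(customers, surrogate_keys, "cust_id", "cust_sk", default=-1)
# C1 resolves to 101; the unknown key C9 falls back to -1
```

The point of the clustering is that a designer reasons about one `lookup_cluster` box, while the tool still executes the underlying granular tasks.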
Abstract:
Developing and implementing data-oriented workflows for data migration processes are complex tasks involving several problems related to the integration of data coming from different schemas. Usually, they involve very specific requirements – every process is almost unique. Having a way to abstract their representation helps us to better understand and validate them with business users, which is a crucial step for requirements validation. In this demo we present an approach that incrementally enriches conceptual models in order to support the automatic production of their corresponding physical implementation. We will show how the B2K (Business to Kettle) system works, transforming BPMN 2.0 conceptual models into Kettle data-integration executable processes, and address the most relevant aspects related to model design and enrichment, model-to-system transformation, and system execution.
Abstract:
During the last few years, many research efforts have been made to improve the design of ETL (Extract-Transform-Load) systems. ETL systems are considered very time-consuming, error-prone, and complex, involving several participants from different knowledge domains. ETL processes are one of the most important components of a data warehousing system and are strongly influenced by the complexity of business requirements and by their change and evolution. These aspects influence not only the structure of a data warehouse but also the structures of the data sources involved. To minimize the negative impact of such variables, we propose the use of ETL patterns to build specific ETL packages. In this paper, we formalize this approach using BPMN (Business Process Model and Notation) for modelling conceptual ETL workflows, mapping them to real execution primitives through the use of a domain-specific language that allows for the generation of specific instances that can be executed in a commercial ETL tool.
Abstract:
Today it is easy to find many tools for defining data migration schemas among different types of information systems. Data migration processes are usually implemented over a very diverse range of applications, ranging from conventional operational systems to data warehousing platforms. The implementation of a data migration process often involves serious planning, considering the development of conceptual migration schemas at early stages. Such schemas help architects and engineers to plan and discuss the most adequate way to migrate data between two different systems. In this paper we present and discuss a way of enriching data migration conceptual schemas in BPMN using a domain-specific language, demonstrating how to convert such enriched schemas into a first corresponding physical representation (a skeleton) in a conventional ETL implementation tool like Kettle.
Abstract:
Today, recovering urban waste requires effective management services, which usually imply sophisticated monitoring and analysis mechanisms. This is essential for the smooth running of the entire recycling process, as well as for planning and controlling urban waste recovery. In this paper we present a business intelligence system especially designed and implemented to support regular decision-making tasks in urban waste management processes. The system provides a set of domain-oriented analytical tools for studying and characterizing potential scenarios of urban waste collection processes, as well as for supporting waste management in urban areas, allowing for the organization and optimization of collection services. In order to clarify the way the system was developed and how it operates, particularly in process visualization and data analysis, we also present the organization model of the system, the services it provides, and the interface platforms for exploring data.
Abstract:
ETL conceptual modeling is a very important activity in any data warehousing system project. Having a high-level system representation that allows for a clear identification of the main parts of a data warehousing system is clearly a great advantage, especially in the early stages of design and development. However, the effort to conceptually model an ETL system is rarely properly rewarded. Translating ETL conceptual models directly into something that saves work and time on the concrete implementation of the system would, in fact, be a great help. In this paper we present and discuss a hybrid approach to this problem, combining the simplicity of interpretation and expressive power of BPMN for ETL systems conceptualization with the use of ETL patterns to automatically produce an ETL skeleton, a first prototype system, which can be executed in a commercial ETL tool like Kettle.
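The skeleton-generation step can be sketched as a mapping from patterns detected in the conceptual model to steps of an XML document. The element names below only loosely echo Kettle's .ktr layout, and the pattern-to-step table is invented for illustration; this is not a loadable transformation file:

```python
# Simplified, hypothetical sketch of generating an ETL skeleton from a
# list of patterns found in a BPMN conceptual model.
import xml.etree.ElementTree as ET

PATTERN_TO_STEP = {            # invented pattern -> step-type mapping
    "SurrogateKeyPipelining": "DBLookup",
    "DataConciliation": "MergeJoin",
}

def skeleton_from_patterns(patterns):
    root = ET.Element("transformation")
    for pattern in patterns:
        step = ET.SubElement(root, "step")
        ET.SubElement(step, "name").text = pattern
        # unknown patterns become placeholder steps to be filled in by hand
        ET.SubElement(step, "type").text = PATTERN_TO_STEP.get(pattern, "Dummy")
    return ET.tostring(root, encoding="unicode")

xml_skeleton = skeleton_from_patterns(["SurrogateKeyPipelining"])
```

The value of such a generator is exactly what the abstract claims: the conceptual model stays in standard BPMN, while a mechanical pass produces a first executable draft to refine inside the tool.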
Abstract:
Modeling Extract-Transform-Load (ETL) processes of a data warehousing system has always been a challenge. The heterogeneity of the sources, the quality of the data obtained, and the conciliation process are some of the issues that must be addressed in the design phase of this critical component. Commercial ETL tools often provide proprietary diagrammatic components and modeling languages that are not standard, thus not providing the ideal separation between a modeling platform and an execution platform. This separation, in conjunction with the use of standard notations and languages, is critical in a system that tends to evolve through time and that cannot be held hostage to a normally expensive tool that turns out to be an unsatisfactory component. In this paper we demonstrate the application of relational algebra as a modeling language for an ETL system, as an effort to standardize operations and provide a basis for non-conventional ETL execution platforms.
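To illustrate the kind of standardization argued for here, an ETL filter-and-lookup step can be written purely in relational algebra; the relation and attribute names below are invented for illustration. A selection over the source, a join against a dimension, and a projection of the attributes to load:

```latex
\mathit{Load} \;=\; \pi_{\,\mathit{cust\_sk},\,\mathit{amount},\,\mathit{date}}
\bigl( \sigma_{\,\mathit{amount} > 0}(\mathit{Sales})
\;\bowtie_{\,\mathit{Sales.cust\_id} \,=\, \mathit{Dim.cust\_id}}\; \mathit{Dim} \bigr)
```

Because selection, projection, and join have well-defined semantics, such an expression can be mapped onto any engine that implements them, whether SQL, a dataflow tool, or a non-conventional execution platform, without being tied to one vendor's diagram notation.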
Abstract:
The computational resources demanded by the processing of large volumes of data during a data warehouse populating process mean that the search for new implementations must also take into account the energy efficiency of the various process components that make up any populating system. There is a clear lack of techniques or methodologies for categorizing and evaluating energy consumption in data warehouse populating systems. Access to this kind of information would make it possible to build data warehouse populating systems with lower energy consumption levels and, therefore, more efficient ones. Starting from the adaptation of techniques applied to database management systems for obtaining the energy consumption of query execution, we designed and implemented a new technique that allows us to obtain the energy consumption of any data warehouse populating process, by evaluating the consumption of each of the components used in its implementation with a conventional tool. In this paper we present how we carry out such an evaluation, demonstrating the feasibility of our proposal with a very typical data warehouse populating process – the chained substitution of operational keys – implemented with the Kettle tool.
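The component-wise evaluation described above can be sketched as summing, over the components of a populating process, average power draw multiplied by execution time. The component names, power figures, and running times below are invented for illustration; real values would come from the measurement technique the paper describes:

```python
# Hedged sketch: approximate the energy of a populating process as
# E = sum(P_i * t_i) over its components, in joules.

def process_energy(components):
    """components: iterable of (name, avg_power_watts, runtime_seconds)."""
    return sum(power * seconds for _, power, seconds in components)

pipeline = [
    ("table input", 35.0, 12.0),   # read operational keys
    ("db lookup", 48.5, 30.0),     # chained key substitution
    ("table output", 40.0, 8.0),   # write to the warehouse
]
joules = process_energy(pipeline)
# 35*12 + 48.5*30 + 40*8 = 420 + 1455 + 320 = 2195 J
```

Breaking the total down per component is what makes it possible to spot which step of the populating process dominates the energy bill.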
Abstract:
This paper reports on the changes in the structural and morphological features occurring in a particular type of nanocomposite thin-film system, composed of Au nanoparticles (NPs) dispersed in a host TiO2 dielectric matrix. The structural and morphological changes, promoted by in-vacuum annealing experiments of the as-deposited thin films at different temperatures (ranging from 200 to 800 °C), resulted in a well-known localized surface plasmon resonance (LSPR) phenomenon, which gave rise to a set of different optical responses that can be tailored for a wide number of applications, including those for optical-based sensors. The results show that the annealing experiments enabled a gradual increase of the mean grain size of the Au NPs (from 2 to 23 nm), and changes in their distributions and separations within the dielectric matrix. For higher annealing temperatures of the as-deposited films, a broad size distribution of Au NPs was found (sizes up to 100 nm). The structural conditions necessary to produce LSPR activity were found to occur for annealing experiments above 300 °C, which corresponded to the crystallization of the gold NPs, with an average size strongly dependent on the annealing temperature itself. The main factor for the promotion of LSPR was the growth of gold NPs and their redistribution throughout the host matrix. On the other hand, the host matrix started to crystallize at an annealing temperature of about 500 °C, which is an important parameter to explain the shift of the LSPR peak position to longer wavelengths, i.e. a red-shift.
Abstract:
This article proposes a new methodology for Small and Medium-sized Enterprises (SMEs), intended to characterize their quality management performance, highlighting weak points and areas for improvement. The methodology aims to identify the main causes of quality problems and helps to set priorities when defining improvement initiatives. It is a diagnostic methodology that is easy to implement by companies with a low level of maturity in quality management. The methodology is organized into six different stages, which include collecting information on quality management processes and subprocesses, defined on the basis of the Juran Trilogy, and on pre-established result categories. To refine and validate the proposed methodology, two case studies were carried out. The application of the methodology was successful in both cases. Subsequently, a report on the state of quality in each company was produced, including the prioritization of the elimination of the root causes of poor performance. The methodology can be adapted to better suit the needs of companies in different sectors, either by revising its processes, by integrating new tools, or by refining existing ones. Given its simplicity and breadth, the methodology developed can be applied as a self-diagnosis tool for continuous improvement.
Abstract:
Companies in the motorcycle components sector are dealing with a dynamic environment, resulting from the introduction of new products and the increase in market demand. This dynamic environment requires frequent changes in production lines and flexibility in the processes, which can cause reductions in quality and productivity levels. This paper presents a Lean Six Sigma improvement project performed in a production line of the company's machining sector, in order to eliminate losses that cause low productivity, affecting the fulfilment of the production plan and customer satisfaction. Applying the Lean methodology through the DMAIC stages allowed the analysis of the factors that influence the line's productivity loss. The major problems and causes identified in this study as contributing to a reduction in productivity are the lack of standardization in the setup activities and the excessive stoppages for process adjustment, which caused an increase in defects. Control charts, Pareto analysis, and cause-and-effect diagrams were used to analyze the problem. In the improvement stage, the changes were based on the reconfiguration of the line layout as well as the modernization of the process. Overall, the project justified an investment in new equipment, defective product units were reduced by 84%, and line capacity increased by 29%.
Abstract:
The objective of this research work is the development of an IT governance model for public universities, so that they can fulfil their mission more effectively and efficiently. The research will identify the IT governance mechanisms used in different university settings and evaluate the efficiency and effectiveness of their implementation, considering contingency factors. Surveys will be administered in contexts that are already well known and about which knowledge is easy to acquire, namely Brazilian federal universities and Portuguese public universities. Design Science Research will be used because of its suitability for solving complex organizational problems that require the development of artefacts, in this case a model for IT governance in public universities. The proposed model will be evaluated by specialists and IT professionals through surveys, interviews, and workshops. The effectiveness and ease of implementation of mechanisms and tools that support IT governance will be analysed. The expected contribution is a model and a guide for its implementation in public universities, while reinforcing the body of knowledge on IT governance.
Abstract:
Master's dissertation in Systems Engineering