922 results for Computer arithmetic and logic units.
Abstract:
The discovery of new materials and their functions has always been a fundamental component of technological progress. Nowadays, the quest for new materials is stronger than ever: sustainability, medicine, robotics and electronics are all key areas that depend on the ability to create specifically tailored materials. However, designing materials with desired properties is a difficult task, and the complexity of the discipline makes it difficult to identify general criteria. While scientists have developed a set of best practices (often based on experience and expertise), this is still a trial-and-error process. It becomes even more complex when dealing with advanced functional materials: their properties depend on structural and morphological features, which in turn depend on fabrication procedures and environment, and subtle alterations lead to dramatically different results. Because of this, materials modeling and design is one of the most prolific research fields, and many techniques and instruments are continuously developed to enable new possibilities, both in the experimental and computational realms. Scientists strive to adopt cutting-edge technologies in order to make progress. However, the field is strongly affected by unorganized file management and by the proliferation of custom data formats and storage procedures, in both experimental and computational research. Results are difficult to find, interpret and re-use, and a huge amount of time is spent interpreting and re-organizing data. This also strongly limits the application of data-driven and machine learning techniques. This work introduces possible solutions to the problems described above. Specifically, it covers: developing features for specific classes of advanced materials and using them to train machine learning models that accelerate computational predictions for molecular compounds; developing a method for organizing non-homogeneous materials data; automating the process of using device simulations to train machine learning models; and dealing with scattered experimental data, using them to discover new patterns.
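As a minimal sketch of the descriptor-plus-regression idea mentioned in the abstract (the descriptors, target values, and choice of a random forest are illustrative assumptions, not the models developed in the thesis):

```python
# Minimal sketch: predicting a molecular property from hand-crafted descriptors.
# Descriptor names and target values are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Each row: [molecular weight, number of rings, dipole moment, HOMO-LUMO gap]
X = np.array([
    [180.2, 2, 1.3, 3.1],
    [94.1, 1, 1.7, 4.0],
    [256.3, 3, 0.4, 2.6],
    [122.1, 1, 2.1, 3.8],
    [310.4, 4, 0.9, 2.2],
])
y = np.array([0.52, 0.71, 0.33, 0.64, 0.28])  # e.g. a normalized target property

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("Predicted:", model.predict(X_test))
```

Once trained on descriptors that are cheap to compute, such a surrogate model can screen candidate compounds far faster than running a full simulation for each one.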
Abstract:
The recent trend of moving Cloud Computing capabilities to the Edge of the network is reshaping how applications and their middleware support are designed, deployed, and operated. This new model envisions a continuum of virtual resources between the traditional cloud and the network edge, which is potentially more suitable to meet the heterogeneous Quality of Service (QoS) requirements of diverse application domains and next-generation applications. Several classes of advanced Internet of Things (IoT) applications, e.g., in the industrial manufacturing domain, exhibit heterogeneous QoS requirements and call for QoS management systems to guarantee and control performance indicators, even in the presence of real-world factors such as limited bandwidth and concurrent virtual resource utilization. The present dissertation proposes a comprehensive QoS-aware architecture that addresses the challenges of integrating cloud infrastructure with edge nodes in IoT applications. The architecture provides end-to-end QoS support by incorporating several components for managing physical and virtual resources. The proposed architecture features: i) a multilevel middleware for resolving the convergence between Operational Technology (OT) and Information Technology (IT), ii) an end-to-end QoS management approach compliant with the Time-Sensitive Networking (TSN) standard, iii) new approaches for virtualized network environments, such as running TSN-based applications under Ultra-low Latency (ULL) constraints in virtual and 5G environments, and iv) an accelerated and deterministic container overlay network architecture. Additionally, the QoS-aware architecture includes two novel middlewares: i) a middleware that transparently integrates multiple acceleration technologies in heterogeneous Edge contexts and ii) a QoS-aware middleware for Serverless platforms that leverages the coordination of various QoS mechanisms and a virtualized Function-as-a-Service (FaaS) invocation stack to manage end-to-end QoS metrics. Finally, all architecture components were tested and evaluated on realistic testbeds, demonstrating the efficacy of the proposed solutions.
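As a highly simplified illustration of QoS-aware resource selection across the cloud-edge continuum (node names, latency figures, and the placement rule are invented and are not part of the proposed architecture):

```python
# Toy QoS-aware placement: pick the cheapest node whose expected latency
# fits the application's end-to-end latency budget. All numbers are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    name: str
    expected_latency_ms: float
    cost_per_hour: float

NODES = [
    Node("edge-gateway", expected_latency_ms=4.0, cost_per_hour=0.12),
    Node("regional-edge", expected_latency_ms=15.0, cost_per_hour=0.07),
    Node("cloud-region", expected_latency_ms=60.0, cost_per_hour=0.03),
]

def place(latency_budget_ms: float) -> Optional[Node]:
    candidates = [n for n in NODES if n.expected_latency_ms <= latency_budget_ms]
    return min(candidates, key=lambda n: n.cost_per_hour) if candidates else None

print(place(10.0))   # ultra-low-latency workload -> edge-gateway
print(place(100.0))  # relaxed budget -> cheapest option, cloud-region
```

A real system would additionally track bandwidth, concurrent resource utilization, and runtime QoS violations, which is exactly what the dissertation's monitoring and management components are responsible for.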
Abstract:
The world currently faces a paradox in terms of accessibility for people with disabilities. While digital technologies hold immense potential to improve their quality of life, the majority of web content still exhibits critical accessibility issues. This PhD thesis addresses this challenge through two interconnected research branches. The first introduces a groundbreaking approach to improving web accessibility by rethinking how it is approached, making it more accessible itself. It involves the development of: 1. AX, a declarative framework of web components that enforces the generation of accessible markup by means of static analysis. 2. An innovative accessibility testing and evaluation methodology, which communicates test results by exploiting concepts developers are already familiar with (visual rendering and mouse operability) to convey the accessibility of a page; this methodology is implemented through the SAHARIAN browser extension. 3. A11A, a categorized and structured collection of curated accessibility resources aimed at helping their intended audiences discover and use them. The second branch focuses on unleashing the full potential of digital technologies to improve accessibility in the physical world. The thesis proposes the SCAMP methodology to make scientific artifacts accessible to blind and visually impaired individuals as well as the general public. It enhances the natural characteristics of objects, making them more accessible through interactive, multimodal, and multisensory experiences. Additionally, the prototype of A11yVT, a system supporting accessible virtual tours, is presented. It provides blind and visually impaired individuals with the features necessary to explore unfamiliar indoor environments, while maintaining universal design principles that make it suitable for use by the general public. The thesis extensively discusses the theoretical foundations, design, development, and unique characteristics of these innovative tools. Usability tests with the intended target audiences demonstrate the effectiveness of the proposed artifacts, suggesting their potential to significantly improve the current state of accessibility.
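As a flavour of the kind of static accessibility check such tooling can automate (a generic illustration using the Python standard library, not the AX framework or the SAHARIAN extension), the sketch below flags img elements lacking an alt attribute:

```python
# Minimal static accessibility check: report <img> tags that lack an alt attribute.
# Generic illustration only; the AX framework and SAHARIAN work differently.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.issues.append("img element without alt attribute")

checker = MissingAltChecker()
checker.feed('<main><img src="chart.png"><img src="logo.png" alt="Company logo"></main>')
print(checker.issues)  # ['img element without alt attribute']
```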
Abstract:
Knowledge graphs and ontologies are closely related concepts in the field of knowledge representation. In recent years, knowledge graphs have gained increasing popularity and serve as essential components in many knowledge engineering projects that view them as crucial to their success. The conceptual foundation of a knowledge graph is provided by ontologies. Ontology modeling is an iterative engineering process that consists of steps such as the elicitation and formalization of requirements and the development, testing, refactoring, and release of the ontology. Testing the ontology is a crucial and occasionally overlooked step of the process, due to the lack of integrated tools to support it. As a result of this gap in the state of the art, ontology testing is carried out manually, which requires a considerable amount of time and effort from ontology engineers. The lack of tool support is also felt in the requirements elicitation process. In this respect, the rise in the adoption and accessibility of knowledge graphs allows for the development and use of automated tools to assist with the elicitation of requirements from such a complementary source of data. Therefore, this doctoral research focuses on developing methods and tools that support the requirements elicitation and testing steps of an ontology engineering process. To support ontology testing, we have developed XDTesting, a web application integrated with the GitHub platform that serves as an ontology testing manager. Concurrently, to support the elicitation and documentation of competency questions, we have defined and implemented RevOnt, a method to extract competency questions from knowledge graphs. Both methods are evaluated through their implementations, and the results are promising.
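To give a concrete sense of what deriving competency questions from a knowledge graph can look like (a deliberately simplified illustration; the triples and question template are invented, and RevOnt's actual pipeline differs):

```python
# Toy illustration of deriving competency questions from knowledge-graph triples.
# The triples and the template are invented; RevOnt's method is more sophisticated.
triples = [
    ("MonaLisa", "createdBy", "LeonardoDaVinci"),
    ("MonaLisa", "exhibitedAt", "Louvre"),
]

def to_competency_question(subject, predicate, obj):
    # Split a camelCase predicate into words: "createdBy" -> "created by"
    words = "".join(c if not c.isupper() else " " + c.lower() for c in predicate)
    return f"Which entity is {subject} {words.strip()}?"

for s, p, o in triples:
    print(to_competency_question(s, p, o))
# Which entity is MonaLisa created by?
# Which entity is MonaLisa exhibited at?
```

Generated questions like these can then be reviewed by ontology engineers and reused as test cases for the ontology, which is where a testing manager such as XDTesting fits in.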
Abstract:
Protected crop production is a modern and innovative approach to cultivating plants in a controlled environment to optimize growth, yield, and quality. This method involves using structures such as greenhouses or tunnels to create a sheltered environment. These productive solutions are characterized by careful regulation of variables such as temperature, humidity, light, and ventilation, which collectively contribute to creating an optimal microclimate for plant growth. Heating, cooling, and ventilation systems are used to maintain optimal conditions for plant growth, regardless of external weather fluctuations. Protected crop production plays a crucial role in addressing challenges posed by climate variability, population growth, and food security. Similarly, animal husbandry involves providing adequate nutrition, housing, medical care and environmental conditions to ensure animal welfare. Sustainability is a critical consideration in all forms of agriculture, including protected crop and animal production. Sustainability in animal production refers to the practice of producing animal products in a way that minimizes negative impacts on the environment, promotes animal welfare, and ensures the long-term viability of the industry. The research activities performed during this PhD fall within the field of Precision Agriculture and Livestock Farming. The focus is on the computational fluid dynamics (CFD) approach and on environmental assessment, applied to improve yield, resource efficiency, environmental sustainability, and cost savings. This represents a significant shift from traditional farming methods to a more technology-driven, data-driven, and environmentally conscious approach to crop and animal production. On one side, CFD is a powerful and precise computer modeling and simulation technique for airflows and thermo-hygrometric parameters, which has been applied to optimize the growth environment of crops and the efficiency of ventilation in pig barns. On the other side, the sustainability aspect has been investigated through Life Cycle Assessment analyses.
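As a back-of-envelope companion to the ventilation work mentioned above (a simple heat-balance estimate, not the CFD models used in the research; all numbers are illustrative):

```python
# Back-of-envelope ventilation sizing: airflow needed to remove a sensible heat
# load at a given indoor-outdoor temperature difference, Q = H / (rho * cp * dT).
# This is a rough estimate, not the CFD simulations used in the thesis.
RHO_AIR = 1.2       # kg/m^3, air density
CP_AIR = 1005.0     # J/(kg*K), specific heat capacity of air

def required_airflow_m3_s(heat_load_w: float, delta_t_k: float) -> float:
    return heat_load_w / (RHO_AIR * CP_AIR * delta_t_k)

# Example: 50 kW of sensible heat, keeping indoor air at most 3 K above outdoor air.
print(f"{required_airflow_m3_s(50_000, 3.0):.1f} m^3/s")  # ~13.8 m^3/s
```

CFD goes well beyond such lumped estimates by resolving the spatial distribution of airflow, temperature, and humidity inside the structure.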
Abstract:
Optical Character Recognition (OCR) systems are a widely adopted technology in the world of Computer Vision and Machine Learning. The topic interests many fields, for example the automotive one, where it becomes a specialized task known as License Plate Recognition, useful for many applications, from toll-road automation to intelligent payments. However, OCR systems need to be very accurate and generalizable in order to extract the text of license plates under highly variable conditions, from the type of camera used for acquisition to changes in lighting. Such variables compromise the quality of digitized real scenes, causing noise and degradation of various types, which can be minimized with the application of modern approaches for image super-resolution and noise reduction. One class of such approaches is known as Generative Neural Networks, which are a very strong ally in the solution of this popular problem.
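As a minimal sketch of the kind of convolutional super-resolution generator such generative approaches build on (layer sizes and the 2x upscale factor are illustrative and do not describe the thesis's network):

```python
# Minimal sketch of a convolutional super-resolution generator (PyTorch).
# Layer sizes and the 2x upscale factor are illustrative, not the thesis's model.
import torch
import torch.nn as nn

class TinySRGenerator(nn.Module):
    def __init__(self, upscale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3 * upscale**2, kernel_size=3, padding=1),
            nn.PixelShuffle(upscale),   # rearranges channels into a 2x larger image
        )

    def forward(self, x):
        return self.body(x)

lr = torch.rand(1, 3, 32, 64)   # a low-resolution plate crop (batch, C, H, W)
sr = TinySRGenerator()(lr)
print(sr.shape)                  # torch.Size([1, 3, 64, 128])
```

In a generative adversarial setting, a generator along these lines would be trained against a discriminator so that the restored plate crops look realistic before being passed to the OCR stage.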
Abstract:
Losses of horticultural products in Brazil are significant, and among the main causes are the use of inappropriate boxes and the absence of a cold chain. A box design is proposed, based on computer simulations, optimization and experimental validation, seeking to minimize the amount of wood while respecting structural and ergonomic requirements and the effective area of the openings. Three box prototypes were designed and built using straight laths with different configurations and areas of openings (54% and 36%). The cooling efficiency of Tommy Atkins mango (Mangifera indica L.) was evaluated by determining the cooling time for fruit packed in the wooden models and in the commercially used cardboard boxes, submitted to cooling in a forced-air system at a temperature of 6ºC and average relative humidity of 85.4±2.1%. The Finite Element Method was applied for the dimensioning and structural optimization of the model with the best behavior in relation to cooling. All wooden boxes with fruit underwent vibration testing for two hours (20 Hz). There was no significant difference in average cooling time among the wooden boxes (36.08±1.44 min); however, the difference was significant in comparison to the cardboard boxes (82.63±29.64 min). In the model chosen for structural optimization (36% effective area of openings and two side laths), the reduction in total volume of material was 60% overall and 83% in the cross section of the columns. There was no indication of mechanical damage in the fruit after the vibration test. Computer simulations and structural studies may be used as a support tool for developing box designs with geometric, ergonomic and thermal criteria.
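For context on how forced-air cooling times like those reported above are commonly characterized (a textbook exponential-cooling model with invented numbers, not necessarily the analysis used in the study):

```python
# Textbook exponential cooling model for forced-air precooling (illustrative only).
# The fractional unaccomplished temperature difference Y = (T - T_air)/(T0 - T_air)
# decays approximately as exp(-k*t).
import math

def cooling_time(fraction_remaining: float, half_cooling_time_min: float) -> float:
    """Time to reach a given Y, given the measured half-cooling time."""
    k = math.log(2) / half_cooling_time_min
    return -math.log(fraction_remaining) / k

# Example with an invented half-cooling time of 10 min:
print(f"7/8 cooling time: {cooling_time(1/8, 10.0):.1f} min")  # 30.0 min (= 3 half-cooling times)
```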
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
Absolute dating methods have been used in chronological studies of geological processes and sedimentary units of Quaternary age in Central Amazonia, Brazil. Although radiocarbon dating has been very useful in archaeological research and soil studies, the temporal range of this method is insufficient for evaluating the sedimentation aspects and geological events from the beginning of the Quaternary in the Amazon basin. Crystal luminescence dating has become one of the most promising tools for the absolute dating of Quaternary deposits in the Amazonian region. Optically stimulated luminescence (OSL) dating, following the MAR and SAR protocols, applied in a tectonic-sedimentary study of Quaternary fluvial deposits in the confluence area of the Negro and Solimões rivers, indicated ages from 1.3 kyr (Holocene) to about 67.4 kyr (Late Pleistocene) for these sediments. Low radioactive isotope concentrations were found: about 2 ppm for 235U and 238U, 5 ppm for 232Th, while 40K concentrations were almost zero. A comparison was made between the MAR and SAR protocols, taking into account the fluvial depositional process.
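The ages quoted above follow from the basic luminescence age equation, age = equivalent dose / dose rate; the sketch below applies it with placeholder values (not values taken from the study):

```python
# Basic luminescence age equation: age = equivalent dose (Gy) / dose rate (Gy/kyr).
# The numbers below are placeholders, not values from the study.
def osl_age_kyr(equivalent_dose_gy: float, dose_rate_gy_per_kyr: float) -> float:
    return equivalent_dose_gy / dose_rate_gy_per_kyr

print(f"{osl_age_kyr(120.0, 1.8):.1f} kyr")  # ~66.7 kyr, Late Pleistocene range
```

The environmental dose rate is derived from the concentrations of radioactive isotopes (U, Th, K) in the sediment, which is why the low concentrations reported above matter for the age determination.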
Abstract:
In this article we intended to offer the reader a summary of one of the most important aspects of the structural analysis we have carried out, over decades, of Jean Piaget's work, above all the close relations between Biology and Logic in the construction and explanation of scientific knowledge. In this sense, we sought to demonstrate that, starting from the concepts of signifying implication and mental image created by Piaget, a new field of investigation opens up, namely the one we call the field of systems of non-logical signification, a field of great relevance that fills a centuries-old gap between reason and emotion still present today in research on the normal and pathological phenomena of the psyche.
Abstract:
OBJECTIVE: With the inclusion of new contemporary technologies, the Internet and electronic games have become tools of broad and unrestricted use, turning into one of the greatest worldwide phenomena of the last decade. Several studies attest to the benefits of these resources, but their healthy and adaptive use has progressively given way to abuse and lack of control, creating severe impacts on the daily lives of millions of users. The objective of this study was to systematically review the articles that examine Internet and electronic game addiction in the general population. We therefore aimed to evaluate the evolution of these concepts over the last decade, as well as to contribute to a better understanding of the condition and its comorbidities. METHOD: A systematic literature review was carried out in MedLine, Lilacs, SciELO and Cochrane using the terms: "Internet addiction", "pathological Internet use", "problematic Internet use", "Internet abuse", "videogame", "computer games" and "electronic games". The electronic search was conducted up to December 2007. DISCUSSION: Studies carried out in different countries report widely varying prevalence rates, which is probably due to the lack of consensus and the use of different denominations, allowing the adoption of distinct diagnostic criteria. Many patients who report abuse and dependence go on to present significant impairment in their professional, academic (school), social and family lives. CONCLUSIONS: Further investigations are needed to determine whether this abusive use of the Internet and electronic games can be understood as one of the newest psychiatric classifications of the 21st century or merely a substrate of other disorders.
Abstract:
OBJECTIVE: To evaluate the overall quality of the meals offered by Food and Nutrition Units of companies enrolled in the Workers' Meal Program (Programa de Alimentação do Trabalhador) in the city of São Paulo. METHODS: Cross-sectional study carried out with 72 companies registered in the program. Information was collected on three consecutive days for the meals offered at lunch, dinner and supper. The quality of the meals offered was evaluated using the Meal Quality Index, and the analysis was stratified according to company profile obtained by cluster analysis. RESULTS: The mean Meal Quality Index for the main meals was 66.25. Two groups of companies were obtained in the cluster analysis. The companies in the first group, composed mostly of micro and small commerce-sector companies, registered under the self-management modality and without the supervision of a dietitian, had the worst meal quality (Index = 56.23). The companies in the second cluster, consisting mainly of medium and large industrial-sector companies with outsourced management and dietitian supervision, had a mean Index score of 82.95. CONCLUSION: The meals offered by the companies participating in the Workers' Meal Program were not adequate according to the Meal Quality Index. Smaller companies with less structure offered meals of worse quality compared with the others, showing that companies with this profile should be priorities for interventions within the Workers' Meal Program.
Abstract:
Shallow subsurface layers of gold nanoclusters were formed in polymethylmethacrylate (PMMA) polymer by very low energy (49 eV) gold ion implantation. The ion implantation process was modeled by computer simulation and accurately predicted the layer depth and width. Transmission electron microscopy (TEM) was used to image the buried layer and individual nanoclusters; the layer width was approximately 6-8 nm and the cluster diameter approximately 5-6 nm. Surface plasmon resonance (SPR) absorption effects were observed by UV-visible spectroscopy. The TEM and SPR results were related to prior measurements of electrical conductivity of Au-doped PMMA, and excellent consistency was found with a model of electrical conductivity in which either, at low implantation dose, the individual nanoclusters are separated and do not physically touch each other, or, at higher implantation dose, the nanoclusters touch each other to form a random resistor network (percolation model). (C) 2009 American Vacuum Society. [DOI: 10.1116/1.3231449]
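The percolation picture invoked above, isolated clusters at low dose versus a connected random resistor network above a threshold, can be illustrated qualitatively with a toy 2D site-percolation simulation (grid size and fill fractions are arbitrary and unrelated to the actual implantation doses):

```python
# Toy 2D site percolation: estimate how often a randomly filled grid has a
# connected path from the left edge to the right edge. Qualitative illustration
# of the dose-dependent conduction picture; parameters are arbitrary.
import random
from collections import deque

def percolates(n: int, p: float) -> bool:
    grid = [[random.random() < p for _ in range(n)] for _ in range(n)]
    queue = deque((r, 0) for r in range(n) if grid[r][0])
    seen = set(queue)
    while queue:
        r, c = queue.popleft()
        if c == n - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

random.seed(0)
for p in (0.4, 0.6, 0.8):   # site-percolation threshold on a square lattice is ~0.593
    rate = sum(percolates(50, p) for _ in range(100)) / 100
    print(f"fill fraction {p:.1f}: percolation probability ~{rate:.2f}")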
Abstract:
The most popular algorithms for blind equalization are the constant-modulus algorithm (CMA) and the Shalvi-Weinstein algorithm (SWA). It is well known that SWA presents a higher convergence rate than CMA, at the expense of higher computational complexity. If the forgetting factor is not sufficiently close to one, if the initialization is distant from the optimal solution, or if the signal-to-noise ratio is low, SWA can converge to undesirable local minima or even diverge. In this paper, we show that divergence can be caused by an inconsistency in the nonlinear estimate of the transmitted signal, or (when the algorithm is implemented in finite precision) by the loss of positiveness of the estimate of the autocorrelation matrix, or by a combination of both. In order to avoid the first cause of divergence, we propose a dual-mode SWA. In the first mode of operation, the new algorithm works as SWA; in the second mode, it rejects inconsistent estimates of the transmitted signal. Assuming the persistence-of-excitation condition, we present a deterministic stability analysis of the new algorithm. To avoid the second cause of divergence, we propose a dual-mode lattice SWA, which is stable even in finite-precision arithmetic, and has a computational complexity that increases linearly with the number of adjustable equalizer coefficients. The good performance of the proposed algorithms is confirmed through numerical simulations.
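For readers unfamiliar with the baseline, the following NumPy sketch shows the textbook constant-modulus update used by CMA, the algorithm that SWA and the proposed dual-mode variants improve upon (a generic illustration with an invented channel and a real-valued 2-PAM signal, not the paper's algorithms):

```python
# Textbook constant-modulus algorithm (CMA) for blind equalization.
# Generic illustration with an invented dispersive channel; not the paper's methods.
import numpy as np

rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=5000)            # transmitted 2-PAM symbols
x = np.convolve(s, [1.0, 0.4, 0.2])[:len(s)]       # invented dispersive channel
x += 0.01 * rng.standard_normal(len(s))            # mild additive noise

L, mu = 11, 1e-3
w = np.zeros(L); w[L // 2] = 1.0                   # centre-spike initialization
R2 = np.mean(s**4) / np.mean(s**2)                 # constant-modulus radius (= 1 for 2-PAM)

for n in range(L, len(x)):
    u = x[n - L:n][::-1]                           # regressor (most recent sample first)
    y = w @ u                                      # equalizer output
    e = y * (R2 - y**2)                            # CMA error term
    w += mu * e * u                                # stochastic-gradient update

print(f"output dispersion after adaptation: {np.mean((np.convolve(x, w)[:len(s)]**2 - R2)**2):.3f}")
```

SWA replaces this stochastic-gradient update with a recursive, quasi-Newton-like update involving an estimate of the autocorrelation matrix, which is where the forgetting factor and the positiveness issues discussed in the abstract come into play.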
Abstract:
This pilot project at Cotton Tree, Maroochydore, occupies two adjacent, linear parcels of land, one privately owned and the other owned by the public housing authority. Both owners commissioned Lindsay and Kerry Clare to design housing for their separate needs, which enabled the two projects to be governed by a single planning and design strategy. This entailed the realignment of the dividing boundary to form two approximately square blocks, which made possible the retention of an important stand of mature paperbark trees and gave each block a more useful street frontage. The scheme provides seven two-bedroom units and one single-bedroom unit as the private component, with six single-bedroom units, three two-bedroom units and two three-bedroom units forming the public housing. The dwellings are deployed as an interlaced mat of freestanding blocks, car courts, courtyard gardens, patios and decks. The key distinction between the public and private parts of the scheme is the pooling of the car parking spaces in the public housing to create a shared courtyard. The housing climbs to three storeys on its southern edge and falls to a single storey on the north-western corner. This enables all units and the principal private outdoor spaces to have a northern orientation. The interiors of both the public and private units are skilfully arranged to take full advantage of views, light and breeze.