925 results for EDGE FILTERS
Abstract:
Early definitions of the Smart Building focused almost entirely on the technology aspect and did not consider user interaction at all; indeed, today we would attribute them more to the concept of the automated building. In this sense, the control of comfort conditions inside buildings is a well-investigated problem, since it has a direct effect on users' productivity and an indirect effect on energy saving. From the users' perspective, a typical environment can be considered comfortable if it provides adequate thermal comfort, visual comfort, indoor air quality, and acoustic comfort. In recent years, the scientific community has faced many challenges, especially from a technological point of view. For instance, smart sensing devices, the Internet, and communication technologies have enabled a new paradigm called Edge computing, which brings computation and data storage closer to the location where they are needed, improving response times and saving bandwidth. This has made it possible to improve services, sustainability, and decision making. Many solutions have been implemented, such as smart classrooms, control of the thermal conditions of a building, monitoring of HVAC data for campus energy efficiency, and so forth. Although these projects contribute to the realization of the smart campus, a framework for the smart campus has yet to be defined. These new technologies have also introduced new research challenges: in this thesis, some of the principal open challenges are faced, proposing a new conceptual framework, technologies, and tools to move the actual implementation of smart campuses forward. With this in mind, several problems known in the literature have been investigated: occupancy detection, noise monitoring for acoustic comfort, context awareness inside the building, indoor wayfinding, strategic deployment for air quality, and book preservation.
Abstract:
Many real-world datasets, including textual data, can be represented using graph structures. Using graphs to represent textual data has many advantages, mainly related to preserving a greater amount of information, such as the relationships between words and their types. In recent years, many neural network architectures have been proposed to deal with tasks on graphs. Many of them consider only node features, ignoring or not giving proper relevance to the relationships between nodes. However, in many node classification tasks, these relationships play a fundamental role. This thesis aims to analyze the main Graph Neural Networks (GNNs), evaluate their advantages and disadvantages, propose an innovative solution conceived as an extension of the Graph Attention Network (GAT), and apply them to a case study in the biomedical field. We implement the reference GNNs with the methodologies analyzed later, and then apply them to a question answering system in the biomedical field as a replacement for its pre-existing GNN. We attempt to obtain better results by using models that can accept both node and edge features as input. As shown later, our proposed models can beat the original solution and define the state of the art for the task under analysis.
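The idea of letting edge features enter the attention score, as in the GAT extension described above, can be illustrated with a small sketch. The function below is a hypothetical single-head, GAT-style attention in plain NumPy, where a learned projection of each edge's features is concatenated with the two node embeddings before scoring; all names and shapes are illustrative assumptions, not the thesis's actual model.

```python
import numpy as np

def edge_aware_attention(h, e, adj, W, We, a):
    """Single-head GAT-style attention that also weighs edge features.

    h:   (N, F)  node features       e:  (N, N, Fe) edge features
    adj: (N, N)  binary adjacency    W: (F, D), We: (Fe, D), a: (3*D,)
    Returns the (N, N) row-normalized attention matrix.
    """
    z = h @ W                      # projected node features, (N, D)
    ze = e @ We                    # projected edge features, (N, N, D)
    N = z.shape[0]
    scores = np.full((N, N), -np.inf)
    for i in range(N):
        for j in range(N):
            if adj[i, j]:
                # concatenate source node, target node, and edge representations
                cat = np.concatenate([z[i], z[j], ze[i, j]])
                s = cat @ a
                scores[i, j] = np.maximum(0.2 * s, s)  # LeakyReLU, slope 0.2
    # softmax over each node's neighborhood
    m = scores.max(axis=1, keepdims=True)
    ex = np.exp(scores - m) * adj
    return ex / ex.sum(axis=1, keepdims=True)
```

In a standard GAT only `[z[i], z[j]]` would be scored; appending `ze[i, j]` is the minimal change that lets relation types influence how much a neighbor contributes.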
Abstract:
Embedding intelligence in extreme edge devices allows raw data acquired from sensors to be distilled into actionable information directly on IoT end-nodes. This computing paradigm, in which end-nodes no longer depend entirely on the Cloud, offers undeniable benefits and is driving a large research area (TinyML) aimed at deploying leading Machine Learning (ML) algorithms on microcontroller-class devices. To fit the limited memory of these tiny platforms, full-precision Deep Neural Networks (DNNs) are compressed by representing their data down to byte and sub-byte formats in the integer domain, yielding Quantized Neural Networks (QNNs). However, the current generation of microcontroller systems can barely cope with the computing requirements of QNNs. This thesis tackles the challenge from many perspectives, presenting solutions at both the software and hardware levels, exploiting parallelism, heterogeneity, and software programmability to guarantee high flexibility and high energy-performance proportionality. The first contribution, PULP-NN, is an optimized software computing library for QNN inference on parallel ultra-low-power (PULP) clusters of RISC-V processors, showing one order of magnitude improvement in performance and energy efficiency compared to current State-of-the-Art (SoA) STM32 microcontroller systems (MCUs) based on ARM Cortex-M cores. The second contribution is XpulpNN, a set of RISC-V domain-specific instruction set architecture (ISA) extensions for sub-byte integer arithmetic computation. The solution, including the ISA extensions and the micro-architecture supporting them, achieves energy efficiency comparable with dedicated DNN accelerators and surpasses the efficiency of SoA ARM Cortex-M based MCUs, such as the low-end STM32L4 and the high-end STM32H7 devices, by up to three orders of magnitude.
To overcome the Von Neumann bottleneck while guaranteeing the highest flexibility, the final contribution integrates an Analog In-Memory Computing accelerator into the PULP cluster, creating a fully programmable heterogeneous fabric that demonstrates end-to-end inference capabilities of SoA MobileNetV2 models, showing two orders of magnitude performance improvements over current SoA analog/digital solutions.
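The sub-byte integer formats mentioned above can be made concrete with a small, illustrative sketch (not actual PULP-NN or XpulpNN code): signed 4-bit values are packed two per byte, sign-extended on unpacking, and dot products are accumulated in 32-bit integers, which is essentially the data layout such kernels operate on.

```python
import numpy as np

def pack_int4(vals):
    """Pack signed 4-bit integers (range [-8, 7]) two per byte, low nibble first."""
    assert len(vals) % 2 == 0
    u = (np.asarray(vals, dtype=np.int8) & 0x0F).astype(np.uint8)
    return u[0::2] | (u[1::2] << 4)

def unpack_int4(packed):
    """Recover signed 4-bit integers from packed bytes."""
    lo = (packed & 0x0F).astype(np.int8)
    hi = (packed >> 4).astype(np.int8)
    # sign-extend the nibbles: values 8..15 represent -8..-1
    lo = np.where(lo > 7, lo - 16, lo)
    hi = np.where(hi > 7, hi - 16, hi)
    out = np.empty(2 * len(packed), dtype=np.int8)
    out[0::2], out[1::2] = lo, hi
    return out

def dot_int4(pa, pb):
    """Dot product of two packed int4 vectors, accumulated in int32."""
    return int(unpack_int4(pa).astype(np.int32) @ unpack_int4(pb).astype(np.int32))
```

Halving (or quartering, with 2-bit data) the footprint of each weight is what makes DNNs fit in a few hundred kilobytes of on-chip memory; the cost, addressed by the ISA extensions above, is the extra unpacking work per operand.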
Abstract:
The fourth industrial revolution, also known as Industry 4.0, has rapidly gained traction in businesses across Europe and the world, becoming a central theme in small, medium, and large enterprises alike. This new paradigm shifts the focus from locally based and barely automated firms to a globally interconnected industrial sector, stimulating economic growth and productivity and supporting the upskilling and reskilling of employees. However, despite the maturity and scalability of information and cloud technologies, the support systems already present at the machine level are often outdated and lack the necessary security, access control, and advanced communication capabilities. This dissertation proposes architectures and technologies designed to bridge the gap between Operational and Information Technology in a manner that is non-disruptive, efficient, and scalable. The proposal presents cloud-enabled data-gathering architectures that use the newest IT and networking technologies to achieve the desired quality of service and non-functional properties. By harnessing industrial and business data, processes can be optimized even before product sale, while the integrated environment enhances data exchange for post-sale support. The architectures have been tested and have shown encouraging performance results, providing a promising solution for companies looking to embrace Industry 4.0, enhance their operational capabilities, and prepare for the upcoming fifth, human-centric revolution.
Abstract:
The first topic analyzed in this thesis is Neural Architecture Search (NAS). I focus on two tools that I developed: one to optimize the architecture of Temporal Convolutional Networks (TCNs), a recently emerged convolutional model for time-series processing, and one to optimize the data precision of tensors inside CNNs. The first NAS explicitly targets the optimization of the most peculiar architectural parameters of TCNs, namely dilation, receptive field, and the number of features in each layer; notably, it is the first NAS to explicitly target these networks. The second NAS instead focuses on finding the most efficient data format for a target CNN, at the granularity of individual layer filters. Applying these two NASes in sequence allows an "application designer" to minimize the structure of the neural network employed, reducing the number of operations or the memory usage of the network. The second topic is the optimization of neural network deployment on edge devices, where exploiting the scarce resources of edge platforms is critical for efficient NN execution on MCUs. To this end, I introduce DORY (Deployment Oriented to memoRY), an automatic tool to deploy CNNs on low-cost MCUs. In different steps, DORY can automatically manage the different levels of memory inside the MCU, offload the computation workload (i.e., the different layers of a neural network) to dedicated hardware accelerators, and generate ANSI C code that orchestrates off- and on-chip transfers together with the computation phases. On top of this, I introduce two optimized computation libraries that DORY can exploit to efficiently deploy TCNs and Transformers on the edge. I conclude the thesis with two applications on bio-signal analysis: heart rate tracking and sEMG-based gesture recognition.
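The TCN parameters searched by the first NAS (kernel size and dilation per layer) determine the receptive field through a standard relation for stride-1 dilated causal convolutions: rf = 1 + sum over layers of (k - 1) * d. A minimal helper illustrating this relation (the function name is our own, not the tool's API):

```python
def tcn_receptive_field(layers):
    """Receptive field, in time steps, of a stack of stride-1 dilated causal
    convolutions, given a list of (kernel_size, dilation) pairs per layer."""
    rf = 1
    for kernel_size, dilation in layers:
        # each layer extends the receptive field by (k - 1) * d steps
        rf += (kernel_size - 1) * dilation
    return rf
```

For the classic configuration with kernel size 3 and dilations doubling as 1, 2, 4, 8, this gives 1 + 2 * (1 + 2 + 4 + 8) = 31 time steps, which is why exponentially growing dilations let a shallow TCN cover long time windows cheaply.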
Abstract:
Two single-crystalline Au surfaces vicinal to the (111) plane were modified with Pt and studied using scanning tunneling microscopy (STM) and X-ray photoemission spectroscopy (XPS) in an ultra-high vacuum environment. The vicinal surfaces studied are Au(332) and Au(887), and different Pt coverages (θPt) were deposited on each surface. From STM images we determined that Pt deposits on both surfaces as nanoislands with heights ranging from 1 ML to 3 ML, depending on θPt. On both surfaces, the early growth of Pt ad-islands occurs at the lower part of the step edge, with Pt ad-atoms being incorporated into the steps in some cases. XPS results indicate that partial alloying of Pt occurs at the interface at room temperature and at all coverages, as suggested by the negative chemical shift of the Pt 4f core line, indicating an upward shift of the d-band center of the alloyed Pt. The existence of a segregated Pt phase, especially at higher coverages, is also detected by XPS. Sample annealing indicates that the temperature rise promotes further incorporation of Pt atoms into the Au substrate, as supported by the STM and XPS results. Additionally, the catalytic activity of different PtAu systems reported in the literature for some electrochemical reactions is discussed in light of our findings.
Abstract:
As graphene has become one of the most important materials, there is renewed interest in other, similar structures. One example is silicene, the silicon analogue of graphene. It shares some of graphene's remarkable properties, such as the Dirac cone, but presents some distinct ones, such as a pronounced structural buckling. We have investigated, through density-functional-based tight-binding (DFTB) simulations, as well as reactive molecular dynamics (using ReaxFF), the mechanical properties of suspended single-layer silicene. We calculated the elastic constants and analyzed the fracture patterns and edge reconstructions. We also addressed the stress distributions, the unbuckling mechanisms, and the dependence of fracture on temperature, and we analyzed the differences due to distinct edge morphologies, namely zigzag and armchair.
Abstract:
Sunlight exposure causes several types of injury to humans, especially to the skin; among the most common harmful effects of ultraviolet (UV) exposure are erythema, pigmentation, and DNA lesions, which may lead to cancer. These long-term effects are minimized by the use of sunscreens, a class of cosmetic products that contain UV filters as the main component of the formulation; such molecules can absorb, reflect, or scatter UV rays, and can be used alone or in combination to broaden protection across different wavelengths. Currently, worldwide regulatory agencies define which ingredients, and in what quantities, may be used in each country, and require companies to conduct tests that confirm the Sun Protection Factor (SPF) and the UVA (Ultraviolet A) factor. Standard SPF determination tests are currently conducted in vivo, on human subjects. From an industrial standpoint, apart from economic and ethical reasons, the introduction of an in vitro method emerges as an interesting alternative, reducing the risks associated with UV exposure during testing as well as providing assertive analytical results. The present work describes a novel methodology for SPF determination directly from sunscreen formulations, using the previously described cosmetomics platform and mass spectrometry as the analytical methods of choice.
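For context on what an in vitro SPF estimate computes, the classical spectroscopic approach (in the spirit of Diffey's method, not the mass-spectrometry methodology proposed in this work) weighs the product's transmittance 10^(-A(λ)) by the erythemal action spectrum E(λ) and the solar spectral irradiance S(λ). A minimal sketch, assuming an evenly spaced wavelength grid and user-supplied spectra:

```python
import numpy as np

def spf_in_vitro(erythemal, irradiance, absorbance):
    """Classical in vitro SPF estimate from an absorbance spectrum A(lambda):

        SPF = sum(E * S) / sum(E * S * 10**-A)

    summed over the UV range; constant wavelength spacing cancels out.
    """
    E = np.asarray(erythemal, dtype=float)
    S = np.asarray(irradiance, dtype=float)
    A = np.asarray(absorbance, dtype=float)
    return float(np.sum(E * S) / np.sum(E * S * 10.0 ** (-A)))
```

A sanity check on the formula: a film with uniform absorbance of 1.0 transmits 10% of erythemally weighted UV at every wavelength, so the estimate reduces to SPF = 10 regardless of the spectra used.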
Abstract:
Approximately 7.2% of the Atlantic rainforest remains in Brazil, with only 16% of this forest remaining in the State of Rio de Janeiro, all of it distributed in fragments. This forest fragmentation can produce biotic and abiotic differences between the edges and the interior of a fragment. In this study, we compared the structure and richness of tree communities in three habitats (an anthropogenic edge, AE; a natural edge, NE; and the fragment interior, FI) of a fragment of Atlantic forest in the State of Rio de Janeiro, Brazil (22°50'S, 42°28'W). One thousand and seventy-six trees with a diameter at breast height > 4.8 cm, belonging to 132 morphospecies and 39 families, were sampled in a total study area of 0.75 ha. NE had the greatest basal area, and the trees in this habitat had the greatest diameter:height allometric coefficient, whereas AE had lower richness and greater variation in the height of the first tree branch. Tree density, diameter, height, and the proportion of standing dead trees did not differ among the habitats. There was marked heterogeneity among replicates within each habitat. These results indicate that the forest interior and the fragment edges (natural or anthropogenic) do not differ markedly in the studied parameters. Other factors, such as the age of the edge, the type of matrix, and the proximity of gaps, may play a more important role in plant community structure than the proximity to edges.
Abstract:
The use of sand filters in localized irrigation systems is recommended in the presence of organic and algal contamination. The proper design and maintenance of this equipment are essential to ensure effective water quality control, in order to reduce emitter clogging, maintain water application uniformity, and prevent increases in system operating costs. Although some references on the design, operation, and maintenance of these filters exist, they are dispersed and lack the detail needed to guarantee the optimization of the hydraulic structure design and the proper selection of the porous media to be used. Therefore, the objective of this work was to present a review of the current literature, relating practical information to scientific knowledge. This review is intended to help induce and intensify research on the subject and to contribute to the equipment fulfilling its operational functions. It is also expected to assist in improving the filtration and flushing processes in agricultural irrigation, the development of original design procedures, and the rational use of these devices.
Abstract:
The presence of vegetal impurities, such as green and dry leaves, in sugarcane delivered to sugar mills is a problem not only because they are non-value materials processed along with the sugarcane stalks, but also because they can raise the color of the clarified juice and, consequently, the color of the sugar produced, reducing its market quality. Another problem is the volume of mud sedimented in the clarifiers, which can also result in larger recirculation and a greater volume of filtrate juice, with higher sucrose losses and greater utilization of the rotary vacuum filters. The objective of this work was to observe the effect of the presence of green and dry leaves on sugarcane juice clarification, compared with a control treatment with the addition of fiber extracted from the stalks. The experiments were planned by adding quantities of the fibrous sources so as to formulate samples with absolute increases of 0.25, 0.50, and 0.75 percentage points over the fiber content of the sugarcane stalks (control treatment). The juice clarification was conducted in a laboratory clarifier, and the clarified juice color and the mud volume were evaluated. The presence of green leaves caused higher color and mud volume due to the extraction of non-sucrose components from the leaves. Soluble compounds of the dry leaves were also extracted, although not detected by the juice analysis. The addition of the fiber extracted from the stalks did not alter the clarification process.
Abstract:
Angle Class III malocclusion has been a challenge for researchers with regard to diagnosis, prognosis, and treatment. It has a prevalence of 5% in the Brazilian population and may have a genetic or environmental etiology. This malocclusion can be classified as dentoalveolar, skeletal, or functional, and this classification determines the prognosis. Considering these topics, the aim of this study was to describe and discuss a clinical case of functional Class III malocclusion treated by a two-stage approach (interceptive and corrective), with long-term follow-up. The patient was treated with a chincup and an Eschler arch, used simultaneously for 14 months, followed by corrective orthodontics. It should be noted that, in this case, the initial diagnosis in centric relation showed the anterior teeth in an edge-to-edge relationship, thereby favoring the prognosis. After completion of treatment, the patient was followed for a 10-year period, and stability was observed. The clinical results showed that favorable outcomes can be achieved with early management in patients with functional Class III malocclusion.
Abstract:
The purpose of this study was to test the hypothesis that human and bovine sclerotic dentin have similar hardness properties, in addition to similar micromorphological characteristics. Sixteen teeth (8 human and 8 bovine) exhibiting exposed dentin on the incisal edge and showing characteristics typical of sclerosis were used. Vickers surface microhardness testing was conducted on three selected areas of the dentin surface of each specimen. All teeth were processed for scanning electron microscopy in order to estimate the percentage of solid dentin on the sclerotic dentin surface. The data were compared by Student's t test (α = 0.05), and the micromorphological and microhardness data were compared by Pearson's linear correlation test (α = 0.05). The mean percentages of solid dentin in human and bovine sclerotic dentin were similar (human 90.71 ± 0.83 and bovine 89.08 ± 0.81, p = 0.18). The mean microhardness value (VHN) of human sclerotic dentin was significantly higher than that of bovine sclerotic dentin (human 45.26 ± 2.92 and bovine 29.93 ± 3.83, p = 0.006). No correlation was found between the microhardness values and the amount of solid dentin in the sclerotic dentin, irrespective of the species considered (human R² = 0.0240, p = 0.714; bovine R² = 0.0017, p = 0.923; combined R² = 0.038, p = 0.46). We concluded that although bovine and human sclerotic dentin present a similar amount of solid tissue, human sclerotic dentin presents higher microhardness than bovine sclerotic dentin.
Abstract:
Brazil's forest ecosystems harbor one of the highest levels of mammal diversity on Earth, and much of this diversity is found in the legally protected areas within privately owned lands. Legal reserves (RLs) and areas of permanent protection (APPs) represent important strategies for the protection and maintenance of this diversity. Proposed changes to the Brazilian Forest Code will certainly have irreversible effects on mammal diversity in Brazil. Mammals play key roles in ecosystems, acting as pollinators and seed dispersers, and the local extinction of some species may reduce the ecological services provided in RLs and APPs. Another serious consequence of the reduction of native vegetation areas, should the change to the Forest Code be approved, will be an increased risk of disease transmission, bringing serious public health problems to Brazil.