900 results for End-to-side neurorrhaphy


Relevance:

100.00%

Publisher:

Abstract:

The involvement of the Venezuelan State in the production and marketing of coffee from 1974 to 1991 did not put an end to the social and economic control held by traditional power groups in the Andean region. Such groups organized into new power nuclei together with bureaucrats from the coffee-sector institutions, thus demonstrating the persistence of non-economic components of Andean social organization.

Relevance:

100.00%

Publisher:

Abstract:

This article presents the results of a research project on the scholarly work of men and women in the fields of science and technology at the Universidad Nacional. The qualitative analysis was based on the narratives of female and male scholars interviewed in 2006 and 2007. The objective was to explain and understand the unequal participation and the differences in scientific production between the sexes, which stand at a 35/65 ratio in favor of men. This paper intends to contribute to the process of making sexist practices visible, as a means to assess what has been done and what still lies ahead, a necessary step for advancing policies on gender equality at the University. This article closes the series of reports from our research project on female and male participation in science, first published in Temas 47.

Relevance:

100.00%

Publisher:

Abstract:

Embedding intelligence in extreme edge devices allows distilling raw data acquired from sensors into actionable information, directly on IoT end-nodes. This computing paradigm, in which end-nodes no longer depend entirely on the Cloud, offers undeniable benefits and drives a large research area (TinyML) aimed at deploying leading Machine Learning (ML) algorithms on microcontroller-class devices. To fit the limited memory of these tiny platforms, full-precision Deep Neural Networks (DNNs) are compressed into Quantized Neural Networks (QNNs) by representing their data down to byte and sub-byte integer formats. However, the current generation of microcontroller systems can barely cope with the computing requirements of QNNs. This thesis tackles the challenge from many perspectives, presenting solutions at both the software and hardware levels and exploiting parallelism, heterogeneity, and software programmability to guarantee high flexibility and high energy-performance proportionality. The first contribution, PULP-NN, is an optimized software computing library for QNN inference on parallel ultra-low-power (PULP) clusters of RISC-V processors, showing one order of magnitude improvement in performance and energy efficiency compared to current state-of-the-art (SoA) STM32 microcontroller units (MCUs) based on ARM Cortex-M cores. The second contribution is XpulpNN, a set of RISC-V domain-specific instruction set architecture (ISA) extensions for sub-byte integer arithmetic. The solution, including the ISA extensions and the micro-architecture supporting them, achieves energy efficiency comparable with dedicated DNN accelerators and surpasses the efficiency of SoA ARM Cortex-M based MCUs, such as the low-end STM32M4 and the high-end STM32H7 devices, by up to three orders of magnitude.
To overcome the Von Neumann bottleneck while guaranteeing the highest flexibility, the final contribution integrates an Analog In-Memory Computing accelerator into the PULP cluster, creating a fully programmable heterogeneous fabric that demonstrates end-to-end inference capabilities of SoA MobileNetV2 models, showing two orders of magnitude performance improvements over current SoA analog/digital solutions.
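The byte and sub-byte compression described above can be sketched in a few lines. The following is a minimal NumPy illustration of symmetric 4-bit quantization and nibble packing; the function names and the particular quantization scheme are illustrative assumptions, not PULP-NN's actual API.

```python
import numpy as np

def quantize_int4(weights: np.ndarray):
    """Symmetric linear quantization of float weights to signed 4-bit codes."""
    scale = np.abs(weights).max() / 7.0  # map the largest magnitude to +/-7
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def pack_int4(q: np.ndarray) -> np.ndarray:
    """Pack pairs of int4 codes into single bytes (low nibble first)."""
    assert q.size % 2 == 0
    u = (q & 0x0F).astype(np.uint8)      # two's-complement nibbles
    return (u[0::2] | (u[1::2] << 4)).astype(np.uint8)

w = np.array([0.9, -0.3, 0.1, -0.7])
q, s = quantize_int4(w)   # 4 int4 codes plus one float scale
packed = pack_int4(q)     # 2 bytes instead of 16 bytes as float32
```

Dequantization multiplies the unpacked codes back by the scale; the 8x storage reduction over float32 is what makes sub-byte formats attractive on memory-constrained MCUs.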

Relevance:

100.00%

Publisher:

Abstract:

The deployment of ultra-dense networks is one of the most promising solutions to manage the phenomenon of co-channel interference that affects the latest wireless communication systems, especially in hotspots. To meet the requirements of the use-cases and the immense amount of traffic generated in these scenarios, 5G ultra-dense networks are being deployed using various technologies, such as distributed antenna systems (DAS) and cloud radio access networks (C-RAN). Through these centralized densification schemes, virtualized baseband processing units coordinate the distributed access points and manage the available network resources. In particular, link adaptation techniques prove fundamental to overall system operation and performance enhancement. The core of this dissertation is the result of an analysis and comparison of dynamic and adaptive methods for modulation and coding scheme (MCS) selection applied to the latest mobile telecommunications standards. A novel algorithm based on proportional-integral-derivative (PID) controller principles and a block error rate (BLER) target is proposed. Tests were conducted in a 4G and 5G system-level laboratory and, by means of a channel emulator, the performance was evaluated for different channel models and target BLERs. Furthermore, due to the intrinsic sectorization of the end-user distribution in the investigated scenario, a preliminary analysis of the joint application of user-grouping algorithms with multi-antenna and multi-user techniques was performed. In conclusion, the importance and impact of other fundamental physical-layer operations, such as channel estimation and power control, on the overall end-to-end system behavior and performance were highlighted.
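As a rough sketch of how a PID controller can steer MCS selection toward a BLER target, consider the following. The gains, the MCS range, and the use of a CQI-suggested baseline are illustrative assumptions, not the dissertation's actual algorithm or parameter values.

```python
class PidMcsSelector:
    """Hypothetical PID-driven MCS selector with a BLER target."""

    def __init__(self, target_bler=0.1, kp=2.0, ki=0.5, kd=0.1, num_mcs=28):
        self.target = target_bler
        self.kp, self.ki, self.kd = kp, ki, kd
        self.num_mcs = num_mcs
        self.integral = 0.0
        self.prev_error = 0.0
        self.offset = 0.0  # correction added to the CQI-suggested MCS

    def update(self, measured_bler: float, cqi_mcs: int) -> int:
        # Positive error means BLER is above target, so the MCS is lowered.
        error = measured_bler - self.target
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        self.offset -= (self.kp * error
                        + self.ki * self.integral
                        + self.kd * derivative)
        mcs = round(cqi_mcs + self.offset)
        return max(0, min(self.num_mcs - 1, mcs))  # clamp to valid indices
```

With a measured BLER above the target, the offset turns negative and the selector backs off from the CQI-suggested MCS; when the channel improves, the integral term gradually restores it.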

Relevance:

100.00%

Publisher:

Abstract:

The aim of this dissertation is to describe the methodologies required to design, operate, and validate the performance of ground stations dedicated to near and deep space tracking, as well as the models developed to process the acquired signals, from raw data to the output parameters of spacecraft orbit determination. This work is framed in the context of lunar and planetary exploration missions, addressing the challenges of receiving and processing radiometric data for radio science investigations and navigation purposes. These challenges include the design of an appropriate back-end to read, convert, and store the antenna voltages; the definition of appropriate methodologies for pre-processing, calibration, and estimation of radiometric data to extract information on the spacecraft state; and the definition and integration of accurate models of the spacecraft dynamics to evaluate the quality of the recorded signals. Additionally, the experimental design of acquisition strategies to perform direct comparisons between ground stations is described and discussed. In particular, evaluating the differential performance between stations requires the design of a dedicated tracking campaign that maximizes the overlap of the datasets recorded at the receivers, making it possible to correlate the received signals and isolate the ground segment's contribution to the noise in the single link. Finally, in support of the methodologies and models presented, results from the validation and design work performed on the Deep Space Network (DSN) affiliated nodes DSS-69 and DSS-17 are also reported.

Relevance:

100.00%

Publisher:

Abstract:

The recent trend of moving Cloud Computing capabilities to the Edge of the network is reshaping how applications and their middleware supports are designed, deployed, and operated. This new model envisions a continuum of virtual resources between the traditional cloud and the network edge, which is potentially more suitable to meet the heterogeneous Quality of Service (QoS) requirements of diverse application domains and next-generation applications. Several classes of advanced Internet of Things (IoT) applications, e.g., in the industrial manufacturing domain, exhibit a wide range of heterogeneous QoS requirements and call for QoS management systems to guarantee/control performance indicators, even in the presence of real-world factors such as limited bandwidth and concurrent virtual resource utilization. The present dissertation proposes a comprehensive QoS-aware architecture that addresses the challenges of integrating cloud infrastructure with edge nodes in IoT applications. The architecture provides end-to-end QoS support by incorporating several components for managing physical and virtual resources. The proposed architecture features: i) a multilevel middleware for resolving the convergence between Operational Technology (OT) and Information Technology (IT), ii) an end-to-end QoS management approach compliant with the Time-Sensitive Networking (TSN) standard, iii) new approaches for virtualized network environments, such as running TSN-based applications under Ultra-Low Latency (ULL) constraints in virtual and 5G environments, and iv) an accelerated and deterministic container overlay network architecture.
Additionally, the QoS-aware architecture includes two novel middleware components: i) one that transparently integrates multiple acceleration technologies in heterogeneous Edge contexts, and ii) a QoS-aware middleware for Serverless platforms that coordinates various QoS mechanisms and a virtualized Function-as-a-Service (FaaS) invocation stack to manage end-to-end QoS metrics. Finally, all architecture components were tested and evaluated on realistic testbeds, demonstrating the efficacy of the proposed solutions.

Relevance:

100.00%

Publisher:

Abstract:

The abundance of visual data and the push for robust AI are driving the need for automated visual sensemaking. Computer Vision (CV) faces growing demand for models that can discern not only what images "represent," but also what they "evoke." This is a demand for tools that mimic human perception at a high semantic level, categorizing images based on concepts like freedom, danger, or safety. However, automating this process is challenging due to entropy, scarcity, subjectivity, and ethical considerations. These challenges not only impact performance but also underscore the critical need for interpretability. This dissertation focuses on abstract concept (AC) based image classification, guided by three technical principles: situated grounding, performance enhancement, and interpretability. We introduce ART-stract, a novel dataset of cultural images annotated with ACs, which serves as the foundation for a series of experiments across four key domains: assessing the effectiveness of the end-to-end deep learning (DL) paradigm, exploring cognitively inspired semantic intermediaries, incorporating cultural and commonsense aspects, and neuro-symbolic integration of sensory-perceptual data with cognition-based knowledge. Our results demonstrate that integrating CV approaches with semantic technologies yields methods that surpass the current state of the art in AC image classification, outperforming the end-to-end deep vision paradigm. The results emphasize the role semantic technologies can play in developing systems that are both effective and interpretable, through capturing, situating, and reasoning over knowledge related to visual data. Furthermore, this dissertation explores the complex interplay between technical and socio-technical factors. By merging technical expertise with an understanding of human and societal aspects, we advocate for responsible labeling and training practices in visual media. These insights and techniques not only advance efforts in CV and explainable artificial intelligence but also propel us toward an era of AI development that harmonizes technical prowess with deep awareness of its human and societal implications.

Relevance:

100.00%

Publisher:

Abstract:

Ill-conditioned inverse problems frequently arise in life sciences, particularly in the context of image deblurring and medical image reconstruction. These problems have been addressed through iterative variational algorithms, which regularize the reconstruction by adding prior knowledge about the problem's solution. Despite the theoretical reliability of these methods, their practical utility is constrained by the time required to converge. Recently, the advent of neural networks allowed the development of reconstruction algorithms that can compute highly accurate solutions with minimal time demands. Regrettably, it is well-known that neural networks are sensitive to unexpected noise, and the quality of their reconstructions quickly deteriorates when the input is slightly perturbed. Modern efforts to address this challenge have led to the creation of massive neural network architectures, but this approach is unsustainable from both ecological and economic standpoints. The recently introduced GreenAI paradigm argues that developing sustainable neural network models is essential for practical applications. In this thesis, we aim to bridge the gap between theory and practice by introducing a novel framework that combines the reliability of model-based iterative algorithms with the speed and accuracy of end-to-end neural networks. Additionally, we demonstrate that our framework yields results comparable to state-of-the-art methods while using relatively small, sustainable models. In the first part of this thesis, we discuss the proposed framework from a theoretical perspective. We provide an extension of classical regularization theory, applicable in scenarios where neural networks are employed to solve inverse problems, and we show there exists a trade-off between accuracy and stability. Furthermore, we demonstrate the effectiveness of our methods in common life science-related scenarios. 
In the second part of the thesis, we begin extending the proposed framework to the probabilistic domain. We analyze some properties of deep generative models, revealing their potential applicability to ill-posed inverse problems.

Relevance:

100.00%

Publisher:

Abstract:

The study is divided into two main parts: one focused on GEO satellite IoT and the other on LEO satellite IoT. Concerning GEO satellite IoT, the activity was developed in the context of the EUMETSAT Data Collection Service (DCS) by investigating receiver performance in challenging scenarios. DCS is provided by several GEO satellite operators, giving almost total coverage around the world. In this study, an overview of the DCS end-to-end architecture is first given, followed by a detailed description of the two tools used for the simulations: the DCP-TST (message generator and transmitter) and the DCP-RX (receiver). After generating several test messages, performance was evaluated with the addition of impairments (CW and sweeping interference), and considerations in terms of BER and good messages are produced. Furthermore, a study of the PLL system is also conducted, together with evaluations of how tuning the PLL bandwidth affects the overall performance. Concerning LEO satellite IoT, the activity was carried out in the framework of the ASI Bidirectional IoT Satellite Service (BISS) project. The work covers a survey of the possible services that the project can accomplish and a technical analysis of the uplink multiple access (MA). In particular, LR-FHSS is shown to be a valid alternative for the uplink through an extensive analysis of its network capacity and through the study of an analytic model for the success probability, together with its MATLAB implementation.
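For intuition about uplink multiple-access capacity analyses of this kind, a textbook pure-ALOHA collision model can be sketched as below. This is a generic illustration of how success probability and throughput depend on offered load; it is not the BISS project's LR-FHSS analytic model.

```python
import math

def aloha_success_probability(load_erlangs: float) -> float:
    """P(success) = exp(-2G) for pure ALOHA at normalized offered load G:
    a frame survives only if no other arrival falls in its 2-frame
    vulnerability window (Poisson arrivals)."""
    return math.exp(-2 * load_erlangs)

def throughput(load_erlangs: float) -> float:
    """Normalized throughput S = G * exp(-2G); it peaks at G = 0.5."""
    return load_erlangs * aloha_success_probability(load_erlangs)
```

Frequency-hopping schemes such as LR-FHSS improve on this baseline by fragmenting transmissions across channels so that partial collisions can still be recovered, which is why a dedicated analytic model is needed in the project.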

Relevance:

100.00%

Publisher:

Abstract:

Nowadays, injecting world or domain-specific structured knowledge into pre-trained language models (PLMs) is becoming an increasingly popular approach for addressing problems such as biases, hallucinations, huge architectural sizes, and lack of explainability, which are critical for real-world natural language processing applications in sensitive fields like bioinformatics. One recent work that has garnered much attention in neuro-symbolic AI is QA-GNN, an end-to-end model for multiple-choice open-domain question answering (MCOQA) tasks via interpretable text-graph reasoning. Unlike previous publications, QA-GNN mutually informs PLMs and graph neural networks (GNNs) on top of relevant facts retrieved from knowledge graphs (KGs). However, taking a more holistic view, existing PLM+KG contributions mainly consider commonsense benchmarks and ignore, or only shallowly analyze, performance on biomedical datasets. This thesis starts from a deep investigation of QA-GNN for biomedicine, comparing existing or brand-new PLMs, KGs, edge-aware GNNs, preprocessing techniques, and initialization strategies. By combining the insights that emerged from DISI's research, we introduce Bio-QA-GNN, which includes a KG. This work has led to a new state-of-the-art MCOQA model on biomedical/clinical text, largely outperforming the original one (+3.63% accuracy on MedQA). Our findings also contribute to a better understanding of the degree of explanation allowed by joint text-graph reasoning architectures and their effectiveness on different medical subjects and reasoning types. Code, models, datasets, and demos to reproduce the results are freely available at: https://github.com/disi-unibo-nlp/bio-qagnn.

Relevance:

100.00%

Publisher:

Abstract:

Planning is an important sub-field of artificial intelligence (AI) focusing on letting intelligent agents deliberate on the most adequate course of action to attain their goals. Thanks to the recent boost in the number of critical domains and systems which exploit planning for their internal procedures, there is an increasing need for planning systems to become more transparent and trustworthy. Along this line, planning systems are now required to produce not only plans but also explanations about those plans, or the way they were attained. To address this issue, a new research area is emerging in the AI panorama: eXplainable AI (XAI), within which explainable planning (XAIP) is a pivotal sub-field. As a recent domain, XAIP is far from mature. No consensus has been reached in the literature about what explanations are, how they should be computed, and what they should explain in the first place. Furthermore, existing contributions are mostly theoretical, and software implementations are rarely more than preliminary. To overcome such issues, in this thesis we design an explainable planning framework bridging the gap between theoretical contributions from the literature and software implementations. More precisely, taking inspiration from the state of the art, we develop a formal model for XAIP, and the software tool enabling its practical exploitation. Accordingly, the contribution of this thesis is fourfold. First, we review the state of the art of XAIP, supplying an outline of its most significant contributions from the literature. We then generalise the aforementioned contributions into a unified model for XAIP, aimed at supporting model-based contrastive explanations. Next, we design and implement an algorithm-agnostic library for XAIP based on our model. Finally, we validate our library from a technological perspective, via an extensive testing suite. Furthermore, we assess its performance and usability through a set of benchmarks and end-to-end examples.

Relevance:

40.00%

Publisher:

Abstract:

This article reports the case of a 55-year-old female patient who presented with unsatisfactory temporary crowns on the right mandibular premolars and molars, and a premolar-to-molar fixed partial denture on the left side. Clinical and radiographic examinations revealed a fracture of the left first premolar, which was a retainer of the fixed partial denture and required extraction. Initially, the acrylic resin crowns were replaced with new ones, and a provisional removable partial denture (RPD) was made using acrylic resin and orthodontic wire clasps to resolve the problem arising from the loss of the fixed partial denture. Considering the patient's high esthetic demands, the options for the definitive prosthetic treatment were discussed with her, and rehabilitation with implant-supported dentures was proposed, because the clinical conditions of the residual alveolar ridge were suitable for implant placement and the patient's general health was excellent. However, the patient did not agree, because she knew of a failed case of an implant-retained denture in a diabetic individual and was concerned. The patient was fully informed that implant placement was the best indication for her case, but the arguments were not sufficient to change her decision. The remaining treatment possibilities were presented, and the patient opted for a clasp-retained RPD associated with the placement of crowns on the abutment teeth. The temporary RPD was subsequently replaced by the definitive RPD. Although an RPD was not the first choice, satisfactory esthetic and functional outcomes were achieved, exceeding the patient's expectations. This case report illustrates that the dentist must be prepared to deal with situations where, for reasons that cannot be managed, the patient does not accept the treatment considered the most indicated for his/her case. Alternatives must be proposed, and the functional and esthetic requirements must be fulfilled in the best possible manner.

Relevance:

40.00%

Publisher:

Abstract:

Purpose: To evaluate the level of inclination of the surgeon's spinal column (ISSC) while performing laparoscopic radical prostatectomy (LRP) using one trocar on each side of the patient's abdomen ("torero" position) in two scenarios: with and without a thin head supporter adapted to the table. Materials and Methods: Based on trigonometric principles, we elaborated a formula to calculate the ISSC for a given surgeon and surgical table while performing LRP in the torero position. The parameters considered were the width of the surgical table (m), the distance between the surgeon's anterior superior iliac spines (q), and the distance from the central point between the surgeon's anterior superior iliac spines to the surgeon's head (h). We used the formula α = 90° − cos⁻¹(b/h), where b = q/2 + m/2, in an Excel sheet to calculate the angle of inclination of the surgeon's spinal column. We applied the measures of 12 surgeons of our staff, with different biotypes, to calculate the ISSC with and without the thin head supporter. Results: The use of a thin head supporter reduced the mean ISSC in the torero position from 36.1 ± 3.73 degrees (range 31.3 to 49.8 degrees) to 22.1 ± 4.9 degrees (range 18.7 to 32.9 degrees), which corresponds to a reduction of 38.8% in the mean angle of inclination. This difference was statistically significant (P < 0.001). Conclusion: The use of a thin head supporter adapted to the surgical table objectively reduces the lateral inclination of the surgeon's spinal column in the torero position, making LRP a more comfortable procedure.
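The ISSC formula α = 90° − cos⁻¹(b/h), with b = q/2 + m/2, can be checked numerically as below. The sample measurements are illustrative values chosen for the example, not data from the study's surgeons.

```python
import math

def issc_degrees(m_cm: float, q_cm: float, h_cm: float) -> float:
    """Angle of inclination of the surgeon's spinal column, in degrees.

    m_cm: width of the surgical table
    q_cm: distance between the surgeon's anterior superior iliac spines
    h_cm: distance from the midpoint between the iliac spines to the head
    (requires b <= h for the arccosine to be defined)
    """
    b = q_cm / 2 + m_cm / 2
    return 90 - math.degrees(math.acos(b / h_cm))
```

As the formula suggests, a taller trunk measurement h (or a narrower table m) reduces b/h and hence the inclination angle, which matches the paper's finding that letting the surgeon lean over the table edge with a thin head supporter reduces the ISSC.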