971 results for Aspect-oriented middleware reference architecture


Relevance: 30.00%

Abstract:

This paper presents a reworking of a model of written text production published by the Grupo Didactext in 2003. It is situated within a sociocognitive, linguistic and didactic framework and is conceived around the interaction of three dimensions symbolized by recurrent concentric circles. The first circle corresponds to the cultural sphere: the various domains of human praxis in which every act of written composition is immersed. The second refers to the contexts of production, which include the social, situational and physical contexts, the audience and the medium of composition. The third circle corresponds to the individual and takes into account the role of memory in text production from a sociocultural perspective, motivation, emotions, and cognitive and metacognitive strategies, within which six functional units are conceived to operate concurrently: access to knowledge, planning, drafting, revision and rewriting, editing, and oral presentation. The didactic orientation concerns the teaching and learning of academic writing in the classroom, as well as research on writing in educational contexts.

Relevance: 30.00%

Abstract:

One of the leading motivations behind the multilingual semantic web is to make resources digitally accessible in an online, global, multilingual context. Consequently, it is fundamental for knowledge bases to manage multilingualism and thus to be equipped with procedures for its conceptual modelling. In this context, the goal of this paper is to discuss how common-sense knowledge and cultural knowledge are modelled in a multilingual framework. More particularly, multilingualism and conceptual modelling are dealt with from the perspective of FunGramKB, a lexico-conceptual knowledge base for natural language understanding. The project argues for a clear division between the lexical and the conceptual dimensions of knowledge. Moreover, the conceptual layer is organized into three modules, which result from a strong commitment to capturing semantic knowledge (Ontology), procedural knowledge (Cognicon) and episodic knowledge (Onomasticon). Cultural mismatches are discussed and formally represented at the three conceptual levels of FunGramKB.
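To make the three-module split concrete, here is a minimal Python sketch of how such a conceptual layer could be organized; the class names, fields and identifier formats are illustrative assumptions, not FunGramKB's actual schema or API:

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """Semantic knowledge: a node in the Ontology."""
    concept_id: str                                   # e.g. "+BUILDING_00"
    superordinates: list[str] = field(default_factory=list)
    meaning_postulates: list[str] = field(default_factory=list)

@dataclass
class Script:
    """Procedural knowledge: a stereotyped event sequence in the Cognicon."""
    script_id: str                                    # e.g. "@EATING_AT_A_RESTAURANT"
    steps: list[str] = field(default_factory=list)

@dataclass
class NamedEntity:
    """Episodic knowledge: a named instance in the Onomasticon."""
    entity_id: str                                    # e.g. "%PARIS_00"
    instance_of: str = ""                             # link to an Ontology concept

@dataclass
class ConceptualLayer:
    """The three language-independent conceptual modules side by side."""
    ontology: dict[str, Concept] = field(default_factory=dict)
    cognicon: dict[str, Script] = field(default_factory=dict)
    onomasticon: dict[str, NamedEntity] = field(default_factory=dict)
```

Lexical entries in several languages can then point to the same language-independent concept, which is exactly where cultural mismatches surface and must be represented explicitly.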

Relevance: 30.00%

Abstract:

A lightweight Java application suite has been developed and deployed that allows collaborative learning between students and tutors at remote locations. Students can engage in group activities online and also collaborate with tutors. A generic Java framework has been developed and applied to electronics, computing and mathematics education. The applications are, respectively: (a) a digital circuit simulator, which allows students to collaborate in building simple or complex electronic circuits; (b) a Java programming environment whose paradigm is behaviour-based robotics; and (c) a differential equation solver useful in modelling any complex, nonlinear dynamic system. Each student sees a common shared window to which text or graphical objects may be added and which is then shared online. A built-in chat room supports collaborative dialogue. Students can work either in collaborative groups or in teams as directed by the tutor. This paper summarises the technical architecture of the system as well as the pedagogical implications of the suite. A report of student evaluation, distilled from twelve months of use, is also presented. We intend this suite to facilitate learning between groups at one or many institutions and to facilitate international collaboration. We also intend to use the suite as a tool to research the establishment and behaviour of collaborative learning groups. We shall make our software freely available to interested researchers.

Relevance: 30.00%

Abstract:

Real and potential users of the health system in Colombia encounter multiple barriers on the path to accessing services, owing to the system's various shortcomings. To support users' needs regarding access, Fundación RASA was established in 2006; it offers the community at large, free of charge, mechanisms for enforcing, accessing and protecting their rights in health matters. This degree project aims to add value by designing a modular tool that facilitates the implementation of the project bank, in order to improve the foundation's processes and to serve as a reference for other entities dedicated to safeguarding individual well-being, in keeping with its social purpose of protecting health rights; it is also submitted in fulfilment of the requirements for the degree of Specialist in Senior Management at the Universidad de Medellín. To understand the needs of Fundación RASA, it will be essential to know and properly apply organization theory, with a clear grasp of its descriptive strand, which suggests what should be done to improve various aspects of the organization, in this case the need to design a Project Bank. The work is structured as follows: theoretical framework; contextualization of Fundación RASA; design of a modular tool (introduction, objectives, resources, methodology and procedures, evaluation and monitoring, process owner, structural budget); conclusions and recommendations.

Relevance: 30.00%

Abstract:

Data mining, a heavily discussed topic, has been studied in various fields. Its potential for refining decision-making, revealing latent patterns and creating valuable knowledge has won the attention of scholars and practitioners. However, there are fewer studies that combine data mining and libraries, even though libraries generate data all the time. This thesis aims to fill that gap, exploring the opportunities data mining creates for enhancing one of the most important elements of libraries: reference service. To demonstrate the feasibility and applicability of data mining, the literature is reviewed to establish a critical understanding of data mining in libraries and to assess the current status of library reference service. The literature review indicates that free online data sources, other than data generated on social media, are rarely used in current library data mining initiatives; this motivates the present study to utilize free online resources. Furthermore, a natural match between data mining and libraries is established, explained by the data richness of libraries and by viewing data mining as a kind of knowledge, an easy choice for libraries, and a sensible method for overcoming reference service challenges. This natural match, especially the idea that data mining can help library reference service, lays the main theoretical foundation for the empirical work in this study. Turku Main Library was selected as the case to answer the research question: is data mining feasible and applicable for improving reference service? Daily visits to Turku Main Library from 2009 to 2015 serve as the data for mining, and the corresponding weather conditions were collected from Weather Underground, which is freely available online. Before analysis, the collected dataset was cleansed and preprocessed to ensure the quality of the data mining. Multiple regression analysis was employed to mine the final dataset: hourly visits are the dependent variable, and weather conditions, the Discomfort Index and the day of the week are the independent variables. Four models, one per season, were built to predict visits in each season; patterns were identified in the different seasons and implications drawn from them. In addition, library-climate points were generated by a clustering method, which simplifies the process of using weather data to forecast library visits. The data mining results were then interpreted from the perspective of improving reference service and presented to librarians in order to collect professional opinions on employing data mining to improve reference services. The feedback collected was positive, which suggests that it is feasible to use data mining as a tool to enhance library reference service.
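A minimal sketch of the kind of per-season regression and climate clustering described above, using pandas, statsmodels and scikit-learn; the file name and column names (visits, temp_c, humidity, discomfort_index, weekday, season) are hypothetical stand-ins for the thesis's actual dataset:

```python
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.cluster import KMeans

df = pd.read_csv("library_visits_weather.csv")   # one row per opening hour

# One model per season: hourly visits as the dependent variable; weather,
# the Discomfort Index and the day of the week as independent variables.
models = {}
for season, group in df.groupby("season"):
    models[season] = smf.ols(
        "visits ~ temp_c + humidity + discomfort_index + C(weekday)",
        data=group,
    ).fit()
    print(season, round(models[season].rsquared, 3))

# Cluster weather conditions into a few "library-climate points" so that a
# day's forecast can be mapped onto an expected visiting level.
df["climate_point"] = KMeans(n_clusters=4, random_state=0).fit_predict(
    df[["temp_c", "humidity", "discomfort_index"]]
)
```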

Relevance: 30.00%

Abstract:

Urban centers all around the world are striving to re-orient themselves toward the ideals of human engagement, flexibility, openness and synergy that thoughtful architecture can provide. From a time when solitude in one's own backyard was desirable, today's outlook seeks more: to cater to the needs of diverse individuals and of collaborators. This thesis investigates the role of architecture in realizing how these ideals might be achieved, using Mixed-Use Developments (MUDs) as the spatial platform on which to test these design ideas. The author also investigates, identifies, and re-imagines how the idea of live-work excites and attracts users and occupants toward investing themselves in MUDs in urban cities. On the premise that MUDs historically began with an intention of urban revitalization, at the core of this spatial model lies the opportunity to investigate what makes the mixing of uses an asset, especially in the eyes of today's generation. Within the frame of reference of the current generation, i.e. the millennial population and the like, whose lifestyle is urban-centric, the excitement of this topic lies in a vision of MUDs that spatially cater to a variety of lifestyles, demographics, and functions, enabling their users to experience a vibrant 24/7 destination. Since cities are always in flux, the thesis investigates the idea of opportunistic space in a new MUD that can also be perceived as an adaptive reuse of itself. The sustainability factor lies in the foresight of the transformative and responsive character of the different uses in the MUD at large, which makes it possible to cater to a changing demand for building use over time. Delving into the architectural response, the thesis explores the conflicts, tensions, and excitements in the relationships between different spatial layers: permanence vs. transformation, public vs. private, commercial vs. residential. At a larger scale, the investigation extends to the formal meaning and implications of the proposed type of MUD and the larger landscapes in which it is situated, attempting to blur the fine line between architecture and urbanism. A unique character of MUDs is their power to draw people in at the ground level and lead them into exciting spatial experiences. While the thesis stemmed from a purely objective and theoretical standpoint, the author believes that only when context enters the design-thinking process can true architecture start to flourish. The significance of this thesis rests on the author's belief that this re-imagined MUD has immense opportunity to amplify human engagement with designed space, to foster sustainable communities and, in the process, to enhance people's lives.

Relevance: 30.00%

Abstract:

Dissertation for the degree of Master in Architecture, presented at the Universidade de Lisboa - Faculdade de Arquitectura.

Relevance: 30.00%

Abstract:

Hardware vendors make an important effort to create low-power CPUs that keep battery life and durability above acceptable levels. To achieve this goal and provide a good performance-energy trade-off for a wide variety of applications, ARM designed the big.LITTLE architecture. This heterogeneous multi-core architecture features two different types of cores: big cores oriented toward performance, and little cores, which are slower and aimed at saving energy. As all the cores have access to the same memory, multi-threaded applications must resort to some mutual exclusion mechanism to coordinate access to shared data by concurrent threads. Transactional Memory (TM) represents an optimistic approach to shared-memory synchronization. To take full advantage of the features offered by software TM, while also benefiting from the characteristics of heterogeneous big.LITTLE architectures, our focus is to propose TM solutions that take into account both the power/performance requirements of the application and what the architecture offers. To understand the current state of the art and obtain useful information for future power-aware software TM solutions, we have analyzed a popular TM library running on top of an ARM big.LITTLE processor. Experiments show, in general, better scalability on the LITTLE cores for most of the applications, except for one that requires the computing performance the big cores offer.
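The TM library analyzed is a native, architecture-specific implementation; as a language-neutral illustration, the toy Python sketch below shows only the optimistic read-validate-commit retry loop that transactional memory generalizes (the class and its methods are invented for this example, not the library's API):

```python
import threading

class VersionedBox:
    """Toy optimistic synchronization: read a snapshot, compute outside
    any lock, then validate and commit, retrying on conflict."""

    def __init__(self, value):
        self._lock = threading.Lock()        # used only to validate/commit
        self._version = 0
        self._value = value

    def read(self):
        return self._version, self._value

    def atomic_update(self, fn):
        while True:                          # optimistic retry loop
            version, snapshot = self.read()  # read without locking
            new_value = fn(snapshot)         # compute outside the lock
            with self._lock:                 # short critical section
                if self._version == version: # no concurrent commit?
                    self._value = new_value
                    self._version += 1
                    return new_value
            # conflict: another thread committed first, so retry

counter = VersionedBox(0)

def worker():
    for _ in range(1000):
        counter.atomic_update(lambda v: v + 1)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter.read()[1] == 4000
```

Real software TM adds per-location versioning, read/write sets and contention management; on big.LITTLE, the interesting question is how the retry rate and its energy cost differ between the two core types.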

Relevance: 30.00%

Abstract:

With recent advances in remote sensing processing technology, it has become feasible to begin analysis of the enormous historical archive of remotely sensed data. This historical data provides valuable information on a wide variety of topics that can influence the lives of millions of people if processed correctly and in a timely manner. One field that stands to benefit is landslide mapping and inventory, which provides a historical reference for those who live near high-risk areas so that future disasters may be avoided. In order to map landslides remotely, an optimal method must first be determined. Historically, mapping has been attempted using pixel-based methods such as unsupervised and supervised classification. These methods are limited in that they characterize an image spectrally from single pixel values, which yields results prone to false positives and often without meaningful objects. Recently, several reliable methods of Object Oriented Analysis (OOA) have been developed which utilize a full range of spectral, spatial, textural, and contextual parameters to delineate regions of interest. A comparison of these two approaches on a historical dataset of the landslide-affected city of San Juan La Laguna, Guatemala, demonstrates the benefits of OOA methods over unsupervised classification. Overall accuracies of 96.5% and 94.3% and F-scores of 84.3% and 77.9% were achieved for the OOA and unsupervised classification methods, respectively. The larger gap in F-score results from the low precision of unsupervised classification, caused by poor false-positive removal, the greatest shortcoming of that method.
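A short sketch of why the F-score separates the methods more than accuracy does: with an abundance of true-negative background pixels, accuracy stays high for both methods, while extra false positives directly depress precision and hence the F-score. The counts below are illustrative, not the study's:

```python
def f_score(tp, fp, fn, beta=1.0):
    """F-score from confusion-matrix counts; beta=1 gives the usual F1."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative pixel counts (tp, fp, fn, tn): plentiful true negatives keep
# accuracy high for both methods, but unsupervised classification's extra
# false positives pull its precision, and therefore its F-score, down.
for name, (tp, fp, fn, tn) in {
    "OOA":          (900, 100, 150, 8850),
    "unsupervised": (880, 400, 170, 8550),
}.items():
    acc = (tp + tn) / (tp + fp + fn + tn)
    print(f"{name}: accuracy={acc:.3f}  F1={f_score(tp, fp, fn):.3f}")
```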

Relevance: 30.00%

Abstract:

The ability to use Software Defined Radio (SDR) in civilian mobile applications will make it possible for the next generation of mobile devices to handle multi-standard personal wireless devices and ubiquitous wireless devices. The original military standard gave SDR many beneficial characteristics but also brought a number of disadvantages, and many challenges in commercializing SDR remain subjects of interest in the software radio research community. Four main issues that have already been addressed are performance, size, weight, and power. This investigation presents an in-depth study of SDR inter-component communication, measuring total link delay as a function of the number of components and the packet size in systems based on the Software Communications Architecture (SCA). The study is based on measurements taken on a controlled-environment platform. Results suggest that the total link delay does not increase linearly with the number of components or the packet size. A closed-form expression for the delay was modeled using a logistic function of the number of components and the packet size; the model performed well when the number of components was large. For mobile applications, energy consumption has become one of the most crucial limitations. SDR will not only provide the flexibility of multi-protocol support, but this desirable feature also brings a choice of mobile protocols, and having such a variety of choices creates the problem of selecting the most appropriate protocol for transmission. A real-time algorithm to optimize energy efficiency was therefore also investigated: communication energy models, including switching estimation, were used to develop a waveform-selection algorithm, and simulations were performed to validate the concept.
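As a sketch of the delay-modeling step, the snippet below fits a generic logistic (saturating) curve to hypothetical link-delay measurements at a single packet size; the exact functional form, parameters and data points are assumptions for illustration, since the study fits over both the number of components and the packet size:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(n, sat, k, n0):
    """Generic logistic curve: delay saturates at `sat` as n grows."""
    return sat / (1.0 + np.exp(-k * (n - n0)))

# Hypothetical measurements: total link delay (ms) vs. number of SCA
# components, for one fixed packet size.
n_components = np.array([1, 2, 4, 8, 12, 16, 24, 32])
delay_ms = np.array([0.9, 1.6, 3.0, 5.8, 7.9, 9.0, 9.8, 10.1])

(sat, k, n0), _ = curve_fit(logistic, n_components, delay_ms,
                            p0=[10.0, 0.3, 8.0])
print(f"saturation={sat:.2f} ms, steepness={k:.2f}, "
      f"midpoint={n0:.1f} components")
```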

Relevance: 30.00%

Abstract:

The dissertation addresses the still unsolved challenges of source-based digital 3D reconstruction, visualisation and documentation in the domains of archaeology, art and architecture history. The emerging BIM methodology and the IFC data exchange format are changing the way of collaborating, visualising and documenting in the planning, construction and facility management process. The introduction and development of the Semantic Web (Web 3.0), spreading the idea of structured, formalised and linked data, offers semantically enriched human- and machine-readable data. In contrast to civil engineering and cultural heritage, academic object-oriented disciplines such as archaeology, art and architecture history have been acting as outside spectators. Since the 1990s it has been argued that a 3D model is not likely to be considered a scientific reconstruction unless it is grounded on accurate documentation and visualisation; however, such standards are still missing and validation of the outcomes is not ensured. Meanwhile, the digital research data remain ephemeral and continue to fill growing digital cemeteries. This study therefore focuses on the evaluation of source-based digital 3D reconstructions and, especially, on uncertainty assessment for hypothetical reconstructions of destroyed or never-built artefacts according to scientific principles, making the models shareable and reusable by a potentially wide audience. The work initially focuses on terminology and on the definition of a workflow dedicated to the classification and visualisation of uncertainty. The workflow is then applied to specific 3D models uploaded to the DFG repository of the AI Mainz, and in this way the available methods of documenting, visualising and communicating uncertainty are analysed. In the end, this process leads to a validation or correction of the workflow and the initial assumptions, and also (in dealing with different hypotheses) to a better definition of the levels of uncertainty.
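To make the classification step tangible, here is a minimal sketch of tagging each element of a reconstruction with an ordinal uncertainty level; the four-level scale and the element names are invented for illustration and are not the dissertation's actual taxonomy:

```python
from enum import IntEnum

class Uncertainty(IntEnum):
    """Illustrative ordinal scale of the evidence behind a modelled element."""
    DIRECT_SOURCE = 1    # surviving fabric or measured survey data
    PRIMARY_SOURCE = 2   # plans, photographs, written descriptions
    ANALOGY = 3          # comparison with similar, better-documented works
    CONJECTURE = 4       # hypothesis without direct evidence

# Per-element tagging makes the evidential basis of every part of a 3D
# model explicit and lets a viewer colour-code geometry by evidence level.
model_elements = {
    "west_facade": Uncertainty.PRIMARY_SOURCE,
    "roof_truss": Uncertainty.ANALOGY,
    "north_annex": Uncertainty.CONJECTURE,
}
for name, level in sorted(model_elements.items(), key=lambda kv: kv[1]):
    print(f"{name}: level {int(level)} ({level.name})")
```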

Relevance: 30.00%

Abstract:

The recent trend of moving Cloud Computing capabilities to the Edge of the network is reshaping how applications and their middleware support are designed, deployed, and operated. This new model envisions a continuum of virtual resources between the traditional cloud and the network edge, which is potentially better suited to meet the heterogeneous Quality of Service (QoS) requirements of diverse application domains and next-generation applications. Several classes of advanced Internet of Things (IoT) applications, e.g., in the industrial manufacturing domain, exhibit heterogeneous QoS requirements and call for QoS management systems that guarantee/control performance indicators, even in the presence of real-world factors such as limited bandwidth and concurrent virtual resource utilization. This dissertation proposes a comprehensive QoS-aware architecture that addresses the challenges of integrating cloud infrastructure with edge nodes in IoT applications. The architecture provides end-to-end QoS support by incorporating several components for managing physical and virtual resources. The proposed architecture features: i) a multilevel middleware for resolving the convergence between Operational Technology (OT) and Information Technology (IT); ii) an end-to-end QoS management approach compliant with the Time-Sensitive Networking (TSN) standard; iii) new approaches for virtualized network environments, such as running TSN-based applications under Ultra-low Latency (ULL) constraints in virtual and 5G environments; and iv) an accelerated and deterministic container overlay network architecture. Additionally, the QoS-aware architecture includes two novel middlewares: i) a middleware that transparently integrates multiple acceleration technologies in heterogeneous Edge contexts, and ii) a QoS-aware middleware for Serverless platforms that coordinates various QoS mechanisms and the virtualized Function-as-a-Service (FaaS) invocation stack to manage end-to-end QoS metrics. Finally, all architecture components were tested and evaluated on realistic testbeds, demonstrating the efficacy of the proposed solutions.

Relevance: 30.00%

Abstract:

The aim of the present study was to develop a statistical approach to define the best cut-off for copy number alteration (CNA) calling from genomic data provided by high-throughput experiments, able to predict a specific clinical end-point (early relapse within 18 months) in the context of Multiple Myeloma (MM). 743 newly diagnosed MM patients with SNP array-derived genomic data and clinical data were included in the study. CNAs were called both by a conventional (classic, CL) method and by an outcome-oriented (OO) method, and the Progression Free Survival (PFS) hazard ratios of CNAs called by the two approaches were compared. The OO approach successfully identified patients at higher risk of relapse, and the univariate survival analysis showed stronger prognostic effects for OO-defined high-risk alterations than for those defined by the CL approach, statistically significant for 12 CNAs. Overall, 155/743 patients relapsed within 18 months of the start of therapy. A small number of OO-defined CNAs were significantly recurrent in early-relapsed patients (ER-CNAs): amp1q, amp2p, del2p, del12p, del17p and del19p. Two groups of patients were identified, carrying or not carrying ≥1 ER-CNA (249 vs. 494, respectively), the first with significantly shorter PFS and overall survival (OS) (PFS HR 2.15, p<0.0001; OS HR 2.37, p<0.0001). The risk of relapse defined by the presence of ≥1 ER-CNA was independent of the risks conferred both by R-ISS 3 (HR 1.51; p=0.01) and by a low-quality clinical response, i.e. worse than stable disease (HR 2.59; p=0.004). Notably, the type of induction therapy was not predictive, suggesting that early relapse is strongly related to the patient's baseline genomic architecture. In conclusion, the OO approach made it possible to define CNA-specific dynamic clonality cut-offs, improving the accuracy of CNA calls in identifying MM patients with the highest probability of early relapse. Being outcome-dependent, the OO approach is dynamic and can be adjusted according to the selected outcome variable of interest.
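For the survival comparison described here, a minimal sketch with the lifelines library: a Cox proportional-hazards fit of PFS against a binary ≥1 ER-CNA flag. The file name and column names (pfs_months, progressed, er_cna) are hypothetical stand-ins for the study's actual data:

```python
import pandas as pd
from lifelines import CoxPHFitter

# One row per patient: PFS time in months, a progression/relapse event
# indicator, and a flag for carrying >= 1 early-relapse-associated CNA.
df = pd.read_csv("mm_cohort.csv")  # columns: pfs_months, progressed, er_cna

cph = CoxPHFitter()
cph.fit(df[["pfs_months", "progressed", "er_cna"]],
        duration_col="pfs_months", event_col="progressed")
cph.print_summary()  # the er_cna coefficient gives the hazard ratio
```

Covariates such as R-ISS stage and response quality could be added as extra columns to test, as the study does, whether the ER-CNA effect is independent of them.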

Relevance: 30.00%

Abstract:

Natural Language Processing (NLP) has always been one of the most popular topics in Artificial Intelligence, and argument-related research in NLP, such as argument detection, argument mining and argument generation, has drawn particular attention in recent years. In our daily lives we use arguments to express ourselves, and the quality of those arguments heavily impacts the effectiveness of our communication with others. In professional fields, such as legislation and academia, high-quality arguments play an even more critical role. Therefore, generating arguments of good quality is a challenging research task of great importance in NLP. The aim of this work is to investigate the automatic generation of good-quality arguments according to a given topic, stance and aspect (control codes). To achieve this goal, a module based on BERT [17] that judges an argument's quality is constructed and used to assess the generated arguments. Another module, based on GPT-2 [19], is implemented to generate arguments, with stances and aspects used as guidance during generation. By combining these models and techniques, rankings of the generated arguments can be obtained to evaluate the final performance. This dissertation describes the architecture and experimental setup, analyzes the results of our experimentation, and discusses future directions.
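A minimal sketch of such a generate-then-rank pipeline with the Hugging Face transformers library: GPT-2 proposes candidate arguments for a control-coded prompt and a BERT sequence classifier scores them. The base checkpoints and the "topic | stance | aspect" prompt format are assumptions; the thesis fine-tunes its own models, and an off-the-shelf bert-base-uncased classification head is untrained, so the scores below are placeholders:

```python
import torch
from transformers import (AutoModelForCausalLM,
                          AutoModelForSequenceClassification, AutoTokenizer)

gen_tok = AutoTokenizer.from_pretrained("gpt2")
gen = AutoModelForCausalLM.from_pretrained("gpt2")
judge_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
judge = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Control codes (topic, stance, aspect) are packed into the prompt.
prompt = "nuclear energy | pro | safety :"
inputs = gen_tok(prompt, return_tensors="pt")
outputs = gen.generate(**inputs, do_sample=True, top_p=0.9,
                       num_return_sequences=4, max_new_tokens=60,
                       pad_token_id=gen_tok.eos_token_id)
candidates = [gen_tok.decode(o, skip_special_tokens=True) for o in outputs]

def quality(text):
    """Probability of the 'good argument' class under the BERT judge."""
    with torch.no_grad():
        logits = judge(**judge_tok(text, return_tensors="pt",
                                   truncation=True)).logits
    return logits.softmax(-1)[0, 1].item()

for arg in sorted(candidates, key=quality, reverse=True):
    print(f"{quality(arg):.2f}  {arg!r}")
```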

Relevance: 20.00%

Abstract:

Lawsonia inermis-mediated synthesis of silver nanoparticles (Ag-NPs) and their efficacy against Candida albicans, Microsporum canis, Propionibacterium acnes and Trichophyton mentagrophytes is reported. A two-step mechanism, involving bioreduction and the formation of an intermediate complex leading to the synthesis of capped nanoparticles, has been proposed. In addition, an antimicrobial gel against M. canis and T. mentagrophytes was formulated. Ag-NPs were synthesized by challenging the leaf extract of L. inermis with 1 mM AgNO₃. The Ag-NPs were characterized by Ultraviolet-Visible (UV-Vis) spectrophotometry and Fourier transform infrared spectroscopy (FTIR). Transmission electron microscopy (TEM) and nanoparticle tracking analysis (NTA) were performed, and the zeta potential was measured, to determine the size of the Ag-NPs. The antimicrobial activity of the Ag-NPs was evaluated by the disc diffusion method against the test organisms. These Ag-NPs may thus prove to be a better candidate drug owing to their biogenic nature; moreover, Ag-NPs may be an answer to drug-resistant microorganisms.