921 results for Auto, organizzazione, sistemi, MAS, design pattern, TuCSoN, respect


Relevance:

100.00%

Publisher:

Abstract:

This work concerns the implementation of software components within the COMPUTAPLEX platform, whose goal is to help researchers carry out the tasks of the experimental software engineering process. One contribution to this platform is the development of the components needed to retrieve experimental data held in diverse data sources; to this end, a mechanism was built that unifies the extraction of information from MySQL, Excel files and SPSS files. With it, associated research groups can share and access experimental repositories maintained both locally and externally. In addition, a study of agent technology was carried out, covering its definitions, agent communication languages, the FIPA specifications, JADE as a FIPA implementation, and XML parsing. An ontology for communication between agents was also defined and implemented, designed with the Protégé tool. The component development drew on a wide range of technologies, including the Java programming language, the JADE agent framework, the JENA library for handling ontologies, the SAXParser library for reading XML files, and the Factory design pattern. Finally, the working methodology used in the project is described: through several iterative cycles it produced prototypes that progressively covered the needs of the software product.
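A minimal sketch of the Factory pattern mentioned above, unifying extraction from the three source types behind a single interface; all names here (DataExtractor, ExtractorFactory, SourceType) are illustrative assumptions, not taken from the COMPUTAPLEX code base.

    import java.util.List;
    import java.util.Map;

    // Common contract: every source yields records as column-name -> value maps.
    interface DataExtractor {
        List<Map<String, Object>> extract(String location) throws Exception;
    }

    // Stub implementations; real ones would use JDBC, an Excel reader and an
    // SPSS file reader respectively.
    class MySqlExtractor implements DataExtractor {
        public List<Map<String, Object>> extract(String jdbcUrl) { return List.of(); }
    }
    class ExcelExtractor implements DataExtractor {
        public List<Map<String, Object>> extract(String path) { return List.of(); }
    }
    class SpssExtractor implements DataExtractor {
        public List<Map<String, Object>> extract(String path) { return List.of(); }
    }

    enum SourceType { MYSQL, EXCEL, SPSS }

    // The factory is the single creation point, so callers depend only on the
    // DataExtractor interface, never on a concrete source type.
    final class ExtractorFactory {
        static DataExtractor forSource(SourceType type) {
            switch (type) {
                case MYSQL: return new MySqlExtractor();
                case EXCEL: return new ExcelExtractor();
                default:    return new SpssExtractor();
            }
        }
    }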

Relevance:

100.00%

Publisher:

Abstract:

The final-year degree project described in detail in this report draws on much of the knowledge I acquired during my studies, applying it to a real project. A platform has been developed that hosts ideas written by people from all over the world who want to share them, so that the ideas can be commented on, rated and improved collectively. Since these ideas can be of any kind, they can be classified into the categories that best fit them. The application offers a highly descriptive RESTful API in which each resource has been identified and structured so that all elements can be managed easily through the HTTP verbs, regardless of the client using it. The architecture follows the model-view-controller design pattern and uses current technologies such as Spring, Liferay, SmartGWT and MongoDB (among many others), all of which had to be integrated to build a secure, scalable and modular application. The application's data are persisted in two kinds of database, one relational (MySQL) and one non-relational (MongoDB), exploiting the strengths of each. The proposed client is accessible through a web browser and is based on the Liferay portal. Several portlets (widgets) have been developed that make up the content structure seen by the end user. Through them the user can access the application's content (ideas, comments and other social content) in a pleasant way, since the portlets communicate with one another and make asynchronous requests to the RESTful API without reloading the whole page. Users can also register in the system to contribute more content or obtain roles that grant administration permissions. A Scrum methodology was followed, dividing the project into small tasks and developing them in an agile way. Tools such as Jenkins supported continuous integration and ensured, through the execution of the test suite, that all components work. Quality has been a central concern: software methodologies and design patterns were followed to guarantee a reusable, optimised and modular design, a goal aided by the Sonar tool, and a comprehensive testing system covers all the application's components. In short, an innovative open-source application has been designed that lays a well-defined foundation so that, should it one day go into production, it can help people share thoughts and ideas and so improve the world we live in.
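As a concrete illustration of the verb-per-resource design described above, below is a minimal Spring-style sketch of one such resource; the /ideas paths, the Idea type and the repository interface are assumptions made for illustration, not the project's actual API.

    import org.springframework.web.bind.annotation.*;

    class Idea {
        String id, title, text;
        void setId(String id) { this.id = id; }
    }

    // Assumed persistence contract; in the project this would be backed by MongoDB.
    interface IdeaRepository {
        Idea findById(String id);
        Idea save(Idea idea);
        void deleteById(String id);
    }

    @RestController
    @RequestMapping("/ideas")
    class IdeaController {
        private final IdeaRepository repo;

        IdeaController(IdeaRepository repo) { this.repo = repo; }

        @GetMapping("/{id}")        // GET: read one idea
        Idea read(@PathVariable String id) { return repo.findById(id); }

        @PostMapping                // POST: create a new idea
        Idea create(@RequestBody Idea idea) { return repo.save(idea); }

        @PutMapping("/{id}")        // PUT: replace an existing idea
        Idea update(@PathVariable String id, @RequestBody Idea idea) {
            idea.setId(id);
            return repo.save(idea);
        }

        @DeleteMapping("/{id}")     // DELETE: remove an idea
        void delete(@PathVariable String id) { repo.deleteById(id); }
    }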

Relevance:

100.00%

Publisher:

Abstract:

Cancer cachexia is characterised by selective depletion of skeletal muscle protein reserves. The ubiquitin-proteasome proteolytic pathway has been shown to be responsible for muscle wasting in a range of cachectic conditions including cancer cachexia. To establish the importance of this pathway in muscle wasting during cancer (and sepsis), a quantitative competitive RT-PCR (QcRT-PCR) method was developed to measure the mRNA levels of the proteasome subunits C2α and C5β and the ubiquitin-conjugating enzyme E2-14k. Western blotting was also used to measure 20S proteasome and E2-14k protein expression. In vivo studies in mice bearing a cachexia-inducing murine colon adenocarcinoma (MAC16) demonstrated the effect of progressive weight loss on the mRNA and protein expression of the 20S proteasome subunits, as well as the ubiquitin-conjugating enzyme E2-14k, in gastrocnemius and pectoral muscles. QcRT-PCR measurements showed a good correlation between expression of the proteasome subunits (C2α and C5β) and the E2-14k enzyme mRNA and weight loss in gastrocnemius muscle, where expression increased with increasing weight loss and then declined at higher weight losses (25-27%). Similar results were obtained in pectoral muscle, but with expression several fold lower than in gastrocnemius muscle, reflecting the different degrees of protein degradation in the two muscles during cancer cachexia. Western blot analysis of 20S and E2-14k protein expression followed a similar pattern with respect to weight loss as that found for mRNA. In addition, mRNA and protein expression of the 20S proteasome subunits and the E2-14k enzyme was measured in biopsies from cachectic cancer patients, which also showed a good correlation between weight loss and proteasome expression, demonstrating a progressive increase in proteasome subunit and E2-14k mRNA and protein in patients with progressively increasing weight loss. The effect of the cachexia-inducing tumour product PIF (proteolysis-inducing factor) and of 15-hydroxyeicosatetraenoic acid (15-HETE), the arachidonic acid metabolite thought to be the intracellular transducer of PIF action, was also determined. Using C2C12 myotubes in vitro as a surrogate model system for skeletal muscle, it was shown that both PIF and 15-HETE increased expression of the proteasome subunits (C2α and C5β) as well as the E2-14k enzyme. This increased gene expression was attenuated by preincubation with EPA or the 15-lipoxygenase inhibitor CV-6504; immunoblotting confirmed these findings. Similarly, in sepsis-induced cachexia in NMRI mice there was increased mRNA and protein expression of the 20S proteasome subunits and the E2-14k enzyme, which was inhibited by EPA treatment. These results suggest that 15-HETE is the intracellular mediator of PIF-induced protein degradation in skeletal muscle, and that elevated muscle catabolism is accomplished through upregulation of the ubiquitin-proteasome proteolytic pathway. Furthermore, both EPA and CV-6504 have shown anti-cachectic properties, which could be used in the future for the treatment of cancer cachexia and other similar catabolic conditions.

Relevance:

100.00%

Publisher:

Abstract:

Automated negotiation is widely applied in various domains. However, developing such systems is a complex knowledge and software engineering task, so a methodology would be helpful. Unfortunately, none of the existing methodologies offers sufficient, detailed support for such system development. To remove this limitation, this paper develops a new methodology made up of: (1) a generic framework (architectural pattern) for the main task, and (2) a library of modular and reusable design patterns (templates) for subtasks. It thus becomes much easier to build a negotiating agent by assembling these standardised components rather than reinventing the wheel each time. Moreover, since these patterns are identified from a wide variety of existing negotiating agents (especially high-impact ones), they can also improve the quality of the resulting systems. In addition, our methodology reveals what types of domain knowledge need to be supplied to negotiating agents. This in turn provides a basis for developing techniques to acquire that domain knowledge from human users, which is important because negotiation agents act faithfully on behalf of their human users and thus the relevant domain knowledge must be acquired from them. Finally, our methodology is validated on one high-impact system.
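The flavour of the proposed assembly can be sketched as follows: a fixed generic control loop (the architectural pattern) into which standardised, swappable components for the subtasks are plugged. All interface and class names below are hypothetical, chosen only to illustrate the idea, not taken from the paper.

    // Swappable subtask components (the reusable "templates"); each one is a
    // point where domain knowledge from the human user is injected.
    interface ProposalEvaluator    { double utility(Offer o); }
    interface ResponseGenerator    { Offer counter(Offer last, double utility); }
    interface TerminationCriterion { boolean accept(int round, double utility); }

    record Offer(double price) {}

    // Generic framework (the "architectural pattern"): the control flow is
    // fixed; only the plugged-in components vary between agents. The
    // opponent's side of the exchange is abstracted away in this sketch.
    class NegotiatingAgent {
        private final ProposalEvaluator evaluate;
        private final ResponseGenerator respond;
        private final TerminationCriterion stop;

        NegotiatingAgent(ProposalEvaluator e, ResponseGenerator r, TerminationCriterion t) {
            evaluate = e; respond = r; stop = t;
        }

        Offer negotiate(Offer incoming) {
            for (int round = 0; ; round++) {
                double u = evaluate.utility(incoming);
                if (stop.accept(round, u)) return incoming; // deal or deadline reached
                incoming = respond.counter(incoming, u);    // concession strategy
            }
        }
    }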

Relevance:

100.00%

Publisher:

Abstract:

The sharing of product and process information plays a central role in coordinating supply chain operations and is a key driver of their success. "Linked pedigrees", linked datasets that encapsulate event-based traceability information about artifacts as they move along the supply chain, provide a scalable mechanism to record and facilitate the sharing of track-and-trace knowledge among supply chain partners. In this paper we present "OntoPedigree", a content ontology design pattern for the representation of linked pedigrees that can be specialised and extended to define domain-specific traceability ontologies. Events captured within the pedigrees are specified using EPCIS, a GS1 standard for the specification of traceability information within and across enterprises, while certification information is described using PROV, a vocabulary for modelling the provenance of resources. We exemplify the utility of OntoPedigree in linked pedigrees generated for supply chains in the perishable goods and pharmaceuticals sectors.
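A rough sketch of how one such pedigree node might be published with the Apache Jena RDF library is shown below. The OntoPedigree namespace, property names and example URIs are placeholders invented for illustration and are not the published ontology; the prov: term is from the W3C PROV-O vocabulary.

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Resource;
    import org.apache.jena.vocabulary.RDF;

    public class PedigreeExample {
        public static void main(String[] args) {
            String ped  = "http://example.org/ontopedigree#";   // assumed namespace
            String prov = "http://www.w3.org/ns/prov#";         // W3C PROV-O

            Model m = ModelFactory.createDefaultModel();
            m.setNsPrefix("ped", ped);
            m.setNsPrefix("prov", prov);

            Resource pedigree = m.createResource("http://example.org/pedigree/batch-42")
                .addProperty(RDF.type, m.createResource(ped + "Pedigree"))
                // link to the EPCIS event set recorded for this consignment
                .addProperty(m.createProperty(ped, "hasEvents"),
                             m.createResource("http://example.org/epcis/events/batch-42"))
                // chain to the previous partner's pedigree (the "linked" part)
                .addProperty(m.createProperty(ped, "previousPedigree"),
                             m.createResource("http://example.org/pedigree/batch-41"))
                // certification provenance expressed with PROV
                .addProperty(m.createProperty(prov, "wasAttributedTo"),
                             m.createResource("http://example.org/org/manufacturer-A"));

            m.write(System.out, "TURTLE");
        }
    }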

Relevance:

100.00%

Publisher:

Abstract:

Structured parallel programming, and in particular programming models based on algorithmic skeletons or parallel design patterns, is increasingly considered the only viable means of supporting effective development of scalable and efficient parallel programs. Structured parallel programming models have been assessed in a number of works in the context of performance. In this paper we consider how structured parallel programming models allow knowledge of the parallel patterns present to be harnessed to address both performance and energy consumption. We consider the different features of structured parallel programming that may be leveraged to affect the performance/energy trade-off, and we discuss a preliminary set of experiments validating our claims.
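As a minimal illustration (not from the paper), the sketch below shows why explicit patterns expose such leverage: a task-farm skeleton concentrates its parallelism into a single degree parameter, which a runtime could retune, alongside knobs such as core frequency or thread placement, to trade performance against energy.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.function.Function;

    class Farm<I, O> {
        private final Function<I, O> worker;
        private final int degree;   // parallelism degree: the tunable knob

        Farm(Function<I, O> worker, int degree) {
            this.worker = worker;
            this.degree = degree;
        }

        // Apply the worker to every task using exactly `degree` threads.
        List<O> process(List<I> tasks) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(degree);
            try {
                List<Callable<O>> jobs = new ArrayList<>();
                for (I t : tasks) jobs.add(() -> worker.apply(t));
                List<O> results = new ArrayList<>();
                for (Future<O> f : pool.invokeAll(jobs)) results.add(f.get());
                return results;
            } finally {
                pool.shutdown();
            }
        }
    }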

Relevance:

100.00%

Publisher:

Abstract:

Cache-coherent non-uniform memory access (ccNUMA) architecture is a standard design pattern for contemporary multicore processors, and future generations of architectures are likely to be NUMA. NUMA architectures create new challenges for managed runtime systems. Memory-intensive applications use the system's distributed memory banks to allocate data, and the automatic memory manager collects garbage left in these memory banks. The garbage collector may need to access remote memory banks, which entails access latency overhead and potential bandwidth saturation of the interconnect between memory banks. This dissertation makes five significant contributions to garbage collection on NUMA systems, with a case-study implementation using the Hotspot Java Virtual Machine. It empirically studies data locality for a stop-the-world garbage collector when tracing connected objects in NUMA heaps. First, it identifies a locality richness that exists naturally in connected objects comprising a root object and its reachable set ('rooted sub-graphs'). Second, it leverages this locality characteristic of rooted sub-graphs to develop a new NUMA-aware garbage collection mechanism: a garbage collector thread processes a local root and its reachable set, which is likely to contain a large number of objects on the same NUMA node. Third, a garbage collector thread steals references from sibling threads running on the same NUMA node to improve data locality. The new NUMA-aware garbage collector is evaluated using seven benchmarks from the established real-world DaCapo benchmark suite, together with the widely used SPECjbb benchmark, a Neo4j graph-database Java benchmark and an artificial benchmark. On a multi-hop NUMA architecture the NUMA-aware garbage collector shows an average performance improvement of 15%, and this gain is shown to result from improved NUMA memory access in a ccNUMA system. Fourth, the existing Hotspot JVM adaptive policy for configuring the number of garbage collection threads is shown to be suboptimal for current NUMA machines: it relies on outdated assumptions and generates a constant thread count, yet it is still used in the production version. This research shows that the optimal number of garbage collection threads is application-specific, and that configuring it appropriately yields better collection throughput than the default policy. Fifth, the dissertation designs and implements a runtime technique that uses heuristics drawn from dynamic collection behaviour to calculate an optimal number of garbage collector threads for each collection cycle, yielding an average 21% improvement in garbage collection performance on the DaCapo benchmarks.
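A schematic sketch of the tracing policy described above (not Hotspot code): each collector thread drains rooted sub-graphs from its own NUMA node first and steals only from siblings on the same node. All types here are invented for illustration.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    class GcThread {
        final int node;                                  // NUMA node of this thread
        final Deque<Object> work = new ArrayDeque<>();   // queue of rooted sub-graphs

        GcThread(int node) { this.node = node; }

        // Drain local work first; when empty, steal only from siblings on the
        // same NUMA node, preserving locality. (Single-threaded sketch: a real
        // collector would use lock-free work-stealing deques.)
        void trace(List<GcThread> all) {
            while (true) {
                Object item = work.poll();
                if (item == null) {
                    item = stealFromSameNode(all);
                    if (item == null) return;            // node-local work exhausted
                }
                scan(item);
            }
        }

        private Object stealFromSameNode(List<GcThread> all) {
            for (GcThread t : all)
                if (t != this && t.node == this.node) {
                    Object stolen = t.work.pollLast();   // steal from the far end
                    if (stolen != null) return stolen;
                }
            return null;
        }

        private void scan(Object item) {
            // Placeholder: mark the object and enqueue its references, most of
            // which, for a rooted sub-graph, live on the same NUMA node.
        }
    }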

Relevance:

100.00%

Publisher:

Abstract:

Opto-acoustic imaging has been a growing field of research in recent years, providing functional imaging of physiological biomarkers such as the oxygenation of haemoglobin. Piezoelectric transducers are the industry-standard detectors for ultrasonics, but their limited bandwidth, their susceptibility to electromagnetic interference and a sensitivity that scales inversely with size all constrain detector performance. Sensors based on polymer optical fibres (POF) are immune to electromagnetic interference and have lower acoustic impedance and a reduced Young's modulus compared with silica fibres. Furthermore, POF enables the possibility of a wideband sensor of a size appropriate to endoscopy. Micro-structured POF (mPOF) used in an interferometric detector has been shown to be an order of magnitude more sensitive than silica fibre at 1 MHz and 3 times more sensitive at 10 MHz. We present the first opto-acoustic measurements obtained using a 4.7 mm PMMA mPOF Bragg grating with a fibre diameter of 130 μm, and present the lateral directivity pattern of a PMMA mPOF FBG ultrasound sensor over a frequency range of 1-50 MHz. We discuss the impact of the pattern with respect to the targeted application and draw conclusions on how to mitigate the problems encountered.

Relevance:

100.00%

Publisher:

Abstract:

Increasingly, built environment professionals in Australia, including architects, landscape architects and planners, are becoming involved in planning and design of projects for, and in direct consultation with, Indigenous communities and their proponents. Critically, built environment professionals must be able to plan and design in ways that demonstrate respect for Indigenous protocols, cultural issues and community values. Yet many students graduate with little or no comprehension of Indigenous knowledge systems or the protocols for engagement with the Australian or international Indigenous communities in which they are required to work. This paper reports on a recently completed project, funded by the Office for Learning and Teaching, that was designed to improve the knowledge and skills of tertiary students in the built environment professions, including proposing strategies and processes to expose those students to Australian Indigenous knowledge systems. This is a positive beginning in a long-term decolonising project.

Relevance:

40.00%

Publisher:

Abstract:

Ecological principles have been employed to assist the sustainability of a suite of 'gateway' marinas currently being developed in Queensland. Tasks included (a) locating and fostering core remnant native vegetation areas, (b) understanding the dynamic patterns of regional behaviour through the ecological strategies employed by key flora and fauna species, (c) promoting those native wildlife species that best characterise the region, and (d) allocating management actions along elongated buffer zones reaching to the catchment headwaters (rather than only peripheral to the property). The design of infrastructure and its relationship to sustainable landscape development lacks such a response in the planning and detailing of new marinas. This paper distinguishes between the practice of landscape ecology and the design of ecological landscapes, offering examples of the principles of the latter in support of the concept of ecological landscape practice.

Relevance:

40.00%

Publisher:

Abstract:

Cued recall and item recognition are considered the standard episodic memory retrieval tasks. However, only the neural correlates of the latter have been studied in detail with fMRI. Using an event-related fMRI experimental design that permits spoken responses, we tested hypotheses from an auto-associative model of cued recall and item recognition [Chappell, M., & Humphreys, M. S. (1994). An auto-associative neural network for sparse representations: Analysis and application to models of recognition and cued recall. Psychological Review, 101, 103-128]. In brief, the model assumes that cues elicit a network of phonological short-term memory (STM) and semantic long-term memory (LTM) representations distributed throughout the neocortex as patterns of sparse activations. This information is transferred to the hippocampus, which converges upon the item closest to a stored pattern and outputs a response. Word pairs were learned from a study list, with one member of each pair serving as the cue at test. Unstudied words were also intermingled at test to provide an analogue of yes/no recognition tasks. Compared to incorrectly rejected studied items (misses) and correctly rejected (CR) unstudied items, correctly recalled items (hits) elicited increased responses in the left hippocampus and neocortical regions including the left inferior prefrontal cortex (LIPC), left mid-lateral temporal cortex and inferior parietal cortex, consistent with predictions from the model. This network was very similar to that observed in yes/no recognition studies, supporting proposals that cued recall and item recognition involve common rather than separate mechanisms.
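The convergence dynamic the model assumes (a cue pattern settling onto the closest stored pattern) can be illustrated with a generic Hopfield-style auto-associator; this sketch shows the idea only and is not the exact Chappell and Humphreys (1994) formulation.

    class AutoAssociator {
        final double[][] w;   // symmetric weight matrix, zero diagonal
        final int n;

        // Hebbian storage of +1/-1 patterns.
        AutoAssociator(int[][] patterns) {
            n = patterns[0].length;
            w = new double[n][n];
            for (int[] p : patterns)
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++)
                        if (i != j) w[i][j] += p[i] * p[j];
        }

        // Asynchronous updates drive a noisy cue toward the nearest attractor.
        int[] recall(int[] cue, int sweeps) {
            int[] s = cue.clone();
            for (int k = 0; k < sweeps; k++)
                for (int i = 0; i < n; i++) {
                    double h = 0;
                    for (int j = 0; j < n; j++) h += w[i][j] * s[j];
                    s[i] = h >= 0 ? 1 : -1;
                }
            return s;   // the retrieved (converged) pattern
        }
    }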

Relevance:

40.00%

Publisher:

Abstract:

This report describes the Year One pilot study processes and articulates findings from the major project components designed to address the challenges noted above (see Figure 1). Specifically, the pilot study tested the campaign research and development process, which involved participatory design with young people and sector partners, and the efficacy and practicality of conducting a longitudinal, randomised controlled trial online with minors, including ways of linking survey data to campaign data. Each sub-study comprehensively considered the ethical requirements of conducting online research with minors in school settings. The theoretical and methodological framework for measuring campaign engagement and efficacy (Sub-studies 3, 4 and 5) drew on the Model of Goal-Directed Behaviour (MGB) (Perugini & Bagozzi, 2001) and Nudge Theory (Thaler & Sunstein, 2008).

Relevance:

40.00%

Publisher:

Abstract:

Life cycle assessment (LCA) is used to estimate a product's environmental impact. Using LCA during the earlier stages of design may produce erroneous results, since the information available on the product's life cycle is typically incomplete at these stages. The resulting uncertainty must be accounted for in the decision-making process. This paper proposes a method for estimating the environmental impact of a product's life cycle, and the associated degree of uncertainty of that impact, using information generated during the design process. Total impact is estimated by aggregating the impacts of the individual processes in the product's life cycle. Uncertainty estimation is based on assessing the mismatch between the information required and the information available about the product life cycle in each uncertainty category, and on integrating these assessments. The method is evaluated using pre-defined scenarios with varying uncertainty. [DOI: 10.1115/1.4002163]
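The aggregation idea can be sketched as follows: total impact as the sum of per-process impacts, and overall uncertainty as the gap between information required and information available. The mean-shortfall averaging used here is purely illustrative and is not the paper's exact integration scheme.

    import java.util.List;

    record LifeCycleProcess(String name,
                            double impact,          // e.g. kg CO2-eq
                            double infoAvailable,   // 0..1, per uncertainty category
                            double infoRequired) {} // 0..1, assumed > 0

    class LcaEstimate {
        // Total impact: simple aggregation over life cycle processes.
        static double totalImpact(List<LifeCycleProcess> ps) {
            return ps.stream().mapToDouble(LifeCycleProcess::impact).sum();
        }

        // Uncertainty: mean shortfall of available vs. required information,
        // normalised to [0, 1]; an illustrative stand-in for the paper's scheme.
        static double uncertainty(List<LifeCycleProcess> ps) {
            return ps.stream()
                     .mapToDouble(p -> p.infoRequired() <= 0 ? 0
                         : Math.max(0, p.infoRequired() - p.infoAvailable()) / p.infoRequired())
                     .average().orElse(0);
        }
    }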

Relevance:

40.00%

Publisher:

Abstract:

The importance of long-range prediction of rainfall pattern for devising and planning agricultural strategies cannot be overemphasized. However, the prediction of rainfall pattern remains a difficult problem and the desired level of accuracy has not been reached. The conventional methods for prediction of rainfall use either dynamical or statistical modelling. In this article we report the results of a new modelling technique using artificial neural networks. Artificial neural networks are especially useful where the dynamical processes and their interrelations for a given phenomenon are not known with sufficient accuracy. Since conventional neural networks were found to be unsuitable for simulating and predicting rainfall patterns, a generalized structure of a neural network was then explored and found to provide consistent prediction (hindcast) of all-India annual mean rainfall with good accuracy. Performance and consistency of this network are evaluated and compared with those of other (conventional) neural networks. It is shown that the generalized network can make consistently good prediction of annual mean rainfall. Immediate application and potential of such a prediction system are discussed.
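The article's generalized architecture is not fully specified here; as a purely illustrative assumption, the sketch below shows the forward pass of a one-hidden-layer network extended with direct input-to-output connections, one common way such networks are generalized.

    class RainfallNet {
        final double[][] wIn;    // input -> hidden weights
        final double[] wOut;     // hidden -> output weights
        final double[] wDirect;  // direct input -> output weights (assumed variant)

        RainfallNet(double[][] wIn, double[] wOut, double[] wDirect) {
            this.wIn = wIn; this.wOut = wOut; this.wDirect = wDirect;
        }

        // x holds normalised predictor values; the return value would be the
        // predicted (hindcast) annual mean rainfall after training.
        double predict(double[] x) {
            double out = 0;
            for (int h = 0; h < wIn.length; h++) {
                double a = 0;
                for (int i = 0; i < x.length; i++) a += wIn[h][i] * x[i];
                out += wOut[h] * Math.tanh(a);   // hidden-unit contribution
            }
            for (int i = 0; i < x.length; i++)
                out += wDirect[i] * x[i];        // direct linear contribution
            return out;
        }
    }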