971 results for Reciprocal collaborative method
Abstract:
The "sustainability" concept relates to the prolonging of human economic systems with as little detrimental impact on ecological systems as possible. Construction that exhibits good environmental stewardship and practices that conserve resources in a manner that allow growth and development to be sustained for the long-term without degrading the environment are indispensable in a developed society. Past, current and future advancements in asphalt as an environmentally sustainable paving material are especially important because the quantities of asphalt used annually in Europe as well as in the U.S. are large. The asphalt industry is still developing technological improvements that will reduce the environmental impact without affecting the final mechanical performance. Warm mix asphalt (WMA) is a type of asphalt mix requiring lower production temperatures compared to hot mix asphalt (HMA), while aiming to maintain the desired post construction properties of traditional HMA. Lowering the production temperature reduce the fuel usage and the production of emissions therefore and that improve conditions for workers and supports the sustainable development. Even the crumb-rubber modifier (CRM), with shredded automobile tires and used in the United States since the mid 1980s, has proven to be an environmentally friendly alternative to conventional asphalt pavement. Furthermore, the use of waste tires is not only relevant in an environmental aspect but also for the engineering properties of asphalt [Pennisi E., 1992]. This research project is aimed to demonstrate the dual value of these Asphalt Mixes in regards to the environmental and mechanical performance and to suggest a low environmental impact design procedure. In fact, the use of eco-friendly materials is the first phase towards an eco-compatible design but it cannot be the only step. The eco-compatible approach should be extended also to the design method and material characterization because only with these phases is it possible to exploit the maximum potential properties of the used materials. Appropriate asphalt concrete characterization is essential and vital for realistic performance prediction of asphalt concrete pavements. Volumetric (Mix design) and mechanical (Permanent deformation and Fatigue performance) properties are important factors to consider. Moreover, an advanced and efficient design method is necessary in order to correctly use the material. A design method such as a Mechanistic-Empirical approach, consisting of a structural model capable of predicting the state of stresses and strains within the pavement structure under the different traffic and environmental conditions, was the application of choice. In particular this study focus on the CalME and its Incremental-Recursive (I-R) procedure, based on damage models for fatigue and permanent shear strain related to the surface cracking and to the rutting respectively. It works in increments of time and, using the output from one increment, recursively, as input to the next increment, predicts the pavement conditions in terms of layer moduli, fatigue cracking, rutting and roughness. This software procedure was adopted in order to verify the mechanical properties of the study mixes and the reciprocal relationship between surface layer and pavement structure in terms of fatigue and permanent deformation with defined traffic and environmental conditions. The asphalt mixes studied were used in a pavement structure as surface layer of 60 mm thickness. 
The performance of the pavement was compared to that of the same pavement structure with different kinds of asphalt concrete used as the surface layer. Three eco-friendly materials, two warm mix asphalts and a rubberized asphalt concrete, were analyzed in comparison to a conventional asphalt concrete. The first two chapters summarize the steps needed to satisfy the sustainable pavement design procedure. Chapter I introduces the problem of eco-compatible asphalt pavement design; the low environmental impact materials, Warm Mix Asphalt and Rubberized Asphalt Concrete, are described in detail, and the value of a rational asphalt pavement design method is discussed. Chapter II underlines the importance of a thorough laboratory characterization based on appropriate material selection and performance evaluation. In Chapter III, CalME is introduced through a specific explanation of the different design approaches it provides, with particular attention to the I-R procedure. In Chapter IV, the experimental program is presented together with an explanation of the laboratory test devices adopted. The fatigue and rutting performance of the study mixes are presented in Chapters V and VI, respectively. From these laboratory test data, the CalME I-R model parameters for the master curve, fatigue damage and permanent shear strain were evaluated. Lastly, in Chapter VII, the results of the simulations of asphalt pavement structures with different surface layers are reported. For each pavement structure, the total surface cracking, the total rutting, the fatigue damage and the rut depth in each bound layer were analyzed.
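As a rough illustration of the incremental-recursive idea described above, the sketch below runs a simulation loop in which the damaged layer modulus computed in one time increment is fed back as the input of the next; the damage and rutting update functions are purely illustrative placeholders, not the calibrated CalME models.

```python
# Conceptual sketch of an incremental-recursive pavement simulation loop.
# The update functions below are illustrative placeholders, not CalME's
# fatigue or permanent shear models; they only show how the output of one
# time increment is fed back, recursively, as the input of the next.

def update_fatigue_damage(damage, strain, n_axles):
    # Placeholder damage growth law (assumed form).
    return min(1.0, damage + 1e-7 * n_axles * strain ** 2)

def damaged_modulus(intact_modulus, damage):
    # Stiffness reduction with accumulated damage (assumed linear decay).
    return intact_modulus * (1.0 - 0.5 * damage)

def simulate(increments, intact_modulus, n_axles_per_increment):
    modulus, damage, rut_depth = intact_modulus, 0.0, 0.0
    history = []
    for _ in range(increments):
        # 1) Structural response for the current layer modulus (placeholder).
        strain = 1.0 / modulus
        # 2) Update distress using this increment's response.
        damage = update_fatigue_damage(damage, strain, n_axles_per_increment)
        rut_depth += 1e-4 * n_axles_per_increment * strain
        # 3) Recursion: the damaged modulus becomes next increment's input.
        modulus = damaged_modulus(intact_modulus, damage)
        history.append((modulus, damage, rut_depth))
    return history

print(simulate(increments=5, intact_modulus=3000.0, n_axles_per_increment=1e5)[-1])
```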
Abstract:
BACKGROUND: Over the last 4 years ADAMTS-13 measurement underwent dramatic progress with newer and simpler methods. AIMS: Blind evaluation of newer methods for their performance characteristics. DESIGN: The literature was searched for new methods and the authors invited to join the evaluation. Participants were provided with a set of 60 coded frozen plasmas that were prepared centrally by dilutions of one ADAMTS-13-deficient plasma (arbitrarily set at 0%) into one normal-pooled plasma (set at 100%). There were six different test plasmas ranging from 100% to 0%. Each plasma was tested 'blind' 10 times by each method and results expressed as percentage vs. the local and the common standard provided by the organizer. RESULTS: There were eight functional and three antigen assays. Linearity of observed-vs.-expected ADAMTS-13 levels assessed as r2 ranged from 0.931 to 0.998. Between-run reproducibility expressed as the (mean) CV for repeated measurements was below 10% for three methods, 10-15% for five methods and up to 20% for the remaining three. F-values (analysis of variance) calculated to assess the capacity to distinguish between ADAMTS-13 levels (the higher the F-value, the better the capacity) ranged from 3965 to 137. Between-method variability (CV) amounted to 24.8% when calculated vs. the local and to 20.5% when calculated vs. the common standard. Comparative analysis showed that functional assays employing modified von Willebrand factor peptides as substrate for ADAMTS-13 offer the best performance characteristics. CONCLUSIONS: New assays for ADAMTS-13 have the potential to make the investigation/management of patients with thrombotic microangiopathies much easier than in the past.
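For readers unfamiliar with the summary statistics used in the evaluation, the following sketch computes, on synthetic replicate data, the three quantities reported above: the linearity (r²) of observed versus expected levels, the between-run CV, and the one-way ANOVA F-value across plasma levels. All dilution levels, noise figures and results are invented for illustration.

```python
# Illustrative computation of the evaluation statistics described above
# (linearity r^2, between-run CV, ANOVA F) on synthetic replicate data.
import numpy as np
from scipy import stats

expected = np.array([100, 80, 60, 40, 20, 0], dtype=float)   # % ADAMTS-13 levels
rng = np.random.default_rng(0)
# 10 'blind' replicates per level for one hypothetical method (invented noise)
replicates = [rng.normal(loc=e, scale=5.0, size=10) for e in expected]

# Linearity of observed vs expected levels (r^2 of the level means)
observed_means = np.array([r.mean() for r in replicates])
r = stats.pearsonr(expected, observed_means)[0]
print("r^2 =", r ** 2)

# Between-run reproducibility as the mean CV over non-zero levels
cvs = [rep.std(ddof=1) / rep.mean() * 100
       for rep, e in zip(replicates, expected) if e > 0]
print("mean CV (%) =", np.mean(cvs))

# One-way ANOVA F-value: capacity to distinguish between ADAMTS-13 levels
F, _ = stats.f_oneway(*replicates)
print("F =", F)
```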
Abstract:
Objective To compare the effectiveness and safety of three types of stents (sirolimus eluting, paclitaxel eluting, and bare metal) in people with and without diabetes mellitus. Design Collaborative network meta-analysis. Data sources Electronic databases (Medline, Embase, the Cochrane Central Register of Controlled Trials), relevant websites, reference lists, conference abstracts, reviews, book chapters, and proceedings of advisory panels for the US Food and Drug Administration. Manufacturers and trialists provided additional data. Review methods Network meta-analysis with a mixed treatment comparison method to combine direct within trial comparisons between stents with indirect evidence from other trials while maintaining randomisation. Overall mortality was the primary safety end point, target lesion revascularisation the effectiveness end point. Results 35 trials in 3852 people with diabetes and 10 947 people without diabetes contributed to the analyses. Inconsistency of the network was substantial for overall mortality in people with diabetes and seemed to be related to the duration of dual antiplatelet therapy (P value for interaction 0.02). Restricting the analysis to trials with a duration of dual antiplatelet therapy of six months or more, inconsistency was reduced considerably and hazard ratios for overall mortality were near one for all comparisons in people with diabetes: sirolimus eluting stents compared with bare metal stents 0.88 (95% credibility interval 0.55 to 1.30), paclitaxel eluting stents compared with bare metal stents 0.91 (0.60 to 1.38), and sirolimus eluting stents compared with paclitaxel eluting stents 0.95 (0.63 to 1.43). In people without diabetes, hazard ratios were unaffected by the restriction. Both drug eluting stents were associated with a decrease in revascularisation rates compared with bare metal stents in people both with and without diabetes. Conclusion In trials that specified a duration of dual antiplatelet therapy of six months or more after stent implantation, drug eluting stents seemed safe and effective in people both with and without diabetes.
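The following sketch illustrates the basic logic of indirect comparison through a common comparator (a Bucher-style adjusted indirect comparison), shown here only as a simplified stand-in for the Bayesian mixed treatment comparison actually used in the paper; the hazard ratios and standard errors are placeholders.

```python
# Simplified illustration of combining two direct comparisons against a common
# comparator (bare metal stents) into an indirect estimate of sirolimus- vs
# paclitaxel-eluting stents. The numbers are placeholders, and this is not the
# paper's Bayesian mixed treatment comparison model.
import math

def indirect_log_hr(log_hr_a_vs_c, se_a, log_hr_b_vs_c, se_b):
    """Indirect A vs B estimate through common comparator C (variances add)."""
    log_hr = log_hr_a_vs_c - log_hr_b_vs_c
    se = math.sqrt(se_a ** 2 + se_b ** 2)
    return log_hr, se

# Hypothetical direct estimates (log hazard ratios vs bare metal stents)
log_hr_ses_vs_bms, se_ses = math.log(0.88), 0.20
log_hr_pes_vs_bms, se_pes = math.log(0.91), 0.21

log_hr, se = indirect_log_hr(log_hr_ses_vs_bms, se_ses, log_hr_pes_vs_bms, se_pes)
lo, hi = math.exp(log_hr - 1.96 * se), math.exp(log_hr + 1.96 * se)
print(f"indirect HR SES vs PES = {math.exp(log_hr):.2f} ({lo:.2f} to {hi:.2f})")
```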
Abstract:
Clinical text understanding (CTU) is of interest to health informatics because critical clinical information, frequently represented as unconstrained text in electronic health records, is extensively used by human experts to guide clinical practice, decision making, and to document delivery of care, but is largely unusable by information systems for queries and computations. Recent initiatives advocating for translational research call for the generation of technologies that can integrate structured clinical data with unstructured data, provide a unified interface to all data, and contextualize clinical information for reuse in the multidisciplinary and collaborative environment envisioned by the CTSA program. This implies that technologies for the processing and interpretation of clinical text should be evaluated not only in terms of their validity and reliability in their intended environment, but also in light of their interoperability and ability to support information integration and contextualization in a distributed and dynamic environment. This vision adds a new layer of information representation requirements that needs to be accounted for when conceptualizing the implementation or acquisition of clinical text processing tools and technologies for multidisciplinary research. On the other hand, electronic health records frequently contain unconstrained clinical text with high variability in the use of terms and documentation practices, and without commitment to the grammatical or syntactic structure of the language (e.g., triage notes, physician and nurse notes, chief complaints, etc.). This hinders the performance of natural language processing technologies, which typically rely heavily on the syntax of the language and the grammatical structure of the text. This document introduces our method to transform unconstrained clinical text found in electronic health information systems into a formal (computationally understandable) representation that is suitable for querying, integration, contextualization and reuse, and is resilient to the grammatical and syntactic irregularities of clinical text. We present our design rationale, method, and results of evaluation in processing chief complaints and triage notes from 8 different emergency departments in Houston, Texas. At the end, we discuss the significance of our contribution in enabling the use of clinical text in a practical bio-surveillance setting.
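As a purely hypothetical illustration of what a "formal, computationally understandable" representation of a chief complaint might look like, the toy sketch below normalizes common abbreviations and maps phrases to concept identifiers via dictionary lookup; the lexicon, codes and approach are invented and much simpler than the method described in the document.

```python
# Toy illustration of turning noisy free-text chief complaints into a formal,
# queryable representation via lexical normalization and dictionary lookup.
# The lexicon and concept codes are hypothetical; this is not the document's
# actual method.
import re

LEXICON = {
    "sob": "shortness of breath",
    "c/o": "complains of",
    "abd": "abdominal",
}
CONCEPTS = {
    "shortness of breath": "C0013404",   # hypothetical concept identifier
    "abdominal pain": "C0000737",
    "fever": "C0015967",
}

def normalize(text: str) -> str:
    tokens = re.findall(r"[a-z/]+", text.lower())
    return " ".join(LEXICON.get(t, t) for t in tokens)

def to_concepts(text: str) -> list[str]:
    normalized = normalize(text)
    return [code for phrase, code in CONCEPTS.items() if phrase in normalized]

print(to_concepts("Pt c/o SOB and fever x2 days"))   # -> ['C0013404', 'C0015967']
```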
Abstract:
Access to medical literature collections such as PubMed, MedScape or Cochrane has increased notably in recent years thanks to web-based tools that provide instant access to the information. However, more sophisticated methodologies are needed to exploit all that information efficiently. The lack of advanced search methods in the clinical domain means that, even when using well-defined questions for a particular disease, clinicians receive too many results. Since no information analysis is applied afterwards, relevant results that are not presented at the top of the result collection may be ignored by the expert, causing an important loss of information. In this work we present a new method to improve scientific article search using patient information for query generation. Using a federated search strategy, it is able to search different resources simultaneously and present a single relevant literature collection. By applying NLP techniques, it presents semantically similar publications together, facilitating the identification of relevant information by clinicians. This method aims to be the foundation of a collaborative environment for sharing clinical knowledge related to patients and scientific publications.
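A minimal sketch of the two ingredients described above, under simplifying assumptions: a federated search that queries several resources in parallel and merges the de-duplicated results, and a crude lexical grouping of similar titles standing in for the NLP-based semantic grouping. The resource connectors, identifiers and threshold are invented.

```python
# Minimal sketch: (1) federated search over several resources in parallel,
# (2) grouping of lexically similar titles as a crude stand-in for the
# NLP-based semantic grouping described above.
from concurrent.futures import ThreadPoolExecutor

def search_pubmed(query):      # placeholder resource connectors (invented)
    return [{"id": "pm1", "title": "Statins in type 2 diabetes"}]

def search_cochrane(query):
    return [{"id": "co1", "title": "Statin therapy for diabetes mellitus"}]

def federated_search(query, resources):
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(lambda search: search(query), resources)
    merged, seen = [], set()
    for results in result_lists:
        for article in results:
            if article["id"] not in seen:          # de-duplicate across resources
                seen.add(article["id"])
                merged.append(article)
    return merged

def jaccard(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def group_similar(articles, threshold=0.1):
    groups = []
    for art in articles:
        for group in groups:
            if jaccard(art["title"], group[0]["title"]) >= threshold:
                group.append(art)
                break
        else:
            groups.append([art])
    return groups

hits = federated_search("type 2 diabetes statin", [search_pubmed, search_cochrane])
print(group_similar(hits))
```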
Abstract:
The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One of the applications that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within their coverage area. Particularly challenging is the problem of estimating the target's position when we use the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and, therefore, it is very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the found solution. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed version of the Gauss-Newton method based on consensus. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are some scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate in order to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their signals add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we derive distributed algorithms that converge to the optimal centralized beamformer.
While in the first part we consider only battery depletion due to communications beamforming, we then extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from an energy-efficiency perspective, the network's lifetime is significantly improved.
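The sketch below shows a simplified, centralized version of the RSSI localization idea: a Gauss-Newton refinement of the target position under a log-distance path-loss model. The consensus-based distributed implementation described in the thesis is not reproduced; anchor positions and path-loss parameters are assumed for illustration.

```python
# Simplified, centralized sketch of RSSI-based localization: Gauss-Newton
# refinement of a position estimate under a log-distance path-loss model.
# Parameters and anchor layout are assumed; the thesis' distributed,
# consensus-based version is not reproduced here.
import numpy as np

P0, N_EXP, D0 = -40.0, 3.0, 1.0        # assumed path-loss parameters (dBm at d0)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])

def model(x):
    d = np.linalg.norm(anchors - x, axis=1)
    return P0 - 10.0 * N_EXP * np.log10(d / D0)

def gauss_newton(rssi, x0, iters=20):
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        diff = x - anchors
        d2 = np.sum(diff ** 2, axis=1)
        residual = rssi - model(x)
        # Jacobian of the residual with respect to the position estimate
        J = (10.0 * N_EXP / np.log(10.0)) * diff / d2[:, None]
        step, *_ = np.linalg.lstsq(J, -residual, rcond=None)
        x += step
    return x

true_pos = np.array([3.0, 7.0])
rssi = model(true_pos) + np.random.default_rng(1).normal(0.0, 1.0, size=4)
print(gauss_newton(rssi, x0=[5.0, 5.0]))   # estimate close to (3, 7)
```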
Abstract:
In ubiquitous data stream mining applications, different devices often aim to learn concepts that are similar to some extent. In these applications, such as spam filtering or news recommendation, the underlying concept of the data stream (e.g., interesting mail/news) is likely to change over time. Therefore, the resulting model must be continuously adapted to such changes. This paper presents a novel Collaborative Data Stream Mining (Coll-Stream) approach that explores the similarities in the knowledge available from other devices to improve local classification accuracy. Coll-Stream integrates the community knowledge using an ensemble method in which the classifiers are selected and weighted based on their local accuracy for different partitions of the feature space. We evaluate Coll-Stream classification accuracy in situations with concept drift, noise, and varying partition granularity and concept similarity in relation to the local underlying concept. The experimental results show that the model produced by Coll-Stream achieves stability and accuracy in a variety of situations using both synthetic and real-world datasets.
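A minimal sketch of the ensemble idea, assuming invented partitioning and weighting details: base classifiers received from other devices are weighted by their locally measured accuracy on each partition of the feature space, and predictions are combined by weighted vote.

```python
# Minimal sketch of a partition-wise weighted ensemble. Partitioning and
# weighting details are assumptions for illustration, not the Coll-Stream
# algorithm itself.
from collections import defaultdict

class PartitionWeightedEnsemble:
    def __init__(self, classifiers, partition_fn):
        self.classifiers = classifiers          # models shared by other devices
        self.partition_fn = partition_fn        # maps an instance to a partition id
        # per-partition (hits, total) counts per classifier; total starts at 1
        # as a pseudo-count to avoid division by zero before any update
        self.stats = defaultdict(lambda: [[0, 1] for _ in classifiers])

    def update(self, x, true_label):
        """Update local accuracy estimates from a newly labelled instance."""
        counts = self.stats[self.partition_fn(x)]
        for i, clf in enumerate(self.classifiers):
            counts[i][1] += 1
            if clf(x) == true_label:
                counts[i][0] += 1

    def predict(self, x):
        counts = self.stats[self.partition_fn(x)]
        votes = defaultdict(float)
        for (hits, total), clf in zip(counts, self.classifiers):
            votes[clf(x)] += hits / total       # weight = local accuracy
        return max(votes, key=votes.get)

# Usage with two toy "community" classifiers on a one-feature stream
clf_a = lambda x: int(x[0] > 0.5)
clf_b = lambda x: int(x[0] > 0.8)
ens = PartitionWeightedEnsemble([clf_a, clf_b], partition_fn=lambda x: int(x[0] * 4))
ens.update([0.6], 1)
print(ens.predict([0.7]))   # -> 1
```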
Abstract:
Recommender systems are a type of solution to the information overload problem suffered by users of websites on which they can rate certain items. The Collaborative Filtering Recommender System is considered to be the most successful approach, as it makes its recommendations based on the votes of users similar to an active user. Nevertheless, the traditional collaborative filtering method selects insufficiently representative users as neighbors of each active user. This means that the recommendations made a posteriori are not precise enough. The method proposed in this thesis performs a pre-filtering process, by using Pareto dominance, which eliminates the less representative users from the k-neighbor selection process and keeps the most promising ones. The results from the experiments performed on MovieLens and Netflix show a significant improvement in all the quality measures studied on applying the proposed method.
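The following sketch illustrates a Pareto-dominance pre-filter applied before k-neighbor selection: each candidate neighbor is scored on several quality criteria and candidates dominated on every criterion are discarded. The criteria used here (rating similarity and number of co-rated items) are assumptions for illustration, not necessarily those of the thesis.

```python
# Illustrative Pareto-dominance pre-filter before k-neighbor selection.
# Criteria (similarity, number of co-rated items) are higher-is-better and
# assumed for illustration.

def dominates(a, b):
    """True if a is at least as good as b on all criteria and better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_filter(candidates):
    """candidates: {user_id: (similarity, co_rated_items)} -> non-dominated subset."""
    return {
        u: c for u, c in candidates.items()
        if not any(dominates(other, c) for v, other in candidates.items() if v != u)
    }

candidates = {
    "u1": (0.90, 40),
    "u2": (0.85, 55),
    "u3": (0.70, 30),   # dominated by both u1 and u2 -> removed
}
print(pareto_filter(candidates))   # {'u1': (0.9, 40), 'u2': (0.85, 55)}
```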
Abstract:
Fission product yields are fundamental parameters for several nuclear engineering calculations, in particular for burn-up/activation problems. The impact of their uncertainties was widely studied in the past and evaluations were released, although still incomplete. Recently, the nuclear community expressed the need for full fission yield covariance matrices in order to produce inventory calculation results that take into account the complete uncertainty data. In this work, we studied and applied a Bayesian/generalised least-squares method for covariance generation, and compared the generated uncertainties to the original data stored in the JEFF-3.1.2 library. We then focused on the effect of fission yield covariance information on fission pulse decay heat results for thermal fission of 235U. Calculations were carried out using different codes (ACAB and ALEPH-2) after introducing the new covariance values, and the results were compared with those obtained with the uncertainty data currently provided by the library. The uncertainty quantification was performed with the Monte Carlo sampling technique. Indeed, correlations between fission yields strongly affect the statistics of decay heat. Introduction: Nowadays, any engineering calculation performed in the nuclear field should be accompanied by an uncertainty analysis in which different sources of uncertainties are taken into account. Works such as those performed under the UAM project (Ivanov, et al., 2013) treat nuclear data as a source of uncertainty, in particular cross-section data, for which uncertainties given in the form of covariance matrices are already provided in the major nuclear data libraries. Meanwhile, fission yield uncertainties were often neglected or treated shallowly, because their effects were considered of second order compared to cross-sections (Garcia-Herranz, et al., 2010). However, the Working Party on International Nuclear Data Evaluation Co-operation (WPEC)
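As a toy illustration of why full covariance information matters in Monte Carlo uncertainty propagation, the sketch below samples correlated "fission yields" from an assumed covariance matrix and propagates them to a decay-heat-like linear response, comparing the spread with and without correlations; all numbers are invented and unrelated to JEFF-3.1.2.

```python
# Toy Monte Carlo propagation of correlated yield uncertainties to a
# decay-heat-like response. Yields, uncertainties, correlations and response
# coefficients are invented placeholders, not evaluated nuclear data.
import numpy as np

mean_yields = np.array([0.060, 0.030, 0.010])        # assumed yields
rel_unc     = np.array([0.05, 0.10, 0.20])           # relative standard deviations
corr = np.array([[ 1.0, -0.6,  0.1],
                 [-0.6,  1.0,  0.0],
                 [ 0.1,  0.0,  1.0]])                # assumed correlation matrix
std = mean_yields * rel_unc
cov = corr * np.outer(std, std)

heat_per_product = np.array([1.0, 2.5, 4.0])         # arbitrary response coefficients

rng = np.random.default_rng(42)
samples = rng.multivariate_normal(mean_yields, cov, size=10_000)
decay_heat = samples @ heat_per_product
print("std with correlations    =", decay_heat.std(ddof=1))

# Same calculation ignoring correlations (diagonal covariance only)
samples_uncorr = rng.multivariate_normal(mean_yields, np.diag(std ** 2), size=10_000)
print("std without correlations =", (samples_uncorr @ heat_per_product).std(ddof=1))
```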
Abstract:
In this paper we provide a method that allows the visualization of similarity relationships present between items of collaborative filtering recommender systems, as well as the relative importance of each of these. The objective is to offer visual representations of the recommender system's set of items and of their relationships; these graphs show us where the most representative information can be found and which items are rated in a more similar way by the recommender system's community of users. The visual representations achieved take the shape of phylogenetic trees, displaying the numerical similarity and the reliability between each pair of items considered to be similar. As a case study we provide the results obtained using the public database MovieLens 1M, which contains 3900 movies.
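A small sketch of a pipeline of this kind, under assumed data: compute an item-item similarity matrix from a toy ratings matrix and arrange the items in a tree by hierarchical clustering, used here only as a simple stand-in for the phylogenetic-tree layout of the paper.

```python
# Sketch: item-item similarity from ratings, then a tree layout via
# hierarchical clustering (a stand-in for the paper's phylogenetic trees).
# The ratings matrix is invented.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

ratings = np.array([          # rows: users, columns: items (0 = not rated)
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    mask = (a > 0) & (b > 0)          # only co-rated entries
    if not mask.any():
        return 0.0
    return float(a[mask] @ b[mask] / (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask])))

n_items = ratings.shape[1]
sim = np.array([[cosine_sim(ratings[:, i], ratings[:, j]) for j in range(n_items)]
                for i in range(n_items)])
dist = 1.0 - sim
np.fill_diagonal(dist, 0.0)

tree = linkage(squareform(dist, checks=False), method="average")
print(dendrogram(tree, no_plot=True)["ivl"])   # item order along the tree leaves
```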
Abstract:
Virtual and remote laboratories (VRLs) are e-learning resources that enhance the accessibility of experimental setups, providing a distance teaching framework which meets the student's hands-on learning needs. In addition, online collaborative communication represents a practical and constructivist method to transmit knowledge and experience from the teacher to students, overcoming physical distance and isolation. This paper describes the extension of two open source tools: (1) the learning management system Moodle, and (2) the tool to create VRLs, Easy Java Simulations (EJS). Our extension provides: (1) synchronous collaborative support to any VRL developed with EJS (i.e., any existing VRL written in EJS can be automatically converted into a collaborative lab at no cost), and (2) support to deploy synchronous collaborative VRLs into Moodle. Using our approach, students and/or teachers can invite other users enrolled in a Moodle course to a real-time collaborative experimental session, sharing and/or supervising experiences at the same time they practice and explore experiments using VRLs.
Abstract:
Management of collaborative business processes that span multiple business entities has emerged as a key requirement for business success. These processes are embedded in sets of rules describing complex message-based interactions between parties, such that if a logical expression defined on the set of received messages is satisfied, one or more outgoing messages are dispatched. The execution of these processes presents significant challenges, since each content-rich message may contribute towards the evaluation of multiple expressions in different ways and the sequence of message arrival cannot be predicted. These challenges must be overcome in order to develop an efficient execution strategy for collaborative processes in an intensive operating environment with a large number of rules and a very high throughput of messages. In this paper, we present a discussion of issues relevant to the evaluation of such expressions and describe a basic query-based method for this purpose, including suggested indexes for improved performance. We conclude by identifying several potential future research directions in this area. © 2010 IEEE. All rights reserved.
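A basic sketch of such an index-driven evaluation strategy, restricted for simplicity to conjunctive rules: an inverted index maps each message type to the rules whose expressions reference it, and each arriving message incrementally updates only those rules. The rule definitions and conjunctive-only logic are simplifying assumptions, not the paper's query-based method.

```python
# Sketch: inverted index from message type to rules, with incremental
# evaluation of (assumed conjunctive) rule expressions as messages arrive.
from collections import defaultdict

rules = {
    "ship_order": {"order_placed", "payment_confirmed"},
    "escalate":   {"payment_failed", "retry_exhausted"},
}

# Inverted index: message type -> rules whose expression references it
index = defaultdict(set)
for rule, required in rules.items():
    for msg_type in required:
        index[msg_type].add(rule)

received = defaultdict(set)          # rule -> message types seen so far

def on_message(msg_type):
    """Process one incoming message and return the rules it just satisfied."""
    fired = []
    for rule in index.get(msg_type, ()):
        received[rule].add(msg_type)
        if received[rule] == rules[rule]:     # conjunction fully satisfied
            fired.append(rule)
    return fired

print(on_message("order_placed"))        # []
print(on_message("payment_confirmed"))   # ['ship_order']
```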
Abstract:
Congestion control is critical for the provisioning of quality of service (QoS) over dedicated short range communications (DSRC) vehicle networks for road safety applications. In this paper we propose a congestion control method for DSRC vehicle networks at road intersections, with the aim of providing high-availability and low-latency channels for high priority emergency safety applications while maximizing channel utilization for low priority routine safety applications. In this method, an offline simulation-based approach is used to find the best possible configurations of message rate and MAC layer backoff exponent (BE) for a given number of vehicles equipped with DSRC radios. The identified best configurations are then used online by a roadside access point (AP) for system operation. Simulation results demonstrate that this adaptive method significantly outperforms the fixed control method under a varying number of vehicles. The impact of errors in estimating the number of vehicles in the network on system-level performance is also investigated.
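The offline/online split described above can be pictured with the following sketch: an offline simulation sweep yields, for each range of vehicle counts, a best (message rate, backoff exponent) pair, and the roadside AP simply looks up the configuration matching its current estimate of the number of vehicles. The table values are invented placeholders, not results from the paper.

```python
# Sketch of the offline/online split: an offline-derived lookup table of
# (message rate, backoff exponent) per vehicle-count range, queried online by
# the roadside AP. Table values are invented placeholders.
import bisect

# (max vehicles, message rate in Hz, MAC backoff exponent) from offline simulation
CONFIG_TABLE = [
    (20,  10, 3),
    (50,   5, 4),
    (100,  2, 5),
    (200,  1, 6),
]

def select_config(estimated_vehicles):
    """Return (message_rate_hz, backoff_exponent) for the estimated vehicle count."""
    thresholds = [row[0] for row in CONFIG_TABLE]
    i = min(bisect.bisect_left(thresholds, estimated_vehicles), len(CONFIG_TABLE) - 1)
    _, rate, be = CONFIG_TABLE[i]
    return rate, be

print(select_config(35))    # -> (5, 4)
print(select_config(500))   # -> (1, 6), saturates at the largest configuration
```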
Abstract:
In this paper we propose an adaptive power and message rate control method for safety applications at road intersections. The design objectives are, firstly, to provide guaranteed QoS support to both high priority emergency safety applications and low priority routine safety applications and, secondly, to maximize channel utilization. We use an offline simulation-based approach to find the best possible configurations of transmit power and message rate for given numbers of vehicles in the network with certain safety QoS requirements. The identified configurations are then used online by roadside access points (APs), adaptively, according to the estimated number of vehicles. Simulation results show that this adaptive method provides the required QoS support to safety applications and significantly outperforms a fixed control method. © 2013 International Information Institute.
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May, 2016