911 results for Case Based Computing


Relevance: 40.00%

Publisher:

Abstract:

In French Polynesia, the aquaculture of P. margaritifera is carried out in numerous grow-out sites located over three archipelagos (Gambier, Society and Tuamotu). To evaluate the impact of macro-geographical effects of these growing sites on pearl quality traits, five hatchery-produced families were used as homogeneous donor oysters in an experimental graft. The molluscs were then reared in two commercial locations: Tahaa island (Society) and Rangiroa atoll (Tuamotu). At harvest, eight pearl quality traits were recorded and compared: surface defects, lustre, grade, circles, shape categories, darkness level, body and secondary colour, and visual colour categories. Overall inter-site comparison revealed that: 1) all traits were affected by grow-out location except for lustre and round shape, and 2) a higher mean rate of valuable pearls was produced in Rangiroa. Indeed, for pearl grade, Rangiroa showed twice as many A-B pearls and fewer rejects than Tahaa. This was related to the number of surface defects (a grade component): in Rangiroa, twice as many pearls had no defects and fewer pearls had up to 10 defects. Concerning pearl shape, more circled and baroque pearls were found in Tahaa (+10%). For colour variation, 10% more pearls had an attractive green overtone in Rangiroa than in Tahaa, where more grey body colours were harvested. Lustre did not seem to be affected by these two culture sites (except at a family scale). This is the first time P. margaritifera donor families have been shown to vary in the quality of the pearls they produce depending on their grow-out location.

Relevance: 40.00%

Publisher:

Abstract:

In this paper we use concepts from graph theory and cellular biology, represented as ontologies, to carry out semantic mining tasks on signaling pathway networks. Specifically, the paper describes the semantic enrichment of signaling pathway networks. A cell signaling network describes the basic cellular activities and their interactions. The main contribution of this paper lies in the signaling pathway research area: it proposes a new technique to analyze and understand how changes in these networks may affect the transmission and flow of information, which produce diseases such as cancer and diabetes. Our approach is based on three concepts from graph theory (modularity, clustering and centrality) frequently used in social network analysis. It consists of two phases: the first uses the graph theory concepts to determine the cellular groups in the network, which we call communities; the second uses ontologies for the semantic enrichment of the cellular communities. The measures from graph theory allow us to determine the set of cells that are close (for example, in a disease) and the main cells in each community. We analyze our approach in two cases: TGF-β and Alzheimer's disease.
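As a rough illustration of the graph-theoretic phase only (the ontology-based enrichment is omitted), the sketch below applies modularity-based community detection, clustering and centrality to a toy interaction network; the node names are invented stand-ins for pathway molecules, not data from the paper.

```python
# Minimal sketch: communities and centrality on a toy signaling-like network.
import networkx as nx
from networkx.algorithms import community

# Toy stand-in for a signaling pathway network (nodes = molecules, edges = interactions)
G = nx.Graph()
G.add_edges_from([
    ("TGFB1", "TGFBR1"), ("TGFB1", "TGFBR2"), ("TGFBR1", "SMAD2"),
    ("TGFBR1", "SMAD3"), ("SMAD2", "SMAD4"), ("SMAD3", "SMAD4"),
    ("SMAD4", "TargetGene"),
])

# Phase 1: modularity-based communities ("cellular groups")
communities = community.greedy_modularity_communities(G)
for i, comm in enumerate(communities):
    print(f"community {i}: {sorted(comm)}")

# Centrality highlights the main nodes; clustering measures local cohesion
centrality = nx.betweenness_centrality(G)
print("most central node:", max(centrality, key=centrality.get))
print("average clustering:", round(nx.average_clustering(G), 3))
```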

Relevance: 40.00%

Publisher:

Abstract:

Recent developments in automation, robotics and artificial intelligence have pushed these technologies into wider use in recent years, and nowadays driverless transport systems are already state of the art on certain legs of transportation. This has prompted the maritime industry to join the advancement. The case organisation, the AAWA initiative, is a joint industry-academia research consortium with the objective of developing readiness for the first commercial autonomous solutions, exploiting state-of-the-art autonomous and remote technology. The initiative develops both autonomous and remote operation technology for navigation, machinery, and all on-board operating systems. The aim of this study is to develop a model with which to estimate and forecast operational costs, and thus enable comparisons between manned and autonomous cargo vessels. The building process of the model is also described and discussed. Furthermore, the model aims to track and identify the critical success factors of the chosen ship design, and to enable monitoring and tracking of the incurred operational costs as the life cycle of the vessel progresses. The study adopts the constructive research approach, as the aim is to develop a construct to meet the needs of a case organisation. Data have been collected through discussions and meetings with consortium members and researchers, as well as through written and internal communications material. The model itself is built using activity-based life cycle costing, which enables realistic cost estimation and forecasting as well as the identification of critical success factors, thanks to the process orientation adopted from activity-based costing and the statistical nature of Monte Carlo simulation techniques. As the model was able to meet the multiple aims set for it, and the case organisation was satisfied with it, it could be argued that activity-based life cycle costing is the method with which to conduct cost estimation and forecasting in the case of autonomous cargo vessels. The model was able to perform the cost analysis and forecasting, as well as to trace the critical success factors. Later on, it also enabled, albeit hypothetically, monitoring and tracking of the incurred costs. By collecting costs in this way, it was argued that the activity-based LCC model is able to facilitate learning from, and continuous improvement of, the autonomous vessel. As for the building process of the model, an individual approach was chosen, while still using the implementation and model-building steps presented in the existing literature. This was due to two factors: the nature of the model and, perhaps even more importantly, the nature of the case organisation. Furthermore, the loosely organised network structure means that knowing the case organisation and its aims is of great importance when conducting constructive research.
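To make the idea of combining activity-based costing with Monte Carlo simulation concrete, here is a minimal sketch under invented assumptions; the activity names, cost figures, distributions and vessel lifetime are placeholders and not the AAWA model.

```python
# Minimal sketch of activity-based life cycle costing with Monte Carlo sampling.
import numpy as np

rng = np.random.default_rng(42)
N = 10_000          # Monte Carlo runs
YEARS = 25          # assumed vessel life cycle

# Each activity: (mean annual cost, relative uncertainty) -- placeholder values
activities = {
    "remote_operation_centre": (400_000, 0.15),
    "maintenance":             (300_000, 0.30),
    "insurance":               (150_000, 0.10),
    "port_and_fairway_fees":   (200_000, 0.05),
}

totals = np.zeros(N)
for name, (mean, rel_sd) in activities.items():
    annual = rng.normal(mean, rel_sd * mean, size=N)   # sampled annual cost per run
    totals += annual * YEARS

print(f"expected life cycle cost: {totals.mean():,.0f}")
print(f"5th-95th percentile: {np.percentile(totals, 5):,.0f} - {np.percentile(totals, 95):,.0f}")
```

Because the cost is accumulated activity by activity, the per-activity contributions can also be inspected to flag candidate critical success factors.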

Relevance: 40.00%

Publisher:

Abstract:

People, animals and the environment can be exposed to multiple chemicals at once from a variety of sources, but current risk assessment is usually carried out one chemical substance at a time. In human health risk assessment, ingestion of food is considered a major route of exposure to many contaminants, namely mycotoxins, a wide group of fungal secondary metabolites known to potentially cause toxic and carcinogenic outcomes. Mycotoxins are commonly found in a variety of foods, including those intended for consumption by infants and young children, and have been found in processed cereal-based foods available on the Portuguese market. The use of mathematical models, including probabilistic approaches using Monte Carlo simulations, is a prominent issue in human health risk assessment in general and in mycotoxin exposure assessment in particular. The present study aims to characterize, for the first time, the risk associated with the exposure of Portuguese children to single and multiple mycotoxins present in processed cereal-based foods (CBF). Food consumption data for Portuguese children (0-3 years old, n=103) were collected using a 3-day food diary. Contamination data concerned the quantification of 12 mycotoxins (aflatoxins, ochratoxin A, fumonisins and trichothecenes) in 20 CBF samples marketed in 2014 and 2015 in Lisbon; samples were analyzed by HPLC-FLD, LC-MS/MS and GC-MS. Daily exposure of children to mycotoxins was estimated using deterministic and probabilistic approaches. Different strategies were used to treat the left-censored data. For aflatoxins, as carcinogenic compounds, the margin of exposure (MoE) was calculated as the ratio of the BMDL (benchmark dose lower confidence limit) to the aflatoxin exposure; the magnitude of the MoE gives an indication of the risk level. For the remaining mycotoxins, the exposure output was compared to the tolerable daily intake (TDI) reference values in order to calculate hazard quotients (the ratio between exposure and a reference dose, HQ). For the cumulative risk assessment of multiple mycotoxins, the concentration addition (CA) concept was used: the combined margin of exposure (MoET) and the hazard index (HI) were calculated for aflatoxins and for the remaining mycotoxins, respectively. 71% of the analyzed CBF samples were contaminated with mycotoxins (at values below the legal limits) and approximately 56% of the studied children consumed CBF at least once during the 3-day period. Preliminary results showed that children's exposure to single mycotoxins present in CBF was below the TDI. Aflatoxin MoE and MoET values revealed a reduced potential risk from exposure through consumption of CBF (with values around 10,000 or more). HQ and HI values for the remaining mycotoxins were below 1. Children are a particularly vulnerable population group with respect to food contaminants, and the present results point to an urgent need to establish legal limits and control strategies regarding the presence of multiple mycotoxins in children's foods in order to protect their health. The development of packaging materials with antifungal properties is a possible solution to control the growth of moulds and consequently to reduce mycotoxin production, contributing to guarantee the quality and safety of foods intended for children's consumption.
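The risk metrics named above reduce to simple ratios, shown in the sketch below with entirely fictitious exposure, BMDL and TDI values (none of the study's data are reproduced here).

```python
# Minimal sketch of MoE, HQ and HI calculations (all numbers are placeholders).
AFLATOXIN_BMDL = 170.0     # ng/kg bw/day, illustrative benchmark dose lower bound
exposures = {              # hypothetical daily exposures, ng/kg bw/day
    "aflatoxin_total": 0.5,
    "ochratoxin_a": 2.0,
    "don": 150.0,
}
tdi = {                    # hypothetical tolerable daily intakes, ng/kg bw/day
    "ochratoxin_a": 17.0,
    "don": 1000.0,
}

# Margin of exposure for the carcinogenic aflatoxins (larger MoE = lower concern)
moe = AFLATOXIN_BMDL / exposures["aflatoxin_total"]

# Hazard quotients and hazard index for the remaining mycotoxins
hq = {m: exposures[m] / tdi[m] for m in tdi}
hi = sum(hq.values())      # concentration-addition assumption

print(f"MoE (aflatoxins): {moe:.0f}")
print(f"HQ: {hq}, HI: {hi:.2f}  ({'below' if hi < 1 else 'above'} the level of concern)")
```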

Relevance: 40.00%

Publisher:

Abstract:

Title of thesis: EXAMINING THE IMPLEMENTATION CHALLENGES OF PROJECT-BASED LEARNING: A CASE STUDY. Stefan Frederick Brooks, Master of Education, 2016. Thesis directed by: Professor and Chair Francine Hultgren, Teaching and Learning, Policy and Leadership Department. Project-based learning (PjBL) is a common instructional strategy to consider for educators, scholars, and advocates who focus on education reform. Previous research on PjBL has focused on its effectiveness, but a limited amount of research exists on its implementation challenges. This exploratory case study examines an attempted project-based learning implementation in one chemistry classroom at a private school that fully supports PjBL for most subjects, with limited use in mathematics. During the course of the study, the teacher used a modified version of PjBL. Specifically, he implemented some of the elements of PjBL, such as a driving theme and a public presentation of projects, with the support of traditional instructional methods due to the context of the classroom. The findings of this study emphasize the teacher's experience with implementing some of the PjBL components and how the inherent implementation challenges affected his practice.

Relevance: 40.00%

Publisher:

Abstract:

This study compares two simulation techniques: Discrete Event Simulation (DES) and Agent Based Simulation (ABS). DES is one of the best-known simulation techniques in Operational Research. Recently, another technique, ABS, has emerged. One of the qualities of ABS is that it helps to gain a better understanding of complex systems that involve the interaction of people with their environment, as it allows the modelling of concepts like autonomy and pro-activeness, which are important attributes to consider. Although there is a lot of literature relating to DES and ABS, we have found none that focuses on exploring the capability of both in tackling the human behaviour issues relating to queuing time and customer satisfaction in the retail sector. Therefore, the objective of this study is to identify empirically the differences between these simulation techniques by simulating the potential economic benefits of introducing new policies in a department store. To apply the new strategy, the behaviour of consumers in a retail store will be modelled using the DES and ABS approaches and the results will be compared. We aim to understand which simulation technique is better suited to human behaviour modelling by investigating the capability of both techniques to predict the best solution for an organisation when introducing new management practices. Our main concern is to maximise customer satisfaction, for example by minimising waiting times for the different services provided.
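For orientation only, the following is a minimal DES sketch of a single service point with customer waiting times, written with the SimPy library; arrival and service rates, the number of servers and the run length are invented and do not describe the department store studied here. The ABS counterpart would instead model each customer as an autonomous agent with its own decision rules.

```python
# Minimal discrete event simulation of a service queue (illustrative parameters).
import random
import statistics
import simpy

wait_times = []

def customer(env, counter):
    arrive = env.now
    with counter.request() as req:          # queue for a free server
        yield req
        wait_times.append(env.now - arrive) # record waiting time
        yield env.timeout(random.expovariate(1 / 4.0))  # service time, mean 4 min

def arrivals(env, counter):
    while True:
        yield env.timeout(random.expovariate(1 / 2.0))  # inter-arrival, mean 2 min
        env.process(customer(env, counter))

env = simpy.Environment()
counter = simpy.Resource(env, capacity=2)   # two servers
env.process(arrivals(env, counter))
env.run(until=480)                          # one 8-hour day, in minutes
print(f"mean wait: {statistics.mean(wait_times):.1f} min over {len(wait_times)} customers")
```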

Relevance: 40.00%

Publisher:

Abstract:

With the ever-growing number of connected sensors (IoT), making sense of sensed data becomes even more important. Pervasive computing is a key enabler for sustainable solutions; prominent examples are smart energy systems and decision support systems. A key feature of pervasive systems is situation awareness, which allows a system to thoroughly understand its environment. It is based on external interpretation of data and thus relies on expert knowledge. Due to the distinct nature of situations in different domains and applications, the development of situation-aware applications remains a complex process. This thesis is concerned with a general framework for situation awareness which simplifies the development of such applications. It is based on the Situation Theory Ontology to provide a foundation for situation modelling which allows knowledge reuse. Concepts of Situation Theory are mapped to Context Space Theory, which is used for situation reasoning, and Situation Spaces in the Context Space are automatically generated from the defined knowledge. For the acquisition of sensor data, the IoT standards O-MI/O-DF are integrated into the framework. These allow a peer-to-peer data exchange between data publishers and the proposed framework and thus a platform-independent subscription to sensed data. The framework is then applied to a use case to reduce food waste. The use case validates the applicability of the framework and furthermore serves as a showcase for a pervasive system contributing to sustainability goals. Leading institutions, e.g. the United Nations, stress the need for a more resource-efficient society and acknowledge the capability of ICT systems. The use case scenario is based on a smart neighbourhood in which the system recommends the most efficient use of food items through situation awareness in order to reduce food waste at the consumption stage.
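A very loose illustration of Context Space-style situation reasoning is sketched below: a situation is represented as acceptable ranges over context attributes, and an observed context state is checked against it. The attribute names, ranges and the "food item at risk" situation are invented for illustration and are not taken from the thesis or the O-MI/O-DF specifications.

```python
# Illustrative sketch: a situation space as ranges over context attributes.
from dataclasses import dataclass

@dataclass
class SituationSpace:
    name: str
    regions: dict  # attribute -> (low, high) acceptable range

    def contains(self, context_state: dict) -> bool:
        # A missing attribute makes the comparison False, i.e. "not in the situation".
        return all(low <= context_state.get(attr, float("nan")) <= high
                   for attr, (low, high) in self.regions.items())

# Hypothetical situation for the food-waste scenario
at_risk = SituationSpace(
    name="food_item_at_risk",
    regions={"days_to_expiry": (0, 2), "fridge_temperature_c": (4, 12)},
)

observed = {"days_to_expiry": 1, "fridge_temperature_c": 7}  # e.g. values from sensor feeds
if at_risk.contains(observed):
    print("recommend: consume or share this item today")
```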

Relevance: 40.00%

Publisher:

Abstract:

Background: There are limited data concerning endoscopist-directed deep sedation for endoscopic retrograde cholangiopancreatography. The aim of this study was to establish the safety of this practice and the risk factors for difficult sedation in daily practice. Patients and methods: This was a hospital-based, frequency-matched case-control study. All patients were identified from a database of 1,008 patients treated between 2014 and 2015. Cases were those with difficult sedation, a concept defined as any combination of high doses of midazolam or propofol, poor tolerance, use of reversal agents, or sedation-related adverse events. The presence of different factors was evaluated to determine whether they predicted difficult sedation. Results: One hundred and eighty-nine patients (63 cases, 126 controls) were included. Cases were classified in terms of high-dose requirements (n = 35, 55.56%), sedation-related adverse events (n = 14, 22.22%), use of reversal agents (n = 13, 20.63%) and agitation/discomfort (n = 8, 12.7%). Concerning adverse events, the total rate was 1.39%, including clinically relevant hypoxemia (n = 11), severe hypotension (n = 2) and a paradoxical reaction to midazolam (n = 1). The rate of hypoxemia was higher in patients under propofol combined with midazolam than in patients under propofol alone (2.56% vs. 0.8%, p < 0.001). Alcohol consumption (OR: 2.674 [95% CI: 1.098-6.515], p = 0.030), opioid consumption (OR: 2.713 [95% CI: 1.096-6.716], p = 0.031) and consumption of other psychoactive drugs (OR: 2.015 [95% CI: 1.017-3.991], p = 0.045) were confirmed to be independent risk factors for difficult sedation. Conclusions: Endoscopist-directed deep sedation during endoscopic retrograde cholangiopancreatography is safe. The presence of certain factors should be assessed before the procedure to identify patients at high risk of difficult sedation.
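For readers unfamiliar with the reported statistics, the sketch below shows how an unadjusted odds ratio and its 95% confidence interval are obtained from a 2x2 exposure table; the counts are invented, and the study's reported ORs are adjusted estimates rather than this simple calculation.

```python
# Illustrative unadjusted odds ratio with a Woolf-type 95% CI (fictitious counts).
import math

# exposure = alcohol consumption (hypothetical counts)
exposed_cases, unexposed_cases = 20, 43         # difficult sedation
exposed_controls, unexposed_controls = 22, 104  # uneventful sedation

odds_ratio = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
se_log_or = math.sqrt(sum(1 / n for n in
                          (exposed_cases, unexposed_cases, exposed_controls, unexposed_controls)))
lo, hi = (math.exp(math.log(odds_ratio) + z * se_log_or) for z in (-1.96, 1.96))
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```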

Relevance: 40.00%

Publisher:

Abstract:

In our research we investigate the output accuracy of discrete event simulation models and agent-based simulation models when studying human-centric complex systems. In this paper we focus on human reactive behaviour, as it is possible in both modelling approaches to implement human reactive behaviour in the model using standard methods. As a case study we have chosen the retail sector, and in particular the operation of the fitting room in the womenswear department of a large UK department store. In our case study we looked at ways of determining the efficiency of implementing new management policies for the fitting room operation by modelling the reactive behaviour of staff and customers of the department. First, we carried out a validation experiment in which we compared the results from our models to the performance of the real system. This experiment also allowed us to establish differences in output accuracy between the two modelling methods. In a second step, a multi-scenario experiment was carried out to study the behaviour of the models when they are used for the purpose of operational improvement. Overall we have found that, for our case study example, both discrete event simulation and agent-based simulation have the same potential to support the investigation into the efficiency of implementing new management policies.
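The validation step boils down to comparing summary statistics produced by each model against observations from the real system; the sketch below uses invented numbers purely to show the shape of such a comparison.

```python
# Illustrative output-accuracy check: relative error of each model vs. the real system.
real = {"mean_wait_min": 4.2, "customers_served": 310}   # fictitious observations
des = {"mean_wait_min": 4.6, "customers_served": 298}    # fictitious DES output
abs_ = {"mean_wait_min": 4.0, "customers_served": 305}   # fictitious ABS output

def relative_error(model, observed):
    return {k: abs(model[k] - observed[k]) / observed[k] for k in observed}

for name, output in (("DES", des), ("ABS", abs_)):
    errors = relative_error(output, real)
    print(name, {k: f"{v:.1%}" for k, v in errors.items()})
```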

Relevance: 40.00%

Publisher:

Abstract:

With hundreds of millions of users reporting locations and embracing mobile technologies, Location Based Services (LBSs) are raising new challenges. In this dissertation, we address three emerging problems in location services, where geolocation data plays a central role. First, to handle the unprecedented growth of generated geolocation data, existing location services rely on geospatial database systems. However, their inability to leverage combined geographical and textual information in analytical queries (e.g. spatial similarity joins) remains an open problem. To address this, we introduce SpsJoin, a framework for computing spatial set-similarity joins. SpsJoin handles combined similarity queries that involve textual and spatial constraints simultaneously. LBSs use this system to tackle different types of problems, such as deduplication, geolocation enhancement and record linkage. We define the spatial set-similarity join problem in the general case and propose an algorithm for its efficient computation. Our solution utilizes parallel computing with MapReduce to handle scalability issues in large geospatial databases. Second, applications that use geolocation data are seldom concerned with ensuring the privacy of participating users. To motivate participation and address privacy concerns, we propose iSafe, a privacy-preserving algorithm for computing safety snapshots of co-located mobile devices as well as geosocial network users. iSafe combines geolocation data extracted from crime datasets and geosocial networks such as Yelp. In order to enhance iSafe's ability to compute safety recommendations even when crime information is incomplete or sparse, we need to identify relationships between Yelp venues and crime indices at their locations. To achieve this, we use SpsJoin on two datasets (Yelp venues and geolocated businesses) to find venues that have not been reviewed and to further compute the crime indices of their locations. Our results show a statistically significant dependence between location crime indices and Yelp features. Third, review-centered LBSs (e.g., Yelp) are increasingly becoming targets of malicious campaigns that aim to bias the public image of the represented businesses. Although Yelp actively attempts to detect and filter fraudulent reviews, our experiments showed that Yelp is still vulnerable. Fraudulent LBS information also impacts the ability of iSafe to provide correct safety values. We take steps toward addressing this problem by proposing SpiDeR, an algorithm that takes advantage of the richness of the information available in Yelp to detect abnormal review patterns. We propose a fake venue detection solution that applies SpsJoin to Yelp and U.S. housing datasets. We validate the proposed solutions using ground truth data extracted in our experiments and reviews filtered by Yelp.
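To clarify what a combined textual-and-spatial join predicate looks like, the sketch below pairs a token-set Jaccard similarity with a haversine distance threshold in a naive pairwise loop; the actual SpsJoin framework distributes and prunes this computation with MapReduce, and the thresholds and records here are made up.

```python
# Naive illustration of a spatial set-similarity join predicate (not SpsJoin itself).
import math

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def haversine_km(p, q):
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def spatial_set_similarity_join(records_a, records_b, sim_min=0.5, dist_max_km=1.0):
    """records: (id, token_set, (lat, lon)); yields pairs satisfying both constraints."""
    for id_a, tokens_a, loc_a in records_a:
        for id_b, tokens_b, loc_b in records_b:
            if (haversine_km(loc_a, loc_b) <= dist_max_km
                    and jaccard(tokens_a, tokens_b) >= sim_min):
                yield id_a, id_b

a = [("a1", {"joe", "coffee", "shop"}, (25.774, -80.193))]
b = [("b1", {"joes", "coffee", "shop"}, (25.775, -80.192))]
print(list(spatial_set_similarity_join(a, b)))   # candidate duplicate venues
```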

Relevance: 40.00%

Publisher:

Abstract:

Background: This study is part of an interactive improvement intervention aimed at facilitating empowerment-based chronic kidney care using data from persons with CKD and their family members. There are many challenges to implementing empowerment-based care, and it is therefore necessary to study the implementation process. The aim of this study was to generate knowledge regarding the implementation process of an improvement intervention of empowerment for those who require chronic kidney care. Methods: A prospective single qualitative case study was chosen to follow the process of the implementation over a two-year period. Twelve healthcare professionals were selected based on their various roles in the implementation of the improvement intervention. Data collection comprised digitally recorded project group meetings, field notes of the meetings, and individual interviews before and after the improvement project. These data were analyzed using qualitative latent content analysis. Results: Two facilitator themes emerged: Moving spirit and Encouragement. The healthcare professionals described a willingness to individualize care and to increase their professional development in the field of chronic kidney care. The implementation process was strongly reinforced by both the researchers working interactively with the staff and the project group. One theme emerged as a barrier: Limitations of the organization. Changes in the organization hindered the implementation of the intervention throughout the study period, and the lack of interplay in the organization impeded the process the most. Conclusions: The findings indicate the complexity of maintaining a sustainable and lasting implementation over a period of two years. Implementing empowerment-based care was found to be facilitated by cooperation between all involved healthcare professionals. Furthermore, long-term improvement interventions need strong encouragement from all levels of the organization to maintain engagement, even when they are initiated by the healthcare professionals themselves.

Relevance: 40.00%

Publisher:

Abstract:

Analog In-Memory Computing (AIMC) has been proposed in the context of beyond-von-Neumann architectures as a valid strategy to reduce the energy consumption and latency of internal data transfers and to improve compute efficiency. The aim of AIMC is to perform computations within the memory unit, typically leveraging the physical features of memory devices. Among resistive Non-Volatile Memories (NVMs), Phase-Change Memory (PCM) has become a promising technology due to its intrinsic capability to store multilevel data. Hence, PCM technology is currently being investigated to enhance the possibilities and the applications of AIMC. This thesis aims at exploring the potential of new PCM-based architectures as in-memory computational accelerators. In a first step, a preliminary experimental characterization of PCM devices was carried out from an AIMC perspective. PCM cell non-idealities, such as time drift, noise, and non-linearity, were studied to develop a dedicated multilevel programming algorithm. Measurement-based simulations were then employed to evaluate the feasibility of PCM-based operations in the fields of Deep Neural Networks (DNNs) and Structural Health Monitoring (SHM). Moreover, a first test chip was designed and tested to evaluate the hardware implementation of Multiply-and-Accumulate (MAC) operations employing PCM cells. This prototype experimentally demonstrates the possibility of reaching 95% MAC accuracy with circuit-level compensation of cell time drift and non-linearity. Finally, empirical circuit behaviour models were included in simulations to assess the use of this technology in specific DNN applications and to enhance the potential of this innovative computation approach.
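As a purely behavioural toy model (not the thesis's measured characterization or compensation circuit), the sketch below shows the idea of an in-memory MAC: weights stored as conductances, an Ohm's-law multiply with Kirchhoff current summation, a power-law conductance drift, and a simple global drift correction; all parameter values are invented.

```python
# Toy behavioural model of an analog MAC with drift and a naive compensation.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.uniform(0.1, 1.0, size=8)      # target weights
g0 = weights * 1e-6                          # programmed conductances (S), ideal mapping
x = rng.uniform(0.0, 0.2, size=8)            # input voltages (V)

# PCM-like non-idealities: time drift g(t) = g0 * (t/t0)^(-nu), plus read noise
t, t0, nu = 3600.0, 60.0, 0.05
g_drifted = g0 * (t / t0) ** (-nu) * (1 + rng.normal(0, 0.02, size=g0.shape))

ideal_mac = float(weights @ x)
raw_current = float(g_drifted @ x)           # analog MAC result, in amperes

# Naive global compensation: rescale by the known average drift factor
compensated = raw_current * (t / t0) ** nu / 1e-6

print(f"ideal MAC: {ideal_mac:.4f}, compensated analog MAC: {compensated:.4f}")
```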

Relevance: 40.00%

Publisher:

Abstract:

The scientific success of the LHC experiments at CERN highly depends on the availability of computing resources which efficiently store, process, and analyse the amount of data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world with high-performance networks. The LHC has an ambitious experimental program for the coming years, which includes large investments and improvements both in the hardware of the detectors and in the software and computing systems, in order to deal with the huge increase in the event rate expected from the High Luminosity LHC (HL-LHC) phase and consequently with the huge amount of data that will be produced. In recent years the role of Artificial Intelligence has become relevant in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been successfully used in many areas of HEP, like online and offline reconstruction programs, detector simulation, object reconstruction, identification, and Monte Carlo generation, and they will certainly be crucial in the HL-LHC phase. This thesis aims at contributing to a CMS R&D project regarding an ML "as a Service" solution for HEP needs (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (in terms of reading data, processing data, training ML models, and serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. The framework has been updated by adding new features in the data preprocessing phase, allowing more flexibility for the user. Since the MLaaS4HEP framework is experiment-agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case, with the aim of testing MLaaS4HEP and the contributions made in this work.
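For context, the following is a stand-alone sketch of the kind of pipeline such a service automates: read events from a ROOT file, build a feature matrix, train a model, and score it. It is not the MLaaS4HEP code; the file name, tree name and branch names are hypothetical, and uproot/scikit-learn stand in for the framework's internals.

```python
# Minimal illustrative ML pipeline over a ROOT file (hypothetical file and branches).
import uproot
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

events = uproot.open("higgs_challenge.root")["events"].arrays(library="pd")
X = events[["DER_mass_MMC", "DER_pt_h", "PRI_met"]]   # assumed feature branches
y = events["label"]                                   # assumed signal/background label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)  # training step of the pipeline
print("test accuracy:", model.score(X_te, y_te))      # serving/evaluation step
```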

Relevance: 40.00%

Publisher:

Abstract:

Global population growth reflects how humans increasingly exploit Earth's resources. Urbanization develops along with anthropization. It is estimated that nearly 60% of the world's population lives in urban areas, which symbolize the denaturalized dimension of current modernity. Cities are artificial ecosystems that suffer most from environmental issues and climate change. The Urban Heat Island (UHI) effect is a common microclimatic phenomenon affecting cities, which causes considerable differences between urban and rural temperatures. Among the driving factors, the lack of vegetation in urban settlements can harm both humans and the environment (disease, heat-wave deaths, biodiversity loss, and so on). As the world continues to urbanize, sustainable development increasingly depends on successful management of urban areas. To enhance cities' resilience, Nature-based Solutions (NbSs) are defined as an umbrella concept that encompasses a wide range of ecosystem-based approaches and actions for climate change adaptation (CCA) and disaster risk reduction (DRR). This paper analyzes a 15-day study of air temperature trends carried out in Isla, a small locality in the Maltese archipelago, and proposes scenarios characterized by Nature-based Solutions to mitigate the Urban Heat Island effect that affects the Mediterranean city. The results demonstrate that in some areas where vegetation is present, lower temperatures are recorded than in areas where vegetation is absent or scarce. It also appeared that in one location the specific type of vegetation does not contribute to mitigating high temperatures, whereas in another, different environmental parameters can influence the measurements. Among the case-specific Nature-based Solutions proposed are vertical greening (green walls, green façades, ground-based greening, etc.), tree lines, green canopies, and green roofs.
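The core of such a comparison is a simple aggregation of the temperature series by site type; the sketch below uses randomly generated values over a 15-day window, not the measurements collected in Isla.

```python
# Illustrative comparison of air temperature at a vegetated vs. a paved site (fake data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
days = pd.date_range("2023-07-01", periods=15, freq="D")
df = pd.DataFrame({
    "date": np.tile(days, 2),
    "site": ["vegetated"] * 15 + ["paved"] * 15,
    "t_air_c": np.concatenate([rng.normal(29.5, 1.0, 15), rng.normal(31.2, 1.0, 15)]),
})

summary = df.groupby("site")["t_air_c"].agg(["mean", "max"]).round(1)
print(summary)
print("difference (paved - vegetated):",
      round(summary.loc["paved", "mean"] - summary.loc["vegetated", "mean"], 1), "degC")
```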

Relevance: 30.00%

Publisher:

Abstract:

A retrospective case-control study based on craniometric evaluation was performed to evaluate the incidence of basilar invagination (BI). Patients with symptomatic tonsillar herniation treated surgically had their craniometric parameters evaluated on CT scan reconstructions before surgery. BI was diagnosed when the tip of the odontoid process extended above Chamberlain's line by more than one of three thresholds found in the literature: 2, 5 or 6.6 mm. In the surgical group (SU), the mean distance of the tip of the odontoid process above Chamberlain's line was 12 mm, versus 1.2 mm in the control (CO) group (p<0.0001). The number of patients with BI according to the threshold used (2, 5 or 6.6 mm) was 19 (95%), 16 (80%) and 15 (75%), respectively, in the SU group, and 15 (37%), 4 (10%) and 2 (5%) in the CO group.
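The threshold-based classification described above amounts to counting, for each cut-off, how many patients have the odontoid tip more than that distance above Chamberlain's line; the sketch below uses invented measurements, with only the 2 / 5 / 6.6 mm thresholds taken from the text.

```python
# Illustrative threshold-based BI classification (fictitious measurements, in mm).
thresholds_mm = (2.0, 5.0, 6.6)
odontoid_above_chamberlain_mm = [12.4, 8.1, 0.5, 6.9, 3.2, -1.0, 11.7]  # hypothetical patients

for t in thresholds_mm:
    n_bi = sum(d > t for d in odontoid_above_chamberlain_mm)
    print(f"threshold {t} mm: {n_bi}/{len(odontoid_above_chamberlain_mm)} classified as BI")
```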