978 results for Aggregate Programming Spatial Computing Scafi Alchemist


Relevance: 30.00%

Publisher:

Abstract:

Soil is a complex heterogeneous system comprising highly variable and dynamic micro-habitats that have significant impacts on the growth and activity of resident microbiota. A question addressed in this research is how soil structure affects the temporal dynamics and spatial distribution of bacteria. Using repacked microcosms, the effect of bulk density, aggregate size and water content on the growth and distribution of introduced Pseudomonas fluorescens and Bacillus subtilis bacteria was determined. Soil bulk density and aggregate sizes were altered to manipulate the characteristics of the pore volume where bacteria reside and through which the distribution of solutes and nutrients is controlled. X-ray CT was used to characterise the pore geometry of repacked soil microcosms. Soil porosity, connectivity and soil-pore interface area declined with increasing bulk density. In samples differing in pore geometry, the effect of that geometry on the growth and extent of spread of introduced bacteria was investigated. The growth rate of bacteria decreased with increasing bulk density, consistent with a significant difference in pore geometry. To measure the ability of bacteria to spread through soil, placement experiments were developed. Bacteria were capable of spreading several centimetres through soil; spread was faster and extended further in soil with larger and better connected pore volumes. To study the spatial distribution in detail, a methodology was developed combining X-ray microtomography, to characterise the soil structure, with fluorescence microscopy, to visualize and quantify bacteria in soil sections. The influence of pore characteristics on the distribution of bacteria was analysed at macro- and microscales. Soil porosity, connectivity and the soil-pore interface influenced bacterial distribution only at the macroscale. The method developed was applied to investigate the effect of soil pore characteristics on the extent of spread of bacteria introduced locally towards a C source in soil. The soil-pore interface influenced the spread and colonization of bacteria; consequently, higher bacterial densities were found in soil with higher pore volumes. The results of this work therefore showed that pore geometry affects the growth and spread of bacteria in soil. The method developed showed how the thin-sectioning technique can be combined with 3D X-ray CT to visualize bacterial colonization of a 3D pore volume. This novel combination of methods is a significant step towards a full mechanistic understanding of microbial dynamics in structured soils.
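
As an illustration of the image-analysis step described above, the sketch below (a minimal, hypothetical example, not the thesis pipeline) shows how descriptors such as porosity, connectivity and soil-pore interface area could be derived from a segmented X-ray CT volume; the array `pores` and the voxel size are assumed inputs.

```python
import numpy as np
from scipy import ndimage

def pore_geometry(pores: np.ndarray, voxel_size_mm: float = 0.05) -> dict:
    """Simple descriptors of a binary 3D pore volume (True = pore, False = solid)."""
    porosity = pores.sum() / pores.size

    # Connectivity proxy: fraction of pore voxels in the largest connected pore cluster.
    labels, n_clusters = ndimage.label(pores)
    if n_clusters:
        cluster_sizes = ndimage.sum(pores, labels, index=range(1, n_clusters + 1))
        connectivity = float(max(cluster_sizes)) / float(pores.sum())
    else:
        connectivity = 0.0

    # Soil-pore interface: count voxel faces where a pore voxel touches a solid voxel.
    interface_faces = 0
    for axis in range(3):
        swapped = np.swapaxes(pores, 0, axis)
        interface_faces += int(np.count_nonzero(swapped[1:] != swapped[:-1]))
    interface_area_mm2 = interface_faces * voxel_size_mm ** 2

    return {"porosity": float(porosity),
            "largest_cluster_fraction": connectivity,
            "interface_area_mm2": interface_area_mm2}

# Toy usage on a random volume; a real input would be a segmented CT image.
volume = np.random.default_rng(0).random((64, 64, 64)) < 0.3
print(pore_geometry(volume))
```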

Relevance: 30.00%

Publisher:

Abstract:

While programming in a relational framework has much to offer over the functional style in terms of expressiveness, computing with relations is less efficient, and more semantically troublesome. In this paper we propose a novel blend of the functional and relational styles. We identify a class of "causal relations", which inherit some of the bi-directionality properties of relations, but retain the efficiency and semantic foundations of the functional style.
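
The sketch below is a minimal, hypothetical illustration of the idea (not the paper's formalism): a "causal relation" represented as a pair of ordinary functions that can be run in either direction, keeping the efficiency of the functional style while recovering some of the bidirectionality of relations. The class name and the temperature example are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CausalRelation:
    """A relation packaged as two efficiently executable directions."""
    forward: Callable[[float], float]
    backward: Callable[[float], float]

    def __call__(self, x: float) -> float:      # run left-to-right
        return self.forward(x)

    def inverse(self, y: float) -> float:       # run right-to-left
        return self.backward(y)

# Celsius <-> Fahrenheit as a single bidirectional definition.
c_to_f = CausalRelation(forward=lambda c: c * 9 / 5 + 32,
                        backward=lambda f: (f - 32) * 5 / 9)

assert c_to_f(100.0) == 212.0
assert c_to_f.inverse(212.0) == 100.0
```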

Relevance: 30.00%

Publisher:

Abstract:

By considering the spatial character of sensor-based interactive systems, this paper investigates how discussions of seams and seamlessness in ubiquitous computing neglect the complex spatial character that is constructed as a side-effect of deploying sensor technology within a space. Through a study of a torch ('flashlight')-based interface, we develop a framework for analysing this spatial character generated by sensor technology. This framework is then used to analyse and compare a range of other systems in which sensor technology is used, in order to develop a design spectrum that contrasts the revealing and hiding of a system's structure to users. Finally, we discuss the implications for interfaces situated in public spaces and consider the benefits of hiding structure from users.

Relevance: 30.00%

Publisher:

Abstract:

Accurate estimation of road pavement geometry and layer material properties through the use of proper nondestructive testing and sensor technologies is essential for evaluating a pavement's structural condition and determining options for maintenance and rehabilitation. For these purposes, the deflection basins produced by the nondestructive Falling Weight Deflectometer (FWD) test are commonly used. The FWD test drops weights on the pavement to simulate traffic loads and measures the resulting deflection basins. Backcalculation of pavement geometry and layer properties from FWD deflections is a difficult inverse problem, and its solution with conventional mathematical methods is often challenging due to the ill-posed nature of the problem. In this dissertation, a hybrid algorithm was developed to seek robust and fast solutions to this inverse problem. The algorithm is based on soft computing techniques, mainly Artificial Neural Networks (ANNs) and Genetic Algorithms (GAs), as well as numerical analysis techniques to properly simulate the geomechanical system. The widely used layered pavement analysis program ILLI-PAVE was employed to analyze flexible pavements of various types, including full-depth asphalt and conventional flexible pavements, built on either lime-stabilized soils or untreated subgrade. Nonlinear properties of the subgrade soil and the base course aggregate, as transportation geomaterials, were also considered. A computer program, the Soft Computing Based System Identifier (SOFTSYS), was developed. In SOFTSYS, ANNs were used as surrogate models to provide faster solutions than the nonlinear finite element program ILLI-PAVE. The deflections obtained from FWD tests in the field were matched with the predictions obtained from the numerical simulations to develop the SOFTSYS models. The solution of the inverse problem for multi-layered pavements is computationally hard to achieve and is often not feasible due to field variability and the quality of the collected data. The primary difficulty in the analysis arises from the substantial increase in the degree of non-uniqueness of the mapping from the pavement layer parameters to the FWD deflections. The insensitivity of some layer properties lowered SOFTSYS model performance. Still, the SOFTSYS models were shown to work effectively with the synthetic data obtained from ILLI-PAVE finite element solutions. In general, SOFTSYS solutions very closely matched the ILLI-PAVE mechanistic pavement analysis results. For SOFTSYS validation, field-collected FWD data were successfully used to predict pavement layer thicknesses and layer moduli of in-service flexible pavements. Some of the very promising SOFTSYS results indicated average absolute errors on the order of 2%, 7%, and 4% for the Hot Mix Asphalt (HMA) thickness estimation of full-depth asphalt pavements, full-depth pavements on lime-stabilized soils, and conventional flexible pavements, respectively. The field validations of SOFTSYS also produced meaningful results: the thickness data obtained from Ground Penetrating Radar testing matched reasonably well with predictions from the SOFTSYS models. The differences observed in the HMA and lime-stabilized soil layer thicknesses were attributed to deflection data variability in the FWD tests.
The backcalculated asphalt concrete layer thickness results matched better for full-depth asphalt pavements built on lime-stabilized soils than for conventional flexible pavements. Overall, SOFTSYS was capable of producing reliable thickness estimates despite the variability of field-constructed asphalt layer thicknesses.
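
A schematic sketch of the backcalculation loop described above follows: an ANN-style surrogate predicts FWD deflection basins from candidate layer parameters, and a genetic algorithm searches for the parameters whose predicted basin best matches the measured one. This is not SOFTSYS itself; `surrogate_model`, the parameter bounds and the synthetic "measured" basin are hypothetical stand-ins for the ANN trained on ILLI-PAVE runs.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_model(params: np.ndarray) -> np.ndarray:
    """Stand-in for the trained ANN surrogate: layer parameters -> deflection basin."""
    thickness_mm, e_hma_mpa, e_subgrade_mpa = params
    offsets = np.arange(1, 8)                        # seven FWD sensor offsets
    return 1000.0 / (0.1 * thickness_mm + 1e-3 * e_hma_mpa + 1e-2 * e_subgrade_mpa * offsets)

def fitness(params: np.ndarray, measured: np.ndarray) -> float:
    return float(np.mean((surrogate_model(params) - measured) ** 2))

def backcalculate(measured, bounds, pop_size=60, generations=200):
    lo, hi = np.array(bounds, dtype=float).T
    population = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(generations):
        errors = np.array([fitness(p, measured) for p in population])
        parents = population[np.argsort(errors)[: pop_size // 2]]        # selection
        n_children = pop_size - len(parents)
        p1 = parents[rng.integers(0, len(parents), n_children)]
        p2 = parents[rng.integers(0, len(parents), n_children)]
        children = np.where(rng.random(p1.shape) < 0.5, p1, p2)          # uniform crossover
        children += rng.normal(0.0, 0.02, children.shape) * (hi - lo)    # mutation
        population = np.clip(np.vstack([parents, children]), lo, hi)
    return min(population, key=lambda p: fitness(p, measured))

bounds = [(75, 400), (1000, 20000), (20, 200)]   # HMA thickness (mm), HMA and subgrade moduli (MPa)
measured = surrogate_model(np.array([200.0, 8000.0, 80.0]))   # synthetic "field" basin
print(backcalculate(measured, bounds))           # parameters whose basin matches the measured one
```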

Relevance: 30.00%

Publisher:

Abstract:

In contemporary societies, higher education must shape individuals who are able to solve problems in a workable and simple manner; therefore, a multidisciplinary view of problems, with insights from disciplines such as psychology, mathematics or computer science, becomes mandatory. Undeniably, the great challenge for teachers is to provide comprehensive training in General Chemistry with high standards of quality, aiming not only at promoting students' academic success, but also at understanding the competences/skills they will require in their future practice. Thus, this work will focus on the development of an intelligent system to assess the Quality-of-General-Chemistry-Learning, based on factors related to the subject, the teachers and the students.

Relevance: 30.00%

Publisher:

Abstract:

Restoring the native vegetation is the most effective way to regenerate soil health. Under these conditions, vegetation cover in areas with degraded soils may be better sustained if the soil is amended with an external source of organic matter. The addition of organic materials to soils also increases infiltration rates and reduces erosion rates; these factors contribute to increased available water and to successful, sustainable land management. The goal of this study was to analyze the effect of various organic amendments on the aggregate stability of soils in afforested plots. An experimental paired-plot layout was established in southern Spain (homogeneous slope gradient: 7.5%; aspect: N170). Five amendments were applied in an experimental set of plots: straw mulch; mulch of chipped Aleppo pine (Pinus halepensis L.) branches; TerraCotten hydroabsorbent polymers; sewage sludge; and sheep manure; plus an untreated control. Plots were afforested following the same spatial pattern, and amendments were mixed with the soil at a rate of 10 Mg ha-1. The vegetation was planted in a grid pattern with 0.5 m between plants in each plot. During the afforestation process the soil was tilled to a depth of 25 cm from the surface. Soil from the afforested plots was sampled at: i) 6 months; ii) 12 months; iii) 18 months; and iv) 24 months post-afforestation. The sampling strategy for each plot involved the collection of 4 disturbed soil samples taken from the surface (0–10 cm depth). The stability of aggregates was measured by wet-sieving. Regarding soil aggregate stability, the percentage of stable aggregates increased slightly in all treatments relative to the control. Specifically, the differences were recorded in the fraction of macroaggregates (≥ 0.250 mm). The largest increases were associated with straw mulch, pine mulch and sludge. Similar results were recorded for the soil organic carbon content. Regardless of soil management, after six months no significant differences in microaggregates were found relative to the control plots. These results show an increase in the stability of macroaggregates when soil is amended with sludge, pine mulch and straw mulch. This is attributed to an increase in the number of cementing agents, because: (i) the application of pine mulch, straw and sludge released carbohydrates into the soil; and (ii) this favored the development of a protective vegetation cover, which increased the number of roots in the soil and the organic contribution to it.
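
As a small worked example of the wet-sieving measure mentioned above, the sketch below computes a percentage of water-stable aggregates from hypothetical masses; the sand correction and the numbers are illustrative assumptions, not the study's data.

```python
def water_stable_aggregates(initial_g: float, retained_g: float, sand_g: float = 0.0) -> float:
    """Percentage of water-stable aggregates (sand-corrected when sand_g > 0)."""
    return 100.0 * (retained_g - sand_g) / (initial_g - sand_g)

# e.g. 4.0 g of aggregates, 2.6 g retained after wet sieving, 0.4 g coarse sand
print(water_stable_aggregates(4.0, 2.6, 0.4))   # -> ~61.1 % stable aggregates
```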

Relevance: 30.00%

Publisher:

Abstract:

With hundreds of millions of users reporting locations and embracing mobile technologies, Location Based Services (LBSs) are raising new challenges. In this dissertation, we address three emerging problems in location services, where geolocation data plays a central role. First, to handle the unprecedented growth of generated geolocation data, existing location services rely on geospatial database systems. However, their inability to leverage combined geographical and textual information in analytical queries (e.g., spatial similarity joins) remains an open problem. To address this, we introduce SpsJoin, a framework for computing spatial set-similarity joins. SpsJoin handles combined similarity queries that involve textual and spatial constraints simultaneously. LBSs use this system to tackle different types of problems, such as deduplication, geolocation enhancement and record linkage. We define the spatial set-similarity join problem in a general case and propose an algorithm for its efficient computation. Our solution utilizes parallel computing with MapReduce to handle scalability issues in large geospatial databases. Second, applications that use geolocation data are seldom concerned with ensuring the privacy of participating users. To motivate participation and address privacy concerns, we propose iSafe, a privacy-preserving algorithm for computing safety snapshots of co-located mobile devices as well as geosocial network users. iSafe combines geolocation data extracted from crime datasets and geosocial networks such as Yelp. In order to enhance iSafe's ability to compute safety recommendations, even when crime information is incomplete or sparse, we need to identify relationships between Yelp venues and crime indices at their locations. To achieve this, we use SpsJoin on two datasets (Yelp venues and geolocated businesses) to find venues that have not been reviewed and to further compute the crime indices of their locations. Our results show a statistically significant dependence between location crime indices and Yelp features. Third, review-centered LBSs (e.g., Yelp) are increasingly becoming targets of malicious campaigns that aim to bias the public image of represented businesses. Although Yelp actively attempts to detect and filter fraudulent reviews, our experiments showed that Yelp is still vulnerable. Fraudulent LBS information also impacts the ability of iSafe to provide correct safety values. We take steps toward addressing this problem by proposing SpiDeR, an algorithm that takes advantage of the richness of information available in Yelp to detect abnormal review patterns. We propose a fake venue detection solution that applies SpsJoin on Yelp and U.S. housing datasets. We validate the proposed solutions using ground truth data extracted by our experiments and reviews filtered by Yelp.
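
The sketch below is a minimal, single-machine illustration of the combined predicate behind a spatial set-similarity join such as SpsJoin: two records match when their token sets are similar enough (Jaccard) and their locations are close enough. The thresholds, the toy records and the brute-force nested loop are illustrative; the actual system distributes the computation with MapReduce and prunes candidate pairs.

```python
from math import radians, sin, cos, asin, sqrt

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def haversine_km(p, q) -> float:
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def spatial_set_similarity_join(left, right, sim_threshold=0.6, max_km=1.0):
    """Brute-force join: left/right are lists of (id, token_set, (lat, lon))."""
    for lid, ltokens, lloc in left:
        for rid, rtokens, rloc in right:
            if jaccard(ltokens, rtokens) >= sim_threshold and haversine_km(lloc, rloc) <= max_km:
                yield lid, rid

venues = [("v1", {"joes", "pizza"}, (25.77, -80.19))]
businesses = [("b7", {"joes", "pizza", "inc"}, (25.771, -80.191))]
print(list(spatial_set_similarity_join(venues, businesses)))   # -> [('v1', 'b7')]
```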

Relevance: 30.00%

Publisher:

Abstract:

Code patterns, including programming patterns and design patterns, are good references for programming language feature improvement and software re-engineering. However, to our knowledge, no existing research has attempted to detect code patterns based on code clone detection technology. In this study, we build upon previous work and propose to detect and analyze code patterns from a collection of open source projects using NiPAT technology. Because design patterns are most closely associated with object-oriented languages, we choose Java and Python projects to conduct our study. The tool we use for detecting patterns is NiPAT, a pattern detection tool originally developed for the TXL programming language and based on the NiCad clone detector. We extend NiPAT to the Java and Python programming languages. Then, we try to identify all the patterns from the pattern report and classify them into several categories. At the end of the study, we analyze all the patterns and compare the differences between Java and Python patterns.
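
As a toy illustration of the clone-detection idea that this pattern mining builds on (not NiPAT or NiCad themselves), the sketch below normalizes identifiers and literals in two code fragments and measures how similar their token sequences are; highly similar fragments would then be grouped as a candidate pattern.

```python
import re
from difflib import SequenceMatcher

KEYWORDS = {"def", "return", "if", "else", "for", "while", "in"}

def normalize(code: str) -> list:
    """Abstract identifiers and literals, keeping keywords and punctuation as structure."""
    out = []
    for tok in re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", code):
        if tok in KEYWORDS:
            out.append(tok)
        elif tok[0].isdigit():
            out.append("NUM")
        elif tok[0].isalpha() or tok[0] == "_":
            out.append("ID")
        else:
            out.append(tok)
    return out

def similarity(frag_a: str, frag_b: str) -> float:
    return SequenceMatcher(None, normalize(frag_a), normalize(frag_b)).ratio()

a = "def total(xs):\n    s = 0\n    for x in xs:\n        s = s + x\n    return s"
b = "def acc(vals):\n    r = 0\n    for v in vals:\n        r = r + v\n    return r"
print(similarity(a, b))   # close to 1.0: the two fragments form a candidate pattern
```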

Relevance: 30.00%

Publisher:

Abstract:

Acute Coronary Syndrome (ACS) affects a broad and heterogeneous set of human beings and is assumed to be a serious diagnosis and risk-stratification problem. Although different tools, such as biomarkers, are available for the diagnosis and prognosis of ACS, they have to be previously evaluated and validated in different scenarios and patient cohorts. Besides ensuring that a diagnosis is correct, attention should also be directed to ensuring that therapies are correctly and safely applied. Indeed, this work will focus on the development of a diagnosis decision support system in terms of its knowledge representation and reasoning mechanisms, given here as a formal framework based on Logic Programming, complemented with a problem-solving methodology anchored on Artificial Neural Networks. On the one hand, it caters for the evaluation of ACS predisposing risk and the respective Degree-of-Confidence that one has in such an assessment. On the other hand, it may be seen as a major development of Multi-Value Logics to understand the problem and one's behavior. Undeniably, the proposed model allows for an improvement of the diagnosis process, properly classifying the patients that present the pathology (sensitivity ranging from 89.7% to 90.9%) as well as the absence of ACS (specificity ranging from 88.4% to 90.2%).
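
For reference, the two reported metrics can be computed from a confusion matrix as in the small sketch below; the counts used are hypothetical, not the study's data.

```python
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: patients with the pathology that are classified as such."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: patients without the pathology that are classified as such."""
    return tn / (tn + fp)

tp, fn, tn, fp = 89, 11, 90, 10                      # illustrative counts
print(f"sensitivity = {sensitivity(tp, fn):.1%}")    # 89.0%
print(f"specificity = {specificity(tn, fp):.1%}")    # 90.0%
```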

Relevance: 30.00%

Publisher:

Abstract:

It is well known that human resources play a valuable role in sustainable organizational development. Indeed, this work will focus on the development of a decision support system to assess workers' satisfaction based on factors related to human resources management practices. The framework is built on top of a Logic Programming approach to Knowledge Representation and Reasoning, complemented with a Case-Based approach to computing. The proposed solution is unique in that it caters for the explicit treatment of incomplete, unknown, or even self-contradictory information, in either a qualitative or a quantitative setting. Furthermore, clustering methods based on similarity analysis among cases were used to distinguish and aggregate collections of historical data or knowledge in order to reduce the search space, thereby enhancing case retrieval and the overall computational process.
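
The sketch below is a minimal, hypothetical illustration of the retrieval step described above: historical cases are clustered by similarity, and a new case is compared only against the cases in its nearest cluster, reducing the search space before nearest-case retrieval. The feature encoding, cluster count and random data are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
case_base = rng.random((500, 6))            # 500 past cases, 6 normalized HR-practice features
outcomes = rng.random(500)                  # recorded satisfaction scores for each case

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(case_base)

def retrieve(new_case: np.ndarray, k: int = 3):
    """Return the k most similar past cases, searched only inside the nearest cluster."""
    cluster = kmeans.predict(new_case.reshape(1, -1))[0]
    members = np.flatnonzero(kmeans.labels_ == cluster)
    dists = np.linalg.norm(case_base[members] - new_case, axis=1)
    nearest = members[np.argsort(dists)[:k]]
    return nearest, outcomes[nearest]

idx, scores = retrieve(rng.random(6))
print(idx, scores.round(2))
```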

Relevance: 30.00%

Publisher:

Abstract:

The AntiPhospholipid Syndrome (APS) is an acquired autoimmune disorder induced by high levels of antiphospholipid antibodies that cause arterial and venous thrombosis, as well as pregnancy-related complications and morbidity, as clinical manifestations. This autoimmune hypercoagulable state, usually known as Hughes syndrome, has severe consequences for patients, being one of the main causes of thrombotic disorders and death. Therefore, prevention is required, namely being aware of how probable it is to develop this kind of syndrome. Despite the update of the antiphospholipid syndrome classification, the diagnosis remains difficult to establish. Additional research on clinically relevant antibodies and standardization of their quantification are required in order to improve antiphospholipid syndrome risk assessment. Thus, this work will focus on the development of a diagnosis decision support system in terms of a formal agenda built on a Logic Programming approach to knowledge representation and reasoning, complemented with a computational framework based on Artificial Neural Networks. The proposed model allows for improving the diagnosis, properly classifying the patients that present this pathology (sensitivity higher than 85%), as well as the absence of APS (specificity close to 95%).

Relevance: 30.00%

Publisher:

Abstract:

Internet of Things systems are pervasive systems that have evolved from cyber-physical systems to large-scale systems. Due to the number of technologies involved, their software development faces several integration challenges. Among them, the ones preventing proper integration are those related to system heterogeneity, and thus to interoperability issues. From a software engineering perspective, developers mostly experience the lack of interoperability in two phases of software development: programming and deployment. On the one hand, modern software tends to be distributed across several components, each adopting its most appropriate technology stack, pushing programmers to code in a protocol- and data-agnostic way. On the other hand, each software component should run in the most appropriate execution environment and, as a result, system architects strive to automate deployment in distributed infrastructures. This dissertation aims to improve the development process by introducing proper tools to handle certain aspects of system heterogeneity. Our effort focuses on three of these aspects and, for each of them, we propose a tool addressing the underlying challenge. The first tool handles heterogeneity at the transport and application protocol level, the second manages different data formats, while the third obtains optimal deployments. To realize the tools, we adopted a linguistic approach, i.e., we provided specific linguistic abstractions that help developers increase the expressive power of the programming language they use, writing better solutions in more straightforward ways. To validate the approach, we implemented use cases to show that the tools can be used in practice and that they help to achieve the expected level of interoperability. In conclusion, to move a step towards the realization of an integrated Internet of Things ecosystem, we target programmers and architects and propose that they use the presented tools to ease the software development process.
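
As a minimal illustration of the first of these aspects (not the dissertation's actual tools), the sketch below shows component logic written against a protocol-agnostic interface so that the same code can be bound to different transport/application protocols at deployment time; the transport classes are stubs invented for the example.

```python
from abc import ABC, abstractmethod

class Transport(ABC):
    @abstractmethod
    def send(self, endpoint: str, payload: dict) -> None: ...

class HttpTransport(Transport):
    def send(self, endpoint: str, payload: dict) -> None:
        print(f"POST {endpoint} {payload}")           # stand-in for a real HTTP client call

class MqttTransport(Transport):
    def send(self, endpoint: str, payload: dict) -> None:
        print(f"PUBLISH {endpoint} {payload}")        # stand-in for a real MQTT publish

def report_temperature(transport: Transport, reading: float) -> None:
    """Component logic stays identical regardless of the protocol bound at deployment."""
    transport.send("sensors/temperature", {"celsius": reading})

report_temperature(HttpTransport(), 21.5)
report_temperature(MqttTransport(), 21.5)
```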

Relevance: 30.00%

Publisher:

Abstract:

A High-Performance Computing (HPC) job dispatcher is a critical piece of software that assigns the finite computing resources to submitted jobs. This resource assignment over time is known as the on-line job dispatching problem in HPC systems. The fact that the problem is on-line means that solutions must be computed in real time, and the time they require cannot exceed a given threshold so as not to affect normal system functioning. In addition, a job dispatcher must deal with a lot of uncertainty: submission times, the number of requested resources, and job durations. Heuristic-based techniques have been broadly used in HPC systems, achieving (sub-)optimal solutions in a short time. However, their scheduling and resource allocation components are separated, which generates decoupled decisions that may cause a performance loss. Optimization-based techniques are less used for this problem, although they can significantly improve the performance of HPC systems at the expense of higher computation time. Nowadays, HPC systems are being used for modern applications, such as big data analytics and predictive model building, that, in general, employ many short jobs. However, this information is unknown at dispatching time, and job dispatchers need to process large numbers of jobs quickly while ensuring high Quality-of-Service (QoS) levels. Constraint Programming (CP) has been shown to be an effective approach for tackling job dispatching problems. However, state-of-the-art CP-based job dispatchers are unable to satisfy the challenges of on-line dispatching, such as generating dispatching decisions in a brief period and integrating current and past information about the hosting system. For these reasons, we propose CP-based dispatchers that are more suitable for HPC systems running modern applications, generating on-line dispatching decisions in a timely manner and making effective use of job duration predictions to improve QoS levels, especially for workloads dominated by short jobs.
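
The sketch below is a deliberately simplified, heuristic illustration of the on-line dispatching decision discussed above: at each dispatching instant, queued jobs are ordered by (possibly predicted) duration and packed onto free nodes. The CP-based dispatchers proposed in the thesis replace this greedy packing with a constraint model; the job data are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes_requested: int
    predicted_duration_s: float

def dispatch(queue: list, free_nodes: int) -> list:
    """Greedy on-line decision: favor short jobs to keep QoS (waiting time) high."""
    started = []
    for job in sorted(queue, key=lambda j: j.predicted_duration_s):
        if job.nodes_requested <= free_nodes:
            free_nodes -= job.nodes_requested
            started.append(job.name)
    return started

queue = [Job("train_model", 8, 3600), Job("analytics", 2, 120), Job("etl", 4, 300)]
print(dispatch(queue, free_nodes=10))   # short jobs first: ['analytics', 'etl']
```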

Relevance: 30.00%

Publisher:

Abstract:

Modern scientific discoveries are driven by an insatiable demand for computational resources. High-Performance Computing (HPC) systems are an aggregation of computing power that delivers considerably higher performance than a typical desktop computer can provide, in order to solve large problems in science, engineering, or business. An HPC room in a datacenter is a complex, controlled environment that hosts thousands of computing nodes consuming electrical power in the range of megawatts, which is completely transformed into heat. Although a datacenter contains sophisticated cooling systems, our studies provide quantitative evidence of thermal bottlenecks in real-life production workloads, showing the presence of significant spatial and temporal thermal and power heterogeneity. Therefore, minor thermal issues/anomalies can potentially start a chain of events that leads to an imbalance between the amount of heat generated by the computing nodes and the heat removed by the cooling system, giving rise to thermal hazards. Although thermal anomalies are rare events, timely anomaly detection/prediction is vital to avoid damage to IT and facility equipment and outages of the datacenter, with severe societal and business losses. For this reason, automated approaches to detect thermal anomalies in datacenters have considerable potential. This thesis analyzed and characterized the power and thermal behavior of a Tier0 datacenter (CINECA) during production and under abnormal thermal conditions. Then, a Deep Learning (DL)-powered thermal hazard prediction framework is proposed. The proposed models are validated against real thermal hazard events reported for the studied HPC cluster while in production. To the best of my knowledge, this thesis is the first empirical study of thermal anomaly detection and prediction techniques on a real large-scale HPC system. For this thesis, I used a large-scale dataset comprising monitoring data from tens of thousands of sensors over around 24 months, with a data collection interval of around 20 seconds.
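
As a deliberately simple illustration of the detection idea (not the thesis's Deep Learning models), the sketch below flags thermal anomalies when a reading deviates strongly from its recent rolling statistics; the synthetic trace, window and threshold are assumptions for the example.

```python
import numpy as np

def thermal_anomalies(temps: np.ndarray, window: int = 30, z_thresh: float = 4.0) -> np.ndarray:
    """Return indices where a reading deviates > z_thresh sigmas from its rolling window."""
    flags = []
    for i in range(window, len(temps)):
        ref = temps[i - window:i]
        sigma = ref.std() or 1e-6                 # avoid division by zero on flat traces
        if abs(temps[i] - ref.mean()) / sigma > z_thresh:
            flags.append(i)
    return np.array(flags, dtype=int)

rng = np.random.default_rng(7)
trace = 24.0 + 0.2 * rng.standard_normal(600)     # ~20 s samples of an inlet temperature (°C)
trace[450:460] += 6.0                             # injected hot spell
print(thermal_anomalies(trace))                   # indices around 450
```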