977 results for "Full spatial domain computation"


Relevance:

30.00%

Publisher:

Abstract:

Recent evidence has highlighted the important role that number ordering skills play in arithmetic abilities (e.g., Lyons & Beilock, 2011). In fact, Lyons et al. (2014) demonstrated that although at the start of formal mathematics education number comparison skills are the best predictors of arithmetic performance, from around the age of 10, number ordering skills become the strongest numerical predictors of arithmetic abilities. In the current study we demonstrated that number comparison and ordering skills were both significantly related to arithmetic performance in adults, and the effect size was greater in the case of ordering skills. Additionally, we found that the effect of number comparison skills on arithmetic performance was partially mediated by number ordering skills. Moreover, performance on comparison and ordering tasks involving the months of the year was also strongly correlated with arithmetic skills, and participants displayed similar (canonical or reverse) distance effects on the comparison and ordering tasks involving months as when the tasks included numbers. This suggests that the processes responsible for the link between comparison and ordering skills and arithmetic performance are not specific to the domain of numbers. Finally, a factor analysis indicated that performance on comparison and ordering tasks loaded on a factor which included performance on a number line task and self-reported spatial thinking styles. These results substantially extend previous research on the role of order processing abilities in mental arithmetic.
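As a purely illustrative aside (synthetic data and hypothetical variable names, not the authors' analysis), the kind of partial mediation reported above can be estimated as a bootstrapped indirect effect:

    # Hypothetical sketch: bootstrapped indirect effect of comparison skill on
    # arithmetic via ordering skill. Data and effect sizes are made up; this is
    # not the analysis performed in the study above.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    comparison = rng.normal(size=n)                                      # predictor
    ordering = 0.6 * comparison + rng.normal(size=n)                     # mediator
    arithmetic = 0.3 * comparison + 0.5 * ordering + rng.normal(size=n)  # outcome

    def indirect_effect(x, m, y):
        """a*b indirect effect: x -> m (path a), then m -> y controlling for x (path b)."""
        a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
        b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
        return a * b

    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)                                      # resample participants
        boot.append(indirect_effect(comparison[idx], ordering[idx], arithmetic[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")               # CI excluding 0 -> mediation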

Relevance:

30.00%

Publisher:

Abstract:

Inverse heat conduction problems (IHCPs) appear in many important scientific and technological fields. Hence the analysis, design, implementation and testing of inverse algorithms are also of great scientific and technological interest. The numerical simulation of 2-D and 3-D inverse (or even direct) problems involves a considerable amount of computation. Therefore, the investigation and exploitation of the parallel properties of such algorithms are becoming equally important. Domain decomposition (DD) methods are widely used to solve large-scale engineering problems and are well suited to exploiting the parallelism inherent in such problems.
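As a toy illustration of the domain decomposition idea only (a 1-D direct problem with a simple alternating Schwarz iteration, not the 2-D/3-D inverse algorithms studied in the work above), one might write:

    # Toy sketch: overlapping (alternating) Schwarz domain decomposition for the
    # 1-D steady heat conduction problem -u'' = f on [0, 1], u(0) = u(1) = 0.
    # Illustrative only; the work above concerns 2-D/3-D inverse problems.
    import numpy as np

    def solve_subdomain(f, h, left_bc, right_bc):
        """Direct tridiagonal solve of -u'' = f on an interior grid with Dirichlet BCs."""
        m = len(f)
        A = (np.diag(np.full(m, 2.0)) - np.diag(np.ones(m - 1), 1)
             - np.diag(np.ones(m - 1), -1)) / h**2
        rhs = f.copy()
        rhs[0] += left_bc / h**2
        rhs[-1] += right_bc / h**2
        return np.linalg.solve(A, rhs)

    n = 101                                   # global grid points including boundaries
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    f = np.ones(n)                            # uniform heat source
    u = np.zeros(n)

    # Two overlapping subdomains: interior indices 1..59 and 40..99.
    for _ in range(30):                       # Schwarz iterations
        u[1:60] = solve_subdomain(f[1:60], h, u[0], u[60])        # left subdomain
        u[40:n-1] = solve_subdomain(f[40:n-1], h, u[39], u[n-1])  # right subdomain

    exact = 0.5 * x * (1.0 - x)               # analytic solution for f = 1
    print("max error:", np.abs(u - exact).max())

Each subdomain solve is independent given the interface values, which is exactly what makes DD methods attractive for parallel execution.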

Relevance:

30.00%

Publisher:

Abstract:

Aim: The European Commission Cooperation in Science and Technology (COST) Action FA1203 "SMARTER" aims to make recommendations for the sustainable management of Ambrosia across Europe and for monitoring its efficiency and cost effectiveness. The goal of the present study is to provide a baseline for spatial and temporal variations in airborne Ambrosia pollen in Europe that can be used for the management and evaluation of this noxious plant. Location: The full range of Ambrosia artemisiifolia L. distribution over Europe (39°N–60°N; 2°W–45°E). Methods: Airborne Ambrosia pollen data for the principal flowering period of Ambrosia (August–September) recorded during a 10-year period (2004–2013) were obtained from 242 monitoring sites. The mean sum of daily average airborne Ambrosia pollen and the number of days on which Ambrosia pollen was recorded in the air were analysed. The mean and Standard Deviation (SD) were calculated regardless of the number of years included in the study period, while trends are based on those time series with 8 or more years of data. Trends were considered significant at p < 0.05. Results: There were few significant trends in the magnitude and frequency of atmospheric Ambrosia pollen (only 8% for the mean sum of daily average Ambrosia pollen concentrations and 14% for the mean number of days Ambrosia pollen was recorded in the air). Main conclusions: The direction of any trends varied locally, reflecting changes in the sources of the pollen, either in size or in distance from the monitoring station. Pollen monitoring is important for providing an early warning of the expansion of this invasive and noxious plant.
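For illustration only (made-up station data), the trend test described above amounts to something like the following sketch:

    # Illustrative sketch of the trend analysis described above: a linear trend is
    # fitted to annual Ambrosia pollen sums and reported only for series with 8 or
    # more years of data, with significance taken at p < 0.05. Data are invented.
    import numpy as np
    from scipy.stats import linregress

    annual_pollen_sum = {                        # hypothetical station record: year -> sum of daily averages
        2004: 310, 2005: 295, 2006: 350, 2007: 410, 2008: 380,
        2009: 430, 2010: 455, 2011: 470, 2013: 520,              # 2012 missing
    }

    years = np.array(sorted(annual_pollen_sum))
    values = np.array([annual_pollen_sum[y] for y in years])

    if len(years) >= 8:                          # trend only for series with 8+ years
        fit = linregress(years, values)
        mean, sd = values.mean(), values.std(ddof=1)
        significant = fit.pvalue < 0.05
        print(f"mean={mean:.0f}, SD={sd:.0f}, slope={fit.slope:.1f}/yr, "
              f"p={fit.pvalue:.3f}, significant={significant}")
    else:
        print("series too short for trend estimation")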

Relevance:

30.00%

Publisher:

Abstract:

We propose three research problems to explore the relations between trust and security in the setting of distributed computation. In the first problem, we study trust-based adversary detection in distributed consensus computation. The adversaries we consider behave arbitrarily, disobeying the consensus protocol. We propose a trust-based consensus algorithm with local and global trust evaluations. The algorithm can be abstracted using a two-layer structure, with the top layer running a trust-based consensus algorithm and the bottom layer serving as a subroutine that executes a global trust update scheme. We utilize a set of pre-trusted nodes, headers, to propagate local trust opinions throughout the network. This two-layer framework is flexible in that it can easily be extended to incorporate more complicated decision rules and global trust schemes. The first problem assumes that normal nodes are homogeneous, i.e., it is guaranteed that a normal node always behaves as it is programmed. In the second and third problems, however, we assume that nodes are heterogeneous, i.e., given a task, the probability that a node generates a correct answer varies from node to node. The adversaries considered in these two problems are workers from the open crowd who either invest little effort in the tasks assigned to them or intentionally give wrong answers to questions. In the second part of the thesis, we consider a typical crowdsourcing task that aggregates input from multiple workers as a problem in information fusion. To cope with the issue of noisy and sometimes malicious input from workers, trust is used to model workers' expertise. In a multi-domain knowledge learning task, however, using scalar-valued trust to model a worker's performance is not sufficient to reflect the worker's trustworthiness in each of the domains. To address this issue, we propose a probabilistic model to jointly infer multi-dimensional trust of workers, multi-domain properties of questions, and true labels of questions. Our model is very flexible and can be extended to incorporate metadata associated with questions. To show that, we further propose two extended models, one of which handles input tasks with real-valued features and the other of which handles tasks with text features by incorporating topic models. Our models can effectively recover the trust vectors of workers, which can be very useful for future task assignment that adapts to workers' trust. These results can be applied to the fusion of information from multiple data sources such as sensors, human input, machine learning results, or a hybrid of them. In the second subproblem, we address crowdsourcing with adversaries under logical constraints. We observe that questions are often not independent in real-life applications. Instead, there are logical relations between them. Similarly, the workers that provide answers are not independent of each other either. Answers given by workers with similar attributes tend to be correlated. Therefore, we propose a novel unified graphical model consisting of two layers. The top layer encodes domain knowledge, which allows users to express logical relations using first-order logic rules, and the bottom layer encodes a traditional crowdsourcing graphical model. Our model can be seen as a generalized probabilistic soft logic framework that encodes both logical relations and probabilistic dependencies. To solve the collective inference problem efficiently, we have devised a scalable joint inference algorithm based on the alternating direction method of multipliers.
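As a toy sketch of the flavour of such a scheme (illustrative update rule and parameters, not the algorithm defined in the thesis), normal nodes can simply discount neighbours whose global trust falls below a threshold:

    # Toy sketch of a trust-weighted consensus iteration: each normal node averages
    # the values of neighbours whose (externally supplied) global trust exceeds a
    # threshold, so an adversary broadcasting arbitrary values is progressively
    # ignored. The rule and parameters are illustrative, not the thesis algorithm.
    import numpy as np

    n_nodes = 6
    values = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 100.0])        # node 5 broadcasts an arbitrary value
    trust = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 0.1])           # global trust scores (e.g. from headers)
    adjacency = np.ones((n_nodes, n_nodes)) - np.eye(n_nodes)  # complete communication graph

    def trusted_consensus_step(values, trust, adjacency, tau=0.5):
        """One consensus round: each normal node averages itself with trusted neighbours."""
        new = values.copy()
        for i in range(5):                                      # nodes 0..4 follow the protocol
            nbrs = np.where((adjacency[i] > 0) & (trust >= tau))[0]
            new[i] = np.mean(np.concatenate(([values[i]], values[nbrs])))
        return new

    for _ in range(20):
        values = trusted_consensus_step(values, trust, adjacency)
    print(values[:5])   # normal nodes agree near 3.0; the adversary's value is ignored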
The third part of the thesis considers the problem of optimal assignment under budget constraints when workers are unreliable and sometimes malicious. In a real crowdsourcing market, each answer obtained from a worker incurs a cost. The cost is associated with both the level of trustworthiness of workers and the difficulty of tasks. Typically, access to expert-level (more trustworthy) workers is more expensive than access to the average crowd, and completion of a challenging task is more costly than a click-away question. We address the problem of optimally assigning heterogeneous tasks to workers of varying trust levels under budget constraints. Specifically, we design a trust-aware task allocation algorithm that takes as inputs the estimated trust of workers and a pre-set budget, and outputs the optimal assignment of tasks to workers. We derive a bound on the total error probability that naturally relates the budget, the trustworthiness of the crowds, and the costs of obtaining labels from the crowds. A higher budget, more trustworthy crowds, and less costly jobs result in a lower theoretical bound. Our allocation scheme does not depend on the specific design of the trust evaluation component; therefore, it can be combined with generic trust evaluation algorithms.
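A greedy toy sketch of trust-aware allocation under a budget (illustrative worker tiers, costs and heuristic; not the optimal scheme or error bound derived in the thesis) might look like:

    # Toy sketch of trust-aware allocation under a budget: each task is assigned to
    # the affordable worker tier with the lowest estimated error probability, and
    # more trustworthy tiers cost more. A greedy heuristic for illustration only.

    # worker tiers: (name, per-unit-difficulty cost, estimated error probability)
    tiers = [("expert", 5.0, 0.05), ("skilled", 2.0, 0.15), ("crowd", 1.0, 0.30)]
    tasks = [{"id": t, "difficulty": d} for t, d in enumerate([1.0, 2.0, 1.5, 3.0])]
    budget = 20.0

    assignment, spent = {}, 0.0
    # Assign the hardest tasks first, since they benefit most from trustworthy workers.
    for task in sorted(tasks, key=lambda t: -t["difficulty"]):
        affordable = [(name, cost * task["difficulty"], err)
                      for name, cost, err in tiers
                      if spent + cost * task["difficulty"] <= budget]
        if not affordable:
            continue                                  # skip; budget cannot cover this task
        name, cost, err = min(affordable, key=lambda x: x[2])   # lowest error we can afford
        assignment[task["id"]] = name
        spent += cost

    print(assignment, f"spent={spent:.1f} of {budget}")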

Relevance:

30.00%

Publisher:

Abstract:

Soil is a complex heterogeneous system comprising highly variable and dynamic micro-habitats that have significant impacts on the growth and activity of resident microbiota. A question addressed in this research is how soil structure affects the temporal dynamics and spatial distribution of bacteria. Using repacked microcosms, the effect of bulk density, aggregate size and water content on the growth and distribution of introduced Pseudomonas fluorescens and Bacillus subtilis bacteria was determined. Soil bulk density and aggregate sizes were altered to manipulate the characteristics of the pore volume where bacteria reside and through which the distribution of solutes and nutrients is controlled. X-ray CT was used to characterise the pore geometry of the repacked soil microcosms. Soil porosity, connectivity and soil-pore interface area declined with increasing bulk density. In samples that differed in pore geometry, the effect of that geometry on the growth and extent of spread of introduced bacteria was investigated. The growth rate of bacteria decreased with increasing bulk density, consistent with a significant difference in pore geometry. To measure the ability of bacteria to spread through soil, placement experiments were developed. Bacteria were capable of spreading several centimetres through soil. The spread of bacteria was faster and extended further in soil with larger and better-connected pore volumes. To study the spatial distribution in detail, a methodology was developed in which a combination of X-ray microtomography, to characterize the soil structure, and fluorescence microscopy, to visualize and quantify bacteria in soil sections, was used. The influence of pore characteristics on the distribution of bacteria was analysed at macro- and microscales. Soil porosity, connectivity and the soil-pore interface influenced bacterial distribution only at the macroscale. The method developed was applied to investigate the effect of soil pore characteristics on the extent of spread of bacteria introduced locally towards a C source in soil. The soil-pore interface influenced the spread and colonization of bacteria; consequently, higher bacterial densities were found in soil with larger pore volumes. The results of this work therefore showed that pore geometry affects the growth and spread of bacteria in soil. The method developed showed how the thin-sectioning technique can be combined with 3D X-ray CT to visualize bacterial colonization of a 3D pore volume. This novel combination of methods is a significant step towards a full mechanistic understanding of microbial dynamics in structured soils.
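As a minimal sketch of how the pore-geometry descriptors mentioned above (porosity, connectivity, soil-pore interface) might be extracted from a segmented CT volume, assuming a binary pore/solid array and 26-neighbour connectivity (the synthetic volume and these choices are illustrative, not the study's processing pipeline):

    # Illustrative sketch: porosity, connectivity and pore-solid interface area from a
    # segmented (binary) X-ray CT volume. Synthetic data; 26-connectivity is assumed.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(1)
    pores = rng.random((64, 64, 64)) < 0.35            # True = pore voxel, False = solid

    porosity = pores.mean()                            # pore volume fraction

    labels, n_clusters = ndimage.label(pores, structure=np.ones((3, 3, 3)))
    sizes = np.bincount(labels.ravel())[1:]            # voxels per pore cluster
    connectivity = sizes.max() / pores.sum()           # fraction of pore space in the largest cluster

    # Pore-solid interface: count voxel faces where a pore voxel touches a solid voxel.
    interface_faces = 0
    for axis in range(3):
        a = np.swapaxes(pores, 0, axis)
        interface_faces += np.count_nonzero(a[:-1] != a[1:])

    print(f"porosity={porosity:.2f}, largest-cluster fraction={connectivity:.2f}, "
          f"interface faces={interface_faces}")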

Relevance:

30.00%

Publisher:

Abstract:

We present new methodologies to generate rational function approximations of broadband electromagnetic responses of linear and passive networks of high-speed interconnects, and to construct SPICE-compatible, equivalent circuit representations of the generated rational functions. These new methodologies are driven by the desire to improve the computational efficiency of the rational function fitting process, and to ensure enhanced accuracy of the generated rational function interpolation and its equivalent circuit representation. Toward this goal, we propose two new methodologies for rational function approximation of high-speed interconnect network responses. The first one relies on the use of both time-domain and frequency-domain data, obtained either through measurement or numerical simulation, to generate a rational function representation that extrapolates the input, early-time transient response data to late-time response while at the same time providing a means to both interpolate and extrapolate the used frequency-domain data. The aforementioned hybrid methodology can be considered as a generalization of the frequency-domain rational function fitting utilizing frequency-domain response data only, and the time-domain rational function fitting utilizing transient response data only. In this context, a guideline is proposed for estimating the order of the rational function approximation from transient data. The availability of such an estimate expedites the time-domain rational function fitting process. The second approach relies on the extraction of the delay associated with causal electromagnetic responses of interconnect systems to provide for a more stable rational function process utilizing a lower-order rational function interpolation. A distinctive feature of the proposed methodology is its utilization of scattering parameters. For both methodologies, the approach of fitting the electromagnetic network matrix one element at a time is applied. It is shown that, with regard to the computational cost of the rational function fitting process, such an element-by-element rational function fitting is more advantageous than full matrix fitting for systems with a large number of ports. Despite the disadvantage that different sets of poles are used in the rational function of different elements in the network matrix, such an approach provides for improved accuracy in the fitting of network matrices of systems characterized by both strongly coupled and weakly coupled ports. Finally, in order to provide a means for enforcing passivity in the adopted element-by-element rational function fitting approach, the methodology for passivity enforcement via quadratic programming is modified appropriately for this purpose and demonstrated in the context of element-by-element rational function fitting of the admittance matrix of an electromagnetic multiport.
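A much-simplified sketch of the element-by-element pole-residue fitting step (with a fixed, known pole set so that the fit stays linear; the hybrid time/frequency-domain method, delay extraction and passivity enforcement described above are considerably more involved and are not reproduced here):

    # Simplified sketch of fitting one network-matrix element to a pole-residue form
    #   H(s) ~ d + sum_k r_k / (s - p_k)
    # by linear least squares, with the poles p_k assumed known/fixed. Illustration only.
    import numpy as np

    freqs = np.linspace(1e6, 1e9, 400)                  # Hz
    s = 2j * np.pi * freqs

    # "Measured" response of one element: a known two-pole rational function.
    true_poles = np.array([-2e8 + 5e8j, -2e8 - 5e8j])
    true_residues = np.array([1e8 - 3e7j, 1e8 + 3e7j])
    H = 0.5 + sum(r / (s - p) for r, p in zip(true_residues, true_poles))

    # Candidate poles (taken equal to the true ones to keep the sketch linear).
    poles = true_poles
    A = np.column_stack([1.0 / (s[:, None] - poles[None, :]), np.ones_like(s)])
    coeffs, *_ = np.linalg.lstsq(A, H, rcond=None)
    residues, d = coeffs[:-1], coeffs[-1].real

    H_fit = d + A[:, :-1] @ residues
    print("residues:", residues)
    print("max |H - H_fit|:", np.abs(H - H_fit).max())

Fitting each matrix element separately, as above, keeps every least-squares problem small, which is the computational advantage the abstract notes for systems with many ports.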

Relevance:

30.00%

Publisher:

Abstract:

Intertidal flats of the estuarine macro-intertidal Baie des Veys (France) were investigated to identify spatial features of sediment and microphytobenthos (MPB) in April 2003. Gradients occurred within the domain, and patches were identified close to vegetated areas or within the oyster-farming areas, where calm physical conditions and biodeposition altered the sediment and MPB landscapes. Spatial patterns of chl a content were explained primarily by the influence of sediment features, while bed elevation and compaction brought only minor insights into the regulation of MPB distribution. The smaller size of MPB patches compared with silt patches revealed the interplay between the physical structures defining the sediment landscape and the biotic patches they contain, and showed that median grain size is the most important parameter in explaining the spatial pattern of MPB. Small-scale temporal dynamics of sediment chl a content and grain-size distribution were surveyed in parallel during 2 periods of 14 d to detect tidal and seasonal variations. Our results showed a weak relationship between mud fraction and MPB biomass in March, and this relationship fully disappeared in July. Tidal exposure was the most important parameter in explaining the summer temporal dynamics of MPB. This study reveals the general importance of bed elevation and tidal exposure in muddy habitats and shows that silt content was a prime governing physical factor in winter. Biostabilisation processes appeared to act only as secondary factors, amplifying the initial silt accumulation in summer, rather than as primary factors explaining spatial or long-term trends in sediment change.

Relevance:

30.00%

Publisher:

Abstract:

Includes a Report of the Exercises in the Salt Lake Assembly Hall.

Relevance:

30.00%

Publisher:

Abstract:

Secure Multi-party Computation (MPC) enables a set of parties to collaboratively compute, using cryptographic protocols, a function over their private data in such a way that the participants do not see each other's data; they see only the final output. Typical MPC examples include statistical computations over joint private data, private set intersection, and auctions. While these applications are examples of monolithic MPC, richer MPC applications move between "normal" (i.e., per-party local) and "secure" (i.e., joint, multi-party secure) modes repeatedly, resulting overall in mixed-mode computations. For example, we might use MPC to implement the role of the dealer in a game of mental poker -- the game will be divided into rounds of local decision-making (e.g. bidding) and joint interaction (e.g. dealing). Mixed-mode computations are also used to improve performance over monolithic secure computations. Starting with the Fairplay project, several MPC frameworks have been proposed in the last decade to help programmers write MPC applications in a high-level language, while the toolchain manages the low-level details. However, these frameworks are either not expressive enough to allow writing mixed-mode applications, or they lack formal specification and reasoning capabilities, thereby diminishing the parties' trust in such tools and in the programs written using them. Furthermore, none of the frameworks provides a verified toolchain to run MPC programs, leaving open the potential for security holes that can compromise the privacy of parties' data. This dissertation presents language-based techniques to make MPC more practical and trustworthy. First, it presents the design and implementation of a new MPC Domain Specific Language, called Wysteria, for writing rich mixed-mode MPC applications. Wysteria provides several benefits over previous languages, including a conceptual single thread of control, generic support for more than two parties, high-level abstractions for secret shares, and a fully formalized type system and operational semantics. Using Wysteria, we have implemented several MPC applications, including, for the first time, a card dealing application. The dissertation next presents Wys*, an embedding of Wysteria in F*, a full-featured verification-oriented programming language. Wys* improves on Wysteria along three lines: (a) it enables programmers to formally verify the correctness and security properties of their programs -- as far as we know, Wys* is the first language to provide verification capabilities for MPC programs; (b) it provides a partially verified toolchain to run MPC programs; and finally (c) it enables MPC programs to use, with no extra effort, standard language constructs from the host language F*, thereby making it more usable and scalable. Finally, the dissertation develops static analyses that help optimize monolithic MPC programs into mixed-mode MPC programs, while providing privacy guarantees similar to those of the monolithic versions.
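To convey the basic intuition behind the "secure" mode only (a textbook additive secret-sharing toy, entirely unrelated to the Wysteria/Wys* toolchain, which relies on full cryptographic MPC back ends):

    # Toy additive secret sharing over a prime field: three parties jointly compute
    # the sum of their private inputs without revealing them. Intuition only.
    import secrets

    P = 2**61 - 1                     # prime modulus for the share arithmetic

    def share(secret, n_parties):
        """Split a secret into n additive shares that sum to it mod P."""
        shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
        shares.append((secret - sum(shares)) % P)
        return shares

    inputs = [42, 17, 99]             # each party's private value (e.g. a bid)
    n = len(inputs)

    # Each party i splits its input and sends share j to party j.
    dealt = [share(x, n) for x in inputs]

    # Party j locally adds the shares it received -- it learns nothing about any input.
    partial_sums = [sum(dealt[i][j] for i in range(n)) % P for j in range(n)]

    # Output step: parties publish only their partial sums and reconstruct the total.
    total = sum(partial_sums) % P
    print(total)                      # 158, without any party revealing its own input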

Relevance:

30.00%

Publisher:

Abstract:

The increase in the resolution of numerical weather prediction models has allowed more and more realistic forecasts of atmospheric parameters. Owing to the growing variability of the predicted fields, traditional verification methods are not always able to describe model skill, because they are based on a grid-point-by-grid-point matching between observation and prediction. Recently, new spatial verification methods have been developed with the aim of showing the benefit associated with high-resolution forecasts. Within the MesoVICT international project, the initial aim of this work is to compare the new techniques, highlighting their advantages and disadvantages. First of all, the MesoVICT basic examples, represented by synthetic precipitation fields, have been examined. Providing an error evaluation in terms of structure, amplitude and location of the precipitation fields, the SAL method has been studied more thoroughly than the other approaches and implemented for the core cases of the project. The verification procedure concerned precipitation fields over central Europe: comparisons were made between the forecasts produced by the 00z COSMO-2 model and the VERA (Vienna Enhanced Resolution Analysis). The study of these cases has shown some weaknesses of the methodology examined; in particular, a correlation was highlighted between the optimal domain size and the extent of the precipitation systems. In order to increase the ability of SAL, the original domain was subdivided into three subdomains and the method was applied again. Some limits were found in cases in which at least one of the two domains does not show precipitation. The overall results for the subdomains have been summarized in scatter plots. With the aim of identifying systematic errors of the model, the variability of the three parameters has been studied for each subdomain.
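A minimal sketch of two of the SAL components on a pair of synthetic fields (the amplitude component A as the normalised difference of domain means and the first location component L1 as the normalised centre-of-mass displacement; the structure component S and the second location component L2 require identifying precipitation objects and are omitted, and the fields below are made up):

    # Minimal sketch of the SAL amplitude and first location components for a pair
    # of precipitation fields defined on the same domain. Synthetic fields only.
    import numpy as np
    from scipy import ndimage

    ny, nx = 100, 120
    obs = np.zeros((ny, nx))
    obs[30:50, 40:70] = 5.0                                 # "observed" precipitation (e.g. VERA)
    mod = np.zeros((ny, nx))
    mod[35:55, 55:85] = 7.0                                 # "forecast" (e.g. COSMO-2)

    def amplitude(mod, obs):
        dm, do = mod.mean(), obs.mean()
        return (dm - do) / (0.5 * (dm + do))                # A in [-2, 2]

    def location_l1(mod, obs):
        d = np.hypot(ny - 1, nx - 1)                        # largest distance within the domain
        cm_mod = np.array(ndimage.center_of_mass(mod))
        cm_obs = np.array(ndimage.center_of_mass(obs))
        return np.linalg.norm(cm_mod - cm_obs) / d          # L1 in [0, 1]

    print(f"A = {amplitude(mod, obs):.2f}, L1 = {location_l1(mod, obs):.3f}")

Note that both components become undefined when one of the two fields contains no precipitation, which is the limitation encountered for the subdomains above.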

Relevance:

30.00%

Publisher:

Abstract:

With hundreds of millions of users reporting locations and embracing mobile technologies, Location Based Services (LBSs) are raising new challenges. In this dissertation, we address three emerging problems in location services, where geolocation data plays a central role. First, to handle the unprecedented growth of generated geolocation data, existing location services rely on geospatial database systems. However, their inability to leverage combined geographical and textual information in analytical queries (e.g. spatial similarity joins) remains an open problem. To address this, we introduce SpsJoin, a framework for computing spatial set-similarity joins. SpsJoin handles combined similarity queries that involve textual and spatial constraints simultaneously. LBSs use this system to tackle different types of problems, such as deduplication, geolocation enhancement and record linkage. We define the spatial set-similarity join problem in a general case and propose an algorithm for its efficient computation. Our solution utilizes parallel computing with MapReduce to handle scalability issues in large geospatial databases. Second, applications that use geolocation data are seldom concerned with ensuring the privacy of participating users. To motivate participation and address privacy concerns, we propose iSafe, a privacy preserving algorithm for computing safety snapshots of co-located mobile devices as well as geosocial network users. iSafe combines geolocation data extracted from crime datasets and geosocial networks such as Yelp. In order to enhance iSafe's ability to compute safety recommendations, even when crime information is incomplete or sparse, we need to identify relationships between Yelp venues and crime indices at their locations. To achieve this, we use SpsJoin on two datasets (Yelp venues and geolocated businesses) to find venues that have not been reviewed and to further compute the crime indices of their locations. Our results show a statistically significant dependence between location crime indices and Yelp features. Third, review centered LBSs (e.g., Yelp) are increasingly becoming targets of malicious campaigns that aim to bias the public image of represented businesses. Although Yelp actively attempts to detect and filter fraudulent reviews, our experiments showed that Yelp is still vulnerable. Fraudulent LBS information also impacts the ability of iSafe to provide correct safety values. We take steps toward addressing this problem by proposing SpiDeR, an algorithm that takes advantage of the richness of information available in Yelp to detect abnormal review patterns. We propose a fake venue detection solution that applies SpsJoin on Yelp and U.S. housing datasets. We validate the proposed solutions using ground truth data extracted by our experiments and reviews filtered by Yelp.
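As a toy, single-machine illustration of the join predicate only (pairs match when the Jaccard similarity of their name tokens exceeds a threshold and the locations lie within a distance cut-off; the venue data are invented, and SpsJoin itself is a parallel MapReduce framework with pruning, not reproduced here):

    # Toy sketch of a combined spatial/textual set-similarity join predicate:
    # nested-loop matching of venue names by Jaccard similarity plus a distance cut-off.
    import re
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(p, q):
        """Great-circle distance between two (lat, lon) points in kilometres."""
        lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
        h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * asin(sqrt(h))

    def jaccard(a, b):
        ta = set(re.findall(r"[a-z0-9]+", a.lower()))
        tb = set(re.findall(r"[a-z0-9]+", b.lower()))
        return len(ta & tb) / len(ta | tb)

    yelp = [("Blue Bottle Coffee", (40.7128, -74.0060)),
            ("Sunset Diner", (40.7306, -73.9352))]
    registry = [("Blue Bottle Coffee Co", (40.7130, -74.0062)),
                ("Sunrise Bakery", (40.7300, -73.9360))]

    matches = [(y, r) for y, yloc in yelp for r, rloc in registry
               if jaccard(y, r) >= 0.5 and haversine_km(yloc, rloc) <= 0.5]
    print(matches)      # the two coffee-house records are linked; the others are not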

Relevance:

30.00%

Publisher:

Abstract:

The wave energy industry is entering a new phase of pre-commercial and commercial deployments of full-scale devices, so a better understanding of seaway variability is critical to the successful operation of devices. The response of Wave Energy Converters to incident waves governs their operational performance and, for many devices, is highly dependent on spectral shape due to their resonant properties. Various methods of wave measurement are presented, along with analysis techniques and empirical models. Resource assessments, device performance predictions and monitoring of operational devices will often be based on summary statistics and assume a standard spectral shape such as Pierson-Moskowitz or JONSWAP. Furthermore, these are typically derived from the closest available wave data, frequently separated from the site on scales of the order of 1 km. Therefore, deviation of seaways from standard spectral shapes and spatial inconsistency between the measurement point and the device site will cause inaccuracies in the performance assessment. This thesis categorises time- and frequency-domain analysis techniques that can be used to identify changes in a sea state from record to record. Device-specific issues such as dimensional scaling of sea states and power output are discussed, along with potential differences that arise between estimated and actual output power of a WEC due to spectral shape variation. This is investigated using measured data from various phases of device development.
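For orientation only, the following sketch evaluates the standard two-parameter Pierson-Moskowitz (Bretschneider) shape mentioned above and recovers the usual summary statistics from it; the Hs and Tp values are illustrative. Real seaways whose spectra deviate from this assumed shape can share the same summary statistics while driving a resonant WEC quite differently, which is the variability issue discussed in the abstract.

    # Sketch: two-parameter Pierson-Moskowitz / Bretschneider spectrum
    #   S(f) = (5/16) Hs^2 fp^4 f^-5 exp(-1.25 (fp/f)^4)
    # and the summary statistics (Hm0, Te, power per metre of crest) derived from it.
    import numpy as np

    Hs, Tp = 2.5, 9.0                       # significant wave height [m], peak period [s]
    fp = 1.0 / Tp
    f = np.linspace(0.01, 1.0, 2000)        # frequency grid [Hz]

    S = (5.0 / 16.0) * Hs**2 * fp**4 * f**-5 * np.exp(-1.25 * (fp / f) ** 4)

    m0 = np.trapz(S, f)                     # zeroth spectral moment
    m_1 = np.trapz(S / f, f)                # minus-first moment
    Hm0 = 4.0 * np.sqrt(m0)                 # significant wave height recovered from the spectrum
    Te = m_1 / m0                           # energy period [s]
    P_per_width = 0.49 * Hm0**2 * Te        # deep-water wave power [kW/m], rho*g^2/(64*pi) ~ 0.49

    print(f"Hm0 = {Hm0:.2f} m, Te = {Te:.2f} s, P ~ {P_per_width:.1f} kW/m")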