967 results for Secure Multi-Party Computation
Abstract:
Large-scale simulations of parts of the brain using detailed neuronal models, aimed at improving our understanding of brain functions, are becoming a reality with the use of supercomputers and large clusters. However, the high acquisition and maintenance cost of these computers, including physical space, air conditioning, and electrical power, limits the number of simulations of this kind that scientists can perform. Modern commodity graphics cards based on the CUDA platform contain graphics processing units (GPUs) composed of hundreds of processors that can simultaneously execute thousands of threads and thus constitute a low-cost solution for many high-performance computing applications. In this work, we present a CUDA algorithm that enables the execution, on multiple GPUs, of simulations of large-scale networks composed of biologically realistic Hodgkin-Huxley neurons. The algorithm represents each neuron as a CUDA thread, which solves the set of coupled differential equations that model the neuron. Communication among neurons located on different GPUs is coordinated by the CPU. We obtained speedups of 40 for the simulation of 200k neurons receiving random external input, and speedups of 9 for a network with 200k neurons and 20M neuronal connections, on a single computer with two graphics boards carrying two GPUs each, compared with a modern quad-core CPU. Copyright (C) 2010 John Wiley & Sons, Ltd.
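The per-neuron computation the algorithm assigns to each CUDA thread can be sketched as follows. This is only a minimal, vectorized NumPy illustration of integrating the classic Hodgkin-Huxley equations (textbook squid-axon constants, forward Euler); it is not the paper's CUDA kernel, its exact neuron model, or its inter-GPU communication scheme, and every parameter value here is an assumption.

```python
import numpy as np

def hh_derivatives(V, m, h, n, I_ext):
    """Right-hand side of the classic Hodgkin-Huxley equations (squid-axon constants).
    V in mV, time in ms; singularities at V = -40 and -55 mV are ignored for brevity."""
    a_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(V + 65.0) / 80.0)

    # Membrane currents (uA/cm^2), membrane capacitance taken as 1 uF/cm^2
    I_Na = 120.0 * m**3 * h * (V - 50.0)
    I_K = 36.0 * n**4 * (V + 77.0)
    I_L = 0.3 * (V + 54.387)

    dV = I_ext - I_Na - I_K - I_L
    dm = a_m * (1.0 - m) - b_m * m
    dh = a_h * (1.0 - h) - b_h * h
    dn = a_n * (1.0 - n) - b_n * n
    return dV, dm, dh, dn

def simulate(num_neurons=1000, steps=5000, dt=0.01, seed=0):
    """Integrate one HH neuron per array slot, mimicking the thread-per-neuron layout."""
    rng = np.random.default_rng(seed)
    V = np.full(num_neurons, -65.0)
    m = np.full(num_neurons, 0.05)
    h = np.full(num_neurons, 0.6)
    n = np.full(num_neurons, 0.32)
    for _ in range(steps):
        I_ext = rng.uniform(0.0, 15.0, num_neurons)  # random external input
        dV, dm, dh, dn = hh_derivatives(V, m, h, n, I_ext)
        V += dt * dV
        m += dt * dm
        h += dt * dh
        n += dt * dn
    return V

if __name__ == "__main__":
    print(simulate()[:5])  # membrane potentials after 50 ms of random drive
```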
Abstract:
We present a variable time step, fully adaptive in space, hybrid method for the accurate simulation of incompressible two-phase flows in the presence of surface tension in two dimensions. The method is based on the hybrid level set/front-tracking approach proposed in [H. D. Ceniceros and A. M. Roma, J. Comput. Phys., 205, 391-400, 2005]. Geometric, interfacial quantities are computed from front-tracking via the immersed-boundary setting, while the signed distance (level set) function, which is evaluated fast and to machine precision, is used as a fluid indicator. The surface tension force is obtained by employing the mixed Eulerian/Lagrangian representation introduced in [S. Shin, S. I. Abdel-Khalik, V. Daru and D. Juric, J. Comput. Phys., 203, 493-516, 2005], whose success in greatly reducing parasitic currents has been demonstrated. The use of our accurate fluid indicator together with effective Lagrangian marker control enhances this parasitic current reduction by several orders of magnitude. To resolve sharp gradients and salient flow features accurately and efficiently, we employ dynamic, adaptive mesh refinements. This spatial adaptation is used in concert with a dynamic control of the distribution of the Lagrangian nodes along the fluid interface and a variable time step, linearly implicit time integration scheme. We present numerical examples designed to test the capabilities and performance of the proposed approach as well as three applications: the long-time evolution of a fluid interface undergoing Rayleigh-Taylor instability, the dynamics of a rising bubble, and a drop impacting on a free interface, whose dynamics we compare with existing numerical and experimental data.
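As an illustration of the kind of geometric, interfacial quantity computed from the front-tracking markers, the sketch below evaluates curvature along a closed chain of Lagrangian markers with periodic finite differences, assuming roughly uniform marker spacing. It is not the paper's discretization; the surface tension force, the level-set fluid indicator, and the adaptive mesh machinery are all omitted.

```python
import numpy as np

def curvature_from_markers(x, y):
    """Signed curvature of a closed marker curve via periodic central differences:
    kappa = (x' y'' - y' x'') / (x'^2 + y'^2)^(3/2), derivatives taken w.r.t. marker index."""
    xp = (np.roll(x, -1) - np.roll(x, 1)) / 2.0
    yp = (np.roll(y, -1) - np.roll(y, 1)) / 2.0
    xpp = np.roll(x, -1) - 2.0 * x + np.roll(x, 1)
    ypp = np.roll(y, -1) - 2.0 * y + np.roll(y, 1)
    return (xp * ypp - yp * xpp) / (xp**2 + yp**2) ** 1.5

if __name__ == "__main__":
    # Markers on a circle of radius 0.25: expected curvature is 1 / 0.25 = 4
    theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
    kappa = curvature_from_markers(0.25 * np.cos(theta), 0.25 * np.sin(theta))
    print(kappa.mean())  # ~4.0
```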
Abstract:
Poverty in Brazil has been gradually reduced, largely thanks to public policies aimed at the universalization of rights. On the other hand, the municipalities' Human Development Index indicates scenarios of growing inequality: some regions, mostly rural, were left behind in that process of development. In 2008, the "Territórios da Cidadania" (Territories of Citizenship) Program was launched by the federal government under high expectations. Its proposal was to develop those regions and to prioritize the delivery of ongoing federal public policies where they were most needed. The program featured an innovative arrangement that brought together dozens of ministries and other federal agencies, state governments, municipalities, and collegiate bodies for the participatory management and control of the territory. In this structure, both new and existing bodies came to support the program's coordination. This arrangement has been characterized as an example of multi-level governance, a theory that proved to be an effective instrument for understanding the intra- and intergovernmental relations under which the program took place. The program lasted only three years. In the Vale do Ribeira Territory (SP), few community leaders acknowledge it, and even they have little further information about its actions and effects. Against this background, this research studies the program's coordination and governance structure, from the Vale do Ribeira Territory, taken as the most local level, up to the federal government, based on the hypothesis that, beyond the local contingencies of Vale do Ribeira, the design and implementation of the Territories of Citizenship Program as formulated have fundamental structural problems that hinder its goals of reducing poverty and inequality through the promotion of territorial development. As a complementary, specific goal, the research surveys the program's design and background in order to understand how the relations, foreseen or not in its structure, were formulated and how they developed, with special attention to Vale do Ribeira-SP. In general terms, it was concluded that the coordination and governance arrangement of the Territories of Citizenship Program failed because it did not develop adequate solutions to deal with the challenges of the Brazilian federalist structure, party politics, sector-based public actions, or the contingencies and specificities of the territory itself. The complexity of the program, the poverty problem it proposed to face, and the territorial development strategy imposed a high coordination cost, which the proposed model of centralization in the federal government with internal decentralization of coordination could not meet. When the presidency changed in 2011, the program could not present results capable of justifying its continuation; it was therefore paralyzed, lost its priority status, and the resources previously invested were redirected.
Abstract:
We have developed a method to compute the albedo contrast between dust devil tracks and their surrounding regions on Mars. It is mainly based on Mathematical Morphology operators and uses all the points along the edges of the tracks to compute the albedo contrast values. This permits the extraction of more accurate and complete information than traditional point sampling, not only providing better statistics but also permitting the analysis of local variations along the entire length of the tracks. Because it is based on relative quantities, this measure of contrast is much better suited to comparisons at regional scales and on a multi-temporal basis, using imagery acquired under rather different environmental and operational conditions. Moreover, the substantial increase in the detail extracted may permit quantifying differential deposition of dust by computing the local temporal fading of the tracks, leading to a better estimation of the thickness of the topmost layer of dust and of the minimum thickness needed to create dust devil tracks. The developed tool is tested on 110 HiRISE images depicting regions in the Aeolis, Argyre, Eridania, Noachis and Hellas quadrangles. As a complementary evaluation, we also performed a temporal analysis of the albedo in a region of Russell crater, where high seasonal dust devil activity had previously been observed, covering the years 2007-2012. The mean albedo of Russell crater is in this case indicative of the presence of dust devil tracks and can therefore be used to quantify dust devil activity. (C) 2014 Elsevier Inc. All rights reserved.
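A toy version of the edge-based contrast idea is sketched below: given an albedo image and a binary track mask, it samples thin inner and outer bands along the entire track boundary and reports a relative contrast. The function names, the `band` parameter, and the synthetic scene are assumptions of this sketch; the published tool is built from Mathematical Morphology operators applied to calibrated HiRISE data and is considerably more elaborate.

```python
import numpy as np
from scipy import ndimage as ndi

def track_albedo_contrast(image, track_mask, band=3):
    """Relative albedo contrast between a dust devil track and its surroundings.

    image      : 2-D float array of albedo (or calibrated brightness) values
    track_mask : boolean array, True inside the track
    band       : width in pixels of the inner/outer bands sampled along the edge
    """
    selem = ndi.generate_binary_structure(2, 1)
    inner = track_mask & ~ndi.binary_erosion(track_mask, selem, iterations=band)
    outer = ndi.binary_dilation(track_mask, selem, iterations=band) & ~track_mask
    edge = track_mask & ~ndi.binary_erosion(track_mask, selem)  # every edge point of the track
    a_in = image[inner].mean()
    a_out = image[outer].mean()
    return (a_out - a_in) / a_out, edge

if __name__ == "__main__":
    # Synthetic scene: bright dusty surface (albedo 0.25) crossed by a darker track (0.15)
    img = np.full((200, 200), 0.25)
    yy, xx = np.mgrid[0:200, 0:200]
    mask = np.abs(yy - xx) < 8          # a diagonal track
    img[mask] = 0.15
    contrast, edge = track_albedo_contrast(img, mask)
    print(round(contrast, 3), int(edge.sum()), "edge pixels used")  # contrast ~0.4
```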
Abstract:
Wireless sensor networks are promising solutions for many applications. However, wireless sensor nodes suffer from many constraints, such as low computational capability, small memory, and limited energy resources. Grouping is an important technique to localize computation and reduce communication overhead in wireless sensor networks. In this paper, we use grouping to refer to the process of combining a set of sensor nodes with similar properties. We propose two centralized group rekeying (CGK) schemes for secure group communication in sensor networks. The lifetime of a group is divided into three phases: group formation, group maintenance, and group dissolution. We demonstrate how to set up the group and establish the group key in each phase. Our analysis shows that the two proposed schemes are computationally efficient and secure.
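The general pattern behind centralized group rekeying can be sketched as follows: a key server shares a pairwise key with every sensor node and, whenever membership changes, pushes a freshly generated group key to each remaining member wrapped under its pairwise key. This toy is not the paper's CGK schemes, and its XOR-based wrapping is for illustration only, not for deployment.

```python
import hmac, hashlib, os

def kdf(key: bytes, label: bytes) -> bytes:
    """Derive a per-purpose key with HMAC-SHA256 (toy key derivation)."""
    return hmac.new(key, label, hashlib.sha256).digest()

def xor_wrap(key: bytes, msg: bytes) -> bytes:
    """Toy wrapping: XOR with an HMAC-derived keystream (illustrative, not secure)."""
    stream, counter = b"", 0
    while len(stream) < len(msg):
        stream += hmac.new(key, counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(msg, stream))

class KeyServer:
    """Centralized rekeying: the server holds a pairwise key for every node and
    distributes a fresh group key whenever membership changes."""
    def __init__(self, node_ids):
        self.pairwise = {n: os.urandom(32) for n in node_ids}   # preloaded pairwise keys
        self.group_key = os.urandom(16)

    def rekey(self, leaving=None):
        if leaving is not None:
            self.pairwise.pop(leaving, None)        # evict the departing node
        self.group_key = os.urandom(16)             # new key unknown to evicted nodes
        # One wrapped copy of the new group key per remaining member
        return {n: xor_wrap(kdf(k, b"group-rekey"), self.group_key)
                for n, k in self.pairwise.items()}

if __name__ == "__main__":
    server = KeyServer(["s1", "s2", "s3"])
    msgs = server.rekey(leaving="s3")
    # Node s1 recovers the new group key with its own pairwise key
    k1 = kdf(server.pairwise["s1"], b"group-rekey")
    assert xor_wrap(k1, msgs["s1"]) == server.group_key
    print("s3 evicted;", len(msgs), "rekey messages sent")
```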
Abstract:
Three-party password-authenticated key exchange (3PAKE) protocols allow two entities to negotiate a secret session key with the aid of a trusted server with whom they share a human-memorable password. Recently, Lou and Huang proposed a simple 3PAKE protocol based on elliptic curve cryptography, which is claimed to be secure and to provide superior efficiency when compared with similar-purpose solutions. In this paper, however, we show that the solution is vulnerable to key-compromise impersonation and offline password guessing attacks from system insiders or outsiders, which indicates that the empirical approach used to evaluate the scheme's security is flawed. These results highlight the need to employ provable security approaches when designing and analyzing PAKE schemes. Copyright (c) 2011 John Wiley & Sons, Ltd.
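To see why offline password guessing is fatal for password-based protocols, consider the sketch below: once an attacker captures any value that is deterministically derived from the low-entropy password and publicly observable protocol messages, every dictionary candidate can be tested locally, without further contact with the server. The `derive_check_value` construction is hypothetical and does not reproduce the Lou-Huang protocol; it only illustrates the attack class.

```python
import hashlib

def derive_check_value(password: str, client_nonce: bytes, server_nonce: bytes) -> bytes:
    """Hypothetical protocol value that (insecurely) depends only on the password
    and on nonces an eavesdropper can observe on the wire."""
    return hashlib.sha256(password.encode() + client_nonce + server_nonce).digest()

def offline_guess(observed_value, client_nonce, server_nonce, dictionary):
    """Offline dictionary attack: test every candidate locally, no online interaction."""
    for guess in dictionary:
        if derive_check_value(guess, client_nonce, server_nonce) == observed_value:
            return guess
    return None

if __name__ == "__main__":
    c_nonce, s_nonce = b"\x01" * 16, b"\x02" * 16
    transcript_value = derive_check_value("sunshine", c_nonce, s_nonce)  # captured on the wire
    wordlist = ["password", "123456", "qwerty", "sunshine", "dragon"]
    print("recovered password:", offline_guess(transcript_value, c_nonce, s_nonce, wordlist))
```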
Abstract:
The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called system-on-chip (SoC) or multi-processor system-on-chip (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. Networks-on-chip (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet switching paradigms they involve also greatly help to minimize wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best tradeoff among performance, features, and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate and optimize NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs.
This dissertation addresses all of the above points by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. The dissertation proves the viability of NoCs for current and upcoming designs by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
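To give a flavor of the design-space pruning a front-end NoC tool performs, the sketch below scores mappings of cores onto a small 2D mesh by bandwidth-weighted hop count under XY routing and picks the cheapest mapping by brute force. This is not SunFloor or its cost model; real flows use richer objectives (power, link capacity, latency constraints) and heuristics rather than exhaustive search.

```python
from itertools import permutations

def mesh_coords(tile, width):
    """Position of a mesh tile in (x, y) for dimension-ordered (XY) routing."""
    return tile % width, tile // width

def weighted_hops(traffic, mapping, width):
    """Sum over flows of bandwidth * Manhattan hop count for one core-to-tile mapping.
    traffic: {(src_core, dst_core): bandwidth}; mapping: tuple, core i -> tile mapping[i]."""
    total = 0.0
    for (src, dst), bw in traffic.items():
        sx, sy = mesh_coords(mapping[src], width)
        dx, dy = mesh_coords(mapping[dst], width)
        total += bw * (abs(sx - dx) + abs(sy - dy))
    return total

def best_mapping(traffic, num_cores, width):
    """Exhaustive search over mappings (fine for toy sizes; real tools use heuristics)."""
    return min(permutations(range(num_cores)), key=lambda m: weighted_hops(traffic, m, width))

if __name__ == "__main__":
    # Four cores on a 2x2 mesh; the heavy 0->1 flow should land on adjacent tiles
    traffic = {(0, 1): 100.0, (1, 2): 10.0, (2, 3): 10.0, (3, 0): 1.0}
    m = best_mapping(traffic, 4, width=2)
    print("mapping:", m, "cost:", weighted_hops(traffic, m, 2))
```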
Abstract:
This thesis deals with the analytic study of the dynamics of multi-rotor unmanned aerial vehicles. It is conceived to provide a set of mathematical instruments suited to the theoretical study and design of these flying machines. The entire work is organized in analogy with classical academic texts on airplane flight dynamics. First, the non-linear equations of motion are defined and all the external actions are modeled, with particular attention to rotor aerodynamics. All the equations are provided in a form, and with original expedients, that makes them directly usable in a simulation environment. This required answering questions such as the trim of these mathematical systems. The whole treatment is developed with the aim of describing different multi-rotor configurations. Then, the linearized equations of motion are derived, and the stability and control derivatives of the linear model are computed. The study of static and dynamic stability characteristics is thus addressed, showing the influence of the various geometric and aerodynamic parameters of the machine, and of the rotors in particular. All the theoretical results are finally applied to two cases of interest. The first concerns the design of control systems for attitude stabilization: the linear model permits the tuning of linear controller gains, while the non-linear model allows numerical testing. The second is the study of the performance of an innovative quad-rotor configuration. With the non-linear model, the feasibility of maneuvers impossible for a traditional quad-rotor is assessed; the linear model is applied to the controllability analysis of such an aircraft in the case of an actuator blockage.
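A reduced example of the trim and linearization steps mentioned above, restricted to the vertical (heave) axis of a generic quad-rotor, is sketched below. The parameter values are hypothetical, the full 6-DOF non-linear model, attitude dynamics, and rotor aerodynamics are omitted, and the thesis' own formulation is more complete.

```python
import numpy as np

# Hypothetical quad-rotor parameters (not the thesis' aircraft)
MASS = 1.2        # kg
G = 9.81          # m/s^2
K_F = 6.0e-6      # thrust coefficient: T_i = K_F * omega_i**2  [N/(rad/s)^2]

def hover_trim():
    """Rotor speed at which total thrust balances weight (the trim problem for hover)."""
    return np.sqrt(MASS * G / (4.0 * K_F))

def heave_dynamics(z, w, omegas):
    """Non-linear vertical-axis dynamics: z positive up, w = dz/dt."""
    thrust = K_F * np.sum(np.square(omegas))
    return np.array([w, thrust / MASS - G])

def heave_linearized():
    """Linearization about hover: d/dt [dz, dw] = A [dz, dw] + B d_omega,
    with the same speed perturbation d_omega applied to all four rotors."""
    om0 = hover_trim()
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [8.0 * K_F * om0 / MASS]])   # d(total thrust)/d(omega) over mass
    return A, B

if __name__ == "__main__":
    om0 = hover_trim()
    print("hover rotor speed [rad/s]:", round(om0, 1))
    print("residual acceleration at trim:", heave_dynamics(0.0, 0.0, np.full(4, om0))[1])  # ~0
    print(heave_linearized())
```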
Abstract:
An integrated approach for the multi-spectral segmentation of MR images is presented. The method is based on fuzzy c-means (FCM) clustering; it includes bias field correction and contextual constraints over the spatial intensity distribution, and it accounts for the non-spherical shape of clusters in the feature space. The bias field is modeled as a linear combination of smooth polynomial basis functions for fast computation in the clustering iterations. Regularization terms for the neighborhood continuity of intensity are added to the FCM cost function. To reduce the computational complexity, the contextual regularizations are separated from the clustering iterations. Since the feature space is not isotropic, the distance measure adopted in the Gustafson-Kessel (G-K) algorithm is used instead of the Euclidean distance to account for the non-spherical shape of the clusters in the feature space. The algorithms are quantitatively evaluated on MR brain images using similarity measures.
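For reference, the baseline fuzzy c-means iteration that the proposed method extends (alternating membership and centre updates) can be sketched as follows; the bias field model, the neighborhood regularization terms, and the Gustafson-Kessel distance are deliberately left out of this minimal version.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, eps=1e-10, seed=0):
    """Baseline fuzzy c-means on feature vectors X (n_samples x n_features).
    Returns cluster centres and the membership matrix U (c x n_samples)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0, keepdims=True)            # memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m
        centres = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d2 = ((X[None, :, :] - centres[:, None, :]) ** 2).sum(axis=2) + eps  # squared Euclidean
        U = (1.0 / d2) ** (1.0 / (m - 1.0))      # standard FCM membership update
        U /= U.sum(axis=0, keepdims=True)
    return centres, U

if __name__ == "__main__":
    # Toy two-channel "multi-spectral" intensities for three tissue-like clusters
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(mu, 0.05, size=(300, 2)) for mu in (0.2, 0.5, 0.8)])
    centres, U = fuzzy_c_means(X, c=3)
    print(np.round(np.sort(centres[:, 0]), 2))   # ~[0.2, 0.5, 0.8]
```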
Abstract:
There is great demand for easily accessible, user-friendly dietary self-management applications. Yet accurate, fully automatic estimation of nutritional intake using computer vision methods remains an open research problem. One key element of this problem is volume estimation, which can be computed from 3D models obtained using multi-view geometry. The paper presents a computational system for volume estimation based on the processing of two meal images. A 3D model of the served meal is reconstructed from the acquired images and the volume is computed from the resulting shape. The algorithm was tested on food models (dummy foods) of known volume and on real served food. Volume accuracy was on the order of 90%, while the total execution time was below 15 seconds per image pair. The proposed system combines simple and computationally affordable methods for 3D reconstruction, remained stable throughout the experiments, operates in near real time, and places minimal constraints on users.
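Once a 3D model of the served meal is available, one simple way to turn it into a volume is to take the convex hull of the reconstructed points, as sketched below on a synthetic half-ellipsoid "portion". This is only the final volume step under a convexity assumption; the paper's two-image reconstruction pipeline and its actual shape-based volume computation are not reproduced here.

```python
import numpy as np
from scipy.spatial import ConvexHull

def volume_from_points(points_3d):
    """Volume (in the cube of the point units) of the convex hull of a reconstructed
    point cloud -- a reasonable proxy when the served food is roughly convex."""
    return ConvexHull(points_3d).volume

if __name__ == "__main__":
    # Toy "reconstruction": points sampled inside a half-ellipsoid-shaped portion
    rng = np.random.default_rng(0)
    pts = rng.uniform(-1.0, 1.0, size=(20000, 3))
    a, b, h = 6.0, 4.0, 3.0                        # semi-axes and height in cm
    inside = (pts ** 2).sum(axis=1) <= 1.0
    cloud = pts[inside & (pts[:, 2] >= 0.0)] * [a, b, h]
    est = volume_from_points(cloud)
    exact = 0.5 * 4.0 / 3.0 * np.pi * a * b * h    # half-ellipsoid volume ~150.8 cm^3
    print(round(est, 1), "cm^3 estimated vs", round(exact, 1), "cm^3 exact")
```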
Abstract:
The paper revives a theoretical definition of party coherence as being composed of two basic elements, cohesion and factionalism, and proposes and applies a novel empirical measure based on spin physics. The simultaneous analysis of both components with a single measurement concept is applied to data representing the political beliefs of candidates in the Swiss general elections of 2003 and 2007, proposing a connection between the coherence of the beliefs party members hold and the assessment of parties at risk of splitting. We also compare our measure with established polarization measures and demonstrate its advantage for multi-dimensional data that lack clear structure. Furthermore, we outline how our analysis supports the distinction between bottom-up and top-down mechanisms of party splitting. In this way, we are able to turn the intuition of coherence into a well-defined quantitative concept that additionally offers a methodological basis for comparative research on party coherence. Our work serves as an example of how a complex systems approach makes it possible to gain a new perspective on a long-standing issue in political science.
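The abstract does not spell out the spin-physics measure itself, so the sketch below is only a hypothetical illustration of the underlying idea: treating candidates' issue positions as spins, measuring cohesion as mean pairwise alignment and factionalism as the prevalence of opposed pairs. The actual published measure differs.

```python
import numpy as np

def pairwise_overlaps(stances):
    """Ising-style overlap q_ij = (1/d) * sum_k s_ik s_jk for a stance matrix s in {-1, +1}."""
    n, d = stances.shape
    Q = stances @ stances.T / d
    iu = np.triu_indices(n, k=1)
    return Q[iu]

def cohesion(stances):
    """Mean pairwise agreement: 1 = identical beliefs, -1 = complete opposition."""
    return pairwise_overlaps(stances).mean()

def factionalism(stances):
    """Share of candidate pairs that disagree on a majority of issues (negative overlap)."""
    q = pairwise_overlaps(stances)
    return float((q < 0).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Toy party: two internal blocs with opposed positions on most of 20 issues
    bloc_a = np.where(rng.random((15, 20)) < 0.9, 1, -1)
    bloc_b = -np.where(rng.random((10, 20)) < 0.9, 1, -1)
    party = np.vstack([bloc_a, bloc_b])
    print("cohesion:", round(cohesion(party), 2), " factionalism:", round(factionalism(party), 2))
```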